Channel: TechNet Technology News

6 signs your company has outgrown its free email solution

A free email solution is great when your organization is first starting out. But how do you know when you have grown beyond the capabilities of your free email service and you need a more sophisticated solution? The business email requirements of a growing company are much different than those of a small startup. Whether you’re using Outlook.com or another provider, there are signs that indicate your business has outgrown free email.

The following list identifies six important capabilities you need now (or in the future) to ensure your email solution doesn’t hold you back. Needing any one of these capabilities is an indication that it’s time to consider moving from a free to a paid email solution:

  1. Security—If you’ve experienced a security breach—or are even worried about one happening—you need a more sophisticated solution that offers enterprise-class security capabilities.
  2. Storage—You’ve reached your storage max: a very simple reason to upgrade. Storage limits should never force you to delete messages or change how you use your email, especially when paid solutions offer generous storage.
  3. Tools—Free email services don’t typically provide robust inbox- and user-management tools. Paid email solutions offer a range of features for managing users as well as extensive rules for managing your inbox—enabling you to spend less time managing your inbox and more time managing your business.

Bump your email up to business class

Learn how paid, hosted email solutions offer enhanced security features and ease of use that you don’t get from free email services.

Get the eBook
  4. Domain names—Using a custom domain for your business email is a vital way to make your business appear professional. For example, “yourname@yourcompany.com” has far more credibility than “yourcompany@domain.com.” While free custom email addresses are available, they often leave you open to security threats, because the hosting company will likely have access to your data and other information.
  5. Data—If you’ve ever felt the need to own and manage your email data, it’s time to move to a paid solution. Often, when you agree to free email terms, you’re granting the provider permission to mine your data and send you ads—which is how it remains profitable while offering free services. Not only can this distract from your work, it also puts your company’s data at risk.
  6. File sharing—Need to share files and collaborate securely with your team? Paid email solutions enable team-based collaboration and sharing without putting confidential company information at risk.

Graduate to an email solution that offers your growing business increased security, enhanced customization and a variety of features and capabilities to improve collaboration. Not only will this keep your company running smoothly and increase teamwork—your IT team will no longer spend time on common free email service-level issues.

Learn more

The post 6 signs your company has outgrown its free email solution appeared first on Office Blogs.


How do you tell the story behind your data? Find out on the next Modern Workplace

According to a recent study conducted by Glassdoor, data scientist is the most in-demand job of 2016. But gathering the information is only half of your organization’s challenge—it’s what you do with it that counts. How can you be sure your data isn’t just telling you the story you want to hear, but showing you what you urgently need to know?

Join us for the next episode of Modern Workplace, “Visualize: The power of data storytelling,” airing January 10, 2017, at 8 a.m. PST / 4 p.m. GMT, and learn how to unlock the hidden potential within your data through visualization. Plus, get an exclusive demonstration of Microsoft intelligent tools, such as Power BI and PowerPoint Designer.

  • Susan Etlinger, industry analyst at Altimeter Group, shares how to turn your data into actionable strategies and offers insights into getting the most from your organization’s data.
  • Data visualization specialist for The Washington Post, Gabi Steele, explains how to communicate insights through design and data storytelling.

Register now!


The post How do you tell the story behind your data? Find out on the next Modern Workplace appeared first on Office Blogs.

Exploring the Tessel 2 IoT and robotics development board

I'm still on vacation and still on the mend from surgery, so I'm continuing to play around with IoT devices on my staycation. Last week I looked at a few other devices.

Today I'm messing with the Tessel 2. You can buy it from SparkFun for the next few weeks for US$40. The Tessel is pretty cool as a tiny device because it includes WiFi on the board as well as two USB ports AND on-board Ethernet. It includes two custom "module" ports where you can pop in 10-pin modules like accelerometers, climate sensors, IR and more. There are also community-created Tessel modules for things like color sensing and motion.

Tessel is programmable in JavaScript and runs Node. Here are the tech specs:

  • 580MHz Mediatek MT7620n
  • Linux built on OpenWRT
  • 802.11bgn WiFi
  • WEP, WPA, WPA2-PSK, WPA2-Enterprise
  • 64MB DDR2 RAM
  • 32MB Flash
  • 16 pins GPIO, 7 of which support analog in
  • 2 USB 2.0 ports with per-port power switching

Tessel isn't a company, it's an open source project! They are on Twitter at @tesselproject and on GitHub at https://github.com/tessel.

NOTE: Some users (including me) have had issues with some Windows machines not recognizing the Tessel 2 over USB. I spent some time exploring this thread on their support site and ended up updating the board's firmware, but I haven't had issues since.

Once you've plugged your Tessel in, you talk to it with their Node-based "t2" command line:

>t2 list
INFO Searching for nearby Tessels...
USB Tessel-02A3226BCFA3
LAN Tessel-02A3226BCFA3

It's built on OpenWRT and you can even SSH into it if you want. I haven't needed to, as I just want to write JavaScript and push projects to it. It's nice to know that you CAN get to the low-level stuff if you need to, though.

For example, here's a basic "blink an LED" bit of code:

// Import the interface to Tessel hardware
var tessel = require('tessel');
// Turn one of the LEDs on to start.
tessel.led[2].on();
// Blink!
setInterval(function () {
  tessel.led[2].toggle();
  tessel.led[3].toggle();
}, 600);
console.log("I'm blinking! (Press CTRL + C to stop)");

The programming model is very familiar, and they've abstracted away the complexities of most of the hardware. Here's a GPS example:

var tessel = require('tessel');
var gpsLib = require('gps-a2235h');

var gps = gpsLib.use(tessel.port['A']);

// Wait until the module is connected
gps.on('ready', function () {
  console.log('GPS module powered and ready. Waiting for satellites...');

  // Emit coordinates when we get a coordinate fix
  gps.on('coordinates', function (coords) {
    console.log('Lat:', coords.lat, '\tLon:', coords.lon, '\tTimestamp:', coords.timestamp);
  });

  // Emit altitude when we get an altitude fix
  gps.on('altitude', function (alt) {
    console.log('Got an altitude of', alt.alt, 'meters (timestamp: ' + alt.timestamp + ')');
  });

  // Emitted when we have information about a fix on satellites
  gps.on('fix', function (data) {
    console.log(data.numSat, 'fixed.');
  });

  // Emitted when we drop the GPS signal
  gps.on('dropped', function () {
    console.log('gps signal dropped');
  });
});

gps.on('error', function (err) {
  console.log('got this error', err);
});

Of course, since it runs Node and has both WiFi and wired Ethernet, the Tessel can also be a web server! Here we return the image from a USB camera.

var av = require('tessel-av');
var os = require('os');
var http = require('http');
var port = 8000;
var camera = new av.Camera();

http.createServer((request, response) => {
  response.writeHead(200, { 'Content-Type': 'image/jpg' });
  camera.capture().pipe(response);
}).listen(port, () => console.log(`http://${os.hostname()}.local:${port}`));

I'll make a Hello World webserver:

var tessel = require('tessel');
var http = require('http');
var server = http.createServer(function (request, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.end("Hello from Tessel!\n");
});
server.listen(8080);
console.log("Server running at http://192.168.1.101:8080/");

Then push the code to the Tessel like this:

>t2 push index.js
INFO Looking for your Tessel...
INFO Connected to Tessel-02A3226BCFA3.
INFO Building project.
INFO Writing project to Flash on Tessel-02A3226BCFA3 (3.072 kB)...
INFO Deployed.
INFO Your Tessel may now be untethered.
INFO The application will run whenever Tessel boots up.
INFO To remove this application, use "t2 erase".
INFO Running index.js...

Where is my Tessel on my network?

>t2 wifi
INFO Looking for your Tessel...
INFO Connected to Tessel-02A3226BCFA3.
INFO Connected to "HANSELMAN"
INFO IP Address: 192.168.0.147
INFO Signal Strength: (33/70)
INFO Bitrate: 29mbps

Now I'll hit the webserver and there it is!


There's a lot of cool community work happening around Tessel.  You can get involved with the Tessel community if you're interested:

  • Join us on Slack—collaboration and real-time discussions (recommended: ask your questions here).
  • Tessel Forums—general discussion and support by the Tessel community.
  • tessel.hackster.io—community-submitted projects made with Tessel.
  • tessel.io/community—join a Tessel meetup near you! Meetups happen around the world and are the easiest way to play with hardware in person.
  • #tessel on Freenode—IRC channel for development questions and live help.
  • Stack Overflow—technical questions about using Tessel.

Sponsor: Big thanks to Telerik! They recently published a comprehensive whitepaper on The State of C#, discussing the history of C#, what’s new in C# 7 and whether C# is the top tech to know. Check it out!



© 2016 Scott Hanselman. All rights reserved.
     

Basic Network Capture Methods

Hi everyone. This is Michael Rendino, a Premier Field Engineer from Charlotte, NC and former member of the CTS networking support team. With my networking background, I have spent years reviewing network captures. One thing I always run into with my customers is that they often don’t know the best or easiest solution to get a network capture. There are many solutions you can use and choosing the right one often depends on the scenario. While colleagues have created blogs on getting a trace with a single tool, I wanted to provide a location that someone can bookmark to be a single set of instructions for a number of solutions. Please note that when reviewing traces, you can use one or more of these tools and aren’t necessarily tied to what was used to collect the trace.

The Options

First, let’s cover each of the tools that can be used to collect a network trace, in order from oldest to newest:

  1. Network Monitor 3.4 (Netmon) – https://www.microsoft.com/en-us/download/details.aspx?id=4865
  2. Wireshark (v 2.2.2 as of 11/16/16) – https://wireshark.org/#download
  3. Netsh Trace – built-in to operating system
  4. Microsoft Message Analyzer (MMA) (v 1.4 as of 6/13/16) – https://www.microsoft.com/en-us/download/details.aspx?id=44226

Comparison

| | Network Monitor | Wireshark | Netsh Trace | MMA |
|---|---|---|---|---|
| Download required | Yes | Yes | No | Yes |
| Receives updates | No (archived) | Frequent | No | Occasional |
| GUI | Yes | Yes | No | Yes |
| Command line | Nmcap | Dumpcap | Netsh trace | PowerShell (PEF) |
| Default format | .cap | .pcapng | .etl | .matp |
| Parsing tool | Netmon, Wireshark or MMA | Wireshark, MMA or Netmon (when trace saved in tcpdump format) | Netmon or MMA (MMA can save in CAP format) | MMA (Netmon or Wireshark if saved in CAP format) |
| Capture multiple points concurrently* | No | No | No | Yes |
| Ability to capture a rolling set of files** | Yes | Yes | No | No |
| Promiscuous mode | Off by default | On by default | No | Off by default |
| Capture at logon/reboot | No | No | Yes | No |
| Troubleshooting ATA*** | Yes | No | No | No |

*MMA gives you the ability to set up and collect captures from multiple systems (e.g., client and server) using a single client.

**Wireshark can capture X files of Y size and roll as needed. Network Monitor can capture a chained set of files, but will not overwrite old files and can only be done via command line.

***Network Monitor is currently the only supported tool to install on an Advanced Threat Analytics server.

The basics

Right off the bat, it should become apparent from the above table that one of these options, netsh trace, has a benefit over the others: it is ready to go without any further installation. It does require an elevated command prompt to run, but nothing beyond that. In many environments where change control is strict and the necessary software hasn’t already been installed, this often makes it the only option. Another item to note is that “netsh trace” is a command-line tool, while the other three each have command-line alternatives for network captures. Capturing from the command line is often beneficial because it eliminates the overhead of the GUI displaying and refreshing data in real time. As pointed out in the table, netsh traces can be opened with Netmon or MMA, but not Wireshark.

When collecting a short-term, simple trace for a set amount of time, there is not much of a difference between the tools. Each will let you create a trace, capture multiple NICs, and define capture rules (though typically you shouldn’t, as you may filter out something important). One item to note is promiscuous mode: be sure to enable it when you are doing port mirroring, so the computer captures all traffic on the port, not just the packets destined for its own MAC address.

Requirements

The only one with special requirements is Message Analyzer as certain features (like remote capture) are only possible on Windows 8.1, Server 2012 R2 and newer operating systems.

Instructions

And now the part you’ve been anxiously waiting for, the steps for each solution. I’ll provide both GUI and command line (where applicable) for getting a basic capture.

Network Monitor

GUI

  1. Launch Network Monitor. If you need promiscuous mode to capture traffic that is destined for machines other than the one where the capture is running, check the P-Mode box first, and then click “New Capture.” NOTE: You can select and deselect network adapters if you prefer, but these were the “quick” instructions, remember?

  2. Once you have the new capture open, simply click “Start” to begin tracing and “Stop” after you have captured the data you need. You can then click “Save As” to save the trace before starting your analysis.

  3. If you have applied a display filter or have selected certain frames and only want to retain that subset in a smaller file, you can save just those frames to a file if you wish.

Command Line

  1. Open an elevated command prompt for all of the following steps.
  2. Decide if you want to create multiple chained files of a particular size or a single capture file with a max size of 500 MB.
  3. Run one of the following commands:
    1. For chained files – “nmcap /network * /capture /file %computername%.chn:100MB”
      1. This command will create a series of 100 MB captures in the current folder (adjust the size as you wish).
      2. The name of the computer will be the name of the files, but you can replace %computername% with whatever you want.
      3. It will capture all network interfaces in the computer.
      4. If you wish to store the captures in a different folder, either run the command from another folder or put the full path before %computername%.chn.

    NOTE: Monitor the volume where the traces are being stored to ensure they don’t consume too much disk space. Because Wireshark can cap the number of files it keeps, it may be a better solution if you are unsure how long the trace must run.

    2. For a single file – “nmcap /network * /capture /file %computername%.cap”
      1. As previously noted, this command will create a single capture with a max size of 500 MB in the current folder.
      2. The name of the computer will be the name of the file, but you can replace %computername% with whatever you want.
      3. It will capture all network interfaces in the computer.
      4. If you wish to store the captures in a different folder, either run the command from another folder or put the full path before %computername%.cap.

    NOTE: You must keep the command window open while the capture runs.

  4. Once the issue reproduces, use Ctrl+C to stop the capture.

Wireshark

GUI

Single File

  1. Launch Wireshark and select the NIC(s) you want to capture.

  2. Click the blue shark fin icon to start the trace.

  3. After reproducing the issue, click the red stop icon to stop the capture.

  4. Save the file. Note that if you save it in .pcapng format (the default), it can’t be opened in Network Monitor but can be opened in MMA.

Chained Files

  1. If you want to capture multiple files, select Capture – Options (or press Ctrl+K).

  2. Select the NIC(s) you want to capture.

  3. Click the Output tab and enter a path and name for the files. The name will be appended with a file number, the date and time. Select the “Create a new file automatically after” option, choose a size for each file and, for the ring buffer, enter the number of files you want to create. In the image below, ten 100 MB files would be created, with the oldest file overwritten until the capture is stopped.

  4. If you wish to reduce the impact on the computer where the trace is being collected, click the Options tab, then deselect the “Update list of packets in real-time” and “Automatically scroll during live capture” options.

  5. Click the Start button to start the trace. After reproducing the issue, click the red stop icon to terminate the trace.

Command Line

  1. Open an elevated command prompt and switch to the Wireshark directory (usually c:\program files\Wireshark).
  2. From the Wireshark directory, run “dumpcap -D” to get a list of interfaces. You’ll need the interface number in the command to start the capture.
  3. Run “dumpcap -b filesize:100000 -b files:10 -i <interface number> -w c:\temp\%computername%.pcap”, substituting the interface number from step 2. You can use a different path and filename if you wish. This command will create ten rolling 100 MB captures until the trace is stopped. Adjust those numbers as desired.

NOTE: You must keep the command window open until the problem returns.

  4. Once the issue reproduces, use Ctrl+C to stop it.

Netsh Trace

One extra cool thing about “netsh trace” is that by default, it creates a .cab file alongside the trace that contains a wealth of helpful diagnostic information, such as various settings, event logs, and registry keys.

  1. Open an elevated command prompt and run the command “netsh trace start capture=yes tracefile=c:\temp\%computername%.etl”. You can close the command prompt if you wish.

    NOTE: If you want to capture an issue that occurs during boot or logon, use the “persistent=yes” switch. By doing that, the trace will run through the reboot and can be stopped after logon with the command in step 2. Also, if you don’t want the .cab file, simply add “report=no” to the command in step 1.

  2. Once the issue reproduces, open an elevated command prompt and run the command “netsh trace stop”.

Microsoft Message Analyzer

GUI

MMA is the most powerful and flexible of the network capture tools and fortunately, is the easiest for getting a trace.

  1. Run MMA as an administrator.
  2. Under “Favorite Scenarios,” click “Local Network Interfaces” if you are running Windows 8.1/Server 2012 R2 or newer. If you are running Windows 7 or Server 2008 R2, choose the “Loopback and Unencrypted IPSEC” option. The session will be created and the capture will start.

  3. Once you have reproduced the issue, click the blue stop icon.

Command Line

Command line captures with Message Analyzer are done with the PowerShell PEF module. The fact that it uses PowerShell makes it extremely powerful and flexible for setting up a capture. However, this article is for basic captures, so the following is the example from https://technet.microsoft.com/en-us/library/dn456526(v=wps.630).aspx. You can always save it as a script.

$TraceSession01 = New-PefTraceSession -Mode Circular -Force -Path "C:\Traces\Trace01.matu" -TotalSize 50 -SaveOnStop
Add-PefMessageSource -PEFSession $TraceSession01 -Source "Microsoft-Pef-WFP-MessageProvider"
Start-PefTraceSession -PEFSession $TraceSession01

The above script will create a 50 MB capture, overwrite an existing file at that path if one exists, and save the file once the session is stopped.

Conclusion

As you can see, the tools and methods available to collect a network capture are numerous, but this variety enables you to get traces for any situation. You may come to prefer one tool for capturing traces and another for reviewing them, or use more than one to view the same trace. I highly recommend that you become familiar with them all and run through the process before you actually need to get a trace. Again, these are basic instructions just to get all the information from the computer where the trace runs. There’s a plethora of options and capabilities in these tools, so feel free to dig in! I’ll include some helpful links below so you can continue your learning. Good luck!

Additional Information

  1. To learn more about your nmcap options, enter “nmcap /?” or “nmcap /examples”
  2. Wireshark training can be found at https://www.wireshark.org/#learnWS.
  3. For more information on Message Analyzer, check out the blog at https://blogs.technet.microsoft.com/messageanalyzer/.
  4. Message Analyzer training videos can be found at https://www.youtube.com/playlist?list=PLszrKxVJQz5Uwi90w9j4sQorZosTYgDO4.
  5. Message Analyzer Operating Guide – https://technet.microsoft.com/en-us/library/jj649776.aspx
  6. Information on the Message Analyzer PowerShell module can be found at https://technet.microsoft.com/en-us/library/dn456518(v=wps.630).aspx.
  7. Remote captures with MMA – https://blogs.technet.microsoft.com/messageanalyzer/2013/10/17/remote-capture-with-message-analyzer-and-windows-8-1/

NOTE: This article references 3rd party products. 3rd party products are not supported under any Microsoft standard support program or service. The information here is provided AS IS without warranty of any kind. Microsoft disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of these solutions remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use this documentation, even if Microsoft has been advised of the possibility of such damages.

Michael Rendino

Senior Premier Field Engineer

The week in .NET – On .NET with Steve Smith, Jint

To read last week’s post, see The week in .NET – .NET Core triage on On .NET, ShareX. Next week, the post will be a little late like this week.

On .NET

Last week, I published a short interview with Steve Smith that was shot during the MVP Summit. We talked about ASP.NET Core and its documentation, which Steve has been contributing to, about his consulting activity, and about his Kickstarter-funded software craftsmanship calendar.

This week, I’ll publish another MVP Summit short interview.

Package of the week: Jint

Jint is a JavaScript interpreter for .NET that provides full ECMA 5.1 compliance (6.0 work is underway) and can run on .NET Framework 4.5 and .NET Standard 1.3. It’s an ideal solution for adding scripting abilities to a .NET application. RavenDB uses it to perform small transformations on document fragments, for instance. It’s also commonly used as a scripting engine by games.

Running JavaScript code with Jint is as simple as spinning up an interpreter and handing it the objects and parameters it’s allowed to interact with.
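The script half of such an embedding is ordinary ES5 JavaScript. As a rough sketch (the `log` function here stands in for a delegate a hypothetical .NET host would register with the engine before evaluation; the C# hosting side isn't shown), a script handed to Jint might look like:

```javascript
// Ordinary ES5 code that an embedding host could hand to Jint.
// `log` stands in for a host-registered delegate; we stub it here so
// the same script also runs under plain Node for illustration.
if (typeof log === 'undefined') {
  var log = function (msg) { return msg; };
}

// Pure JavaScript logic the host wants to script.
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

// Call back into the host through the registered function.
var message = log('fib(10) = ' + fib(10));
```

In a real embedding, the host would register `log` (and any other objects) on the engine before evaluating the script, and the script would call them as plain functions.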

Interoperability in Jint works both ways, with simple translations between the two type systems that even include generics support.

.NET

ASP.NET

F#

Azure

Data

And this is it for this week!

Contribute to the week in .NET

As always, this weekly post couldn’t exist without community contributions, and I’d like to thank all those who sent links and tips. The F# section is provided by Phillip Carter, the gaming section by Stacey Haffner, and the Xamarin section by Dan Rigby.

You can participate too. Did you write a great blog post, or just read one? Do you want everyone to know about an amazing new contribution or a useful library? Did you make or play a great game built on .NET?
We’d love to hear from you, and feature your contributions on future posts:

This week’s post (and future posts) also contains news I first read on The ASP.NET Community Standup, on Weekly Xamarin, on F# weekly, and on Chris Alcock’s The Morning Brew.

When it comes to group collaboration, one size doesn’t fit all

The way people work together is evolving, and unique projects, work styles, functional roles and workforce diversity call for a complete set of collaboration tools. As the appetite for new ways of working together grows, new developments in communications, mobility and cloud services are enabling professionals to communicate and collaborate in ways that were previously not possible.

Not your typical online event

Each 90-minute session starts with an online business roundtable discussing your biggest business challenges with a trained facilitator and then transitions into a live environment in the cloud. You will receive a link to connect your own device to a remote desktop loaded with our latest and greatest technology, so you can experience first-hand how Microsoft tools can solve your biggest challenges.

Why should I attend?

During this interactive online session, you will explore:

  • How Microsoft Teams, the newest collaboration tool:
    • Keeps everyone engaged with threaded persistent chat.
    • Creates a hub for teamwork that works together with your other Office 365 apps.
    • Builds customized options for each team with channels, connectors, tabs and bots.
    • Adds your personality to your team with emojis, GIFs and stickers.
  • How to keep information secure while being productive—Make it easier to work securely and maintain compliance without inhibiting your workflow.
  • How to quickly visualize and analyze complex data—Zero in on the data and insights you need without having to involve a BI expert.
  • How to co-author and share content quickly—Access and edit documents even while others are editing and reviewing them all at the same time.
  • How to get immediate productivity gains—Most attendees leave with enough time-saving skills that time invested to attend a Customer Immersion Experience more than pays for itself in a few short days.

Space is limited. Each session is only open to 12 participants. Reserve your seat now.

The post When it comes to group collaboration, one size doesn’t fit all appeared first on Office Blogs.

Cumulative Update #3 for SQL Server 2014 SP2

The 3rd cumulative update release for SQL Server 2014 SP2 is now available for download at the Microsoft Downloads site. Please note that registration is no longer required to download Cumulative updates.

To learn more about the release or servicing model, please visit:

 

Cumulative Update #10 for SQL Server 2014 SP1

The 10th cumulative update release for SQL Server 2014 SP1 is now available for download at the Microsoft Downloads site. Please note that registration is no longer required to download Cumulative updates.

To learn more about the release or servicing model, please visit:


Are you ready to host large-scale virtual presentations?

Large-scale virtual meetings have unique needs and considerations that your IT team should keep in mind to ensure you can deliver a high-quality virtual experience. Beyond selecting the right virtual meeting software, what does your IT department need to do to ensure successful large-scale virtual presentations?

Identify what type of virtual meeting solution you need

First, know when to use a large-scale virtual meeting over other video meeting solutions, so your IT team isn’t tasked with preparing unnecessary technology. The chart below outlines some potential solution types—based on communication type, number of attendees, use cases and benefits.


Recommendations based on general numbers and software; actual numbers may vary with different platforms.

If you require a large-scale solution for your virtual meeting, there are some important factors your IT team needs to consider—such as location logistics and broadcast bandwidth—to ensure a high-quality experience for everyone.

Presentation location logistics

Ensure your selected location has all the necessary elements to support all your technical needs during the presentation. Look for things like:

  • Internet/Wi-Fi strength.
  • Computer availability and features (webcams, etc.).
  • Installed software updates (to avoid last-minute downloads).
  • Number of outlets, extra internet connections and other connectivity considerations.
  • Space for extra people, cameras and equipment to ensure there’s room for someone from your IT team to be on-site for support during the presentation.

Best broadcast bandwidth

In the weeks before and on the day of the presentation, check that your company’s internet connection is strong enough to broadcast in high definition. This matters for two reasons: broadcasting and streaming. For example, if people on location will be tuning in virtually, will broadcasting bog down their viewing experience? Is your network able to support large-scale, concurrent sessions, or might you need a software-based media distribution solution? While there’s no way you can manage offsite attendees’ bandwidth, you can make sure that the connection used to broadcast your virtual meeting is ready for the task.

Host large-scale meetings like a pro

Learn how to produce compelling virtual meetings and broadcasts. Today’s technology can help, but good pre-production planning is key to your success.

Get the eBook

Other important considerations

While not necessarily your IT department’s responsibility, there are several other aspects of a large-scale broadcast that might affect your team’s ability to deliver it successfully.

  • Camera quality—Whether you’re broadcasting with a built-in webcam or an external device, look for a high-quality camera that shoots in HD.
  • Sound quality—Unbalanced or extreme sound fluctuations distract audiences. Your IT department can help by researching the best mics available and ensuring that none of the other technology in the room will distract from the broadcast.

And, action!

Broadcast meetings are a great format for getting messages out to large audiences, no matter where they are. But low-quality presentations are a surefire way to lose attendee attention. Equipped to reach up to 10,000 attendees in a secure and familiar platform, Skype Meeting Broadcast puts two paramount features first: ease of use and meeting quality. By mastering professional-quality broadcast techniques, you can create engaging online broadcasts for any audience.

For additional insights on how to effectively and efficiently create the best possible meetings for your organization, check out “The Ultimate Meeting Guide.”

The post Are you ready to host large-scale virtual presentations? appeared first on Office Blogs.

Power BI Desktop – 2016 Year in Review

2016 has been quite the year for the Power BI Desktop. Over the course of our 11 releases this year, we’ve released over 190 features and improvements based on the feedback that you've given us throughout the year. For December, we are taking a break from our monthly releases to focus on quality and we’ll be back early in the new year with a brand-new Desktop release. So instead of our typical monthly blog post, we’re looking back at the past year, recapping several of our favorite features, and taking a look at our big focus areas from this past year.

This Week on Windows: tips, tricks & the best of Windows 10 for your new PC!


If you got a new Windows 10 PC this holiday, we’re here to show you the latest features to keep you productive and having fun! Here are a few ways you can get started with Windows 10:

Check out our weekly tips for more on getting started with Windows 10 with Cortana, Microsoft Edge, Windows Ink and more.

Here’s what’s new in the Windows Store this week:

Getting Started Collection

The holidays are almost over, but the adventures with your new Windows device are just beginning. Our Getting Started Collection brings together titles chosen specifically to get you up to speed and having fun right from the start, including Netflix, Hulu, Pandora, Dropbox and more!

Countdown Collection

We’re counting down to the New Year with a collection in the Windows Store*! For a limited time, save up to 30% on the hottest games; get apps, software and chart-topping music; find movies as low as $8.99 and today’s hit TV shows at up to 50% off; and more.

The Accountant

Math savant and CPA Christian Wolff (Ben Affleck) works as an accountant for the world’s most dangerous criminal organizations – but when he takes on a new client, the body count begins to rise as the government closes in. Get The Accountant ($19.99 HD, $14.99 SD) two full weeks before it comes to Blu-ray in the Movies & TV section of the Windows Store.

Follow your favorite College Football team to the championship on Sling TV


For college football fans, this week is the most wonderful time of the year, and with Sling TV (free download, subscription or trial required) you won’t miss a playoff or Bowl game. Get ready for gridiron glory and a not-to-be-missed lineup of college football action, and get it all on Sling TV.

Have a great weekend!

*Available through Jan. 9, 2017, on Windows 10 devices in the US, UK, Canada, France, Germany, Spain, Italy, Mexico, Brazil, and Australia. Offers and content vary by market. Limited availability; offers may change at any time.

The post This Week on Windows: tips, tricks & the best of Windows 10 for your new PC! appeared first on Windows Experience Blog.

Released: Exchange Generate Message Profile Script v2.0


Greetings Exchange Community!

Today, I am pleased to announce the release of a major update to the Exchange Generate Message Profile script. This release primarily focuses on two enhancements.

The first enhancement is that the script now uses multiple “threads” (currently in the form of PowerShell jobs, with runspaces under consideration for a future version) to collect data from multiple servers simultaneously. This should significantly speed up data collection in environments with a large number of servers in a site. A few notes on this:

  1. Each thread creates its own fully RBAC-compliant connection to a local Exchange server (defaulting to itself). Because each session uses the RBAC-compliant IIS-based PowerShell proxy, the CPU utilization of the server running the script is increased not only by the multiple threads that are spawned, but also by the IIS service that each of them is connected to.
  2. When run on a full Exchange Server, the script defaults to using the number of threads equal to approximately ¼ of the CPU cores on the server. This helps ensure that by default the script does not use more than ~50% CPU resources on the server running the script, as it accounts for the CPU load of the threads (1/4 = 25%) and the load they place (another 25%) on the IIS service.
  3. When run on a system with just the Exchange Management Tools installed, the script defaults to using a number of threads equal to approximately ½ of the system’s CPU cores.
  4. The script will gracefully shut down any background jobs still running if CTRL-C is used to stop the script.

The second enhancement concerns the script’s behavior of skipping the creation of a message profile for a site when a single server in it is unavailable or has data collection issues. This is still the default behavior, but it can now be overridden by specifying the minimum percentage of servers in a site that must be online and return data for a message profile to be generated for that site. Leaving this at the default setting is highly recommended, but this requested feature lets the script produce a slightly skewed message profile, rather than nothing, when a small percentage of servers are unresponsive or have data collection issues.

For a complete list of all enhancements and bug fixes, please review the Version History on the TechNet Gallery posting.

As always, I welcome feedback through the TechNet Gallery posting.

Dan Sheehan
Senior Premier Field Engineer

Evaluating Shared Expressions in Tabular 1400 Models


In our December blog post, Introducing a Modern Get Data Experience for SQL Server vNext on Windows CTP 1.1 for Analysis Services, we mentioned that SSDT Tabular does not yet support shared expressions, but the CTP 1.1 Analysis Services engine already does. So how can you get started using this exciting new enhancement to Tabular models now? Let’s take a look.

With shared expressions, you can encapsulate complex or frequently used logic through parameters, functions, or queries. A classic example is a table with numerous partitions. Instead of duplicating a source query with minor modifications in the WHERE clause for each partition, the modern Get Data experience lets you define the query once as a shared expression and then use it in each partition. If you need to modify the source query later, you only need to change the shared expression, and all partitions that refer to it automatically pick up the changes.

In a forthcoming SSDT Tabular release, you’ll find an Expressions node in Tabular Model Explorer which will contain all your shared expressions. However, if you want to evaluate this capability now, you’ll have to create your shared expressions programmatically. Here’s how:

  1. Create a Tabular 1400 model by using the December release of SSDT 17.0 RC2 for SQL Server vNext CTP 1.1 Analysis Services. Remember that this is an early preview: install only the Analysis Services component, not the Reporting Services and Integration Services components; don’t use this version in a production environment; install fresh rather than attempting to upgrade from previous SSDT versions; and only work with Tabular 1400 models in this preview version. For Multidimensional as well as Tabular 1100, 1103, and 1200 models, use SSDT version 16.5.
  2. Modify the Model.bim file from your Tabular 1400 project by using the Tabular Object Model (TOM). Apply your changes programmatically and then serialize the changes back into the Model.bim file.
  3. Process the model in the preview version of SSDT Tabular. Just keep in mind that SSDT Tabular doesn’t yet know how to deal with shared expressions, so don’t attempt to modify the source query of a table or partition that relies on a shared expression, as SSDT Tabular may become unresponsive.

Let’s go through these steps in greater detail by converting the source query of a presumably large table into a shared query, and then defining multiple partitions based on this shared query. As an optional step, afterwards you can modify the shared query and evaluate the effects of the changes across all partitions. For your reference, download the Shared Expression Code Sample.

Step 1) Create a Tabular 1400 model

If you want to follow the explanations on your own workstation, create a new Tabular 1400 model as explained in Introducing a Modern Get Data Experience for SQL Server vNext on Windows CTP 1.1 for Analysis Services. Connect to an instance of the AdventureWorksDW database, and import among others the FactInternetSales table. A simple source query suffices, as in the following screenshot.

[Screenshot: FactInternetSales source query]

Step 2) Modify the Model.bim file by using TOM

As you’re going to modify the Model.bim file of a Tabular project outside of SSDT, make sure you close the Tabular project at this point. Then start Visual Studio, create a new Console Application project, and add references to the TOM libraries as explained under “Working with Tabular 1400 models programmatically” in Introducing a Modern Get Data Experience for SQL Server vNext on Windows CTP 1.1 for Analysis Services.

The first task is to deserialize the Model.bim file into an offline database object. The following code snippet gets this done (you might have to update the bimFilePath variable). Of course, you can have a more elaborate implementation using OpenFileDialog and error handling, but that’s not the focus of this article.

string bimFilePath = @"C:\Users\Administrator\Documents\Visual Studio 2015\Projects\TabularProject1\TabularProject1\Model.bim";
var tabularDB = TOM.JsonSerializer.DeserializeDatabase(File.ReadAllText(bimFilePath));

The next task is to add a shared expression to the model, as the following code snippet demonstrates. Again, this is a bare-bones minimum implementation. The code will fail if an expression named SharedQuery already exists. You could check for its existence by using if (tabularDB.Model.Expressions.Contains("SharedQuery")) and skip the creation if it does.

tabularDB.Model.Expressions.Add(new TOM.NamedExpression()
{
    Kind = TOM.ExpressionKind.M,
    Name = "SharedQuery",
    Description = "A shared query for the FactInternetSales Table",
    Expression = "let"
        + "\n    Source = AS_AdventureWorksDW,"
        + "\n    dbo_FactInternetSales = Source{[Schema=\"dbo\",Item=\"FactInternetSales\"]}[Data]"
        + "\nin"
        + "\n    dbo_FactInternetSales",
});

Perhaps the most involved task is to remove the existing partition from the target (FactInternetSales) table and create the desired number of new partitions based on the shared expression. The following code sample creates 10 partitions and uses the Table.Range function to split the shared expression into chunks of up to 10,000 rows. This is a simple way to slice the source data. Typically, you would partition based on the values from a date column or other criteria.

tabularDB.Model.Tables["FactInternetSales"].Partitions.Clear();
for (int i = 0; i < 10; i++)
{
    tabularDB.Model.Tables["FactInternetSales"].Partitions.Add(new TOM.Partition()
    {
        Name = string.Format("FactInternetSalesP{0}", i),
        Source = new TOM.MPartitionSource()
        {
            Expression = string.Format("Table.Range(SharedQuery,{0},{1})", i * 10000, 10000),
        }
    });
}
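
As noted above, production models typically partition on a date column rather than on row offsets. Purely as a sketch (the OrderDateKey column and the yearly boundaries are assumptions based on the AdventureWorksDW schema, not part of the original example), date-based partitions over the same shared expression might look like this:

```csharp
// Sketch: partition the shared query by calendar year instead of row ranges.
// Assumes FactInternetSales exposes an integer OrderDateKey column in yyyymmdd
// format, as in AdventureWorksDW; verify this against your own model.
int[] years = { 2011, 2012, 2013, 2014 };
tabularDB.Model.Tables["FactInternetSales"].Partitions.Clear();
foreach (int year in years)
{
    tabularDB.Model.Tables["FactInternetSales"].Partitions.Add(new TOM.Partition()
    {
        Name = string.Format("FactInternetSales{0}", year),
        Source = new TOM.MPartitionSource()
        {
            // Each partition filters the shared expression down to a single year.
            Expression = string.Format(
                "Table.SelectRows(SharedQuery, each [OrderDateKey] >= {0}0101 and [OrderDateKey] <= {0}1231)",
                year),
        }
    });
}
```

Because every partition still refers to SharedQuery, a later change to the source query again propagates to all partitions automatically.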

The final step is to serialize the resulting Tabular database object with all the modifications back into the Model.bim file, as the following line of code demonstrates.

File.WriteAllText(bimFilePath, TOM.JsonSerializer.SerializeDatabase(tabularDB));

Step 3) Process the modified model in SSDT Tabular

Having serialized the changes back into the Model.bim file, you can open the Tabular project again in SSDT. In Tabular Model Explorer, expand Tables, FactInternetSales, and Partitions, and verify that 10 partitions exist, as illustrated in the following screenshot. Verify that SSDT can process the table by opening the Model menu, pointing to Process, and then clicking Process Table.

[Screenshot: Processing the FactInternetSales table in SSDT Tabular]

You can also verify the query expression for each partition in Partition Manager. Just remember, however, that you must click the Cancel button to close the Partition Manager window. Do not click OK; with the December 2016 preview release, SSDT could become unresponsive.

Wrapping Things Up

Congratulations! Your FactInternetSales table now effectively uses a centralized source query shared across all partitions. You can modify the source query without having to update each individual partition. For example, you might decide to remove the ‘SO’ prefix from the values in the SalesOrderNumber column to get the order number in numeric form. The following screenshot shows the modified source query in the Advanced Editor window.

[Screenshot: Modified source query in the Advanced Editor]

Of course, you cannot edit the shared query in SSDT yet. But you could import the FactInternetSales table a second time and then edit the source query on that table. When you achieve the desired result, copy the M script into your TOM application to modify the shared expression accordingly. The following lines of code correspond to the screenshot above.

tabularDB.Model.Expressions["SharedQuery"].Expression = "let"
    + "\n    Source = AS_AdventureWorksDW,"
    + "\n    dbo_FactInternetSales = Source{[Schema=\"dbo\",Item=\"FactInternetSales\"]}[Data],"
    + "\n    #\"Split Column by Position\" = Table.SplitColumn(dbo_FactInternetSales,\"SalesOrderNumber\",Splitter.SplitTextByPositions({0, 2}, false),{\"SalesOrderNumber.1\", \"SalesOrderNumber\"}),"
    + "\n    #\"Changed Type\" = Table.TransformColumnTypes(#\"Split Column by Position\",{{\"SalesOrderNumber.1\", type text}, {\"SalesOrderNumber\", Int64.Type}}),"
    + "\n    #\"Removed Columns\" = Table.RemoveColumns(#\"Changed Type\",{\"SalesOrderNumber.1\"})"
    + "\nin"
    + "\n    #\"Removed Columns\"";

One final note of caution: If you remove columns in your shared expression that already exist on the table, make sure you also remove these columns from the table’s Columns collection to bring the table back into a consistent state.
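
To illustrate that last point with a hedged sketch (reusing the tabularDB and bimFilePath variables from the earlier snippets; the SalesOrderNumber.1 column name comes from the modified query above), the stale column could be removed through TOM like this:

```csharp
// Sketch: if the shared query no longer returns the SalesOrderNumber.1 column,
// remove it from the table metadata to keep the model in a consistent state.
var factTable = tabularDB.Model.Tables["FactInternetSales"];
var staleColumn = factTable.Columns.Find("SalesOrderNumber.1");
if (staleColumn != null)
{
    factTable.Columns.Remove(staleColumn);
}

// Serialize the change back into the Model.bim file, as before.
File.WriteAllText(bimFilePath, TOM.JsonSerializer.SerializeDatabase(tabularDB));
```

Find returns null when no column by that name exists, so the removal is safe to run repeatedly.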

That’s about it on shared expressions for now. Hopefully in the not-so-distant future, you’ll be able to create shared parameters, functions, and queries directly in SSDT Tabular. Stay tuned for more updates on the modern Get Data experience. And, as always, please send us your feedback via the SSASPrev email alias here at Microsoft.com or use any other available communication channels such as UserVoice or MSDN forums. You can influence the evolution of the Analysis Services connectivity stack to the benefit of all our customers.

Columnstore Index Performance: Column Elimination

Columnstore Index Performance: Rowgroup Elimination


Windows 10 Tip: Enable “Hey Cortana” and teach Cortana to recognize your voice


Today, we’re showing you how you can enable “Hey Cortana” so you can simply ask Cortana for whatever it is you need with your voice!

“Hey Cortana” lets you interact with Cortana hands-free so you can master multitasking. You can even use “Hey Cortana” above your lock screen, so you don’t even have to log in to your computer to have Cortana help you out. Here’s how to enable “Hey Cortana”:


To get started, click on the Search bar, then click on the Settings icon and find the button to enable Hey Cortana.

Here are a couple things you can ask for when you say, “Hey Cortana:”

  • Ask Cortana to set a reminder for a specific time
  • Ask Cortana about local services, such as the nearest Thai restaurant
  • Ask Cortana to add something to your grocery list
  • Ask Cortana for your next meeting or what’s on your schedule
  • Ask Cortana questions about just about anything, such as how many ounces are in a cup

To enable Cortana above the lock screen, go to Settings and enable “Use Cortana Even When My Device is Locked.”

You can also train Cortana to recognize your voice so she can better understand you:


Once you have “Hey Cortana” enabled, you can click on “Try to respond only to me.” Then just underneath that, click on “Learn how to say, ‘Hey Cortana.’” Cortana will prompt you to repeat six phrases out loud. Then you’re all set!

For more Windows 10 tips on Cortana, head over to these blog posts to read about using Cortana above your lock screen or searching your PC and the web with Cortana, and have a great week!

*Cortana available in select markets

The post Windows 10 Tip: Enable “Hey Cortana” and teach Cortana to recognize your voice appeared first on Windows Experience Blog.

Columnstore Index Performance: Batch Mode Execution

Extending the support of platforms for SCSM 2016


Hi everyone! Thanks for sharing your feedback and experiences with the Service Manager 2016 deployment. We heard you, and to make your deployment and upgrade experience better, we are now extending the support of following platforms for Service Manager 2016 –

  • Support of SQL Server 2014 SP2 for both SM 2016 & 2012 R2
    As some of you were waiting to upgrade your SQL servers, Service Manager 2016 and Service Manager 2012 R2 (with Update Rollup 9) now officially support Service Pack 2 for SQL Server 2014 for hosting your Service Manager CMDB and Data Warehouse databases.
  • Support of SM 2016 console on Windows 7
    Service Manager 2016 console installation will now be supported on Microsoft Windows 7. This requires installation of .NET 4.5.1 as a prerequisite for Windows 7 to support the SM 2016 console. You can download it from here.

    Please note that the new spell check feature which was introduced in the Service Manager 2016 console, will have limited language support for Windows 7 installations. The supported languages on Windows 7 include English, French, German and Spanish.

  • Support of SM 2016 connectors with System Center 2012 R2 components
    We heard your feedback that a seamless, easier upgrade to Service Manager 2016 requires keeping support for SM connectors with System Center 2012 R2 components. Hence, we will support System Center 2012 R2 Virtual Machine Manager, Orchestrator, Operations Manager and Configuration Manager (including SCCM 1511, 1602 and 1606) for use with Service Manager 2016 connectors.

We have done a fair amount of validation to make sure that everything continues to work as expected. That said, if anything seems amiss, let us know via your comments below.

CES 2017: Lenovo updates the ThinkPad X1 family and introduces new gaming PCs


Each of these new devices was thoughtfully designed to meet your needs and habits, whether at home, in the office or on-the-go. These devices also take advantage of great Windows 10 features, including Windows Hello with built-in face recognition cameras and fingerprint readers, and touch screens that light up Windows Ink. Also, built into these new devices are noise-cancelling microphones to support Cortana*, your personal digital assistant.

Here’s a closer look at what Lenovo announced today:

A premium PC experience with the ThinkPad X1 family

ThinkPad X1 Carbon

The 2017 ThinkPad X1 Carbon, available in classic ThinkPad Black and a new Silver color.


The next generation of ThinkPad X1 products are committed to uncompromised innovation. The 2017 ThinkPad X1 Carbon, available in classic ThinkPad Black and a new Silver color, offers more than 15 hours of battery life, weighs just 2.5 pounds and packs a 14-inch high quality IPS display into a new sleek 13-inch form factor.

You can log in easily and securely with Windows Hello using either the fingerprint sensor or the new face-recognition infrared camera on the new Lenovo ThinkPad X1 Carbon.



Now offering more than 15 hours of battery life, the X1 Carbon has been redesigned to include Thunderbolt 3 ports for super-fast, slim connections that let you transfer data quickly, plus super-fast LTE-A Wireless WAN and WiFi CERTIFIED WiGig options that deliver superlative performance and connectivity.

ThinkPad X1 Yoga and the ThinkPad X1 Tablet

Also available in the new Silver color, the updated ThinkPad X1 Yoga is flawlessly flexible, adapting to your needs with true multi-mode capability, and its 14-inch OLED screen delivers stunning colors and absolute blacks. If you’re not familiar with OLED, it’s one of those things you must see to believe.

Lenovo ThinkPad X1 Yoga


A redesigned rechargeable pen and an improved ‘rise and fall’ keyboard complete the experience.

Lenovo ThinkPad X1 Tablet


Alongside the ThinkPad X1 Yoga is the ThinkPad X1 Tablet, which continues to impress with its light weight, modularity and serviceability.

The Lenovo ThinkPad X1 Tablet features an integrated projector option making it easier to go from creation to presentation.


The ThinkPad X1 Tablet has unique modules that offer port expansion and up to 5 hours of additional battery life, and an integrated projector option making it easier to go from creation to presentation. It also comes with new Intel Core processors for better performance and graphics than ever.

Pricing and availability

The ThinkPad X1 Carbon starts at $1,349, and will be available beginning in February 2017. The ThinkPad X1 Yoga starts at $1,499, and will be available beginning in February 2017. The ThinkPad X1 Tablet starts at $949, and will be available beginning in March 2017. All available in the US.

Work-life integration with the Miix 720

The ultra-chic Miix 720 detachable includes the features needed to work smarter. You can simply remove the keyboard to instantly transform the Miix 720 into a feature-rich touchscreen tablet. Paired with a Lenovo Active Pen 2 and Windows Ink, the Miix 720 is ideal for effortless note taking with OneNote or Microsoft Edge.

The Lenovo Miix 720 Detachable with Lenovo Active Pen 2


The integrated infrared camera unlocks Windows Hello facial recognition, which lets you log in without the hassle of recalling or re-typing your password, and the tablet kickstand adjusts easily to any angle up to 150 degrees.

Other features include:

  • An ultra-crisp 12-inch QHD+ display for super-crisp screen resolution and a precision, high-end touchpad-enabled keyboard that makes for an awesome input experience
  • Up to 7th Gen Intel Core i7 processor and Thunderbolt 3, the fastest port available on a PC today

Pricing and availability

The Miix 720 (keyboard included) starts at $999.99, available in two color options, Champagne Gold and Iron Gray, beginning in April 2017. The Lenovo Active Pen 2 starts at $59.99, and will be available beginning in February 2017. All available in the US.

Two new powerful gaming laptops designed for PC gamers: the Lenovo Legion Y720 Laptop and Lenovo Legion Y520 Laptop with Windows 10

This year, Lenovo is launching a dedicated sub-brand for Lenovo gaming PCs – called Lenovo Legion. The new gaming sub-brand offers gamers powerful gaming devices and community engagement. In their quest to provide gamers the most immersive PC gaming experiences, Lenovo spent time building and listening to a community of gamers to better understand what they value most.

Lenovo’s first new Legion offerings come in the form of two powerful gaming laptops designed for PC gamers: the Lenovo Legion Y720 Laptop and Lenovo Legion Y520 Laptop, powered by Windows 10. Lenovo Legion laptops are outfitted with up to the latest NVIDIA graphics cards and Intel’s latest 7th Gen Core i7 processors for the speed needed to win. More RAM also means better gameplay, and boosted RAM specs deliver that extra edge; RAM was the second most desired upgrade in gamers’ feedback. Powered by the two laptops’ 16 GB of DDR4 memory, gamers can run and stream their favorite game, respond to chat questions and play music all at the same time without a hitch. With Windows 10, both of these new laptops are equipped with the Xbox app and can take advantage of Xbox Play Anywhere. You can read more about the Lenovo Legion Y720 Laptop and Lenovo Legion Y520 Laptop over on Lenovo’s blog.

The Lenovo Legion Y720 Laptop also features:

The Lenovo Legion Y720 Laptop


  • Lenovo’s first laptop with an integrated Xbox Wireless Controller receiver built in

  • VR-ready with up to NVIDIA GeForce GTX 1060 6GB DDR5 graphics
  • World’s first Windows-based Dolby Atmos PC1, with two 2W JBL speakers and a 3W subwoofer for incredible sound
  • Thunderbolt 3, the fastest port available on a PC today, to plug and play at lightning speeds, and the option of an integrated Xbox One Wireless Controller receiver that can support up to four controllers simultaneously for gaming with friends
  • High-quality screen resolution with up to UHD (3840 x 2160) IPS anti-glare display
  • An optional RGB keyboard for more precise gaming in the dark

The Lenovo Legion Y520 Laptop also features:

The Lenovo Legion Y520 Laptop

  • 6-inch FHD (1920 x 1080) display for an amazing visual experience
  • Immersive Dolby Audio Premium sound with two 2W Harman Kardon speakers
  • Optional red backlit keyboard for gaming in the dark

Pricing and availability

The Lenovo Legion Y720 Laptop starts at $1,399.99, and will be available beginning in April 2017. The Lenovo Legion Y520 Laptop starts at $899.99, and will be available beginning in February 2017. All available in the US.

Taking Control with the Lenovo 500 Multimedia Controller


Want to browse the web from your couch? Or turn on your favorite playlist from the dining table? The Lenovo 500 Multimedia Controller is not only a wireless mouse and keyboard in one, it’s also a remote control that fits in the palm of your hand and dramatically improves the way you connect to your PCs, the web and displays. The keyboard area doubles as a Windows 10 gesture-supported, multi-touch capacitive touchpad: simply press it to type, or slide your fingers to navigate. The entire keypad surface is your touchpad, and its lightweight design won’t weigh you down. Lean back on the couch and browse the web, stream movies and more.

Pricing and availability

The Lenovo 500 Multimedia Controller starts at $54.99, and will be available beginning in March 2017.

It’s great to see partners like Lenovo pushing the boundaries of what’s possible and creating beautiful hardware that lights up Windows 10. You can learn more about Lenovo’s news at lenovo.com/ces.

*Cortana available in select markets.
1Based on Lenovo’s internal analysis as of Dec 9, 2016 of PCs using Windows sold by major competitors shipping >1 million units worldwide annually. Requires RS2 update for Windows 10 coming April 2017.   

The post CES 2017: Lenovo updates the ThinkPad X1 family and introduces new gaming PCs appeared first on Windows Experience Blog.

End-to-end cloud experiences for developers at Node.js Interactive North America


Over 700 developers, DevOps engineers and other Node.js enthusiasts met in Austin last month for Node.js Interactive North America. Microsoft is proud to have been a platinum sponsor, and I’m particularly thrilled to have keynoted the event, kicking off a great week of community engagement: from meetups to workshops, sessions covering everything from debugging to robots, and content that highlights our work in technologies like TypeScript and Node-Chakracore.

For those of us working with Node.js at Microsoft, this event is an important milestone in our own Node.js journey. Today we support Node.js broadly in Microsoft Azure, providing developers with architectural choices: building applications on infrastructure, with VM Scale Sets and Linux-based Node.js stacks; on container infrastructure, with Azure Container Service; on our PaaS, with Node.js support for web, IoT, mobile or serverless applications; and through third-party Node.js IaaS and PaaS solutions in Azure.

Whatever choice you make, we add value to those investments by providing Node.js SDKs for multiple Azure services and client drivers for many of our data solutions, including MongoDB protocol support in DocumentDB. What’s more, we value developer productivity on whatever platform you choose, which is why we continue investing in great DevOps tooling and a redefined coding experience with Visual Studio Code, providing IntelliSense and debugging, powerful extensibility and out-of-the-box Azure support that works everywhere.

In my keynote at Node.js Interactive North America, I gave attendees some perspective on our vision for Node.js and where we are going, walking participants through what those experiences look and feel like for Node.js developers in the cloud, using solutions like Docker support in Azure App Service and covering debugging use cases. (Even if you couldn’t attend the event, you can follow along on GitHub.)

Developers around the globe are adopting this powerful combination of open source, Node.js and the cloud at a rapid pace. Enterprise adoption continues growing, and is already popular in polyglot scenarios, including amongst Microsoft customers.


In fact, during my keynote I shared with the community our perspective on how and why organizations are adopting Node.js in the cloud, based on our experience as an open and flexible global cloud platform where over 60% of Marketplace solutions integrate open source, nearly 1 in 3 VMs run Linux, the number of customers running containers is quadrupling and over a dozen Node.js solutions coexist in Marketplace, signaling we can expect the growth of Node.js in Azure to continue.

Our Node.js vision focuses on developer productivity, flexible and powerful cloud deployments and production operations & diagnostics that support the enterprise business needs. Some of our products like TypeScript, Visual Studio or Microsoft Azure are helping customers bring this vision to reality today, and we will continue investing in this portfolio as well as in the community and ecosystem to ensure that we can maintain a learning loop that empowers developers to do more with Node.js in the cloud.

For example, Microsoft is part of the Node.js Foundation (which, by the way, is looking for your input) and an active participant in the CTC, TSC and a number of working groups, contributing in areas like TypeScript, Chakra and more, and we learn a lot through our developer support team focused on open source experiences, including Node.js.

As we close a great year of Node.js momentum in the cloud, make sure you check out all the content from Node.js Interactive North America on YouTube, explore the end-to-end Node.js demo and get started with Node.js in Azure.
