
Use Docker Compose and Service Discovery on Windows to scale-out your multi-service container application


Article by Kallie Bracken and Jason Messer

The containers revolution popularized by Docker has come to Windows, so that developers on Windows 10 (Anniversary Update) or IT Pros using Windows Server 2016 can rapidly build, test, and deploy Windows “containerized” applications!

Based on community feedback, we have made several improvements to the Windows containers networking stack to enable multi-container, multi-service application scenarios. Support for Service Discovery and the ability to create (or re-use existing) networks are at the center of the improvements that bring the efficiency of Docker Compose to Windows. Docker Compose enables developers to instantly build, deploy and scale out their “containerized” applications running in Windows containers with just a few simple commands. Developers define their application using a ‘Compose file’ that specifies the services, corresponding container images, and networking infrastructure required to run the application. Service Discovery is a key requirement for scaling out multi-service applications using DNS-based load-balancing, and we are proud to announce support for Service Discovery in the most recent versions of Windows 10 and Windows Server 2016.

Take your next step in mastering development with Windows Containers, and keep letting us know what great capabilities you would like to see next!


When it comes to using Docker to manage Windows containers, with just a little background it’s easy to get simple container instances up and running. Once you’ve covered the basics, the next step is to build your own custom container images using Dockerfiles to install features, applications and other configuration layers on top of the Windows base container images. From there, the next step is to get your hands dirty building multi-tier applications, composed of multiple services running in multiple container instances. It’s here—in the modularization and scaling-out of your application—that Docker Compose comes in; Compose is the perfect tool for streamlining the specification and deployment of multi-tier, multi-container applications. Docker Compose registers each container instance by service name through the Docker engine, thereby allowing containers to ‘discover’ each other by name when sending intra-application network traffic. Application services can also be scaled out to multiple container instances using Compose. Network traffic destined for a multi-container service is then distributed round-robin, using DNS load-balancing, across all container instances implementing that service.

This post walks through the process of creating and deploying a multi-tier blog application using Docker Compose (Compose file and application shown in Figure 1).


Figure 1: The Compose File used to create the blog application, including its BlogEngine.NET front-end (the ‘web’ service) and SQL Server back-end (the ‘db’ service).
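Since the Compose file appears above only as an image, here is a minimal sketch of what such a file might look like. The service names, the ‘nat’ network, and the use of ‘ports’ for ‘web’ versus ‘expose’ for ‘db’ come from this post; the format version, build paths and port numbers are assumptions:

version: '2.1'

services:
  web:
    build: ./web        # BlogEngine.NET front-end (IIS, port 80)
    ports:
      - "80:80"         # fixed HOST:CONTAINER mapping (limits 'web' to one instance)
    depends_on:
      - db
  db:
    build: ./db         # SQL Server Express back-end
    expose:
      - "1433"          # container port only; the host port is assigned dynamically

networks:
  default:
    external:
      name: nat         # re-use the default NAT network created when Docker is installed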

Note: Docker Compose can be used to scale-out applications on a single host which is the scope of this post. To scale-out your ‘containerized’ application across multiple hosts, the application should be deployed on a multi-node cluster using a tool such as Docker Swarm. Look for multi-host networking support in Docker Swarm on Windows in the near future.

The first tier of the application is an ASP.NET web app, BlogEngine.NET, and the back-end tier is a database built on SQL Server Express 2014. The database is created to manage and store blog posts from different users which are subsequently displayed through the Blog Engine app.

New to Docker or Windows Containers?

This post assumes familiarity with the basics of Docker, Windows containers and ‘containerized’ ASP.NET applications. Here are some good places to start if you need to brush up on your knowledge:

Setup

System Prerequisites

Before you walk through the steps described in this post, check that your environment meets the following requirements and has the most recent versions of Docker and Windows updates installed:

  • Windows 10 Anniversary Update (Professional or Enterprise) or Windows Server 2016
    The Windows Containers feature requires critical updates to be installed. Check your OS version by running winver.exe, and ensure you have installed the latest KB 3192366 and/or Windows 10 updates.
  • The latest version of Docker Compose (available with Docker for Windows) must be installed on your system.

NOTE: The current version of Docker Compose on Windows requires that the Docker daemon be configured to listen on a TCP socket for new connections. A pull request (PR) to fix this issue is in review and will be merged soon. For now, please ensure that you do the following:

Please configure the Docker Engine by adding a “hosts” key to the daemon.json file (example shown below) following the instructions here. Be sure to restart the Docker service after making this change.

{
  ...
  "hosts": ["tcp://0.0.0.0:2375", "npipe:////./pipe/win_engine"]
  ...
}

When running docker-compose, you will either need to reference the host explicitly by adding the option -H tcp://localhost:2375 to each command (e.g. docker-compose -H "tcp://localhost:2375" up), or set your DOCKER_HOST environment variable to always use this port (e.g. $env:DOCKER_HOST = "tcp://localhost:2375").
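For example, in PowerShell (a minimal sketch; ‘up’ stands in for whichever Compose command you are running):

# Option 1: reference the TCP endpoint explicitly on each invocation
docker-compose -H "tcp://localhost:2375" up

# Option 2: set the environment variable once for the session
$env:DOCKER_HOST = "tcp://localhost:2375"
docker-compose up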

Blog Application Source with Compose and Dockerfiles

This blog application is based on the BlogEngine.NET ASP.NET web app, available publicly here: http://www.dnbe.net/docs/. To follow this post and build the described application, a complete set of files is available on GitHub. Download the blog application files from GitHub and extract them to a location on your machine, e.g. the ‘C:\build’ directory.

The blog application directory includes:

  • A ‘web’ folder that contains the Dockerfile and resources that you’ll need to build the image for the blog application’s ASP.NET front-end.
  • A ‘db’ folder that contains the Dockerfile and resources that you’ll need to build the blog application’s SQL database back-end.
  • A ‘docker-compose.yml’ file that you will use to build and run the application using Docker Compose.

The top level of the blog application source folder is the main working directory for the directions in this post. Open an elevated PowerShell session and navigate there now – e.g.

PS C:\> cd c:\build\

The Blog Application Container Images

Database Back-End Tier: The ‘db’ Service

The database back-end Dockerfile is located in the ‘db’ sub-folder of the blog application source files and can be referenced here: The Blog Database Dockerfile. The main function of this Dockerfile is to run two scripts over the Windows Server Core base OS image to define a new database as well as the tables required by the BlogEngine.NET application.

The SQL scripts referenced by the Dockerfile to construct the blog database are included in the ‘db’ folder, and copied from host to container when the container image is created so that they can be run on the container.

BlogEngine.NET Front-End

The BlogEngine.NET Dockerfile is in the ‘web’ sub-folder of the blog application source files.

This Dockerfile refers to a PowerShell script (buildapp.ps1) that does the majority of the work required to configure the web service image. The buildapp.ps1 PowerShell Script obtains the BlogEngine.NET project files using a download link from Codeplex, configures the blog application using the default IIS site, grants full permission over the BlogEngine.NET project files (something that is required by the application) and executes the commands necessary to build an IIS web application from the BlogEngine.NET project files.

After running the script to obtain and configure the BlogEngine.NET web application, the Dockerfile finishes by copying the Web.config file included in the ‘web’ sub-folder to the container, to overwrite the file that was downloaded from Codeplex. The config file provided has been altered to point the ‘web’ service to the ‘db’ back-end service.

Streamlining with Docker Compose

When dealing with only one or two independent containers, it is simple to use the ‘docker run’ command to create and start a container image. However, as soon as an application begins to gain complexity, perhaps by including several inter-dependent services or by deploying multiple instances of any one service, the notion of configuring and running that app “manually” becomes impractical. To simplify the definition and deployment of an application, we can use Docker Compose.

A Compose file is used to define our “containerized” application using two services—a ‘web’ service and a ‘db’ service.  The blog application’s Compose File (available here for reference) defines the ‘web’ service which runs the BlogEngine.NET web front-end tier of the application and the ‘db’ service which runs the SQL Server 2014 Express back-end database tier. The compose file also handles network configuration for the blog application (with both application-level and service-level granularity).

Something to note in the blog application Compose file is that the ‘expose’ option is used in place of the ‘ports’ option for the ‘db’ service. The ‘ports’ option is analogous to using the ‘-p’ argument in a ‘docker run’ command, and specifies a HOST:CONTAINER port mapping for a service. Because it pins a specific host port, the ‘ports’ option limits a service to a single container instance, since multiple instances can’t re-use the same host port. The ‘expose’ option, on the other hand, defines only the internal container port, with a dynamic external port selected automatically by Docker through the Windows Host Networking Service (HNS). This allows multiple container instances to run a single service: where the ‘ports’ option requires that every container instance for a service be mapped as specified, the ‘expose’ option allows Docker Compose to handle port mapping as required for scaled-out scenarios.

The ‘networks’ key in the Compose file specifies the network to which the application services will be connected. In this case, we define the default network for all services as external, meaning Docker Compose will not create a network. The ‘nat’ network referenced is the default NAT network created by the Docker Engine when Docker is originally installed.

‘docker-compose build’

In this step, Docker Compose is used to build the blog application. The Compose file references the Dockerfiles for the ‘web’ and ‘db’ services and uses them to build the container image for each service.

From an elevated PowerShell session, navigate to the top level of the Blog Application directory. For example,

cd C:\build\

Now use Docker Compose to build the blog application:

docker-compose build

‘docker-compose up’

Now use Docker Compose to run the blog application:

docker-compose up

This will cause a container instance to run for each application service. Execute the following command to confirm that the blog application is up and running:

docker-compose ps

You can access the blog application through a browser on your local machine, as described below.

Define Multiple, Custom NAT Networks

In previous Windows Server 2016 technical previews, Windows was limited to a single NAT network per container host. While this is still technically the case, it is possible to define custom NAT networks by segmenting the default NAT network’s large, internal prefix into multiple subnets.

For instance, if the default NAT internal prefix was 172.31.211.0/20, a custom NAT network could be carved out from this prefix. The ‘networks’ section in the Compose file could be replaced with the following:

networks:
  default:
    driver: nat
    ipam:
      driver: default
      config:
      - subnet: 172.31.212.0/24

This would create a user-defined NAT network with a user-defined IP subnet prefix (in this case, 172.31.212.0/24). The ipam option is used to specify this custom IPAM configuration.

Note: Ensure that any custom NAT network you define is a subset of the larger NAT internal prefix previously created. To obtain your host NAT network’s internal prefix, run ‘docker network inspect nat’.

View the Blog Application

Now that the containers for the ‘web’ and ‘db’ services are running, the blog application can be accessed from the local container host using the ‘web’ container’s internal IP address and port (80). Use the docker inspect command to determine this internal IP address, as shown below.
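For example (a sketch; substitute the ‘web’ container name or ID reported by docker ps):

# List the running containers to find the 'web' container's name or ID
docker ps

# Print just that container's IP address on the default 'nat' network
docker inspect --format "{{ .NetworkSettings.Networks.nat.IPAddress }}" <web-container-id>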

To access the application, open a browser on the container host and navigate to the container’s IP address with “/BlogEngine/” appended. For instance, you might enter: http://172.16.12.216/BlogEngine

To access the application from an external host that is connected to the container host’s network, you must use the container host’s IP address and the mapped port of the web container. The mapped port of the web container endpoint is displayed in the output of the docker-compose ps or docker ps commands. For instance, you might enter: http://10.123.174.107:3658/BlogEngine

The blog application may take a moment to load, but soon your browser should present the blog’s home page.


Taking Advantage of Service Discovery

Built into Docker is Service Discovery, which offers two key benefits: service registration and service name-to-IP (DNS) mapping. Service Discovery is especially valuable in the context of scaled-out applications, as it allows multi-container services to be discovered and referenced in the same way as single-container services; with Service Discovery, intra-application communication is simple and concise—any service can be referenced by name, regardless of the number of container instances that are being used to run that service.

Service registration is the piece of Service Discovery that makes it possible for containers/services on a given network to discover each other by name. As a result of service registration, every application service is registered with a set of internal IP addresses for the container endpoints that are running that service. With this mapping, DNS resolution in the Docker Engine responds to any application endpoint seeking to communicate with a given service by sending a randomly ordered list of the container IP addresses associated with that service. The DNS client in the requesting container then chooses one of these IPs for container-container communication. This is referred to as DNS load-balancing.

Through DNS mapping, Docker abstracts away the added complexity of managing multiple container endpoints; because of this piece of Service Discovery, a single service can be treated as an atomic entity, no matter how many container instances are running behind the scenes.

Note: For further context on Service Discovery, visit this Docker resource. However, note that Windows does not support the “--link” option.

Scale-Out with ‘docker-compose scale’


While an application benefits from the service registration piece of Service Discovery even when only one container instance is running per service, a scaled-out scenario is required for the benefit of DNS load-balancing to truly take effect.

To run a scaled-out version of the blog application, use the following command (either in place of ‘docker-compose up’ or even after the compose application is up and running). This command will run the blog application with one container instance for the ‘web’ service and three container instances for the ‘db’ service.

docker-compose scale web=1 db=3

Recall that the docker-compose.yml file provided with the blog application project files does not allow for scaling multiple instances of the ‘web’ service. To scale the web service, the ‘ports’ option for the web service must be replaced with the ‘expose’ option. However, without a load-balancer in front of the web service, a user would need to reference individual container endpoint IPs and mapped ports for external access into the web front-end of this application. An improvement to this application would be to use volume mapping so that all ‘db’ container instances reference the same SQL database files. Stay tuned for a follow-on post on these topics.
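That change would look something like the following in the Compose file (a sketch; port 80 is the IIS default this application uses):

web:
  build: ./web
  expose:
    - "80"              # internal container port only; HNS assigns each instance its own host port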

Service Discovery in action

In this step, Service Discovery will be demonstrated through a simple interaction between the ‘web’ and ‘db’ application services. The idea here is to ping different instances of the ‘db’ service to see that Service Discovery allows it to be accessed as a single service, regardless of how many container instances are implementing the service.

Before you begin: Run the blog application using the ‘docker-compose scale’ instruction described above.

Return to your PowerShell session, and run the following command to ping the ‘db’ back-end service from your web service. Notice the IP address from which you receive a reply.

docker run blogengine ping db

Now run the ping command again, and notice whether or not you receive a reply from a different IP address (i.e. a different ‘db’ container instance).*

docker run blogengine ping db

The image below demonstrates the behavior you should see—after pinging two or three times, you should receive replies from at least two different ‘db’ container instances:

PowerShell Output

* There is a chance that Docker will return the set of IPs making up the ‘db’ service in the same order as your first request. In this case, you may not see a different IP address. Repeat the ping command until you receive a reply from a new instance.

Technical Note: Service Discovery implemented in Windows

On Linux, the Docker daemon starts a new thread in each container namespace to catch service name resolution requests. These requests are sent to the Docker engine, which implements a DNS resolver and responds back to the thread in the container with the IP address(es) of the container instance(s) that correspond to the service name.

In Windows, service discovery is implemented differently due to the need to support both Windows Server Containers (shared Windows kernel) and Hyper-V Containers (isolated Windows kernel). Instead of starting a new thread in each container, the primary DNS server for the container endpoint’s IP interface is set to the default gateway of the (NAT) network. A request to resolve the service name is sent to the default gateway IP, where it is caught by the Windows Host Networking Service (HNS) on the container host. The HNS service then forwards the request to the Docker engine, which replies with the IP address(es) of the container instance(s) for the service; HNS then returns the DNS response to the container.
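A quick way to observe this behavior (a sketch; substitute a running container ID from docker ps):

# Resolve the 'db' service name from inside a running container.
# The DNS server shown in the output should be the NAT network's default gateway.
docker exec <web-container-id> nslookup db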


New and updated Microsoft IoT Kits


Earlier this month, we released to customers around the world a new Windows Insider version of Windows 10 IoT Core that supports the brand new Intel® Joule™. We’ve been working hard on Windows 10 IoT Core, we’re proud of the quality and capability of IoT Core Insider releases and we’re humbled by the enthusiasm that you’ve shown in using it to build innovative devices and downright cool Maker projects.

We’ve spoken to thousands of you around the world at both commercial IoT events and Maker Faires and in many of these conversations, you have asked for better ways to get started – how to find the quickest path to device experimentation using Windows 10 and Azure. We’ve heard your feedback and today I’d like to talk about how this is manifesting in two new IoT starter kits from our partners: The Microsoft Internet of Things Pack for Raspberry Pi 3 by Adafruit, and the brand new Seeed Grove Starter Kit for IoT based on Raspberry Pi by Seeed Studio.

Back in September of 2015, we partnered with Adafruit to make a Raspberry Pi 2-based Windows 10 IoT Core Starter Kit available. This kit was designed to get you started quickly and easily on your path to learning electronics, Windows 10 IoT Core and the Raspberry Pi 2. Adafruit had tremendous success with this kit, and we’re happy to announce that they are releasing a new version of it.


This new kit keeps its focus on helping you get started quickly and easily in the world of IoT, but includes an upgrade to the new Raspberry Pi 3.

The best thing about this update? The price is the same as before.

The newest kit, available from Seeed Studio and called the Grove Starter Kit for IoT based on Raspberry Pi, builds on the great design work that Seeed and their partner Dexter Industries have done around the Grove connector. It uses a common connector for the large array of available sensors to simplify the task of connecting to the device platform. This helps you focus on being creative instead of worrying about soldering electrical connections.


The selection of compatible modular devices extends way beyond those that are included in the kit, making this applicable to starters, Makers and Maker Pros. The Seeed Kit can be ordered from the Microsoft Store, Seeed Studio or you can also acquire the kit from Digi-Key.

We’re excited about how these kits help enable everyone, from those with no experience to those who prototype for a living, to quickly get started making new devices with Windows 10 IoT Core, Azure IoT and the Raspberry Pi 3.

We can’t wait to see what you make!

Download Visual Studio to get started.

The Windows team would love to hear your feedback.  Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.

Exploring Application Insights for disconnected or connected deep telemetry in ASP.NET Apps


Today on the ASP.NET Community Standup we learned about how you can use Application Insights in a disconnected scenario to get some cool - ahem - insights into your application.

Typically when someone sees Application Insights in the File | New Project dialog they assume it's a feature that only works in Azure, that will create some account for you and send your data to the cloud. While App Insights totally does do a lot of cool stuff when you have a cloud-hosted app and it does add a lot of value, it also supports a very useful "SDK only" mode that's totally offline.

Click "Add Application Insights to project" and then under "Send telemetry to" you click "Install SDK only" and no data gets sent to the cloud.

Application Insights dropdown - Install SDK only

Once you make your new project, you can learn more about AppInsights here.

For ASP.NET Core apps, Application Insights will include this package in your project.json - "Microsoft.ApplicationInsights.AspNetCore": "1.0.0" and it'll add itself into your middleware pipeline and register services in your Startup.cs. Remember, nothing is hidden in ASP.NET Core, so you can modify all this to your heart's content.

if (env.IsDevelopment())
{
    // This will push telemetry data through the Application Insights pipeline faster, allowing you to view results immediately.
    builder.AddApplicationInsightsSettings(developerMode: true);
}

Request telemetry and Exception telemetry are added separately, as you like.
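As a rough sketch, that wiring in Startup.cs looks something like this for the 1.0.0-era package (method names are from Microsoft.ApplicationInsights.AspNetCore 1.0; the code the tooling generates in your project may differ):

public void ConfigureServices(IServiceCollection services)
{
    // Registers TelemetryClient and related services with dependency injection.
    services.AddApplicationInsightsTelemetry(Configuration);
    services.AddMvc();
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Request telemetry goes early so it times the whole pipeline.
    app.UseApplicationInsightsRequestTelemetry();
    // Exception telemetry is registered separately, after any error handlers.
    app.UseApplicationInsightsExceptionTelemetry();
    app.UseMvc();
}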

Make sure you show the Application Insights Toolbar by right-clicking your toolbars and ensuring it's checked.

Application Insights Dropdown Menu

The button it adds is actually pretty useful.


Run your app and click the Application Insights button.

NOTE: I'm using Visual Studio Community. That's the free version of VS you can get at http://visualstudio.com/free. I use it exclusively and I think it's pretty cool that this feature works just great on VS Community.

You'll see the Search window open up in VS. You can keep it running while you debug and it'll fill with Requests, Traces, Exceptions, etc.


I added an exception-throwing case to /about, and here's what I see:

Searching for the last hour's traces

I can dig into each issue, filter, search, and explore deeper:

Unhandled Exception

And once I've found something interesting, I can explore around it with the full details of the HTTP Request. I find the "telemetry 5 minutes before and after" query to be very powerful.

Track Operations

Notice where it says "dependencies for this operation?" That's not dependencies like "Dependency Injection" - that's larger system-wide dependencies like "my app depends on this web service."

You can custom-instrument your application with the TrackDependency API if you like, and that will cause your system's dependencies to light up in AppInsights charts and reports. Here's a dumb example as I pretend that putting data in ViewData is a dependency. It should be calling a WebAPI or a Database or something.

var telemetry = new TelemetryClient();

var success = false;
var startTime = DateTime.UtcNow;
var timer = System.Diagnostics.Stopwatch.StartNew();
try
{
    ViewData["Message"] = "Your application description page.";
    success = true; // no exception was thrown, so report the "dependency call" as successful
}
finally
{
    timer.Stop();
    telemetry.TrackDependency("ViewDataAsDependancy", "CallSomeStuff", startTime, timer.Elapsed, success);
}

Once I'm tracking external dependencies I can search for outliers, long durations, categorize them, and they'll affect the generated charts and graphs if/when you do connect your App Insights to the cloud. Here's what I see after making this one code change. I could build this kind of stuff into all my external calls AND instrument the JavaScript as well. (Note Client and Server in this chart.)

Application Insight Maps

And once it's all there, I can query all the insights like this:

Querying Data Live

To be clear, though, you don't have to host your app in the cloud. You can just send the telemetry to the cloud for analysis. Your existing on-premises IIS servers can run a "Status Monitor" app for instrumentation.

Application Insights Charts

There's a TON of good data here and it's REALLY easy to get started either:

  • Totally offline (no cloud) and just query within Visual Studio
  • Somewhat online - Host your app locally and send telemetry to the cloud
  • Totally online - Host your app and telemetry in the cloud

All in all, I am pretty impressed. There's SDKs for Java, Node, Docker, and ASP.NET - There's a LOT here. I'm going to dig deeper.



Test & Feedback – Collaborate with your team


In the previous blogs, we went through the first two steps – Capture your findings and Create artifacts. In this blog, we will take you through the third step: Collaborate. The Test & Feedback extension provides many ways in which teams can collaborate with one another to drive quality. You can use the extension to share your findings in the form of a simple session report, or to gather additional feedback where necessary. Additionally, you can connect to your Visual Studio Team Services account or Team Foundation Server “15” to view all the completed sessions in one place and measure the effectiveness of your bug bashes and exploratory testing sessions using the rich insights provided. These collaboration techniques are available to users based on their access levels and the mode in which the extension is used.

Collaborate using Standalone mode

As described in the Overview blog, one of the modes supported by the extension is the Standalone mode. No connection to Visual Studio Team Services or Team Foundation Server is needed to use the extension in this mode. As you explore the application, you can capture your findings and create bugs offline. All the captured findings – screenshots, notes and created bugs – are stored locally. While using the standalone mode, you can use the session report feature to share your captured findings and reported issues with the rest of the team.

Session Report

The session report is generated either on demand, by using the “Export” capability, or automatically at the end of the session. This HTML report can then be easily shared with others as a mail attachment, or by using OneNote or SharePoint, or in any other way as appropriate. The session report consists of two parts:

  1. Summary of bugs filed
    The first part of the session report provides a list of all the bugs filed while testing, along with the details of the screenshots and notes that were captured as part of these bugs.
  2. Session attachments
    This part of the report contains, in chronological order, the screenshots and notes that were captured while testing the application. If you don’t want to file bugs and are simply capturing your findings, or if some captures (screenshots and notes) in the session are not included as part of any bug, this part of the report helps you easily keep track of them.


Collaborate using connected mode with stakeholder access

The new feedback flow enabled in Visual Studio Team Services and Team Foundation Server “15” allows teams to use web access to send feedback requests to stakeholders. Stakeholders can use the Test & Feedback extension to respond to these feedback requests. The feedback response work items (bug, task or feedback response) get automatically linked to the feedback request. This built-in traceability allows teams to easily track, in one place, all the feedback received from different stakeholders. The stakeholders, on the other hand, can leverage the capabilities provided in the extension to manage all the different feedback requests they receive.

Note: Feedback flow is supported only in Team Services and Team Foundation Server “15”.

Request feedback from stakeholders on Features/User Stories

Team members with basic access can now directly request feedback from stakeholders for the features/stories being worked on, using the “Request Feedback” option in the work item form context menu. You only need to fill out a simple feedback form, which sends individual mails to all the selected stakeholders along with the instructions provided in the form.


Respond to feedback requests

Stakeholders can easily respond to the feedback request by clicking on the “Provide feedback” link in the mail, which automatically configures the Test & Feedback extension with the selected feedback request. Stakeholders can then use the full capture capabilities of the extension to capture their findings and submit their feedback in the form of feedback response, bug or task work items.


To see the list of feedback requests assigned to you, click on [Test & Feedback - Capture Screenshot] in the extension. From the list, you can select the feedback request you want to provide feedback on and quickly start providing feedback. From this page, you can also manage your “Pending feedback requests” by marking them as complete or declining them, and you can switch between different types of feedback requests by clicking on the desired radio button.


In addition to the above flow, stakeholders can also use the extension to provide voluntary feedback. In “Connected mode”, connect to the team project you want to provide feedback on. You can then use the extension to capture your findings and submit feedback in the form of feedback response, bug or task work items.

Collaborate using connected mode with basic access

Users with basic access can connect to their Team Services account or Team Foundation Server “15” to view the “Session Insights” page. This page lets users view all completed sessions, at an individual or team level, in one place, allowing them to collaborate with one another as a team. The page provides important summary-level data such as the total work items explored and created, the total time spent across all sessions and the total number of session owners. Users can scope the data down by selecting the “period” they are interested in and by grouping the data on various pivots such as sessions, explored work items and session owners. Depending on their needs, teams can use the session insights page to derive various kinds of insights.

Note: Click on “Recent exploratory sessions” in the Runs tab under the Test hub to view the “Session Insights” page. Alternatively, you can navigate directly to the insights page from the extension by clicking on the insights icon in the Timeline.

As mentioned in the Overview blog, one of the major scenarios the extension supports is the bug bash. The Session Insights page enables users to run the end-to-end bug bash scenario, which includes running the bug bash, triaging the bugs filed and finally measuring the effectiveness of the bug bashes conducted.


To run the bug bash, team leaders can specify the features and user stories they want to bash. Team members can bash the user story assigned to them by associating it with their session and exploring the application based on the user acceptance criteria provided, if any. Users can also explore multiple work items in the same session. Once the bug bash is complete, the team can view all the completed sessions in the “recent exploratory sessions” page on the Test > Runs hub by changing the pivot to “Sessions”. Using the inline details page, you can easily triage the bugs found during the bug bash and assign them owners and appropriate priority. Finally, team leaders can measure the effectiveness of the bug bashes by viewing the amount and quality of exploratory testing done for each of the features and user stories. In addition, they can leverage the “Query” support to identify the user stories and features not explored. This data helps team leaders identify gaps in testing and can help them make decisions regarding the quality of the features being shipped.


Visual Awesomeness Unlocked: Liquid Fill Gauge

Liquid Fill Gauge is a circle gauge that represents a percentage value in an eye-catching way: the liquid fills the circle up to the relevant value, with a beautiful animation of water waves.

Connect Microsoft Bookings to your Facebook Page and grow your business


Microsoft Bookings, a new service available in Office 365 Business Premium, makes it easy for your customers to schedule appointments with you in your own online Bookings Page. We’re pleased to announce you can now integrate your Bookings Page with your business’s Facebook Page too.

Once Bookings is set up on your Facebook Page, your customers simply click the Book Now button and select the service and time that works for them. Their contact information is automatically filled in for them, and once they click Book, they’re done.


All your bookings are still in one place

After your customer clicks Book Now, the same scheduling process—confirmation, notification, appointments, reminders, etc.—happens as if they were using your Microsoft Bookings web page.

Frequently asked questions

Q. How can I get Microsoft Bookings?

A. Microsoft Bookings is part of Office 365 Business Premium. Visit the Office 365 Business Premium website to learn how to purchase a subscription. Bookings is currently available in the U.S. and Canada and will be rolling out to all Office 365 Business Premium customers worldwide starting November 2016.

Q. Where can I learn more about Microsoft Bookings?

A. Read our announcement blog to learn more.

Q. Do I need to publish my Bookings Page to accept bookings through Facebook?

A. Yes. We use your Bookings Page as an indicator that you’d like your customers to book appointments with you online. It is a prerequisite before you start accepting bookings on Facebook.

Q. Will all my booking policies be respected if I integrate my Bookings Page with my Facebook Page?

A. All your policies (lead time, cancellation time, notification preference and staff selection) will be respected, except for the maximum number of days in advance that you can be booked. Customers can only book up to two months in advance through Facebook.

Q. How do I disconnect Microsoft Bookings and Facebook?

A. Follow steps one to three shown in this article and then select Remove Service on the Page Action button in the Microsoft Bookings tab.

The post Connect Microsoft Bookings to your Facebook Page and grow your business appeared first on Office Blogs.

Jump Start Your Analytics with Cortana Intelligence Solutions


This post is authored by Sachin Chouksey, Principal Software Engineering Manager, and Darwin Schweitzer, Senior Program Manager, at Microsoft.

What’s the Problem?

Building analytics solutions can consume a lot of time. 

Customers who wish to build intelligent solutions on Microsoft’s advanced analytics platform today, for instance, need to navigate through a multitude of options, thanks to the broad array of services available as part of the Cortana Intelligence Suite. This buffet of options could present a learning curve for newer customers who may not be sure where to start, or what the optimal architecture might be, or how to glue different services together. 

What’s the Solution?

To address the above challenge, we offer Cortana Intelligence Solutions, a set of pre-built solutions that are based on commonly encountered design patterns and which customers can quickly deploy and test.

In the sections below, we walk you through the typical phases that customers go through when using such a solution:

Discover: The customer discovers the solution they would like to deploy from the Cortana Intelligence Gallery. They can read a description of the solution, get an overview of the architecture, understand the underlying services that make up the solution and also get an estimate for the time it will take to deploy it. Some solutions such as IT Anomaly Insights (which uses the Anomaly Detection Machine Learning API) allow the user to interact with the solution and see it in action on their own data, via a “Try with your data” option, before they deploy it.


Deploy: To deploy a solution, the customer needs to simply click “Deploy”, sign in with their Azure credentials, enter a deployment name, choose the Azure subscription and Azure location in which they wish to deploy these services, and provide a few other inputs. The solution gets deployed within minutes.


Customize: Each solution comes with a detailed technical guide that helps the customer jump start their adoption and use of it. Customers can also work with one of our Advanced Analytics Partners to tailor the solution to their needs. We also pre-qualify partners for specific solutions and you can find a list of partners with each solution. Customers are free to experiment with various solutions and can delete a deployment, if needed, with just a few clicks.


What’s Available Today?

Currently available solutions are:

  • IT Anomaly Insights: This solution helps IT departments in large organizations quickly detect and fix issues based on underlying health metrics from their IT infrastructure, services and KPIs. This solution can be used to monitor metrics from any real time system such as those in IoT or healthcare.
  • Data Warehousing and Data Science with SQL Data Warehouse and Spark: This solution sets up an end-to-end data ingestion and warehousing pipeline using Apache Spark, Azure SQL Data Warehouse and Azure Data Factory, and shows how to use these services from a data science perspective.
  • Stream Analysis with Azure ML: This solution sets up an end-to-end pipeline to ingest tweets based on user-defined keywords, and analyzes their sentiment using an Azure ML based web service.
  • Predictive Maintenance for Aerospace: This solution demonstrates how to combine real-time data from sensors with advanced analytics to monitor aircraft equipment in real-time, and also predict the remaining useful life of critical parts, so that maintenance can be pro-actively scheduled to prevent failures.
  • Windows and Linux Data Science Virtual Machines: These solutions provision a Windows or Linux Data Science VM, a custom VM image that is pre-installed and configured with a set of popular tools commonly used for data science and ML.

How Do You Deploy Cortana Intelligence Solutions?

1. After viewing the desired solution in the Cortana Intelligence Gallery, simply click the ‘Deploy’ button to start deploying. You will need an Azure subscription – if you don’t have one, you can get a free trial subscription here.


2. You will need to provide the following parameters and then click ‘Create’:

  • Deployment name: A name for your deployment.
  • Azure Subscription: If you have multiple subscriptions, you can select the one you would like to deploy into.
  • Location: The Azure region where you would like your solution resources deployed.


3. When you click the ‘Create’ button you will be asked to provide additional solution level parameters such as username, password, API keys, keywords, etc.


4. Clicking the ‘Next’ button starts the provisioning. Based on the solution you have selected, deployment can take anywhere from 3 to 25 minutes.


5. Once deployment completes, you are provided with a detailed description of what was deployed, including instructions to customize the solution for your needs. Certain solutions have deeper technical documentation on GitHub and you will find links to the documentation on this ‘Next Steps’ page:


6. The Cortana Intelligence Solutions UX also provides direct links to the Azure Portal for the deployed resources, both during and after deployment:


Deleting Currently Deployed Solutions

There are three ways to delete a solution you have currently deployed:

1. Using the “Click here to view deployed solutions” link, in the Solutions category in the Cortana Intelligence Gallery:


2. Using the link at the bottom of the Description section of each solution that reads “If you have already deployed this solution, click here to view your deployment”:


3. By visiting the Cortana Intelligence Solutions landing page and clicking on ‘Deployments’ in the top navigation:


Try It Today

Be sure to check out the Cortana Intelligence Solutions page and give these solutions a spin. As an additional resource, you can also view our presentation on this topic at the recent Microsoft Data Science Summit 2016: Insanely Practical Patterns to Jump Start Your Analytics Solutions. Do send us your feedback, either via comments below or to cisolutions@microsoft.com.

Sachin & Darwin (@DataSnowman)

Data Science with Microsoft SQL Server 2016 – Free eBook


Foreword

The world around us – every business and nearly every industry – is being transformed by technology. This disruption is driven in part by the intersection of three trends: a massive explosion of data, intelligence from machine learning and advanced analytics, and the economics and agility of cloud computing.

Although databases power nearly every aspect of business today, they were not originally designed with this disruption in mind. Traditional databases were about recording and retrieving transactions such as orders and payments. They were designed to make reliable, secure, mission-critical transactional applications possible at small to medium scale, in on-premises datacenters.

Databases built to get ahead of today’s disruptions do very fast analyses of live data in-memory, as transactions are being recorded or queried. They support very low latency advanced analytics and machine learning, such as forecasting and predictive models, on the same data, so that applications can easily embed data-driven intelligence. In this manner, databases can be offered as a fully managed service in the cloud, making it easy to build and deploy intelligent Software as a Service (SaaS) apps.

These databases also provide innovative security features built for a world in which a majority of data is accessible over the Internet. They support 24×7 high-availability, efficient management, and database administration across platforms. They therefore make it possible for mission-critical intelligent applications to be built and managed both in the cloud and on-premises. They are exciting harbingers of a new world of ambient intelligence.

SQL Server 2016 was built for this new world and to help businesses get ahead of today’s disruptions. It supports hybrid transactional/analytical processing, advanced analytics and machine learning, mobile BI, data integration, always-encrypted query processing capabilities, and in-memory transactions with persistence. It integrates advanced analytics into the database, providing revolutionary capabilities to build intelligent, high-performance transactional applications.

Imagine a core enterprise application built with a database such as SQL Server. What if you could embed intelligence such as advanced analytics algorithms plus data transformations within the database itself, making every transaction intelligent in real time? That’s now possible for the first time with R and machine learning built in to SQL Server 2016.
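As a minimal sketch of what that embedding looks like in practice, SQL Server 2016’s sp_execute_external_script lets a T-SQL batch hand rows to an R script and get a result set back (the table and column names below are hypothetical):

-- Run an R script in-database: average a column and return the result as a row set
EXEC sp_execute_external_script
    @language = N'R',
    @script = N'OutputDataSet <- data.frame(avg_val = mean(InputDataSet$val))',
    @input_data_1 = N'SELECT val FROM dbo.Measurements'
WITH RESULT SETS ((avg_val FLOAT));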

By combining the performance of SQL Server in-memory Online Transaction Processing (OLTP) technology as well as in-memory columnstores with R and machine learning, applications can achieve extraordinary analytical performance in production, all while taking advantage of the throughput, parallelism, security, reliability, compliance certifications, and manageability of an industrial-strength database engine.

This eBook is the first to truly describe how you can create intelligent applications by using SQL Server and R. It is an exciting document that will empower developers to unleash the strength of data-driven intelligence in their organization.

Joseph Sirosh
Corporate Vice President
Data Group, Microsoft


Download this eBook from Microsoft Virtual Academy today!
https://mva.microsoft.com/ebooks/


Writing a book with Office—how Maxie McCoy does it


We recently caught up with inspirational writer, speaker and millennial expert Maxie McCoy—a self-professed Office enthusiast (“I live in Office. Always. Period.”)—to get her take on Researcher and Editor, two new cloud-powered services designed to help you do your best writing in Word. Both features are exclusive to Office 365.

“I’ve always loved to write,” Maxie explains. “My parents have ‘books’ of my writing I created as a young kid where I wrote about all the things I wanted to grow up and be (not sure any of those included being a writer!). However, the moment I first saw myself as a ‘writer’ was during a long-form magazine writing class in college. I noticed how much gratification I got in nailing a perfect transition or putting together two words to describe exactly the feeling I hoped to communicate. It was clear to me then that not only did I have a knack for this style of communication, it brought me immense joy. That has continued to be a guiding tenet in my life: pay attention to what energizes you and follow it.”

Maxie spends a lot of time in Word writing blog posts and articles for a variety of websites. Currently, she’s finessing a book proposal for her agent to pitch to publishers—a book to help millennials learn how to uncover their passions, get ahead in their careers, and essentially, live their best lives.

“When I’m on the road speaking to individuals, their ideas and obstacles motivate me. When I’m home in San Francisco I find so much inspiration from the world around me and the everyday conversations I have with gal pals, coworkers, and mentors. If you take the time to look up and open up, you’ll find the motivation you need.”

It’s no surprise she was curious to see how she could put Researcher and Editor to use for her book project. We developed Researcher to help make it easier to get to a first draft. You can explore reliable sources and add content into your documents—with properly-formatted citations—right within Word. Researcher taps into Bing so you can access the content you need on more than 1 billion people, places and things on the web.

When she saw what Researcher could do, Maxie was in awe. “Wait! Where was this in college???” she laughs. “I was just thinking about all the stuff I write—if it’s not working on the book, it’s writing multiple posts per week for different sites. I always try to credit the research that I’m pulling in. With Researcher, it’s like it’s all right there for me.”

“If you’re writing for the web, you just hyperlink your source because that’s how it works on blog posts and various digital sites. But then when you move back to longer form writing, whether it’s a proposal or a release, or what is going to be a book, I couldn’t tell you how to cite something appropriately. That’s where Researcher can save the day.”


While Researcher helps you get started, Editor helps with those finishing touches by providing advanced proofing and editing capabilities. It combines natural language processing with machine learning to make recommendations on your writing style so you can communicate more effectively. Editor flags complex words or unclear phrases and offers suggestions to make your writing stronger. For example, Editor may recommend you use “most” instead of “the majority of” (in that literary masterpiece you’re almost finished with).

For Maxie, one struggle is figuring out how to keep the right nuances or voice on each website she writes for, while being efficient with her words. “I go from site to site and there’s a different voice for each website. Editor can provide recommendations on being much more concise when I need it. That might be good for a post that I’m doing for Huffington Post versus something I’m doing for my own website.” Editor also suggests synonyms for words used too frequently, to add variety to her different writings.


“Researcher and Editor are so intuitive and smart,” she says. “For people who didn’t memorize all those grammar rules in Journalism school, I can imagine these features being very helpful. They’re allowing you to use your focus and your energy where you’re best optimized, while leaving all that other stuff like a bibliography and having to worry about which word to choose to Researcher and Editor. It keeps your energy in the right place—which is [thinking] creatively—or making a really thoughtful point.”

You don’t have to write for a living like Maxie to use Researcher and Editor. They’re there to help anyone communicate more effectively, more easily. And both features are designed to get better with time. Researcher will soon include sources like well-known encyclopedias and history databases and will be available on mobile devices. Editor will continue to improve its current spelling and grammar tools so you can spend your creative time being just that—creative.

We were wondering what Maxie’s next big project will be after her book is published.

“The big sky vision is a global media company,” she said. “I’m so jazzed to be making progress on the book journey but I also love love love video (I have a background in broadcasting). I’m putting my attention towards bringing something fresh to wider audiences. I think when we can know what energizes us (like writing and speaking do for me) we can find peace in knowing what we see for ourselves without knowing exactly how it’ll get done. I’m all about trusting the process!”

Many thanks to Maxie for taking a look at our brand-new features in Word. She’s an up-and-coming powerhouse who shares our goal of helping people do and be their best. In the months ahead, keep an eye out for her as she travels around the world on her speaking and teaching gigs, and watch for her upcoming book, too.

Both Researcher and Editor are available right now for Office 365 subscribers using Word 2016 on Windows desktops. Look for mobile versions to be added soon.

—Maxie McCoy’s time is valuable, so we’ve compensated her for taking the time to share her story.

The post Writing a book with Office—how Maxie McCoy does it appeared first on Office Blogs.

Accessibility in the classroom—tools that impact my students


Today’s post was written by Robin Lowell, a Microsoft Innovative Educator Expert, special education, science and mathematics teacher, as well as teacher of blind and visually impaired students.

Creating a collaborative, inclusive classroom has many moving parts and pieces—and finding the right balance can be challenging. As a special education teacher, I am constantly on the hunt for technology and tools that give students with disabilities an environment that is personalized, differentiated and yet as close to their peers’ experience as possible. I have been an itinerant teacher, a distance education math teacher at a residential school and am currently a resource room teacher—without the resource room. One goal of an inclusive classroom is to have all students working and collaborating throughout the day as much as possible, which takes planning, tools and creativity. When I work with my students and determine how to meet their needs, I think a lot about their accommodations rather than their modifications. That choice can have a very dramatic effect on a student’s learning outcomes.

In my accompanying blog post, “Accommodations versus modifications in an inclusive classroom,” I outline the important differences between accommodations and modifications when accessibly personalizing student learning. With Windows 10 and Office 365—free for teachers and students—I have been able to find and use many of the accommodations I have been looking for, making consumption of materials, content creation, collaboration and organization possible for students using the same technology and tools as their peers.

These tools help my students to consume content, create content, collaborate inclusively and stay organized.

Consuming content

With Windows 10 and Office 365, students can personalize how they consume content.

Learning Tools—Creates opportunities for accommodations for users, including listening and following along with the text instead of having the reading modified or shortened, which creates richer content for the student. A student can also use the dictate mode to create text for a paper or assignment. Learning Tools is a game changer. To learn more, read this blog post from Lauren Pittman, my fellow Microsoft Innovative Educator Expert, and listen to her TeacherCast podcast.

Reading mode—Another tool that makes reading and consuming content much easier. Reading mode takes away all the distractions by stripping away advertisements and toolbars, leaving the user with a clean background and a clear font that is easier to read. I use this with students who have dyslexia or ADHD, or who are easily distracted, to help them stay focused and on task. Reading mode is available in both the Edge browser and Word.

Ease of access center—Allows the user to modify how their computer looks and how they access content. Over the last year, a lot of thought and effort has gone into improving the ease of access center experience for users. Office 365 usability in high contrast mode has been significantly enhanced, improving the user’s experience through bolder colors, cleaner graphics and increased usability with shapes and charts.

This gif shows how to turn on High Contrast Mode on a PC to see a spreadsheet in Excel Online with less eye strain if you have a visual impairment. With this mode turned on, tables, active cell and cells selection outlines are shown to be clearly visible, hyperlinks in sheets are shown respecting High Contrast theme colors and shapes and charts are shown being rendered using High Contrast theme colors.

How to turn on High Contrast mode in PCs and use Excel Online with less eye strain.

Magnifier—The Ease of Access center also allows the user to magnify the screen in three different ways: “Docked,” which magnifies a portion of the screen; “Lens,” which works like a magnifying glass; and “Whole Screen,” which magnifies the entire screen. Turning Magnifier on or off, as well as increasing or decreasing the magnification, is now easier than ever.

Creating content

Creating, researching and reviewing content in an effective way is a great challenge for many of my students. Students who use screen readers struggle trying to get around the computer efficiently without using too many steps and clicks. When I introduced my students to the “Tell me what you want to do” feature in Office, I was a hero. To activate the feature, you can either click Tell Me on the ribbon in Office 2016 and Office Online or use the keyboard shortcut Alt+Q and type in what you need. For example, if you want to start researching a topic, just type in “Researcher” and it will take you to the feature you want within the Office application. Learn more about Tell Me here.

Researcher—A new feature in Word that helps students find reliable resources and content. Students have great ideas around what they would like to write but often struggle to get started. Researcher helps them overcome those mental roadblocks with access to strong ideas and supporting content. This will change the way my students start their research. Learn more about Researcher here. Note that Researcher is rolling out to Office 365 subscribers using Word 2016 on Windows desktops, so you might not see it yet.

Accessibility Checker—The Accessibility Checker tool scans a document for accessibility problems and is accessed from the Review tab in Word, Excel and PowerPoint for PCs and Macs. It is also available in Sway and OneNote. By the end of the year, it will be available in even more apps, including Office Online apps and Outlook. Learn more about the Accessibility Checker in Office here.

In this gif, accessibility checker is shown being opened from the “More options” pane in a Sway being created in a web browser. Sway has pre-populated the alt text with the descriptions that were available when images were imported from Word, PowerPoint, PDF and online sources while creating the Sway. For images that were uploaded from a local drive, it has pre-populated the alt text with the file name. While running the accessibility checker, the author of the Sway is given a chance to review the default alt text and edit it.

Simulation of the Accessibility Checker in Sway.

Editor—A newly released feature now rolling out in Word and Outlook for PCs. This tool helps students write impactful, collaborative documents with one clear and confident voice. It is a game changer for many of my students, especially those with dyslexia. Watch Editor in action here.

In this gif, spelling and grammar checker in Word desktop increases the likelihood of finding corrections even when the typed word is very different from the intended word, in this case “approximately,” reducing the occurrence of “No Suggestions.”

Editor in Word increases the likelihood of finding spelling corrections.

PowerPoint Designer—Often, students become so hung up on the design process when creating a PowerPoint presentation that the content itself becomes a secondary focus. PowerPoint Designer helps the user create beautiful-looking slides without having to manually format pictures, create bulleted lists or place graphics. With an automatic image description service coming to PowerPoint early next year, photos that can be recognized with high confidence will have alt text added automatically, allowing a screen reader to describe the picture to the user. Learn more about PowerPoint Designer here.

OneNote—The app where many of my students choose to create content. They never have to worry about saving, and they have access to multi-modal forms of information. For example, a student can complete a worksheet by dictating answers into Learning Tools and copying them into the worksheet in Word or in OneNote. Previously, the task would’ve been modified or the student would’ve been altogether excused from completing it. With OneNote, however, the student has the same opportunity and access as the rest of the class and is empowered to complete the assignment in a timely manner. There are great trainings, samples and more for OneNote on the Microsoft Educator Community here.

Collaborating inclusively

Students with disabilities can struggle with group work, either because they don’t have access to the materials like other group members or because they struggle to communicate with those other group members. These students have a lot to offer in collaborative settings, though, and Windows 10 and Office 365 help them do so by letting them contribute in their preferred format, one that is easily compatible with the formats preferred by the other group members.

Word with Office 365 will soon let screen readers announce comments and tracked changes more effectively. This gives the student meaningful and enriching roles to play in a group setting. In the Skype Preview app in the latest Windows 10, Skype Translator capabilities are built right in. Skype Translator translates conversations in real time, and the possibilities and applications are endless. A student who is deaf or hard of hearing can discuss a project with a peer or teacher, or follow a lecture, all through a Skype call and without needing a human interpreter. Other possibilities include parent meetings where the parent doesn’t speak English. Learn more about and get Skype and Skype Translator for Windows here.

Staying organized

Organization is a struggle for all students, and students with learning disabilities seem to struggle more than most.

Office Lens is an app that trims, enhances and makes pictures of whiteboards and documents readable. This is great for students who lose work, as it means they will always have a backup copy.

A student who has a visual impairment can independently and accurately scan a document using the voice guidance feature coming soon in Office Lens for iOS. They can then turn it into a PDF or text and format it in whatever way best suits their needs: large print, audio or even Braille via a transcription program. Voice guidance and an Immersive Reader (like the one in Learning Tools, mentioned above) are also coming to Office Lens soon. Learn more and download Office Lens here.

OneNote has been my go-to for student organization for several years. Students can have their work automatically saved in one place and in their format of choice: type or handwriting, audio or video. Learning Tools and dictation are available directly within the program. Since OneNote is a cloud-based app, students can access it from wherever they are, from any device, making completing and staying on top of tasks much easier.

In this image, a teacher has written examples for a polynomials lesson using digital ink in OneNote.

Math expressions written into OneNote with digital ink.

Using Office 365—free for teachers and students—and Windows 10 tools has changed my students’ educational experience. When students graduate from high school and enter college or the workplace, they are equipped with productivity and collaboration tools, and skills they will use throughout their lives.

—Robin Lowell


Robin Lowell is a Microsoft Innovative Educator Expert who has years of experience working as a special education, science and mathematics teacher, as well as teacher of blind and visually impaired students. At the Closing the Gap conference this week, she is partnering with Microsoft Office Product Managers to showcase the technologies she finds most impactful for creating inclusive learning environments. You can follow Robin on Twitter at: @teacherinthebox.

The post Accessibility in the classroom—tools that impact my students appeared first on Office Blogs.

Guide to inbox management


With hundreds (or thousands) of messages coming and going from your inbox each month, it can quickly get unruly. Outlook helps you take control and stay on top of what’s important. Here are seven Outlook tips and tools to help you overcome business email overload.

Focused Inbox

Your inbox should be your command center—helping you plan your day by staying on top of what matters. That’s why our Outlook team spends so much of their time improving your email experience. One of Outlook’s newest features, Focused Inbox, helps you focus on the emails most important to you. This feature separates your email into two tabs: Focused and Other—determined by an email’s content and the contacts you communicate with most. That way, all your less important emails are saved but out of the way, enabling you to focus on what’s most important first. To fine-tune the sorting criteria, just use the “Move to Focused” or “Move to Other” options.

Available on Outlook.com and Outlook for iOS and Android. This feature will soon be rolling out to Office 365 customers and Outlook on other platforms.

@Mentions

There’s now a better way to quickly identify action items for team members through email. Simply type the @ symbol followed by individuals’ names in the body of your emails. The @Mention changes the text color and style to call an item to the recipient’s attention. This helps you detect what emails require your response, as the @ symbol will appear in your message list when you’ve been mentioned in an email.

The @Mentions feature is already available in Outlook on the web and is available for Office Insiders using Outlook 2016 for Windows and Mac. Look for @Mentions coming soon for Outlook for iOS, Android and Windows 10 Mobile.

Search

Outlook’s smart search has made it easier to find what you’re looking for—regardless of where the email is stored. This reduces the need to sort emails into folders, which can sometimes take more time than it saves. Outlook searches all email that is synced to your computer as well as stored on your email server, so you can find exactly what you need when you need it. Start typing a name or keyword into the search bar, and Outlook provides smart suggestions based on your previous searches and the content of your mailbox.

Tagging

Outlook provides many ways to organize your inbox to match your individual work style and preferences. Make your emails more easily discoverable by taking advantage of features like colored Categories, Flags and Quick Steps. Categories let you assign a color to your emails, for example to group them by project or work group. Flagging an email reminds you to revisit it later; flagged items appear in your To-Do bar, in the Daily Task List within the Calendar and in the Tasks view. Or simply use the Read/Unread options to come back to important items at your convenience.

Unsubscribe

Individually removing your email address from each mailing list can be a drain on your time. Luckily, Outlook can take care of this problem for you: easily unsubscribe with just one click, without leaving your inbox.

Currently this feature is only available for Outlook on the web.

Sweep

If too many emails are cluttering your inbox, the Sweep tool can help you quickly get rid of unwanted mail. Delete emails in bulk with the Sweep feature or create a rule for deleting certain emails so you don’t have to do it manually. Tired of receiving a store’s promotion? Sweep and block all future emails with just a few clicks.

Currently this feature is only available for Outlook on the web.

Mobile apps

Up your productivity by taking advantage of Outlook’s mobile applications, which provide access to the best Outlook features on your mobile device. The iOS and Android apps bring together the core tools you need to get things done—your email, calendar, contacts and files—helping you get more done even on the smallest screen. Easily send attachments, view and respond to calendar invites, and schedule emails all from within one unified app.

You can also triage action items right on your wrist with Outlook for Apple Watch and Android Wear, as well as interact with emails and calendar information.

Curious about how these features work? Sign up to use Outlook through Office 365 for a more powerful and productive inbox. Also, check out these resources:

The post Guide to inbox management appeared first on Office Blogs.

One-stop shop for enterprise/IT pro content about Office 365


Are you in charge of getting your enterprise organization onto Office 365? We have the information you need to plan, deploy and manage Office 365 and hybrid environments. Find content about core Office 365, Office client deployment and Office 365 services (including Exchange Online, Skype for Business Online, SharePoint Online and more) all in one place at aka.ms/O365ITPro.

We consolidate trusted, authoritative content from multiple Microsoft content sources into one convenient experience so that you can browse to the information you need:

  • Learn what’s included in Office 365 by reviewing the service descriptions.
  • Discover the types of scenarios you can support with Office 365 and hybrid environments versus on-premises server environments, and what else you can do if you integrate with Microsoft Azure.
  • View cloud architecture resources for identity, security, storage and networking and use test lab resources to work through your specific implementation.
  • Find out how to set up the services you need in Office 365, and how to manage them over time.
  • Review your choices for how to deploy the Office client applications to your users and get training resources for them.
  • Understand Office 365’s security and compliance capabilities created to meet the legal, regulatory and technical standards that your organization might have.


Find this content and more at aka.ms/O365ITPro.

—Samantha Robertson is a content portfolio manager for the Office 365 content team.

The post One-stop shop for enterprise/IT pro content about Office 365 appeared first on Office Blogs.

The new .LNK between spam and Locky infection


Just when it seems the Ransom:Win32/Locky activity has slowed down, our continuous monitoring of the ransomware family reveals a new workaround that the authors might be using to keep it going.

The decline in Locky activity can be attributed to the slowdown of detections of Nemucod, which Locky uses to infect computers. Nemucod is a .wsf file contained in .zip attachments in spam email (see our Nemucod WSF blog for details). Locky has also been previously distributed by exploit kits and spam email attachments with other extensions such as .js, .hta, etc.

Figure 1: Locky machine encounters have recently been low

 

Figure 2: Nemucod detection peaked early in October 2016

 

We observed that the Locky ransomware writers, possibly upon seeing that some emails are being proactively blocked, changed the attachment from .wsf files to shortcut files (.LNK extension) that contain PowerShell commands to download and run Locky.

An example of the spam email below shows that it is designed to feign urgency. It is sent with high importance and with random characters in the subject line. The body of the email is empty.

Figure 3: Example of a spam email that could lead to a Locky infection

 

The spam email typically arrives with a .zip attachment, which contains the .LNK files. We’ve observed that the attachment is named “bill”, possibly meant to trick users into thinking it is a bill they need to pay. By opening the .zip attachment, users trigger the infection chain.

Figure 4: .LNK file inside the zip attachment

 

Inspecting the .LNK file reveals the PowerShell script.

Figure 5: Embedded PowerShell command in the shortcut file
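If you want to inspect a suspicious shortcut like this yourself without double-clicking it, the WScript.Shell COM object can read a .LNK file’s target and arguments. This is a defensive sketch only; the file path shown is hypothetical:

    # Read the target and arguments of a .LNK file without launching it
    $shell = New-Object -ComObject WScript.Shell
    $lnk = $shell.CreateShortcut("C:\investigation\bill.lnk")    # hypothetical path
    $lnk.TargetPath    # e.g., powershell.exe
    $lnk.Arguments     # the embedded PowerShell command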

 

This threat is detected as TrojanDownloader:PowerShell/Ploprolo.A.

When the PowerShell script successfully runs, it downloads and executes Locky in a temporary folder (for example, BJYNZR.exe), completing the infection chain.

Figure 6: Embedded PowerShell command used to download the payload

 

The payload malware is the recent version of Locky that has the following characteristics:

  • Encrypted file extension:
    • .odin
  • Decryption instruction files:
    • _440_HOWDO_text.html
    • _HOWDO_text.bmp
    • _HOWDO_text.html

 

For details, see the Win32/Locky family description.

The static configuration inside the binary contains the following information:

 

  • AffiliateId: 5
  • DGA seed: 74311
  • Language skipped: Russian
  • URL path: /apache_handler.php
  • Hard-coded C&C addresses used: 93.170.104.126, 185.46.11.73
  • Offline encryption allowed using public key: BgIAAACkAABSU0ExAAgAAAEAAQA7cxE2y7KzaqNzjzvUMZHpLzaCnLlnDkPn3W74o09zNmJNhvjwqEcwUOJBZmpRCjIoeCnH+NZVPLvdXjfHJGU3WguCLrOE97HEZaXd/uHW95UE8AZW+r4zPdCClnN1mfHF+CvvLJGjiTv+8OMJXNxYA/TJlyXqDhpWarPN79UMGrWApdYkkUiPiN+EBXlJWJsnXfWi5d9Nxrb/vfPIZIzSXmOkOtEg5D1/MlElPrKYJ2yXwCAkSWDzeYXU06uIG6OYeCOrxKIy26wYmCdv+7yEKJ6tXZYH3enbsiwXw+6VR2EAwyD7/U6GnWq4LTT0M/u58dY5WlyGuWIvBrzQ2xXO

 

 

The following SHA1s were used in this analysis:

 

Mitigation and prevention

To avoid falling prey to this new Locky ransomware campaign:

Francis Tan Seng and Duc Nguyen, MMPC

Free eBook: Using SQL Server 2016 for Data Science & Advanced Analytics

Announcing SC DPM 2016 with Modern Backup Storage


System Center Data Protection Manager (SCDPM) is well recognized in the Industry for protection of Microsoft workloads and environments. It protects key Microsoft workloads such as SQL, SharePoint and Exchange as well as virtual machines running on Hyper-V or VMware.

SC DPM 2016 brings in features that deliver improvements in multiple key areas: backup efficiency, performance, flexibility and security. With Modern Backup Storage (MBS), DPM 2016 is turning around the way backups are stored by leveraging modern technologies like ReFS Block Cloning, ReFS Allocate On Write and VHDX. This leads to 3X faster backups and a 50% reduction in storage consumption, thus reducing overall backup TCO.

Windows Server 2016 Private Cloud deployments are faster, more secure and more cost-efficient with enhancements like Storage Spaces Direct (S2D), Cluster OS Rolling Upgrade, Shielded VMs and Resilient Change Tracking (RCT). DPM 2016 can protect Windows Server 2016 Private Cloud deployments efficiently and seamlessly.

3X Faster backups and 50% storage savings

Modern Backup Storage (MBS) uses technologies such as ReFS Block Cloning, VHDX and deduplication to reduce storage consumption and improve performance. ReFS Block Cloning uses Allocate On Write technology, where all new backup writes go directly to a new location, as opposed to Copy On Write. This results in a 70% reduction in IOs, leading to 3X faster backups. MBS is able to grow and shrink backup storage consumption in line with production storage by leveraging VHDX as the backup container and storing it on an ReFS volume. Thus, MBS helps reduce overall storage consumption by 50%.

Optimized Backup Storage Utilization

DPM 2016’s workload-aware backup storage technology gives you the flexibility to choose appropriate storage for a given data source type. For example, SQL databases with a 15-minute RPO need high-performance backup storage to ensure the storage can keep up with backup speeds, while backups of a file system with a 1-day RPO can be stored on a low-cost JBOD. This flexibility optimizes overall storage utilization and thus reduces backup TCO further.


 

Hyper-V Private Cloud Protection Enhancements

Windows Server 2016 introduced RCT technology, which tracks backup changes natively within a VM. This removes the need for DPM’s filter driver to track backup changes. As a result, VM backups are resilient, avoiding painful Consistency Checks in scenarios like VM storage migration.

Windows Server 2016 comes with Storage Spaces Direct (S2D), which eliminates the need for expensive shared storage and related complexities. DPM recognizes and protects Hyper-V VMs deployed on any S2D or ReFS-based SOFS cluster configuration.

DPM’s ability to back up and recover Shielded VMs securely helps maintain security in backups. DPM maintains backup SLAs by continuing VM backups while a cluster is being upgraded to Windows Server 2016 using Cluster OS Rolling Upgrade.

Upgrade with ease and peace of mind

DPM 2016 upgrade is very simple and will not disrupt your production servers. After upgrading to DPM 2016 and upgrading the agents on production servers, DPM backups continue without rebooting production servers. The DPM MBS capability is enabled after upgrading the DPM OS to Windows Server 2016.

Be sure to go through Introducing DPM 2016 Modern Backup Storage to understand how MBS works.

Get these features Now!

You can get DPM 2016 up and running in ten minutes by downloading the Evaluation VHD. Questions? Reach out to us at AskBackupTeam@microsoft.com.

If you want to enable Azure Backup for long-term retention, refer to Preparing to back up workloads to Azure with DPM. Click for a free Azure trial subscription.

Here are some additional resources:

 


Introducing DPM 2016 Modern Backup Storage


 

With DPM 2016, we announced Modern Backup Storage (MBS), delivering 50% storage savings, 3x faster backups, and efficient backup storage utilization with Workload Aware Storage.

Data Protection Manager can back up key workloads such as SQL, SharePoint, Exchange, file servers, clients and VMs running on Hyper-V or VMware. With Modern Backup Storage and RCT-based Hyper-V VM backups, DPM 2016 goes a step further in enhancing enterprise backups by completely restructuring the way data is backed up and stored. Because MBS uses Windows Server 2016 ReFS Block Cloning and VHDX technology, MBS is enabled when DPM 2016 is running on Windows Server 2016.

 

How does Modern Backup Storage work?

  1. Add volumes to MBS and configure Workload Aware Storage.
  2. Begin backing up by creating a Protection Group with MBS.

With these simple steps, you can efficiently store your backups using Modern Backup Storage technology.

Get these features Now! 

You can get DPM 2016 up and running in ten minutes by downloading the Evaluation VHD. Questions? Reach out to us at AskBackupTeam@microsoft.com.

If you are new to Azure Backup and want to enable Azure Backup for long-term retention, refer to Preparing to back up workloads to Azure with DPM. Click for a free Azure trial subscription.

Here are some additional resources:

.NET Core Tooling in Visual Studio “15”


This post was co-authored by David Carmona, a Principal Program Manager Lead on the .NET team, and Joe Morris, a Senior Program Manager on the .NET team.

A couple of weeks back, we dedicated a blog post to introducing .NET Standard 2.0, which will significantly extend your ability to share code by unifying .NET APIs across all application types and platforms.

Today, we are going to focus on how we are unifying the project system and the build infrastructure with MSBuild. These changes were announced back in May and will be available aligned with the next version of Visual Studio (Visual Studio “15”). These tools will provide you with a development experience in Visual Studio, Visual Studio Code and the command line.

For the impatient: TL;DR

We released .NET Core 1.0 back in June. This included the RTM runtime and a preview of tools components. The final release of .NET Core tools will provide a build and project system that are unified with the rest of .NET project types, moving away from the project.json infrastructure that was specific to .NET Core. That makes .NET Core application and .NET Standard library projects just another kind of .NET project you can use in conjunction with other project types, such as Xamarin, WPF, Unity or UWP.

This new set of tools and improvements provides a big step forward in the experience. We’ve preserved key project.json characteristics that many of you have told us you value while enabling new cross-project scenarios not possible before. We are also planning to bring those benefits to all the project types over time and not just for .NET Core or .NET Standard. And because we will support full migration of existing project.json files, you can continue to work with them safely.

Here are the key improved experiences you will see in this unified Build system:

  • Project references work: You can reference .NET Core and .NET Standard library projects from existing .NET projects (WPF, ASP.NET, Xamarin, Unity etc.) and the opposite direction also, as explained in the .NET Standard post.
  • Package references are integrated: NuGet package references are now part of the csproj format, not a special file using its own format.
  • Cross-targeting support: You can cross-target multiple target frameworks in one project.
  • Simplified csproj format: The csproj format has been made as minimal as possible to make it easier to edit by hand. Hand-editing is optional, common gestures in Visual Studio will take care of updating the file and we will also provide command line options in the CLI for the most common actions.
  • Support for file wildcards: No requirement to list individual files in the project file. This enables folder-based projects that don’t require every file to be added manually and dramatically improve team collaboration, as the project file doesn’t need to be modified every time a new file is added.
  • Migration of project.json/xproj to csproj: You can seamlessly migrate your existing .NET Core projects from project.json to csproj without any loss, at any time, in Visual Studio or at the command line.

Why do we need a standard Build System?

We’ve been talking recently about .NET Standard 2.0. It will give you access to many more APIs and can be used to share code across all the apps you are working on. That sounds great! It turns out that a key enabler of this outcome is a standard build system; in the absence of one, the .NET Standard 2.0 vision is not fully realized. .NET Standard requires a standard API and standard project types to act as currencies within a standard build system. With those in place, you can flow code to all the places you want, enabling all the potential combinations of project-to-project and NuGet references.

.NET Core is the only project type that isn’t using MSBuild today, so it’s the only one that has to change. This includes .NET Standard library projects. With all project types using the same build system and project formats, it’s easy and intuitive to re-use libraries across different project types.

The New Tools Experience at the Command line

The updated tools experience will have similar ease of use to the existing project.json system, with a better experience if you want to switch back and forth with Visual Studio. The following walkthrough is intended to demonstrate that. Today’s post focuses on the command line experience. We will publish another post at a later date that walks through the same experiences in Visual Studio “15”.

Note: Today we will show manual editing of csproj files. We also plan to add dotnet commands that will update csproj and sln files for common tasks, such as adding a NuGet package when working outside of Visual Studio.

New Template

dotnet new is the command for creating a new project from a template with the .NET Core command line tools. It generates a csproj project file and a Program.cs file. The csproj file is given the same name as the directory by default. You can see the new experience in the image below.

dotnet-new now uses csproj

csproj Format

The csproj file format has been significantly simplified to make it more friendly for the command line experience. If you are familiar with project.json, you can see that it contains very similar information. It also supports a wildcard syntax, to avoid the need to list individual source files.

This is the default csproj that dotnet new creates. It provides you with access to all of the assemblies that are part of the .NET Core runtime install, such as System.Collections. It also provides access to all of the tools and targets that come with the .NET Core SDK.
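The file itself appeared as a screenshot in the original post; below is a rough sketch of what the preview-era default looked like. The exact package versions vary by tooling drop and are illustrative:

    <Project ToolsVersion="15.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
      <Import Project="$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\Microsoft.Common.props" />
      <PropertyGroup>
        <OutputType>Exe</OutputType>
        <TargetFramework>netcoreapp1.0</TargetFramework>
      </PropertyGroup>
      <ItemGroup>
        <!-- wildcard includes: no need to list individual source files -->
        <Compile Include="**\*.cs" />
        <EmbeddedResource Include="**\*.resx" />
      </ItemGroup>
      <ItemGroup>
        <PackageReference Include="Microsoft.NETCore.App">
          <Version>1.0.1</Version>    <!-- illustrative version -->
        </PackageReference>
        <PackageReference Include="Microsoft.NET.Sdk">
          <Version>1.0.0-alpha-20161019-1</Version>    <!-- illustrative preview version -->
          <PrivateAssets>All</PrivateAssets>
        </PackageReference>
      </ItemGroup>
      <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
    </Project>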

As you can see, the resulting project file definition is in fact quite simple, avoiding the use of complex values such as GUIDs. A detailed mapping of project.json to .csproj elements is listed here.

NuGet Package references

You can add NuGet package references within the csproj format, instead of needing to specify them in a separate file with a special format. The reference syntax was shown as an image in the original post; as a sketch, a package reference takes roughly the following form (the placeholder values stand in for a real package id and version):
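    <PackageReference Include="PACKAGE_ID">
      <Version>PACKAGE_VERSION</Version>    <!-- placeholders, not a real package -->
    </PackageReference>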

For example, if you want to add a reference in the project above to WindowsAzure.Storage, you just need to add a reference like the following next to the other two package references (the version number is illustrative):
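    <PackageReference Include="WindowsAzure.Storage">
      <Version>7.2.1</Version>    <!-- illustrative version -->
    </PackageReference>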

Cross-targeting

In most cases, you will target a single .NET Core, .NET Framework or .NET Standard target with your library. Sometimes you need more flexibility and have to produce multiple assets. MSBuild now supports cross-targeting as a key scenario. You can see the syntax below to specify the set of targets that you want to build for, as a semicolon-separated list.
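The syntax appeared as an image in the original post; here is a sketch of the cross-targeting property as it looked in the preview tooling (the property name and target framework monikers follow that era’s conventions and are illustrative):

    <PropertyGroup>
      <!-- semicolon-separated list of target frameworks; monikers illustrative -->
      <TargetFrameworks>netcoreapp1.0;net451</TargetFrameworks>
    </PropertyGroup>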

This will also automatically set the right #defines that you can use in your code to enable or disable specific blocks of code depending on the target frameworks.
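For example (a hypothetical C# snippet; NET451 and NETCOREAPP1_0 follow the symbol-naming pattern the tooling generates for those targets):

    #if NET451
        // code path that can use full .NET Framework APIs
    #elif NETCOREAPP1_0
        // code path that uses .NET Core APIs
    #endif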

Project Migration

You will be able to migrate existing project.json projects to csproj very easily using the new command dotnet migrate. The following project.json generates the exact same csproj file showed before after applying dotnet migrate.
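The project.json shown in the original post was an image; a minimal console-app project.json from that era looked roughly like this (all version numbers are illustrative):

    {
      "version": "1.0.0-*",
      "buildOptions": {
        "emitEntryPoint": true
      },
      "dependencies": {
        "Microsoft.NETCore.App": {
          "type": "platform",
          "version": "1.0.1"
        }
      },
      "frameworks": {
        "netcoreapp1.0": {}
      }
    }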

.NET CLI commands

There are a set of useful commands exposed by the .NET CLI tools. dotnet restore, dotnet build, dotnet publish and dotnet pack are good examples. These commands will continue to be included with the .NET CLI and do largely the same thing as before, with the exception that they will be implemented on top of MSBuild, as appropriate.
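A typical sequence at the command line looks like this (a sketch; these are real CLI verbs, and -c selects the build configuration):

    dotnet restore              # restore NuGet packages
    dotnet build                # compile the project (now driven by MSBuild)
    dotnet pack                 # produce a NuGet package
    dotnet publish -c Release   # lay out a deployable application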

The only difference is that the .NET CLI will provide a much thinner layer, as it will rely on MSBuild for most of the work. The primary role for the .NET CLI is to provide a user-friendly experience for executing MSBuild commands and also as a single tool host for commands that do not use MSBuild, such as dotnet new.

Closing

We are in the process of building the new tools and related experiences that you’ve seen in this post. We’re excited to ship a preview update to you later this year and the final version aligned with Visual Studio “15”.

In the meantime, you can safely continue to use the existing project.json format, which will carry forward as shown before.

We’d love to hear your feedback on this work. We think this is a big step forward for a unified .NET platform that will make your life easier by bringing your new and existing code to any application type and platform.

And because we develop in the open, you are welcome to join the GitHub repos where this work is taking place:

Thank you!

Top 10 Moments in 20 Years of Exchange Server


During the recent Microsoft Ignite conference, we had a party celebrating 20 years of Exchange Server. It was great fun and very dignified, of course.

During said celebration, a great video was shown that poked fun at some of the significant moments in Exchange history. It is now available for you to enjoy:

Nino Bilic

Announcing Windows 10 Insider Preview Build 14951 for Mobile and PC


Hello Windows Insiders!

Today we are excited to be releasing Windows 10 Insider Preview Build 14951 for Mobile and PC to Windows Insiders in the Fast ring.

What’s new in Build 14951

Refining the customization experience for precision touchpad (PC): Since last week, we’ve continued working in this space, and have another set of improvements headed your way. To start with, we’ve hooked up our keyboard shortcut picker, so now if you choose that option on the Advanced gestures page, you’ll be presented with a recorder so you can capture your favorite key combo – perhaps WIN + Alt + D or WIN + F to start with? Secondly, we’ve added a Change audio and volume option to the set of basic swipe gestures you can pick from. Finally, we’ve updated the reset button to now have a progress circle and display a check mark when it is finished. We’ve also heard your feedback from the last flight, and fixed a few issues, including that the reset button wasn’t clearing the settings listed on the Advanced gestures page, that the 4-finger gesture graphic was visible on devices that only supported 3 contact points, and that some of the advanced gestures options weren’t working as expected. If you have any more feedback for us, please let us know!

Windows Ink Improvements (PC): Starting with this build, pen dropdowns in Windows Ink Workspace let you change both color and width without having to open the dropdown twice. After you make all the adjustments to your favorite pen, start drawing right away and we will dismiss the dropdown for you.

 


We are also introducing Stencils. The Windows Ink protractor tool combines the functions of both protractor and compass into one – now you can draw an arc or a complete circle of an arbitrary size with little effort. A familiar two-finger pinch gesture resizes the protractor to the desired size, and a degree readout follows your pen, mouse, or finger as you draw along the side of the protractor, displaying arc degrees. In this preview build, the ruler also got a small update – its degree readout shows a numerical value of the angle, making drawing angles even easier.

Simplified, more familiar camera interface (PC & Mobile): The Camera app received a big update this week for Insiders. We’ve redesigned the Camera interface for ease and accessibility. Check out some of our new features:

  • Enjoy taking photos, videos, and panoramas with our higher-contrast capture buttons.
  • Set a photo timer right from the camera dashboard with our new toggle control.
  • Get to Settings faster! Now, launch into Settings directly from the camera UI.
  • Access your camera roll with one hand from its new spot on the screen.
  • Zoom more easily with the new zoom slider.
  • Make sure you nailed the shot, with a more noticeable capture animation.
  • Change between front- and rear-facing cameras with a more prominent button control.
  • On PC, use the spacebar as a shortcut to take pictures.


You can now experience the magic of living images on your Surface, enabled on Surface Book, Surface Pro 4, Surface Pro 3, and Surface 3! With living images, you can extend your still captures with a snippet of video. These are created automatically whenever your shots feature motion—just navigate to Settings and turn on Capture living images.

And this release features a variety of performance improvements to enhance your experience. We’ve added faster shot-to-shot support, improved feedback for saving large videos to SD cards, and improved camera startup time, among other improvements.


Simplifying your developer experience (PC): We’ve done some underlying work, and now you’ll no longer have to reboot your PC after turning on Developer Mode! This means that you can start using Device Portal and Device Discovery as soon as the Windows Developer Mode package has finished installing, rather than having to reboot first.

Narrator improvements: This build includes a number of improvements to Narrator including multiple fixes to continuous reading when used in tables and on web pages, a fix for the Caps Lock + W reading experience so dialogs and other elements are read correctly again, and a fix so that reading hint text does not interrupt the reading of information by Narrator but comes after the main information is read. And Narrator now properly indicates when it is exiting.

Windows Subsystem for Linux: Today we are happy to announce two large updates to WSL!

  • Official Ubuntu 16.04 support. Ubuntu 16.04 (Xenial) is installed for all new Bash on Ubuntu on Windows instances starting in build 14951.  This replaces Ubuntu 14.04 (Trusty).  Existing user instances will not be upgraded automatically.  Users on the Windows Insider program can upgrade manually from 14.04 to 16.04 using the do-release-upgrade command.
  • Windows / WSL interoperability. Users can now launch Windows binaries directly from a WSL command prompt.  This is the number one request from our users on the WSL User Voice page.  Some examples include:
$ export PATH=$PATH:/mnt/c/Windows/System32
$ notepad.exe
$ ipconfig.exe | grep IPv4 | cut -d: -f2
$ ls -la | findstr.exe foo.txt
$ cmd.exe /c dir

More information can be found on the WSL Blog and the WSL MSDN page. Other changes and more information can be found on the WSL Release Notes page.

Other improvements and fixes for Mobile

  • Following feedback discussing French punctuation rules, we’ve updated our French keyboards (with the exception of French (Canada), for which these rules don’t apply) to now add a space both before and after when using two-part punctuation marks, such as the semi-colon, the colon, the question mark and the exclamation mark.
  • We fixed an issue resulting in the Camera shutter sounds entry missing from the Sounds Settings page via Settings > Personalization > Sounds.

Other improvements and fixes for PC

  • We fixed the issue causing PCs that are capable of Connected Standby such as the Surface Book and Surface Pro 4 to sometimes bugcheck (bluescreen) while in Connected Standby.
  • We fixed an issue resulting in Forza Horizon 3, Gears of War (and some 3rd-party games) failing to install from the Store with the error code 0x80073cf9 when the system’s app install location was set to a drive with native 4K sectors.
  • We fixed an issue resulting in larger Store games such as ReCore, Gears of War 4, Forza Horizon 3, Killer Instinct and Rise of the Tomb Raider possibly failing to launch.
  • We fixed an issue where the console window (which hosts Command Prompt, PowerShell, and other command-line utilities) might not snap correctly to the inside edges between two monitors with different DPI scaling.
  • We fixed an issue where all agenda items in the taskbar’s Clock and Calendar flyout were using the primary calendar’s color, rather than matching their respective calendar’s colors as selected in the Outlook Calendar app.
  • We fixed an issue where the Add PIN button in Settings > Accounts > Sign-in Options was sometimes unexpectedly greyed out for domain-joined devices.
  • We fixed an issue resulting in Groove crashing if you tried to reorder the songs in a very large playlist.
  • We've updated our migration logic to now include custom scan code mappings. That means that going forward from this build, if you've used Registry Editor to remap certain keys (for example, Caps Lock key to null), that change will persist across upgrades.
  • We fixed an issue Insiders may have experienced resulting in Adobe Photoshop Express crashing after clicking on the 'correct' button when trying to edit a cropped image.
  • We fixed an issue resulting in .csv or .xlsx files downloaded from Microsoft Edge sometimes unexpectedly appearing to be locked for editing by “another user” when SmartScreen was enabled.
  • We fixed an issue where enabling the RemoteFX adaptor for a Virtual Machine would result in it failing to power on with the error 'Unspecified error' (0x80004005).

Known issues for Mobile

  • Signing into apps such as Feedback Hub, Groove, MSN News, etc. with your Microsoft Account if you sign out or get signed out of these apps will not work. If this happens to you and you cannot sign in to Feedback Hub, you can send us feedback via the forums.
  • When rebooting a phone on this build and Build 14946 from last week, the device appears to get "stuck" on the blue Windows logo screen during the boot cycle. We have identified 2 bugs causing this issue and are working to check in fixes soon. However, being "stuck" is only temporary. Depending on which device you have, your phone may be in this stuck state for 20-30 minutes, but it will eventually progress to the Lock screen. There is no permanent impact from these 2 bugs, just an unexpectedly long boot time. Please don't reset your device! It'll take longer to reset the device than it will to wait for the boot cycle to complete.
  • Excel Mobile will freeze after adding sheets and eventually crashes.
  • Insiders who have configured a data limit on their phone may get into a state where they receive frequent notifications about having exceeded this limit. To resolve the issue, please go into Settings > Device & Network > Data Usage and remove and recreate your data limit settings.

Known issues for PC

  • Signing into apps such as Feedback Hub, Groove, MSN News, etc. with your Microsoft Account if you sign out or get signed out of these apps will not work. If this happens to you and you cannot sign in to Feedback Hub, you can send us feedback via the forums.
  • You may experience a crash while using the protractor in Sketchpad - we're working on a fix.
  • If you have a 3rd party antivirus product installed on your PC – your PC might not be able to complete the update to this build and roll-back to the previous build.

Upcoming Bugbash
Just a heads-up that we are planning to kick off our next bugbash starting on Tuesday, November 8th and finishing at the end of the day on Sunday, November 13th. I’ll have more details in the coming weeks. Looking forward to seeing a lot of participation from Insiders!

Team Updates

Last week was tremendously exciting for the #WINsiders4Good work that Windows Insiders are doing. We kicked off a series of Create-A-Thons at Microsoft Retail Stores in Oregon, Puerto Rico, New York and Boston, among others. Some of our Insiders who did not have a retail store near them organized their own events in London and Germany! The top thing we learned was that the simplest solution is often the best. Some of the simple solutions we created were: a new logo for Red Barn, a resource center for the homeless that is looking to rebrand itself; a formula and model by which St. Francis House, a homeless shelter in Boston, could decide if someone needed new clothing; and a free list of software and learning tools for after-school care for underprivileged children at Educational Alliance, all using and running on Windows 10! The NGOs were so pleased to have such incredibly passionate people like the Windows Insiders participating in their events. It was amazing to watch Insiders travel across state lines to attend these events (2 of our Insiders drove from Delaware to NYC for the event!). Again, if you’d like to participate in an event, START ONE! Many of our Insiders are starting to do this. Just let us know what you are doing and we will do our best to help get the word out and support you remotely.

For those who might have missed it, I wrote up an article about our #WINsiders4Good Nigeria fellowship for the Windows team and senior leadership, but decided that the Windows Insiders would like to see it as well.

All of this leads to our Windows Insiders mission of using technology to learn, create and make a lasting impact on the world.  I feel so fortunate to be a part of your community.

I leave for New Zealand soon to keynote Ignite NZ and gave a little preview on Channel 9 earlier. I hope to meet many Windows Insiders there. I will be "legitly loitering" at the Microsoft Stand/Windows Insider section for much of the week so please do come say hi and help me press the button to ship a build.

Thank you everyone and keep hustling,
Dona <3

Windows Server 2016 Volume Activation Tips


Hi,

This is Scott McArthur, a Supportability Program Manager for Windows and Surface. With the launch of Windows Server 2016, I wanted to share some information on volume activation:

  • Updating your existing KMS hosts to support Windows Server 2016
  • Setting up a new Windows Server 2016 KMS host
  • Activating Windows 10 Enterprise 2016 LTSB

Updating existing KMS Hosts

If your KMS host is Windows Server 2012, you need to install the following updates:

If your KMS host is Windows Server 2012 R2, you need to install the following updates:

Once updated, you need to obtain a Windows Server 2016 CSVLK. Do the following:

  1. Log on to the Volume Licensing Service Center (VLSC).
  2. Click License.
  3. Click Relationship Summary.
  4. Click the License ID of your current Active License.
  5. After the page loads, click Product Keys.
  6. Look for a key called “Windows Srv 2016 DataCtr/Std KMS”

If you are unable to locate your product key, please contact the Volume Licensing Service Center.

Once you have the key, run the following commands at an elevated command prompt:

1. Install the Windows Server 2016 CSVLK:
    Cscript.exe %windir%\system32\slmgr.vbs /ipk <Windows Srv 2016 CSVLK>
2. Activate the Windows Server 2016 CSVLK:
    Cscript.exe %windir%\system32\slmgr.vbs /ato

 

Setting up new Windows Server 2016 KMS host

If you want to set up a new Windows Server 2016 KMS host, you can normally use the Volume Activation Services role wizard or the command line to configure it.

We are aware of an issue where, when you run the Volume Activation Services role wizard, it reports the error “vmw.exe has stopped working” during the product key management phase of the wizard.


Microsoft is investigating this issue and will update this blog when a fix is available. In the meantime, you will need to configure the host using the steps below:
1. Open an elevated command prompt
2. Install the Windows Server 2016 CSVLK:
    cscript.exe %windir%\system32\slmgr.vbs /ipk <paste Windows Srv 2016 DataCtr/Std KMS CSVLK here>
3. Activate the Windows Server 2016 CSVLK:
    Cscript.exe %windir%\system32\slmgr.vbs /ato

 

If the system does not have internet connectivity, do the following to activate via the command line:
1. Open an elevated command prompt
2. Obtain the Installation ID:
    Cscript.exe %windir%\system32\slmgr.vbs /dti
3. Look up the Microsoft phone activation number listed in %windir%\System32\SPPUI\Phone.inf
4. Call the number and follow the prompts to obtain the confirmation ID
5. Apply the confirmation ID (do not include hyphens):
    Cscript.exe %windir%\system32\slmgr.vbs /atp <confirmation ID>
6. Wait for a success message

7. Verify that the license status shows licensed:
    Cscript.exe %windir%\system32\slmgr.vbs /dlv


Windows 10 Enterprise 2016 LTSB Edition volume activation

Note: In addition to activating Windows Server 2016, the “Windows Srv 2016 DataCtr/Std KMS” KMS host (CSVLK) key also activates Windows 10 Enterprise 2016 LTSB edition.

If your KMS host is Windows Server 2012 R2, you need to install the updates listed above.
Note: Windows Server 2008 R2 is not supported as a KMS host for Windows Server 2016.

 

Hope this helps with your Windows Server 2016 deployments.
Scott McArthur