Channel: TechNet Technology News

Quick-Start Guide to the Data Science Bowl Lung Cancer Detection Challenge, Using Deep Learning, Microsoft Cognitive Toolkit and Azure GPU VMs


This post is by Miguel Fierro, Data Scientist, Ye Xing, Senior Data Scientist, and Tao Wu, Principal Data Scientist Manager, all at Microsoft.

Since its launch in mid-January, the Data Science Bowl Lung Cancer Detection Competition has attracted more than 1,000 submissions. To be successful in this competition, data scientists need to be able to get started quickly and make rapid iterative changes. In this post, we show how to compute features of the scanned images in the competition with a pre-trained Convolutional Neural Network (CNN), and use these features to classify the scans as cancerous or not cancerous, using a boosted tree, all in one hour. With a score of 0.55979, you would be ranked in the top 10% as of January 19th on the leaderboard, or in the top 20% as of February 7th.

To achieve this, we used the following:

  1. A pre-trained CNN as the image featurizer. This 152-layer ResNet model is implemented on the Microsoft Cognitive Toolkit deep learning framework (formerly called CNTK) and trained using the ImageNet dataset.
  2. LightGBM gradient boosting framework as the image classifier.
  3. Azure Virtual Machines (VMs) with GPU acceleration.

For the impatient, we have shared our code in this Jupyter notebook. The Cognitive Toolkit featurization takes 53 minutes (29 minutes if a simpler, 18-layer ResNet model is used), and the LightGBM training takes 6 minutes at a learning rate of 0.001. A simple version of the code was also published on Kaggle.

Introduction

According to the American Lung Association, lung cancer is the leading cause of cancer death among both men and women in the US, with a low rate of early diagnosis. The Data Science Bowl competition on Kaggle aims to help with early lung cancer detection. Participants use machine learning to determine whether CT scans of the lung have cancerous lesions or not. A 3D representation of such a scan is shown in Fig. 1.


Fig. 1: 3D volume rendering of a sample lung using competition data. It was computed using the script from this blog post.

Training speed is one of the most important factors for success in competitions like these. In this respect, both Cognitive Toolkit and LightGBM deliver excellent performance across a range of tasks (Shi et al., 2016; LightGBM performance summary). These two solutions, combined with Azure’s high-performance GPU VMs, provide a powerful on-demand environment to compete in the Data Science Bowl.

To get started on the GPU VM, you need to install these frameworks:

  • CUDA: CUDA 8.0 can be downloaded from the NVIDIA website (registration is required). If you are using Linux, you also need to download CUDA Patch 1 from the website. The patch adds support for gcc 5.4 as one of the host compilers.
  • cuDNN: cuDNN 5.1 (registration with NVIDIA required).
  • MKL: Intel's Math Kernel Library (MKL) version 11.3 update 3 (registration with Intel required).
  • Anaconda: Anaconda 4.2.0 provides support for conda environments and Jupyter notebooks.
  • OpenCV: Download and install from the official OpenCV website. This can also be installed via conda with this command:

    conda install -c https://conda.binstar.org/conda-forge opencv

  • Scikit-learn: Scikit-learn 0.18 is easily installed via pip:

    pip install scikit-learn

  • Cognitive Toolkit: Cognitive Toolkit 2.0 beta9 for Python. You can build from source but it’s faster to install the precompiled binaries.
  • LightGBM: LightGBM is easily installed with CMake. You will also need to install the Python bindings.
  • Data management libraries: You also need to install the pydicom and glob2 libraries, using pip:

    pip install pydicom glob2

In addition to these libraries and the pre-trained network (downloadable here), it’s necessary to download the competition data. The images are in DICOM format and consist of a group of slices of the thorax of each patient (see Fig. 2).
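
If you haven't worked with DICOM before, here is a minimal sketch of how one patient's slices can be read into a NumPy volume with pydicom and glob. This assumes pydicom 1.x (older versions expose the same reader as dicom.read_file); the folder layout, file naming, and function name are illustrative, not taken from the competition notebook:

    import glob
    import numpy as np
    import pydicom  # installed above with "pip install pydicom"

    def load_scan(patient_dir):
        """Read all DICOM slices of one patient and stack them into a 3D array."""
        slices = [pydicom.dcmread(f) for f in glob.glob(patient_dir + "/*.dcm")]
        # Order the slices along the z-axis using the DICOM position tag
        slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
        return np.stack([s.pixel_array for s in slices]).astype(np.int16)

    volume = load_scan("stage1/patient_0001")  # hypothetical path
    print(volume.shape)                        # e.g. (num_slices, 512, 512)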


Fig. 2: Axial slices of the thorax of a patient with cancer (left) and a patient without cancer (right).

Cancer Image Detection with Cognitive Toolkit and LightGBM

Many deep learning applications use pre-trained models as a basis and apply trained models to a new domain, in a technique called transfer learning (Yosinski et al., 2014; Donahue et al., 2014; Oquab et al., 2014; Esteva et al., 2017). For image classification, the first few layers of a CNN represent low level features of the inputs, such as color blobs or texture features, and the last layers represent high-level features, specific to the classification task.

We use transfer learning with a pre-trained CNN on ImageNet as a featurizer to generate features from the Data Science Bowl dataset. Once the features are computed, a boosted tree using LightGBM is applied to classify the image.


Fig. 3: Representation of a ResNet CNN with an image from ImageNet. The input is an RGB image of a cat, the output is a probability vector,
whose maximum corresponds to the label “tabby cat”.

Fig. 3 represents the typical scheme of a CNN classifying an image. The input image, in this case a cat, has a size of 224×224 and a depth of 3, corresponding to the three color channels, red, green and blue (RGB). The image passes through convolutions in each internal layer, diminishing in size and growing in depth. The final layer outputs a vector of 1000 probabilities, corresponding to the 1000 classes of ImageNet. The predicted class of the network is the component with the highest probability.


Fig. 4: Workflow of the proposed solution. The images of a patient scan are fed to the network in batches, which, after a forward propagation,
are transformed into features. This process is computed with the Microsoft Cognitive Toolkit. Next, these features are set as the input of a LightGBM
boosted tree, which classifies the images as those of a patient with or without cancer.

To create the featurizer, we remove the last layer of the CNN and use the output of the penultimate layer as features. The process is shown in Fig. 4. Each patient has an arbitrary number of scan images. The images are cropped to 224×224 and packed in groups of 3, to match the input format of ImageNet. They are fed to the pre-trained network in k batches and convoluted in each internal layer, up to the penultimate one. This process is performed using Cognitive Toolkit. The outputs of the network are the features we feed to the boosted tree, built with LightGBM.
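
In code, the featurizer can be built by loading the pre-trained model and cutting the graph at the penultimate node, along the lines of the minimal sketch below. The node name "z.x" follows the naming convention of the published Cognitive Toolkit ResNet models; the model file name and batch size are assumptions, so treat this as a sketch rather than the exact notebook code:

    import numpy as np
    from cntk import load_model
    from cntk.ops import combine

    def get_featurizer(model_path="ResNet_152.model", node_name="z.x"):
        """Load a pre-trained ResNet and truncate it at the penultimate layer."""
        model = load_model(model_path)
        penultimate = model.find_by_name(node_name)
        # Build a new network whose output is the penultimate layer
        return combine([penultimate.owner])

    featurizer = get_featurizer()
    # A batch of images cropped and packed to the ImageNet input format:
    # (batch, channels, height, width) = (k, 3, 224, 224)
    batch = np.random.rand(16, 3, 224, 224).astype(np.float32)
    features = featurizer.eval(batch)  # one feature vector per image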

In Fig. 5, the optimization loss of the boosted tree is shown. On an Azure NC24 GPU VM, the computation of the features takes 53 minutes with ResNet-152 and 29 minutes with ResNet-18. The training procedure uses early stopping of 300, which means that training stops when the validation loss has not improved in 300 boosting rounds. For this reason, the training time of the boosted tree can vary between 1 and 10 minutes.
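
A hedged sketch of the LightGBM stage is shown below. The learning rate of 0.001 and the early stopping of 300 come from the text above; the feature matrices and labels (X_train, y_train, X_val, y_val, X_test) are assumed to be the outputs of the featurization step, and the remaining parameters are illustrative defaults (in LightGBM versions newer than this post, early stopping is configured through callbacks instead):

    import lightgbm as lgb

    # X_train/X_val: per-patient feature matrices from the featurizer;
    # y_train/y_val: binary cancer labels (assumed prepared earlier)
    train_set = lgb.Dataset(X_train, label=y_train)
    val_set = lgb.Dataset(X_val, label=y_val, reference=train_set)

    params = {
        "objective": "binary",
        "metric": "binary_logloss",  # the competition is scored on log loss
        "learning_rate": 0.001,
    }

    clf = lgb.train(params,
                    train_set,
                    num_boost_round=10000,      # upper bound; early stopping ends sooner
                    valid_sets=[val_set],
                    early_stopping_rounds=300)  # stop after 300 rounds without improvement

    y_pred = clf.predict(X_test, num_iteration=clf.best_iteration)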


Fig. 5: Boosted tree loss in each epoch, with early stopping.

Once training is finished, we can compute the prediction using the validation set provided by Kaggle. This prediction is a CSV file that can be submitted to get a rank on the leaderboard.
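
Building the submission file is then a one-liner with pandas. The column names follow the competition's sample submission, and patient_ids and y_pred are assumed to come from the previous steps:

    import pandas as pd

    # "id" and "cancer" are the columns expected by the sample submission
    submission = pd.DataFrame({"id": patient_ids, "cancer": y_pred})
    submission.to_csv("submission.csv", index=False)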

Possible Improvements

In this post, we discussed a baseline example of applying a ResNet CNN architecture pre-trained on ImageNet to the problem of cancer detection in medical images, something that will get you started quickly in the current Data Science Bowl competition. Cognitive Toolkit provides other pre-trained networks that you can test, such as AlexNet, AlexNet with Batch Normalization, and ResNet with 18 layers.

The example we provided shows how to transfer learning from natural ImageNet images to medical images. In the medical imaging domain, we often lack annotated image datasets that are large enough to train deep neural networks, so using CNN models pre-trained on natural ImageNet images as a base mitigates this problem. While the differences in image contrast and texture between medical images and natural images are significant, supervised fine-tuning, i.e., updating the weights of the pre-trained network by backpropagation using domain-specific data, has been shown to perform well in a number of tasks (Shin et al., 2016; Esteva et al., 2017; Menegola et al., 2016; Tajbakhsh et al., 2016). Fine-tuning can be done on all layers or only on the later layers, which contain more domain-specific features.

Another possible improvement would be to integrate traditional medical image techniques and features with CNN models. For example, nodule candidate locations can be identified with a commercial-grade computer-aided detection (CAD) system before the processed images are fed into a CNN, which helps improve CT lung nodule identification (Ginneken et al., 2015). As another example, traditional medical image features such as geometric features (tumor volume, relative distance measured from the pleural wall, tumor shape) and texture features (gray-level co-occurrence matrix) can be combined with deep features from pre-trained ImageNet models to predict short-term and long-term lung cancer survival (Paul et al., 2016). Generally speaking, domain knowledge in medical imaging and a background in low-level image processing (such as delineating the boundary of anatomical structures) are helpful when applying these techniques.

Additionally, the dimension difference between 2D natural ImageNet images and 3D lung cancer CT images could also be considered. The example provided in this post uses three adjacent axial images as the three input channels, replacing the original three RGB channels of 2D ImageNet images. Promising lung nodule detection results have been achieved by using a pre-trained 2D CNN on a 2.5D representation of 3D volumetric images, which uses three orthogonal slices (axial, coronal and sagittal) through the center of the nodule candidates marked by CAD as the input patch to the pre-trained 2D CNN model (Roth et al., 2014).

We hope this blog post and the quick start script shared in the notebook make it easier for you to participate in the Data Science Bowl competition. Happy hacking!

Miguel, Ye and Tao


Encoding Hints and SQL Server Analysis Services vNext CTP 1.3


The public CTP 1.3 of SQL Server vNext on Windows is available here! The corresponding versions of SQL Server Data Tools (SSDT) and SQL Server Management Studio (SSMS) will be released in the coming weeks. They include much-anticipated new features, so watch out for the upcoming announcements!

Encoding hints

CTP 1.3 introduces encoding hints, an advanced feature used to optimize processing (data refresh) of large in-memory tabular models. Please refer to the Performance Tuning of Tabular Models in SQL Server 2012 Analysis Services whitepaper to better understand encoding. The encoding process described there still applies in CTP 1.3.

  • Value encoding provides better query performance for columns that are typically only used for aggregations.
  • Hash encoding is preferred for group-by columns (often dimension-table values) and foreign keys. String columns are always hash encoded.

Numeric columns can use either of these encoding methods. When Analysis Services starts processing a table, if either the table is empty (with or without partitions) or a full-table processing operation is being performed, sample values are taken for each numeric column to determine whether to apply value or hash encoding. By default, value encoding is chosen when the sample of distinct values in the column is large enough; otherwise hash encoding will usually provide better compression. Analysis Services may change the encoding method after the column is partially processed, based on further information about the data distribution, and restart the encoding process. This of course increases processing time and is inefficient. The performance-tuning whitepaper discusses re-encoding in more detail and describes how to detect it using SQL Server Profiler.

Encoding hints in CTP 1.3 allow the modeler to specify a preference for the encoding method, given prior knowledge from data profiling and/or in response to re-encoding trace events. Since aggregation over hash-encoded columns is slower than over value-encoded columns, value encoding may be specified as a hint for such columns. It is not guaranteed that the preference will be applied; hence it is a hint as opposed to a setting. To specify an encoding hint, set the EncodingHint property on the column. Possible values are “Default”, “Value” and “Hash”. At the time of writing, the property is not yet exposed in SSDT, so it must be set using the JSON-based metadata, the Tabular Model Scripting Language (TMSL), or the Tabular Object Model (TOM). The following snippet of JSON-based metadata from the Model.bim file specifies value encoding for the Sales Amount column.

  {"name": "Sales Amount","dataType": "decimal","sourceColumn": "SalesAmount","formatString": "\\$#,0.00;(\\$#,0.00);\\$#,0.00","sourceProviderType": "Currency","encodingHint": "Value"
  }

Extended events not working in CTP 1.3

SSAS extended events do not work in CTP 1.3. We plan to fix them for the next CTP.

Download now!

To get started, download SQL Server vNext on Windows CTP 1.3 from here. Be sure to keep an eye on this blog to stay up to date on Analysis Services.

Nano Server Native Project Template now on Visual Studio Gallery


The Visual Studio project template for developing C++ applications targeting Nano Server is now available on the Visual Studio Marketplace (search on “nano”). The template supports both Visual Studio 2015 and 2017.

What’s changed?

The core of the extension is identical to what we shared last year, with the following updates:

  • Compatible with Visual Studio 2017 in addition to Visual Studio 2015.
  • The template can be deployed seamlessly from within Visual Studio as it is available on the VS Gallery.
  • The same remote debugging experience, enabled using cmdlets, is available upon installing the PowerShell SDK for Nano from the PowerShell Gallery.
  • The sample application builds cleanly out of the box on current versions of Visual Studio as it targets the Windows 10 Anniversary Update SDK (10.0.14393.0) by default. To target a different SDK, right-click the solution name in Solution Explorer and choose “Retarget Solution”.

As a parting gif, check out how quick and easy it is to deploy the template from within the IDE!


Next week we’ll share details on remote debug workflow improvements with Visual Studio 2017.

Happy coding!

Get more done faster with Microsoft Teams



Around the world, teamwork is on the rise. Research suggests employees now work on nearly twice as many teams as they did just five years ago, which means people rely on their peers more than ever to get things done. But a “one size fits all” approach does not work when it comes to group collaboration—different tools appeal to different groups and address unique needs.

This is not your typical online event

Each 90-minute session starts with an online business roundtable discussing your biggest business challenges with a trained facilitator and then transitions into a live environment in the cloud. You will receive a link to connect your own device to a remote desktop loaded with our latest and greatest technology so you can experience first-hand how Microsoft tools can solve your biggest challenges.

U.S. customers: Register here.
Outside the U.S.? Register here.

Why should I attend?

During this interactive online session, you will explore:

  • How Microsoft Teams, the newest collaboration tool:
    • Keeps everyone engaged with threaded persistent chat.
    • Creates a hub for teamwork that works together with your other Office 365 apps.
    • Builds customized options for each team with channels, connectors, tabs and bots.
    • Adds your personality to your team with emojis, GIFs and stickers.
  • How to keep information secure while being productive—Make it easier to work securely and maintain compliance without inhibiting your workflow.
  • How to quickly visualize and analyze complex data—Zero in on the data and insights you need without having to involve a BI expert.
  • How to co-author and share content quickly—Access and edit documents even while others are editing and reviewing them all at the same time.
  • How to get immediate productivity gains—Most attendees leave with enough time-saving skills that the time invested to attend a Customer Immersion Experience more than pays for itself in a few short days.

Space is limited. Each session is open to only 12 participants. Reserve your seat now.


Managing Security Settings on Nano Server with DSC


We have released DSC resources building upon the previously released security and registry cmdlets for applying security settings. You can now implement Microsoft-defined security baselines using DSC.

AuditPolicyDsc

SecurityPolicyDsc

GPRegistryPolicy

Install all three from the PowerShell Gallery with the command:

Install-Module SecurityPolicyDsc, AuditPolicyDsc, GPRegistryPolicy

A sample configuration, below, takes the Security Baselines for Windows Server 2016 and extracts the .inf, .csv and .pol files containing the desired security settings from the exported Group Policy Objects. (You can find information on extracting the necessary files in the Registry cmdlets blog post.) Simply pass the files into the new DSC resources, and you have successfully implemented security baselines using DSC!

This is most useful for Nano Server, since Nano Server doesn’t support Group Policy. However, this approach will work for all installation options. It’s not a good idea to manage the same server using both Group Policy and DSC since the two engines will constantly attempt to overwrite each other if they are both managing the same setting.

WARNING: As with all security settings, you can easily lock yourself out of remote access to your machine if you are not careful. Be sure to carefully review security settings before applying them to Nano Server, and stage test deployments before using security baselines in production!

Configuration SecurityBaseline
{
    Import-DscResource -ModuleName AuditPolicyDsc, SecurityPolicyDsc, GPRegistryPolicy

    node localhost
    {
        SecurityTemplate baselineInf
        {
            # https://msdn.microsoft.com/powershell/dsc/singleinstance
            Path             = "C:\Users\Administrator\Documents\GptTmpl.inf"
            IsSingleInstance = "Yes"
        }

        AuditPolicyCsv baselineCsv
        {
            IsSingleInstance = "Yes"
            CsvPath          = "C:\Users\Administrator\Documents\audit.csv"
        }

        RegistryPolicy baselineGpo
        {
            Path = "C:\Users\Administrator\Documents\registry.pol"
        }
    }
}

# Compile the MOF file
SecurityBaseline

# Apply the configuration
Start-DscConfiguration -Path ./SecurityBaseline

Penny Pinching in the Cloud: Running and Managing LOTS of Web Apps on a single Azure App Service


I've blogged before about "penny pinching in the cloud." I'll update that series for 2017 soon, but the underlying concepts still apply. Many of you are still using bigger virtual machines than you need when doing IaaS (Infrastructure as a Service), and when doing PaaS (Platform as a Service), many folks run "one website per App Service." That's super expensive.

Remember that you can fit as many web applications as memory and CPU will allow into an Azure App Service Plan. An "App Service Plan" in Azure is effectively the virtual machine under your Web Apps. You don't need to think about it, as it's totally managed and hidden - but - if you choose to think about it, you'll be able to squeeze more out of it and you'll pay less.

For example, I have 20 web applications running in a plan I named "DefaultServerFarm." It's a Small Standard Plan (S1) and I pay about $70 a month. Some folks use a Basic (B1) plan if they don't need to scale out and that's about $50 a month. Both B1 and S1 support "unlimited" web apps within them, to the limits of memory. That's what allows me to run 20 modest (but real) sites on the one plan and that's what makes it a good deal from a pricing perspective for me.

I logged in to the Azure Portal recently and noticed the CPU percentage on my plan was higher than usual and higher than I'd like.

Why is that web app using so much CPU?

That's the CPU of the machine "under" my 20 sites. I can click here on my App Service Plan's "blade" to see the underlying sites, or just click "Apps" in the blade menu.

Running 20 apps in a Single Azure App Service

However, when I'm looking at an app that lives within my plan, there are two super powerful menu items to check out. One is called "Metrics per instance (Apps)" and the other is "Metrics per instance (App Service)." Click the latter option. For many of you it's going to become your favorite area in the Azure Portal. It was a game changer for me, as it gave me the internal insight I needed to make sure I can get maximum density in my plan (thereby saving the most money).

Metrics per Instance - App Service Plan

I click here and see "Sites in App Service Plan."

20 sites in a single plan

I can see that over the last few days my CPU has been going up and up...

The CPU is going up and up over a few days

I can see it broken down by site:

A graph showing ALL 20 sites and their CPU

So now I can filter by site and I see that it's ONE site that's going nuts.

One site is using all the CPU

I can then dig in, go to the main CPU chart and see exactly when it started:

The site is using 2.12 days of CPU

I can change the scale

It started on Feb 11th

I had a Web Job stuck in a loop. I restarted it and will be monitoring, but for now, I'm in a much better place for this one app.

Now it's calming down

Now if I check the App Service Plan itself, I can see everything has calmed down.

Things have calmed down after the one rogue site was restarted

The point here is that even though it's "Platform as a Service" and we want a layer of abstraction, at no point are things HIDDEN from us. If you want to see the hardware, you can. If you want to see the process tree, you can. A good reminder.


Sponsor: Excited about the future of ASP.NET? The folks at Progress held an awesome webinar which gives a 360° view of the new ASP.NET Core and how it compares to WebForms and MVC. Watch it now on demand!



© 2016 Scott Hanselman. All rights reserved.

Released: Public Preview for SQL Server Management Packs Update (6.7.16.0)


We are getting ready to update the SQL Server Management Packs. Please install and use this public preview and send us your feedback (sqlmpsfeedback@microsoft.com)! We appreciate the time and effort you spend on these previews, which makes the final product so much better.

Please download at:

Microsoft System Center Management Packs (Community Technical Preview 1) for SQL Server

Included in the download are Microsoft System Center Management Packs for SQL Server 2008/2008 R2/2012/2014/2016 (6.7.16.0).

New SQL Server 2008/2008 R2/2012 MP Features and Fixes

  • Implemented some enhancements to data source scripts
  • Fixed issue: The SQL Server 2012 Database Files and Filegroups get undiscovered upon Database discovery script failure
  • Fixed issue: DatabaseReplicaAlwaysOnDiscovery.ps1 connects to a cluster instance using node name instead of client access name and crashes
  • Fixed issue: CPUUsagePercentDataSource.ps1 crashes with “Cannot process argument because the value of argument “obj” is null” error
  • Fixed issue: Description field of custom user policy cannot be discovered
  • Fixed issue: SPN Status monitor throws errors for servers not joined to the domain
  • Fixed issue: SQL Server policy discovery does not ignore policies targeted to system databases in some cases
  • Increased the length restriction for some policy properties in order to make them match the policy fields
  • Updated the Service Pack Compliance monitor to reflect the latest published Service Packs for SQL Server

New SQL Server 2014/2016 MP Features and Fixes

  • Implemented some enhancements to data source scripts
  • Fixed issue: DatabaseReplicaAlwaysOnDiscovery.ps1 connects to a cluster instance using node name instead of client access name and crashes
  • Fixed issue: CPUUsagePercentDataSource.ps1 crashes with “Cannot process argument because the value of argument “obj” is null” error
  • Fixed issue: Description field of custom user policy cannot be discovered
  • Fixed issue: SPN Status monitor throws errors for servers not joined to the domain
  • Fixed issue: SQL Server policy discovery does not ignore policies targeted to system databases in some cases
  • Fixed issue: Garbage Collection monitor gets generic PropertyBag instead of performance PropertyBag
  • Increased the length restriction for some policy properties in order to make them match the policy fields
  • Updated the Service Pack Compliance monitor to reflect the latest published Service Packs for SQL Server

For more details, please refer to the user guides that can be downloaded along with the corresponding Management Packs.
We are looking forward to hearing your feedback at sqlmpsfeedback@microsoft.com.

Backup Managed Disk VMs using Azure Backup


Last week we announced the general availability of Managed Disks. Managed Disks are Azure Resource Manager (ARM) resources that can be deployed via templates to create thousands of Managed Disks without worrying about creating storage accounts or specifying disk details. Backing up Managed Disk VMs against accidental deletions and corruptions resulting from human error is a critical capability for customers of all sizes. With the Azure Backup service, you get key enterprise features like backup, restore, policy-based management, backup alerts, job monitoring, and instant data recovery without deploying any infrastructure in your tenant environment. You get the ability to back up Managed Disk VMs directly from the VM management blade, and the user experience is consistent with backup of VMs attached to Standard or Premium unmanaged disks.

Value Proposition

Azure Backup’s cloud-first approach provides:

  • Freedom from infrastructure: No need to deploy any infrastructure to back up VMs.
  • Eliminate backup storage management with the bottomless Recovery Services vault.
  • Pay-as-you-go model with no egress costs for restores.
  • Self-service backup and restore.

Key features

  • Application-consistent backups for Windows Azure VMs and file-system consistent backups for Linux Azure VMs, without the need to shut down the VM.
  • Policy Based Management:  Azure Backup allows you to specify the backup schedule as well as retention policy of backups.  The service handles periodic backups as well as pruning of recovery points beyond the configured retention period. 
  • Long Term Retention of backup data for years even beyond the lifecycle of the VM.
  • Full VM and Disk restore:  In case your VM is corrupted and needs replacement, or you simply want to make a copy of the VM, you can do so with a full VM or disk restore.
  • Instant Data Recovery:  With Instant Data Recovery, you can restore individual files and folders within the VM instantly without provisioning any additional infrastructure, and at no additional cost. Instant Restore provides a writeable snapshot of a recovery point that you can quickly mount, browse, recover files/folders by simply copying them to a destination of your choice. These snapshots even allow you to open application files such as SQL, MySQL directly from cloud recovery point snapshots as if they are present locally and attach them to live application instances, without having to copy them.
  • Role Based Access:  You can limit the access to backup data in the Recovery Services vault using Role Based Access controls. Azure Backup supports Backup Contributor, Backup Operator and Backup Reader roles at a vault level.
  • Monitoring and Alerting: You can monitor your backup and restore jobs from the Recovery Services vault dashboard. In addition, you can configure email alerts for job failures.

Customers can back up data to a Recovery Services vault in all public Azure regions, including Canada, UK, and West US 2.

Getting started

To get started, enable backup with a few steps:

  • Select a virtual machine from the Virtual machines list view. Select Backup in the Settings menu.
  • Create or select a Recovery Services Vault:  The vault maintains backups in a separate storage account with its own lifecycle management. 
  • Create or select a Backup Policy

Watch the video below to instantly recover files from an Azure VM (Windows) backup.

Watch the video below to instantly recover files from an Azure VM (Linux) backup.

The instant restore capability will be available soon for users who are protecting their Linux VMs using Azure VM backup. If you are interested in being an early adopter and want to provide valuable feedback, please let us know at linuxazurebackupteam@service.microsoft.com. Watch the video below to know more.



Best Practices: How to deploy Azure Site Recovery Mobility Service


With large enterprises deploying Azure Site Recovery (ASR) as their trusted Disaster Recovery (DR) solution for application-aware DR, their DR architects have asked us about the best practices to be followed while deploying ASR in production environments. Given ASR’s multi-VM consistency promise to provide full application recovery on Microsoft Azure, the mobility service is a critical piece in the VMware to Azure scenario. In this blog, we take a look at the various options to deploy the ASR mobility service during different stages of a production ASR rollout.

Deployment Considerations

At a high level, the challenges we hear about day to day can be summarized as follows.

Firewall and Network Security
  • My organization has tight security policies. It does not allow me to change servers’ firewall settings to allow push install of ASR mobility service on the servers we want to protect.
Credential Management
  • My organization’s password expiry policy forces application owners to change the administrator password periodically. This causes ASR workflows that install and upgrade the mobility service to fail. Can I manage ASR mobility service deployment using software deployment tools (like System Center Configuration Manager) so that I don’t have to worry about these credentials?
  • As a hosting service provider, I want to provide DR as a Service to my customers, and I don’t like providing the customer’s virtual machine’s credentials to ASR, for it to push the mobility service. Can I manage the ASR mobility service initial deployment and upgrades using software deployment tools?
At Scale Deployment
  • My ASR proof of concept is done, and now we are starting a full-fledged production rollout. I have thousands of servers to protect. Is there a solution other than the push install service that we can use to deploy the ASR mobility service to all our production servers?
  • I want to pre-install the ASR mobility service during our planned software maintenance window, but replication should not start immediately. I want to start replicating virtual machines in batches to ensure that the initial replication traffic does not clog our network, and also finishes in a predictable desired timeframe.

Deployment Best Practices

Our goal here at Microsoft is to make Azure Site Recovery easy to deploy and use. We know that each enterprise environment is different and needs a customized solution to suit its security and audit needs. Therefore, we support multiple ways in which you can install the ASR mobility service on the servers you want to protect.

Note: All the ASR mobility service installation methods listed below can be used to deploy the mobility service on supported Microsoft Windows and Linux operating systems.

Push install mobility service during Enable Protection

Push install is the easiest method to deploy the ASR mobility service on the virtual machines you want to protect. This method is best suited for a proof of concept demonstration and deployment in production environments where firewall and network security rules are less stringent. To perform push install, your environment needs to meet the pre-requisites mentioned in our Prepare for push install documentation.

Install mobility service using software deployment tools

Enterprises use software deployment tools like System Center Configuration Manager (SCCM), Windows Server Update Services (WSUS), or other third-party software deployment tools to push software to servers in their environment. ASR allows out-of-band installation of the mobility service via these software deployment tools. The documentation page Automate Mobility Service installation using software deployment tools provides instructions and scripts that allow you to use your favorite software deployment tool to install the ASR mobility service in your production environment; the documentation uses SCCM as an example.

This method is best suited for a production rollout of Azure Site Recovery and gives you the following advantages:

  1. No need to add firewall exceptions
  2. Deploy at enterprise scale
  3. No need to manage guest (protected virtual machine) credentials

Install mobility service using Azure Automation Desired State Configuration (DSC)

In organizations that make heavy use of Azure services in their production environment, Azure Automation Desired State Configuration can be used to deploy and manage the ASR mobility service. The documentation page Deploy the Mobility Service with Azure Automation DSC for replication of VMs describes in detail how to use Azure Automation DSC to install and manage the lifecycle of the ASR mobility service.

This method is best suited for a production rollout of Azure Site Recovery assuming you use Microsoft Azure Services to manage your IT infrastructure, and gives you the following advantages:

  1. No need to add firewall exceptions
  2. Deploy at enterprise scale
  3. No need to manage guest (protected virtual machine) credentials
  4. Enforces software configuration on your protected servers

Manual install (command line and GUI based)

The ASR mobility service can be installed manually via the command line or a GUI. If you plan to protect 5-10 servers and don't have a software deployment tool in use in your organization, you can use the manual install method. The manual install method can also be used for proof-of-concept deployments. The command line install method can be used to create scripts that automate installations in your production environment. Both methods are documented at Install Mobility Service using command line and Install Mobility Service using GUI.

Closing Notes

The decision tree below summarizes how to choose the deployment option that best suits your environment.


You can check out additional product information and start replicating your workloads to Microsoft Azure using Azure Site Recovery today. You can use the powerful replication capabilities of Site Recovery for 31 days at no charge for every new physical server or virtual machine that you replicate. Visit the Azure Site Recovery forum on MSDN for additional information and to engage with other customers, or use the ASR UserVoice to let us know what features you want us to enable next.

Azure Site Recovery, as part of Microsoft Operations Management Suite, enables you to gain control and manage your workloads no matter where they run (Azure, AWS, Windows Server, Linux, VMware, or OpenStack) with a cost-effective, all-in-one cloud IT management solution. Existing System Center customers can take advantage of the Microsoft Operations Management Suite add-on, empowering them to do more by leveraging their current investments. Get access to all the new services that OMS offers, with a convenient step-up price for all existing System Center customers. You can also access only the IT management services that you need, enabling you to on-board quickly and have immediate value, paying only for the features that you use.

Announcing: Remote Desktop Services solutions templates on Azure Marketplace


At Ignite 2016, we highlighted all the improvements we have made in our core Remote Desktop Services (RDS) platform in Windows Server 2016 to leverage the power of Azure. Since then, we have been working to continue providing tools for you to jumpstart your RDS deployment. We are happy to announce that you'll now be able to find RDS in the Azure Marketplace!

Azure Marketplace is a great repository to start a trial or purchase Azure-verified solutions for your custom deployment, including the trickier multi-server deployments. With flexible search functionality, it's easy to find the exact solution you're looking for. For example, you can quickly find the RDS Azure Marketplace solutions template by searching "RDS".

With the RDS Solutions Template in the Azure Marketplace, you can quickly and easily deploy an RDS environment to test it and determine if it's right for you. Please read this TechNet article on how to deploy it.

 


Please let us know what you think of the EMS blog by taking our survey! Read this blog post to learn more about the survey and how you can qualify to win one of five $200 gift cards.

Kubernetes now Generally Available on Azure Container Service



There is a common thread in advancements in cloud computing – enabling a focus on applications rather than the machines running them. Containers, one of the most topical areas in cloud computing, are the next evolutionary step in virtualization.

Companies of every size and from all industries are embracing containers to deliver highly available applications with greater agility in the development, test and deployment cycle. Azure Container Service (ACS) is a service optimized for container applications. Today we are pleased to announce a number of improvements to ACS; most notably, Kubernetes is now generally available as one of three orchestrator choices.

Azure is the only public cloud platform that provides a container service with a choice of the three most popular open source orchestrators available today. ACS's open approach has been pivotal in driving the adoption of containers on Azure. Enterprises and startups alike recognize the momentum around ACS and the benefits it brings to their applications, including agile deployment, portability and scalability.

With today's news, we again deliver on our goal of providing our customers a choice of open-source orchestrators and tooling that simplifies the deployment of container-based applications in the cloud. The ACS team is announcing our next wave of features, which includes:

  • Kubernetes now generally available (GA) – We announced preview support for Kubernetes in November 2016. Since then, we have received a lot of valuable feedback from customers. Based on this feedback, we have improved Kubernetes support and are now moving it to GA. For more details, check out Brendan's blog titled, "Containers as a Service: The foundation for next generation PaaS".
  • Preview of Windows Server Containers with Kubernetes – The latest Kubernetes release adds support for Windows Server Containers, and enterprise customers have expressed strong interest in adopting Windows Server Containers in production, so this is a great time to provide an additional orchestrator choice for Windows Server customers using ACS. Customers can now preview both Docker Swarm (launched in preview last year) and Kubernetes through ACS, providing choice as well as consistency with two of the top three Linux container orchestration platforms.
  • DC/OS 1.8.8 update – We are updating our DC/OS support to version 1.8.8. DC/OS is a production-proven platform that elastically powers both containers and big data services. ACS delivers the open source DC/OS, while our partnership with Mesosphere ensures that customers requiring additional enterprise features are catered for. Key features of 1.8.8 include Metronome, a new orchestration framework for running scheduled jobs, exposed through a new Jobs tab in the DC/OS UI along with a number of other UI improvements, and the addition of GPU and CNI support in the Universal container runtime. Based on Apache Mesos, DC/OS is trusted by Esri, BioCatch and many other Fortune 1000 companies. We have worked with Mesosphere to produce an e-book called "Deploying Microservices and Containers with ACS and DC/OS."

We love hearing from our customers about how they are using containers on Azure and the benefits they bring to their application development lifecycle. BioCatch, a startup based in Israel, builds real-time fraud prevention software and went from a PoC into production on ACS in a matter of weeks. Stories like this show the power of container-based applications and get us excited about the possibilities – we hope to hear from you, too.

You can easily get started deploying an Azure Container Service cluster using the Azure portal or the recently released Azure CLI 2.0 by using the az acs command. For example, this tutorial shows you how to deploy an ACS DC/OS cluster with a few simple Azure CLI 2.0 commands.

Azure Analysis Services now available in Canada Central and Australia Southeast


Last October we released the preview of Azure Analysis Services, which is built on the proven analytics engine in Microsoft SQL Server Analysis Services. With Azure Analysis Services you can host semantic data models in the cloud. Users in your organization can then connect to your data models using tools like Excel, Power BI, and many others to create reports and perform ad-hoc data analysis.

We are excited to share with you that the preview of Azure Analysis Services is now available in 2 additional regions: Canada Central and Australia Southeast.  This means that Azure Analysis Services is available in the following regions: Australia Southeast, Canada Central, Brazil South, Southeast Asia, North Europe, West Europe, West US, South Central US, North Central US, East US 2 and West Central US.

New to Azure Analysis Services? Find out how you can try Azure Analysis Services or learn how to create your first data model.

Windows 10 injects digital transformation into healthcare


This community of health care providers understands how integral a secure and productive IT infrastructure is to the delivery of quality healthcare to individuals and populations. We know that the faster practitioners can access data safely and securely, the faster patients can receive care and get critical test results back.

This week, I'm pleased to share how some of the country's top healthcare organizations – ProMedica Laboratories and Adventist Health System – are deploying Windows 10 to empower their healthcare professionals to continue delivering quality and timely healthcare. It's been exciting to see our customers upgrade to Windows 10 at an incredible pace, with a 3X increase in Windows 10 enterprise deployments over the last few months.

We also know that protecting and securing patient data is just as important as delivering quality and timely healthcare for providers like ProMedica Laboratories and Adventist Health System who are deploying Windows 10. Healthcare is one of the most regulated environments, given the sensitive patient data that must be secured and protected. Despite significant protections put in place over the last few years, health providers and payers in the industry are being subjected to an increasing number of cyber-attacks. Healthcare providers need to consider end-to-end security of their IT infrastructure more than ever.

With Windows 10 being the most secure version of Windows ever, we continue to hear from our healthcare customers who want to know how Windows 10 meets the technical and administrative safeguards required by the Health Insurance Portability and Accountability Act (HIPAA), which provides data privacy and security provisions for individual personal health information. While a number of the HIPAA requirements, such as organizational documentation retention policies, are not relevant to an operating system, a significant number remain important. I am excited to share that we commissioned HIPAA One, an organization recognized as a leading HIPAA expert, to review Windows 10 and its new security features as they relate to HIPAA safeguards and provide recommendations on the best configurations to assist with compliance requirements. You can learn more via the white paper hosted by HIPAA One, which can help healthcare IT professionals understand HIPAA and how they can configure Windows 10 to help keep their IT systems secure and compliant.

Here’s a look at how Windows 10 is helping to drive digital innovation in the healthcare industry.

ProMedica Laboratories

ProMedica Laboratories is a highly automated, complex operation and one of only 43 ISO-certified labs in the United States. Each day, they process and analyze thousands of biological samples, requiring the testing instruments to be kept operational and calibrated to meet regulatory requirements. Ensuring these tasks were completed and documented was an arduous process, and it used to be done the old-fashioned way: with lots of paperwork.

In U.S. hospitals alone, over a billion patient blood tests are processed annually.* These tests range from simply determining a patient’s blood type to more critical testing for diabetes or cancer. At ProMedica, they know the success of a patient’s treatment plan depends on the accuracy and timeliness of these tests.

ProMedica Laboratories needed a better way to perform routine quality inspections and equipment maintenance. In addition, a systematic process was needed to detect any errors that may have occurred along the way. To do this, ProMedica Laboratories partnered with software company Kaonsoft, member of the Microsoft Partner Network, to build the Assured Compliance Solution (ACS) on the Universal Windows Platform (UWP). The Kaonsoft team has decades of digital transformation and mobility expertise which it brought forward on the ProMedica project.

ACS replaced all paper logs with Windows 10 tablets, which provide security, manageability and an extra layer of data protection. The solution tracks all scheduled instrument activity and issues a chain of alerts until a required task is completed. ProMedica Laboratories now has a centralized database hosted on Microsoft Azure, the trusted and reliable cloud, to record and review equipment status, access records and run audit reports, all in real time – resulting in a 100% compliance rate for log accuracy and task completion since implementation in January 2016.


A ProMedica Laboratories employee uses the Assured Compliance Solution (ACS) on the Universal Windows Platform (UWP) via her Windows 10 device.

Lori Johnston, CIO, ProMedica says, “Transforming from a paper based system to a digital process means we can more precisely deliver the most important information: the results. And for individuals and families under the stress of a potential medical situation, nothing is more imperative than timely and accurate results. Creating ACS on Windows 10 and the Microsoft Cloud helps ProMedica Labs work faster and smarter. Better information, better treatment, better healthcare.”

With the ACS UWP app, ProMedica Laboratories can now decrease the time patients spend in the hospital thanks to an increase in laboratory efficiency. Technicians, supervisors, management and executives have also become more productive, saving an average of 13 hours each month (156 hours per year) in managing, troubleshooting and filing maintenance logs.

“I am thrilled to see technology and healthcare working in tandem to enable advancements in the healthcare and hospital management fields,” said Daniel J. Lee, Kaonsoft chief technology officer, co-founder and CEO. The success of the collaboration prompted ProMedica to partner with Kaonsoft to form Kapios Health, a joint venture focused on developing mobile technology to spearhead innovations in healthcare. Kapios Health enables healthcare practitioners to provide exemplary patient care supported by sophisticated, industry-inspired, field-tested mobile technology solutions. ACS is available through Kapios Health, which will be exhibiting at HIMSS17 in Orlando, FL at booth 8068.

Adventist Health System

Adventist Health System is a faith-based health care organization based in Altamonte Springs, Florida. A national leader in quality, safety and patient satisfaction, with 45 hospital campuses and more than 80,000 employees, Adventist Health System facilities leverage the latest technology and research to serve more than 4.7 million patients annually. The organization is upgrading more than 55,000 devices running Windows 7 to Windows 10, with a goal of upgrading 25 percent of its devices in the first year.

Tony Qualls, director of technical services, says, “Our experience to date with Windows 10 and application compatibility has truly exceeded our expectations. At this point, it looks like we will exceed the first-year goal that we were looking for, and it’s likely we’ll accelerate beyond that over the next two years. The Windows 10 upgrade will help bring about improved mobility for physicians, stronger security and easier authentication to improve the operational efficiency of staff.”


Adventist Health Systems physician and nurse collaborate on patient care in real-time via their Windows 10 device.

In a profession where speed, reliability and security are vital to delivering the best care possible, Adventist Health System is working to ensure that its health care professionals are able to access data securely and from any device whether they’re at a workstation in a patient’s room or walking the hallways with their mobile device.

Qualls says, “As I look towards the future, I’m also excited about faster authentication and lower reliance on passwords. We already see some potential high value cases, especially involving mobility, where facial recognition with Windows Hello could make a big difference. The faster our practitioners can access their data, the faster our patients can receive care.”

This is not the only area where speed has positively impacted day-to-day operations; doctors have experienced notable performance improvements in patient-critical applications. According to Dr. Qammer Bokhari, vice president and chief medical information officer, “Windows 10 is, in clinicians’ words, ‘lighter,’ meaning the clinician platforms that run on Windows 10 are performing faster and they’re not crashing or freezing as they were previously, thereby diminishing workflow interruptions.”

We’re excited to see what these customers and others will be able to accomplish with the power of Windows 10. Please visit the Windows solutions for Health page for more information on how Windows 10 helps healthcare companies, professionals and patients.

*American Clinical Laboratory Association


The CIO of Accenture Discusses Time Traveling Whales & How He Keeps His Org Secure


In the second part of my conversation with Andrew Wilson (CIO, Accenture) we talk about time-traveling humpback whales, how an organization as big as his uses mobile and cloud-based security, and we play an intense game of IT Would You Rather.

Andrew also shares some very helpful advice about how to look at mobile security as a function of the device + app + data + the user. This multi-part, multi-layer approach to security is really valuable and worth considering, so, please, don't get thrown off by the whales.

He also has more feedback about my driving.


To Learn more about Microsoft Enterprise Mobility + Security, visit: http://www.microsoft.com/ems.

Next week, I meet up with Dan Morales, the CIO of the godfather of online shopping: eBay.

You can subscribe to these videos here, or watch past episodes at aka.ms/LunchBreak.

Windows 10 Tip: Stay on top of your day with the Calendar app


Meet the Calendar app – free and pre-installed in Windows 10, so there’s no download needed. Simply search for “Calendar” or look for it in your list of installed programs in the Start Menu.

The Calendar app is available offline, so you can check your schedule even when you’re not connected to the internet. Unlike online calendars you check using a web browser, the Calendar app doesn’t require you to log in every time.

Here are a few ways to get the most out of the Calendar app:

View multiple calendars – all in one place:


Manage all your calendars – from Outlook.com, Gmail, Yahoo, work, school or other accounts – all in one place, so you can plan your day and week better. Add as many calendars as you’d like to see in the Calendar app by going to Settings > Manage Accounts > Add Account.

Take a quick glance at your upcoming schedule with one click:

Want a quick snapshot of your day? The Calendar app is integrated with your Windows 10 taskbar calendar, so you can easily view your upcoming appointments. Just click on the taskbar, where the date and time are shown, and your schedule will appear.

Set up reminders using just your voice:


Add reminders and events to your Calendar with Cortana*, your personal digital assistant. Just say something like, “Hey Cortana. Remind me to take out the trash at 5 p.m.” or “Hey Cortana. Add Stacy’s school play to my calendar for tomorrow at 6 p.m.” Cortana will confirm the appointments or actions you’d like to be reminded about, and add them to the Calendar app.

Have a great week!

*Cortana available in select markets. Enable “Hey Cortana” in Settings to let Cortana respond to your voice.



Demo Tuesday // How to turn event logs into security intelligence


Welcome to our new Demo Tuesday series. Each week we will be highlighting a new product feature from the Hybrid Cloud Platform.

Smart security with OMS and Windows Server 2016

The flood of security event logs can be enough to overwhelm an army of system administrators. In the event of a breach, what's needed isn't necessarily more data, it's better data for better analysis.

Azure-based Operations Management Suite (OMS) gives you more accurate threat identification across a larger data scope. This cost-effective solution helps you turn enhanced event logs from Windows Server 2016 into security intelligence you can act on immediately. Take a look:

By tapping into the new security features in Windows Server 2016, such as Device Guard and enhanced security-event logging, Operations Management Suite provides a huge security upgrade out of the box. Simply add the Security and Audit solution from the OMS solutions gallery, and you're ready to start taking advantage of the deep security intelligence and take action to remediate.

This combination enables you to surface suspicious network activity that might otherwise be treated as background noise, and to do things like correlate it with users and IP addresses to identify potentially compromised accounts. Windows Server 2016 + Operations Management Suite helps you to:

  • Prevent and detect malicious activity
  • Detect vulnerabilities before an attack happens
  • Analyze and investigate incidents
  • Streamline security audits

You can rely on the intelligence built into Operations Management Suite and Windows Server 2016 to do the heavy lifting, so you can focus on keeping your organization secure.

Get started with OMS: activate your free account today and give it a try.

What I wish I knew—learn from the founder and entrepreneur coach of TheRickMartinez.com

This week, as we celebrate entrepreneurs across the U.S., we have the opportunity to recognize the work they put into their small businesses, the challenges they have faced and the growth they’ve achieved.

Being an entrepreneur is no easy feat. It takes an extraordinary amount of time, thought and energy to overcome hurdles that get in the way while getting started. It is important for entrepreneurs not only to learn from each other but also know how to build their business and their team. They must ensure they have the right tools to bring the two together. There are technology solutions that can make an entrepreneur’s day-to-day tasks easier. From staying connected with their clients through Skype for Business, to easily sharing their work with their team through OneDrive for Business, to maintaining a professional reputation by using the Office Suite—these are just a few tools entrepreneurs can use to improve their business.

We were interested in learning from entrepreneurs so we could better understand the grit, the emotions and the resources they used to be successful. Today, Rick Martinez, founder and entrepreneur coach for TheRickMartinez.com, shares his journey of starting his small business.

Here is his experience:

“My journey as an entrepreneur has brought me full circle. Today, I’m living my dream as a coach to up-and-coming entrepreneurs. It’s a far cry from my first company, a medical staffing business that provided care in military hospitals. I grew that first business from me alone at a desk in my one-bedroom apartment to 600 employees in offices in several states. My registered nurse credential equipped me to navigate the medical space, but running a business was daunting at first. I hadn’t gone to business school; I didn’t have an MBA. But I was driven; I wanted to do things.

“I learned. My company grew. I was now a CEO of a large medical staffing business. My days were mired in issues, from employee problems to the complexities and litigiousness of the medical space. Fitness was my outlet, and it was at a competition that my entire life trajectory changed. A weight fell during a weight-lifting event, crushing my leg: 225 pounds concentrated right above my right knee cap. As a trauma nurse, I saw at once that this was a serious injury, but it paled next to the spiritual impact. Lying on the ground, looking up at the sky, suddenly it hit me: I’d lost my way. My true goal was to care for soldiers, not to administer government contracts. Those were people’s kids in those beds; America’s heroes. I’d lost touch with my original dream. I had become an administrator, not a caregiver. I wanted to touch lives, not push paper. I knew then that I would sell my company. I’d always had this vision of my “someday” ideal life: writing, working one-on-one with people, helping them to make their lives better. So why invest years working at something that wasn’t nourishing my soul, with the goal of eventually living the life I wanted to live? Why not make this shift now?

“I sold the company and became a coach with a group called Entrepreneurs Organization. Traveling all over the world giving one-day seminars to CEOs of small companies, I found that I loved working with early-stage entrepreneurs. But I didn’t like teaching tactical skills like marketing, cash flows, personnel administration. There was usually something deeper blocking these entrepreneurs. They were unable to move themselves—and thus their company—to the next level. That’s how I developed my current coaching business. Now I work with clients one on one to help them move from their current level of success to the next level. What sets me apart from other career coaches? I know my ideal client; I’ve literally walked in their shoes. They are that person who has already achieved a level of success but is trying to move forward to a new level, yet doesn’t know how. My mission is to help rising entrepreneurs clarify their goals and find the focus they need to attain them.

“As I tell them, it’s not the threads on you, it’s the threads of you; it’s the threads of your soul that make you the person you are. That’s the attraction factor; it’s never the suit or the tie. Authenticity fuels your business. I feel almost a moral obligation to get up and prove that every day, especially as I work with these young entrepreneurs and help them stay grounded. Your values are your core; they’re your roots. It’s vital to understand and act upon those values. So here I am, once again, working one on one with people who need me. I started my career in the ultimate caring profession: nursing. After bringing skilled care to people on a large scale, I’m again giving my energy to working directly with clients individually to help them realize their entrepreneurial dreams.”

Learn more

Watch the following video where leaders from Inc 5000’s list of America’s fast-growing companies discuss the power of mentorship and share insights that can help make your business more successful.

Learn from the experts by reading about their experiences and picking up on the wisdom gained while building a business. For more insights from entrepreneurs, get the free eBook, “What I wish I knew: Success secrets from America’s fastest-growing companies.”

Interested in learning more about Office 365? We offer a platform of integrated tools that will give each small business owner and their teams the ability to stay connected and organized with their day-to-day tasks. Start your 30-day free trial today!

The post What I wish I knew—learn from the founder and entrepreneur coach of TheRickMartinez.com appeared first on Office Blogs.

3 Million Square Kilometers of New Imagery in Eastern Canada

We are excited to announce the release of new imagery for Canada. The area covered in this latest update is 3 million square kilometers of eastern Canada.

Below are just a few examples of some of the sights in our imagery update:

Peterborough, Ontario

Nicknamed “The Electric City”, Peterborough was the first town to use electric streetlights in Canada. Located in Central Ontario and harkening back to its technologically pioneering history, Peterborough is also home to operations of several large multinational companies and local technology businesses.

[Image: Peterborough, Ontario]

Niagara Falls, Ontario

Niagara Falls, Ontario, is located on the western bank of the Niagara River, which flows over the famous Niagara Falls. The falls comprise three large waterfalls on the border between Ontario, Canada and New York, United States. Not only known for their natural beauty, the falls are also a source of hydroelectric power and a popular tourist destination on both sides of the border.

[Image: Niagara Falls, Ontario]

Québec City Jean Lesage International Airport

Get a pilot's perspective when descending for a landing at the Québec City Jean Lesage International Airport, the second-busiest passenger airport in Quebec after Montreal-Trudeau.

[Image: Québec City Jean Lesage International Airport]

See even more of Canada on Bing Maps and visit http://www.microsoft.com/maps to learn how you can incorporate maps into your app.

- The Bing Maps Team

SQL Server 2016 Developer Edition in Windows Containers

We are excited to announce the public availability of SQL Server 2016 SP1 Developer Edition in Windows Containers! The image is now available on Docker Hub, and the build scripts are hosted on our GitHub repository. This image can be used in both Windows Server Containers and Hyper-V Containers.

SQL Server 2016 Developer Edition: Docker Image | Installation Scripts

We hope you will find this image useful and leverage it for your container-based applications!

Why use SQL Server in containers?

SQL Server 2016 in a Windows container would be ideal when you want to:

  • Quickly create and start a set of SQL Server instances for development or testing.
  • Maximize density in test or production environments, especially in microservice architectures.
  • Isolate and control applications in a multi-tenant infrastructure.

Prerequisites

Before you can get started with the SQL Server 2016 Developer Edition image, you’ll need a Windows Server 2016 or Windows 10 host with the latest updates, the Windows Container feature enabled, and the Docker engine.
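
If you're starting from a clean machine, the Containers feature itself can be enabled with a single PowerShell cmdlet. A minimal sketch (these are the standard Windows feature cmdlets; see the Windows Containers documentation linked under Further Reading for the full Docker engine setup):

# Windows Server 2016
Install-WindowsFeature -Name Containers

# Windows 10 (Anniversary Update or later)
Enable-WindowsOptionalFeature -Online -FeatureName Containers

A restart is required after enabling the feature.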

Pulling and Running SQL Server 2016 in a Windows Container

Below are the Docker pull and run commands for running a SQL Server 2016 Developer instance in a Windows container. Make sure that the mandatory sa_password environment variable meets the SQL Server 2016 Password Complexity requirements.

First, pull the image:

docker pull microsoft/mssql-server-windows-developer

Then, run a SQL Server container:

• Running a Windows Server Container (Windows Server 2016 only)
docker run -d -p 1433:1433 -e sa_password= -e ACCEPT_EULA=Y microsoft/mssql-server-windows-developer

• Running a Hyper-V Container (Windows Server 2016 or Windows 10)
docker run -d -p 1433:1433 -e sa_password= --isolation=hyperv -e ACCEPT_EULA=Y microsoft/mssql-server-windows-developer
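
Once the container is running, a quick optional sanity check (a sketch; "DOCKER_CONTAINER_ID" is the same placeholder used in the connection commands later in this post, and log output varies by image version):

docker ps
docker logs "DOCKER_CONTAINER_ID"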

Connecting to SQL Server 2016

From within the container

One of the ways to connect to the SQL Server instance from inside the container is by using the sqlcmd utility.

First, use the docker ps command to get the container ID that you want to connect to and use it to replace the parameter placeholder “DOCKER_CONTAINER_ID” in the commands below. You can use the docker exec -it command to create an interactive command prompt that will execute commands inside of the container by using either Windows or SQL Authentication.

• Windows authentication using container administrator account
docker exec -it "DOCKER_CONTAINER_ID" sqlcmd

• SQL authentication using the system administrator (SA) account
docker exec -it "DOCKER_CONTAINER_ID" sqlcmd -Usa

From outside the container

One of the ways to access SQL Server 2016 from outside the container is by installing SQL Server Management Studio (SSMS). You can install and use SSMS either on the host or on another machine that can remotely connect to the host. Please follow this blog post for detailed instructions on connecting to a SQL Server 2016 Windows Container via SSMS.
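
If you'd rather stay on the command line, sqlcmd also works from the host. A sketch, assuming the container is attached to the default "nat" network (adjust the network name in the template if your configuration differs, and replace the placeholders with your values):

First, get the container's IP address:

docker inspect --format "{{.NetworkSettings.Networks.nat.IPAddress}}" "DOCKER_CONTAINER_ID"

Then connect with SQL authentication:

sqlcmd -S "CONTAINER_IP_ADDRESS" -U sa -P "YOUR_SA_PASSWORD"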

SQL 2016 Features Supported on Windows Server Core

Please refer to this link for all SQL Server 2016 features that are supported on a Windows Server Core installation.

Further Reading

Windows Containers Documentation
Container Resource Management
MSSQL Docker GitHub Repo
Tutorials for SQL Server 2016

Please give the SQL Server 2016 Developer image a try, and let us know what you think!

Thanks,
Perry Skountrianos
twitter | LinkedIn

Identifying WaaS Systems Using Config Manager

Hey everybody! I am Jose Blasac, a Microsoft Premier Field Engineer, here with my first post on the world-famous ASK PFE Platforms blog! I am super excited!

I spend a lot of time working with System Center Configuration Manager and Windows 10. If you have done any work with Config Manager and Windows 10 Servicing, you will have noticed some of the prerequisites, like Heartbeat Discovery and the WaaS deferral GPOs (more on these later).

By default, all Windows 10 systems are discovered as CB, or Current Branch. (If you are not familiar with WaaS concepts like CB or CBB, head over to our WaaS Quick Start Guide:
https://technet.microsoft.com/en-us/itpro/windows/manage/waas-quick-start)

Starting with Config Manager 1511, a pair of new attributes was added to the DDR, or Data Discovery Record, that Config Manager clients send during Heartbeat Discovery. For our purposes, we are concerned with the OS Readiness Branch attribute, as highlighted below.
[Screenshot: computer object properties with the OS Readiness Branch attribute highlighted]

The OS Readiness Branch property of a computer object can display the following values:

  • Do not defer upgrades = CB
  • Defer Upgrades = CBB
  • LTSB = LTSB

So where does the Client discover this information?

As I stated earlier, these attributes are now part of the DDR that is inventoried on clients and copied up to the Management Point for processing by the Primary Site. We can trace these activities on the client side via ConfigMgr log files: the client logs discovery actions in InventoryAgent.log, found in the %windir%\CCM\Logs folder.
[Screenshot: InventoryAgent.log in %windir%\CCM\Logs]

After manually initiating a DDR cycle, let's follow the action. If we drill down through InventoryAgent.log to see what items were discovered and inventoried, we can see the following WMI query, with a particular property of interest!
[Screenshot: WMI query in InventoryAgent.log, showing the OSBranch property]

So, what is the OSBranch property all about, and what values are we potentially looking for? If we launch the good old Wbemtest utility, we can test this WMI query for ourselves!
Right-click Start, select Run, type in wbemtest, and launch the utility. Hit Connect and attach to the Root\ccm\invagt namespace. We can take part of the query above to peek into the OSBranch property.
[Screenshots: Wbemtest connected to Root\ccm\invagt, with the query returning OSBranch = 1]

As you can see above, we have an integer value of 1, so this system is considered a CBB client.
The OSBranch property has the following possible integer values:

  • Current Branch or CB = 0
  • Current Branch for Business or CBB = 1
  • Long Term Servicing Branch or LTSB = 2
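
If you would rather script this check than click through Wbemtest, the same query works from PowerShell. A minimal sketch; the class name CCM_DiscoveryData is the one that shows up in my InventoryAgent.log trace, so substitute whichever class your own log shows:

# Returns 0 (CB), 1 (CBB) or 2 (LTSB)
(Get-WmiObject -Namespace root\ccm\invagt -Query "SELECT OSBranch FROM CCM_DiscoveryData").OSBranch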

As we continue to piece this together, what is the client discovery routine looking for when deciding what value to set the OSBranch property to? I happened to have read the documentation on configuring Windows Update for Business, which is here:
https://technet.microsoft.com/en-us/itpro/windows/manage/waas-configure-wufb

So technically I already know which registry keys need to be set. (I am doing all my testing in this blog on Windows 10 1607.)
If we scroll down the page to the section titled "Configure devices for Current Branch (CB) or Current Branch for Business (CBB)," we can see the release branch policies and how to configure them for either Windows 10 1607 or Windows 10 1511. Here is a snippet of that table.
[Table snippet: release branch registry settings for Windows 10 1607 and 1511]

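You can also read these policy values directly to check how a machine is configured. A sketch, assuming the value names from the linked Windows Update for Business table (verify them against the table for your build):

# Windows 10 1607
Get-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" -Name DeferFeatureUpdates

# Windows 10 1511
Get-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" -Name DeferUpgrade
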
With that said, we still have the ability to trace this for ourselves and observe the system behavior. Let's resort to one of my favorite tools, Process Monitor. Chances are you have used it in the past, but just in case, you can go over to www.sysinternals.com and grab it!

Prior to initiating a DDR discovery cycle, I will launch Process Monitor. The DDR cycle runs quickly, so I will pause the trace after approximately 30 seconds. Then I begin to search for key terms; in this case, I used the term "Branch".
[Screenshot: Process Monitor trace filtered on the term "Branch"]

Bingo!! The first hit takes us right to the relevant Registry key. 
[Screenshot: Process Monitor showing the registry query against the relevant key]

We can see the RegQuery operation being performed by the WMI provider host process, but let's dig deeper and see who is initiating the actions.
Double-click the highlighted line item to pull up the Event Properties dialog box.
[Screenshot: Event Properties dialog box]

Let's go to the Stack tab to view this thread's stack activity. Without getting too nerdy, we can see some Config Manager activity once we walk up the stack.
[Screenshot: Stack tab showing Config Manager activity]

The ddrprov.dll belongs to the Config Manager Client DDR Provider as detailed below. 
[Screenshot: ddrprov.dll file properties]

Phewww, OK, so now what? Knowing how Config Manager discovers and identifies WaaS branches in Windows 10 can be very helpful once we start to play with things like Windows 10 Servicing Plans, or when trying to make sense of the Servicing Dashboard.
We can create collections based on some of these attributes and use them as deployment "rings". For example, you could create collections based on OS Build or OS Readiness Branch, as in the query rule shown below.
[Screenshot: collection query rule based on OS Readiness Branch]

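As an illustration, a query rule for a CBB collection might look like the following WQL (a sketch; OSBranch is the discovery attribute we traced above, stored as a string in query rules):

select SMS_R_System.ResourceId, SMS_R_System.ResourceType, SMS_R_System.Name from SMS_R_System where SMS_R_System.OSBranch = "1"
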
As a reminder, the possible integer values when setting up your query are CB = 0, CBB = 1 and LTSB = 2.

We could also run SQL reports and queries against the Config Manager database to identify systems. The SQL view of interest is v_R_System, which contains attributes like OS Build and OS Readiness Branch. Here is an example query and result:
[Screenshot: example SQL query and result]

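For reference, a query along these lines produces that result (a sketch; in my lab the relevant v_R_System columns are Name0, Build01 and OSBranch01, but verify the column names against your own site database):

select Name0 as [System Name], Build01 as [OS Build], OSBranch01 as [OS Readiness Branch]
from v_R_System
where OSBranch01 = 1
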
As you can see, the branch value is also stored as an integer in the database. As we should all have mastered by now, Current Branch (CB) is 0, and so on.

Well, I hope you have enjoyed this little exercise on identifying WaaS systems in your environment using System Center Configuration Manager.

Till next time!!

Jose Blasac
