
Monitoring your home IP security cameras with OMS Log Analytics


Summary: Monitor devices, like home IP security cameras, with OMS Log Analytics without installing an agent!

Hi folks, in this blog post I would like to share with you how you can monitor devices with Log Analytics without needing to install the OMS (MMA) agent.

We’ve recently announced the Log Analytics HTTP Data Collector API. This enables a number of scenarios for which you may not have considered OMS Log Analytics previously, especially environments where you cannot install the OMS agent or where the device does not run a supported Windows or Linux version. Let me show you one of them.

A solution typically starts with a problem, so let’s dive into my problem first.

I have a home security system, and part of that system is a couple of Foscam IP security cameras and a Foscam NVR (Network Video Recorder) which records the camera feeds. The outdoor cameras are at my front door and my carport, which leads to my back door. The cameras are connected to the NVR and are configured so that upon motion detection they start a recording; optionally, I can configure alert actions, which typically results in an email with a snapshot taken from the camera feed.

image

All goodness at this point. The problem, or challenge if you will, is that I want to know if someone is doing reconnaissance around my house in such a way that a person is detected by my front door camera AND by my carport camera within a specific interval, let’s say 5 minutes. I can visually see this correlation in my NVR when I play recorded streams, like this:

image

 

These systems (cameras and NVR) were never designed to perform any form of correlation, though, and certainly not to do any data analytics. Did I say analytics? Yes I did. So whenever I hear analytics, OMS Log Analytics obviously comes to mind. But to use Log Analytics in this scenario, I need to have the data available in Log Analytics first. With the ingestion API, now you can. Let’s break this project up for a second.

Step 1 – Does my camera log events and put them in some kind of log file?

Well, it does, but the logs are mainly surfaced in the web UI and there’s no export capability to be found there. But hey, I have logged events!

image

Step 2 – Is there any form of automation possible to retrieve the logs?

After thorough research (through Bing that is) I’ve discovered that there’s a limited SDK available which supports CGI requests in the form of POST and GET commands. Well that’s a start. It turned out that I can get the logs, but they are limited in the number of rows returned (10 max) and they come in this format:

image

Fast forward in time and more coffee… it appears that the datetime field is notated in Unix time and the source IP address in decimal notation. Nothing that PowerShell can’t handle, so let’s move on to step 3.
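Before moving on, here is a minimal, hedged sketch of what that retrieval and conversion can look like. The CGI endpoint, command name and credentials below ($camUrl, cmd=getLog, admin/password) are illustrative assumptions and will differ per camera model and firmware.

# Hedged sketch: pull the (max 10) log rows through a CGI GET request and convert
# the Unix-time and decimal-IP notations. Endpoint and parameters are assumptions.
$camUrl = "http://192.168.1.50:88/cgi-bin/CGIProxy.fcgi"   # hypothetical camera address
$user   = "admin"
$pwd    = "<password>"

$response = Invoke-WebRequest -Uri "${camUrl}?cmd=getLog&usr=${user}&pwd=${pwd}" -UseBasicParsing

function Convert-UnixTime ([long]$Seconds) {
    # Unix time counts seconds since 1 January 1970
    (Get-Date -Date '1970-01-01 00:00:00').AddSeconds($Seconds)
}

function Convert-DecimalIP ([uint32]$Decimal) {
    # Reverse the byte order of the 32-bit value and format it as a dotted quad
    $bytes = [BitConverter]::GetBytes($Decimal)
    [Array]::Reverse($bytes)
    $bytes -join '.'
}

# Example usage with values as they appear in the camera log
Convert-UnixTime 1477050618      # -> 21 October 2016
Convert-DecimalIP 3232235826     # -> 192.168.1.50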

Step 3 – Create PowerShell automation scripts – am I hearing Azure Automation here?

So the fun part begins:

1. Creating PowerShell snippets to get the logs through CGI requests, using Invoke-WebRequest,  and put the result in an array – done

2. Utilize existing PowerShell functions to convert Unix time and the IP address – done

3. Update the array so I end up with DateTime, UserName, Source IP address and Camera EventType – done

4. Create a table with custom log field names, based on the array from step 1, and send it to the Log Analytics ingestion API (a hedged sketch of this call follows the list) – done

5. Test-drive the ingestion process – done
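For reference, here is a minimal, hedged sketch of step 4, modeled on the public HTTP Data Collector API sample: it builds the HMAC-SHA256 signature and posts the JSON payload to the workspace ingestion endpoint. $workspaceId, $sharedKey and $cameraEvents are placeholders for the workspace ID, its primary key, and the array built in the previous steps.

# Hedged sketch of sending custom records to the Log Analytics HTTP Data Collector API.
$workspaceId = "<workspace id>"
$sharedKey   = "<primary key>"
$logType     = "CameraEvents"            # surfaces in Log Search as CameraEvents_CL

$body   = $cameraEvents | ConvertTo-Json
$date   = [DateTime]::UtcNow.ToString("r")
$length = [Text.Encoding]::UTF8.GetBytes($body).Length

# Build the SharedKey authorization signature (HMAC-SHA256 over the canonical string)
$stringToSign = "POST`n$length`napplication/json`nx-ms-date:$date`n/api/logs"
$hmac         = New-Object System.Security.Cryptography.HMACSHA256
$hmac.Key     = [Convert]::FromBase64String($sharedKey)
$hash         = $hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($stringToSign))
$signature    = "SharedKey ${workspaceId}:$([Convert]::ToBase64String($hash))"

$uri     = "https://$workspaceId.ods.opinsights.azure.com/api/logs?api-version=2016-04-01"
$headers = @{
    "Authorization"        = $signature
    "Log-Type"             = $logType
    "x-ms-date"            = $date
    "time-generated-field" = "DateTime"  # use the camera timestamp as TimeGenerated
}

Invoke-RestMethod -Uri $uri -Method Post -ContentType "application/json" -Headers $headers -Body $body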

 

Step 4 – Using Log Search to query my camera data

Now that I have the log data from my outdoor cameras sent to OMS Log Analytics, we can explore the data through Log Search:

image

Nice!

Step 5 – Correlating the camera log data

Time to take a walk outside to get some sun and wave at my two outdoor cameras. The cameras have done their job: they’ve detected me and streamed the video feed to my NVR, which has recorded my movements. Let’s see if I can correlate this in Log Search:

image

Great! Let’s turn that into an alert with a schedule:

image

image

 

And while we’re there, let’s add an Azure Automation (webhook enabled) runbook which will send a text message (leveraging the Twilio text service):

image
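The runbook itself only needs a few lines of PowerShell. Here is a hedged sketch, assuming the Twilio credentials and phone numbers are stored as Automation variable assets (the asset names and message text are my own illustrations, not the exact runbook shown above); it posts to Twilio’s Messages REST endpoint using basic authentication.

param (
    [object]$WebhookData    # populated by the OMS alert webhook call
)

# Hedged sketch of a Twilio SMS runbook; asset names below are assumptions.
$accountSid = Get-AutomationVariable -Name 'TwilioAccountSid'
$authToken  = Get-AutomationVariable -Name 'TwilioAuthToken'
$from       = Get-AutomationVariable -Name 'TwilioFromNumber'
$to         = Get-AutomationVariable -Name 'AlertToNumber'

$uri  = "https://api.twilio.com/2010-04-01/Accounts/$accountSid/Messages.json"
$cred = New-Object PSCredential ($accountSid, (ConvertTo-SecureString $authToken -AsPlainText -Force))

$body = @{
    From = $from
    To   = $to
    Body = "OMS alert: motion detected on both outdoor cameras within 5 minutes."
}

# Twilio authenticates with the account SID and auth token (HTTP basic auth)
Invoke-RestMethod -Uri $uri -Method Post -Credential $cred -Body $body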

Let’s test the webhook…. Ok, that works:

image

And we’re done!

Let’s test drive the solution. Again getting some sun, waving at my cameras, sending the data to Log Analytics and… here we go:

image

image

Awesome! Peace of mind accomplished.

Now I can add my PowerShell script to Azure Automation, assign variables through assets, add it to a schedule and execute it on a Hybrid Runbook Worker, which has connectivity to my cameras on my internal network.

image
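As a rough illustration of that wiring, a hedged sketch with the AzureRM.Automation cmdlets follows; the resource group, Automation account, runbook and Hybrid Worker group names are placeholders, and parameter availability may differ per module version.

# Hedged sketch: schedule the collection runbook and pin it to a Hybrid Runbook Worker
# group so it runs inside the home network with line of sight to the cameras.
$rg      = "HomeMonitoring-RG"          # placeholder resource group
$account = "HomeAutomation"             # placeholder Automation account
$runbook = "Get-CameraLogs"             # placeholder runbook name

# Run every hour, starting ten minutes from now
New-AzureRmAutomationSchedule -ResourceGroupName $rg -AutomationAccountName $account `
    -Name "CameraLogs-Hourly" -StartTime (Get-Date).AddMinutes(10) -HourInterval 1

# Link the runbook to the schedule and target the on-premises Hybrid Worker group
Register-AzureRmAutomationScheduledRunbook -ResourceGroupName $rg -AutomationAccountName $account `
    -RunbookName $runbook -ScheduleName "CameraLogs-Hourly" -RunOn "HomeHybridWorkers"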

 

So what’s next?

Besides creating visualizations with the View Designer…

For the next project: I’ve noticed that on some rare occasions I did not have the complete camera recording. What is going on with that? Exploring the logs of the NVR, I saw this:

image

That sounds like a good use case for sending the NVR logs to Log Analytics too, for analyzing and correlating the data. I can leverage the approach I followed previously, which allows me to search through the NVR data as well:

image

Since the cameras, and also the NVR, are IP based, I wanted to be able to troubleshoot whether there are connectivity issues between my cameras, the NVR and my home router. So I decided to enable Syslog forwarding on my router. That was easy, since my Asus router luckily has this:

image

The Remote Log Server destination IP address is an Azure VM running Ubuntu, which has an OMS agent running, which forwards the data to Log Analytics:

image

 

With the router data in Log Analytics as well, I can now go ahead and start troubleshooting and correlating my “video lost” errors, and hopefully find the root cause, by searching for keywords like NVR or drop (for potentially dropped packets, etc.):

image

 image

If you want to explore Syslog forwarding, you can go here: Configuring syslog collection from the OMS portal

I hope that you’ve enjoyed this blog post and have seen the power and possibilities of the Log Analytics HTTP Data Collector API.

Thanks,

Tiander.


Improvements to Scheduling of Maintenance Mode (Client side via agent, and accessibility to operators via monitoring pane)


Before initiating maintenance on IT infrastructure elements, one would want to put them into Maintenance Mode to suppress monitoring while they are under maintenance. This prevents the monitoring tool from generating alerts and events pertaining to these elements during that window. The Schedule Maintenance Mode feature (previously listed in the Administration workspace) allowed users to achieve this, but we identified the following gaps:

  1. Typically, in large organizations, Operators who manage the designated hosts would also maintain them as part of planned and unplanned maintenance workflows, and hence they, not just Administrators, should be allowed to set maintenance schedules from the console
  2. The maintenance process is automated and typically initiated from the client end (in batches)

Hence we have now enabled Maintenance Mode from the client side via the agent, and moved “Maintenance Schedules” from the Administration pane to the Monitoring pane, which is accessible to Operators. These improvements have been implemented in System Center 2016 Operations Manager.

Now a machine can be put into maintenance from the client side via the agent, and operators can access “Maintenance Schedules” from the Monitoring pane of the Operations Console.

You can find the System Center 2016 GA here.

We encourage you to try this feature. You can submit your feedback at our SCOM Feedback site.

Schedule_Maintenance

ASR and Azure Hybrid Use Benefit make application migration to Azure even more cost-effective


Hybrid Use Benefit (HUB) lets Microsoft Software Assurance customers carry their on-premises Windows Server licenses to applications they move to Azure and easily extend their datacenter to the cloud. In addition to the dramatic cost savings and asset productivity achieved by moving your applications to the industry’s leading hybrid enterprise public cloud, the HUB program allows you to realize significant savings on licensing costs.

Azure Site Recovery (ASR) is the tool of choice for our customers to migrate applications to Azure. ASR provides minimal-downtime, hassle-free migration to Azure across virtualization platforms and physical servers. By letting you test your applications in Azure before you migrate, and offering one-click application migration through recovery plans, ASR simplifies the process of migrating to Azure. ASR supports migration of a wide range of operating systems, including Windows Server and various Linux distributions, no matter what platform your applications are running on. What’s more - migration using ASR is free! Yes, you read that right the first time: migration using ASR is free. For the first 31 days from the time you start replicating your server, you only pay for the storage you consume on Azure and for the compute you use to test migration.

Azure Site Recovery now lets you leverage your Hybrid Use Benefit while migrating your Windows servers to Azure. In this blog post, I’ll show you how you can use ASR and HUB to migrate your Windows Server environments to Azure.

Getting setup with Azure Site Recovery

The first thing you want to do is get set up with Azure Site Recovery and start replicating your applications to Azure. All of this can be done in a few simple steps, as outlined in the following articles:

If you are virtualized on Hyper-V, follow this article to get your servers replicating to Azure.

If you are virtualized on VMware or running on Physical servers, follow this article to get started with replication.

HUB is only available on servers migrated to Azure Resource Manager (ARM) virtual machines. Ensure that the storage account you select for replication is an ARM storage account and not a Classic storage account.

Once initial replication completes, your servers reach the protected state in ASR, at which point you are ready to test and migrate your applications to Azure.

Migrate applications using ASR and Hybrid Use Benefit

Use the Compute and Network configuration on the replicated item settings blade on the Azure portal to select the Azure virtual network and virtual machine size to migrate to.

Configure migration to use HUB

Once your servers are protected and you’ve validated your application in Azure by performing a test failover, all that’s left to do before you complete the migration is to configure ASR to use HUB while migrating your server. You can set this up in a few simple steps using Azure PowerShell. Get the latest version of Azure PowerShell from here. Ensure that you have the latest version of the AzureRM.SiteRecovery module (version 3.1.0 or later).

PS C:\Users\bsiva> Get-Module -ListAvailable AzureRm.SiteRecovery


    Directory: C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ResourceManager\AzureResourceManager


ModuleType Version    Name                                ExportedCommands
---------- -------    ----                                ----------------
Manifest   3.1.0      AzureRM.SiteRecovery                {Get-AzureRmSiteRecoveryFabric, New-AzureRmSiteRecoveryFabric, Remove-AzureRmSiteRecoveryFabric, Stop-AzureRmSiteRecoveryJob...}

 

Login to your Azure account and select your Azure subscription:

PS C:\Users\bsiva> Login-AzureRmAccount


Environment           : AzureCloud
Account               : bsiva@microsoft.com
TenantId              : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
SubscriptionId        : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
SubscriptionName      : ASR PM team subscription 5
CurrentStorageAccount : 




PS C:\Users\bsiva>
PS C:\Users\bsiva>
PS C:\Users\bsiva> Select-AzureRmSubscription -SubscriptionName "DR Hybrid Application Scenarios"


Environment           : AzureCloud
Account               : bsiva@microsoft.com
TenantId              : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
SubscriptionId        : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
SubscriptionName      : DR Hybrid Application Scenarios
CurrentStorageAccount : 


Set the Recovery Services vault context:


 
PS C:\Users\bsiva> $vault = Get-AzureRmRecoveryServicesVault -Name "Contoso-RecoveryVault"
PS C:\Users\bsiva> Set-AzureRmSiteRecoveryVaultSettings -ARSVault $vault

ResourceName          ResourceGroupName ResourceNamespace          ResouceType
------------          ----------------- -----------------          -----------
Contoso-RecoveryVault Contoso-Recovery  Microsoft.RecoveryServices vaults

Get the list of replicating machines in the vault:

PS C:\Users\bsiva> $ReplicatedItems = Get-AzureRmSiteRecoveryFabric | Get-AzureRmSiteRecoveryProtectionContainer | Get-AzureRmSiteRecoveryReplicationProtectedItem
PS C:\Users\bsiva> $ReplicatedItems | Select-Object -Property FriendlyName

FriendlyName
------------
Contoso-EngWikiDB
Contoso-PayrollDB

 

Set the HUB License Type for the machines that are being migrated:

PS C:\Users\bsiva> $Job1 = Set-AzureRmSiteRecoveryReplicationProtectedItem -ReplicationProtectedItem $ReplicatedItems[0] -LicenseType WindowsServer

 

Validate that the ASR Job completed successfully:

PS C:\Users\bsiva> Get-AzureRmSiteRecoveryJob -Job $Job1


Name             : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
ID               : /Subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/Contoso-Recovery/providers/Microsoft.RecoveryServices/vaults/Contoso-RecoveryVault/repl
                   icationJobs/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx
Type             :
JobType          : UpdateVmProperties
DisplayName      : Update the virtual machine
ClientRequestId  : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx-2016-10-19 18:50:18Z-P ActivityId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
State            : Succeeded
StateDescription : Completed
StartTime        : 10/20/2016 12:20:18 AM
EndTime          : 10/20/2016 12:20:22 AM
TargetObjectId   : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
TargetObjectType : ProtectionEntity
TargetObjectName : Contoso-EngWikiDB
AllowedActions   :
Tasks            : {Update the virtual machine properties}
Errors           : {}

 

And that’s it! You are now all set to migrate your application to Azure.

Migrate to Azure

With ASR now set up to let you migrate to Azure and benefit from HUB, all that’s left to do is the final step of migrating your application to Azure. You can do this from the portal or using ASR PowerShell cmdlets. To do this from the portal, go to your Recovery Services vault, select the replicated machine or recovery plan if you’ve set one up, and select the Failover action.

ASR: Failover Protected Servers
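If you prefer PowerShell for this final step, a hedged sketch along the following lines should work with the same AzureRM.SiteRecovery module; treat the cmdlet and parameter names as assumptions and verify them with Get-Command against your module version.

# Hedged sketch: trigger the failover (migration) for one protected item from PowerShell.
$rpi = $ReplicatedItems | Where-Object { $_.FriendlyName -eq "Contoso-EngWikiDB" }

$failoverJob = Start-AzureRmSiteRecoveryUnplannedFailoverJob -ReplicationProtectedItem $rpi -Direction PrimaryToRecovery

# Track the job to completion, as we did for the license-type update above
Get-AzureRmSiteRecoveryJob -Job $failoverJob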

 

Once the failover job completes successfully, you’ll find your migrated VM among the virtual machines in your subscription. Verify that your VM is utilizing the licensing benefit.

At this point you can clean up the replications you had set up in your Recovery Services vault by selecting Complete Migration, and retire the on-premises infrastructure that you were previously using to host your application.

CompleteMigration

Migrating to the cloud has never been easier. With a few simple steps you can migrate your existing applications and benefit from the superior cloud economics and power of the hyper-scale platform that Azure is.

This is awesome, where do I learn more about leveraging ASR to provide business continuity for my IT infrastructure, or to migrate my applications to Azure?

You can check out additional product information and start replicating your workloads to Microsoft Azure using Azure Site Recovery today. You can use the powerful replication capabilities of Site Recovery for 31 days at no charge for every new physical server or virtual machine that you replicate. Visit the Azure Site Recovery forum on MSDN for additional information and to engage with other customers, or use the ASR UserVoice to let us know what features you want us to enable next.

Azure Site Recovery, as part of Microsoft Operations Management Suite, enables you to gain control and manage your workloads no matter where they run (Azure, AWS, Windows Server, Linux, VMware or OpenStack) with a cost-effective, all-in-one cloud IT management solution. Existing System Center customers can take advantage of the Microsoft Operations Management Suite add-on, empowering them to do more by leveraging their current investments. Get access to all the new services that OMS offers, with a convenient step-up price for all existing System Center customers. You can also access only the IT management services that you need, enabling you to on-board quickly and have immediate value, paying only for the features that you use.

Making It Easier To Administer Power BI


Announcing Azure Storage Client Library GA for Xamarin


We are pleased to announce the general availability release of the Azure Storage client library for Xamarin. Xamarin is a leading mobile app development platform that allows developers to use a shared C# codebase to create iOS, Android, and Windows Store apps with native user interfaces. We believe the Azure Storage library for Xamarin will be instrumental in helping provide delightful developer experiences and enabling an end-to-end mobile-first, cloud-first experience. We would like to thank everyone who has leveraged previews of Azure Storage for Xamarin and provided valuable feedback.

The sources for the Xamarin release are the same as for the Azure Storage .NET client library and can be found on GitHub. The installable package can be downloaded from NuGet (version 7.2 and beyond) or from the Azure SDK (version 2.9.5 and beyond) and installed via the Web Platform Installer. This generally available release supports all features up to and including the 2015-12-11 REST version.

Getting started is very easy. Simply follow the steps below:

  1. Install Xamarin SDK and tools and any language specific emulators as necessary: For instance, you can install the Android KitKat emulator.
  2. Create a new Xamarin project, install the Azure Storage NuGet package version 7.2 or higher in your project (see the snippet after this list), and add Storage-specific code.
  3. Compile, build and run the solution. You can run against a phone emulator or an actual device. Likewise you can connect to the Azure Storage service or the Azure Storage emulator.
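For step 2, if you use the NuGet Package Manager Console inside Visual Studio, the install is a one-liner. A hedged sketch (WindowsAzure.Storage is the package id of the storage client library; 7.2.0 is the first version with the Xamarin support described here, and any later version should also work):

# Run from the Visual Studio NuGet Package Manager Console with your Xamarin project selected
Install-Package WindowsAzure.Storage -Version 7.2.0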

Please see our Getting Started docs and the reference documentation to learn how you can get started with the Xamarin client library and build applications that leverage Azure Storage features.

We currently support shared asset projects (e.g., Native Shared, Xamarin.Forms Shared), Xamarin.iOS and Xamarin.Android projects. This Storage library leverages the .Net Standard runtime library that can be run on Windows, Linux and MacOS. Learn about .Net Standard library and .Net Core. Learn about Xamarin support for .Net Standard.

As always, we continue to do our work in the public GitHub development branch for visibility and transparency. We are working on building code samples in our Azure Storage samples repository to help you better leverage the Azure Storage service and the Xamarin library capabilities. A Xamarin image uploader sample is already available for you to review/download. If you have any requests on specific scenarios you'd like to see as samples, please let us know or feel free to contribute as a valued member of the developer community. Community feedback is very important to us.

Enjoy the Xamarin Azure Storage experience!

Thank you

Dinesh Murthy, Michael Roberson, Michael Curd, Elham Rezvani, Peter Marino and the Azure Storage Team.

Geo-filtering available for Akamai Standard profiles


Restricting access to your content by country is a powerful CDN feature. We’re excited to announce this is now available for all Akamai Standard profiles directly in the Azure portal.

This feature allows you to specify paths on your endpoint and set rules to block or allow access from a specific list of countries.

If you are using either Standard or Premium from Verizon, the feature will still be accessed through the supplemental management portal for the time being. It will be migrated to the Azure portal in the future.

To access the new feature, follow these steps:

  1. Find your CDN profile at https://portal.azure.com.

image

2. Select an Endpoint.

image

3. Navigate to Geo-filtering.

image

4. Enter a file or directory path for PATH, an ACTION to block or allow, and one or more countries. The example below allows access to “myendpoint1.azureedge.net/pictures/mypics/*” from only the United States.

image

5. Hit Save and wait for the changes to propagate.

For more details, please visit the full documentation page.

Additional resources

KMS Activation for Windows Server 2016


Hi everyone.  Graeme Bray here with a quick article around KMS and Server 2016.  KMS and Server 2016 you say?  Shouldn’t I be using Active Directory Based Activation?  Yes, you should, but in case you are not, let’s go over the pre-requisites to activate Windows Server 2016 via KMS.

First, let’s review the pre-requisite updates that must be installed.

If your KMS host is running Windows Server 2012, you need two updates:

* https://support.microsoft.com/kb/3058168

o Direct Download: https://www.microsoft.com/en-us/download/details.aspx?id=47649

* https://support.microsoft.com/kb/3172615

o Direct Download: https://www.microsoft.com/en-us/download/details.aspx?id=53316

If your KMS host is running Windows Server 2012 R2, you need two updates:

* https://support.microsoft.com/kb/3058168

o Direct Download: https://www.microsoft.com/en-us/download/details.aspx?id=47622

* https://support.microsoft.com/kb/3172614

o Direct Download: https://www.microsoft.com/en-us/download/details.aspx?id=53333

If your KMS host is running Windows Server 2008 R2:

* There is no update to allow Windows Server 2008 R2 to activate Windows Server 2016.  Windows Server 2008 R2 is in extended support.  During this phase, we only release security updates and do not release updates that add additional functionality.

Let’s review what these updates add:

* The first update allows the activation of Windows 10 from Windows 8, 8.1, and Windows Server 2012 R2 based systems.

* The second is an update rollup that allows KMS to activate Windows 10 1607 long-term servicing branch (LTSB) systems and Windows Server 2016 clients.

KMS License Key

After that, all you need to do is add your KMS License Key from the Volume License site.

But wait, how do I find my KMS License Key?!  Have no fear, there is a KB article (and detailed steps) for you!

Retrieve KMS License Key from the VLSC for Windows Server 2016.

To resolve this problem, follow these steps:

1. Log on to the Volume Licensing Service Center (VLSC).

2. Click License.

3. Click Relationship Summary.

4. Click License ID of your current Active License.

5. After the page loads, click Product Keys.

6. In the list of keys, locate Windows Srv 2016 DataCtr/Std KMS

7. Install this key on the KMS host (see the command sketch below).
https://support.microsoft.com/kb/3086418
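Installing and activating the host key from an elevated prompt on the KMS host looks roughly like this (a hedged sketch; replace the placeholder with the actual key from the VLSC):

# Install the KMS host key retrieved from the VLSC - placeholder shown below
cscript.exe //nologo C:\Windows\System32\slmgr.vbs /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
# Activate the KMS host against Microsoft
cscript.exe //nologo C:\Windows\System32\slmgr.vbs /ato
# Verify: /dlv shows the license channel and the current KMS client count
cscript.exe //nologo C:\Windows\System32\slmgr.vbs /dlv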

Client Licensing

After your KMS host is activated, use the Client Setup Keys to activate your shiny new Windows Server 2016 hosts.

https://technet.microsoft.com/library/jj612867.aspx

What is a client setup key, you ask?  A client setup key (also known as a GVLK) is the key that is installed, by default, on the Volume License media that you pulled down on day 1.  If you are using volume license media, no key change is required.

If you are converting an install from Retail, MSDN, etc., you will want to use the client setup key to convert it to a Volume License key and allow activation.
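For example, switching such an install over and activating it against KMS would look roughly like this (hedged sketch; substitute the published client setup key, or GVLK, for your edition, and the /skms step is only needed if you don’t rely on DNS auto-discovery):

# Install the client setup key (GVLK) for your edition - placeholder shown below
cscript.exe //nologo slmgr.vbs /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
# Optional: point the client at a specific KMS host instead of DNS auto-discovery
cscript.exe //nologo slmgr.vbs /skms kms01.contoso.com:1688
# Activate
cscript.exe //nologo slmgr.vbs /ato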

Active Directory Based Activation

Graeme, this sure seems like a lot of work to get KMS working for Windows Server 2016.  Is there a better way to do this?

The recommendation at this point is to leave your existing KMS system alone.  Whether it is running on Windows Server 2008 R2, Windows Server 2012, or Windows Server 2012 R2, continue to service the machine via security and quality updates.  Allow your KMS system to activate down-level operating systems and Office installs (Windows 7, Windows Server 2008/2008 R2, and Office 2010).  Utilize Active Directory Based Activation (ADBA) for all new clients (Windows 8, 8.1, Windows Server 2012, 2012 R2, 2016, Windows 10, Office 2013, and Office 2016).

Active Directory Based Activation provides several key benefits:

1. Activation is near instantaneous when a system is brought online.  As soon as the system talks to Active Directory, the system is activated.

2. One less server to maintain and update.  Once all downlevel (2008 R2 & prior) systems are migrated, you can remove your KMS host.

3. AD-Based activation is forest wide.  KMS hosts require additional configuration to support multiple domains.

Have more questions?

Q: Where can I get even more details around KMS and AD Based Activation?

A: Refer to these other posts by one of my colleagues, Charity Shelbourne.

https://blogs.technet.microsoft.com/askpfeplat/2013/02/04/active-directory-based-activation-vs-key-management-services/

https://blogs.technet.microsoft.com/askpfeplat/2015/11/09/kms-migration-from-2008-r2-to-windows-server-2012-r2-and-kms-activation-known-issues/

Q: Can Microsoft help me with this process?

A: Absolutely!  Reach out to your TAM and you can engage Premier Field Engineering for assistance.

Thanks for reading!

Graeme “Keeping KMS easy” Bray

Windows 10 Tip: Get started with the Game Bar on your Windows 10 PC


Did you know that with the Game Bar on your Windows 10 PC and tablet, you can capture screenshots and record game clips? You can even record with your microphone while capturing video, and share all of it through the Xbox app to your Xbox Live activity feed, messages, showcase and even Twitter.

Here’s how to record and capture screenshots with the Game Bar:

Simply tap the Xbox button if you’re using an Xbox controller – or press Windows Key + G on your keyboard – to bring up the Game Bar. You can easily find and share all your recordings and screenshots in the Xbox app under the GameDVR tab on the left navigation, or in the Video folder on your PC under “Captures.” What’s cool here is that any GameDVR clips you capture on your Xbox One will also appear on your PC for editing and sharing.

The Game Bar on your Windows 10 PC and tablet

The Xbox app is also great for downloading your free Games with Gold every month, using Party for free voice chat with your friends on Xbox One and Windows 10, comparing Achievements, launching PC games and streaming your Xbox One games to your PC to play when you’re away from your television.

Stay tuned for the updates coming to the app in November: you’ll be able to browse Xbox Live Clubs you belong to and find gamers looking to accomplish similar goals with Looking for Group.

If you don’t already have the Xbox app, you can download it here. Have a great week!


Microsoft and Intel squeezed hyper-convergence into the overhead bin


This post was authored by Cosmos Darwin, Program Manager, Windows Server.

The Challenge

In the Windows Server team, we tend to focus on going big. Our enterprise customers and service providers are increasingly relying on Windows as the foundation of their software-defined datacenters, and needless to say, our hyperscale public cloud Azure does too. Recent big announcements like support for 24 TB of memory per server with Hyper-V, or 6+ million IOPS per cluster with Storage Spaces Direct, or delivering 50 Gb/s of throughput per virtual machine with Software-Defined Networking are the proof.

But what can these same features in Windows Server do for smaller deployments? Those known in the IT industry as Remote-Office / Branch-Office (“ROBO”) – think retail stores, restaurants, bank branches or private practices, remote industrial or construction sites, and more. After all, their basic requirement isn’t so different – they need high availability for mission-critical apps, with rock-solid storage for those apps. And generally, they need it to be local, so they can operate – process transactions, or look up a patient’s records – even when their Internet connection is flaky or non-existent.

For these deployments, cost is paramount. Major retail chains operate thousands, or tens of thousands, of locations. This multiplier makes IT budgets extremely sensitive to the per-unit cost of each system. The simplicity and savings of hyper-convergence – using the same servers to provide compute and storage – present an attractive solution.

With this in mind, under the auspices of Project Kepler-47, we set about going small.

Both servers in one carry-on bag

This tiny two-server cluster packs powerful compute and spacious storage into one cubic foot.

Meet Kepler-47

“The storage is flash-accelerated, the chips are Intel Xeon,
and the memory is error-correcting DDR4 – no compromises.”

The resulting prototype – and it’s just that, a prototype – was revealed at Microsoft Ignite 2016 last week.

Project Kepler 47 at Ignite 2016

Kepler-47 on expo floor at Microsoft Ignite 2016 in Atlanta.

In our configuration, this tiny two-server cluster provides over 20 TB of available storage capacity, and over 50 GB of available memory for a handful of mid-sized virtual machines. The storage is flash-accelerated, the chips are Intel Xeon, and the memory is error-correcting DDR4 – no compromises. The storage is mirrored to tolerate hardware failures – drive or server – with continuous availability. And if one server goes down or needs maintenance, virtual machines live migrate to the other server with no appreciable downtime.

Kepler 47 vs 2U Rack Server

Kepler-47 is 45% smaller than standard 2U rack servers.

In terms of size, Kepler-47 is barely one cubic foot – 45% smaller than standard 2U rack servers. For perspective, this means both servers fit readily in one carry-on bag in the overhead bin!

We bought (almost) every part online at retail prices. The total cost for each server was just $1,101. This excludes the drives, which we salvaged from around the office, and which could vary wildly in price depending on your needs.

Retail value

Each Kepler-47 server cost just $1,101 retail, excluding drives.

Technology

Kepler-47 is comprised of two servers, each running Windows Server 2016 Datacenter. The servers form one hyper-converged Failover Cluster, with the new Cloud Witness as the low-cost, low-footprint quorum technology. The cluster provides high availability to Hyper-V virtual machines (which may also run Windows, at no additional licensing cost), and Storage Spaces Direct provides fast and fault tolerant storage using just the local drives.
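For context, standing up a two-node hyper-converged cluster like this one comes down to a handful of PowerShell commands. A hedged sketch follows; the node names, witness storage account and volume size are placeholders, and Kepler-47 itself needed extra steps for its Thunderbolt networking and boot device.

# Hedged sketch of a two-node cluster with Cloud Witness and Storage Spaces Direct.
$nodes = "KEPLER-N1", "KEPLER-N2"

# Validate the nodes and create the cluster (static IP / name object handling not shown)
Test-Cluster -Node $nodes -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"
New-Cluster -Name KEPLER47 -Node $nodes -NoStorage

# Use an Azure storage account as the low-footprint quorum witness
Set-ClusterQuorum -Cluster KEPLER47 -CloudWitness -AccountName "kepler47witness" -AccessKey "<storage account key>"

# Claim the local drives and enable Storage Spaces Direct
Enable-ClusterStorageSpacesDirect -CimSession KEPLER47

# Carve out a mirrored, continuously available volume for the virtual machines
New-Volume -FriendlyName "VMs" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName "S2D*" -Size 2TB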

Additional fault tolerance can be achieved using new features such as Storage Replica with Azure Site Recovery.

Notably, Kepler-47 does not use traditional Ethernet networking between the servers, eliminating the need for costly high-speed network adapters and switches. Instead, it uses Intel Thunderbolt™ 3 over a USB Type-C connector, which provides up to 20 Gb/s (or up to 40 Gb/s when utilizing display and data together!) – plenty for replicating storage and live migrating virtual machines.

To pull this off, we partnered with our friends at Intel, who furnished us with pre-release PCIe add-in-cards for Thunderbolt™ 3 and a proof-of-concept driver.

Intel Thunderbolt™ 3

Kepler-47 does not use traditional Ethernet between the servers; instead, it uses Intel Thunderbolt™ 3 over USB Type-C.

To our delight, it worked like a charm – here’s the Networks view in Failover Cluster Manager. Thanks, Intel!

Failover Cluster Manager

The Networks view in Failover Cluster Manager, showing Thunderbolt™ Networking.

While Thunderbolt™ 3 is already in widespread use in laptops and other devices, this kind of server application is new, and it’s one of the main reasons Kepler-47 is strictly a prototype. It also boots from USB 3 DOM, which isn’t yet supported, and has no host-bus adapter (HBA) or SAS expander, both of which are currently required for Storage Spaces Direct to leverage SCSI Enclosure Services (SES) for slot identification. However, it otherwise passes all our validation and testing and, as far as we can tell, works flawlessly.

(In case you missed it, support for Storage Spaces Direct clusters with just two servers was announced at Ignite!)

Parts List

Ok, now for the juicy details. Since Ignite, we have been asked repeatedly what parts we used. Here you go:

Kepler 47 key parts

The key parts of Kepler-47.

Function | Product | Cost
Motherboard | ASRock C236 WSI | $199.00
CPU | Intel Xeon E3-1235L v5 25w 4C4T 2.0GHz | $283.00
Memory | 32 GB (2 x 16 GB) Black Diamond ECC DDR4-2133 | $208.99
Boot Device | Innodisk 32 GB USB 3 DOM | $29.33
Storage (Cache) | 2 x 200 GB Intel S3700 2.5” SATA SSD | /
Storage (Capacity) | 6 x 4 TB Toshiba MG03ACA400 3.5” SATA HDD | /
Networking (Adapter) | Intel Thunderbolt™ 3 JHL6540 PCIe Gen 3 x4 Controller Chip | /
Networking (Cable) | Cable Matters 0.5m 20 Gb/s USB Type-C / Thunderbolt™ 3 | $17.99*
SATA Cables | 8 x SuperMicro CBL-0481L | $13.20
Chassis | U-NAS NSC-800 | $199.99
Power Supply | ASPower 400W Super Quiet 1U | $119.99
Heatsink | Dynatron K2 75mm 2 Ball CPU Fan | $34.99
Thermal Pads | StarTech Heatsink Thermal Transfer Pads (Set of 5) | $6.28*

* Just one needed for both servers.

Practical Notes

The ASRock C236 WSI motherboard is the only one we could locate that is mini-ITX form factor, has eight SATA ports, and supports server-class processors and error-correcting memory with SATA hot-plug. The E3-1235L v5 is just 25 watts, which helps keep Kepler-47 very quiet. (Dan has been running it literally on his desk since last month, and he hasn’t complained yet.)

Having spent all our SATA ports on the storage, we needed to boot from something else. We were delighted to spot the USB 3 header on the motherboard.

The U-NAS NSC-800 chassis is not the cheapest option. You could go cheaper. However, it features an aluminum outer casing, steel frame, and rubberized drive trays – the quality appealed to us.

We actually had to order two sets of SATA cables – the first were not malleable enough to weave their way around the tight corners from the board to the drive bays in our chassis. The second set we got are flat and 30 AWG, and they work great.

Likewise, we had to confront physical limitations on the heatsink – the fan we use is barely 2.7 cm tall, to fit in the chassis.

We salvaged the drives we used, for cache and capacity, from other systems in our test lab. In the case of the SSDs, they’re several years old and discontinued, so it’s not clear how to accurately price them. In the future, we imagine ROBO deployments of Storage Spaces Direct will vary tremendously in the drives they use – we chose 4 TB HDDs, but some folks may only need 1 TB, or may want 10 TB. This is why we aren’t focusing on the price of the drives themselves – it’s really up to you.

Finally, the Thunderbolt™ 3 controller chip in PCIe add-in-card form factor was pre-release, for development purposes only. It was graciously provided to us by our friends at Intel. They have cited a price tag of $8.55 for the chip, but haven’t made us pay yet.

Takeaway

With Project Kepler-47, we used Storage Spaces Direct and Windows Server 2016 to build an unprecedentedly low-cost high availability solution to meet remote-office, branch-office needs. It delivers the simplicity and savings of hyper-convergence, with compute and storage in a single two-server cluster, with next to no networking gear, that is very budget friendly.

Are you or is your organization interested in this type of solution? Let us know in the comments!

A trusted way to shop online is coming to a Windows 10 device near you


Masterpass, the omni-channel digital payment service from Mastercard, is connecting with Microsoft Wallet. You’ll soon be able to shop at hundreds of thousands of online merchants that accept Masterpass.

The Masterpass vision is to support all forms of commerce to address the full range of consumer needs. In bringing two iconic consumer brands together, users of Windows 10 phones, tablets and desktops that are also Mastercard cardholders will have a simple and secure way to pay online.

Why else are we excited to be working with Microsoft?

Reach: With over xxx (Microsoft to confirm number) million devices and growing quickly, the Windows platform represents a great way to reach consumers in the U.S. and around the world.

Security: Masterpass, which already has multiple layers of security including EMV standard tokenization and encryption, will get even more secure with Windows Hello technology.

Acceptance: Masterpass today is helping hundreds of thousands of merchants across the world accept millions of secure transactions. This partnership will enable Microsoft Wallet to reach a growing network of online merchants in a simple and secure way.

Windows devices offer a versatile computing platform for consumers and we are excited to bring the best payment experience to these devices.

How to follow along with our Microsoft Windows 10 Event


New York skyline through a window

Our Microsoft Windows 10 Event is taking place this week, and we invite you to join us to see what’s next for Windows 10. You can watch the livestream of our #MicrosoftEvent this Wednesday, Oct. 26, 2016 at 10AM EDT/7AM PDT. If you’re a member of the press, you can watch the keynote here.

Tip: Ask Cortana* to “Spark my imagination.”

*Cortana available in select markets.

High DPI Scaling Improvements for Desktop Applications and “Mixed Mode” DPI Scaling in the Windows 10 Anniversary Update


As display technology has improved over time, the cutting edge has moved towards having more pixels packed into each physical square inch, and away from simply making displays physically larger. This trend has increased the dots per inch (DPI) of the displays on the market today. The Surface Pro 4, for example, has roughly 192 DPI (while legacy displays have 96 DPI). Although having more pixels packed into each physical square inch of a display can give you extremely sharp graphics and text, it can also cause problems for desktop application developers. Many desktop applications display blurry, incorrectly sized UI (too big or too small), or are unusable when using high DPI displays in combination with standard-DPI displays. Many desktop UI frameworks that developers rely on to create Windows desktop applications do not natively handle high DPI displays and work is required on the part of the developer to address resizing application UI on these displays. This can be a very expensive and time-consuming process for developers. In this post, I discuss some of the improvements introduced in the Windows 10 Anniversary Update that make it less-expensive for desktop application developers to develop applications that handle high-DPI displays properly.

Note that applications built upon the Windows Universal Platform (UWP) handle display scaling very well and that the content discussed in this post does not apply to UWP. If you’re creating a new Windows application or are in a position where migrating is possible, consider UWP to avoid the problems discussed in this post.

Some Background on DPI Scaling

Steve Wright has written on this topic extensively, but I thought I’d summarize some of the complexities around display scaling for desktop applications here. Many desktop applications (applications written in raw Win32, MFC, WPF, WinForms or other UI frameworks) can often become blurry, incorrectly sized or a combination of both, whenever the display scale factor, or DPI, of the display that they’re on is different than what it was when the Windows session was first started. This can happen under many circumstances:

  • The application window is moved to a display that has a different display scale factor
  • The user changes the display scale factor manually
  • A remote-desktop connection is established from a device with a different scale factor

When the display scale factor changes, the application may be sized incorrectly for the new scale factor and therefore, Windows often jumps in and does a bitmap stretch of the application UI. This causes the application UI to be physically sized correctly, but it can also lead to the UI being blurry.

In the past, Windows offered no support for DPI scaling to applications at the platform level. When these types of “DPI Unaware” applications are run on Windows 10, they are almost always bitmap scaled by Windows when display scaling is > 100%. Later, Windows introduced a DPI-awareness mode called “System DPI Awareness.” System DPI Awareness provides information to applications about the display scale factor, the size of the screen, information on the correct fonts to use, etc., such that developers can have their applications scaled correctly for a high DPI display. Unfortunately, System DPI Awareness was not designed for dynamic-scaling scenarios such as docking/undocking, moving an application window to a display with a different display scale factor, etc. In other words: the model for system-DPI awareness assumes that only one display will be in use during the lifecycle of the application and that the scale factor will not change.

In dynamic-scale-factor scenarios, applications will be bitmap stretched by Windows when the display scale factor changes (this even applies to system-DPI-aware processes). Windows 8.1 introduced support for “Per-Monitor-DPI Awareness” to enable developers to write applications that could resize on a per-DPI basis. Applications that register themselves as being Per-Monitor-DPI Aware are informed when the display scale factor changes and are expected to respond accordingly.

So… everything was good, right? Not quite.

Unfortunately, there were three big gaps with our implementation of Per-Monitor-DPI Awareness in the platform:

  • There wasn’t enough platform support for desktop application developers to actually make their applications do the right thing when the display-scale-factor changed.
  • It was very expensive to update application UI to respond correctly to a display-scale factor changes, if it was even possible to do at all.
  • There was no way to directly disable Windows’ bitmap scaling of application UI. Some applications would register themselves as being Per-Monitor-DPI Aware not because they actually were DPI aware, but because they didn’t want Windows to bitmap stretch them.

These problems resulted in very few applications handling dynamic display scaling correctly. Many applications that registered themselves as being Per-Monitor-DPI Aware don’t scale at all and can render extremely large or extremely small on secondary displays.

Background on Explorer

As I mentioned in another blog post, during the development cycle for the first release of Windows 10 we decided to start improving the way Windows handled dynamic display scaling by updating some in-box UI components, such as the Windows File Explorer, to scale correctly.

This was a great learning experience for us because it taught us about the problems developers face when trying to update their applications to dynamically scale and where Windows was limited in this regard. One of the main lessons learned was that, even for simple applications, the model of registering an application as being either System DPI Aware or Per-Monitor-DPI Aware was too rigid of a requirement because it meant that if a developer decided to mark their application as conforming to one of these DPI-awareness modes, they would have had to update every top-level window in their application or live with some top-level windows being sized incorrectly. Any application that hosts third-party content, such as plugins or extensions, may not even have access to the source code for this content and therefore would not be able to validate that it handled display scaling properly. Furthermore, there were many system components (ComDlg32, for example) that didn’t scale on a per-DPI basis.

When we updated File Explorer (a codebase that’s been around and been added to for some time), we kept finding more and more UI that had to be updated to handle scaling correctly, even after we reached the point in the development process when the primary UI scaled correctly. At that point we faced the same choice other developers faced: we had to touch old code to implement dynamic scaling (which came with application-compatibility risks) or live with these UI components being sized incorrectly. This helped us feel the pain that developers face when trying to adhere to the rigid model that Windows required of them.

Mixed-Mode DPI Scaling and the DPI-Awareness Context

Lesson learned. It was clear to us that we needed to break apart this rigid, process-wide, model for display scaling that Windows required. Our goal was to make it easier for developers to update their desktop applications to handle dynamic display scaling so that more desktop applications would scale gracefully on Windows 10. The idea we came up with was to move the process-level constraint on display scaling to the top-level window level. The idea was that instead of requiring every single top-level window in a desktop application to be updated to scale using a single mode, we could instead enable developers to ease-in, so to speak, to the dynamic-DPI world by letting them choose the scaling mode for each top-level window. For an application with a main window and secondary UI, such as a CAD or illustration application, for example, developers can focus their time and energy updating the main UI while letting Windows handle scaling the less-important UI, possibly with bitmap stretching. While this would not be a perfect solution, it would enable application developers to update their UI at their own pace instead of requiring them to update every component of their UI at once, or suffer the consequences previously mentioned.

The Windows 10 Anniversary Update introduced the concept of “Mixed-Mode” DPI scaling, also known as sub-process DPI scaling, via the concept of the DPI-awareness context (DPI_AWARENESS_CONTEXT) and the SetThreadDpiAwarenessContext API. You can think of a DPI-awareness context as a mode that a thread can be in which can impact the DPI-behavior of API calls that are made by the thread (while in one of these modes). A thread’s mode, or context, can be changed via calls to SetThreadDpiAwarenessContext at any time. Here are some key points to consider:

  • A thread can have its DPI Awareness Context changed at any time.
  • Any API calls that are made after the context is changed will run in the corresponding DPI context (and may be virtualized).
  • When a thread that is running with a given context creates a new top-level window, the new top-level window will be assigned the same context that the thread that created it had, at the time of creation.

Let’s discuss the first point: With SetThreadDpiAwarenessContext the context of a thread can be switched at will. Threads can also be switched in and out of different contexts multiple times.

Many Windows API calls in Windows will return different information to applications depending on the DPI awareness mode that the calling process is running in. For example, if an application is DPI-unaware (which means that it didn’t specify a DPI-Awareness mode) and is running on a display scale factor greater than 100%, and if this application queries Windows for the display size, Windows will return the display size scaled to the coordinate space of the application. This process is referred to as virtualization. Prior to the availability of Mixed-Mode DPI, this virtualization only took place at the process level. Now it can be done at the thread level.

Mixed-Mode DPI scaling should significantly reduce the barrier to entry for DPI support for desktop applications.

Making Notepad Per-Monitor DPI Aware

Now that I’ve introduced the concept of Mixed-Mode, let’s talk about how we applied it to an actual application. While we were working on Mixed Mode we decided to try it out on some in-box Windows applications. The first application we started with was Notepad. Notepad is essentially a single-window application with a single edit control. It also has several “level 2” UI such as the font dialog, print dialog and the find/replace dialog. Before the Windows 10 Anniversary Update, Notepad was a System-DPI-Aware process (crisp on the primary display, blurry on others or if the display scale factor changed). Our goal was to make it a first-class Per-Monitor-DPI-Aware process so that it would render crisply at any scale factor.

One of the first things we did was to change the application manifest for Notepad so that it would run in per-monitor mode. Once an application is running as per-monitor and the DPI changes, the process is sent a WM_DPICHANGED message. This message contains a suggested rectangle to size the application to using SetWindowPos. Once we did this and moved Notepad to a second display (a display with a different scale factor), we saw that the non-client area of the window wasn’t scaling automatically. The non-client area can be described as all of the window chrome that is drawn by the OS such as the min/max/close button, window borders, system menu, caption bar, etc.

Here is a picture of Notepad with its non-client area properly DPI scaling next to another per-monitor application that has non-client area that isn’t scaling. Notice how the non-client area of the second application is smaller. This is because the display that its image was captured on used 200% display scaling, while the non-client area was initialized at 100% (system) display scaling.

image1

During the first Windows 10 release we developed functionality that would enable non-client area to scale dynamically, but it wasn’t ready for prime-time and wasn’t released publicly until we released the Anniversary Update.

We were able to use the EnableNonClientDpiScaling API to get Notepad’s non-client area to automatically DPI scale properly.

Using EnableNonClientDpiScaling will enable automatic DPI scaling of the non-client area for a window when the following conditions are satisfied:

  • The API is called from the WM_NCCREATE handler for the window
  • The process or window is running in per-monitor-DPI awareness
  • The window passed to the API is a top-level window (only top-level windows are supported)

Font Size & the ChooseFont Dialog

The next thing that had to be done was to resize the font on a DPI change. Notepad uses an edit control for its primary UI, and it needs to have a font size specified. After a DPI change, the previous font size would be either too large or too small for the new scale factor, so it had to be recalculated. We used GetDpiForWindow to base the calculation for the new font size:

FontStruct.lfHeight = -MulDiv(iPointSize, GetDpiForWindow(hwndNP), 720);

This gave us a font size that was appropriate for the display-scale factor of the current display, but we next ran into an interesting problem: when choosing the font we ran up against the fact that the ChooseFont dialog was not per-monitor DPI aware. This meant that this dialog could be either too large or too small, depending on the display configuration at runtime. Notice in the image below that the ChooseFont dialog is twice as large as it should be:

image2

To address this, we used mixed-mode to have the ChooseFont dialog run with a system-DPI-awareness context. This meant that this dialog would scale to the system DPI on the primary display and be bitmap stretched any time the display scale factor changed:

DPI_AWARENESS_CONTEXT previousDpiContext = SetThreadDpiAwarenessContext(DPI_AWARENESS_CONTEXT_SYSTEM_AWARE);
BOOL cfResult = ChooseFont(cf);
SetThreadDpiAwarenessContext(previousDpiContext);

This code stores the DPI_AWARENESS_CONTEXT of the thread and then temporarily changes the context while the ChooseFont dialog is created. This ensures that the ChooseFont dialog will run with a system-DPI-awareness context. Immediately after the call to create the window, the thread’s awareness context is restored because we didn’t want the thread to have its awareness changed permanently.

We knew that the ChooseFont dialog did support system-DPI awareness so we chose DPI_AWARENESS_CONTEXT_SYSTEM_AWARE, otherwise we could have used DPI_AWARENESS_CONTEXT_UNAWARE to at least ensure that this dialog would have been bitmap stretched to the correct physical size.

Now we had the ChooseFont dialog scaling properly without touching any of the ChooseFont dialog’s code, but this led to our next challenge… and this is one of the most important concepts that developers should understand about the use of mixed-mode DPI scaling: data shared across DPI-awareness contexts can use different scaling/coordinate spaces and can have different interpretations in each context. In the case of the ChooseFont dialog, this function returns a font size based off of the user’s input, but the font size returned is relative to the scale factor that the dialog is running in. When the main Notepad window is running at a scale factor that is different than the system scale factor, the values from the ChooseFont dialog must be translated to be meaningful for the main window’s scale factor. Here we scaled the font point size to the DPI of the display that the Notepad window was running on, again using GetDpiForWindow:

FontStruct.lfHeight = -MulDiv(cf.iPointSize, GetDpiForWindow(hwndNP), 720);

Windows Placement

Another place where we had to deal with handling data across coordinate spaces was with the way Notepad stores and reuses its window placement (position and dimensions). When Notepad is closed, it will store its window placement. The next time it’s launched, it reads this information in an attempt to restore the previous position. Once we started running the main Notepad thread in per-monitor-DPI awareness we ran into a problem: the Notepad window was opening in strange sizes when launched.

What was happening was that in some cases we would store Notepad’s size at one scale factor and then restore it for a different scale factor. If the display configuration of the PC that Notepad was run on hadn’t changed between when the information was stored and when Notepad was launched again, theoretically this wouldn’t have been a problem. However, Windows supports changing scale factors, connecting/disconnecting and rotating displays at will. This meant that we needed Notepad to handle these situations more gracefully.

The solution was again to use mixed-mode scaling, but this time not to leverage Windows’ bitmap-stretching functionality, and instead to normalize the coordinates that Notepad used to set and restore its window placement. This involved changing the thread to a DPI-unaware context when saving the window placement, and doing the same when restoring it. This effectively normalized the coordinate space across all displays and display scale factors, so that Notepad would be restored to approximately the same placement regardless of display-topology changes:

DPI_AWARENESS_CONTEXT previousDpiContext = SetThreadDpiAwarenessContext(DPI_AWARENESS_CONTEXT_UNAWARE);
BOOL ret = SetWindowPlacement(hwnd, wp);
SetThreadDpiAwarenessContext(previousDpiContext);

Once all of these changes were made, we had Notepad scaling nicely whenever the DPI would change and the document text rendering natively for each DPI, which was a big improvement over having Windows bitmap stretch the application on DPI change.

Useful DPI Utilities

While working on Mixed Mode display scaling, we ran into the need to have DPI-aware variants of some commonly used APIs:

A note about GetDpiForSystem: Calling GetDpiForSystem is more efficient than calling GetDC and GetDeviceCaps to obtain the system DPI.

Also, any component that could be running in an application that uses sub-process DPI awareness should not assume that the system DPI is static during the lifecycle of the process. For example, if a thread that is running under DPI_AWARENESS_CONTEXT_UNAWARE awareness context queries the system DPI, the answer will be 96. However, if that same thread switched to DPI_AWARENESS_CONTEXT_SYSTEM and queried the system DPI again, the answer could be different. To avoid the use of a cached — and possibly stale — system-DPI value, use GetDpiForSystem() to retrieve the system DPI relative to the DPI-awareness mode of the calling thread.

What We Didn’t Get To

The Windows 10 Anniversary Update delivers useful APIs for developers who want to add dynamic DPI scaling support to their desktop applications, in particular EnableNonClientDpiScaling and SetThreadDpiAwarenessContext (also known as “mixed-mode”), but there is still some missing functionality that we weren’t able to deliver. Windows common controls (comctl32.dll) do not support per-monitor DPI scaling, and non-client-area DPI scaling is only supported for top-level windows; child-window non-client area, such as child-window scroll bars, does not automatically scale for DPI, even in the Anniversary Update.

We recognize that these, and many other, platform features are going to be needed by developers before they’re fully unblocked from updating their desktop applications to handle display scaling well.

As mentioned in my other post, WPF now offers per-monitor DPI-awareness support as well.

Sample Mixed-Mode Application:

We put together a sample that shows the basics of how to use mixed-mode DPI awareness. The project linked below creates a top-level window that is per-monitor DPI aware and has its non-client area automatically scaled. From the menu you can create a secondary window that uses the DPI_AWARENESS_CONTEXT_SYSTEM_AWARE context so that Windows will bitmap stretch the content when it’s rendered at a different DPI.

https://github.com/Microsoft/Windows-classic-samples/tree/master/Samples/DPIAwarenessPerWindow
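
As a rough sketch of the two pieces the sample puts together (simplified and hedged; CreateSystemAwareWindow is a hypothetical helper, and the window class name and instance handle are assumed to exist in the host application):

// 1) In the per-monitor-aware top-level window procedure, the WM_NCCREATE handler
//    calls EnableNonClientDpiScaling(hWnd) so Windows scales the caption, borders,
//    and other non-client area automatically.

// 2) The secondary window is created while the thread is in the system-aware
//    context; a window keeps the DPI-awareness context that was active when it
//    was created, so Windows bitmap stretches it at other DPIs.
HWND CreateSystemAwareWindow(LPCWSTR windowClass, HINSTANCE hInstance)
{
    DPI_AWARENESS_CONTEXT previous =
        SetThreadDpiAwarenessContext(DPI_AWARENESS_CONTEXT_SYSTEM_AWARE);
    HWND hwnd = CreateWindowExW(0, windowClass, L"System-DPI-aware window",
        WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT, 640, 480,
        nullptr, nullptr, hInstance, nullptr);
    SetThreadDpiAwarenessContext(previous);  // restore the per-monitor context
    return hwnd;
}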

Conclusion

Our aim was to reduce the cost for developers to update their desktop applications to be per-monitor DPI aware. We recognize that there are still gaps in the DPI-scaling functionality that Windows offers desktop application developers and the importance of fully unblocking developers in this space. Stay tuned for more goodness to come.

Download Visual Studio to get started.

The Windows team would love to hear your feedback.  Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.

Building your C++ application with Visual Studio Code


Over the last few months, we have heard a lot of requests about adding the capability for developers to build their C/C++ applications in Visual Studio Code. Task extensibility in Visual Studio Code exists to automate tasks like building, packaging, testing and deploying. This post demonstrates how you can use task extensibility in Visual Studio Code to call compilers, build systems and other external tools, through the following sections:

Installing C/C++ build tools

In order to build your C++ code you need to make sure you have the C/C++ build tools (compilers, linkers and build systems) installed on your box. If you can already build outside Visual Studio Code, you already have these tools set up, so you can move on to the next section.

To obtain your set of C/C++ compilers on Windows you can grab the Visual C++ Build Tools SKU. By default these tools are installed at ‘C:\Program Files (x86)\Microsoft Visual C++ Build Tools’. You only need to do this if you don’t have Visual Studio installed; if you already have Visual Studio installed, you have everything you need.

If you are on a Linux platform which supports apt-get you can run the following commands to make sure you grab the right set of tools for building your C/C++ code.

sudo apt-get install g++
sudo apt-get install clang

On OS X, the easiest way to install the C++ build tools is to install the Xcode command line tools; you can follow this article on the Apple developer forum. I would recommend this instead of installing clang directly, as Apple adds special goodies to their version of the clang toolset. Once installed, you can run these commands in a terminal window to determine where the compiler and build tools you need were installed.

xcodebuild -find make
xcodebuild -find gcc
xcodebuild -find g++
xcodebuild -find clang
xcodebuild -find clang++

Creating a simple Visual Studio Code task for building C/C++ code

To follow this specific section you can go ahead and download this helloworld C++ source folder. If you run into any issues you can always cheat and download the same C++ source folder with a task pre-configured.
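
If you would rather create the file by hand, the walkthrough only needs a single source file; a minimal helloworld.cpp along these lines will do (this is an assumed stand-in, and the exact contents of the downloadable folder may differ slightly):

// helloworld.cpp - minimal stand-in for the downloadable sample source.
#include <iostream>

int main()
{
    std::cout << "Hello, World!" << std::endl;
    return 0;
}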

In Visual Studio Code tasks are defined for a workspace and Visual Studio Code comes pre-installed with a list of common task runners. In the command palette (Ctrl+Shift+P (Win, Linux), ⇧⌘P (Mac)) you can type tasks and look at all the various task related commands.

commands

On executing the ‘Configure Task Runner’ option from the command palette you will see a list of pre-installed tasks as shown below. In the future we will grow the list of task runners for popular build systems, but for now go ahead and pick the Others template from this list.

preinstalledtasks

This will create a tasks.json file in your .vscode folder with the following content:

{
    // See https://go.microsoft.com/fwlink/?LinkId=733558
    // for the documentation about the tasks.json format
    "version": "0.1.0",
    "command": "echo",
    "isShellCommand": true,
    "args": ["Hello World"],
    "showOutput": "always"
}
Setting it up for Windows

The easiest way to setup Visual Studio Code on Windows for C/C++ building is to create a batch file called ‘build.bat’ with the following commands:

@echo off
call "C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat" x64     
set compilerflags=/Od /Zi /EHsc
set linkerflags=/OUT:hello.exe
cl.exe %compilerflags% helloworld.cpp /link %linkerflags%

Please note that the location of the vcvarsall.bat file, which sets up the right environment for building, could be different on your machine. Also, if you are using the Visual C++ Build Tools SKU, you will need to call the following command instead:

call "C:\Program Files (x86)\Microsoft Visual C++ Build Tools\vcbuildtools.bat" x64

Once the build script is ready you can then modify your tasks.json to directly call your batch file on Windows by making the following changes to the automatically generated tasks.json file.

{
   // See https://go.microsoft.com/fwlink/?LinkId=733558
   // for the documentation about the tasks.json format
   "version": "0.1.0",
   "windows": {
      "command": "build.bat",
      "isShellCommand": true,
      "showOutput": "always"
   }
}

Initiate a build by bringing up the command palette again and executing the ‘Run Build Task’ command.

build

This should initiate the build for our C++ application and you should be able to monitor the build progress in the output window.

output

Now even though this is a Windows specific example you should be able to re-use the same series of steps to call a build script on other platforms as well.

Calling Clang and GCC from Visual Studio Code task for building C/C++ code

Alright, let us now see how we can build our C/C++ application without calling an external batch file, by using popular toolsets like GCC and Clang directly, without a build system in play.

To follow this specific section you can go ahead and download this helloworld C++ source folder. If you run into any issues you can always cheat and download the same C++ source folder with a task pre-configured.

Tasks.json allows you to specify qualifiers, like the ‘osx’ qualifier shown below for OS X. These qualifiers allow you to create specific build configurations for your different build targets or, as shown in this case, for different platforms.

  "OS X": {
        "command": "clang++",
        "args": [
            "-Wall",
            "helloWorld.cpp",
            "-v"
          ],
        "isShellCommand": true,
        "showOutput": "always",
        "problemMatcher": {
            "owner": "cpp",
            "fileLocation": [
                "relative",
                "${workspaceRoot}"
            ],
            "pattern": {
                "regexp": "^(.*):(\\d+):(\\d+):\\s+(warning|error):\\s+(.*)$",
                "file": 1,
                "line": 2,
                "column": 3,
                "severity": 4,
                "message": 5
            }
  }

Another thing to highlight in this snippet is the ‘problemMatcher’ section. Visual Studio Code ships with some of the most common problem matchers out of the box, but many compilers and other tools define their own style of errors and warnings. No need to worry: you can create your own custom problem matcher with Visual Studio Code. This site, which helps you test out regular expressions online, might also come in handy.

The pattern matcher here will work well for the Clang and GCC toolsets, so just go ahead and use it. The figure below shows it in effect when you initiate the show problems command in Visual Studio Code (Ctrl+Shift+M (Win, Linux), ⇧⌘M (Mac)).

error
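
If you want to see the matcher fire yourself, a tiny source file like the following (an assumed example, not part of the sample folder), compiled with clang++ -Wall, produces a diagnostic in the file:line:column format the regular expression above expects (something like "warn.cpp:5:9: warning: unused variable 'unused'"):

// warn.cpp - deliberately triggers a -Wall diagnostic so the problem matcher has input to parse.
int main()
{
    int unused = 42;  // -Wunused-variable fires here when building with -Wall
    return 0;
}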

Calling Makefiles using Visual Studio Code task extensibility

Similar to the way you configure tasks.json to call the compiler, you can do the same for makefiles. Take a look at the sample tasks.json below; the one new concept in this file is the nesting of tasks. Both ‘hello’ and ‘clean’ are tasks in the makefile, whereas ‘compile w/o makefile’ is a separate task, but this example should show you how you can set up tasks.json in cases where there are multiple build systems at play. You can find the entire sample here.

Note that this is an OS X/Linux-specific example, but to obtain the same behavior on Windows you can replace ‘bash’ with ‘cmd’ and the ‘-c’ argument with ‘/C’.

{
    // See https://go.microsoft.com/fwlink/?LinkId=733558
    // for the documentation about the tasks.json format
   "version": "0.1.0",
    "osx": {
        "command": "bash",
        "args": ["-c"],
        "isShellCommand": true,
        "showOutput": "always",
        "suppressTaskName": true,
        "options": {
            "cwd": "${workspaceRoot}"
        },
        "tasks": [
            {
                "taskName": "hello",
                "args": [
                    "make hello"
                ],
                "isBuildCommand": true
            },
            {
                "taskName": "clean",
                "args": [
                    "make clean"
                ]
            },
            {
                "taskName": "compile w/o makefile",
                "args": [
                    "clang++ -Wall -g helloworld.cpp -o hello"
                ],
                "echoCommand": true
            }
        ]
    }
}

Two more things to mention here. First, whichever task you associate ‘isBuildCommand’ with becomes your default build task in Visual Studio Code; in this case that would be the ‘hello’ task. Second, if you would like to run the other tasks, bring up the command palette and choose the ‘Run Task’ option.

task1

task2

Then choose the individual task to run, e.g. the ‘clean’ task. Alternatively, you can also wire the build task to a different key binding. To do so, bring up File -> Preferences -> Keyboard Shortcuts and add the following key binding for your task. Bindings currently only exist for build and test tasks, but an upcoming fix in the October release will allow bindings for individual tasks as well.

[
    {
        "key": "f7",
        "command": "workbench.action.tasks.build"
    }
]

Calling MSBuild using Visual Studio Code task extensibility

MSBuild is already a pre-installed task runner that Visual Studio Code comes with. Bring up the command palette and choose MSBuild; this will create the following tasks.json. It should then be easy to add your MSBuild solution or project name to the ‘args’ section and get going.

{
    // See https://go.microsoft.com/fwlink/?LinkId=733558
    // for the documentation about the tasks.json format
    "version": "0.1.0",
    "command": "msbuild",
    "args": [
        // Ask msbuild to generate full paths for file names.
        "/property:GenerateFullPaths=true"
    ],
    "taskSelector": "/t:",
    "showOutput": "silent",
    "tasks": [
        {
            "taskName": "build",

            // Show the output window only if unrecognized errors occur.
            "showOutput": "silent",

            // Use the standard MS compiler pattern to detect errors, warnings and infos
            "problemMatcher": "$msCompile"
        }
    ]
}
Wrap Up

This post provides some guidance, with examples, on how to use Visual Studio Code task extensibility to build your C/C++ application. If you would like us to provide more guidance on this or any other aspect of Visual Studio Code, by all means reach out to us by continuing to file issues on our GitHub page. Keep trying out this experience, and if you would like to shape the future of this extension, please join our Cross-Platform C++ Insiders group, where you can speak with us directly and help make this product the best for your needs.

Join the PowerShell 10th Anniversary Celebration!


On November 14th, PowerShell will have been shipping for 10 years, so the team is going to celebrate with a day-long event, running from 8:00 am to 4:00 pm (PST). This will be streaming live and shown world-wide on the home page for Channel9.msdn.com.  

We’ll have segments on PowerShell and SQL, PowerShell and Azure Automation, and the future directions for PowerShell, and … well, you get the idea. In addition, we’re planning several presentations showing cool ways of using PowerShell, including setting up a Minecraft server, using PowerShell to control sprinklers or an IoT-based theremin, and managing a Tesla (among other things).

There will also be opportunities to hear the team members talk about how the product has evolved, and some of the MVPs talk about community involvement and the new open source engagement. Stay and join in on the coding contests, or come back later for any of the talks above. This is going out on Channel9.msdn.com, so the talks will be available later – even if they aren’t quite as much fun as hearing us live. We will update this blog topic the week before with the schedule of events.

So join us at 8:00am (PST) for the kickoff by Jeffrey Snover and Kenneth Hansen for the PowerShell 10th Anniversary Celebration!  

Azure Government – 3x Growth in 2016 and over 75 new capabilities in the last 90 days


Today I am with over 600 of our federal, state and local government customers and partners at the Government Cloud Forum 2016 in Washington, D.C. The Forum is set to take our audiences through breakthrough technology and solutions impacting citizen engagement, the empowerment of government employees, how agencies can optimize their infrastructures and solutions, and the digital transformation of government services. With this in mind, I wanted to provide a few highlights from the Forum where we are speaking of our continued advancements.

Continued investment in comprehensive compliance - breadth and depth

I’m proud to cite how Azure Government offers more compliance certifications and attestations for mission-critical government workloads than any other cloud service provider. We highlighted recent investments and compliance achievements. And our continued commitment is further evidenced by Microsoft signing with the state of Georgia, bringing the total number of signed CJIS agreements to 24. This is four times more than our nearest competitor. We are also proud of the 13 Microsoft Azure Government cloud services in our FedRAMP scope, including FedRAMP Moderate and High, for which we have achieved authorization; this is two times more services in scope than the nearest cloud service provider.

Tremendous Azure Government services momentum

Azure is committed to bringing breakthrough innovation, the strongest security and incredible new technology to government at the same pace we innovate for all customers, globally. In the past 90 days we have launched over 75 new Azure Government capabilities, with 25 new capabilities in October already. We have an exciting roadmap ahead, so expect to see more innovation for government in the coming days, weeks and months.

I encourage you to go to today’s blog by Curt Kolcun, Vice President, US Public Sector for Microsoft, to read about all our announcements and our momentum for government customers and partners and the powerful, impactful transformations we are helping them accomplish every day.

To experience the power of Azure Government for your organization, sign up for an Azure Government Trial.


Announcing Azure Analysis Services preview


We are pleased to announce the public preview of Microsoft Azure Analysis Services, the latest addition to our data platform in the cloud. Based on the proven analytics engine in SQL Server Analysis Services, Azure Analysis Services is an enterprise grade OLAP engine and BI modeling platform, offered as a fully managed platform-as-a-service (PaaS). Azure Analysis Services enables developers and BI professionals to create BI Semantic Models that can power highly interactive and rich analytical experiences in BI tools (such as Power BI and Excel) and custom applications.

Why Azure Analysis Services?

The success of any modern data-driven organization requires that information is available at the fingertips of every business user (not just IT professionals and data scientists) to guide their day-to-day decisions. Self-service BI tools have made huge strides in making data accessible to business users. However, most business users don’t have the expertise or desire to do the heavy lifting that is typically required to find the right sources of data, consume the raw data and transform it into the right shape, add business logic and metrics, and finally explore the data to derive insights. With Azure Analysis Services, a BI professional can create a semantic model over the raw data and share it with business users so that all they need to do is connect to the model and immediately explore the data and gain insights. Azure Analysis Services uses a highly optimized in-memory engine to provide responses to user queries at the "speed of thought".

Fully managed platform-as-a-service

  • Developers can create a server in seconds, choosing from the Developer (D1) or Standard (S1, S2, S4) service tiers. Each tier comes with fixed capacity in terms of query processing units and model cache. The developer tier (D1) supports up to 3GB model cache and the largest tier (S4) supports up to 100GB.
  • The Standard tiers offer dedicated capacity for predictable performance and are recommended for production workloads. The Developer tier is recommended for proof-of-concept, development, and test workloads.
  • Administrators can pause and resume the server at any time. No charges are incurred when the server is paused. We also plan to offer administrators the ability to scale up and down a server between the Standard tiers (not available currently).
  • Developers can use Azure Active Directory to manage user identity and role based security for their models.
  • The service is currently available in the South-Central US and West Europe regions. More regions will be added during the preview.

Compatible with SQL Server Analysis Services

  • Developers can use SQL Server Data Tools in Visual Studio for creating models and deploying them to the service. Administrators can manage the models using SQL Server Management Studio and investigate issues using SQL Server Profiler.
  • Business users can consume the models in any major BI tool. Supported Microsoft tools include Power BI, Excel, and SQL Server Reporting Services. Other MDX compliant BI tools can also be used, after downloading and installing the latest drivers.
  • The service currently supports tabular models (compatibility level 1200 only). Support for multidimensional models will be considered for a future release, based on customer demand.
  • Models can consume data from a variety of sources in Azure (e.g. Azure SQL Database, Azure SQL Data Warehouse) and on-premises (e.g. SQL Server, Oracle, Teradata). Access to on-premises sources is made available through the on-premises data gateway.
  • Models can be cached in a highly optimized in-memory engine to provide fast responses to interactive BI tools. Alternatively, models can query the source directly using DirectQuery, thereby leveraging the performance and scalability of the underlying database or big data engine.

Azure Analysis Services Overview
Get started with the Azure Analysis Services preview by simply provisioning a resource in the Azure Portal or using Azure Resource Manager templates, and using that server name in your Visual Studio project. Use Azure Active Directory user names (UPNs) or groups in the role memberships for securing access to your models. Give it a try and let us know what you think.

Learn more about Azure Analysis Services.

Top 5 Announcements at PASS Summit 2016


This post is by Joseph Sirosh, Corporate Vice President of the Data Group at Microsoft.

In June, we announced the general availability of SQL Server 2016, the world’s fastest and most price-performant intelligence database for HTAP (Hybrid Transactional and Analytical Processing) with updateable, in-memory columnstores and advanced analytics through deep integration with R Services. As we officially move into the data-driven intelligence era, we continue to bring new capabilities to more applications, environments and users than ever before.

Today, we’re making several announcements to bring even more value to our customers.

1. Public Preview of Azure Analysis Services

PASS_Announce_1

Azure Analysis Services (Azure AS) – Based on the proven analytics engine in SQL Server Analysis Services, Azure AS is an enterprise grade OLAP engine and BI modeling platform, offered as a fully managed platform-as-a-service (PaaS). Azure AS enables developers and BI professionals to create BI Semantic Models that can power highly interactive and rich analytical experiences in BI tools such as Power BI and Excel. Azure AS can consume data from a variety of sources residing in the cloud or on-premises (SQL Server, Azure SQL DB, Azure SQL DW, Oracle, Teradata to name a few) and surface the data in any major BI tool. You can provision an Azure AS instance in seconds, pause it, scale it up or down (planned during preview), based on your business needs. Azure AS enables business intelligence at the speed of thought! For more details, see the Azure blog.

2. SQL Server 2016 DW Fast Track Reference Architecture, 145TB

We have collaborated with a number of our hardware partners on a joint effort to deliver validated, pre-configured solutions that reduce the complexity and drive optimization when implementing a data warehouse based on SQL Server 2016 Enterprise Edition. Today, I am happy to announce the Data Warehouse Fast Track (DWFT) reference architectures that certify that a SQL Server 2016 SMP unit can support active data sets of up to 145TB and maximum user size of 1.2 Petabytes of SQL Data in an all flash system. These reference architectures provide tested and validated configurations and resources to help customers build the right environment for their data warehouse solutions. Following these best practices and guidelines will yield these tangible benefits:

  • Accelerated data warehouse projects with pre-tested hardware and SQL Server configurations.
  • Reduced hardware and maintenance costs by purchasing a balanced hardware solution and optimizing it for a data warehouse workload.
  • Improved ROI by optimizing software assets.
  • Reduced planning and setup costs by leveraging certified reference architecture configurations.
  • Predictable performance by properly configuring and tuning the system.

PASS_Announce_2

3. Data Migration Assistant and Database Experimentation Assistant

Today, I am also happy to announce that we are releasing the Data Migration Assistant (DMA) v2.0. DMA delivers scenarios that reduce the effort to upgrade to the latest SQL Server 2016 from legacy SQL Server versions by detecting compatibility issues that can impact database functionality after an upgrade. It recommends performance and reliability improvements for your target environment and then migrates the entire SQL Server database. Furthermore, DMA provides seamless assessments and migrations to SQL Azure VMs. DMA assessments discover the breaking changes, behavioral changes and deprecated features that can affect your upgrade. DMA also discovers the new features in the target SQL Server platform that your applications can benefit from after an upgrade. DMA is the only tool providing comprehensive data platform movement capabilities, assisting DBAs with more than just schema and data migrations. DMA v1.0 was released for general availability on August 26, 2016. Since then, DMA has been downloaded more than 2,000 times worldwide, assessing more than 25,000 databases (37K cores) with over 1,000 unique users.

Another tool that we are bringing to market today is the Database Experimentation Assistant (DEA). It’s a new A/B testing solution for SQL Server upgrades. It enables customers to conduct experiments on database workloads across two versions of SQL Server. Customers who are upgrading from older SQL Server versions (2005 and above) to any newer version of SQL Server will be able to use key performance insights, captured using a real-world workload, to help build confidence about the upgrade among database administrators, IT management and application owners while minimizing upgrade risks. This enables truly risk-free migrations.

4. Azure SQL Data Warehouse Expanded Free Trial

I am particularly excited to announce the exclusive Azure SQL Data Warehouse free trial. Starting today, customers can request a one-month free trial of Azure SQL Data Warehouse. You can bring your data in, try out the capabilities of SQL Data Warehouse and complete POCs. This is a limited-time offer, so submit your request now here. For PASS attendees (please watch my keynote) we have a special referral code that you can use while requesting the free trial. The SQL Data Warehouse team at PASS can also help you set up your free trial while you are there; you can find them at the Microsoft booth.

PASS_Announce_3

5. Cognitive Toolkit (formerly known as CNTK)

PASS_Announce_5

Starting today, we are announcing the availability of a beta of the Microsoft Cognitive Toolkit (formerly CNTK), a free, easy-to-use, open-source, commercial-grade toolkit that trains deep learning algorithms to learn like the human brain. The Cognitive Toolkit enables developers and data scientists to reliably train faster than other available toolkits on massive datasets across several processors, including CPUs, GPUs and FPGAs, as well as multiple machines. Upgrades include more programming flexibility, advanced learning methods like reinforcement learning, and extended API support for training and inference from Python, C++ and BrainScript, so developers can use popular languages and networks. The Cognitive Toolkit is available to the public under an open-source license, and it is one of the most popular deep learning projects on GitHub. It is used to develop commercial-grade AI in popular Microsoft products like Skype, Cortana, Xbox and Bing. Developers and researchers can start training with the Microsoft Cognitive Toolkit for free by visiting https://aka.ms/cognitivetoolkit.

Learn more about these announcements from my keynote tomorrow morning at PASS Summit 2016 at Washington Convention Center or via live-stream.

PASS_Announce_4

@josephsirosh

Making the world more accessible with Sway


Today’s post was written by Brett Bigham, 2014 Oregon State Teacher of the Year and a recipient of the NEA National Award for Teaching Excellence in 2015.

I’m the first to admit I’m a bit of a dinosaur when it comes to technology. It’s not that I’m not forward thinking, it’s more that I’ve spent way too many hours learning “the new best thing” only to have my district ditch it six months later for a newer, better best thing that only takes twice as long to learn. But Microsoft did something with their new Sway program that has created something every teacher needs. And they did it by asking teachers what they needed.

How Microsoft swayed me

For the past two years, I have participated in Microsoft Sway focus groups at the National Network of State Teachers of the Year (NNSTOY) conference. Groups of tech-savvy and award-winning teachers were brought in to pick our brains about what Sway needed to be. The result is a school-friendly program that allows not only second graders, but second grade teachers, the ability to quickly produce high-quality curriculum-related presentations. The NNSTOY event collection and Sways have been published on Docs.com.

Helping students with autism explore the world

Some people with autism have great anxiety about traveling to new places. To address this in my classroom I have created “Ability Guidebooks”—a series of step-by-step directions in photobook form on how to access community events and destinations. These guidebooks help students become familiar with an upcoming destination. With each successful community outing, they become less fearful of the unknown, making the world easier, safer and less frightening for them.

When I started the project, the guidebooks were for my students in my hometown of Portland, Oregon. We were going to go ride the Aerial Tram, so I went the week before and took pictures of all the steps needed to ride the tram. It was a smashing success and soon I was creating a new Ability Guidebook every week. After they began to stack up I realized that every person with autism in the city could benefit from the books. I started sharing them with other teachers and school districts and word got around that these supports for field trips existed. I was asked to do books for the county libraries and the Portland Airport, and suddenly my city was becoming a leader in supporting community involvement for people with autism.

In 2015, I was named a National Education Association Foundation Global Fellow and was sent to Peru as an ambassador for American Education. This amazing opportunity gave me the chance to create Ability Guidebooks in a whole new country. Since then, I have added 13 new cities, 7 more countries and the guidebooks have been translated into Spanish, Italian and German. There are books for the Parthenon, the Colosseum, St. Paul’s Cathedral and over 40 other international destinations! I have dedicated my free time to creating an international standard of autism supports. But even as I fought for accessibility, my own books were not accessible to everyone.

Using Sway to open doors

In their original format, the Ability Guidebooks were meant to be read online like a book. If you could not read or were blind, you would have to rely on someone reading the book to you. Sway allowed me to overcome this problem. The program is set up not only to work in multiple languages, making translating the work much easier, but also allows for recordings to be added to the presentation. With the touch of a button a student who does not read can choose to play an audio version. By moving my work into Sway, it is opening doors to every student who cannot read or see the books.

You can see an example guidebook built in Sway here:

In partnership with Microsoft, I have launched almost 32 Ability Guidebooks in 3 different languages on Docs.com for anyone to use. I encourage you to add comments to improve the guidebooks and share links to Sways with your ability guidebooks so that we together can build the world’s largest collection of Ability Guidebooks on Docs.com.

making-the-world-more-accessible-with-sway-1

That’s how Microsoft swayed me. They didn’t promise me the new best thing. They sat down with teachers and created a program that will modernize and energize your classroom and your student’s work. I’m not a salesman or a Microsoft employee. What I am is a teacher who has found a valuable classroom tool that is worth sharing.

Have a great year!

—Brett Bigham

The post Making the world more accessible with Sway appeared first on Office Blogs.

How to Train a Deep-Learned Object Detection Model in the Microsoft Cognitive Toolkit


This post is authored by Patrick Buehler, Senior Data Scientist, and T. J. Hazen, Principal Data Scientist, at Microsoft.

Last month we published a blog post describing our work on using computer vision to detect grocery items in refrigerators. This was achieved by adding object detection capability, based on deep learning, to the Open Source Microsoft Cognitive Toolkit, formerly called the Computational Network Toolkit or CNTK.

We are happy to announce that this technology is now a part of the Cognitive Toolkit. We have published a detailed tutorial in which we describe how to bring in your own data and learn your own object detector. The approach is based on a method called Fast R-CNN, which was demonstrated to produce state-of-the-art results for Pascal VOC, one of the main object detection challenges in the field. Fast R-CNN takes a deep neural network (DNN) which was originally trained for image classification by using millions of annotated images and modifies it for the purpose of object detection.

Traditional approaches to object detection relied on expert knowledge to identify and implement so called “features” which highlighted the position of objects in an image. However, starting with the famous AlexNet paper in 2012, DNNs are now increasingly used to automatically learn these features. This has led to a huge improvement in accuracy, opening up the space to non-experts who want to build their own object detectors.

The tutorial itself starts by describing how to train and evaluate a model using images of objects in refrigerators. An example set of refrigerator images, with annotations indicating the positions of specific objects, is provided with the tutorial. It then shows how to annotate your own images using scripts to draw rectangles around objects of interest, and to label these objects to be of a certain class – for instance, an avocado, or an orange, etc. While the example images that we provide show grocery items, the same technology works for a wide range of objects, and without the need for a large number of training images.

The trained model, which can either use the refrigerator images we provided or your own images, can then be applied to new images which were not seen during training. See the graphic below, where the input to the trained model is an image from inside a refrigerator, and the output is a list of detected objects with a confidence score, overlaid on the original image for visualization:

CNTK Fridge 3

Pre-trained image classification DNNs are generic and powerful, allowing this method to also be applied to many other use cases, such as finding cars or pedestrians or dogs in images.

More details can be found on the Cognitive Toolkit GitHub R-CNN page and in the Cognitive Toolkit announcement.

Patrick& TJ

Sync Error Reports in Azure AD Connect Health are now in Public Preview!


Howdy folks!

If you follow this blog regularly, you know that over the past 6 months we’ve been adding a lot of new features to Azure AD Connect Health. It is one of our more widely used Azure AD Premium features, and over 10k customers use it in production.

So today, I’m excited to announce that we’re rolling out the public preview of another new enhancement to Azure AD Connect Health: Sync Error Reports! This enhancement really rounds out the Azure AD Connect feature set, making it easy and efficient for you to monitor the health of your hybrid identity control plane.

I asked Varun Karandikar, one of the Program Managers in our team, to write a blog about Sync Error Reports. His blog is below.

But if you are the kind of person who just wants to jump in, you can get started by installing or upgrading to the latest version of Azure AD Connect (version 1.1.281.0 or higher). That’s all it takes!

I hope you find this new capability useful and as always, we would love to receive your feedback, questions, and suggestions.

Best Regards,

Alex Simons (Twitter: @Alex_A_Simons)

Director of Program Management

Microsoft Identity Division

—-

Hello,

I’m Varun Karandikar, a Program Manager on the Azure Active Directory team. As you already know, Azure AD Connect Health, a feature of Azure AD Premium, lets you monitor and gain insights into your hybrid identity infrastructure. Today I’m excited to let you know that we’re adding a new capability within Azure AD Connect Health for Sync that makes it easy to report on any synchronization errors that might occur when syncing data from on-premises AD to Azure AD using Azure AD Connect.

This preview release is available for all Azure AD Premium customers.

What we learned from you, our customers

As we worked to design this enhancement, we talked to a ton of customers to understand the kind of sync errors people were running into. We learned a lot from doing this:

  • Wading through the sync logs (parsing XML) or using email addresses can be super time-consuming.
  • It is really challenging to identify common patterns when there are a large number of errors.
  • It is challenging to pinpoint specific reasons for many sync failures.
  • There isn’t a lot of guidance on how to fix syncing problems.
  • Trying to get to the root cause of a sync error required querying Azure AD and the underlying AD, which isn’t easy to do.
  • There is no easy way for a helpdesk professional to search within the error reports when trying to help a user.

With these challenges in mind, we focused on building a solution that addresses each of them by making it easy for admins to access rich sync error reports and to find tips and tricks for addressing the errors.

What the report provides

With Azure AD Connect Health for Sync you get a simple visual report of any synchronization errors that occur during an export operation to Azure AD on your active (non-staging) Azure AD Connect server. The report is available in the new Azure Portal.

A few of the key capabilities of this new feature include:

A quick count of the total number of errors is available at a glance

clip_image002

This gives a quick count of the total number of errors.

Automatic categorization of errors based on error type and likely cause

Errors are categorized based on type and the potential root cause and include:

  • Duplicate Attribute: Sync errors due to a conflict between two objects for an attribute that must be unique in Azure AD.
  • Data Mismatch: Sync errors due to data mismatches causing the soft-match mechanism to fail.
  • Data Validation Failure: Sync errors due to invalid data, including bad characters in the UPN, Display Name, UPN format, etc., that fail validation before being written to Azure AD.
  • Large Attribute: Sync errors due to attribute values or objects exceeding the allowed limits of size, length, count, etc.
  • Other: A catch-all bucket to capture errors that don’t fit in the above categories.

Read more about troubleshooting each category of synchronization errors.

clip_image004

Easily drill down into each category for a detailed view of each error

Selecting a category shows you the list of objects in that category that have errors. You can then select a specific entry to see the details of the error, including the description, the AD object, Azure AD object, and links to relevant articles with tips on how to fix the error.

clip_image006

Role Based Access Control makes it easy to roll out securely

Azure AD Connect Health supports our Role Based Access Control. This means you can give users like helpdesk admins access to the report without requiring global admin privileges.

Getting started

1. Install or upgrade to the latest version of Azure AD Connect (version 1.1.281.0 or higher). That’s it!

  • Note that if you have auto update enabled, you may already be running the latest version.
  • Ensure that the Azure AD Connect Health Agent for sync has outbound connectivity to the Health Service. Read our installation documentation to find out more about requirements.

2. Visit the Azure AD Connect Health portal and click on the Sync Errors section to view the report about your existing sync errors.

We hope that with this new feature you’re able to understand and resolve Azure AD sync errors with greater efficiency and ease by having all the data in one place.

As always, we’d love to hear your feedback. If you have any feedback, questions, or issues to report, please leave a comment at the bottom of this post, send a note to the Azure AD Connect Health team, or tweet with the hashtag #AzureAD.

Thank you,

Varun Karandikar (@varundikar)
