Channel: TechNet Technology News

Learn how to build the next generation of intelligent apps at this free Microsoft AI Workshop


The Microsoft AI Immersion Workshop is being held on Tuesday, May 9th, at the W Hotel in Seattle. If you are a developer interested in creating the next generation of intelligent apps and enterprise-grade solutions using the latest AI and machine learning techniques, this free workshop is for you. This is an in-person event, and capacity is limited, so register now to reserve your spot.

Register

The Workshop includes a keynote talk covering the breadth and depth of Microsoft’s AI investments and offerings, followed by five deep technical tutorials on the topics below. Seasoned software engineers and data scientists from Microsoft – people who are building some of the world’s most advanced AI and ML technologies – will run these hands-on tutorials.

  1. Applied Machine Learning for Developers.
  2. Big AI – Applying Artificial Intelligence at Scale.
  3. Weaving Cognitive and Azure Services to Provide Next-Generation Intelligence.
  4. Deep Learning and the Microsoft Cognitive Toolkit.
  5. Building Intelligent SaaS Applications.


Tutorials will focus on hands-on projects; session abstracts and instructor names are available in the original post here. We hope to see many of you at this event in Seattle!

SQL Server blog team


Quick Measures Preview

Quick measures, a new feature we released in our April Power BI Desktop update, lets you quickly create new measures based on measures and numerical columns in your table. These new measures become…

Announcing general availability of Azure HDInsight 3.6


This week at DataWorks Summit, we are pleased to announce the general availability of Azure HDInsight 3.6, backed by our enterprise-grade SLA. HDInsight 3.6 brings updates to various open source components in the Apache Hadoop and Spark ecosystem to the cloud, allowing customers to deploy them easily and run them reliably on an enterprise-grade platform.

What’s new in Azure HDInsight 3.6

Azure HDInsight 3.6 is a major update to the core Apache Hadoop and Spark platform as well as to various open source components. HDInsight 3.6 includes the latest Hortonworks Data Platform (HDP) 2.6, a collaborative effort between Microsoft and Hortonworks to bring HDP to market cloud-first. You can read more about this effort here.

HDInsight 3.6 GA also builds upon the public preview of 3.6, which included Apache Spark 2.1. We would like to thank everyone who tried the preview and provided feedback, which has helped us improve the product.

Apache Spark 2.1 is now generally available, backed by our existing SLA. We are introducing capabilities to support real-time streaming solutions through Spark integration with Azure Event Hubs and the structured streaming connector for Kafka on HDInsight. This allows customers to use Spark to analyze millions of real-time events ingested into these Azure services, enabling IoT and other real-time scenarios. HDInsight 3.6 supports only Apache Spark 2.1 and above; older versions such as 2.0.2 are not supported. Learn more on how to get started with Spark on HDInsight.

Apache Hive 2.1 enables roughly 2x faster ETL with robust SQL-standard ACID merge support and many more improvements. This release also includes an updated preview of Interactive Hive using LLAP (Live Long and Process), which enables up to 25x faster queries.  With the new version of Hive, customers can expect sub-second query performance, enabling enterprise data warehouse scenarios without the need for data movement. Learn more on how to get started with Interactive Hive on HDInsight.

This release also includes the new Hive View 2.0, which provides an easy-to-use graphical user interface for developers getting started with Hadoop. Developers can use Hive View 2.0 to easily upload data to HDInsight, define tables, write queries and get insights from data faster. The following screenshot shows the new Hive View 2.0 interface.


We are expanding our interactive data analysis by including the Apache Zeppelin notebook alongside Jupyter. The Zeppelin notebook is pre-installed when you use HDInsight 3.6, and you can easily launch it from the portal. The following screenshot shows the Zeppelin notebook interface.


Getting started with Azure HDInsight 3.6

It is very simple to get started with Azure HDInsight 3.6 – simply go to the Microsoft Azure portal and create an Azure HDInsight service.

HDInsight in Azure portal 

Once you’ve selected HDInsight, you can pick the specific version and workload based on your desired scenario. Azure HDInsight supports a wide range of scenarios and workloads, with Hive, Spark, Interactive Hive (Preview), HBase, Kafka (Preview), Storm, and R Server as options you can select from. Learn more about creating clusters in HDInsight.


Once you’ve completed the wizard, the appropriate cluster will be created. Apart from the Azure portal, you can also automate creation of the HDInsight service using the Command Line Interface (CLI). Learn more on how to create a cluster using the CLI.
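
If you script your deployments, here is a rough sketch using the AzureRM PowerShell module (the cluster name, resource group, location, size and storage values below are illustrative placeholders, not defaults):

# A minimal sketch, not a complete deployment script.
Login-AzureRmAccount
New-AzureRmHDInsightCluster `
    -ClusterName "contoso-hdi36" `
    -ResourceGroupName "contoso-rg" `
    -Location "East US" `
    -ClusterType Spark `
    -Version "3.6" `
    -ClusterSizeInNodes 4 `
    -HttpCredential (Get-Credential -Message "Cluster login") `
    -DefaultStorageAccountName "contosostore.blob.core.windows.net" `
    -DefaultStorageAccountKey "<storage account key>" `
    -DefaultStorageContainer "contoso-hdi36"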

We hope you like the enhancements included in this release. Here are some resources to learn more about the HDInsight 3.6 release:

Learn more and get help


AXA Group accelerates digital transformation with the Microsoft Cloud


Today’s Office 365 post was written by Ron Markezich, corporate vice president at Microsoft.

When one of the world’s leading financial institutions invests in Microsoft Office 365 and Azure, it’s a testament to the confidence business leaders have in the security, privacy, and compliance enabled by the Microsoft Cloud. As a group, AXA has globally adopted our comprehensive cloud services, taking a leadership role in the industry and enabling a modern, secure workplace. Rather than looking at the global deployment of Office 365 as yet another routine upgrade, IT leaders are taking advantage of a cloud platform that empowers employees with innovative tools available on any device.

Matt Potashnick, CIO for AXA UK and Ireland, talks about the digital transformation underway:

“At AXA UK, we are now taking our applications to the cloud with Azure and enabling collaboration and mobility across the organization with Office 365. This is key for us to reach out to new customers and channels with smart devices; we will grow business market share by keeping our technology and daily operations on the leading edge. By looking to all aspects of the Microsoft Cloud, we can collaborate securely both within our organization and with external partners. We have revealed a more agile way of working that helps us simplify access to information, promote insights and analytics across the business, and remain competitive without sacrificing our essential security and compliance concerns.”

Already, 60 percent of the AXA global workforce is in the cloud. It’s great to see their continued success with Office 365. Our goal is to help customers of any size, geography and industry digitally transform through empowering employees. The bar is high for technology companies that want to provide highly regulated insurance and financial institutions with cloud productivity services. For global firms navigating the complex regulatory landscape like AXA, Office 365 and Azure provide the solutions they need to help meet their compliance needs wherever they do business. Our relationship with AXA proves that the Microsoft Cloud can drive innovation in any industry.

—Ron Markezich

The post AXA Group accelerates digital transformation with the Microsoft Cloud appeared first on Office Blogs.

Announcing the release of Threat Intelligence and Advanced Data Governance, plus significant updates to Advanced Threat Protection


Today, we’re pleased to announce several enhancements that bolster Office 365’s security and compliance capabilities.

With the launch of Office 365 Threat Intelligence, we are enriching security in Office 365 to help customers stay ahead of the evolving threat landscape. Today, we’re also introducing a new reporting interface to improve the customer experience for Advanced Threat Protection (ATP) and extending the ATP Safe Links feature to Word, Excel and PowerPoint for Office 365 ProPlus desktop clients.

Office 365 Advanced Data Governance also launches today, providing our customers with robust compliance capabilities. A new policy management interface for Data Loss Prevention (DLP) helps Office 365 customers remain compliant and in control of their data.

Let’s take a closer look at these enhancements.

Enhancing threat protection—a path to proactive cyber-defense with Office 365 Threat Intelligence

According to a recent Ponemon Institute study,* the average cost of a data breach has risen to $4 million, with costs incurred for litigation, brand or reputation damage, lost sales—and in some cases—complete business closure. Staying ahead of threats has never been more important.

Office 365 Threat Intelligence, now generally available, provides:

  • Interactive tools to analyze prevalence and severity of threats in near real-time.
  • Real-time and customizable threat alert notifications.
  • Remediation capabilities for suspicious content.
  • Expansion of the Management API to include threat details—enabling integration with SIEM solutions.

To provide actionable insights on global attack trends, Threat Intelligence leverages the Microsoft Intelligent Security Graph, which analyzes billions of data points from Microsoft global data centers, Office clients, email, user authentications, signals from our Windows and Azure ecosystems and other incidents that impact the Office 365 ecosystem.

It provides information about malware families, both inside and outside your organization, including breach information with details down to the actual lines of code used for certain types of malware. Threat Intelligence also integrates seamlessly with other Office 365 security features, like Exchange Online Protection and ATP—providing you with an analysis that includes the top targeted users, malware frequency and security recommendations related to your business.

For an overview of Threat Intelligence, watch the following video:

Threat Intelligence is included in the Office 365 Enterprise E5 plan or as a standalone service. Visit Threat Intelligence—Actionable insights for global threats to learn more.

New Office 365 Advanced Threat Protection (ATP) reporting interface

The new reporting interface for Office 365 Advanced Threat Protection (ATP) reports is now available in the Office 365 Security & Compliance Center. These security reports provide insights and trends on the health of your organization, including information about malware and spam sent or received in your organization and advanced threat detections that Office 365 ATP helped discover and stop.

Using the new report interface, admins can schedule reports to be sent directly to their inbox, request custom reports and download or manage these reports through dashboards in the Security & Compliance Center. In our continued journey to provide our customers with the most powerful and robust advanced security solution, this new reporting interface helps you understand how ATP mitigates today’s most sophisticated threats from impacting your organization.

The new ATP reporting interface.

Extending ATP Safe Links to Office 365 ProPlus desktop clients

Later this month, we will enable ATP for Office 365 ProPlus desktop clients, a unique demonstration of the power of collaboration across the Microsoft ecosystem. As cyber criminals broaden the scope of attacks beyond email workloads, it’s necessary to extend security capabilities beyond email. The Safe Links feature in ATP protects customers from malicious links in email.

Safe Links is integrated across Outlook desktop, web and mobile to help protect a user’s inbox across devices. When a user clicks a link in an Office 365 client application (Word, Excel or PowerPoint), ATP will inspect the link to see if it is malicious. If the link is malicious, the user is redirected to a warning page instead of the original target URL, protecting the user from compromise. This new capability further integrates and expands security across Office 365. Our intent has always been to provide our customers with an end-to-end, unified and secure experience across all of Office 365, and this extension of Safe Links is another step toward that goal.

Ensuring compliance—why Office 365 Advanced Data Governance matters

As the amount of electronic data grows exponentially, many organizations are exposing themselves to risk by retaining unnecessary data. For example, many organizations continue to retain the personal information of former employees who left the company long ago. If this data were compromised in a breach, the company could be liable for costly remediation, such as lifetime credit monitoring for these former employees.

Office 365 Advanced Data Governance applies machine learning to help customers find and retain important data while eliminating trivial, redundant and obsolete data that could cause risk if compromised.

Advanced Data Governance, also now generally available, delivers the following capabilities:

  • Proactive policy recommendations and automatic data classifications that allow you to take actions on data—such as retention and deletion—throughout its lifecycle.
  • System default alerts to identify data governance risks, such as “Unusual volume of file deletion,” as well as the ability to create custom alerts by specifying alert matching conditions and thresholds.
  • The ability to apply compliance controls to on-premises data by intelligently filtering and migrating that data to Office 365.

Customers are already seeing value from Advanced Data Governance. Tom Stauffer, vice president of Records and Information Management for the Walt Disney Company, says:

“Effective governance of unstructured information across communication, content and social platforms has long been a goal of organizations. Microsoft Office 365 Advanced Data Governance appears to provide a well-thought-out solution that is integrated into their entire Office 365 suite. This functionality and integration provides the powerful potential of delivering on this long-sought-after goal, and doing so without a major burden to end users.”

In the coming months, we will be delivering additional Advanced Data Governance enhancements, such as event-based retention, manual disposition and supervision.

Learn more about Advanced Data Governance in this episode of Microsoft Mechanics:

Office 365 Advanced Data Governance is included in the Office 365 Enterprise E5 plan. It is also available as part of the Office 365 Advanced Compliance plan—which also includes Office 365 Advanced eDiscovery and Customer Lockbox—providing a comprehensive set of compliance capabilities.

For a deeper demo of Office 365 Advanced Data Governance, watch this presentation.

For additional information about Advanced Data Governance, please see these TechNet articles:

Enhanced Office 365 Data Loss Prevention (DLP) management experience

Customers all over the world use Data Loss Prevention (DLP) policies in Office 365 to help prevent sensitive information from getting into the wrong hands. Because of your feedback, we put DLP management front and center, providing quick access to content protection policies, app permissions and device security policies—all in one place.

It’s now easier than ever to configure and enforce sensitive data policies across your organization using the new DLP management experience in the Office 365 Security & Compliance Center. The new Policy page shows you important information about your current DLP policies at a glance, with detailed audit reports just a click away. It’s also easier to turn on and configure DLP—simply choose what you want to protect, then specify any special conditions to look for and the automatic actions you want to enforce to protect your important data. You can also go into the advanced settings to access additional customization and configuration options to help meet your specific compliance requirements. Learn more in this article.

The enhanced DLP management experience makes it easier to create and manage policies.
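
If you prefer to script DLP, the same kinds of policies can be created through the Security & Compliance Center PowerShell cmdlets. A minimal sketch, assuming you have already connected a remote PowerShell session to the Security & Compliance Center (the policy name, rule name and sensitive information type below are illustrative):

New-DlpCompliancePolicy -Name "Finance PII Policy" -ExchangeLocation All -SharePointLocation All -Mode Enable
New-DlpComplianceRule -Name "Block credit card numbers" `
    -Policy "Finance PII Policy" `
    -ContentContainsSensitiveInformation @{Name="Credit Card Number"} `
    -BlockAccess $true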

Join our Security, Privacy & Compliance tech community

These new features help broaden and enhance the scope of security and compliance capabilities within Office 365. To further evolve your organization’s security and compliance with these services, join our Security, Privacy & Compliance tech community. It is a great resource to communicate and learn from your peers, as well as offer your insights on the growing importance of security, privacy and compliance.

*Ponemon Institute, sponsored by IBM, Cost of a Data Breach Report (2016)

The post Announcing the release of Threat Intelligence and Advanced Data Governance, plus significant updates to Advanced Threat Protection appeared first on Office Blogs.

Vote for the #ExcelWorldChamp


Last year, we announced the #ExcelWorldChamp competition. From October to November 2016, Microsoft ran four rounds of Excel tests for residents of select countries. The top competitors in each round made it through to the next level—until there was one Excel champ from each country!

Now, these country champions are competing head-to-head in one last round to declare the #ExcelWorldChamp and win a trip to Seattle, Washington, U.S.A., to meet with the Excel product team.

Help us determine the winner!

Submissions for the data visualization challenge are now posted on the #ExcelWorldChamp website. Visit the site anytime between April 4 and April 6, 2017 and vote for your favorite submissions. Your vote will help determine each contestant’s final score. Join us at the #ExcelWorldChamp website on April 7, 2017 at 2 p.m. UTC / 10 a.m. EDT for our LIVE announcement of the winner!

The post Vote for the #ExcelWorldChamp appeared first on Office Blogs.

The week in .NET – On .NET on SonarLint and SonarQube, Happy birthday .NET with Dan Fernandez, nopCommerce, Steve Gordon


Previous posts:

On .NET

Last week, I spoke with Tamás Vajk and Olivier Gaudin about SonarLint and SonarQube:

This week, we’ll have Sébastien Ros on the show to talk about modular ASP.NET applications, as they are implemented in Orchard Core. We’ll take questions on Gitter, on the dotnet/home channel and on Twitter. Please use the #onnet tag. It’s OK to start sending us questions in advance if you can’t do it live during the show.

Happy birthday .NET with Dan Fernandez

We caught up with Dan Fernandez at the .NET birthday party last month to talk about the good old days and the crazy idea he had of giving away Visual Studio for free. Dan was also part of the original Channel9 crew and one of the best .NET evangelists out there. Happy birthday .NET!

Project of the week: nopCommerce

nopCommerce is a popular open-source e-commerce system built on ASP.NET MVC, Autofac, and Entity Framework. It’s been downloaded 1.8 million times, has more than a hundred partners, and is used by popular brands such as Volvo, BMW, Puma, Reebok, Lacoste, and many more.


Blogger of the week: Steve Gordon

Steve Gordon‘s blog posts are deep dives into ASP.NET. There’s no better place to learn about what’s going on when a request is processed by ASP.NET Core than his ASP.NET Core anatomy series. This week, we’re featuring two of Steve’s posts.

Meetups of the week: community lightning talks in Seattle

Lightning talks are a great way to keep things focused and fun. The Mobile .NET Developers group in Seattle hosts five of those on Wednesday night at 6:30PM.

.NET

ASP.NET

C#

F#

New F# Language Suggestions:

Check out F# Weekly for more great content from the F# community.

VB

Xamarin

Azure

UWP

Data

Game Development

And this is it for this week!

Contribute to the week in .NET

As always, this weekly post couldn’t exist without community contributions, and I’d like to thank all those who sent links and tips. The F# section is provided by Phillip Carter, the gaming section by Stacey Haffner, the Xamarin section by Dan Rigby, and the UWP section by Michael Crump.

You can participate too. Did you write a great blog post, or just read one? Do you want everyone to know about an amazing new contribution or a useful library? Did you make or play a great game built on .NET?
We’d love to hear from you, and feature your contributions on future posts:

This week’s post (and future posts) also contains news I first read on The ASP.NET Community Standup, on Weekly Xamarin, on F# weekly, and on Chris Alcock’s The Morning Brew.

Team Services Large Account User Management Roadmap (April 2017)


As the use of Visual Studio Team Services continues to grow and the size of teams in the cloud grows, we have been working to better support user management scenarios in large accounts. We have heard the pains of administrators of large accounts, particularly having to manage the access of each user individually and not having an easy way to assign resources to certain sets of users. I want to share how we are improving those scenarios in the first half of this year.

As always, the timelines and designs shared in this post are subject to change.

Bulk Edit for Users

Today, administrators have to manage access levels, extensions, and group memberships individually. This works for small accounts, but administrators of large accounts are left without a way to edit multiple users at once. This is where our new bulk edit feature comes into play. With bulk edit, you will be able to select multiple users and edit their access levels, extensions, or group memberships at once.

AAD Group Support

What a user has access to in a Team Services account is controlled by which of the following resources are assigned to them:

  • Access Level: This controls what core features a user has visibility of (Basic features or Stakeholder features)
  • Extensions: Add-ons that customize the VSTS experience
  • Group memberships: These control a user’s ability to use features across VSTS

In Team Services today, AAD groups can be used to assign group memberships to the users in them. This makes it very easy to manage what specific actions a user can or cannot do in the product based on how you have categorized them in AAD.

We are taking this concept and bringing it to access levels and extensions as well. With this work, you will be able to assign access levels and extensions to groups. Adding someone to the AAD group will automatically grant them the correct access levels and extensions when they access the VSTS account.  As a result, you will no longer have to manage access levels and extensions on an individual basis.

Over the next year, we will be working to enable administrators to:

  • Assign extensions and access levels to AAD groups
  • Manage the resources of users via AAD groups
  • Set up AAD groups to automatically purchase necessary resources for new users

Future

We are working to deliver the features described above to Team Services within the first half of the year and are still working through what these improvements will look like on-premises. This is just the beginning of our improvements to the user management experience for administrators of large accounts. We are also working on prioritizing:

  • Supporting licensing-only and security-only administrators
  • Improved B2B invitation experiences
  • Improved project-level and team-level user management

We know we still have a lot more work to do in this space, and we look forward to hearing your feedback along the way.

Thanks,

Ali Tai

VSTS & TFS Program Manager

 


Uninstalling and reinstalling the Windows Server 2012 R2 Failover Clustering feature


In some cases, it may be necessary to uninstall and reinstall the Windows Failover Clustering feature on a server that is currently a member of a Failover Cluster.  This can be done via either Server Manager or PowerShell.  Below are the steps to complete the process using each method.  These instructions document the process for Windows Server 2012 R2; however, the steps are similar for other versions of Windows.

Uninstall the Windows Failover Clustering feature via Server Manager

Complete the following steps with a user account that has administrative rights over the cluster, from any server that has access to the cluster.  If the desired server is not listed, first add it by clicking Manage>Add Servers in Server Manager.

  1. In the Nodes view of Failover Cluster Manager, right-click on the node where the Failover Clustering feature is being uninstalled, select More Actions>Evict, and wait for the node to be removed from the Nodes view.
  2. Open Server Manager and click Manage >Remove Roles and Features.
    • If prompted, click Next on the “Before you begin” window.
  3. Select the server where the Failover Clustering feature is being uninstalled, and click Next.
  4. Click Next on the “Remove server roles” window.
  5. On the “Remove features” window, deselect the checkbox next to “Failover Clustering” and click Next.
    • If prompted, deselect the checkbox next to “Remove management tools (if applicable)” to retain the Failover Cluster Management Tools and PowerShell modules, if desired, and click Continue.
  6. On the “Remove features” window, click Next.
  7. Select the checkbox next to "Restart the destination server automatically if required", confirm the restart prompt if one appears, and click Remove.
  8. The server will be rebooted when the feature is successfully removed.

Install/reinstall the Windows Failover Clustering feature and add the node to a cluster via Server Manager and Failover Cluster Manager

Complete the following steps with a user account that has administrative rights over the cluster, from any server that has access to the cluster.  If the desired server is not listed, add it by clicking Manage>Add Servers in Server Manager.

  1. Open Server Manager and click Manage>Add Roles and Features.
  2. If prompted, click Next on the “Before you begin” and “Select installation type” windows, then select the server on which the Failover Clustering feature is to be installed.
  3. Click Next on the “Select server roles” window.
  4. Select Failover Clustering on the “Select features” window, and click Next.
    • If prompted, select the checkbox next to “Include management tools (if applicable)” to install the Failover Cluster Management Tools and PowerShell modules, if desired, and click “Add Features”.
  5. Select the checkbox next to "Restart the destination server automatically if required", confirm the restart prompt if one appears, and click Install.
  6. The server will be rebooted when the feature is successfully installed.
  7. When the reboot is complete, open Failover Cluster Manager, click "Add Node", and follow the instructions in the "Add Node Wizard" to add the node to a cluster.

Uninstall the Windows Failover Clustering feature via PowerShell

Complete the following steps from an elevated PowerShell prompt with a user account that has administrative rights over the cluster, from any server that has local or remote access to the cluster.  If necessary, run the following commands first to import the Failover Clustering and Server Manager PowerShell modules:

Import-Module FailoverClusters
Import-Module ServerManager

Once the modules have been loaded or verified:

  1. Run the following command to evict the node where the Failover Clustering feature is being uninstalled:
    Remove-ClusterNode -Name <NodeName> -Cluster <ClusterName>
    • Select Yes if prompted for confirmation.
  2. Run the following command to confirm that the desired node is not listed as a member of the cluster:
    Get-ClusterNode -Cluster <ClusterName>
  3. Run the following command to remove the Failover Clustering feature from and reboot the desired node:
    Remove-WindowsFeature Failover-Clustering -ComputerName <NodeName> -Restart

Install or reinstall the Windows Failover Clustering feature and add the node back to the cluster via PowerShell

Complete the following steps from an elevated PowerShell prompt with a user account that has administrative rights over the cluster, from any server that has local or remote access to the cluster.  If necessary, run the following commands first to import the Failover Clustering and Server Manager PowerShell modules:

Import-Module FailoverClusters
Import-Module ServerManager

Once the modules have been loaded or verified:

  1. Run the following command to install the Failover Clustering feature on and reboot the desired node:
    Install-WindowsFeature Failover-Clustering -ComputerName <NodeName> -Restart
  2. When the reboot is complete, run the following command to add the node back to the cluster:
    Add-ClusterNode -Name <NodeName> -Cluster <ClusterName>
  3. Run the following command to confirm that the desired node is listed as a member of the cluster:
    Get-ClusterNode -Cluster <ClusterName>

I hope you find this information useful.  Happy clustering!

Eriq Stern
Support Escalation Engineer
Windows High Availability Team

Download TV shows and movies from Netflix to your Windows 10 PC


Download your favorite TV shows and movies like Dave Chappelle, Stranger Things, The Crown, Narcos, Master of None, BoJack Horseman and more.

Here’s how to get started:

If you haven’t already downloaded the Netflix app for Windows 10, you can do that here. Once you’ve launched the app you can check out all the downloadable titles in “Available for Download” from the Netflix menu, or look for the download icon next to a specific movie or TV show.


Tap the download icon next to a movie or episode of a TV show, and you’ll find it in the “My Downloads” section of the app.

*Downloadable content can vary by country

The post Download TV shows and movies from Netflix to your Windows 10 PC appeared first on Windows Experience Blog.

Introducing DSCEA


***This post was written by Ralph Kyttle, PFE, and back linked to the original post. The original post can be found at: https://blogs.technet.microsoft.com/ralphkyttle/2017/03/21/introducing-dscea/

Hello, World!

I am incredibly excited to announce the open source release of DSC Environment Analyzer (DSCEA), a PowerShell module that uses the declarative nature of Desired State Configuration to scan systems in an environment against a defined reference MOF file and generate compliance reports as to whether systems match the desired configuration.

DSCEA includes a customizable reporting engine that can provide reports on overall compliance and details on any DSC resource found to be non-compliant.  DSCEA utilizes the DSC processing engine and DSC resources to audit and report on the existing configuration of machines in an environment.

By using PowerShell Desired State Configuration at its core, DSCEA obtains some unique advantages. Most notably, by defining the desired configuration state using DSC, an admin can benefit from using the same work to both scan for compliance and then correct items that were found to be non-compliant. Building an audit file in DSC can help ease remediations, and in some cases it can be as simple as applying the same MOF file that was used to scan an environment onto systems to correct drift and bring things into the desired state.
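
A rough sketch of that remediation idea, using the built-in DSC cmdlets rather than anything DSCEA-specific (the path and computer name are placeholders, and the folder must contain a MOF named after the target node):

# Push the same configuration used for auditing to correct drift.
Start-DscConfiguration -Path C:\DSCEA\MOF -ComputerName dsctest-1 -Wait -Verbose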

DSCEA is hosted at https://github.com/Microsoft/DSCEA and can be downloaded from the PowerShell Gallery.

DSCEA documentation is hosted at https://microsoft.github.io/DSCEA

So, now that DSCEA is available, what does that mean for you?

DSCEA’s primary use case is to verify that your systems are actually configured the way you want them to be.

Real world examples of how DSCEA can be used include:

  • Verifying a single setting, for example if a registry key is set appropriately across your entire environment
  • Auditing your systems to ensure that they meet the base level system configuration settings that are required to be a part of your environment
  • Scanning the systems in your environment against all of the items that make up your organization’s security baseline
  • Verifying that settings configured via Group Policy are being applied correctly to the systems in your environment
  • Verifying settings configured on Windows Server 2016 Nano servers (which do not support Group Policy)
Let’s take a look at the first example to see how DSCEA can be used to verify that certain registry keys are set correctly on a group of systems in an environment.

    First, you will need to install the DSCEA PowerShell module onto the computer that will act as your management system, from which you will execute the scan.  Click here for instructions on installing the DSCEA module.

    Next, you need to create a DSC configuration that defines the desired state you would like to scan for.  Here is an example that defines a desired state for the security-related crashonauditfail registry key.

    configuration DSCEARegistryTest1 {
        param([string[]]$ComputerName='localhost')
    
        Import-DscResource -ModuleName PSDesiredStateConfiguration
        
        Node $ComputerName {   
    
            Registry 'CrashOnAuditFail' {
                Ensure    = 'Present'
                Key       = 'HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa'
                ValueName = 'crashonauditfail'
                ValueType = 'Dword'
                ValueData = '1'
            }  
    
        }            
    }
    
    DSCEARegistryTest1 -OutputPath .\


    Run the DSCEARegistryTest1.ps1 file, which creates an output file called localhost.mof in your current directory; you will then use this file to scan systems in your environment.

    Click here for examples on how to execute a DSCEA scan.  One example is included below.

    Scan multiple systems for compliance to settings defined within a localhost.mof file located in your current directory
    PS C:\> Start-DSCEAscan -MofFile .\localhost.mof -ComputerName dsctest-1, dsctest-2, dsctest-3

    This command executes a DSCEA scan against 3 remote systems, dsctest-1, dsctest-2 and dsctest-3, using a locally defined MOF file that exists in the current directory. This MOF file specifies the settings to check for during the scan. Start-DSCEAscan returns an XML results file containing raw data that can be used with other functions, such as Get-DSCEAreport, to create reports with consumable information.

    Generate HTML Reports based on scan results – System Compliance

    Once you have completed a scan, you will want to generate reports.  Click here for instructions on creating HTML based reports with DSCEA.  Two examples are included below.

    PS C:\Users\username\Documents\DSCEA> Get-DSCEAreport -Overall

    This command is executed from a directory that contains DSCEA scan result XML files. It generates a report of overall system compliance, and will mark a system as non-compliant if it does not fully match the desired configuration.

    Example HTML Report #1


    Next we will use Get-DSCEAreport to generate a report showing all non-compliant items that were detected.

    PS C:\Users\username\Documents\DSCEA> Get-DSCEAreport -Detailed

    This command is executed from a directory that contains DSCEA scan result XML files. It generates a report containing a list of all items that were found to be non-compliant. If all systems that are scanned are found to be compliant, this report will show no results.

    Example HTML Report #2

    Feeling creative?  DSCEA XML data can be converted into CSV for use with other tools like Power BI.
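
    A quick sketch of that idea, assuming the scan results file was written with Export-Clixml (the file name and the property names selected below are illustrative – adjust them to the shape of your results):

    $results = Import-Clixml -Path .\results.dscea.xml
    $results |
        Select-Object PSComputerName, ResourceId, InDesiredState |
        Export-Csv -Path .\DSCEA-results.csv -NoTypeInformation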

    This post is just the first of many, as we will continue to explore the ways you can use DSCEA in your environment to verify that your systems are actually configured the way you want them to be.

    DSCEA is an open source solution hosted on GitHub.  We look forward to your feedback and contributions!

Integrating Smoke Tests into your Continuous Delivery Pipeline


We’re really glad to have Abel Wang help us out for #SpringIntoDevOps with this awesome blog contribution about verifying whether your deployment finished successfully by integrating smoke tests into your pipeline.  Thank you Abel!  — Ed Blankenship


Having a Continuous Integration (CI) and Continuous Delivery (CD) pipeline in Visual Studio Team Services enables us to build and release our software quickly and easily.  Because of the high volume of builds and releases that can occur, there is a chance that some of the releases will fail.  Finding these failures early is vital.  Using integrated smoke tests in your CD pipeline is a great way to surface these deployment failures automatically after each deployment.

There are two types of smoke tests you can run: functional tests, where you write code that verifies your app is deployed and working correctly, and automated UI tests, which exercise the user interface using automated UI test scripts.  Both types of smoke tests can be run in your CD pipeline using the Visual Studio Test task.


The Visual Studio Test task can run tests using multiple testing frameworks, including MSTest, NUnit, xUnit, Mocha and Jasmine.  The task uses vstest.console.exe to execute the tests.  For this blog post, I’ll be using MSTest, but you can use whatever testing framework you want with the correct test adapter.

Using MSTest, it’s very simple to create smoke tests.  At the end of the day, tests in MSTest (or any of the other testing frameworks) are just chunks of code that are run.  Anything you can do with code can be part of your smoke tests.  Some common scenarios include (a minimal sketch of the last one follows the list):

  • hitting a database and making sure it has the correct schema,
  • checking if certain data is in the database,
  • hitting a service and making sure the response is correct,
  • hitting URLs and making sure some dynamic content is returned back
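
As a language-agnostic illustration of that last scenario, here is a minimal smoke check sketched in PowerShell (the examples later in this post use MSTest and C#; the URL and expected content below are hypothetical):

$response = Invoke-WebRequest -Uri 'https://myapp-dev.example.com/' -UseBasicParsing
if ($response.StatusCode -ne 200 -or $response.Content -notmatch 'Welcome') {
    throw "Smoke test failed: unexpected status code or page content."
}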

Automated UI tests can also be done using MSTest (or another testing framework) with Selenium or Coded UI or whatever automation technology you want to use.  Remember, if you can do it with code (in this case C#) then you can get the Visual Studio Test task to do it.  For this blog, we will be looking at creating automated UI smoke tests.

The first thing we need to do is make sure your smoke test project is part of the solution that gets compiled in your automated build.  In this example, I have a solution that includes a web project, a MSTest project used for my smoke tests and some other projects.


For the automation scripts, I used Selenium with the Page Object pattern, where I have an object representing each page of my app; each page object exposes all the actions you can perform on the page as well as the asserts you can make against it.  This makes for some super clean smoke tests.


Make sure your build compiles your test project and the test project’s .dll is one of the build artifacts.  For this example, I set up my release steps to do the following:

  1. Deploy web app
  2. Deploy database schema changes
  3. Massage my configuration files so my Selenium tests hit my Dev environment and use Google Chrome as the browser
  4. Add release annotations for Application Insights
  5. Run Selenium Tests


Setting up the Visual Studio Test task in the CD pipeline to run my automated UI smoke tests using MSTest is straightforward.  All it requires is setting some parameters.


For detailed descriptions of all the parameters you can set for the Visual Studio Test task, check out https://github.com/Microsoft/vsts-tasks/blob/releases/m109/Tasks/VsTest/README.md.  In my example, I am running tests contained in any .dll that has the word "test" in it.  I’m also filtering to run only tests in the category TestCategory=UITests.  You have lots of options for how you want to categorize and structure your tests.
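
Under the covers, the equivalent vstest.console.exe invocation would look roughly like this (the assembly name is hypothetical):

vstest.console.exe MyApp.SmokeTests.dll /TestCaseFilter:"TestCategory=UITests" /Logger:trx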



Automated User Interface Smoke Tests

Automated UI tests require a private VSTS build/deploy agent running in interactive mode. If you have never set up an agent to run interactively, there is a great walkthrough for installing and configuring an agent for interactive mode.  Alternatively, you can run these same smoke tests using PhantomJS (headless), which works with the hosted agents in VSTS.  To run my smoke tests using PhantomJS, just change the environment variable Token.BrowserType from chrome to phantomjs.


Now, when a release is triggered, after deploying my web app and database, I run my set of smoke tests using the Visual Studio Test task and all results are automatically posted back to the release summary and to VSTS.


Smoke Tests for Mobile Continuous Delivery

The Continuous Delivery system in VSTS is so flexible, we can even configure it to run smoke tests in complex mobile scenarios.  In the following example, I have an app that consists of a website, multiple REST API services, a back-end database and a mobile app.  My release pipeline consists of:

  • Create or update my Azure Resource Group from an ARM template
  • Deploy Web App and Services
  • Deploy database schema changes
  • Do a web performance test
  • Run some Selenium Automated UI tests for smoke tests against the web site and services
  • Deploy my mobile app to my Dev Testers group using HockeyApp
  • Deploy and run my smoke tests against my mobile app using Xamarin.UITests in Xamarin Test Cloud using the Xamarin Test Cloud Task


Using smoke tests as part of your CD pipeline is a valuable tool to help ensure your deployment, configuration and resources are all working.  Release Management in Visual Studio Team Services is fully configurable and customizable to run any type of smoke tests you want as part of the deployment steps.  The source code for the examples in this blog is on GitHub here.

 

Abel Wang
Senior Technical Product Marketing Manager, Visual Studio Team Services

Network Capture Best Practices


Hi Diddly Doodly readers. Michael Rendino here again with a follow-up to my “Basic Network Capture Methods” blog, this time to give some best practices for network capture collection when troubleshooting. As you may have guessed, one of my favorite tools, due to my years in networking support, is the network capture. It can provide a plethora of information about what exactly was transpiring when systems were trying (and possibly failing) to communicate. I don’t really concern myself with the tool used, be it Network Monitor, Wireshark, Message Analyzer, Sniffer or any other tool.

My biggest point to stress is what I mentioned previously – it shows the communication on the network. The important point to take from that is that collecting a trace from a single point doesn’t provide the full picture. While I will take a single-sided trace over no trace at all, the best scenario is to get it from all points involved in the transaction. With something like SharePoint, this could be a number of machines – the client running the browser, the web front end, the SQL back end and then multiple domain controllers. It sounds like a daunting task to get the captures from every location, but I would rather have too much data than too little. To add to that point, please don’t apply a capture filter unless absolutely necessary! By only capturing data between two select points, you could be omitting a critical piece of information.
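
As a practical aside, an unfiltered capture can be collected on each machine involved with the built-in netsh trace facility, so no capture tool needs to be installed first (the file path, name and size below are just examples):

netsh trace start capture=yes report=disabled maxsize=1024 tracefile=C:\Traces\server1.etl
# ...reproduce the problem while tracing on every machine at once, then...
netsh trace stop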

Following is a perfect example of both of these points. I was engaged to troubleshoot an issue that was described as a problem with a SharePoint web front end talking to the SQL server. I got the captures from the two servers, which fortunately were not filtered. If I had just gone on the problem description, I would typically have opened the capture from the SQL box and applied the filter ipv4.address==<Web Front End IP> (ipv4 because I was using Network Monitor – it would be ip.addr== for you Wireshark fans) to locate the traffic from that box. In fact, I did that to start and saw that all traffic to and from the WFE appeared completely normal.

9873    9:37:54 AM     WFE    49346 (0xC0C2)    SQL    1433 (0x599)    TCP    TCP:Flags=…A…., PayloadLen=0, Seq=3198716784, Ack=438404416, Win=510

10093    9:37:55 AM     WFE    49346 (0xC0C2)    SQL    1433 (0x599)    TDS    TDS:RPCRequest, SPID = 0, PacketID = 1, Flags=…AP…, PayloadLen=201, Seq=3198716784 – 3198716985, Ack=438404416, Win=510

10094    9:37:55 AM     SQL    1433 (0x599)    WFE    49346 (0xC0C2)    TDS    TDS:Response, SPID = 117, PacketID = 1, Flags=…AP…, SrcPort=1433, DstPort=49346, PayloadLen=61, Seq=438404416 – 438404477, Ack=3198716985, Win=255

10188    9:37:55 AM     WFE    49346 (0xC0C2)    SQL    1433 (0x599)    TCP    TCP:Flags=…A…., PayloadLen=0, Seq=3198716985, Ack=438404477, Win=509

To me, it looked like clean SQL traffic, moving quickly and without errors. All good so I needed to look elsewhere. To move on, it’s important to know what other types of things will happen when using SharePoint. Other than the SQL traffic, the WFE will also have to communicate with the client, perform name resolution and communicate with a domain controller. I first applied the filter “dns or nbtns” (Again, this was Network Monitor, although I typically use multiple tools for my analysis) and again, everything looked “clean.” I then moved on to examine the authentication traffic. I applied the filter “Kerberosv5” and lo and behold, the issue jumped right out to me. Appearing over and over in the trace was this:

97    9:38:46 AM     0.0000000    WFE    52882 (0xCE92)    DC    88 (0x58)    TCP    TCP:Flags=……S., SrcPort=52882, DstPort=Kerberos(88), PayloadLen=0, Seq=2542638417, Ack=0, Win=8192 ( Negotiating scale factor 0x8 ) = 8192

98    9:38:46 AM     0.0004965    DC    88 (0x58)    WFE    52882 (0xCE92)    TCP    TCP:Flags=…A..S., SrcPort=Kerberos(88), DstPort=52882, PayloadLen=0, Seq=4098142762, Ack=2542638418, Win=65535 ( Negotiated scale factor 0x1 ) = 131070

99    9:38:46 AM     0.0000200    WFE    52882 (0xCE92)    DC    88 (0x58)    TCP    TCP:Flags=…A…., SrcPort=52882, DstPort=Kerberos(88), PayloadLen=0, Seq=2542638418, Ack=4098142763, Win=513 (scale factor 0x8) = 131328

100    9:38:46 AM     0.0000599    WFE    52882 (0xCE92)    DC    88 (0x58)    KerberosV5    KerberosV5:AS Request Cname: farmsvc Realm: CONTOSO.COM Sname: krbtgt/CONTOSO.COM

102    9:38:46 AM     0.0022497    DC    88 (0x58)    WFE    52882 (0xCE92)    KerberosV5    KerberosV5:KRB_ERROR – KDC_ERR_CLIENT_REVOKED (18)

KRB_ERROR – KDC_ERR_CLIENT_REVOKED means that the client account has been locked out. We checked Active Directory and, sure enough, the account used for the WFE service was locked. We then learned that they had recently changed the password for that service account, which resulted in said lockout. One thing to note about Network Monitor (and you can do this with Wireshark as well) is that I actually had all Kerberos traffic highlighted in green, so it stood out quickly.

So what did we learn? We know that if the trace had just been taken from the SQL server, I wouldn’t have found the issue. We also know that if the WFE trace had been filtered to just include SQL traffic or SQL and client traffic, I wouldn’t have found the issue. Remember, more is better! Even if I get gigabytes of captures, I can always parse them or break them into smaller, bite-sized (no pun intended) chunks for faster filtering. Happy tracing!

Time to check your Windows Insider Program settings!


Hello Windows Insiders!

It’s that time again! We’re getting ready to start releasing new builds from our Development Branch. And just as before, after the release of a new Windows 10 update you won’t see many big noticeable changes or new features in new builds just yet. That’s because right now we’re focused on making some refinements to OneCore and doing some code refactoring and other engineering work that is necessary to make sure OneCore is optimally structured for teams to start checking in code. Now comes our standard warning: these new builds from our Development Branch may include more bugs and other issues that could be slightly more painful for some people to live with. So, if this makes you uncomfortable, you can change your ring by going to Settings > Update & security > Windows Insider Program and moving to the Slow or Release Preview rings for more stable builds.

Additionally, if you are a Windows Insider who wants to stay on the Windows 10 Creators Update, you will need to go to Settings > Update & security > Windows Insider Program and press the “Stop Insider Preview builds” button.

Windows Insider Settings page

A menu will pop up, and you will need to choose “Keep giving me builds until the next Windows release”. This will keep you on the Windows 10 Creators Update.

“Keep giving me builds until the next Windows release”

We’re excited to get some new builds out to Insiders soon!

Keep hustling,
Dona <3

The post Time to check your Windows Insider Program settings! appeared first on Windows Experience Blog.

Updated Forrester study finds Windows 10 can increase ROI for enterprises


Editor’s Note: Last year, we shared Forrester Consulting’s Total Economic Impact (TEI) study of Windows 10 (June 2016), which found significant IT management cost savings, security and productivity benefits for Windows 10 customers. In December 2016, Forrester Consulting, on behalf of Microsoft, updated the study with insights from four additional customers, bringing the total number of customer participants to eight. We have updated the blog to reflect the latest data, which provides evidence that Windows 10 can help drive business impact.

With more than 400 million monthly active devices now running Windows 10, enterprises are moving to Windows 10 faster than ever. We designed Windows 10 to be the most secure Windows ever, to ease management efforts and to create more personal and productive computing experiences across devices. Customer satisfaction is at an all-time high, and the security and productivity features are behind the strong adoption: the market reaction to Windows 10 has been unprecedented, with a 3x increase in Windows 10 enterprise deployments.

Updated Forrester Study finds Windows 10 can increase ROI for enterprises

Last June, we commissioned Forrester Consulting to conduct a Total Economic Impact™ (TEI) study of Windows 10 to demonstrate how Windows 10 is having a positive impact to our customers’ bottom line. To continue to understand the benefits and costs associated with a Windows 10 implementation, Forrester interviewed four additional enterprise customers across various industries who were early adopters of Windows 10, including a government health department, a multinational food and beverage conglomerate, a global IT services firm and a global IT hardware and software vendor.

The updated study shows the three-year net present value of benefits per user increased from $403 to $515, and the return on investment grew from 188% to 233%, with a payback period of 13 to 14 months. This updated study helps provide further evidence that Windows 10 can drive significant cost savings, security and productivity benefits for enterprise customers. Enterprises that leverage the new tools in Windows 10 to deploy the updated operating system (OS) more quickly and easily than in past efforts have experienced improved boot times, application access, security, and mobility, which has helped IT and users increase their productivity and complete their work more effectively.

These gains in efficiency and productivity have not only reduced IT costs but also helped drive new business, especially for companies that sell technology services: employees who use Windows 10 to work with customers can demonstrate their commitment to the latest technology and avoid putting a sale at risk.

Below are a few of the highlights of the benefits for the composite organization based on the eight customers* who have now been interviewed for this commissioned study. The below insights from the June 2016 commissioned study remain unchanged.

IT Management Cost Savings

Windows 10 requires less IT administration time to install, manage, and support, with easy-to-use features and more self-service functions. One customer found that deploying Windows 10 was as much as 50% quicker and easier than their last operating system upgrade, and overall IT administrators estimate a 15% improvement in IT management time with Windows 10—valuable time they get back to help in other key IT areas.

With Windows 10 and System Center Configuration Manager (SCCM), organizations are able to provide employees with self-service tools that let them conveniently find and install line-of-business applications. With new features in Windows 10, businesses also found a reduction in costs associated with third-party software application licenses that were no longer needed.

Reduced Security Remediation

With new features such as Credential Guard and Device Guard, and improved security features such as BitLocker, security events related to client device management that require IT remediation are reduced or avoided altogether. Forrester estimated some of the businesses could be saving nearly $700,000 a year by enabling security features in Windows 10.

Improved Productivity

Improvements, such as faster boot times, convenient access to corporate applications, increased security, and better mobility tools help IT and users increase productivity and complete work more quickly and effectively. Employees, especially mobile workers, estimate they have 25% more time to get work done than they did before.

Take a look at the updated TEI study, and at Windows 10 if you haven’t already. See how it can help you reduce IT costs, reduce security remediation, improve productivity and increase ROI within your organization!

*The Total Economic Impact™ Of Windows 10, a commissioned study conducted by Forrester Consulting on behalf of Microsoft, December 2016

The post Updated Forrester study finds Windows 10 can increase ROI for enterprises appeared first on Windows For Your Business.


Windows Networking for Kubernetes


A seismic shift is happening in the way applications are developed and deployed as we move from traditional three-tier software models running in VMs to “containerized” applications and micro-services deployed across a cluster of compute resources. Networking is a critical component in any distributed system and often requires higher-level orchestration and policy management systems to control IP address management (IPAM), routing, load-balancing, network security, and other advanced network policies. The Windows networking team is swiftly adding new features (Overlay networking and Docker Swarm Mode on Windows 10) and working with the larger containers community (e.g. Kubernetes sig-windows group) by contributing to open source code and ensuring native networking support for any orchestrator, in any deployment environment, with any network topology.

Today, I will be discussing how Kubernetes networking is implemented in Windows and managed by an extensible Host Networking Service (HNS) – which is used both on Azure Container Service (ACS) Windows worker nodes and in on-premises deployments – to plumb network policy in the OS.

Note: A video recording of the 4/4 #sig-windows meetup where I describe this is posted here: https://www.youtube.com/watch?v=P-D8x2DndIA&t=6s&list=PL69nYSiGNLP2OH9InCcNkWNu2bl-gmIU4&index=1

Kubernetes Networking

Windows containers can be orchestrated using either Docker Swarm or Kubernetes to help “automate the deployment, scaling, and management of ‘containerized’ applications”. However, the networking model used by these two systems is different.

Kubernetes networking is built on the fundamental requirements listed here and is either agnostic to the network fabric underneath or assumes a flat Layer-2 networking space where all containers and nodes can communicate with all other containers and nodes across a cluster without using NAT or relying on encapsulation. Windows can support these requirements using a few different networking modes exposed by HNS and working with external IPAM drivers and route configurations.

The other major difference between Docker and Kubernetes networking is the scope at which IP assignment and resource allocation occur. Docker assigns an IP address to every container, whereas Kubernetes assigns an IP address to a Pod, which represents a network namespace and could consist of multiple containers running inside the Pod. Windows has a similar network namespace concept called a network compartment, and a management surface is being built in Windows to allow multiple containers in a Pod to communicate with each other through localhost.

Connectivity between Pods located on different nodes in a Kubernetes cluster can be accomplished either by using an overlay (e.g. VXLAN) network or, without an overlay, by configuring routes and IPAM on the underlying (virtual) network fabric. This network model can be realized through:

  • CNI Network Plugin
  • Implementing the “Routing” interface in Kubernetes code
  • External configuration

The sig-windows community (led by Apprenda) did a lot of work to come up with an initial solution for getting Kubernetes networking to work on Windows. The networking teams at Microsoft are building on this work and continue to partner with the community to add support for the native Kubernetes networking model – defined by the Container Network Interface (CNI), which is different from the Container Network Model (CNM) used by Docker – and to surface policy management capabilities through HNS.

Kubernetes networking in Azure Container Service (ACS)

Azure Container Service recently announced Kubernetes general availability, which uses a routable-VIP approach (no overlay) to networking and configures User-Defined Routes (UDR) in the Hyper-V (virtualization) host for Pod communication between Linux and Windows cluster node VMs. A /24 CIDR IP pool (routable between container host VMs) is allocated to each container host, with one IP assigned per Pod (one container).
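
To make this per-node address carving concrete, here is a minimal sketch using Python’s ipaddress module; the 10.244.0.0/16 cluster range is an illustrative assumption (a common kubenet default), not a value taken from ACS:

import ipaddress

# Illustrative cluster range (assumed); the real range is deployment-specific.
cluster_cidr = ipaddress.ip_network("10.244.0.0/16")

# Carve one /24 Pod subnet per container host.
node_subnets = list(cluster_cidr.subnets(new_prefix=24))

# Each host then assigns one IP per Pod from its own /24.
host0_pool = node_subnets[0]
first_pod_ips = list(host0_pool.hosts())[:3]
print(host0_pool, first_pod_ips)
# -> 10.244.0.0/24 [IPv4Address('10.244.0.1'), IPv4Address('10.244.0.2'), IPv4Address('10.244.0.3')]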

[Image: User-Defined Routes (UDR) in ACS Kubernetes]

With the recent Azure VNet for Containers announcement, which includes support for a CNI network plugin used in Azure (pre-released here: https://github.com/Azure/azure-container-networking/releases/tag/v0.7), tenants can connect their ACS clusters (containers and hosts) directly to Azure Virtual Networks. This means that individual IPs from a tenant’s Azure VNet IP space will be assigned to Kubernetes nodes and Pods, potentially in the same subnet. The Windows networking team is also working to build a CNI plugin to support and extend container management through Kubernetes on Windows for on-premises deployments.

Kubernetes networking in Windows

Microsoft engineers across the Windows and Azure product groups actively contributed code to the Kubernetes repo to enhance the kube-proxy (used for DNS and service load-balancing) and kubelet (for Internet access) binaries that are installed on ACS Kubernetes Windows worker nodes. This overcame previously identified gaps so that both DNS and service load-balancing work correctly without the need for Routing and Remote Access Services (RRAS) or netsh port proxy. In this implementation, the Windows network uses Kubernetes’ default kubenet plugin without a CNI plugin.

Using HNS, one transparent network and one NAT network are created on each Windows container host, for inter-Pod and external communication respectively. Two container endpoints – connected to the Service and Pod networks – are required for each Windows container that will participate in the Kubernetes service. Static routes must be added inside the running Windows containers themselves, on the container endpoint attached to the service network.

[Image: Windows container host networking in ACS Kubernetes]

In the absence of ACS-managed User-Defined Routes, Out-of-Band (OOB) configuration of these routes needs to be realized in the cloud service provider network, implemented using the “routing” interface of the Kubernetes cloud provider, or connected via overlay networks. Other solutions include using the HNS overlay network driver for inter-Pod communication or using the OVS Hyper-V switch extension with the OVN Controller.

Today, with the publicly available versions of Windows Server and client, you can deploy Kubernetes with the following restrictions:

  • One container per Pod
  • CNI Network Plugins are not supported
  • Each container requires two container endpoints (vNICs) with IP routing manually plumbed
  • Service IPs can only be associated with one Container Host and will not be load-balanced
  • Policy specifications (e.g. network security) are not supported

What’s Coming Next?

Windows is moving to a faster release cadence such that new platform features will be made available in a matter of months rather than years. In some circumstances, early builds can be made available to Insiders as well as to TAP customers and EEAP partners for early feature validation.

Stay tuned for new features which will be made available soon…

Summary

In this blog post, I described some of the nuances of the Kubernetes networking model and how it differs from the Docker networking model. I also talked about the code updates made by Microsoft engineering teams to the kubelet and kube-proxy binaries for Windows in open source repos to enable networking support. Finally, I covered how Kubernetes networking is implemented in Windows today and how it will be implemented through a CNI plugin in the near future.

MSTest V2 is open source


As promised, we have announced the open sourcing of the MSTest Test Framework, “MSTest V2”. The community now has a fully supported, open source, cross-platform implementation of the MSTest V2 portfolio with which to write tests targeting .NET Framework, .NET Core, and ASP.NET Core on Windows, Linux, and Mac.

Here are the public repositories on GitHub where the project is hosted:
https://github.com/Microsoft/testfx
https://github.com/Microsoft/testfx-docs
These are fully open and ready to accept contributions.

The MSTest V2 portfolio
The MSTest V2 portfolio comprises the framework, the adapter, the templates, the wizard extensions, and documentation. All of it is now open sourced, as illustrated in the image below:

[Image: The MSTest V2 portfolio]

Let’s evolve MSTest V2 together
MSTest V2 – Now and Ahead summarizes the MSTest V2 journey so far. The roadmap charts the immediate future.
We invite you to participate in and direct this evolution.
Thank you for your support.

Real-time machine learning on globally-distributed data with Apache Spark and DocumentDB


At the Strata + Hadoop World 2017 Conference in San Jose, we announced the Spark to DocumentDB Connector. It enables real-time data science, machine learning, and exploration over globally distributed data in Azure DocumentDB. Connecting Apache Spark to Azure DocumentDB accelerates our customers’ ability to solve fast-moving data science problems, where data can be quickly persisted and queried using DocumentDB. The connector efficiently exploits DocumentDB’s native managed indexes and enables updateable columns when performing analytics, as well as push-down predicate filtering against fast-changing, globally distributed data in scenarios ranging from IoT to data science and analytics. The Spark to DocumentDB connector uses the Azure DocumentDB Java SDK. You can get started today and download the Spark connector from GitHub!

What is DocumentDB?

Azure DocumentDB is our globally distributed database service designed to enable developers to build planet-scale applications. DocumentDB allows you to elastically scale both throughput and storage across any number of geographical regions. The service offers guaranteed low latency at P99, 99.99% high availability, predictable throughput, and multiple well-defined consistency models, all backed by comprehensive SLAs. By virtue of its schema-agnostic, write-optimized database engine, DocumentDB by default automatically indexes all the data it ingests and serves SQL, MongoDB, and JavaScript language-integrated queries in a scale-independent manner. As a cloud service, DocumentDB is carefully engineered with multi-tenancy and global distribution from the ground up.
These unique benefits make DocumentDB a great fit for both operational and analytical workloads in applications including web, mobile, personalization, gaming, IoT, and many others that need seamless scale and global replication.

What are the benefits of using DocumentDB for machine learning and data science?

DocumentDB is truly schema-free. By virtue of its commitment to the JSON data model directly within the database engine, it provides automatic indexing of JSON documents without requiring an explicit schema or the creation of secondary indexes. DocumentDB supports querying JSON documents using the familiar SQL language. The DocumentDB query language is rooted in JavaScript’s type system, expression evaluation, and function invocation. This, in turn, provides a natural programming model for relational projections, hierarchical navigation across JSON documents, self-joins, spatial queries, and invocation of user-defined functions (UDFs) written entirely in JavaScript, among other features. We have now expanded the SQL grammar to include aggregations, enabling globally distributed aggregates in addition to these capabilities.
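
As a minimal sketch of what this looks like from code, here is a query issued through the pyDocumentDB Python client; the endpoint, master key, and the “flightsdb/flights” collection are placeholder assumptions, not values from the article:

import pydocumentdb.document_client as document_client

ENDPOINT = "https://<your-account>.documents.azure.com:443/"  # placeholder
MASTER_KEY = "<your-master-key>"                              # placeholder
COLLECTION_LINK = "dbs/flightsdb/colls/flights"               # placeholder

client = document_client.DocumentClient(ENDPOINT, {'masterKey': MASTER_KEY})
options = {'enableCrossPartitionQuery': True}

# Ad-hoc SQL over schema-free JSON -- no explicit schema or secondary index.
flights = list(client.QueryDocuments(
    COLLECTION_LINK,
    "SELECT f.origin, f.destination, f.delay FROM flights f WHERE f.delay < 0",
    options))

# The expanded grammar also supports aggregates such as COUNT.
early_count = list(client.QueryDocuments(
    COLLECTION_LINK,
    "SELECT VALUE COUNT(1) FROM flights f WHERE f.delay < 0",
    options))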


Figure 1: With Spark Connector for DocumentDB, data is parallelized between the Spark worker nodes and DocumentDB data partitions

Distributed aggregations and advanced analytics

While Azure DocumentDB supports aggregations (COUNT, MIN, MAX, SUM, and AVG, with GROUP BY, DISTINCT, etc. in the works) as noted in Planet scale aggregates with Azure DocumentDB, connecting Apache Spark to DocumentDB allows you to easily and quickly perform an even larger variety of distributed aggregations by leveraging Apache Spark. For example, below is a distributed MEDIAN calculation using Apache Spark's PERCENTILE_APPROX function via Spark SQL.

select destination, percentile_approx(delay, 0.5) as median_delay
from df
where delay < 0
group by destination
order by percentile_approx(delay, 0.5)


Figure 2: Area visualization for the above distributed median calculation via Jupyter notebook service on Spark on Azure HDInsight.
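
Assuming the flight documents have been read into the Spark driver as in the earlier pyDocumentDB sketch (a simplification — the connector in Figure 1 reads DocumentDB partitions in parallel across worker nodes), a PySpark version of the same median query might look like this:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("documentdb-median").getOrCreate()

# 'flights' is the list of JSON documents queried earlier via pyDocumentDB.
df = spark.createDataFrame(flights)
df.createOrReplaceTempView("df")

median_delays = spark.sql("""
    SELECT destination, percentile_approx(delay, 0.5) AS median_delay
    FROM df
    WHERE delay < 0
    GROUP BY destination
    ORDER BY median_delay
""")
median_delays.show()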

Push-down predicate filtering

As illustrated in the following animated GIF, queries from Apache Spark push predicates down to Azure DocumentDB, taking advantage of the fact that DocumentDB indexes every attribute by default. Furthermore, by pushing computation close to where the data lives, we can process it in situ and reduce the amount of data that needs to be moved. At global scale, this results in tremendous performance speedups for analytical queries.

[Figure 3: Animated illustration of push-down predicate filtering]

For example, if you only want to ask for the flights departing from Seattle (SEA), the Spark to DocumentDB connector will:

  • Send the query to Azure DocumentDB.
  • As all attributes within Azure DocumentDB are automatically indexed, only the flights pertaining to Seattle will be returned to the Spark worker nodes quickly.

This way, as you perform your analytics, data science, or ML work, you transfer only the data you need.
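
In DataFrame terms, the Seattle example might look like the sketch below; whether a given filter is actually pushed down depends on the connector version, so treat this as illustrative:

# Only documents with origin == 'SEA' need to leave DocumentDB; with
# predicate push-down the filter is evaluated by the indexed database,
# not by Spark after a full transfer.
sea_flights = df.filter(df.origin == "SEA")
print(sea_flights.count())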

Blazing fast IoT scenarios

Azure DocumentDB is designed for high-throughput, low-latency IoT environments. The animated GIF below refers to a flights scenario.

[Figure 4: Animated illustration of the flights IoT scenario]

Together, Spark and DocumentDB let you:

  • Handle a high throughput of concurrent alerts (e.g., weather, flight information, global safety alerts)
  • Send this information downstream to device notifications, RESTful services, and more (e.g., alerting your phone of an impending flight delay), including via the change feed
  • Make sense of the latest information even as you are building up ML models against your data

Updateable columns

Related to the previously noted blazing fast IoT scenarios, let's dive into updateable columns:

[Figure 5: Animated illustration of updateable columns]

As new information comes in (e.g., a flight delay changes from 5 minutes to 30 minutes), you want to quickly re-run your machine learning (ML) models to reflect it. For example, you can predict the impact of the 30-minute delay on all downstream flights. This refresh can be quickly initiated via the Azure DocumentDB Change Feed.
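
A hypothetical refresh loop is sketched below; the change feed offers a richer mechanism, so here the DocumentDB _ts system property stands in, the timestamp value is illustrative, and 'model' is a placeholder for your trained ML model:

last_run_ts = 1491300000  # epoch seconds recorded on the previous run (illustrative)

# Parameterized DocumentDB SQL: fetch only documents modified since the last run.
changed = list(client.QueryDocuments(
    COLLECTION_LINK,
    {'query': "SELECT * FROM flights f WHERE f._ts > @last",
     'parameters': [{'name': '@last', 'value': last_run_ts}]},
    options))

# Re-score only the changed flights with the existing model (placeholder).
# predictions = model.predict(to_features(changed))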

Next steps

In this blog post, we’ve looked at the new Spark to DocumentDB Connector. Spark with DocumentDB enables both ad-hoc, interactive queries on big data, as well as advanced analytics, data science, machine learning, and artificial intelligence. DocumentDB can be used to capture data that is collected incrementally from various sources across the globe, including social analytics, time series, game or application telemetry, retail catalogs, up-to-date trends and counters, and audit log systems. Spark can then be used to run advanced analytics and AI algorithms at scale on top of the data coming from DocumentDB.

Companies and developers can employ this scenario in online shopping recommendations, spam classifiers for real time communication applications, predictive analytics for personalization, and fraud detection models for mobile applications that need to make instant decisions to accept or reject a payment. Finally, internet of things scenarios fit in here as well, with the obvious difference that the data represents the actions of machines instead of people.

To get started running queries, create a new DocumentDB account from the Azure Portal and work with the project in our Azure-DocumentDB-Spark GitHub repo. Complete instructions are available in the Connecting Apache Spark to Azure DocumentDB article.

Stay up-to-date on the latest DocumentDB news and features by following us on Twitter @DocumentDB or reach out to us on the developer forums on Stack Overflow.

Windows 10 privacy journey continues: more transparency and controls for you


Terry Myerson is the EVP of the Windows and Devices Group and Marisa Rogers is the WDG Privacy Officer

Last week, we shared that the Windows 10 Creators Update will begin to roll out to our customers starting April 11, bringing new features and tools that empower the creator in each of us. We’ve talked about innovations in this update like bringing 3D creation and mixed reality to everyone, enabling every gamer to be a broadcaster, and browsing improvements in Microsoft Edge. With all these new built-in innovations, we hope you’ll choose Windows as the place you love to create and play. And yet one of our most important improvements in the Creators Update is a set of privacy enhancements that will be mostly behind the scenes.

As part of our commitment to transparency and your privacy, today we’re bringing those enhancements front and center, and sharing three new things that will help you be more informed about your privacy with Windows 10.

  1. We are improving in-product information about your privacy. With both short descriptions about each privacy setting and a “Learn More” button, we are committed to making information about your privacy choices easy to access and understand.
  2. We are updating the Microsoft privacy statement to include more information about the privacy enhancements in the Creators Update, as well as more detail about the data we collect and use to support the new features offered in this update. Like previous privacy statement updates, we will make this information available to you in a layered manner online, allowing you to progressively explore more information about your privacy choices with Windows 10. We have also summarized the key changes in Change History for Microsoft Privacy Statement.
  3. We are publishing more information about the data we collect. Our commitment to you is that we collect only the data at the Basic level that is necessary to keep your Windows 10 device secure and up to date. For customers who choose the Full level, we use diagnostic data to improve Windows 10 for everyone and to deliver more personalized experiences where you choose to let us do so. Our hope is that this information will help you understand the data we collect and use, enabling you to make informed choices.

For the first time, we have published a complete list of the diagnostic data collected at the Basic level. Individual data points that relate to a specific item or event are collected together and called Events. These are further organized into diagnostic areas. We are also providing a detailed summary of the data we collect from users at both Basic and Full levels of diagnostics.

Aside from sharing new information to inform your choices, our teams have also worked diligently since the Anniversary Update to re-assess what data is strictly necessary at the Basic level to keep Windows 10 devices up to date and secure. We looked closely at how we use this diagnostic data and strengthened our commitment to minimize data collection at the Basic level. As a result, we have reduced the number of events collected and reduced, by about half, the volume of data we collect at the Basic level.

Finally, I want to introduce Marisa Rogers, our Windows and Devices Group Privacy Officer. She champions our privacy commitments to you both inside and outside Microsoft, working with our Microsoft engineers and advocates around the world to ensure we’re delivering great experiences with privacy by design and giving you the information that puts you in control. Marisa will share more today and in the future about the privacy work we’ve done with the Creators Update and our privacy plans for future updates.

I’m proud of the team’s work here and our continued commitment to your privacy. I’m also appreciative of the great feedback we’ve received from our customers along this journey. You inspire us every day to innovate and deliver a great product that respects your privacy choices. This feedback – in line with the feedback we have received from the European Union’s Article 29 Working Party and EU national data protection authorities that have specifically engaged us on Windows 10 – was essential for Microsoft to identify and implement improvements in our privacy practices.

Thank you and keep the feedback coming! Take it away, Marisa.

-Terry

Privacy Refresh: What to expect with the Windows 10 Creators Update and beyond

Thanks, Terry. Before I get into the privacy details about this update and beyond, I want to first thank YOU – our customers – for your feedback and support on this journey. Our commitment to your privacy is only fully realized when we deliver on your feedback.

With the Creators Update, we’ve taken significant steps forward to help ensure you have information to make informed choices and you are in control of the personalized experiences you choose with Windows 10. Here’s what you can expect from us with this update.

First, everyone will have the opportunity to review their privacy settings. We believe it is important that you understand your choices. We’ve made this easier than ever before with clearer descriptions of each privacy setting and “Learn More” buttons that allow you to dive deeper into the information we collect and how it is used.

For those of you already running a version of Windows 10, we will deliver a notification to schedule your Creators Update and choose your privacy settings as depicted below.

The first image shows an example of how the privacy settings screen may appear to you. The actual values of the toggles on this screen will be based on your current settings in Windows 10. For example, if you previously chose to turn off location services, the toggle in this screen will be initially set to “Off” for location services.

[Image: New privacy settings screen in the Windows 10 Creators Update]

The second image shows the same screen with all toggles set to “Off” (and, in the case of diagnostics, to “Basic”).

[Image: The same privacy settings screen with all toggles set to “Off” (diagnostics set to “Basic”)]

You can use the toggles to customize your choices. Additionally, at any time, you can go to Windows Settings from the Start Menu, then select Privacy, to review and change your settings and to find more details and links to the Microsoft Privacy statement.

For those of you who are setting up a new Windows 10 device for the first time or running a clean install of Windows 10, the new privacy set up experience will look like the one below.

The first image is the screen as it will first appear, with toggles showing Microsoft’s recommended settings. Each toggle provides a short description of the purpose of the setting. If you want more information about the settings, you can select the “Learn more” button. We believe the recommended settings will provide you with the richest experience and enable important Windows 10 features to operate most effectively.

[Image: New privacy set-up experience showing Microsoft’s recommended settings]

This screen will replace the “Get going fast” and “Customize settings” screens that were available in the set-up experiences for previous releases of Windows 10. You must act on this screen before using Windows 10.

The next image below shows the same screen with all toggles set to “Off” (and, in the case of diagnostics, to “Basic”). Again, each toggle provides a short description of the impact of the setting.

[Image: New privacy set-up experience with all toggles set to “Off” (diagnostics set to “Basic”)]

Both images of this set-up screen display options in two columns (if necessitated by your screen resolution) to avoid needing to scroll to see all the text on the screen. If you are manually setting up or upgrading to the Creators Update using advanced tools such as the Media Creation Tool, the screen display may instead use a single‑column format and may require scrolling depending on your screen size or language. These tools are intended for advanced technical users and are not recommended for most customers.

After reviewing and selecting settings, you must then take a final action to approve your choices by selecting the “Accept” button.

For those of you with mobile devices currently running Windows 10 Mobile, the key privacy choices relevant to the mobile version of the Windows 10 Creators Update will be presented after you install the update. The only difference on mobile is that the “Tailored experiences with diagnostic data” setting is automatically turned off for all customers and is not presented as an option on the privacy screen, due to limitations of the mobile platform.

[Image: New privacy settings screen for Windows 10 Mobile customers in the Windows 10 Creators Update]

The Journey Continues

We are on a journey with you and fully committed to putting you in control and providing the information you need to make informed decisions about your privacy. The Windows 10 Creators Update is a significant step forward, but by no means the end of our journey.

In future updates, we will continue to refine our approach and implement your feedback about data collection and privacy controls. We are committed to helping ensure you have access to even more information and can review and delete the data we collect via the Microsoft privacy dashboard. This month, we will bring voice data to this dashboard, so you can review the data we have that improves Cortana’s ability to naturally respond to your requests as your personal digital assistant.

We will also share more information about how we will ensure Windows 10 is compliant with the European Union’s General Data Protection Regulation and how using Windows 10 and other Microsoft products will help our enterprise customers with compliance in their environments.

I look forward to a continued dialogue with you and advocacy organizations around the world and welcome you to contact me and my privacy team with your feedback here.

-Marisa

The post Windows 10 privacy journey continues: more transparency and controls for you appeared first on Windows Experience Blog.

Make Bing your caddy for this PGA Season

$
0
0

The Masters® has earned its place in golf folklore and is an event near and dear to millions, from club professionals and scratch golfers to driving-range and putt-putt heroes around the globe. As is tradition, Bing is delivering another breakthrough search experience this sports season to meet the information needs of golf fans.

Starting this PGA season with the event in Augusta, Bing will help you find the answers to your PGA questions. We'll provide round-by-round scores, full leaderboards, and specific golfer information. We track each golfer's world ranking, their total earnings, and each event they're participating in.

[Image: PGA Tour 2016 - 2017]

But what about when the final group of golfers rounds Amen Corner for the last time on Sunday and one of the world’s best accepts the coveted green jacket? What else does Bing have to offer? Like the players, we’re committed to the full spring and summer season. With Bing you can check the entire event schedule for the tour season and track the FedExCup golfer rankings. Throughout each event weekend you can check back in with Bing to see who is poised to take home the Sunday evening hardware.

[Image: PGA Tour 2016 - 2017 Schedule]

-The Bing Team
