Channel: TechNet Technology News

Connected Drones: 3 powerful lessons we can all take away


Connected Drone Powered by Azure

Lesson 1: Cloud-Based Solutions Empower Trailblazers Wherever They Are

eSmart Systems is a young, dynamic company in Norway that made the early strategic choice of basing its products and services entirely on Microsoft Azure. Using Azure gave them immediate global reach in a way that was unthinkable just a few years earlier. Their mission is to bring big data analytics to utilities and smart cities, and one of their focus areas is electric utilities and smart grids. eSmart has been delivering analytics solutions to power utilities for years, but the specific use case I want to share with you in this blog is called Connected Drone. It is a story that combines drones with intelligent software to prevent power blackouts, or as eSmart puts it, “making Azure intelligence mobile”.

Big Data Analytics for Utilities and Cities

The economic impact of blackouts is massive, and the scale of power grids is huge. According to the US Department of Energy’s 2011 Grid Resilience Report, the average annual cost of power outages caused by severe weather in the USA is between $18 billion and $33 billion. Current inspection methods include dangerous, low-flying helicopters, people climbing pole by pole, and, in some cases, even dogs sniffing for rot. Once the inspection is done, electric companies have data, and lots of it. Typically, they go through it manually, picture by picture, looking for faults. Such a manual process over vast amounts of data is obviously slow, error-prone, and very expensive.

Initiate Plan Execute Analyze Azure

Lesson 2: Established Industry + Deep Intelligence + Cloud = New Opportunities!

eSmart began developing Connected Drone out of a strong conviction that drones combined with cloud intelligence could bring positive, disruptive change to the power industry. The objective of Connected Drone is to support and automate the inspection and monitoring of power grid infrastructure, replacing what is currently an expensive, risky, and extremely time-consuming activity performed by ground crews and helicopters. To do this, they use deep learning to analyze multiple data feeds streamed from the drones. Their analytics software recognizes individual objects, such as insulators on power poles, and directly links the new information with the component registry, so that inspectors can quickly become aware of potential problems. Connected Drone covers all phases of the inspection process: (1) initiation of a mission, (2) planning of the flight, (3) execution of the mission, and finally (4) analysis of the collected data.

eSmart applies a range of deep learning technologies to analyze data from Connected Drone, from the very deep Faster R-CNN to Single Shot Multibox Detectors and more, deployed on Azure GPUs.

Neural Nets for Object Recognition

Below you can see some examples of analysis of both still images and video streams and how the system recognizes the different components of power poles.

Drone Still Image 1

Drone Still Image 2

Lesson 3: When Faced with Challenges… Unleash Your Creativity!

In order to really appreciate the ingenuity that went into the Connected Drone project, you need to get a taste of the challenges that eSmart faced:

  1. Recognition of Small Objects. The first challenge was the recognition of small objects. Images for power line inspections are typically high resolution, but off-the-shelf object detectors typically work with low-resolution images, since they were developed to recognize large objects such as cars, people, and airplanes (e.g., ImageNet). Detecting small differences between small objects in a downscaled image is simply not possible. eSmart’s solution was a multi-step process in which low-resolution object detectors first identify generic objects, and this information is then used to crop the high-resolution image and apply an object classifier to the crop (a short illustrative sketch follows this list).
    Small Object Recognition
  2. Class Imbalance. The second big challenge was class imbalance. eSmart had many different components that they needed to identify; what you see in the picture below is just a small sample. Some components are far more common than others, and it is not hard to imagine the problems this creates when trying to train a balanced classifier.
    Class Imbalance

  3. Few Fault Examples. A similar problem is the lack of sufficient examples of the faults that the analytics software needs to recognize. Below are a few examples of some fault types. It is unrealistic to expect tens of thousands of examples of each fault situation, so eSmart needed to think of something new and innovative to address this challenge as well.

Few Fault Examples
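
Returning to the first challenge, here is a minimal Python sketch of the crop-then-classify idea, purely for illustration (it is not eSmart’s actual pipeline, and detect_objects_lowres and classify_component are hypothetical stand-ins for the trained detector and classifier):

from PIL import Image

def detect_and_classify(image_path, detect_objects_lowres, classify_component, scale=8):
    """Step 1: find candidate objects on a downscaled copy of the image.
    Step 2: crop each matching region from the full-resolution image and classify it."""
    full = Image.open(image_path)
    low = full.resize((full.width // scale, full.height // scale))

    results = []
    for box in detect_objects_lowres(low):                   # (left, top, right, bottom) in low-res pixels
        left, top, right, bottom = (v * scale for v in box)  # map back to full-resolution coordinates
        crop = full.crop((left, top, right, bottom))
        results.append((box, classify_component(crop)))      # e.g. 'insulator', 'cross-arm', ...
    return results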

A Creative Solution

The solution they came up with is to complement real images with synthetic images. They built detailed 3D models that are rendered to generate images on demand, varying camera angles, lighting conditions, and backgrounds. In this way, they could both balance the object classes and generate additional fault examples as needed. This is a very creative way of doing machine teaching to address the scarcity of high-quality training data (a toy sketch of the idea follows the figure below).

Mix Real and Synthetic Images
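
The real pipeline renders detailed 3D models while varying camera angle, lighting, and background. As a much simpler toy sketch of the same idea (the file paths and Pillow-based compositing here are assumptions, not eSmart’s code), the snippet below pastes a pre-rendered component image onto random backgrounds with random brightness and placement to generate extra examples for under-represented classes:

import random
from PIL import Image, ImageEnhance

def synthesize(component_png, background_paths, n=100):
    """Generate n synthetic training images by pasting a rendered component
    onto randomly chosen backgrounds with random brightness and position."""
    component = Image.open(component_png).convert('RGBA')
    samples = []
    for _ in range(n):
        background = Image.open(random.choice(background_paths)).convert('RGBA')
        varied = ImageEnhance.Brightness(component).enhance(random.uniform(0.6, 1.4))
        x = random.randint(0, max(0, background.width - varied.width))
        y = random.randint(0, max(0, background.height - varied.height))
        sample = background.copy()
        sample.paste(varied, (x, y), varied)   # the component's alpha channel acts as the paste mask
        samples.append(sample.convert('RGB'))
    return samples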

So How Does This All Actually Work?

Below is an example from a real drone operation eSmart carried out in September. The red lines are the customer’s power grid, and the blue line is the 6-kilometer stretch that Connected Drone inspected.

Real Drone Operation Activity 1

Zooming in on the inspected line (picture below), you can see six observations that the AI made.

Real Drone Operation Activity 2

The screenshot below shows that there might be a problem with one of the insulators.

Real Drone Operation Activity 3

The inspectors can then go through all the inspected data, pole by pole, and create a diagnosis. With Connected Drone on Azure, eSmart was able to turn a massive amount of previously inaccessible data into useful information that energy companies can act on.

Real Drone Operation Activity 4

Today, connected drones fly and literally collect an inventory of assets for electric companies. The ultimate solution that eSmart envisions is to capture the skills of the people working on power grids and train AI to suggest which areas need attention, what should be fixed, and what it would cost to fix it. eSmart is working on “what-if” analysis using data from their Connected Grid platform together with time-series data. The outcome will be something like a time-travel “movie”: you can go backwards and diagnose what happened (that part is relatively easy), or you can go forward (using their Connected Grid algorithms) and predict what will happen. Letting users change the conditions in this time-series travel will enable customers to see how those changes affect the “what-if” scenarios and where electric companies should invest. Furthermore, eSmart is looking closely at expanding the use of sensors. The Connected Drone project is just scratching the surface of what’s possible.

Bonus Lesson: Empower Others to Join You on the Same Journey!

The story doesn’t end here… One of the interesting things was how eSmart created a community around the Connected Drone project.

Involving Kids Through Gamification (to Tag Insulators)

Who knew that power grids could be gamified? eSmart proved that they can be. To train the models, they needed a lot of training data, so they built an AI Training Service with a website front end to crowdsource data tagging, using a ranking list as a motivational factor (similar to the leaderboards in computer games). Local kids, football teams, and school classes raise money for their tournaments and trips by tagging insulators. (Surprisingly, grown-ups weren’t as enthusiastic.) In addition to having fun, there is a strong incentive in knowing they are doing something important for the infrastructure of their country and for their teams.

Involving Kids Through Gamification

Helping Government Agencies (through “Black Box” for Drones Innovation)

In Norway, the drone business has recently exploded; there are more than 1,000 operators. Meanwhile, Norway’s equivalent of the FAA, the Civil Aviation Authority Norway (CAA), is severely understaffed. eSmart has been working closely with the Norwegian CAA on developing safe, low-risk drone missions. Today, when drones fly, they typically don’t store statistical data about the flight. That is one of the reasons why longer flights and more advanced operations are not allowed.

eSmart’s solution is a virtual “black box” for drones that makes risk analysis more automatic. The black box collects sufficient information about the flight, and using that data eSmart can perform risk analysis and share it with the CAA. As eSmart puts it, “We are not only in the business of helping electricity companies, but also CAA”. The basic idea of the black box is to stream flight data into Azure and then perform the risk analysis there. In some rural areas, however, connectivity may be lacking, so in those cases the data may be stored on the drone itself and analyzed post-mission.

Collaborating with Drone Vendors to Design More Robust Drones (via Predictive Maintenance)

eSmart is also working on predictive maintenance for drones (something similar in spirit to this). They are actively working with Robot Aviation (their drone partner) on early warnings; for example, if the temperature of a drone keeps rising over some period of time, maybe it’s not a good idea to fly. Furthermore, Robot Aviation doesn’t have just one type of drone – it operates a fleet, from helicopters to small drones. Since the weather in Norway is often cold (as one Norwegian proverb says: “There Is No Such Thing as Bad Weather, Only Inadequate Clothing”), they need drones that can fly in cold weather, in remote areas, and under bad weather conditions (many drones aren’t fit for this).
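
As a purely illustrative sketch (the telemetry shape and threshold below are assumptions, not Robot Aviation’s actual rules), an early warning on a rising temperature trend could look something like this:

def should_ground(temperature_readings, window=10, max_rise_per_sample=0.5):
    """Return True if the last `window` temperature samples show a sustained
    upward trend steeper than `max_rise_per_sample` degrees per sample."""
    recent = temperature_readings[-window:]
    if len(recent) < window:
        return False                                  # not enough data to judge a trend
    average_rise = (recent[-1] - recent[0]) / (window - 1)
    return average_rise > max_rise_per_sample

# A drone whose temperature climbs about one degree per sample gets flagged
print(should_ground([40 + i for i in range(12)]))     # True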

Other Industries?

While today the Connected Drone project only inspects power grids, it’s not a big leap to imagine other applications using the same concepts (training models, recognizing objects, safely planning missions) and architecture – for instance, inspecting railroads, other infrastructure, and gas pipelines.

It is fair to say that the Connected Drone project exemplifies how deep learning on Azure GPUs is redefining what a cloud solution can do.

@rimmanehme


Register now for “Bridging the Generation Gap: How to create cohesive teams” on Modern Workplace


For the first time in history, many organizations now have five distinct generations in the workplace, according to Forbes.com. From Gen-Xers and Millennials, to Baby Boomers and beyond, how can you ensure your multi-generational teams are collaborating and communicating effectively?

Join us for the next episode of Modern Workplace, “Bridging the Generation Gap: How to create cohesive teams,” airing November 15 at 8 a.m. PST / 4 p.m. GMT, and learn how to leverage the skills and knowledge of each team member to maximize productivity.

  • Asha Sharma, chief operating officer at Porch.com, shares how to create a dynamic workforce through empowering integrated and diverse teams.
  • Haydn Shaw, author of “Sticking Points,” highlights the twelve most common points of friction among team members and explains how they can either weaken your organization or, if leveraged properly, be the keys to its success.

Plus, get an exclusive look inside Microsoft’s latest collaboration tool—Microsoft Teams—and find out how you can improve communication and revolutionize the way your teams work.

Register now!


Git perf and scale


New features and UI changes naturally get a lot of attention. Today, I want to spotlight the less visible work that we do on Team Services: ensuring our performance and scale meet our customers’ needs now and in the future. We are constantly working behind the scenes profiling, benchmarking, measuring, and iterating to make every action faster. In this post, I’ll share 3 of the dozens of improvements we’ve made recently.


First up, we’ve sped up pull request merges significantly. We have an enormous “torture test repo” (tens of GBs across millions of files and 100K+ folders) we use for perf and scale testing. Median merge time for this repo went from 92 seconds to 33 seconds, a 64% reduction. We also saw improvements for normal-sized repos, but it’s harder to generalize their numbers in a meaningful way.

Several changes contributed to this gain. One was adopting a newer version of LibGit2. Another was altering LibGit2’s caching strategy – its default wasn’t ideal for the way we run merges. As a customer, you’ll notice the faster merges when completing PRs. For our service, it means we can serve more users with fewer resources.


An engineer on a sister team noticed that one of our ref lookups exhibited O(N) behavior. Refs are the data structure behind branches in Git. We have to look up refs to display branch names on the web. If you’re familiar with time complexity of algorithms, you’ll recall that O(N) behavior means that the work done by a program scales linearly with the size of the input.

The work done in this particular lookup scaled linearly with the number of branches in a repository. Up to several hundred refs, this lookup was “fast enough” from a human’s point of view. Humans are quite slow compared to computers 😉

Every millisecond counts in web performance, and there’s no reason to do excess work. We were able to rewrite that lookup to be constant with respect to the number of branches.
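
To make the complexity difference concrete, here is a small Python sketch (not the actual Team Services code) showing why indexing refs by name turns a per-request linear scan into an effectively constant-time lookup:

# O(N) per lookup: scan every ref until the name matches
def find_ref_linear(refs, name):
    for ref_name, commit_id in refs:
        if ref_name == name:
            return commit_id
    return None

# O(1) per lookup after a one-time indexing pass
def build_ref_index(refs):
    return {ref_name: commit_id for ref_name, commit_id in refs}

refs = [('refs/heads/main', 'a1b2c3'), ('refs/heads/feature/perf', 'd4e5f6')]
index = build_ref_index(refs)
print(index.get('refs/heads/feature/perf'))   # 'd4e5f6'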


The last improvement requires a bit more explanation. At various points in our system, we need to track the history of a file: which commits touched this file? Our initial implementation (which served us well for several years) was to track each commit in a SQL table which we could query by file path or by commit.

Fast forward several years. One of the oldest repos on our service is the one which holds the code for VSTS itself. The SQL table tracking its commits had grown to 90GB (many, many times the size of the repo itself). Even after the usual tricks like schema changes and SQL page compression, we weren’t able to get the table size down to an acceptable level. We needed to rethink the problem.

The team spent 3+ months designing and implementing a fast, compact representation of the Git graph. This representation is small enough to keep in memory on the application tier machines, which themselves are cheaper to operate than SQL machines. The change was carefully designed and implemented to be 100% transparent to end customers. Across a variety of measurements, we found no noticeable performance regressions and in many cases saw improvements.
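
The post doesn’t describe the actual encoding, but as a toy Python sketch of the general idea, an in-memory graph of parent links plus per-commit change sets is enough to answer “which commits touched this file?” without a SQL table (all names here are made up for illustration):

def file_history(parents, changed_files, head, path):
    """Walk the commit graph from `head` and collect commits whose change set
    includes `path`. `parents` maps commit -> list of parent commits;
    `changed_files` maps commit -> set of paths touched by that commit."""
    seen, stack, hits = set(), [head], []
    while stack:
        commit = stack.pop()
        if commit in seen:
            continue
        seen.add(commit)
        if path in changed_files.get(commit, set()):
            hits.append(commit)
        stack.extend(parents.get(commit, []))
    return hits

parents = {'c3': ['c2'], 'c2': ['c1'], 'c1': []}
changes = {'c3': {'readme.md'}, 'c2': {'src/app.cs'}, 'c1': {'src/app.cs', 'readme.md'}}
print(file_history(parents, changes, 'c3', 'src/app.cs'))   # ['c2', 'c1']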

We were able to completely drop the commit change tracking table, freeing up dozens of gigabytes on every scale unit’s database tier. We finished migrating to the new system over 2 months ago. Besides a handful of incidents during early dogfooding, we have not received complaints about either its performance or correctness. (I’m flirting with chaos making such claims, of course. If you have a scenario where performance regressed since the beginning of September, email me so we can investigate.)

This explanation leaves out a lot of details in favor of brevity. If there’s interest, we’re thinking of doing a series of blog articles on how our Git service works under the hood. Let me know in the comments what you want to hear more about.

Thanks to the VC First Party team [Wil, Jiange, Congyi, Stolee, Garima, Saeed, and others] for their insights on this topic. All remaining errors are mine alone.

Microsoft Teams integration, repo favorites, and new package management and release management regions – Nov 2


Note: The improvements discussed in this post will be rolling out throughout the next week.

There are some exciting new features this sprint.

Package Management in India and Brazil

Package Management is now available to Team Services accounts hosted in the South India and Brazil South Azure regions. To get started, install the extension from the Marketplace.

Microsoft Teams integration

Microsoft Teams is a new chat-based workspace in Office 365 that makes collaborating on software projects with Team Services a breeze. Team Services users can stay up to date with alerts for work items, pull requests, commits, and builds using the Connectors within Microsoft Teams. Starting November 9, users will also be able to bring their Kanban boards right into Microsoft Teams. For more information, see our blog.

Microsoft Teams

Repo favorites

You can now favorite the repos you work with most frequently. In the repo picker, you will see tabs for All repositories and your Favorites. Click the star to add a repository to your list of Favorites.

repo favorites

Rollback build definitions

You can roll a build definition back to a previous version by going to the History tab when editing a build definition.

Disable the sync and checkout of sources in a build

Starting with the 2.108 agent, you can optionally disable the automatic source sync and checkout for Git. This will enable you to handle the source operations in a task or script instead of relying on the agent’s built-in behavior. All standard source-related variables like Source.Version, Source.Branch and Build.SourcesDirectory are set.

Docker extension enhancements

There have been a number of enhancements to the Docker extension in the marketplace:

  • Docker Compose run action
  • Restart policy for Docker run
  • Ensuring the registry host is specified on logout
  • Bug fixes

.NET Core build task

We added support for building, testing, and publishing .NET Core applications with a dedicated .NET Core task for project.json templates.

.net core task

Build and release management templates

We have added support for new templates in Build and Release for building ASP.NET/ASP.NET Core applications and deploying them to Azure web applications.

build template

release template

ASP.NET Core and Node.js deployments

The Azure Web App task now supports ASP.NET Core and Node.js applications. You just specify the folder with all the application contents to deploy. This task can also run on Linux platforms for deploying ASP.NET Core or Node-based applications.

Azure Web App Service manage task

We added a new task for managing Azure Web App services. Currently, this task has support for swapping any slot to production.

Release Management available in multiple regions

The Release Management service is now available in Europe, Australia, Brazil, and India regions in addition to the US. All of your Release Management data is now co-located with the rest of your Team Services account data.

REST client helpers for Test Step operations

Users will now be able to create, modify and delete test steps and test step attachments in Test Case work items using the helper classes we have added to the REST client (see the RestApi-Sample).

Test case description in Web runner

Customers often use the test case description field for capturing the prerequisites that must be met before the test case execution can start. With this update, users will now be able to view the test case description information in the Web runner by using the Show description option.

test case description

As always, if you have ideas on things you’d like to see us prioritize, head over to UserVoice to add your idea or vote for an existing one.

Thanks,

Jamie Cool

How to use Test Step using REST Client Helper?


Test cases are the backbone of all manual testing scenarios. You can create a test case using the web client from the Test or Work hubs, or from Microsoft Test Manager (MTM); the test cases are then stored in Team Foundation Server or Visual Studio Team Services. Using these clients you can create test artifacts such as test cases with test steps, test step attachments, shared steps, parameters, and shared parameters. A test case is also a work item, so using the Work Item REST API you can create a work item of type Test Case; see here: Create a work item.

Problem

Currently there is no direct support for modifying or updating test steps in a Test Case work item. The work item stores test steps, associated test step attachments, and expected results in a custom XML document, so a helper is needed to build that custom XML when updating test steps.

Solution

With the current deployment, we have added support to create, read, update, and delete test steps (action and expected result) and test step attachments. The ITestBase interface exposes the required key methods – LoadActions and SaveActions – which provide helpers in both C# and JS for the operations mentioned above.

Requirement

C# Client (Microsoft.TeamFoundationServer.Client) as released in previous deployment.
OR
JS Client (vss-sdk-extension) (Note: JS changes will be available only after current deployment completes.)

Walk through using new helper in C# client

Here, let’s walk through, step by step, how to consume these newly added helper classes. We have also added a GitHub sample covering these and a few more operations (link at the bottom of the post).

  1. Create an instance of TestBaseHelper class and generate ITestBase object using that.
    TestBaseHelper helper = new TestBaseHelper();
    ITestBase testBase = helper.Create();
  2. ITestBase exposes methods to create test steps, generate XML, save actions, and load actions. You can assign a title, expected result, and description to each test step, and associate an attachment using its URL. In the end, all test steps are added to the Actions collection of the testBase object (see below).
    ITestStep testStep1 = testBase.CreateTestStep();
    testStep1.Title = "title1";
    testStep1.ExpectedResult = "expected1";
    testStep1.Description = "description1";
    testStep1.Attachments.Add(testStep1.CreateAttachment(attachmentObject.Url, "attachment1"));
    
    testBase.Actions.Add(testStep1);
    
  3. A call to SaveActions uses the helper classes to set the appropriate test case field – Test Steps – with the newly added steps, expected results, and attachment links. The JSON patch document built by SaveActions is then passed to CreateWorkItemAsync as shown below.
    JsonPatchDocument json = new JsonPatchDocument();
    
    // create a title field
    JsonPatchOperation patchDocument1 = new JsonPatchOperation();
    patchDocument1.Operation = Operation.Add;
    patchDocument1.Path = "/fields/System.Title";
    patchDocument1.Value = "New Test Case";
    json.Add(patchDocument1);
    
    // add test steps in json
    // it will update json document based on test steps and attachments
    json = testBase.SaveActions(json);
    
    // create a test case
    var testCaseObject = _witClient.CreateWorkItemAsync(json, projectName, "Test Case").Result;
    
  4. To modify a test case and its steps, get the test case and call LoadActions, which internally uses the helper class to parse the returned XML and attachment links, as shown below. This populates the testBase object with all the details.
    testCaseObject = _witClient.GetWorkItemAsync(testCaseId, null, null, WorkItemExpand.Relations).Result;
    
    // initiate testbase object again
    testBase = helper.Create();
    
    // fetch xml from testcase object
    var xml = testCaseObject.Fields["Microsoft.VSTS.TCM.Steps"].ToString();
    
    // create TestAttachmentLink objects from the work item relations; the test step helper will use these
    IList<TestAttachmentLink> tcmlinks = new List<TestAttachmentLink>();
    foreach (WorkItemRelation rel in testCaseObject.Relations)
    {
        TestAttachmentLink tcmlink = new TestAttachmentLink();
        tcmlink.Url = rel.Url;
        tcmlink.Attributes = rel.Attributes;
        tcmlink.Rel = rel.Rel;
        tcmlinks.Add(tcmlink);
    }
    
    // load test step xml and attachment links
    testBase.LoadActions(xml, tcmlinks);
    
  5. Once testBase object has been loaded with test case information, you can update test steps and attachments in the test case object.
    ITestStep testStep;
    //updating 1st test step
    testStep = (ITestStep)testBase.Actions[0];
    testStep.Title = "New Title";
    testStep.ExpectedResult = "New expected result";
    
    //removing 2nd test step
    testBase.Actions.RemoveAt(1);
    
    //adding new test step
    ITestStep testStep3 = testBase.CreateTestStep();
    testStep3.Title = "Title 3";
    testStep3.ExpectedResult = "Expected 3";
    testBase.Actions.Add(testStep3);
    
  6. Update the test case object with the new changes to the test steps and attachments.
    JsonPatchDocument json2 = new JsonPatchDocument();
    json2 = testBase.SaveActions(json2);
    // update testcase wit using new json
    testCaseObject = _witClient.UpdateWorkItemAsync(json2, testCaseId).Result;

As shown above, you can now use the helper classes provided to update test case steps, while still using the existing Work Item REST APIs for the Test Case work item. You can find comprehensive samples for both C# and JS in the GitHub project: RESTApi-Sample.

– Test Management Team

Post notifications to Microsoft Teams using PowerShell


Microsoft Teams, announced earlier today, is a new platform for chat-based communication. I was very happy to see the Connectors available, including many for popular CI/CD-related tools. The Connector configurations allow an easily pluggable extension point for Microsoft Teams to integrate notifications into discussions. This creates an opportunity to integrate datacenter and cloud operations practices with real-time discussions.

Among the available connectors in the list, you will find Incoming Webhook. This provides an easy way to post notifications from any scripting language through a JSON-formatted web service call.

To add the Connector:

  1. Open the Channel and click the More Options button (which appears as three dots at the top right of the window).
  2. Select Connectors.
  3. Scroll down to Incoming Webhook and click the Add button.
  4. Give the connector a name and image of your choosing and finally click Create.
  5. A new unique URI is automatically generated. Copy this URI string to your clipboard.

If you ever need to retrieve it in the future you can do so from this same configuration area of Teams. Select the Connector and then click the Manage button to reveal the URI string.

Call the Webhook from PowerShell

Let’s start with a simple test to verify functionality. Replace the value of the $uri string in the following script and test it by running it in PowerShell ISE. You should see a “1” returned in the PowerShell window and a new post in the Teams channel. If an error occurs, it will appear in PowerShell as a malformed web service call.

$uri = '<your Incoming Webhook URI>' # placeholder: paste the URI copied from the connector configuration

$body = ConvertTo-JSON @{
    text = 'Hello Channel'
}

Invoke-RestMethod -uri $uri -Method Post -body $body -ContentType 'application/json'

simple

Adding more details

Now, referencing the O365 Connector Cards API, we can add significantly more detail to the notification. This script will probably not be run interactively by a human; it will be called from some automated process. You can include it in an Azure Automation runbook to notify a channel every time the runbook has executed, along with the result. For example, the message text “Successfully shut down all running Hyper-V lab virtual machines using Hybrid Runbook Worker” can be posted to your team channel at the end of every work day to make everyone aware that a change occurred in your test environment.

As another example, you can use PowerShell to deliver notifications about changes to your Infrastructure as Code environment, managed by a CI/CD pipeline. In this case, you can leverage PowerShell Core to send the notification even when your build service is running on a Linux server.

In the example below, the body includes an activity and multiple facts. This script is compatible across Windows, Linux, or MacOS when running in PowerShell Core.

$uri = '<your Incoming Webhook URI>' # placeholder: paste the URI copied from the connector configuration

# these values would be retrieved from or set by an application
$status = 'success'
$fact1 = 'All tests passed'
$fact2 = '1 test failed'

$body = ConvertTo-Json -Depth 4 @{
    title = 'New Build Notification'
    text = "A build completed with status $status"
    sections = @(
        @{
            activityTitle = 'Build'
            activitySubtitle = 'automated test platform'
            activityText = 'A change was evaluated and new results are available.'
            activityImage = 'http://URL' # this value would be a path to a nice image you would like to display in notifications
        },
        @{
            title = 'Details'
            facts = @(
                @{
                name = 'Unit Tests'
                value = $fact1
                },
                @{
                name = 'Integration Tests'
                value = $fact2
                }
            )
        }
    )
}


Invoke-RestMethod -uri $uri -Method Post -body $body -ContentType 'application/json'

details

Finally, add links to the fact values, and add a button at the end of the notification that links to a web location. Create the links by formatting the values with markdown-style link references. Then add the button using the “ViewAction” type.

There is a catch to adding the button.

In order for PowerShell to correctly convert the JSON, use the -Depth parameter. The target item within the ‘potentialAction’ object must be created as an array with a single item when it is converted to JSON, or the request will be malformed. A Depth of 4 correctly creates the array in this example, because that value sits four levels deep in the JSON document.


$uri = '<your Incoming Webhook URI>' # placeholder: paste the URI copied from the connector configuration

# these values would be retrieved from or set by an application
$status = 'success'
$fact1 = '[All tests passed](http://URLtoReport)'
$fact2 = '[1 test failed](http://URLtoReport)'

$body = ConvertTo-Json -Depth 4 @{
    title = 'New Build Notification'
    text = "A build completed with status $status"
    sections = @(
        @{
            activityTitle = 'Build'
            activitySubtitle = 'automated test platform'
            activityText = 'A change was evaluated and new results are available.'
            activityImage = 'http://URL' # this value would be a path to a nice image you would like to display in notifications
        },
        @{
            title = 'Details'
            facts = @(
                @{
                name = 'Unit Tests'
                value = $fact1
                },
                @{
                name = 'Integration Tests'
                value = $fact2
                }
            )
        }
    )
    potentialAction = @(@{
            '@context' = 'http://schema.org'
            '@type' = 'ViewAction'
            name = 'Click here to visit PowerShell.org'
            target = @('http://powershell.org')
        })
}


Invoke-RestMethod -uri $uri -Method Post -body $body -ContentType 'application/json'

button

Troubleshooting

In testing this, all of the errors I encountered were the result of not correctly formatting the JSON document. Reference the examples given in the API documentation and compare them against your results as returned by the ConvertTo-Json cmdlet. To see this text, highlight just the section of the script that generates $body and run it in PowerShell ISE using the “Run Selection” button or by pressing F8. Then type $body in the terminal window and review the output. Remember that square brackets in a JSON document indicate an array, and an array might contain only a single value.

Thank you!
Michael Greene
Principal Program Manager
Enterprise Cloud Group CAT Team

DSC Resource Kit November 2016 Release


We just released the DSC Resource Kit!

Since our last release on September 21, we have added 1 new repository, PSDscResources, which will serve as the new home of the in-box DSC resources from the PSDesiredStateConfiguration module. This new module now has over 75,000 downloads from the PowerShell Gallery! Wow! Read more about PSDscResources in our blog post here.

This release includes updates to 12 DSC resource modules, including 13 new DSC resources. In these past 6 weeks, 62 pull requests have been merged and 43 issues have been closed, all thanks to our amazing community!

The modules updated in this release are:

  • xActiveDirectory
  • xCertificate
  • xExchange
  • xNetworking
  • xPSDesiredStateConfiguration
  • xRemoteDesktopSessionHost
  • xSqlServer
  • xStorage
  • xWebAdministration
  • SharePointDsc
  • SystemLocaleDsc
  • PSDscResources

For a detailed list of the resource modules and fixes in this release, see the Included in this Release section below.

Our last community call for the DSC Resource Kit was last week on October 26. Unfortunately, we had technical difficulties with Skype and weren’t able to actually see or hear any of our wonderful community members. We still posted a recording of our updates as well as summarizing notes. Join us next time to ask questions and give feedback about your experience with the DSC Resource Kit. Keep an eye on the community agenda for the next call date.

We strongly encourage you to update to the newest version of all modules using the PowerShell Gallery, and don’t forget to give us your feedback in the comments below, on GitHub, or on Twitter (@PowerShell_Team)!

As with past Resource Kits, all resources with the ‘x’ prefix in their names are still experimental – this means that those resources are provided AS IS and are not supported through any Microsoft support program or service. If you find a problem with a resource, please file an issue on GitHub.

Included in this Release

You can see a detailed summary of all changes included in this release in the table below. For past release notes, go to the README.md or Changelog.md file on the GitHub repository page for a specific module (see the How to Find DSC Resource Modules on GitHub section below for details on finding the GitHub page for a specific module).

Module Name / Version / Release Notes
xActiveDirectory 2.14.0.0
  • xADDomainController: Adds Site option.
xCertificate 2.2.0.0
  • Converted appveyor.yml to install Pester from PSGallery instead of from Chocolatey.
  • Moved unit tests to correct folder structure.
  • Changed unit tests to use standard test templates.
  • Updated all resources to meet HQRM standards and style guidelines.
  • Added .gitignore file
  • Added .gitattributes file to force line endings to CRLF to allow unit tests to work.
  • xCertificateCommon:
    • Moved common code into new module CertificateCommon.psm1
    • Added standard exception code.
    • Renamed functions to use acceptable verb Test-*.
    • Added help to all functions.
  • xCertificateImport:
    • Fixed bug with Test-TargetResource incorrectly detecting change required.
    • Reworked unit tests for improved code coverage to meet HQRM standards.
    • Created Integration tests for both importing and removing an imported certificate.
    • Added descriptions to MOF file.
    • Removed default parameter values for parameters that are required or keys.
    • Added verbose messages.
    • Split message and error strings into localization string files.
    • Added help to all functions.
  • xPfxImport:
    • Fixed bug with Test-TargetResource incorrectly detecting change required.
    • Reworked unit tests for improved code coverage to meet HQRM standards.
    • Created Integration tests for both importing and removing an imported certificate.
    • Added descriptions to MOF file.
    • Removed default parameter values for parameters that are required or keys.
    • Added verbose messages.
    • Split message and error strings into localization string files.
    • Added help to all functions.
  • xCertReq:
    • Cleaned up descriptions in MOF file.
    • Fixed bugs generating certificate when credentials are specified.
    • Allowed output of certificate request when credentials are specified.
    • Split message and error strings into localization string files.
    • Created unit tests and integration tests.
    • Improved logging output to enable easier debugging.
    • Added help to all functions.
  • xPDT:
    • Renamed to match standard module name format (MSFT_x).
    • Modified to meet 100 characters or less line length where possible.
    • Split message and error strings into localization string files.
    • Removed unused functions.
    • Renamed functions to standard verb-noun form.
    • Added help to all functions.
    • Fixed bug in Wait-Win32ProcessEnd that prevented waiting for process to end.
    • Added Wait-Win32ProcessStop to wait for a process to stop.
    • Removed unused and broken scheduled task code.
xExchange 1.11.0.0
  • xExchActiveSyncVirtualDirectory: Fix issue where ClientCertAuth parameter set to “Allowed” instead of “Accepted”
xNetworking 3.0.0.0
  • Corrected integration test filenames:
    • MSFT_xDefaultGatewayAddress.Integration.Tests.ps1
    • MSFT_xDhcpClient.Integration.Tests.ps1
    • MSFT_xDNSConnectionSuffix.Integration.Tests.ps1
    • MSFT_xNetAdapterBinding.Integration.Tests.ps1
  • Updated all integration tests to use v1.1.0 header and script variable context.
  • Updated all unit tests to use v1.1.0 header and script variable context.
  • Removed unnecessary global variable from MSFT_xNetworkTeam.integration.tests.ps1
  • Converted Invoke-Expression in all integration tests to &.
  • Fixed unit test description in xNetworkAdapter.Tests.ps1
  • xNetAdapterBinding
    • Added support for the use of wildcard (*) in InterfaceAlias parameter.
  • BREAKING CHANGE – MSFT_xIPAddress: SubnetMask parameter renamed to PrefixLength.
xPSDesiredStateConfiguration 5.0.0.0
  • xWindowsFeature:
    • Cleaned up resource (PSSA issues, formatting, etc.)
    • Added/Updated Tests and Examples
    • BREAKING CHANGE: Removed the unused Source parameter
    • Updated to a high quality resource
  • xDSCWebService:
    • Add DatabasePath property to specify a custom database path and enable multiple pull server instances on one server.
    • Rename UseUpToDateSecuritySettings property to UseSecurityBestPractices.
    • Add DisableSecurityBestPractices property to specify items that are excepted from following best practice security settings.
  • xGroup:
    • Fixed PSSA issues
    • Formatting updated as per style guidelines
    • Missing comment-based help added for Get-/Set-/Test-TargetResource
    • Typos fixed in Unit test script
    • Unit test “Get-TargetResource/Should return hashtable with correct values when group has no members” updated to handle the expected empty Members array correctly
    • Added a lot of unit tests
    • Cleaned resource
  • xUser:
    • Fixed PSSA/Style violations
    • Added/Updated Tests and Examples
  • Added xWindowsPackageCab
  • xService:
    • Fixed PSSA/Style violations
    • Updated Tests
    • Added “Ignore” state
xRemoteDesktopSessionHost 1.3.0.0
  • Converted appveyor.yml to install Pester from PSGallery instead of from Chocolatey.
xSQLServer 3.0.0.0
  • xSQLServerHelper
    • added functions
      • Test-SQLDscParameterState
      • Get-SqlDatabaseOwner
      • Set-SqlDatabaseOwner
    • Examples
      • xSQLServerDatabaseOwner
        • 1-SetDatabaseOwner.ps1
      • Added tests for resources
        • MSFT_xSQLServerDatabaseOwner.Tests.Tests.ps1
xStorage 2.8.0.0
  • added test for existing file system and no drive letter assignment to allow simple drive letter assignment in MSFT_xDisk.psm1
  • added unit test for volume with existing partition and no drive letter assigned for MSFT_xDisk.psm1
  • xMountImage: Fixed mounting disk images on Windows 10 Anniversary Edition
  • Updated to meet HQRM guidelines.
  • Moved all strings into localization files.
  • Fixed examples to import xStorage module.
  • Fixed Readme.md layout issues.
  • xWaitForDisk:
    • Added support for setting DriveLetter parameter with or without colon.
    • MOF Class version updated to 1.0.0.0.
  • xWaitForVolume:
    • Added new resource.
  • StorageCommon:
    • Added helper function module.
    • Corrected name of unit tests file.
  • xDisk:
    • Added validation of DriveLetter parameter.
    • Added support for setting DriveLetter parameter with or without colon.
    • Removed obfuscation of drive/partition errors by eliminating try/catch block.
    • Improved code commenting.
    • Reordered tests so they are in same order as module functions to ease creation.
    • Added FSFormat parameter to allow disk format to be specified.
    • Size or AllocationUnitSize mismatches no longer trigger Set-TargetResource because these values can't be changed (yet).
    • MOF Class version updated to 1.0.0.0.
    • Unit tests changed to match xDiskAccessPath methods.
    • Added additional unit tests to Get-TargetResource.
    • Fixed bug in Get-TargetResource when disk did not contain any partitions.
    • Added missing cmdletbinding() to functions.
  • xMountImage (Breaking Change):
    • Removed Name parameter (Breaking Change)
    • Added validation of DriveLetter parameter.
    • Added support for setting DriveLetter parameter with or without colon.
    • MOF Class version updated to 1.0.0.0.
    • Enabled mounting of VHD/VHDx/VHDSet disk images.
    • Added StorageType and Access parameters to allow mounting VHD and VHDx disks as read/write.
  • xDiskAccessPath:
    • Added new resource.
    • Added support for changing/setting volume label.
xWebAdministration 1.15.0.0
  • Corrected name of AuthenticationInfo parameter in Readme.md.
  • Added sample for xWebApplication for adding new web application.
  • Corrected description for AuthenticationInfo for xWebApplication and xWebsite.
SharePointDsc 1.4.0.0
  • Set-TargetResource of Service Application now also removes all associated proxies
  • Fixed issue with all SPServiceApplication for OS not in En-Us language, add GetType().FullName method in:
    • SPAccessServiceApp
    • SPAppManagementServiceApp
    • SPBCSServiceApp
    • SPExcelServiceApp
    • SPManagedMetaDataServiceApp
    • SPPerformancePointServiceApp
    • SPSearchServiceApp
    • SPSearchCrawlRule
    • SPSecureStoreServiceApp
    • SPSubscriptionSettingsServiceApp
    • SPUsageApplication
    • SPUserProfileServiceApp
    • SPVisioServiceApp
    • SPWordAutomationServiceApp
    • SPWorkManagementServiceApp
  • Fixed issue with SPServiceInstance for OS not in En-Us language, add GetType().Name method in:
    • SPDistributedCacheService
    • SPUserProfileSyncService
  • Fixed issue with SPInstallLanguagePack to install before farm creation
  • Fixed issue with mounting SPContentDatabase
  • Fixed issue with SPShellAdmin and Content Database method
  • Fixed issue with SPServiceInstance (Set-TargetResource) for OS not in En-Us language
  • Added .Net 4.6 support check to SPInstall and SPInstallPrereqs
  • Improved code styling
  • SPVisioServiceapplication now creates proxy and lets you specify a name for it
  • New resources: SPAppStoreSettings
  • Fixed bug with SPInstallPrereqs to allow minor version changes to prereqs for SP2016
  • Refactored unit tests to consolidate and streamline test approaches
  • Updated SPExcelServiceApp resource to add support for trusted file locations and most other properties of the service app
  • Added support to SPMetadataServiceApp to allow changing content type hub URL on existing service apps
  • Fixed a bug that would cause SPSearchResultSource to throw exceptions when the enterprise search centre URL has not been set
  • Updated documentation of SPProductUpdate to reflect the required install order of product updates
SystemLocaleDsc 1.1.0.0
  • Fix AppVeyor.yml build process.
  • Convert Get-TargetResource to output IsSingleInstance value passed in as parameter.
PSDscResources 2.0.0.0, 2.1.0.0

2.0.0.0
  • Initial release

2.1.0.0

  • Added WindowsFeature

How to Find Released DSC Resource Modules

To see a list of all released DSC Resource Kit modules, go to the PowerShell Gallery and display all modules tagged as DSCResourceKit. You can also enter a module’s name in the search box in the upper right corner of the PowerShell Gallery to find a specific module.

Of course, you can also always use PowerShellGet (available in WMF 5.0) to find modules with DSC Resources:

# To list all modules that are part of the DSC Resource Kit
Find-Module -Tag DSCResourceKit

# To list all DSC resources from all sources
Find-DscResource

To find a specific module, go directly to its URL on the PowerShell Gallery:

http://www.powershellgallery.com/packages/< module name >

For example:

http://www.powershellgallery.com/packages/xWebAdministration

How to Install DSC Resource Modules From the PowerShell Gallery

We recommend that you use PowerShellGet to install DSC resource modules:

Install-Module -Name < module name >

For example:

Install-Module -Name xWebAdministration

To update all previously installed modules at once, open an elevated PowerShell prompt and use this command:

Update-Module

After installing modules, you can discover all DSC resources available to your local system with this command:

Get-DscResource

How to Find DSC Resource Modules on GitHub

All resource modules in the DSC Resource Kit are available open-source on GitHub.
You can see the most recent state of a resource module by visiting its GitHub page at:

https://github.com/PowerShell/< module name >

For example, for the xCertificate module, go to:

https://github.com/PowerShell/xCertificate.

All DSC modules are also listed as submodules of the DscResources repository in the xDscResources folder.

How to Contribute

You are more than welcome to contribute to the development of the DSC Resource Kit! There are several different ways you can help. You can create new DSC resources or modules, add test automation, improve documentation, fix existing issues, or open new ones.
See our contributing guide for more info on how to become a DSC Resource Kit contributor.

If you would like to help, please take a look at the list of open issues for the DscResources repository.
You can also check issues for specific resource modules by going to:

https://github.com/PowerShell/< module name >/issues

For example:

https://github.com/PowerShell/xPSDesiredStateConfiguration/issues

Your help in developing the DSC Resource Kit is invaluable to us!

Questions, comments?

If you’re looking into using PowerShell DSC, have questions or issues with a current resource, or would like a new resource, let us know in the comments below, on Twitter (@PowerShell_Team), or by creating an issue on GitHub.

Katie Keim
Software Engineer
PowerShell Team
@katiedsc (Twitter)
@kwirkykat (GitHub)

The mystery of dotnet watch and 'Microsoft.NETCore.App', version '1.1.0-preview1-001100-00' was not found

dotnet watch says

WARNING: This post is full of internal technical stuff. I think it's interesting and useful. You may not.

I had an interesting Error/Warning happen when showing some folks .NET Core recently and I thought I'd deconstruct it here for you, Dear Reader, because it's somewhat multi-layered and it'll likely help you. It's not just about Core, but also NuGet, Versioning, Package Management in general, version pinning, "Tools" in .NET Core, as well as how .NET Runtimes work and version. That's a lot! All that from this little warning. Let's see what's up.

First, let's say you have .NET Core installed. You likely got it from http://dot.net and you have either 1.0.0 or the 1.0.1 update.

Then say you have a website, or any app at all. I made one with "dotnet new -t web" in an empty folder.

I added "dotnet watch" as a tool in the project.json like this. NOTE the "1.0.0-*" there.

"tools": {
"Microsoft.DotNet.Watcher.Tools": "1.0.0-*"
}

dotnet watch is nice because it watches the source code underneath it while running your app. If you change your code files, dotnet-watch will notice, and exit out, then launch "dotnet run" (or whatever, even test, etc) and your app will pick up the changes. It's a nice developer convenience.

I tested this out last weekend and it worked great. I went to show some folks on the Monday of that same week and got this error when I typed "dotnet watch."

C:\Users\scott\Desktop\foofoo>dotnet watch
The specified framework 'Microsoft.NETCore.App', version '1.1.0-preview1-001100-00' was not found.
- Check application dependencies and target a framework version installed at:
C:\Program Files\dotnet\shared\Microsoft.NETCore.App
- The following versions are installed:
1.0.0
1.0.1
- Alternatively, install the framework version '1.1.0-preview1-001100-00'.

Let's really look at this. It says "the specified framework...1.1.0" was not found. That's weird, I'm not using that one. I check my project.json and I see:

"Microsoft.NETCore.App": {
"version": "1.0.1",
"type": "platform"
},

So who wants 1.1.0? I typed "dotnet watch." Can I "dotnet run?"

C:\Users\scott\Desktop\foofoo>dotnet run
Project foofoo (.NETCoreApp,Version=v1.0) will be compiled because expected outputs are missing
Compiling foofoo for .NETCoreApp,Version=v1.0
Hosting environment: Production
Content root path: C:\Users\scott\Desktop\foofoo
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.

Hey, my app runs fine. But if I "dotnet watch" I get an error.

Remember that dotnet watch and other "tools" like it are not dependencies per se, but helpful sidecar apps. Tools can watch, squish css and js, precompile views, and do general administrivia that isn't appropriate at runtime.

It seems it's dotnet watch that wants something I don't have.

Now, I could go install the framework 1.1.0 that it's asking for, and the error would disappear, but would I know why? That would mean dotnet watch would use .NET Core 1.1.0 but my app (dotnet run) would use 1.0.1. That's likely fine, but is it intentional? Is it deterministic and what I wanted?

I'll open my generated project.lock.json. That's the calculated tree of what we ended up with after dotnet restore. It's a big calculated file but I can easily search it. I see two things. The internal details aren't interesting but version strings are.

First, I search for "dotnet.watcher" and I see this:

"projectFileToolGroups": {
".NETCoreApp,Version=v1.0": [
"Microsoft.AspNetCore.Razor.Tools >= 1.0.0-preview2-final",
"Microsoft.AspNetCore.Server.IISIntegration.Tools >= 1.0.0-preview2-final",
"Microsoft.DotNet.Watcher.Tools >= 1.0.0-*",
"Microsoft.EntityFrameworkCore.Tools >= 1.0.0-preview2-final",
"Microsoft.Extensions.SecretManager.Tools >= 1.0.0-preview2-final",
"Microsoft.VisualStudio.Web.CodeGeneration.Tools >= 1.0.0-preview2-final"
]

Ah, that's a reminder that I asked for 1.0.0-*. I asked for STAR for dotnet-watch but everything else was very clear. They were specific versions. I said "I don't care about the stuff after 1.0.0 for watch, gimme whatever's good."

It seems that a new version of dotnet-watch and other tools came out between the weekend and my demo.

Search more in project.lock.json and I can see what all it asked for...I can see my dotnet-watch's dependency tree.

"tools": {
".NETCoreApp,Version=v1.0": {
"Microsoft.DotNet.Watcher.Tools/1.0.0-preview3-final": {
"type": "package",
"dependencies": {
"Microsoft.DotNet.Cli.Utils": "1.0.0-preview2-003121",
"Microsoft.Extensions.CommandLineUtils": "1.1.0-preview1-final",
"Microsoft.Extensions.Logging": "1.1.0-preview1-final",
"Microsoft.Extensions.Logging.Console": "1.1.0-preview1-final",
"Microsoft.NETCore.App": "1.1.0-preview1-001100-00"
},

Hey now. I said "1.0.0-*" and I ended up with "1.0.0-preview3-final"

Looks like dotnet-watch is trying to bring in a whole new .NET Core. It wants 1.1.0. This new dotnet-watch is part of the wave of new preview stuff from 1.1.0.

But I want to stay on the released and supported "LTS" (long term support) stuff, not the new fancy builds.

I shouldn't have used 1.0.0-* as it was ambiguous. That might be great for my local versions or when I intend to chase the latest but not in this case.

I updated my version in my project.json to this and did a restore.

"Microsoft.DotNet.Watcher.Tools": "1.0.0-preview2-final",

Now I can reliably run dotnet restore and get what I want, and both dotnet watch and dotnet run use the same underlying runtime.


Sponsor: Big thanks to Telerik! They recently launched their UI toolset for ASP.NET Core so feel free to check it out or learn more about ASP.NET Core development in their recent whitepaper.


© 2016 Scott Hanselman. All rights reserved.
     

Azure Blueprint takes ATO processes to the next level


I am pleased to announce the release of the Azure Blueprint for the Department of Defense (DoD). Azure Blueprint recently released documentation to streamline the path for Azure Government customers working with the Federal Risk and Authorization Management Program (FedRAMP) Moderate Baseline to attain Authorizations to Operate (ATO).

Azure Blueprint has expanded to support our DoD customers working in Azure Government to document their customer security responsibilities. The Azure Blueprint Customer Responsibilities Matrix (CRM) and System Security Plan (SSP) template can now be used by DoD mission owners and third-party providers building systems on behalf of DoD customers.

The DoD migration to the cloud has been guided by the Department of Defense Cloud Computing Security Requirements Guide (SRG) Version 1 Release 2. All cloud systems must meet the security standards outlined in the SRG for use by DoD customers. The Cloud Computing SRG breaks down requirements into impact levels, covering specific data classifications that are adequately protected at each level.

As announced on June 23, 2016, Azure Government has been granted a provisional authorization (PA) at the DoD Impact Level 4 for processing of controlled unclassified information (CUI) and mission critical data. This includes export controlled data, protected health information, privacy information, and others (e.g. FOUO, SBU, etc.). Since that announcement, we have been working with DoD customers to help them understand the Azure Government security protections and work through their security responsibilities. Azure Blueprint now provides these DoD customers with a simplified way to understand the scope of their security responsibilities when architecting solutions in Azure.

We look forward to providing DoD L5 Azure Blueprints once we attain our Impact Level 5 PA for the Microsoft Azure Government DoD Regions and expanding our footprint as the most trusted cloud.

For any questions and to access these documents, please e-mail AzureBlueprint@microsoft.com.

We welcome your comments and suggestions to help us continually improve your Azure Government experience. To stay up to date on all things Azure Government, be sure to subscribe to our RSS feed and to receive emails, click “Subscribe by Email!” on the Azure Government Blog. To experience the power of Azure Government for your organization, sign up for an Azure Government Trial.

Tech Tip Thursday: Get started with R Visuals

There's been a lot of Power BI news lately around R Visuals, but if you've never used them before you may not be sure how to get started. This week, Guy in a Cube shows you how to take advantage of the new R Script showcase to explore possibilities and start interacting with your data through R.

easyJet soars into a collaborative digital future with Office 365


Today’s post was written by Chris Brocklesby, chief information officer for easyJet.

In just 20 years, easyJet has become one of Europe’s leading airlines, operating a fleet of 250 aircraft on more than 820 routes across 30 countries. Coming from a retail background, I like the customer-facing aspect of an airline, where we are all about providing great service that makes people happy. And I like the way that IT plays a huge role in supporting our innovative, entrepreneurial corporate culture. In terms of generating revenue, providing great service and improving the lives of the crew, IT is fundamental to our operations. IT is also a way to ensure that we are highly automated, efficient and cost effective. That’s why we are deploying Office 365 as the foundation of a more modern workplace that will empower employees to deliver great customer service—and enable easyJet to navigate the best route through a rapidly evolving aviation industry.

The digitization of the industry and passengers’ expectations began about five years ago. We deliver low fares because our sales are digitized, but frankly, that’s the easy part. We need to look at supporting our passengers through the entire day of travel with digital services that extend our reach into their lives and build brand loyalty. So, enabling people to check where their inbound flight is, to make plans with updated departure times and check how long the queue is at security; to view airport maps to find their gate and figure out which baggage carousel to use at their destination—all these things support our ethos of making travel simple and affordable.

We need to provide employees and crew with the right business tools to follow through with this vision. That’s the role of Office 365. We cannot digitize the customer experience without also digitizing the employee experience; the cloud-based collaboration and communication tools that Office 365 brings to the table will free employees from paper, from offices and from specific devices to work with a greater freedom to serve passengers before, during and after their flights. It will also serve to build a stronger sense of connection between our approximately 9,000 flight crew and our head office in Luton, north of London.

When we can engender stronger feelings of connection within the company through online collaboration tools and an enterprise social network, we anticipate a more engaged and informed crew providing better service to customers. Sharing best practices, recognizing each other’s customer service, as well as accessing corporate news anytime, anywhere will become a regular part of all employees’ workday.

And for management in local offices across Europe, Skype for Business Online is going to transform our company’s approach to meetings. Not surprisingly for an airline, people fly around Europe to attend meetings, but every time they do, we deny ourselves the opportunity of selling that seat to a passenger. As a low-cost airline, we are obsessed about reducing costs, and Skype for Business Online will make a big difference in that regard.

At easyJet, safety and security for passengers and employees is paramount. The same is true for our data. Our previous on-premises email solution wasn’t resilient, but we can meet content security and data usage compliance guidelines more easily with the tools built into Office 365. From easy-to-use services in a modern, connected workplace to enterprise-grade security, our new business productivity cloud services are set to play a significant role in charting the future at easyJet.

—Chris Brocklesby

The post easyJet soars into a collaborative digital future with Office 365 appeared first on Office Blogs.

New capabilities in System Center 2016 Service Management Automation


As you try out the new features in System Center 2016, which became generally available last month, here is a summary of things you can do with System Center 2016 Service Management Automation:

  1. Create a Script type runbook and see how it compares with a Workflow type runbook
  2. Control where your runbook is run
  3. Develop and test SMA runbooks right from PowerShell ISE
  4. Take advantage of the latest PowerShell features in your SMA runbooks

Let’s dig deeper into each of the new features:

Support for Script type runbooks

In System Center 2012 R2 Service Management Automation, runbooks were based on PowerShell Workflows only. We have now added support for native PowerShell Script type runbooks.

Advantages of using PowerShell Script type runbooks, in comparison to Workflow type runbooks:

  • No compilation time when executing runbooks – runbooks start at the scheduled time and not any later
  • Understanding PowerShell Workflows is not necessary to get started with Service Management Automation

However, there are some trade-offs to be made as well:

  • Parallelizing execution is a little more difficult than when using PowerShell Workflows
  • Ability to suspend and resume is lost

You can learn more about how to use Script type runbooks through these blog posts. Overall, with the support for native PowerShell Script type runbooks, you now have the choice to use the type that best suits your needs – PowerShell Workflows or PowerShell scripts.
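
To make the trade-off concrete, below is a minimal sketch (the runbook name and server list are hypothetical, not taken from product documentation) of the same task written first as a Script type runbook and then as a Workflow type runbook. The script version starts immediately with no compilation step, while the workflow version can parallelize iterations and checkpoint its progress:

# Script type runbook: plain PowerShell, no compilation delay, runs top to bottom
param([string[]]$ComputerName = @("server01","server02"))

foreach ($computer in $ComputerName) {
    # Sequential by default; parallelism would require jobs or runspaces
    Test-Connection -ComputerName $computer -Count 1
}

# Workflow type runbook: compiled before it runs, but supports -Parallel and checkpoints
workflow Test-ServerConnectivity {
    param([string[]]$ComputerName)

    foreach -Parallel ($computer in $ComputerName) {
        Test-Connection -ComputerName $computer -Count 1
    }

    # A checkpoint lets the job be suspended and later resumed from this point
    Checkpoint-Workflow
}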

Control where your runbook is run

In System Center 2012 R2 Service Management Automation, runbooks were run on all available runbook workers at random. The idea behind this was to help balance the load between the runbook workers. However, sometimes you might want to target a specific runbook worker for a particular runbook. We have addressed this in System Center 2016 Service Management Automation by giving you the ability to choose which runbook worker a runbook will run on.

You can learn more about how to use this feature through this blog post.

Develop and test SMA runbooks right from PowerShell ISE

In System Center 2012 R2, the only ways to interact with Service Management Automation were through the Windows Azure Pack portal or the SMA PowerShell module. A lot of customers used PowerShell ISE to develop their SMA runbooks and then copied the runbook over into the Windows Azure Pack portal, which was cumbersome. To improve the authoring experience, we have released a PowerShell ISE add-on for Service Management Automation.

With the new add-on, you can now develop and test your runbooks right from PowerShell ISE. You can learn more about the add-on through this blog post.

As of today, you still need to use the Windows Azure Pack portal to manage and monitor your runbook jobs and to work with connection, schedule and module assets. In future updates, we will continue adding functionality to the ISE add-on to completely remove the dependency on the Windows Azure Pack portal.

Take advantage of the latest PowerShell features in your SMA runbooks

Service Management Automation continues to support the latest PowerShell features to meet your automation needs. For System Center 2016, this means support for the new PowerShell 5.1 features. PowerShell 5.1 comes installed with Windows Server 2016 and packs a lot of important new features.

Thank You!

All the new features in System Center 2016 Service Management Automation have been developed from customer and community feedback. We would like to thank all our customers who have taken part in the Technical Preview program or provided feedback through other channels. Thank you, and we look forward to improving the product further.

You can download and try System Center 2016 from here.

5 project management tools that save time, money and energy


5-project-management-tools-1

Steering a project through the stressed and stressful waters of an always-moving organization can be a daunting proposition. These days, the average office worker is burdened by an untold number of deadlines, meetings, emails and tasks—and that means they probably don’t have the time or headspace to dedicate their full talents to you and your project. Luckily, you can help make things easier. To better ensure project success—whether what you’re making is for a client or an internal stakeholder—check out these five project management tools that save time, money and energy.

Gantt charts and scrum boards

In an era of infinite content—armadas of words constantly battling for our limited attention—a simple visual can be refreshing and highly effective. Gantt charts use basic horizontal bars to display project phases and progress. With a quick glance, your teammates will be able to see what work is currently getting done (or at least what work should be getting done) and when their contributions will be needed. They’ll also be able to see how much time each phase is expected to take.

Scrum boards function in a similar manner and work great for projects that are on a less rigid timeline and that involve multiple teams. The board displays different project phases—planning, execution, etc.—and markers for each team. As they complete phases and make progress, teams can move their markers across the board. This allows all involved to see the bigger picture and their place within it.

5-project-management-tools-2

Task assignments

It’s easy to agree to do something, much harder to remember to do it. Especially when every stray message seems to come with a new request. That’s why an online tool for task assignments is so valuable. Those responsible for completing a portion of the project—whether that means designing a logo or crunching the final numbers—can log in and easily see exactly what they need to do. Depending on their account settings, they’ll also receive email notifications about looming deadlines.

On the flipside, project managers can keep tabs on their team and make sure all tasks are being completed on time without having to scroll through long email chains. This helps avoid costly delays and unnecessary frustrations.

Resource allocation

In an agency environment, resource allocation is key. Project managers need an effective tool to schedule and track hours for every member of their team in order to prevent overbooking, missed deadlines and unhappy clients. But this manner of organization need not be limited to the “creative” world. Wouldn’t it be helpful, even in a corporate office, to have a clear picture of team members’ availability?

Ask your team upfront to estimate—to whatever extent is possible—their availability through the course of the project. Have them mark down any days they’ll be out on vacation, days they won’t be able to put their heads down and work due to meetings, and days they have other deadlines. If you have a sense of hourly availability, you’ll be able to build realistic timelines. An online resource allocation tool will make it easy for you and your teammates to track actual hours against estimates and adjust accordingly, allowing for smoother future workflows.

5-project-management-tools-3

Feedback tools

Giving feedback means different things to different people—some offer rigid and prescriptive edit requests, others vague feelings and reactions. Sometimes different stakeholders will present conflicting opinions. All of this makes it difficult for the person receiving the feedback to know how to proceed. Who are they supposed to listen to?

Feedback tools allow stakeholders to vote on and rank different versions of deliverables, making it easier for a group to provide cohesive direction in a fair manner. If a certain individual’s opinion should carry more weight—perhaps they boast significant relevant expertise—you can adjust their vote to count as two.

Out-of-the-box reports

Progress reports and post-mortems are crucial parts of any project—and they don’t need to be a time suck. Rather than spend valuable hours creating a custom report, choose an out-of-the-box option. You’ll be able to input relevant numbers and figures and let the program generate appropriate formatting and data visualizations.

That means you’ll be able to start planning your next project that much sooner.


The post 5 project management tools that save time, money and energy appeared first on Office Blogs.

New enhancements to the #AzureAD PowerShell 2.0 preview. Manage dynamic groups and more!


Howdy folks,

This week we published a really cool update to Azure AD PowerShell v2.0 preview cmdlets. This update gives you some pretty killer new Azure AD functionality. The new thing I’m the most jazzed about is managing dynamic group settings using PowerShell. This was a top request from all you PowerShell folks out there. Previously, you needed to set rules for the creation and population of these groups using the management UX. Not anymore!

And the new AzureADMSGroup cmdlets provide the functionality of Microsoft Graph to create and manage groups, which includes creating Office 365 groups and dynamic groups through PowerShell.

With these cmdlets you can create new dynamic groups, set or update the dynamic group rule, and start and pause dynamic group processing. You can also use other Azure AD cmdlets to orchestrate all this in bulk.

Imagine running a script that automatically creates dynamic groups for all your departments, all your device types, or maybe for all your managers in your directory. Now you can understand why I’m so excited about this!
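
As a rough illustration of that idea, here is a minimal sketch (the department list and naming convention are hypothetical) that loops over a list of departments and creates one dynamic security group per department, using the same New-AzureADMSGroup parameters shown below:

Connect-AzureAD
$departments = @("Marketing","Sales","Engineering")   # hypothetical department list
foreach ($dept in $departments) {
    # One dynamic security group per department, left paused until you are ready to enable processing
    New-AzureADMSGroup -Description "All $dept users" `
        -DisplayName "Dynamic group - $dept" `
        -MailEnabled $false -SecurityEnabled $true `
        -MailNickname ($dept.ToLower()) `
        -GroupTypes "DynamicMembership" `
        -MembershipRule "(user.department -eq ""$dept"")" `
        -MembershipRuleProcessingState "Paused"
}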

Managing dynamic groups through Azure AD PowerShell

Create and manage a dynamic group by using the New-AzureADMSGroup cmdlet. In this example, we’re creating a dynamic security group called “All marketing users”:

New-AzureADMSGroup -Description "Marketing team" -DisplayName "Dynamic groups with all Marketing users" -MailEnabled $false -SecurityEnabled $true -MailNickname "Foo" -GroupTypes "DynamicMembership" -MembershipRule "(user.department -eq ""Marketing"")" -MembershipRuleProcessingState "Paused"

When you execute this cmdlet, the following output is returned:

Id                            : db78a43d-ba08-4eab-8766-07280a4ba580
Description                   : Marketing team
OnPremisesSyncEnabled         :
DisplayName                   : Dynamic groups with all Marketing users
OnPremisesLastSyncDateTime    :
Mail                          :
MailEnabled                   : False
MailNickname                  : Foo
OnPremisesSecurityIdentifier  :
ProxyAddresses                : {}
SecurityEnabled               : True
GroupTypes                    : {DynamicMembership}
MembershipRule                : (user.department -eq "Marketing")
MembershipRuleProcessingState : Paused

Creating an Office 365 group with AzureAD PowerShell

You can also create an Office 365 group using the New-AzureADMSGroup cmdlet. In the example below, we’re creating a new Office 365 group for all the stamp collectors in our organization:

PS C:\Users\rodejo> New-AzureADMSGroup -Description "Stamp Collectors" -DisplayName "Office 365 group for all Stamp Collectors in our org" -MailEnabled $true -SecurityEnabled $true -MailNickname "StampCollectors" -GroupTypes "Unified"

And this is the output the cmdlet call returns:

Id                            : 92e93152-a1a6-4aac-a18a-bfe157e3b319
Description                   : Stamp Collectors
OnPremisesSyncEnabled         :
DisplayName                   : Office 365 group for all Stamp Collectors in our org
OnPremisesLastSyncDateTime    :
Mail                          : StampCollectors3545@drumkit.onmicrosoft.com
MailEnabled                   : True
MailNickname                  : StampCollectors
OnPremisesSecurityIdentifier  :
ProxyAddresses                : {SMTP:StampCollectors3545@drumkit.onmicrosoft.com}
SecurityEnabled               : True
GroupTypes                    : {Unified}
MembershipRule                :
MembershipRuleProcessingState :

Some things to note:

  • The value that you provide for the MailNickName parameter is used to create both the SMTP address and the email address of the group. If the MailNickName is not unique, a four-digit string is added to the SMTP and email addresses to make them unique, like in the example above.
  • The values for SecurityEnabled and MailEnabled can be set, but they are ignored when creating an Office 365 group because these groups are implicitly security enabled and mail enabled when used in Office 365 features.
  • If you want to create a dynamic Office 365 group, you need to specify both DynamicMembership and Unified in the GroupTypes parameter, as in:

Set-AzureADMSGroup -Id c6edea99-12e7-40f9-9508-862193fcb710 -GroupTypes "DynamicMembership","Unified" -MembershipRule "(User.department -eq ""Marketing"")" -MembershipRuleProcessingState "Paused"

Managing the processing state of a group

Sometimes you may want to stop the processing of a dynamic group, for example when you’re importing a large number of new users or reorganizing your group architecture. To do that, use the MembershipRuleProcessingState parameter to switch processing on and off.

To switch it off, use:

Set-AzureADMSGroup -Id 92e93152-a1a6-4aac-a18a-bfe157e3b319 -MembershipRuleProcessingState "Paused"

And to switch it back on, use:

Set-AzureADMSGroup -Id 92e93152-a1a6-4aac-a18a-bfe157e3b319 -MembershipRuleProcessingState "On"

Other release updates

This release also includes a few other changes and new cmdlets.

Cmdlets to revoke a user’s refresh tokens: We got requests from several customers to provide capabilities to revoke a user’s refresh tokens. To address that need, we added these two new cmdlets:

Revoke-AzureADSignedInUserAllRefreshTokens

This cmdlet is used by an admin to invalidate all of their own refresh tokens issued to applications, by resetting the refreshTokensValidFromDateTime user property to the current date and time. It also resets session cookies in the user’s browser. Use this command if you are concerned your account has been compromised, or if you want to get back to a clean state while verifying a sign-in flow.

Revoke-AzureADUserAllRefreshTokens

This cmdlet invalidates all the refresh tokens (as well as session cookies in the user’s browser) of the user specified by the admin invoking the command. This is accomplished by resetting the refreshTokensValidFromDateTime user property to the current date and time.
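
As a rough usage sketch (assuming the cmdlets accept an -ObjectId parameter and using a hypothetical user), you could combine them with Get-AzureADUser like this:

# Revoke all refresh tokens for a specific user (hypothetical UPN)
$user = Get-AzureADUser -ObjectId "johnr@contoso.com"
Revoke-AzureADUserAllRefreshTokens -ObjectId $user.ObjectId

# Revoke your own refresh tokens as the signed-in admin
Revoke-AzureADSignedInUserAllRefreshTokens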

Connect-AzureAD no longer requires -Force: We learned from your feedback that you’d rather not get a confirmation prompt from the Connect-AzureAD cmdlet, so we removed the requirement to specify the -Force parameter to suppress confirmation prompting.

Naming convention change for cmdlets that call Microsoft Graph: In a previous blog post we mentioned how we’re aligning AzureAD PowerShell V2 functionality with Graph API functionality. Since the AzureAD PowerShell cmdlets expose both the Azure AD Graph API and the Microsoft Graph, we decided to make a small change to the naming convention of our cmdlets. Moving forward, all cmdlets that call the Microsoft Graph will have MS in their cmdlet names, as in Get-AzureADMSGroup. The cmdlets that call the Azure AD Graph will not change, so there is also a Get-AzureADGroup cmdlet. We’ll be implementing these name changes in an upcoming release and will share all the details then.

Getting started

To get started using the New-AzureADMSGroup cmdlet, take a look at a short video we made detailing how to manage dynamic groups using PowerShell.

Dynamic Membership for Groups requires an Azure AD Premium license, so if you don’t have one already, make sure to sign up for a free trial license.

I hope you’ll find these new capabilities useful! And as always, we would love to receive any feedback or suggestions you have.

Best regards,

Alex Simons (Twitter: @Alex_A_Simons)

Director of Program Management

Microsoft Identity Division

Our school fell in love at first sight!


Today’s post was written by Dr. Ramona Best, principal of the Coulter Grove Intermediate School.

Do you believe in love at first sight? I do, because my teachers and administrators at Coulter Grove Intermediate School in Maryville, Tennessee, have experienced it firsthand! Our story isn’t about an immediate connection between two people (although we are a close-knit family, and there is certainly love and caring that exists in our building). Instead, our story is about an immediate connection with a digital tool called OneNote that fulfilled a need and then continued to grow with us so much that we have become inseparable! Now, as our relationship has continued, we have built trust and reliance on this tool. We love it so much we want to share our love story with the world!

Here’s how our school fell in love with Microsoft OneNote!

onenote-fell-in-love-image-1

The Coulter Grove Intermediate School and OneNote love story is set at the beginning of the school year in late July of 2015. I am sure you are familiar with the excitement and anticipation that is a part of each annual startup at schools. There is so much to do and so little time! But we had an additional challenge this year. It was our first year in a full 1:1 digital environment. Sure, we had done the legwork leading up to this challenge. We had piloted laptops with early adopter teachers’ classrooms the previous year, utilized a Learning Management System (LMS) purchased by the district, collaborated and worked to build digital content and honed our digital skills. We had explored digital tools and were each on our own growth path and pace toward limitless learning for our students. But as with any journey, our path had not been smooth, and we had to reroute along the way. As administrators, we were fully embedded in this journey with our teachers, experiencing the bumps and bruises as we traveled and picking ourselves up to ride another day.

Some of our bumps in the road during the previous year had involved finding a way to organize content so students could easily access curriculum in a timely and efficient manner. We found ourselves and our students constantly drilling down folder by folder and restating and redirecting to locate saved content. Also, finding and giving feedback and grading student digital work was imperfect and laborious at best. Additionally, as we were building our content digitally, we worried that our students without an internet connection at home would not be able to access content and complete assignments outside of the school day. Would the power of a digital tool at home be lost for our students who needed access the most? Sure, there were plans in place to help, including community partnerships for Wi-Fi access and hotspots available for checkout for home use. But we wondered, would this work? How could we help our students stay organized, access content and submit quality digital work? We and our students loved our tried-and-true interactive notebooks full of notes, diagrams and reflections. What digital tools would work best in our blended learning environment?

Skip back with me to the spring of 2015. Our administrative team attended a regional technology administrator’s conference hosted by our state affiliate of ISTE. We were on a mission to become digital administrators! Two of us signed up to attend a session on “OneNote for Administrators” led by my longtime friend and colleague, Jill Pierce. I had been using OneNote for almost ten years as a personal productivity tool and loved it but had just been using it “my way” and had not kept up with the changes and updates. It was at this session that our OneNote plan developed and that I rekindled, renewed and rebuilt my relationship with OneNote. What I learned about the OneNote Staff and Class Notebooks was a game changer. Our plan was simple. We would use this tool as administrators during the 2015-2016 school year for our staff handbook, for teacher collaboration, and have the teachers experience it as end users.

onenote-fell-in-love-4

We thought a few teachers might choose to pilot OneNote during the school year. We knew that it would be a tool that would support our staff as well as a new tool for our classrooms. But we worried that we could overwhelm our teachers if we moved too fast. They already had so much on their plates with the changes and new products. After all, we had a LMS, and everyone had been trained and had built content for a year within that interface. So, we would model the use of this tool, give our teachers the experience as an end user, and they could choose to be an end user or pick it up and use it as a part of their instruction and content development. Perhaps they would be ready for OneNote Class Notebooks in 2016.

onenote-fell-in-love-5

On July 21, 2015, we held our first day of teacher in-service for the year. Our admin team members were bright-eyed and bushy-tailed and ready to start the year off with a bang. We had built our professional development schedule to ensure success for our teachers. Within our schedule, I would kick off the year with an inspirational message and share our beautiful OneNote Staff Notebooks. We would model the use and spout its praises: automatic saving, everything opening at once and offline access on various devices. We had a PD time slot on Friday morning for personalized learning that we would build based on our teachers’ needs assessment. As I closed that first gathering of the year, themed around a growth mindset and with content delivered via OneNote, I felt great about the year. We could do this.

onenote-fell-in-love-6

As I dismissed our teachers for a break before our classroom and team collaboration time, instead of breaking, a handful of our teachers gathered and bombarded us with questions. The number one statement and question was, “I need this for my students. How can I get it now?” We promised a “how to” session for Friday and counted it a success. Next on the agenda was our Leadership team meeting. An agenda was set but, in the end, was shortened because OneNote became the topic of discussion. Yes, I was using OneNote for our CGIS Leadership Team Notebook for the year and had embedded content and agenda templates for meetings and shared the notebook with team members.

By the end of the meeting, we knew. It was love at first sight for many of our teachers. On Friday, our entire teaching staff chose to attend the optional OneNote training session led by Mrs. Deana Bishop and, by the end of the first nine weeks of the school year, 50 percent of our teachers had OneNote Class Notebooks set up and were using them with students on a regular basis. By the end of the year, that number had grown to 70 percent. At the beginning of this school year, it was practically everyone. Today, it is the primary tool school-wide for digital content for students and staff. Teachers use OneNote consistently, including special needs classrooms and Encore classes.

onenote-fell-in-love-7

Why did we fall in love with OneNote? That answer is easy: It helps our kids. It helps them stay organized. It isn’t internet access-dependent, leveling the playing field for all our students. The old excuse of “The dog ate my homework” is no more. (We almost had a “dog ate the laptop” story, however!) Why do we keep loving OneNote? Have you seen Learning Tools? Enough said! Our admin team feels like the best “matchmakers” in the world, and we are living happily ever after!

—Dr. Ramona Best

The post Our school fell in love at first sight! appeared first on Office Blogs.


Announcing the Winners – Women’s Health Risk Assessment Competition


This post is authored by Hai Ning, Principal Program Manager at Microsoft.

We are excited to announce the winners of the Women’s Health Risk Assessment competition, a contest that we launched on the Cortana Intelligence Competition Platform back in July this year.

As we shared in an earlier blog entry, according to a World Health Organization report, 820,000 women and men aged between 15 and 24 living in developing countries were newly infected with HIV in 2011, and over 60% of this population were women. The Cortana Intelligence team at Microsoft saw an opportunity to get involved. Specifically, we decided to host a competition to build models in Azure Machine Learning that would categorize young women aged between 15 and 30 years and from one of nine underdeveloped regions into a risk segment based on various data points collected from these subjects. This would allow healthcare providers to offer them appropriate education and training programs to reduce their reproductive health risks, including for HIV infections.


This dataset was graciously donated by the Bill & Melinda Gates Foundation, and it contains roughly 9,000 samples collected through a 2015 survey conducted at clinics in 9 underdeveloped regions around the world. The data challenge is essentially a multi-class classification problem.

In a span of three months, we received 2,392 entries from 493 contest participants. The top 10 entries on the public leaderboard were only separated by 0.38% in accuracy. One participant, Rui Quintino, even created an informative Power BI dashboard (picture below) to showcase the data and the participation.

The competition officially ended on October 1st and, after validating all contest entries and results, we are very happy to announce the Top 3 competition prize winners, along with their published ML experiments! Here they are:


We would like to extend our hearty congratulations to Ion, Nailong and David, and hope you will check out their published winning entries. Additionally, we would also like to share our deep appreciation for all other participants who took on this challenge and made the contest a success.

When you look at the winning entries, a common thread you will see is that they all use XGBoost R package to solve this problem. Ion and David trained the XGBoost model in their local environment and brought the serialized model into Azure ML for scoring, while Nailong used the XGBoost package that’s built into the latest R runtime environment, Microsoft R Open v.3.2.2, in the Azure ML Execute R Script module for training and scoring. This is a great testament to both the popularity and power of XGBoost and to the extensibility of Azure ML Studio. In the end, Ion and Nailong had identical private scores (except Ion submitted his winning entry a few weeks earlier), and David was only 0.07% behind – quite the photo finish for a very close race!

We also received positive feedback about the Competition platform and Azure ML Studio during the contest. One contestant wrote to us, saying, “Azure ML Studio is a platform I am very fond of, since it offers extremely useful machine learning functionality in a rather easy and straight-forward way, plus integration with other Azure services is remarkably easy.” Another wrote, “Microsoft is changing the game in making machine learning so easy and fast!”

We hope everyone enjoyed the contest. We are working with the Gates Foundation to put these winning models to work in the real world, so we can make a difference in the lives of women in these HIV-affected regions. It’s our privilege to work with the ML community to find solutions such as this with potentially outsized impact on the lives of people around the world – thank you!

Hai, on behalf of the Cortana Intelligence team.
@peacesea68

Configure rich document collaboration using Exchange Server 2016, Office Online Server (OOS) and SharePoint Server 2016


This document explains the configuration steps needed to get rich document collaboration working between Exchange Server 2016, SharePoint Server 2016, and Office Online Server in your on-premises environment.

Please use this link if you’re looking for configuration steps for Exchange Server 2016 On-Premises and SharePoint Online

Introduction

When used together, Exchange Server 2016, SharePoint Server 2016, and Office Online Server provide a rich set of document collaboration features.

For example, rather than directly attaching a document to an e-mail message, you may now send a link to the document stored in OneDrive for Business (ODB). Outlook and Outlook on the Web (the new name for OWA) will still display the file as if it were directly attached to the message, like a classic attachment would be, and allow people to work with the file as they would with a classic attachment. Additionally, many people will be able to read and edit the same file at the same time while it is stored in OneDrive for Business (ODB).

You can see a short demo of what this collaboration can look like right here.

Pre-requisites

The solution requires you have the following set up On-Premises:

Configuration

The basic setup for these rich document collaboration features involves configuring OneDrive for Business (ODB) in the SharePoint 2016 farm and establishing a server-to-server trust (also referred to as S2S or OAuth) between SharePoint Server 2016 and Exchange Server 2016. Once completed, users will have the ability to attach ODB-based documents to email messages. Installing and configuring Office Online Server adds device-independent side-by-side viewing as well as edit & reply functionality in Outlook on the Web.

Note that editing documents is a premium feature of OOS and requires appropriate licenses!

Office Online Server

Install OOS and create a new OOS farm. Make sure the farm URL is accessible from the Internet if you want users to be able to view and possibly edit documents via Outlook on the Web from outside of the corporate network:

Example:

For an OOS farm that is going to use the same internal and external FQDN, with editing enabled:

New-OfficeWebAppsFarm -InternalURL "https://oos.contoso.com" -ExternalURL "https://oos.contoso.com" -CertificateName "Unified Certificate" -EditingEnabled

For an OOS farm that is going to use different internal and external FQDNs, with editing enabled:

New-OfficeWebAppsFarm -InternalURL "https://internaloos.contoso.com" -ExternalURL "https://externaloos.contoso.com" -CertificateName "Unified Certificate" -EditingEnabled
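
After creating the farm, a quick sanity check (a sketch, not an official step) is to read the configuration back on the OOS server and confirm the URLs and the editing flag:

# Confirm the farm configuration that was just created
Get-OfficeWebAppsFarm | Format-List InternalURL,ExternalURL,EditingEnabled
# The discovery endpoint should also respond in a browser, e.g. https://oos.contoso.com/hosting/discovery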

SharePoint Server 2016

In order to leverage the OneDrive for Business-based attachments on-premises, users must have a OneDrive for Business site hosted by SharePoint Server 2016 on-premises.

Follow the steps from here if the MySite Host (which gives you OneDrive for Business) is not already configured.

Additionally, to enable integration of Office Online Server for document previewing and online editing, WOPI bindings must be created in the SharePoint farm.

  • WOPI Bindings – WOPI bindings (or Web Application Open Platform Interface bindings) define related applications and available actions for a file extension. The New-SPWOPIBinding cmdlet is used to create these bindings between OOS and SharePoint. As with the other configurations, HTTPS is encouraged for production use, but non-production environments can be configured to communicate without SSL/TLS security by including the -AllowHTTP switch on the cmdlet: New-SPWOPIBinding -ServerName oos.contoso.com
  • S2S/OAuth Trust and Service Permissions – SharePoint Server provides a set of commands to configure server-to-server authentication, create app principals and configure the permissions needed to make this level of collaboration work.

The commands can be put together in a script to make life easy. A sample script for performing this configuration is provided here.

Usage:

  • Download the script
  • Save this script as a .ps1 file on your SharePoint Server 2016, for example ‘Config-SPSOAuth.ps1’.
  • Open the SharePoint Management Shell and execute the script.
  • The script will prompt for:
    • An ExchangeServer URL – the hostname provided to access Exchange Server 2016.
    • A SharePoint MySite Host – URL of the SharePoint website hosting the MySite collection.

Example:

.\Config-SPSOAuth.ps1 -ExchangeServer mail.contoso.com -MySiteHostUrl https://sp01.contoso.com/

Exchange Server 2016

The user’s mailbox must be hosted on an Exchange Server 2016 server on-premises to enable the document collaboration functionality. There are a few settings to configure on Exchange Server to enable the full experience.

  • OOS Endpoint – Configuring the OOS endpoint in Exchange enables preview options for file attachments, as well as the edit and reply functionality. The OOS endpoint can be set in two locations – the Organization level, and at the Mailbox Server level. The Organization level is used to enable a global configuration for all servers with a single setting. This is useful for a single server, or single location deployment. It also serves as a fallback/failsafe when the endpoint configured at the mailbox server level is unavailable. The Mailbox Server level allows administrators to distribute client requests to multiple OOS servers. This can be done to balance load, or when building geographically dispersed deployments.

Set-OrganizationConfig -WacDiscoveryEndpoint https://oos.contoso.com/hosting/discovery
Set-MailboxServer exch.contoso.com -WacDiscoveryEndpoint https://oos.contoso.com/hosting/discovery

If you have Exchange 2013 servers in your organization, do not configure an OOS endpoint at the organization level. Doing so will direct Exchange 2013 servers to use OOS, which is not supported.

  • My Site Host URL – Exchange must know the My Site Host URL to enable ODB-based attachments. This can be set in two locations: the OWA Virtual Directory, and through an OWA Mailbox Policy. The preferred approach for setting the My Site Host URL is through an OWA Mailbox Policy. It is recommended for all environment configurations, but it is a requirement when running an Exchange environment with a mixture of Exchange 2016 and Exchange 2013 servers. Mailbox policies allow features to be enabled selectively for users or groups. Each organization will have at least a Default policy, which can be assigned to all users. Additional policies can be created using the New-OWAMailboxPolicy cmdlet. The OWA Virtual Directory can only be used to set the My Site Host URL when Exchange 2016 is the only version of Exchange that frontends client access traffic.

Example 1:

Creating a new policy for My Site Host access:

New-OwaMailboxPolicy -Name ODBPolicy
Set-OwaMailboxPolicy -Name ODBPolicy -InternalSPMySiteHostURL https://sp01.contoso.com -ExternalSPMySiteHostURL https://sp01.contoso.com

Finally, assign the policy to mailboxes:

Set-CASMailbox JohnR@contoso.com -OwaMailboxPolicy ODBPolicy

Example 2:

In this example, only users connecting to the server ‘Exch’ need to be enabled for document collaboration:

Get-OwaVirtualDirectory -Server exch.contoso.com -ADPropertiesOnly | Set-OwaVirtualDirectory -InternalSPMySiteHostURL https://my.contoso.com -ExternalSPMySiteHostURL https://my.contoso.com

This configuration is useful in scenarios where only specific servers are going to frontend the Outlook on the Web traffic.

  • S2S/OAuth Trust and Service Permissions – Enable secure communication between the SharePoint 2016 and Exchange 2016 servers. Production environments should have traffic to both Exchange and SharePoint encrypted with HTTPS. Additionally, neither server should receive a certificate error when communicating with the other, or else the integration will fail. The Exchange half of the trust is configured via a script included with the Exchange 2016 installation binaries. The script can be found in the scripts directory, which by default is located at “C:\Program Files\Microsoft\Exchange Server\V15\scripts” (your installation path may vary based on your installation choices). This location is referenced by the $ExScripts variable within the Exchange Management Shell.

& $ExScripts\Configure-EnterprisePartnerApplication.ps1 -ApplicationType Sharepoint -AuthMetadataUrl https://sp01.contoso.com/_layouts/15/metadata/json/1
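
As an optional sanity check (a sketch, not part of the documented steps), you can confirm from the Exchange Management Shell that the SharePoint partner application was registered and is enabled:

# List the partner applications Exchange now trusts; SharePoint should appear here
Get-PartnerApplication | Format-List Name,Enabled,AuthMetadataUrl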

Limitations

These are currently some limitations we hope to address in the future. Please be aware of what they are for the time being:

  • The Outlook 2016 client can only interact with OneDrive for Business attachments in Office 365; it cannot connect to on-premises OneDrive for Business attachments. This limitation does not apply to Outlook on the Web 2016.
  • For On-Premises deployments, only internal recipients (mailboxes) in the same organization as the sender can be granted permissions on the OneDrive for Business document. The sender is informed via a separate email if the automatic permission process fails. This means you cannot send ODB attachments to users outside of your on-premises organization.
  • OneDrive for Business must be provisioned and initialized (the user has logged in at least once) for both the sender and the recipient. Without both the sender and recipient being provisioned and initialized, the side-by-side document preview will not work for the recipient.

I wanted to thank Neil Hodgkinson, Jon Frick, Brian Day and Jason Haak for their help in putting this together!

Bhalchandra Atre

Introducing Unified Update Platform (UUP)


We’ve updated over 400 million devices running Windows 10 to date and release new builds to Windows Insiders nearly every week. That is pretty incredible if you think about where we were just 2 years ago. But we know we can do even better! Our customers have told us they would like updates to be more seamless, that they’d like more control over the timing of when updates are installed, that they’d like updating to require less local processing and thus improve battery life, and that they’d like download sizes to be reduced. We’re working on all of the above. In the Windows 10 Anniversary Update, we added active hours and improved the control capabilities for our customers. In the next Windows 10 update, we’ll be improving that and more. Today, we are ready to roll out to our Windows Insiders an improvement that works across PC, tablet, phone, IoT, and HoloLens. We are announcing the next generation of our delivery technologies incorporated into our latest Insider builds called the Unified Update Platform (UUP).

One of the biggest community and customer benefits of UUP is the reduction you’ll see in download size on PCs. We have converged technologies in our build and publishing systems to enable differential downloads for all devices built on the Mobile and PC OS. A differential download package contains only the changes that have been made since the last time you updated your device, rather than a full build. As we roll out UUP, this will eventually be impactful for PCs, where users can expect their download size to decrease by approximately 35% when going from one major update of Windows to another. We’re working on this now with the goal of supporting this for feature updates after the Windows 10 Creators Update; Insiders will see this sooner.

We have also revamped how devices check for updates, making them more efficient. As we move to UUP, we are reducing the update data sent to client devices as well as the amount of processing we do on devices; this is especially important for devices built on the Mobile OS. Using UUP, when your device checks for updates, the Windows Update service will evaluate which updates are needed by a given device. The Windows Update service then returns these updates to the device for download and install. Because more processing is being done by the service, this will lead to faster checks for updates. It’s important to note that with UUP, nothing will look or behave differently on the surface; UUP is all underlying platform and service optimization that happens behind the scenes.

We’ve also taken concepts that existed in the PC world and extended them to Mobile. As you may have noticed in the past, PC flights update to the latest build in one operation, regardless of what base build you are currently running, yet that’s not how it worked for the Mobile OS. On your phone, we would sometimes require you to install two hops (two updates) to get current. With UUP, we now have logic in the client that can automatically fall back to what we call a “canonical” build, allowing you to update your phone in one hop, just like the PC.

We’re excited to start using UUP to release new builds to Windows Insiders. We plan to roll out UUP in stages – starting today for Mobile devices. We expect to start using UUP for PC Insider builds later this year and then IoT and HoloLens shortly after. Our team is excited to begin publishing Mobile builds using UUP and seeing the results of a lot of hard work in unifying our update publishing platform for Windows.

Thanks,
Bill

Announcing Windows 10 Insider Preview Build 14959 for Mobile and PC


Hello Windows Insiders!

Today we are excited to be releasing Windows 10 Insider Preview Build 14959 for Mobile and PC to Windows Insiders in the Fast ring.

The Windows 10 Creators Update

Last week, we held an event in New York to share our aspiration to empower a new wave of creativity with the Windows 10 Creators Update and new devices such as the Surface Studio and Surface Dial. You can catch up on all the details of the things we announced at the event by checking out this blog post from Terry Myerson. As I mentioned last week, Windows is an iceberg: the features that people “see” are quite a small percentage of the engineering work that we do to enable new UI to be visible. Windows Insiders running the latest builds have already been trying out the Creators Update – including last week’s build (Build 14955). The new and exciting visible features you saw shown off at the event will start rolling out in builds in the coming weeks. Features such as the Paint 3D Preview are available for Insiders to try right now! We’re excited to get more of the new Creators Update features into the hands of Insiders in the next couple of months.

What’s new in Build 14959

Unified Update Platform: This build for Mobile is being published using our new update publishing system called Unified Update Platform (UUP). For more information on UUP – check out this blog post from Bill.

Controlling the Display Scaling of your Virtual Machines (PC):  We’ve heard your feedback that Hyper-V Virtual Machines sometimes aren’t scaled as you’d expect, so we’ve added a new Zoom option in the View menu, where you can override the default scaling and set it to 100, 125, 150 or 200 – whichever matches your preference. Along the way, we also fixed an issue where certain VMs wouldn’t display the remote desktop connection bar after entering full screen mode. We are still refining the experience so there might be some rough edges. For example, although we added zoom levels to handle high DPI more gracefully, when zooming you won’t be able to see the VM’s whole screen without scrolling.

Other improvements and fixes for PC

  • We fixed an issue for Insiders resulting in the automatic brightness setting unexpectedly being turned off after upgrading. In doing so, we’ve re-enabled automatic brightness adjust for users that have never changed their auto-brightness setting. If you have already manually configured your auto-brightness setting, then this fix will not affect you. If you would like to enable or disable automatic brightness adjustment, please go to Settings > System > Display, where you can adjust your preferences.
  • We fixed an issue Insiders on domain connected PCs may have experienced where login might fail when the computer was disconnected from its domain network.
  • We fixed an issue resulting in certain apps, such as Outlook Mail and Calendar, failing to update for some Insiders with the error code 0x800700B7.
  • We fixed an issue for Insiders with certain device models where ejecting an SD card might result in a system crash.
  • We fixed an issue where disliking one of the Spotlight lock screen images would show the new image immediately, followed by a transition animation from the previous image to the new image.
  • We fixed an issue where launching an app from another app while in Tablet mode no longer launched it side by side, and instead launched it fullscreen (for example, when launching a web link from the MSN News app).

Other improvements and fixes for Mobile

  • The data usage page (Settings > Network & wireless > Data usage) has been updated with performance and UI improvements.
  • We fixed issues preventing Insiders from adding cards to Wallet and paying using tap to pay.
  • We fixed an issue where Start could unexpectedly be closed from the task switcher.
  • We fixed an issue resulting in certain options for the default calling app (Settings > System > Phone > Default apps) being unexpectedly missing.
  • We fixed an issue causing apps that play media in the background, such as Groove Music, to stop when Battery Saver turns on.
  • We fixed an issue Insiders may have experienced where the phone would get into a state where copy/paste wouldn’t work until the device had been restarted.
  • We fixed an issue where Settings might hang after unchecking “Let apps automatically use this VPN connection” in VPN Settings.

Known issues for PC

  • If you have a 3rd-party antivirus product installed on your PC, your PC might not be able to complete the update to this build and will roll back to the previous build.
  • We’re aware of an issue where Internet Explorer may crash a few seconds after launch and while in use.

Known issues for Mobile

  • If you used the previous ‘date change’ workaround to update to Build 14951 or Build 14955: Please don’t use it any longer! The Microsoft account (MSA) ticket on your device needs to expire and then you’ll be offered today’s build. If you changed your date by 30 years… you’ll want to do a device reset.
  • You will be unable to install additional languages, keyboards, and speech packs on your phone for the next few weeks. If you have existing languages, keyboards, and speech packs installed – they will carry over when you update to new builds. You just can’t install any new ones. If you do a hard reset of your phone on these builds – you will also be unable to install additional languages, keyboards, and speech packs. You can use Windows Device Recovery Tool to go back to Windows Phone 8.1 or Windows 10 Mobile, install any languages, keyboards, and speech packs you need and then update to the latest build in the Fast ring as a workaround.

November 2016 Bug Bash begins next week!

We are moving the Bug Bash start date up to Monday, November 7th at 12:01 AM PST. We wanted to give you, Windows Insiders, and our Windows Engineers the same dates for finding bugs (in the past, our engineers started a day early)! We will start the Bug Bash on Build 14959, which is being released today, so you have time to get your PCs and phones updated prior to the Bug Bash and can start Quests as soon as they are published. The Bug Bash will still finish at the end of the day on Sunday, November 13th (PST). Our whole team will be bug bashing with some fun events in-house, and we are looking forward to seeing a lot of participation from Insiders!

If you have ideas on what Quests you would like to see for the Bug Bash, let us know!

Team Updates

We just returned from our Ignite NZ adventure! It was amazing to meet so many Windows Insiders and receive so much love from the amazing community down in New Zealand. Many of you have been asking us to share our content, so here you go: my keynote on tech innovation, Jeremiah Marble’s and my Co-Create Community session, Katharane Holdsworth’s Inside Windows Insider Program session and an interview with Kyle Drunkerley, one of our Insiders, in an Insider Interview.

We have LOVED hearing about the Create-A-Thons started by Windows Insiders. There is one in the planning stages in Wellington for December, tentatively called a Sheep-A-Thon! If you’d like to get involved, please reach out to Callum.

If you’d like to have your own Create-A-Thon, please grab the DIY kit and let us know so we can amplify and support!

We are gearing up for the MVP Summit starting on Monday, where we are planning to have several Insider events, including a keynote kicking off Monday morning by Bill Karagounis, Jeremiah Marble and me, talking about our program and community.

Thank you everyone and keep hustling,
Dona <3

Announcing The MVP Reconnect Program & The 2016 MVP Global Summit


Once again, it is the time of the year when we have the privilege to host thousands of passionate technical leaders for our MVP Global Summit. This is far from a traditional technical conference. Next week our whole Redmond campus is dedicated to over 2,000 MVPs. Across 16 buildings and 55 of our largest meeting spaces, MVPs will attend over 620 deeply technical sessions, roundtables and networking opportunities with our Product Groups.

This year we have added some new, great workshops. The content will feature topics ranging from community leadership (led by Jono Bacon, the author of the critically acclaimed The Art of Community) and personal expression through storytelling (hosted by distinguished Microsoft engineer James Whittaker) to technical hands-on sessions hosted by our product teams. We’ll also host a Diversity and Inclusion event to welcome and feature diverse members of the MVP community. Our MVPs will have the opportunity to hack side by side with our engineers to create solutions to real-world problems. And over the course of the entire week, many of our most senior leaders will be speaking with the MVPs. You can learn more about these featured speakers on the MVP Summit website.

One of the best things about the MVP Summit is the community spirit among our MVPs. They always cherish the opportunity to reconnect and learn from each other. Over the years, we have received feedback from former MVPs that they were looking for ways to stay in touch with the program and their peers. We want to make sure the valuable expertise these folks contribute to the community is still recognized and supported.

We are therefore thrilled to announce a new program called MVP Reconnect. This is our way of reconnecting with former MVPs and keeping them in touch with Microsoft and this select community of technical experts. The program is available to join starting today. All former MVPs, regardless of their technical expertise or award category, are invited to join the program.

Given all the benefits, access and recognition, lots of people want to know more about how to become an MVP.  I get this question all the time!  While there is no numerical equation for becoming an MVP, the basic formula comes down to distinguishing yourself in both your depth of expertise and your level of contributions that strengthen the technical community. We have highlighted some great examples of real MVPs that showcase the passion, community spirit, and leadership demonstrated by them. I invite you to check out those examples on our new webpage called “What it takes to be an MVP.”

To that end, we are also introducing a new show in our Channel 9 community called The MVP Show. This new format gives everyone the opportunity to see the life of MVPs around the world and how they can do great things for their community and have fun at the same time. In the first episode, we meet Shih-Ming, who is doing some cool bot development in Taiwan. We plan to feature a different MVP from a different place in each episode. Who knows? One day, it might be you!

About a year ago, I announced the new generation of the Microsoft MVP Award Program. This was the first step of a longer journey towards a broader vision: empowering community leaders to achieve more, while providing the special recognition they deserve. As you can see, we are fully committed to the MVP Award Program and its continuous improvement, and providing the best experience for MVPs is what drives us.

Thank you as always for your contributions and feedback. I am looking forward to talking with you in the next few days during the MVP Summit!

Cheers,

Guggs

@stevenguggs
