
Exchange Server Edge Support on Windows Server 2016 Update


Today we are announcing an update to our support policy for Windows Server 2016 and Exchange Server 2016. At this time we do not recommend customers install the Exchange Edge role on Windows Server 2016. We also do not recommend customers enable antispam agents on the Exchange Mailbox role on Windows Server 2016 as outlined in Enable antispam functionality on Mailbox servers.

Why are we making this change?

In our post Deprecating support for SmartScreen in Outlook and Exchange, Microsoft announced we will no longer publish content filter updates for Exchange Server. We believe that Exchange customers will receive a better experience using Exchange Online Protection (EOP) for content filtering. We are also making this recommendation due to a conflict with the SmartScreen Filters shipped for Windows, Microsoft Edge and Internet Explorer browsers. Customers running Exchange Server 2016 on Windows Server 2016 without KB4013429 installed will encounter an Exchange uninstall failure when decommissioning a server. The failure is caused by a collision between the content filters shipped by Exchange and Windows which have conflicting configuration information in the Windows registry. This collision also impacts customers who install KB4013429 on a functional Exchange Server. After the KB is applied, the Exchange Transport Service will crash on startup if the content filter agent is enabled on the Exchange Server. The Edge role enables the filter by default and does not have a supported method to permanently remove the content filter agent. The new behavior introduced by KB4013429, combined with our product direction to discontinue filter updates, is causing us to deprecate this functionality in Exchange Server 2016 more quickly if Windows Server 2016 is in use.

What about other operating systems supported by Exchange Server 2016?

Due to the discontinuance of SmartScreen Filter updates for Exchange server, we encourage all customers to stop relying upon this capability on all supported operating systems. Installing the Exchange Edge role on supported operating systems other than Windows Server 2016 is not changed by today’s announcement. The Edge role will continue to be supported on non-Windows Server 2016 operating systems subject to the operating system lifecycle outlined at https://support.microsoft.com/lifecycle.

Help! My services are already crashing or I want to proactively avoid this

If you used the Install-AntiSpamAgents.ps1 to install content filtering on the Mailbox role:

  1. Find a suitable replacement for your email hygiene needs such as EOP or other 3rd party solution
  2. Run the Uninstall-AntiSpamAgents.ps1 from the \Scripts folder created by Setup during Exchange installation

If you are running the Edge role on Windows Server 2016:

  1. Delay deploying KB4013429 to your Edge role or uninstall the update if required to restore service
  2. Deploy the Edge role on Windows Server 2012 or Windows Server 2012 R2 (Preferred)

Support services are available for customers who may need further assistance.

The Exchange Team


Project Rome for Android Update: Now with App Services Support


Project Rome developers have had a month to play with Project Rome for Android SDK (Android SDK), and we hope you are as excited about its capabilities as we are! In this month’s release, we are thrilled to bring you support for app services. Before, we offered the ability to launch a URI from an Android device onto a Windows device. However, the SDK was limited to sending a URI. With the introduction of app services, now you can easily message between Android and Windows devices.

What are App Services?

In short, app services allow your app to provide services that can be interacted with from other applications. This enables an Android application to invoke an app service on a Windows application to perform tasks behind the scenes. This blog post is focused on how to use app services from Android to Windows devices. For a deeper look at app services on Windows, go here.
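To make the Windows side of this concrete, here is a minimal sketch (not from the original sample) of a UWP app service handler that could receive these messages. The class layout and manifest declaration are assumptions; the "CreationDate" key simply mirrors the ping example shown later in this post.

// App.xaml.cs (UWP) - minimal sketch of a Windows-side app service handler.
// Assumes the app service is declared in Package.appxmanifest (with remote systems support);
// names are illustrative only.
using Windows.ApplicationModel.Activation;
using Windows.ApplicationModel.AppService;
using Windows.ApplicationModel.Background;
using Windows.Foundation.Collections;

sealed partial class App : Windows.UI.Xaml.Application
{
    private AppServiceConnection _connection;
    private BackgroundTaskDeferral _deferral;

    protected override void OnBackgroundActivated(BackgroundActivatedEventArgs args)
    {
        // The app service activates the app in the background; keep the task alive.
        if (args.TaskInstance.TriggerDetails is AppServiceTriggerDetails details)
        {
            _deferral = args.TaskInstance.GetDeferral();
            _connection = details.AppServiceConnection;
            _connection.RequestReceived += OnRequestReceived;
        }
    }

    private async void OnRequestReceived(AppServiceConnection sender, AppServiceRequestReceivedEventArgs e)
    {
        // Act on the incoming message (e.g. skip to the next song) and send a simple reply.
        var requestDeferral = e.GetDeferral();
        var reply = new ValueSet { ["CreationDate"] = e.Request.Message["CreationDate"] };
        await e.Request.SendResponseAsync(reply);
        requestDeferral.Complete();
    }
}

The Android code shown below then connects to this service and exchanges messages with it.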

Messaging Between Connected Devices

Let’s circle back to the example in the original blog post. Paul is an app developer who has integrated the Android SDK into his app. He created his Contoso Music App, giving users the ability to launch the app across devices without skipping a beat. That experience was enabled using the RemoteLaunch APIs, and it has been a great feature for his app. Paul has an Android phone and listens to music while he goes out for a run. When he gets home, he can easily launch the app on his Xbox—with surround sound speakers—to continue playing with higher quality sound.

As Paul moves about the home, he often finds it frustrating that he has to go back to the Xbox to control the music. On a typical day he loads a playlist but finds himself jumping around from song to song, depending on his mood. This is where app services come in.

Now, Paul can add the ability to control the music app running on his Xbox from his Android phone. This works very well for Paul because he’s always carrying his phone with him, so it’s much more convenient than having to go to the Xbox every time he wants to change the song.  Once the Android app establishes an AppServiceClientConnection, messaging can flow between devices.

Here’s a look at the Android SDK app services in code.

First, you must discover devices, using RemoteSystemDiscovery for the connectionRequest:


// Create a RemoteSystemDiscovery object with a Builder
RemoteSystemDiscovery.Builder discoveryBuilder;

// Implement the IRemoteSystemDiscoveryListener to be used for the callback
discoveryBuilder = new RemoteSystemDiscovery.Builder().setListener(new IRemoteSystemDiscoveryListener() {
    @Override
    public void onRemoteSystemAdded(RemoteSystem remoteSystem) {
        Log.d(TAG, "RemoteSystemAdded = " + remoteSystem.getDisplayName());
        devices.add(new Device(remoteSystem));        
    }
});

// Start discovering devices
startDiscovery();
	 

Second, establish an AppServiceClientConnection. The IAppServiceClientConnectionListener handles the status of the connection, while the IAppServiceResponseListener handles the response to the message.

AppServiceClientConnection


// Create an AppServiceClientConnection
private void connectAppService(Device device) {
    _appServiceClientConnection = new AppServiceClientConnection(APP_SERVICE,
        APP_IDENTIFIER,
        new RemoteSystemConnectionRequest(device.getSystem()),
        new AppServiceClientConnectionListener(),
        new AppServiceResponseListener());
}

AppServiceClientConnection callback


// Implement the IAppServiceClientConnectionListener used to handle callbacks
// from the AppServiceClientConnection
private class AppServiceClientConnectionListener implements IAppServiceClientConnectionListener {

    // Handle the cases for success, error, and closed connections
    @Override
    public void onSuccess() {
        Log.i(TAG, "AppService connection opened successfully");
    }

    @Override
    public void onError(AppServiceClientConnectionStatus status) {
        Log.e(TAG, "AppService connection error status = " + status.toString());
    }

    @Override
    public void onClosed() {
        Log.i(TAG, "AppService connection closed");
    }
}
	 

AppServiceClientResponse callback


// Implement the IAppServiceResponseListener used to callback  
// the AppServiceClientResponse
private class AppServiceResponseListener implements IAppServiceResponseListener {
    @Override
    public void responseReceived(AppServiceClientResponse response) {
        AppServiceResponseStatus status = response.getStatus();

        if (status == AppServiceResponseStatus.SUCCESS)
        {
            Bundle bundle = response.getMessage();
            Log.i(TAG, "Received successful AppService response");

            String dateStr = bundle.getString("CreationDate");

            DateFormat df = new SimpleDateFormat(DATE_FORMAT);
            try {
                Date startDate = df.parse(dateStr);
                Date nowDate = new Date();
                long diff = nowDate.getTime() - startDate.getTime();
                runOnUiThread(new SetPingText(Long.toString(diff)));
            } catch (ParseException e) {
                e.printStackTrace();
            }
        }
        else
        {
            Log.e(TAG, "Did not receive successful AppService response");
        }
    }
}
	 

Xamarin

That’s not all: we have updated the Xamarin for Android sample with app services, too.

From the sample, these two functions are used in the RemoteSystemActivity class to connect, and then ping, via app services.

AppServiceClientConnection


private async void ConnectAppService(string appService, string appIdentifier, RemoteSystemConnectionRequest connectionRequest)
{
    // Create AppServiceClientConnection
    this.appServiceClientConnection = new AppServiceClientConnection(appService, appIdentifier, connectionRequest);
    this.id = connectionRequest.RemoteSystem.Id;

    try
    {
        // OpenRemoteAsync returns a Task
        var status = await this.appServiceClientConnection.OpenRemoteAsync();
        Console.WriteLine("App Service connection returned with status " + status.ToString());
    }
    catch (ConnectedDevicesException e)
    {
        Console.WriteLine("Failed during attempt to create AppServices connection");
        e.PrintStackTrace();
    }
}

SendMessageAsync


private async void SendPingMessage()
{
    // Create the message to send
    Bundle message = new Bundle();
    message.PutString("Type", "ping");
    message.PutString("CreationDate", DateTime.Now.ToString(CultureInfo.InvariantCulture));
    message.PutString("TargetId", this.id);

    try
    {
        var response = await this.appServiceClientConnection.SendMessageAsync(message);
        AppServiceResponseStatus status = response.Status;

        if (status == AppServiceResponseStatus.Success)
        {
            // Create the response to the message
            Bundle bundle = response.Message;
            string type = bundle.GetString("Type");
            DateTime creationDate = DateTime.Parse(bundle.GetString("CreationDate"));
            string targetId = bundle.GetString("TargetId");

            DateTime nowDate = DateTime.Now;
            int diff = (int)nowDate.Subtract(creationDate).TotalMilliseconds;

            this.RunOnUiThread(() =>
            {
                SetPingText(this as Activity, diff.ToString());
            });
        }
    }
    catch (ConnectedDevicesException e)
    {
        Console.WriteLine("Failed to send message using AppServices");
        e.PrintStackTrace();
    }
}

All documentation and code for both Java and Xamarin can be found on our GitHub here.

Staying Connected with Project Rome

The power of the Project Rome platform is centered around connecting devices (both Windows and Android). With the introduction of app services functionality into the Android SDK, we continue to provide the tools developers need to create highly compelling experiences.

To learn more about the capabilities of the Android SDK, browse sample code and get additional resources related to the platform, check out the information below:

The Windows team would love to hear your feedback. Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.

The post Project Rome for Android Update: Now with App Services Support appeared first on Building Apps for Windows.

Announcing R Tools 1.0 for Visual Studio 2015


This post is authored by Shahrokh Mortazavi, Partner Director of Program Management at Microsoft.

I’m delighted to announce the General Availability of R Tools 1.0 for Visual Studio 2015 (RTVS). This release will be shortly followed by R Tools 1.0 for Visual Studio 2017 in early May. RTVS is a free and open source plug-in that turns Visual Studio into a powerful and productive R development environment. Check out this video for a quick tour of its core features:

Core IDE Features

RTVS builds on Visual Studio, which means you get numerous features for free, from using multiple languages to world-class editing and debugging, to over 7,000 extensions for every conceivable need.


  • A polyglot IDE – VS supports R, Python, C++, C#, Node.js, SQL, etc. projects simultaneously.
  • Editor – complete editing experience for R scripts and functions, including detachable/tabbed windows, syntax highlighting, and much more.
  • IntelliSense – aka auto-completion, available in both the editor and the Interactive R window.
  • R Interactive Window – work with the R console directly from within Visual Studio.
  • History window – view, search, select previous commands and send to the Interactive window.
  • Variable Explorer – drill into your R data structures and examine their values.
  • Plotting – see all your R plots in a Visual Studio tool window.
  • Debugging – breakpoints, stepping, watch windows, call stacks and more.
  • R Markdown – R Markdown/knitr support with export to Word and HTML.
  • Git – source code control via Git and GitHub.
  • Extensions – over 7,000 extensions covering a wide spectrum from data to languages to productivity.
  • Help – use ? and ?? to view R documentation within Visual Studio.

It’s Enterprise Grade

RTVS includes various features that address the needs of individual as well as data science teams, for example:

SQL Server 2016

RTVS integrates with SQL Server 2016 R Services and SQL Server Tools for Visual Studio 2015. These separate downloads enhance RTVS with support for syntax coloring and Intellisense, interactive queries, and deployment of stored procedures directly from Visual Studio.


Microsoft R Client

Use the stock CRAN R interpreter, or the enhanced Microsoft R Client and its ScaleR functions that support multi-core and cluster computing for practicing data science at scale.

Visual Studio Team Services

Integrated support for git, continuous integration, agile tools, release management, testing, reporting, bug and work-item tracking through Visual Studio Team Services. Use our hosted service or host it yourself, privately.

Remoting

Whether it’s data governance, security, or running large jobs on a powerful server, RTVS workspaces enable setting up your own R server or connecting to one in the cloud.

The Road Ahead

We’re very excited to officially bring another language to the Visual Studio family! Along with Python Tools for Visual Studio, you have the two main languages for tackling most any ML and analytics related challenge. Very soon (~May), we’ll release RTVS for VS2017 as well. We’ll also resurrect the “Data Science workload” in VS2017 which gives you R, Python, F# and all their respective package distros in one convenient install.

Beyond that, we’re looking forward to hearing from you on what features we should focus on next! R package development? Mixed R+C debugging? Model deployment? VS Code/R for cross-platform development? Please let us know on the github repo.

Shahrokh

Resources

 

Visual Studio 2017 can automatically recommend NuGet packages for unknown types


There's a great feature in Visual Studio 2015.3 and Visual Studio 2017 that is turned off by default. It does use about 10 megs of memory, but it makes me so happy that I turn it on.

It's under C# | Advanced in Tools Options. Or you can just type "Advanced" in the Quick Launch Bar (via Ctrl+Q if you like) to jump there.


I turn on "Suggest usings for types in NuGet packages" and "Suggest usings for types in reference assemblies."

For example, if I am typing some code and start referencing a Type that isn't in my project but could be...you know how sometimes you just need a using statement to bring in a namespace? In this Web App, I already have Json.NET so it recommends a using statement to bring it into scope.

Can't find JSON

But in this Console App, I have no packages beyond the defaults. When I start using a type like JObject from a popular NuGet, Visual Studio can offer to install Json.NET for me!

Find and install latest version
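For instance, typing something like this in that fresh console app is all it takes to trigger the offer (a contrived sketch; JObject lives in Newtonsoft.Json.Linq, which is deliberately not referenced yet, so the type stays unresolved until Visual Studio installs the package):

using System;

class Program
{
    static void Main()
    {
        // JObject is unresolved here; the lightbulb offers to add
        // "using Newtonsoft.Json.Linq;" and install the Json.NET package.
        var song = JObject.Parse(@"{ ""title"": ""Example"", ""artist"": ""Contoso"" }");
        Console.WriteLine(song["title"]);
    }
}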

Or another example:

XmlDocument

And then I can immediately continue typing with intellisense. If I know what I'm doing, I can bring in something like this without ever using the mouse or leaving the line.

JObject is now usable

Good stuff! 


Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test, build and debug ASP.NET, .NET Framework, .NET Core, or Unity applications. Learn more and get access to early builds!



© 2017 Scott Hanselman. All rights reserved.
     

Cows can be so silly


Last week, we had some visitors from Redmond visiting us in North Carolina.  I invited a few of them out to my house for dinner – they were interested in seeing this farm that I talk about from time to time.

After we got to my house I herded everyone into my car to drive up to the barn.  As we got to the top of the hill, I could tell something wasn’t right.  I saw a pile of hay and a bunch of cows (yearlings) but no hay ring.  The person sitting next to me in the car, apparently, could see by the look on my face that something was wrong and asked me about it.  I told them I wasn’t sure but this isn’t the way it’s supposed to be.

We got out of the car and I surveyed the field and, immediately, I knew what it was.  Sure enough, half way down the field was one of our calves with the hay ring stuck on its back hips.  Jeez.  Younger animals will go through the holes in the hay ring to stand inside it.  If they are big enough, when they try to get out they will sometimes get the hay ring stuck on their rear hip bones.

You may remember a story from some years ago about how I almost ended up in a pond because I got into the middle of the hay ring to try to get it off a cow and the cow took off running, pulling the hay ring and me with it.  I’ve learned from that lesson.  I did not make the mistake of climbing into the hay ring and, instead, stood beside it and lifted it just high enough for the cow to get its hips out.  Fortunately, it only took a few minutes and the cow stayed reasonably calm.  All was restored to normalcy and no harm was done.  But I do have to wonder how long that cow walked around that field dragging the hay ring 🙂  It couldn’t have been more than a few hours (because someone checks on them a couple of times a day, at least), but it could have been a few hours 🙂

Here’s a picture Aaron took of me rolling the hay ring back after I got it off the cow.  Unfortunately, we didn’t get a picture of the hay ring stuck on the cow – I was too focused on safely getting it off 🙂


You can tell from the picture it was a beautiful day – it was cold, but beautiful.

I guess my friends, at least, have a new appreciation for how odd my life can be 🙂

Brian

VSTest task dons a new avatar – testing with unified agents and phases


Visual Studio Test (VSTest) and the Run Functional Test (RFT) tasks are used widely for continuous testing with Team Build and Release Management. As we thought about how test execution in the pipeline should evolve, the guiding principles were to ensure that test execution in the pipeline is fast and reliable for all types of tests, be it unit tests (native MSTest as well as 3rd party) or functional tests – both UI and non-UI. Build and Release agents are already unified, so we were wondering if we could have the Test Agent integrated as well and get a ‘single automation agent to rule’ 🙂

So, what advantages does the unified agents work bring?

You can leverage the agent pool for all purposes – build, release and test.

  • Reusable agents mean that you no longer need dedicated machines (as required by the RFT task). Admins can set up a reusable pool, and managing machines becomes easier.
  • You can use the unified agent for single machine as well multi-machine distributed execution.
    • VSTest task has been running on the agent, so single machine was always covered.
    • Distributed execution provided by RFT on remote machines using the Test Agent now comes to the unified agent, so you no longer need a separate ‘Deploy Test Agent’ step.
    • ‘Deploy Test Agent’ was based on WinRM, and WinRM had its own set of limitations, making the whole thing a steep learning curve. So yay, another complexity gone.
    • Since all execution is now local to the automation agent and phases download the artifacts automatically to the machines, you also don’t need ‘copy files’ tasks that were required to copy test assemblies and their dependencies when running tests remotely using RFT.

Let us get to the how-to now. The capabilities discussed here apply to the Visual Studio Test v2 (preview) task, so make sure you switch the task versions.


 

The first one is getting to know phases. When you create a release definition, you can add different types of phases – agent phase, deployment group phase or a server phase. You can add multiple phases of different types, in the order you need them, to build your pipeline. Let's take a closer look at the agent phase settings. ‘Run on multiple agents in parallel’ provides 3 options:

  1. None – this means that a single agent from the specified queue will be allocated to this phase. This is the default and all tasks in the phase will run on that agent. When VSTest task runs, it runs exactly how it runs today – no change. You get a single agent test execution. So if I wanted to deploy an Azure web-app and run a small number of quick tests on it (for which a single agent suffices), along with some pre and post test setup/cleanup activities, I can model my environment as follows:
  2. Multi-agent – this mode means that multiple agents will get allocated to the phase. You can specify the number of agents to be allocated from the pool and the set of tasks in that phase will be run across all agents.


VSTest task in this mode is special. It recognizes that it’s a multi-agent phase and runs tests in a distributed manner across the allocated agents. Since other tasks are run across all agents, any pre and post test steps I may want to do, also run equally on all the agents – so all the agents are prepped and cleaned up in a consistent manner. Test execution also does not require all agents to be available at the same time. If some agents are busy with another release or build, the phase can still start with the available number of agents that match the demand and test execution starts. As additional agents become available, they can pick up any remaining tests that have not been run yet. Here’s a screenshot of logs from my multi-agent test run, where some tests have failed.


Artifacts are automatically downloaded when the phase starts, so my test assemblies and other files are already on the agent and I don’t need a copy files task.

So now if I want to publish an Azure web-app and run a large set of tests with fast test execution, I will model my Release environment as 2 phases – 1 being the deploy phase (runs on a single agent – I don’t want multiple agents to deploy the same app concurrently :-)) and a test phase that uses multi-agent mode to do test distribution. This also means that I can use different agent queues for the 2 phases, allowing me to manage agents for different purposes separately if I so choose.


3. Multi-config – this mode is driven by ‘multipliers’, pretty much the same way as a multi-config build is. Define the multipliers as variables, and based on the values of these variables, the various configurations are run. In the case of Build, typically you would use BuildPlatform and BuildConfiguration as multipliers. Let us see how to apply this to testing. Same example as before – I want to deploy a web-app to Azure and run cross-browser tests on IE and Firefox. So I will model my environment as 2 phases – 1 deploy phase and the second a test phase. The test phase is set up as multi-config using a ‘Browser’ variable that has the values IE and Firefox. My phase will now run using these configs (2 in this case); 1 agent gets assigned a config, and the appropriate config values are available for tasks to use.


In my case, I want to use the Browser value to instantiate the right browser in my tests, so I will pass it as a Test Run Parameter and access the value using TestContext in the test code. I will also use this value to title my test runs appropriately, so that if a test fails in a particular config, I can easily figure out which run it came from.
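For illustration, here is a rough sketch of how a test could pick that value up, assuming MSTest and a ‘Browser’ entry under TestRunParameters in the .runsettings file (names are made up for the example):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CrossBrowserTests
{
    // MSTest injects the TestContext; test run parameters surface in its Properties bag.
    public TestContext TestContext { get; set; }

    [TestMethod]
    public void HomePage_Loads()
    {
        // "Browser" is the multi-config variable passed through as a test run parameter.
        string browser = TestContext.Properties["Browser"]?.ToString() ?? "IE";

        // ...launch the requested browser (Selenium, Coded UI, etc.) and run the checks...
        Assert.IsFalse(string.IsNullOrEmpty(browser));
    }
}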


Here’s what the execution would look like:


That’s all for today’s post. Give the VSTest v2 task a spin and let us know your feedback. Leave comments below or reach me at pbora AT microsoft DOT com.

FAQ

  1. How do I do this with Build?
    • The phases capability is currently available only in Release Management. It will become available in Build in a few sprints.
  2. Does VSTest v1 task behave the same way as the v2 task?
    • No, the v1 task does not do distribution. In the single agent (default, no parallelism) setting, the task will run as the way it has been running today. In the multi-config and multi-agent scenarios, it will get replicated on the agents, like all other tasks.
  3. What is needed to run UI tests?
    • To run UI tests, be sure to run agent in interactive mode. Agents set to run as service cannot run UI tests. Interactive agents in their current form will go down if the machine reboots for any reason. Enhancing the agent to survive reboots in interactive mode is being worked on. Also disable screensaver and unlock the machine so that UI actions in the test don’t get blocked. Automatic configuration of agents to do this is in the works.
  4. Can I run UI tests on the hosted agents?
    • No, running UI tests on the hosted agents is not possible currently.
  5. What does the ‘Test mix contain UI tests’ checkbox do?
    • Currently, it’s there only to serve as a reminder to run agents interactively if you are running UI tests. 🙂 If you are using an agent pool with a mix of interactive and ‘running as service’ agents, you may also want to add an ‘Interactive’ capability to your agents and use that demand in your test phase to pick the set of agents that can run UI tests.
  6. In multi-config mode, do I get distribution of tests as well?
    • No, multi-config mode assigns only one agent per config.
  7. How do I map the config in multi-config to my Test Configurations in TCM?
    • Currently this is not possible.
  8. How else can I use multi-config?
    • This mode can be used whenever you need multiple agents to do parallel jobs. For some other examples, refer the docs.
  9. Has the Run Functional Tests task changed also?
    • No, the Run Functional Tests (RFT) task has not changed. If you are using RFT task, you DO need the ‘Deploy Test Agent’ step. Please note that since tasks get replicated in the multi-agent and multi-config mode, using Run Functional Tests task in that mode will lead to undesirable effects.
  10. Do I need to install Visual Studio on all the machines to use the VSTest v2 task?
    • Currently, yes. We are looking at alternate means to run tests without needing Visual Studio on the agent, so that you can create a separate pool of agents for testing purposes.
  11. I am using my own test runner (not VSTest) in the pipeline. What happens to it?
    • In the multi-agent and multi-config mode, the task will get replicated on each of the agents. You can leverage the multi-config mode to partition your tests on different configs using the config variable (e.g., if you have a config variable called Platform that takes values of x86 and x64, you can run the two sets of tests on 2 agents in parallel by referring to your test assemblies using ‘**\$(Platform)\*test*.dll’).
  12. Does the VSTest v2 task run on Deployment Groups?
    • Yes, the VSTest v2 task can be used to run on Deployment Groups as well. If you have scenarios that necessitate running tests on machines in the deployment group where the app is deployed, you can use the VSTest v2 task. If multiple machines are selected (say, using tags) in the ‘Run on Deployment Group’ phase, the tests will get replicated on each of the machines.

Video:  Everything You Want to, Need to, and/or Should Know About EMS in 2017


The demands of mobility and security change fast. Really fast. Faster than anyone could have imagined just a few years ago.

This is a reality that inspires the work we do with EMS, architecting it as a cloud service so that its features and solutions are tuned to the needs of your organization. The speed at which we update and improve EMS is, in my estimation, unmatched anywhere else, and this rapid and regular cadence is our way of underscoring the value we place on your organization's productivity, security, and success.

That commitment is a lot more than corporate platitudes, and to show just how serious we are about this, this new video is a great way to learn about everything we've added since we published the 2016 overview.

Spoiler alert: There is an unbelievable number of upgrades, improvements, and enhancements. I think you're going to be blown away by what you see. There are things in this video that only EMS can provide, as well as things EMS can do far better than anyone else.

If you want to see what's in store, here's the table of contents from the video's intro:


 

Help us test Cloud Attachments in Outlook 2016 with SharePoint Server 2016


My name is Steven Lepofsky, and I’m an engineer on the Outlook for Windows team. We have released (to Insiders) support for Outlook 2016’s Cloud Attachment experience with SharePoint Server 2016. We need your help to test this out and give us your feedback!

So, what do I mean by “cloud attachments?” Let’s start there.

The Cloud Attachment Experience Today

Back when we shipped Outlook 2016, we included a refreshed experience for how you can add attachments in Outlook. To recap, here are a few of the new ways Outlook helped you to share your files and collaborate with others:

We added a gallery that shows your most recently used documents and files. Files in this list could come from Microsoft services such as OneDrive, OneDrive for Business, SharePoint hosted in Office 365 or your local computer. When you attach these files, you have the option of sharing a link to the file rather than a copy. With the co-authoring power of Microsoft Office, you can collaborate in real time on these documents without having to send multiple copies back and forth.


Is the file you’re looking for not showing up in the recent items list? Outlook includes handy shortcuts to Web Locations where your file might be stored:


And in a recent update, we gave you the ability to upload files directly to the cloud when you attach a file that is stored locally:


Adding Support for SharePoint Server 2016

Until now, Cloud Attachments were only available from Office 365 services or the consumer version of OneDrive. We are now adding the ability to connect to SharePoint Server 2016, so you can find and share files from your on-premises SharePoint server in a single click. We’d love your help testing this out before we roll it out to everyone!

The new experience will match what we have today, just with an additional set of locations. Once set up, you’ll have new entries under Attach File -> Browse Web Locations. These will show up as “OneDrive for Business” for a user’s personal documents folder, and “Sites” for team folders.

Note: If you also happen to be signed in to any Office 365 SharePoint or OneDrive for Business sites under File -> Office Account, both sites may show up. The difference will be that the Office 365 versions will have branding for your company. For example, it may say “OneDrive – Contoso” rather than “OneDrive for Business”, or “Sites – Contoso” rather than “Sites.”


You’ll be able to upload locally attached files to the OneDrive for Business folder located on your SharePoint Server.


And, of course, you’ll see recently used files from your SharePoint server start to show up in your recently used files list.


How to get set up

Here are the necessary steps and requirements to start testing this feature out:

  1. This scenario is only supported if you are also using Exchange Server 2016. You’ll need to configure your Exchange server to point to your SharePoint Server 2016 Internal and/or External URLs. See this blog post for details: Configure rich document collaboration using Exchange Server 2016, Office Online Server (OOS) and SharePoint Server 2016
  2. You’ll need Outlook for Windows build 16.0.7825.1000 or above.
  3. Ensure that your SharePoint site is included in the Intranet zone.
  4. Optional: Ensure that crawling is enabled so that your documents can show up in the recent items gallery. Other features such as uploading a local attachment to your site will work even if crawling is not enabled. See this page for more details: Manage crawling in SharePoint Server 2013

Once enrolled, any mailbox that boots up Outlook and is configured with your SharePoint Server’s information per step #1 above will start to see the new entry points for the server.

We hope you enjoy this sneak peek, and please let us know how this is working for you in the comments below!

Steven Lepofsky


Now Available: Update 1702 for System Center Configuration Manager


We are delighted to announce that we have released version 1702 for the Current Branch (CB) of System Center Configuration Manager that includes new features and product enhancements!

Many of these enhancements are designed for organizations that are going through the digital transformation and want to modernize their IT infrastructure, policies and processes. As one of the first steps in this journey, our customers are upgrading to the Current Branch of ConfigMgr, and by doing so, they are starting to gain some benefits such as lower costs, simplified management, and a better experience for both users and IT Pros. Take a look at how the Australian Government Department of Human Services went through this journey.

This transformation is also supported by data that we see through our telemetry. There are now more than 31,000 organizations managing almost 70 million devices with the Current Branch of Configuration Manager. We expect this trend to continue in the coming months as more customers start to realize the benefits of improved productivity and security as well as lower costs that come with staying current with Windows 10, Office 365, and Configuration Manager.

Thanks to our active Technical Preview Branch community, the 1702 update includes feedback and usage data we have gathered from customers who have installed and road tested our monthly technical previews over the last few months. As always, 1702 has also been tested at scale by real customers, in real production environments. As of today, nearly 1 million devices are being managed by the version 1702 of Configuration Manager.

1702 update includes many new features and enhancements in Windows 10 management and also new functionality for customers using Configuration Manager connected with Microsoft Intune. Here are just few of the enhancements that are available in this update:

  • Support for Windows 10 Creators Update – This version of Configuration Manager now supports the release of upcoming Windows 10 Creators Update. You can upgrade Windows 10 ADK to the latest version for full OS imaging support.
  • Express files support for Windows 10 Cumulative Updates – Configuration Manager now supports Windows 10 Cumulative Updates using Express files.
  • Customize high-risk deployment warning – You can now customize the Software Center warning when running a high-risk deployment, such as a task sequence to install a new operating system.
  • Close executable files at the deadline when they would block application installation– If executable files are listed on the Install Behavior tab for a deployment type and the application is deployed to a collection as required, then a more intrusive notification experience is provided to inform the user, and the specified executable files will be closed automatically at the deadline. This is currently the feature with the second highest number of votes on UserVoice.
  • Conditional access for PCs managed by System Center Configuration Manager – Now production ready in update 1702, with conditional access for PCs managed by Configuration Manager, you can restrict access to various applications (including but not limited to Exchange Online and SharePoint online) to PCs that are compliant with the compliance policies you set.

This release also includes new features for customers using Configuration Manager connected with Microsoft Intune. Some of the new feature include:

  • Android for Work support – You can now enroll devices, approve and deploy apps, and configure policies for devices with Android for Work.
  • Lookout threat details – You can view threat details as reported by Lookout on a device.
  • Apple Volume Purchase Program (VPP) enhancements – You can now request a policy sync on an enrolled mobile device from the Configuration Manager console.
  • Additional iOS configuration settings – We added support for 42 iOS device settings for configuration items.

For more details and to view the full list of new features in this update, check out our What's new in version 1702 of System Center Configuration Manager documentation.

Note: As the update is rolled out globally in the coming weeks, it will be automatically downloaded and you will be notified when it is ready to install from the Updates and Servicing node in your Configuration Manager console. If you can't wait to try these new features, this PowerShell script can be used to ensure that you are in the first wave of customers getting the update. By running this script, you will see the update available in your console right away.

For assistance with the upgrade process, please post your questions in the Site and Client Deployment forum. To provide feedback or report any issues with the functionality included in this release, please use Connect. If there's a new feature or enhancement you want us to consider including in future updates, please use the Configuration Manager UserVoice site.

Thank you,

The System Center Configuration Manager team

 

Additional resources:

Command Line: Using dotnet watch test for continuous testing with .NET Core 1.0 and XUnit.net


I've installed .NET Core 1.0 on my machine. Let's see if I can get a class library and tests running and compiling automatically using only the command line. (Yes, some of you are freaked out by my (and other folks') appreciation of a nice, terse command line. Don't worry. You can do all this with a mouse if you want. I'm just enjoying the CLI.)

NOTE: This is considerably updated from the project.json version in 2016.

First, I installed from http://dot.net/core. This should all work on Windows, Mac, or Linux.

C:\> md testexample & cd testexample

C:\testexample> dotnet new sln
Content generation time: 33.0582 ms
The template "Solution File" created successfully.

C:\testexample> dotnet new classlib -n mylibrary -o mylibrary
Content generation time: 40.5442 ms
The template "Class library" created successfully.

C:\testexample> dotnet new xunit -n mytests -o mytests
Content generation time: 87.5115 ms
The template "xUnit Test Project" created successfully.

C:\testexample> dotnet sln add mylibrary\mylibrary.csproj
Project `mylibrary\mylibrary.csproj` added to the solution.

C:\testexample> dotnet sln add mytests\mytests.csproj
Project `mytests\mytests.csproj` added to the solution.

C:\testexample> cd mytests

C:\testexample\mytests> dotnet add reference ..\mylibrary\mylibrary.csproj
Reference `..\mylibrary\mylibrary.csproj` added to the project.

C:\testexample\mytests> cd ..

C:\testexample> dotnet restore
Restoring packages for C:\Users\scott\Desktop\testexample\mytests\mytests.csproj...
Restoring packages for C:\Users\scott\Desktop\testexample\mylibrary\mylibrary.csproj...
Restore completed in 586.73 ms for C:\Users\scott\Desktop\testexample\mylibrary\mylibrary.csproj.
Installing System.Diagnostics.TextWriterTraceListener 4.0.0.
...SNIP...
Installing Microsoft.NET.Test.Sdk 15.0.0.
Installing xunit.runner.visualstudio 2.2.0.
Installing xunit 2.2.0.
Generating MSBuild file C:\Users\scott\Desktop\testexample\mytests\obj\mytests.csproj.nuget.g.props.
Generating MSBuild file C:\Users\scott\Desktop\testexample\mytests\obj\mytests.csproj.nuget.g.targets.
Writing lock file to disk. Path: C:\Users\scott\Desktop\testexample\mytests\obj\project.assets.json
Installed:
16 package(s) to C:\Users\scott\Desktop\testexample\mytests\mytests.csproj

C:\testexample> cd mytests & dotnet test

Build started, please wait...
Build completed.

Test run for C:\testexample\mytests\bin\Debug\netcoreapp1.1\mytests.dll(.NETCoreApp,Version=v1.1)
Microsoft (R) Test Execution Command Line Tool Version 15.0.0.0
Copyright (c) Microsoft Corporation. All rights reserved.

Starting test execution, please wait...
[xUnit.net 00:00:00.5539676] Discovering: mytests
[xUnit.net 00:00:00.6867799] Discovered: mytests
[xUnit.net 00:00:00.7341661] Starting: mytests
[xUnit.net 00:00:00.8691063] Finished: mytests

Total tests: 1. Passed: 1. Failed: 0. Skipped: 0.
Test Run Successful.
Test execution time: 1.8329 Seconds

Of course, I'm testing nothing yet but pretend there's a test in the tests.cs and something it's testing (that's why I added a reference) in the library.cs, OK?
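If you want something concrete to picture, a minimal pair might look like this (the class and file names are made up for the example):

// mylibrary/library.cs
namespace mylibrary
{
    public static class Calculator
    {
        public static int Add(int a, int b) => a + b;
    }
}

// mytests/tests.cs
using mylibrary;
using Xunit;

namespace mytests
{
    public class CalculatorTests
    {
        [Fact]
        public void Add_ReturnsSum()
        {
            Assert.Equal(4, Calculator.Add(2, 2));
        }
    }
}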

Now I want to have my project build and tests run automatically as I make changes to the code. I can't "dotnet add tool" yet so I'll add this line to my test's project file:
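It's the CLI tool reference for dotnet watch; it goes inside an ItemGroup in mytests.csproj and looks something like this (I'm assuming the package version here; use whatever Microsoft.DotNet.Watcher.Tools version is current):

<ItemGroup>
  <DotNetCliToolReference Include="Microsoft.DotNet.Watcher.Tools" Version="1.0.0" />
</ItemGroup>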




Then I just dotnet restore to bring in the tool.

NOTE: There's a color bug using only cmd.exe so on "DOS" you'll see some ANSI chars. That should be fixed in a minor release soon - the PR is in and waiting. On bash or PowerShell things look fine.

In this screenshot, you can see as I make changes to my test and hit save, the DotNetWatcher Tool sees the change and restarts my app, recompiles, and re-runs the tests.

Test Run Successful

All this was done from the command line. I made a solution file, made a library project and a test project, made the test project reference the library, then built and ran the tests. If I could add the tool from the command line I wouldn't have had to manually touch the project file at all.

Again, to be sure, all this is stuff you can (and do) do in Visual Studio manually all the time. But I'll race you anytime. ;)


Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test, build and debug ASP.NET, .NET Framework, .NET Core, or Unity applications. Learn more and get access to early builds!


© 2017 Scott Hanselman. All rights reserved.
     

Power BI Mobile apps feature summary – March 2017

Hello everyone! We are happy to share notes for the latest release of our Power BI Mobile apps. This update includes some new and exciting capabilities, improvements to existing features, and extended support for additional platforms. Want to let us know what you think of these changes or have an idea for future development? Don’t be shy – we want to hear from you on our mobile feedback forum.

Large-Scale Analysis of DNS Query Logs Reveals Botnets in the Cloud


This post was co-authored by Tomer Teller, Senior Security Program Manager, Azure Security.

The arms race between data security professionals and cybercriminals continues at a rapid pace. More than ever, attackers exploit compute resources for malicious purposes by deploying malware, known as “bots”, in virtual machines running in the cloud. Even a conservative estimate reveals that at least 1 in every 10,000 machines is part of some known Botnet.

To better protect VMs in the cloud, Azure Security Center (ASC) applies a novel supervised Machine Learning model for high-precision Botnet detection based on analysis of DNS query logs. This model achieves 95% precision and 43% recall and can detect Botnets before they are reported by antimalware companies.

Communication patterns between Botnets and their CnC server

Bots are controlled by the attacker (the Botmaster) using one or more Command and Control (CnC) servers. The compromised machines that are part of this network are called bots (or zombies), and the network itself is called a Botnet. A typical Botnet structure is illustrated in the following figure.

The Structure

Historically, a CnC server was assigned a static IP address making it very easy to take down or blacklist. To avoid detection, Botmasters responded by creating more complex bot/CnC communication patterns.

For example, attackers developed methods, such as Fast-Flux (Mehta, 2014), that use domain names to locate the CnC server which frequently changes its IP address. In addition, domain generation algorithms (DGA) are also used by various families of malware, with Conficker being perhaps the most notorious example. The DGA pattern works by periodically generating a large number of domain names that can be used by bots as connection points to the CnC servers.

More recently, social networks and other user-generated content sites are being exploited by Botmasters to pass information without ever establishing a direct link with a CnC server. Security professionals can therefore no longer rely on simple rule-based approaches to detect these complex communication patterns.

Opportunity to detect Botnets in the cloud

One of the more common applications of machine learning in the cybersecurity domain is anomaly detection. The idea is that a compromised machine exhibits anomalous behavior. While this assumption is usually correct, the opposite seldom holds. Therefore, such techniques achieve low precision and thus produce many false alarms.

Cloud providers such as Microsoft possess a unique opportunity to detect Botnet activity with much greater accuracy by applying large scale machine learning over multiple data sources as well as a combined view of all the VM logs. Unlike most other systems, which analyze data from each machine in isolation, our approach can effectively uncover patterns that are typical of Botnets.

Gathering the data

We collect DNS query and response data from Azure VMs. The logs contain around 50TB of data per day and include information such as the query name, the queried domain name server, the DNS response, and other DNS logging information.

In addition to DNS query and response data, we also use a Microsoft automated machine-readable feed of threat intelligence (TI). The feed includes information about IP addresses of devices which are likely to be part of a Botnet as well as the IP addresses and domains of known CnC servers.

To achieve optimal results, we model Botnet detection as a 2-class supervised learning problem. That is, we classify whether a VM (on a specific date) is part of a Botnet based on that VM’s DNS query log. VM instances are labeled as possibly participating in a Botnet based on the following criteria (a small sketch of this labeling rule follows the list):

  1. The IP address of the VM appears in the TI Botnet feed of that same day.
  2. The VM issued a DNS query with a domain known to belong to a CnC.
  3. The VM received a DNS response to an issued query and the resulting mapped IP is a known IP address of a CnC.
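As a simplified illustration of those three criteria (the set and parameter names here are made up; the real pipeline works over the daily TI feed), the labeling rule boils down to:

using System.Collections.Generic;

static class BotnetLabeler
{
    // Labels a VM instance (a VM on a specific day) using that day's threat-intelligence feed.
    public static bool IsLikelyBotnetMember(
        string vmIpAddress,
        IEnumerable<string> queriedDomains,
        IEnumerable<string> resolvedIps,
        ISet<string> tiBotnetIps,
        ISet<string> tiCncDomains,
        ISet<string> tiCncIps)
    {
        // 1. The VM's IP appears in the TI Botnet feed for that day.
        if (tiBotnetIps.Contains(vmIpAddress)) return true;

        // 2. The VM queried a domain known to belong to a CnC server.
        foreach (var domain in queriedDomains)
            if (tiCncDomains.Contains(domain)) return true;

        // 3. A DNS response mapped one of the VM's queries to a known CnC IP address.
        foreach (var ip in resolvedIps)
            if (tiCncIps.Contains(ip)) return true;

        return false;
    }
}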

Feature extraction

The VM instances represent a VM on a specific day and are labeled as possible participants of a botnet based on the TI feed of that day. In our problem, feature extraction is difficult; the number of domains accessed by a given VM can be very large and the total number of possible domains is massive. Hence, the domain space is huge and relatively dense.

Moreover, since the model is used continuously, it needs to identify Botnets even when they query for domains that were unseen during training. Based on communication patterns with CnC servers, our features should capture the insights laid out in the following table.

  • Rare domains – Domain names of CnC servers are rare since they are seldom requested by legitimate users.
  • Young domains – When a domain generation algorithm (DGA) is used, the CnC server frequently acquires new domain names, hence they tend to be recently registered. We use a massive, daily updated data feed to map domain names to their registration date.
  • Domains idiosyncratic to Botnets – Botnets controlled by the same CnC server issue DNS queries which contain similarities to each other yet are different from others.
  • Non-existent domain responses – When DGA is used, Botnets query many non-existing domains before they find the actual domain of their CnC server for that time.

To efficiently generate the features for each instance we apply two passes over the dataset. In the first pass, we generate a Reputation Table (RT) which maps domains at a given day to:

  • Rareness scores
  • Youngness scores
  • Botnet idiosyncratic scores

In the second pass we calculate the features for each instance based on the reputation scores of domains it queried for. The RT is calculated as follows:

[Figure: reputation table calculation]

To generate features for a VM on a given day, we create two sets, Dnx and Dx, which contain all the non-existent and existent domains queried by the VM, respectively. We produce the feature vectors by summing up the corresponding values in the reputation table for each domain in Dnx and Dx separately.

Similarly, we do the same for the name-servers being queried (i.e., the DNS server that eventually resolved the DNS query). The latter features help identify legitimate scenarios in which rare subdomains of a non-malicious zone are accessed, e.g., rare subdomains of Yahoo.com. For classification, we used Apache Spark’s Gradient-Boosted Trees with default parameters.
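As a simplified illustration of that second pass (the production pipeline runs on Spark; this sketch only shows the aggregation idea, with made-up type and method names):

using System.Collections.Generic;
using System.Linq;

// Reputation scores produced by the first pass, keyed by domain (illustrative types).
class DomainReputation
{
    public double Rareness;
    public double Youngness;
    public double BotnetIdiosyncrasy;
}

static class BotnetFeatures
{
    // Second pass: a per-VM feature vector is the reputation scores summed separately
    // over the existent (Dx) and non-existent (Dnx) domains the VM queried that day.
    public static double[] Build(
        IReadOnlyDictionary<string, DomainReputation> reputationTable,
        IEnumerable<string> existentDomains,
        IEnumerable<string> nonExistentDomains)
    {
        double[] Sum(IEnumerable<string> domains)
        {
            double rare = 0, young = 0, idio = 0;
            foreach (var d in domains)
            {
                DomainReputation r;
                if (reputationTable.TryGetValue(d, out r))
                {
                    rare += r.Rareness;
                    young += r.Youngness;
                    idio += r.BotnetIdiosyncrasy;
                }
            }
            return new[] { rare, young, idio };
        }

        return Sum(existentDomains).Concat(Sum(nonExistentDomains)).ToArray();
    }
}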

Experimental evaluation

We use the Microsoft TI feed to generate the labels for our daily Azure VM instances. However, these labels are not perfect; Botnets can still remain undetected for quite some time. Hence, the labels we produced based on the feed are not comprehensive. This makes our evaluation setting different from that of a standard classification problem, since our goal is not to perfectly match labels that can be extracted based on the feed. This would simply duplicate information which is already available in the daily feed. Instead, our goal is to find compromised VMs before they appear in the TI feed.

With this in mind, we trained our model on a week of data from early June 2016. We let the model classify instances from late June and produce our “ground truth” for evaluation based on labels generated from the TI feed looking forward in time one week (into July). We report the accuracy of our model in the following confusion matrix.
[Figure: confusion matrix]

From the matrix, we learn that the model classified 432 (411+21) instances from the test set as being Botnets. Out of these, 95% (411) eventually appear in the Interflow feed (within a week), hence the model achieves 95% precision and 43% recall. Note that the 5% apparent FPs may still potentially be Botnets that have not yet appeared in the feed, hence they require further investigation.

Conclusions

We present a novel supervised ML model for Botnet detection based on DNS logs. We generate the labels for the supervised model based on a threat intelligence feed provided by anti-malware vendors. We show that the model is able to identify with high accuracy the VMs that are part of a botnet well before they become part of the TI feed. This new Botnet detection feature will reduce the risk of Azure VMs becoming infected with malware.

Azure Relay Hybrid Connections is generally available


The Azure Relay service was one of the first core Azure services. Today’s announcement shows that it has grown up nicely with the times. For those familiar with the WCF Relay feature of Azure Relay, rest assured it will continue to function, but its dependency on Windows Communication Foundation is not for everyone. The Hybrid Connections feature of Azure Relay sheds this dependency by utilizing open, standards-based protocols.

Hybrid Connections contains a lot of the same functionality as WCF Relay including:

  • Secure connectivity of on-premises assets and the cloud
  • Firewall friendliness as it utilizes common outbound ports
  • Network management friendliness that won't require a major reconfiguration of your network

The differences between the two are even better!

  • Open standards based protocol and not proprietary! WebSockets vs. WCF
  • Hybrid Connections is cross platform, using Windows, Linux or any platform that supports WebSockets
  • Hybrid Connections supports .NET Core, JavaScript/Node.js, and multiple RPC programming models to achieve your objectives

Getting started with Azure Relay Hybrid Connections is simple and easy with steps here for .NET and Node.js.
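To give a flavor of the .NET experience, here is a minimal listener sketch using the Microsoft.Azure.Relay package; the namespace, hybrid connection name, and key are placeholders you would replace with your own:

using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Relay;

class EchoListener
{
    static void Main() => RunAsync().GetAwaiter().GetResult();

    static async Task RunAsync()
    {
        // Placeholders: substitute your Relay namespace, hybrid connection name and SAS key.
        var tokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(
            "RootManageSharedAccessKey", "<your-sas-key>");
        var listener = new HybridConnectionListener(
            new Uri("sb://<your-namespace>.servicebus.windows.net/<your-hybrid-connection>"),
            tokenProvider);

        await listener.OpenAsync(CancellationToken.None);

        // Accept one rendezvous connection and echo a line back over the WebSocket-based stream.
        HybridConnectionStream stream = await listener.AcceptConnectionAsync();
        var reader = new StreamReader(stream);
        var writer = new StreamWriter(stream) { AutoFlush = true };

        string line = await reader.ReadLineAsync();
        await writer.WriteLineAsync("Echo: " + line);

        await stream.CloseAsync(CancellationToken.None);
        await listener.CloseAsync(CancellationToken.None);
    }
}

A client looks much the same, using HybridConnectionClient and CreateConnectionAsync to open its end of the stream.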

If you want to try it (and we hope you do), you can find out more about Hybrid Connections pricing and the Azure Relay offering.

Skype for Business drives digital transformation


Today’s post was written by Ron Markezich, corporate vice president for Office 365 Marketing.

Office 365 is a universal toolkit for collaboration with more than 85 million monthly active users, designed to address the unique workstyle of every group. Through integration with Outlook for email, SharePoint for intelligent content management, Yammer for networking across the organization, and Microsoft Teams for high-velocity, chat-based teamwork—Skype for Business is the backbone for enterprise voice and video meetings in Office 365.

As communication and collaboration become increasingly vital to the way work gets done, customers are turning to Skype for Business in Office 365 for all of their conferencing and calling needs. People around the globe conduct over one billion meetings per year on the Skype network, and usage of Skype for Business Online has doubled in the last year.

Today, as the annual unified communications industry conference Enterprise Connect kicks off in Orlando, we’re pleased to announce several new enhancements and partner solutions for Skype for Business in Office 365, which advance our goal of putting communication at the heart of productivity with Skype:

  • Availability of Auto Attendant and Call Queues, two new calling features in Skype for Business Cloud PBX.
  • Preview of the new Skype for Business Call Analytics dashboard, which provides IT admins with greater visibility to identify and address call issues.
  • New meeting room solutions from our partners, including Polycom RealConnect for Office 365, which enables customers to connect existing video conferencing devices to Skype for Business Online meetings; and the new Crestron SR for Skype Room Systems, which seamlessly integrates with the Crestron control and AV systems.
  • Availability of Enghouse Interactive’s TouchPoint Attendant, the first attendant console for Skype for Business Online.

“Skype for Business Online is becoming part of our DNA.”
—Menakshi Sehwani, regional IT business partner for J. Walter Thompson Europe

A complete, enterprise-grade communications solution

This week, we’re releasing Auto Attendant and Call Queues, two new advanced calling features in Skype for Business Cloud PBX. Auto Attendant provides an automated system to answer and route inbound calls using dial pad inputs and speech recognition. Call Queues enable incoming calls to be routed to the next available live attendant in the order they are received.

This continues the rapid innovation over the past six months, during which we have released into the service:

  • iOS CallKit integration.
  • Skype for Business client for Mac.
  • Expanding PSTN Conferencing to more than 90 countries with dial-out to 180 countries.
  • Extending PSTN Calling to France, Spain and the UK, with preview currently available in the Netherlands.
  • Enabling thousands of customers with hybrid deployments.
  • Skype for Business Cloud Connector Edition, which connects on-premises telephony assets to our cloud voice solution.

With Skype for Business, companies can replace their legacy meeting and phone systems, and enable their employees to join meetings, as well as to make, receive and manage calls right within Office 365—all on any device. Skype for Business Cloud PBX also provides central management within the Office 365 admin console, making it seamless for IT admins to manage communications alongside email, content and collaboration.

Simplified manageability and control for IT

Today, we are also announcing a preview of Skype for Business Online Call Analytics—a new dashboard in the Office 365 admin console that gives IT admins greater visibility to identify and address user call issues, such as network issues or headset problems. Customers tell us some of the greatest benefits of moving their communications to the cloud are the ability to consolidate all their meeting and calling systems into a single solution and streamline provisioning and administration. Customers have also asked for more visibility into calling data to help address user support inquiries. Call Analytics provides rich telemetry data in real-time to help IT admins troubleshoot issues and improve the user experience.

In addition to investing in IT management capabilities like the Call Analytics dashboard, we also released new authentication capabilities to enhance security in Skype for Business Online, including multi-factor authentication for PowerShell, certificate-based authentication, and custom policies for client conferencing and mobility.

“We want IT at Henkel to be an enabler for the digital world of the future,
and with features like Cloud PBX in Skype for Business, we live up to that role.”
—Markus Petrak, corporate director of Integrated Business Solutions for Henkel

Making meeting rooms more effective

For meetings to be as effective and engaging as possible for all participants—no matter their location—groups need web and video conferencing with features like screen sharing, IM and whiteboarding. At the same time, organizations want to take advantage of the full Skype for Business experience while leveraging their existing conferencing assets. Today, Polycom announced their RealConnect for Office 365 video interoperability cloud service will be generally available in North America in April. The RealConnect service enables customers to connect existing videoconferencing (VTC) devices to Skype for Business Online, at a low cost of ownership, and with ease of provisioning for IT and simplicity for users.

“Polycom RealConnect for Office 365 simplifies the video world by connecting Skype for Business online users with those using other video systems,” said Mary McDowell, Polycom CEO. “This cloud service protects customers’ investments in existing video systems as it allows these users to join a Skype for Business meeting with a single click.”

In addition, this week Crestron is introducing its SR for Skype Room Systems solution. As a next-generation Skype Room System, the Crestron SR will deliver a full native Skype for Business experience and has been designed from the ground up to seamlessly integrate with the Crestron control and AV systems. These Skype Room Systems solutions transform conference rooms of all sizes by providing rich audio, HD video and content sharing in the room. Remote participants have quick and easy join-meeting functionality and the ability to make phone calls. Customers are already seeing benefits from the Logitech SmartDock, which shipped in October 2016.

“User adoption is critical for our IT success, and Logitech SmartDock with
Skype Room Systems makes it easy to collaborate over video.
The fact that it is highly affordable enables us to light up multiple rooms
for the price of a single traditional video conference room.”
—Franzuha Byrd, director of IT for Morgan Franklin Consulting

Business solutions on Skype for Business

Just as Skype for Business powers communication across Office 365, our partners and customers are taking advantage of Skype for Business APIs and SDKs to develop custom apps that bring real-time communications capabilities into line of business applications and enterprise solutions.

At HIMSS, we announced the availability of the Skype for Business App SDK and Office 365 Virtual Health Templates. Today, we’re pleased to announce that Enghouse has released its TouchPoint Attendant, one of the first attendant consoles tailored for Skype for Business Online.

From Enghouse, which is using Skype for Business to more efficiently route inbound customer calls with its new attendant console, to Smartsheet, which has incorporated Skype for Business into their collaborative work management platform, companies are making Skype for Business the backbone of custom communications scenarios.

Join us at Enterprise Connect this week

Office 365 is the broadest and deepest toolkit for communication and collaboration in the market, meeting the diverse needs of teams and individuals around the world. Skype for Business is our single platform for meetings, video and voice, and is core to Office 365, accelerating how teams and people build, create or produce, whether it be documents or ideas. We are excited to share our new innovations this week that drive greater productivity and simplified management as part of our comprehensive platform on-premises and in the cloud.

Join us live from Enterprise Connect, 10:00 a.m. EDT on Wednesday, March 29, 2017, when I deliver the Microsoft Keynote on how Microsoft is helping customers with their digital transformation by empowering people, IT and organizations through modern communication and collaboration.

—Ron Markezich

The post Skype for Business drives digital transformation appeared first on Office Blogs.

Azure Data Factory’s Data Movement is now available in the UK


Data Movement is a feature of Azure Data Factory, a cloud-based data integration service that orchestrates and automates the movement and transformation of data. You can now create data integration solutions using Azure Data Factory that can ingest data from various data stores, transform/process data, and publish results to data stores.

Moreover, you can now utilize Azure Data Factory for both your cloud and hybrid data movement needs with the UK data store. For instance, when copying data from a cloud data source to an Azure store located in the UK, the Data Movement service in UK South will perform the copy and ensure compliance with data residency.

Note: Azure Data Factory itself does not store any data, but instead lets you create data-driven flows to orchestrate movement of data between supported data stores and the processing of data using compute services in other regions or in an on-premises environment.

To learn more about using Azure Data Factory for data movement, view the Move data by using Copy Activity article. 

You can also go to Azure.com to learn more about Azure Data Factory, or view the more in-depth Azure Data Factory documentation.


Detecting and mitigating elevation-of-privilege exploit for CVE-2017-0005


On March 14, 2017, Microsoft released security bulletin MS17-013 to address CVE-2017-0005, a vulnerability in the Windows Win32k component that could potentially allow elevation of privileges. A report from a trusted partner identified a zero-day exploit for this vulnerability. The exploit targeted older versions of Windows and allowed attackers to elevate process privileges on these platforms.

In this article, we walk through the technical details of the exploit and assess the performance of tactical mitigations in Windows 10 Anniversary Update—released in August, 2016—as well as strategic mitigations like Supervisor Mode Execution Prevention (SMEP) and virtualization-based security (VBS). We also show how upcoming Creators Update enhancements to Windows Defender Advanced Threat Protection (Windows Defender ATP) can detect attacker elevation-of-privilege (EoP) activity, including EoP activities associated with the exploit.

Zero-day elevation-of-privilege exploit

Upon review of its code, we found that this zero-day EoP exploit targets computers running Windows 7 and Windows 8. The exploit has been created so that it avoids executing on newer platforms.

The exploit package unfolds in four stages:

 


Figure 1. Execution stages of the exploit package and corresponding functionality

 

Stages 1 and 2 – Decryptor and API resolver

To protect the main exploit code, attackers have encrypted the initial-stage PE file using the AES-256 algorithm. To load code for the next stage, a password must be passed as a parameter to the main entry function. This password is hashed using the CryptHashData API, and the resulting hash serves as the key that decrypts the loader for the next stage.
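As an illustration only (the attacker's actual code, password, and key-derivation parameters are not reproduced here), a minimal sketch of the CryptHashData/CryptDeriveKey pattern this stage relies on, decrypting a buffer with an AES-256 key derived from a password hash, might look like this in C:

```c
#include <windows.h>
#include <wincrypt.h>
#include <stdio.h>

#pragma comment(lib, "advapi32.lib")

// Decrypt a buffer in place with an AES-256 key derived from a password hash,
// mirroring the CryptHashData/CryptDeriveKey pattern described above.
// Returns TRUE on success; 'len' is updated to the plaintext length.
static BOOL DecryptWithPassword(BYTE *data, DWORD *len,
                                const BYTE *password, DWORD passwordLen)
{
    HCRYPTPROV prov = 0;
    HCRYPTHASH hash = 0;
    HCRYPTKEY  key  = 0;
    BOOL ok = FALSE;

    if (CryptAcquireContextW(&prov, NULL, NULL, PROV_RSA_AES, CRYPT_VERIFYCONTEXT) &&
        CryptCreateHash(prov, CALG_SHA_256, 0, 0, &hash) &&
        CryptHashData(hash, password, passwordLen, 0) &&
        CryptDeriveKey(prov, CALG_AES_256, hash, 0, &key))
    {
        // Final = TRUE: single-shot decryption of the whole buffer.
        ok = CryptDecrypt(key, 0, TRUE, 0, data, len);
    }

    if (key)  CryptDestroyKey(key);
    if (hash) CryptDestroyHash(hash);
    if (prov) CryptReleaseContext(prov, 0);
    return ok;
}

int main(void)
{
    BYTE buffer[32] = { 0 };                   // stand-in for an encrypted stage blob
    DWORD length = sizeof(buffer);
    const BYTE password[] = "not-the-real-password"; // hypothetical placeholder
    BOOL ok = DecryptWithPassword(buffer, &length, password, sizeof(password) - 1);
    printf("decryption %s\n", ok ? "succeeded" : "failed (expected for dummy data)");
    return 0;
}
```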

Stage 2 acts as an intermediate stage where API resolution is performed. API resolution routines in this stage resemble how shellcode or position-independent code works.

The following code shows part of the GetProcAddress API resolution routine. Resolving APIs dynamically in this way helps obfuscate the succeeding payload and hinder analysis.

 


Figure 2. Locating kernel32!GetProcAddress location using EAT traverse
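For context, export address table (EAT) traversal of the kind shown in Figure 2 is a well-documented, generic technique. The following is a minimal, benign sketch of how a resolver can walk a loaded module's export directory to locate GetProcAddress; it is illustrative only and is not the exploit's code:

```c
#include <windows.h>
#include <stdio.h>
#include <string.h>

// Resolve an export by name by walking a loaded module's export address table,
// the same structure a position-independent resolver parses.
// Note: forwarded exports are not handled in this sketch.
static FARPROC ResolveExport(HMODULE module, const char *name)
{
    BYTE *base = (BYTE *)module;
    IMAGE_DOS_HEADER *dos = (IMAGE_DOS_HEADER *)base;
    IMAGE_NT_HEADERS *nt  = (IMAGE_NT_HEADERS *)(base + dos->e_lfanew);
    IMAGE_DATA_DIRECTORY dir =
        nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_EXPORT];
    IMAGE_EXPORT_DIRECTORY *exp =
        (IMAGE_EXPORT_DIRECTORY *)(base + dir.VirtualAddress);

    DWORD *nameRvas = (DWORD *)(base + exp->AddressOfNames);
    WORD  *ordinals = (WORD  *)(base + exp->AddressOfNameOrdinals);
    DWORD *funcRvas = (DWORD *)(base + exp->AddressOfFunctions);

    for (DWORD i = 0; i < exp->NumberOfNames; i++) {
        const char *exportName = (const char *)(base + nameRvas[i]);
        if (strcmp(exportName, name) == 0)
            return (FARPROC)(base + funcRvas[ordinals[i]]);
    }
    return NULL;
}

int main(void)
{
    HMODULE k32 = GetModuleHandleW(L"kernel32.dll");
    FARPROC resolved = ResolveExport(k32, "GetProcAddress");
    printf("GetProcAddress via EAT walk: %p (API reports %p)\n",
           (void *)resolved, (void *)GetProcAddress(k32, "GetProcAddress"));
    return 0;
}
```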

 

Stage 3 – Avoiding newer platforms

In stage 3, the exploit package performs environmental checks, specifically to identify the operating system platform and version number. The attacker ensures that the exploit code runs on vulnerable systems that have fewer built-in mitigations, particularly Windows 7 and Windows 8 devices.

 


Figure 3. Code that performs environmental checks

 

Analysis of the exploit code reveals targeting of systems running specific versions of Windows:

  • Major release version 5
  • Major release version 6 and minor version 0, 1, or 2

These versions map to Windows operating systems between Windows 2000 and Windows 8, notably excluding Windows 8.1 and Windows 10. Also, upon examination of its architecture-checking routine, we find that the exploit code targets 64-bit systems.
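A minimal sketch of the targeting logic described above, an illustration of the platform and architecture checks rather than the exploit's own code, might look like this:

```c
#include <windows.h>
#include <stdio.h>

// Reproduce only the *targeting logic* described above: run on Windows
// versions 5.x and 6.0/6.1/6.2 (Windows 2000 through Windows 8) on 64-bit
// hardware, and bail out everywhere else.
static BOOL IsTargetedPlatform(void)
{
    OSVERSIONINFOW osvi = { sizeof(osvi) };
    SYSTEM_INFO si = { 0 };

    // Note: on Windows 8.1 and later, unmanifested applications are told the
    // version is 6.2, which older check routines like this one rely on.
#pragma warning(suppress : 4996)
    if (!GetVersionExW(&osvi))
        return FALSE;

    GetNativeSystemInfo(&si);
    if (si.wProcessorArchitecture != PROCESSOR_ARCHITECTURE_AMD64)
        return FALSE;                       // exploit targets 64-bit systems only

    if (osvi.dwMajorVersion == 5)
        return TRUE;
    if (osvi.dwMajorVersion == 6 && osvi.dwMinorVersion <= 2)
        return TRUE;

    return FALSE;                           // Windows 8.1 (6.3) and Windows 10 are skipped
}

int main(void)
{
    printf("Targeted platform: %s\n", IsTargetedPlatform() ? "yes" : "no");
    return 0;
}
```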

The next stage payload is loaded through DLL reflection.

 

Stage 4 – Exploit routine

After the environmental checks, the attacker code begins actual exploitation of the Windows kernel vulnerability CVE-2017-0005, resulting in arbitrary memory corruption and privileged code execution.

PALETTE.pfnGetNearestFromPalentry corruption

Code execution in kernel space is made possible by a corrupted function pointer in the PALETTE.pfnGetNearestFromPalentry field. Microsoft security researchers have been closely tracking this exploitation technique, which is designed to execute code in the kernel by way of a malformed PALETTE object. This relatively old exploitation technique, which we also observed in an unrelated sample used during the Duqu incident, is described in our Virus Bulletin 2015 presentation.

The following snippet shows the corrupted state of the PALETTE function pointer:

 


Figure 4. PALETTE.pfnGetNearestFromPalentry corruption

 

The exploit code calls the native API NtGdiEngBitBlt to trigger a win32k!XLATEOBJ_iXlate function call that uses the corrupted handler. This passes the control flow to previously allocated shellcode. As a comparison, the exploit code in the Duqu 2.0 case used a GetNearestPaletteIndex call from Gdi32.dll to pass execution to the corrupt callback handler. This difference clearly indicates that these two exploits are unrelated, despite similarities in their code—similarities that can be attributed to the fact that these exploitation techniques are well-documented.

The exploit uses dynamically constructed syscall code snippets to call native Windows APIs.

 


Figure 5. Dynamically constructed calls to kernel functions

 

During the execution of the shellcode, the call stack looks like the following:

 


Figure 6. Example of the call stack when passing control flow using the corrupted function handler

 

Once the shellcode is executed, the exploit uses a common token-swapping technique to obtain elevated SYSTEM privileges for the current process. This technique is often observed in similar EoP exploits.

 


Figure 7. Token-swapping shellcode

 

Mitigation and detection

As previously mentioned, this zero-day exploit does not target modern systems like Windows 10. If environmental checks in the exploit code are bypassed and it is forced to execute on such systems, our tests indicate that the exploit would be unable to completely execute, mitigated by additional layers of defenses. Let’s look at both the tactical mitigations—medium-term mitigations designed to break exploitation techniques—as well as the strategic mitigations—durable, long-term mitigations designed to eliminate entire classes of vulnerabilities—that stop the exploit.

Tactical mitigation – prevention of pfnGetNearestFromPalentry abuse

The use of PALETTE.pfnGetNearestFromPalentry as a control transfer point has been tracked by Microsoft security researchers for quite some time. In fact, this method is on the list of tactical mitigations we have been pursuing. In August 2016, with the Windows 10 Anniversary Update, Microsoft released a tactical mitigation designed to prevent the abuse of pfnGetNearestFromPalentry. The mitigation checks the validity of PALETTE function pointers when they are called, ensuring that only a predefined set of functions can be called and preventing any abuse of the structure.

Strategic mitigations

Other than the described tactical mitigation, this exploit could also be stopped in Windows 10 by SMEP, ASLR improvements in Windows kernel 64-bit, and virtualization-based security (VBS).

Supervisor Mode Execution Prevention (SMEP)

SMEP is a strategic mitigation feature supported by newer Intel CPUs and adopted since Windows 8.

With SMEP, bits in the page table entry (PTE) serve as User/Supervisor (U/S) flags that designate the page to be either in user mode or kernel mode. If a user-mode page is called from kernel-mode code, SMEP generates an access violation and the system triggers a bug check that halts code execution and reports a security violation. This mechanism broadly stops attempts at using user-mode allocated executable pages to run shellcode in kernel mode, a common method used by EoP exploits.

 


Figure 8. SMEP capturing exploit attempt

 

Strategic mitigations like SMEP can effectively raise the bar for a large pool of attackers by instantly rendering hundreds of EoP exploits ineffective, including old-school exploitation methods that call user-mode shellcode directly from the kernel, such as the zero-day exploit for CVE-2017-0005.

To check whether a computer supports SMEP, one can use the Coreinfo tool. The tool uses CPUID instructions to show which CPUs and platforms support the feature. The following screenshot shows that the tested CPU supports SMEP. SMEP is supported on Windows 8 and later.

 


Figure 9. Coreinfo shows whether SMEP is enabled
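For readers who want to query the hardware capability directly, the following minimal sketch uses the MSVC __cpuidex intrinsic to read CPUID leaf 7, sub-leaf 0, where SMEP support is reported in EBX bit 7. Note that this only shows CPU support; actual enforcement depends on the kernel setting CR4.SMEP, which user-mode code cannot observe:

```c
#include <intrin.h>
#include <stdio.h>

int main(void)
{
    int regs[4] = { 0 };   // EAX, EBX, ECX, EDX

    // Confirm the CPU exposes leaf 7 before querying it.
    __cpuid(regs, 0);
    if (regs[0] < 7) {
        printf("CPUID leaf 7 not supported; SMEP unavailable\n");
        return 0;
    }

    // CPUID.(EAX=07H, ECX=0):EBX bit 7 reports SMEP support.
    __cpuidex(regs, 7, 0);
    int smepSupported = (regs[1] >> 7) & 1;

    printf("SMEP supported by this CPU: %s\n", smepSupported ? "yes" : "no");
    return 0;
}
```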

 

Windows kernel 64-bit ASLR improvements

Although attackers are forced to work harder to create more sophisticated exploits with SMEP in place, we do know from studies shared at security conferences and from documented incidents that there are ways to potentially bypass SMEP mitigation. These bypass mechanisms include the use of kernel ROP gadgets or direct PTE modifications through read-write (RW) primitives. To respond to these foreseeable developments in exploitation techniques, Microsoft has provided Windows kernel 64-bit ASLR improvements with the Windows 10 Anniversary Update and has made SMEP stronger with randomized kernel addresses, mitigating a bypass vector resulting from direct PTE corruption.

 


Figure 10. Windows Kernel 64-bit ASLR improvements

 

Virtualization-based security (VBS)

Virtualization-based security (VBS) enhancements provide another layer of protection against attempts to execute malicious code in the kernel. For example, Device Guard blocks code execution in a non-signed area in kernel memory, including kernel EoP code. Enhancements in Device Guard also protect key MSRs, control registers, and descriptor table registers. Unauthorized modifications of the CR4 control register bitfields, including the SMEP field, are blocked instantly.

Windows Defender ATP detections

With the upcoming Creators Update release, Windows Defender ATP will be able to detect attempts at a SMEP bypass through CR4 register modifications. Windows Defender ATP will monitor the status of the CR4.SMEP bit and will report inconsistencies. In addition to this, Windows Defender ATP will detect token-swapping attempts by monitoring the state of the token field of a process structure.

The following screenshot shows Windows Defender ATP catching exploit code performing the token-swapping technique to elevate privileges.

 


Figure 11. Detection of token-swapping technique on Windows Defender ATP

 

Conclusion: resiliency with mitigation and behavioral detection

The zero-day exploit for CVE-2017-0005 shied away from newer systems because it would simply have been stopped there and would only have gained unnecessary exposure. Attackers are not so much focusing on legacy systems as they are avoiding the security enhancements present in modern hardware and current platforms like the Windows 10 Anniversary Update. While patches continue to provide single-point fixes for specific vulnerabilities, this attacker behavior highlights how built-in exploit mitigations like SMEP, the ASLR improvements, and virtualization-based security (VBS) are providing resiliency.

Windows Defender ATP with Creators Update—now available for public preview—extends defenses further by detecting exploit behavior on endpoints. With the upcoming enhancements, Windows Defender ATP could raise alerts so that SecOps personnel are immediately made aware of EoP activity and can respond accordingly. Read our previous post about uncovering cross-process injection to learn more about how Windows Defender ATP detects sophisticated breach activity.

In addition to strengthening generic detection of EoP exploits, Microsoft security researchers are actively gathering threat intelligence and indicators attributable to ZIRCONIUM, the activity group using the CVE-2017-0005 exploit. Comprehensive threat intelligence about activity groups and their attack methods is available to Windows Defender ATP customers.

Windows Defender ATP is built into the core of Windows 10 Enterprise and can be evaluated free of charge.

 

Matt Oh
Windows Defender ATP Research Team

 

New EMS + Skycure integration helps ensure devices are risk free before accessing corporate resources


Today we're thrilled to announce the general availability of our integration with Skycure, a leader in the mobile threat defense space. The integration between Skycure and Microsoft Enterprise Mobility + Security gives organizations more confidence that devices are risk-free and secure before users access corporate resources.

Mobile devices can be susceptible to sophisticated threats under the guise of seemingly harmless scenarios that end users execute on their devices. For example, connecting to a coffee shop Wi-Fi access point could open the user's device to a man-in-the-middle attack. Installing a seemingly harmless app could expose the user to malware that can exploit platform vulnerabilities or access the camera without their knowledge. Skycure's real-time mobile threat protection leverages a public app for guaranteed user privacy and simple maintenance, plus global crowd-sourced intelligence to ensure protection from zero-day threats. The solution is designed to proactively protect against all mobile threat vectors (malware, network-based risks, and OS and app vulnerability risks) to help you identify and remediate these risks before they become a problem.

This integration makes it easy for you to apply Skycure's threat detection as an additional input into Intune's device compliance settings, giving Intune dynamic control over access to corporate resources and data based on Skycure's real-time analysis. Once a threat is detected, Skycure immediately applies on-device protections and notifies Intune to enforce device status changes and conditional access controls to ensure that corporate data stays protected.

 


Skycure and Intune work together to make sure only low risk, compliant devices can access corporate resources.

 

Visit our documentation site for more details on how to deploy and use Skycure with Intune.

You can read more about how Skycure defends against mobile threats.


Note that any necessary licenses for Skycure products must be purchased separately from Intune and/or EMS licenses.

Coming Soon: Transport Advancements in the Windows 10 Creators update


Windows 10 Creators update is coming soon along with an exciting array of new transport features.  Stay tuned for the official announcement!

TypeScript’s New Release Cadence


One of the things we love about the TypeScript community is the enthusiasm around new features and rapid adoption of new TypeScript releases. Because of this, we have been focusing on increasing the velocity and consistency of TypeScript releases so that you can get your hands on the latest features even more quickly and predictably. This new release cadence has been mostly great, but there has been some confusion on when and how to get the latest TypeScript version. There have also been questions regarding availability of TypeScript in Visual Studio 2017. This blog post aims to clarify our intentions and our general plan for shipping TypeScript in the future, including shipping as a part of Visual Studio and Visual Studio Code.

New release cadence

With TypeScript 2.0 and earlier releases, we didn't keep to a predictable release schedule. New versions were ready when they were ready, and we kept working on a release until all the features we wanted had been merged in.

This was nice in that each TypeScript release felt large and impactful, but it came with the downside of small fixes getting gated behind significant changes, and seemingly random amounts of time between releases. Furthermore, the scope of impact wasn't just limited to TypeScript users. VS Code takes advantage of the TypeScript language service to power its JavaScript editing experience as well. This meant that any bug fixes for VS Code, for both TypeScript and JavaScript, would have to wait for a full release to be completed before shipping. Because VS Code ships every month, it sometimes took several VS Code releases before even minor bugs could be fixed. To both address the needs of VS Code and to get features out to the TypeScript community faster, we've moved to a monthly release cadence for TypeScript that mirrors the cadence adopted by Visual Studio Code.

Going forward, TypeScript releases will adhere to the following principles:

  • Release a full feature release every two months (e.g. 2.1, 2.2, etc.)
  • Publish at least one TypeScript update to npm every month (either patch or feature releases)
  • Features that aren’t complete by release time are deferred to the next release
  • TypeScript’s monthly release should come ~1 week before, and be included in, a VS Code release
  • Other editors can adopt the new TypeScript version during their next available update

The following diagram may help visualize the new cadence.

We believe that following these principles will benefit everyone in the TypeScript community.

TypeScript updates to Visual Studio

Supporting the latest TypeScript release in Visual Studio has been one of the goals that we've been working on as a team. However, when Visual Studio 2017 RTM released a few weeks back, we were not able to include the latest TypeScript version (v2.2) with it. Instead, Visual Studio 2017 currently comes with built-in support for TypeScript v2.1.5.

With the changing cadence of TypeScript releases, we could not fully align the TypeScript v2.2 release with Visual Studio 2017. Given some key changes that have gone into the new setup authoring process for Visual Studio 2017, we need to do additional work to ensure that TypeScript releases can be applied to Visual Studio 2017 at the faster cadence users are accustomed to in Visual Studio 2015 and VS Code.

We’ve heard feedback from several users about their need to move to TypeScript v2.2 in Visual Studio 2017 and understand the confusion and pain this has caused. As a team, we’re actively working on the problem and hope to have a fix available soon. We’ll keep the community updated as we make progress. Once implemented, developers will have full flexibility to update as soon as a new version of TypeScript is available. We apologize for the confusion and want to assure you that fixing this issue is a top priority.

Drop us a line

The great part about being an open source project is that we can make changes, get immediate feedback from the community, and quickly improve in the future. As such, we greatly appreciate your enthusiasm and would love to hear your feedback.

As always, you can find us on GitHub, Twitter, and Gitter.

Happy Birthday EMS:  How A Great Idea, Brilliant Architecture, and Customer Obsession Took Over EMM in 3 Short Years


Today marks the three-year anniversary of two big moments in Enterprise Mobility: On this date in 2014 we announced Microsoft Office on the iPad and the Enterprise Mobility Suite (EMS).

This announcement was made at the very first public event for our newly appointed CEO, Satya Nadella. If you missed the news that day the recap is here.

I have always thought it was interesting (and not a little bit significant) that our new CEO used his first public appearance to make these announcements; it really highlights the importance of both Office 365 and EMS to Microsoft and our customers.

The following three years have been amazing. The Office mobile apps are now being used on tens of millions of devices (with Outlook being the most highly rated e-mail app on iOS and Android), and EMS has grown to over 41,000 enterprise customers.

With the benefit of hindsight, and maybe a little nostalgia, I wanted to give everyone a look behind the curtain at the discussions we were having and the key decisions we were making in 2012 and 2013 as we planned and built EMS and created the integrated scenarios with Office 365.

This long and careful process brought together leaders from across Microsoft. Together we studied and projected what we anticipated would be the key trends and the needs of the IT community (and the organizations those IT teams supported). These strategy sessions were fascinating; they featured expertise in management, identity, security, productivity, collaboration, and more.

Here are some of the key things we identified, and decisions we made, beginning 5 years ago:

Managed Mobile Productivity

As we planned out the core scenarios for mobility in the Enterprise, we were fortunate to have a front row seat to the incredible interest that was building around Office 365. We also knew a deeply held secret at the time: The exhaustive engineering project aimed at bringing the Office apps to iOS and Android was already well underway. Building that software had initially met with some internal resistance, but, after the release of the iPad in 2010, we began to radically rethink what mobile computing looked like. We also recognized that more and more business would be conducted on mobile devices (with Office playing a huge role here), and that the first/primary use case that organizations wanted was to enable corporate e-mail on mobile devices.

At the outset, there were a couple significant challenges that required elegant solutions.

First, we needed to ensure the solution we built was both loved by users and trusted by IT. Second, because the Office mobile apps get used in both business and personal settings, we needed the ability to apply data loss prevention policy to corporate content while staying out of any personal content and data. In other words, we needed to be able to protect corporate documents and protect personal privacy. Third, we knew from our own experience that many users would not want IT taking over their personal devices, and this issue needed a solution.

As we built EMS in 2012/2013, the other EMM solutions in the market may have been loved by IT, but they were certainly not loved by users. These solutions had no concept of multi-user capabilities on iOS and Android, and the only way to apply policy to corporate apps and data was to fully manage the entire device. This was bad news all around.

To avoid these same serious flaws, we had to get extremely bold with our innovations in each of these areas (and many others, too). The result is what's currently available today: something that we believe is the premier solution for managed mobile productivity. The combination of Office 365 + EMS + Windows 10 enables what we call the Secure Productive Enterprise.

When I stop and look at how much progress was made so quickly, I am grateful we had so many perspectives and so much experience involved in the planning phase.

Identity Driven Security

While we were planning EMS, we had a firm understanding that compromised identities were the primary attack vector in the attacks we repeatedly saw devastating companies. We were fortunate to have Active Directory, the authoritative source of corporate identities, right in our backyard, and we were able to combine the astronomically large pool of learnings from running AD with what we'd learned from operating so many of the world's largest consumer services (Outlook.com, Messenger.com, Xbox Live, etc.). In other words, we had a ton of real-world experience when it came to something as foundationally important as protecting identities.

Another big boost to this planning/building process was that in mid/late 2012 we were just beginning to hit our stride with Azure and the rich services that Azure could offer around machine learning, data analytics, and artificial intelligence. We could see that the shift to the cloud was not only accelerating but that corporate identity (the cloud corporate identity) would become increasingly important, even fundamentally essential, in the mobile-first, cloud-first world. We could see that the traditional perimeter-based security model that had been relied upon for decades would not be effective in a world of cloud services. The world needed to add a new security model, and that model had to be based on identity.

By working with the Azure engineering organization, we were able to take all of that capacity/expertise/knowledge and build upon it to start innovating methods to identify suspicious patterns that indicate compromised identities within the massive amount of data/telemetry that comes back to Microsoft every second.

This process drew together a unique mix of experience and perspectives from across identity and cloud computing, combined with the massive amount of data Microsoft has available to use on behalf of our customers. The culmination of this is what you've heard me talk about often: identity-driven security.

Never before in my 25+ year career have I enjoyed something more than being a part of the genesis of this identity-based security model. This deep level of protection could not have been built anywhere else in the industry and it is awesome to see the incredible things Microsoft can do (and the combined effort we can summon) to make our customers more secure.

Delivered as Cloud Services

One of the most significant (and, looking back, one of the most profound) decisions we made while planning EMS was that these solutions would be delivered as cloud services. While these solutions would certainly connect with and extend our on-prem solutions (like Active Directory and ConfigMgr), the cloud would be required. This type of architecture was nearly unthinkable at the time, and it still doesn't exist anywhere else.

This idea wasn't completely out of left field, however; by 2012 we could see the beginnings of a move from on-premises Office to Office 365. We were also just beginning to see the move to the public cloud for compute and storage. By sharing ideas and learning from the Office and Azure teams' experiences, it became very clear that the move to the cloud was real and set to accelerate industry-wide. Our big takeaway during this period was that increased usage of the cloud would require services like Office 365 and EMS to be far more agile to keep up with the rate of change and the expected timetable of new capabilities. Finding a way to develop this agility took a lot of innovation and partnership with the Office team.

If EMS was going to thrive, it had to be a cloud service. But keep in mind that in late 2012 and early 2013 there were no customers asking for this. This was a carefully calculated risk on our part; we had confidence that this architecture would enable the kind of performance and functionality our customers would soon need. So we made a big bet.

The rest is history.

This story is one I've repeated quite a few times when meeting with senior IT leaders from around the world. The two most common questions I get are:

  • What has the leadership at Microsoft done to change the company so dramatically over the past three years?
  • How are the EMS teams able to innovate so quickly?

The answer to both questions is no mystery. I say the same thing every time: the cultural changes Satya has driven have been the biggest factor, but architecture has also played a huge role. The cloud services architecture enables the teams supporting these products to innovate and update their services constantly. The telemetry that comes back from these services enables us to learn what is working and what isn't within hours, allowing us to improve the products even more.

I remember, in the really early days of Intune, customers would ask if we delivered the Intune capabilities on-premises. Since we didn't (and wouldn't) have an on-premises solution, many customers decided they had to go another direction. I'll be honest, there were times in 2012 when I was nervous about whether we had made the right decision with our architecture. But, whenever we looked at other solutions, we just could not get comfortable with their client-server architecture. The client-server setup could never give us the scale and agility that we knew would be needed for what was coming over the next decade. Sooner or later, every other vendor is going to have to start the long process of rewriting their solutions as cloud services.

It has been an amazing experience to go back and talk with those same customers who evaluated the early version of Intune and hear their stories about migrating to Intune from another EMM product and subsequently reaping the rewards of a cloud-based service.

Comprehensive Capabilities

When we defined (and then later announced) EMS, the idea of bringing together identity, security, and management into a single, integrated solution was new. At the time, these were solutions that were thought of as separate and unique categories addressed by solitary, standalone products. For the first 18 months after we released EMS, one of our largest challenges was helping organizations step back and take a broader view of Enterprise Mobility rather than the traditional way of viewing them as siloed capabilities.

When EMS launched, most organizations had a wide variety of different solutions deployed: one vendor would be used for device management, another for identity management, yet another (if not more) for security, another still for enterprise file sync, etc. Each of the MDM providers had also expanded to MAM and had built their own apps for things like e-mail and document editing. Not only was this too many consoles to keep track of, but each solution was operating separately and sometimes at odds with the others. It did not take an overwhelming amount of explaining to show some of the more sophisticated organizations the value of an integrated solution. The work of making disparate solutions interoperate (often poorly) had been a full-time job; now that same full-time effort could focus on supporting the security and productivity of the business.

The reality is that there are integrations that must be engineered into these products from the ground up; they aren't things you can bolt on afterward. When I look at the work we have done to build EMS capabilities like data loss prevention, conditional access, and information protection into the Office apps and Office 365 services, it is amazing to consider how much painstaking engineering work and coordination went into getting the scenarios to work consistently and seamlessly.

The definition of best-of-breed has changed significantly in the industry now that organizations are more aware of the full spectrum of needs requiring integrated Enterprise Mobility solutions.

Loved by Users, Trusted by IT

I cannot tell you the number of times I've sat with leaders of major companies and gotten the same response when I ask how they like the work experience they have on their org's mobile devices. The most common feedback: a lot of head shaking, a long sigh, and "it ain't good, man."

When we were planning EMS 5 years ago, the feedback from these customer meetings made one thing really clear: IT was not proud of the solution that was being delivered. Even though that solution might have met the security needs of IT, it did not deliver the rich, empowering and simple experience their users wanted.

From the beginning, we have sought to deliver on the needs of both end-users and IT. Meeting these needs and expectations is really hard. One thing that we have learned is that you have to design and build for both these priorities. This is another area where the mix of perspectives that we brought together during the planning stages of EMS had incredible impact on the overall direction and focus we took once we began to build. The management, security and identity teams brought a deep understanding of the needs of IT Professionals and organizations, and the Office team brought a deep understanding of the needs of end-users. Both perspectives were critical for the overall solution and both perspectives have impacted the end-results of EMS and Office 365.


Happy birthday, EMS! I can’t wait to see what you look like at 4.
