
Debugging .NET Core on Unix over SSH


With the release of Visual Studio 2017 RC3 it is now possible to attach to .NET Core processes running on Linux over SSH. This blog post will explain how to set this up.

Machine Setup

On the Visual Studio computer, you need to install the ‘ASP.NET and web development’ workload in the 1/26/17 update for VS 2017 RC. If you previously installed Visual Studio 2017 RC, you can see if it is an RC3 release from Help->About.

On the Linux server, you need to install SSH server, unzip and either curl or wget. For example, on Ubuntu you can do that by running:

sudo apt-get install openssh-server unzip curl

Deploying the Application

In order to debug your application running on Linux, it will first need to be deployed there. One option for doing this is to copy sources to the target computer and build with ‘dotnet build’ on the Linux machine. Another option would be to build on Windows, and transfer the built artifacts (the application itself, any runtime libraries it might depend on and the .deps.json file).
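For example, a minimal build-and-copy flow might look like the following (the host name and paths are placeholders; use whichever SCP/SFTP client you prefer):

dotnet build -c Debug
scp -r bin/Debug/netcoreapp1.0/* user@linux-host:~/myapp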

For debugging, there are two important notes. First, it is much harder to debug retail-compiled code than debug-compiled code, so it is highly recommended to use the ‘Debug’ configuration. If you do need to use the ‘Release’ configuration, make sure to disable Tools->Options->Debugging->Just My Code. Second, for debugging on Linux, Portable PDBs must be enabled (which is the default), and the .pdb files will need to be next to the dll.
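For reference, here is a minimal csproj sketch that states the PDB format explicitly (portable is already the default for .NET Core projects, so this is shown only for illustration):

<PropertyGroup>
  <DebugType>portable</DebugType>
</PropertyGroup>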

Attaching the Debugger

Once the computers are configured and the application is up and running, we are ready to attach the debugger.

  1. Go to Debug->Attach to Process…
  2. In the ‘Connection Type’ drop down, select ‘SSH’
  3. Change the ‘Connection Target’ to the IP address or host name of the target computer
  4. Find the process that you would like to debug. By default, your code will run in a ‘dotnet’ process. You can look at the ‘Title’ column to see each process’s command-line arguments to find the process that you are interested in.
    [Screenshot: Attach to Process dialog showing a list of processes from a remote Linux machine over an SSH transport]
  5. Click ‘Attach’. You will get a dialog to select which type of code you would like to debug. Pick ‘Managed (.NET Core for Unix)’.
  6. Debug as you would expect

A picture showing Visual Studio stopped at a breakpoint hit in code running on a remote Linux machine.

If you have any feedback, we’d love to hear from you: send it from the VS Send Feedback feature or find us on Twitter at @VS_Debugger.

Gregg Miskelly
Principal Software Engineer


Splitting up Git administer permissions


Like everything in VSTS and TFS, Git repos are protected by a set of permissions. For instance, you must have Read for a repo to clone or view its contents. Likewise, you must have Contribute to push changes. Until recently, you needed one permission to create, delete, or rename a repo, edit branch policies, or change other people’s permissions: Administer.

We heard from several customers that Administer covered too many scenarios. For instance, at one customer, anyone can create new repos and rename any repo they created. Due to compliance regulations, no one can delete a repo they created (only a select group of people have that capability). At another customer, for company policy reasons, separate individuals control branch policies (the repo owner), adding & removing other people’s permissions (project administrators), and deleting repos (like the previous customer, restricted to only a handful of people).

With Administer covering all of these capabilities in one permission, both customers were unable to delegate some authority without delegating all authority. In practice, this meant that only a small set of people were responsible for all repo creation and management, creating a bottleneck for engineers every time they wanted a new repo.

You’ve probably guessed what I’m going to say next: we’ve split the Administer permission into 6 new permissions. Those new permissions are:

  1. Create repository
  2. Delete repository
  3. Rename repository
  4. Edit policies
  5. Manage permissions
  6. Remove others’ locks

We automatically migrate permissions on upgrade (meaning that if you previously had Administer on a repo or at the project level, you now have these 6 new permissions on that repo or project). When you create a new repo, we automatically grant Delete, Rename, Edit policies, Manage permissions, and Remove others’ locks on that repo, equivalent to the old behavior of granting Administer.

We also took this opportunity to clean up the names of a few existing permissions. Several of them were awkwardly phrased. “Force push” is a well-understood Git concept, so we promoted it to the front of its corresponding permission name. Mapping from old name to new, the renamed permissions are:

  • Branch creation → Create branch
  • Note management → Manage notes
  • Rewrite and destroy history (force push) → Force push (rewrite history and delete branches)
  • Tag management → Manage tags

These changes apply to VSTS effective with the M112 release and will come to TFS 2017 in Update 1. Small caveat: Until Update 1 releases, users who set permissions using the command-line tf.exe git permissions tool will not be able to grant or revoke the new permissions against a VSTS account. The workaround in the short term is to set permissions with TFSSecurity.exe or the admin tab in the web experience.

It’s now pretty easy to let anyone create a new repo in your project without giving them full administrative control. Grant the Contributors group “Create repository” and you’re all set.

New Year & New Updates to the Windows Data Science Virtual Machine


This post is authored by Gopi Kumar, Principal Program Manager in the Data Group at Microsoft.

First of all, a big thank you to all users of the Data Science Virtual Machine (DSVM) for your tremendous response to our offering in 2016. We’re looking forward to a similarly great year in 2017.

The new year also brings some interesting new tools to our DSVM users to help you be more productive with data science. In this post, we summarize key recent changes on the Windows Server side of our DSVM offering.

  1. Microsoft R Server 9.0.1 (MRS9) developer edition, a major update to the enterprise-scalable R extension from Microsoft, is now available on the VM. This version brings a lot of exciting changes, including several fast ML / deep learning algorithms developed by Microsoft in a new library called Microsoft ML. There’s a new architecture and interface for deploying R models and functions as web services; it follows a paradigm and interface library very similar to Azure ML operationalization. The library is called mrsdeploy. We have R deployment samples for notebooks as well as for R Tools for Visual Studio (RTVS) and RStudio. The olapR package in Microsoft R Server lets you run MDX queries and connect directly to OLAP cubes on SQL Server 2016 Analysis Services from your R solution. SQL Server 2016 Developer edition and the associated Microsoft R In-DB analytics have also been updated to Service Pack 1.
  2. R Studio Desktop open source edition is now preinstalled into the VM, by popular demand.
  3. R Tools for Visual Studio is now updated to version 0.5, bringing in multi-window plotting and SQL tooling to run R code on SQL Server 2016.
  4. Microsoft Cognitive Toolkit (formerly called CNTK) is now on Version 2 Beta 6, and features several improvements and sample notebooks to perform fast deep learning using Python interface or the CNTK Brainscript interface.
  5. Apache Drill, a SQL-based query tool that can work with various data sources and formats (e.g. JSON, CSV), was part of our previous update. We now prepackage and configure drivers to access various Azure data services such as Blobs, SQLDW/Azure SQL, HDI and Document DB. See this tutorial in our gallery for information on how to query data in various Azure data sources from within the Drill SQL query language.
  6. JuliaPro is available to DSVM users and is now pre-installed and pre-configured on the VM, thanks to Julia Computing (a company founded by the creators of Julia programming language). JuliaPro is a curated distribution of the open source Julia language along with a set of popular packages for scientific computing, data science, AI and optimization. The JuliaPro distribution comes with an Atom based IDE, Jupyter notebooks and several sample notebooks on the DSVM Jupyter instance to help you get started. Julia Computing also provides an Enterprise edition with commercial support.
  7. The Deep Learning Toolkit for the Windows DSVM is an extension to help you jump-start deep learning on Azure GPU VMs without having to spend time installing GPU framework dependencies and drivers or configuring the various deep learning tools. This extension has been updated to include the latest versions of CNTK 2 and mxNet for GPU, along with new samples. It also features the Windows version of TensorFlow.

We also offer a Linux Edition of the data science virtual machine and there will be a separate post on major updates there.

Meanwhile, here are some resources to get you started with the DSVM.

Windows Edition

Linux Edition

Webinar

I’d like to end this post with a graphical summary of the DSVM, showing a [non-exhaustive] list of the various tools that are preinstalled. DSVM helps you focus more on data science and spend less time on installing, configuring and administering tools, thereby making you more productive. Give DSVM a shot today and send us feedback on how we can make it even better for your data science needs.


Gopi

#AzureAD Mailbag: MFA Q&A, Round 7!


Hey y’all, Mark Morowczynski here with the second part of our two-part MFA mailbag. To read part 1, click here. Also, for those that haven’t been reading these mailbags since the beginning, you can read all 21 previous posts using the ‘mailbag’ tag. We are trying to make these Friday posts a regular thing, and next week will cover App Proxy. If there are topics you’d like to see us discuss, even some that might require a much deeper dive, let us know. Now on to the questions.

Question 6:

If you publish the on-prem MFA User Portal/MFA Server Mobile App Web Service with Azure AD Application Proxy, does this require a public cert? Can a private cert be used?

Answer 6:

Technically, you can use a self-signed cert for the MFA User Portal if you are willing to have users ignore the cert warnings/errors, but that isn’t recommended for an optimal end-user experience. The MFA Server Mobile App Web Service, on the other hand, does in fact require a public certificate. Otherwise, the Microsoft Authenticator app will not be able to connect to the web service successfully, preventing a successful activation.

 

Question 7:

Is there any equivalent feature in the Azure MFA Server for “Allow users to remember multi-factor authentication on devices they trust” that is available in Azure MFA?

Answer 7:

No, except when using IIS Authentication to secure IIS-based websites. In that case, a cookie can be set to only require MFA every X minutes, but it isn’t something the end user opts into by checking a box. The cookie is set on whichever browser the user signs in from. When using RADIUS or LDAP, MFA is performed with every verification request. That’s typically desired because the verifications are generally for remote access. When securing ADFS, ADFS has full control over when MFA is required and when it isn’t.

 

Question 8:

Can we use both Azure MFA Server to secure on-premises applications and Azure MFA for Office 365? How do/can they both work together?

Answer 8:

You can use Azure MFA Server to secure both on-premises applications and cloud applications that federate to ADFS, including O365 and other apps that federate to Azure AD. It is best not to use both Azure MFA Server and Azure MFA for the same set of users, though, because they would have to register and manage MFA enrollment data in both places. It makes sense to utilize Azure MFA for your cloud-based users and Azure MFA Server for your federated, synced users.

If you use both, it is best to control it with groups so that certain groups use on-prem MFA and everyone else uses cloud-based MFA. You’ll need to ensure that the SupportsMfa setting in the tenant DomainFederationSettings is set to False in this case. When AAD sends the user to ADFS for primary auth, ADFS will force users that are members of designated groups to perform MFA on-premises. So, ADFS will return the AuthMethodsReferences claim indicating that MFA was performed for those users, but not for the other users that aren’t members of those groups. Then Azure AD can perform cloud-based MFA for all of the other users. This design will apply to all auth flows on the relying party trust (e.g. all applications that use Azure AD as the IdP).
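As a sketch, checking and setting this with the MSOnline PowerShell module might look like the following (contoso.com is a placeholder for your federated domain):

# Inspect the current value of SupportsMfa for the federated domain
Get-MsolDomainFederationSettings -DomainName contoso.com | Select-Object SupportsMfa

# Set it to False so Azure AD performs cloud-based MFA for users
# who were not forced to do MFA on-premises by ADFS
Set-MsolDomainFederationSettings -DomainName contoso.com -SupportsMfa $false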

 

Question 9:

Is there a way for us to migrate users [from our Azure MFA Server] to Azure MFA so there is no action required from the user’s perspective?

Answer 9:

We don’t have a way to migrate users today from Azure MFA Server to cloud-based Azure MFA. We have heard this feedback previously, and it is something that we are discussing.

 

Question 10:

We currently use TMG to proxy the ADFS front end to determine whether the user is coming from external. If they are external, the user is directed to Azure MFA Server to perform MFA. Any issues with this strategy? We’d like to deprecate TMG over time, but not lose functionality.

Answer 10:

No issues with that approach. ADFS should be returning the InsideCorporateNetwork claim to Azure AD when users are inside the network, and thus not going through TMG or WAP. Azure AD can also use the InsideCorporateNetwork claim to determine whether you are on or off the network.

 

Question 11:

Can you/How do you secure on-prem OWA with MFA?

Answer 11:

To secure on-prem OWA (not rich clients), you have the following options:

  1. Publish OWA using Azure AD App Proxy. This allows the customer to either use cloud-based Azure MFA (https://azure.microsoft.com/en-us/documentation/articles/multi-factor-authentication-get-started-cloud/) or to use Azure MFA Server with ADFS (https://azure.microsoft.com/en-us/documentation/articles/multi-factor-authentication-get-started-adfs-w2k12/).
  2. Configure OWA for claims-based auth to ADFS. Use MFA Server to secure ADFS. This requires Exchange 2013 or higher.

If you are using a reverse proxy such as F5 in front of OWA that can pre-authenticate via RADIUS or LDAP, you can point the RADIUS or LDAP authentication to MFA Server.

 

Thanks for reading. Check back next week for more mailbag goodness.

For any questions you can reach us at
AskAzureADBlog@microsoft.com, the Microsoft Forums and on Twitter @AzureAD, @MarkMorow and @Alex_A_Simons

 

Chad Hasbrook, Mark Morowczynski, Shawn Bishop, Todd Gugler

No more “out of memory” errors for Windows Phone emulators in Windows 10 (unless you’re really out of memory)


For those of you who run emulators in Visual Studio, you may be familiar with an annoying error:

[Image: the emulator ‘out of memory’ error dialog]

It periodically pops up even when Task Manager reports enough available memory – this is especially true for machines with less than 8 GB of RAM.  Most of the time, it’s because there genuinely isn’t enough memory available, but sometimes it’s because of Hyper-V’s root memory reserve (discussed in KB2911380).

This blog will tell you what the root memory reserve is, why it exists, and why you shouldn’t need it on Windows 10 starting in build 15002 (original announcement here).  I also wrote a mini script to clear the registry key that controls root memory reserve if you think it may be set on your system.

So, what is the root memory reserve and why is it there?

Root memory reserve is the memory Hyper-V sets aside to make sure there will always be enough available for the host to run well.

We change Hyper-V host memory management periodically based on feedback and new technology (things like dynamic memory and changes in clustering).  The root memory reserve is only one piece of that equation, and even calculating that piece has several factors.  Modifying it is not supported, but there is still a registry key available for times when the default isn’t appropriate for one reason or another.

KB2962295 basically describes measuring, monitoring, and modifying the root reserve.

KB2911380 tells you how to manually set it.

And now I’m here to tell you to remove it!

Why you shouldn’t need root memory reserve and how to clear it.

We stopped using a root memory reserve in favor of other memory management tools in Windows 10.  The things that make it necessary are unique to server environments (clustering, service level agreements…).

However, while the default memory management settings on server are now different from Hyper-V on Windows, if root reserve is set on Windows 10, Hyper-V will respect it. If MemoryReserve is set, you won’t see any of the memory management changes we made – which is why now is the time to clear that custom root memory reserve.

My helper script in PowerShell does this cleanup for you: it auto-elevates, tells you if the MemoryReserve key is set, and reports what the value was before clearing it.
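As a sketch, the core of what the script does looks like this, assuming the MemoryReserve value lives where KB2911380 documents it (run from an elevated PowerShell prompt):

$path = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization'
$reserve = Get-ItemProperty -Path $path -Name MemoryReserve -ErrorAction SilentlyContinue
if ($reserve) {
    # Report the old value, then clear it so the Windows 10 defaults take over
    Write-Host "MemoryReserve was set to $($reserve.MemoryReserve); clearing it."
    Remove-ItemProperty -Path $path -Name MemoryReserve
} else {
    Write-Host 'MemoryReserve is not set; nothing to clear.'
}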

 

Cheers,
Sarah

Introducing VMConnect dynamic resize


Starting in the latest Insider’s build, you can resize the display for a session in Virtual Machine Connection just by dragging the corner of the window.

[Animation: dynamically resizing the Virtual Machine Connection window]

When you connect to a VM, you’ll still see the normal options which determine the size of the window and the resolution to pass to the virtual machine:

[Screenshot: the classic VMConnect display configuration options]

Once you log in, you can see that the guest OS is using the specified resolution, in this case 1366 x 768.

[Screenshot: guest OS running at the specified 1366 x 768 resolution]

Now, if we resize the window, the resolution in the guest OS is automatically adjusted. Neat!

[Animation: guest resolution adjusting automatically as the window is resized]

Additionally, the system DPI settings are passed to the VM. If I change my scaling factor on the host, the VM display will scale as well.

There are two requirements for dynamic resizing to work:

  • You must be running in Enhanced session mode
  • You must be fully logged in to the guest OS (it won’t work on the lock screen)

 

This remains a work in progress, so we would love to hear your thoughts.

-Andy

 

 

 

 

`yield` keyword to become `co_yield` in VS 2017


Coroutines—formerly known as “C++ resumable functions”—are one of the Technical Specifications (TS) that we have implemented in the Visual C++ compiler. We’ve supported coroutines for three years—ever since the VC++ November 2013 CTP release.

If you’re using coroutines, you should be aware that the keyword `yield` is being removed in the release of VS 2017. If you use `yield` in your code, you will have to change it to use the new keyword `co_yield` instead. If you have generators that use `yield expr`, these need to be changed to say `co_yield expr`.

As long as you’re changing your code, you might want to migrate from using `await` to `co_await` and from `return` in a coroutine to `co_return`. The Visual C++ compiler accepts all three new keywords today.
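As an illustration, here is a minimal generator migrated to the new keyword, sketched against the experimental generator type that ships with VC++ (compile with the /await switch; details may vary with your toolset):

#include <experimental/generator>

// Lazily yields 0, 1, ..., n-1; this body previously would have used `yield i;`
std::experimental::generator<int> counter(int n) {
    for (int i = 0; i < n; ++i)
        co_yield i;
}

int main() {
    for (int value : counter(5)) {
        // consume each value as it is produced
    }
}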

For more information about coroutines, please see the Coroutines TS here: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/n4628.pdf. As the author of the Coroutines TS works on the Visual C++ team, you can also just send us mail with your questions or feedback (see below).

Why are we making this change?

As a Technical Specification, coroutines have not yet been adopted into the C++ Standard. When the Visual C++ team implemented them in 2013, the feature was implemented as a preview of an up-and-coming C++ feature. The C++ standards committee voted in October of 2015 to change the keywords to include the prefix `co_`. The committee didn’t want to use keywords that would conflict with variable names already in use. `yield`, for example, is used widely in agricultural and financial applications. Also, there are library uses of functions called `yield` in the Ranges TS and in the thread support library.

For reference, here are the keyword mappings that need to be applied to your code:

  • Instead of `await`, use `co_await`
  • Instead of `return`, use `co_return`
  • Instead of `yield`, use `co_yield`

We’re removing the `yield` keyword with VS 2017 because we’re also implementing the Ranges TS, and we expect many developers to call `yield` after a using declaration for ranges, e.g., `using namespace ::ranges`.

Preventing these breaks in the future

We know many of you have taken dependencies on coroutines in your code and understand that this kind of breaking change is difficult. We can’t keep the committee from making changes (trust us, we try!) but at least we can do our best to make sure that you’re not surprised when things do change.

We created a new compiler switch, `/experimental`, when we implemented the Modules TS in VS 2015 Update 1. You need to include `/experimental:module` on your command line so that it is clear the feature is experimental and subject to change. If we could go back in time we would have had coroutines enabled with `/experimental:await` instead of just `/await` (or `/experimental:coroutine` if we’d known what the feature would be called three years later!)

In a future release we will deprecate the `await` keyword as well as restrict the use of `return` from coroutines in favor of the new keywords `co_await` and `co_return`.

In closing

As always, we welcome your feedback. Please give us feedback about coroutines in the comments below or through e-mail at visualcpp@microsoft.com.

If you encounter other problems with Visual C++ in VS 2017 please let us know via the Report a Problem option, either from the installer or the Visual Studio IDE itself. For suggestions, let us know through UserVoice. Thank you!

Update to Visual Studio 2017 Release Candidate


Today we have another update to Visual Studio 2017 Release Candidate. Some of you may have noticed that yesterday we posted an RC update, but took it down because of a setup issue. The issue is now fixed so please give it a try. To try out the newest version, you can either click on the link above or click on the notification within Visual Studio.

Take a look at the Visual Studio 2017 Release Notes and Known Issues for the full list of what’s available in this update, but here’s a summary:

  • The .NET Core and ASP.NET Core workload is no longer in preview. We have fixed several bugs and improved usability of .NET Core and ASP.NET Core Tooling.
  • Team Explorer connect experience is now improved to make it easier to find the projects and repos to which you want to connect.
  • The Advanced Save option is back due to popular demand.
  • Multiple installation-related issues are now fixed in this update, including hangs. We’ve also added a retry button when installation fails, disambiguated Visual Studio installs in the Start menu, and added support for creating a layout for offline install.

Apart from these improvements, you’ll notice that we’ve removed the Data Science and Python Development workloads. As we’ve been closing in on the VS release, some of the components weren’t going to meet all the release requirements, such as translation to non-English languages. They’ll reappear soon as separate downloads. F# is still available in the .NET Desktop and .NET Web development workloads.

Please try this latest update and share your feedback. For problems, let us know via the Report a Problem option in the upper right corner of the VS title bar. Track your feedback on the developer community portal. For suggestions, let us know through UserVoice.

John Montgomery, Director of Program Management for Visual Studio

@JohnMont is responsible for product design and customer success for all of Visual Studio, C++, C#, VB, .NET and JavaScript. John has been at Microsoft for 18 years working in developer technologies.


OMS Agent setup communication error ID 4004


We have recently seen a customer attempt to add a server to an OMS Log Analytics workspace, but the attempt failed and produced the following error in the Operations Manager event log on that server.

[Image: the error in the Operations Manager event log]

In the TracingGUIDSNative.log ETW log files, we saw these entries:

[ServiceConnector] [] [Information] :CBackgroundRequester::OnTimerCallback{backgroundrequester_cpp657}Beginning background request for URL “https://a12b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d.oms.opinsights.azure.com/AgentService.svc/AgentTopologyRequest”.

[HealthServiceCommon] [] [Verbose] :CHttpClientBase::CHttpRequest::SetProxy{httpclientbase_cpp1924}Using system default proxy for request to “https://a12b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d.oms.opinsights.azure.com/AgentService.svc/AgentTopologyRequest”.

[HealthServiceCommon] [] [Error] :CHttpClientBase::CHttpRequest::AddHeader{httpclientbase_cpp1452}WinHttpAddRequestHeaders(x-ms-OmsCloudId: ) for URL “https://a12b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d.oms.opinsights.azure.com/AgentService.svc/AgentTopologyRequest” failed with code WINERROR=    2F76

[HealthServiceCommon] [] [Error] :CHttpClientBase::CHttpRequest::AddCloudHeaders{httpclientbase_cpp1378}AddHeader name:(x-ms-OmsCloudId: ) value:() failed with code WINERROR=    2F76.

281 [10]11044.32756::01/17/2017-10:19:40.995 [HealthServiceCommon] [] [Error] :CHttpClientBase::CHttpRequest::BeginHttpRequest{httpclientbase_cpp883}AddCloudHeaders for URL “https://a12b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d.oms.opinsights.azure.com/AgentService.svc/AgentTopologyRequest” failed with code WINERROR=    2F76

[HealthServiceCommon] [] [Error] :CHttpClientBase::BeginHttpRequest{httpclientbase_cpp234}BeginHttpRequest for URL “https://a12b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d.oms.opinsights.azure.com/AgentService.svc/AgentTopologyRequest” failed with code WINERROR=    2F76.

[HealthServiceCommon] [] [Error] :CHttpClientBase::BeginHttpRequest{httpclientbase_cpp378}BeginHttpRequest to URL “https://a12b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d.oms.opinsights.azure.com/AgentService.svc/AgentTopologyRequest” failed with code WINERROR=    2F76

[ServiceConnector] [] [Error] :CHttpClient::BeginRestHttpRequest{httpclient_cpp310}BeginHttpRequest to URL “https://a12b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d.oms.opinsights.azure.com/AgentService.svc/AgentTopologyRequest” failed with code WINERROR=    2F76

[ServiceConnector] [] [Error] :CBackgroundRestRequester::BeginRequest{backgroundrequester_cpp729}BeginRestHttpRequest for URL “https://a12b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d.oms.opinsights.azure.com/AgentService.svc/AgentTopologyRequest” failed with code WINERROR=    2F76.

[ServiceConnector] [] [Error] :CBackgroundRequester::OnTimerCallback{backgroundrequester_cpp673}Background request for URL “https://a12b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d.oms.opinsights.azure.com/AgentService.svc/AgentTopologyRequest” failed with code WINERROR=    2F76, will try again next interval.

Our investigation found that this could happen on systems that had an empty string for their SMBIOSAssetTag. We are going to adapt our agent code to handle this situation. In the meantime, to correct this issue, please install an earlier version of the agent from here (http://go.microsoft.com/fwlink/?LinkID=517476).
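To check whether a system might be affected, you can inspect the asset tag with a quick WMI query in PowerShell, for example:

# An empty or missing SMBIOSAssetTag indicates a system that can hit this error
Get-WmiObject -Class Win32_SystemEnclosure | Select-Object SMBIOSAssetTag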

Averting ransomware epidemics in corporate networks with Windows Defender ATP


Microsoft security researchers continue to observe ransomware campaigns blanketing the market and indiscriminately hitting potential targets. Unsurprisingly, these campaigns also continue to use email and the web as primary delivery mechanisms. Also, it appears that most corporate victims are simply caught by the wide nets cast by ransomware operators. Unlike cyberespionage groups, ransomware operators do not typically employ special tactics to target particular organizations.

Although indiscriminate ransomware attacks are very much like commodity malware infections, the significant cost that can result from a broad ransomware attack justifies consideration of a layered, defense-in-depth strategy that covers protection, detection, and response. As attacks reach the post-breach or post-infection layer—when endpoint antimalware fails to stop a ransomware infection—enterprises can benefit from post-breach detection solutions that provide comprehensive artifact information and the ability to quickly pivot investigations using these artifacts.

Our research into prevalent ransomware families reveals that delivery campaigns can typically stretch for days or even weeks, all the while employing similar files and techniques. As long as enterprises can quickly investigate the first cases of infection or ‘patient zero’, they can often effectively stop ransomware epidemics. With Windows Defender Advanced Threat Protection (Windows Defender ATP), enterprises can quickly identify and investigate these initial cases, and then use captured artifact information to proactively protect the broader network.

In this blog, we take a look at an actual Cerber ransomware infection delivered to an enterprise endpoint by a campaign that ran in late November 2016. We look at how Windows Defender ATP, in the absence of endpoint antimalware detections, can flag initial infection activity and help enterprises stop subsequent attempts to infect other devices.

Detecting Cerber ransomware behavior

In an earlier blog post, we described how the Cerber ransomware family has been extremely active during the recent holiday season. It continues to be one of the most prevalent ransomware families affecting enterprises, as shown in Figure 1. Not only are there similarities between members of this well-distributed ransomware family, but certain Cerber behaviors are also common malware behaviors. Detecting these behaviors can help stop even newly distributed threats.


Figure 1. Ransomware encounters on enterprise endpoints

 

A real case of Cerber meeting Windows Defender ATP

The Cerber ransomware infection started with a document downloaded into the Downloads folder through a webmail client. A user opened the document and triggered an embedded macro, which in turn launched a PowerShell command that downloaded another component carrying the ransomware payload. As shown below, the PowerShell command was detected by Windows Defender ATP.


Figure 2. PowerShell command detection

 

Windows Defender ATP also generated an alert when the PowerShell script connected to a TOR anonymization website through a public proxy to download an executable. Security operations center (SOC) personnel could use such alerts to get the source IP and block this IP address at the firewall, preventing other machines from downloading the executable. In this case, the downloaded executable was the ransomware payload.


Figure 3. Alert for the TOR website connection showing the source IP address

 

After the payload was downloaded into the Temp directory, it was then executed by a parent cmd.exe process. The payload created a copy of itself in the Users folder and then launched that copy. Machine learning algorithms in Windows Defender ATP were able to detect this self-launching behavior.


Figure 4. Ransomware launching copy of itself as detected on Windows Defender ATP

 

Just prior to encrypting files, the Cerber ransomware tried to prevent future attempts at file recovery by deleting system restore points and all available volume shadow copies—these are used by Windows System Restore and Windows Backup and Restore during recovery. This hostile behavior was also detected by Windows Defender ATP.


Figure 5. Deletion of volume shadow copies
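For reference, ransomware families like Cerber typically delete shadow copies using built-in Windows tooling; a representative command line (illustrative, not an exact capture from this incident) is:

vssadmin.exe Delete Shadows /All /Quiet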

 

Breadth and depth of alerts enable easy scoping and containment

Windows Defender ATP generated at least four alerts during the infection process, providing a breadth of detections that covers changing techniques between Cerber versions, samples, and infection instances. To build up the mechanisms behind these alerts, Microsoft security researchers comb through ransomware families and identify common behaviors. Their research supports machine learning models and behavioral detection algorithms that detect ransomware at different stages of the kill chain, from delivery (by email or using exploit kits) up to the point when victims make ransom payments.


Figure 6. Alerts that correspond to different kill stages

 

Each alert provides additional context about the attack. In turn, SOC personnel can use this contextual information to pivot an investigation and get insights from endpoints across the organization. Using the provided file and network activity information, pivoting investigations in the Windows Defender ATP console can provide conclusive leads, even when no actual ransomware payload is detonated.

To investigate our Cerber case, we use the name of the payload file hjtudhb67.exe, which is clearly unusual and not likely used by legitimate executables. A quick search on the Windows Defender ATP console yields 23 other files with the same name. The files were suspiciously created in a span of approximately 10 days and scattered across endpoints in the organization. (Note that although most of these files are artifacts from the actual infection, some are possibly remnants of tests by SOC personnel who responded to the alerts.)


Figure 7. Instances of file with the same unusual name as the ransomware

 

We pivot to the source IP that hosted the payload file and perform a search to reveal that 10 machines connected to this IP address. Blocking this source IP on the corporate firewall on the day of the first infection could have helped prevent the Cerber ransomware payload file from reaching other machines.

Conclusion: Defense-in-depth with Windows Defender ATP

We have seen how Windows Defender ATP provides enterprise SOC personnel with a powerful view of events and behaviors associated with a ransomware infection, from the time of initial delivery and throughout the installation process. Enterprise SOC personnel are able to understand how ransomware has reached an endpoint, assess the extent of the damage, and identify artifacts that can be used to prevent further damage. These capabilities are made possible by cloud analytics that continuously search for and flag signs of hostile activity, including signs that could have been missed in other defensive layers.

Upcoming enhancements to Windows Defender ATP with the Windows 10 Creators Update will take its capabilities one step further by enabling network isolation of compromised machines. The update will also provide an option to quarantine and prevent subsequent execution of files.

Windows Defender ATP is built into the core of Windows 10 Enterprise and can be evaluated free of charge.

Windows 10 security against Cerber ransomware

Windows 10 is built with security technologies that can help detect the latest batch of Cerber ransomware.

  • Windows Defender detects Cerber ransomware as Win32/Cerber. It also detects files that assist in the distribution of the payload file using email and exploit kits. Malicious email attachments are detected as TrojanDownloader:O97M/Donoff, and the RIG exploit kit is detected as Exploit:HTML/Meadgive.
  • For security on the web, Microsoft Edge browser can help prevent exploit kits from running and executing ransomware on computers. SmartScreen Filter uses URL reputation to block access to malicious sites, such as those hosting exploit kits.
  • Device Guard protects systems from malicious applications like ransomware by maintaining a custom catalog of known good applications and stopping even kernel-level malware with virtualization-based security.
  • AppLocker group policy also prevents dubious software from running.

Office and Office 365 security against Cerber ransomware

Office 365 Advanced Threat Protection blocks emails that spread malicious documents that could eventually install Cerber. IT administrators can use Group Policy in Office 2016 to prevent malicious macros inside documents from running, such as the documents in password-protected attachments commonly used in Cerber campaigns.

 

Tommy Blizard

Windows Defender ATP Research Team

SQL Database Query Editor available in Azure Portal


We are excited to announce the availability of an in-browser query tool that provides an efficient way to execute queries on your Azure SQL Databases and SQL Data Warehouses without leaving the Azure Portal. This SQL Database Query Editor is now in public preview in the Azure Portal.

With this editor, you can access and query your database without needing to connect from a client tool or configure firewall rules.

The various features in this new editor create a seamless experience for querying your database.

Query Editor capabilities

Connect to your database

Before executing queries against your database, you must log in with either your SQL Server or Azure Active Directory (AAD) credentials. If you are the AAD admin for this SQL server, you will be automatically logged in when you first open the Query Editor, using AAD single sign-on.

Learn more about how to configure your AAD server admin. If you are not currently taking advantage of Azure Active Directory, you can learn more.

Write and execute T-SQL scripts

If you are already familiar with writing queries in SSMS, you will feel right at home in the in-browser Query Editor.

Many common queries can be run in this editor, such as create new table, display table data, edit table data, create a stored procedure, or drop table. You have the flexibility to execute partial queries or batch queries in this editor. And by utilizing syntax highlighting and error indicating, this editor makes writing scripts a breeze.

Additionally, you can easily load an existing query file into the Query Editor or save your current script to your local machine. This makes it convenient to save and port queries between editors.

Manage query results

Another similarity between this Query Editor and SSMS is the ability to resize the Results pane to get the desired ratio between the Editor and Results sections. You can also filter results by keyword rather than having to scroll through all the output.

How to find Query Editor

SQL Database

You can find this experience by navigating to your SQL database and clicking the Tools command and then clicking Query Editor (preview), as shown in the screenshots below. While this feature is in public preview, you will need to accept the preview terms before using the editor.

[Screenshot: finding the Query Editor from the SQL database Tools command]

[Screenshot: the SQL Database Query Editor]

SQL Data Warehouse

You can find this experience by navigating to your SQL data warehouse and clicking on Query Editor (preview), shown in the screenshot below. While this feature is in public preview, you will need to accept the preview terms before using the editor.

[Screenshot: finding the Query Editor on a SQL data warehouse]

Run sample query

You can quickly test out the editor by running a simple query, such as in the screenshot below.

[Screenshot: a sample query and its results]
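For instance, if your database has the AdventureWorksLT sample schema, a quick test might look like the following (substitute your own table and column names otherwise):

SELECT TOP 10 ProductID, Name, ListPrice
FROM SalesLT.Product
ORDER BY ListPrice DESC;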

Send us feedback!

Please reach out to us with feedback at sqlqueryfeedback@microsoft.com.

Microsoft Teams gains steam


As Satya Nadella discussed in last Thursday’s quarterly earnings call, Microsoft Teams has picked up momentum since its launch last November. In the last month alone, 30,000 organizations across 145 markets and 19 languages have actively used Microsoft Teams.

For companies that haven’t tried it yet, Microsoft Teams is a new chat-based workspace in Office 365 that brings together people, conversations and content to facilitate collaboration. IT administrators interested in the preview can light up Microsoft Teams in the Office 365 Admin Portal by clicking here.

We’re especially inspired by this early usage. Not only does it show that the product fills a real market need, but it gives us a ton of information to help shape the product leading up to General Availability, which is still on track for this current quarter, Q1 2017. Our customers have been a great guide as we’ve delivered numerous features into the product even since the preview launched—including built-in audio calling on mobile and named group chats, an easy way to keep track of the context of a conversation.

Customers are telling us Microsoft Teams is compelling by itself, but as an integrated part of Office 365, it can be a massive game-changer. Megan Horn, a process engineer at Hendrick Motorsports and a preview customer, reported rising usage internally and said, “Microsoft Teams has reduced the number of face-to-face coordination meetings that are typically required to do my job. I love using Power BI for visual reporting right within the Microsoft Teams experience.”

Another Microsoft Teams customer and partner demonstrates how Microsoft Teams has the potential to overtake the competition. Rodney Guzman, CEO and co-founder of InterKnowlogy, said, “We use a lot of Microsoft products, and are using Office 365. All the centralized management of people and groups we already do in Office 365 was immediately inherited in Microsoft Teams. This was always a hassle in Slack. Since switching to Microsoft Teams from Slack, we haven’t looked back.”

Great collaboration tools don’t need to come at the cost of poor security or a lack of information compliance. In the coming weeks, we will release new compliance and reporting capabilities into Microsoft Teams to help ensure employees can communicate and collaborate from anywhere, while keeping sensitive corporate information secure—all built on the Office 365 global, hyper-scale enterprise cloud.

Additionally, customers have responded positively to Microsoft Teams’ support for intelligent bots. And at General Availability, we’ll deliver WhoBot, which uses natural language processing to help learn about experts in your organization. Powered by LUIS.ai, WhoBot will answer questions like, “Who in our group knows about the Australia sales numbers?” This is just one example of what will be possible with the power of Microsoft Teams utilizing the Microsoft Graph.

Our commitment is to empower every organization and every team to achieve more. Office 365 and its 85-million monthly active users are a cornerstone of this commitment, and Microsoft Teams adds another key component to an already potent lineup. We remain eager and on track for Microsoft Teams General Availability later this quarter.

Thanks to everyone who has tried Microsoft Teams for all the great feedback and support.

—Kirk Koenigsbauer


Windows Server 2016 sweepstakes


Calling all Windows Server users! Whether you’ve already upgraded to Windows Server 2016 or you’re still on Windows Server 2012 and want to try the newest version, we want to hear from you. Tell us about your experience with Windows Server 2016 and you’ll get the chance to win a Microsoft Surface Pro 4.

Still on Windows Server 2012? That’s okay – we have a free virtual lab so you can give it a test drive. Then just write a review on Spiceworks, register for the sweepstakes, and you’re entered to win.

For those who don’t know, Windows Server 2016 is the cloud-ready operating system built to support your current workloads and allow you to transition to the cloud. Azure is an open, flexible, enterprise-grade cloud computing platform, and Windows Server 2016 delivers new layers of security and Azure-inspired innovation for the applications and infrastructure that power your business.

Try it out and let us know what you think.

Try Microsoft Windows Server 2016 Virtual Lab

Here’s how to enter:

  • Step 1: Review Microsoft Windows Server 2016 in Spiceworks
    • IMPORTANT: All entries must include, “This review is part of a Microsoft sweepstakes.”
  • Step 2: Register to complete your entry

Prizes:

  • First 100 SpiceHeads to review Windows Server 2016 will win a limited edition Nano-Man t-shirt
  • Weekly drawing for a life-size Nano-Man cutout
  • Grand Prize: Microsoft Surface Pro 4 with 256GB hard drive, Intel core i7 processor, 16GB RAM, type cover, and Office 365 Personal 1 year subscription

No purchase necessary. Open only to Spiceworks community members who are legal residents of the 50 U.S. and DC 18+. Game ends 2/24/17 at 5pm CT. Click here for official rules.


Cyber intelligence—help prevent a breach next on Modern Workplace


According to Forbes, investment in cybersecurity is expected to more than double from $75 billion in 2015 to $170 billion by 2020, but how can you be sure your organization is creating the right framework to help safeguard against a security breach?

Join us for the first episode in a two-part Modern Workplace special security series, “Cyber intelligence—help prevent a breach,” airing February 14, 2017 at 8 a.m. PST / 4 p.m. GMT. In the first episode, we explore the world of a chief information security officer (CISO) and show you how to help keep your organization more secure from potential security breaches.

  • CISO for F5 Networks, Mike Convertino, shares how asking the right questions could be key to revealing your organization’s biggest threats.
  • CISO for DocuSign, Vanessa Pegueros, explains how creating an internal taskforce could be one of the most effective ways to help protect your organization.
  • Plus, learn how Office 365 Threat Intelligence can help keep your organization secure.

Register now!



Announcing .NET Core, .NET Native and NuGet Updates in VS 2017 RC


We just released updates to the .NET Core SDK, .NET Native Tools and NuGet, all of which are included in Visual Studio 2017 RC. You can also install the .NET Core SDK for command-line use, on Windows, Mac and Linux. Please check out the ASP.NET blog to learn more about Web Tools updates and the Visual Studio blog for the Visual Studio 2017 RC update.

The following improvements are the .NET highlights of the release:

  • .NET Core – The csproj project format has been simplified and project migration is much more reliable.
  • .NET Native – Major performance increase for SIMD and other performance improvements.
  • NuGet – .NET Framework, UWP, .NET Core and other .NET projects can now use PackageReference instead of packages.config for NuGet dependencies.

Note: csproj projects created with earlier releases of Visual Studio 2017 or on the command-line with the .NET Core SDK must be manually updated to load in the latest release. Please read the Updating .NET Core Projects section, below, for more information.

.NET Core

The .NET Core tools have been significantly improved in this update, fixing bugs and usability issues. A major focus of this release has been simplifying the project file and improving project migration. We are no longer using the “preview” label; the tools are now labeled “RC3” to match Visual Studio 2017 RC.

The tools in the new SDK produce and operate on csproj projects. If you are using project.json projects, including with Visual Studio 2015, then please wait to install the new SDK until you are ready to move to csproj and MSBuild. If you are a Visual Studio user, you will get the SDK when you install Visual Studio 2017. See Known issues for Web Tools, and ASP.NET and ASP.NET/.NET Core in Visual Studio 2017.

Getting the Release

This .NET Core SDK release is available in Visual Studio 2017 RC, as part of the .NET Core cross-platform development workload. It is also available in the ASP.NET Web workload and as an optional component of the .NET Desktop workload. These workloads can be selected as part of the Visual Studio 2017 RC installation process. The ability to build and consume .NET Standard class libraries is available in all of the above workloads and in the UWP workload.

You can also install the .NET Core SDK release for command-line use on Windows, macOS and Linux by following the instructions at .NET Core 1.0 – RC3 Download.

The release is also available as Docker images, in the dotnet repo. The following images include the new SDK:

  • 1.0.3-sdk-msbuild-rc3
  • 1.0.3-sdk-msbuild-rc3-nanoserver
  • 1.1.0-sdk-msbuild-rc3
  • 1.1.0-sdk-msbuild-rc3-nanoserver

The aspnetcore-build repo has also been updated.

.NET Core SDK Components

This release contains the .NET Core Tools 1.0 RC3, the .NET Core 1.0.3 runtime and the .NET Core 1.1.0 runtime. No changes have been made to the .NET Core runtime in this release. .NET Core 1.0.3 and 1.1.0 were both released previously.

The .NET Core SDK includes a set of .NET Core Tools and one or more runtimes. In previous SDK releases, there was only ever one runtime included. This change in approach was made to make it easier to acquire all supported runtimes in a single step. This experience helps when you are collaborating with developers who are using multiple runtimes. It also makes it easier to update multiple runtimes on your machine when fixes are released. The SDK naturally gets larger, but not as much as you might guess, since there is only ever one set of tools in the package. It also enables us to improve the tools for everyone more easily.

Smaller Project file

The following example is the new, much shorter, default template for .NET Core apps. This is the final project format for .NET Core.

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp1.0</TargetFramework>
  </PropertyGroup>
</Project>

You can change the TargetFramework value from netcoreapp1.0 to netcoreapp1.1 in order to target .NET Core 1.1. The example project above targets .NET Core 1.0.

The following csproj elements/attributes are no longer included in the project file.

  • ToolsVersion – no longer a required attribute.
  • Compile Include – the new default includes all files.
  • EmbeddedResource Include – the new default includes all resources.
  • PackageReference – now implicit for Microsoft.NETCore.App and NETStandard.Library. All other packages, including ASP.NET Core, still require PackageReference!
  • The PackageReference for the Microsoft.NET.Sdk package has been moved to a required Sdk attribute on the top-level Project element.

The default template for .NET Standard library projects is very similar:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard1.4</TargetFramework>
  </PropertyGroup>
</Project>

Just like with the .NET Core template above, you can target a different version of .NET Standard by changing the TargetFramework value. .NET Standard 1.4 was chosen as the default version since it is supported by the .NET Framework and .NET Core and is the highest version currently supported by .NET Native (for UWP apps).

The default template for ASP.NET Core apps is only slightly larger and also does a good job of demonstrating what package references look like, for example with the ASP.NET Core meta package.

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp1.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <Folder Include="wwwroot\" />
  </ItemGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.ApplicationInsights.AspNetCore" Version="2.0.0-beta1" />
    <PackageReference Include="Microsoft.AspNetCore" Version="1.0.3" />
  </ItemGroup>
</Project>

Updating .NET Core csproj Projects

As described above, the .NET Core csproj project files are now shorter. This change moves multiple pieces of project data that were explicitly declared in earlier project files into default settings that are implicitly declared as part of the SDK. The existing explicitly declared data becomes a duplicate declaration of the same implicitly declared data, when loaded with the new tools, which generates MSBuild errors for Compile and EmbeddedResource elements and warnings for PackageReference.

You need to remove the following data from your existing csproj files:

  • Compile
  • EmbeddedResource
  • PackageReference Include="Microsoft.NETCore.App" …
  • PackageReference Include="NETStandard.Library" …
  • PackageReference Include="Microsoft.NET.Sdk" …

The SDK reference has moved to the root Project element, as you can see in the examples above. The SDK reference is required.

For more information, see: Implicit metapackage package reference in the .NET Core SDK and Default Compile Item Values in the .NET Core SDK.

Project.json/xproj -> csproj Migration

Major improvements have been made to project file migration (project.json/xproj -> csproj). The team fixed many project migration issues to improve the migration experience.

We recommend that you discard previously migrated csproj project files and do a fresh migration from project.json/xproj so that you’ll get the new improvements. You won’t need to follow the manual csproj project file updates described in the section above.
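As a sketch, a fresh migration from the command line is a single command, run from the directory that contains project.json:

dotnet migrate

Visual Studio 2017 performs the equivalent one-way migration when you open a project.json/xproj solution.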

.NET Core CLI

.NET Core CLI commands were added to make it easier to manage your projects from the command-line. Our goal is to ensure that you can make project changes manually, via CLI commands or via UI in Visual Studio.

  • dotnet sln — This command adds, removes, and lists projects to/from a solution. Usage: dotnet sln [arguments] [options] [command]
  • dotnet add reference — This command adds a project reference to a project. It replaces dotnet add p2p. Usage: dotnet add [PROJECT] reference [options] [args]
  • dotnet add package — This command adds/removes/updates packages in a project file. Usage: dotnet add package [options] [args]

Note that these commands check for illegal states, as appropriate. For example, if you attempt to add a package that is not compatible with a given project, the command will (correctly) fail.
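A short usage sketch of these commands (solution, project, and package names are placeholders):

dotnet sln app.sln add src/Library/Library.csproj
dotnet add src/App/App.csproj reference src/Library/Library.csproj
dotnet add src/App/App.csproj package Newtonsoft.Json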

Other Improvements

Edit and Continue now works for .NET Core projects.

You can develop ASP.NET Core apps with Linux Docker images. See Docker Tools Troubleshooting for additional information.

You can now remote debug .NET Core apps running on Linux over SSH from Visual Studio. See the process attach dialog below, attaching to a .NET Core process on Linux!

[Screenshot: Attach to Process dialog attaching to a .NET Core process on Linux]

Docs

You may wonder where the docs are for the new tools. The .NET Core Docs site continues to document the project.json tools and experience. We intend to have a complete set of docs in place for Visual Studio 2017 RTM.

In the meantime, the docs are being updated to document the csproj/msbuild tools and experience in the csproj branch of dotnet/docs and the csproj branch of aspnet/docs. The docs are not ready for consumption, but feel free to follow progress or contribute. We are using GitHub projects to manage the effort – see dotnet/docs and aspnet/docs.

NuGet

.NET Core projects have adopted the new PackageReference syntax for csproj project files, as you can see in the ASP.NET Core template example above. The PackageReference syntax has now been enabled for other projects, including .NET Framework and UWP.

When you create a new .NET Framework or UWP project, you will now be asked if you want to use the existing packages.config or the new PackageReference format, as you can see below.

[Image: dialog asking whether to use packages.config or PackageReference]

You can set the default NuGet format type in the dialog above or in the NuGet settings dialog, as you can see below.

[Image: NuGet settings dialog showing the default package management format]

PackageReference is the new NuGet format going forward, and we recommend that you start using it for new projects. It is new in Visual Studio 2017 and is not supported in earlier Visual Studio versions.
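
For comparison, here is the same dependency expressed in both formats (the package and version are hypothetical examples). The packages.config form lives in a separate file:

<packages>
  <package id="Newtonsoft.Json" version="9.0.1" targetFramework="net46" />
</packages>

The PackageReference form lives directly in the csproj file:

<ItemGroup>
  <PackageReference Include="Newtonsoft.Json" Version="9.0.1" />
</ItemGroup>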

At present, there is no automated way to convert an existing project that uses packages.config to the new PackageReference format. We know that this is an important scenario that you would like to see integrated into Visual Studio and are working on enabling it. We expect to make this capability available after Visual Studio 2017 releases.

.NET Native

.NET Native 1.6 RC contains lots of great improvements, including fixes for over 100 customer-reported issues!

Hardware-accelerated System.Numerics.Vectors

We’ve updated .NET Native’s System.Numerics.Vectors support to be hardware-accelerated on all .NET Native platforms (using 128-bit SSE2 on x64 and x86 and 128-bit NEON on ARM32)!

Here’s a rendering of a Mandelbrot set with .NET Native 1.6:

[Image: Mandelbrot set rendered with .NET Native 1.6]

Here’s the same rendering with .NET Native 1.4:

[Image: Mandelbrot set rendered with .NET Native 1.4]

As you can see, .NET Native 1.6 takes just 1.9 seconds while .NET Native 1.4 takes 64 seconds to do the same work! The use of SIMD registers provides a major performance improvement for System.Numerics.Vectors.
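
To give a sense of the kind of code that benefits, here is a minimal C# sketch using System.Numerics (the array size and values are arbitrary; this is not the Mandelbrot code used for the measurement above):

using System;
using System.Numerics;

class VectorAddDemo
{
    static void Main()
    {
        // True when Vector<T> is mapped onto SIMD registers
        // (e.g. 128-bit SSE2 on x86/x64 or NEON on ARM32).
        Console.WriteLine("Accelerated: " + Vector.IsHardwareAccelerated);

        float[] a = new float[1024];
        float[] b = new float[1024];
        float[] sum = new float[1024];
        for (int i = 0; i < a.Length; i++) { a[i] = i; b[i] = 2f * i; }

        // Each iteration adds Vector<float>.Count lanes at once.
        // (1024 is a multiple of the lane count, so no scalar tail is needed.)
        int lanes = Vector<float>.Count;
        for (int i = 0; i <= a.Length - lanes; i += lanes)
        {
            var va = new Vector<float>(a, i);
            var vb = new Vector<float>(b, i);
            (va + vb).CopyTo(sum, i);
        }
    }
}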

How to Get .NET Native 1.6 RC

.NET Native is now included within the Microsoft.NETCore.UniversalWindowsPlatform NuGet package, starting with version 5.3.0. This means that you can select the version of .NET Native you want to use by selecting a specific version of the Microsoft.NETCore.UniversalWindowsPlatform package (starting with version 5.3.0). This capability is new with Visual Studio 2017. By using NuGet, we can ship .NET Native updates more easily and it’s easier for you to select the version you want to use, per project.

Here are the steps:

  1. Right click on the project and select Manage NuGet Packages…
  2. Select the Microsoft.NETCore.UniversalWindowsPlatform NuGet package.
  3. Change the version to 5.3.0.
  4. Click the Update button.

[Image: updating the Microsoft.NETCore.UniversalWindowsPlatform package in the NuGet Package Manager]

If you do not see version 5.3.0 listed, make sure that the Package Source is set to nuget.org.

You can revert to .NET Native 1.4 at any time by rolling back the Microsoft.NETCore.UniversalWindowsPlatform NuGet package from 5.3.0 to an earlier version, such as 5.2.2 or 5.1.0. To do so, follow the same steps outlined above with the desired version.
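
Equivalently, you can pin the version by editing the project file by hand. In a VS 2017 UWP project that uses PackageReference, the reference looks roughly like this (a sketch):

<ItemGroup>
  <PackageReference Include="Microsoft.NETCore.UniversalWindowsPlatform" Version="5.3.0" />
</ItemGroup>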

Visual Studio 2017 RC comes with .NET Native 1.4 by default. This is the same version that is included in Visual Studio 2015 Update 3, with the addition of a few fixes. .NET Native 1.6 is not supported in Visual Studio 2015.

Publishing apps to the store

You can publish apps to the Windows Store with .NET Native 1.6 RC, starting today.

We will be actively listening for feedback and may make additional changes for the Visual Studio 2017 RTM. You will need to upgrade to a later version of the Microsoft.NETCore.UniversalWindowsPlatform package if there are additional updates at that time.

General Improvements

Here are some of the general improvements:

  • You can now inspect static fields that contain the ThreadStatic attribute.
  • We’ve begun building the Shared Library package on x64 with profile-guided optimizations, which reduces the package size and improves startup time for x64 native apps. This change brings x64 to parity with x86 and ARM32.
  • We’ve integrated the .NET Native garbage collector with the Windows Runtime MemoryManager API to properly calculate the memory load factor in UWP applications.
  • We’ve reduced compile times for applications that contain large and/or complex methods by ~25% in certain scenarios.
  • We’ve achieved up to a 400% performance improvement in reverse p/invoke and a 135% performance improvement when accessing Windows Runtime objects in certain scenarios.
  • We’ve made improvements to the reflection stack and metadata formats that resulted in up to 300% performance improvements in some customer scenarios.
  • We’ve made improvements to delegate invocation that can reduce code size and give up to 7% faster performance.
  • We’ve also made many other code-quality improvements that deliver faster startup, better steady-state performance, lower memory usage, and smaller app size.

Here are some of the more common customer reported issues that we fixed:

  • We’ve resolved an issue that sometimes resulted in a 1300 error when submitting a package to the store after upgrading / cherry-picking .NET Core assembly packages.
  • We’ve resolved an issue that caused a memory leak when interacting with certain Windows Runtime objects in a different process.
  • We’ve significantly reduced global lock contention when accessing Windows Runtime objects from multiple threads.
  • We’ve resolved an issue that resulted in queries not executing properly in Entity Framework when enabling .NET Native. (GitHub #6381)
  • We’ve resolved an issue with System.Linq.Expressions that resulted in unsuppressable error messages. (GitHub #5088)
  • .NET Native will now show a warning if you have a native DLL in a different CPU architecture than the application being built. This is a common mistake that results in the application not being able to launch.

Known issues

.NET Native does not currently support portable PDBs. When debugging managed components built with portable PDBs in an application compiled with .NET Native, you may have trouble setting breakpoints, stepping in, and/or inspecting variables of related types in those components. You can work around the resulting warning by deleting the files from the local package directory (users\userName\.nuget\packages). This change was also made in the servicing update for .NET Native 1.4 in the latest update to Visual Studio 2017 RC. Earlier versions of .NET Native may incorrectly throw OutOfMemoryException and crash during build when consuming portable PDBs.

Closing

We announced our intention last summer to bring more uniformity to .NET projects and .NET development. Today’s releases are a major step forward on that plan. Our first focus has been the .NET project format and build tools, as described here. Later this year, we’ll focus more on available APIs. We intend to make .NET development simpler and easier and will stay focused on that vision.

Thanks for using Visual Studio 2017 RC, for trying out the new .NET features and for giving us feedback. We’ve made major improvements for .NET development across multiple application types. We hope that you like them. Tell us what you think!

Please share feedback in the comments or send it directly to us via email:

Thanks to Stacey Haffner and Joe Morris for their contributions to this post.

An update to "Important notice for Office 365 email customers who have configured connectors"

Since we posted this blog post, we have received positive responses from many of our customers, who have proceeded with changing their connectors (as per instructions in the post), thereby protecting their email/domain reputation. However, we are also aware of customers who are either in the midst of making this change or need some additional time to complete their changes. We understand a change like this can take some time, so we have decided to move our deadline from Feb 1st, 2017 to July 5th, 2017.

We have also added more details in the original post. If you are an Office 365 email customer and your organization is hybrid (you have an on-premises environment), please take some time to read it!

Carolyn Liu

January 2017 Update for ASP.NET Core 1.1

We just released an update for ASP.NET Core 1.1 due to Microsoft Security Advisory 4010983. The advisory is for a vulnerability in ASP.NET Core MVC 1.1.0 that could allow denial of service. All of the information you need is in the advisory. A short summary is provided below.

Red Hat customers should consult the Red Hat advisory for the same issue.

How to Obtain the Update

The update is in the Microsoft.AspNetCore.Mvc.Core package. You need to upgrade your project to use version 1.1.1 (or later) of the package and then re-publish your application.

See below for examples of project file updates in both project.json and csproj formats. Note the updated Microsoft.AspNetCore.Mvc.Core package version.

Project.json

The dependencies section of an updated project.json file would look like the following (in its most minimal form).

"dependencies": {"Microsoft.NETCore.App": {"version": "1.1.0","type": "platform"
},"Microsoft.AspNetCore": "1.1.0","Microsoft.AspNetCore.Mvc.Core": "1.1.1",
}

CSProj

An updated csproj file would look like the following (in its most minimal form):

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>
  <PropertyGroup>
    <PackageTargetFallback>$(PackageTargetFallback);portable-net45+win8+wp8+wpa81;</PackageTargetFallback>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore" Version="1.1.0" />
    <PackageReference Include="Microsoft.AspNetCore.Mvc.Core" Version="1.1.1" />
  </ItemGroup>
</Project>
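
After updating the package version, restore and re-publish from the command line (a sketch; your build configuration may differ):

dotnet restore
dotnet publish -c Release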

Learn more

You can ask questions on the aspnet/mvc repo, where a discussion issue has been created.

Windows Server 2016 Data Deduplication users: please install KB3216755!

Hi folks!

Based on several customer bug reports, we have issued a critical fix for Data Deduplication in Windows Server 2016 in the most recent Windows update package, KB3216755. This patch fixes an issue where corruptions may appear in files larger than 2.2 TB. While we always recommend keeping your system up-to-date, based on the severity of any data corruption, we strongly recommend that everyone who is using Data Deduplication on Windows Server 2016 take this update!

Long-time users of Dedup will note that we only officially support files up to 1 TB in size. While this is true, it is a “soft” support statement – we take your data integrity extremely seriously, and therefore will always address reported data corruptions. Our current support statement of 1 TB was chosen for two reasons: 1) for files larger than 1 TB, performance isn’t quite ‘up to snuff’ with our expectations, and 2) dynamic workloads with lots of writes may reach NTFS’ file fragmentation limits, causing the file to become read-only until the next optimization. In short, our 1 TB support statement is about preserving a high quality experience for you.

Your mileage may vary… in particular, many users have reported to us that backup workloads that use VHDs or VHD-like container files sized over 1 TB work extremely well with Dedup. This is because backup workloads are typically append-only. We do, however, recommend that you make use of the new Backup usage type in Windows Server 2016 to ensure the best performance with backup workloads.
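
For reference, enabling the Backup usage type on a volume is a single PowerShell command (a minimal sketch; the drive letter is hypothetical):

# Enable Data Deduplication on volume E: with the Backup usage type
Enable-DedupVolume -Volume "E:" -UsageType Backup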

Finally, I would just like to thank the three users who reached out to us with this issue and helped us validate the pre-release patch: thank you! We always love to hear from you, our customers, so please feel free to reach out to us with your questions, comments, or concerns anytime: dedupfeedback@microsoft.com!

Relaunching the Visual Basic Team Blog!

Last year we decided to retire this blog and consolidate content on the .NET team blog instead. The thinking at the time was that we weren’t really posting a lot of content to it and that there was so much overlap in content between the VB team blog and the C# FAQ that it would...