
Bing Maps V8 SDK September 2016 Update


In this regular update to the Bing Maps Version 8 developer control (V8), we have added a couple of new data visualization features to help you make better sense of your business data.

Data Binning Module

Data binning is the process of grouping point data into a symmetric grid of geometric shapes. An aggregate value can then be calculated from the pins in a bin and used to set the color or scale of that bin, providing a visual representation of a data metric that the bin contains. The two most common shapes used in data binning are squares and hexagons. When hexagons are used, this process is also referred to as hex binning. Since the size and the color can both be customized based on an aggregate value, it is possible to have a single data bin represent two data metrics (bivariate). The data binning module makes it easy to create data bins from thousands of pushpins.

Try it now

Here are links to additional data binning code samples and documentation.

Contour Module

Contour Lines, also known as isolines, are lines that connect points that share a characteristic of equal value. These are often used for visualizing data such as elevations, temperatures, and earthquake intensities on a flat 2D map. This module makes it easy to take contour line data and visualize it on Bing Maps as non-overlapping colored areas.

Try it now

Here are links to additional contour module code samples and documentation.

TypeScript Definitions

The TypeScript definitions for Bing Maps V8 have been updated to include the September updates. In addition to being available through NuGet, we have also made these definitions available through npm.

Additional Improvements

In addition to these new features, this update also includes many smaller feature additions such as double click event support for shapes and several bug fixes.

A complete list of new features added in this release can be found on the What’s New page in the documentation on MSDN. We have many other features and functionalities on the road map for Bing Maps V8. If you have any questions or feedback about V8, please let us know on the Bing Maps forums or visit the Bing Maps website to learn more about our V8 web control features.

-        Bing Maps Team


PSScriptAnalyzer Community Call – Oct 18, 2016


Please join the PSScriptAnalyzer community call on Tuesday, October 18, 2016, 10:00 AM PDT via Skype or telephone. You can find the meeting agenda here.

Skype

Click the following link to join the meeting via Skype.

Join online meeting

Phone

Dial +1-323-849-4874 to join the meeting via telephone.

OR

To join the meeting through a local number follow this link: Find a local number

Conference ID: 94231049

For any additional help regarding joining the meeting, follow this link: Help

SQL Server 2016 Express Edition in Windows containers


We are excited to announce the public availability of SQL Server 2016 Express Edition in Windows Containers! The image is now available on Docker Hub and the build scripts are hosted on our SQL Server Samples GitHub repository. This image can be used in both Windows Server containers and Hyper-V containers.

SQL Server 2016 Express Edition Docker Image | Installation Scripts

Please follow this blog post for detailed instructions on how to get started with SQL Server 2016 Express in Windows Containers.
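As a rough sketch of what getting started looks like, pulling and running the image should be something like the following; the image name and environment variables are our recollection of the Docker Hub listing, so verify them there before use:

docker pull microsoft/mssql-server-windows-express
docker run -d -p 1433:1433 -e sa_password=<YourStrong!Passw0rd> -e ACCEPT_EULA=Y microsoft/mssql-server-windows-express

Once the container is running, you can connect to the instance on port 1433 with your usual SQL tools.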

Troubleshooting failed password changes after installing MS16-101


Hi!

Linda Taylor here, Senior Escalation Engineer in the Directory Services space.

I have spent the last month working with customers worldwide who experienced password change failures after installing the updates under the MS16-101 security bulletin KBs (listed below), as well as working with the product group to get those issues addressed and documented in the public KB articles under the known issues section. It has been busy!

In this post I will aim to provide you with a quick “cheat sheet” of known issues and needed actions as well as ideas and troubleshooting techniques to get there.

Let’s start by understanding the changes.

The following 6 articles describe the changes in MS16-101 as well as a list of Known issues. If you have not yet applied MS16-101 I would strongly recommend reading these and understanding how they may affect you.

        3176492 Cumulative update for Windows 10: August 9, 2016
        3176493 Cumulative update for Windows 10 Version 1511: August 9, 2016
        3176495 Cumulative update for Windows 10 Version 1607: August 9, 2016
        3178465 MS16-101: Security update for Windows authentication methods: August 9, 2016
        3167679 MS16-101: Description of the security update for Windows authentication methods: August 9, 2016
        3177108 MS16-101: Description of the security update for Windows authentication methods: August 9, 2016

The good news is that this month’s updates address some of the known issues with MS16-101.

The bad news is that not all of the issues are caused by a code defect in MS16-101; in some cases the right solution is to make your environment more secure by ensuring that the password change can happen over Kerberos and does not need to fall back to NTLM. That may include opening TCP ports used by Kerberos, fixing other Kerberos problems like missing SPNs, or changing your application code to pass in a valid domain name.

Let’s start with the basics…

Symptoms:

After applying MS16-101 fixes listed above, password changes may fail with the error code

“The system detected a possible attempt to compromise security. Please make sure that you can contact the server that authenticated you.”

Or

“The system cannot contact a domain controller to service the authentication request. Please try again later.”

This text maps to the error codes below:

| Hexadecimal | Decimal | Symbolic | Friendly |
| --- | --- | --- | --- |
| 0xC0000388 | -1073740920 | STATUS_DOWNGRADE_DETECTED | The system detected a possible attempt to compromise security. Please make sure that you can contact the server that authenticated you. |
| 0x800704F1 | 1265 | ERROR_DOWNGRADE_DETECTED | The system detected a possible attempt to compromise security. Please make sure that you can contact the server that authenticated you. |

Question: What does MS16-101 do and why would password changes fail after installing it?

Answer: As documented in the listed KB articles, the security updates that are provided in MS16-101 disable the ability of the Microsoft Negotiate SSP to fall back to NTLM for password change operations in the case where Kerberos fails with the STATUS_NO_LOGON_SERVERS (0xc000005e) error code.

In this situation, the password change will now fail (post MS16-101) with the above mentioned error codes (ERROR_DOWNGRADE_DETECTED / STATUS_DOWNGRADE_DETECTED).

Important: Password RESET is not affected by MS16-101 at all in any scenario. Only password change using the Negotiate package is affected.

So, now that you understand the change, let's look at the known issues and learn how best to identify and resolve them.

Summary and Cheat Sheet

To make it easier to follow I have matched the ordering of known issues in this post with the public KB articles above.

First, when troubleshooting a failed password change post-MS16-101, you will need to understand HOW and WHERE the password change is happening, and whether it is for a domain account or a local account. Here is a cheat sheet.

Summary of scenarios and a quick reference table of actions needed:

| Scenario / Known issue # | Description | Action needed |
| --- | --- | --- |
| 1 | Domain password change fails via CTRL+ALT+DEL and shows an error with the text: "System detected a possible attempt to compromise security". | Troubleshoot using this guide and fix Kerberos. |
| 2 | Domain password change fails via application code with an INCORRECT/UNEXPECTED error code when a password that does not meet password complexity is entered. For example, before installing MS16-101, such a password change may have returned a status like STATUS_PASSWORD_RESTRICTION; after installing MS16-101 it returns STATUS_DOWNGRADE_DETECTED, causing your application to behave in an unexpected way or even crash. Note: in these cases the password change works when a correct new password is entered that complies with the password policy. | Install the October fixes in the table below. |
| 3 | Local user account password change fails via CTRL+ALT+DEL or application code. | Install the October fixes in the table below. |
| 4 | Passwords for disabled and locked-out user accounts cannot be changed using the Negotiate method. | None. By design. |
| 5 | Domain password change fails via application code when a good password is entered. This is the case where, if you pass a server name to NetUserChangePassword, the password change will fail post-MS16-101, because it previously fell back to (and relied on) NTLM. NTLM is insecure and Kerberos is always preferred, so passing a domain name here is the way forward. Note that most of the ADSI and C#/.NET ChangePassword APIs end up calling NetUserChangePassword under the hood, so passing invalid domain names to those APIs will also fail. | Troubleshoot using this guide and fix the code to use Kerberos. |
| 6 | After you install the MS16-101 update, you may encounter 0xC0000022 NTLM authentication errors. | See KB3195799, NTLM authentication fails with 0xC0000022 error for Windows Server 2012, Windows 8.1, and Windows Server 2012 R2 after update is applied. |

Table of fixes for the known issues above, released 2016.10.11, taken from the MS16-101 Security Bulletin:

| OS | Fix needed |
| --- | --- |
| Vista / W2K8 | Re-install 3167679, re-released 2016.10.11 |
| Win7 / W2K8 R2 | Install 3192391 (security only) or 3185330 (monthly rollup that includes security fixes) |
| WS12 | Install 3192393 (security only) or 3185332 (monthly rollup that includes security fixes) |
| Win8.1 / WS12 R2 | Install 3192392 (security only) or 3185331 (monthly rollup that includes security fixes) |
| Windows 10 | For 1511: 3192441 Cumulative update for Windows 10 Version 1511: October 11, 2016. For 1607: 3194798 Cumulative update for Windows 10 Version 1607 and Windows Server 2016: October 11, 2016 |

Troubleshooting

As I mentioned, this post is intended to supplement the documentation of the known issues in the MS16-101 KB articles and provide help and guidance for troubleshooting. It should help you identify which known issue you are experiencing, and provide resolution suggestions for each case.

I have also included a troubleshooting walkthrough of some of the more complex example cases. We will start with the problem definition, and then look at the available logs and tools to identify a suitable resolution. The idea is to teach "how to fish", because there can be many different scenarios, and hopefully you can apply these techniques and use the log files documented here to help resolve the issues when needed.

Once you know the scenario that applies to your password change, the next step is usually to collect some data. Here are the helpful logs.

DATA COLLECTION

The same logs will help in all the scenarios.

LOGS

1. SPNEGO debug log / LSASS.log

To enable this log run the following commands from an elevated admin CMD prompt to set the below registry keys:

reg add HKLM\SYSTEM\CurrentControlSet\Control\LSA /v SPMInfoLevel /t REG_DWORD /d 0xC03E3F /f
reg add HKLM\SYSTEM\CurrentControlSet\Control\LSA /v LogToFile /t REG_DWORD /d 1 /f
reg add HKLM\SYSTEM\CurrentControlSet\Control\LSA /v NegEventMask /t REG_DWORD /d 0xF /f


  • This will log Negotiate debug output to the %windir%\system32\lsass.log.
  • There is no need for reboot. The log is effective immediately.
  • Lsass.log is a text file that is easy to read with a text editor such as Wordpad.
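When you are finished troubleshooting, it is a reasonable assumption that setting the same values back to 0 turns the logging off again:

reg add HKLM\SYSTEM\CurrentControlSet\Control\LSA /v SPMInfoLevel /t REG_DWORD /d 0 /f
reg add HKLM\SYSTEM\CurrentControlSet\Control\LSA /v LogToFile /t REG_DWORD /d 0 /f
reg add HKLM\SYSTEM\CurrentControlSet\Control\LSA /v NegEventMask /t REG_DWORD /d 0 /f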

2. Netlogon.log:

This log has been around for many years and is useful for troubleshooting DC LOCATOR traffic. It can be used together with a network trace to understand why the STATUS_NO_LOGON_SERVERS is being returned for the Kerberos password change attempt.

  • To enable Netlogon debug logging, run the following command from an elevated CMD prompt:

            nltest /dbflag:0x26FFFFFF

  • The resulting log is found in %windir%\debug\netlogon.log (and netlogon.bak).
  • There is no need for a reboot. The log is effective immediately.
  • See also 109626 Enabling debug logging for the Net Logon service.
  • The Netlogon.log (and Netlogon.bak) is a text file. Open the log with any text editor (I like good old Notepad.exe).
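When data collection is complete, you can disable Netlogon debug logging again by clearing the flags:

            nltest /dbflag:0x0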

3. Collect a Network trace during the password change issue using the tool of your choice.

Scenarios, Explanations and Walkthroughs:

When reading this, keep in mind that you may be seeing more than one scenario. The best thing to do is to start with one, fix it, and see if any other problems remain.

1. Domain password change fails via CTRL+ALT+DEL

This is most likely a Kerberos DC locator failure of some kind, where the password changes were relying on NTLM before installing MS16-101 and are now failing. This is the simplest and easiest case to resolve using basic Kerberos troubleshooting methods.

Solution: Fix Kerberos.

Some tips from cases we have seen:

1. Use the network trace to identify whether the necessary communication ports are open. This was quite a common issue, so start by checking it.

In order for Kerberos password changes to work, communication on TCP port 464 needs to be open between the client doing the password change and the domain controller.

Note on RODCs: Read-only domain controllers (RODCs) can service password changes if the user is allowed by the RODC's password replication policy. Users who are not allowed by the RODC password policy require network connectivity to a read/write domain controller (RWDC) in the user account domain to be able to change the password.

To check whether TCP port 464 is open, follow these steps (also documented in KB3167679):

a. Create an equivalent display filter for your network monitor parser. For example:

         ipv4.address == <ip address of client> && tcp.port == 464

b. In the results, look for "TCP:[SynReTransmit" frames.

If you find these, investigate the firewall configuration and open the required ports. It is often useful to take a simultaneous trace from the client and the domain controller and check whether the packets are arriving at the other end.

2. Make sure that the target Kerberos names are valid.

  • IP addresses are not valid Kerberos names.

  • Kerberos supports short names and fully qualified domain names, like CONTOSO or contoso.com.

3. Make sure that service principal names (SPNs) are registered correctly.
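For example, you can check SPN registrations with the setspn tool; the account name below is illustrative:

            setspn -L DC1

This lists the SPNs registered on the DC1 computer account, where you would expect to see entries like ldap/DC1.contoso.com.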

For more information on troubleshooting Kerberos see https://blogs.technet.microsoft.com/askds/2008/05/14/troubleshooting-kerberos-authentication-problems-name-resolution-issues/

or

https://technet.microsoft.com/en-us/library/cc728430(v=ws.10).aspx

2. Domain password change fails via application code with an INCORRECT/UNEXPECTED error code when a password which does not meet password complexity is entered.

For example, before installing MS16-101, such a password change may have returned a status like STATUS_PASSWORD_RESTRICTION. After installing MS16-101 it returns STATUS_DOWNGRADE_DETECTED, causing your application to behave in an unexpected way or even crash.

Note: In this scenario, the password change succeeds when a correct new password is entered that complies with the password policy.

Cause:

This issue is caused by a code defect in ADSI whereby the status returned from Kerberos was not correctly returned to the caller by ADSI.
Here is a more detailed explanation of this one for the geek in you:

Before MS16-101 behavior:

           1. An application calls the ChangePassword method using the ADSI LDAP provider.
           Setting and changing passwords with the ADSI LDAP Provider is documented here.
           Under the hood this calls Negotiate/Kerberos to change the password using a valid realm name.
           Kerberos returns STATUS_PASSWORD_RESTRICTION or another failure code.

           2. A 2nd ChangePassword call is made via the NetUserChangePassword API, intentionally passing a DC name as the realm name. This uses Negotiate and will retry Kerberos. Kerberos fails with STATUS_NO_LOGON_SERVERS because a DC name is not a valid realm name.

           3. Negotiate then retries over NTLM, which succeeds or returns the same previous failure status.

The password change fails if a bad password was entered, and the NTLM error code is returned to the application. If a valid password was entered, everything works: the 1st ChangePassword call passes in a good name, so if Kerberos works, the password change succeeds and you never reach step 3.

Post-MS16-101 behavior / why it fails with MS16-101 installed:

           1. An application calls the ChangePassword method using the ADSI LDAP provider. This calls Negotiate for the password change with a valid realm name. Kerberos returns STATUS_PASSWORD_RESTRICTION or another failure code.

           2. A 2nd ChangePassword call is made via NetUserChangePassword with a DC name as the realm name, which fails over Kerberos with STATUS_NO_LOGON_SERVERS and triggers NTLM fallback.

           3. Because NTLM fallback is blocked by MS16-101, error STATUS_DOWNGRADE_DETECTED is returned to the calling app.

Solution: Install the October update, which fixes this issue. The fix lies in adsmsext.dll, included in the October updates.

Again, the updates you need to install are listed in the table of fixes at the top of this post, taken from the MS16-101 Security Bulletin.

3. Local user account password change fails via CTRL+ALT+DEL or application code.

Installing the October updates above should also resolve this.

MS16-101 had a defect where Negotiate did not correctly determine that the password change was local and would try to find a DC using the local machine as the domain name.

This failed and NTLM fallback was no longer allowed post MS16-101. Therefore, the password changes failed with STATUS_DOWNGRADE_DETECTED.

Example:

One such scenario I saw, where password changes of local user accounts via CTRL+ALT+DEL failed with the message "The system detected a possible attempt to compromise security. Please ensure that you can contact the server that authenticated you.", was when the following group policy is set and you try to change the password of a local account:

Policy: Computer Configuration \ Administrative Templates \ System \ Logon \ "Assign a default domain for logon"
Path: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\DefaultLogonDomain
Setting: DefaultLogonDomain
Data Type: REG_SZ
Value: "." (without quotes). The period or "dot" designates the local machine name.

Cause: In this case, post-MS16-101, Negotiate incorrectly determined that the account is not local and tried to discover a DC using \\<local machine name> as the domain, and failed. This caused the password change to fail with the STATUS_DOWNGRADE_DETECTED error.

Solution: Install the October fixes listed in the table at the top of this post.

4. Passwords for disabled and locked-out user accounts cannot be changed using the Negotiate method.

MS16-101 intentionally disabled changing the passwords of locked-out or disabled user accounts via Negotiate.

Important: Password RESET is not affected by MS16-101 at all in any scenario, only password change. Therefore, any application which is doing a password reset will be unaffected by MS16-101.

Another important thing to note is that MS16-101 only affects applications using Negotiate. Therefore, it is possible to change locked-out and disabled account passwords using other methods, such as LDAP.

For example, the PowerShell cmdlet Set-ADAccountPassword will continue to work for locked out and disabled account password changes as it does not use Negotiate.
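For illustration, a password change (not a reset) with that cmdlet looks like this; the identity and passwords are placeholders:

Set-ADAccountPassword -Identity TestUser -OldPassword (ConvertTo-SecureString -AsPlainText "oldPassword!123" -Force) -NewPassword (ConvertTo-SecureString -AsPlainText "newPassword!123" -Force)

Specifying -OldPassword makes this a password change; omitting it and using -Reset would perform a reset instead.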

5. Troubleshooting domain password change failure via application code when a good password is entered.

This is one of the most difficult scenarios to identify and troubleshoot, so I have provided a more detailed example here, including sample code, the cause, and the solution.

In summary, the solution in these cases is almost always to correct the application code, which may be passing in an invalid domain name such that Kerberos fails with STATUS_NO_LOGON_SERVERS.

Scenario:

An application is using the System.DirectoryServices.AccountManagement namespace to change a user's password.
https://msdn.microsoft.com/en-us/library/system.directoryservices.accountmanagement(v=vs.110).aspx

After installing MS16-101, password changes fail with STATUS_DOWNGRADE_DETECTED. Here is an example failing .NET code snippet, using PowerShell, which worked before MS16-101:

Add-Type -AssemblyName System.DirectoryServices.AccountManagement
$ct = [System.DirectoryServices.AccountManagement.ContextType]::Domain
$ctoptions = [System.DirectoryServices.AccountManagement.ContextOptions]::SimpleBind -bor [System.DirectoryServices.AccountManagement.ContextOptions]::ServerBind
$pc = New-Object System.DirectoryServices.AccountManagement.PrincipalContext($ct, "contoso.com", "OU=Accounts,DC=Contoso,DC=Com", $ctoptions)
$idType = [System.DirectoryServices.AccountManagement.IdentityType]::SamAccountName
$up = [System.DirectoryServices.AccountManagement.UserPrincipal]::FindByIdentity($pc, $idType, "TestUser")
$up.ChangePassword("oldPassword!123", "newPassword!123")

Data Analysis

There are two possibilities here:
(a) The application code is passing an incorrect domain name parameter, causing the Kerberos password change to fail to locate a DC.
(b) The application code is good, and the Kerberos password change fails for another reason, such as a blocked port, a DNS issue, or a missing SPN.

Let's start with (a), where the application code is passing an incorrect domain name/parameter causing the Kerberos password change to fail to locate a DC.

(a) Data Analysis Walkthrough Example based on a real case:

1. Start with the Lsass.log (SPNEGO trace).

If you are troubleshooting a password change failure after MS16-101, look for the following text in Lsass.log, which indicates that Kerberos failed and NTLM fallback was forbidden by MS16-101:

Failing Example:

[ 9/13 10:23:36] 492.2448> SPM-WAPI: [11b0.1014] Dispatching API (Message 0)
[ 9/13 10:23:36] 492.2448> SPM-Trace: [11b0] LpcDispatch: dispatching ChangeAccountPassword (1a)
[ 9/13 10:23:36] 492.2448> SPM-Trace: [11b0] LpcChangeAccountPassword()
[ 9/13 10:23:36] 492.2448> SPM-Helpers: [11b0] LsapCopyFromClient(0000005EAB78C9D8, 000000DA664CE5E0, 16) = 0
[ 9/13 10:23:36] 492.2448> SPM-Neg: NegChangeAccountPassword:
[ 9/13 10:23:36] 492.2448> SPM-Neg: NegChangeAccountPassword, attempting: NegoExtender
[ 9/13 10:23:36] 492.2448> SPM-Neg: NegChangeAccountPassword, attempting: Kerberos
[ 9/13 10:23:36] 492.2448> SPM-Warning: Failed to change password for account Test: 0xc000005e
[ 9/13 10:23:36] 492.2448> SPM-Neg: NegChangeAccountPassword, attempting: NTLM
[ 9/13 10:23:36] 492.2448> SPM-Neg: NegChangeAccountPassword, NTLM failed: not allowed to change domain passwords
[ 9/13 10:23:36] 492.2448> SPM-Neg: NegChangeAccountPassword, returning: 0xc0000388

  • 0xc000005e is STATUS_NO_LOGON_SERVERS
  • 0xc0000388 is STATUS_DOWNGRADE_DETECTED

If you see this, it means Kerberos failed to locate a domain controller in the domain, and fallback to NTLM is not allowed by MS16-101. Next, look at the Netlogon.log and the network trace to understand why.

2. Network trace

Look at the network trace and filter the traffic based on the client IP, DNS, and any authentication-related traffic.
You may see the client requesting a Kerberos ticket using an invalid SPN, like:


| Source | Destination | Description |
| --- | --- | --- |
| Client | DC1 | KerberosV5:TGS Request Realm: CONTOSO.COM Sname: ldap/contoso.com {TCP:45, IPv4:7} |
| DC1 | Client | KerberosV5:KRB_ERROR – KDC_ERR_S_PRINCIPAL_UNKNOWN (7) {TCP:45, IPv4:7} |

So here the client tried to get a ticket for the ldap/contoso.com SPN and failed with KDC_ERR_S_PRINCIPAL_UNKNOWN, because this SPN is not registered anywhere.

  • This is expected. A valid LDAP SPN looks like ldap/DC1.contoso.com.

Next let’s check the Netlogon.log

3. Netlogon.log:

Open the log with any text editor (I like good old Notepad.exe) and check the following:

  • Is a valid domain name being passed to DC locator?

Invalid names such as \\servername.contoso.com or an IP address like \\x.y.z.w will cause DC locator to fail, and thus the Kerberos password change to return STATUS_NO_LOGON_SERVERS. Once that happens, NTLM fallback is not allowed and you get a failed password change.

If you find this issue, examine the application code and make the necessary changes to ensure a correct domain name format is passed to the ChangePassword API being used.

Example of failure in Netlogon.log:

[MISC] [PID] DsGetDcName function called: client PID=1234, Dom:\\contoso.com Acct:(null) Flags: IP KDC
[MISC] [PID] DsGetDcName function returns 1212 (client PID=1234): Dom:\\contoso.com Acct:(null) Flags: IP KDC

\\contoso.com is not a valid domain name. (contoso.com is a valid domain name)

This error translates to:

| Hexadecimal | Decimal | Symbolic | Description |
| --- | --- | --- | --- |
| 0x4BC | 1212 | ERROR_INVALID_DOMAINNAME | The format of the specified domain name is invalid. (winerror.h) |

So what happened here?

The application code passed an invalid TargetName to Kerberos. It used the domain name as a server name, which is why we see the SPN ldap/contoso.com.

The client tried to get a ticket for this SPN and failed with KDC_ERR_S_PRINCIPAL_UNKNOWN, because this SPN is not registered anywhere. As noted above, this is expected; a valid LDAP SPN looks like ldap/DC1.contoso.com.

The application code then tried the password change again and passed in \\contoso.com as the domain name. Anything beginning with \\ is not a valid domain name, and neither is an IP address, so DC locator will fail to locate a DC when given this name. We can see this in the Netlogon.log and the network trace.

Conclusion and Solution

If the domain name is invalid here, examine the code snippet doing the password change to understand why the wrong name is passed in.

The fix in these cases is to change the code to ensure a valid domain name is passed to Kerberos, allowing the password change to happen over Kerberos rather than NTLM. NTLM is not secure; if Kerberos is possible, it should be the protocol used.

SOLUTION

The solution here was to remove "ContextOptions.ServerBind | ContextOptions.SimpleBind" and allow the code to use the default (Negotiate). Note that using a Domain context together with ServerBind is what caused the issue; Negotiate with a Domain context is the option that works and is successfully able to use Kerberos.

Working code:

Add-Type -AssemblyName System.DirectoryServices.AccountManagement
$ct = [System.DirectoryServices.AccountManagement.ContextType]::Domain
$pc = New-Object System.DirectoryServices.AccountManagement.PrincipalContext($ct, "contoso.com", "OU=Accounts,DC=Contoso,DC=Com")
$idType = [System.DirectoryServices.AccountManagement.IdentityType]::SamAccountName
$up = [System.DirectoryServices.AccountManagement.UserPrincipal]::FindByIdentity($pc, $idType, "TestUser")
$up.ChangePassword("oldPassword!123", "newPassword!123")

Why does this code work before MS16-101 and fail after?

ContextOptions are documented here: https://msdn.microsoft.com/en-us/library/system.directoryservices.accountmanagement.contextoptions(v=vs.110).aspx

Specifically: “This parameter specifies the options that are used for binding to the server. The application can set multiple options that are linked with a bitwise OR operation. “

Passing in a domain name such as contoso.com with the ContextOptions ServerBind or SimpleBind causes the client to attempt to use an SPN like ldap/contoso.com, because it expects the name passed in to be a server name.

This is not a valid SPN and does not exist, so Kerberos fails with STATUS_NO_LOGON_SERVERS.
Before MS16-101, in this scenario, the Negotiate package would fall back to NTLM, attempt the password change using NTLM, and succeed.
Post-MS16-101, this fallback is not allowed and Kerberos is enforced.

(b) If the application code is good but Kerberos fails to locate a DC for another reason

If you see a correct domain name and SPNs in the above logs, then the issue is that Kerberos fails for some other reason, such as blocked TCP ports. In this case, revert to Scenario 1 to troubleshoot why Kerberos failed to locate a domain controller.

You may also be hitting both (a) and (b). Traces and logs are the best tools to identify which.

Scenario 6: After you install the MS16-101 update, you may encounter 0xC0000022 NTLM authentication errors.

I will not go into detail on this scenario, as it is well described in the referenced KB article.

See KB3195799 NTLM authentication fails with 0xC0000022 error for Windows Server 2012, Windows 8.1, and Windows Server 2012 R2 after update is applied.

That’s all for today! I hope you find this useful. I will update this post if any new information arises.

Linda Taylor | Senior Escalation Engineer | Windows Directory Services
(A well established member of the content police.)

Hosting .NET Core Services on Service Fabric


This post was written by Vaijanath Angadihiremath, a software engineer on the .NET team.

This tutorial is for users who already have a group of ASP.NET Core services which they want to host as microservices in Azure using Azure Service Fabric. Azure Service Fabric is a great way to host microservices in a PaaS world to obtain many benefits like high density, scalability and upgradability. In this tutorial, I will take a self-contained ASP.NET Core service targeting the .NETCoreApp framework and host it as a guest executable in Service Fabric.

Writing cross platform services/apps using the same code base is one of the key benefits of ASP.NET Core. If you plan to host the services on Linux and also want to host the same set of services using Service Fabric, then you can easily achieve this by using the guest services feature of Service Fabric. You can run any type of application, such as Node.js, Java, ASP.NET Core or native applications in Service Fabric. Service Fabric terminology refers to those types of applications as guest executables. Guest executables are treated by Service Fabric like stateless services. As a result, they will be placed on nodes in a cluster, based on availability and other metrics.

The current Service Fabric SDK templates only provide a way to host .NET services that target the full .NET Framework, such as .NET Framework 4.5.2. If you already have a service that targets .NETCoreApp alone, or both .NETCoreApp and .NET Framework 4.x, then you cannot use the built-in ASP.NET Core template, as the Service Fabric SDK only supports .NET Framework 4.5.2. To work around this, we use the guest executable approach for all projects that list .NETCoreApp as a target framework in their project.json.

Service Fabric Application package

As explained in Deploying a guest executable to Service Fabric, any Service Fabric application that is deployed on a Service Fabric cluster needs to follow a predefined directory structure.

|-- ApplicationPackage
    |-- code
        |-- existingapp.exe
    |-- config
        |-- Settings.xml
    |-- data
    |-- ServiceManifest.xml
|-- ApplicationManifest.xml

The root contains the ApplicationManifest.xml that defines the entire application. A subdirectory for each service included in the application is used to contain all the artifacts that the respective service requires. It contains the following items:

  • ServiceManifest.xml: this file defines the service.
  • Code: this directory contains the service code.
  • Config: this directory contains a Settings.xml for configuring service-specific settings.
  • Data: this directory stores local data that the service might need.

In order to deploy a guest service, we need to get all the required binaries to run the service and copy them under the Code folder. The Config and Data folders are optional and are used only by services that require them. For .NETCoreApp self-contained projects, you can easily achieve this directory structure by using the publish-to-file-system mechanism in Visual Studio. Once you publish the service to a folder, all the required binaries for the service, including the .NETCoreApp binaries, are copied to that folder. We can then map the published location to the Code folder in the Service Fabric service.

Publish .NETCoreApp Service to Folder

Right-click the .NET Core project and click Publish.

Create a custom publish target and name it appropriately to describe the final published service. I am deploying an account-management service and naming it Account.

Creating a new publish target

Under Connection, set the Target location where you want the project to be published. Choose the Publish method as File System.

Setting publish target location

Under Settings, set the Configuration to Release – Any CPU, the Target Framework to .NETCoreApp, Version=v1.0, and the Target Runtime to win10-x64. Click the Publish button.

Specifying publish settings

You have now published the service to a directory.
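If you prefer the command line over the Visual Studio dialog, the equivalent self-contained publish with the .NET Core tooling of the time should look roughly like this (the output path is an example):

dotnet publish -c Release -r win10-x64 -o ./publish/Account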

Creating a Guest Service Fabric Application

Visual Studio provides a Guest Service Fabric Application template to help you deploy a guest executable to a Service Fabric cluster.

Following are the steps.

  1. Choose File -> New Project and create a Service Fabric Application. The template can be found under Visual C# -> Cloud. Choose an appropriate project name, as this will be the name of the application deployed on the cluster.
  2. Choose the Guest Executable template. Under Code Package Folder, browse to the previously published directory of the service.
  3. Under Code Package Behavior, you can specify either Add link to external folder or Copy folder contents to Project. Using linked folders enables you to pick up updates to the guest executable from its source as part of the application package build.
  4. Choose the Program that needs to run as the service, and specify the arguments and working directory if they differ. In my case, I am just using the Code Package.
  5. If your service needs an endpoint for communication, you can add the protocol, port, and type to the ServiceManifest.xml (see the sketch after this list).

  6. Set the project as Startup Project.

  7. You can now publish to the cluster by just F5 debugging.
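Here is a minimal sketch of such an endpoint declaration in ServiceManifest.xml; the endpoint name and port are illustrative:

<Resources>
  <Endpoints>
    <!-- Illustrative endpoint; adjust the name, protocol and port for your service. -->
    <Endpoint Name="WebEndpoint" Protocol="http" Port="8080" Type="Input" />
  </Endpoints>
</Resources>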

If you have multiple services that you want to deploy as guest services, you can simply edit this guest service project file to include new Code, Config, and Data packages for each new service, or use ServiceFabricAppPackageUtil.exe as described in the Deploy multiple guest executables tutorial.

Resources

  1. Self-contained ASP.NET Core deployments
  2. Service Fabric programming model
  3. Deploying a guest service in Service Fabric.
  4. Deploying multiple guest services in Service Fabric.

Internet of Things on the Xbox (App Dev on Xbox series)


This week's app is all about the Internet of Things. Best For You is a sample fitness UWP app focused on collecting data from fictional IoT-enabled yoga wear and presenting it to the user in a meaningful and helpful way on all of their devices, to track health and exercise progress. In this post we will focus on the IoT side of the Universal Windows Platform, as well as Azure IoT Hub and how they work together to create an end-to-end IoT solution. The source code for the application is available on GitHub right now, so make sure to check it out.

image1

If you missed the previous blog post on Hosted Web Apps, make sure to check it out for an in-depth look at how to build hosted web experiences that take advantage of native platform functionality and different input modalities across UWP and other native platforms. To read the other blog posts and watch the recordings from the App Dev on Xbox live event that started it all, visit the App Dev on Xbox landing page.

Windows IoT Core

IoT, or the “Internet of Things,” is a system of physical objects capable of sensing the internal or external environment, connected to a larger network through which they are sending data to be processed and analyzed, and finally synthesized on the application level. This is intentionally a broad description, as IoT can take many shapes. It is the smart thermostat in your house, a water meter system on a massive hydroelectric dam, or a swarm of weather balloons with cellular data connections and GPS sensors.

The goal of most IoT scenarios is similar: gain a specific insight from, or operate on, the environment. For the purposes of this article, we'll focus on the smaller IoT systems whose responsibility is to collect a specific set of data using sensors and send that data to a more powerful system that can gain intelligence from it and make larger decisions. Later in the post, we'll reveal the fictional smart yoga wear and see how we can use Windows IoT to power the gear.

Windows IoT Core is a version of Windows 10 that is optimized for smaller devices with or without a display; devices such as the Raspberry Pi 2 and 3, Arrow DragonBoard 410c, MinnowBoard MAX, and (with upcoming support) the Intel Joule. There is also a professional version, Windows IoT Core Pro, that adds many enterprise-friendly features.

Installing Windows IoT Core on a device is easier than it has ever been. You can use the Windows IoT Core Dashboard tool, which automates the process of downloading the correct image for your device and flashing the OS onto the device's memory for you.

Windows IoT leverages the flexible and powerful Universal Windows Platform. Yes, this means you can use your existing UWP skills, including XAML/C#, and deploy almost any UWP app onto an IoT device, provided you're not leveraging special PC hardware (e.g., a AAA game that requires a powerful graphics card). The majority of UWP APIs work the same way, but you can also get access to IoT-specific APIs on Windows IoT Core by simply adding a reference to the Windows IoT Extensions for the UWP. Getting started is really easy, and once the reference has been added you'll get access to namespaces like Windows.Devices.Gpio and Windows.Devices.I2c (and many more) to begin developing for IoT-specific scenarios.
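As a small illustration of those APIs (not part of the Best For You sample), blinking an LED from a UWP app might look like the sketch below; the pin number is arbitrary:

using System.Threading.Tasks;
using Windows.Devices.Gpio;

public static class BlinkDemo
{
    public static async Task BlinkAsync()
    {
        // GetDefault() returns null on devices without a GPIO controller (e.g. a desktop PC).
        var gpio = GpioController.GetDefault();
        if (gpio == null) return;

        // Pin 5 is arbitrary; use the header pin your LED is actually wired to.
        using (GpioPin pin = gpio.OpenPin(5))
        {
            pin.SetDriveMode(GpioPinDriveMode.Output);

            for (int i = 0; i < 10; i++)
            {
                pin.Write(GpioPinValue.High);
                await Task.Delay(500);
                pin.Write(GpioPinValue.Low);
                await Task.Delay(500);
            }
        }
    }
}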

Deploying a UWP app to an IoT Core device is the same as deploying to any remote Windows 10 device; no special knowledge is required. Simply select Remote Device as your target, enter the IP address (or machine name), and start debugging. Take a look at the Hello World sample app tutorial for Windows IoT to see just how easy it is.

Best For You

Let's continue with the idea that we have invented smart yoga pants. A small IoT device running Windows IoT Core is embedded in the Best For You yoga pants, with sensors woven into the fabric to capture data such as heart rate, temperature (temp sensor), and leg position (flex sensor). These sensors are very small and virtually undetectable in the pants. The app running on the device constantly captures the incoming data from the sensors and sends it to the cloud for further processing.

Remember that the IoT device's responsibility in this scenario is to monitor and report, not to process the data. We'll leave the processing to more capable machines with much more processing power. There are many ways for the device to transfer this data, such as the very convenient and traditional HTTP (if the device can be connected to the internet directly), Bluetooth to a mobile device that can relay the data, or even an AllJoyn (or other wireless standard) connection to a device like the IoTivity AllJoyn Device System Bridge.

With a way to communicate the data, where can we send that data so that it can be processed into meaningful insights? This is where Azure IoT Hub is ideal.

Azure IoT Hub

Azure IoT Hub is a powerful tool that allows for easy connection to all your Windows IoT devices in the field. It is a fully managed service that enables reliable and secure bi-directional communications between millions of Internet of Things (IoT) devices and a solution back end.

Here’s a high level architectural diagram of a Windows IoT Core solution with Azure IoT Hub and connected services to visualize and process the data:

image2

You can provision your Windows IoT devices so that they're authenticated and can connect directly to the Hub. Provisioning your IoT device for Azure IoT Hub is also easier than ever: the same IoT Dashboard you used to install Windows IoT Core lets you provision devices with Azure IoT Hub. You can read more about it in this blog post; here's a screenshot of the IoT Core Dashboard's provisioning tool:

image3

The Hub receives communications from the devices containing data relevant to each device's responsibility. This usually takes the form of small data packets for each sensor reading. To continue with our smart yoga pants example, the IoT device's UWP app takes a reading from all the sensors every second.

Capturing a sensor reading and sending it to Azure IoT Hub might look something like this:

while (true)
{
    // Anonymous object holding one reading from each sensor.
    var pantsSensorDataPoint = new
    {
        rightLegAngle = rightLegSensor?.Value,
        leftLegAngle = leftLegSensor?.Value,
        bodyTemperature = tempSensor?.Value
    };

    // Serialize the reading as JSON and wrap it in an IoT Hub message.
    var messageString = JsonConvert.SerializeObject(pantsSensorDataPoint);
    var message = new Message(Encoding.ASCII.GetBytes(messageString));

    await myAzureDeviceClient.SendEventAsync(message);

    // Send one reading per second.
    await Task.Delay(1000);
}

As we can see, the three sensors report values that we want to send to Azure IoT Hub for processing. Notice myAzureDeviceClient; this is a DeviceClient instance from the Azure IoT Hub device SDK (adding the SDK to your app is as simple as adding the Microsoft.Azure.Devices.Client NuGet package).

There is a little configuration needed to instantiate the client with your IoT Hub's details, but once that's ready, all you need to do to send up some data is call SendEventAsync(). To learn more about setting up the hub, check out the Getting Started with Azure IoT Hub tutorial; it covers everything you need to get up and running quickly. It simulates the IoT device with a small console app, but you can replace that with your Windows IoT Core device's UWP app, as the NuGet package can be added to a UWP app as well. There is also a great Visual Studio extension available to help you get configured quickly.
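That configuration is essentially one call. A minimal sketch, assuming a device identity already registered in your hub (the connection string is a placeholder):

using Microsoft.Azure.Devices.Client;

// Per-device connection string from the IoT Hub device registry (placeholder values).
const string deviceConnectionString =
    "HostName=myhub.azure-devices.net;DeviceId=yogaPants01;SharedAccessKey=<device-key>";

var myAzureDeviceClient = DeviceClient.CreateFromConnectionString(deviceConnectionString);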

Alternatively, you can use an ARM (Azure Resource Manager) template. An ARM template allows you to do an amazing one-click deploy to Azure: it is a JSON file that defines the resources and the connections between those resources. An example of this is the ARM template linked in the readme of the project on GitHub.

Okay, so now we have the IoT Core device sending data to the Azure IoT Hub every second. How can we make use of this? How do we get insightful information from so much data? Let’s take a look at how we present the data.

Presenting the data

Once the data is being stored by the Azure IoT Hub, it can be used by other applications or used directly for analytics. We’ll cover two scenarios for the yoga pants data: Streaming analytics and a client UWP app running on an Xbox One!

Stream Analytics

You have the ability to hook up an Azure Stream Analytics job to your Azure IoT Hub. The data that your Hub stores becomes a treasure trove of information that can be plugged into a service like Power BI and molded into insightful charts and graphs that present the information in a meaningful way.

First you need to set up your Stream Analytics job. Once that's prepared, you create a query against the data using the Stream Analytics Query Language (SAQL), which is very similar to SQL. A query against the yoga pants data might look like this, since it only has three relevant fields: LeftLegAngle, RightLegAngle, and BodyTemp.

SELECT * FROM YogaPantsSensorTable

The output from the Stream Analytics job would contain each reading from every user. This is also known as a "passthrough query", because it sends all the data through to whatever consumes it. You can also take a look at other examples of how to use the query language in this tutorial: Get started using Azure Stream Analytics: Real-time fraud detection.
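Instead of passing everything through, you can also aggregate in the query. A sketch, assuming the same hypothetical input and letting Stream Analytics window on arrival time:

SELECT
    AVG(BodyTemp) AS AvgBodyTemp,
    System.Timestamp AS WindowEnd
FROM YogaPantsSensorTable
GROUP BY TumblingWindow(second, 60)

This emits one average body temperature per minute rather than every raw reading.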

We could now connect the Stream Analytics job to a service like Power BI to show the data in a multitude of charts, giving at-a-glance information from all your sensors and users. Stream Analytics can support millions of events per second; this means you could have smart yoga pants for everyone, across the world, in a special worldwide yoga session, and get immediate telemetry streaming into your data visualization apps. For more information on how to use Power BI with Stream Analytics, check out this tutorial: Stream Analytics & Power BI: A real-time analytics dashboard for streaming data.

Presenting Data in a UWP App

Now for the UI magic that brings all this together for a delightful user experience. We'll want a UWP app that runs on Xbox One (keep in mind that because this is a UWP app, we can also run it on PC, Mobile, and HoloLens!). Let's focus on the Best For You demo app and how it delivers the experience to the user.

First we need to step back and think about some design considerations. When designing an IoT app for any device, it is crucial to think about the context in which the end user will be experiencing the app. A classic example of this train of thought would be if you were designing a remote control app for an IoT robot. Since the user may need to walk around during this interaction, the targeted device would be a phone or tablet.

However, when designing Best For You, we decided that the Xbox One is perfect for an exercise-focused app. Here are just a few reasons why:

  • Xbox is great for hands-free interactions. Since the device is frequently in a spacious room with about a 10-foot viewing distance, this also gives the user a lot of space for those interactions.
  • Xbox is great for shared experiences. Spacious rooms can hold a lot of people who can simultaneously see and hear whatever is being played on the television.
  • Xbox is great for consumption.

Now that we know what the target device will be and how to design for it, we can start thinking about the flow of data from IoT Hub to the UI. The smart yoga pants have embedded heart rate sensors and have been sending this data to the Hub, and we want to show it in the UI.

Let’s take a look at how the Best For You app connects to the Azure IoT Hub and gets the user’s heart rate. The demo app has a StatsPage.xaml, within that page is a StackPanel for displaying the user’s heart rate:
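The markup for that panel is roughly the following sketch (the sample's actual XAML may differ):

<StackPanel>
    <!-- Bound to the HeartRate property shown below. -->
    <TextBlock Text="{Binding HeartRate}" />
</StackPanel>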

This TextBlock’s Text value is bound to a HeartRate property in the code-behind:


public string HeartRate
{
    get { return _heartRate; }
    set
    {
        _heartRate = value;
        …
        RaisePropertyChanged();   
    }
}

Now that the property and the UI are configured, we can start getting data from the Azure IoT Hub and updating the HeartRate value.

We do this within a Task named CheckHeartRate. Let's break the task down. First, we need to connect to the IoT Hub's Event Hubs-compatible endpoint:


// Connect to the IoT Hub's Event Hubs-compatible endpoint.
var factory = MessagingFactory.CreateFromConnectionString(ConnectionString);

var client = factory.CreateEventHubClient(EventHubEntity);
var group = client.GetDefaultConsumerGroup();

// Only read events that arrive from now on.
var startingDateTimeUtc = DateTime.Now;

var receiver = group.CreateReceiver(PartitionId, startingDateTimeUtc);

The EventHubReceiver is where we can start receiving messages from the IoT Hub! Let’s look at how to get data from the EventHubReceiver:


while (true)
{
    EventData data = receiver.Receive();
    if (data == null) continue;

    await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
    {
        HeartRate = Encoding.UTF8.GetString(data.GetBytes());
    });
}

That's it! Calling the Receive method on the EventHubReceiver will get you an EventData object. From there you call GetBytes, and since our HeartRate property is a string, we convert the bytes to a string and then update the HeartRate property. The data has now made the successful trip from pants sensor, to Windows IoT, to Azure IoT Hub, and finally to the UWP app on Xbox One!

Here’s what the UWP’s app UI looks like on Xbox One (see the heart logo and the heart rate at the top right):

image4

This isn't necessarily the end of the data's journey from sensor to UWP app. You could add a sharing mechanism to the UWP app and let users share their progress and scores on social media, engaging other users of your amazing smart yoga pants solution. Alternatively, you could gamify the app, keep a leaderboard in Azure, and use Stream Analytics to pull in the currently trending users.

That’s all!

Now that you are here, make sure to check out the source for the UWP app on our official GitHub repository. Read through some of the resources provided below in the Resources section, watch the event if you missed it, and let us know what you think through the comments below or on Twitter.

Don't forget to check back next week for another blog post and a new app sample, where we will focus on how to take advantage of the camera APIs in UWP and how to add intelligence by using the vision, face, and emotion APIs from Cognitive Services.

Until then, happy coding!

Resources

Previous Xbox Series Posts

Instagram app for Windows 10 expands to PC and tablets


We're excited to share that the Instagram app for Windows 10 is expanding beyond its current mobile availability and begins rolling out today, optimized for tablets and PCs. The app is free to download from the Windows Store.

Instagram for Windows 10 tablets

We welcomed the Instagram app for Windows 10 Mobile back in April, and you can now use the app right from your Windows 10 tablet or PC, with Windows-only experiences such as Live Tiles, which let you see new photos and notifications right from your home screen.

Instagram for Windows 10 PC

Here are some of the features you can use in Instagram for Windows 10:

  • Post and edit photos* – Instagram makes sharing moments with everyone in your world easy, speedy, and fun.
  • Stories – Stories from people you follow will appear in a row at the top of Feed.
  • Instagram Live Tile – Find out what your friends and family are up to at a glance.
  • Rich, native notifications – We'll send you the notifications you want to see so that you don't miss important updates.
  • Instagram Direct – Instagram Direct lets you exchange threaded messages with one or more people, and share posts you see in Feed as a message.
  • Full featured Search, Explore, Profile, and Feed.

Download the Instagram app for Windows 10 for free today! Head over to the Instagram Blog to learn more about today’s exciting news!

*Posting and editing photos is only available on tablets and PCs with touch screens and backward-facing cameras.

Squeezing hyper-convergence into the overhead bin, for barely $1,000/server: the story of Project Kepler-47


This tiny two-server cluster packs powerful compute and spacious storage into one cubic foot.

The Challenge

In the Windows Server team, we tend to focus on going big. Our enterprise customers and service providers are increasingly relying on Windows as the foundation of their software-defined datacenters, and needless to say, our hyperscale public cloud Azure does too. Recent big announcements like support for 24 TB of memory per server with Hyper-V, or 6+ million IOPS per cluster with Storage Spaces Direct, or delivering 50 Gb/s of throughput per virtual machine with Software-Defined Networking are the proof.

But what can these same features in Windows Server do for smaller deployments? Those known in the IT industry as Remote-Office / Branch-Office (“ROBO”) – think retail stores, bank branches, private practices, remote industrial or constructions sites, and more. After all, their basic requirement isn’t so different – they need high availability for mission-critical apps, with rock-solid storage for those apps. And generally, they need it to be local, so they can operate – process transactions, or look up a patient’s records – even when their Internet connection is flaky or non-existent.

For these deployments, cost is paramount. Major retail chains operate thousands, or tens of thousands, of locations. This multiplier makes IT budgets extremely sensitive to the per-unit cost of each system. The simplicity and savings of hyper-convergence – using the same servers to provide compute and storage – present an attractive solution.

With this in mind, under the auspices of Project Kepler-47, we set about going small.

 

Meet Kepler-47

The resulting prototype – and it's just that, a prototype – was revealed at Microsoft Ignite 2016 last week.

Kepler-47 on the expo floor at Microsoft Ignite 2016 in Atlanta.

In our configuration, this tiny two-server cluster provides over 20 TB of available storage capacity, and over 50 GB of available memory for a handful of mid-sized virtual machines. The storage is flash-accelerated, the chips are Intel Xeon, and the memory is error-correcting DDR4 – no compromises. The storage is mirrored to tolerate hardware failures – drive or server – with continuous availability. And if one server goes down or needs maintenance, virtual machines live migrate to the other server with no appreciable downtime.

(Did we mention it also has not one, but two 3.5mm headphone jacks? Hah!)

Kepler-47 is 45% smaller than standard 2U rack servers.

In terms of size, Kepler-47 is barely one cubic foot – 45% smaller than standard 2U rack servers. For perspective, this means both servers fit readily in one carry-on bag in the overhead bin!

We bought (almost) every part online at retail prices. The total cost for each server was just $1,101. This excludes the drives, which we salvaged from around the office, and which could vary wildly in price depending on your needs.

Each Kepler-47 server cost just $1,101 retail, excluding drives.

 

Technology

Kepler-47 is comprised of two servers, each running Windows Server 2016 Datacenter. The servers form one hyper-converged Failover Cluster, with the new Cloud Witness as the low-cost, low-footprint quorum technology. The cluster provides high availability to Hyper-V virtual machines (which may also run Windows, at no additional licensing cost), and Storage Spaces Direct provides fast and fault tolerant storage using just the local drives.

Additional fault tolerance can be achieved using new features such as Storage Replica with Azure Site Recovery.

Notably, Kepler-47 does not use traditional Ethernet networking between the servers, eliminating the need for costly high-speed network adapters and switches. Instead, it uses Intel Thunderbolt™ 3 over a USB Type-C connector, which provides up to 20 Gb/s (or up to 40 Gb/s when utilizing display and data together!) – plenty for replicating storage and live migrating virtual machines.

To pull this off, we partnered with our friends at Intel, who furnished us with pre-release PCIe add-in-cards for Thunderbolt™ 3 and a proof-of-concept driver.

Kepler-47 does not use traditional Ethernet between the servers; instead, it uses Intel Thunderbolt™ 3.

To our delight, it worked like a charm – here’s the Networks view in Failover Cluster Manager. Thanks, Intel!

The Networks view in Failover Cluster Manager, showing Thunderbolt™ Networking.

While Thunderbolt™ 3 is already in widespread use in laptops and other devices, this kind of server application is new, and it’s one of the main reasons Kepler-47 is strictly a prototype. It also boots from USB 3 DOM, which isn’t yet supported, and has no host-bus adapter (HBA) nor SAS expander, both of which are currently required for Storage Spaces Direct to leverage SCSI Enclosure Services (SES) for slot identification. However, it otherwise passes all our validation and testing and, as far as we can tell, works flawlessly.

(In case you missed it, support for Storage Spaces Direct clusters with just two servers was announced at Ignite!)


Parts List

Ok, now for the juicy details. Since Ignite, we have been asked repeatedly what parts we used. Here you go:

The key parts of Kepler-47.

Function             | Product                                                    | View Online | Cost
Motherboard          | ASRock C236 WSI                                            | Link        | $199.99
CPU                  | Intel Xeon E3-1235L v5 25w 4C4T 2.0GHz                     | Link        | $283.00
Memory               | 32 GB (2 x 16 GB) Black Diamond ECC DDR4-2133              | Link        | $208.99
Boot Device          | Innodisk 32 GB USB 3 DOM                                   | Link        | $29.33
Storage (Cache)      | 2 x 200 GB Intel S3700 2.5” SATA SSD                       | Link        | –
Storage (Capacity)   | 6 x 4 TB Toshiba MG03ACA400 3.5” SATA HDD                  | Link        | –
Networking (Adapter) | Intel Thunderbolt™ 3 JHL6540 PCIe Gen 3 x4 Controller Chip | Link        | –
Networking (Cable)   | Cable Matters 0.5m 20 Gb/s USB Type-C Thunderbolt™ 3       | Link        | $17.99*
SATA Cables          | 8 x SuperMicro CBL-0481L                                   | Link        | $13.20
Chassis              | U-NAS NSC-800                                              | Link        | $199.99
Power Supply         | ASPower 400W Super Quiet 1U                                | Link        | $119.99
Heatsink             | Dynatron K2 75mm 2 Ball CPU Fan                            | Link        | $34.99
Thermal Pads         | StarTech Heatsink Thermal Transfer Pads (Set of 5)         | Link        | $6.28*

* Just one needed for both servers.


Practical Notes

The ASRock C236 WSI motherboard is the only one we could locate that is mini-ITX form factor, has eight SATA ports, and supports server-class processors and error-correcting memory with SATA hot-plug. The E3-1235L v5 is just 25 watts, which helps keep Kepler-47 very quiet. (Dan has been running it literally on his desk since last month, and he hasn’t complained yet.)

Having spent all our SATA ports on the storage, we needed to boot from something else. We were delighted to spot the USB 3 header on the motherboard.

The U-NAS NSC-800 chassis is not the cheapest option. You could go cheaper. However, it features an aluminum outer casing, steel frame, and rubberized drive trays – the quality appealed to us.

We actually had to order two sets of SATA cables – the first were not malleable enough to weave their way around the tight corners from the board to the drive bays in our chassis. The second set we got are flat and 30 AWG, and they work great.

Likewise, we had to confront physical limitations on the heatsink – the fan we use is barely 2.7 cm tall, to fit in the chassis.

We salvaged the drives we used, for cache and capacity, from other systems in our test lab. In the case of the SSDs, they’re several years old and discontinued, so it’s not clear how to accurately price them. In the future, we imagine ROBO deployments of Storage Spaces Direct will vary tremendously in the drives they use – we chose 4 TB HDDs, but some folks may only need 1 TB, or may want 10 TB. This is why we aren’t focusing on the price of the drives themselves – it’s really up to you.

Finally, the Thunderbolt™ 3 controller chip in PCIe add-in-card form factor was pre-release, for development purposes only. It was graciously provided to us by our friends at Intel. They have cited a price-tag of $8.55 for the chip, but not made us pay yet. 🙂


Takeaway

With Project Kepler-47, we used Storage Spaces Direct and Windows Server 2016 to build an unprecedentedly low-cost high-availability solution for remote-office and branch-office needs. It delivers the simplicity and savings of hyper-convergence – compute and storage in a single two-server cluster, with next to no networking gear – on a very friendly budget.

Are you or is your organization interested in this type of solution? Let us know in the comments!


// Cosmos Darwin (@CosmosDarwin), Dan Lovinger, and Claus Joergensen (@ClausJor)


New themes, challenges and puzzles in Microsoft Mahjong for Windows 10


Microsoft Mahjong

Today, the Microsoft Casual Games team is proud to announce that Microsoft Mahjong, the classic tile-matching game, is now updated for Windows 10 with Daily Challenges, a new look and feel, and 20 new puzzles exclusively for Windows 10. You can now enjoy a total of more than 40 puzzles for hours of tile-matching fun! Microsoft Mahjong is now a Universal Windows App, which means new experiences can be created and added to the game faster than ever on both Windows 10 PCs and Windows 10 mobile devices.

Head over to Xbox Wire to read about all the new features in the game available on Windows 10 today! Microsoft Mahjong is available for free in the Windows Store.

Update Rollup 1 for System Center 2016 Data Protection Manager is now available


Update Rollup 1 for Microsoft System Center 2016 Data Protection Manager (DPM 2016 UR1) is now available. After you install this update, you can store backups by using Modern DPM Storage technology. Using Resilient File System (ReFS) block-cloning technology to store incremental backups, System Center 2016 Data Protection Manager significantly improves storage usage and performance. Here are some benefits:

  • 30-40 percent savings in storage
  • Backups that are 70 percent faster with Modern DPM Storage
  • Ability to configure workloads for storage on certain volumes
  • Backup storage inline with the production data source
Announcing RCT-based Hyper-V VM backups

System Center 2016 Data Protection Manager uses RCT-based change tracking in Windows Server 2016, which makes backups more reliable and scalable and improves backup performance. System Center 2016 Data Protection Manager also enables you to do the following:

  • Meet backup SLAs during cluster operating system rolling upgrade
  • Seamlessly protect and recover Shielded VMs
  • Protect VMs stored on Storage Spaces Direct
  • Protect VMs stored on ReFS-based SOFS clusters

For complete details, please see the following:

3190600 – Update Rollup 1 for System Center 2016 Data Protection Manager (https://support.microsoft.com/en-us/kb/3190600)


J.C. Hornbeck, Solution Asset PM
Microsoft Enterprise Cloud Group

Update Rollup 1 for Microsoft System Center 2016 Orchestrator is now available


The Microsoft System Center 2016 Orchestrator General Availability Update Rollup is now available. This update rollup package provides a collection of minor improvements to Orchestrator and we recommend that you apply this update rollup as part of your regular maintenance routines.

For details regarding how to obtain and install Orchestrator 2016 UR1, please see the following:

3190603 – Update Rollup 1 for Microsoft System Center 2016 Orchestrator (https://support.microsoft.com/en-us/kb/3190603)


Related Updates

3190604 – Update Rollup 1 for System Center 2016 Service Management Automation (https://support.microsoft.com/en-us/kb/3190604)

3190602 – Update Rollup 1 for System Center 2016 Orchestrator – Service Provider Foundation (https://support.microsoft.com/en-us/kb/3190602)

J.C. Hornbeck, Solution Asset PM
Microsoft Enterprise Cloud Group

Update Rollup 1 for System Center 2016 Operations Manager is now available


Just a quick note to let you know that Update Rollup 1 for System Center 2016 Operations Manager (OpsMgr 2016 UR1) is now available. For all the details regarding UR1 for OpsMgr 2016, please see the following:

3190029 – Update Rollup 1 for System Center 2016 Operations Manager (https://support.microsoft.com/en-us/kb/3190029)


J.C. Hornbeck, Solution Asset PM
Microsoft Enterprise Cloud Group

TLS for Windows Standards-Based Storage Management (SMI-S) and System Center Virtual Machine Manager (VMM)


In a previous blog post, I discussed setting up the Windows Standards-Based Storage Management Service (referred to below as Storage Service) on Windows Server 2012 R2. For Windows Server 2016 and System Center 2016 Virtual Machine Manager, configuration is much simpler, since installation of the service includes setting up the necessary self-signed certificate. We now also allow CA-signed certificates, provided the Common Name (CN) is “MSSTRGSVC”.

Before I get into those changes, I want to talk about the Transport Layer Security 1.2 (TLS 1.2) protocol, which is now a required part of the Storage Management Initiative Specification (SMI-S).

TLS 1.2

Secure communication through the Hypertext Transfer Protocol Secure (HTTPS) is accomplished using the encryption capabilities of Transport Layer Security, which is itself an update to the much older Secure Sockets Layer (SSL) protocol – although still commonly called Secure Sockets. Over the years, several vulnerabilities in SSL and TLS have been exposed, making earlier versions of the protocol insecure. TLS 1.2 is the latest version of the protocol and is defined by RFC 5246.

The Storage Networking Industry Association (SNIA) made TLS 1.2 a mandatory part of SMI-S (even retroactively). In 2015, the International Organization for Standardization (ISO) published ISO 27040:2015 “Information Technology – Security Techniques – Storage Security”, and this is incorporated by reference into the SMI-S protocol and pretty much all things SNIA.

Even though TLS 1.2 was introduced in 2008, its uptake was impeded by interoperability concerns. Adoption was accelerated after several exploits (e.g., BEAST) ushered out the older SSL 3.0 and TLS 1.0 protocols (TLS 1.1 did not see broad adoption). Microsoft Windows offered support for TLS 1.2 beginning in Windows 7 and Windows Server 2008 R2. That being said, there were still a lot of interop issues at the time, and TLS 1.1 and 1.2 support was hidden behind various registry keys.

Now it’s 2016, and there are no more excuses for using older, proven-insecure protocols, so it’s time to update your SMI-S providers. But unfortunately, you still need to take action to fully enable TLS 1.2. There are three primary Microsoft components that are used by the Storage Service which affect HTTPS communications between providers and the service: SCHANNEL, which implements the SSL/TLS protocols; HTTP.SYS, an HTTP server used by the Storage Service to support indications; and .NET 4.x, used by Virtual Machine Manager (VMM) (not by the Storage Service itself).

I’m going to skip some of the details of how clients and servers negotiate TLS versions (this may or may not allow older versions) and cipher suites (the most secure suite mutually agreed upon is always selected, but refer to this site for a recent exploit involving certain cipher suites).

A sidetrack: Certificate Validation

How certificates are validated varies depending on whether the certificate is self-signed or created by a trusted Certificate Authority (CA). For the most part, SMI-S will use self-signed certificates – and providers should never, ever, be exposed to the internet or another untrusted network. A quick overview:

A CA signed certificate contains a signature that indicates what authority signed it. The user of that certificate will be able to establish a chain of trust to a well-known CA.

A self-signed certificate needs to establish this trust in some other way. Typically, the self-signed certificate will need to be loaded into a local certificate store on the system that will need to validate it. See below for more on this.

In either case, the following conditions must be true: the certificate has not expired; the certificate has not been revoked (look up Revocation List for more about this); and the purpose of the certificate makes sense for its use. Additional checks include “Common Name” matching (disabled by default for the Storage Service; must not be used by providers) and key length. Note that we have seen issues when a certificate is valid “from” a time and there is a time mismatch between the provider and the Storage Service; these tend to cure themselves once the start time has passed on both ends of the negotiation. When using the Windows PowerShell cmdlet Register-SmisProvider, you will see this information.
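
For example, registering a provider with the cmdlet looks roughly like this – a sketch, assuming the cmdlet’s -Credential parameter for the provider account; the URI is a placeholder, and 5989 is the conventional SMI-S secure port:

# Prompt for the provider account credentials
$cred = Get-Credential
# Register the provider over HTTPS; the certificate details are shown for you to validate
Register-SmisProvider -ConnectionUri https://smis-provider.contoso.com:5989 -Credential $cred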

In some instances, your provider may ignore one or more of the validation rules and just accept any certificate that we present. A useful debugging approach but not very secure!

One more detail: when provisioning certificates for the SMI-S providers, make sure they use key lengths of 1024 or 2048 bits only. 512-bit keys are no longer supported due to recent exploits, and odd-length keys won’t work either – at least I have never seen them work, even though they are technically allowed.
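
If you need to generate a compliant test certificate yourself, the Windows Server 2016 version of New-SelfSignedCertificate can do it – a minimal sketch, with the subject name as a placeholder:

# Create a 2048-bit self-signed certificate in the local machine store
New-SelfSignedCertificate -Subject "CN=smis-provider.contoso.com" -KeyLength 2048 -CertStoreLocation Cert:\LocalMachine\My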

Microsoft product support for TLS 1.2

This article will discuss Windows Server and System Center Releases, and the .NET Framework. It should not be necessary to mess with registry settings that control cipher suites or SSL versions except as noted below for the .NET framework.

Windows Server 2012 R2/2016

Since the initial releases of these products, there have been many security fixes released as patches, and more than a few of them changed SCHANNEL and HTTP.SYS behavior. Rather than attempt to enumerate all of the changes, let’s just say it is essential to apply ALL security hotfixes.

If you are using Windows Server 2016 RTM, you also need to apply all available updates.

There is no .NET dependency.

System Center 2012 R2 Virtual Machine Manager

SC 2012 R2 VMM uses the .NET runtime library but the Storage Service does not. If you are using VMM 2012 R2, to fully support TLS 1.2, the most recent version of .NET 4.x should be installed; this is currently .NET 4.6.2. Also, update VMM to the latest Update Release.

If, for some reason, you must stay on .NET 4.5.2, then a registry change will be required to turn on TLS 1.2 on the VMM Server(s) since by default, .NET 4.5.2 only enables SSL 3.0 and TLS 1.0.

The registry value (which enables TLS 1.0, TLS 1.1 and TLS 1.2, but not SSL 3.0, which you should never use anyway) is:

HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319
"SchUseStrongCrypto"=dword:00000001


You can use this PowerShell command to change the behavior:

Set-ItemProperty -Path "HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319" -Name "SchUseStrongCrypto" -Value 1 -Type DWord -Force

(Note that the v4.0.30319 version number in the path applies regardless of the particular release of .NET 4.x; do not change it!)
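
You can read the value back to confirm the change took effect:

Get-ItemProperty -Path "HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319" -Name "SchUseStrongCrypto"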

This change will apply to every application using the .NET 4.x runtime on the same system. Note that Exchange 2013 does not support 4.6.x, but you shouldn’t be running VMM and Exchange on the same server anyway! Again, apply this to the VMM Server system or VM, which may not be the same place you are running the VMM UI.

System Center 2016 VMM

VMM 2016 uses .NET 4.6.2; no changes required.

Exporting the Storage Service Certificate

Repeating the information from a previous blog, follow these steps on the VMM Server machine:

  • Run MMC.EXE from an administrator command prompt.
  • Add the Certificates Snap-in using the File\Add/Remove Snap-in menu.
  • Make sure you select Computer Account when the wizard prompts you, select Next and leave Local Computer selected. Click Finish.
  • Click OK.
  • Expand Certificates (Local Computer), then Personal and select Certificates.
  • In the middle pane, you should see the msstrgsvc certificate. Right-click it, select All Tasks, then Export… That will bring up the Export Wizard.
  • Click Next to not export the private key (this might be grayed out anyway), then select a suitable format. Typically DER or Base-64 encoded are used but some vendors may support .P7B files. For EMC, select Base-64.
  • Specify a file to store the certificate. Note that Base-64 encoded certificates are text files and can be opened with Notepad or any other editing program.
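
If you prefer PowerShell over the MMC, something like this exports the same certificate – a sketch assuming the certificate subject contains “msstrgsvc”; Export-Certificate writes DER-encoded files, so convert with certutil if your vendor requires Base-64:

# Find the storage service certificate in the local machine store
$cert = Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -like "*msstrgsvc*" } | Select-Object -First 1
Export-Certificate -Cert $cert -FilePath C:\Temp\msstrgsvc.cer

# Convert DER to Base-64, e.g. for EMC providers
certutil -encode C:\Temp\msstrgsvc.cer C:\Temp\msstrgsvc-b64.cer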

Note: if you deployed VMM in an HA configuration, you will need to repeat these steps on each VMM Server instance. Your vendor’s SMI-S provider must support a certificate store that allows multiple certificates.

Storage Providers

Microsoft is actively involved in SNIA plugfests and directly with storage vendors to ensure interoperability. Some providers may require settings to ensure the proper security protocols are enabled and used, and many require updates.

OpenSSL

Many SMI-S providers and client applications rely on the open source project OpenSSL.

Storage vendors who use OpenSSL must absolutely keep up with the latest version(s) of this library and it is up to them to provide you with updates. We have seen a lot of old providers that rely on the long obsolete OpenSSL 0.9.8 releases or unpatched later versions. Microsoft will not provide any support if your provider is out-of-date, so if you have been lazy and not keeping up-to-date, time to get with the program. At the time of this writing there are three current branches of OpenSSL, each with patches to mend security flaws that crop up frequently. Consult the link above. How a provider is updated is a vendor-specific activity. (Some providers – such as EMC’s – do not use OpenSSL; check with the vendor anyway.)

Importing the Storage Service certificate

This step will vary greatly among providers. You will need to consult the vendor documentation for how to import the certificate into their appropriate Certificate Store. If they do not provide a mechanism to import certificates, you will not be able to use fully secure indications or mutual authentication with certificate validation.

Summary

To ensure you are using TLS 1.2 (and enabling indications), you must do the following:

  • Check with your storage vendor for the latest provider updates and apply them as directed
  • Update to .NET 4.6.2 on your VMM Servers or enable .NET strong cryptography if you must use .NET 4.5.x for any reason
  • Install the Storage Service (installing VMM will do this for you)
  • If you are using Windows Server 2012 R2, refer back to this previous blog post to properly configure the Storage Service (skip this for Windows Server 2016)
  • Export the storage service certificate
  • Import the certificate into your provider’s certificate store (see vendor instructions)
  • Then you can register one or more SMI-S providers, either through the Windows Register-SmisProvider cmdlet or using VMM


Improved overall Visual Studio “15” Responsiveness


This is the final post in a five-part series covering performance improvements for Visual Studio “15”.

In this post we will highlight some of the improvements we’ve made in the Preview 5 release that make using Visual Studio more responsive as part of your daily use. We’ll first talk about improvements to debugging performance, Git source control, editing XAML, and finally how you can improve your typing experience by managing your extensions.

Debugging is faster and doesn’t cause delays while editing

In Visual Studio 2005 we introduced what’s known as the hosting process for WPF, Windows Forms, and Managed Console projects to make “start debugging” faster by spinning up a process in the background that can be used for the next debug session. This well-intentioned feature was causing Visual Studio to temporarily become unresponsive for seconds when pressing “stop debugging” or otherwise using Visual Studio after the debug session ended.

In Preview 5 we have turned off the hosting process and optimized “start debugging” so that it is just as fast without the hosting process, and even faster for projects that never used the hosting process such as ASP.NET, Universal Windows, and C++ projects. For example, here are some startup times we’ve measured on our test machines for our sample UWP Photo Sharing app, a C++ app that does Prime Visualization, and a simpler WPF app:

To achieve these improvements, we’ve moved the costs of initializing the Diagnostic Tools window and IntelliTrace (which appear by default at the start of every debugging session) out of the start-debugging path. We changed the initialization of IntelliTrace so that it happens in parallel with the rest of the debugger and application startup. Additionally, we eliminated several inefficiencies in the way the IntelliTrace logger and Visual Studio processes communicate when stopping at a breakpoint.

We also eliminated several places where background threads related to the Diagnostic Tools window had to synchronously run code on the main Visual Studio UI thread. This made our ETW event collection more asynchronous so that we don’t have to wait for old ETW sessions to finish when restarting debugging.

Source code operations are faster with Git.exe

When we introduced Git support in Visual Studio, we used a library called libgit2. With libgit2, we have had issues with functionality differing between libgit2 and the git.exe you use from the command prompt, and with libgit2 adding hundreds of megabytes of memory pressure to the main Visual Studio process.

In Preview 5, we have swapped this implementation out and are calling git.exe out of process instead, so while git is still using memory on the machine, it is not adding memory pressure to the main VS process. We expect that using git.exe will also allow us to make git operations faster over time. So far we have found git clone operations to be faster with large repos: cloning the Roslyn .NET Compiler repo on our machines is 30% faster, taking 4 minutes in Visual Studio ‘15’ compared with 5 minutes, 40 seconds in Visual Studio 2015. The following video shows this (for convenience the playback is at 4x speed):

In the coming release we hope to make more operations faster with this new architecture.

We have also addressed a top complaint when using git: switching branches from the command line can cause Visual Studio to reload all projects in the solution one at a time. In the file change notification dialog, we’ve replaced ‘Reload All’ with ‘Reload Solution’:

This will kick off a single async solution reload which is much faster than the individual project reloads.

Improved XAML Tab switch speed

Based on data and customer feedback, we believe that 25% of developers experience at least one tab-switch delay of more than 1 second per day when switching between XAML tabs. On further investigation, we found that these delays were caused by running the markup compiler, and we’ve made use of the XAML language service to make this substantially faster. Here’s a video showing the improvements we’ve made:

The markup compiler is what creates the g.i.* file for each XAML file which, among other things, contains fields that represent the named elements in the XAML file, enabling you to reference those named elements from code-behind. For example, given <Button x:Name="myButton" />, the g.i.* file will contain a field named myButton of type Button that allows you to use “myButton” in code.

Certain user actions such as saving or switching away from an unsaved XAML file will cause Visual Studio to update the g.i.* file to ensure that IntelliSense has an up-to-date view of your named elements when you open your code-behind file. In past releases, this g.i.* update was always done by the markup compiler. In managed (C#/VB) projects the markup compiler is run on the UI thread, resulting in a noticeable delay switching between tabs on complex projects.

We have fixed this issue in Preview 5 by leveraging the XAML Language Service’s knowledge of the XAML file to determine the names and types of the fields to populate IntelliSense, and then update the g.i.* file using Roslyn on a background thread. This is substantially faster than running the markup compiler because the language service has already done all the parsing and type metadata loading that causes the compiler to be slow. If the g.i.* file does not exist (e.g. after renaming a XAML file, or after you delete your project’s obj directory), we will need to run the markup compiler to generate the g.i.* file from scratch and you may still see a delay.

Snappier XAML typing experience

The main cause of UI delays in our XAML Language Service were related to initialization, responding to metadata changes, and loading design assemblies. We have addressed all three of these delays by moving the work to a background thread.

We’ve also made the following improvements to design assembly metadata loading:

  • A new serialization layer for design assembly metadata that significantly reduces cross boundary calls.
  • Reuse of the designer’s assembly shadow cache for WPF projects. Reuse of the shadow cache across sessions for all project types. The shadow cache used to be recreated on every metadata change.
  • Design assembly metadata is now cached for the duration of the session instead of being recomputed on every metadata change.

These changes also allow XAML IntelliSense to be available before the solution is fully loaded.

Find out which extensions cause typing delays

We have received a number of reports about delays while typing. We are continuing to make bug fixes to improve these issues, but in many cases delays while typing are caused by extensions running code during key strokes. To help you determine if there are extensions impacting your typing experience, we have added reporting to Help -> Manage Visual Studio performance, and will notify you when we detect extensions slowing down typing.

Notification of extensions slowing down typing

You can see more details about extensions in Help -> Manage Visual Studio Performance

Try it out and report issues

We are continuing to work on improving the responsiveness of Visual Studio, and this post contains some examples of what we have accomplished in Preview 5. We need your help to focus our efforts on the areas that matter most to you, so please download Visual Studio ‘15’ Preview 5, and use the Report-a-Problem tool to report areas where we can make Visual Studio better for you.

Dan Taylor, Senior Program Manager

Dan Taylor has been at Microsoft for 5 years working on performance improvements to .NET and Visual Studio, as well as profiling and diagnostic tools in Visual Studio and Azure.

This Week on Windows: Gears of War 4, Windows Ink and more


We hope you enjoyed this week’s episode of This Week on Windows! Read more about the HP Elite x3, now available in Microsoft Stores and this week’s milestone for Microsoft HoloLens, or head over here to learn how to get started with the Windows Ink Workspace.

Here’s what’s new in the Windows Store this week:

Gears of War 4 now available on Xbox One and Windows 10 PC

Gears of War 4

We’re thrilled to announce the official release of Gears of War 4 ($59.99 Standard Edition, $99 Ultimate Edition) with fans around the world. Available exclusively on Xbox One and Windows 10 PC, Gears of War 4 marks the beginning of a new saga for one of the most acclaimed videogame franchises in history. And, until Oct. 20, you can earn 10,000 bonus Microsoft Rewards points when you buy the Ultimate Edition.

Instagram now available for Windows 10 PCs and Tablets 

Instagram for Windows 10 PCs and tablets

Windows 10 PCs and tablets get the entire Instagram experience — including Instagram Stories. The Instagram app for Windows 10, already available for mobile, is now available for Windows 10 tablets and PCs. Get the free app from the Windows Store and read more about the news!

DC’s Legends of Tomorrow – Buy from $24.99


After their defeat of the immortal villain Vandal Savage, the Legends will clash with foes both past and present to save the world from a mysterious new threat. Watch the season 2 premiere of DC’s Legends of Tomorrow, available now in the Movies & TV section of the Windows Store.

Microsoft Mahjong – Free


Microsoft Mahjong, the classic tile-matching game, is now updated for Windows 10 with Daily Challenges, a new look and feel, and 20 new puzzles exclusively for Windows 10. You can now enjoy a total of more than 40 puzzles for hours of tile-matching fun!  Microsoft Mahjong is now a Universal Windows App, which means new experiences can be created and added to the game faster than ever on both Windows 10 PCs and Windows 10 mobile devices.

Have a great weekend!


Managing Focused Inbox in Office 365 and Outlook


Focused Inbox—focus on the emails that matter most

For many, the inbox is the command center for their day. It’s the way to keep track of what is going on and what needs to get done. Outlook’s Focused Inbox makes this process easier by helping you focus on the emails that matter most to you. It separates your inbox into two tabs—Focused and Other. Emails that matter most to you are in the Focused tab, while the rest remain easily accessible—but out of the way in the Other tab. You’ll be informed about email flowing to “Other”, and you can switch between tabs at any time to take a quick look.

For more about what makes Focused Inbox great, see https://blogs.office.com/2016/07/26/outlook-helps-you-focus-on-what-matters-to-you/.

Admin Control available for Focused Inbox:

Ensure certain business critical mails land in the Focused tab.

Tenant admins will have controls to ensure certain business critical communications, like HR, Payroll, etc., always land in the user’s Focused tab of the Inbox. These whitelists can be set up using mail flow rules from the Admin center or via PowerShell cmdlets. After these are set up successfully, all future messages that satisfy these mail flow rules would be delivered in the Focused tab of the Inbox on Outlook clients that support Focused Inbox.
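
For example, a mail flow rule that always routes payroll mail to the Focused tab might look like the following Exchange Online PowerShell sketch – the rule name and sender address are placeholders:

# Stamp the header that tells Focused Inbox to keep this mail in the Focused tab
New-TransportRule -Name "Payroll to Focused" -From "payroll@contoso.com" -SetHeaderName "X-MS-Exchange-Organization-BypassFocusedInbox" -SetHeaderValue "true"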

Tenant and mailbox level control to enable/disable Focused Inbox.

Tenant admins will have controls to enable/disable Focused Inbox on Outlook clients for all current and future mailboxes or select mailboxes in their tenant. These controls will be available via PowerShell cmdlets.

If tenant admins enable/disable Focused Inbox, the Focused Inbox experience on Outlook clients would be turned ON/OFF for these users the next time they restart the client.

These controls do not block the availability of the feature for these users. If the users so desire, they can still re-enable the feature individually again on each of their clients.
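
As a sketch of what these cmdlets look like – the mailbox identity is a placeholder:

# Disable Focused Inbox for the whole tenant
Set-OrganizationConfig -FocusedInboxOn $false

# Disable Focused Inbox for a single mailbox
Set-FocusedInbox -Identity "adele@contoso.com" -FocusedInboxOn $false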

Transition from Clutter

Focused Inbox is a refinement and improvement of a previous feature called Clutter. Clutter’s purpose was also to help you focus on the most important items in your inbox, but it did so by moving “Other” email to a separate folder. Focused Inbox makes it easier for you to stay on top of incoming email without having to visit another folder.

Active Clutter users will have to opt-in to Focused Inbox and will be able to do so from an in-app prompt in Outlook. After they opt-in, they will no longer receive less important email in the “Clutter” folder. Instead, email will be split between the Focused and Other tabs in their inbox.

The same machine-learned algorithm that moved items to the Clutter folder now powers Focused Inbox, meaning that any emails that were set to move to Clutter will now be moved to Other. The learning and training that users invested in Clutter will be transitioned to Focused Inbox without any effort on the user’s part.

Users can keep using the existing Clutter experience through the transition. However, after the transition period, Clutter will be completely replaced by Focused Inbox. Tenant admins would be proactively notified before this change is made.

Roll out of Focused Inbox

Focused Inbox will be rolled out in a staged manner in accordance with the change management policies for Office 365. The feature will first be rolled out to customers who have selected the First Release cadence. Once it’s determined that the feature is ready for a broader audience, it will be rolled out to everyone in the service, including customers who have selected the Standard Release cadence.

Updates to the stages of roll out will be announced on the Office 365 Public road map and more detailed information will be communicated closer to the roll out via the Office 365 Message Center.

Frequently asked questions:

  1. What happens to the Clutter folder after a user enables Focused Inbox?

    After Focused Inbox is enabled for a mailbox, the Clutter folder will be demoted to a regular user folder. All regular user-created folder operations will be supported, including delete.

  2. Could admins clean up the Clutter folder for their users without enabling Focused Inbox?

    We will enable admins to clean up the Clutter folder for their users, if they so desire. This will be supported via the current PowerShell cmdlets.

  3. Is Focused Inbox available to on-premises users?

    Focused Inbox only applies to Office 365 Exchange Online tenants and users.

The Exchange Team

Got your Daily Skimm?


theSkimm makes it easier for busy professionals—living in a world of information overload—to be in-the-know on current events, key world news and trends. Microsoft Office is partnering with theSkimm to bring their unique content to subscribers and make it easier to be smarter with new integrations with Office Connectors and Skype Bots.

Read more about theSkimm partnership at betterwith.office.com.

The post Got your Daily Skimm? appeared first on Office Blogs.

UML Designers have been removed; Layer Designer now supports live architectural analysis


We are removing the UML designers from Visual Studio “15” Enterprise. Removing a feature is always a hard decision, but we want to ensure that our resources are invested in features that deliver the most customer value.  Our reasons are twofold:

  1. On examining telemetry data, we found that the designers were being used by very few customers, and this was confirmed when we consulted with our sales and technical support teams.
  2. We were also faced with investing significant engineering resource to react to changes happening in the Visual Studio core for this release.

If you are a significant user of the UML designers, you can continue to use Visual Studio 2015 or earlier versions, whilst you decide on an alternative tool for your UML needs.
 
However, we continue to support visualizing the architecture of .NET and C++ code through code maps, and for this release have made some significant improvements to Layer (dependency) validation. On interviewing customers about technical debt, architectural debt – in particular unwanted dependencies – surfaces as a significant pain point. Since 2010, Visual Studio Ultimate, now Enterprise, has included the Layer Designer, which allows desired dependencies in .NET code to be specified and validated. However, validation only happens at build time, and errors only surface at the method level, not at the lines of code which are actually violating the declared dependencies.

In this release, we have rewritten layer validation to use the .NET Compiler Platform (“Roslyn”), which allows architecture validation to happen in real time, as you type, as well as on build, and also means that reported errors are treated in the user experience like any other code analysis error. This means that developers are less likely to write code that introduces unwanted dependencies, as they will be alerted in the editor as they type. Moving to Roslyn also makes it possible to create a plugin for SonarQube, allowing layer validation errors to be reported with other technical debt during continuous integration and code review via pull requests, using the SonarQube build tasks integrated with Visual Studio Team Services. The plugin is on our near-term backlog.
 

If you haven’t tried the Layer Designer before, we encourage you to give it a try. More detail on how to use it is available in Live architecture dependency validation in Visual Studio ’15’ Preview 5. And please provide feedback, not only on the experience, but also on other rules you would like to see implemented.

Office 365 news roundup


What do workers in small, medium-size and enterprise organizations have in common? They all want productivity tools that can help them work smarter and collaborate more effectively, from almost any location and on almost any device. That’s exactly what Office 365 is designed to deliver.

During the past few weeks, we have introduced several new features to enhance the capabilities of Office 365. Last month at Ignite 2016, we updated attendees on Office 365 Groups, our complete group collaboration solution. And just this week we made another announcement tied to the integration of Yammer and Office 365 Groups. Now, you can create and co-author Office documents from within a Yammer group, and browse your SharePoint and OneDrive libraries to share files and start discussions with your team on Yammer.

We recently announced a limited preview of the new Office 365 adoption content pack, which combines the intelligence of usage reports from the new Office 365 admin center with the interactive reporting capabilities of Power BI to help you get the most out of Office 365. The Office 365 adoption content pack enables you to visualize and analyze Office 365 usage data, create custom reports, share the insights within your organization and pivot by attributes such as location and department. We also announced the rollout of new auditing and reporting capabilities for Yammer, to give organizations more visibility and control over their data within the cloud services they use. These new capabilities are powered by the Office 365 Management Activity API and the Office 365 Security & Compliance Center.

We also provide resources to help you use Office 365 more effectively. Two of our most recent additions are the Office 365 Live Demo webinar series and the Office 365 Enterprise test lab guides. The webinar series is designed to help organizations use Office 365 to boost the productivity of their remote teams. The test lab guides help organizations configure Office 365 features specifically tailored for the needs of enterprises, and create a working dev/test environment they can use for further experimentation.

Below is a roundup of some key news items from the last couple of weeks. Enjoy!

Microsoft Office 365 for business: What you need to know—Discover how Office 365 offers businesses of every size exceptional value and performance.

City of Kansas City empowers employees, provides better services with Office 365—Learn how Office 365 benefits both city employees and citizens in Kansas City.

The Hershey Company: where collaboration and productivity are a recipe for goodness—Find out how Office 365 is providing the legendary candy company with a sweet solution for digital transformation.

Businesses can now use Power BI to gauge Microsoft Office usage—Discover how your organization can determine whether employees are using the full potential of Office 365 and assess its return on investment.

Microsoft adds intelligent cloud collaboration features to Office 365—Learn how you and your organization can benefit from the intelligent cloud collaboration features in Office 365.

The post Office 365 news roundup appeared first on Office Blogs.

Exploring ASP.NET Core with Docker in both Linux and Windows Containers


In May of last year doing things with ASP.NET and Docker was in its infancy. But cool stuff was afoot. I wrote a blog post showing how to publish an ASP.NET 5 (5 at the time, now Core 1.0) app to Docker. Later in December of 2015 new tools like Docker Toolbox and Kitematic made things even easier. In May of 2016 Docker for Windows Beta continued to move the ball forward nicely.

I wanted to see how things are looking with ASP.NET Core, Docker, and Windows here in October of 2016.

I installed Docker for Windows and the Visual Studio Tools for Docker extension.

Docker for Windows is really nice as it automates setting up Hyper-V for you and creates the Docker host OS and gets it all running. This is a big time saver.

Hyper-V manager

There's my Linux host that I don't really have to think about. I'll do everything from the command line or from Visual Studio.

I'll say File | New Project and make a new ASP.NET Core application running on .NET Core.

Then I right click and Add | Docker Support. This menu comes from the Visual Studio Tools for Docker extension. This adds a basic Dockerfile and some docker-compose files. Out of the box, I'm all setup to deploy my ASP.NET Core app to a Docker Linux container.

ASP.NET Core in a Docker Linux Container

Starting from my ASP.NET Core app, I'll make sure my base image (that's the FROM in the Dockerfile) is the base ASP.NET Core image for Linux.

FROM microsoft/aspnetcore:1.0.1
ENTRYPOINT ["dotnet", "WebApplication4.dll"]
ARG source=.
WORKDIR /app
EXPOSE 80
COPY $source .

Next, since I don't want Docker to do the building of my application yet, I'll publish it locally. Be sure to read Steve Lasker's blog post "Building Optimized Docker Images with ASP.NET Core" to learn how to have one docker container build your app and the other run it. This optimizes server density and resource usage.

I'll publish, then build the images, and run it.

>dotnet publish

>docker build bin\Debug\netcoreapp1.0\publish -t aspnetcoreonlinux

>docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
aspnetcoreonlinux latest dab2bff7e4a6 28 seconds ago 276.2 MB
microsoft/aspnetcore 1.0.1 2e781d03cb22 44 hours ago 266.7 MB

>docker run -it -d -p 85:80 aspnetcoreonlinux
1cfcc8e8e7d4e6257995f8b64505ce25ae80e05fe1962d4312b2e2fe33420413

>docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1cfcc8e8e7d4 aspnetcoreonlinux "dotnet WebApplicatio" 2 seconds ago Up 1 seconds 0.0.0.0:85->80/tcp clever_archimedes

And there's my ASP.NET Core app running in Docker. So I'm running Windows, running Hyper-V, running a Linux host that is hosting Docker containers.

What else can I do?

ASP.NET Core in a Docker Windows Container running Windows Nano Server

There's Windows Server, there's Windows Server Core that removes the UI among other things and there's Windows Nano Server which gets Windows down to like hundreds of megs instead of many gigs. This means there's a lot of great choices depending on what you need for functionality and server density. Ship as little as possible.

Let me see if I can get ASP.NET Core running on Kestrel under Windows Nano Server. Certainly, since Nano is very capable, I could run IIS within the container and there's docs on that.

Michael Friis from Docker has a great blog post on building and running your first Docker Windows Server Container. With the new Docker for Windows you can just right click on it and switch between Linux and Windows Containers.

Docker switches between Linux and Windows containers easily

So now I'm using Docker with Windows Containers. You may not know that you likely already have Windows Containers! It was shipped inside Windows 10 Anniversary Edition. You can check for Containers in Features:

Add Containers in Windows 10

I'll change my Dockerfile to use the Windows Nano Server image. I can also control the ports that ASP.NET talks on if I like with an Environment Variable and Expose that within Docker.

FROM microsoft/dotnet:nanoserver
ENTRYPOINT ["dotnet", "WebApplication4.dll"]
ARG source=.
WORKDIR /app
ENV ASPNETCORE_URLS http://+:82
EXPOSE 82
COPY $source .

Then I'll publish and build...

>dotnet publish
>docker build bin\Debug\netcoreapp1.0\publish -t aspnetcoreonnano

Then I'll run it, mapping the ports from Windows outside to the Windows container inside!

NOTE: There's a bug as of this writing that affects how Windows 10 talks to Containers via "NAT" (Network Address Translation) such that you can't easily go http://localhost:82 like you (and I) want to. Today you have to hit the IP of the container directly. I'll report back once I hear more about this bug and how it gets fixed. It'll show up in Windows Update one day. The workaround is to get the IP address of the container from docker like this:  docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" HASH

So I'll run my ASP.NET Core app on Windows Nano Server (again, to be clear, this is running on Windows 10 and Nano Server is inside a Container!)

>docker run -it -d -p 88:82 aspnetcoreonnano
afafdbead8b04205841a81d974545f033dcc9ba7f761ff7e6cc0ec8f3ecce215

>docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" afa
172.16.240.197

Now I can hit that site with 172.16.240.197:82. Once that bug above is fixed, it'll get hit and routed like any container.

The best part about Windows Containers is that they are fast and lightweight. Once the image is downloaded and built on your machine, you're starting and stopping them in seconds with Docker.

BUT, you can also isolate Windows Containers using Docker like this:

docker run --isolation=hyperv -it -d -p 86:82 aspnetcoreonnano

So now this instance is running fully isolated within Hyper-V itself. You get the best of all worlds. Speed and convenient deployment plus optional and easy isolation.

ASP.NET Core in a Docker Windows Container running Windows Server Core 2016

I can then change the Dockerfile to use the full Windows Server Core image. This is 8 gigs so be ready as it'll take a bit to download and extract but it is really Windows. You can also choose to run this as a container or as an isolated Hyper-V container.

Here I just change the FROM to get a Windows Server Core image with .NET Core included.

FROM microsoft/dotnet:1.0.0-preview2-windowsservercore-sdk
ENTRYPOINT ["dotnet", "WebApplication4.dll"]
ARG source=.
WORKDIR /app
ENV ASPNETCORE_URLS http://+:82
EXPOSE 82
COPY $source .

NOTE: I hear it's likely that the .NET Core on Windows Server Core images will go away. It makes more sense for .NET Core to run on Windows Nano Server or other lightweight images. You'll use Server Core for heavier stuff. If you REALLY want to have .NET Core on Server Core you can make your own Dockerfile and easily build an image that has the things you want.

Then I'll publish, build, and run again.

>docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
aspnetcoreonnano latest 7e02d6800acf 24 minutes ago 1.113 GB
aspnetcoreonservercore latest a11d9a9ba0c2 28 minutes ago 7.751 GB

Since containers are so fast to start and stop I can have a complete web farm running with Redis in a Container, SQL in another, and my web stack in a third. Or mix and match.

>docker ps
CONTAINER ID IMAGE COMMAND PORTS NAMES
d32a981ceabb aspnetcoreonwindows "dotnet WebApplicatio" 0.0.0.0:87->82/tcp compassionate_blackwell
a179a48ca9f6 aspnetcoreonnano "dotnet WebApplicatio" 0.0.0.0:86->82/tcp determined_stallman
170a8afa1b8b aspnetcoreonnano "dotnet WebApplicatio" 0.0.0.0:89->82/tcp agitated_northcutt
afafdbead8b0 aspnetcoreonnano "dotnet WebApplicatio" 0.0.0.0:88->82/tcp naughty_ramanujan
2cf45ea2f008 a7fa77b6f1d4 "dotnet WebApplicatio" 0.0.0.0:97->82/tcp sleepy_hodgkin

Conclusion

Again, go check out Michael's article where he uses Docker Compose to bring up the ASP.NET Music Store sample with SQL Express in one Windows Container and ASP.NET Core in another as well as Steve Lasker's blog (in fact his whole blog is gold) on making optimized Docker images with ASP.NET Core.

IMAGE ID            REPOSITORY                    TAG                 SIZE
0ec4274c5571 web optimized 276.2 MB
f9f196304c95 web single 583.8 MB
f450043e0a44 microsoft/aspnetcore 1.0.1 266.7 MB
706045865622 microsoft/aspnetcore-build 1.0.1 896.6 MB

Steve points out a number of techniques that will allow you to get the most out of Docker and ASP.NET Core.

The result of all this means (IMHO) that you can use ASP.NET Core:

  • ASP.NET Core on Linux
    • within Docker containers
    • in any Cloud
  • ASP.NET Core on Windows, Windows Server, Server Core, and Nano Server.
    • within Docker windows containers
    • within Docker isolated Hyper-V containers

This means you can choose the level of feature support and size to optimize for server density and convenience. Once all the tooling (the Docker folks with Docker for Windows and the VS folks with Visual Studio Docker Tools) is baked, we'll have nice debugging and workflows from dev to production.

What have you been doing with Docker, Containers, and ASP.NET Core? Sound off in the comments.


Sponsor: Thanks to Redgate this week! Discover the world’s most trusted SQL Server comparison tool. Enjoy a free trial of SQL Compare, the industry standard for comparing and deploying SQL Server schemas.



© 2016 Scott Hanselman. All rights reserved.
     