Channel: TechNet Technology News

Killer Instinct: Definitive Edition launches on Windows 10 with Xbox Play Anywhere


That’s right, you read the headline correctly – Killer Instinct: Definitive Edition releases for Windows 10 today. Previously available only on Xbox One, the Definitive Edition packs up ALL the content we’ve ever released – every character (psst, there are 26), stage (20 in full glory), costume, color, trailers and tracks – and puts it all into one box of awesome goodness. As an added bonus, it also comes with behind-the-scenes videos, never-before-seen concept art, and a full universe map so you can read the bios and backstories of all your favorite characters.

With this release, Killer Instinct: Definitive Edition is now an Xbox Play Anywhere title – which means if you purchase the digital version (this also means current players on Xbox One) you can play on both Xbox One and Windows 10 PCs at no additional cost. Plus, Xbox Live cloud-saved player profiles enable you to access all your progress and add-on content, allowing you to pick up where you left off and bring all your saves, game add-ons and Achievements with you on both platforms. Killer Instinct: Definitive Edition also supports cross-play between Xbox One and Windows 10, so you can play with your friends regardless of which platform they use.

Killer Instinct: Definitive Edition is available today for $39.99 in the Windows Store. Read more over at Xbox Wire!

The post Killer Instinct: Definitive Edition launches on Windows 10 with Xbox Play Anywhere appeared first on Windows Experience Blog.


ICYMI – Build 14986, BUILD 2017, and Windows 10 on ARM-based computers!?


What a time to be a Windows developer.

This week we got a new Windows Insider Preview Build, new options added to the Desktop Bridge, expanded access to customer segmentation and notifications, and finally some big news from the Windows Hardware Engineering Conference (also known as WinHEC 2016). More details below!

Windows 10 Insider Preview Build 14986

In the latest Insider Preview Build, you’ll find a treasure chest of updates. Our favorites include improvements to Cortana, the Windows Game Bar, Ink and enhancements to the Windows 10 experience in Asia. Click the link in the above title to read the blog post!

Desktop Bridge – Options for bringing Win32 Apps to the UWP

Use the Desktop App Converter to gain access to UWP APIs and simplify your app’s installation process with Windows 10 app packaging technology.

Customer Segmentation and Notifications

Developers can build customer segments and use the segments to create custom retargeting and reengagement ad campaigns for their UWP apps. Additionally, devs can use these segments to send push notifications. Previously this ability was only available to Windows Insiders. So on that note, we hope you enjoy it!

Windows 10 on ARM-based Computers (!)

It’s not too good to be true, nor is it some sort of developer witchcraft. It’s real and represents a huge step forward in mobile technology. Here’s a preview of what Windows 10 looks like on a Qualcomm Snapdragon processor.

And last, but not least…

BUILD 2017 dates and location announced!

And that’s it! We’re excited to get the BUILD 2017 ball rolling, and as always, tweet us @WindowsDev with any questions or feedback.

Download Visual Studio to get started.

The Windows team would love to hear your feedback. Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.

The post ICYMI – Build 14986, BUILD 2017, and Windows 10 on ARM-based computers!? appeared first on Building Apps for Windows.

Windows 10: protection, detection, and response against recent attacks


A few weeks ago, multiple organizations in the Middle East fell victim to targeted and destructive attacks that wiped data from computers and in many cases rendered them unstable and unbootable. Destructive attacks like these have been observed repeatedly over the years, and the Windows Defender and Windows Defender Advanced Threat Protection Threat Intelligence teams are working on protection, detection, and response to these threats.

Microsoft Threat Intelligence identified similarities between this recent attack and previous 2012 attacks against tens of thousands of computers belonging to organizations in the energy sector. Microsoft Threat Intelligence refers to the activity group behind these attacks as TERBIUM, following our internal practice of assigning rogue actors chemical element names.

Although the extent of damage caused by this latest attack by TERBIUM is still unknown, Windows 10 customers are protected. Windows 10 has built-in proactive security components, such as Device Guard, that mitigate this threat; Windows Defender customers are protected through multiple signature-based detections; and Windows Defender Advanced Threat Protection (ATP) customers are provided extensive visibility and detection capabilities across the attack kill chain, enabling security operation teams to respond quickly. Microsoft’s analysis has shown that the components and techniques used by TERBIUM in this campaign trigger multiple detections and threat intelligence alerts in Windows Defender Advanced Threat Protection.

Attack composition

Microsoft Threat Intelligence has observed that the malware used by TERBIUM, dubbed “Depriz” by Microsoft, reuses several components and techniques seen in the 2012 attacks, and has been highly customized for each targeted organization.

We do not see any indicators that a zero-day exploit is being used by TERBIUM.

Step 1: Writing to disk

The initial infection vector TERBIUM uses is unknown. Because credentials are hard-coded in the malware, it is suspected that TERBIUM harvested credentials from, or previously infiltrated, the target organization. Once TERBIUM has a foothold in the organization, its infection chain starts by writing an executable file to disk that contains all the components required to carry out the data-wiping operation. These components are encoded in the executable's resources as fake bitmap images.


Figure 1. The components of the Trojan are fake bitmap images

We decoded the components as the following files:

  • PKCS12 – a destructive disk wiper component
  • PKCS7 – a communication module
  • X509 – 64-bit variant of the Trojan/implant

Step 2: Propagation and persistence through the target network

We have seen TERBIUM use hardcoded credentials embedded in the malware to propagate within a local network. The availability of these credentials to the activity group suggests that the attacks are highly targeted at specific enterprises.

The propagation and persistence is carried out as follows:

  1. First, it tries to start the RemoteRegistry service on the computer it is trying to copy itself to, then uses RegConnectRegistryW to connect to it.
  2. Next, it attempts to disable UAC remote restrictions by setting the LocalAccountTokenFilterPolicy registry key value to “1”.
  3. Once this is done, it connects to the target computer and copies itself as %System%\ntssrvr32.exe or %System%\ntssrvr64.exe before setting either a remote service called “ntssv” or a scheduled task.
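
Viewed from a defender's perspective, the three steps above can be sketched as a checklist of observable events. This is a hypothetical Python model for reasoning about detections (function and hostname names are invented; this is not TERBIUM's actual code):

```python
def plan_lateral_movement(target: str, is_64bit: bool) -> list:
    """Model of the propagation sequence described above, expressed as
    the ordered events a defender would expect to observe."""
    # The implant file name depends on the architecture of the target.
    implant = "ntssrvr64.exe" if is_64bit else "ntssrvr32.exe"
    return [
        f"start RemoteRegistry on {target}, connect via RegConnectRegistryW",
        "set LocalAccountTokenFilterPolicy = 1 (disable UAC remote restrictions)",
        f"copy self to %System%\\{implant}",
        "register remote service 'ntssv' or a scheduled task",
    ]
```

Hunting for any one of these events, the LocalAccountTokenFilterPolicy registry change in particular, is a reasonable detection heuristic.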

Step 3: Wiping the machine

Next, the Trojan installs the wiper component. Note: TERBIUM establishes a foothold throughout the organization and does not proceed with the destructive wiping operation until a specific date/time: November 17, 2016 at 8:45 p.m.
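
The date/time gate amounts to a simple comparison. A minimal illustrative sketch, using the trigger reported above:

```python
from datetime import datetime

# Trigger reported for this campaign: November 17, 2016 at 8:45 p.m.
WIPE_TRIGGER = datetime(2016, 11, 17, 20, 45)

def wiper_armed(now: datetime) -> bool:
    # The implant stays dormant until the trigger date/time passes.
    return now >= WIPE_TRIGGER
```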

The wiper component is installed as %System%\.exe. During our testing, it used the name “routeman.exe”, but static analysis shows it can use several other names that attempt to imitate file names of legitimate system tools.

The wiper component also contains encoded files in its resources as fake bitmap images.

The first encoded resource is a legitimate driver called RawDisk from the Eldos Corporation that gives a user-mode component raw disk access. The driver is saved as %System%\drivers\drdisk.sys and installed by creating a service pointing to it using “sc create” and “sc start”. This behavior can be observed in the process tree available in the Windows Defender ATP portal. The alert below is an example of the generic detections in Windows Defender ATP:

Figure 2. Windows Defender ATP alert: Depriz starting ephemeral service to load RawDisk driver “drdisk”


Figure 3. Windows Defender ATP event tree: Depriz Trojan dropping the wiper component (named “routeman” in this instance), which in turn drops the RawDisk driver “drdisk”

 

There are two interesting things worth noting about RawDisk:

  • It requires a valid license key from Eldos Corporation to run. However, the license key included in Depriz is the same as the one used in the 2012 attacks – and this license key was only valid for a short period in 2012. TERBIUM works around this by changing the system time on targeted computers to a valid period in 2012.
  • It is the same as the driver used in the 2012 attacks.

 

Figure 4. Depriz license key (the same as the one used in 2012 attacks) and its limited validity period

 

The wiper component uses an image file to overwrite files in locations listed in the following:

  • Master Boot Records (MBR)
  • HKLM\System\CurrentControlSet\Control\SystemBootDevice
  • HKLM\System\CurrentControlSet\Control\FirmwareBootDevice
  • C:\Windows\System32\Drivers
  • C:\Windows\System32\Config\systemprofile
  • Typical user folders like “Desktop”, “Downloads”, “Documents”, “Pictures”, “Videos” and “Music”

Microsoft is also aware of a second threat that uses a distinct wiping component. Windows Defender detects this as Trojan:Win32/Cadlotcorg.A!dha, and Windows Defender ATP raises generic detections for it. Microsoft is continuing to monitor this threat for additional information.

Step 4: Rendering the machine unusable

Finally, the following command is used to reboot the system into the intended unusable state (-r restart, -f force running applications to close, -t 2 wait two seconds before shutting down):

shutdown -r -f -t 2

When the computer attempts to restart after shutting down, it is unable to find the operating system because the MBR was overwritten in step 3. The machine will no longer boot properly.

Mitigation: Multiple layers of protection from Microsoft

Windows 10 protects, detects and responds to this threat. Windows 10 has built-in proactive security components, such as Device Guard, that mitigate this threat by restricting execution to trusted applications and kernel drivers.

In addition, Windows Defender detects and remediates all components on endpoints as Trojan:Win32/Depriz.A!dha, Trojan:Win32/Depriz.B!dha, Trojan:Win32/Depriz.C!dha, and Trojan:Win32/Depriz.D!dha.

Windows Defender Advanced Threat Protection (ATP), our post-breach security service, provides an additional layer of security to enterprise users. With threat intelligence indicators, generic detections, and machine learning models, Windows Defender ATP (trial link) provides extensive visibility and detection capabilities across the attack kill chain of threats like TERBIUM.

Appendix – Indicators of compromise

We discovered the following SHA1s in relation to TERBIUM:

SHA1 hashes for malicious files

  • 5c52253b0a2741c4c2e3f1f9a2f82114a254c8d6
  • e7c7f41babdb279c099526ece03ede9076edca4e
  • a2669df6f7615d317f610f731b6a2129fbed4203
  • 425f02028dcc4e89a07d2892fef9346dac6c140a
  • ad6744c7ea5fee854261efa403ca06b68761e290

SHA1 hashes for legitimate RawDisk drivers

  • 1292c7dd60214d96a71e7705e519006b9de7968f
  • ce549714a11bd43b52be709581c6e144957136ec
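
For triage, the hashes above can be checked mechanically. A small sketch of such a check (any real response should rely on proper AV/EDR tooling, not this):

```python
import hashlib

# SHA1 indicators for malicious files, as listed above.
IOC_SHA1 = {
    "5c52253b0a2741c4c2e3f1f9a2f82114a254c8d6",
    "e7c7f41babdb279c099526ece03ede9076edca4e",
    "a2669df6f7615d317f610f731b6a2129fbed4203",
    "425f02028dcc4e89a07d2892fef9346dac6c140a",
    "ad6744c7ea5fee854261efa403ca06b68761e290",
}

def matches_ioc(data: bytes) -> bool:
    """Return True if the SHA1 of the file contents is a known indicator."""
    return hashlib.sha1(data).hexdigest() in IOC_SHA1
```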

Signature names for malicious files


Mathieu Letourneau

Windows Defender Advanced Threat Protection Threat Intelligence Team

Exploring Wyam - a .NET Static Site Content Generator


It's a bit of a renaissance out there when it comes to static site generators. There's Jekyll and GitBook, Hugo and Hexo, Middleman and Pelican, Brunch and Octopress. There are dozens, if not hundreds, of static site generators, and "long tail is long."

Wyam is a great .NET based open source static site generator

Static site generators are nice for sites that DO get updated with dynamic content, just not every few minutes. That means a static site generator can be great for documentation, blogs, your brochure-ware home page, product catalogs, resumes, and lots more. Why install WordPress when you don't need to hit a database or generate HTML on every page view? Why not generate your site only when it changes?

I recently heard about a .NET Core-based open source generator called Wyam and wanted to check it out.

Wyam is a simple to use, highly modular, and extremely configurable static content generator that can be used to generate web sites, produce documentation, create ebooks, and much more.

Wyam is a module system with a pipeline that you can configure and chain processes together however you like. You can generate HTML from Markdown, from Razor, even XSLT2 - anything you like, really. Wyam also integrates nicely into continuous build systems like Cake and others, so you can also get the NuGet Tools package for Wyam.

There's a few ways to get Wyam but I downloaded the setup.exe from GitHub Releases. You can also just get a ZIP and download it to any folder. When I ran the setup.exe it flashed (I didn't see a dialog, but it's beta so I'll chalk it up to that) and it installed to C:\Users\scott\AppData\Local\Wyam with what looked like the Squirrel installer from GitHub and Paul Betts.

Wyam has a number of nice features that .NET Folks will find useful.

Let's see what I can do with http://wyam.io in just a few minutes!

Scaffolding a Blog

Wyam has a similar command line syntax as dotnet.exe and it uses "recipes" so I can say --recipe Blog and I'll get:

C:\Users\scott\Desktop\wyamtest>wyam new --recipe Blog
Wyam version 0.14.1-beta

,@@@@@ /@\ @@@@@
@@@@@@ @@@@@| $@@@@@h
$@@@@@ ,@@@@@@@ g@@@@@P
]@@@@@M g@@@@@@@ g@@@@@P
$@@@@@ @@@@@@@@@ g@@@@@P
j@@@@@ g@@@@@@@@@p ,@@@@@@@
$@@@@@g@@@@@@@@B@@@@@@@@@@@P
`$@@@@@@@@@@@` ]@@@@@@@@@`
$@@@@@@@P` ?$@@@@@P
`^`` *P*`
**NEW**
Scaffold directory C:/Users/scott/Desktop/wyamtest/input does not exist and will be created
Installing NuGet packages
NuGet packages installed in 101813 ms
Recursively loading assemblies
Assemblies loaded in 2349 ms
Cataloging classes
Classes cataloged in 277 ms

One could imagine recipes for product catalogs, little league sites, etc. You can make your own custom recipes as well.

I'll make a config.wyam file with this inside:

Settings.Host = "test.hanselman.com";
GlobalMetadata["Title"] = "Scott Hanselman";
GlobalMetadata["Description"] = "The personal wyam-made blog of Scott Hanselman";
GlobalMetadata["Intro"] = "Hi, welcome to my blog!";

Then I'll run wyam with:

C:\Users\scott\Desktop\wyamtest>wyam -r Blog
Wyam version 0.14.1-beta
**BUILD**
Loading configuration from file:///C:/Users/scott/Desktop/wyamtest/config.wyam
Installing NuGet packages
NuGet packages installed in 30059 ms
Recursively loading assemblies
Assemblies loaded in 368 ms
Cataloging classes
Classes cataloged in 406 ms
Evaluating configuration script
Evaluated configuration script in 2594 ms
Root path:
file:///C:/Users/scott/Desktop/wyamtest
Input path(s):
file:///C:/Users/scott/.nuget/packages/Wyam.Blog.CleanBlog.0.14.1-beta/content
theme
input
Output path:
output
Cleaning output path output
Cleaned output directory
Executing 7 pipelines
Executing pipeline "Pages" (1/7) with 8 child module(s)
Executed pipeline "Pages" (1/7) in 221 ms resulting in 13 output document(s)
Executing pipeline "RawPosts" (2/7) with 7 child module(s)
Executed pipeline "RawPosts" (2/7) in 18 ms resulting in 1 output document(s)
Executing pipeline "Tags" (3/7) with 10 child module(s)
Executed pipeline "Tags" (3/7) in 1578 ms resulting in 1 output document(s)
Executing pipeline "Posts" (4/7) with 6 child module(s)
Executed pipeline "Posts" (4/7) in 620 ms resulting in 1 output document(s)
Executing pipeline "Feed" (5/7) with 3 child module(s)
Executed pipeline "Feed" (5/7) in 134 ms resulting in 2 output document(s)
Executing pipeline "RenderPages" (6/7) with 3 child module(s)
Executed pipeline "RenderPages" (6/7) in 333 ms resulting in 4 output document(s)
Executing pipeline "Resources" (7/7) with 1 child module(s)
Executed pipeline "Resources" (7/7) in 19 ms resulting in 14 output document(s)
Executed 7/7 pipelines in 2936 ms

I can also run it with -t for different themes, like "wyam -r Blog -t Phantom":

Wyam supports themes

As with most static site generators, I can start with a markdown file like "first-post.md" and include name/value pairs of metadata at the top:

Title: First Post
Published: 2016-01-01
Tags: Introduction
---
This is my first post!
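
The metadata header is just key/value pairs above a `---` separator, so parsing it is straightforward. Here's a minimal Python sketch of such a parser (hypothetical, for illustration only; Wyam's actual implementation is .NET):

```python
def parse_front_matter(source: str):
    """Split a post into (metadata dict, body) at the '---' separator."""
    header, sep, body = source.partition("\n---\n")
    if not sep:
        # No front matter: the whole file is the body.
        return {}, source
    meta = {}
    for line in header.splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, body
```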

If I'm working on my site a lot, I could run Wyam with the -w (WATCH) switch and then edit my posts in Visual Studio Code and Wyam will WATCH the input folder and automatically run over and over, regenerating the site each time I change the inputs! A nice little touch, indeed.

There's a lot of cool examples at https://github.com/Wyamio/Wyam/tree/develop/examples that show you how to generate RSS, do pagination, use Razor but still generate statically, as well as mixing Razor for layouts and Markdown for posts.

The AdventureTime sample is a fairly sophisticated example (be sure to read the comments in the config.wyam for gotchas) that includes a custom Pipeline, use of YAML for front matter, and mixes Markdown and Razor.

There's also a ton of modules you can use to extend the build however you like. For example, you could have source images be large and then auto-generate thumbnails like this:

Pipelines.Add("Images",
    ReadFiles("*").Where(x => x.Contains("images\\") && new[] { ".jpg", ".jpeg", ".gif", ".png" }.Contains(Path.GetExtension(x))),
    Image()
        .SetJpegQuality(100).Resize(400, 209).SetSuffix("-thumb"),
    WriteFiles("*")
);

There's a TON of options. You could even use Excel as the source data for your site, generate CSVs from the Excel OOXML and then generate your site from those CSVs. Sounds crazy, but if you run a small business or non-profit you could quickly make a nice workflow for someone to take control of their own site!

GOTCHA: When generating a site locally, your initial reaction may be to open the /output folder and open index.html in your local browser. You MAY be disappointed when you use a static site generator. Often they generate absolute paths for CSS and JavaScript, so you'll see a lousy version of your website locally. Either change your templates to generate relative paths OR use a staging site and look at your site live online. Even better, use the Wyam "preview web server" by running Wyam with a "-p" argument, and then visit http://localhost:5080 to see your actual site as it will show up online.

Wyam looks like a really interesting start to a great open source project. It's got a lot of code, good docs, and it's easy to get started. It also has a bunch of advanced features that would enable me to easily embed static site generation in a dynamic app. From the comments, it seems that Dave Glick is doing most of the work himself. I'm sure he'd appreciate you reaching out and helping with some issues.

As always, don't just send a PR without talking and working with the maintainers of your favorite open source projects. Also, ask if they have issues that are friendly to http://www.firsttimersonly.com.


Sponsor: Big thanks to Redgate! Help your team write better, shareable SQL faster. Discover how your whole team can write better, shareable SQL faster with a free trial of SQL Prompt. Write, refactor and share SQL effortlessly, try it now!


© 2016 Scott Hanselman. All rights reserved.

Lunch Break / s3 e6 / Justin Kershaw, CIO & CVP, Cargill (part 2)


In the second half of my drive with Justin Kershaw (CIO & CVP at Cargill), he tells me a pretty incredible story about the way IT is digitizing and improving one of the world's oldest professions: farming. Did you ever think a dairy farm would be prioritizing cyber security? Welcome to the modern age!

Justin also talks about the work he's done to make the IT department the place where his company can invest and see the best return on its money. It's a great example of why IT is always a business asset.


To learn more about how top CIOs stay secure and productive, check out this new report.

In the next episode, I hit the road with my good friend Mark Russinovich, the CTO of Microsoft Azure.

You can also subscribe to these videos here, or watch past episodes here: aka.ms/LunchBreak.

SQL Server + PHP – What’s new


This post is authored by Meet Bhagdev, Program Manager, Microsoft.

PHP is one of the most widely used programming languages for web developers today. The Microsoft PHP Connector for SQL Server is used to connect PHP applications to SQL Server, whether SQL Server is hosted in the cloud, on-premises or provided as a platform as a service.

We recently announced SQL Server v.Next CTP1 on Linux and Windows, which brings the power of SQL Server to both Windows and — for the first time ever — Linux. Developers can now create applications with SQL Server on Linux, Windows, Docker or macOS (via Docker) and then deploy to Linux, Windows, or Docker, on-premises or in the cloud.

As part of this announcement, we have made some improvements to our PHP connector:

  • SQL Server v.Next support: You can now use the PHP connectors to connect to SQL Server v.Next CTP1 running anywhere, including SQL Server on Linux, Windows or Docker.
  • Linux support: We’ve created Linux-native versions of sqlsrv and pdo_sqlsrv modules of the PHP SQL Server Connector. Now you can use the PHP Connector to connect to SQL Server on Ubuntu 15.04, 15.10, 16.04, Red Hat 6 and Red Hat 7.
  • PHP 7.0 and 7.1 support: We have added support for the latest PHP runtimes for Windows and Linux with our releases on GitHub.
  • PECL Install Experience for Linux: We have created native PECL packages for Linux. This enables developers to install, upgrade and uninstall the PHP SQL Server connector using the PECL package repository. This is explained in detail in our Getting Started tutorials.
  • Community input for prioritization: We have also started using surveys on GitHub to guide our prioritization for features and improvements. Check our latest survey and let us know what you think.

Get started today

Connect with us

Learn more

  • Visit Connect(); to watch overview, security, high-availability and developer tools videos about SQL Server on Linux on demand.


Other videos in this series

Forza Horizon 3’s Blizzard Mountain Expansion is here!


Ready for a frozen adventure? Well, the wait is over because the Blizzard Mountain Expansion for Forza Horizon 3 is here! Available today in the Microsoft Store either as a standalone purchase or as part of the Forza Horizon 3 Expansion Pass, Blizzard Mountain invites players to the snowy elevations of a brand-new playable area of Forza Horizon 3’s Australia.

A big part of the fun with Blizzard Mountain is the amazing roster of new cars and trucks coming to the game, each of which has been hand-picked to provide maximum fun in the extreme elevations and challenging weather conditions that can only be found on the mountain. Heading the list is the brand-new 2016 Ford GYMKHANA 9 Focus RS RX. Then there are legendary rally racers like the 1985 Lancia Delta S4 Group B and the 1975 Lancia Stratos HF Group 4, alongside rough-and-ready trucks like the 2016 Nissan Titan Warrior Concept, the 1966 Ford F-100 Flareside Abatti Racing Trophy Truck and the 2016 RJ Anderson #37 RZR-Rockstar Energy Pro 2 Truck. Last, but not least, the 2016 Subaru #199 WRX STI VT15r Rally Car is a slice of modern rally brilliance that must be driven to be believed. And don't forget: this expansion also features a brand-new barn find car, hidden somewhere in the snowy reaches of Blizzard Mountain.

Blizzard Mountain is available separately for $19.99 or as part of the Forza Horizon 3 Expansion Pass, which gives players discounted access to two expansions for $34.99. Ultimate Edition owners receive a $10 discount on the Forza Horizon 3 Expansion Pass if purchased before the end of 2016.

Read more over at Xbox Wire and check out the day-long stream showing off the best of what the mountain has to offer over on the official Forza Beam channel. Time to enjoy the snow!

The post Forza Horizon 3’s Blizzard Mountain Expansion is here! appeared first on Windows Experience Blog.

Parameterization for Always Encrypted – Using SSMS to Insert into, Update and Filter by Encrypted Columns


SQL Server Management Studio 17.0 (the next major update of SSMS, currently available as a Release Candidate) introduces two important capabilities for Always Encrypted:

  • Ability to insert into, update and filter by values stored in encrypted columns from a Query Editor window.
  • The new online encryption algorithm, exposed in the Set-SqlColumnEncryption PowerShell cmdlet, which makes tables available for both reads and writes during the initial encryption and column encryption key rotation.

This article addresses the first of the two enhancements.

Prerequisites

To try the examples in this article, you need:

Introducing Parameterization for Always Encrypted

In SSMS 16.x, queries that insert, update, or filter by (in the WHERE clause) data in encrypted columns are not supported. For example, if you try to execute the following statement, it will fail, assuming the SSN column is encrypted.

DECLARE @SSN CHAR(11) = '795-73-9838'
SELECT * FROM [dbo].[Patients] WHERE [SSN] = @SSN

SSMS sends the query verbatim as a batch to SQL Server, including the plaintext value of the @SSN variable. As a result, the query fails with the encryption scheme mismatch error below, because SQL Server expects the value targeting the SSN column to be encrypted, not sent in plaintext.

Msg 33299, Level 16, State 6, Line 2
Encryption scheme mismatch for columns/variables '@SSN'. The encryption scheme for the columns/variables is (encryption_type = 'PLAINTEXT') and the expression near line '2' expects it to be (encryption_type = 'DETERMINISTIC', encryption_algorithm_name = 'AEAD_AES_256_CBC_HMAC_SHA_256', column_encryption_key_name = 'CEK_Auto1', column_encryption_key_database_name = 'Clinic') (or weaker).

SSMS 17.0 introduces a new feature called Parameterization for Always Encrypted which, when enabled, maps Transact-SQL variables to query parameters (SqlParameter objects in .NET; SSMS uses the .NET Framework Data Provider for SQL Server) and refactors queries that use Transact-SQL variables into parameterized statements.

For example, if you run the above query in SSMS over a connection with column encryption settings=enabled and with parameterization turned on, a SQL Server profiler log will capture the following two RPC calls, instead of the single batch statement, on the server side:

exec sp_describe_parameter_encryption N'DECLARE @SSN AS CHAR (11) = @pdf9f37d6e63c46879555e4ba44741aa6;
SELECT *
FROM [dbo].[Patients]
WHERE [SSN] = @SSN;
',N'@pdf9f37d6e63c46879555e4ba44741aa6 char(11)'
go

exec sp_executesql N'DECLARE @SSN AS CHAR (11) = @pdf9f37d6e63c46879555e4ba44741aa6;
SELECT *
FROM [dbo].[Patients]
WHERE [SSN] = @SSN;
',N'@pdf9f37d6e63c46879555e4ba44741aa6 char(11)',@pdf9f37d6e63c46879555e4ba44741aa6=0x01A01201846E5E924FC73155B7CC71CD05153DD09E95663F8DB34885B048E58C2D2DDDB15A6144A9CD7E6A46310590788F398CA1C216F9215992A0CF77990C9F6B
go

The first thing to note is that SSMS has rewritten the query as a parameterized statement. The literal used to initialize the @SSN variable in the original query is passed inside a parameter with an auto-generated name (@pdf9f37d6e63c46879555e4ba44741aa6). This allows the .NET Framework Data Provider for SQL Server to automatically detect that the parameter needs to be encrypted. The driver achieves that by calling sp_describe_parameter_encryption, which prompts SQL Server to analyze the query statement and determine which parameters should be encrypted and how. The driver then transparently encrypts the parameter value before submitting the query to SQL Server for execution via sp_executesql. SQL Server can now successfully execute the query.
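
The rewrite itself, lifting a literal initializer into an auto-named parameter, can be approximated in a few lines. This is a hypothetical Python sketch for illustration, not what SSMS or the .NET provider actually does internally:

```python
import re
import uuid

def parameterize(sql: str):
    """Sketch of the rewrite: lift a string literal that initializes a
    variable into an auto-named parameter, so a driver could encrypt the
    value instead of sending it in plaintext."""
    m = re.search(r"(DECLARE\s+@\w+\s+[\w() ]+?=\s*)('[^']*')", sql, re.IGNORECASE)
    if not m:
        return sql, {}
    pname = "@p" + uuid.uuid4().hex  # auto-generated parameter name
    rewritten = sql[:m.start(2)] + pname + sql[m.end(2):]
    return rewritten, {pname: m.group(2)}
```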

Why Use It?

One of the benefits of Parameterization for Always Encrypted is that it makes it possible to try and test Always Encrypted in SSMS end-to-end. Before, you needed to write a custom app to insert or update data in encrypted columns, or to test point lookups against encrypted columns; now you can issue such queries in SSMS. The new capability also simplifies populating encrypted columns in a test or development database in your development environment.

Parameterization for Always Encrypted also allows you to use SSMS for management and development tasks that require access to plaintext values stored in encrypted columns inside production databases. For example, you can now perform tasks such as manually cleansing or fixing data in encrypted columns, or developing reporting queries against encrypted columns, in SSMS. Please note the following important security considerations that apply to this scenario.

  • SSMS users accessing sensitive data in your production environment must be trusted. For example, if your organization stores sensitive data in the cloud (Azure SQL Database or SQL VMs), and the organization uses Always Encrypted to protect data from data breaches in the cloud (including data theft by malicious Microsoft Operators or malware), and the organization fully trusts their own DBAs, it might make perfect sense to enable the DBAs to manage sensitive data stored in encrypted columns.
  • On the other hand, if the organization uses Always Encrypted to protect sensitive data from insider attacks by high-privilege users, allowing those users to access sensitive data in plaintext may defeat the purpose of using the feature. To insert, update or read plaintext data from an encrypted column, a user must be granted access to the keys protecting the data. Once a user is granted access to the keys, the only reliable method to revoke access is to rotate both the column encryption key and the column master key, which involves re-encrypting all data (with a new column encryption key).
  • When you use SSMS to access plaintext data in a production environment, always run it on a trusted and secure machine, separate from the computer hosting your SQL Server instance.

Getting Started with Parameterization for Always Encrypted

To issue parameterized queries targeting encrypted columns in SSMS:

  1. First, you need to make sure you can access the column master key protecting the column you want to insert data into, update, or filter by. For example, if the column master key is a certificate, make sure it is imported into the Windows Certificate Store on your machine and that you can access it.
  2. Make sure SSMS is connected to the database with column encryption setting=enabled in the database connection string, which instructs the .NET Framework Data Provider for SQL Server to encrypt query parameters (and decrypt the results). Here is how you can set the above setting for an existing Query Editor window:
    1. Right-click anywhere in the Query Editor window
    2. Select Connection>Change Connection ….
    3. Click Options>>.
    4. Select the Additional Properties tab and type Column Encryption Setting=Enabled.
    5. Click Connect.
  3. Parameterization is disabled by default. To enable it:
    1. Select Query from the main menu.
    2. Select Query Options….
    3. Navigate to Execution>Advanced.
    4. Select Enable Parameterization for Always Encrypted.
    5. Click OK.
  4. Now you are ready to author your to-be-parameterized query. Note that SSMS underlines each Transact-SQL variable that is going to be mapped to a parameter. If you hover over a declaration statement marked with a warning underline, you will see the results of the parameterization process, including the values of the key properties of the resulting SqlParameter object the variable is mapped to: SqlDbType, Size, Precision, Scale, SqlValue.

Which Transact-SQL Variables Get Parameterized?

Not all Transact-SQL variables get parameterized. To be converted to a parameter, a variable must be:

  • Declared and initialized in the same statement (inline initialization). SSMS will not parameterize variables declared using separate SET statements.
  • Initialized using a single literal. Variables initialized using expressions including any operators or functions will not be parameterized.
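
As a quick sketch of the two rules above (the variable and table names are illustrative, borrowed from the examples later in this post):

```sql
-- Parameterized: declared and initialized in the same statement,
-- with a single literal.
DECLARE @SSN CHAR(11) = '795-73-9838';

-- NOT parameterized: declared and initialized in separate statements.
DECLARE @Name NVARCHAR(50);
SET @Name = N'Catherine Abel';

-- NOT parameterized: initialized using an expression, not a single literal.
DECLARE @NewSalary MONEY = 30000 * 1.1;

-- Only @SSN can be used against an encrypted column.
SELECT * FROM [dbo].[Patients] WHERE [SSN] = @SSN;
```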

Again, SSMS informs you which variables it parameterizes via warning underlines in the Query Editor window. You can also see the complete list of all variables that have been successfully parameterized in the Warning tab of the Error List view. To open the Error List view, select View from the main menu and then select Error List.

The below screenshot shows a Transact-SQL script with 6 variables. The first 3 variables (@SSN, @BirthDate and @Salary) get successfully parameterized as they meet the above two conditions. The following variables do not get parameterized.

  • @Name– is initialized using a separate SET statement.
  • @BirthDate– is initialized using a function.
  • @NewSalary– is initialized using an expression.


If a variable targets an encrypted column and it does not get parameterized, you need to change the way it is declared and/or initialized; otherwise, your query will fail with an encryption scheme mismatch error.

Note that SSMS attempts to parameterize any variable meeting the above two conditions, regardless of whether the variable is used in a query targeting an encrypted column.

Requirements for Initialization Literals

A declaration of a variable must meet the above conditions for SSMS to attempt to parameterize the variable. In addition, the declaration must satisfy the following two requirements for the parameterization to succeed:

  • The type of the literal used for the initialization of the variable must match the type in the variable declaration.
  • If the declared type of the variable is a date type or a time type, the variable must be initialized with a string using one of the ISO 8601-compliant formats (e.g. yyyy-mm-ddThh:mm:ss[.mmm]), which are independent of local culture and language settings. SSMS imposes this restriction for the following reason: if SSMS allowed non-ISO formats, date and time values would be interpreted based on the culture/language settings of the machine SSMS is running on, which can differ from the settings of the target database. Consequently, running the same query from different machines, or with vs. without parameterization, would be ambiguous, as it could produce different results.
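
For example (a sketch; the variable names are illustrative):

```sql
-- Parameterization succeeds: ISO 8601-compliant literal,
-- independent of culture and language settings.
DECLARE @BirthDate DATE = '1990-10-30';

-- Parameterization fails: culture-dependent format. Depending on the
-- language settings of the machine, '10/30/1990' could be read as either
-- month/day/year or day/month/year, so SSMS rejects it.
DECLARE @BadBirthDate DATE = '10/30/1990';
```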

The below screenshot shows two variables SSMS fails to parameterize:

  • @BirthDate– is initialized using a non-ISO format.
  • @Number– is declared as int, but it is initialized using a literal of an incompatible type (float).

You can see the details about the parameterization errors by hovering over the declaration of the variable, or in the Error List view.

Troubleshooting Server-side Type Conversion Errors

The below screenshot shows an example of a successfully parameterized variable and a query. Yet, the execution of the query fails.

Here is the complete error message:

Msg 206, Level 16, State 2, Line 3
Operand type clash: nchar(50) encrypted with (encryption_type = 'DETERMINISTIC', encryption_algorithm_name = 'AEAD_AES_256_CBC_HMAC_SHA_256', column_encryption_key_name = 'CEK_Auto1', column_encryption_key_database_name = 'Clinic') is incompatible with char(11) encrypted with (encryption_type = 'DETERMINISTIC', encryption_algorithm_name = 'AEAD_AES_256_CBC_HMAC_SHA_256', column_encryption_key_name = 'CEK_Auto1', column_encryption_key_database_name = 'Clinic') collation_name = 'Latin1_General_BIN2'
Msg 8180, Level 16, State 1, Procedure sp_describe_parameter_encryption, Line 1 [Batch Start Line 0]
Statement(s) could not be prepared.

The reason for the failure is that the type of the target SSN column is CHAR(11), while the variable is declared as NCHAR(50), which, when encrypted, is not compatible with CHAR(11). SQL Server supports only a few type conversions for encrypted data types; in particular, conversions between Unicode and ANSI strings are not supported. To avoid such errors, make sure the types of the variables match the types of the columns those variables target.
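
A minimal sketch of the fix, assuming the SSN column from the error message above, is to declare the variable with exactly the column's type:

```sql
-- Fails server-side: NCHAR(50) (Unicode) cannot be converted to the
-- encrypted CHAR(11) (ANSI) column.
DECLARE @SSN NCHAR(50) = '795-73-9838';

-- Works: the variable type matches the target column type exactly.
DECLARE @SSN2 CHAR(11) = '795-73-9838';

SELECT * FROM [dbo].[Patients] WHERE [SSN] = @SSN2;
```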

Conclusion

Parameterization for Always Encrypted enables updating and filtering encrypted columns from SSMS. This new capability enables you to try Always Encrypted end-to-end using SSMS in a test/development database. It also aims at enabling trusted users within your organization to manage sensitive information protected with Always Encrypted in your production environment, or to develop reporting queries against sensitive data in production.

Parameterization for Always Encrypted is included in SSMS 17.0, which is currently available as a preview – see Download SQL Server Management Studio (SSMS).

As always, we are looking forward to hearing your feedback. Please post your comments below.


Enroll in the Calendar.help preview and let Cortana schedule your meetings

Setting up a meeting with someone outside your company can be a time-consuming hassle. When you can’t see each other’s calendars and free/busy information, working out a time to meet can easily take more time than the duration of the meeting itself. Emailing back and forth, proposing alternative times and dealing with conflicts keep you from doing more productive work. Wouldn’t it be great, and save a lot of time, if you had your own personal scheduling assistant?

Overcoming the challenges of scheduling meetings outside your organization is the goal of a new Microsoft incubation project code-named “Calendar.help.”  This project gives Cortana, your personal digital assistant, the ability to arrange meetings on your behalf. By delegating scheduling tasks to Cortana, you can focus on getting things done rather than wasting time emailing back and forth.

Calendar.help is the latest project in a series of coordinated investments across Microsoft that bring together artificial intelligence (AI), conversational computing and calendaring. The project, which we introduced today at a Microsoft AI event in San Francisco, combines efforts from Microsoft Research, Outlook, Cortana and Genee—a scheduling AI startup that Microsoft acquired in August.

How does it work?

To use this new service, you will first need to sign up for the preview waitlist at Calendar.help. Once you are accepted into the program, scheduling a meeting is as simple as adding Cortana to the Cc: line on an email from your registered email address.

When you write the email, give Cortana instructions by including natural language to specify the length of the meeting (e.g. “let’s make this one 90 minutes”), timing (e.g. “sometime next week”) and location (e.g. “make this a Skype meeting”). Alternatively, you can set up default preferences in advance, and Cortana will use those settings without additional commands.


To schedule your meeting, simply add Cortana to the Cc line.

After you send the email, Cortana looks at your calendar to find times you are available and then reaches out to the invitees to propose times. Cortana communicates directly with the attendees, so the back and forth emailing won’t clutter your inbox.  As attendees reply with their availability, Cortana keeps the conversation moving forward until a time that works for everyone is found. Cortana also follows up with attendees if they don’t respond within 48 hours.


Cortana handles the back and forth of scheduling.

Once the date and time are confirmed, Cortana creates an event in your calendar with all the details and then sends out an invite to everyone.


Once the date and time are confirmed, Cortana sends a calendar event to the attendee from your calendar.

All interactions are natural and conversational—as if a real-life assistant was coordinating the meeting.  The service is powered by both machine and human intelligence to ensure that all scheduling requests are handled with accuracy.

Join the Calendar.help Exclusive Preview

Microsoft is always looking for ways to help you do your job better. The Calendar.help project is an example of how we are working to add intelligence to our productivity apps so you can focus on more important things. Today we are making these scheduling capabilities available in an exclusive preview for customers who are interested in helping improve the service, especially those who frequently schedule meetings with people outside their organization. If you would like to be considered for the preview, visit: Calendar.help and let’s schedule some time together.

—The Calendar.help team

The post Enroll in the Calendar.help preview and let Cortana schedule your meetings appeared first on Office Blogs.

Released: December 2016 Quarterly Exchange Updates

Today we are announcing the latest set of Cumulative Updates for Exchange Server 2016 and Exchange Server 2013. These releases include fixes to customer reported issues and updated functionality. Exchange Server 2016 Cumulative Update 4 and Exchange Server 2013 Cumulative Update 15 are available on the Microsoft Download Center. Update Rollup 22 for Exchange Server 2007 Service Pack 3 and Update Rollup 16 for Exchange Server 2010 Service Pack 3 are also available.

A new Outlook on the web compose experience

Exchange Server 2016 Cumulative Update 4 includes a refresh to the compose experience. The body of the message is now “framed” and formatting controls have been moved to the bottom of the view. This mirrors the current experience in Office 365.


Support for .Net 4.6.2

Exchange Server 2013 and Exchange Server 2016 now fully support .Net 4.6.2. Customers who have already updated their Exchange servers to .Net 4.6.1 can proceed with the upgrade to 4.6.2 before or after installing the cumulative updates released today. Customers who are still running .Net 4.5.2 are advised to deploy Cumulative Update 4 or Cumulative Update 15 prior to upgrading to .Net 4.6.2.

The upgrade to .Net 4.6.2, while strongly encouraged, is optional with these releases. As previously disclosed, the cumulative updates released in our March 2017 quarterly updates will require .Net 4.6.2.

Change to Pre-Requisites installed by Setup

Since Exchange Server 2013, the Windows feature Media Foundation has appeared as a pre-requisite in our setup checks on Windows Server 2012 and later. However, if you chose to allow Exchange Setup to install the required OS components, Desktop Experience was installed on all supported operating systems. Desktop Experience is required only on Windows Server 2008 R2; it includes additional components which are not necessary for Exchange Server and require frequent patching. Windows Server 2012 and later modified feature definitions to include Media Foundation.

Exchange Setup in Exchange Server 2016 Cumulative Update 4 and Exchange Server 2013 Cumulative Update 15 has been updated to install Media Foundation instead of Desktop Experience on Windows Server 2012 and later. This change only applies to newly installed servers; applying either cumulative update will not change the existing configuration of the server. If desired, an administrator can add Media Foundation and remove Desktop Experience from the list of installed Windows features on Windows Server 2012 and later.

Update on Windows Server 2016 support

The Windows team has released KB3206632. This update addresses the issue where IIS would crash after a DAG is formed and the server is subsequently restarted. This update is now required on all servers running Exchange Server 2016 on Windows Server 2016. Setup will not proceed unless the KB is installed.

Latest time zone updates

All of the packages released today include support for time zone updates published by Microsoft through October 2016.

Important Public Folder fix included in these releases

Exchange Server 2013 Cumulative Update 14 and Exchange Server 2016 Cumulative Update 3 introduced an issue where new posts to a public folder may not have been indexed if there was an active public folder migration (KB3202691). This issue is now resolved. To ensure all public folders are indexed appropriately, all public folder mailboxes should be moved to a new database after applying the appropriate cumulative update released today.

Release Details

KB articles which contain greater depth on what each release includes are available as follows:

Exchange Server 2016 Cumulative Update 4 does not include new updates to Active Directory Schema. If upgrading from an older Cumulative Update or installing a new server, Active Directory updates may still be required. These updates will apply automatically during setup if user permissions and AD requirements are met. If the Exchange Administrator lacks permissions to update Active Directory Schema, a Schema Admin needs to execute SETUP /PrepareSchema prior to the first Exchange server installation or upgrade. The Exchange Administrator should also execute SETUP /PrepareAD to ensure RBAC roles are updated correctly.

Exchange Server 2013 Cumulative Update 15 does not include updates to Active Directory, but may add additional RBAC definitions to your existing configuration. PrepareAD should be executed prior to upgrading any servers to Cumulative Update 15. PrepareAD will run automatically during the first server upgrade if Setup detects this is required and the logged on user has sufficient permission.

Additional Information

Microsoft recommends all customers test the deployment of any update in their lab environment to determine the proper installation process for your production environment. For information on extending the schema and configuring Active Directory, please review the appropriate TechNet documentation.

Also, to prevent installation issues you should ensure that the Windows PowerShell Script Execution Policy is set to “Unrestricted” on the server being upgraded or installed. To verify the policy settings, run the Get-ExecutionPolicy cmdlet from PowerShell on the machine being upgraded. If the policies are NOT set to Unrestricted you should use the resolution steps in KB981474 to adjust the settings.

Reminder: Customers in hybrid deployments where Exchange is deployed on-premises and in the cloud, or who are using Exchange Online Archiving (EOA) with their on-premises Exchange deployment are required to deploy the most current (e.g., 2013 CU15, 2016 CU4) or the prior (e.g., 2013 CU14, 2016 CU3) Cumulative Update release.

For the latest information on Exchange Server and product announcements please see What’s New in Exchange Server 2016 and Exchange Server 2016 Release Notes. You can also find updated information on Exchange Server 2013 in What’s New in Exchange Server 2013, Release Notes and product documentation available on TechNet.

Note: Documentation may not be fully available at the time this post was published.

The Exchange Team

Designing and Prototyping Apps with Adobe Experience Design CC (Beta)

Adobe Experience Design CC (Beta), or Adobe XD, is a new creative tool from Adobe for designing high-fidelity prototypes of websites and mobile apps. You can try the public preview of Adobe XD on Windows 10, released today.

Why Adobe XD?

A well-designed app often starts out with a sketch, a rough prototype, something that can be shared with stakeholders. But the challenge has always been that to get something testable and demonstrable, you needed to do some coding; you needed to get developers involved in building a prototype that might get thrown away. And once developers have invested in coding, they are reluctant to change the code, even if that’s the right thing to do based on the feedback from your prototype. In his book The Inmates are Running the Asylum, Alan Cooper discusses just this challenge. That’s where Adobe XD comes in: it is a tool expressly designed for building quick prototypes as well as high-fidelity user experience designs. With Adobe XD, anyone can create wireframes, interactive prototypes, and high-fidelity designs of apps and websites. Once you have your prototype, you can then import the visuals into Visual Studio or the IDE of your choice to start building out the final application.

Below is a quick walk-through of using Adobe XD.

Designing a User Experience

To give you an idea of how to use Adobe XD to design quick prototypes, I am going to walk you through the process that I am going through to redesign an app and create a quick prototype with Adobe XD.  I have found that having an interactive prototype with transitions and multiple screens is much more effective at illustrating a user journey than a storyboard of screen images.  I am designing a new version of an app, Architecture, that I originally built for Windows but now I’m using Xamarin to make a cross-platform version that works on Windows, iOS and Android.  Having studied architecture in college, I have always loved the field.  Quite often, I start off with a rough sketch in my journal but that isn’t typically something that is interactive or in a state that can be shared with enough fidelity, so I use XD.

When I start it up, Adobe XD greets me with a blank canvas where I want to place artboards, one for each screen of my app. To place artboards on the canvas, I press the artboard button (the last icon on the left toolbar) – then I see options for various device form factors, including options for iOS, Android, Surface and Web.

To start, I pick a few screen sizes by tapping on Android Mobile, iPhone and Surface Pro 4 in the property inspector on the right, and blank artboards for each format are created on the design canvas.

To start my design, I first focus on a map page, which shows a map of the user’s current location and notable buildings nearby. I grab a screenshot of San Francisco from a folder on my PC and drag it onto each page, resizing it. Once I place an image onto a page, any overflow is hidden once I deselect the image. This is very helpful as I design multiple screen sizes in parallel.

Now I want to focus on one of the designs to add some more detail, in this case the Android design on the left. I navigate around the artboard by using the trackpad on my computer, panning with two fingers and zooming in and out with pinch and expand gestures. This is similar to the interaction method for XD on macOS. In this initial preview of XD for Windows, touch and pen support are not enabled yet on the design canvas, but they do work on the toolbar and in the property inspector. The Windows team is working closely with the XD team to enable a great experience for pen and touch with Adobe XD that will be ready later in 2017.

I’ve started by adding three red boxes for architectural landmarks in San Francisco, and three boxes at the bottom that will work as buttons for UI interactions.  As I draw each button, XD puts snapping guidelines in to help me position the buttons relative to each other.  I ignore the guidelines to show that by selecting all three buttons and pressing the align bottom button at the top of the property inspector (the pane on the right), I can quickly align the buttons and set them all to have the same width and height in the property task pane.  I can then distribute the buttons horizontally using the hotkey Ctrl-Shift-H.  You can also distribute objects horizontally and vertically using the distribute icons in the property inspector.

I then use the text tool to add placeholder icons to the buttons, taking advantage of the Segoe MDL2 Assets Font (use the Character Map app that comes with Windows) for graphics for the Buildings, Locate Me, and Add buttons.  In a few minutes, I get my ideas out and start a first page of my Architecture app.  Now I want to add another page that would be used to browse a list of buildings by pressing the first button on the first page.  I add another Android mobile page by clicking on the artboard button and selecting a new Android mobile page.  A new artboard page is now placed on my design canvas right below the page I’m working on.  Since this page is for browsing a list of buildings, I start with a design of what each building in the list would look like.  I drag an image of a building from my desktop onto a square and it automatically resizes and crops the image to the square.

After finishing that first item design, I select all of the elements for the building and press the Repeat Grid button on the right and then drag the handle that appeared on the bottom of the rectangle to the bottom of the page, repeating the element.

While I’m dragging the repeat grid, I see the items building instantly, with hints showing me the spacing between the items. Once I look at the items together, it becomes clear that I don’t need the frame around the items and the spacing is a bit wide. All I need to do is select the prototypical item at the top of the list and edit that item; the changes are replicated throughout the list. To change the spacing, I put my cursor between the items and the pink spacing guide appears. By dragging that, I change the spacing between the items and see the results instantly.

The last thing I want to do on this page is to use different images and text for each building in the list. To do this, I just grab some images that I have in a folder on my PC and drop them on one of the images in the list. I also have a text file with the names of the buildings that I drag onto the “Building Name” text. I instantly have a list of items with unique images and text, a perfect design for the Xamarin ImageCell element when I’m ready to code this.

Now that I have two related pages, I want to connect them so I have a prototype that starts on the map page and then shows the Buildings page when the user clicks on the Buildings button.  I do that in the Adobe XD Prototyping interface by pressing the Prototype button at the top of the window. I start by clicking on the Buildings button on the maps page and the button is highlighted in blue and a blue arrow appears on the right of the button.  All I do is drag and drop that arrow onto the Building page and a connection is made – I can set the transition type, easing type and duration – very easy.

To test that action, I press the desktop preview button (Play button) in the upper right of the application window and a new window with the map page pops up.  I can then press the Buildings button and see the transition as the app preview shows the Buildings page. I can also drag that preview page to another screen if I have an extended desktop and I can even make changes in the design view while the preview is running.  Once you are done with the design prototype, you can easily export the artboards as images that developers could use as starting points for app development.

As a last step, I exported the artboards as PNG images and opened them up in Visual Studio to start the process of laying out the Xaml for my app:

“Design at the Speed of Thought”

Adobe looked at making XD enable “design at the speed of thought” and through this short walk-through, I hope you get the idea that adding the app to your toolbox will help you design, prototype, test and refine your designs quickly and fluidly.

The Technology Behind Adobe XD

Working with Adobe to bring an app of this sophistication and quality to Windows 10 will help other developers as well. Through close collaboration on this app, we have taken much of the feedback from the Adobe developers and used it to make the Universal Windows Platform even better.

Adobe XD on Windows is a UWP app using XAML, C++, JavaScript, and ANGLE, striving for a best-in-class Windows UWP experience while sharing as much code as possible with the Mac version. Adobe has a very high quality bar for app development, and the app is testable through automated tests. Adobe first released Adobe XD earlier this year on the Mac as a public preview, and through that preview Adobe got input that enabled them to make it the best app for designing user experiences. That feedback went into making both the Mac and Windows versions of XD even better. Interestingly, Adobe is taking advantage of some of the new functionality in the Windows 10 Anniversary Update to release Adobe XD through their Creative Cloud app (how you get Photoshop, Illustrator, Lightroom and other creative apps today) instead of the Windows Store.

Help Shape Adobe XD on Windows

Now that you can start using Adobe XD on Windows, please try it and submit your feedback to Adobe through their UserVoice site and help shape the future of Adobe XD on Windows 10. This is just the beginning.

  • Read Adobe’s blog post about today’s release of Adobe XD on Windows 10.
  • Try the Adobe XD public preview (all you need is a Windows 10 PC running the Anniversary Update and a free Adobe ID or Creative Cloud account).
  • Provide feedback to Adobe on any topic. We’re especially interested in understanding how you would want to use pen and touch in Adobe XD, and how you would want to use the new Surface Dial. How would you use pen and touch simultaneously with Adobe XD? What other apps and services would you want Adobe XD to connect with? What kinds of extensibility would make Adobe XD even better for your designer-developer workflow?

Get started with Adobe XD on Windows 10 with the public preview today.

The post Designing and Prototyping Apps with Adobe Experience Design CC (Beta) appeared first on Building Apps for Windows.

December 2016 Update for .NET Core 1.0

Today, we are releasing a new set of reliability and quality updates for .NET Core 1.0. This month’s update is our second Long Term Support (LTS) update and includes updated versions of multiple packages in .NET Core, ASP.NET Core and Entity Framework Core. We recommend that everyone move to this update immediately.

How to obtain the updates

.NET Core 1.0.3 fixes

For more information on the change, please see the .NET Core 1.0.3 release notes.

Debugging

  • Visual Studio Remote Debugger with CoreCLR executables on Nano server does not work. 7316
  • Generate symbol packages for CoreCLR binaries. 5832

WinHttpHandler Fixes

  • Nonstandard HTTP authentication responses. 11452, 11456
  • Basic authentication with default credentials. 11266
  • Uri escaping for HTTP requests. 11156
  • WinHttpRequestState objects leak during HTTP resends. 11693

ASP.NET Core Fixes

  • Exception page showing only method names in the call stack. 335
  • ActionResults returned from controller actions rendered as JSON, instead of executed. 5344
  • Html.ValidationSummary helper throwing exception when model binding a collection. 5157
  • WebHost.Run() completes before ApplicationStopping. 873
  • AntiForgeryValidation attribute conflict with CookieAuthenticationEvents OnRedirectToLogin event handler. 1009
  • UvException (Error -4047 EPIPE broken pipe) timing out HTTP requests. 1208, 1207
  • UserSecrets causes design-time tools to crash. 543

Entity Framework Core Fixes

  • Query: Regression: GroupBy multiple keys throws exception in 1.0.1. 6620
  • Select with Ternary Operator/CASE-WHEN Server Side Evaluation. 6598
  • Query: Including a collection doesn’t close the connection. 6581
  • Query: Take() with Include() generates incorrect SQL. 6530
  • Query: Include() for related collections are dropped when use Skip(). 6492
  • Query: Port Include() performance improvement to 1.0.2. 6760
  • Tools: Better ConfigureDesignTimeServices entry point. 5617
  • Query: Entities not being released due to leak in query caching. 6737

If you are having trouble, we want to know about it. Please report issues on GitHub issue – 391.

Thanks to everyone who reported issues and contributed!

.NET Framework December Monthly Rollup is Now Available

Today we are releasing a new Security and Quality Rollup and Security Only Update for the .NET Framework. This release resolves a security vulnerability and includes two new quality and reliability improvements. The Security and Quality Rollup is available via Windows Update, Windows Server Update Services and Microsoft Update Catalog. The Security Only Update is available via Windows Server Update Services and Microsoft Update Catalog.

You can read more about the recent changes to how the .NET Framework receives updates on the .NET Framework Monthly Rollups Explained post.

Security

This release resolves a vulnerability in the Microsoft .NET Framework 4.6.2 Data Provider for SQL Server. The vulnerability could allow an attacker to access information that is protected by the Always Encrypted feature. The security update addresses the vulnerability by correcting the way the .NET Framework handles the developer-supplied key, and thus properly protects the data. This security update is rated Important for Microsoft .NET Framework 4.6.2. To learn more about the vulnerability, see Microsoft Security Bulletin MS16-155.

Quality and Reliability

Common Language Runtime

When an application uses unaligned block initialization, for example from managed C++, the code generated on AVX2 hardware has an error: if the JIT uses a register other than xmm0 for the source, an incorrect encoding will be used. This improvement applies to .NET Framework 4.6 and 4.6.1.

Windows Presentation Foundation

A memory leak may occur in certain scenarios when an application includes a D3DImage control, for example, if you start an application, change both the size and content of the image, and then run the application through Remote Desktop. This improvement applies to .NET Framework 4.5.2, 4.6 and 4.6.1.

More Information

Additional information on what is included in each of the rollups along with the applicable operating systems can be found on their associated knowledge base articles, listed below.

Security and Quality Rollup

KB Article | .NET Version | Operating System
3210142 | .NET Framework 3.5, 4.5.2, and 4.6 | Windows Vista SP2 and Windows Server 2008 SP2
3205402 | .NET Framework 3.5, 4.5.2, 4.6, 4.6.1, and 4.6.2 | Windows 7 and Windows Server 2008 R2
3205403 | .NET Framework 3.5, 4.5.2, 4.6, 4.6.1, and 4.6.2 | Windows Server 2012
3205404 | .NET Framework 3.5, 4.5.2, 4.6, 4.6.1, and 4.6.2 | Windows 8.1 and Windows Server 2012 R2

Security Only Update

KB Article | .NET Version | Operating System
3205406 | .NET Framework 4.6.2 | Windows 7 and Windows Server 2008 R2
3205407 | .NET Framework 4.6.2 | Windows Server 2012
3205410 | .NET Framework 4.6.2 | Windows 8.1 and Windows Server 2012 R2

New features arrive in Microsoft Photos on Windows 10

Focusing on the Creators in all of Us

Since I was a kid, I have been drawn to computers because of what they enable each of us to make, to create. Recently Microsoft announced new hardware such as the Surface Studio and Dial, the Windows 10 Creators update, and new software such as Paint 3D, all focused on creators. Creation is a theme that extends across our suite of experiences, including the Photos app that comes with Windows 10.

We have now made available the next step in this creator’s journey with an update to Microsoft Photos. We’re making it fun to view all your digital memories in photo or video form, with a refreshed user experience that makes it pleasant to browse your collection. We’ve updated the way you edit photos and apply filters to simplify the most common actions. To celebrate the new hardware and the creator in all of us, we’ve added the ability to draw on your photos and videos and even play back the ink with animation!

We have ambitious plans with much more to come as we think about creators, digital memories, and storytelling. Stay tuned.

The updated Photos app: Now in dark or light

One of the first things you’ll notice in the updated Photos app is that things got a little lighter. We heard your feedback that for some people (most people!) a dark theme can be overwhelming or intimidating. We’ve got a new, light theme for browsing your pics! Let your memories shine through with the new light theme, or you can always go back to the dark theme in Settings. The single photo view still uses a black ‘lightbox’ feel to let your media show most effectively when it is the center of attention.

Plus, Photos also now has a horizontal navigation bar, making it easier than ever to view your memories in different ways: your whole collection chronologically, or by Albums or Folders. We’ve also taken the time to add subtle animations throughout the experience to make your memories come alive.


Draw on your memories

We each use photos and videos to capture some of the most important moments in our lives. But sometimes, there is more to the story than what our pictures and videos can convey on their own, or you’d just like to personalize a message. Now you can use your stylus (or your finger if you have a touch screen device, or your mouse!) to draw on your memories directly.


Choose from three pen types (I like calligraphic!), pick a color to draw with, and use the eraser to fine-tune your work. Once the ink dries, you can share a still of your new image. Even cooler, you can let your message come to life by sharing an animation of your drawing with friends and family as a video. Share it on Facebook or send it over email.

You can also draw on videos, and the ink will play back at the right places when others view it. Use the pen to mark up the peewee league football video just like the pros. Or give stage direction for the school play. Or just add funny comments, thought bubbles and moustaches to lighten up a goofy video.

Windows Ink with Photos

Editing made simple

The photo editor now has a new, easy-to-use interface. The commands have been rearranged to emphasize the most common user needs, such as easy cropping and adjusting. All the other capabilities are still there under Enhance and Adjust. We’ve added a whole new set of filters too. Get creative with filters such as Zeke or Denim, then check out the other adjustable enhancements you can make to your photos, like tweaking the lighting or warmth.


Photos now on Xbox

As a Universal Windows application, Microsoft Photos is showing up throughout the Windows ecosystem. We’re also releasing Photos for the Xbox, which allows you to browse media you have stored on OneDrive for access on all your devices. Use your controller to navigate your memories just as you would expect with our Xbox optimized user interface.


We’d love to hear from you!

We’re making a big investment in Photos these days and we want your feedback on how to make it better. You are a key part of all the changes we make to the Photos experience. Try out the latest update, edit some photos, draw on some videos, and continue to share your feedback with us through the built-in feedback tool. You can find “Send Feedback” under the “…” menu.

Chris Pratley
Studio Manager

The post New features arrive in Microsoft Photos on Windows 10 appeared first on Windows Experience Blog.

Building Intelligent Bots for Business


This post is authored by Herain Oberoi, Senior Director of Product Marketing at Microsoft.

Earlier today, in San Francisco, we provided an update on how Microsoft is helping to democratize Artificial Intelligence (AI) by making it accessible to everyone and every organization. Today’s focus was on conversational computing, which combines the power of natural language with advanced machine intelligence to help people engage with technology in more natural and personal ways.

As we talk to businesses and governments who are looking to take advantage of these new capabilities, we see significant value being created when leading organizations start using intelligent bots to transform business processes such as customer services, helpdesks, and even factory floor operations.

One example is at Rockwell Automation, which provides industrial automation and information solutions to customers in more than 80 countries. Their customers wanted access to information in their production lines in faster and more innovative ways, and so, with that objective in mind, Rockwell Automation used the Bot Framework and Cognitive Services in Cortana Intelligence to build Shelby™, a bot that monitors production more efficiently and lets managers know the status of their operations through more natural forms of interaction.

“Our customers need to move quickly to meet their goals. Shelby™ gives them an entirely new way to interact with their environment. The health and diagnostics of their production is critical to make the decisions that matter.”

Paula Puess, Global Market Development Manager, Rockwell Automation

Australia’s Department of Human Services (DHS), an arm of the government responsible for delivering social and health related services and payments, is pioneering a proof of concept to deliver intelligent customer experiences powered by deep learning. Using the machine learning and cognitive services capabilities in Cortana Intelligence, DHS is building an ‘expert system’ that helps its employees respond faster and more effectively to citizen queries by infusing bots with deeper human context and conversational understanding, ultimately improving and expanding their customer engagement channels.

Improving Customer Interactions with Intelligent Bots

Intelligent bots deepen customer engagement by augmenting the skills and knowledge of employees interacting with customers, and also via direct conversations that provide for more natural and personalized interactions at massive scale. Specifically:

  1. Bots go beyond simple task completion, using social and historical context to better infer intent and make recommendations that are actionable in the context of the conversation.
  2. Bots drive efficiencies by automating workflows and integrating task completion with existing systems in the context of the business process.
  3. Bots help uncover new insights about customer challenges and preferences by being able to capture and reason over all the customer interaction data.

The illustration below shows how businesses can leverage intelligent bots to augment their contact center operations and improve efficiencies.


The use of an intelligent bot in this example helps the business in the following ways:

  • A majority of common customer requests can be fulfilled by bots. Customer requests that require deeper human intervention are filtered and handed off to contact center agents. This intelligent filtering helps reduce costs while simultaneously improving the customer service experience.
  • The quality of contact center agent interactions is improved as bots help augment the skills and knowledge of agents by providing real-time recommendations in the context of the current conversation.
  • The business also benefits from enhanced insights obtained by analyzing the rich customer interaction data captured by the bots. This can help the business spot emerging patterns, take preemptive actions on issues, and much more.
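The intelligent filtering described above can be sketched, in very simplified form, as a confidence-based hand-off. This is hypothetical logic with made-up names, not part of the Bot Framework:

```fsharp
// Hypothetical sketch: triage incoming customer requests between a bot and
// a human agent based on the bot's confidence that it understood the request.
type Handler =
    | Bot of answer: string
    | HumanAgent

let route (confidence: float) (suggestedAnswer: string) =
    if confidence >= 0.8 then
        Bot suggestedAnswer   // common request: the bot answers directly
    else
        HumanAgent            // ambiguous request: hand off to an agent

// route 0.95 "Your order shipped yesterday." -> Bot "Your order shipped yesterday."
// route 0.40 "?"                             -> HumanAgent
```

In a real deployment the confidence score would come from a language-understanding service, and the hand-off would carry the conversation history along to the agent.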

Getting Started

Developers can get started with the open-source Microsoft Bot Framework, which includes the Bot Builder SDK, Bot Connectors, Developer Portal, Bot Directory and an emulator to use and test your bots.

As you build, test and scale your bots in the cloud, the Microsoft Azure Bot Service helps you accelerate your work through an integrated environment that is purpose-built for bot development. The Azure Bot Service allows you to get started quickly with built-in templates, scale and pay on demand as your needs grow over time, and reach users on multiple channels, including your app or website, text/SMS, Skype, Slack, Facebook Messenger, Kik, Office 365 email, and other popular services.

Building an intelligent bot goes beyond simple task completion – a complete business solution requires cognitive understanding, integration with business processes, and the ability to gain deep insights on customer interaction data. Microsoft Cortana Intelligence provides you with all the capabilities you need, including big data storage, orchestration, advanced analytics and cognitive services, to build your intelligent bots.

I hope customers like Rockwell Automation and Australia’s Department of Human Services give you the needed inspiration to take the leap and start defining the requirements for intelligent bots that can help improve customer engagement at your organization.

Herain


Hotfix 1 for System Center 2016 Virtual Machine Manager Update Rollup 1 is now available


We published a new KB article that describes the fix that's included in Hotfix 1 for Microsoft System Center 2016 Virtual Machine Manager Update Rollup 1. The article also explains how to obtain the hotfix and includes installation instructions. There are no updates to the Administrator Console or Guest Agent as part of this hotfix; however, installing it requires you to update the Host agent on all VMM-managed hosts. Also note that Update Rollup 1 for System Center 2016 Virtual Machine Manager must be installed before you can apply this hotfix.

For complete details please see the following:

KB 3208888: Hotfix 1 for System Center 2016 Virtual Machine Manager Update Rollup 1 (https://support.microsoft.com/en-us/kb/3208888)

J.C. Hornbeck, Solution Asset PM
Microsoft Enterprise Cloud Group


The Year in .NET – Visual Studio 2017 RC and .NET Core updated, On .NET with Stephen Cleary and Luis Valencia, Ulterius, Inferno, Bastion, LoGeek Night


To read last week’s post, see The week in .NET – On .NET on MyGet – FlexViewer – I Expect You To Die.

The Week in .NET is now more than a year old! Our first post was published on December 1 of last year, and it had only 6 links. This week’s issue has more than 60! This is not me becoming less selective (I’m actually becoming more selective); it really is the community growing and producing more quality content each week. My goals when I started these posts were the following:

  • Provide useful resources every week.
  • Show how productive the .NET community is.
  • Recognize the amazing work that’s being done by you all.

Thank you all for an amazing year. Thank you to all the great writers of code and blogs, without whom this could simply not exist. Thank you to Stacey, Phillip, Dan, and Rowan for sending me gaming, F#, Xamarin, and EF content every week. And finally, thanks to all of you who read and support us every week.

Visual Studio 2017 RC and .NET Core 1.0 updated

Yesterday, Visual Studio 2017 RC got an update, with further improvements to the csproj format. You can read all the .NET Core and csproj details in Updating Visual Studio 2017 RC – .NET Core Tooling improvements, and the ASP.NET changes in New Updates to Web Tools in Visual Studio 2017 RC. .NET Core 1.0 also got updated to 1.0.3, along with ASP.NET and Entity Framework Core.

On .NET

Last week, I published the first two of our MVP Summit interviews.

Stephen Cleary talked about his AsyncEx library:

Luis Valencia showed his IoT work with sensor data aggregation using Azure:

This week, we’ll be back in the studio to speak with Immo Landwerth, Karel Zikmund, and Wes Haggard about the way the .NET team manages the .NET Core open source projects and repositories. The show is on Thursdays and begins at 10AM Pacific Time on Channel 9. We’ll take questions on Gitter, on the dotnet/home channel and on Twitter. Please use the #onnet tag. It’s OK to start sending us questions in advance if you can’t do it live during the show.

App of the week: Ulterius

Ulterius is a complete remote computer access solution in your browser. It features hardware and process monitoring and management, remote shells (cmd, PowerShell, and bash), file system access, scheduling, webcam access, and remote desktop.

Ulterius' remote desktop feature

Ulterius is open source and built with .NET.

Package of the week: Inferno

Inferno is a modern, open-source, general-purpose .NET crypto library that has been professionally audited. It comes to us from Stan Drapkin, the author of Security Driven .NET.

Game of the week: Bastion

Bastion is an action role-playing game. Play as a young man who sets out on a journey towards the Bastion after waking up to find his world shattered to pieces by a catastrophe called the Calamity. Explore over 40 beautifully hand-painted environments as you discover the secrets of the Calamity while trying to reverse its effects. Bastion features a reactive narrator who marks your every move, upgradeable weapons, and character customization that lets you tailor gameplay to your style.


Bastion was created by Supergiant Games using C# and their own custom engine. It is currently available on Steam, Xbox One, Xbox 360, PlayStation 4, PlayStation Vita and the Apple App Store.

User group meeting of the week: LTS LoGeek Night in Wrocław, Poland

LoGeek Night is a full day of presentations and discussions in a relaxed atmosphere, with hot pizza and cold beer served. LoGeek Night is on Thursday, December 15, at the Wędrówki Pub in Wrocław.

The presentations include:

  • Łukasz Pyrzyk: .NET Core and Open Source in 2017.
  • Ivan Koshelev: Advanced queries in LINQ, IQueryable and Expression Trees with examples in Entity Framework 6.
  • Andrey Gordienkov: Transitive dependencies: they are not who we think they are.

.NET

There’s an awesome article this week by Matt Warren about research papers in the .NET source, juxtaposing research papers with their applications in the .NET source code. It also shows examples of the reverse: academic papers that are the result of work on .NET. This is an excellent read that I highly recommend!

ASP.NET

F#

Check out the F# Advent Calendar for loads of great F# blog posts for the month of December.

Check out F# Weekly for more great content from the F# community.

Xamarin

Azure

Data

Games

And this is it for this week!

Contribute to the week in .NET

As always, this weekly post couldn’t exist without community contributions, and I’d like to thank all those who sent links and tips. The F# section is provided by Phillip Carter, the gaming section by Stacey Haffner, and the Xamarin section by Dan Rigby.

You can participate too. Did you write a great blog post, or just read one? Do you want everyone to know about an amazing new contribution or a useful library? Did you make or play a great game built on .NET?
We’d love to hear from you, and feature your contributions on future posts.

This week’s post (and future posts) also contains news I first read on The ASP.NET Community Standup, on Weekly Xamarin, on F# weekly, and on Chris Alcock’s The Morning Brew.

Cortana to open up to new devices and developers with Cortana Skills Kit and Cortana Devices SDK


We believe that everyone deserves a personal assistant. One to help you cope as you battle to stay on top of everything, from work to your home life. Calendars, communications and commitments. An assistant that is available everywhere you need it, working in concert with the experts you rely on to get things done.

We’re at the beginning of a technological revolution in artificial intelligence. The personal digital assistant is the interface where all the powers of that intelligence can become an extension of each one of us. Delivering on this promise will take a community that is equally invested in the outcome and able to share in the benefits.

Today we are inviting you to join us in this vision for Cortana with the announcement of the Cortana Skills Kit and Cortana Devices SDK.

The Cortana Skills Kit is designed to help developers reach the growing audience of 145 million Cortana users, helping users get things done while driving discovery and engagement across platforms: Windows, Android, iOS, Xbox and new Cortana-powered devices.

The Cortana Devices SDK will allow OEMs and ODMs to create a new generation of smart, personal devices – on no screen or the big screen, in homes and on wheels.

Developers and device manufacturers can sign up today to receive updates as we move out of private preview.

Cortana Skills Kit Preview

The Cortana Skills Kit will allow developers to leverage bots created with the Microsoft Bot Framework and publish them to Cortana as a new skill, to integrate their web services as skills and to repurpose code from their existing Alexa skills to create Cortana skills. It will connect users to skills when users ask, and proactively present skills to users in the appropriate context. And it will help developers personalize their experiences by leveraging Cortana’s understanding of users’ preferences and context, based on user permissions.

In today’s San Francisco event, we showed how early development partners are working with the private preview of the Cortana Skills Kit ahead of broader availability in February 2017.

  • Knowmail is applying AI to the problem of email overload and used the Bot Framework to build a bot which they’ve published to Cortana. Their intelligent solution works in Outlook and Office 365, learning your email habits in order to prioritize which emails to focus on while on-the-go in the time you have available.
  • We showed how Capital One, the first financial services company to sign on to the platform, leveraged existing investments in voice technology to enable customers to efficiently manage their money through a hands-free, natural language conversation with Cortana.
  • Expedia has published a bot to Skype using the Microsoft Bot Framework, and they demonstrated how the bot, as a new Cortana skill, will help users book hotels.
  • We demonstrated TalkLocal’s Cortana skill, which allows people to find local services using natural language. For example, “Hey Cortana, there’s a leak in my ceiling and it’s an emergency” gets Talk Local looking for a plumber.

Developers can sign up today to stay up to date with news about the Cortana Skills Kit.

Cortana Devices SDK for device manufacturers

We believe that your personal assistant needs to help across your day wherever you are: home, at work and everywhere in between. We refer to this as Cortana being “unbound” – tied to you, not to any one platform or device. That’s why Cortana is available on Windows 10, on Android and iOS, on Xbox and across mobile platforms.

We shared last week that Cortana will be included in the IoT Core edition of the Windows 10 Creators Update, which powers IoT devices.

The next step in this journey is the Cortana Devices SDK, which makes Cortana available to all OEMs and ODMs to build smarter devices on all platforms.

It will carry Cortana’s promise in personal productivity everywhere and deliver real-time, two-way audio communications with Skype, Email, calendar and list integration – all helping Cortana make life easier, everywhere. And, of course, it will carry Cortana expert skills across devices.

We are working with partners across a range of industries and hardware categories, including some exciting work with connected cars. The Devices SDK is designed for diversity, supporting multiple platforms, including Windows IoT, Linux, Android and more, through open-source protocols and libraries.

One early device partner, Harman Kardon, a leader in premium audio, will have more news to share next year about their plans, but today provided a sneak peek at their new device coming in 2017.

The Cortana Devices SDK is currently in private preview and will be available more broadly in 2017. If you are an OEM or ODM interested in including Cortana in your device, please contact us using this form to receive updates on the latest news about the Cortana Devices SDK and to be considered for access to the early preview.

The post Cortana to open up to new devices and developers with Cortana Skills Kit and Cortana Devices SDK appeared first on Building Apps for Windows.

Project Springfield: a Cloud Service Built Entirely in F#


This post was written by William Blum, a Principal Software Engineering Manager on the Springfield team at Microsoft Research.

Earlier this year, Microsoft announced a preview of Project Springfield, one of the most sophisticated tools Microsoft has for rooting out potential security vulnerabilities in software. Project Springfield is a fuzz testing service which finds security-critical bugs in your code.

One of the amazing things about Springfield is that it’s built on top of Microsoft’s development platforms – F#, .NET, and Azure! This post will go over some of the what, why, and how of Project Springfield, F#, .NET, and Azure.

What is Project Springfield?

Project Springfield is Microsoft’s unique fuzz testing service for finding security-critical bugs in software. It helps you quickly adopt practices and technology from Microsoft. The service leverages the power of the Azure cloud to scale security testing using a suite of security testing tools from Microsoft. It’s currently in preview, and you can sign up at Project Springfield if you want to give it a try!

Microsoft's Springfield group. (Photography by Scott Eklund/Red Box Pictures)

William Blum (right) and Cheick Omar Keita (left) discuss the architecture of Springfield.

Why F#?

The reason why we chose F# for this project can be summarized as fast time to market.

In 2015, Microsoft Research NExT kicked off Project Springfield. The engineering team, consisting of three developers at the time, was given the ambitious goal to build an entirely new service from scratch and ship it to external customers in just three months.

Due to its conciseness, correctness, and interoperability with the entire .NET ecosystem, we believe that F# accelerated our development cycle and reduced our time to market. Some specific benefits of using F# we saw included scripting capabilities and interactive REPL to quickly prototype working code, Algebraic Data Types, Immutability by default, Pattern Matching, Higher-Order Functions, a powerful Asynchronous Programming model, and Type Providers.

How it was done

F# scripting allowed the team to quickly discover and interact with external .NET APIs. For instance, we routinely used F# interactive to learn how to use the Azure SDK:

The above script in F# Interactive will enumerate all Azure Virtual Machines in the specified Azure subscription.

Later on in the development process the very same script code could easily be integrated into the final compiled code without any modification.

Functional Programming

Because F# is a functional programming language, we wrote our code in a functional style, which allowed us to eliminate a lot of boilerplate code.

For instance, when working with collections such as lists or arrays, we would use F# sequence operations from the Seq module to process data. Because F# supports partial application of function arguments and functions as first-class arguments, we can process sequences of data with simple, reliable, and composable code.

We find this simple because it avoids explicit iteration and the need to store intermediate state in a temporary variable, as with a C-style for loop. We also value the increased reliability: not iterating by hand avoids common mistakes such as out-of-bounds array indexing exceptions. Lastly, the pipeline operator in F# (|>) allows us to compose operations succinctly.
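For example, a sorted histogram takes only a few composed Seq operations (an illustrative sketch standing in for the original screenshot):

```fsharp
// Illustrative sketch: build a sorted histogram of items using the
// pipeline operator and the Seq module.
let sortedHistogram (items: seq<string>) =
    items
    |> Seq.countBy id               // count occurrences of each item
    |> Seq.sortByDescending snd     // most frequent first
    |> Seq.toList

// sortedHistogram ["a"; "b"; "a"; "c"; "a"] -> [("a", 3); ("b", 1); ("c", 1)]
```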

Sorted histogram using F# piping operator and Sequence module

The above snippet demonstrates the use of quite a few functional programming features like lambda functions, function composition, and functional purity, which are natural defaults for F#. The power of these constructs has even led to some integration into non-functional languages like C#.

Functional purity in particular is one of the biggest benefits to our codebase. Functional purity is a form of determinism indicating that a function’s output is fully determined by the value of its input. That is, for a given input, a pure function will always return the same output. Furthermore, a pure function does not affect its environment in any way (e.g., by mutating global variables or writing files to disk).

Upholding the property of functional purity in our codebase has led to code that’s easier to reason about and simpler for us to test. And because pure functions do not affect one another, they are easily parallelizable!

F# makes it easy to write functionally pure code by making all let-bound values immutable by default. In other words, you must opt in to writing an impure function. When this is truly necessary (e.g., you must modify the environment in some way), you can easily do so by explicitly indicating mutability with let mutable. In Springfield, we only have five mutable variables in our entire F# codebase.
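A small illustration of that default (the names here are just for illustration):

```fsharp
let total = 1               // immutable binding: 'total <- 2' would not compile
let mutable counter = 0     // mutability is an explicit opt-in
counter <- counter + 1      // assignment is only allowed on mutable bindings
```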

Conciseness

The example above highlights another aspect of functional languages: programs tend to be concise. The four lines of code from the program above would be expanded to many more if written in imperative style. This may look like a contrived example but at the scale of Springfield, it yields a codebase that is very easy to maintain.

In fact, we could quantify this phenomenon when we ported some of our components from other languages to F#. In order to remove some legacy dependencies, for instance, we ported a Perl script to a 37% smaller F# program. In a separate effort we ported 1,338 lines of PowerShell scripts to just 489 lines of F# (2.7 times smaller). In both cases, despite the code size reduction, the resulting F# program improved logging, readability and reliability (due in part to static type checking).

Correctness

Another reason why F# helped us to ship quickly is because we found that the functional paradigm F# uses helped us improve code correctness. One of the most compelling examples of how language constructs improve correctness is the use of Algebraic Data Types and Pattern Matching.

The quintessential example of this is how you represent and handle missing data in F#. In most mainstream languages, missing data is typically represented by a special null value. This has a big drawback: because null is implicit in most types you operate on, it’s easy to forget to check for the possibility of a null when consuming your data. This makes it easy to have reliability issues and bugs such as NullReferenceException errors at runtime. In many languages, such as C#, every object value is by default nullable, which means that the need to check for null is spread throughout an entire codebase.

In F#, the datatypes you define are non-nullable by default. If missing data is expected, then you wrap your existing type 'T into the Algebraic Data Type 'T option (or Option). F# Options are inhabited by two possible kinds of values: the None value, representing the absence of data, or Some v, where v is a valid value of type 'T.

By capturing the possible absence of data in the Option type itself, the compiler is able to enforce that you account for both the Some v and None cases whenever you try to consume an Optional in your code. This is typically done with Pattern Matching and the match ... with construct. Here is an example taken from Springfield’s codebase:
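In spirit, the example looks like this (hypothetical names, not the actual Springfield code):

```fsharp
// Illustrative sketch of Option handling with pattern matching: the compiler
// rejects any match that forgets either the Some or the None case.
let describeJob (jobId: System.Guid option) =
    match jobId with
    | Some id -> sprintf "Processing job %O" id
    | None -> "No job to process"
```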

Pattern matching works together with the type system to ensure that all cases are accounted for: the None case and the Some case.

This language feature alone helped us almost entirely eliminate null as a concern from our codebase, which ultimately saved us very precious time.

An expressive type system

Option types are just one example of the power of Algebraic Data Types. Used more generally, Algebraic Data Types allowed us to concisely define all the data structures involved in the system, and write correct and efficient code to manipulate those data structures. For instance, we use simple Discriminated Unions to define the size of the virtual machines provisioned in Azure for testing:

We also use more complex structures to encode events and messages exchanged between the various components of the system.

For each test workload submitted to Springfield, thousands of messages are being created and exchanged between the various components of the service. Thanks to the powerful F# type system we can easily represent such complex information via F# Records and Discriminated Unions:

Once we represent incoming messages via the type system, we can use Pattern Matching to dispatch on the incoming message.
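Sketched with hypothetical message types (the real Springfield types are richer):

```fsharp
// Illustrative record and discriminated union for inter-component messages,
// with a pattern-matching dispatch over the incoming event.
type JobInfo = { JobId: System.Guid; Target: string }

type EventType =
    | JobCreated of JobInfo
    | JobCompleted of JobInfo
    | Heartbeat

let dispatch (event: EventType) =
    match event with
    | JobCreated info -> printfn "Starting job %O on %s" info.JobId info.Target
    | JobCompleted info -> printfn "Job %O completed" info.JobId
    | Heartbeat -> ()
```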

What’s nice about the above is that the compiler enforces that we account for all cases. Any queue message that is successfully deserialized into the F# discriminated union EventType is guaranteed to be accounted for by the dispatch function. Because we get that correctness guarantee, we don’t spend nearly as much time debugging. Features like F# types, used in conjunction with Pattern Matching, helped us tremendously in getting working code completed faster.

Another example: for reliability, service requests are implemented in our back end using finite state machines. The state of the machine is saved onto an Azure Queue, that way the service can resume from where it left off should a failure ever happen. Once again F# lets us define our state machines very succinctly:
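A sketch of the idea (the states and transitions are illustrative, not the actual Springfield state machine):

```fsharp
// Illustrative state machine for a service request; the current state would
// be persisted to an Azure queue so processing can resume after a failure.
type RequestState =
    | Provisioning
    | Fuzzing of progressPercent: int
    | Reporting
    | Done

let nextState = function
    | Provisioning -> Fuzzing 0
    | Fuzzing p when p < 100 -> Fuzzing (p + 25)
    | Fuzzing _ -> Reporting
    | Reporting -> Done
    | Done -> Done
```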

Finite state machines used in Springfield backend

Json serialization and open source contribution

In Springfield, we leveraged Json.NET to serialize and deserialize JSON messages. However, we found that the default output when serializing F# data types was too verbose for our needs. We built FSharpLu.Json, a small library which wraps and augments Json.NET when serializing F# data types, so we could more succinctly serialize F# data types like Options, Algebraic Data Types and Discriminated Unions.

For example the simple value Some [ None; Some 2; Some 3; None; Some 5 ] gets serialized by FSharpLu.Json to just [null, 2, 3, null, 5]. Without FSharpLu.Json, it would get serialized to the following:

For complex data types, like the Event type introduced earlier, the difference becomes more appreciable. The following event, for instance:

gets serialized with FSharpLu.Json to just

which better reflects the F# syntax, and is 47% more compact than the default Json.NET formatting:

We thought a JSON utility like this would be useful to the F# community, so we’ve open-sourced FSharpLu.Json on GitHub and released it on NuGet.

F# Type Providers + Azure

Springfield is built entirely on Azure. All the compute and network resources used to run the test workloads are dynamically provisioned in Azure through the Azure Resource Manager (ARM). Creating resources in ARM requires authoring two JSON files: one template JSON file defining all the resources you want to create (e.g., virtual machines), and one parameter JSON file with values used to customize the deployment (e.g., machine names).

Springfield allocates compute resources dynamically, so it needs to generate JSON parameter files at run-time, a task that can be error-prone. With F# Type Providers, we can statically verify at compile time that our generated template parameters are valid. Because our ARM templates constantly evolve, this tremendously speeds up development and debugging.

With the Json Type Provider from FSharp.Data, just three lines of F# code suffice to automatically infer from the template parameters file (shown in the screenshot below) all the necessary types required to submit the deployment to Azure:

JSON Type Providers for Azure Templates

Screenshot showing F# Intellisense catching a missing field from the template parameters (left), and the corresponding ARM template (right)
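A hedged sketch of what those three lines look like; the sample file name and field access are assumptions for illustration, not taken from the Springfield codebase, and the snippet assumes the FSharp.Data package is referenced:

```fsharp
// Sketch only: "TemplateParameters.json" is an illustrative sample file name.
open FSharp.Data

// The provider infers a strongly-typed representation from the sample file...
type DeploymentParameters = JsonProvider<"TemplateParameters.json">

// ...so a misspelled or missing field in the generated parameters
// becomes a compile-time error, caught by Intellisense as you type.
let parameters = DeploymentParameters.GetSample()
```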

Strongly-typed logging, Asynchronous programming, and Active Patterns

To illustrate other areas where F# helped us build Springfield, let’s look at another snippet from our codebase. Below is the function we use to delete a resource group in Azure.

Strongly-typed logging

In the snippet of code above, C/C++ programmers will recognize the printf-like formatting with the use of %s in the calls to the logging functions Trace.info and Trace.error. According to game programmer John Carmack, “Printf format string errors were, after null-safety issues, the second biggest issue in (video game Rage C/C++) codebase”. Such errors occur when you pass an incorrect number of parameters to the printf function, or when the input parameter types do not match format specifiers like %d and %s.

Because we rely heavily on trace logging to diagnose bugs and issues in Springfield, we cannot afford reliability problems in the logging functions themselves! Thanks to its powerful type system, F# helps you eliminate the problem altogether: any mismatch between the format specification and its parameters is statically caught by the compiler! To take advantage of this, we simply defined our own trace logging helpers using the strongly-typed formatting module Printf. The underlying logging logic is then offloaded to other logging APIs such as the .NET Framework's System.Diagnostics.TraceInformation or the Azure SDK's AppInsights.

We’ve open sourced the strongly-typed wrapper for System.Diagnostics.TraceInformation in the FSharpLu library and plan to open source the AppInsights wrapper in the future.

Strongly-typed logging to System.Diagnostics with Microsoft.FSharpLu.TraceLogging
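A minimal sketch of such a wrapper, written in the style of FSharpLu's TraceLogging module (the real library, available on GitHub, differs in its details):

```fsharp
// Printf.kprintf checks the format string against its arguments at compile
// time, then hands the fully formatted message to the underlying logging API.
module Trace =
    let private write (message:string) =
        System.Diagnostics.Trace.TraceInformation message
    let info format = Printf.kprintf write format
    let error format =
        Printf.kprintf (fun m -> System.Diagnostics.Trace.TraceError m) format

// Trace.info "Deleting resource group %s" "my-group"   // OK
// Trace.info "Deleting %d groups" "oops"               // rejected by the compiler
```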

Asynchronous programming

To achieve high scalability, online services like Springfield must use asynchronous code to make the most of available hardware resources. Because writing such code by hand is difficult, language-level abstractions for asynchronous programming, which make this task easier, have recently begun to emerge in mainstream languages.

F# pioneered a language-level asynchronous programming model for the .NET platform in 2007. In practice, this means that F# comes out of the box with state-of-the-art support for asynchrony in the form of Asynchronous Workflows.

In Springfield, most of the IO-bound code is wrapped inside an async{..} block and makes use of the let! operator to asynchronously wait for the underlying IO operation to complete.
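A minimal illustration of the pattern, using a file read in place of Springfield's Azure calls:

```fsharp
// let! suspends the workflow while the IO-bound operation runs,
// without blocking a thread.
let readLengthAsync (path:string) =
    async {
        use reader = new System.IO.StreamReader(path)
        let! contents = reader.ReadToEndAsync() |> Async.AwaitTask
        return contents.Length
    }
```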

For example, in the delete snippet above, we use let! to asynchronously wait on the delete API from the Azure SDK. Asynchronous workflows are used pervasively in our services. Our back end event processing and our REST API are all asynchronous:

Asynchronous REST API to submit a Springfield job

The F# asynchronous programming model is implemented entirely in the F# Core Library using Computation Expressions, a language construct based on a sound theoretical foundation used to extend the language syntax in a very generic way.

Many common pitfalls faced by C# programmers when writing asynchronous code aren’t a concern when using the F# asynchronous programming model. To learn more, check out Tomas Petricek’s wonderful blog post which explores the differences between the C# and F# models of asynchrony.

Handling asynchronous exceptions with Active patterns

One of the key behaviors of asynchronous and parallel programming in .NET is that exceptions sometimes get nested under, or grouped into exceptions of type System.AggregateException. In .NET languages like C#, exception handling dispatch is based solely on the type of the exception. In F#, the pattern matching construct lets you express complex conditions to filter on the exception you want to handle. For instance, in the delete function from the snippet above, we use pattern matching in combination with Active Patterns to concisely filter on aggregated exceptions:

Active pattern to match over aggregated exceptions

Pattern matching to filter over Azure SDK exception Hyak.Common.CloudException
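A sketch of how such an active pattern can be written; the pattern and function names here are illustrative, and the Springfield version additionally matches on Azure SDK exceptions like Hyak.Common.CloudException:

```fsharp
// Active pattern that flattens System.AggregateException so the
// underlying exceptions can be matched on directly.
let (|AggregatedExceptions|_|) (e:exn) =
    match e with
    | :? System.AggregateException as ae ->
        Some (ae.Flatten().InnerExceptions |> List.ofSeq)
    | _ -> None

// Pattern matching then handles aggregated and plain exceptions uniformly.
let describe (e:exn) =
    match e with
    | AggregatedExceptions inner -> sprintf "aggregate of %d inner exceptions" inner.Length
    | other -> other.Message
```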

F# as a scripting language

F# comes with a REPL environment that makes it a great alternative to other scripting languages like PowerShell. Since F# scripts are executed on the .NET platform, they can leverage code from existing core assemblies. In Springfield, we have F# scripts to perform maintenance operations like usage monitoring and clean-up. Another advantage of F# scripts is that they are statically type-checked, an unusual thing for a scripting language! In practice this yields huge savings in debugging time. Foolish errors like typos in variable names or incorrect typing are immediately caught by Intellisense in the IDE tooling available for F# – Visual Studio, Xamarin Studio, and Visual Studio Code with the Ionide suite of plug-ins. Refactoring code also becomes a piece of cake. This stands in stark contrast to the fragility of the PowerShell scripts our team has worked with.

These features of F# Scripting have been a huge benefit, allowing our team to replace PowerShell for our scripting needs in some components of the service.

We still use PowerShell for our deployments and resource management, mainly because of our reliance on Azure, or because some tools like Service Fabric only expose certain features through PowerShell. But whenever possible, we try to stick to F# scripting.

Springfield .FSX script to list all resource groups in Azure
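To give a flavor of what statically-checked scripting buys, here is a hypothetical fragment of a clean-up script; the helper names and group data are invented for illustration, and the real script obtains its resource groups from the Azure .NET SDK:

```fsharp
// Hypothetical .fsx fragment: find resource groups older than a cut-off.
// Every identifier and type below is checked by the compiler and Intellisense,
// so a typo in a name is caught before the script ever runs.
let daysOld (createdUtc:System.DateTime) =
    (System.DateTime.UtcNow - createdUtc).TotalDays

let staleGroups maxAgeDays (groups:(string * System.DateTime) list) =
    groups
    |> List.filter (fun (_, created) -> daysOld created > maxAgeDays)
    |> List.map fst
```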

Scaling with .NET and Azure

Because F# is a .NET language, we can leverage the entire .NET ecosystem. In particular we use the Azure .NET SDK to access many Azure services such as the Resource Manager, Compute, Network, Files storage, Queues, KeyVault, and AppInsights. We also built our backend service using Service Fabric.

Read more about how Springfield used Azure here: https://azure.microsoft.com/blog/scaling-up-project-springfield-using-azure

Community libraries

What’s also great about F# is its vibrant community. In Springfield we leverage many open-source projects like Paket to simplify NuGet dependency management, FsCheck for automated test generation, type-providers from FSharp.Data and the cross-platform F# editor Ionide for Visual Studio Code. We also keep a close eye on other projects. For instance, we are considering Suave for future web-related components.

As mentioned earlier we’ve also contributed back to the community in the form of two F# libraries: FSharpLu and FSharpLu.Json.

What’s Next for Project Springfield

This article hopefully gives you a good overview of some of the aspects of F# that helped us build Springfield. When we started the project, we chose F# based on positive experiences building smaller projects. Throughout the development of Springfield we learned that you can use it just as well to build a full-fledged online service!

The functional paradigm is now mainstream in the industry, as indicated by the popularity of languages like F#, Scala, Swift, Elixir, and Rust, and by the inclusion of functional programming constructs in languages such as C# and Java. Even C++ has its own lambdas now! The reason, we believe, is that the correctness guarantees and expressivity of the functional paradigm yield a unique competitive advantage in a world where code must evolve rapidly to adapt to changing customer needs. For .NET developers, F# is the perfect language to make the jump!

To conclude, we want to call out the success we’ve had with F# as a recruiting tool. When building an engineering team to work on a codebase in a less popular language like F#, one of the biggest concerns is that you won’t be able to find enough people. Surprisingly, things turned out otherwise. First, we found that many great candidates were interested in the position precisely because it used a functional programming language like F#. For some, it was pure love of the language, or frustration at not being able to use it in their current job (sometimes due to resistance from their current team). For others, it was curiosity about learning a new programming paradigm and a willingness to challenge themselves and do things differently. Second, we observed that, once hired, those engineers turned out to be great developers in any language, not just F#. We had no trouble recruiting engineers to work on Springfield, and even found the use of F# in the codebase a boon to hiring talented people!

Microsoft's Springfield group photographed on November 1, 2016. (Photography by Scott Eklund/Red Box Pictures)

Members of the Springfield Team. From left to right: Lena Hall, Patrice Godefroid, Stas Tishkin, David Molnar, Marc Greisen, William Blum, Marina Polishchuk

As for Springfield, we have plenty more work in the pipeline. Among other things, we are considering porting our backend to .NET core, which F# will support in the forthcoming 4.1 release!

Learn more

SonarSource have announced their own SonarQube Team Services / TFS integration


Microsoft have been partnering with SonarSource for almost two years to bring SonarQube to .NET developers and to make it easy to analyze MSBuild and Java projects from Visual Studio Team Services, TFS and Visual Studio. The partnership, and Team Services extensibility, have now matured to the point that we have jointly decided that it was time for Microsoft to transfer ownership of the SonarQube MSBuild build tasks to SonarSource. They are better placed to keep the tasks up to date and consistent with the SonarQube vision. SonarSource have now announced the availability of their own SonarQube Team Services and TFS extension on the VSTS marketplace.

Concretely what does it change for you?

In the past, we released the SonarQube Team Services build tasks “in the box”, so whenever we updated VSTS – every three weeks – we pushed updates to these tasks. The tasks were also shipped with the TFS on-premises product. The source code is in the vsts-tasks repository on GitHub along with the other tasks released by Microsoft. In the future, Microsoft’s SonarQube tasks won’t be released with the service or the TFS product. Like many partners, SonarSource is now providing a dedicated SonarQube extension, which allows them to fully control the development and deployment of updates and fixes. Therefore, we are deprecating the MSBuild SonarQube tasks, and you will need to install the SonarQube extension to continue analyzing technical debt in your MSBuild projects.

What build tasks are affected?

The two deprecated tasks are the SonarQube for MSBuild tasks: ‘SonarQube for MSBuild – Begin Analysis’ and ‘SonarQube for MSBuild – End Analysis’.

Note that we also integrated SonarQube into the Java build tasks for Maven and Gradle in order to enable code analysis feedback in Pull Requests. These integrations will remain as they are for now, and will continue to be released by Microsoft. SonarSource may in the future provide a replacement build task or tasks for Java with this capability.

What will be the deprecation experience?

If you are a Team Services user, when you run a build that contains SonarQube for MSBuild tasks, you’ll notice some build warnings:

[Screenshot: warnings in the build console for the deprecated SonarQube tasks]

The warnings contain hyperlinks that will help you migrate.

Also, if you try to add the former tasks to a build definition, you’ll notice the [DEPRECATED] prefix in their label:

[Screenshot: the [DEPRECATED] prefix on the old tasks in the build definition editor]

On the other hand, if you are working on-premises with TFS 2017, you’ll see these changes starting with TFS 2017 Update1.

Moving to the new tasks

At some point, the Microsoft-owned tasks will be deleted. We recommend switching to SonarSource’s extension as soon as possible. This is straightforward – just install the SonarQube extension in your account and you’ll notice three new tasks in your library:

[Screenshot: the three SonarQube tasks added by the new extension]

You’ll recognize the last two, but the first is new: SonarSource is introducing a task named “SonarQube Scanner CLI” that supports analysis of projects built with technologies other than MSBuild and Java – a common request. You can now analyze your Node.js projects, for example.

Minor breaking changes

SonarSource have taken the opportunity to address shortcomings in the old tasks and to act on some of your feedback. Consequently, there are two small breaking changes:

  • There is now a dedicated “SonarQube” endpoint instead of a generic one. This is an improvement: you can now find at a glance the service endpoints relevant to these tasks, without having to trawl through a long list of generic endpoints.

[Screenshot: adding the dedicated SonarQube service endpoint]

You will be asked to input a token, which can be generated from your SonarQube dashboard.

[Screenshot: generating an authentication token from the SonarQube dashboard]

The new endpoint will show up in the list of endpoints with a SonarQube icon, so you can spot it immediately.

  • The database connection input fields are no longer available in the build step. This only matters if you are using a version of SonarQube lower than 5.2. In that case, we advise you to upgrade your SonarQube server, or use the Additional Settings field to configure these parameters.
    [Screenshot: the Additional Settings field in the build step]

Support

Moving forward, support is moving entirely to SonarSource, who would love to get your feedback. If you have questions or suggestions about the SonarQube build tasks, please use Google Groups with the SonarQube tag: https://groups.google.com/forum/#!forum/sonarqube
