Channel: TechNet Technology News

Executing on your ideas through innovation labs


Innovation is not enough. You can have a billion-dollar idea, but to be successful you must develop a product and deliver it to customers. The Harvard Business Review estimated that companies typically only achieve 60 percent of the potential value of their strategies due to failures in planning and execution.

Innovation labs are becoming a popular way to drive innovation. Learn how you can implement a successful innovation lab at your company from Kyle Nel, founder and executive director of Lowe’s Innovation Lab. Nel gives advice on how to:

  • Use stories to sell ideas and keep the team focused.
  • Set up actionable metrics that give new ideas a chance while enabling good decisions.
  • Keep business leaders invested by delivering tangible demonstrations of progress.

You’ll learn more about innovation best practices from Hayagreeva “Huggy” Rao, professor of Organization Behavior and Human Resources at the Stanford Business School, and author of “Scaling up Excellence.” Rao shares three methods to scale up innovation:

  • Conduct a “pre-mortem” on ideas.
  • Reduce cognitive load.
  • “Fix broken windows” by identifying and changing the things that aren’t working well.

Also, see a demonstration on how Dynamics 365 pulls together familiar Office 365 tools to help employees work more efficiently.

Watch the Modern Workplace episode to learn more.

The post Executing on your ideas through innovation labs appeared first on Office Blogs.


Certificate-Based Authentication (CBA) for Exchange Online is Generally Available


Many organizations have been using certificate based authentication for Exchange Online while the feature was in preview. Today, we are excited to announce that the feature is generally available in Office 365 Enterprise, Business, Education, and Government plans. For more details, please reference our preview post which has been modified to reflect this announcement. As always, we look forward to hearing your suggestions and feedback!

Tyler Lenig
Program Manager
Office 365

Brad Anderson’s Lunch Break / s3 e7 / Mark Russinovich, CTO, Azure


It's no secret that Mark Russinovich (CTO, Microsoft Azure) is one of my favorite people at Microsoft. I've had a chance to work with him ever since he joined the company, and I've had a lot of fun doing a few webcasts with him over the years. He's brilliant, he's funny, and he has some great stories to tell.

In the first of two much-longer-than-normal episodes, Mark hops in the car for one of the best Lunch Break convos yet. He and I talk about how he completely reverse engineered Windows from scratch (this was long before he joined the company), the benefits of using the cloud for IoT projects, and what it was like having a photographer from WIRED follow him everywhere for a week.


To learn more about how top CIOs stay secure + productive, check out this new report.

After the holiday break, I’ll be back to wrap up this conversation with Mark and it is a lot of fun.

You can subscribe to these videos here, or watch past episodes here: aka.ms/LunchBreak.

Conditional Access now in the new Azure portal


The digital transformation that's affecting every organization brings new challenges for IT, as they strive to empower their users to be productive while keeping corporate data secure in an increasingly complex technology landscape. Microsoft Enterprise Mobility + Security (EMS) provides a unique identity-driven security approach to address these new challenges at multiple layers and to provide you with a more holistic and innovative approach to security, one that can protect, detect, and respond to threats on-premises as well as in the cloud.

Risk-based conditional access is a critical part of our identity-driven security story. It ensures that only the right users, on the right devices, under the right circumstances have access to your sensitive corporate data. Conditional access allows you to define policies that provide contextual controls at the user, location, device, and app levels, and it also takes risk information into consideration (powered by the vast data in Microsoft's Intelligent Security Graph). As conditions change, natural user prompts ensure only the right users on compliant devices can access sensitive data, providing you the control and protection you need to keep your corporate data secure while allowing your people to do their best work from any device.

This is an area where we are constantly innovating to bring you the most secure and easy-to-use solution, and today we're announcing several improvements to Conditional Access in EMS:

  1. Risk-based access policies per application. Leverage machine learning on a massive scale to provide real-time detection and automated protection. Now you can use this data to build risk-based policies per application.
  2. Greater flexibility to protect applications. Set multiple policies per application or set and easily roll out global rules to protect all your applications with a single policy.
  3. All these capabilities are now available in a unified administrative experience on the Azure portal. This makes it even easier to create and manage holistic conditional access policies to all your applications.

These new conditional access capabilities provide more flexible and powerful policies to enable productivity while ensuring security. Additionally, the new admin experience unifies conditional access workloads across Intune and Azure AD.

If you are an Intune customer using the existing browser-based console or the Configuration Manager console, or an Azure AD customer using the classic Azure portal, you can now preview the new Conditional Access policy interface in the Azure portal.

Get started with these Conditional Access capabilities or read on to learn a bit more about Conditional Access with EMS.

Overview

A Conditional Access policy is simply a statement about:

  • When the policy should apply (called Conditions), and
  • What the action or requirement should be (called Controls).

Conditional access policy

Conditions (When the policy should apply)

Conditions are the things about a login that don't change during the login, and are used to decide which policies should apply. Azure AD supports the following Conditions:

  1. Users/Groups are the users/groups in the directory that the policy applies to.
  2. Cloud apps are the services the user accesses that you want to secure.
  3. Client app is the software the user is employing to access the cloud app.
  4. Device platform is the platform the user is signing in from.
  5. Location is the IP-address based location the user is signing in from.
  6. Sign-in risk is the likelihood that the sign-in is coming from someone other than the user.

Conditions preview

Our documentation provides further details on how to set the conditions.

Controls (What the action or requirement should be)

Controls are the additional enforcements that are put in place by the policy (such as requiring a multi-factor authentication challenge) and that are inserted into the login flow. Azure AD supports the following controls:

  1. Block access
  2. Multi-factor authentication
  3. Compliant device
  4. Domain Join

You can select individual controls or all of them.

Controls preview

To learn more about how to get started with controls, you can read a detailed documentation article.

We're really excited about the wide range of scenarios that this new experience lights up and hope you find it useful. As always, we're looking forward to your feedback.

Astroneer launches in the Windows Store for Game Preview Dec. 16


Today, we’re excited to announce that Astroneer will launch into Game Preview for Xbox One and on the Windows Store on Dec. 16!


Astroneer is a game about independent space explorers prospecting the stars for fortune and glory. This is a game about discovery and mystery, as you uncover rare artifacts and the resources you need to find them, on vast worlds where every cubic inch of space can be explored. It’s a game about creativity, as you use your tools to dig, build, sculpt, and shape the very ground to serve your needs, whether utilitarian or aesthetic. It’s a game about solitude, as you enjoy the beauty of your surroundings as you might on a long hike or camping trip. It’s also a game about cooperation, as you invite your friends to share the experience with you in multiplayer co-op.

How can you give feedback and participate? The absolute best way will be to get on our forums, where you can post about bugs, discuss features, and meet your fellow Astroneers. The forum will contain links to some of our other communication avenues, and our latest update roadmap.

Head over to Xbox Wire to read more. We look forward to pushing the final frontier with you!

The post Astroneer launches in the Windows Store for Game Preview Dec. 16 appeared first on Windows Experience Blog.

Code Style Configuration in the VS2017 RC Update


Fighting like Cats and Dogs

Visual Studio 2017 RC introduced code style enforcement and EditorConfig support. We are excited to announce that the update includes more code style rules and allows developers to configure code style via EditorConfig.

What is EditorConfig?

EditorConfig is an open source file format that helps developers configure and enforce formatting and code style conventions to achieve consistent, more readable codebases. EditorConfig files are easily checked into source control and are applied at repository and project levels. EditorConfig conventions override their equivalents in your personal settings, such that the conventions of the codebase take precedence over the individual developer.

The simplicity and universality of EditorConfig make it an attractive choice for team-based code style settings in Visual Studio (and beyond!). We’re excited to work with the EditorConfig community to add support in Visual Studio and extend their format to include .NET code style settings.

EditorConfig with .NET Code Style

In VS2017 RC, developers could globally configure their personal preferences for code style in Visual Studio via Tools>Options. In the update, you can now configure your coding conventions in an EditorConfig file and have any rule violations get caught live in the editor as you type. This means that now, no matter what side you’re on in The Code Style Debate, you can choose what conventions you feel are best for any portion of your codebase—whether it be a whole solution or just a legacy section that you don’t want to change the conventions for—and enforce your conventions live in the editor. To demonstrate the ins-and-outs of this feature, let’s walk through how we updated the Roslyn repo to use EditorConfig.

Getting Started

The Roslyn repo by-and-large uses the style outlined in the .NET Foundation Coding Guidelines. Configuring these rules inside an EditorConfig file will allow developers to catch their coding convention violations as they type rather than in the code review process.

To define code style and formatting settings for an entire repo, simply add an .editorconfig file in your top-level directory. To establish these rules as the “root” settings, add the following to your .editorconfig (you can do this in your editor/IDE of choice):
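At a minimum, that means declaring the file as the root (a fuller file would also carry your formatting and code style rules):

root = true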

EditorConfig settings are applied from the top down with overrides, meaning you describe a broad policy at the top and override it further down in your directory tree as needed. In the Roslyn repo, the files in the Compilers directory do not use var, so we can just create another EditorConfig file that contains different settings for the var preferences, and these rules will be enforced only on the files in that directory. Note that when we create this EditorConfig file in the Compilers directory, we do not want to add root = true (this allows us to inherit the rules from a parent directory, or in this case, the top-level Roslyn directory).

EditorConfig File Hierarchy

Figure 1. Rules defined in the top-most EditorConfig file will apply to all projects in the “src” directory except for the rules that are overridden by the EditorConfig file in “src/Compilers”.

Code Formatting Rules

Now that we have our EditorConfig files in our directories, we can start to define some rules. There are seven formatting rules that are commonly supported via EditorConfig in editors and IDEs: indent_style, indent_size, tab_width, end_of_line, charset, trim_trailing_whitespace, and insert_final_newline. As of VS2017 RC, only the first five formatting rules are supported. To add a formatting rule, specify the type(s) of files you want the rule to apply to and then define your rules, for example:
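For instance, a section like the following applies basic formatting rules to all C# files (the values here are illustrative rather than the Roslyn repo's exact settings):

[*.cs]
indent_style = space
indent_size = 4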

Code Style Rules

After reaching out to the EditorConfig community, we've extended the file format to support .NET code style. We have also expanded the set of coding conventions that can be configured and enforced to include rules such as preferring collection initializers, expression-bodied members, C#7 pattern matching over cast and null checks, and many more!

Let’s walk through an example of how coding conventions can be defined:
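For example, a line like this one expresses the var preference for built-in types together with its severity (the severity shown here is just an illustration):

csharp_style_var_for_built_in_types = true:suggestion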

The left side is the name of the rule, in this case “csharp_style_var_for_built_in_types”. The right side indicates the rule settings: preference and enforcement level, respectively.

  • A preference setting can be either true (meaning, “prefer this rule”) or false (meaning, “do not prefer this rule”).
  • The enforcement level is the same for all Roslyn-based code analysis and can be, from least severe to most severe: none, suggestion, warning, or error.

Ultimately, your build will break if you violate a rule that is enforced at the error severity level (however, this is not yet supported in the RC). To see all code style rules available in the VS2017 RC update and the final Roslyn code style rules, see the Roslyn .editorconfig or check out our documentation.

If you need a refresher on the different severity levels and what they do, see below:

Table of code analysis severity levels

Pro-tip: The gray dots that indicate a suggestion are rather drab. To spice up your life, try changing them to a pleasant pink. To do so, go to Tools>Options>Environment>Fonts and Colors>Suggestion ellipses (…) and give the setting the following custom color (R:255, G:136, B:196):


Experience in Visual Studio

When you add an EditorConfig file to an existing repo or project, the files are not automatically cleaned up to conform to your conventions. You must also close and reopen any open files you have when you add or edit the EditorConfig file to have the new settings apply. To make an entire document adhere to code formatting rules defined in your settings, you can use Format Document (Ctrl+K,D). This one-click cleanup does not exist yet for code style, but you can use the Quick Actions menu (Ctrl+.) to apply a code style fix to all occurrences in your document/project/solution.

Fix all violations of a code style rule

Figure 2. Rules set in EditorConfig files apply to generated code and code fixes can be applied to all occurrences in the document, project, or solution.

Pro Tip: To verify that your document is using spaces vs tabs, enable Edit>Advanced>View White Space.

How do you know if an EditorConfig file is applied to your document? You should be able to look at the bottom status bar of Visual Studio and see this message:

Visual Studio status bar

Note that this means EditorConfig files override any code style settings you have configured in Tools>Options.

Conclusion

Visual Studio 2017 RC is just a stepping stone in the coding convention configuration and enforcement experience. To read more about EditorConfig support in Visual Studio 2017, check out our documentation. Download the VS2017 RC with the update to test out .NET code style in EditorConfig and let us know what you think!

Over ‘n’ out,

Kasey Uhlenhuth, Program Manager, .NET Managed Languages

Known Issues

  • Code style configuration and enforcement only works inside the Visual Studio 2017 RC update at this time. Once we make all the code style rules into a separate NuGet package, you will be able to enforce these rules in your CI systems, and rules enforced as errors will break your build when violated.
  • You must close and reopen any open files to have EditorConfig settings apply once it is added or edited.
  • Only indent_style, indent_size, tab_width, end_of_line, and charset are supported code formatting rules in Visual Studio 2017 RC.
  • IntelliSense and syntax highlighting are “in-progress” for EditorConfig files in Visual Studio right now. In the meantime, you can use MadsK’s VS extension for this support.
  • Visual Basic-specific rules are not currently supported in EditorConfig beyond the ones that are covered by the dotnet_style_* group.
  • Custom naming convention support is not yet supported with EditorConfig, but you can still use the rules available in Tools>Options>Text Editor>C#>Code Style>Naming. View our progress on this feature on the Roslyn repo.
  • There is no way to make a document adhere to all code style rules with a one-click cleanup (yet!).

GameAnalytics SDK for Microsoft UWP Released


We’re excited to announce our partnership with GameAnalytics, a powerful tool that helps developers understand player behavior so they can improve engagement, reduce churn and increase monetization.

The tool gives game developers a central platform that consolidates player data from various channels to help visualize their core gaming KPIs in one convenient view. It also enables team members to collaborate with reporting and benchmark their game to see how it compares with more than 10,000 similar titles.

You can set up GameAnalytics in a few minutes and it’s totally free of charge, without any caps on usage or premium subscription tiers. If you’d rather see the platform in action before making any technical changes, just sign up to view the demo game and data.

GameAnalytics is used by more than 30,000 game developers worldwide and handles over five billion unique events every day across 1.7 billion devices.

“I believe the single most valuable asset for any game developer in today’s market is knowledge,” said GameAnalytics Founder and Chairman, Morten E Wulff. “Since I started GameAnalytics back in 2012, I’ve met with hundreds of game studios from all over the world, and every single one is struggling with increasing user acquisition costs and falling retention rates.”

“When they do strike gold, they don’t always know why. GameAnalytics is here to change that. To be successful, game studios will have to combine creative excellence with a data-driven approach to development and monetization. We are here to bridge this gap and make it available to everyone for free,” he added.

GameAnalytics provides SDKs for every major game engine. The following guide outlines how to install the SDK and set up GameAnalytics to start tracking player behavior in five steps.

1.  Create a free GameAnalytics account

To get started, sign up for a free GameAnalytics account and add your first game. When you’ve created your game, you’ll find the integration keys in the settings menu (the gear icon), under “Game information.” You’ll need to copy your Game Key and Secret Key for the following steps.

2.  Download the standalone SDK for Microsoft UWP

Next, download the GameAnalytics SDK for Microsoft UWP. Once downloaded, you can begin the installation process.

3.  Install the native UWP SDK

To install the GameAnalytics SDK for Microsoft UWP, simply add the GameAnalytics.UWP.SDK package from the NuGet package manager. For manual installation, use the following instructions:

Manual installation

  • Open GA-SDK-UWP.sln and compile the GA_SDK_UWP project
  • Create a NuGet package: nuget pack GA_SDK_UWP/GA_SDK_UWP.nuspec
  • Copy the resulting GameAnalytics.UWP.SDK.[VERSION].nupkg (where [VERSION] is the version specified in the .nuspec file) into, for example, C:\Nuget.Local (the name and location of the folder are up to you)
  • Add C:\Nuget.Local (or whatever you called the folder) to the NuGet package sources (and disable the official NuGet source)
  • Add the GameAnalytics.UWP.SDK package from the NuGet package manager

4.  Initialize the integration

Call this method to initialize using the Game Key and Secret Key for your game (copied in step 1):


// Initialize
GameAnalytics.Initialize("[game key]", "[secret key]");

Below is a practical example of code that is called at the beginning of the game to initialize GameAnalytics:


using GameAnalyticsSDK.Net;

namespace MyGame
{
    public class MyGameClass
    {
        // ... other code from your project ...
        void OnStart()
        {
            GameAnalytics.SetEnabledInfoLog(true);
            GameAnalytics.SetEnabledVerboseLog(true);
            GameAnalytics.ConfigureBuild("0.10");

            GameAnalytics.ConfigureAvailableResourceCurrencies("gems", "gold");
            GameAnalytics.ConfigureAvailableResourceItemTypes("boost", "lives");
            GameAnalytics.ConfigureAvailableCustomDimensions01("ninja", "samurai");
            GameAnalytics.ConfigureAvailableCustomDimensions02("whale", "dolpin");
            GameAnalytics.ConfigureAvailableCustomDimensions03("horde", "alliance");
            GameAnalytics.Initialize("[game key]", "[secret key]");
        }
    }
}

5.  Build to your game engine

GameAnalytics has provided full documentation for each game engine and platform. You can view and download all files via their GitHub page, or follow the steps below. They currently support building to the following game engines with Microsoft UWP:

You can also connect to the service using their REST API.

Viewing your game data

Once implemented, GameAnalytics provides insight into more than 50 of the top gaming KPIs, straight out of the box. Many of these metrics are viewable on a real-time dashboard to get a quick overview into the health of your game throughout the day.

The real-time dashboard gives you visual insight into your number of concurrent users, incoming events, new users, returning users, transactions, total revenue, first time revenue and error logs.

Creating custom events

You can create your own custom events with unique IDs, which allow you to track actions specific to your game experience and measure these findings within the GameAnalytics interface. Event IDs are fully customizable and should fall within one of the following event types:

  • Business – In-App Purchases supporting receipt validation on GA servers.
  • Resource – Managing the flow of virtual currencies, like gems or lives.
  • Progression – Level attempts with Start, Fail, and Complete events.
  • Error – Submit exception stack traces or custom error messages.
  • Design – Submit custom event IDs. Useful for tracking metrics specifically needed for your game.

For more information about planning and implementing each of these event types to suit your game, visit the game analytics data and events page.
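As an illustrative sketch, recording a custom design event comes down to a single call like the one below; the method name follows the GameAnalytics .NET SDK's event naming, so treat it as a sketch and check the SDK documentation for the exact signature:

// Sketch: record a custom design event with a hierarchical event ID
GameAnalytics.AddDesignEvent("tutorial:step1:completed");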

GameAnalytics Dashboards

Developers using GameAnalytics can track their events in a selection of dashboards tailored specifically to games. The dashboards are powerful, yet totally flexible to suit any use case.

Overview Dashboard

With this dashboard you will see a quick snapshot of your core game KPIs.

Acquisition Dashboard

This dashboard provides insight into your player acquisition costs and best marketing sources.

Engagement

This dashboard helps to measure how engaged your players are over time.

Monetization

This dashboard visualizes all of the monetization metrics relating to your game.

Progression

This dashboard helps you understand where players grind or drop off in your game.

Resources

This dashboard helps you balance the flow of “sink” and “gain” resources in your game economy.

You can find a more detailed overview for each dashboard on the GameAnalytics documentation portal.

The post GameAnalytics SDK for Microsoft UWP Released appeared first on Building Apps for Windows.

Using Cortana Intelligence in HoloLens Applications


This post is authored by Scott Haynie, Senior Software Engineer, and Senja Filipi, Software Engineer, at Microsoft.

Telemetry plays an important role when you operationalize new experiences/apps that span the web, mobile and IoT, including new gadgets such as the Microsoft HoloLens. The abundance of data that is made available can help developers monitor and track system health and usage patterns, and provide important new insights into how users interact with your application. Tapping into this wealth of information can really help you align your customers’ experiences with their needs and expectations.

At the Ignite 2016 Innovation Keynote, we showed the future of home improvement as envisioned by Lowe's and Microsoft. As part of this experience, we demonstrated how to use telemetry emitted by HoloLens to understand the [virtual] places that customers are gazing at, as they explore an immersive home remodeling experience in 3D, using the augmented reality capabilities of HoloLens.

Kitchen remodeling experience with HoloLens.

Heat map based on HoloLens telemetry data.

In this post, we show how you can use the Cortana Intelligence Suite to ingest data from the HoloLens application, analyze it in real-time using Azure Stream Analytics, and visualize it with Power BI.

Empowering HoloLens Developers with Cortana Intelligence

In HoloLens applications, users can interact through one of these methods:

  1. By gazing at an object.
  2. By using hand gestures.
  3. By using voice commands.

When a user wears and interacts with the HoloLens, a frontal camera tracks head movements, including the “head ray” and focus. The point being looked at is indicated by a cursor as a visual cue. These interactions are handled by the Update method (part of the MonoBehaviour class). This method gets called at the refresh-rate frequency, usually 60 frames per second, so it's crucial not to slow down the update process with any side operations.

We set out with a simple goal: HoloLens developers should be able to use the Cortana Intelligence Suite and any related Azure services with just a few lines of code. In this example, you can see the code in one of the Update methods that tracks the gaze of the HoloLens user. To use Cortana Intelligence, the HoloLens application just needs to add this one line:

logger.AddGazeEvent(FocusedObject.gameObject.name);

The HoloLens is now able to send telemetry data to Event Hub, and you can further analyze the data using various Azure Services, for instance, Azure Stream Analytics.


Calling the telemetry AddGazeEvent from the HoloLens app.
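To put that one line in context, here is a minimal sketch of how it might sit inside a Unity MonoBehaviour's Update method. The IGazeLogger interface, the logger field, and the FocusedObject field are placeholders standing in for whatever gaze manager and telemetry client your app already uses; only the AddGazeEvent call mirrors the line above.

using UnityEngine;

// Minimal stand-in for the telemetry library's client type (the real type name is not shown in this post)
public interface IGazeLogger
{
    void AddGazeEvent(string objectName);
}

public class GazeTelemetry : MonoBehaviour
{
    // Assigned elsewhere, for example when the telemetry library is initialized at startup
    public IGazeLogger logger;

    // Whatever your gaze manager reports as the currently focused object (placeholder)
    public Transform FocusedObject;

    void Update()
    {
        // Update runs every frame (about 60 fps on HoloLens), so keep this path cheap
        if (logger != null && FocusedObject != null)
        {
            // The telemetry call itself is a single line; batching and retries happen inside the library
            logger.AddGazeEvent(FocusedObject.gameObject.name);
        }
    }
}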

Building the End to End Solution

A canonical telemetry processing pipeline consists of the following:

  1. An Event Hub that enables the ingestion of data from the HoloLens client application.
  2. A Stream Analytics job that consumes the telemetry data, analyzes it in real time, and writes the derived insights to Power BI as an output.
  3. A Power BI dashboard.

If you don't have an Azure account yet, you can obtain a free account, which will help you follow along with this post. Azure provides a rich set of libraries that developers can use to interact with its many services. When using these libraries with a HoloLens application (which is a Universal Windows app), you will not be able to use NuGet packages directly if they rely on the full .NET Framework implementation. Additionally, Azure services provide REST interfaces for client communication.

In our case, to send telemetry data from the HoloLens to the Event Hub, we implemented a .NET library using the core .NET Framework to meet the UWP app requirements. The DLL handles the batching of events and the composing of the request payload, and takes care of network retries.
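As a rough sketch of the kind of request such a library might issue when it flushes a batch: the endpoint shape follows the public Azure Event Hubs REST API, but the namespace, hub name, and SAS token below are placeholders, and the real library wraps this call in batching and retry logic.

using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class EventHubSender
{
    // Placeholder values – substitute your own namespace, event hub name, and a pre-generated SAS token
    const string Namespace = "mytelemetry";
    const string HubName = "hololens-events";
    const string SasToken = "SharedAccessSignature sr=...&sig=...&se=...&skn=...";

    static readonly HttpClient client = new HttpClient();

    // Posts a single JSON event to the event hub via its REST endpoint
    public static async Task SendAsync(string jsonPayload)
    {
        var url = $"https://{Namespace}.servicebus.windows.net/{HubName}/messages";
        using (var request = new HttpRequestMessage(HttpMethod.Post, url))
        {
            request.Headers.TryAddWithoutValidation("Authorization", SasToken);
            request.Content = new StringContent(jsonPayload, Encoding.UTF8, "application/json");
            var response = await client.SendAsync(request);
            response.EnsureSuccessStatusCode(); // a production library would retry transient failures instead
        }
    }
}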


Initializing the Telemetry Library

Step-by-step instructions on how to set up the Azure services and send data to the Azure Event Hub with retries from the UWP app can be found in GitHub here. Additional resources that you may find useful are included below. We would love to hear from you, so do let us know what you think – you can send your feedback via the comments section below.

Scott & Senja

Resources:


Bing Location Control helps devs add location to the conversation


Bots often need the user to input a location to complete a task. And normally bot developers need to use a combination of location or place APIs, and have their bots engage in a multi-turn dialog with users to get their desired location and subsequently validate it. The development steps are usually complicated and error-prone.

As announced on the Bot Framework blog, the open source Bing Location Control for Bot Framework allows bot developers to easily and reliably get the user’s desired location within a conversation. The control is available in C# and Node.js and works consistently across all messaging channels supported by Bot Framework. All this with a few lines of code. Read the full post on the Bing Developer blog.

SQL Server on Linux: How? Introduction


This post was authored by Scott Konersmann, Partner Engineering Manager, SQL Server, Slava Oks, Partner Group Engineering Manager, SQL Server, and Tobias Ternstrom, Principal Program Manager, SQL Server

Introduction

We first announced SQL Server on Linux in March, and recently released the first public preview of SQL Server on Linux (SQL Server v.Next CTP1) at the Microsoft Connect(); conference. We've been pleased to see the positive reaction from our customers and the community; in the two weeks following the release, there were more than 21,000 downloads of the preview. A lot of you are curious to hear more about how we made SQL Server run on Linux (and some of you have already figured out and posted interesting articles about part of the story with “Drawbridge”). We decided to kick off a blog series to share technical details about this very topic, starting with an introduction to the journey of offering SQL Server on Linux. Hopefully you will find it as interesting as we do!

Summary

Making SQL Server run on Linux involves introducing what is known as a Platform Abstraction Layer (“PAL”) into SQL Server. This layer is used to align all operating system or platform specific code in one place and allow the rest of the codebase to stay operating system agnostic. Because of SQL Server's long history on a single operating system, Windows, it never needed a PAL. In fact, the SQL Server database engine codebase has many references to libraries that are popular on Windows to provide various functionality. In bringing SQL Server to Linux, we set strict requirements for ourselves to bring the full functionality, performance, and scale of the SQL Server RDBMS to Linux. This includes the ability for an application that works great against SQL Server on Windows to work equally great against SQL Server on Linux. Given these requirements, and the fact that the existing SQL Server OS dependencies would make it very hard to provide a highly capable version of SQL Server outside of Windows in a reasonable time, we decided to marry parts of the Microsoft Research (MSR) project Drawbridge with SQL Server's existing platform layer, the SQL Server Operating System (SOS), to create what we call the SQLPAL. The Drawbridge project provided an abstraction between the underlying operating system and the application for the purposes of secure containers, and SOS provided robust memory management, thread scheduling, and IO services. Creating SQLPAL enabled the existing Windows dependencies to be used on Linux, with the help of the parts of the Drawbridge design focused on OS abstraction, while leaving the key OS services to SOS. We are also changing the SQL Server database engine code to bypass the Windows libraries and call directly into SQLPAL for resource-intensive functionality.

Requirements for supporting Linux

SQL Server is Microsoft's flagship database product, with close to 30 years of development behind it. At a high level, the list below represents our requirements as we designed the solution to make the SQL Server RDBMS available on multiple platforms:

  1. Quality and security must meet the same high bar we set for SQL Server on Windows
  2. Provide the same value, both in terms of functionality, performance, and scale
  3. Application compatibility between SQL Server on Windows and Linux
  4. Enable a continued fast pace of innovation in the SQL Server code base and make sure new features and fixes appear immediately across platforms
  5. Put in place a foundation for future SQL Server suite services (such as Integration Services) to come to Linux

To make SQL Server support multiple platforms, the engineering task is essentially to remove or abstract away its dependencies on Windows. As you can imagine, after decades of development against a single operating system, there are plenty of OS-specific dependencies across the code base. In addition, the code base is huge. There are tens of millions of lines of code in SQL Server.

SQL Server depends on various libraries and their functions and semantics commonly used in Windows development that fall into three categories:

  • “Win32” (ex. user32.dll)
  • NT Kernel (ntdll.dll)
  • Windows application libraries (such as MSXML)

You can think of these as core library functions, most of them have nothing to do with the operating system kernel and only execute in user mode.

While SQL Server has dependencies on both Win32 and the Windows kernel, the most complex dependency is that of Windows application libraries that have been added over the years in order to provide new functionality.  Here are some examples:

  • SQL Server’s XML support uses MSXML which is used to parse and process XML documents within SQL Server.
  • SQLCLR hosts the Common Language Runtime (CLR) for both system types as well as user defined types and CLR stored procedures.
  • SQL Server has some components written in COM like the VDI interface for backups.
  • Heterogeneous distributed transactions are controlled through Microsoft Distributed Transaction Coordinator (MS DTC)
  • SQL Server Agent integrates with many Windows subsystems (shell execution, Windows Event Log, SMTP Mail, etc.).

These dependencies are the biggest challenge for us to overcome to meet our goals of bringing the same value and having a very high level compatibility between SQL Server on Windows and Linux. As an example, to re-implement something like SQLXML would take a significant amount of time and would run a high risk of not providing the same semantics as before, and could potentially break applications. The option of completely removing these dependencies would mean we must also remove the functionality they provide from SQL Server on Linux. If the dependencies were edge cases and only impacting very few customer visible features, we could have considered it. As it turns out, removing them would cause us to have to remove tons of features from SQL Server on Linux which would go against our goals around compatibility and value across operating systems.

We could take the approach of doing this re-implementation piecemeal, bringing value little by little. While this would be possible, it would also go against the requirements because it would mean that there would be a significant gap between SQL Server on Linux and Windows for years. The resolution lies in the right platform abstraction layer.

Building a PAL

Software that is supported across multiple operating systems always has an implementation of some sort of Platform Abstraction Layer (PAL). The PAL layer is responsible for abstraction of the calls and semantics of the underlying operating system and its libraries from the software itself. The next couple of sections consider some of the technology that we investigated as solutions to building a PAL for SQL Server.

SQL Operating System (SOS or SQLOS)

In the SQL Server 2005 release, a platform layer was created between the SQL Server engine and Windows called the SQL Operating System (SOS). This layer was responsible for user mode thread scheduling, memory management, and synchronization (see SQLOS for reference).  A key reason for the creation of SOS was that it allowed for a centralized set of low level management and diagnostics functionality to be provided to customers and support (subset of Dynamic Management Views/DMVs and Extended Events/XEvents).  This layer allowed us to minimize the number of system calls involved in scheduling execution by running non-preemptively and letting SQL Server do its own resource management.  While SOS improved performance and greatly helped supportability and debugging, it did not provide a proper abstraction layer from the OS dependencies described above, i.e. Windows semantics were carried through SOS and exposed to the database engine.


In the scenario where we would completely remove the dependencies on the underlying operating system from the database engine, the best option was to grow SOS into a proper Platform Abstraction Layer (PAL).  All the calls to Windows APIs would be routed through a new set of equivalent APIs in SOS and a new host extension layer would be added on the bottom of SOS that would interact with the operating system. While this would resolve the system call dependencies, it would not help with the dependencies on the higher-level libraries.

Drawbridge

Drawbridge was a Microsoft Research project (see Drawbridge for reference) that focused on drastically reducing the virtualization resource overhead incurred when hosting many virtual machines on the same hardware. The research involved two ideas. The first was a “picoprocess,” which consists of an empty address space, a monitor process that interacts with the host operating system on behalf of the picoprocess, and a kernel driver that allows the address space to be populated at startup and implements a host Application Binary Interface (ABI) through which the picoprocess interacts with the host. The second was a user mode Library OS, sometimes referred to as LibOS. Drawbridge provided a working Windows Library OS that could be used to run Windows programs on a Windows host. This Library OS implements a subset of the 1500+ Win32 and NT ABIs and stubs out the rest to either succeed or fail depending on the type of call.


Our needs didn’t align with the original goals of the Drawbridge research.  For instance, the picoprocess idea isn’t something needed for moving SQL Server to other platforms.  However, there were a couple of synergies that stood out:

  1. Library OS implemented most of the 1500+ Windows ABIs in user mode and only 45-50 ABIs were needed to interact with the host.  These ABIs were for address space and memory management, host synchronization, and IO (network and disk).  This made for a very small surface area that needs to be implemented to interact with a host.  That is extremely attractive from a platform abstraction perspective.
  2. Library OS was capable of hosting other Windows components.  Enough of the Win32 and NT layers were implemented to host CLR, MSXML, and other APIs that the SQL suite depends on. This meant that we could get more functionality to work without rewriting whole features.

There were also some risk and reward tradeoffs:

  1. The Microsoft Research project was complete and there was no support for Drawbridge. Therefore, we needed to take a source snapshot and modify the code for our purposes.  The risks were around the costs to ramp up a team on the Library OS, modify it to be suitable for SQL Server, and make it perform comparably with Windows.  On the positive side, this would mean everything is in user mode and we would own all the code within the stack.  Performance critical code can be optimized because we can modify all layers of the stack including SQL Server, the Library OS, and the host interface as needed to make SQL Server perform.  Since there are no real boundaries in the process, it is possible for SQL Server to call Linux.
  2. The original Drawbridge project was built on Windows and used a kernel driver and monitor process.  This would need to be dropped in favor of a user mode only architecture.  In the new architecture, the host extension (referred to as PAL in the Drawbridge design) on Windows would move from a kernel driver to just a user mode program.  Interestingly enough, one of the researchers had developed a rough prototype for Linux that proved it could be done.
  3. Because the technologies were created independently there was a large amount of overlapping functionality.  SOS had subsystems for object management, memory management, threading/scheduling, synchronization, and IO (disk and network). The Library OS and Host Extension also had similar functionality.  These systems would need to be rationalized down to a single implementation.
Table of overlapping subsystems across SOS, the Library OS, and the Host Extension: object management, memory management, threading/scheduling, synchronization, and I/O (disk, network).

Meet SQLPAL

As a result of the investigation, we decided on a hybrid strategy.  We would merge SOS and Library OS from Drawbridge to create the SQL PAL (SQL Platform Abstraction Layer). For areas of Library OS that SQL Server does not need, we would remove them. To merge these architectures, changes were needed in all layers of the stack.

The new architecture consists of a set of SOS direct APIs which don't go through any Win32 or NT syscalls. Code that does not use the SOS direct APIs will go through either a hosted Windows API (like MSXML) or NTUM (the NT User Mode API – the 1500+ Win32 and NT syscalls). All the subsystems like storage, network, or resource management will be based on SOS and will be shared between the SOS direct and NTUM APIs.


This architecture provides some interesting characteristics:

  • Everything running in process boils down to the same platform assembly code.  The CPU can’t tell the difference between the code that is providing Win32 functionality to SQL Server or native Linux code.
  • Even though the architecture shows layering, there are no real boundaries within the process (There is no spoon!). If performance-critical code running in SQL Server needs to call Linux, it can do that directly with a very small amount of assembler via the SOS direct APIs to set up the stack correctly and process the result. An example where this has been done is the disk IO path. There is a small amount of conversion code left to convert from the Windows scatter/gather input structure to the Linux vectored IO structure. Other disk IO types don't require any conversions or allocations.
  • All resources in the process can be managed by SQLPAL. In SQL Server, before SQLPAL, most resources such as memory and threads were controlled, but there were some things outside its control. Some libraries and Win32/NT APIs would create threads on their own and do memory allocations without using the SOS APIs. With this new architecture, even the Win32 and NT APIs are based on SQLPAL, so every memory allocation and thread is controlled by SQLPAL. As you can see, this also benefits SQL Server on Windows.
  • For SQL Server on Linux we are using about 81 MB of uncompressed Windows libraries, so it’s a tiny fraction (less than 1%) of a typical Windows installation. SQLPAL itself is currently around 8 MB.

Process Model

The following diagram shows what the address space looks like when running. The host extension is simply a native Linux application. When the host extension starts, it loads and initializes SQLPAL, and SQLPAL then brings up SQL Server. SQLPAL can launch software isolated processes that are simply a collection of threads and allocations running within the same address space. We use that for things like SQLDumper, an application that is run when SQL Server encounters a problem in order to collect an enlightened crash dump.

One point to reiterate is that even though this might look like a lot of layers there aren’t any hard boundaries between SQL Server and the host.


Evolution of SQLPAL

At the start of the project, SQL Server was built on SOS and Library OS was independent.  The eventual goal is to have a merged SOS and Library OS as the core of SQL PAL.  For public preview, this merge wasn’t fully completed, but the heart of SQLPAL had been replaced with SOS.  For example, threads and memory already use SOS functionality instead of the original Drawbridge implementations.

The result is that there are two instances of SOS running inside the CTP1 release: one in SQL Server and one in SQLPAL. This works fine because the SOS instance in SQL Server is still using Win32 APIs which call down into the SQLPAL. The SQLPAL instance of the SOS code has been changed to call the host extension ABIs (i.e. the native Linux code) instead of Win32.

Now we are working on removing the SOS instance from SQL Server.  We are exposing the SOS APIs from the SQLPAL.  Once this is completed everything will flow through the single SQLPAL SOS instance.

More posts

We are planning more of these posts to tell you about our journey, which we believe has been amazing and a ton of fun worth sharing. Please provide comments if there are specific areas you are interested in us covering!

Thanks!

This Week on Windows: Forza Horizon 3, Microsoft Photos, Gearsmas and more


This Week on Windows, we’re talking about how to stream your favorite Xbox and Windows 10 game titles to Oculus Rift, sales on movies and TV in the Windows Store and more!

In case you missed it:

We shared an update on the many wonderful things happening across the board for Surface


We’re so proud of the momentum and progress we are making with our Surface line-up. We’re learning a ton from Surface and Surface Hub customers and have an amazing team of hardware and software engineers dedicated to continually improving our products. Read more in our blog post from Monday.

The Microsoft Photos app for Windows 10 got new features


With an update to Microsoft Photos, we’re making it fun to view all your digital memories in photo or video form, with a refreshed user experience that makes it pleasant to browse your collection. We’ve updated the way you edit photos and apply filters to simplify the most common actions. To celebrate the new hardware and the creator in all of us, we’ve added the ability to draw on your photos and videos and even play back the ink with animation! Read more here.

Killer Instinct: Definitive Edition launched on Windows 10


Previously available only on Xbox One, the Definitive Edition packs up ALL the content we’ve ever released – every character, stage, costume, color, trailers and tracks – and puts it all into one box of awesome goodness. As a bonus, it also comes with behind-the-scenes videos, never-before-seen concept art, and a full universe map so you can read the bios and backstories of all your favorite characters.

Here’s what’s new in the Windows Store this week:

Celebrate Gearsmas with unique content, special events and special offers!

The days are getting colder and the nights are getting longer, which means it’s time for the seasonal signature event for Gears of War – Gearsmas! Beginning today, this year’s holiday spectacular marks the biggest event for Gears of War 4 yet with over thirty pieces of festive content to collect, a special playlist featuring the first ever custom weapon for Gears 4 and a whole lot more. If you haven’t picked up Gears of War 4 yet, there’s never been a better time to join. In the spirit of Gearsmas, Gears of War 4 has extended a special offer – a 35% discount! Head over to Xbox Wire to read more.

Elf – Sale


Until Dec. 18, buy Elf for just $4.99 HD on Microsoft Movies & TV and get $5 in store credit to spend on even more movies, games, apps, or music! Once you own Elf, you can watch across any Windows 10 device or Xbox console or download for offline viewing.

Pre-order Rogue One: A Star Wars Story


As the Empire continues to gain power, a group of rebel spies will risk everything to steal the plans to their enemy’s most terrifying new weapon: the Death Star. Preorder Rogue One: A Star Wars Story while the film is still in theaters in the Movies & TV section of the Windows Store. Available in the US and Canada.

Astroneer (Game Preview)


Today, Astroneer ($19.99) will launch into Game Preview for Xbox One and in the Windows Store. Astroneer is a game about independent space explorers prospecting the stars for fortune and glory. It is also about discovery and mystery, as you uncover rare artifacts and the resources you need to find them, on vast worlds where every cubic inch of space can be explored.

Forza Horizon 3 Blizzard Mountain

Ready for a frozen adventure? Well, the wait is over with the Forza Horizon 3 Blizzard Mountain expansion! Available today as a standalone purchase or as part of the Forza Horizon 3 Expansion Pass, Blizzard Mountain invites players to the snowy elevations of a brand-new playable area of Forza Horizon 3’s Australia.

Movies & TV Winter Sale


Winter Sale has arrived! Enjoy hit movies for up to 38% off, get TV season passes for as low as $14.99, and score major savings on movie bundles like Marvel’s Captain America 3-Movie Collection. Don’t miss out on our Winter Sale, now through Dec. 19 in the Windows Store.

Have a great weekend!

The post This Week on Windows: Forza Horizon 3, Microsoft Photos, Gearsmas and more appeared first on Windows Experience Blog.

Power BI reports in SQL Server Reporting Services: Feedback on the Technical Preview


In October at PASS Summit 2016, we released a Technical Preview of Power BI reports in SQL Server Reporting Services (SSRS). Since then, we’ve received a ton of enthusiastic feedback. Here’s just some of the feedback we’ve heard so far:

  • The virtual machine with everything you need preloaded – Power BI Desktop, Reporting Services, Analysis Services, and even some sample data and reports – makes it really quick and easy to try a new feature like this one.
  • Power BI reports feel natural within the Reporting Services web portal and work just as you’d expect, while Power BI Desktop’s ability to open reports from and save them to a report server feels seamless and supports the iterative process of creating reports.
  • The Comments feature is a welcome addition and a great way to engage in discussions about the insights you uncover in Power BI reports as well as in other reports.

We thank everyone who’s tried the Technical Preview and shared their feedback so far.

In addition to feedback, we’ve received a number of questions, and in today’s post, we thought we’d round up some of the most common ones.

What’s Microsoft’s approach to offering Power BI capabilities in an on-premises solution?

Power BI was designed to be Software-as-a-Service running in Microsoft’s Azure datacenters, while SQL Server Reporting Services (SSRS) was designed to be an on-premises solution that customers can deploy and manage on their own servers. For customers who need an on-premises solution, as described in our reporting roadmap, we’re investing in the SSRS product and adding support for Power BI reports to SSRS. With this support, you can create a report in Power BI Desktop, publish it to your SSRS report server, and view and interact with it in your web browser.

Which Power BI capabilities do you plan to add to SSRS?

We’re focusing our efforts on adding Power BI reports to SSRS and on supporting the features Power BI Desktop offers for use within these reports, including a variety of data connectors and visualizations. Beyond the current Technical Preview, we plan to add support for

  • Custom visuals
  • Additional data connectors (besides Analysis Services), cached data, and scheduled data refresh
  • Power BI mobile apps (viewing Power BI reports stored in SSRS)

Given our focus on Power BI reports, we have no current plans to add other Power BI features (such as “dashboards,” Q&A, Quick Insights, and others) to SSRS.

What can we expect in the next Technical Preview of Power BI reports in SSRS?

With the current Technical Preview, we used a pre-configured Azure VM to offer you a preview that’s quick and easy to try. Our focus for the next Technical Preview is on a version you can download and install on your own VM or server, a necessary next step toward a production-ready version. Aside from this aspect, the functionality will be similar to the current Technical Preview’s.

When will we have this next Technical Preview?

We’re targeting January 2017 to release this next Technical Preview.

What’s the release vehicle for a production-ready version?

We plan to release the production-ready version in the next SQL Server release wave. We won’t be releasing it in a Service Pack, Cumulative Update, or other form of update for SSRS 2016.

When will we have a production-ready version?

We’re targeting availability in mid-2017.

Can I deploy SSRS 2016 today and migrate to SSRS with Power BI reports when it’s available?

Yes, we aim to make it easy to migrate to SSRS with Power BI reports from SSRS 2016 and previous versions.

How can I participate today?

The best way to participate and help us deliver SSRS with Power BI reports as quickly as possible is to try our preview releases and share your feedback with us:

 

 

SQL Server next version Community Technology Preview 1.1 now available


Microsoft is excited to announce that the next version of SQL Server (SQL Server v.Next) Community Technology Preview (CTP) 1.1 is now available for download on both Windows and Linux. In SQL Server v.Next CTP 1.1, part of our rapid preview model, we made enhancements to several database engine and Business Intelligence (BI) capabilities which you can try in your development and test environments.

Key enhancements in SQL Server v.Next on Windows CTP 1.1 for Analysis Services tabular models include:

  • New infrastructure for data connectivity and ingestion into tabular models with support for TOM APIs and TMSL scripting. This infrastructure enables:
    • Support for additional data sources, such as MySQL. Additional data sources are planned in upcoming CTPs.
    • Data transformation and data mashup capabilities.
  • Support for BI tools such as Microsoft Excel to enable drill-down to detailed data from an aggregated report. For example, when end users view total sales for a region and month, they can view the associated order details.
  • Support for ragged hierarchies in reports, such as organizational and account charts.
  • Enhanced security for tabular models, including the ability to set permissions to help secure individual tables.

For more detailed information about Analysis Services in SQL Server v.Next CTP 1.1, see the Analysis Services Team Blog.

Key SQL Server v.Next on Windows and Linux CTP 1.1 database engine enhancements include:

  • Language and performance enhancements to natively compiled T-SQL modules, including support for OPENJSON, FOR JSON, and JSON built-ins, as well as memory-optimized table support for computed columns.
  • Improved the performance of updates to non-clustered columnstore indexes in the case when the row is in the delta store.
  • Batch mode queries now support “memory grant feedback loops,” which learn from the memory used during query execution and adjust on subsequent query executions; this can allow more queries to run on systems that are otherwise blocking on memory.
  • New T-SQL language features:
    • Introducing three new string functions: TRIM, CONCAT_WS, and TRANSLATE
    • BULK IMPORT supports CSV format and Azure Blob Storage as file source
    • STRING_AGG supports WITHIN GROUP (ORDER BY)

In addition, we have added support for Red Hat 7.3 and Ubuntu 16.10 to SQL Server on Linux.

For additional detail, please visit What’s New in SQL Server v.Next, Release Notes and Linux documentation.

Download SQL Server v.Next CTP 1.1 preview today!

SQL Server v.Next brings the power of SQL Server to both Windows – and for the first time ever – Linux.  SQL Server enables developers and organizations to build intelligent applications with industry-leading performance and security technologies using their preferred language and environment.

Try the preview of the next release of SQL Server today! Get started with the preview of SQL Server on Linux, macOS (via Docker) and Windows with our developer tutorials that show you how to install and use SQL Server v.Next on macOS, Docker, Windows, RHEL and Ubuntu and quickly build an app in a programming language of your choice.

Visit the SQL Server v.Next webpage to learn more.  To experience the new, exciting features in SQL Server v.Next and our rapid release model, download the preview on Linux and Windows and start evaluating the impact these new innovations can have for your business. Have questions? Join the discussion of the new SQL Server v.Next capabilities at MSDN. If you run into an issue or would like to make a suggestion, you can let us know at Connect. We look forward to hearing from you!

Introducing a Modern Get Data Experience for SQL Server vNext on Windows CTP 1.1 for Analysis Services


Starting with SQL Server vNext on Windows CTP 1.1, Analysis Services features a modern connectivity stack similar to the one that users already appreciate in Microsoft Excel and Power BI Desktop. You will be able to connect to an enormous list of data sources, ranging from various file types and on-premises databases through Azure sources and other online services all the way to Big Data systems. You can perform data transformations and mashups and load the results directly into a Tabular model. You can also add data connections and M queries to a Tabular model programmatically by using the Tabular Object Model (TOM) and the Tabular Model Scripting Language (TMSL). The modern Get Data experience adds exciting data access, transformation, and enrichment capabilities to Tabular models.

Taking a First Glance

In sync with the SQL Server vNext CTP 1.1 release, the December release of SSDT 17.0 RC2 for SQL Server vNext CTP 1.1 Analysis Services (SSDT Tabular) ships with a preview of the modern Get Data experience. You don’t necessarily need to deploy a CTP 1.1 instance of Analysis Services to take a quick look at the new connectivity stack, because Integrated workspace mode in SSDT Tabular relies on and includes the same Analysis Services engine. To learn more about Integrated workspace mode, check out the blog article Introducing Integrated Workspace Mode for SQL Server Data Tools for Analysis Services Tabular Projects (SSDT Tabular).

Note that this SSDT Tabular release for CTP 1.1 is an early preview for evaluating the vNext capabilities of Analysis Services delivered with the 1400 compatibility level. It is not supported in production environments. Install only the Analysis Services component, not the Reporting Services and Integration Services components. Upgrades from previous SSDT versions are not supported, so either install on a newly installed computer or VM or uninstall any previous versions first. Also, use this preview version of SSDT only for Tabular 1400 models; for Multidimensional as well as Tabular 1100, 1103, and 1200 models, use SSDT version 16.5.

After downloading and installing the December release of SSDT that supports SQL Server vNext CTP 1.1, create a new Analysis Services Tabular Project. In the Tabular Model Designer dialog, make sure you select the SQL Server vNext (1400) compatibility level. The modern Get Data experience is only available at compatibility level 1400. Tabular 1200 models continue to use the legacy connectivity stack available with SQL Server 2016 and previous releases.


Figure 1   Creating a Tabular 1400 model to use the modern Get Data experience

Note: If you are using a previous version of Analysis Services as your workspace server or a previous version of SSDT Tabular in integrated workspace mode, you will not be able to create Tabular 1400 models or use the modern Get Data experience.

Once you’ve created a Tabular 1400 model, click the Model menu or right-click on Data Sources in Tabular Model Explorer and then click Import from Data Source. In Tabular Model Explorer, you can also click New Data Source. The difference between these two commands is that Import from Data Source leads you through both the definition of a data source and the import of data into one or multiple tables, while the New Data Source command only creates a new data source definition. In a subsequent step, you would right-click the resulting data source object and choose Import New Tables. Either way, the two commands display the same Get Data dialog box similar to the version you see in Power BI Desktop.


Figure 2   Importing data into a Tabular 1400 model through the modern Get Data experience

Don’t be disappointed when you see a rather short list of data sources in the Get Data dialog box. CTP 1.1 is an early preview and exposes only a small set of tested options. Our plan for the SQL Server vNext release is to provide the same list of data sources that Power BI Desktop already supports, so the list will grow with subsequent CTPs.

The steps to create a data source are the same as in Power BI Desktop. However, an important difference is noticeable in the Query Editor window that appears when you import one or more tables from a data source. Apart from the fact that the Query Editor window features a toolbar consistent with the Visual Studio user interface, instead of a collection of ribbons, you might notice that the Merge Queries and Append Queries commands are missing. These commands will be available in a subsequent CTP when SSDT implements full support for shared queries.


Figure 3   The Query Editor dialog box in SSDT Tabular when importing tables into a Tabular 1400 model

For now, each table you choose to import in the Navigator window translates into an individual query in the Query Editor window, which will result in a corresponding table in the 1400 model when you click on Import in the Query Editor toolbar. Of course, you can define data transformation steps prior to importing the data, such as split columns, hide columns, change data types, and so on. Or, click on the Advanced Editor button (right next to Import on the toolbar) to display the Advanced Editor window, which lets you modify the import query in an unconstrained way based on the M query language. You can resize and maximize the Query Editor and Advanced Editor windows if necessary. Just be careful with advanced query authoring because SSDT does not yet capture all possible query errors. For the CTP 1.1 preview, a better approach might be to create and test advanced queries in Power BI Desktop and then paste the results into the Advanced Editor window in SSDT Tabular.


Figure 4   The Advanced Editor window is available to define advanced M queries

If you choose to copy queries from Power BI Desktop, note how the Source statement in Figure 4 refers to the AS_AdventureWorksDW data source object defined in the Tabular model. Instead of referring to the source directly by using a statement such as Source = Sql.Databases(“<Name of SQL Server>”), M queries in Analysis Services can refer to a data source by using a statement such as Source = <Name of Data Source Object>. It’s relatively straightforward to adjust this line after pasting a Power BI Desktop query into the Advanced Editor window.
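As a minimal sketch of that adjustment (assuming the AS_AdventureWorksDW data source object from Figure 4 and using the DimCustomer table purely as an example), only the Source step changes when you paste a query from Power BI Desktop:

// Query as generated by Power BI Desktop:
let
    Source = Sql.Databases("<Name of SQL Server>"),
    AdventureWorksDW = Source{[Name="AdventureWorksDW"]}[Data],
    dbo_DimCustomer = AdventureWorksDW{[Schema="dbo",Item="DimCustomer"]}[Data]
in
    dbo_DimCustomer

// The same query adjusted for SSDT Tabular, referring to the data source object by name:
let
    Source = AS_AdventureWorksDW,
    dbo_DimCustomer = Source{[Schema="dbo",Item="DimCustomer"]}[Data]
in
    dbo_DimCustomer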

Referring to data source objects helps to centralize data source settings for multiple queries and simplifies deployments and maintenance if data source definitions must be updated later on. When updating a data source definition, all M queries that refer to it automatically use the new settings.

Of course, you can also edit the M query of a table after the initial import. Just display the table properties by clicking on Table Properties in the Table menu or in the context menu of Tabular Model Explorer after right-clicking the table. In the CTP 1.1 preview, the Edit Table Properties dialog box immediately shows you the advanced view of the M query, but you can click on the Design button to launch the Query Editor window and apply changes more conveniently (see Figure 5). Just be cautious not to rename or remove any columns in the M source query at this stage. In the CTP 1.1 preview, SSDT doesn’t yet handle the remapping of source columns to table columns gracefully in tabular models. If you need to change the names, order, or number of columns, delete the table and recreate it from scratch or edit the TMSL code in the Model.bim file directly.


Figure 5   Editing an existing table in a Tabular 1400 model via Table Properties

One very useful scenario for editing an M source query without changing column mappings revolves around the definition of multiple partitions for a table. For example, by using the Table.Range M function, you can define a subset of rows for any given partition. Table 1 and Figure 6 show a partitioning scheme for the FactInternetSales table that relies on this function. You could also define entirely different M queries. As long as a partition’s M query adheres to the column mappings of the table, you are free to perform any transformations and pull in data from any data source defined in the model. Partitioning is an exclusive Analysis Services feature. It is not available in Excel or Power BI Desktop.

Table 1   A simple partitioning scheme for the AdventureWorks FactInternetSales table based on the Table.Range function

Partition FactInternetSalesP1 – M query expression:

let
    Source = AS_AdventureWorksDW,
    dbo_FactInternetSales = Source{[Schema="dbo",Item="FactInternetSales"]}[Data],
    #"Kept Range of Rows" = Table.Range(dbo_FactInternetSales,0,20000)
in
    #"Kept Range of Rows"

Partition FactInternetSalesP2 – M query expression:

let
    Source = AS_AdventureWorksDW,
    dbo_FactInternetSales = Source{[Schema="dbo",Item="FactInternetSales"]}[Data],
    #"Kept Range of Rows" = Table.Range(dbo_FactInternetSales,20000,20000)
in
    #"Kept Range of Rows"

Partition FactInternetSalesP3 – M query expression:

let
    Source = AS_AdventureWorksDW,
    dbo_FactInternetSales = Source{[Schema="dbo",Item="FactInternetSales"]}[Data],
    #"Kept Range of Rows" = Table.Range(dbo_FactInternetSales,40000,20000)
in
    #"Kept Range of Rows"

Partition FactInternetSalesP4 – M query expression:

let
    Source = AS_AdventureWorksDW,
    dbo_FactInternetSales = Source{[Schema="dbo",Item="FactInternetSales"]}[Data],
    #"Kept Range of Rows" = Table.Range(dbo_FactInternetSales,60000,20000)
in
    #"Kept Range of Rows"


Figure 6   A simple partitioning scheme based on the Table.Range function

Upgrading a Tabular Model to the 1400 Compatibility Level

The modern Get Data experience is one of the key features of the 1400 compatibility level. Others are support for ragged hierarchies and detail rows. As long as your workspace server is at the CTP 1.1 level, you can upgrade Tabular 1200 models to 1400 in SSDT by changing the Compatibility Level in the Properties window, as illustrated in Figure 7. Just remember to take a backup of your Tabular project prior to the upgrade because the compatibility level cannot be downgraded afterwards.


Figure 7  Upgrading a Tabular 1200 model to the 1400 compatibility level. Downgrade is not supported.

If you are planning to upgrade a Tabular 1103 (or earlier) model to 1400, make sure you upgrade first to the 1200 compatibility level. In the CTP 1.1 preview, SSDT is not yet able to upgrade these older models to 1400 directly. Like all other known issues, we plan to address this in one of the next preview releases. Also, be sure to see the Known Issues in CTP 1.1 section later in this article.

Working with Legacy and Modern Data Sources

By default, SSDT creates modern data source definitions in Tabular 1400 models. On the other hand, if you upgrade a 1200 model, the existing data source definitions remain unchanged. For these existing data source definitions, known as provider data sources, SSDT currently continues to show the legacy user interface. However, the plan is to replace the legacy interface with the modern Get Data experience. Furthermore, importing new tables from an existing provider data source brings up the legacy user interface. Importing from a modern data source brings up the modern Get Data experience.

In the CTP 1.1 preview specifically, you can configure SSDT to enable the legacy user interface even for creating new data sources by setting a DWORD registry parameter called Enable Legacy Import to a value of 1, as in the following Registration Entries (.reg) file. This might be useful if you only want to try out certain Tabular 1400-specific features, such as detail rows, without switching to modern data source definitions. After setting the Enable Legacy Import parameter to 1, you can find additional commands in the data source context menu in Tabular Model Explorer. You can use these commands to create and manage provider data sources (see Figure 8). Setting this parameter to any value other than 1, or removing it altogether, disables these additional commands again.

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Microsoft SQL Server\14.0\Microsoft Analysis Services\Settings]

"Enable Legacy Import"=dword:00000001


Figure 8   Enabling legacy data import commands in the CTP 1.1 preview release of SSDT Tabular

Regardless of the user interface, Table 2 lists the various data connectivity related objects that can coexist in a Tabular 1400 model. Ideally, you can mix and match any data source type with any partition source type, but there are limitations in the CTP 1.1 preview. For example, it should be possible to create a partition with an M expression over an existing provider data source. This does not work yet. Equally, it should be possible to have a partition with a native query over a modern data source. This can be accomplished programmatically or in TMSL, but processing such a query partition fails in SSDT with an error stating the data source is of an unsupported type for determining the connection string. This is an issue in the December 2016 release of SSDT Tabular, but processing succeeds in SSMS (see the Working with a Tabular 1400 Model in SSMS section later in this article). For the CTP 1.1 preview, we recommend you use query partitions over legacy (provider) data sources and M partitions over modern (structured) data sources. In a later preview release, you will be able to mix and match these source and partition types more freely so you don’t have to create redundant data source definitions for models that contain both query and M partitions.

Table 2   Data Source and corresponding partition types supported in Tabular 1400 models

Level | Data Source Type | Partition Type | Source Query Type
1200 and 1400 | Provider Data Source | Query Partition | Native Query, such as T-SQL
1400 only | Structured Data Source | M Partition | M Expression

Working with a Tabular 1400 Model in SSMS

SQL Server Management Studio (SSMS) does not yet provide a user interface for the modern Get Data experience, but don’t let that stop you from managing your Tabular 1400 models. Although you cannot yet change the settings of a modern data source in the Connection Properties dialog box or conveniently manage partitions for a table, you can script out the desired objects and apply your changes in the TMSL code (Be sure to also read the Working with TOM and TMSL section later in this article). Just right-click the desired object, such as a modern data source, click on Script Connection as, and then choose any applicable option, such as Create or Replace To a New Query Editor Window, as shown in Figure 9.


Figure 9   Scripting out a modern data source

You can also script out tables and roles, process the database or individual tables, and perform any other management actions as you would for Tabular 1200 models.
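For example, a minimal TMSL sketch for fully processing a deployed Tabular 1400 database from an XMLA query window in SSMS might look like the following; the database name is a placeholder.

{
  "refresh": {
    "type": "full",
    "objects": [
      { "database": "AdventureWorksDW1400" }
    ]
  }
}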

Working with TOM and TMSL

In addition to the metadata objects you already know from Tabular 1200 models, 1400 models introduce three important new object types: StructuredDataSource, MPartitionSource, and NamedExpression. The StructuredDataSource type defines the properties that describe a modern data source. MPartitionSource takes an M expression as the source query and can be assigned to the Source property of a partition. And, NamedExpression is a class to define shared queries. SSDT does not yet support shared queries, but the AS engine and TOM already do. Creating and using shared queries programmatically is going to be the subject of a separate article.

Editing the Model.bim file

Whenever you cannot perform a desired action in the user interface of SSDT, consider switching to Code View and performing the action at the TMSL level. For example, SSDT does not yet support renaming of modern data sources. If you don’t find the default name assigned to a data source intuitive, such as Query1, switch to Code View, and then perform a Find and Replace operation. Keep in mind that expressions in M partition sources refer to modern data sources by name, so don’t forget to update these expressions together with the data source name. Figure 10 shows an example. Also, as always, make sure you first backup the Model.bim file before editing it manually.


Figure 10   Updating the data source reference in an M expression
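As a trimmed, hypothetical Model.bim fragment (only the M partition source is shown; names follow the examples in this article), the Source step inside each M expression must match the name of the data source object defined in the dataSources section:

"source": {
  "type": "m",
  "expression": [
    "let",
    "    Source = AS_AdventureWorksDW,",
    "    dbo_FactInternetSales = Source{[Schema=\"dbo\",Item=\"FactInternetSales\"]}[Data]",
    "in",
    "    dbo_FactInternetSales"
  ]
}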

After changing data source properties and affected M expressions, switch back to Designer View and process the affected tables to ensure the model is still in a consistent state. If you receive an error stating “The given credential is missing a required property. Data source kind: SQL. Authentication kind: UsernamePassword. Property name: Password. The exception was raised by the IDbConnection interface.”, you can switch back to Code View and provide the missing password, although it is usually easier to use the user interface via the Edit Permissions command on the data source object in Tabular Model Explorer. If you prefer the Code View, use the following TMSL code as a reference to provide the missing password for a modern (structured) data source.

{
  "type": "structured",
  "name": "AdventureWorks2014DWSDS",
  "connectionDetails": {
    "protocol": "tds",
    "address": {
      "server": "<Server Name>",
      "database": "AdventureWorksDW2014"
    },
    "authentication": null,
    "query": null
  },
  "credential": {
    "AuthenticationKind": "UsernamePassword",
    "kind": "SQL",
    "path": "<Server Name>",
    "Username": "<User>",
    "Password": "<Password>",
    "EncryptConnection": true
  }
}

Note: For security reasons, Analysis Services does not return sensitive information such as passwords when scripting out a Tabular model or tracing commands and responses in SQL Profiler. Even though you don’t see the password, the server may have it and can perform processing successfully. You only need to provide the password if an error message informs you that it is missing.

Working with Tabular 1400 models programmatically

If you want to work with modern data sources and M partitions programmatically, you need to use the CTP 1.1 version of Analysis Management Objects (AMO). The AMO libraries are part of the SQL Server Feature Pack, yet a Feature Pack for CTP 1.1 is not available. As a workaround for CTP 1.1, you can use the server version of the AMO libraries, Microsoft.AnalysisServices.Server.Core.dll, Microsoft.AnalysisServices.Server.Tabular.dll, and Microsoft.AnalysisServices.Server.Tabular.Json.dll. These libraries are included with SSDT. By default, these libraries are located in the C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\PrivateAssemblies\Business Intelligence Semantic Model\LocalServer folder. However, you cannot redistribute these libraries with your client application. For CTP 1.1, this means that your code can only run on a machine with SSDT installed, which should suffice for a first evaluation of the TOM objects for the modern Get Data experience.

Figure 11 shows a sample application that creates a Tabular 1400 model on a server running SQL Server vNext CTP 1.1 Analysis Services. It uses StructuredDataSource and MPartitionSource objects to add a modern data source and an M partition to the model. See the attachment to this article for the full sample code. The ConnectionDetails and Credential properties that you must set for the StructuredDataSource object are not yet documented, but you can glean examples for these strings from a Model.bim file that contains a modern data source. The MPartitionSource object on the other hand takes an M query in its Expression property. As explained earlier in this article, make sure the M query refers to a data source defined in the model by name.


Figure 11   Creating a Tabular 1400 model with a modern data source and a table based on an M partition source programmatically.
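The following C# fragment is a rough, hypothetical sketch (not the attached sample; server, database, and object names are placeholders) showing how the MPartitionSource type can be used to add another M partition to an existing Tabular 1400 model that already defines the AS_AdventureWorksDW data source:

using Microsoft.AnalysisServices.Tabular;   // server-side TOM libraries shipped with SSDT (see above)

// Connect to a CTP 1.1 instance and locate the deployed model (names are placeholders).
var server = new Server();
server.Connect(@"Data Source=localhost\TABULAR");
Database db = server.Databases.GetByName("AdventureWorksDW1400");
Table factInternetSales = db.Model.Tables["FactInternetSales"];

// Add an M partition whose expression refers to the modern data source by name.
factInternetSales.Partitions.Add(new Partition
{
    Name = "FactInternetSalesP5",
    Source = new MPartitionSource
    {
        Expression =
            "let\n" +
            "    Source = AS_AdventureWorksDW,\n" +
            "    dbo_FactInternetSales = Source{[Schema=\"dbo\",Item=\"FactInternetSales\"]}[Data],\n" +
            "    #\"Kept Range of Rows\" = Table.Range(dbo_FactInternetSales,80000,20000)\n" +
            "in\n" +
            "    #\"Kept Range of Rows\""
    }
});

db.Model.SaveChanges();   // push the metadata change to the server
server.Disconnect();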

Known Issues in CTP 1.1

SQL Server vNext CTP 1.1 provides an early preview of the modern Get Data experience. It is not fully tested and not supported in production environments. The following are known issues in CTP 1.1 Analysis Services and the corresponding SSDT release:

  • The SSDT Tabular release for CTP 1.1 is an early preview for evaluating the vNext capabilities of Analysis Services. It is not supported in production environments and must be installed without the Reporting Services and Integration Services components. Upgrades from previous SSDT versions are not supported. Either install on a newly installed computer or VM or uninstall any previous versions first. Also, only work with Tabular 1400 models. For Multidimensional as well as Tabular 1100, 1103, and 1200 models, use SSDT version 16.5.
  • SSDT does not yet support all required operations on modern data sources and M partitions through the user interface. For example, renaming data source objects or changing the column mappings for a table. It’s also not yet possible to define shared mashups through the user interface. You must edit the Model.bim file manually.
  • SSMS can script out Tabular 1400 models and individual objects, but the user interface is not yet 1400 aware. For example, you cannot manage partitions if the model contains a structured data source, and you cannot change the settings of a modern data source through the Connection Properties dialog box. You must script out these objects and apply the changes at the TMSL level.
  • Creating a new tabular project in SSDT by using the option to Import from Server (Tabular) does not work. You get an error message stating the model is not recognized as compatible with SQL Server 2012 or higher. You can script out the database in SSMS and copy the TMSL code into the Model.bim file of an empty Tabular project created from scratch.
  • Erroneous M queries and changes to M queries that affect the column mapping of an existing table after the initial import may cause SSDT Tabular to become unresponsive. If you must change the column mapping, delete and recreate the table.
  • Tables with M partition sources don’t work over legacy (provider) data sources. You must use modern data sources for these tables.
  • Tables with query partition sources don’t fully work over modern data sources. SSDT cannot process these tables. You must process these tables in SSMS or programmatically.
  • Processing individual partitions does not succeed. Process the full model or the table.
  • Direct upgrades of Tabular 1103 or earlier models to the 1400 compatibility level do not finish successfully. You must first upgrade these models to the 1200 compatibility level and then perform the upgrade to 1400.
  • DirectQuery mode is not yet supported at the 1400 compatibility level. To preview the modern Get Data experience, you must import the data into the Tabular model.
  • Out-Of-Line Bindings are not yet supported. It’s not possible to override a structured data source or M partition source on a request basis in a Tabular 1400 model yet.
  • All modern data sources are considered private data sources to avoid disclosing sensitive or confidential information. A private data source is completely isolated from other data sources. The privacy settings for data sources cannot be changed in CTP 1.1.
  • Impersonation options such as ImpersonateWindowsUserAccount are not yet supported for modern data sources. You must specify credentials explicitly when defining the data source.
  • Localization is not supported. CTP 1.1 is available in English (US) only.

Give us Feedback

Your feedback is critical for delivering a high-quality product! Deploy SQL Server vNext CTP 1.1 and the December 2016 release of SSDT Tabular in a lab environment or on a virtual machine in Azure and let us know what you think. Report issues and send us your suggestions to SSASPrev here at Microsoft.com. Or use any other available communication channels such as UserVoice or MSDN forums. You can influence the evolution of the Analysis Services connectivity stack to the benefit of all our customers.

 

What’s new for SQL Server vNext on Windows CTP 1.1 for Analysis Services


The public CTP 1.1 of SQL Server vNext on Windows is available here! This public preview includes the following enhancements for Analysis Services tabular.

  • New infrastructure for data connectivity and ingestion into tabular models with support for TOM APIs and TMSL scripting. This enables:
    • Support for additional data sources, such as MySQL. Additional data sources are planned in upcoming CTPs.
    • Data transformation and data mashup capabilities.
  • Support for BI tools such as Microsoft Excel to enable drill-down to detailed data from an aggregated report. For example, when end users view total sales for a region and month, they can view the associated order details.
  • Enhanced support for ragged hierarchies such as organizational charts and charts of accounts.
  • Enhanced security for tabular models, including the ability to set permissions to help secure individual tables.
  • DAX enhancements to make DAX more accessible and powerful. These include the IN operator and table/row constructors.

New 1400 Compatibility Level

SQL Server vNext CTP 1.1 for Analysis Services introduces the 1400 compatibility level for tabular models. To benefit from the new features for models at the 1400 compatibility level, you’ll need to download and install a forthcoming version of SQL Server Data Tools (SSDT) from here.

In this forthcoming version of SSDT, you can select the new 1400 compatibility level when creating new tabular model projects. Models at the 1400 compatibility level cannot be deployed to SQL Server 2016 or earlier, or downgraded to lower compatibility levels.

1400-new-model

New Infrastructure for Data Connectivity

The CTP1.1 release introduces a new infrastructure for data connectivity and ingestion into tabular models, with support for TOM APIs and TMSL scripting. This is based on similar functionality in Power BI Desktop and Microsoft Excel 2016. There is a lot of information on this topic, so we have created a separate blog post here.

Detail Rows

A much-requested feature for tabular models is the ability to define a custom row set contributing to a measure value. Multidimensional models already achieve this by using the default drillthrough action. This allows end-users to view information in more detail than the aggregated level.

For example, the following PivotTable shows Internet Total Sales by year from the Adventure Works sample tabular model. Users can right-click the cell for 2010 and then select the Show Details menu option to view the detail rows.

show-details

By default, the associated data in the Internet Sales table is displayed. This behavior is often not meaningful to users because the table may not have the necessary columns to show useful information such as customer name and order information.

Detail Rows Expression Property for Measures

CTP1.1 introduces the Detail Rows Expression property for measures. It allows the modeler to customize the columns and rows returned to the end user.

detail-rows-expression

It is anticipated the SELECTCOLUMNS DAX function will be commonly used for the Detail Rows Expression. The following example defines the columns to be returned for rows in the Internet Sales table.

SELECTCOLUMNS(
    'Internet Sales',
    "Customer First Name", RELATED(Customer[First Name]),
    "Customer Last Name", RELATED(Customer[Last Name]),
    "Order Date", 'Internet Sales'[Order Date],
    "Internet Total Sales", [Internet Total Sales]
)

With the property defined and the model deployed, the custom row set is returned when the user selects Show Details. It automatically honors the filter context of the cell that was selected. In this example, only the rows for 2010 value are displayed.

detail-rows-returned

Default Detail Rows Expression Property for Tables

In addition to measures, tables also have a property to define a detail rows expression. The Default Detail Rows Expression property acts as the default for all measures within the table. Measures that do not have their own expression defined will inherit the expression from the table and show the row set defined for the table. This allows reuse of expressions, and new measures added to the table later will automatically inherit the expression.

default-detail-rows-expression

DETAILROWS DAX Function

The DETAILROWS DAX function has been added in CTP1.1. The following DAX query returns the row set defined by the detail rows expression for the measure or its table. If no expression is defined, the data for the Internet Sales table is returned as it is the table containing the measure.

EVALUATE DETAILROWS([Internet Total Sales])

MDX DRILLTHROUGH statements – without a RETURN clause – are also compatible with detail rows expressions defined in tabular models.

Ragged Hierarchies

As described in this article, Analysis Services tabular models can be used to model parent-child hierarchies. Hierarchies with a differing number of levels are referred to as ragged hierarchies. An example of a ragged hierarchy is an organizational chart. By default, ragged hierarchies are displayed with blanks for levels below the lowest child. This can look untidy to users, as shown by this organizational chart in Adventure Works:

ragged-hierarchies-with-blanks

CTP1.1 introduces the Hide Members property to correct this. Simply set the Hide Members property of the hierarchy to Hide blank members.

hide-members-property

Note: It is necessary that the blank members in the model are represented by a DAX blank value, not an empty string.

With the property set and the model deployed, the more presentable version of the hierarchy is displayed.

ragged-hierarchies-clean

Table-Level Security

Roles in tabular models already support a granular list of permissions, and row-level filters to help protect sensitive data. Further information is available here.

CTP1.1 builds on this by introducing table-level security. In addition to restricting access to the data itself, sensitive table names can be protected. This helps prevent a malicious user from discovering that such a table exists.

The current version requires that a whole table’s metadata, and therefore all its columns, is set to be protected. Additionally, table-level security must be set using the JSON-based metadata, Tabular Model Scripting Language (TMSL), or Tabular Object Model (TOM).

The following snippet of JSON-based metadata from the Model.bim file helps secure the Product table in the Adventure Works sample tabular model by setting the MetadataPermission property of the TablePermission class to None.

"roles": [
  {"name": "Users","description": "All allowed users to query the model","modelPermission": "read","tablePermissions": [
      {"name": "Product","metadataPermission": "none"
      }
    ]
  }

DAX Enhancements

CTP1.1 introduces the IN operator for DAX expressions. The T-SQL IN operator is commonly used to specify multiple values in a WHERE clause, so it feels natural to SQL Server database developers.

Prior to CTP1.1, it was common to specify multi-value filters using the logical OR operator or function. Consider the following measure definition.

Filtered Sales:=CALCULATE(
    [Internet Total Sales],
    'Product'[Color] = "Red"
 || 'Product'[Color] = "Blue"
 || 'Product'[Color] = "Black"
)

This is simplified using the IN operator.

Filtered Sales:=CALCULATE(
    [Internet Total Sales], 'Product'[Color] IN { "Red", "Blue", "Black" }
)

In this case, the IN operator refers to a single-column table with 3 rows; one for each of the specified colors. Note the table constructor syntax using curly braces.

The IN operator is functionally equivalent to the CONTAINSROW function.

Filtered Sales:=CALCULATE(
    [Internet Total Sales], CONTAINSROW({ "Red", "Blue", "Black" }, 'Product'[Color])
)

We hope you will agree the IN operator used with table constructors is a great enhancement to the DAX language. MDX veterans should be jumping out of their seats with excitement at this point. The curly braces syntax should also feel natural to programmers of C based languages like C#, and Excel practitioners who use arrays. But wait, there’s more …

Consider the following measure to filter by combinations of product color and category.

Filtered Sales:=CALCULATE(
    [Internet Total Sales],
    FILTER(
        SUMMARIZE( ALL(Product), Product[Color], Product[Product Category Name] ),
           ( 'Product'[Color] = "Red"   && Product[Product Category Name] = "Accessories" )
        || ( 'Product'[Color] = "Blue"  && Product[Product Category Name] = "Bikes" )
        || ( 'Product'[Color] = "Black" && Product[Product Category Name] = "Clothing" )
    )
)

Wouldn’t it be great if we could use table constructors, coupled with row constructors, to simplify this? In CTP1.1, we can! The above measure is equivalent to the one below.

Filtered Sales:=CALCULATE(
    [Internet Total Sales],
    FILTER(
        SUMMARIZE( ALL(Product), Product[Color], Product[Product Category Name] ),
        ( 'Product'[Color], Product[Product Category Name] ) IN
        { ( "Red", "Accessories" ), ( "Blue", "Bikes" ), ( "Black", "Clothing" ) }
    )
)

Lastly, it is worth pointing out that table and row constructors are independent of the IN operator. They are simply DAX table expressions. Consider the following DAX query.

EVALUATE
UNION(
    ROW( "Value1", "Red Product Sales",  "Value2", CALCULATE([Internet Total Sales], 'Product'[Color] = "Red") ),
    ROW( "Value1", "Blue Product Sales", "Value2", CALCULATE([Internet Total Sales], 'Product'[Color] = "Blue") ),
    ROW( "Value1", "Total",              "Value2", CALCULATE([Internet Total Sales], 'Product'[Color] IN { "Red", "Blue" }) )
)

In CTP1.1, it can be more simply expressed like this:

EVALUATE
{
    ( "Red Product Sales",  CALCULATE([Internet Total Sales], 'Product'[Color] = "Red") ),
    ( "Blue Product Sales", CALCULATE([Internet Total Sales], 'Product'[Color] = "Blue") ),
    ( "Total",              CALCULATE([Internet Total Sales], 'Product'[Color] IN { "Red", "Blue" }) )
}

Download Now!

To get started, download SQL Server vNext on Windows CTP1.1 from here. The forthcoming version of SQL Server Data Tools will be available here. Be sure to keep an eye on this blog to stay up to date on Analysis Services.


UWP Experiences – App Samples


The UWP App Experiences are beautiful, cross-device, feature-rich and functional app samples built to demonstrate realistic app scenarios on the UWP platform across PC, Tablet, Xbox and more. Besides being open source on GitHub, each sample is accompanied by at least one blog post and a short overview video, and will be published to the Windows Store in the upcoming month to provide easier access for developers.

The News Experience

( source | blog post | video )

Fourth Coffee is a news app that works across desktop, phone, and Xbox One, and offers a premium experience that takes advantage of each device’s strengths, including a tailored UI for each input modality, such as controller on Xbox, touch on tablet, and mouse on desktop.

The Weather Experience

( source | blog post | video )

Atmosphere is a weather app that showcases the use of the popular Unity engine to build beautiful UWP apps. In addition, the app implements UWP app extensions so that other developers can extend certain areas of the app, and exposes an app service that enables other apps to consume its weather information, as illustrated by Fourth Coffee.

The Music Experience

( source | blog post | video )

Backdrop is a cross-platform music app sharing code between UWP and other platforms using Xamarin. It supports background audio on UWP devices and cross-platform device collaboration using SignalR.

The Video Experience

( source | blog post | video )

South Ridge Video is a hosted web application built with React.js and hosted on a web server. The app can easily be converted to a UWP application that takes advantage of native platform capabilities, and can be distributed through the Windows Store as with any other UWP app.

The IoT Experience

( source | blog post | video )

Best For You is a fitness UWP app focused on collecting data from an IoT device using Windows IoT Core, Azure IoT Hub, Azure Event Hub and Azure Stream Analytics for processing.

The Social Experience

( source )

Adventure Works is a cross-device UWP application for sharing adventures and experiences with fictional friends. It is separated into three parts:

About the samples

These samples have been designed and built with multiple UWP devices and scenarios in mind from the start, and are meant to showcase end-to-end solutions. Any developer can take advantage of these samples regardless of the device type or features they are targeting, and we are looking forward to hearing about your experience on the official GitHub repository.

Happy coding!

The post UWP Experiences – App Samples appeared first on Building Apps for Windows.

Bing 2016 End Of Year





Bing users made billions of searches in 2016. Those searches often reveal the year’s most powerful moments, telling stories of heartbreak, LOLs, and of trends that seized the world’s imagination.
 
The data clearly shows us one thing: people spent much of the year searching topics they love. And that’s what the Bing team wants to recognize as 2016 comes to an end – the bright snapshots in time, such as the year’s top searched video games or the top searched celebrities, while also pointing toward an exciting tomorrow, including top anticipated movies and top searched cars of 2017.
 
No matter what the new year brings, we look forward to helping you on your quest for knowledge, news, and information. Discover more here.
 
Happy New Year from your friends at Bing.

-The Bing Team
 

Singapore Machine Learning & Data Science Summit – Recap


This post is authored by Tamarai G V, Senior Product Marketing Manager at Microsoft.

Singapore has started to embrace the many benefits of digital transformation, and data plays a central role in this process. From using non-traditional indicators such as electricity consumption and public transportation to monitor the economy to helping the government improve the lives of ordinary citizens, machine learning and data science are being put to use to solve real world problems.

As part of Singapore’s digital efforts, and to nurture a vibrant ML and data science community, the inaugural Machine Learning and Data Science Summit in Asia was held in Singapore on Dec 9th, 2016, at the beautiful University Town in the National University of Singapore (NUS).

The event was jointly organized by the Government Technology Agency of Singapore (GovTech), NUS and Microsoft, and attended by hundreds of data scientists, developers, students and faculty. The full-day summit had an exciting agenda, with keynotes, breakout sessions and hands-on labs helping attendees learn how to tap the power of Cortana Intelligence, Microsoft R Server and SQL Server R Services to build intelligent applications. There were sessions on building intelligent bots, demystifying deep learning, understanding how the Team Data Science Process can help jump-start successful data science teams, and many more.


The summit kicked off with keynote sessions by Jessica Tan, Managing Director for Microsoft Singapore, Chan Cheow Hoe, CIO at GovTech, and Professor Lakshminarayanan of NUS. Jessica highlighted the new possibilities of digital transformation and the need to approach data science as a team sport.


Chan Cheow Hoe spoke about the use of Data Science in the Public Sector, and how a data-driven approach has solved real world problems in Singapore, such as the recent Circle Line incidents. He also shared how data science can be harnessed to derive deep insights from data to inform policy changes and reviews, and to improve operations and service delivery through applications and data visualisation.


Finally, Professor Lakshminarayanan welcomed the attendees to University Town, and shared the work that the college is doing on systems thinking and design, and how it is relevant to the data science community.


Hongyi Li, Product & Engineering lead at GovTech, presented on how his organisation is working on using data for the public good and how open data can help citizens understand and use data. He talked about how data.gov.sg wants to help agencies establish common data sharing infrastructure and make it accessible to use for decision making.


Other sessions that followed included topics on adopting systems thinking towards data science by Wee Hyong Tok and Jenson Goh (from NUS); Matt Winkler and Jennifer Marsman, who shared how one can bring intelligence into applications using Cognitive Services and the Cortana Intelligence Suite; and Anusua Trivedi, who demystified deep learning and shared the exciting applications that can be built in this area.

A key highlight of the event was the Hackathon, led by Hang Zhang, where participants from across academia and industry pitted their skills against the best in ML and data science. Hackathon participants tackled the problem of predicting the number of fatalities in traffic accidents in which drunk drivers had an impact on the outcome. Using the Cortana Intelligence Competition Platform, participants came up with many creative ways of building and improving on their solutions, and worked away to get on the leaderboard.


The Summit concluded with a closing address by Vijay Narayanan, Director of Data Sciences at Microsoft, and an awards ceremony recognizing the hackathon winners.


It was great to see the winners being offered internship opportunities by different organizations at the conclusion of the event!

We look forward to the next Data Science Summit in 2017.

Tamarai

ICYMI – Cortana Skills Kit, Adobe XD, Surface Dial and the GameAnalytics SDK


No time for an intro, we want to jump right into this.

The New Cortana Skills Kit

Excited doesn’t even begin to describe the glass case of emotion containing our feelings toward the new Cortana Skills Kit announced this week. In addition to preparing developers to reach millions of new users, the Cortana Skills Kit will also help developers leverage bots and personalize apps to specific users.

Adobe XD

Many creatives use tools like Photoshop, Premiere and the entire Adobe Creative Cloud on a daily basis to mock up and prototype a wide range of assets. Now, UWP developers can tap into the prototyping power of the Adobe Creative Cloud with Adobe Experience Design (Adobe XD).

Surface Dial for Devs

The Surface Dial is an amazing new input method, and in our latest Surface Dial blog post, we show how developers can either add it to their toolkit or develop apps with the Surface Dial in mind.

GameAnalytics SDK for UWP

Get all of the information you need and see how your game is performing in one easy-to-digest dashboard. Even better, GameAnalytics is free. Click below to read more and get started.

TL;DR: Go check out the Cortana Skills Kit, dial in the dev side of the Surface Dial, prototype your UWP app idea with Adobe XD, and check your game’s performance with the GameAnalytics SDK.

Download Visual Studio to get started.

The Windows team would love to hear your feedback.  Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.

The post ICYMI – Cortana Skills Kit, Adobe XD, Surface Dial and the GameAnalytics SDK appeared first on Building Apps for Windows.

Update 1612 for Configuration Manager Technical Preview Branch – Available Now!


Hello everyone! We are happy to let you know that update 1612 for the Technical Preview Branch of System Center Configuration Manager has been released. Technical Preview Branch releases give you an opportunity to try out new Configuration Manager features in a test environment before they are made generally available. This month’s new preview features include:

  • Azure Active Directory onboarding – Creates a connection between Configuration Manager and Azure AD to be used by other cloud services, such as the Cloud Management Gateway.
  • Windows Hello for Business toast notification – A new Windows 10 toast notification added to let end users know that they need to take additional actions to complete Windows Hello for Business PIN setup.
  • Enhancement for online-licensed apps from the Windows Store for Business – You can now deploy online-licensed apps with a deployment purpose of Available to Windows 10 PCs managed with the Configuration Manager client.
  • Express files support for Windows 10 Cumulative Update – Configuration Manager can now support Windows 10 Cumulative Update using Express files.
  • Ability to block installation of an application if specified executables are running – You can now configure a list of executable files (with the extension .exe) in Deployment Type Properties which, if running, will block installation of an application. After installation is attempted, a user will see a dialog box asking them to close the processes that are blocking installation, and then try again.
  • Ability to retry task sequence – If a step doesn’t work properly in the task sequence wizard, you can now click “Previous” to retry the process.
  • OData endpoint data access – Configuration Manager now provides a RESTful OData endpoint for accessing Configuration Manager data. The endpoint is compatible with OData version 4, which enables tools such as Excel and Power BI to easily access Configuration Manager data through a single endpoint. Update 1612 for the Technical Preview Branch supports read-only access to objects in Configuration Manager.
  • Data Warehouse for historical reporting – The Data Warehouse enhances reporting for Configuration Manager by storing long-term data for historic reporting. This enables you to look at compliance, application deployment, and more, with reports that show trends over a period of time.

This release also includes the following changes for customers using System Center Configuration Manager connected with Microsoft Intune to manage mobile devices:

Update 1612 for Technical Preview Branch is available in the Configuration Manager console. For new installations please use the 1610 baseline version of Configuration Manager Technical Preview Branch available on TechNet Evaluation Center.

We would love to hear your thoughts about the latest Technical Preview! To provide feedback or report any issues with the functionality included in this Technical Preview, please use Connect. If there’s a new feature or enhancement you want us to consider for future updates, please use the Configuration Manager UserVoice site.

Thanks,

The System Center Configuration Manager team

Configuration Manager Resources:

Documentation for System Center Configuration Manager Technical Previews

Documentation for System Center Configuration Manager

System Center Configuration Manager Forums

System Center Configuration Manager Support

Download the Configuration Manager Support Center

System Center Configuration Manager and Endpoint Protection (technical preview branch version 1610)
