
New Get Data Capabilities in the GA Release of SSDT Tabular 17.0 (April 2017)


With the General Availability (GA) release of SSDT 17.0, the modern Get Data experience in Analysis Services Tabular projects comes with several exciting improvements, including DirectQuery support (see the blog article “Introducing DirectQuery Support for Tabular 1400”), additional data sources (particularly file-based), and support for data access options that control how the mashup engine handles privacy levels, redirects, and null values. Moreover, the GA release coincides with the CTP 2.0 release of SQL Server 2017, so the modern Get Data experience benefits from significant performance improvements when importing data. Thanks to the tireless effort of the Mashup engine team, data import performance over structured data sources is now on par with legacy provider data sources. Internal testing shows that importing data from a SQL Server database through the Mashup engine is in fact faster than importing the same data by using SQL Server Native Client directly!

Last month, the blog article “What makes a Data Source a Data Source?” previewed context expressions for structured data sources. The file-based data sources that SSDT Tabular 17.0 GA adds to the portfolio of available data sources make use of context expressions to define a generic file-based source as an Access database, an Excel workbook, or a CSV, XML, or JSON file. The following screenshot shows a structured data source with a context expression that SSDT Tabular created for importing an XML file.

Context Expression for an XML file data source.

Note that file-based data sources are still a work in progress. Specifically, the Navigator window that Power BI Desktop shows for importing multiple tables from a source is not yet enabled, so you land directly in the Query Editor in SSDT, which makes it hard to import multiple tables. A forthcoming SSDT release is going to address this issue. Also, when trying to import from an Access database, note that SSDT Tabular in Integrated Workspace mode would require both the 32-bit and the 64-bit ACE provider, but both cannot be installed on the same computer. This issue requires you to use a remote workspace server running SQL Server 2017 CTP 2.0, so that you can install the 32-bit driver on the SSDT workstation and the 64-bit driver on the server running Analysis Services CTP 2.0.


Keep in mind that SSDT Tabular 17.0 GA uses the Analysis Services CTP 2.0 database schema for Tabular 1400 models. This schema is incompatible with previous CTPs of SQL Server vNext Analysis Services. You cannot open Tabular 1400 models with previous schemas, and you cannot deploy Tabular 1400 models with a CTP 2.0 database schema to a server running a previous CTP version.


Another great data source that you can find for the first time in SSDT Tabular is Azure Blob Storage, which will be particularly interesting when Azure Analysis Services provides support for the 1400 compatibility level. When connecting to Azure Blob Storage, make sure you provide the account name or URL without any containers in the data source definition, such as https://myblobdata.blob.core.windows.net. If you appended a container name to the URL, SSDT Tabular would fail to generate the full set of data source settings. Instead, select the desired container in the Navigator window, as illustrated in the following screenshot.

Importing from Azure Blob Storage

As mentioned above, SSDT Tabular 17.0 GA uses the Analysis Services CTP 2.0 database schema for Tabular 1400 models. This database schema is more complete than any previous schema version. Specifically, you can find additional Data Access Options in the Properties window when selecting the Model.bim file in Solution Explorer (see the following screenshot). These data access options correspond to those options in Power BI Desktop that are applicable to Tabular 1400 models hosted on an Analysis Services server, including:

  • Enable Fast Combine (default is false): When set to true, the mashup engine will ignore data source privacy levels when combining data.
  • Enable Legacy Redirects (default is false): When set to true, the mashup engine will follow HTTP redirects that are potentially insecure (for example, a redirect from an HTTPS to an HTTP URI).
  • Return Error Values as Null (default is false): When set to true, cell-level errors will be returned as null. When false, an exception will be raised if a cell contains an error.

Data Access Options in SSDT Tabular

Especially with the Enable Fast Combine setting, you can now begin to refer to multiple data sources in a single source query.

Yet another great feature that is now available to you in SSDT Tabular is the Add Column from Example capability introduced with the April 2017 Update of Power BI Desktop. For details, refer to the article “Add a column from an example in Power BI Desktop.” The steps are practically identical. Add Column from Example is a great illustration of how the close collaboration and teamwork between the AS engine, Mashup engine, Power BI Desktop, and SSDT Tabular teams is compounding the value delivered to our customers.

Looking ahead, apart from tying up loose ends, such as the Navigator dialog for file-based sources, there is still a sizeable list of data sources we are going to add in further SSDT releases. Named expressions, discussed in an earlier blog article, also still need to find their way into SSDT Tabular, as does support for the full set of impersonation options that Analysis Services provides for data sources that can use Windows authentication. Currently, only the service account and explicit Windows credentials can be used. Forthcoming impersonation options include current user and unattended accounts.

In short, the work to enable the modern Get Data experience in SSDT Tabular is not yet finished. Even though SSDT Tabular 17.0 GA is fully supported in production environments, Tabular 1400 is still evolving. The database schema is considered complete with CTP 2.0, but minor changes might still be coming. So please deploy SSDT Tabular 17.0 GA, use it to work with your Tabular 1200 models, and take Tabular 1400 for a thorough test drive. And as always, please send us your feedback and suggestions by using ProBIToolsFeedback or SSASPrev at Microsoft.com. Or use any other available communication channels such as UserVoice or MSDN forums. Influence the evolution of the Analysis Services connectivity stack to the benefit of all our customers!


Introducing a DAX Editor Tool Window for SSDT Tabular


The April 2017 release of SSDT Tabular for Visual Studio 2015 and 2017 comes with a DAX editor tool window that can be considered a complement to or replacement for the formula bar. You can find it on the View menu under Other Windows, and then select DAX Editor, as the following screenshot illustrates. You can dock this tool window anywhere in Visual Studio. If you select a measure in the Measure Grid, DAX Editor lets you edit the formula conveniently. You can also right-click on a measure in Tabular Model Explorer and select Edit Formula. Authoring new measures is as easy as typing a new formula in DAX Editor and clicking Apply. Of course, DAX Editor also lets you edit the expressions for calculated columns.

DAX Editor Tool Window for SSDT Tabular

SSDT Tabular also displays the DAX Editor when defining Detail Rows expressions, which is an improvement over previous releases of SSDT Tabular that merely let you paste an expression into the corresponding textbox in the Properties window, as the following screenshot illustrates. When working with measures, calculated columns, and the detail rows expression properties, note that there is only one DAX Editor tool window instance, so the DAX Editor switches to the expression you currently want to edit.

Detail Rows Expression in DAX Editor

The DAX Editor tool window is a continuous improvement project. We have plans to include features such as code formatting and additional IntelliSense capabilities. Of course, we are also looking forward to hearing from you. So please send us your feedback and suggestions via ProBIToolsFeedback or SSASPrev at Microsoft.com, and report any issues you encounter. Or use any other available communication channels such as UserVoice or MSDN forums. You can influence the evolution of SSDT Tabular to the benefit of all our customers.

Integrated management and security across your hybrid cloud


Do you have a truly end-to-end view of your hybrid cloud environment? Most environments today are complex, with multi-tier applications that may span multiple datacenters and cloud hosting environments. In fact, the reality is that for most companies, complexity is the number one challenge in a hybrid cloud environment, according to the 2017 State of Hybrid Cloud research study. Not coincidentally, respondents identified unified management across multiple operating systems and public clouds as a top priority.

To make sure your critical applications and systems perform at peak efficiency, you need a big picture view that spans the different application components and infrastructure services, and includes the ability to act on insights and resolve issues quickly. The advantage of doing this deep level of analytics in the cloud is that you can have unlimited scale and flexibility with your log data, without having to put heavyweight infrastructure in place. With management-as-a-service in Azure, you let us do the hard part of correlating, analyzing, and crowd-sourcing information. You can then use the insights you gain to start anticipating and resolving issues before problems result in business impact.

At Microsoft, our core management approach is to bring data from your applications, workloads, and infrastructure together in one place, then provide you the ability to drill down deep and do rich analytics. With Azure management and security services, you can pull data from multiple sources to find out if there is an infrastructure issue, if the network is slow, or if the latest deployment of your application is causing problems. Since we include built-in collection of log and performance data from servers all the way to application code, we can help you bring IT and Developers together to troubleshoot issues quickly.

One of the key technologies that can help you turn data into actionable insights about your hybrid environment is Service Map, part of Azure Insight & Analytics. Today we announced the general availability of Service Map, a tool that automatically discovers and builds a map of server and process dependencies for you. It pulls in data from other solutions in the service, such as Log Analytics, Change Tracking, Update Management, and Security, all in context. Rather than looking at individual types of data, you can now see all data related to the systems you care about most, as well as graphically visualize their dependencies.

Learn more about how you can use integrated management and security services to reduce complexity in your hybrid cloud environment.

Try today with a free Operations Management Suite account.

Announcing Windows 10 Insider Preview Build 16179 for PC + Build 15205 for Mobile


Hello Windows Insiders!

Today we are excited to be releasing Windows 10 Insider Preview Build 16179 for PC to Windows Insiders in the Fast ring. We are also releasing Windows 10 Mobile Insider Preview Build 15205 to Insiders in the Fast ring.

What’s New in Build 16179 For PC

Revert VM: Continuing our theme of simplifying Hyper-V for developers on Windows 10 (see What’s New), we’re introducing automatic checkpoints so that you’ll always be able to undo a mistake in your virtual machine – you can now always revert to the last time you started a virtual machine.

Introducing Power Throttling*: You may remember some of the power experiments we did back in January with Build 15002. Power Throttling was one of those experiments, and showed up to 11% savings in CPU power consumption for some of the most strenuous use cases. So, we’re turning this on for everyone starting with last week’s build. Check out our complete blog post on this for more details!

*Power throttling is a temporary working name for this feature and may change before the release of the next Windows 10 update.

Changes, improvements, and fixes for PC

  • We fixed the issue where apps that use the Desktop Bridge (“Centennial”) from the Store, such as Slack and Evernote, would cause your PC to bugcheck (GSOD) with a “kmode exception not handled” error in ntfs.sys when launched.
  • We fixed an issue where adding Hindi to your language list and downloading the on-demand language resources would result in Microsoft Edge crashing on launch and file search returning no results via Cortana or Windows Explorer.
  • We fixed an issue where desktop icons would sometimes move around unexpectedly when “Auto arrange icons” was set to On and “Align icons to grid” was set to Off.
  • The existing Group Policy to disable the lock screen is now available for those on the Pro edition of Windows 10. Appreciate all who shared feedback on the subject. Note, the Group Policy text has not yet been updated to incorporate this change, that will happen with a later flight.
  • We fixed a rendering issue from previous flights where specific multi-monitor and projection configurations could fail depending on the hardware used. This could have impacted all Surface (Surface Book, Surface Pro, etc.) devices as well as other devices using similar chipsets. Another symptom may have been to see screen flickering and potentially being logged out when any screen mode change occurred.
  • We fixed an issue resulting in the location icon being continually on in the taskbar after the first time the Action Center was opened if the night light quick action was visible.

Known issues for PC

  • Some Insiders have reported seeing this error “Some updates were cancelled. We’ll keep trying in case new updates become available” in Windows Update. See this forum post for more details.
  • Double-clicking on the Windows Defender icon in the notification area does not open Windows Defender. Right-clicking on the icon and choosing open will open Windows Defender.
  • Surface 3 devices fail to update to new builds if an SD memory card is inserted. The updated drivers for the Surface 3 that fix this issue have not yet been published to Windows Update.
  • Pressing F12 to open the Developer Tools in Microsoft Edge while F12 is open and focused may not return focus to the tab F12 is opened against, and vice-versa.
  • Explorer.exe will crash and restart if you tap any of the apps listed in the Windows Ink Workspace’s Recent Apps section.
  • Insiders who use Simplified Chinese IMEs or the Traditional Chinese Changjie or Quick IME to input text will find that the candidate window doesn’t appear when typing into certain apps. If you press space, the first candidate will be finalized. Using the number keys will not finalize any other candidate. If the candidate you need is not the first one, for now you will have to enter your text into an app where the candidate window appears, such as Notepad, and copy it into the desired text field.
  • Navigating to Settings > Update & security > Windows Update may crash the Settings app. Simply re-open the Settings app and it should work again.
  • The “Save” dialog appears to be broken in several desktop (Win32) apps. The team is investigating.

Changes, improvements, and fixes for Mobile

  • We fixed the targeting issue that caused some variants of the Alcatel IDOL 4S to not receive Build 15204 last week. All variants of the Alcatel IDOL 4S should receive Build 15205.
  • FYI: We fixed the issue where supported Windows 10 Mobile devices were showing the update to the Windows 10 Anniversary Update as “not yet available” in the Upgrade Advisor app.
  • We fixed an issue where Continuum would stop working when the HP Elite x3 case is closed.
  • We fixed an issue where Continuum would hang or render incorrectly after disconnecting on devices like the Lumia 950.
  • We fixed an issue where Microsoft Edge might get into a bad state after opening a new Microsoft Edge window and turning the screen off while the JIT process is suspended.
  • We fixed an issue where the device screen might stay black when disconnecting from a Continuum dock after the screen has timed out normally.
  • We fixed an issue with backup and restore that impacts users with slower network connections.
  • We fixed an issue around Microsoft Edge reliability.

Known issues for Mobile

  • A small percentage of devices may experience text message backup loss related to backup and recovery of the messaging database.
  • For Insiders who have upgraded from a prior 150xx build to this build, the “Add Bluetooth or other devices” Settings page and the Connect UX page may fail to open.
  • The copyright date is incorrect under Settings > System > About. It shows as 2016 when it should be 2017. Thanks to the Windows Insiders that reported this!
  • Insiders may experience random shutdowns on some devices.

Community Updates

Many of you have been asking for our team to share more about our future plans for the overall Windows Insider community. We want to be inclusive of all consumption styles, so we thought we would experiment with an audio podcast. Check out episode 1!

Keep hustling team,
Dona <3

The post Announcing Windows 10 Insider Preview Build 16179 for PC + Build 15205 for Mobile appeared first on Windows Experience Blog.

What’s new in SQL Server 2017 CTP 2.0 for Analysis Services


The public CTP 2.0 of SQL Server 2017 on Windows is available here! This public preview includes the following enhancements for Analysis Services tabular.

  • Object-level security to secure model metadata in addition to data.
  • Transaction-performance improvements for a more responsive developer experience.
  • Dynamic Management View improvements for 1200 and 1400 models enabling dependency analysis and reporting.
  • Improvements to the authoring experience of detail rows expressions.
  • Hierarchy and column reuse to be surfaced in more helpful locations in the Power BI field list.
  • Date relationships to easily create relationships to date dimensions based on date columns.
  • Default installation option for Analysis Services is tabular, not multidimensional.

Other enhancements not covered by this post include the following.

  • New Power Query data sources. See this post for more info.
  • DAX Editor for SSDT. See this post for more info.
  • M expression support for existing DirectQuery data sources. See this post for more info.
  • SSMS improvements, such as viewing, editing, and scripting support for structured data sources.

Incompatibility with previous CTP versions

Tabular models with 1400 compatibility level that were created with previous versions are incompatible with CTP 2.0. They do not work correctly with the latest tools. Please download and install the April 2017 (17.0 GA) release of SSDT and SSMS.

Object-level security

Roles in tabular models already support a granular list of permissions, and row-level filters to help protect sensitive data. Further information is available here. CTP 1.1 introduced table-level security.

CTP 2.0 builds on this by introducing column-level security, which allows sensitive columns to be protected. This helps prevent a malicious user from discovering that such a column exists.

Column-level and table-level security are collectively referred to as object-level security (OLS).

The current version requires that column-level security is set using the JSON-based metadata, Tabular Model Scripting Language (TMSL), or Tabular Object Model (TOM). We plan to deliver SSDT support soon. The following snippet of JSON-based metadata from the Model.bim file secures the Base Rate column in the Employee table of the Adventure Works sample tabular model by setting the MetadataPermission property of the ColumnPermission class to None.

"roles": [
  {"name": "Users","description": "All allowed users to query the model","modelPermission": "read","tablePermissions": [
      {"name": "Employee","columnPermissions": [
          {"name": "Base Rate","metadataPermission": "none"
          }
        ]
      }
    ]
  }

DAX query references to secured objects

If the current user is a member only of the Users role, the following query that explicitly refers to the [Base Rate] column fails with an error message saying the column cannot be found or may not be used.

EVALUATE
SELECTCOLUMNS(
    Employee,
    "Id", Employee[Employee Id],
    "Name", Employee[Full Name],
    "Base Rate", Employee[Base Rate] --Secured column
)

The following query refers to a measure that is defined in the model. The measure formula refers to the Base Rate column. It also fails with an equivalent error message. Model measures that refer to secured tables or columns are indirectly secured from queries.

EVALUATE
SELECTCOLUMNS(
    { [Average of Base Rate] } --Indirectly secured measure
)

As you would expect, IntelliSense for DAX queries in SSMS also honors column-level security and does not disclose secured column names to unauthorized users.

Detail-rows expression references to secured objects

It is anticipated that the SELECTCOLUMNS() function will be commonly used for detail-rows expressions. Due to this, SELECTCOLUMNS() is subject to special behavior when used by DAX expressions in the model. The following detail-rows expression defined on the [Reseller Total Sales] measure does not return an error when invoked by a user without access to the [Base Rate] column. Instead it returns a table with the [Base Rate] column excluded.

--Detail rows expression for [Reseller Total Sales] measure
SELECTCOLUMNS(
    Employee,
    "Id", Employee[Employee Id],
    "Name", Employee[Full Name],
    "Base Rate", Employee[Base Rate] --Secured column
)

The following query returns the output shown below – with the [Base Rate] column excluded from the output – instead of returning an error.

EVALUATE
DETAILROWS([Reseller Total Sales])

detailrows secured output

However, derivation of a scalar value using a secured column fails on invocation of the detail-rows expression.

--Detail rows expression for [Reseller Total Sales] measure
SELECTCOLUMNS(
    Employee,
    "Id", Employee[Employee Id],
    "Name", Employee[Full Name],
    "Base Rate", Employee[Base Rate] * 1.1 --Secured column
)

Limitations of RLS and OLS combined from different roles

OLS and RLS are additive; conceptually they grant access rather than deny access. This means that combined membership from different roles that specify RLS and OLS could inadvertently cause security leaks. Hence combined RLS and OLS from different roles is not permitted.

RLS additive membership

Consider the following roles and row filters.

  • RoleA (model permission: Read): RLS filter on the Geography table: Geography[Country Region Name] = “United Kingdom”
  • RoleB (model permission: Read): RLS filter on the Geography table: Geography[Country Region Name] = “United States”

Users who are members of both RoleA and RoleB can see data for the UK and the US.

OLS additive membership

A similar concept applies to OLS. Consider the following roles.

  • RoleA (model permission: Read): OLS column permission on the Employee table: [Base Rate], MetadataPermission=None
  • RoleB (model permission: Read): no table or column restrictions

RoleB allows access to all tables and columns in the model. Therefore, users who are members of both RoleA and RoleB can query the [Base Rate] column.

RLS and OLS combined from different roles

Consider the following roles that combine RLS and OLS.

  • RoleA (model permission: Read): Provide access to sales in the UK by customer (not product)
    • RLS filter on the Geography table: Geography[Country Region Name] = “United Kingdom”
    • OLS table permission on the Product table: MetadataPermission=None
  • RoleB (model permission: Read): Provide access to sales in the US by product (not customer)
    • RLS filter on the Geography table: Geography[Country Region Name] = “United States”
    • OLS table permission on the Customer table: MetadataPermission=None

The following diagram shows the intersection of the tables and rows relevant to this discussion.

rls-ols-quadrant-copy

RoleA is intended to expose data only for the top right quadrant.

RoleB is intended to expose data only for the bottom left quadrant.

Given the additive nature of OLS and RLS, Analysis Services would allow access to all four quadrants by combining these permissions for users who are members of both roles. Data would be exposed that neither role intended to expose. For this reason, queries for users who are granted RLS and OLS permissions combined from different roles fail with an error message stating that the combination of active roles results in a dynamic security configuration that is not supported.

Transaction-performance improvements

SSDT updates the workspace database during the development process. Optimized transaction management in CTP 2.0 is expected to result in a more responsive developer experience due to faster metadata updates to the workspace database.

DMV improvements

DISCOVER_CALC_DEPENDENCY is back! This Dynamic Management View (DMV) is useful for tracking and documenting dependencies between calculations and other objects in a tabular model. In previous versions, it worked for tabular models with compatibility level of 1100 and 1103, but it did not work for 1200 models. In CTP 2.0, it works for all tabular compatibility levels including 1200 and 1400.

The following query shows how to use the DISCOVER_CALC_DEPENDENCY DMV.

SELECT * FROM $System.DISCOVER_CALC_DEPENDENCY;

There are differences in the output for 1200 and 1400 models. The easiest way to understand them is to compare the output for models with different compatibility levels. Notable differences are listed here for reference, and a sketch of a filtered query follows the list.

  • Relationships in 1200 and higher are identified by name (normally a GUID) in the OBJECT column. Active relationships have OBJECT_TYPE of “ACTIVE_RELATIONSHIP”; inactive relationships have OBJECT_TYPE of “RELATIONSHIP”. 1103 and lower models differ because they include all relationships with OBJECT_TYPE of “RELATIONSHIP” and an additional “ACTIVE_RELATIONSHIP” row to flag each active relationship.
  • 1103 and lower models include a row with OBJECT_TYPE “HIERARCHY” for each attribute hierarchy dependency on its column. 1200 and higher do not.
  • 1200 and higher models include rows for calculated tables with OBJECT_TYPE “CALC_TABLE”. Calculated tables are not supported in 1103 or lower models.
  • 1200 and higher models currently do not include rows for measure data dependencies on tables and columns. Data dependencies between DAX measures are included.
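
As a sketch (assuming the restricted SQL syntax that Analysis Services DMVs accept, which allows simple equality filters in the WHERE clause), the following query narrows the output to the active relationship rows described in the first bullet above.

SELECT * FROM $System.DISCOVER_CALC_DEPENDENCY WHERE OBJECT_TYPE = 'ACTIVE_RELATIONSHIP';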

We may make further improvements to DISCOVER_CALC_DEPENDENCY in forthcoming CTPs, so stay tuned.

Improved authoring experience for Detail Rows

The April 2017 release (17.0 GA) of SSDT provides an improved authoring experience with IntelliSense and syntax highlighting for detail rows expressions using the new DAX Editor for SSDT. Click on the ellipsis in the Detail Rows Expression property to activate the DAX editor.

detailrows daxeditor

Hierarchy & column reuse

Hierarchy reuse is a Power BI feature, although it is surfaced differently in Analysis Services. Power BI uses it to provide easy access to implicit date hierarchies for date fields. Introducing such features for Analysis Services furthers the strategic objective of enabling a consistent modeling experience with Power BI.

power-bi-variations

Tabular models created with CTP 2.0 can leverage hierarchy reuse to surface user hierarchies and columns – not limited to those from a date dimension table – in more helpful locations in the Power BI field list. This can provide a more guided analytics experience for business users.

For example, the Calendar hierarchy from the Date table can be surfaced as a field in Internet Sales, and the Fiscal hierarchy as a field in the Sales Quota table. This assumes that, for some business reason, sales quotas are frequently reported by fiscal date.

The current version requires that hierarchy and column reuse is set using the JSON-based metadata, Tabular Model Scripting Language (TMSL), or Tabular Object Model (TOM). The following snippet of JSON-based metadata from the Model.bim file associates the Calendar hierarchy from the Date table with the Order Date column from the Internet Sales table. As shown by the type name, the feature is also known as variations.

{"name": "Order Date","dataType": "dateTime","sourceColumn": "OrderDate","variations": [
    {"name": "Calendar Reuse","description": "Show Calendar hierarchy as field in Internet Sales","relationship": "3db0e485-88a9-44d9-9a12-657c8ef0f881","defaultHierarchy": {"table": "Date","hierarchy": "Calendar"
      },"isDefault": true
    }
  ]
}

The current version also requires the ShowAsVariationsOnly property on the dimension table to be set to true, which hides the dimension table. We intend to remove this restriction in a forthcoming CTP.

{"name": "DimDate","showAsVariationsOnly": true

The Order Date field in Internet Sales now defaults to the Calendar hierarchy, and allows access to the other columns and hierarchies in the Date table.

as-variations

Date relationships

Continuing the theme of bringing Power BI features to Analysis Services, CTP 2.0 allows the creation of date relationships using only the date part of a DateTime value. Power BI uses this internally for relationships to hidden date tables.

Date relationships that ignore the time component currently only work for imported models, not Direct Query.

The current version requires that date relationship behavior is set using the JSON-based metadata, Tabular Model Scripting Language (TMSL), or Tabular Object Model (TOM). The following snippet of JSON-based metadata from the Model.bim file defines a relationship from Reseller Sales to Order based on the date part only of the Order Date column. Valid values for JoinOnDateBehavior are DateAndTime and DatePartOnly.

{"name": "100ca454-655f-4e46-a040-cfa2ca981f88","fromTable": "Reseller Sales","fromColumn": "Order Date","toTable": "Date","toColumn": "Date","joinOnDateBehavior": "datePartOnly"
}

Default installation option is tabular

Tabular mode is now the default installation option for SQL Server Analysis Services in CTP 2.0.

default-tabular-install

Note: this also applies to installations from the command line. Please see this document for further information on how to set up automated installations of Analysis Services from the command line. In CTP 2.0, if the ASSERVERMODE parameter is not provided, the installation will be in tabular mode. Previously it was multidimensional.

Extended events

Extended events were not working in CTP 1.3. They do work again in CTP 2.0 (actually since CTP 1.4).

Download now!

To get started, download SQL Server 2017 on Windows CTP 2.0 from here. Be sure to keep an eye on this blog to stay up to date on Analysis Services.

New Offline Books for Visual Studio 2017 Available for Download


Today we are happy to announce that new offline books for Visual Studio 2017 are now available for download. Now you can easily download content published on MSDN and Docs for consumption on-the-go, without needing an active internet connection. We are also hosting the book generation and fetching services entirely on Microsoft Azure, which makes them more performant and reliable – we will be continuously updating the content, so you will no longer be stuck with outdated books or have to wait 6 months for the next release. The process to create and update an offline book now takes hours instead of months!

The new offline books continue to integrate directly with Visual Studio, allowing you to rely on the familiar in-context help (F1) and many features of the Help Viewer, such as indexed search, favorites and tables of contents that mirror those of online pages.

Adding Help Viewer to your Visual Studio installation

Starting with Visual Studio 2017, Help Viewer is now an optional component that you have to manually select during installation. With the new Visual Studio installer, this is a two-click process: simply select Individual Components, and click on Help Viewer under Code tools.

Visual Studio 2017 Offline Books Help Viewer Install

Available Books

In addition to your usual developer content, such as books covering Visual C#, Visual F# and others, we have added brand new content to the list, including:

  • ASP.NET Core
  • ASP.NET API Reference
  • NuGet
  • Scripting Language Reference

All these books are available in the Manage Content section of Help Viewer – click on Add next to the books that you are interested in and select Update at the bottom of the screen.

Visual Studio 2017 Offline Books Help Viewer

Feedback

We are constantly looking to improve our offline content story. If you encounter any issues with the Help Viewer app, let us know via the Report a Problem option in the installer or in Visual Studio itself. If you have any suggestions, bug reports or ideas related to the content in offline books, please submit them on our UserVoice site – we will address them as soon as possible!

Den Delimarsky, Program Manager, docs.microsoft.com
@DennisCode

Den drives the .NET, UWP and sample code experiences on docs.microsoft.com. He can be found occasionally writing about security and bots on his blog.

Python in SQL Server 2017: enhanced in-database machine learning


We are excited to share the preview release of in-database analytics and machine learning with Python in SQL Server. Python is one of the most popular languages for data science and has a rich ecosystem of powerful libraries.

Starting with the CTP 2.0 release of SQL Server 2017, you can now bring Python-based intelligence to your data in SQL Server.

The addition of Python builds on the foundation laid for R Services in SQL Server 2016 and extends that mechanism to include Python support for in-database analytics and machine learning. We are renaming R Services to Machine Learning Services, and R and Python are two options under this feature.

The Python integration in SQL Server provides several advantages:

  • Elimination of data movement: You no longer need to move data from the database to your Python application or model. Instead, you can build Python applications in the database. This eliminates barriers of security, compliance, governance, integrity, and a host of similar issues related to moving vast amounts of data around. This new capability brings Python to the data and runs code inside secure SQL Server using the proven extensibility mechanism built in SQL Server 2016.
  • Easy deployment: Once you have the Python model ready, deploying it in production is now as easy as embedding it in a T-SQL script, and then any SQL client application can take advantage of Python-based models and intelligence by a simple stored procedure call (see the sketch after this list).
  • Enterprise-grade performance and scale: You can use SQL Server’s advanced capabilities like in-memory tables and columnstore indexes with the high-performance, scalable APIs in the RevoScalePy package. RevoScalePy is modeled after the RevoScaleR package in SQL Server R Services. Using these with the latest innovations in the open source Python world allows you to bring unparalleled selection, performance, and scale to your SQL Python applications.
  • Rich extensibility: You can install and run any of the latest open source Python packages in SQL Server to build deep learning and AI applications on huge amounts of data in SQL Server. Installing a Python package in SQL Server is as simple as installing a Python package on your local machine.
  • Wide availability at no additional costs: Python integration is available in all editions of SQL Server 2017, including the Express edition.
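
To make this concrete, here is a minimal sketch of running Python in-database with sp_execute_external_script, the T-SQL entry point for external scripts. The table name dbo.FactInternetSales is just a placeholder for any table in your database; InputDataSet and OutputDataSet are the default names of the input and output pandas data frames.

EXEC sp_execute_external_script
    @language = N'Python',
    @script = N'
# InputDataSet arrives as a pandas data frame; assigning a data frame to OutputDataSet returns rows to SQL Server.
OutputDataSet = InputDataSet.describe().reset_index()
',
    @input_data_1 = N'SELECT SalesAmount FROM dbo.FactInternetSales'
WITH RESULT SETS UNDEFINED;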

Data scientists, application developers, and database administrators can all benefit from this new capability.

  • Data scientists can build models using the full datasets on the SQL Server instead of moving data to your IDE or being forced to work with samples of data. Working from your Python IDE, you can execute Python code that runs in SQL Server on the data in SQL Server and get the results in your IDE. You are no longer dependent on application developers to deploy your models for production use, which often involves translating models and scripts to a different application language. These models can be deployed to production easily by embedding them in T-SQL stored procedures. You can use any open source Python package for machine learning in SQL Server. The usage pattern is identical to the now popular SQL Server R Services.
  • Application developers can take advantage of Python-based models by simply making a stored procedure call that has Python script embedded in it. You don’t need a deep understanding of the inner workings of the Python models, or to translate them into a line-of-business language in close coordination with data scientists, in order to consume them. You can even leverage both R and Python models in the same application; they are both stored procedure calls (a sketch of such a wrapper procedure follows this list).
  • Database administrators can enable Python-based applications and set up policies to govern how Python runtime behaves on SQL Server. You can manage, secure, and govern the Python runtime to control how the critical system resources on the database machine are used. Security is ensured by mechanisms like process isolation, limited system privileges for Python jobs, and firewall rules for network access.
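
As a sketch of that stored-procedure pattern (the procedure name, parameter, and scoring formula below are hypothetical placeholders; a real procedure would load and score a trained model), a Python script can be wrapped so that any SQL client consumes it with a plain EXEC call:

CREATE PROCEDURE dbo.PredictTipAmount
    @trip_distance FLOAT
AS
BEGIN
    -- The T-SQL parameter declared in @params is surfaced inside the Python script
    -- as a variable of the same name without the @ sign.
    EXEC sp_execute_external_script
        @language = N'Python',
        @script = N'
import pandas as pd
# Placeholder scoring logic; a real model would be deserialized and used here.
OutputDataSet = pd.DataFrame({"predicted_tip": [0.15 * trip_distance]})
',
        @params = N'@trip_distance FLOAT',
        @trip_distance = @trip_distance
        WITH RESULT SETS ((predicted_tip FLOAT));
END;
GO

-- Any SQL client application can then call the Python-based logic like any other stored procedure:
EXEC dbo.PredictTipAmount @trip_distance = 3.5;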

The standard open source CPython interpreter (version 3.5) and some Python packages commonly used for data science are downloaded and installed during SQL Server setup if you choose the Python option in the feature tree.

Currently, a subset of packages from the popular Anaconda distribution is included along with Microsoft’s RevoScalePy package. The set of packages available for download will evolve as we move toward general availability of this feature. Users can easily install any additional open source Python package, including the modern deep learning packages like Cognitive Toolkit and TensorFlow to run in SQL Server. Taking advantage of these packages, you can build and deploy GPU-powered deep learning database applications.

Currently, Python support is in “preview” state for SQL Server 2017 on Windows only.

We are very excited about the possibilities this integration opens up for building intelligent database applications. Please watch the Python based machine learning in SQL Server presentation and Joseph Sirosh Keynote at Microsoft Data Amp 2017 event for demos and additional information. We encourage you to install SQL Server 2017. Please share your feedback with us as we work toward general availability of this technology.

Thank you!

Sumit Kumar, Senior Program Manager, SQL Server Machine Learning Services

Nagesh Pabbisetty, Director of Program Management, Microsoft R Server and Machine Learning

Windows Developers at Microsoft Build 2017


Microsoft Build 2017 kicks off on May 10 in Seattle, with an expected capacity crowd of over 5,000 developers—plus countless more online. Join us for the live-streamed keynotes, announcements, technical sessions and more. You’ll be among the first to hear about new developments that will help you engage your users, keep their information safe and reach them in more places. Big things have been unveiled and promoted at Microsoft Build over the years and this year’s conference won’t be any different!

There will be quite a bit of content specifically relevant to Windows developers:

  • Improvements that help you immediately engage your users with beautiful UI and natural inputs
  • Team collaboration and connectedness to streamline and improve your development experience
  • Services that make it easier to reach customers and learn what they want from your software
  • Connected screens and experiences that make your end-to-end experience stickier and more engaging
  • Mixed reality and creating deeply immersive experiences

Sign up for a Save-the-Date reminder on our Build site for Windows Developers and we’ll keep you in the loop as new details and information come in. When you sign up, you’ll also gain the ability to:

  • Save sessions for later viewing
  • Create and share content collections
  • Discuss what you’ve seen and heard with other developers
  • Upvote content you like and track trending sessions

You’ll find sign-up, sessions and content at https://developer.microsoft.com/windows/projects/events/build/2017.

The post Windows Developers at Microsoft Build 2017 appeared first on Building Apps for Windows.


Empowering Every Organization on the Planet with Artificial Intelligence


Re-posted from the Microsoft SQL Server blog.

Extracting intelligence from ever-expanding amounts of data is now the difference between being the next market disruptor versus being relegated to the history books. Microsoft’s comprehensive data platform and tools let developers and businesses create the next generation of intelligent applications, drive new efficiencies, create better products and improve their customer experiences.

At the Microsoft Data Amp event earlier today, Joseph Sirosh, Corporate Vice President for the Data Group, made several announcements around how Microsoft is helping every organization on the planet with data-driven intelligence. There were three main themes to his announcements:

  • The close integration of AI functions into databases, data lakes, and the cloud, to simplify the deployment of intelligent applications.
  • The use of AI within Microsoft services, to enhance performance and data security.
  • The flexibility Microsoft offers developers – to compose multiple cloud services into various design patterns for AI, to use Windows, Linux, Python, R, Spark, Hadoop, and other open source tools in building such systems.

A visual summary of the key announcements is captured in the graphic below. You can learn more at the original post here, and see how you can integrate big data and artificial intelligence (AI) to transform your applications and your business.


CIML Blog Team

Announcing the General Availability (GA) Release of SSDT 17.0 (April 2017)


We are pleased to announce that SQL Server Data Tools 17.0 is officially released and supported for production use. This GA release includes support for SQL Server 2017 and SQL Server on Linux, with new features such as Graph DB. It adds several features that we have consistently received requests for via MSDN forums and Connect, and contains numerous fixes and improvements over the 16.x version of the tools. You no longer need to maintain 16.x and 17.0 side-by-side to build SQL Server relational databases, Azure SQL databases, Integration Services packages, Analysis Services data models, Azure Analysis Services data models, and Reporting Services reports. From all the SSDT teams, thank you for your valuable feedback and suggestions!

Additionally, for relational and Azure SQL databases SSDT 17.0 GA includes a highly requested improvement to ignore column order in upgrade plans as well as numerous other bug fixes. 

In the Business Intelligence area, SSDT 17.0 GA supports Azure Analysis Services in addition to SQL Server Analysis Services. It features a modern Get Data experience in Tabular 1400 models, including DirectQuery support (see the blog article “Introducing DirectQuery Support for Tabular 1400”) and an increasing portfolio of data sources. Other noteworthy features include object-level security to secure model metadata in addition to data, transaction-performance improvements for a more responsive developer experience, improvements to the authoring experience of detail rows expressions, and a DAX Editor to create measures and other DAX expressions more conveniently.

For Integration Services, SSDT 17.0 GA supports authoring packages with the OData Source and OData Connection Manager connecting to the OData feeds of Microsoft Dynamics AX Online and Microsoft Dynamics CRM Online. Moreover, the project target server version supports SQL Server 2017, so you can conveniently deploy your packages on the latest version of SQL Server.

So please download SSDT 17.0 GA from https://docs.microsoft.com/en-us/sql/ssdt/download-sql-server-data-tools-ssdt and update your existing installations today! Send us your feedback and ask us questions on our forum or via Microsoft Connect.  We look forward to hearing from you.

This release also includes an update to the DacFx Nuget packages published on https://nuget.org.

Delivery timeline markers, git graph, and build and release improvements – Apr 19


Note: The features discussed in this post will be rolling out over the next three weeks.

In this deployment, we introduce the git graph. We’ve also updated many build and release tasks and extensions, as well as made improvements to the Marketplace.

Delivery timeline markers

Have you been looking for a way to highlight key dates on your Delivery Plan? Now you can with plan markers. Plan markers let you visualize key dates for teams directly on your delivery plan. Markers have an associated color and label. The label shows up when you click the marker dot.

delivery timeline

Visualize your git repository

Team Services now supports showing a graph alongside the commit history for repositories or files. Now you can easily create a mental model of all your branches and commits for your git repositories using the git graph. The graph shows all your commits in topological order.

git graph

The key elements of the git graph include:

  1. The git graph is right-aligned, so commits associated with the default branch or the selected branch appear on the right while the rest of the graph grows on the left.
  2. Merge commits are represented by grey dots connected to their first parent and second parent.
  3. Normal commits are represented by blue dots.
  4. If the parent commit of a commit is not visible in the view port on the next 50 commits, then we excise the commit connection. Once you click the arrow, the commit is connected to its parent commit.

git graph elements

Git commit comments use the new discussion control

Like we added for TFVC last sprint, lightweight comments left on git commits have been updated to use the new discussion control. This brings support for Markdown in those comments, and rounds out all of the code-commenting features in the web for both git and TFVC to use the latest experience.

Improved package list

As part of moving the updated Package Management web experience to an on-by-default preview, we made a few final tweaks. Most visibly, we’ve added more metadata (source and release views) to the package list and also made the columns resizable.

package

SSH deployment improvements

The Copy Files Over SSH build/release task now supports tildes (~) in the destination path to simplify copying files to a remote user’s home directory. Also, a new option lets you fail the build/release when no files are found to copy.

The SSH build/release task now supports running scripts with Windows line endings on remote Linux or macOS machines.

Deploy to Azure Government Cloud

Customers with Azure subscriptions in Government clouds can now configure an Azure Resource Manager service endpoint to target national clouds.

With this, you can now use Release Management to deploy any application to Azure resources hosted in government clouds, using the same deployment tasks.

government cloud

Timeout enhancements for the Manual Intervention task

The Manual Intervention task can now be automatically rejected or resumed after it is pending for the specified timeout or 60 days, whichever is earlier. The timeout value can be specified in the control options section of the task.

Release Logs Page Improvements

In this deployment, we have improved the release logs page experience:

  • You can directly click the URLs in the log viewer.
  • You can search for keywords in the logs.
  • You can view large log files without downloading them.

Azure App Service task enhancements and templates for Python and PHP applications

New Azure App Service release definition templates have been added for deploying Python (Bottle, Django, Flask) and PHP applications. The new template contains the updated App Service Deploy task.

When the Python release definition template is used, the App Service Deploy task gets prepopulated with an inline deployment script that makes pip (the dependency manager) install the application’s dependencies. The correct web.config gets generated based on the Python framework used.

When the PHP release definition template is used, the App Service Deploy task gets pre-populated with a deployment script which makes Composer (dependency manager) install the application’s dependencies.

Deploy Java to Azure Web Apps

The Azure App Service Deploy build/release task now supports deployment of Java WAR files to an Azure Web App. When creating a new build definition, you can choose to begin with a new build template: Java Deploy to Azure Web App. This simplifies getting started by building your project with Ant, Gradle, or Maven, and deploying the results to Azure. An Azure Web App slot can be specified to allow uploading to a staging slot before swapping that deployment with a production slot.

Java code coverage enhancements

The Publish Code Coverage Results build task reports Cobertura or JaCoCo code coverage as part of a build. It now supports specifying wildcards and minimatch patterns in Summary File and Report Directory fields, allowing the files and directories to be resolved on a per-build basis for paths that change between builds.

Maven and SonarQube improvements

The Maven build task now allows specifying a SonarQube project for analysis results in cases where it differs from what is specified in the Maven pom.xml file.

Improved Jenkins integration

The Jenkins Queue Job build/release task now supports running Jenkins multibranch pipeline jobs while displaying the Jenkins console output in Team Services. Pipeline results are published to the Team Services build summary.

jenkins

Google Play extension enhancements

The Google Play extension now supports simultaneously releasing and replacing multiple APK version codes at a time, replacing registered screenshots with a new set to avoid accumulation, and explicitly specifying the locale of a change log.

iOS DevOps enhancements

The Apple App Store extension now supports two-step verification (two-factor authentication) and releasing builds to external testers.

app store

Install Apple Certificate (Preview) is a new build task that installs a P12 signing certificate on the agent for use by a subsequent Xcode or Xamarin.iOS build.

Install Apple Profile (Preview) is a new build task for installing provisioning profiles on the agent for use by a subsequent Xcode or Xamarin.iOS build.

MSBuild, Xamarin.Android, and Xamarin.iOS build tasks now support building with the Visual Studio for Mac tool set.

Contact extension customers

Publishers of paid extensions can now reach out to their customers for transactional communication. This can be done via the contact action in the new Publisher stats reports page.

extension customers

Marketplace feedback excluded from ratings

You can now appeal to void a rating from the publishers hub if the issue reported is due to the Marketplace or the underlying platform. Visit the extension report Rating and Review tab and select the Appeal action, and then write to the Marketplace admin team for review. If the issue is valid, we will void the rating.

marketplace ratings

Reports for Marketplace Publishers

We are launching a new feature for our publishers in the Marketplace to help track and analyze how your extension is performing and take required actions from the new publisher stats hub. To view the extension’s report, visit the manage page and select the extension name.

Uninstall

Now you will have access to how many users have uninstalled your extension, what they are sharing as feedback, the top reasons for uninstalling, and the daily uninstall trend, so you can take the required actions. You can search by text and date to analyze and draw more insights from the detailed feedback. If your extension is paid, you can also reach out to your customers for transactional communication.

uninstall stats

Ratings and Review

This tab will show you the average rating for the selected time period, the average rating by number of reviewers, and the daily trend of average rating. The details section provides all the reviews and your responses in a transactional view. You can reply or edit a previous response and better manage engagement with your extension users.

ratings report

Export to Excel

Uninstall, Rating and Review, and Acquisition data are also available for download in XLS format to help you create your own custom reports.

We think these features will help improve your workflows while addressing feedback, but we would love to hear what you think. Please don’t hesitate to send a smile or frown through the web portal, or send other comments through the Team Services Developer Community. As always, if you have ideas on things you’d like to see us prioritize, head over to UserVoice to add your idea or vote for an existing one.

Thanks,

Jamie Cool

DSC Resource Kit Release April 2017


We just released the DSC Resource Kit!

This release includes updates to 5 DSC resource modules, including 3 new DSC resources. In these past 6 weeks, 57 pull requests have been merged and 46 issues have been closed, all thanks to our amazing community!

The modules updated in this release are:

  • PSDscResources
  • xCertificate
  • xDatabase
  • xPSDesiredStateConfiguration
  • xSQLServer

For a detailed list of the resource modules and fixes in this release, see the Included in this Release section below.

Our last community call for the DSC Resource Kit was last week on April 12. A recording of our updates as well as summarizing notes are available. Join us next time on May 24 to ask questions and give feedback about your experience with the DSC Resource Kit. Keep an eye on the community agenda for the link to the call.

We strongly encourage you to update to the newest version of all modules using the PowerShell Gallery, and don’t forget to give us your feedback in the comments below, on GitHub, or on Twitter (@PowerShell_Team)!

All resources with the ‘x’ prefix in their names are still experimental – this means that those resources are provided AS IS and are not supported through any Microsoft support program or service. If you find a problem with a resource, please file an issue on GitHub.

Included in this Release

You can see a detailed summary of all changes included in this release in the table below. For past release notes, go to the README.md or Changelog.md file on the GitHub repository page for a specific module (see the How to Find DSC Resource Modules on GitHub section below for details on finding the GitHub page for a specific module).

Module Name / Version / Release Notes

PSDscResources 2.6.0.0
  • Archive:
    • Fixed a minor bug in the unit tests where sometimes the incorrect DateTime format was used.
  • Added MsiPackage
xCertificate 2.5.0.0
  • Fixed issue where xCertReq does not process requested certificate when credentials parameter set and PSDscRunAsCredential not passed. See issue
xDatabase 1.6.0.0
  • Moved internal functions to a common helper module
xPSDesiredStateConfiguration 6.2.0.0
  • xMsiPackage:
    • Created high quality MSI package manager resource
  • xArchive:
    • Fixed a minor bug in the unit tests where sometimes the incorrect DateTime format was used.
  • xWindowsFeatureSet:
    • Had the wrong parameter name in one test case.
xSQLServer 7.0.0.0
  • Examples
    • xSQLServerDatabaseRole
      • 1-AddDatabaseRole.ps1
      • 2-RemoveDatabaseRole.ps1
    • xSQLServerRole
      • 3-AddMembersToServerRole.ps1
      • 4-MembersToIncludeInServerRole.ps1
      • 5-MembersToExcludeInServerRole.ps1
    • xSQLServerSetup
      • 1-InstallDefaultInstanceSingleServer.ps1
      • 2-InstallNamedInstanceSingleServer.ps1
      • 3-InstallNamedInstanceSingleServerFromUncPathUsingSourceCredential.ps1
      • 4-InstallNamedInstanceInFailoverClusterFirstNode.ps1
      • 5-InstallNamedInstanceInFailoverClusterSecondNode.ps1
    • xSQLServerReplication
      • 1-ConfigureInstanceAsDistributor.ps1
      • 2-ConfigureInstanceAsPublisher.ps1
    • xSQLServerNetwork
      • 1-EnableTcpIpOnCustomStaticPort.ps1
    • xSQLServerAvailabilityGroupListener
      • 1-AddAvailabilityGroupListenerWithSameNameAsVCO.ps1
      • 2-AddAvailabilityGroupListenerWithDifferentNameAsVCO.ps1
      • 3-RemoveAvailabilityGroupListenerWithSameNameAsVCO.ps1
      • 4-RemoveAvailabilityGroupListenerWithDifferentNameAsVCO.ps1
      • 5-AddAvailabilityGroupListenerUsingDHCPWithDefaultServerSubnet.ps1
      • 6-AddAvailabilityGroupListenerUsingDHCPWithSpecificSubnet.ps1
    • xSQLServerEndpointPermission
      • 1-AddConnectPermission.ps1
      • 2-RemoveConnectPermission.ps1
      • 3-AddConnectPermissionToAlwaysOnPrimaryAndSecondaryReplicaEachWithDifferentSqlServiceAccounts.ps1
      • 4-RemoveConnectPermissionToAlwaysOnPrimaryAndSecondaryReplicaEachWithDifferentSqlServiceAccounts.ps1
    • xSQLServerPermission
      • 1-AddServerPermissionForLogin.ps1
      • 2-RemoveServerPermissionForLogin.ps1
    • xSQLServerEndpointState
      • 1-MakeSureEndpointIsStarted.ps1
      • 2-MakeSureEndpointIsStopped.ps1
    • xSQLServerConfiguration
      • 1-ConfigureTwoInstancesOnTheSameServerToEnableClr.ps1
      • 2-ConfigureInstanceToEnablePriorityBoost.ps1
    • xSQLServerEndpoint
      • 1-CreateEndpointWithDefaultValues.ps1
      • 2-CreateEndpointWithSpecificPortAndIPAddress.ps1
      • 3-RemoveEndpoint.ps1
  • Changes to xSQLServerDatabaseRole
    • Fixed code style, added updated parameter descriptions to schema.mof and README.md.
  • Changes to xSQLServer
    • Raised the CodeCov target to 70%, which is the minimum required target for a HQRM resource.
  • Changes to xSQLServerRole
    • BREAKING CHANGE: The resource has been reworked in its entirety. Below is what has changed (see the configuration sketch after these release notes).
    • The mandatory parameters now also include ServerRoleName.
    • The ServerRole parameter was previously an array of server roles; it has been renamed to ServerRoleName and can now only be set to a single server role.
      • ServerRoleName is no longer limited to built-in server roles. To add members to a built-in server role, set ServerRoleName to the name of the built-in server role.
      • The server role will be created when Ensure is set to “Present” (if it does not already exist), or removed when Ensure is set to “Absent”.
    • Three new parameters have been added: Members, MembersToInclude, and MembersToExclude.
      • Members can be set to one or more logins, and those will replace all the memberships in the server role.
      • MembersToInclude and MembersToExclude can be set to one or more logins that will add or remove memberships, respectively, in the server role. MembersToInclude and MembersToExclude cannot be used at the same time as the Members parameter, but they can be used together.
  • Changes to xSQLServerSetup
    • Added a note to the README.md saying that it is not possible to add or remove features from a SQL Server failover cluster (issue 433).
    • Changed so that it reports false if the desired state is not correct (issue 432).
      • Added a test to make sure we always return false if a SQL Server failover cluster is missing features.
    • Helper function Connect-SQLAnalysis
      • Now has correct error handling, and throw no longer uses the unknown named parameter “-Message” (issue 436).
      • Added tests for Connect-SQLAnalysis.
      • Changed to localized error messages.
      • Minor changes to error handling.
    • Added better support for the AddNode action (issue 369).
    • Now it skips cluster validation for AddNode (issue 442).
    • Now it ignores parameters that are not allowed for the AddNode action (issue 441).
    • Added support for vNext CTP 1.4 (issue 472).
  • Added new resource
    • xSQLServerAlwaysOnAvailabilityGroupReplica
  • Changes to xSQLServerDatabaseRecoveryModel
    • Fixed code style, removed SQLServerDatabaseRecoveryModel functions from xSQLServerHelper.
  • Changes to xSQLServerAlwaysOnAvailabilityGroup
    • Fixed the permissions check loop so that it exits the loop after the function determines the required permissions are in place.
  • Changes to xSQLServerAvailabilityGroupListener
    • Removed the dependency of SQLPS provider (issue 460).
    • Cleaned up code.
    • Added test for more coverage.
    • Fixed PSSA rule warnings (issue 255).
    • Parameter Ensure now defaults to “Present” (issue 450).
  • Changes to xSQLServerFirewall
    • Now it will correctly create rules when the resource is used for two or more instances on the same server (issue 461).
  • Changes to xSQLServerEndpointPermission
    • Added description to the README.md
    • Cleaned up code (issue 257 and issue 231)
    • Now the default value for Ensure is “Present”.
    • Removed dependency of SQLPS provider (issue 483).
    • Refactored tests so they use less code.
  • Changes to README.md
    • Added a deprecated tag to xSQLServerFailoverClusterSetup, xSQLAOGroupEnsure, and xSQLAOGroupJoin in README.md to make it clearer that these resources have been replaced by xSQLServerSetup, xSQLServerAlwaysOnAvailabilityGroup, and xSQLServerAlwaysOnAvailabilityGroupReplica, respectively.
  • Changes to xSQLServerEndpoint
    • BREAKING CHANGE: Now SQLInstanceName is mandatory and is a key, so SQLInstanceName no longer has a default value (issue 279).
    • BREAKING CHANGE: Parameter AuthorizedUser has been removed (issue 466, issue 275 and issue 80). Connect permissions can be set using the resource xSQLServerEndpointPermission.
    • Optional parameter IpAddress has been added. The default is to listen on any valid IP address (issue 232).
    • Parameter Port now has a default value of 5022.
    • Parameter Ensure now defaults to “Present”.
    • Resource now supports changing IP address and changing port.
    • Added unit tests (issue 289)
    • Added examples.
  • Changes to xSQLServerEndpointState
    • Cleaned up code, removed SupportsShouldProcess and fixed PSSA rules warnings (issue 258 and issue 230).
    • Now the default value for the parameter State is “Started”.
    • Updated README.md with a description for the resources and revised the parameter descriptions.
    • Removed dependency of SQLPS provider (issue 481).
    • The parameter NodeName is no longer mandatory and now has the default value of $env:COMPUTERNAME.
    • The parameter Name is now a key, so it is possible to change the state of more than one endpoint on the same instance. Note: The resource still only supports Database Mirror endpoints at this time.
  • Changes to xSQLServerHelper module
    • Removed the helper function Get-SQLAlwaysOnEndpoint because no resource uses it any longer.
    • BREAKING CHANGE: Changed the helper function Import-SQLPSModule to support the SqlServer module (issue 91). The SqlServer module is the preferred module, so if it is found it will be used; otherwise an attempt is made to load the SQLPS module instead.
  • Changes to xSQLServerScript
    • Updated tests for this resource, because they failed when Import-SQLPSModule was updated.

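To illustrate the reworked xSQLServerRole parameters described in the release notes above, here is a minimal configuration sketch. The server, instance, role, and login names are hypothetical, and the SQLServer and SQLInstanceName parameter names are assumed to match the module’s other resources:

Configuration SqlServerRoleExample
{
    # Pull in the DSC resources from the xSQLServer module updated in this release.
    Import-DscResource -ModuleName xSQLServer

    Node 'localhost'
    {
        # Ensure a custom server role exists and replace its membership with
        # exactly these logins (Members replaces all existing memberships).
        xSQLServerRole ReportOperatorsRole
        {
            Ensure          = 'Present'
            ServerRoleName  = 'ReportOperators'        # hypothetical role name
            SQLServer       = 'SQL01'                  # hypothetical server
            SQLInstanceName = 'MSSQLSERVER'
            Members         = @('CONTOSO\ReportAdmin', 'CONTOSO\ReportSvc')
        }
    }
}

# Compile the configuration to a MOF document for the local node.
SqlServerRoleExample -OutputPath .\SqlServerRoleExample

To add or remove individual logins without replacing the full membership, swap Members for MembersToInclude or MembersToExclude (they cannot be combined with Members, as noted above).
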
How to Find Released DSC Resource Modules

To see a list of all released DSC Resource Kit modules, go to the PowerShell Gallery and display all modules tagged as DSCResourceKit. You can also enter a module’s name in the search box in the upper right corner of the PowerShell Gallery to find a specific module.

Of course, you can also always use PowerShellGet (available in WMF 5.0) to find modules with DSC Resources:

# To list all modules that are part of the DSC Resource Kit
Find-Module -Tag DSCResourceKit

# To list all DSC resources from all sources
Find-DscResource

To find a specific module, go directly to its URL on the PowerShell Gallery:
http://www.powershellgallery.com/packages/< module name >
For example:
http://www.powershellgallery.com/packages/xWebAdministration

How to Install DSC Resource Modules From the PowerShell Gallery

We recommend that you use PowerShellGet to install DSC resource modules:

Install-Module -Name <module name>

For example:

Install-Module -Name xWebAdministration

To update all previously installed modules at once, open an elevated PowerShell prompt and use this command:

Update-Module

After installing modules, you can discover all DSC resources available to your local system with this command:

Get-DscResource
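
If you only want to see the resources from a particular module, Get-DscResource also accepts a -Module parameter, for example:

Get-DscResource -Module xSQLServer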

How to Find DSC Resource Modules on GitHub

All resource modules in the DSC Resource Kit are available open-source on GitHub.
You can see the most recent state of a resource module by visiting its GitHub page at:
https://github.com/PowerShell/< module name >
For example, for the xCertificate module, go to:
https://github.com/PowerShell/xCertificate.

All DSC modules are also listed as submodules of the DscResources repository in the xDscResources folder.

How to Contribute

You are more than welcome to contribute to the development of the DSC Resource Kit! There are several different ways you can help. You can create new DSC resources or modules, add test automation, improve documentation, fix existing issues, or open new ones.
See our contributing guide for more info on how to become a DSC Resource Kit contributor.

If you would like to help, please take a look at the list of open issues for the DscResources repository.
You can also check issues for specific resource modules by going to:
https://github.com/PowerShell/< module name >/issues
For example:
https://github.com/PowerShell/xPSDesiredStateConfiguration/issues

Your help in developing the DSC Resource Kit is invaluable to us!

Questions, comments?

If you’re looking into using PowerShell DSC, have questions or issues with a current resource, or would like a new resource, let us know in the comments below, on Twitter (@PowerShell_Team), or by creating an issue on GitHub.

Katie Keim
Software Engineer
PowerShell Team
@katiedsc (Twitter)
@kwirkykat (GitHub)

Introducing Batch Mode Adaptive Joins

For SQL Server 2017 and Azure SQL Database, the Microsoft Query Processing team is introducing a new set of adaptive query processing improvements to help fix performance issues that are due to inaccurate cardinality estimates. Improvements in the adaptive query processing space include batch mode memory grant feedback, batch mode adaptive joins, and interleaved execution.  In this post, we’ll introduce batch mode adaptive joins.

aj_image_1

We have seen numerous cases where providing a specific join hint solved query performance issues for our customers.  However, the drawback of adding a hint is that we remove join algorithm decisions from the optimizer for that statement. While fixing a short-term issue, the hard-coded hint may not be the optimal decision as data distributions shift over time.

Another scenario is where we do not know up front what the optimal join should be, for example, with a parameter sensitive query where a low or high number of rows may flow through the plan based on the actual parameter value.

With these scenarios in mind, the Query Processing team introduced the ability to sense a bad join choice in a plan and then dynamically switch to a better join strategy during execution.

The batch mode adaptive joins feature enables the choice of a hash join or nested loop join method to be deferred until after the first input has been scanned.  We introduce a new Adaptive Join operator.  This operator defines a threshold that is used to decide when to switch to a nested loop plan.

Note: To see the new Adaptive Join operator in Graphical Showplan, a new version of SQL Server Management Studio is required and will be released shortly.

How it works at a high level:

  • If the row count of the build join input is small enough that a nested loop join would be more optimal than a hash join, we will switch to a nested loop algorithm.
  • If the build join input exceeds a specific row count threshold, no switch occurs and we will continue with a hash join.

The following query is used to illustrate an adaptive join example:

SELECT  [fo].[Order Key], [si].[Lead Time Days], [fo].[Quantity]
FROM    [Fact].[Order] AS [fo]
INNER JOIN [Dimension].[Stock Item] AS [si]
       ON [fo].[Stock Item Key] = [si].[Stock Item Key]
WHERE   [fo].[Quantity] = 360;

The query returns 336 rows.  Enabling Live Query Statistics we see the following plan:

aj_image_2

Walking through the noteworthy areas:

  1. We have a Columnstore Index Scan used to provide rows for the hash join build phase.
  2. We have the new Adaptive Join operator. This operator defines a threshold that will be used to decide when we will switch to a nested loop plan.  For our example, the threshold is 78 rows.  Anything with >= 78 rows will use a hash join.  If less than the threshold, we’ll use a nested loop join.
  3. Since we return 336 rows, we are exceeding the threshold and so the second branch represents the probe phase of a standard hash join operation. Notice that Live Query Statistics shows rows flowing through the operators – in this case “672 of 672”.
  4. And the last branch is our Clustered Index Seek for use by the nested loop join had the threshold not been exceeded. Notice that we see “0 of 336” rows displayed (the branch is unused).

Now let’s contrast the plan with the same query, but this time for a Quantity value that only has one row in the table:

SELECT  [fo].[Order Key], [si].[Lead Time Days], [fo].[Quantity]
FROM    [Fact].[Order] AS [fo]
INNER JOIN [Dimension].[Stock Item] AS [si]
       ON [fo].[Stock Item Key] = [si].[Stock Item Key]
WHERE   [fo].[Quantity] = 361;

The query returns one row.  Enabling Live Query Statistics we see the following plan:

aj_image_3

Walking through the noteworthy areas:

  1. With one row returned, you see the Clustered Index Seek now has rows flowing through it.
  2. And since we did not continue with the hash join build phase, you’ll see zero rows flowing through the second branch.

How do I enable batch mode adaptive joins?

To have your workloads automatically eligible for this improvement, enable compatibility level 140 for the database in SQL Server 2017 CTP 2.0 or greater.  This improvement will also be surfacing in Azure SQL Database.
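
For example, assuming you have the SqlServer PowerShell module installed for Invoke-Sqlcmd (the server and database names below are placeholders), raising the compatibility level looks like this; the same ALTER DATABASE statement can of course be run directly in T-SQL:

# Placeholder server and database names; Invoke-Sqlcmd ships in the SqlServer module
# (Install-Module -Name SqlServer).
Invoke-Sqlcmd -ServerInstance 'SQL2017-CTP2' `
    -Query 'ALTER DATABASE [WideWorldImportersDW] SET COMPATIBILITY_LEVEL = 140;'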

What statements are eligible for batch mode adaptive joins?

A few conditions make a logical join eligible for a batch mode adaptive join:

  • The database compatibility level is 140
  • The join is eligible to be executed both by an indexed nested loop join or a hash join physical algorithm
  • The hash join uses batch mode – either through the presence of a Columnstore index in the query overall or a Columnstore indexed table being referenced directly by the join
  • The generated alternative solutions of the nested loop join and hash join should have the same first child (outer reference)

If an adaptive join switches to a nested loop operation, do we have to rescan the join input?

No.  The nested loop operation will use the rows already read by the hash join build.

What determines the adaptive join threshold?

We look at estimated rows and the cost of a hash join vs. nested loop join alternative and find an intersection where the cost of a nested loop exceeds the hash join alternative.  This threshold cost is translated into a row count threshold value.

aj_image_4

The prior chart shows an intersection between the cost of a hash join vs. the cost of a nested loop join alternative.  At this intersection point, we determine the threshold.

What performance improvements can we expect to see?

Performance gains occur for workloads where, prior to adaptive joins being available, the optimizer chooses the wrong join type due to cardinality misestimates. For example, one of our customers saw a 20% improvement with one of the candidate workloads. Another internal Microsoft customer saw the following results:

aj_image_5

Workloads with big oscillations between small and large input Columnstore index scans joined to other tables will benefit the most from this improvement.

Any overhead of using batch mode adaptive joins?

Adaptive joins will introduce a higher memory requirement than an index nested loop join equivalent plan.  The additional memory will be requested as if the nested loop was a hash join. With that additional cost comes flexibility for scenarios where row counts may fluctuate in the build input.

How do batch mode adaptive joins work for consecutive executions once the plan is cached?

Batch mode adaptive joins will work for the initial execution of a statement, and once compiled, consecutive executions will remain adaptive based on the compiled adaptive join threshold and the runtime rows flowing through the build phase of the Columnstore Index Scan.

How can I track when batch mode adaptive joins are used?

As shown earlier, you will see the new Adaptive Join operator in the plan and the following new attributes:

  • AdaptiveThresholdRows: Shows the threshold used to switch from a hash join to a nested loop join.
  • EstimatedJoinType: What we think the join type will be.
  • ActualJoinType*: In an actual plan, shows which join algorithm was ultimately chosen based on the threshold.

* Arriving post-CTP 2.0.
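
If you want a rough way to spot cached plans that already contain the new operator, one option is to search the plan cache for the AdaptiveThresholdRows attribute listed above. The sketch below assumes the SqlServer PowerShell module and a placeholder server name; scanning the plan cache like this is a quick diagnostic, not something to run continuously on a busy system:

# Placeholder server name; casting showplan XML to text and using LIKE is a
# quick-and-dirty search, suitable only for ad-hoc troubleshooting.
Invoke-Sqlcmd -ServerInstance 'SQL2017-CTP2' -Query @'
SELECT TOP (20) cp.usecounts, cp.objtype, qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE CAST(qp.query_plan AS nvarchar(max)) LIKE N'%AdaptiveThresholdRows%';
'@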

What does the estimated plan show?

We will show the adaptive join plan shape, along with a defined adaptive join threshold and estimated join type.

Will Query Store capture and be able to force a batch mode adaptive join plan?

Yes.
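
For reference, forcing a captured plan from Query Store uses the same mechanism as for any other plan. A sketch with placeholder server, database, query_id, and plan_id values (look the IDs up in sys.query_store_query and sys.query_store_plan first):

# Placeholder names and IDs; sp_query_store_force_plan pins the chosen plan.
Invoke-Sqlcmd -ServerInstance 'SQL2017-CTP2' -Database 'WideWorldImportersDW' `
    -Query 'EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;'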

Will you be expanding the scope of batch mode adaptive joins to include row mode?

This first version supports batch mode execution; however, we are exploring row mode as a future possibility as well.

Thanks for reading, and stay tuned for more blog posts regarding the adaptive query processing feature family!

Introducing Interleaved Execution for Multi-Statement Table Valued Functions

For SQL Server vNext and Azure SQL Database, the Microsoft Query Processing team is introducing a new set of adaptive query processing improvements to help fix performance issues that are due to poor cardinality estimates. Improvements in the adaptive query processing space include batch mode memory grant feedback, batch mode adaptive joins, and interleaved execution.  In this post, we’ll introduce interleaved execution.

ie_image_1

SQL Server has historically used a unidirectional “pipeline” for optimizing and executing queries.  During optimization, the cardinality estimation process is responsible for providing row count estimates for operators in order to derive estimated costs.  The estimated costs help determine which plan gets selected for use in execution.  If cardinality estimates are incorrect, we will still end up using the original plan despite the poor original assumptions.

Interleaved execution changes the unidirectional boundary between the optimization and execution phases for a single-query execution and enables plans to adapt based on the revised estimates. During optimization if we encounter a candidate for interleaved execution, which for this first version will be multi-statement table valued functions (MSTVFs), we will pause optimization, execute the applicable subtree, capture accurate cardinality estimates and then resume optimization for downstream operations.

ie_image_2

While many DBAs are aware of the negative effects of MSTVFs, we know that their usage is still widespread.  MSTVFs have a fixed cardinality guess of “100” in SQL Server 2014 and SQL Server 2016, and “1” for earlier versions. Interleaved execution will help workload performance issues that are due to these fixed cardinality estimates associated with multi-statement table valued functions.

The following is a subset of an overall execution plan that shows the impact of fixed cardinality estimates from MSTVFs (below shows Live Query Statistics output, so you can see the actual row flow vs. estimated rows):

ie_image_3

Three noteworthy areas in the plan are numbered 1 through 3:

  1. We have a MSTVF Table Scan that has a fixed estimate of 100 rows. But for this example, there are 527,597 rows flowing through this MSTVF Table Scan, as seen in Live Query Statistics via the “527597 of 100” actual vs. estimated count – so our fixed estimate is significantly skewed.
  2. For the Nested Loops operation, we’re still assuming only 100 rows are flowing through the outer reference. Given the high number of rows actually being returned by the MSTVF, we’re likely better off with a different join algorithm altogether.
  3. For the Hash Match operation, notice the small warning symbol, which in this case is indicating a spill to disk.

Now contrast the prior plan with the actual plan generated with interleaved execution enabled:

ie_image_4

Three noteworthy areas in the plan are numbered 1 through 3:

  1. Notice that the MSTVF table scan now reflects an accurate cardinality estimate. Also notice the re-ordering of this table scan and the other operations.
  2. And regarding join algorithms, we have switched from a Nested Loop operation to a Hash Match operation instead, which is more optimal given the large number of rows involved.
  3. Also notice that we no longer have spill-warnings, as we’re granting more memory based on the true row count flowing from the MSTVF table scan.

What makes a query eligible for interleaved execution?

For the first version of interleaved execution, MSTVF referencing statements must be read-only and not part of a data modification operation. Also, the MSTVFs will not be eligible for interleaved execution if they are used on the inside of a CROSS APPLY.

How do I enable interleaved execution?

To have your workloads automatically eligible for this improvement, enable compatibility level 140 for the database in SQL Server 2017 CTP 2.0 or greater, or in Azure SQL Database.

What performance improvements can we expect to see?

This depends on your workload characteristics – however we have seen the greatest improvements for scenarios where MSTVFs output many rows that then flow to other operations (for example, joins to other tables or sort operations).

In one example, we worked with a financial services company that used two MSTVF-referencing queries and they saw the following improvements:

ie_image_5

 

For MSTVF “A”, the original query ran in 135 seconds and the plan with interleaved execution enabled ran in 50 seconds. For MSTVF “B”, the original query ran in 21 seconds and the plan with interleaved execution enabled ran in 1 second.

A special thanks to Arun Sirpal, the Senior Database Administrator who conducted this testing and worked with our team during private preview!

In general, the higher the skew between the estimated vs. actual number of rows, coupled with the number of downstream plan operations, the greater the performance impact.

What is the overhead?

The overhead should be minimal to none. MSTVFs were already being materialized prior to the introduction of interleaved execution; the difference is that we now allow deferred optimization and then leverage the cardinality estimate of the materialized row set.

What could go wrong?

As with any plan-affecting change, it is possible that some plans could change such that with better cardinality we get a worse plan. Mitigation can include reverting the compatibility level or using Query Store to force the non-regressed version of the plan.

How does interleaved execution work for consecutive executions?

Once an interleaved execution plan is cached, the plan with the revised estimates on the first execution is used for consecutive executions without re-instantiating interleaved execution.

How can I track when interleaved execution is used?

You can see usage attributes in the actual query execution plan:

  • ContainsInterleavedExecutionCandidates: Applies to the QueryPlan node; when “true”, it means the plan contains interleaved execution candidates.
  • IsInterleavedExecuted: This attribute is inside the RuntimeInformation element under the RelOp for the TVF node; when “true”, it means the operation was materialized as part of an interleaved execution operation.

You can also track interleaved execution occurrences via the following XEvents:

  • interleaved_exec_status: This event fires when interleaved execution is occurring.
  • interleaved_exec_stats_update: This event describes the cardinality estimates updated by interleaved execution.
  • interleaved_exec_disabled_reason: This event fires when a query with a possible candidate for interleaved execution does not actually get interleaved execution.

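As a hedged sketch of how you might capture these events with an Extended Events session (the event names are taken from the list above and are assumed to live in the sqlserver package; the server name and file target are placeholders):

# Placeholder server name and file target; stop and drop the session when done.
Invoke-Sqlcmd -ServerInstance 'SQL2017-CTP2' -Query @'
CREATE EVENT SESSION [TrackInterleavedExec] ON SERVER
    ADD EVENT sqlserver.interleaved_exec_status,
    ADD EVENT sqlserver.interleaved_exec_stats_update
    ADD TARGET package0.event_file (SET filename = N'InterleavedExec.xel');
ALTER EVENT SESSION [TrackInterleavedExec] ON SERVER STATE = START;
'@
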
What does the estimated plan show?

A query must be executed in order to allow interleaved execution to revise MSTVF cardinality estimates.  However, the estimated execution plan will still show when there are interleaved execution candidates via the ContainsInterleavedExecutionCandidates attribute.

What if the plan is manually cleared or automatically evicted from cache?

Upon query execution, there will be a fresh compilation that uses interleaved execution.

Will this improvement work if I use OPTION (RECOMPILE)?

Yes.  A statement using OPTION(RECOMPILE) will create a new plan using interleaved execution and not cache it.

Will Query Store capture and be able to force an interleaved execution plan?

Yes.  The plan will be the version that has corrected cardinality estimates based on initial execution.

Will you be expanding the scope of interleaved execution in a future version beyond MSTVFs?

Yes. We are looking at expanding to additional problematic estimation areas.

Thanks for reading, and stay tuned for more blog posts regarding the adaptive query processing feature family!

Making it easier to revert

Sometimes when things go wrong in my environment, I don’t want to have to clean it all up — I just want to go back in time to when everything was working. But remembering to maintain good recovery points isn’t easy.

Now we’re making it so that you can always roll back your virtual machine to a recent good state if you need to. Starting in the latest Windows Insider build, you can now always revert a virtual machine back to the state it started in.

In Virtual Machine Connection, just click the Revert button to undo any changes made inside the virtual machine since it last started.

Revert virtual machine

Under the hood, we’re using checkpoints; when you start a virtual machine that doesn’t have any checkpoints, we create one for you so that you can easily roll back to it if something goes wrong, then we clean it up once the virtual machine shuts down cleanly.

New virtual machines will be created with “Use automatic checkpoints” enabled by default, but you will have to enable it yourself for existing VMs.  This option can be found in Settings -> Checkpoints -> “Use automatic checkpoints”.

Checkpoint settings

Note: the checkpoint will only be taken automatically when the VM starts if it doesn’t have other existing checkpoints.
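
If you prefer to script this, the same setting can be toggled with the Hyper-V PowerShell module. A minimal sketch, assuming the Insider build exposes the corresponding switch on Set-VM and using a placeholder VM name:

# 'MyTestVM' is a placeholder; -AutomaticCheckpointsEnabled is assumed to be the
# parameter backing the "Use automatic checkpoints" setting shown above.
Set-VM -Name 'MyTestVM' -AutomaticCheckpointsEnabled $true

# Verify the setting took effect.
Get-VM -Name 'MyTestVM' | Select-Object Name, AutomaticCheckpointsEnabled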

Hopefully this will come in handy next time you need to undo something in your VM. If you are in the Windows Insider Program, please give it a try and let us know what you think.

Cheers,
Andy


C++ Unit Testing in Visual Studio

Testing is an increasingly important part of a software development workflow. In many cases, it is insufficient to test a program simply by running it and trying it out – as the scope of the project gets more involved, it becomes increasingly necessary to be able to test individual components of the code on a structured basis. If you’re a C++ developer and are interested in unit testing, you’ll want to be aware of Visual Studio’s unit testing tools. This post goes through just that, and is part of a series aimed at new users to Visual Studio.
This blog post goes over the following concepts:

  1. Setting Up Unit Testing
  2. The Microsoft Native C++ Unit Test Framework
  3. Using the Test Explorer to Run Tests in the IDE
  4. Determining Unit Test Code Coverage

Setting Up Unit Testing

The easiest and most organized way to set up unit tests is to create a separate project in Visual Studio for your tests. You can create as many test projects as you want in a solution and connect them to any number of other Visual Studio projects in that solution that contain the code you want to test. Assuming you already have some code that you want to test, simply follow these steps to get yourself set up:

  1. Right-click your solution and choose Add > New > Project. Click the Visual C++ category, and choose the Test sub-category. Select Native Unit Test Project, give the project a descriptive name, and then click OK.
    New Project Wizard for Testing
  2. Visual Studio will create a new project containing unit tests, with all dependencies to the native test framework already set up. The next thing to do is to add references to any projects that will be tested. Right-click the unit test project and choose Add > Reference…
    Right-click Add > Reference
  3. Check any projects that you want to unit test from your test project, and then press OK.
    Add > Reference
    Your unit testing project can now access your project(s) under test. You can now start writing tests, as long as you add #include statements for the headers you want to access.

NOTE: You will only be able to unit test public functions this way. To unit test private functions, you must write your unit tests in the same class as the code that is being tested.

The Microsoft Native C++ Unit Test Framework

Visual Studio ships with a native C++ test framework that you can use to write your unit tests. The framework defines a series of macros to provide simplified syntax.

If you followed the steps in the previous procedure, you should have a unit test project set up along with your main code. Open unittest1.cpp in your test project and look at the starting code provided:
Starting code provided when creating MSTest project.
Right from the start, you’ll notice that dependencies have already been set up to the test framework, so you can get to work writing your tests. Assuming you connected your test project to your project(s) under test via Add > Reference earlier, you can simply add the #include statements for the header files of the code you want to test.

Tests can be organized by using the TEST_CLASS and TEST_METHOD macros, which perform exactly the functions you’d expect. A TEST_CLASS is a collection of related TEST_METHODS, and each TEST_METHOD contains a test. You can name your TEST_CLASS and TEST_METHOD anything you want in the brackets. It’s a good idea to use descriptive names that make it easy to identify each test/test group individually later.

Let’s try writing some basic asserts. At the TODO comment, write:
Assert::AreEqual(1, 1);

This is a basic equality assert which compares two expressions. The first expression holds the expected value, the second holds the item you are testing. For the Assert to pass, both sides must evaluate to the same result. In this trivial example, the test will always pass. You can also test for values you don’t want your expression to evaluate to, like this:
Assert::AreNotEqual(1, 2);

Here, for the test to pass, the two expressions must not evaluate to the same result. While this kind of assert is less common, you may find it useful for verifying edge cases where you want to avoid a specific behavior from occurring.

There are several other Assert functions that you can try. Simply type Assert:: and let IntelliSense provide the full list to take a look. Quick Info tooltips appear for each Assert as you make a selection in the list, providing more context on their format and function. You can find a full reference of features in the Microsoft C++ native framework on MSDN.

Using the Test Explorer to Run Tests in the IDE

With Visual Studio, you’re not restricted to running unit tests in the command line. The Test Explorer window in Visual Studio provides a simple interface to run, debug, and parallelize test execution.
Test Explorer window
This is a straightforward process. Once you connect your test project to your project(s) under test, add some #include directives in the file containing your unit tests to the code under test, and write some Asserts, you can simply run a full build. Test Explorer will then discover all your unit tests and populate itself with them.

NOTE: In .NET, a feature called Live Unit Testing is available. This feature is not currently supported in C++, so unit tests are discovered and executed only after you run builds.

To run your unit tests, simply click the Run All link in the Test Explorer. This will build your project (though this process is skipped if the project is already up to date) and then run all your tests. The Test Explorer indicates the passing tests with a checkmark and the failing tests with an X. A summary of execution results is provided at the bottom of the window. You can click on any failing unit test to see why it failed, including any exceptions that may have been thrown. Execution times for each unit test are also provided. For realistic test execution times, test in the Release solution configuration rather than Debug; this provides faster runtimes that more closely approximate your shipped application.

To be able to debug your code as you run your unit tests (so you can stop at breakpoints and so forth), simply use the Test > Debug menu to run your tests.

Determining Unit Test Code Coverage

If you are using Visual Studio Enterprise, you can run code coverage on your unit tests. Assuming you have unit tests already set up for your project, this is as simple as going to Test > Analyze Code Coverage in the main Visual Studio menu at the top of the IDE. This opens the Code Coverage Results window which summarizes code coverage data for your tests.
Code Coverage Results window
NOTE: There is a known issue where Code Coverage will not work in C++ unless /DEBUG:FULL is selected as the debugging configuration. By default, the configuration is set to /DEBUG:FASTLINK instead. You can switch to /DEBUG:FULL by doing the following:

  1. Right-click the test project and choose Properties.
  2. Go to Linker > Debugging > Generate Debug Info.
  3. Set the option to Generate Debug Information optimized for sharing and publishing (/DEBUG:FULL).

The Code Coverage Results window provides an option called Show Code Coverage Coloring, which colors the code based on whether it’s covered or not.
Code Coverage coloring
Code coverage is counted in blocks, with a block being a piece of code with exactly one entry and exit point. If a block is passed through at least once, it is considered covered.

For more information on C++ unit testing, including some more advanced topics, check out the following MSDN articles:

Azure IoT Suite connected factory now available

Getting Started with Industrie 4.0

Many customers tell us that they want to start with the digital transformation of their assets, for example production lines, as well as their business processes. However, many times they just don’t know where to start or what exactly Industrie 4.0 is all about. At Microsoft, we are committed to enabling businesses of all sizes to realize their full potential, and today we are proud to announce our connected factory preconfigured solution and six-step framework to quickly enable you to get started on your Industrie 4.0 journey.

Azure IoT Suite preconfigured solutions are engineered to help businesses get started quickly and move from proof-of-concept to broader deployment. The connected factory preconfigured solution leverages Azure services including Azure IoT Hub and the new Azure Time Series Insights. Furthermore, it leverages the OPC Foundation’s cross-platform OPC UA .Net Standard Library reference stack for OPC UA connectivity, as well as a rich web portal with OPC UA server management capabilities, alarms processing and telemetry visualizations. The web portal and the Azure Time-Series Insights can be used to quickly see trends in OPC UA telemetry data and see Overall Equipment Effectiveness (OEE) and several key performance indicators (KPIs) like number of units produced and energy consumption.

This solution builds on the industry-leading cloud connectivity for OPC UA that we have first announced at Hannover Messe a year ago. Since then, all components of this connectivity have been released cross-platform and open-source on GitHub in collaboration with the OPC Foundation making Microsoft the largest open-source contributor to the OPC Foundation. Furthermore, the entire connected factory preconfigured solution is also published open-source on GitHub.

Azure IoT Suite is the best solution for Industrie 4.0

As we demonstrated at Hannover Messe 2016, we believe that the Azure IoT Suite is the best choice for businesses to cloud-enable industrial equipment — including already deployed machines, without disrupting their operation — to allow for data and device management, insights, machine learning capabilities and even the ability to manage equipment remotely.

To demonstrate this functionality, we have gone to great lengths to build real OPC UA servers into the solution, grouped into assembly lines where each OPC UA server is responsible for a “station” within the assembly line. Each assembly line is producing simulated products. We even built a simple Manufacturing Execution System (MES) with an OPC UA interface, which controls each assembly line. The connected factory preconfigured solution includes 8 such assembly lines and they are running in a Linux Virtual Machine on Azure. Our Azure IoT Gateway SDK is also used in each simulated factory location.

Secure by design, secure by default

As verified by the BSI Study, OPC UA is secure by default. Microsoft is going one step further and is making sure that the OPC UA components used in the connected factory solution are secure by default, to give you a secure base to build your own solution on top of. Secure by default means that all security features are turned on and already configured. This means that you don’t need to do this step manually, and you can see how an end-to-end solution can be secured.

Easy to extend with real factories

We have made it as simple as possible to extend the connected factory preconfigured solution with real factories. For this, we have partnered with several industry leaders in the OPC UA ecosystem who have built turnkey gateway solutions that have the Azure connectivity used by this solution already built in and are close to zero-config. These partners include Softing, Unified Automation, and Hewlett Packard Enterprise. Please visit our device catalog for a complete list of gateways compatible with this solution. With these gateways, you can easily connect your on-premises industrial assets to this solution.

However, we have gone even further and additionally provided open-source Docker containers as well as pre-built Docker container images available on Docker Hub for the Azure connectivity components (OPC Proxy and OPC Publisher), both integrated in the Azure IoT Gateway SDK and available on GitHub to make a PoC with real equipment achievable in hours, enabling you to quickly draw insights from your equipment and to plan commercialization steps based on these PoCs.

The future is now

Get started on the journey to cloud-enable industrial equipment with Azure IoT Suite connected factory preconfigured solution and see the solution in action at Hannover Messe 2017. To learn more about how IoT can help transform your business, visit www.InternetofYourThings.com.

Learn more about Microsoft IoT

Microsoft is simplifying IoT so every business can digitally transform through IoT solutions that are more accessible and easier to implement. Microsoft has the most comprehensive IoT portfolio with a wide range of IoT offerings to meet organizations where they are on their IoT journey, including everything businesses need to get started — ranging from operating systems for their devices, cloud services to control them, advanced analytics to gain insights, and business applications to enable intelligent action. To see how Microsoft IoT can transform your business, visit www.InternetofYourThings.com.​

Announcing Azure Stream Analytics on edge devices (preview)

Today, we are announcing Azure Stream Analytics (ASA) on edge devices, a new feature of Azure Stream Analytics that enables customers to deploy analytical intelligence closer to the IoT devices and unlock the full value of the device-generated data.

Azure Stream Analytics on edge devices extends all the benefits of our unique streaming technology from the cloud down to devices. With ASA on edge devices, we are offering the power of our Complex Event Processing (CEP) solution on edge devices to easily develop and run real-time analytics on multiple streams of data. One of the key benefits of this feature is the seamless integration with the cloud: users can develop, test, and deploy their analytics from the cloud, using the same SQL-like language for both cloud and edge analytics jobs. Like in the cloud, this SQL language notably enables temporal-based joins, windowed aggregates, temporal filters, and other common operations such as aggregates, projections, and filters.  Users can also seamlessly integrate custom code in JavaScript for advanced scenarios.

ASA_on_edge_devices

Enabling new scenarios

Azure IoT Hub, a core Azure service that connects, monitors and updates IoT devices, has enabled customers to connect millions of devices to the cloud, and Azure Stream Analytics has enabled customers to easily deploy and scale analytical intelligence in the cloud for extracting actionable insights from the device-generated data. However, multiple IoT scenarios require real-time response, resiliency to intermittent connectivity, handling of large volumes of raw data, or pre-processing of data to ensure regulatory compliance. All of which could now be achieved by using ASA on edge device to deploy and operate analytical intelligence physically closer to the devices.

Hewlett Packard Enterprise (HPE) is an early preview partner who has demonstrated a working prototype of ASA on edge devices at Microsoft's booth at Hannover Messe (April 24 to 28, Hall 7, Stand C40). A result of close collaboration between Microsoft, HPE and the OPC Foundation, the prototype is based on Azure Stream Analytics, the HPE Edgeline EL1000 Converged Edge System, and the OPC Unified Architecture (OPC-UA), delivering real-time analysis, condition monitoring, and control. The HPE Edgeline EL1000 Converged Edge System integrates compute, storage, data capture, control and enterprise-class systems and device management built to thrive in hardened environments and handle shock, vibration and extreme temperatures.

ASA on edge devices is particularly interesting for Industrial IoT (IIoT) scenarios that require reacting to operational data with ultra-low latency. Systems such as manufacturing production lines or remote mining equipment need to analyze and act in real-time to the streams of incoming data, e.g. when anomalies are detected.

In offshore drilling, offshore windfarms, or ship transport scenarios, analytics need to run even when internet connectivity is intermittent. In these cases, ASA on edge devices can run reliably to summarize and monitor events, react to events locally, and leverage connection to the cloud when it becomes available.

In industrial IoT scenarios, the volume of data can be too large to be sent to the cloud directly due to limited bandwidth or bandwidth cost. For example, the data produced by jet engines (a typical number is that 1TB of data is collected during a flight) or manufacturing sensors (each sensor can produce 1MB/s to 10MB/s) may need to be filtered down, aggregated or processed directly on the device before sending it to the cloud. Examples of these processes include sending only events when values change instead of sending every event, averaging data on a time window, or using a user-defined function.

Until now, customers with such requirements had to build custom solutions, and manage them separately from their cloud applications. Now, customers can use Azure Stream Analytics to seamlessly develop and operate their stream analytics jobs both on edge devices and in the cloud.

How to use Azure Stream Analytics on edge devices?

Azure Stream Analytics on edge devices leverages the Azure IoT Gateway SDK to run on Windows and Linux operating systems, and supports a multitude of hardware, from devices as small as single-board computers to full PCs, servers, or dedicated field gateway devices. The IoT Gateway SDK provides connectors for different industry-standard communication protocols such as OPC-UA, Modbus, and MQTT, and can be extended to support your own communication needs. Azure IoT Hub is used to provide secured bi-directional communications between gateways and the cloud.

Azure Stream Analytics on edge devices is available now in private preview, and you can request access to try it out.

You can also meet with our team at Hannover Messe, the world's biggest industrial fair, which takes place from April 24th to April 28th in Hannover, Germany. We are located at the Microsoft booth in the Advanced Analytics pod (Hall 7, Stand C40).

Announcing new functionality to automatically provision devices to Azure IoT Hub

We’re announcing a great new service for Azure IoT Hub that allows customers to provision millions of devices in a secure and scalable manner. Azure IoT Hub Device Provisioning enables zero-touch provisioning to the right IoT hub without requiring human intervention, and is currently being used by early adopters to validate various solution deployment scenarios.

Provisioning is an important part of the lifecycle management of an IoT device, which enables seamless integration with an Azure IoT solution. Technically speaking, provisioning pairs devices with an IoT hub based on any number of characteristics such as:

  • Location of the device (geo-sharding)
  • Customer who bought the device (multitenancy)
  • Application in which the device is to be used (solution isolation)

The Azure IoT Hub Device Provisioning service is made even better thanks to some security standardization work called DICE and will support multiple types of hardware security modules such as TPM. In conjunction with this, we announced hardware partnerships with STMicro and Micron.

Without IoT Hub Device Provisioning, setting up and deploying a large number of devices to work with a cloud backend is hard and involves a lot of manual work. This is true today for Azure IoT Hub. While customers can create a lot of device identities within the hub at a time using bulk import, they still must individually place connection credentials on the devices themselves. It's hard, and today customers must build their own solution functionality to avoid the painful manual process. Our commitment to strong security best practices is partly to blame. IoT Hub requires each device to have a unique identity registered to the hub in order to enable per-device access revocation in case the device is compromised. This is a security best-practice, but like many security-related best practices, it tends to slow down deployment.
 
Not only that, but registering a device to Azure IoT Hub is really only half the battle. Once a device is registered, physically deployed in the field, and hooked up to the device management dashboard, now customers have to configure the device with the proper desired twin state and firmware version. This extra step is more time that the device is not a fully-functioning member of the IoT solution. We can do better using the IoT Hub Device Provisioning service.

Hardcoding endpoints with credentials in mass production is operationally expensive, and on top of that the device manufacturer might not know how the device will be used or who the eventual device owner will be, or they may not care. In addition, complete provisioning may involve information that was not available when the device was manufactured, such as who purchased the device. The Azure IoT Hub Device Provisioning service contains all the information needed to provision a device.

Devices running Windows 10 IoT Core operating systems will enable an even easier way to connect to Device Provisioning via an in-box client that OEMs can include in the device unit. With Windows 10 IoT Core, customers can get a zero-touch provisioning experience, eliminating any configuration and provisioning hassles when onboarding new IoT devices that connect to Azure services. When combined with Windows 10 IoT Core support for Azure IoT Hub device management, the entire device life cycle management is simplified through features that enable device reprovisioning, ownership transfer, secure device management, and device end-of-life management. You can learn more about Windows IoT Core device provisioning and device management details by visiting Azure IoT Device Management.

Azure IoT is committed to offering our customers services which take the pain out of deploying and managing an IoT solution in a secure, reliable way. The Azure IoT Hub Device Provisioning service is currently in private preview, and we'll make further announcements when it becomes available to the public. In the meantime, you can learn more about Azure IoT Hub's device management capabilities. We would love to get your feedback on secure device registration, so please continue to submit your suggestions through the Azure IoT User Voice forum or join the Azure IoT Advisors Yammer group.

Learn more about Microsoft IoT

Microsoft is simplifying IoT so every business can digitally transform through IoT solutions that are more accessible and easier to implement. Microsoft has the most comprehensive IoT portfolio with a wide range of IoT offerings to meet organizations where they are on their IoT journey, including everything businesses need to get started, ranging from operating systems for their devices, cloud services to control them, advanced analytics to gain insights, and business applications to enable intelligent action. See how Microsoft IoT can transform your business.

Announcing Azure Time Series Insights

Today we are excited to announce the public preview of Azure Time Series Insights, a fully managed analytics, storage, and visualization service that makes it incredibly simple to interactively and instantly explore and analyze billions of events from sources such as the Internet of Things. Time Series Insights gives you a near real time global view of your data across various event sources and lets you quickly validate IoT solutions and avoid costly downtime of mission-critical devices. It helps you discover hidden trends, spot anomalies, and conduct root-cause analysis in near real-time, all without writing a single line of code, through its simple and intuitive user experience. Additionally, it provides rich APIs to enable you to integrate its powerful capabilities in your own existing workflow or application.

Azure Time Series Insights

Today more than ever, with increasing connected devices and massive advances in the collection of data, businesses are struggling to quickly derive insights from the sheer volume of data generated from geographically dispersed devices and solutions. In addition to the massive scale, there is also a growing need for deriving insights from the millions of events being generated in near real time. Any delay in insights can cause significant downtime and business impact. Additionally, the need to correlate data from a variety of different sensors is paramount to debug and optimize business processes and workflows. Reducing the time and expertise required for this is essential for businesses to gain a competitive edge and optimize their operations. Azure Time Series Insights solves these and many more challenges for your IoT solutions.

Customers from diverse industry sectors like automotive, windfarms, elevators, smart buildings, manufacturing, etc. have been using Time Series Insights during its private preview. They have validated its capabilities with real production data load, already realized the benefits, and are looking for ways to cut costs and improve operations.

For example, BMW uses Azure Time Series Insights and companion Azure IoT services for predictive maintenance across several of their departments. Time Series Insights and other Azure IoT services have helped companies like BMW improve operational efficiency by reducing SLAs for validating connected device installation, in some cases realizing a reduction in time from several months to as little as thirty minutes.

Near real-time insights in seconds at IoT scale

Azure Time Series Insights enables you to ingest hundreds of millions of sensor events per day, and makes new data available to query for insights within one minute. It also enables you to retain this data for months.  Time Series Insights is optimized to enable you to query over this combination of near real-time data and terabytes of historical data in seconds. It does not pre-aggregate data, but stores the raw events, and delivers the power of doing all aggregations instantly over this massive scale. Additionally, it also enables you to upload reference data to augment or enrich your incoming sensor data. Time Series Insights enables you to compare data across various sensors of different kinds, event sources, regions and IoT installations in the same query. This is what enables you to get a global view of your data, lets you quickly validate, monitor, discover trends, spot anomalies, and conduct root cause analysis in near real time.

“Azure Time Series Insights has standardized our method of accessing devices’ telemetry in real time without any development effort. Time to detect and diagnose a problem has dropped from days to minutes. With just a few clicks we can visualize the end-to-end device data flow, helping us identify and address customer and market needs,” said Scott Tillman, Software Engineer, ThyssenKrupp Elevator.

Trends and correlation

Easy to get started

With built-in integration to Azure IoT Hub and Azure Event Hubs, customers can get started with Time Series Insights in minutes. Just enter your IoT Hub or Event Hub configuration information through the Azure Portal, and Time Series Insights connects and starts pulling and storing real-time data from it within a minute. This service is schema adaptive, which means that you do not have to do any data preparation to start deriving insights. This enables you to explore, compare, and correlate a variety of sensors seamlessly. It provides a very intuitive user experience that enables you to view, explore, and drill down into various granularities of data, down to specific events. It also provides SQL-like filters and aggregates, ability to construct, visualize, compare, and overlay various time series patterns, heat maps, and the ability to save and share queries. This is what enables you to get started, and glean insights from your data using Azure Time Series Insights in minutes. You can also unleash the power of Time Series Insights using the REST query APIs to create custom solutions. Additionally, Time Series Insights is used to power the time series analytics experiences in Microsoft IoT Central and Azure IoT Suite connected factory preconfigured solutions. Time Series Insights is powered by Azure Platform and provides enterprise scale, reliability, Azure Active Directory integration, and operational security.

Codit, based in Belgium, is a leading IT services company providing consultancy, technology, and managed services in business integration. They help companies reduce operational costs, improve efficiency and enhance control by enabling people and applications to integrate more efficiently. “Azure Time Series Insights is easy to use, helping us to quickly explore, analyze, and visualize many events in just a few clicks.  It’s a complete cloud service, and it has saved us from writing custom applications to quickly verify changes to IoT initiatives,” said Tom Kerkhove, Codit. “We are excited to use Time Series Insights in the future.”

Heatmap and outlier

Azure Time Series Insights extends the broad portfolio of Azure IoT services, such as Azure IoT Hub, Azure Stream Analytics, Azure Machine Learning and various other services to help customers unlock deep insights from their IoT solution. Currently, Time Series Insights is available in the US West, US East, EU West, and EU North regions. Learn more about Azure Time Series Insights and sign up for the Azure Time Series Insights preview today.

Learn more about Microsoft IoT

Microsoft is simplifying IoT so every business can digitally transform through IoT solutions that are more accessible and easier to implement. Microsoft has the most comprehensive IoT portfolio with a wide range of IoT offerings to meet organizations where they are on their IoT journey, including everything businesses need to get started — ranging from operating systems for their devices, cloud services to control them, advanced analytics to gain insights, and business applications to enable intelligent action. To see how Microsoft IoT can transform your business, visit www.InternetofYourThings.com.​
