Channel: TechNet Technology News

Now available: Azure invoices emailed direct to your inbox


Instead of downloading your invoice every month, you can now opt in to receive your invoice statement as an attachment to your monthly billing email. You can also add more recipients for this email, so you can save time by sending the invoice directly to your accounts receivable department.

How to opt in

  1. Select your subscription from the Subscriptions blade. You have to opt in for each subscription you own. Click Send my invoice (you may not see this option if you are not the account admin), and then opt in. [Screenshot: Opt in from the resource menu of your subscription]
  2. Once you've accepted the agreement, you can configure additional recipients. [Screenshot: Configure recipients of your invoice]
  3. You can also access this blade from the billing history blade, or from a deep link in your monthly statement notification email. [Screenshot: Link in Invoice Blade]

If you can't access the email settings blade:

• You must be the account administrator to configure this setting. Not sure what this means? Learn more here.

• If you have a monthly invoice but aren't receiving an email, make sure your communication email address is set correctly.

• This feature is only available in the direct channel and may not be available for certain subscriptions such as support offers or Azure in Open.


SQL Server 2016 innovations power Azure SQL Data Warehouse to deliver faster insights


Azure SQL Data Warehouse (SQL DW) is a SQL-based, petabyte-scale, massively parallel cloud solution for data warehousing. It is fully managed and highly elastic, enabling you to provision and scale capacity in minutes. You can scale compute and storage independently, supporting everything from burst to archival scenarios, and pay for what you use instead of being locked into a cluster configuration.

The engine underneath Azure SQL Data Warehouse that runs the queries on each individual node is Microsoft's industry-leading SQL Server database engine. With general availability in 2016, Azure SQL DW received an upgrade to SQL Server 2016 that transparently provided a 40% performance increase to user workloads composed of analytic queries.

The two performance pillars of SQL DW are its columnstore and the batch mode execution engine, also known as vectorized query execution. In this blog, we highlight the improvements in SQL Server 2016 that took SQL Data Warehouse performance to a new level. These are all in addition to existing features such as columnar compression and segment elimination. We already had batch mode execution, which can process multiple rows at a time instead of one value at a time and take advantage of SIMD hardware innovations. SQL Server 2016 further extended batch mode execution to more operators and scenarios.

The following are the key SQL Server 2016 performance innovations for columnstore and batch mode. Each links to a detailed blog providing examples and observed performance gains.

Aggregate Pushdown

Aggregates are very common in analytic queries. With columnstore tables, SQL Server processes aggregates in batch mode, delivering an order of magnitude better performance. SQL Server 2016 further dials up aggregate computation performance by pushing the aggregation down to the SCAN node. This allows the aggregate to be computed on the compressed data during the scan itself.
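For a concrete sense of what benefits, consider a plain aggregate over a columnstore table, submitted here with Invoke-Sqlcmd from the SqlServer PowerShell module. This is an illustrative sketch only: the server, database, credentials, and the dbo.FactSales table are placeholders, and the pushdown itself happens automatically inside the engine with no query changes.

# Illustrative sketch: run an aggregate query against SQL Data Warehouse.
# Server, database, credentials, and dbo.FactSales are placeholders.
$query = @"
SELECT ProductKey,
       COUNT_BIG(*)     AS RowsCounted,
       SUM(SalesAmount) AS TotalSales
FROM   dbo.FactSales            -- a clustered columnstore table
GROUP BY ProductKey;
"@
Invoke-Sqlcmd -ServerInstance 'yourserver.database.windows.net' -Database 'yourdw' -Username 'youruser' -Password 'yourpassword' -Query $query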

String Predicate Pushdown

Columnstore in SQL Server 2016 allows string predicates to be pushed down to the SCAN node, resulting in a significant improvement in query performance. String predicate pushdown leverages dictionaries to minimize the number of string comparisons.

Multiple Aggregates

SQL Server 2016 now processes multiple aggregates on a table scan more efficiently in a single batch mode aggregation operator. Previously, multiple aggregation paths and operators would be instantiated, resulting in slower performance.

Batch Mode Window Aggregates

SQL Server 2016 introduces batch mode execution for window aggregates. Batch mode has the potential to speed up certain queries by as much as 300 times, as measured in some of our internal tests.

Batch Mode in Serial Execution

High concurrent activity and/or a low number of cores can force queries to run in serial. Previously, serial queries were forced to run in row mode, taking a double hit from the lack of parallelism and the lack of batch mode. SQL Server 2016 can run batch mode even when the degree of parallelism (DOP) for a query is 1 (DOP 1 means the query runs serially). SQL Data Warehouse at lower SLOs (less than DWU1000) runs each distribution query serially, as there is less than one core per distribution. With this improvement, these queries now run in batch mode.

The above is quite an extensive list of performance boosts that SQL Data Warehouse now benefits from. Best of all, no change is required to SQL Data Warehouse user queries to get the above performance benefits – it is all automatic under the hood!

Next steps

In this blog we described how SQL Server 2016 innovations in columnstore and batch mode technologies give a huge performance boost to Azure SQL Data Warehouse queries. We encourage you to try it out by moving your on-premises data warehouse into the cloud.

Learn more

Check out the many resources to learn more about SQL Data Warehouse.

What is Azure SQL Data Warehouse?

SQL Data Warehouse best practices

Video library

MSDN forum

Stack Overflow forum

Power BI Mobile apps feature summary – January 2017

This year, we will keep investing in perfecting the mobile apps experience and performance to bring you an outstanding way to experience your data on the go. We believe having your data available, easily and at any time, is a basic need in today’s business world. Having said that, if you’re looking for more control and customization of your mobile experience, we encourage you to try our Phone reports capability, introduced as part of this month's release. In this release, you’ll also find exciting improvements for tables and matrices, and new capabilities like ADFS authentication support for SQL Server Reporting Services (SSRS).

Julia – A Fresh Approach to Numerical Computing


This post is authored by Viral B. Shah, co-creator of the Julia language and co-founder and CEO at Julia Computing, and Avik Sengupta, head of engineering at Julia Computing.

The Julia language provides a fresh new approach to numerical computing, where there is no longer a compromise between performance and productivity. A high-level language that makes writing natural mathematical code easy, with runtime speeds approaching raw C, Julia has been used to model economic systems at the Federal Reserve, drive autonomous cars at the University of California, Berkeley, optimize the power grid, calculate solvency requirements for large insurance firms, model the US mortgage markets, and map all the stars in the sky.

It is no surprise, then, that Julia is a natural fit in many areas of machine learning. ML, and in particular deep learning, drives some of the most demanding numerical computing applications in use today. And the strengths of Julia make it a perfect language to implement these algorithms.


One of the key promises of Julia is to eliminate the so-called “two language problem.” This is the phenomenon of writing prototypes in a high-level language for productivity, but having to dive down into C for performance-critical sections when working on real-life data in production. This is not necessary in Julia, because there is no performance penalty for using high-level or abstract constructs.

This means both the researcher and engineer can now use the same language. One can use, for example, custom kernels written in Julia that will perform as well as kernels written in C. Further, language features such as macros and reflection can be used to create high-level APIs and DSLs that increase the productivity of both the researcher and engineer.

GPU

Modern ML is heavily dependent on running on general-purpose GPUs in order to attain acceptable performance. As a flexible, modern, high-level language, Julia is well placed to take advantage of modern hardware to the fullest.

First, Julia’s exceptional FFI capabilities make it trivial to use the GPU drivers and CUDA libraries to offload computation to the GPU without any additional overhead. This allows Julia deep learning libraries to use GPU computation with very little effort.

Beyond that, libraries such as ArrayFire allow developers to use natural-looking mathematical operations, while performing those operations on the GPU instead of the CPU. This is probably the easiest way to utilize the power of the GPU from code. Julia’s type and function abstractions make this possible with, once again, very little performance overhead.

Julia has a layered code generation and compilation infrastructure that leverages LLVM. (Incidentally, it also provides some amazing introspection facilities into this process.) Based on this, Julia has recently developed the ability to directly compile code onto the GPU. This is an unparalleled feature among high-level programming languages.

While an x86 CPU paired with a GPU is currently the most popular hardware setup for deep learning applications, there are other hardware platforms with very interesting performance characteristics. Among them, Julia now fully supports the Power platform, as well as the Intel KNL architecture.

Libraries

The Julia ecosystem has, over the last few years, matured sufficiently to materialize these benefits in many domains of numerical computing. Thus, there is a rich set of libraries for ML available in Julia right now. Deep learning frameworks with natural bindings to Julia include MXNet and TensorFlow. Those wanting to dive into the internals can use the pure Julia libraries Mocha and Knet. In addition, there are libraries for random forests, SVMs, and Bayesian learning.

Using Julia with all these libraries is now easier than ever. Thanks to the Data Science Virtual Machine (DSVM), running Julia on Azure is just a click away. The DSVM includes a full distribution of JuliaPro, the professional Julia development environment from Julia Computing Inc, along with many popular statistical and ML packages. It also includes the IJulia system, which brings Jupyter notebooks to the Julia language. Together, these create the perfect environment for data science, both for exploration and production.

Viral Shah
@Viral_B_Shah

Demo Tuesday // Storage Replica: Prevent data loss during a disaster


Welcome to our new Demo Tuesday series. Each week we will be highlighting a new product feature from the Hybrid Cloud Platform.

What would happen if a disaster struck one of your datacenters? How much data would you lose? Hours, days, or weeks? And how many millions would that cost your organization? These are extremely important questions to have answers to, and if you don't have a strong disaster recovery (DR) plan in place, those answers are probably troubling.

Datacenters, unfortunately, fail all the time for a number of reasons: natural disaster, civil unrest, or the most common factor, simple human error. What will you do when one of yours goes down?

Zero data loss with Storage Replica in Windows Server 2016

Storage Replica is a new feature in Windows Server 2016 that offers automatic DR and preparedness capabilities. It synchronously replicates data across multiple datacenters, automatically and in real time, creating a failover target where all nodes stay completely in sync and ensuring zero data loss during a failure. It will help you protect your data and keep your company in business.

In addition, Storage Replica provides other safety and performance benefits, including:

  • Quicker data access by local proximity
  • Better load distribution and use of compute resources
  • Ability to decommission current expensive file replication systems
  • Asynchronous replication for higher-latency networks
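For a sense of how this is wired up, the server-to-server configuration boils down to a single PowerShell cmdlet from the Storage Replica module. The sketch below uses placeholder computer names, replication group names, and drive letters; run Test-SRTopology first to validate that your volumes and network can sustain synchronous replication.

# Hedged sketch: create a synchronous server-to-server Storage Replica partnership.
# Computer names, replication group names, and drive letters are placeholders.
New-SRPartnership -SourceComputerName 'SR-SRV01' -SourceRGName 'RG01' `
    -SourceVolumeName 'D:' -SourceLogVolumeName 'E:' `
    -DestinationComputerName 'SR-SRV02' -DestinationRGName 'RG02' `
    -DestinationVolumeName 'D:' -DestinationLogVolumeName 'E:' `
    -ReplicationMode Synchronous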

Try it out for yourself in our virtual lab and learn more about all the benefits of software-defined storage.

What’s new in Office 365 administration—January update


In January, we continued to improve the Office 365 administration experience. Updates include easier access to setup settings and guidance, a new Yammer device usage report, and the rollout of the new OneDrive admin center to all customers.

Here’s a summary of the January updates:

Access all setup settings in one place

To ensure that you’re getting the maximum value out of Office 365, a new Setup section on the navigation menu now provides access to all setup-related settings in one place and gives you improved setup guidance.

Products page—This new page allows admins to quickly understand how many licenses they have available and which software products are included in each of their subscriptions. Setup guides provide detailed step-by-step guidance for various services—including OneDrive, Office Pro Plus, SharePoint and Hybrid for Exchange Online—and help your organization get up and running quickly.

Domains page—Admins can update or modify their domain settings at any point and access frequent tasks such as adding a domain, deleting a domain or checking the health of a domain.

Data Migration page—Provides admins with automation tools and step-by-step guidance to help you migrate your data from on-premises or other cloud services to Office 365, including migrating from your local Exchange server or Google.

Office admin January 1

Get more insights on Yammer usage

To provide you with a complete picture of how your organization is using Office 365, we continue to include more reports in the Office 365 admin center. In January, we began to roll out the Yammer device usage report.

The report shows a breakdown of activity across web and mobile clients, and lets you see the device types commonly used by people in your organization.

To access the report, click Usage from the admin center home page and then select Yammer device usage from the dropdown list at the top of the page.

Office admin January 2

New OneDrive for Business admin center reaching general availability

To help IT admins better manage sync and sharing capabilities, we are rolling out a new OneDrive admin center. The admin center controls how and from where a user will access the files in OneDrive—across device, location and app. Additional features include sync, storage and device access settings.

Office admin January 3

You can access the OneDrive admin center by clicking Admin Centers and then OneDrive in the left navigation. Learn more in the announcement blog post.

Additional filter capabilities

We’ve added a new filter to the Active users page that allows admins to easily view and manage guest users.

Office admin January 4

More to come

Over the coming months, we will add more reports focusing on which clients are used to access SharePoint and OneDrive for Business. We are also working on making the Office 365 adoption content pack in Power BI available to all customers and providing new public APIs that will enable you to programmatically access the usage data and integrate it into custom applications, like a company reporting portal.

Let us know what you think!

Try the new features and provide feedback using the feedback link in the lower right corner of the admin center. And don’t be surprised if we respond to your feedback. We truly read every piece of feedback that we receive to make sure the Office 365 administration experience meets your needs.

—Anne Michels, @Anne_Michels, senior product marketing manager for the Office 365 Marketing team

Please note: the features mentioned in this blog post have started to roll out. If they are not available yet in your region, for your subscription or for your organization, please check back in a few weeks!

The post What’s new in Office 365 administration—January update appeared first on Office Blogs.

Free your teams to innovate by improving collaboration


When teams need to spend time looking for resources, wait for other people to get back to them, or try to knit together information from different software or devices, projects end up running late. Team members rush to meet deadlines, making it almost impossible to do their best creative work.

But isn’t working in a team supposed to make you more efficient—not less? That’s the idea, but you’ve probably noticed it doesn’t always work out that way. Have you ever found yourself saying, “Who has the latest version of that market analysis? I know it’s been updated, so how come the only one I can find is months old?” or “It’s taking too long to get feedback on the budget from the team; we’re running out of time!” We heard you and created Microsoft Teams—the chat-based workspace in Office 365—so you can get all the creative benefits of teamwork and free your teams from these productivity sinkholes.

Find what you’re looking for—instantly

One of the biggest time-wasters for teams is looking for content, tools, contacts and conversation threads. Imagine how much more effective everyone would be if they had instant access to everything they need—right in Office 365. Microsoft Teams uses powerful, integrated search capabilities and built-in access to SharePoint, OneNote and Planner, so team members can find what they’re looking for—instantly. Because every document shared in Microsoft Teams is saved to the cloud, team members work from the latest version—no searching.

Get that feedback

Team members often get stuck in a holding pattern because they’re waiting for the feedback and sign-off to drive a project forward. They try to set up conference calls, but to-and-fro scheduling burns up time. Even when they finally do get on a call, make edits to project documents and send out the revised versions, they’re often stuck waiting again for sign-off. All that changes when they can quickly start a team, private chat or online meeting with decision-makers and collaborate on shared files to secure approval right away. Integrated notifications plus side-by-side chat while viewing a document enables on-the-spot editing and finalizing of materials.

5 faces of today’s employees

There’s more you can do to bring diverse teams closer. If you have employees who work from remote locations, use different devices or simply have different collaboration preferences—you still need to figure out the right tools. Ready to learn how to support them all?

Get the free eBook

Bring relevant information into Microsoft Teams

Team members can tailor workspaces with the specialized content and apps they need every day. For example, using Microsoft Teams, they can add tabs like a Word document or Power BI dashboard to provide quick access or take quick action with bots. Add apps like Jira or Trello to bring relevant information into your hub for teamwork.

You can do all this with built-in security and compliance features, including data encryption and multi-factor authentication for enhanced identity protection.

And now for the best part about teamwork

Microsoft Teams lets you fully embrace the upside of teamwork—frictionless sharing that makes good ideas exceptional. Seize the potential for dramatic innovation by supporting a collaborative culture, and your enterprise can:

  • Widen the ideation pipeline.
  • Accelerate time to market.
  • Deliver higher-quality products and new customer experiences.

The post Free your teams to innovate by improving collaboration appeared first on Office Blogs.

Liberty Mutual launches an IT transformation


Today’s post was written by James McGlennon, executive vice president and CIO at Liberty Mutual Insurance Group.

The insurance business is all about protecting the assets that people value most. We continuously seek out the latest tools and technology to help us better serve our customers through innovative products and services.

With more than 50,000 employees and 900 global locations, Liberty Mutual experiences the unique industry challenges of the insurance business on a large scale. Internet startups are well placed to take advantage of mobile, cloud-based technologies. Competitors are looking for ways to disrupt our value chain and commandeer aspects of our customer relationships. This is why it’s key that we stay one step ahead of the threats and opportunities all around us. We are currently transforming our own IT organization to better position us for future success and meet the evolving expectations of our customers and employees.

Today, we are harnessing the scale and expertise of our global workforce. Innovation and breakthrough ideas can come from anywhere in our company, and as we transform, it’s essential that we enable our employees to improve and collaborate, regardless of location. Teams sharing insights across India, Turkey, Portugal or Spain, and the United States, for example, can power the development of product breakthroughs that we import and export around the world.

Everything we do as part of the IT organization is based on our ability to analyze data in all its forms. As our organization continues to evolve, several products and platforms position us well to transform our business. Office 365, for example, helps us respond faster to market changes. We are moving to a more agile development process, where smaller, multi-disciplinary employee groups will collaborate on new products more efficiently, bridging the gap between IT and the business. With chat-based, real-time collaboration, we’ll increase the proportion of people across the company who work together to build innovative new products in a continuous development model so we can grow our business in the digital age.

To do this, we are partnering with companies like Microsoft to help us put important business analytics tools in the hands of everyone, not just analysts and power users, striking the right balance between being easy to use and producing sophisticated insights to drive product innovation. Business users in Claims, Legal, Reinsurance and HR can quickly glean information from dashboards and KPIs using products such as Microsoft Power BI, which enables us to open millions of rows of data for analysis by a much larger number of people. The tools provide visual interpretations and immediate insights valuable to our daily operations and formulation of new strategies.

Millennials entering the business through our internship programs expect a mobile, connected workplace, which is why we are focused on online document storage to access and share documents without the pitfalls and versioning issues of email, all while keeping security a top priority. The security of sharing documents within the Office 365 platform means we can move collaboration and productivity to the forefront of our employees’ experience, without worrying about inappropriate access to files. Azure Active Directory simplifies how we manage user identity and access policies while we enjoy the benefits of having all our documents available on a smartphone, mobile device or laptop.

As we continue to transform our IT organization, we are focused on attracting top technology talent and leveraging emerging technologies to develop innovative solutions. These products and solutions are critical to our future success and will enable us to meet the changing expectations of our customers.

—James McGlennon

The post Liberty Mutual launches an IT transformation appeared first on Office Blogs.


The week in .NET – On .NET on public speaking, ndepend, CrazyCore, The Perils of Man


Previous posts:

.NET Core, .NET Native, NuGet, VS 2017 RC updates

This week, we announced updates to .NET Core, .NET Native, NuGet, and VS 2017 RC. Check out the posts for all the details.

On .NET

We had two shows last week. Our first show was a special on public speaking hosted by Kendra Havens, with Scott Hanselman, Kasey Uhlenhuth, Maria Naggaga Nakanwagi, and Donovan Brown.

On our second show, Patrick Smacchia showed the latest version of ndepend, a very advanced code quality tool.

Tool of the week: CrazyCore

Eric Mellinoe is a developer on the .NET team, and when he’s not working on .NET Core, he builds game engines like it’s nothing at all. His engine, CrazyCore, is still very early, but is very impressive nonetheless. It’s open-source, cross-platform, and it runs on .NET Core. One of the most interesting aspects of this work is how Eric went the extra mile to build wrappers for the libraries he needed that didn’t yet exist for .NET Core.

CrazyCore

You can read more about CrazyCore in Eric’s blog post: Building a 3D Game Engine with .NET Core

Game of the week: The Perils of Man

The Perils of Man is a point-and-click adventure game. It tosses you straight into a world of mystery as it recounts the disappearance of Max Eberling, a rogue scientist who vanished 10 years prior. Even more mysterious is that his father also vanished years before him. You take the role of Ana Eberling, a teenage girl on a mission to find out what happened to Max, who happens to be her father. Journey through time and explore the notions of cause and effect while solving a century-old mystery.

The Perils of Man

The Perils of Man was a joint project by IF Games and Vertigo Games using C# and Unity. It is available on Steam and iTunes.

User group meeting of the week: Docker for .NET in Tulsa

The Tulsa Developers Net group holds a meeting tonight, January 31, at 6:00 PM in Tulsa, OK, on Docker for .NET.

.NET

ASP.NET

F#

New Language Suggestions:

Check out F# Weekly for more great content from the F# community.

Azure

Xamarin

UWP

Games

And this is it for this week!

Contribute to the week in .NET

As always, this weekly post couldn’t exist without community contributions, and I’d like to thank all those who sent links and tips. The F# section is provided by Phillip Carter, the gaming section by Stacey Haffner, the Xamarin section by Dan Rigby, and the UWP section by Michael Crump.

You can participate too. Did you write a great blog post, or just read one? Do you want everyone to know about an amazing new contribution or a useful library? Did you make or play a great game built on .NET?
We’d love to hear from you, and feature your contributions on future posts:

This week’s post (and future posts) also contains news I first read on The ASP.NET Community Standup, on Weekly Xamarin, on F# weekly, and on Chris Alcock’s The Morning Brew.

New enhanced access controls in Azure AD: Tenant Restrictions is now Generally Available!


Howdy folks,

Today I’m happy to announce that our new Tenant Restrictions capability is Generally Available! We built Tenant Restrictions with extensive input from our customers in the finance, healthcare, and pharmaceutical industries, which have relatively strict information access and compliance requirements.

Tenant Restrictions gives customers with these kinds of requirements enhanced control over access to SaaS cloud applications. Admins can now restrict employees on the corporate network to using only Azure AD identities in tenants they have approved.

To give you the details about this important new capability, I’ve asked Yossi Banai, a PM in our Identity Security and Protection team, to write a blog about this feature. You’ll find it below.

I hope those of you in highly regulated industries will find this feature useful! And as always, we would love to receive any feedback or suggestions you have!

Best Regards,

Alex Simons (Twitter: @Alex_A_Simons)

Director of Program Management

Microsoft Identity Division

________

Hello,

I’m Yossi Banai, a Program Manager on the Azure Active Directory team. In today’s blog post I’ll cover Tenant Restrictions, a new feature we released today for general availability.

Overview

Companies that want to move their employees to SaaS apps like Office 365 are sometimes worried about opening their networks to information leaks. If users can access Office 365 with their corporate identity, they can also access these same services with other identities.

Before cloud services, network admins could simply block access to unwanted apps or websites by blocking their URL or IP address. This is no longer an option with SaaS apps, where a single endpoint (like outlook.office.com) is used by all consumers of the SaaS app.

Our solution for this common IT challenge is Tenant Restrictions. This new feature enables organizations to control access based on the Azure AD tenant the applications use for single sign-on. For example, you can use Tenant Restrictions to allow access to your organization’s Office 365 applications, while preventing access to other organizations’ instances of these same applications.

How it works

An on-premises proxy server is configured to intercept authentication traffic going to Azure AD. The Proxy inserts a new header called “Restrict-Access-To-Tenants” that lists the tenants that users on the network are permitted to access. Azure AD reads the permitted tenant list from the header, and only issues security tokens if the user or resource is in a tenant on that list.
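For illustration, the header is nothing more than a comma-separated list of permitted tenants. The values below are placeholders, and in production the header is injected by the proxy's own configuration on outbound authentication traffic, not by client-side code.

# Illustrative only: the header the proxy adds to traffic bound for the Azure AD sign-in endpoints.
# Tenant names are placeholders; configure this in your proxy product, not on clients.
$tenantRestrictionHeader = @{ 'Restrict-Access-To-Tenants' = 'contoso.onmicrosoft.com,contoso.com' }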

The following diagram illustrates the high-level traffic flow.


End-user Experience

If a user on the Contoso network tries to sign in to the outlook.office.com instance of an unpermitted tenant, he or she will see this message on the web page:

Admin Experience

While configuration of Tenant Restrictions is done on the corporate proxy infrastructure, admins can access the Tenant Restrictions reports in the Azure Portal directly from the Overview page of Azure Active Directory, under ‘Other capabilities’.

Using the report, the admin for the tenant specified as the “Restricted-Access-Context” can see all sign-ins blocked because of the Tenant Restrictions policy, including the identity used and the target Tenant ID:

Learn more

When you’re ready to get started, see Use Tenant Restrictions to manage access to SaaS cloud applications for more information.

We appreciate your feedback

As always, we want to hear your feedback about this new feature. If you have any feedback, questions, or issues to report, please leave a comment at the bottom of this post or tweet with the hashtag #AzureAD.

Best regards,

Yossi

Announcing General Availability of Power BI Real-Time Streaming Datasets

Today, I am happy to announce the general availability of real-time streaming datasets in Power BI. This feature set allows users to easily stream data into Power BI via the REST APIs, Azure Stream Analytics, and PubNub, and to see that data instantly light up on their dashboards.
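As a hedged sketch of what that looks like over the REST API, pushing rows is a single POST to the dataset's push URL. The endpoint below is a placeholder (Power BI generates the real push URL when you create the streaming dataset), and the row schema is whatever you defined for the dataset.

# Minimal sketch: push two rows to a Power BI streaming dataset via its push URL.
# The endpoint is a placeholder; copy the real push URL from the dataset's API info page.
$pushUrl = 'https://api.powerbi.com/beta/<workspace-id>/datasets/<dataset-id>/rows?key=<key>'
$rows = @(
    @{ timestamp = (Get-Date).ToUniversalTime().ToString('o'); temperature = 21.7 },
    @{ timestamp = (Get-Date).ToUniversalTime().ToString('o'); temperature = 22.1 }
)
# The push URL accepts a JSON array of rows; adjust if your endpoint expects a {"rows": [...]} wrapper.
Invoke-RestMethod -Method Post -Uri $pushUrl -ContentType 'application/json' -Body (ConvertTo-Json -InputObject $rows)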

Analyzing Exchange Transaction Log Generation Statistics – Revisited


Almost 4 years ago, I wrote a script called GetTransactionLogStats.ps1, and originally blogged about it here. At the original time of writing, the primary purpose of the script was to collect transaction log generation data, and use that to determine the percentage of transaction logs generated per hour in an environment. This could in turn be used to populate the related fields in the Exchange Server Role Requirements Calculator.

Since originally writing this script, I’ve made some significant updates to how the script functions, and also added a handful of new features along the way, which were significant enough that I wanted to have a new post about it. Of note, the script now:

  • Uses PowerShell Remoting to collect data from many remote servers in parallel. This significantly speeds up collection.
  • Generates a database heat map that compares the number of logs generated for each database to the number of logs generated for all databases. This can be used to identify databases that may be overloaded or underloaded.
  • Uses only log generation information from active database copies to determine log generation statistics.
  • Accepts the target servers via a script parameter instead of via a text file.

Requirements

The script has the following requirements:

  • Target Exchange Servers must be running Exchange 2010, 2013, or 2016
  • PowerShell Remoting must be enabled on the target Exchange Servers, and configured to allow connections from the machine where the script is being executed.

Parameters

The script has the following parameters:

  • Gather: Switch specifying we want to capture current log generations. If this switch is omitted, the -Analyze switch must be used.
  • Analyze: Switch specifying we want to analyze already captured data. If this switch is omitted, the -Gather switch must be used.
  • ResetStats: Switch indicating that the output file, LogStats.csv, should be cleared and reset. Only works if combined with -Gather.
  • WorkingDirectory: The directory containing LogStats.csv. If omitted, the working directory will be the current working directory of PowerShell (not necessarily the directory the script is in).
  • LogDirectoryOut: The directory to send the output log files from running in Analyze mode to. If omitted, logs will be sent to WorkingDirectory.

Usage

Runs the script in Gather mode, taking a single snapshot of the current log generation of all databases on servers server1 and server2:

PS C:\> .\GetTransactionLogStats.ps1 -Gather -TargetServers "server1","server2"

Runs the script in Gather mode, specifies an alternate directory to output LogStats.csv to, and resets the stats in LogStats.csv if it exists:

PS C:\> .\GetTransactionLogStats.ps1 -Gather -TargetServers "server1","server2" -WorkingDirectory "C:\GetTransactionLogStats" -ResetStats

Runs the script in Analyze mode:

PS C:\> .\GetTransactionLogStats.ps1 -Analyze

Runs the script in Analyze mode, and specifies an alternate directory to send the output files to:

PS C:\> .\GetTransactionLogStats.ps1 -Analyze -LogDirectoryOut "C:\GetTransactionLogStats\LogsOut"

Output File After Running in Gather Mode

LogStats.csv

When run in Gather mode, the log generation snapshots that are taken are sent to LogStats.csv. The following shows what this file looks like:

[Sample LogStats.csv output]

Output Files After Running in Analyze Mode

LogGenByHour.csv

This is the primary output file for the script, and is what is used to populate the hourly generation rates in the Exchange Server Role Requirements Calculator. It consists of the following columns:

  • Hour: The hour that log stats are being gathered for. Can be between 0 – 23.
  • LogsGenerated: The total number of logs created during that hour for all days present in LogStats.csv.
  • HourToDailyLogGenRatio: The fraction of the day's total log generation that this hour accounts for. The values in this column across all 24 hours should sum to 1, and they can be copied directly into the Exchange Server Role Requirements Calculator.
  • NumberOfHourlySamples: The number of hourly samples that were used to calculate each hour value.
  • AvgLogGenPerHour: The average number of logs generated per database per hour.

[Sample LogGenByHour.csv output]
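As a quick sanity check on the HourToDailyLogGenRatio column, the 24 hourly values should add up to approximately 1. A one-line sketch, assuming LogGenByHour.csv is in the current directory:

# The HourToDailyLogGenRatio values across all 24 rows should sum to (approximately) 1.
(Import-Csv .\LogGenByHour.csv | Measure-Object -Property HourToDailyLogGenRatio -Sum).Sum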

LogGenByDB.csv

This file contains a heat map showing how many logs were generated for each database during the duration of the collection. This information can be used to figure out if databases, servers, or entire Database Availability Groups, are over or underutilized compared to their peers. It consists of the following columns:

  • DatabaseName: The name of the database being measured.
  • LogsGenerated: The total number of logs created by primary copies of that database during the duration of the collection.
  • LogGenToTotalRatio: The ratio of logs generated for this database compared to logs generated for all databases.
  • NumberOfHours: The number of hourly samples that were taken for this database.
  • LogsGeneratedPerHour: The average number of logs generated per hour for this database.

[Sample LogGenByDB.csv output]

LogGenByDBByHour.csv

This file is similar to LogGenByDB.csv, but shows the log generation rates per hour for each database. It consists of the following columns:

  • DatabaseName: The name of the database being measured.
  • Hour: The hour that log stats are being gathered for. Can be between 0 – 23.
  • LogsGenerated: The total number of logs created by primary copies of that database during that hour.
  • HourToDailyLogGenRatioForDB: The ratio of logs generated for this hour and this database compared to the total logs generated for this database.

[Sample LogGenByDBByHour.csv output]

Running As a Scheduled Task

Since the script is designed to be run on an hourly basis, the easiest way to accomplish that is to run the script via a Scheduled Task. The way I like to do that is to create a batch file which calls Powershell.exe and launches the script, and then create a Scheduled Task which runs the batch file. The following is an example of the command that should go in the batch file:

powershell.exe -noninteractive -noprofile -command "& {C:\LogStats\GetTransactionLogStats.ps1 -Gather -TargetServers 'server1','server2' -WorkingDirectory C:\LogStats}"

In this example, the script is located in C:\LogStats. Note that I specified a WorkingDirectory of C:\LogStats so that if the Scheduled Task runs in an alternate location (by default C:\Windows\System32), the script knows where to find and where to write LogStats.csv. Also note that the command does not load any Exchange snapin, as the script doesn’t use any Exchange specific commands.

Mike Hendrickson

PowerShell Open Source Community Dashboard

Since going cross-platform and open source on GitHub, I’ve wanted to know how we are doing as a community and who the top contributors we should recognize are.
The available GitHub graphs are not sufficient as they focus on commits, and there are many other ways for the community to contribute to PowerShell.
Certainly receiving Pull Requests (PRs) has a direct impact on the code base, but opening issues, commenting on issues, and commenting on PRs (aka code reviews) are also immensely appreciated and valuable to help improve PowerShell.

In addition, PowerShell is not a single repository, but several repositories that help to make PowerShell successful:
  • PowerShell-RFC where we do design work for new proposed features
  • PowerShell-Docs which contains all the PowerShell help and documentation
  • platyPS: tooling for our help documentation that enables authoring and editing of docs in Markdown instead of XML
  • Microsoft.PowerShell.Archive: a built-in module for creating and expanding ZIP archives (in the future we plan to move other built-in modules to their own repositories like this)
  • ODataUtils: a module to generate PowerShell cmdlets from an OData REST endpoint
  • JEA where we store samples and resources associated with Just Enough Administration (JEA)
  • PSL-OMI-Provider: an optional component for Linux/Mac to enable PowerShell remoting over WS-Man protocol (both client and server)
  • PSReadline: the default interactive command line experience for PowerShell

Although most of the contributions happen in the PowerShell/PowerShell repo, I want to ensure we recognize contributions in these other repositories (and new ones in the future).

To get a more holistic view, I decided to create a dashboard in PowerBI.
A follow-up blog post will go into some of the technical details and challenges to having an Azure Function execute a PowerShell script calling GitHub REST APIs and storing the result in an Azure StorageTable queried by PowerBI.
The PowerShell scripts I used for this dashboard will be published to the PowerShell Gallery.
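As a hedged sketch of the collection side (not the dashboard's actual pipeline), pulling recent issue and PR activity for one repository is a single REST call. The aggregation below is purely illustrative, and unauthenticated requests are subject to GitHub's rate limits.

# Illustrative sketch: list issues/PRs updated in the last 30 days for PowerShell/PowerShell
# and count them per author. Add an authorization header for higher rate limits.
$headers = @{ 'User-Agent' = 'PSGitHubDashboard'; 'Accept' = 'application/vnd.github.v3+json' }
$since   = (Get-Date).AddDays(-30).ToUniversalTime().ToString('o')
$issues  = Invoke-RestMethod -Headers $headers -Uri "https://api.github.com/repos/PowerShell/PowerShell/issues?state=all&since=$since&per_page=100"
$issues | Group-Object { $_.user.login } | Sort-Object Count -Descending | Select-Object Name, Count -First 10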

You can access the dashboard at http://aka.ms/PSGitHubBI.

The first page, Top Community Contributors, recognizes individuals outside of Microsoft for their contributions for each of the 4 types of contributions described previously.
Two things to note:

  • the rankings are based on a moving 30 day window
  • ties for the rank are due to individuals having exactly the same count for a contribution type

The second page, Top Microsoft Contributors, is the same as the first table but for Microsoft employees who are members of the PowerShell organization on GitHub.

The third page, Contributions over Time, has two graphs:

  • The first graph compares community contributions and Microsoft contributions.
    It really shows that going open source was the right decision as the community has provided lots of contributions and helped to move PowerShell forward more than what the PowerShell Team could have done alone!
  • The second graph shows a comparison over time of the different types of contributions, but is not separated out between the community and Microsoft.

The last page, Downloads, shows a trend of the cumulative downloads of our official releases with a view comparing the different operating systems and the different release versions.
Eventually, I would like to replace the download numbers with usage numbers based on Census Telemetry, which is a much more accurate representation of growth of PowerShell adoption.

I intend to iterate and improve upon this dashboard to make it useful not only to the PowerShell Team, but also to the PowerShell community.
I plan to provide similar dashboards for some of our other projects such as DSC resources, ScriptAnalyzer, Editor Services, OpenSSH on Windows, and others.

Please leave any suggestions or feedback as comments to this blog post. If you find this dashboard useful, or you believe we can improve upon it in some way, please let us know!

Steve Lee
Principal Software Engineer Manager
PowerShell Core

Live Migration via Constrained Delegation with Kerberos in Windows Server 2016


Introduction

Many Hyper-V customers have run into new challenges when trying to use constrained delegation with Kerberos to Live Migrate VMs in Windows Server 2016.  When attempting to migrate, they would see errors with messages like “no credentials are available in the security package,” or “the Virtual Machine Management Service failed to authenticate the connection for a Virtual Machine migration at the source host: no suitable credentials available.”  After investigating, we have determined the root cause of the issue and have updated guidance for how to configure constrained delegation.

Fixing This Issue

Resolving this issue is a simple configuration change in Active Directory.  In the following dialog, select “use any authentication protocol” instead of “use Kerberos only.”

[Screenshot: constrained delegation properties dialog]
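If you prefer to script the change rather than click through the dialog, a minimal sketch using the ActiveDirectory module is below. HV01 is a placeholder host name, the existing delegation entries on the computer account are assumed to already be in place, and setting TrustedToAuthForDelegation is what corresponds to "use any authentication protocol."

# Hedged sketch: enable protocol transition ("use any authentication protocol") on a Hyper-V
# host's computer account. HV01 is a placeholder; the delegation targets must already be configured.
Import-Module ActiveDirectory
Get-ADComputer -Identity 'HV01' | Set-ADAccountControl -TrustedToAuthForDelegation $true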

Root Cause

Warning: the next two sections go a bit deep into the internal workings of Hyper-V.

The root cause of this issue is an under-the-hood change in Hyper-V remoting.  Between Windows Server 2012 R2 and Windows Server 2016, we shifted from using the Hyper-V WMI Provider *v1* over *DCOM* to the Hyper-V WMI Provider *v2* over *WinRM*.  This is a good thing: it unifies Hyper-V remoting with other Windows remoting tools (e.g. PowerShell Remoting).  This change matters for constrained delegation because:

  1. WinRM runs as NETWORK SERVICE, while the Virtual Machine Management Service (VMMS) runs as SYSTEM.
  2. The way WinRM does inbound authentication stores the nice, forwardable Kerberos ticket in a location that is unavailable to NETWORK SERVICE.

The net result is that WinRM cannot access the forwardable Kerberos ticket, and Live Migration fails on Windows Server 2016.  After exploring possible solutions, the best (and fastest) option is to enable "protocol transition" by changing the constrained delegation configuration as shown above.

How does this impact security?

You may think this approach is less secure, but in practice, the impact is debatable.

When Kerberos Constrained Delegation (KCD) is configured to “use Kerberos only,” the system performing delegation must possess a Kerberos service ticket from the delegated user as evidence that it is acting on behalf of that user.  By switching KCD to “use any authentication protocol”, that requirement is relaxed such that a service ticket acquired via Kerberos S4U logon is acceptable.  This means that the delegating service is able to delegate an account without direct involvement of the account owner.  While enabling the use of any protocol — often referred to as “protocol transition” — is nominally less secure for this reason, the difference is marginal due to the fact that the disabling of protocol transition provides no security promise.  Single-sign-on authentication between systems sharing a domain network is simply too ubiquitous to treat an inbound service ticket as proof of anything.  With or without protocol transition, the only secure way to limit the accounts that the service is permitted to delegate is to mark those accounts with the “account is sensitive and cannot be delegated” bit.

Documentation

We’re working on modifying our documentation to reflect this change.

John Slack
Hyper-V Team PM

Released: Microsoft Kerberos Configuration Manager for SQL Server v3.1


We are pleased to announce the latest generally available (GA) release of Microsoft Kerberos Configuration Manager for SQL Server.

Get it here: Download Microsoft Kerberos Configuration Manager for SQL Server

Kerberos authentication provides a highly secure method to authenticate client and server entities (security principals) on a network. To use Kerberos authentication with SQL Server, a Service Principal Name (SPN) must be registered with Active Directory, which plays the role of the Key Distribution Center in a Windows domain. In addition, many customers also enable delegation for multi-tier applications using SQL Server. In such a setup, it may be difficult to troubleshoot the connectivity problems with SQL Server when Kerberos authentication fails.
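For context, the SPN checks the tool automates can also be done by hand with setspn.exe, as sketched below. The account and host names are placeholders; the tool remains the safer route because it also validates delegation settings and suggests fixes.

# Illustrative only: list the SPNs registered to a SQL Server service account, then register a missing one.
# CONTOSO\sqlsvc and sqlhost.contoso.com are placeholders.
setspn -L CONTOSO\sqlsvc
setspn -S MSSQLSvc/sqlhost.contoso.com:1433 CONTOSO\sqlsvc    # -S checks for duplicates before adding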

The Kerberos Configuration Manager for SQL Server is a diagnostic tool that helps troubleshoot Kerberos related connectivity issues with SQL Server, SQL Server Reporting Services, and SQL Server Analysis Services. It can perform the following functions:

  • Gather information on OS and Microsoft SQL Server instances installed on a server.
  • Report on all SPN and delegation configurations on the server.
  • Identify potential problems in SPNs and delegations.
  • Fix potential SPN problems.

This release (v 3.1) adds support for SQL Server 2016.


Removing Self-Signed RDP Certificates


Hello all. Jacob Lavender here for the Ask PFE Platforms team to talk a little about a common scenario that I'm often presented with: how to rid Windows machines of the self-signed Remote Desktop certificate that is created on Windows clients and servers.

Prior to diving in, let me just state that we will not be discussing in great depth why Windows machines have a self-signed certificate within this article. At a basic level, it is due to the necessity of having a certificate to support Network Level Authentication (NLA), which is an advanced security feature that provides pre-authentication to protect machines from Denial-of-Service (DoS) attacks via the RDP protocol.

Kristin Griffin has published a great article that discusses NLA in greater detail:

https://technet.microsoft.com/en-us/library/hh750380.aspx?f=255&MSPPError=-2147217396

At the end of the day, we want NLA, and many organizations' cyber security guidelines mandate it. However, those same guidelines often mandate that self-signed certificates are not allowed within the Remote Desktop certificate store.

So, what are we going to cover:

Topic #1: Why do my Windows machines continuously regenerate this certificate despite having been removed?

Topic #2: How can I remove the self-signed certificate from machines across my enterprise?

Topic #3: How can I prevent the certificate from being regenerated upon remote desktop connection or reboot?

Topic #4: How can I automate the permissions on the registry key?

Topic #5: How can I properly deploy certificates to my Windows machines?

Tested on:

  • Windows Server 2016
  • Windows Server 2012 R2
  • Windows Server 2008 R2 (Requires PowerShell 3.0)
  • Windows 10
  • Windows 8.1
  • Windows 7 (Requires PowerShell 3.0)

PowerShell update for Windows Server 2008 R2 and Windows 7:

https://blogs.technet.microsoft.com/heyscriptingguy/2013/06/02/weekend-scripter-install-powershell-3-0-on-windows-7/

Make sure to read Topic #5 prior to taking any action internally to fully understand the entire scope of our discussion within this post.

And with that, now let’s dive in.

Topic #1: Why do my Windows machines continuously regenerate this certificate despite having been removed?

Windows machines automatically generate a self-signed certificate for use with the Remote Desktop protocol. This is by design as it is intended to increase the overall security posture of all machines within the enterprise which have Remote Desktop enabled.

Simply disabling Remote Desktop on a machine will not prevent the machine from regenerating the self-signed certificate.

I have had numerous clients bring this issue forward due to many security guidelines and scans flagging this as a finding. However, the machines are in fact more secure with the self-signed certificate than without due to the intended purpose. Most security guidelines advise to avoid the use of a self-signed certificate generated by each individual machine as other machines on the network have no means to trust this certificate and validate that the machine is in fact that machine.

However, remember that we are discussing the use of the self-signed certificate within the context of NLA, not within Public Key Infrastructure (PKI). Fortunately, we’ll be addressing that component of the complete solution within Topic #5.

Topic #2: How can I remove the self-signed certificate from my machines across my enterprise?

Fortunately, this is a simple task. We have PowerShell to answer our needs on this front. A sample script can be downloaded here to remove self-signed certificates from the Remote Desktop certificates store:

https://gallery.technet.microsoft.com/Remove-Self-Signed-RDP-00413912?redir=0

PowerShell 3.0 or higher is required.

Note: Do not run this script on internal Certificate Authorities which have issued their own Remote Desktop Authentication certificate.

This script can be configured as a startup script via GPO. Depending on your domain’s PowerShell Execution policy, you may need to configure the Script Parameters to include -ExecutionPolicy ByPass.

You will find that some clients will immediately generate a new self-signed certificate. In short, configuring this as a startup script alone will likely not eliminate the self-signed certificate from returning.

You will need to determine if you wish to leave the startup script in place or remove it from the GPO once you have completed this.

Other deployment methods, such as System Center Configuration Manager, are also great options and should be considered.

Additionally, now that the certificate has been removed, Remote Desktop connections to these machines will likely fail as NLA will not have a certificate to use. This is where Topic #5 becomes very critical.
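For orientation only, the core operation the linked script performs can be sketched in a few lines. This is not the published script, and it assumes the store is exposed to the certificate provider as 'Remote Desktop'; confirm the exact store name on your build (for example with Get-ChildItem Cert:\LocalMachine) before relying on it.

# Hedged sketch: remove self-signed certificates (issuer equals subject) from the Remote Desktop store.
# Assumes the store appears as 'Remote Desktop' under Cert:\LocalMachine; verify on your systems first.
Get-ChildItem -Path 'Cert:\LocalMachine\Remote Desktop' | Where-Object { $_.Issuer -eq $_.Subject } | Remove-Item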

Topic #3: How can I prevent the certificate from being regenerated on remote desktop connection or reboot?

As we have discussed, Windows by design generates a self-signed certificate. The machine's local System account performs this function. To prevent the certificate from being generated again, we can deny the System account the permission needed to generate the certificate. To accomplish this, we deny the right within the registry:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SystemCertificates\RemoteDesktop\Certificates

The permissions can be edited manually, or via an automated means (see Topic #4).

Prior to modifying the registry, ensure that you have a valid backup of the registry and that you perform the necessary testing within a lab to gain greater insight into how modifying the registry within your image will impact operations.
Modifying the registry can result in system failure so ensure that you have a proper plan to recover should a mistake be made. The settings within this article are non-destructive, however they can impact operations such as providing remote support to users.

To manually edit the permissions on this key, start by right-clicking the key and selecting Permissions:

  • At the permissions window, select the Advanced button, and then select the Add button.
  • At the Permission Entry for Certificates window, select the Select a principal option.
  • At the Select User, Computer, Service Account, or Group window, modify the location to search for the account to the local computer.

  • Having returned to the Select User or Group screen, type system, and then select Check Names. This should resolve the account to the local machine account object. Now select OK.

  • Having returned to the Permission Entry for Certificate, change the ACL Type to Deny.
  • Now, select Show advanced permissions.

  • Now that the advanced permissions are available, select Create Subkey.

Each certificate on the machine is a subkey within the Certificates key in the registry. By denying the System account object the Create Subkey permission, we prevent the System account from creating the certificate.

Now, what is the impact of having configured the Certificates registry key ACL in this manner? We may see a system event (Event ID 1057) in the system’s event log. It will simply be an indication that the System account was unable to create the subkey due to Access Denied:

Topic #4: How can I automate the permissions on the registry key?

To address this, we can utilize a GPO which will set the ACL on the key. Simply add the registry key to an appropriate GPO which is targeted at the machines in the enterprise that you wish to implement this change. As an example, I have made this addition to my Workstation Baseline GPO in my lab as I want this to be configured on all Windows clients:

It’s important to note that this GPO will create an entry on the key’s ACL which will remain even if the GPO is removed. If you need to modify the ACL in the future, you will need to ensure to grant the permission back.

Alternatively, we could use PowerShell to configure this on the key directly. You can locate a sample script at the following location:

https://gallery.technet.microsoft.com/Remove-Self-Signed-RDP-00413912?redir=0

This same sample script can be modified to correct any permissions on the key/sub-keys. Again, have good backups of the registry and perform ample testing prior to utilizing this in a production environment. It’s critical to understand how this will impact your operations.
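For reference, the same deny entry can be added with a short PowerShell sketch using the registry path from Topic #3. It only adds the Deny Create Subkey rule for SYSTEM; run it elevated, test it in a lab first, and keep a registry backup as noted above.

# Hedged sketch: deny the local SYSTEM account the Create Subkey right on the RemoteDesktop
# certificates key so the self-signed certificate cannot be regenerated. Run elevated; test before broad use.
$keyPath  = 'HKLM:\SOFTWARE\Microsoft\SystemCertificates\RemoteDesktop\Certificates'
$denyRule = New-Object -TypeName System.Security.AccessControl.RegistryAccessRule -ArgumentList 'NT AUTHORITY\SYSTEM', 'CreateSubKey', 'Deny'
$acl = Get-Acl -Path $keyPath
$acl.AddAccessRule($denyRule)
Set-Acl -Path $keyPath -AclObject $acl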

Topic #5: How can I properly deploy certificates to my Windows machines?

For this discussion, we’re going to assume that you have an internal CA. Why? With an internal issuing CA we are able to automate the process of deploying certificates for the purpose of Remote Desktop.

Utilizing an external CA would likely:

  • Be cost prohibitive due to the cost of obtaining certificates from public certificate issuers, or
  • Be extremely time intensive, because certificate requests would have to be obtained from all machines and submitted manually.

So, we should begin by configuring the certificate template on the issuing CA. If you do not already have a certificate template published, duplicate the Computer certificate template and update the name of the template on the General tab.

PFE Pro Tip: Ensure that the template name is useful. I always name published templates with the word “published” in the name; example:

“Published RDP Certificate Template”

Certificate Template

While all settings for the template are beyond the scope of this document, here are the critical items:

  • Update the name to be of value within the enterprise (remember to make the naming convention sustainable over time so it grows with your environment).
  • Ensure the validity period is aligned with your enterprise PKI requirements. A discussion with Cyber Security might be required for this to be defined.
  • Ensure that the Extensions for both Client Authentication and Server Authentication are present.
  • Add the following Remote Desktop Authentication Application policies extension:
    • On the Extensions tab, select Application Policies and then select Edit.
    • At the Edit Application Policies Extensions window, select Add.
    • On the Add Application Policy window, select New.
    • At the New Application Policy windows, label the Application Policies Extension Remote Desktop Authentication. The Object identifier (OID) should be 1.3.6.1.4.1.311.54.1.2.

  • Ensure that the appropriate group has been set with Autoenroll = Allow on the Security tab. You can use Domain Computers if the GPO used within this guide will target the entire domain. Alternatively, you can specify a security group and grant the Autoenroll permission to ensure that only the desired machines receive this certificate.
  • On the Subject Name tab, ensure that Build from this Active Directory information is selected, and choose the appropriate Subject name format – typically Common name works fine. Also ensure that the DNS name is included as an alternate subject name.
  • If you require an administrator to approve these certificates, that can be configured on the Issuance Requirements tab.
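Once the template is published, a quick check from the issuing CA (or any machine with the AD CS management tools) confirms the CA is actually offering it. This is a hedged verification sketch; the filter string is only an example, so match it to your own Template Name.

# List the certificate templates this CA will issue and look for the new RDP template.
# 'PublishedRDP' is an example filter - use your own Template Name.
certutil -CATemplates | Select-String -Pattern 'PublishedRDP'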

Default Certificate Template GPO

We now need to deploy the certificate to our clients. This is managed through two different settings:

  • Define the client PKI autoenrollment.
  • Define the default Remote Desktop services template for the client.

Let’s begin by defining the client PKI auto-enrollment policy. This is done via a GPO targeted at the clients within the enterprise that this affects. The following settings are required (a quick way to trigger enrollment on a test client follows the list):

Computer Configuration > Policies > Windows Settings > Security Settings > Public Key Policies

  • Certificate Services Client – Certificate Enrollment Policy = Enabled
  • Certificate Services Client – Auto-Enrollment = Enabled
    • Enable Renew expired certificates, update pending certificates, and remove revoked certificates
    • Enable Update certificates that use certificate templates
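Once this policy applies, clients should enroll at the next policy refresh. As a quick aside, you can nudge a test client immediately with standard built-in tooling rather than waiting:

# Refresh computer Group Policy so the auto-enrollment settings arrive
gpupdate /target:computer /force

# Pulse the certificate auto-enrollment engine so it evaluates templates right away
certutil -pulse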

Now that we have the clients configured for Auto-Enrollment, let’s configure the RDP Default Template. The following settings are required:

Computer Configuration > Policies > Administrative Templates > Windows Components > Remote Desktop Services > Remote Desktop Session Host > Security

  • Server authentication certificate template = Enabled
    • Define the Certificate Template Name. You must use the short name of the template that was published on the CA. This is the Template Name shown on the General tab of the template properties, NOT the Template Display Name.

Validation

After successfully deploying the certificate to the machine, we should now see Event ID 1063 in the System event log.

Finally, you should now be able to RDP to clients with NLA and they should utilize the assigned certificate.
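As an additional, hedged validation step, you can compare the certificate bound to the RDP listener with the machine certificates that carry the Remote Desktop Authentication EKU. This is a quick sketch using built-in tooling, not part of the deployment itself:

# Thumbprint of the certificate currently bound to the RDP-Tcp listener
$ts = Get-WmiObject -Namespace 'root\cimv2\TerminalServices' -Class 'Win32_TSGeneralSetting' -Filter "TerminalName='RDP-Tcp'"
$ts.SSLCertificateSHA1Hash

# Machine certificates carrying the Remote Desktop Authentication EKU (1.3.6.1.4.1.311.54.1.2)
Get-ChildItem -Path Cert:\LocalMachine\My |
    Where-Object { $_.EnhancedKeyUsageList.ObjectId -contains '1.3.6.1.4.1.311.54.1.2' } |
    Select-Object Thumbprint, Subject, NotAfter

If the listener’s thumbprint matches one of the CA-issued certificates, the deployment is working as intended.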

Note: I have seen cases where, if Remote Desktop was previously enabled, it must be disabled and then re-enabled for the change to take effect. Again, perform proper testing in your environment so you have a valid action plan for any variations.

Connect industrial assets with ProSoft, powered by the Azure IoT Gateway SDK


For businesses around the world, connecting existing assets and devices to the cloud is a first step towards realizing the benefits of the industrial Internet of Things. Yet while the opportunity for operational efficiencies and productivity improvements is straightforward, the logistics of connecting legacy industrial devices and systems are often not as easy to tackle. For starters, industrial equipment is often built to last decades, and older devices tend not to be cloud aware. They also lack the capability to perform the encryption necessary to securely traverse the internet, and in many cases they are not even TCP/IP enabled.

We’re working with industry leaders to make connecting legacy devices simpler with the Azure IoT Gateway SDK. ProSoft, a leader at integrating disparate devices into a unified system, is one company that is seamlessly connecting existing industrial devices into IoT solutions. The ProSoft PLX gateway communicates directly with Azure IoT Hub through the Azure IoT Gateway SDK in a highly secure manner that scales to the millions of devices a customer may want to connect.

Using gateways like ProSoft PLX, not only can multiple devices be connected to the cloud in minutes, but they can also become a true IoT solution by leveraging the remote monitoring capabilities of the Azure IoT Suite.  This preconfigured solution allows businesses to easily monitor, analyze, report on, and create alarms based on the data previously un-connectable devices send to the cloud. These devices can also receive updates from the cloud, for example, to change configuration or settings. Working together, ProSoft and Microsoft Azure IoT are helping to lay the groundwork required to unlock the value of data produced in industrial systems.


Figure 1 – Diagram showing how existing business assets can be connected to the Remote Monitoring Preconfigured Solution using a ProSoft PLX running the Azure IoT Gateway SDK.

Connecting existing devices to cloud solutions is the first step towards realizing the potential of the Internet of Your Things. Soon Microsoft and ProSoft will show even more advanced scenarios like edge analytics and responding to device events in real time. For now, you can learn more about what IoT can do for your business at www.internetofyourthings.com and hardware to power your IoT solution at www.prosoft-technology.com.

Connecting Power BI to an Azure Analysis Services server


Last October we released the preview of Azure Analysis Services, which is built on the proven analytics engine in Microsoft SQL Server Analysis Services. With Azure Analysis Services, you can host semantic data models in the cloud. Users in your organization can then connect to your data models using tools like Excel, Power BI, and many others to create reports and perform ad-hoc data analysis.

This blog will focus on everything you need to know to use Power BI Desktop to build reports against an Azure Analysis Services server and deploy that report to PowerBI.com.

Before getting started, you’ll need the latest version of Power BI Desktop and an Azure Analysis Services server with a model deployed that you have permission to read.

Connect and create

1. Open Power BI Desktop

2. Click Get Data.

Power BI Desktop

3. Select Databases/SQL Server Analysis Services, and then click Connect.

SQL Server Analysis Services Database

4. Enter your Azure AS server name (an example of the expected format follows these steps), and click OK.

Connect Live Image

5. On the Navigator screen, select your model, and click OK.


You’ll now see your model displayed in the field list on the side. You can drag and drop the different fields onto your page to build out interactive visuals.
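If you are unsure what to enter for the server name in step 4, you can copy it from the server’s overview blade in the Azure portal. As a hedged illustration only, the value generally takes the following shape; the region and server name below are placeholders:

# Example Azure Analysis Services server name format - region and server name are placeholders
$azureAsServer = 'asazure://westus.asazure.windows.net/myanalysisserver'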


 

Publish to Power BI

Now that you have created your report, you can publish it to Power BI. Once published, you can view it online and share it with others.

1. Save the report locally.

2. Click the Publish button on the Home tab.


If this is your first time publishing, you’ll be asked to sign in to Power BI.


3. Select the destination for your report. This can either be your personal workspace or a group that you are a member of.

4. Once publishing is complete, click the blue link to view the report in Power BI.


The report will open in Power BI and will automatically connect to your Azure Analysis Services server.


When connecting from Power BI to Azure Analysis Services, you are connected as your Azure Active Directory identity, the same identity you used to sign in to Power BI. If you share the report with other users, you must ensure those users have access to your model.

In addition to the report, you’ll also see that a dataset has been published.


You can use this dataset to create new reports against your Azure Analysis Services model directly in Power BI.

 

Learn more about Azure Analysis Services.

Home Appliances, Vending Machines – Even Cruise Ships – Get an Infusion of Cortana Intelligence & Machine Learning


A quick overview of recent customer case studies involving the application of Microsoft’s AI, Big Data & Machine Learning offerings.

Carnival Maritime Predicts Water Consumption on Cruise Ships

The Costa Group’s fleet of 26 cruise ships sails all over the world. The industrial equipment on these ships has thousands of sensors that collect data in real time. As part of their digital transformation, the company’s marine service unit, Carnival Maritime, wanted to explore how it might take advantage of this data to find opportunities for operational improvement. One of the areas they looked at was water consumption onboard their ships. This is a complex problem, as consumption patterns can vary widely: passengers of different nationalities shower for different durations at different temperatures and times of the day, for instance, and there are numerous other variables that make such consumption challenging to predict. Accurately predicting water consumption helps ship captains avoid burning fuel to produce excessive amounts of water at sea unnecessarily. It also mitigates the need to carry all that excess water along the way, which further shaves costs.

Carnival needed a mechanism to predict the right amount of water to produce at the right time, without having to store any excess. Carnival’s partner, Arundo Analytics, a global provider of analytical and predictive solutions, helped them build a microservice on Arundo’s proprietary big-data platform and trained a model to do just that. Using the machine learning models, APIs, and templates in the Microsoft Cortana Intelligence Suite, Arundo analyzed historical data sets along with data such as the speed and position of the ships, the age and nationality of passengers, historical weather data, and more, to better understand exactly what drives water consumption. Their platform runs on Azure and is able to easily connect to and derive value from a variety of data, both from Carnival’s cloud and from its on-premises databases.


Carnival is now able to better predict how much water a ship will need for a specific route with a particular set of guests. They estimate that their optimizations can help each ship save over $200,000 a year. The solution also contributes to the company’s goal of reducing carbon emissions. Carnival is next looking to implement a predictive maintenance solution for its fleet, using Cortana Intelligence to study the data that’s already being collected from thousands of on-board sensors on each ship. You can learn more about the Carnival Maritime story here.

Arçelik A.Ş. Increases Forecasting Accuracy on Spare Parts

Arçelik A.Ş. manufactures and sells a range of appliances, including televisions, air conditioners and major kitchen appliances, and owns many popular brands such as Grundig. They are a leader in most of the 135 countries in which they operate, and owe much of their success to post-sales customer service.

Since Arçelik A.Ş. sells thousands of product SKUs and maintains an inventory of hundreds of thousands of spare-parts to service them, getting the right spare parts to the right place at the right time is the key to getting their customers back up and running quickly with their products. But with a catalog of 350,000 spare parts that was growing a further 10 percent each year, Arçelik A.Ş. found it increasingly hard to accurately forecast the parts they needed across all the markets they serve. They had many different systems for data collection, sometimes even relying on spreadsheets that were being managed by hand. Their old system simply wouldn’t scale or help them achieve their goals of maximizing customer satisfaction while minimizing inventory cost.

To address this situation, Arçelik A.Ş. decided to invest in a spare-parts demand forecasting solution built on Cortana Intelligence. With the help of solution provider BilgeAdam, they made their first move to the public cloud with this solution. Their decision started delivering benefits even before the impact on inventory management could be felt. For starters, the company went live with the solution in just three months, whereas they had anticipated an 18-month development schedule based on an earlier solution. They also got a highly scalable solution, thanks to their cloud bet, and one that’s much easier to integrate with external data sources such as weather or location to enhance their forecasts. Furthermore, they avoided the expensive investments in IT infrastructure and personnel that would have been needed to build and maintain the alternative solution.


Here’s how the solution works: spare-parts demand data is uploaded into Azure SQL Database. Next, Azure Machine Learning is used to test four algorithms and identify the most accurate one for the current data set. That algorithm is then used to forecast spare-parts needs for the next 12 months. The following month, the solution updates the data set, and the same experimentation and forecasting process is repeated. The testing process is automated using Azure Data Factory, and the algorithms are based, in part, on time-series R code developed for Arçelik A.Ş. by BilgeAdam.

The result is a much more accurate and useful forecast that’s also speedier and easier for Arçelik A.Ş. to generate. Their 12-month forecasts, which used to take 2-3 weeks to produce, are now generated every week. What’s more, these forecasts cover three times as many SKUs as their earlier solution. With forecasting accuracy already up to 80 percent (from the earlier 60 percent) and inventory turnover expected to climb by 10 percent, service calls are getting made faster and more cost-effectively, and – most important of all – Arçelik A.Ş. customers are much happier. You can click here to learn more about the Arçelik A.Ş. solution.

Mars Drinks Creates Smarter Vending Machines

Mars Drinks is a pioneer in supporting companies that want to provide great working environments for their people. In 1973, they introduced the first-ever fully automatic in-cup drinks vending machine, serving large manufacturing channels across Europe. In 1984, they introduced the first system for making hot drinks using fresh ground coffee and leaf teas sealed in individual servings. Mars Drinks’ solutions support hassle-free solutions for workplaces, delivering on taste and choice and with a commitment to sustainability.

To deliver the best possible service to its wide array of customers, ranging from consumers and businesses to distributors, Mars Drinks needed the ability to better anticipate and manage stock levels across their machines, which are distributed throughout the globe. Working with Microsoft partner Neal Analytics, they were able to apply machine learning to their vending machines. By tapping into the data gathered from remote sensors on their vending machines and connecting it to the Microsoft Azure IoT Suite, Cortana Intelligence, and Power BI, Mars Drinks is able to use predictive analytics to better maintain stock levels, understand consumer behaviors, and account for changes in demand related to factors such as weather and holidays.

For Mars Drinks’ distributors, who are subject to a fine each time a product is out of stock, this ability to better anticipate and manage stock levels will enable them to avoid unnecessary revenue losses.


Click here to read more about Mars Drinks and other retail scenarios being enabled by Microsoft.

CIML Blog Team

Installing latest PowerShell Core 6.0 Release on Linux just got easier!

As we continue our journey from Alpha releases and eventually to Beta, you can continue to download the latest releases from our GitHub repository.
However, our goal has always been to enable installation through popular existing Linux package management tools like apt-get and yum. I am pleased to announce that we have now published PowerShell Core 6.0 alpha.15 to https://packages.microsoft.com!

Install PowerShell Core on Ubuntu 14.04

# Import the public repository GPG keys
curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

# Register the Microsoft Ubuntu repository
curl https://packages.microsoft.com/config/ubuntu/14.04/prod.list | sudo tee /etc/apt/sources.list.d/microsoft.list

# Update apt-get
sudo apt-get update

# Install PowerShell
sudo apt-get install -y powershell

# Start PowerShell
powershell

Install PowerShell Core on Ubuntu 16.04

# Import the public repository GPG keys
curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

# Register the Microsoft Ubuntu repository
curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list | sudo tee /etc/apt/sources.list.d/microsoft.list

# Update apt-get
sudo apt-get update

# Install PowerShell
sudo apt-get install -y powershell

# Start PowerShell
powershell

Install PowerShell Core on CentOS

# Enter superuser mode
sudo su

# Register the Microsoft RedHat repository
curl https://packages.microsoft.com/config/rhel/7/prod.repo > /etc/yum.repos.d/microsoft.repo

# Exit superuser mode
exit

# Install PowerShell
sudo yum install -y powershell

# Start PowerShell
powershell

After registering the Microsoft repository once as superuser, from then on, you just need to use either sudo apt-get install powershell or sudo yum update powershell (depending on which distro you are using) to update it.
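If you want to confirm which version you ended up with after an update, here is a quick, hedged check from the shell (the binary name in the alpha releases is powershell):

# Print the PowerShell Core version that is now on the PATH
powershell -Command '$PSVersionTable.PSVersion'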

We plan to simultaneously publish to https://packages.microsoft.com and our GitHub repository for each new release.

Steve Lee
Principal Software Engineer Manager
PowerShell Core
