Channel: TechNet Technology News

Windows 10 Maps updates make inking better than ever

On the Maps team, we are always working to improve the maps experience and deliver more than a simple tool to get you where you need to go. We want to help you discover the world around you and deliver the best personal map and navigation experiences so you can get there your way.

Natural interaction with Windows Ink

We want it to be as easy and natural as possible to use the Maps experience, so that means including support for touch and ink, not just a mouse and keyboard.  Our touch support lets you control the map with gestures such as pinch to zoom or tilting the map by dragging two fingers down, so you can see things in 3D.

The interaction with the map becomes even more natural and more powerful thanks to Windows Ink.  If you have a pen, you can ink at any time to draw or add annotations on the map.  Just start writing to point out interesting places or call out details the very same way you might do with pen and paper.

Updates to the Windows Maps app

But we wanted to make this even better than today’s pen and paper experience. Now, with Maps, you can easily measure the distance as you trace a route on the map with your pen. It’s great for when you need to know how long your jog, bike or even kayak route will be.

If you need to draw perfectly straight lines on the map there’s now a ruler tool to make that fast and easy. Rotate it and adjust it however you want for the angle you need, and you’ve got a perfect straightedge.

Finally, you can quickly calculate directions between two points by drawing a line between your start point and your destination. Now, the Maps app will automatically convert your drawn line into a real route and give you directions.  Voila!  No need to type or search for your destinations if you already know where they are on the map.

You can even combine different ink modes to personalize your map and annotate your plans on top of it so you can reference everything in one place.

If you don’t have a pen with your device, don’t worry.  You can also use all the ink functionality with your mouse or finger, simply by toggling the touch writing option on the ink toolbar.

Surface Dial integration

The new Surface Dial brings even more ways to interact with the map and ink. The Surface Dial provides you another natural way to rotate, zoom or change the map perspective easily.

You can even use the dial to take advantage of the ink toolbar or move the ruler on the map, making it more natural to interact and create using both hands. Surface Dial features are available in Maps not only on the new Surface Studio, but also for anyone using the Surface Dial off-screen with an up-to-date Windows 10 device.

To try out these new features, launch the Maps app now and give them a spin!

We continue our commitment to build a great maps experience for all our users, and we hope you like where we’re headed. Your feedback has already influenced the product, and you can help us deliver the best experiences for millions of Windows 10 users. Please continue to share your feedback with us through Feedback Hub and tell us what you like, dislike or what you would like to see added.  We are listening and we can’t wait to hear from you and see what you create on your maps.


In-Memory OLTP in Standard and Express editions, with SQL Server 2016 SP1

We just announced the release of Service Pack 1 for SQL Server 2016. With SP1 we made a push to bring a consistent programming surface area across all editions of SQL Server. One of the outcomes is that In-Memory OLTP (aka Hekaton), the premier performance technology for transaction processing, data ingestion, data load, and transient data scenarios, is now available in SQL Server Standard Edition and Express Edition, as long as you have SQL Server 2016 SP1.

In this blog post we recap what the technology is. We then describe the resource/memory limitations in Express and Standard Edition. We go on to describe the scenarios for which you’d want to consider In-Memory OLTP. We conclude with a sample script illustrating the In-Memory OLTP objects, and some pointers to get started.

How does In-Memory OLTP work?

In-Memory OLTP can provide great performance gains, for the right workloads. One of our customers managed to achieve 1.2 Million requests per second with a single machine running SQL Server 2016, leveraging In-Memory OLTP.

Now, where does this performance gain come from? In essence, In-Memory OLTP improves performance of transaction processing by making data access and transaction execution more efficient, and by removing lock and latch contention between concurrently executing transactions: it is not fast because it is in-memory; it is fast because it is optimized around the data being in-memory. Data storage, access, and processing algorithms were redesigned from the ground up to take advantage of the latest enhancements in in-memory and high concurrency computing.

Now, just because data lives in-memory does not mean you lose it when there is a failure. By default, all transactions are fully durable, meaning that you have the same durability guarantees you get for any other table in SQL Server: as part of transaction commit, all changes are written to the transaction log on disk. If there is a failure at any time after the transaction commits, your data is there when the database comes back online. In addition, In-Memory OLTP works with all high availability and disaster recovery capabilities of SQL Server, like AlwaysOn, backup/restore, etc.

To leverage In-Memory OLTP in your database, you use one or more of the following types of objects:

  • Memory-optimized tables are used for storing user data. You declare a table to be memory-optimized at create time.
  • Non-durable tables are used for transient data, either for caching or for intermediate result sets (replacing traditional temp tables). A non-durable table is a memory-optimized table that is declared with DURABILITY=SCHEMA_ONLY, meaning that changes to these tables do not incur any IO. This avoids consuming log IO resources for cases where durability is not a concern.
  • Memory-optimized table types are used for table-valued parameters (TVPs), as well as intermediate result sets in stored procedures. These can be used instead of traditional table types. Table variables and TVPs that are declared using a memory-optimized table type inherit the benefits of non-durable memory-optimized tables: efficient data access, and no IO.
  • Natively compiled T-SQL modules are used to further reduce the time taken for an individual transaction by reducing CPU cycles required to process the operations. You declare a Transact-SQL module to be natively compiled at create time. At this time, the following T-SQL modules can be natively compiled: stored procedures, triggers and scalar user-defined functions.

In-Memory OLTP is built into SQL Server, and starting with SP1, you can use all these objects in any edition of SQL Server. And because these objects behave very similarly to their traditional counterparts, you can often gain performance benefits while making only minimal changes to the database and the application. Plus, you can have both memory-optimized and traditional disk-based tables in the same database, and run queries across the two. You will find a Transact-SQL script showing an example for each of these types of objects towards the end of this post.

Memory quota in Express and Standard Editions

In-Memory OLTP includes memory-optimized tables, which are used for storing user data. These tables are required to fit in memory. Therefore, you need to ensure you have enough memory for the data stored in memory-optimized tables. In addition, in both Standard Edition and Express Edition, each database has a quota for data stored in memory-optimized tables.

To estimate memory size required for your data, consult the topic Estimate Memory Requirements for Memory-Optimized Tables.

These are the per-database quotas for In-Memory OLTP for all SQL Server editions, with SQL Server 2016 SP1:

SQL Server 2016 SP1 Edition    In-Memory OLTP quota (per DB)
Express                        352 MB
Web                            16 GB
Standard                       32 GB
Developer                      Unlimited
Enterprise                     Unlimited

The following items count towards the database quota:

  • Active user data rows in memory-optimized tables and table variables. Note that old row versions do not count toward the cap.
  • Indexes on memory-optimized tables.
  • Operational overhead of ALTER TABLE operations, which can be up to the full table size.

If an operation causes the database to hit the cap, the operation will fail with an out-of-quota error:

Msg 41823, Level 16, State 171, Line 6
Could not perform the operation because the database has reached its quota for in-memory tables. See 'http://go.microsoft.com/fwlink/?LinkID=623028' for more information.

* Note: at the time of writing, this link points to an article about In-Memory OLTP in Azure SQL Database, which shares the same quota mechanism as SQL Server Express and Standard edition. We’ll update that article to discuss quotas in SQL Server as well.

If this happens, you will no longer be able to insert or update data, but you can still query it. Mitigation is to delete data or upgrade to a higher edition. In the end, how much memory you need depends to a large extent on how you use In-Memory OLTP. The next section has details about usage patterns, as well as some pointers to ways you can manage the in-memory footprint of your data.

You can monitor memory utilization through DMVs as well as Management Studio. Details are in the topic Monitor and Troubleshoot Memory Usage. Note that memory reported in these DMVs and reports can be slightly higher than the quota, since it includes memory required for old row versions. Old row versions do count toward the overall memory utilization and you need to provision enough memory to handle them, but they do not count toward the quota in Express and Standard editions.
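
As a quick sketch, the sys.dm_db_xtp_table_memory_stats DMV reports per-table memory use when run in the context of the database that contains your memory-optimized tables:

-- memory allocated/used per memory-optimized table, in KB
SELECT OBJECT_NAME(object_id) AS table_name,
       memory_allocated_for_table_kb,
       memory_used_by_table_kb,
       memory_allocated_for_indexes_kb,
       memory_used_by_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats
ORDER BY memory_allocated_for_table_kb DESC;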

Usage scenarios for In-Memory OLTP

In-Memory OLTP is not a magic go-fast button, and is not suitable for all workloads. For example, memory-optimized tables will not really bring down your CPU utilization if most of the queries are performing aggregation over large ranges of data – Columnstore helps for that scenario.

Here is a list of scenarios and application patterns where we have seen customers be successful with In-Memory OLTP.

High-throughput and low-latency transaction processing

This is really the core scenario for which we built In-Memory OLTP: support large volumes of transactions, with consistent low latency for individual transactions.

Common workload scenarios are: trading of financial instruments, sports betting, mobile gaming, and ad delivery. Another common pattern we’ve seen is a “catalog” that is frequently read and/or updated. One example is where you have large files, each distributed over a number of nodes in a cluster, and you catalog the location of each shard of each file in a memory-optimized table.

Implementation considerations

Use memory-optimized tables for your core transaction tables, i.e., the tables with the most performance-critical transactions. Use natively compiled stored procedures to optimize execution of the logic associated with the business transaction. The more of the logic you can push down into stored procedures in the database, the more benefit you will see from In-Memory OLTP.

To get started in an existing application, use the transaction performance analysis report to identify the objects you want to migrate, and use the memory-optimization and native compilation advisors to help with migration.

Data ingestion, including IoT (Internet-of-Things)

In-Memory OLTP is really good at ingesting large volumes of data from many different sources at the same time. It is often beneficial to ingest data into a SQL Server database rather than other destinations, because SQL Server makes running queries against the data really fast and allows you to get real-time insights.

Common application patterns are:

  • Ingesting sensor readings and events, to allow notification as well as historical analysis.
  • Managing batch updates, even from multiple sources, while minimizing the impact on the concurrent read workload.

Implementation considerations

Use a memory-optimized table for the data ingestion. If the ingestion consists mostly of inserts (rather than updates) and the In-Memory OLTP storage footprint of the data is a concern, consider moving older data out of the memory-optimized table over time, for example with a temporal memory-optimized table whose history lives on disk, as in the sample and the sketch below.

The following sample is a smart grid application that uses a temporal memory-optimized table, a memory-optimized table type, and a natively compiled stored procedure, to speed up data ingestion, while managing the In-Memory OLTP storage footprint of the sensor data: release and source code.
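
To make that pattern concrete, here is a minimal sketch of a system-versioned (temporal) memory-optimized table; the table and column names are purely illustrative, and the linked sample is the authoritative reference. Current rows stay in memory, while older row versions are moved to the disk-based history table, which keeps the in-memory footprint bounded:

-- illustrative only: hot sensor data in memory, history offloaded to a disk-based table
CREATE TABLE dbo.SensorReadings
( ReadingId BIGINT IDENTITY PRIMARY KEY NONCLUSTERED,
  SensorId INT NOT NULL,
  Reading FLOAT NOT NULL,
  ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
  ValidTo DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
  PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo))
WITH (MEMORY_OPTIMIZED = ON,
      DURABILITY = SCHEMA_AND_DATA,
      SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.SensorReadingsHistory))
GO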

Caching and session state

The In-Memory OLTP technology makes SQL really attractive for maintaining session state (e.g., for an ASP.NET application) and for caching.

ASP.NET session state is a very successful use case for In-Memory OLTP. With SQL Server, one customer was able to achieve 1.2 Million requests per second. In the meantime, they have started using In-Memory OLTP for the caching needs of all mid-tier applications in the enterprise. Details: https://blogs.msdn.microsoft.com/sqlcat/2016/10/26/how-bwin-is-using-sql-server-2016-in-memory-oltp-to-achieve-unprecedented-performance-and-scale/

Implementation considerations

You can use non-durable memory-optimized tables as a simple key-value store by storing a BLOB in a varbinary(max) column. Alternatively, you can implement a semi-structured cache with JSON support in SQL Server. Finally, you can create a full relational cache through non-durable tables with a full relational schema, including various data types and constraints.
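
For the key-value pattern, a minimal sketch might look like the following; the table name, key size and bucket count are illustrative and should be sized for your workload:

-- illustrative only: a non-durable (SCHEMA_ONLY) key-value cache table
CREATE TABLE dbo.SessionCache
( SessionKey NVARCHAR(100) NOT NULL
    PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
  SessionValue VARBINARY(MAX) NOT NULL,
  LastAccessed DATETIME2 NOT NULL)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
GO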

Get started with memory-optimizing ASP.NET session state by leveraging the scripts published on GitHub to replace the objects created by the built-in session state provider.

Tempdb object replacement

Leverage non-durable tables and memory-optimized table types to replace your traditional tempdb-based #temp tables, table variables, and table-valued parameters.

Memory-optimized table variables and non-durable tables typically reduce CPU and completely remove log IO, when compared with traditional table variables and #temp tables.

Case study illustrating benefits of memory-optimized table-valued parameters: https://blogs.msdn.microsoft.com/sqlserverstorageengine/2016/04/07/a-technical-case-study-high-speed-iot-data-ingestion-using-in-memory-oltp-in-azure/

Implementation considerations

To get started see: Improving temp table and table variable performance using memory optimization.
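
As a simple illustration (the type and column names here are hypothetical), a table variable declared with a memory-optimized table type lives entirely in memory and does not touch tempdb:

-- illustrative only: a memory-optimized table type replacing a traditional table variable
CREATE TYPE dbo.OrderLinesType AS TABLE
( OrderId INT NOT NULL,
  LineNo INT NOT NULL,
  Qty INT NOT NULL,
  INDEX ix_order NONCLUSTERED (OrderId, LineNo))
WITH (MEMORY_OPTIMIZED = ON)
GO
DECLARE @lines dbo.OrderLinesType
INSERT @lines VALUES (1, 1, 5), (1, 2, 3)
SELECT OrderId, LineNo, Qty FROM @lines
GO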

ETL (Extract Transform Load)

ETL workflows often include load of data into a staging table, transformations of the data, and load into the final tables.

Implementation considerations

Use non-durable memory-optimized tables for the data staging. They completely remove all IO, and make data access more efficient.

If you perform transformations on the staging table as part of the workflow, you can use natively compiled stored procedures to speed up these transformations. If you can do these transformations in parallel you get additional scaling benefits from the memory-optimization.

Getting started

Before you can start using In-Memory OLTP, you need to create a MEMORY_OPTIMIZED_DATA filegroup. In addition, we recommend using database compatibility level 130 and setting the database option MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT to ON.

You can use the script at the following location to create the filegroup in the default data folder, and set the recommended settings:
https://raw.githubusercontent.com/Microsoft/sql-server-samples/master/samples/features/in-memory/t-sql-scripts/enable-in-memory-oltp.sql
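
If you prefer to adapt the steps yourself, the core of that script looks roughly like this sketch; the database name and file path below are placeholders, and the linked script remains the tested version:

-- placeholders: replace MyDatabase and the container path with your own
ALTER DATABASE MyDatabase
  ADD FILEGROUP MyDatabase_mod CONTAINS MEMORY_OPTIMIZED_DATA
GO
ALTER DATABASE MyDatabase
  ADD FILE (NAME = 'MyDatabase_mod_container', FILENAME = 'C:\Data\MyDatabase_mod')
  TO FILEGROUP MyDatabase_mod
GO
ALTER DATABASE MyDatabase SET COMPATIBILITY_LEVEL = 130
GO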

The following script illustrates In-Memory OLTP objects you can create in your database:

-- configure recommended DB option
ALTER DATABASE CURRENT SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON
GO

-- memory-optimized table
CREATE TABLE dbo.table1
( c1 INT IDENTITY PRIMARY KEY NONCLUSTERED,
  c2 NVARCHAR(MAX))
WITH (MEMORY_OPTIMIZED = ON)
GO

-- non-durable table
CREATE TABLE dbo.temp_table1
( c1 INT IDENTITY PRIMARY KEY NONCLUSTERED,
  c2 NVARCHAR(MAX))
WITH (MEMORY_OPTIMIZED = ON,
      DURABILITY = SCHEMA_ONLY)
GO

-- memory-optimized table type
CREATE TYPE dbo.tt_table1 AS TABLE
( c1 INT IDENTITY,
  c2 NVARCHAR(MAX),
  is_transient BIT NOT NULL DEFAULT (0),
  INDEX ix_c1 HASH (c1) WITH (BUCKET_COUNT = 1024))
WITH (MEMORY_OPTIMIZED = ON)
GO

-- natively compiled stored procedure
CREATE PROCEDURE dbo.usp_ingest_table1
  @table1 dbo.tt_table1 READONLY
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC
  WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT,
        LANGUAGE = N'us_english')

  DECLARE @i INT = 1

  WHILE @i > 0
  BEGIN
    INSERT dbo.table1
    SELECT c2
    FROM @table1
    WHERE c1 = @i AND is_transient = 0

    IF @@ROWCOUNT > 0
      SET @i += 1
    ELSE
    BEGIN
      INSERT dbo.temp_table1
      SELECT c2
      FROM @table1
      WHERE c1 = @i AND is_transient = 1

      IF @@ROWCOUNT > 0
        SET @i += 1
      ELSE
        SET @i = 0
    END
  END
END
GO

-- sample execution of the proc
DECLARE @table1 dbo.tt_table1
INSERT @table1 (c2, is_transient) VALUES (N'sample durable', 0)
INSERT @table1 (c2, is_transient) VALUES (N'sample non-durable', 1)
EXECUTE dbo.usp_ingest_table1 @table1 = @table1
SELECT c1, c2 FROM dbo.table1
SELECT c1, c2 FROM dbo.temp_table1
GO

A perf demo using In-Memory OLTP can be found at: in-memory-oltp-perf-demo-v1.0.

Try In-Memory OLTP in SQL Server today!

Resources to get started:

Simplifying our SharePoint integration story

Over the past year or so, we’ve connected with many customers to hear what you value about Reporting Services and what we could make better. We incorporated much of the valuable feedback we received into SSRS 2016, which we’re seeing customers adopt at an incredible rate. We’ve also received a good deal of feedback about Reporting Services’ two installation modes – “Native” mode and “SharePoint-integrated” mode. Today, we’d like to share how we’re addressing that feedback in SQL Server v.Next to make Reporting Services a fantastic BI solution you can deploy on its own – and more easily integrate with SharePoint if you so choose.

Your feedback

As we spoke with one customer after another, we heard the following loud and clear:

  • Native mode and SharePoint-integrated mode can present a difficult trade-off, as many customers wanted a standalone BI solution like Native mode but needed to deploy SharePoint to get features available only in SharePoint-integrated mode, such as Power View. (Conversely, some features are available only in Native mode. For us, developing features to support both modes comes at the expense of other features we’d love to deliver to you.)
  • Deploying SharePoint-integrated mode can be a challenge. Getting SharePoint administrators – usually different people from those who manage BI – to install and run Reporting Services on SharePoint application servers, manage Reporting Services in SharePoint Central Administration, and support reports stored across various SharePoint sites and libraries is difficult in practice.
  • More and more customers are migrating to SharePoint Online, which requires more lightweight integration approaches.

A simpler integration story going forward

Starting with SQL Server v.Next, there’ll be only one installation mode for Reporting Services: “Native” mode. It’s a standalone BI solution you can deploy, whether or not you have SharePoint, and it offers the full set of Reporting Services features: a modern web portal, paginated reports, mobile reports, KPIs, and more. With the Technical Preview of Power BI reports in Reporting Services, you can view and interact with Power BI reports in your web browser, and in time, we aim to support web-based viewing of Excel workbooks in Native mode as well.

If you do have SharePoint and want to integrate with it, it’ll be your choice and it’ll be simpler. We’ll enable you to integrate Reporting Services Native mode with SharePoint, focusing on the scenarios you’ve told us you value most:

  • Embedding reports in SharePoint pages. For many if not most customers, SharePoint integration really came down to this scenario. We’re making it as easy as possible to embed all report types in a Page Viewer web part using the rs:Embed=true URL parameter. Plus, we plan to update our Report Viewer web part as well.
  • Reporting on data in SharePoint lists. With a native connector for SharePoint list data in Report Builder as well as in Power BI Desktop, we’ll continue to make it easy to query SharePoint data and visualize it in your reports.
  • Delivering reports to SharePoint libraries. We plan to develop a SharePoint delivery extension for Native mode so you can schedule delivery of reports in various formats (Word, Excel, PowerPoint, PDF, and more) to SharePoint libraries.

With these more lightweight approaches to SharePoint integration, SQL Server v.Next Reporting Services no longer includes the current “SharePoint-integrated” installation mode. We’ll continue to support SharePoint-integrated mode in previous versions through the product support lifecycle, and we’ll offer documentation and tools to help you migrate your reports to Native mode (today, check out this migration script as an example). We’ve evolved Power View into Power BI reports, which we’re working on enabling in Reporting Services (try the Technical Preview). You can already import your Excel workbooks with Power View sheets into Power BI Desktop and we plan to enable you to convert your standalone Power View (*.rdlx) reports as well.

We’re thrilled with the feedback we continue to receive about SSRS 2016 and the Technical Preview, and with the direction we’ve shared today, we’re excited about the enhancements we’ll be able to deliver to you.

Overview Video of Azure Data Lake (Now Generally Available!)

One of the most exciting announcements we made at Connect(); in New York City yesterday is the general availability of Azure Data Lake (ADL), a key component of Cortana Intelligence, our fully managed suite of big data and advanced analytics services.

ADL offers a hyper-scale data repository for storing data of any type, and is built to the open HDFS standard. You can store trillions of files, and single files can be over a petabyte large. ADL makes it easy to run massively parallel data transformations and data processing programs using U-SQL, R, Python and .NET over your petabytes of data – and all with just a few lines of code and absolutely no infrastructure to manage.

This Channel 9 video below provides an overview of Azure Data Lake, along with architectural insights into how the data and compute work together to provide a highly optimized, scalable analytics solution for your needs.


CIML Blog Team

Webinar – Windows Server 2016: Better Security starts at the OS

We know security is a top priority for you and your company. In the last few years we’ve seen an increased number of companies breached, information leaked and cybersecurity cases in the news. Clearly every organization needs to strengthen its security posture – but where do you start?

The answer is the Operating System! Windows Server 2016 brings new layers of security that will help you protect identities and privileged credentials, the virtualization fabric and virtual machines as well as the applications regardless of where you run them – on-premises or in the cloud.

To learn how you can get started with these new capabilities, join our webinar with Nir Ben Zvi and Dean Wells – Principal Program Managers on the Windows Server team – who will cover all the new features for security in Windows Server 2016.
The webinar will be hosted on November 22, 2016 at 10 am PST.

Register for this webinar.

New Azure PaaS services available for Azure Stack Technical Preview 2

Today the Azure Stack team announced the release of new Azure PaaS services for Azure Stack Technical Preview 2. Read more on the Azure blog.

Cumulative Update #3 for SQL Server 2016 RTM

Dear Customers,

The 3rd cumulative update release, “CU3”, for SQL Server 2016 RTM is now available for download at the Microsoft Downloads site. Please note that registration is no longer required to download Cumulative updates.

To learn more about the release or servicing model, please visit:

Notes:

CU3 was also released as a SQL Server Security Bulletin on 11/8/2016, KB3194717. Please see MS16-136 for details. As a result of this, you may already have CU3 installed as part of that security bulletin release and installation of this CU is unnecessary. If you do attempt to install CU3 after MS16-136, you may receive the following message:  There are no SQL Server instances or shared features that can be updated on this computer. This message indicates that CU3 is already installed and no further action is required.

Additionally, note the package name for CU3 (SQLServer2016-KB3194717-x64.exe) contains the security update MS16-136 KB number (3194717), not the CU3 KB number (3205413). This can be ignored as a single package services both release channels.

 

Cumulative Update #6 for SQL Server 2012 SP3

Dear Customers,

The 6th cumulative update release, “CU6”, for SQL Server 2012 SP3 is now available for download at the Microsoft Downloads site. Please note that registration is no longer required to download Cumulative updates.

To learn more about the release or servicing model, please visit:

Notes:

CU6 was also released as a SQL Server Security Bulletin on 11/8/2016, KB3194724. Please see MS16-136 for details. As a result of this, you may already have CU6 installed as part of that security bulletin release and installation of this CU is unnecessary. If you do attempt to install CU6 after MS16-136, you may receive the following message:  There are no SQL Server instances or shared features that can be updated on this computer. This message indicates that CU6 is already installed and no further action is required.

Additionally, note the package name for CU6 (SQLServer2012-KB3194724-x64.exe) contains the security update MS16-136 KB number (3194724), not the CU6 KB number (3194992). This can be ignored as a single package services both release channels.


Cumulative Update #15 for SQL Server 2012 SP2

Dear Customers,

The 15th cumulative update release, “CU15”, for SQL Server 2012 SP2 is now available for download at the Microsoft Downloads site. Please note that registration is no longer required to download Cumulative updates.

To learn more about the release or servicing model, please visit:

Notes:

CU15 was also released as a SQL Server Security Bulletin on 11/8/2016, KB3194725. Please see MS16-136 for details. As a result of this, you may already have CU15 installed as part of that security bulletin release and installation of this CU is unnecessary. If you do attempt to install CU15 after MS16-136, you may receive the following message:  There are no SQL Server instances or shared features that can be updated on this computer. This message indicates that CU15 is already installed and no further action is required.

Additionally, note the package name for CU15 (SQLServer2012-KB3194725-x64.exe) contains the security update MS16-136 KB number (3194725), not the CU15 KB number (3205416). This can be ignored as a single package services both release channels.

[Virtual Lab 11/29] From Data Prep to Visualization in Less Time with Microsoft Power BI and Alteryx

Good decisions start with good data, but are you spending more time prepping and blending data than analyzing and visualizing it to make insightful decisions? Attend our virtual lab and follow along as we walk you through step-by-step instructions on how to build your very own workflow in Alteryx and create interactive reports and dashboards in Microsoft Power BI.

Now Available: Update 1610 for System Center Configuration Manager

Happy Friday! We are delighted to announce that we have released version 1610 for the Current Branch (CB) of System Center Configuration Manager that includes some great new features and product enhancements.

We continue to see a strong adoption of the Current Branch model by our customers. There are now more than 25,000 organizations managing more than 50 million devices with Configuration Manager version 1511 or later. Even though this is a significant milestone, we expect many more customers to upgrade to the Current Branch of Configuration Manager in the coming months, so the quality of the product continues to be a top priority for us.

Thanks to our active Technical Preview Branch community, the 1610 update includes feedback and usage data we have gathered from customers who have installed and road tested our monthly technical previews over the last few months. As always, 1610 has also been tested at scale by real customers, in real production environments. As of today, nearly 1 million devices are being managed by the version 1610 of Configuration Manager.

1610 includes lots of new features and enhancements in Windows 10 and Office 365 management, application management, end user experience, client management and also includes new functionality for customers using Configuration Manager in hybrid mode with Microsoft Intune. Here are just few of the enhancements that are available in this update:

  • Windows 10 Upgrade Analytics integration allows you to assess and analyze device readiness and compatibility with Windows 10 to allow smoother upgrades.
  • Office 365 Servicing Dashboard and app deployment to clients features help you to deploy Office 365 apps to clients as well as track Office 365 usage and update deployments.
  • Software Updates Compliance Dashboard allows you to view the current compliance status of devices in your organization and quickly analyze the data to see which devices are at risk.
  • Cloud Management Gateway provides a simpler way to manage Configuration Manager clients on the Internet. You can use the Configuration Manager console to deploy the service in Microsoft Azure and configure the supported roles to allow cloud management gateway traffic.
  • Client Peer Cache is a new built-in solution in Configuration Manager that allows clients to share content with other clients directly from their local cache with monitoring and troubleshooting capabilities.
  • Enhancements in Software Center including customizable branding in more dialogs, notifications of new software, improvements to the notification experience for high-impact task sequence deployments, and ability for users to request applications and view request history directly in Software Center.
  • New remote control features including performance optimization for remote control sessions and keyboard translation.

This release also includes new features for customers using Configuration Manager connected with Microsoft Intune. Some of the new features include:

  • New configuration item settings and improvements now only show settings that apply to the selected platform. We also added lots of new settings for Android (23), iOS (4), Mac (4), Windows 10 desktop and mobile (37), Windows 10 Team (7), Windows 8.1 (11), and Windows Phone 8.1 (3).
  • Lookout integration allows you to check device compliance status based on compliance with Lookout rules.
  • Request a sync from the admin console improvement allows you to request a policy sync on an enrolled mobile device from the Configuration Manager console.
  • Support for paid apps in Windows Store for Business allows you to add and deploy online-licensed paid apps in addition to the free apps in Windows Store for Business.

For more details and to view the full list of new features in this update, check out our What’s new in version 1610 of System Center Configuration Manager documentation.

Note: As the update is rolled out globally in the coming weeks, it will be automatically downloaded, and you will be notified when it is ready to install from the Updates and Servicing node in your Configuration Manager console. If you can’t wait to try these new features, this PowerShell script can be used to ensure that you are in the first wave of customers getting the update. By running this script on your central administration site or standalone primary site, you will see the update available in your console right away.

For assistance with the upgrade process, please post your questions in the Site and Client Deployment forum. To provide feedback or report any issues with the functionality included in this release, please use Connect. If there’s a new feature or enhancement you want us to consider including in future updates, please use the Configuration Manager UserVoice site.

Thank you,

The System Center Configuration Manager team

 

Additional resources:

Team Services November Extensions Roundup

This month, I’ve got two fun new extensions that have lots of potential; both of them are highly trending – taking some of the top spots for our most downloaded extensions over the last 30 days. I hope you enjoy these and have a Happy Thanksgiving!

Activity Feed

See it in the Marketplace: https://marketplace.visualstudio.com/items?itemName=davesmits.VSTSActivityFeed

I am a big fan of dashboard widgets and having a glanceable view of what’s going on in my projects. With Activity Feed, you get two great things: an activity feed experience for Team Services, and a publisher who is committed to making this extension better over time. Activity Feed lets you see the most recent changes from four core areas right now, with plans to add more:

  • Work Items
  • Commits
  • Pull Requests
  • Builds

The widget offers configurable sizes (as is usual for widgets), and a second level of customization for determining what types of activity should be shown:

  1. Work Item Types – show specific work item types from your project
  2. Area Paths – limit to the area paths you care about
  3. Repositories – keep the widget focused on the repositories that matter
  4. Build Definitions – keep up to date with all of your builds, or highlight the important ones

Dave is the kind of publisher that is highly engaged with his community; he’s always on the lookout for feedback to improve the extension. He even posts his roadmap and other ideas he’s entertaining directly into his Marketplace page. Go check this one out!

Microsoft Teams Integration

See it in the Marketplace: https://marketplace.visualstudio.com/items?itemName=fortifyvsts.hpe-security-fortify-vsts

Microsoft Office recently launched its newest addition to the Office family. ‘Microsoft Teams’ is a group-chat platform for large or small teams and offers instant access to everything you may need in Office 365. The launch of Microsoft Teams is great news for Office 365 customers, and now we have the Team Services integration to go with it! This integration lets your team stay up to date with alerts from:

  • Work items
  • Pull requests
  • Code commits
  • Builds
  • Releases

To set up this integration, you need to add Team Services as a connector to your team inside Microsoft Teams. Team Services is already baked in as an option, so if you’ve installed Microsoft Teams, that’s all you’ll need. To bring events from Team Services into Microsoft Teams, click the ellipsis or ‘…’ on the top nav of your team channel. Select Connectors and then scroll through the list to find the Team Services icon and follow the steps to connect to your Team Services account.

After that, you’ll select your project, the event type, and corresponding details you care about. The full instructions for Getting Started are here. Have fun sharing information with your team faster than ever!

    Are you using an extension you think should be featured here?

    I’ll be on the lookout for extensions to feature in the future, so if you’d like to see yours (or someone else’s) here, then let me know on Twitter!

    @JoeB_in_NC

    Don’t do it: consumer-grade solid-state drives (SSD) in Storage Spaces Direct

    // This post was written by Dan Lovinger, Principal Software Engineer.

    Howdy,

    In the weeks since the release of Windows Server 2016, the amount of interest we’ve seen in Storage Spaces Direct has been nothing short of spectacular. This interest has translated to many potential customers looking to evaluate Storage Spaces Direct.

    Windows Server has a strong heritage with do-it-yourself design. We’ve even done it ourselves with the Project Kepler-47 proof of concept! While over the coming months there will be many OEM-validated solutions coming to market, many more experimenters are once again piecing together their own configurations.

    This is great, and it has led to a lot of questions, particularly about Solid-State Drives (SSDs). One dominates: “Is [some drive] a good choice for a cache device?” Another comes in close behind: “We’re using [some drive] as a cache device and performance is horrible, what gives?”

    The flash translation layer masks a variety of tricks an SSD can use to accelerate performance and extend its lifetime, such as buffering and spare capacity.

    Some background on SSDs

    As I write this in late 2016, an SSD is universally a device built from a set of NAND flash dies connected to an internal controller, called the flash translation layer (“FTL”).

    NAND flash is inherently unstable. At the physical level, a flash cell is a charge trap device – a bucket for storing electrons. The high voltages needed to trigger the quantum tunneling process that moves electrons in and out of the cell – your data – slowly cause damage to accumulate at the atomic level. Failure does not happen all at once. Charge degrades in-place over time and even reads aren’t without cost, a phenomenon known as read disturb.

    The number of electrons in the cell’s charge trap translates to a measurable voltage. At its most basic, a flash cell stores one on/off bit – a single level cell (SLC) – and the difference between 0 and 1 is “easy”. There is only one threshold voltage to consider. On one side the cell represents 0, on the other it is 1.

    However, conventional SSDs have moved on from SLC designs. Common SSDs now store two (MLC) or even three (TLC) bits per cell, requiring four (00, 01, 10, 11) or eight (001, 010, … 110, 111) different charge levels. On the horizon is 4 bit QLC NAND, which will require sixteen! As the damage accumulates it becomes difficult to reliably set charge levels; eventually, they cannot store new data. This happens faster and faster as bit densities increase.

    • SLC: 100,000 or more writes per cell
    • MLC: 10,000 to 20,000
    • TLC: low to mid 1,000’s
    • QLC: mid-100’s

    The FTL has two basic defenses.

    • error correcting codes (ECC) stored alongside the data
    • extra physical capacity, over and above the apparent size of the device, “over-provisioning”

    Both defenses work like a bank account.

    Over the short term, some amount of the ECC is needed to recover the data on each read. Lightly-damaged cells or recently-written data won’t draw heavily on ECC, but as time passes, more of the ECC is necessary to recover the data. When it passes a safety margin, the data must be re-written to “refresh” the data and ECC, and the cycle continues.

    Across a longer term, the over-provisioning in the device  replaces failed cells and preserves the apparent capacity of the SSD. Once this account is drawn down, the device is at the end of its life.

    To complete the physical picture, NAND is not freely writable. A die is divided into what we refer to as program/erase “P/E” pages. These are the actual writable elements. A page must first be erased to prepare it for writing; then the entire page can be written at once. A page may be as small as 16K, or potentially much larger. Any one single write that arrives in the SSD probably won’t line up with the page size!

    And finally, NAND never re-writes in place. The FTL is continuously keeping track of wear, preparing fresh erased pages, and consolidating valid data sitting in pages alongside stale data corresponding to logical blocks which have already been re-written. These are additional reasons for over-provisioning.

    In consumer devices, and especially in mobile, an SSD can safely leverage an unprotected, volatile cache because the device’s battery ensures it will not unexpectedly lose power. In servers, however, an SSD must provide its own power protection, typically in the form of a capacitor.

    Buffers and caches

    The bottom line is that a NAND flash SSD is a complex, dynamic environment and there is a lot going on to keep your data safe. As device densities increase, it is getting ever harder. We must maximize the value of each write, as it takes the device one step closer to failure. Fortunately, we have a trick: a buffer.

    A buffer in an SSD is just like the cache in the system that surrounds it: some memory which can accumulate writes, allowing the user/application request to complete while it gathers more and more data to write efficiently to the NAND flash. Many small operations turn into a small number of larger operations. Just like the memory in a conventional computer, though, on its own that buffer is volatile – if a power loss occurs, any pending write operations are lost.

    Losing data is, of course, not acceptable. Storage Spaces Direct is at the far end of a series of actions which have led to it getting a write. A virtual machine on another computer may have had an application issue a flush which, in a physical system, would put the data on stable storage. After Storage Spaces Direct acknowledges any write, it must be stable.

    How can any SSD have a volatile cache!? Simple, and it is a crucial detail of how the SSD market has differentiated itself: you are very likely reading this on a device with a battery! Consumer flash is volatile in the device but not volatile when considering the entire system – your phone, tablet or laptop. Making a cache non-volatile requires some form of power storage (or new technology …), which adds unneeded expense in the consumer space.

    What about servers? In the enterprise space, the cost and complexity of providing complete power safety to a collection of servers can be prohibitive. This is the design point enterprise SSDs sit in: the added cost of internal power capacity to allow saving the buffer content is small.

    An (older) enterprise-grade SSD, with its removable and replaceable built-in battery!

    This newer enterprise-grade SSD, foreground, uses a capacitor (the three little yellow things, bottom right) to provide power-loss protection.

    Along with volatile caches, consumer flash is also universally of lower endurance. A consumer device targets environments with light activity. Extremely dense, inexpensive, fragile NAND flash – which may wear out after only a thousand writes – could still provide many years of service. However, expressed in total writes over time or capacity written per day, a consumer device could wear out more than 10x faster than available enterprise-class SSDs.

    So, where does that leave us? Two requirements for SSDs for Storage Spaces Direct. One hard, one soft, but they normally go together:

    • the device must have a non-volatile write cache
    • the device should have enterprise-class endurance

    But … could I get away with it? And more crucially – for us – what happens if I just put a consumer-grade SSD with a volatile write cache in a Storage Spaces Direct system?

     

    An experiment with consumer-grade SSDs

    For this experiment, we’ll be using a new-out-of-box 1 TB consumer class SATA SSD. While we won’t name it, it is a first tier, high quality, widely available device. It just happens to not be appropriate for an enterprise workload like Storage Spaces Direct, as we’ll see shortly.

    In round numbers, its data sheet says the following:

    • QD32 4K Read: 95,000 IOPS
    • QD32 4K Write: 90,000 IOPS
    • Endurance: 185TB over the device lifetime

    Note: QD (“queue depth”) is geek-speak for the targeted number of IOs outstanding during a storage test. Why do you always see 32? That’s the SATA Native Command Queueing (NCQ) limit to which commands can be pipelined to a SATA device. SAS and especially NVME can go much deeper.

    Translating the endurance to the widely-used device-writes-per-day (DWPD) metric, over the device’s 5-year warranty period that is

    185 TB / (365 days x 5 years = 1825 days) = ~ 100 GB writable per day
    100 GB / 1 TB total capacity = 0.10 DWPD

    The device can handle just over 100 GB each day for 5 years before its endurance is exhausted. That’s a lot of Netflix and web browsing for a single user! Not so much for a large set of virtualized workloads.

    To gather the data below, I prepared the device with a 100 GiB load file, written through sequentially a little over 2 times. I used DISKSPD 2.0.18 to do a QD8 70:30 4 KiB mixed read/write workload using 8 threads, each issuing a single IO at a time to the SSD. First with the write buffer enabled:

    diskspd.exe -t8 -b4k -r4k -o1 -w30 -Su -D -L -d1800 -Rxml Z:\load.bin

    Normal unbuffered IO sails along, with a small write cliff.

    The first important note here is the length of the test: 30 minutes. This shows an abrupt drop of about 10,000 IOPS two minutes in – this is normal, certainly for consumer devices. It likely represents the FTL running out of pre-erased NAND ready for new writes. Once its reserve runs out, the device runs slower until a break in the action lets it catch back up. With web browsing and other consumer scenarios, the chances of noticing this are small.

    An aside: this is a good, stable device in each mode of operation – behavior before and after the “write cliff” is very clean.

    Second, note that the IOPS are … a bit different than the data sheet might have suggested, even before the device reaches steady operation. We’re intentionally using a light QD8 70:30 4K mix to drive it more like a generalized workload. It still rolls over the write cliff. Under sustained, mixed IO pressure the FTL has much more work to take care of, and it shows.

    That’s with the buffer on, though. Now just adding write-through (with -Suw):

    diskspd.exe -t8 -b4k -r4k -o1 -w30 -Suw -D -L -d1800 -Rxml Z:\load.bin

    Write-through IO exposes the true latency of NAND, normally masked by the FTL/buffer.

    Wow!

    First: it’s great that the device honors write-through requests. In the consumer space, this gives an application a useful tool for making data durable when it must be durable. This is a good device!

    Second, oh my does the performance drop off. This is no longer an “SSD”: especially as it goes over the write cliff – which is still there – it’s merely a fast HDD, at about 220 IOPS. Writing NAND is slow! This is the FTL forced to push all the way into the NAND flash dies, immediately, without being able to buffer, de-conflict the read and write IO streams and manage all the other background activity it needs to do.

    Third, those immediate writes take what is already a device with modest endurance and deliver a truly crushing blow to its total lifetime.

    Crucially, this is how Storage Spaces Direct would see this SSD. Not much of a “cache” anymore.

     

    So, why does a non-volatile buffer help?

    It lets the SSD claim that a write is stable once it is in the buffer. A write-through operation – or a flush, or a request to disable the cache – can be honored without forcing all data directly into the NAND. We’ll get the good behavior, the stated endurance, and the data stability we require for reliable, software-defined storage to a complex workload.

    In short, your device will behave much as we saw in the first chart: a nice, flat, fast performance profile. A good cache device. If it’s NVMe it may be even more impressive, but that’s a thought for another time.

     

    Finally, how do you identify a device with a non-volatile buffer cache?

    Datasheet, datasheet, datasheet. Look for language like:

    • “Power loss protection” or “PLP”
      • Samsung SM863, and related
      • Toshiba HK4E series, and related
    • “Enhanced power loss data protection”
      • Intel S3510, S3610, S3710, P3700 series, and related

    … along with many others across the industry. You should be able to find a device from your favored provider. These will be more expensive than consumer grade devices, but hopefully we’ve convinced you why they are worth it.

    Be safe out there!

    / Dan Lovinger

    #AzureAD Mailbag: International Deployments Round 2

    Hey y’all, Mark Morowczynski here with another Friday mailbag. I realize we’ve been sort of slacking on these for the last 2 months, but we are looking to finish the calendar year strong. Key word being looking. We’ll continue last week’s topic of things to consider with international deployments. Let’s dive in.

     

    Question 1:

    Your documentation states that Azure AD Premium is not supported in China. I am a US customer but have 200 employees located in China. Will my users in China not be able to get the Azure AD Premium functionalities such as MFA, SSPR, and Azure App Proxy?

    Answer 1:

    We hear this question frequently from customers who operate in China, but I’m going to borrow some words from Brjann Brekkan (another member of our team) for this response:

    Azure AD Premium and its capabilities are not currently available in Tenants hosted in our Mainland China Azure AD instance, such as when a company signs up for Office 365 or Azure operated by our partner 21Vianet. A company with a Tenant in our Global Azure AD instance, hosted in our global datacenters, has access to Azure AD Premium services, and all employees in that Tenant, including those in China, can leverage the services.

    Question 2:

    I have multiple brands within my company. Some of the companies I’ve acquired are in different countries and have their own IT staff that manages their identities. Is there a way I can limit admin access based on location? (e.g. Help Desk in France supports users only in France)

    Answer 2:

    Today this can be done with Administrative Units. There are some caveats though:

    • The only resources that Administrative Units can be applied to are users
    • Configuring these can only be done through PowerShell (there is no GUI as of today)
    • Administrative Units are not dynamic (meaning you must manually add new users as they become qualified to be a member of the scoped group or a member of the role that you have defined)

    Even with these caveats, this is still a very powerful tool for scoping and decreasing surface area from a risk perspective. Remember, this is a defense in depth type strategy. Privileged accounts are high value targets – shrink your surface area as much as possible!

    Question 3:

    I’m concerned about charges that may occur for my users that operate outside of the US. Will Microsoft charge my users long distance fees for SMS/Phone calls? Where is the SMS/Phone calls coming from with Azure MFA and SSPR?

    Answer 3:

    Azure AD phone calls come from the United States – which is why the caller ID phone number must be a US number. However, text messages may come from US (+1), UK (+44) or other countries. It may vary for each authentication based on the destination and the provider we use to send each text message.

    We do not charge the end user or tenant for processing calls/SMS for countries outside of the United States. Some providers may charge for receiving long-distance SMS/phone calls, but this is purely based on the user’s carrier (this is no different than requiring a phone plan to receive SMS or voice calls). We do have other options available for both SSPR and MFA that do not require SMS/phone calls (e.g. the Azure Authenticator app for MFA and the Q/A gate for SSPR), but they do require internet connectivity.

    Fun Fact: For Azure MFA, you can change the Caller ID phone number, but only to US phone numbers.

     

     

    Question 4:

    Within my company, we own multiple brands; we are looking to customize the feel of our O365 Portal/Access Panel page. It only gives me one option to brand my tenant – what are other customers doing?

    Answer 4:

    Yes, each image has an independent upload for branding, as seen on the Large Illustration below. Most companies that have deployed Azure AD and own multiple brands usually do one of two things:

    1. Use an icon from their parent company that represents their company as a whole (a recognizable image for all brands)
    2. Use the “Large Illustration/Background Color” image and incorporate multiple brands on this same image. This allows a unified company representation on your main log on page for the cloud. This image is seen in the top left corner of the screenshot below.

     

     

    Image Options to Upload

     

    Question 5:

    I operate in multiple countries and I’m about to deploy multiple Microsoft cloud services. Where can I get started with reading up on Microsoft’s documentation on how data is managed from a global perspective?

    Answer 5:

    I recommend visiting Microsoft’s Trust Center to learn more about how Microsoft helps secure your data. Here are a few links to get you started:

    Please let us know if you have any additional feedback. Also, join me or one of my team members in a live discussion on our Webinar platform, where we cover a variety of topics. Join the conversation here. I look forward to chatting with y’all!

     

    We hope you’ve found this post and this series to be helpful. For any questions you can reach us at
    AskAzureADBlog@microsoft.com, the Microsoft Forums and on Twitter @AzureAD, @MarkMorow and @Alex_A_Simons

     

    -Chad Hasbrook, Mark Morowczynski, Shawn Bishop, Yossi Banai, Damien Gallot, Brjann Brekkan, Ariel Gordon, and Dan Mace.

    The Data Science Workloads in Visual Studio 2017 RC

    From getting an automatic photo tag on Facebook, to a product recommendation online, to searching your photos using keywords, or getting a fraud alert on your credit card, … Machine Learning and Data Science are all around us in one form or another.

    Today we’re delighted to announce that Visual Studio 2017 RC now has dedicated workloads for Data Storage and Data Science. These two stacks provide all the backend services and tooling you need to build your next-generation intelligent apps and services.

    Let’s take a look at the workloads in a bit more detail:

    1. Data storage and processing – Big Data Storage and Advanced Analytics
    2. Data Science – All the tooling you need to analyze, build models, and create smart apps
      • Python Tools for Visual Studio – Desktop, Web, Scientific, Data Science/ML
      • R Tools for Visual Studio – Primarily Stats and Data Science/ML
      • F# – A functional-first .NET language suited for a variety of data processing tasks

    Why R and Python?

    While Python has been available for a while, R is the new entry in the VS family of languages. R is the most popular Data Science / Stats focused language and comes with a rich ecosystem of ready-to-use packages.

    There are many “language popularity” rankings out there and all of them should be taken with a grain of salt, but it’s safe to say that if you’re doing Analytics, R and Python should be in your toolbox:

    Most of the Microsoft Storage and Analytics technologies either already have R/Python support (direct or via SDKs) or will have it soon. Let’s look at the tooling next.

    Python Tools for Visual Studio

    VS 2017 RC provides rich integration for Python, covering various scenarios from machine learning to desktop to IoT to the web. It supports most interpreters such as CPython (2.x, 3.x), IronPython, Jython, PyPy, … along with the Anaconda distro and access to thousands of packages on PyPI. For the list of new features for Python, please see the product release notes.
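
    To give a feel for one of those scenarios, here’s a minimal, hypothetical sketch of the “web” case: a tiny Flask app of the kind you could create, run and debug from Visual Studio. It assumes the Flask package has been installed from PyPI, and the route and message are just placeholders:

        # Minimal Flask app you could run and debug with Python Tools for Visual Studio.
        # Assumes Flask is installed from PyPI; the names below are illustrative only.
        from flask import Flask

        app = Flask(__name__)

        @app.route("/")
        def home():
            return "Hello from Python Tools for Visual Studio!"

        if __name__ == "__main__":
            # Set a breakpoint in home() in the IDE, then browse to http://localhost:5000
            app.run(debug=True)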

    R Tools for Visual Studio

    RTVS turns VS into a powerful R IDE that includes the usual features you’d expect, like IntelliSense, debugging, a REPL and history, as well as advanced ones such as stored procedures with R that run in a SQL database, multiple independent plot windows, and Remoting. Remoting is very powerful in that it allows all the features of RTVS to be run on a remote machine (as if you had used Terminal Server). It is perfect for when you want to work on a subset of the data locally on your laptop, then connect to a large server, continue to use the full IDE features, and finally deploy your code.

    Visual Studio supports both the standard CRAN R version and the enhanced Microsoft R, which provides various performance and enterprise-focused features.

    F#

    F# is a programming language that provides support for functional programming in addition to traditional object-oriented and imperative (procedural) programming. It is a great language for data processing and has a strong third-party ecosystem for accessing, manipulating, and processing data. The Visual F# tools in Visual Studio provide support for developing F# applications and extending other .NET applications by using F# code. F# is a first-class member of .NET, and retains a strong resemblance to the ML family of functional languages.

    There’s a package for that!

    Beyond Visual Studio integration, the Data Science workload comes preinstalled with hundreds of packages that cover just about any advanced analytics scenario, from image processing to bioinformatics to astronomy. The Data Science workload by default includes:

    • The Microsoft R Client – a Microsoft-enhanced version of R that provides multi-core processing, package versioning and distributed-memory support
    • The Anaconda Python distro – a cross-platform collection of curated Python packages from Continuum.io for machine learning, scientific computing and web scenarios.
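
    As a quick, illustrative sketch of what that preinstalled stack enables (this is not from the product documentation; it simply uses scikit-learn, which ships with the Anaconda distro):

        # Train and evaluate a small classifier using packages bundled with Anaconda.
        # Illustrative only; exact package versions depend on the installed distro.
        from sklearn.datasets import load_iris
        from sklearn.model_selection import train_test_split
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import accuracy_score

        data = load_iris()
        X_train, X_test, y_train, y_test = train_test_split(
            data.data, data.target, test_size=0.25, random_state=0)

        model = RandomForestClassifier(n_estimators=100, random_state=0)
        model.fit(X_train, y_train)
        print("accuracy:", accuracy_score(y_test, model.predict(X_test)))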

    Azure Python SDK

    Azure now has SDKs covering just about every service and language, including Python. The Python Azure SDK has full support for core compute, storage, networking, Key Vault and monitoring services, on par with .NET. Management coverage includes services such as Data Lake Store and Data Lake Analytics, SQL Database, DocumentDB, etc. Data support examples include SQL Database, SQL Server, DocumentDB and Data Lake Store File System.
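
    As a small, hedged example of what the data-plane surface looks like, the sketch below writes and lists blobs using the azure-storage package as it existed around this release; class and method names may differ in other SDK versions, and the account name and key are placeholders:

        # Upload and list blobs with the azure-storage package (circa this release).
        # The storage account name and key below are placeholders, not real values.
        from azure.storage.blob import BlockBlobService

        blob_service = BlockBlobService(account_name="mystorageaccount",
                                        account_key="<account-key>")

        blob_service.create_container("reports")
        blob_service.create_blob_from_text("reports", "hello.txt", "Hello from Python!")

        for blob in blob_service.list_blobs("reports"):
            print(blob.name)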

    Join our team (virtually)!

    The entire Data Science stack, from tools to libraries, is open source and hosted on GitHub. We’d like to invite you to check out the code base, fork it, file a bug, or if you’d like, add a feature! You can find the repos here:

    One more thing: Free interactive Python & R Notebooks!

    While Visual Studio is a highly productive desktop IDE, sometimes you just need a “REPL on steroids” to do some slicing and dicing and plotting of your data right in the browser, and possibly share the results.

    Azure Notebooks is a free, hosted Jupyter notebook service.

    Jupyter is like OneNote if it supported running code: it supports text (as Markdown), code, inline graphics, etc. It currently supports R, Python 2 and Python 3 (with Anaconda distros), and F# is coming soon. The best way to learn about Azure Notebooks is to try one of the samples.
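
    To give you a feel before you do, here’s a minimal, made-up notebook cell that mixes code and inline output; it assumes the pandas and matplotlib packages that come with the Anaconda-based kernels:

        # A single notebook cell: inline plotting with pandas.
        # %matplotlib inline is a Jupyter "magic" that renders the plot in the notebook.
        %matplotlib inline
        import pandas as pd

        # Hypothetical sample data -- replace with your own dataset.
        sales = pd.DataFrame({
            "month": ["Jan", "Feb", "Mar", "Apr"],
            "units": [120, 135, 160, 180],
        })
        sales.plot(x="month", y="units", kind="bar", title="Units sold per month")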

    The free service is particularly useful for faculty and students, giving webinars, product demos, sharing live reports and more. Check out some of the thousands of high quality notebooks out there.

    Conclusion

    Data Science helps transform your data into intelligent action. Watch this Connect(); video on Data Science and Web Development to learn more. The Visual Studio Data Science workload is our first foray into providing you with everything needed to build the next generation of intelligent apps, whether on the desktop, cloud, IoT or mobile. Take it for a spin, check out the built-in libraries and packages, peruse CRAN and PyPI for even more, and let us know what you think!

    For problems, let us know via the Report a Problem option in the upper right corner, either from the installer or the Visual Studio IDE itself, or by filing an issue on the GitHub repositories for PTVS or RTVS. You may also leave suggestions on User Voice.

    Shahrokh Mortazavi, Partner PM, Visual Studio Cloud Platform Tools

    Shahrokh Mortazavi currently works in the Developer Division of Microsoft on Python and Node.js tool chain. Previously, he was in the High Performance Computing group at Microsoft. He worked on the Phoenix Compiler tool chain (code gen, analysis, JIT) at Microsoft Research and for 10 years led Sun Microsystems’ Code Generation & Optimization compiler backend teams.


    Office Online Server November release


    In May, we released Office Online Server (OOS), which allows organizations to provide their users with browser-based versions of Word, PowerPoint, Excel and OneNote, among other capabilities offered in Office Online, from their own datacenter. We will deliver most new Office Online features to Office Online Server through regular updates.

    Today, we are releasing our first significant update to Office Online Server that includes:

    • Performance improvements to co-authoring in Word Online.
    • Support for embedded Power View sheets in Excel Online.
    • Improvements when working with tables in Word Online and PowerPoint Online, including better drag-and-drop support.
    • Support for showing spelling errors inline in the PowerPoint editor.
    • Improved picture resizing in Word Online.
    • Copy and paste improvements in PowerPoint Online.
    • Support for shapes in the Excel Online viewer.
    • Significant improvements for users that rely on assistive technologies—HTML markup updated to more fully support W3C accessibility standards.

    We encourage Office Online Server customers to visit the Volume License Servicing Center to download the November release. You must uninstall the previous version of OOS before installing the November release. Moving forward, we have a release planned every four months to ensure that we can continuously deliver new value to OOS users. We will only support the latest version of OOS with bug fixes and security patches, available via Microsoft Update. Finally, we will announce a more detailed timeline for future releases within the next two weeks.

    Customers with a Volume Licensing account can download Office Online Server from the Volume License Servicing Center at no cost and will have view-only functionality, which includes PowerPoint sharing in Skype for Business. Customers that require document creation, edit and save functionality in Office Online Server need to have an on-premises Office Suite license with Software Assurance or an Office 365 ProPlus subscription. For more information on licensing requirements, please refer to our product terms.

    The post Office Online Server November release appeared first on Office Blogs.

    Live Unit Testing in Visual Studio 2017 RC


    We are very proud to introduce a new feature in Visual Studio 2017 called Live Unit Testing! This feature will make it easy for you to maintain quality and test coverage during rapid development and take your productivity to a whole new level. Imagine you are fixing a bug in a code base you may not be completely familiar with. With Live Unit Testing you can know right away, as you are making edits to fix the bug, that you did not break any other part of the system. Getting this feedback in-situ, as you type, will give you extra confidence, make you more productive, and maybe even make fixing bugs and writing unit tests enjoyable!

    Live Unit Testing automatically runs the impacted unit tests in the background as you edit code, and visualizes the results and code coverage live, in the editor, in real-time. In addition to giving feedback on the impact that your changes had on the existing tests, you also get immediate feedback on whether the new code you added is already covered by one or more existing tests. This will gently remind you to write unit tests as you are making bug fixes or adding features. You will be on your way to the promised land where there is no test debt in your code base!

    Live Unit Testing is present in the Enterprise edition of Visual Studio 2017 and it’s available for C# and VB projects that target the .NET Framework. It uses the VB and C# compilers to instrument the code at compile time. Next, it runs unit tests on the instrumented code to generate data, which it analyzes to understand which tests are covering which lines of code. It then uses this data to run just those tests that were impacted by a given edit, providing immediate feedback on the results in the editor itself. As more edits are made or more tests are added or removed, it continuously updates the data that is used to identify the impacted tests.

    How to start Live Unit Testing

    Enabling Live Unit Testing is as simple as going to the Test command on the top-level menu bar and starting it, as shown in the image below.

    Live Unit Testing in Visual Studio works with three popular unit testing frameworks: MSTest, xUnit and NUnit. When using these, you will need to ensure that the adapters and frameworks meet or exceed the minimum versions given below. If Live Unit Testing is not working for you, please remove older adapter and test framework references from your existing projects (make sure you remove the reference to “Microsoft.VisualStudio.QualityTools.UnitTestFramework”) and add the new ones. You can get all of these from NuGet.org.

    • For xunit you will need xunit.runner.visualstudio version 2.2.0-beta3-build1187 and xunit 2.0 (or higher versions)
    • For NUnit you will need NUnit3TestAdapter version 3.5.1 and NUnit version 3.5.0 (or higher versions)
    • For MSTest you will need MSTest.TestAdapter 1.1.4-preview and MSTest.TestFramework 1.0.5-preview (or higher versions)

    Live Unit Testing experience

    Once enabled, Live Unit Testing helps you quickly see whether the code you’re writing is covered and if the tests that cover it are passing, without leaving the editor. Unit test results and coverage visualizations appear on a line-by-line basis in the code editor, as shown in the sample image below:

    If a line of executable code is covered by at least one failing test, Live Unit Testing will decorate it with a red “×”.

    If a line of executable code is covered by only passing tests, Live Unit Testing will decorate it with a green “√”.

    If a line of executable code is not covered by any test, Live Unit Testing will decorate it with a blue dash.

    The real-time code coverage and test result information provided by Live Unit Testing removes the burden of manually selecting and running tests. The live feedback also serves to notify you instantly if your change has broken the program – if inline visualizations shift from green “√”s to red “×”s, you know you broke one or more tests.

    At any point in time you can hover over the “√” or “×” to see how many tests are hitting the given line, as shown in the image below.

    You can click on the “√” or “×” to see which tests are hitting the given line, as shown in the image below.

    When you hover over a failed test in the tool tip, it expands to provide additional information and give more insight into the failure, as shown in the image below.

    Additionally, you can navigate directly to the failed test by clicking on it in the tool tip. Then, from the failed test, you can easily debug to the product code, make edits, and continue, all while Live Unit Testing runs in the background. There is no need to stop and restart Live Unit Testing for the debug, edit and continue cycle.

    At any time, you can temporarily pause or completely stop Live Unit Testing; for example, when you are in the middle of a refactoring and you know that your tests will be broken for a while. It is as simple as going to the Test command on the top-level menu bar and clicking the desired action, as shown below.

    When paused, you will not see any coverage visualization in the editor. When you are ready to see them again, you can un-pause it by clicking “Continue” in the Live Unit Testing menu. When in pause mode, Live Unit Testing still keeps all the data that it had collected thus far. On “Continue,” Live Unit Testing will do the necessary work to catch up with all the edits that have been made while it was paused, and will update the glyphs appropriately.

    You can also stop Live Unit Testing completely if you desire. When Live Unit Testing is started again after it has been stopped, it will take longer to show the glyphs than when it was un-paused. This is because it loses all data when it is stopped.

    Conclusion

    Live Unit Testing will improve your developer productivity, test coverage and the quality of your software. .NET developers out there, please do check out this feature in Visual Studio 2017. For developers who are part of a team that practices test-driven development, Live Unit Testing gamifies their workflow; in other words, all their tests will be failing at first, and as they implement each method they will see them turn green. It will evoke the same feeling as a nod of approval from your coach, who is watching you intently from the sidelines, as you practice your art!

    Watch this Live Unit Testing video, where we demonstrate this feature.

    Joe Morris, Senior Program Manager, Visual Studio
    @_jomorris

    Joe has been with Microsoft for 19 years, with a particular focus on static analysis and developer productivity for the last three years.

    Manish Jayaswal, Principal Engineering Manager, Visual Studio

    Manish has years of management experience in commercial software development, with deep technical expertise in compiler technology, debuggers, programming languages, quality assurance and engineering systems.

    Announcing UWP Community Toolkit 1.2


    Following our commitment to ship new versions at a fast pace, I’m thrilled to announce the availability of UWP Community Toolkit 1.2. If you want to get a first glance, you can install the UWP Community Toolkit Sample App directly from the Windows Store.

    The focus of this version was to stabilize current features while adding the most wanted ones that were missing. A full list of features and changes is available in the release notes. Here’s a quick preview:

    • New Helpers. We worked on providing 7 new helpers to help with everyday tasks:
      • BackgroundTaskHelper to help you work with background tasks
      • HttpHelper to help you deal with HTTP requests in a secure and reliable way
      • PrintHelper to help you print XAML controls
      • DispatcherHelper to help you work with tasks that need to run on the UI thread
      • DeepLinkHelper to simplify the management of your deep links
      • WebViewExtensions to allow you to bind HTML content to your WebView
      • SystemInformation to gather all system information into a single and unique class
    • New Controls
      • We introduced a new control named MasterDetailView that helps developers create master/detail user experiences


    • Updates. We updated the following features:
      • ImageCache was improved to provide a more robust cache
      • HeaderedTextBlock and PullToRefreshListView now accept ContentTemplate customization
      • Facebook service now supports paging when requesting data
      • Renamed BladeControl to BladeView. BladeView now also derives from ItemsControl. This allows for more common conventions like data binding and aligns the control with SDK naming. To guarantee backward compatibility, we kept the previous control and flagged it as obsolete. This way, developers can still reference the previous version and everything will work just fine; a compiler warning will simply encourage you to move to the new version. The current plan is to keep obsolete classes until the next major version and then remove them.

    We saw an increasing number of contributions from a community of 48 developers, which led to several new features and improvements. We also observed some healthy dialogue about what should or should not be included in the toolkit, architecture best practices, and feature prioritization, which is going to drive even higher quality in the toolkit.

    For example, I would like to share the story behind the MasterDetailView. The story began with an issue created on the GitHub repo: “We need a MasterDetailView.” Immediately, the community reacted with tremendous energy, discussing the implementation details, the desired features and the philosophy behind the control. We even ended up with two different implementations at some point (the community then voted and discussed to decide which one best fit the toolkit principles). If you want to understand how a united community can create wonderful code, I encourage you to read this thread.

    You can find the roadmap of the next release here.

    If you have any feedback or if you’re interested in contributing, see you on GitHub!

    Download Visual Studio to get started!

    This Week on Windows: New features in Windows Maps, Cortana and more


    We hope you enjoyed this week’s episode of This Week on Windows! Read more about what’s new in the Windows Maps app and the new ways Cortana can help you with your to-do lists, or head over here to read our Windows 10 Tip on four ways to use Windows Ink in the Windows Maps app.

    Here’s what’s new in the Windows Store this week:

    14-day free trial of Sling TV

     Sling TV app in the Windows Store

    Your Sling TV exclusive is here! Get an extended 14-day free trial of Sling TV and start watching the best of live TV straight from Windows 10. Stream your favorite live shows and on-demand entertainment, including live college basketball and football, and your favorite shows from premium channels. Download the app today and get 14 days FREE of live TV, only on Windows 10.

    Suicide Squad

    Suicide Squad

    In the hopes of averting the apocalypse, a secret government agency must recruit some of the world’s most dangerous criminals to form an elite task force. Do these supervillains have a shot at success? Or is it a suicide mission? Own the extended cut of Suicide Squad ($19.99), now in the Movies & TV section of the Windows Store.

    Batman: The Telltale Series

    Batman: The Telltale Series

    With Gotham City’s first family mired in corruption and an old friend now a dangerous adversary, the life of the Dark Knight is turned upside down in Episode 2 of Batman: The Telltale Series ($4.99/episode). What was Thomas Wayne entangled in, and why was he killed? Determined to learn the truth about his father, Bruce sets out to question those involved in Gotham’s criminal past.

    Teen Wolf Season 6  

    Teen Wolf Season 6

    As the final season of Teen Wolf begins, Scott and the pack are entering their last few months of high school. But when they lose their closest ally, Scott and Lydia will be forced to stand alone against one of the greatest threats they’ve ever faced. Don’t miss the season premiere of Teen Wolf ($34.99), available now in the Movies & TV section of the Windows Store.

    Have a great weekend!

    ICYMI – Microsoft Connect, Linux, WIP and a new Insider Preview Build


    Just when you thought you’d seen it all at the MVP Summit, we come back with a few exciting announcements from Connect. We want to thank you again for joining us, and if you couldn’t make it this time, continue reading to see what you might’ve missed.

    Connect(); 2016

    Connect, the annual Visual Studio-centered developer conference, announced the latest version of our favorite IDE, a preview of the new Visual Studio for Mac, Team Foundation Server 2017 and a preview of Visual Studio Mobile Center. On top of that, we announced our platinum-level partnership with the Linux Foundation. We’re thrilled to finally share all of these updates with you – follow the links below to learn more.

    UWP Community Toolkit Update 1.2

    Our goal with this update was to stabilize current features while adding the most wanted ones that were missing. Check out the blog to see the full list of updates, additions and assorted bells and whistles.

    Windows Insider Preview Build 14971

    Coming to you in this week’s build: improved reading experience in Microsoft Edge, new opportunities in 3D, PowerShell updates and a whole bunch of PC fixes.

    And that’s all! Make sure to tweet us if you have any questions or comments and, as always, see you next week.

    Download Visual Studio to get started.

    The Windows team would love to hear your feedback.  Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.
