Channel: TechNet Technology News

Brad Anderson’s Lunch Break / s3 e9 / Brad Strock, CIO, PayPal


Today I start a really interesting conversation with Brad Strock, the CIO of PayPal.

Brad has the distinction of executing a very rare feat in the field of IT: Overseeing the unbelievably complex technical requirements that accompanied the divorce of PayPal and eBay. Brad shares some great inside info on that process, as well as his organization's status as the fastest-ever migration to Office 365 (25k mailboxes in 6 weeks!). The backstory is pretty incredible.


To learn more about Microsoft Enterprise Mobility + Security, visit: http://www.microsoft.com/ems.

Keep an eye out for part 2 of my drive with Brad this Thursday.

You can subscribe to these videos here, or watch past episodes here: aka.ms/LunchBreak.


Our continuing commitment to your privacy with Windows 10


Many of you have asked for more control over your data, a greater understanding of how data is collected, and the benefits this brings for a more personalized experience. Based on your feedback, we are launching two new experiences to help ensure you are in control of your privacy.

First, today we’re launching a new web-based privacy dashboard so you can see and control your activity data from Microsoft, including location, search, browsing, and Cortana Notebook data across multiple Microsoft services. Second, we’re introducing in Windows 10 a new privacy set up experience, simplifying Diagnostic data levels and further reducing the data collected at the Basic level. These Windows 10 changes are being introduced in a Windows Insider build soon for feedback first and will be rolled out to everyone when the Windows 10 Creators Update becomes available.

We also recognize organizations have different needs than individuals when addressing privacy. Learn more about what we’re doing to help IT pros manage telemetry and privacy within their organizations here.

Microsoft privacy dashboard introduces new ways to review and manage activity data

We heard that you want better ways to be able to see and manage activity data collected by Microsoft services. Today, we’re taking a step forward in supporting our privacy principle of transparency with the introduction of a new Microsoft privacy dashboard on the web that lets you easily see and manage your activity data.

When you are signed in with your Microsoft account, you can go to account.microsoft.com/privacy to review and clear data such as browsing history, search history, location activity, and Cortana’s Notebook – all in one place. This is our first step in expanding the tools that give you visibility and control over your data spanning Microsoft products and services, and we will continue to add more functionality and categories of data over time.

Privacy changes coming to the Creators Update

Shortly after the launch of Windows 10, we shared how we’re protecting your privacy, and we’ve been listening to your feedback since. We are continuing this commitment to make it as easy as possible for you to make informed choices about your privacy with Windows 10. With that in mind, in the Creators Update we are making some changes: simplifying the privacy settings themselves and improving the way we present them to you.

First, we will introduce a new set up experience for you to choose the settings that are right for you. This experience, which replaces the previous Express Settings, will look slightly different depending on the version of Windows you are using. If you are moving from Windows 7 or Windows 8, or doing a fresh install of Windows 10, the new set up experience will clearly show you simple but important settings, and you will need to choose your settings before you can move forward with setup. If you are already using Windows 10, we will use notifications to prompt you to choose your privacy settings. We’ll introduce this process in an upcoming Windows Insider build.

Second, we’ve simplified our Diagnostic data collection from three levels to two: Basic and Full. If you previously selected the Enhanced level, you’ll have the option to choose Basic or Full with the Creators Update.

Third, we’ve further reduced the data collected at the Basic level. This includes data that is vital to the operation of Windows. We use this data to help keep Windows and apps secure, up-to-date, and running properly when you let Microsoft know the capabilities of your device, what is installed, and whether Windows is operating correctly. This option also includes basic error reporting back to Microsoft.

Below is a look at the new privacy settings set up experience we will be introducing to Windows Insiders in an upcoming build. We have made this new set up experience voice-capable, providing greater accessibility for customers. Voice data remains on the device as part of this set up process.


User interface designs presented to Windows Insiders are subject to change before general availability.

As you make your choices in the new set up experience, we’ll share additional information about what impact each choice will have on your Windows experience as shown below.

Privacy Changes Coming to Creators Update

User interface designs presented to Windows Insiders are subject to change before general availability.

As always, customers can review all their privacy settings, including these, and make changes at any time under Settings.

Looking forward

When it comes to your privacy, we strive to make choices easy to understand while also providing clear visibility and control over your data. We believe finding the right balance is one of our most important tasks in delivering great personalized experiences that you love and trust.

Today, we take another step in our journey to make changes that address your feedback and help make your experience with Windows and other Microsoft products better and richer. We want you to be informed about and in control of your data, which is why we’re working hard on these settings and controls. And regardless of your data collection choices, we will not use the contents of your email, chat, files, or pictures to target ads to you.

Together we’re on a technology journey as devices, the Internet, and smart things all around us are changing the way we communicate, play and get stuff done. At Microsoft, a key part of our journey is engaging with our customers, listening to feedback and trying new ideas. Thank you for all your feedback and please continue to share your thoughts here.

Terry

The post Our continuing commitment to your privacy with Windows 10 appeared first on Windows Experience Blog.

Lynbrook Public Schools digital journey and the unexpected app that changed everything


Today’s post was written by Jill Robinson, instructional technology staff developer for Lynbrook Public Schools.

As I walked through the halls of Lynbrook North Middle School early one morning, I noticed half a dozen students sitting in the halls, intently drawing and explaining how to perform a seemingly complicated math problem. Surface Pro tablets in hand, their goal was to create a teaching video on how to calculate the sales tax and discounts on store merchandise. After several takes, and lots of giggling, the pairs of students were satisfied with their teaching videos. With one click, they embedded the Microsoft Snip teaching videos to the OneNote Class Notebook Collaboration Space for all their classmates to view.

lynbrook-public-schools-digital-journey-1

Just three years ago, this type of learning experience would not have been possible for these students. In the fall of 2014, our school district began a “one-to-one” tablet program, beginning with a single grade at our two middle schools. Many months prior to the start of this school year, 6th-grade teachers received devices and began weekly training sessions. Getting familiar with the devices—as well as learning new software—was only part of the goal. We also wanted to integrate this new technology in the classroom and needed to answer questions that came up as teachers entered this new phase that would change their instructional approach.

  • What would the integration of this new technology look like in the classroom?
  • How would it help student learning and engagement?
  • Would this integration look the same across subjects such as Math, English and Spanish?

Office 365 was the first software to be taught. Teachers were already well-versed in the Office desktop suite, but the introduction of the “cloud” was daunting to many. Throughout the training, there were many hiccups, to put it mildly. But the idea of students having access to their digital work beyond the school day was something they never had before—and it was exciting! While working in the cloud had many advantages—collaborative opportunities, sharing of files and access to digital work at any time—there was something missing. Was there an easier way to distribute files to students without bombarding them with a long list of “shared” documents? How would teachers assess student work in the cloud? Where was the organizational piece of this digital puzzle?

A new app called OneNote Class Notebook

Just a few months into the one-to-one tablet initiative, a new app called OneNote Class Notebook showed up. Upon exploring this new tool, we discovered that this program just might provide teachers with what they needed to organize and facilitate their digital curriculum. When we introduced this new application to a team of 6th-grade teachers, they ran with it almost immediately! To this day, I give the middle school teachers a lot of credit—learning to navigate unfamiliar software as educators while simultaneously introducing it to 6th-graders wasn’t easy. Some instructional time was lost in that early phase.

lynbrook-public-schools-digital-journey-2

Fast forward two years, three grades and 500 devices later—every one of our middle school students uses OneNote Class Notebook for most of their school day (and most use it at home as well). What one might find most intriguing is how the use of this tool has evolved in the time teachers and students have been using it.

As teachers have become more familiar and more comfortable with the Class Notebook, they have expanded their horizons and explored new and challenging methods to help students learn in exciting ways. The addition of the Class Notebook add-in has not only helped teachers review student work with greater ease, but given them the ability to “push out” content to all students at once. This not only saves tons of paper, but valuable classroom time as well.

Remember those math students at the beginning of this post who were working on digital videos? Being masters of the Class Notebook, they were assigned to use and incorporate other types of digital media to demonstrate they could figure out the sales tax or discount of an item. Microsoft Snip—a quick and easy-to-use “show-and-tell” tool—helped students create instructional videos, embed them into their Class Notebook and share with their classmates. All students now had a digital study guide to refer to and had also created evidence that they could, indeed, figure out the sales tax on the latest Xbox. Watch their video.

lynbrook-public-schools-digital-journey-3

Spanish students have been using the audio and video recording tools in their Class Notebooks. This has not only helped our students practice their conversational Spanish; sharing their recordings in Class Notebook with others has also motivated them to provide their best work. There is a bigger audience listening. As their teacher enthusiastically stated, “I can easily assess all students’ speaking progress without having to dedicate an entire period to listening to each student one-on-one. It saves a lot of class time and allows me to do more with them!”

lynbrook-public-schools-digital-journey-4

As the OneNote Class Notebook continues to evolve, we look forward to new features that can help our students grow, think, solve and strive to become lifelong learners.

—Jill Robinson

The post Lynbrook Public Schools digital journey and the unexpected app that changed everything appeared first on Office Blogs.

Embedding a Power BI report into Salesforce

Many people use Salesforce to manage their Accounts and keep track of their Opportunities. They often use Power BI to visualize Salesforce data and bring in additional data sources, but that means switching platforms often. This walkthrough addresses that and shows you how to embed a Power BI report in a Visualforce page inside Salesforce, allowing you to view all your data and reports in a single application.

The week in .NET – On .NET with Reed Copsey, Jr., Orchard Harvest, Ammy, Concurrency Visualizer, Eco


To read last week’s post, see The week in .NET – On .NET with Glenn Versweyveld, Protobuf.NET, Arizona Sunshine.

Starting this week, UWP links, which have been in the general .NET section until now, are getting their own section thanks to Michael Crump, who graciously agreed to provide weekly content, along with Phillip Carter for F#, Stacey Haffner for gaming, and Dan Rigby for Xamarin.

Orchard Harvest

The Orchard CMS community will hold its yearly conference in New York City from February 21 to 22. This is the last week to benefit from early registration fees. I’ll be there myself to give a talk about .NET Core and C# 7.

On .NET

Last week, Reed Copsey, Jr., executive director of the F# Software Foundation, was on the show to speak about the Foundation’s mentoring and speaker programs:

This week, we’ll speak with David Pine about building a magic mirror. The show is on Thursdays and begins at 10AM Pacific Time on Channel 9. We’ll take questions on Gitter, on the dotnet/home channel and on Twitter. Please use the #onnet tag. It’s OK to start sending us questions in advance if you can’t do it live during the show.

Package of the week: Ammy

XAML is a way to describe instances of components. It uses an XML dialect, which is not to everyone’s taste and may not be the best format for manual authoring. The same ideas that XAML implements can, however, be implemented perfectly well with other persistence formats.

Ammy is one such format, inspired by JSON and Qt’s QML. It’s lightweight, expressive, and extensible.

Tool of the week: Concurrency Visualizer

Concurrency Visualizer is an invaluable extension to Visual Studio that helps you visualize multithreaded application performance. It can monitor processor and core utilization and threads, spot anti-patterns, and recommend best practices.

Concurrency Visualizer

Sergey Teplyakov has a great post this week on understanding different GC modes with Concurrency Visualizer.

Game of the week: Eco

Eco is a global survival game with a focus on ecology and collaboration. In Eco, players must team up to build a civilization and evolve it quickly enough to destroy an incoming meteor before it takes out the planet, but not so quickly that it destroys the ecosystem and everyone along with it. Eco takes the typical survival genre and puts a unique spin on it by providing a fully simulated ecosystem, where every single action taken affects the countless species, even the humans. (If not properly balanced, it is possible to destroy the food source and cause a server-wide perma-death.) Players also establish and run the government by enacting laws, a criminal justice system to enforce those laws, and the economy by selling goods and services.

Eco

Eco was created by Strange Loop Games using C# and Unity for the client and ASP.NET and the .NET Framework for their website and server backend. It is currently in alpha for Windows, Mac, and Linux. Eco is also being piloted in several schools as a means to teach students about ecology, collaboration, and cause and effect.

User group meeting of the week: Serverless .NET Core app for the AWS IoT Button in San Diego, CA

The λ# user group holds a meeting on Wednesday, January 18, at 6:00 PM in San Diego, CA where you’ll learn how to build a serverless .NET Core app for the AWS IoT Button.

.NET

ASP.NET

F#

New F# Language Proposal:

Check out F# Weekly for more great content from the F# community.

Azure

UWP

Games

And this is it for this week!

Contribute to the week in .NET

As always, this weekly post couldn’t exist without community contributions, and I’d like to thank all those who sent links and tips. The F# section is provided by Phillip Carter, the gaming section by Stacey Haffner, the Xamarin section by Dan Rigby, and the UWP section by Michael Crump.

You can participate too. Did you write a great blog post, or just read one? Do you want everyone to know about an amazing new contribution or a useful library? Did you make or play a great game built on .NET?
We’d love to hear from you, and feature your contributions on future posts:

This week’s post (and future posts) also contains news I first read on The ASP.NET Community Standup, on Weekly Xamarin, on F# weekly, and on Chris Alcock’s The Morning Brew.

How to use certificates to authenticate computers in workgroups or untrusted domains


System Center 2012 R2 Data Protection Manager (DPM 2012 R2) supports protection of computers in workgroups and untrusted domains using local accounts and NTLM; however, in scenarios where an organization does not allow the creation of local accounts, this solution does not work. As an alternative, DPM 2012 R2 now allows the use of certificates to authenticate computers in workgroups or untrusted domains. DPM supports the following data sources for certificate-based authentication when they are not in trusted domains:

  • SQL Server
  • File server
  • Hyper-V

Note that DPM also supports the data sources above in clustered deployments.

The following data sources are not supported:

  • Exchange Server
  • Client computers
  • SharePoint Server
  • Bare Metal Recovery
  • System State
  • End user recovery of file and SQL
  • Protection between a primary DPM server and a secondary DPM server using certificates. Certificate-based authentication between primary and secondary DPM servers is not supported; the two servers need to be in the same domain or in mutually trusted domains.

If you have this scenario in your environment, we have a new article available that will guide you through all of the steps required for setting up System Center 2012 R2 Data Protection Manager to protect virtual machines (VMs) running in a Windows Server 2012 R2 workgroup, or VMs running in a Windows Server 2012 R2 Hyper-V cluster, in an untrusted forest using certificate authentication. You can download this new whitepaper here.

Using Power BI Audit Log and PowerShell to assign Power BI Pro licenses

With the release of a new auditing event for Power BI, you can use PowerShell to automate Power BI Pro license assignments. Auditing with Power BI has been available for a…
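The linked post has the full walkthrough; purely as an illustration of the license-assignment half, here is a minimal PowerShell sketch using the MSOnline module (the user, tenant prefix, and SKU id below are placeholders; verify the exact SKU id for your tenant with Get-MsolAccountSku):

# Minimal sketch: assign a Power BI Pro license to a user identified from the audit log.
# Assumes the MSOnline module is installed and you have Azure AD admin rights.
Import-Module MSOnline
Connect-MsolService

$upn    = "user@contoso.com"          # placeholder user taken from the audit log
$proSku = "contoso:POWER_BI_PRO"      # placeholder tenant prefix; check Get-MsolAccountSku

# A usage location must be set before a license can be assigned.
Set-MsolUser -UserPrincipalName $upn -UsageLocation "US"
Set-MsolUserLicense -UserPrincipalName $upn -AddLicenses $proSku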

Announcing Columnstore Indexes and Query Store support in Database Engine Tuning Advisor


The latest version of Microsoft SQL Server Database Engine Tuning Advisor (DTA) supports two new features: (a) Ability to recommend columnstore indexes in addition to rowstore indexes, and (b) Ability to automatically select a workload to tune from the Query Store. These new features are available when tuning for SQL Server 2016 Database Engine (or later) versions.

Recommendations for columnstore indexes

Data warehousing and analytic workloads often need to scan large amounts of data, and can greatly benefit from columnstore indexes. In contrast, rowstore (B+-tree) indexes are most effective for queries that access relatively small amounts of data searching for a particular range of values. Since rowstore indexes can deliver rows in sorted order, they can also reduce the cost of sorting in query execution plans. Therefore, the choice of which rowstore and columnstore indexes to build for your database is dependent on your application’s workload.

The latest version of DTA can analyze the workload and recommend a suitable combination of rowstore and columnstore indexes to build on the tables referenced by the workload. This article highlights the performance improvements achieved on real customer workloads by using DTA to recommend a combination of rowstore and columnstore indexes.

Tune Database using Workload from SQL Server Query Store

The Query Store feature in SQL Server automatically captures a history of queries, plans, and runtime statistics, and persists this information along with a database. It stores query execution statistics summarized over time intervals so you can see database usage patterns and understand when query plan changes happened on the server. DTA now supports a new option to analyze the Query Store to automatically select an appropriate workload for tuning. For many DTA users, this can take away the burden of having to collect a suitable workload file using SQL Server Profiler. This feature is only available if the database has Query Store turned on.
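The Query Store option assumes the feature is already enabled on the target database. As a minimal sketch (the server and database names are placeholders), you can turn it on from PowerShell with the SqlServer module's Invoke-Sqlcmd:

# Enable Query Store on a sample database so DTA can use it as a workload source.
Import-Module SqlServer
$query = @"
ALTER DATABASE [AdventureWorks2016] SET QUERY_STORE = ON;
ALTER DATABASE [AdventureWorks2016] SET QUERY_STORE (OPERATION_MODE = READ_WRITE);
"@
Invoke-Sqlcmd -ServerInstance "localhost" -Query $query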

Next Steps

Download the latest version of Database Engine Tuning Advisor.

For additional documentation on these features, see also:

  • Columnstore Index Recommendations in Database Engine Tuning Advisor (DTA)
  • Tuning Database Using Workload from Query Store
  • Performance Improvements using Database Engine Tuning Advisor (DTA) recommendations


SQL Server R Services – Why did we build it?


Author: Nellie Gustafsson

This is the first post in a series of blog posts about SQL Server R Services. We want to take you behind the scenes and explain why we have built this feature and deep dive into how it works.

Future posts will include details about the architecture, highlighting advanced use cases with code examples to explain what can be done with this feature.

Before we get into details, we want to start off by giving you some background on why we built SQL Server R Services and how the architecture looks.

Making SQL Server a data intelligence platform

SQL Server R Services is an in-database analytics feature that tightly integrates R with SQL Server. With this feature, we want to provide a data intelligence platform that moves intelligence capabilities provided with R closer to the data. So why is that a good thing? The short answer is that a platform like this makes it much easier to consume and manage R securely and at scale in applications in production.

intelligence-db

Challenges with open source R

There are three major challenges with open source R. This is how we address those challenges by moving intelligence closer to the data in SQL Server:

Challenge 1 – Data Movement

Moving data from the database to the R runtime becomes painful as data volumes grow, and it carries security risks.

Our solution: Reduce or eliminate data movement with In-Database analytics

o   There is no need to move data when you can execute R scripts securely on SQL Server. You can still use your favorite R development tool and simply push the compute to execute on SQL Server using the compute context.

Challenge 2 – Operationalize R scripts and models

Calling R from your application in production is not trivial. Often, you must recode the R script in another language, which can be time consuming and inefficient.

Our solution:

o   Use familiar T-SQL stored procedures to invoke R scripts from your application

o   Embed the returned predictions and plots in your application

o   Use the resource governance functionality to monitor and manage R executions on the server

Challenge 3 – Enterprise Performance and scale

R runs single-threaded and only accommodates datasets that fit into available memory.

Our solution:

o   Use SQL Server’s in-memory querying and Columnstore indexes

o   Leverage RevoScaleR support for large datasets and parallel algorithms

SQL Server Extensibility Architecture

The foundation of this data intelligence platform is the new extensibility architecture in SQL Server 2016.

Extensibility Framework – Why?

The way we make SQL Server and R work together is by using a framework we call the extensibility architecture. Previously, CLR or extended stored procedures would enable you to run code outside the constructs of SQL Server, but in those cases the code still runs inside the SQL Server process space. Having external code running inside the SQL Server process space can cause disruption, and it is also not legally possible to embed runtimes that are not owned by Microsoft.

Instead, we have built a new generic extensibility architecture that enables external code, in this case R programs, to run, not inside the SQL Server process, but as external processes that launch external runtimes. If you install SQL Server with R Services, you will be able to see the new Launchpad service in SQL Server configuration manager:

sql-config-mgr

T-SQL interface: sp_execute_external_script

So how is an external script, like an R script, executed using the extensibility architecture? Well, we have created a new special stored procedure called sp_execute_external_script for that. This stored procedure has all the benefits of any other stored procedure. It has parameters, can return results and is executable from any TSQL client that can run queries. It also enables you to execute external scripts inside SQL Server.
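For reference, a minimal call looks like the sketch below, shown here from PowerShell via Invoke-Sqlcmd (the server name is a placeholder, and 'external scripts enabled' must be set to 1 on the instance). The R script simply echoes the input query back as the result set:

# Run a trivial R script inside SQL Server via sp_execute_external_script.
Import-Module SqlServer
$query = @"
EXEC sp_execute_external_script
    @language     = N'R',
    @script       = N'OutputDataSet <- InputDataSet;',
    @input_data_1 = N'SELECT TOP (5) name, object_id FROM sys.objects'
WITH RESULT SETS ((name sysname, object_id int));
"@
Invoke-Sqlcmd -ServerInstance "localhost" -Query $query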

When you execute the stored procedure sp_execute_external_script, we connect to the Launchpad service using a named pipe and send a message to that service telling it what we want to run and how. We currently support only R as the external language.

Launchpad has a registration mechanism for launchers specific to a runtime/language. Based on the script type, it will invoke the corresponding launcher which handles the duties for invoking and managing the external runtime execution. This launcher creates a Satellite process to execute our R Scripts.

The Satellite process has a special DLL that knows how to exchange data with SQL Server to retrieve input rows/parameters and send back results and output parameters. Multiple such processes can be launched to isolate users from each other and achieve better scalability.

One major advantage with Launchpad is that it uses proven SQL Server technologies such as SQLOS and XEvent to enable XEvent tracing of the Launchpad service. You can read more about how to collect XEvents for R Services here.

Looking ahead

Without going into too much detail, this is how the extensibility architecture works. We hope that you found this interesting and that you will stay tuned for our coming blog posts on this topic.

In future posts, we are going to take a closer look at how the extensibility framework works when you execute an R script and how to use the sp_execute_external_script to run your own R scripts.

Engyro Product Connector support ending this year


The Engyro Product Connector for Microsoft System Center Operations Manager 2007 will no longer be supported after July 11, 2017.  If you are using this product connector, please upgrade to System Center 2016 Operations Manager and integrate with a third-party connector as needed. Alternatively, you can use System Center Orchestrator and custom runbooks to integrate with third-party enterprise ITSM/monitoring systems using an integration pack from Microsoft or our partner Kelverion.  For more information about Engyro, please see the Engyro Support Bulletin from 2010.

Review these instructions for upgrading to System Center 2016 Operations Manager. If you are using Operations Manager 2007, this requires the following upgrade process: upgrade to Operations Manager 2007 R2, to System Center 2012 Operations Manager, to System Center 2012 R2 Operations Manager, then to System Center 2016 Operations Manager.

Microsoft System Center 2016 Operations Manager Integration packs

Kelverion Integration packs

Have questions about supported products? Visit Microsoft Support Lifecycle to view a list of supported products and related policies.

System Center Data Protection Manager 2007 End of Support


Product support for Microsoft System Center Data Protection Manager 2007 will reach its end of support date in 12 months. If you are using this version, please upgrade to a newer version before January 9, 2018 to ensure supportability.

The following resources are available to help you upgrade to the latest version of Data Protection Manager:

Have questions about supported products? Visit Microsoft Support Lifecycle to view a list of supported products and related policies.

System Center Virtual Machine Manager 2007 version nearing end of support


Product support for Microsoft System Center Virtual Machine Manager 2007 will reach its end of support date in 12 months. If you are using this version, please upgrade to a newer version before January 9, 2018 to ensure supportability.

The following resources are available to help you upgrade to the latest version of Virtual Machine Manager:

Have questions about supported products? Visit Microsoft Support Lifecycle to view a list of supported products and related policies.

What .NET Developers ought to know to start in 2017


.NET Components

Many, many years ago I wrote a blog post about what .NET Developers ought to know. Unfortunately what was just a list of questions was abused by recruiters and others who used it as a harsh litmus test.

There's a lot going on in the .NET space, so I thought it would be nice to update it with a gentler list that could be used as a study guide and glossary. Jon Galloway and I sat down and put together this list of terms and resources.

Your first reaction might be "wow that's a lot of stuff, .NET sucks!" Most platforms have similar glossaries or barriers to entry. There are TLAs (three-letter acronyms) in every language and computer ecosystem. Don't get overwhelmed; start with Need To Know and move slowly forward. Also, remember YOU decide when you want to draw the line. You don't need to know everything. Just know that every layer and label has something underneath it, and whatever program you're dealing with may be at a level you have yet to dig into.

Draw a line under the stuff you need to know. Know that, and know you can look the other stuff up.  Some of us want the details – the internals. Others don't. You may learn from the Metal Up or from the Glass Back. Know your style, and revel in it.

First, you can start learning .NET and C# online at https://dot.net. You can learn F# online here http://www.tryfsharp.org. Both sites let you write code without downloading anything. You just work in your browser.

When you're ready, get .NET Core and Visual Studio Code at https://dot.net and start reading! 

Need To Know

  • What's .NET? .NET has a number of key components. We'll start with runtimes and languages.
  • Here are the three main runtimes:
    • .NET Framework - The .NET framework helps you create mobile, desktop, and web applications that run on Windows PCs, devices and servers.
    • .NET Core - .NET Core gives you a fast and modular platform for creating server applications that run on Windows, Linux and Mac.
    • Mono for Xamarin - Xamarin brings .NET to iOS and Android, reusing skills and code while getting access to the native APIs and performance. Mono is an open source .NET that was created before Xamarin and Microsoft joined together. Mono will support the .NET Standard as another great .NET runtime that is open source and flexible. You'll also find Mono in the Unity game development environment.
  • Here are the main languages:
    • C# is simple, powerful, type-safe, and object-oriented while retaining the expressiveness and elegance of C-style languages. Anyone familiar with C and similar languages will find few problems in adapting to C#. Check out the C# Guide to learn more about C# or try it in your browser at https://dot.net
    • F# is a cross-platform, functional-first programming language that also supports traditional object-oriented and imperative programming. Check out the F# Guide to learn more about F# or try it in your browser at http://www.tryfsharp.org 
    • Visual Basic is an easy language to learn that you can use to build a variety of applications that run on .NET. I started with VB many years ago.
  • Where do I start?
  • After runtimes and languages, there's platforms and frameworks.
    • Frameworks define the APIs you can use. There's the .NET 4.6 Framework, the .NET Standard, etc. Sometimes you'll refer to them by name, or in code and configuration files as a TFM (see below)
    • Platform (in the context of .NET) - Windows, Linux, Mac, Android, iOS, etc. This also includes Bitness, so x86 Windows is not x64 Windows. Each Linux distro is its own platform today as well.
  • TFMs (Target Framework Moniker) - A moniker (string) that lets you refer to target framework + version combinations. For example, net462 (.NET 4.6.2), net35 (.NET 3.5), uap (Universal Windows Platform). For more information, see this blog post.
  • NuGet - NuGet is the package manager for the Microsoft development platform including .NET. The NuGet client tools provide the ability to produce and consume packages. The NuGet Gallery is the central package repository used by all package authors and consumers.
  • What's an Assembly? - Assemblies are the building blocks of .NET Full Framework applications; they form the fundamental unit of deployment, version control, reuse, activation scoping, and security permissions. In .NET Core, the building blocks are NuGet packages that contain assemblies PLUS additional metadata
  • .NET Platform Standard or "netstandard" - The .NET Platform Standard simplifies references between binary-compatible frameworks, allowing a single target framework to reference a combination of others.
  • .NET Standard Library - The .NET Standard Library is a formal specification of .NET APIs that are intended to be available on all .NET runtimes.
  • .NET Framework vs. .NET Core: Similarities and Differences

Should Know

    • CLR– The Common Language Runtime (CLR), the virtual machine component of Microsoft's .NET framework, manages the execution of .NET programs. A process known as just-in-time compilation converts compiled code into machine instructions which the computer's CPU then executes.
    • CoreCLR - .NET runtime, used by .NET Core.
    • Mono - .NET runtime, used by Xamarin and others.
    • CoreFX - .NET class libraries, used by .NET Core and to a degree by Mono via source sharing.
    • Roslyn - C# and Visual Basic compilers, used by most .NET platforms and tools. Exposes APIs for reading, writing and analyzing source code.
    • GC - .NET uses garbage collection to provide automatic memory management for programs. The GC operates on a lazy approach to memory management, preferring application throughput to the immediate collection of memory. To learn more about the .NET GC, check out Fundamentals of garbage collection (GC).
    • "Managed Code" - Managed code is just that: code whose execution is managed by a runtime like the CLR.
    • IL – Intermediate Language is the product of compiling code written in high-level .NET languages. C# is Apples, IL is Apple Sauce, and the JIT and CLR make Apple Juice. ;)
    • JIT – Just in Time Compiler. Takes IL and compiles it in preparation for running as native code.
    • Where is .NET on disk? .NET Framework is at C:\Windows\Microsoft.NET and .NET Core is at C:\Program Files\dotnet. .NET Core can also be bundled with an application and live under that application's directory as a self-contained application.
    • Shared Framework vs. Self Contained Apps - .NET Core can use a shared framework (shared by multiple apps on the same machine) or your app can be self-contained with its own copy.
    • Async and await– The Async and Await keywords generate IL that will free up a thread for long running (awaited) function calls (e.g. database queries or web service calls). This frees up system resources, so you aren't hogging memory, threads, etc. while you're waiting.
    • Portable Class Libraries -  These are "lowest common denominator" libraries that allow code sharing across platforms. Although PCLs are supported, package authors should support netstandard instead. The .NET Platform Standard is an evolution of PCLs and represents binary portability across platforms.
    • .NET Core is composed of the following parts:
      • A .NET runtime, which provides a type system, assembly loading, a garbage collector, native interop and other basic services.
      • A set of framework libraries, which provide primitive data types, app composition types and fundamental utilities.
      • A set of SDK tools and language compilers that enable the base developer experience, available in the .NET Core SDK.
      • The 'dotnet' app host, which is used to launch .NET Core apps. It selects the runtime and hosts the runtime, provides an assembly loading policy and launches the app. The same host is also used to launch SDK tools in much the same way.
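To see these pieces in action, here is a minimal sketch using the SDK tools from a PowerShell prompt, assuming the .NET Core SDK from https://dot.net is installed (the project name is arbitrary):

# Create, restore, and run a minimal .NET Core console app with the SDK tools.
mkdir HelloDotNet
cd HelloDotNet
dotnet new        # scaffold a console app ('dotnet new console' on newer SDKs)
dotnet restore    # restore NuGet dependencies
dotnet run        # build and launch the app via the 'dotnet' host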

    Nice To Know

      • GAC – The Global Assembly Cache is where the .NET full Framework on Windows stores shared libraries. You can list it out with "gacutil /list."  
      • Assembly Loading and Binding - In complex apps you can get into interesting scenarios around how Assemblies are loaded from disk.
      • Profiling (memory usage, GC, etc.) - There are a lot of great tools you can use to measure – or profile – your C# and .NET code. A lot of these tools are built into Visual Studio.
      • LINQ - Language Integrated Query is a higher-order, declarative way to query objects and databases.
      • Common Type System and Common Language Specification define how objects are used and passed around in a way that makes them work everywhere .NET works, interoperable. The CLS is a subset that the CTS builds on.
      • .NET Native - One day you'll be able to compile to native code rather than compiling to Intermediate Language.
      • .NET Roadmap - Here's what Microsoft is planning for .NET for 2017
      • "Modern" C# 7– C# itself has new features every year or so. The latest version is C# 7 and has lots of cool features work looking at.

      NOTE: Some text was taken from Wikipedia's respective articles on each topic, edited for brevity. Creative Commons Attribution-ShareAlike 3.0. Some text was taken directly from the excellent .NET docs. This post is a link blog and aggregate. Some of it is original thought, but much is not.






      Azure Media Services Live Monitoring Dashboard open-source release


      We are excited to announce the open-source release of the Azure Media Services (AMS) Live Monitoring Dashboard on GitHub.

      The Live Monitoring Dashboard is a .NET C# web app that enables Azure Media Services (AMS) customers to view the health of their channel and origin deployments. The dashboard captures the state of ingest, archive, encode, and origin telemetry entities, enabling customers to quantify the health of their services with low latency. The dashboard supplies data on the incoming data rate for video stream ingestion, dropped data in storage archive, encoding data rate, and origin HTTP error statuses and latencies.

      Special thanks to Prakash Duggaraju for his help and contributions to this project.

      Dashboard overview

The image below illustrates the account-level view of the Live Monitoring Dashboard. The upper left pane highlights each deployment’s health status with a different status code color. Ingest, archive, origin, and encode telemetry entities are denoted by i, a, o, and e abbreviations respectively. Each color of the indicator summarizes whether an entity is currently impacted. Green denotes healthy, orange indicates mildly impacted, red indicates unhealthy, and gray indicates inactive. You can modify the thresholds for which these flags are raised from the storage account JSON configuration file. From the right pane, you can drill down into the detailed views for each deployment by clicking on the active status squares.

      This dashboard is backed by a SQL database that reads telemetry data from your Azure storage account. Our telemetry release announcement blog post details the types of telemetry data supported today. Every 30 seconds all views within the dashboard are automatically refreshed with the latest telemetric data.

      Home Page

      Channel Detailed View

      The channel detailed view provides incoming bitrate, discontinuity count, overlap count, and bitrate ratio data for a given channel. In this view, these fields represent the following:

      • Bitrate: the expected bitrate for a given track; incoming bitrate is the bitrate that the channel actually receives
      • Discontinuity count: the count of instances where a fragment was missing in the stream
      • Overlap count: the count of instances where the channel receives fragments with the same or overlapping stream timestamp
      • Bitrate ratio: the ratio of incoming bitrate to expected bitrate

Optimally, a channel should have no discontinuities, no overlaps, and a bitrate ratio of one. Flags are raised when these dimensions deviate from normal values.

      Channel Detailed View

      Archive Detailed View

      The archive detailed view provides bitrate, dropped fragment count, and dropped fragment rate for the archive entities backing each track. In this view, these fields represent the following:

      • Bitrate: the expected bitrate of the given track
      • Dropped fragment count: the number of fragments dropped in the program
      • Dropped fragment ratio: the number of fragments dropped per minute

      Optimally, the dropped fragment count and dropped fragment ratio should be zero.

      Archive Detailed View

      Origin Detailed View

      The origin detailed view provides request count, bytes sent, server latency, end-to-end (E2E) latency, request ratio, bandwidth, and data output utilization ratio for a given origin. In this view, these fields represent the following:

      • Request count: the number of times a client requested data from the origin, categorized by the HTTP status code
      • Bytes sent: the number of bytes returned to the client
      • Server latency: the server latency component for responding to a request
      • End-to-end latency: the total latency for responding to a request
      • Request rate: the number of requests the origin receives per minute
      • Bandwidth: the origin response throughput
      • Request ratio: the percentage of requests for a given HTTP status code
      • Data output utilization ratio: the percentage of maximum throughput that the origin utilizes

      Optimally, origin requests should return only HTTP 200 status codes and there should be no failed requests (HTTP 4XX + 5XX – 412). The data out utilization should preferably not exceed 90 - 95% of the maximum available throughput.

      Origin Detailed View

      Encode Detailed View

      The encode detailed view provides the health status for inputs, transcoders, output, and overall health.

      Encode Detailed View

      Optimally, the encoder detailed view should reflect overall healthy status.

      Providing feedback & feature requests

We love to hear from our customers and better understand your needs! To help serve you better, we are always open to feedback and new ideas, and we appreciate any bug reports so that we can continue to provide an amazing service with the latest technologies. To request new features, provide ideas or feedback, please submit to User Voice for Azure Media Services. If you have any specific issues, questions, or find any bugs, please post your question or feedback to our forum.

      The Microsoft security update for January 2017 has been released


      Yesterday we released security updates to provide additional protections against malicious attackers. As a best practice, we encourage customers to apply security updates as soon as they are released.

      More information about this month’s security updates and advisories can be found in the Security TechNet Library.


      Azure Virtual Machine Internals – Part 1


      Introduction

The Azure cloud services are composed of elements from Compute, Storage, and Networking. The compute building block is a Virtual Machine (VM), which is the subject of discussion in this post. A web search will yield large amounts of documentation regarding the commands, APIs, and UX for creating and managing VMs. This is not a 101 or ‘How to’, and the reader is for the most part expected to already be familiar with the topics of VM creation and management. The goal of this series is to look at what is happening under the covers as a VM goes through its various states.

Azure provides IaaS and PaaS VMs; in this post, when we refer to a VM we mean the IaaS VM. There are two control plane stacks in Azure, Azure Service Management (ASM) and Azure Resource Manager (ARM). We will be limiting ourselves to ARM since it is the forward-looking control plane.

ARM exposes resources like VMs and NICs, but in reality ARM is a thin front-end layer; the resources themselves are exposed by lower-level resource providers like the Compute Resource Provider (CRP), Network Resource Provider (NRP), and Storage Resource Provider (SRP). The Portal calls ARM, which in turn calls the resource providers.

      Getting Started

For most customers, their first experience creating a VM is in the Azure Portal. I did the same and created a VM of size ‘Standard DS1 v2’ in the West US region. I mostly stayed with the defaults that the UI presented but chose to add a ‘CustomScript’ extension. When prompted, I provided a local file ‘Sample.ps’ as the PowerShell script for the ‘CustomScript’ extension. The PS script itself is a single line: Get-Process.
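(As an aside, the same extension can also be attached without the Portal. Below is a minimal Azure PowerShell sketch, assuming the AzureRM module and a script already uploaded to a blob; the resource group, storage account, and file names are placeholders.)

# Attach the CustomScript extension to an existing VM from Azure PowerShell.
Set-AzureRmVMCustomScriptExtension `
    -ResourceGroupName "BlogRG" `
    -VMName "BlogWindowsVM" `
    -Name "CustomScriptExtension" `
    -Location "westus" `
    -FileUri "https://mystorageaccount.blob.core.windows.net/scripts/Sample.ps1" `
    -Run "Sample.ps1"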

The VM provisioned successfully but the overall ARM template deployment failed (bright red on my Portal dashboard). A couple of clicks showed that the ‘CustomScript’ extension had failed, and the Portal showed this message:

      {
        "status": "Failed",
        "error": {
          "code": "ResourceDeploymentFailure",
          "message": "The resource operation completed with terminal provisioning state 'Failed'.",
          "details": [
            {
              "code": "DeploymentFailed",
              "message": "At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details.",
              "details": [
                {
                  "code": "Conflict",
                  "message": "{\r\n  \"status\": \"Failed\",\r\n  \"error\": {\r\n    \"code\": \"ResourceDeploymentFailure\",\r\n    \"message\": \"The resource operation completed with terminal provisioning state 'Failed'.\",\r\n    \"details\": [\r\n      {\r\n        \"code\": \"VMExtensionProvisioningError\",\r\n        \"message\": \"VM has reported a failure when processing extension 'CustomScriptExtension'. Error message: \\\"Finished executing command\\\".\"\r\n      }\r\n    ]\r\n  }\r\n}"
                }
              ]
            }
          ]
        }
      }

It wasn’t immediately clear what had gone wrong. We can dig in from here, and as is often true, failures teach us more than successes.

      I RDPed to the just provisioned VM. The logs for the VM Agent are in C:\WindowsAzure\Logs. The VM Agent is a system agent that runs in all IaaS VMs (customers can opt out if they would like). The VM Agent is necessary to run extensions. Let’s peek into the logs for the CustomScript Extension:

      C:\WindowsAzure\Logs\Plugins\Microsoft.Compute.CustomScriptExtension\1.8\CustomScriptHandler
      
      [1732+00000001] [08/14/2016 06:19:17.77] [INFO] Command execution task started. Awaiting completion...
      
      [1732+00000001] [08/14/2016 06:19:18.80] [ERROR] Command execution finished. Command exited with code: -196608

      The fact that the failure logs are cryptic hinted that something catastrophic had happened. So I re-looked at my input and realized that I had the file extension for the PS script wrong. I had it as Sample.ps when it should have been Sample.ps1. I updated the VM this time specifying the script file with the right extension. This succeeded as shown by more records appended to the log file mentioned above.

      [3732+00000001] [08/14/2016 08:42:24.04] [INFO] HandlerSettings = ProtectedSettingsCertThumbprint: , ProtectedSettings: {}, PublicSettings: {FileUris: [https://iaasv2tempstorewestus.blob.core.windows.net/vmextensionstemporary-10033fff801becb5-20160814084130535/simple.ps1?sv=2015-04-05&sr=c&sig=M3qa7lS%2BZwp%2B8Tytqf1VEew4YaAKvvYn1yzGrPfSwyw%3D&se=2016-08-15T08%3A41%3A30Z&sp=rw], CommandToExecute: powershell -ExecutionPolicy Unrestricted -File simple.ps1 }
      
      [3732+00000001] [08/14/2016 08:42:24.04] [INFO] Downloading files specified in configuration...
      
      [3732+00000001] [08/14/2016 08:42:24.05] [INFO] DownloadFiles: fileUri = https://iaasv2tempstorewestus.blob.core.windows.net/vmextensionstemporary-10033fff801becb5-20160814084130535/simple.ps1?sv=2015-04-05&sr=c&sig=M3qa7lS+Zwp+8Tytqf1VEew4YaAKvvYn1yzGrPfSwyw=&se=2016-08-15T08:41:30Z&sp=rw
      
      [3732+00000001] [08/14/2016 08:42:24.05] [INFO] DownloadFiles: Initializing CloudBlobClient with baseUri = https://iaasv2tempstorewestus.blob.core.windows.net/
      
      [3732+00000001] [08/14/2016 08:42:24.22] [INFO] DownloadFiles: fileDownloadPath = Downloads\0
      
      [3732+00000001] [08/14/2016 08:42:24.22] [INFO] DownloadFiles: asynchronously downloading file to fileDownloadLocation = Downloads\0\simple.ps1
      
      [3732+00000001] [08/14/2016 08:42:24.24] [INFO] Waiting for all async file download tasks to complete...
      
      [3732+00000001] [08/14/2016 08:42:24.29] [INFO] Files downloaded. Asynchronously executing command: 'powershell -ExecutionPolicy Unrestricted -File simple.ps1 '
      
      [3732+00000001] [08/14/2016 08:42:24.29] [INFO] Command execution task started. Awaiting completion...
      
      [3732+00000001] [08/14/2016 08:42:25.29] [INFO] Command execution finished. Command exited with code: 0

The CustomScript extension takes a script file, which can be provided as a file in a Storage blob. The Portal offers a convenience where it accepts a file from the local machine. I had provided Simple.ps1, which was in my \temp folder. Behind the scenes, the Portal uploads the file to a blob, generates a shared access signature (SAS) and passes it on to CRP. From the logs above you can see that URI.

      This URI is worth understanding. It is a Storage blob SAS with the following attributes for an account in West US (which is the same region where my VM is deployed):

      • se=2016-08-15T08:41:30Z means that the SAS is valid until that time (UTC). Comparing it to the timestamp on the corresponding record in the log (08/14/2016 08:42:24.05), it is clear that the SAS is generated for a period of 24 hours.
      • sr=c means that the SAS is scoped at the container level.
      • sp=rw means that the access is for both read and write.
      • The shared access signature (SAS) documentation has the full descriptions of these parameters (a sketch of generating a similar token follows below).
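Such a token can be generated with Azure PowerShell; a minimal sketch using the Azure.Storage module (the account, key, container, and blob names are placeholders, and note that the Portal's token is container-scoped, which New-AzureStorageContainerSASToken would produce instead):

# Generate a 24-hour read/write SAS URI for a blob, similar to the one the Portal created.
$ctx = New-AzureStorageContext -StorageAccountName "mytempstore" -StorageAccountKey "<storage-key>"
New-AzureStorageBlobSASToken -Context $ctx `
    -Container "scripts" `
    -Blob "simple.ps1" `
    -Permission "rw" `
    -ExpiryTime (Get-Date).AddHours(24) `
    -FullUri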

      I asserted above that this is a storage account in West US. That may be apparent from the naming of the storage account (iaasv2tempstorewestus) but is not a guarantee. So how can you verify that this storage account (or any other storage account) is in the region it claims to be in?

      A simple nslookup on the blob DNS URL reveals this

      C:\Users\yunusm>nslookup iaasv2tempstorewestus.blob.core.windows.net
      
      Server: PK5001Z.PK5001Z
      
      Address: 192.168.0.1
      
      Non-authoritative answer:
      
      Name: blob.by4prdstr03a.store.core.windows.net
      
      Address: 40.78.112.72
      
      Aliases: iaasv2tempstorewestus.blob.core.windows.net

The blob URL is a CNAME to a canonical DNS name, blob.by4prdstr03a.store.core.windows.net. Experimentation will show that more than one storage account maps to a single canonical DNS URL. The ‘by4’ in the name gives a hint as to the region where it is located. As per the Azure Regions page, the West US region is in California. Looking up the geo location of the IP address (40.78.112.72) indicates a more specific area within California.

      Understanding the VM

      Now that we have a healthy VM, let’s understand it more. As per the Azure VM Sizes page, this is the VM that I just created:

Size: Standard_D1_v2

CPU cores: 1

Memory: 3.5 GB

NICs (max): 1

Max. disk size: Temporary (SSD) = 50 GB

Max. data disks (1023 GB each): 2

Max. IOPS (500 per disk): 2x500

Max network bandwidth: moderate

This information can be fetched programmatically by doing a GET, which returns this:

      {
      
      "name": "Standard_DS1_v2",
      
      "numberOfCores": 1,
      
      "osDiskSizeInMB": 1047552,
      
      "resourceDiskSizeInMB": 7168,
      
      "memoryInMB": 3584,
      
      "maxDataDiskCount": 2
      
      }
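The same size information can also be retrieved with Azure PowerShell; a minimal sketch, assuming the AzureRM module is installed and you are logged in:

# List VM sizes in West US and filter to the size used here.
Get-AzureRmVMSize -Location "westus" | Where-Object { $_.Name -eq "Standard_DS1_v2" }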
      
Doing a GET on the VM we created returns the following. Let’s understand this response in some detail. I have annotated inline comments, preceded and followed by //.
      {
        "properties": {
          "vmId": "694733ec-46a0-4e0b-a73b-ee0863a0f12c",
          "hardwareProfile": {
            "vmSize": "Standard_DS1_v2"
          },
          "storageProfile": {
            "imageReference": {
              "publisher": "MicrosoftWindowsServer",
              "offer": "WindowsServer",
              "sku": "2012-R2-Datacenter",
              "version": "latest"

The interesting field here is the version. Publishers can have multiple versions of the same image at any point in time. Popular images are typically revved monthly with security patches. Major new versions are released as new SKUs. The Portal has defaulted me to the latest version. As a customer, I can choose to pick a specific version as well, whether I deploy through the Portal or through an ARM template using the CLI or REST API; the latter being the preferred method for automated scenarios. The problem with specifying a particular version is that it can render the ARM template fragile. The deployment will break if the publisher unpublishes that specific version in one or more regions, as a publisher can do. So unless there is a good reason not to, the preferred value for the version setting is latest. As an example, the following images of the SKU 2012-R2-Datacenter are currently in the West US region, as returned by the CLI command azure vm image list.

      MicrosoftWindowsServer  WindowsServer  2012-R2-Datacenter                      Windows  4.0.20151120     westus    MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:4.0.20151120
      MicrosoftWindowsServer  WindowsServer  2012-R2-Datacenter                      Windows  4.0.20151214     westus    MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:4.0.20151214
      MicrosoftWindowsServer  WindowsServer  2012-R2-Datacenter                      Windows  4.0.20160126     westus    MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:4.0.20160126
      MicrosoftWindowsServer  WindowsServer  2012-R2-Datacenter                      Windows  4.0.20160229     westus    MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:4.0.20160229
      MicrosoftWindowsServer  WindowsServer  2012-R2-Datacenter                      Windows  4.0.20160430     westus    MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:4.0.20160430
      MicrosoftWindowsServer  WindowsServer  2012-R2-Datacenter                      Windows  4.0.20160617     westus    MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:4.0.20160617
      MicrosoftWindowsServer  WindowsServer  2012-R2-Datacenter                      Windows  4.0.20160721     westus    MicrosoftWindowsServer:WindowsServer:2012-R2-Datacenter:4.0.20160721

            },
            "osDisk": {
              "osType": "Windows",
              "name": "BlogWindowsVM",
              "createOption": "FromImage",
              "vhd": {
                "uri": https://blogrgdisks562.blob.core.windows.net/vhds/BlogWindowsVM2016713231120.vhd

The OS disk is a page blob and starts out as a copy of the source image that the publisher has published. Looking at the metadata of this blob and correlating it to what the VM itself reports is instructive. The Cloud Explorer in Microsoft Visual Studio shows the blob's property window:

[screenshot: blob properties in the Visual Studio Cloud Explorer]

This is a regular page blob that is functioning as an OS disk over the network. You will observe that the Last Modified date stays close to the current time; as long as the VM is running, writes are regularly flushed to the disk. The size of the OS disk is 127 GB; the maximum allowed OS disk size in Azure is 1 TB.

      Azure Storage Explorer shows more properties for the same blob than the VS Cloud Explorer.

       

[screenshot: blob properties in Azure Storage Explorer, including the lease properties]

The interesting properties are the lease properties: the blob is leased, with an infinite duration. Internally, when a page blob is configured as an OS or data disk, the platform takes a lease on that blob before attaching it to the VM, so that the blob backing a running VM cannot be deleted out of band. If a disk-backing blob has no lease while it shows as attached to a VM, that is an inconsistent state and needs to be repaired.
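You can confirm the lease from the Blob service REST API as well: a Get Blob Properties call (an HTTP HEAD) on the VHD returns the lease details in the response headers. A minimal sketch, assuming Shared Key authorization and the storage API version shown; only the relevant response headers are listed:

HEAD https://blogrgdisks562.blob.core.windows.net/vhds/BlogWindowsVM2016713231120.vhd HTTP/1.1
x-ms-version: 2015-04-05
Authorization: SharedKey blogrgdisks562:{signature}

HTTP/1.1 200 OK
x-ms-blob-type: PageBlob
x-ms-lease-status: locked
x-ms-lease-state: leased
x-ms-lease-duration: infinite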

RDPing into the VM itself, we can see two drives mounted, and the OS drive is about the same size as the page blob in Storage. The pagefile is on the D: drive, so that faulted pages are fetched from the local disk rather than over the network from Blob storage. Keep in mind that the temporary storage can be lost by events that cause the VM to be relocated to a different node.

[screenshot: the drives as seen inside the VM]

       },
        "caching": "ReadWrite"
      },
      "dataDisks": []

There are no data disks yet, but we will add some soon.

      },
      "osProfile": {
        "computerName": "BlogWindowsVM",

The name we chose for the VM in the Portal is the hostname as well. The VM is DHCP-enabled and gets its internal (DIP) address through DHCP. The VM is registered in an internal DNS zone and has a generated FQDN.

C:\Users\yunusm>ipconfig /all

Windows IP Configuration

   Host Name . . . . . . . . . . . . : BlogWindowsVM
   Primary Dns Suffix  . . . . . . . :
   Node Type . . . . . . . . . . . . : Hybrid
   IP Routing Enabled. . . . . . . . : No
   WINS Proxy Enabled. . . . . . . . : No
   DNS Suffix Search List. . . . . . : qkqr4ajgme4etgyuajvm1sfy3h.dx.internal.cloudapp.net

Ethernet adapter:

   Connection-specific DNS Suffix  . : qkqr4ajgme4etgyuajvm1sfy3h.dx.internal.cloudapp.net
         Description . . . . . . . . . . . : Microsoft Hyper-V Network Adapter
         Physical Address. . . . . . . . . : 00-0D-3A-33-81-01
         DHCP Enabled. . . . . . . . . . . : Yes
         Autoconfiguration Enabled . . . . : Yes
         Link-local IPv6 Address . . . . . : fe80::980c:bf29:b2de:8a05%12(Preferred)
         IPv4 Address. . . . . . . . . . . : 10.1.0.4(Preferred)
         Subnet Mask . . . . . . . . . . . : 255.255.255.0
         Lease Obtained. . . . . . . . . . : Saturday, August 13, 2016 11:14:58 PM
         Lease Expires . . . . . . . . . . : Wednesday, September 20, 2152 6:24:34 PM
         Default Gateway . . . . . . . . . : 10.1.0.1
         DHCP Server . . . . . . . . . . . : 168.63.129.16
         DHCPv6 IAID . . . . . . . . . . . : 301993274
         DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-1F-41-C4-70-00-0D-3A-33-81-01
      
         DNS Servers . . . . . . . . . . . : 168.63.129.16
         NetBIOS over Tcpip. . . . . . . . : Enabled
      
      "adminUsername": "yunusm",
      "windowsConfiguration": {
        "provisionVMAgent": true,

This is a hint to install the guest agent, which performs in-guest configuration and runs the extensions. The guest agent binaries are located at C:\WindowsAzure\Packages.

      "enableAutomaticUpdates": true

Windows VMs are set by default to receive automatic updates from the Windows Update service. There is a nuance to grasp here regarding availability and auto updates: if you have an Availability Set with multiple VMs for the purpose of getting a high SLA against unexpected faults, you do not want correlated actions (such as Windows Update reboots) that can take down VMs across the Availability Set at the same time. A template sketch of opting out follows.
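A minimal sketch of the corresponding osProfile fragment in an ARM template, assuming you want to opt out of automatic updates for VMs in an Availability Set and orchestrate patching yourself (the property names match the GET response above; a real template would also carry credentials such as adminPassword):

"osProfile": {
  "computerName": "BlogWindowsVM",
  "adminUsername": "yunusm",
  "windowsConfiguration": {
    "provisionVMAgent": true,
    "enableAutomaticUpdates": false
  }
}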

       


  },
  "secrets": []
},
"networkProfile": {
  "networkInterfaces": [
    {
      "id": "/subscriptions/f028f547-f912-42b0-8892-89ea6eda4c5e/resourceGroups/BlogRG/providers/Microsoft.Network/networkInterfaces/blogwindowsvm91"

The NIC is a standalone resource; we are not discussing networking resources yet.

          }
        ]
      },
      "diagnosticsProfile": {
        "bootDiagnostics": {
          "enabled": true,
          "storageUri": "https://blogrgdiag337.blob.core.windows.net/"
        }

Boot diagnostics have been enabled. The Portal can display the boot screenshot, and you can also get the URL of the screenshot blob from the CLI:

C:\Program Files (x86)\Microsoft SDKs\Azure\CLI\bin>node azure vm get-serial-output
info: Executing command vm get-serial-output
Resource group name: blogrg
Virtual machine name: blogwindowsvm
+ Getting instance view of virtual machine "blogwindowsvm"
info: Console Screenshot Blob Uri:
https://blogrgdiag337.blob.core.windows.net/bootdiagnostics-blogwindo-694733ec-46a0-4e0b-a73b-ee0863a0f12c/BlogWindowsVM.694733ec-46a0-4e0b-a73b-ee0863a0f12c.screenshot.bmp
info: vm get-serial-output command OK

The boot screenshot can be viewed in the Portal. However, the URL for the screenshot .bmp file does not render in a browser.

What gives? It is due to the authentication on the storage account, which blocks anonymous access. For any blob or container in Azure Storage it is possible to configure anonymous read access. Do this with caution and only where no secrets will be exposed; it is a useful capability for sharing non-confidential data without having to generate SAS signatures. Once anonymous access is enabled on the container, the screenshot renders in any browser outside of the Portal. A sketch using the Storage REST API follows.
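For illustration, a sketch of enabling anonymous blob-level read on the boot diagnostics container using the Blob service Set Container ACL operation; the container name is taken from the screenshot URI above, and the x-ms-version header value is illustrative:

PUT https://blogrgdiag337.blob.core.windows.net/bootdiagnostics-blogwindo-694733ec-46a0-4e0b-a73b-ee0863a0f12c?restype=container&comp=acl HTTP/1.1
x-ms-version: 2015-04-05
x-ms-blob-public-access: blob
Authorization: SharedKey blogrgdiag337:{signature}

With x-ms-blob-public-access set to blob, individual blobs can be read anonymously but the container cannot be enumerated; this is the same setting exposed as the container's public access level in the Portal and Storage Explorer.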

[screenshot: the boot screenshot rendered directly in the browser]
          },
          "provisioningState": "Succeeded"
        },
        "resources": [
          {
            "properties": {
              "publisher": "Microsoft.Compute",
              "type": "CustomScriptExtension",
              "typeHandlerVersion": "1.7",
              "autoUpgradeMinorVersion": true,

It is usually safe for extensions to be auto-updated on the minor version. There have been very few surprises in this regard, though you have the option to opt out of auto update.

              "settings": {
                "fileUris": [https://iaasv2tempstorewestus.blob.core.windows.net/vmextensionstemporary-10033fff801becb5-20160814084130535/simple.ps1?sv=2015-04-05&sr=c&sig=M3qa7lS%2BZwp%2B8Tytqf1VEew4YaAKvvYn1yzGrPfSwyw%3D&se=2016-08-15T08%3A41%3A30Z&sp=rw

As discussed earlier, this is a SAS URL for the PowerShell script. You will see this as a commonly used pattern for sharing files and data: upload to a blob, generate a SAS token, and pass the resulting URL around. The query parameters are decoded below.
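Decoding the query string of the SAS URL above makes the pattern concrete (these parameter names are standard across Azure Storage SAS tokens):

sv=2015-04-05              the storage service version the signature was produced against
sr=c                       the signed resource; c scopes the token to the container, b would scope it to a single blob
se=2016-08-15T08:41:30Z    the expiry time, after which the token stops working
sp=rw                      the granted permissions, here read and write
sig=...                    the HMAC signature computed over these fields with the account key

Anyone holding the URL can read (and here, write) the content until the expiry time, which is why short lifetimes and minimal permissions are the norm.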

                ],
                "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File simple.ps1  "
              },
              "provisioningState": "Succeeded"
            },
            "id": "/subscriptions/f028f547-f912-42b0-8892-89ea6eda4c5e/resourceGroups/BlogRG/providers/Microsoft.Compute/virtualMachines/BlogWindowsVM/extensions/CustomScriptExtension",
            "name": "CustomScriptExtension",
            "type": "Microsoft.Compute/virtualMachines/extensions",
            "location": "westus"
          },
          {
            "properties": {
              "publisher": "Microsoft.Azure.Diagnostics",
              "type": "IaaSDiagnostics",
              "typeHandlerVersion": "1.5",
              "autoUpgradeMinorVersion": true,
              "settings": {
                "xmlCfg": ,
                "StorageAccount": "blogrgdiag337"
              },
              "provisioningState": "Succeeded"
            },
            "id": "/subscriptions/f028f547-f912-42b0-8892¬-89ea6eda4c5e/resourceGroups/BlogRG/providers/Microsoft.Compute/virtualMachines/BlogWindowsVM/extensions/Microsoft.Insights.VMDiagnosticsSettings",
            "name": "Microsoft.Insights.VMDiagnosticsSettings",
            "type": "Microsoft.Compute/virtualMachines/extensions",
            "location": "westus"
          }
        ],
        "id": "/subscriptions/f028f547-f912-42b0-8892-89ea6eda4c5e/resourceGroups/BlogRG/providers/Microsoft.Compute/virtualMachines/BlogWindowsVM",
        "name": "BlogWindowsVM",
        "type": "Microsoft.Compute/virtualMachines",
        "location": "westus"
      }

      To Be Continued

      We will carry on with what we can learn from a single VM and then move on to other topics.

      See how Microsoft Treasury uses Power BI with the US Consumer Personal Spending dashboard

      Every day the members of Microsoft’s Treasury group have to make smart decisions about how to manage the company’s $158 billion in assets. Their guide to these decisions can be found in data and information analysis, and one of their tools of choice to manage this process is Microsoft Power BI. One amazing example of this data in action is the US Consumer Personal Spending dashboard, created by Investment Analyst, Carlton Gossett. This dashboard displays publicly available information in a dynamic, interactive way, allowing Treasury to quickly get a feel for consumer spending in the US and where our economy may be heading in the business cycle. Read more about how Microsoft Treasury uses this dashboard, and try it out for yourself!

      Azure SQL Database is increasing read and write performance


      It is our pleasure to announce that we have doubled the write performance across all Azure SQL Database offers and additionally have doubled the read performance for our Premium databases. These performance upgrades come with no price change and are available world-wide.

      The increased performance will allow for price optimization of existing workloads as well as for onboarding of even more demanding workloads to the platform.

Heavy OLTP workloads in Premium databases with random read patterns will especially benefit from the increased read performance and may fit into a smaller performance tier than they are running in today. In general, if your Premium workload is below 50% DTU utilization now, you may be able to run in the next lower Premium performance level.

The increase in write performance will benefit bulk inserts, heavy batched data manipulation and index maintenance operations. You may notice up to double the logical insert throughput, or half the previous response times.

      Learn More:

      How Adobe, Dallas Zoo and Financial Fabric Are Using Microsoft Cognitive Services and the Azure Data Platform


      A montage of recent customer use cases, demonstrating the range of capabilities of Microsoft’s AI, Big Data & Advanced Analytics offerings.

      Adobe Helps Customers Break the Language Barrier

      Adobe Experience Manager (AEM) is an enterprise content management platform that helps large and mid-size companies manage their web content, digital assets, online communities and more. Digital marketers use AEM to manage and personalize online content, including user-generated content on community sites. AEM helps companies build their brands online, driving demand and expanding their markets.

      Companies using AEM often serve customers in many countries and languages, and customers were looking to Adobe for an easy way for their digital marketers to translate multinational web sites and content from international user communities into other languages. The sheer volume of information being posted online and prohibitive costs of human translation services meant that most companies could not afford the time and budget needed for manual translation efforts.

[image: Adobe Experience Manager]

Adobe decided to address the problem by integrating AEM with Microsoft Translator. The Microsoft research team behind the Translator technology had spent over a decade building a linguistically informed, statistical machine translation service that learns from prior translation efforts. What's more, the service was flexible, reliable and performant at massive scale, having performed billions of translations daily, all while respecting customer data privacy.

      To make the system work, all that Adobe’s customers had to do was supply the system with a set of parallel documents, containing the same information in two languages (i.e. the source and target languages). The system would then analyze this material and build a statistical model layered on top of Translator’s generic language knowledge. Translator’s statistical models and algorithms allow the system to automatically detect correlations between source and target language in the training data, helping it determine the best translation of a new input sentence.


      AEM versions 6.0 and above now ship with a pre-configured Translator connector, and include a free trial license of 2 million characters per month, enabling users to start automatic translation with minimal effort. Adobe customers can now build, train, and deploy customized translation systems that understand a preferred terminology or style that is specific to their industry or domain. Some customers are also using this new capability to further expand their audience, providing content in previously underserved languages where there is a latent demand or audience. You can learn more about the Adobe solution here.

      As people all over the world increasingly look to online communities for instant answers and information, they do not want to be restricted by their language skills. By partnering with Microsoft, Adobe is helping customers become productive by breaking through language barriers.

      Dallas Zoo Tracks Elephant Behavior with Fitness Bands

      Zoos worldwide have been working hard to provide larger, more natural and varied environments for their elephants. The Dallas Zoo, however, decided to go an extra step further: A growing number of elephants at the Dallas Zoo are now part of a pioneering application that uses RFID bracelets and Microsoft Azure to better understand elephant behavior and provide more customized care.

      The earlier solution deployed by the Zoo relied on a combination of video cameras and direct observation by staff to track the animals, but that approach left big gaps, and even caused occasional errors in the data. What’s more, the Zoo had to manage unwieldy spreadsheets in the past, to understand their animals’ movements. Since their software could handle only 15 days’ worth of data at a time, that made insights from long-term data (such as behavioral changes in an elephant as it aged) near impossible. The earlier system also made it impossible to integrate observations with external data such as weather changes, fluctuations in the zoo’s attendance, and the like.

      The introduction of elephant “ankle bracelets” powered by RFID technology has decidedly changed things for the better. Since elephants are trained to show their feet to handlers for exams and pedicures, getting the bands on them is simple, quick, and stress-free. The Zoo is now able to track a very wide range of parameters for each animal. Nancy Scott, the Coordinator of Elephant Behavioral Science at the Dallas Zoo, now knows that her elephants each walk an impressive average of 10 miles a day. She also knows that Congo, whom she’s dubbed “The Great Explorer”, can walk nearly 17 miles a day, and is also the first one out of the gate when given access to an adjacent habitat to mingle with other species such as giraffe, zebra, and ostrich. That’s useful information to measure the health of elephants not only against their own histories but also against the typical range of the herd. Scott now knows where in the five-acre Giants of the Savanna exhibit the elephants like to go, and where they don’t. She knows who’s been frequenting the mud wallows, pools, scratching posts, log piles or shady spots. She also has a keener understanding of whether the elephants have enough space and how they are using that space, so she can help devise ways to optimize their use of the exhibit.

      Elephants being highly social, this technology is also helping the zoo better understand their interactions. They can see which elephants are loners (keeping their distance) vs. which ones are potential friends (frequently traveling together or stationary at night together). When an elephant suddenly moves more slowly or stays near one spot, Scott knows the animal may be ill, leading to faster diagnoses and better health outcomes.


      US Medical IT, a Microsoft solution provider and part of the startup community at the University of Texas at Dallas (UTD) Venture Development Department, was an important partner in this effort. With the UTD Venture Development Center’s financial support and US Medical IT’s expertise, the Dallas Zoo enhanced its RFID system with key components of the Microsoft cloud. A SQL Server 2016 -based data warehouse hosted on Microsoft Azure synchronizes the RFID data daily and links it to five other data sources. The data is then made available to Power BI for analysis and to other reporting services running in Azure. The results of the analyses are displayed on dashboards on PCs and mobile devices, including on Apple watches, making insights available to handlers working in the exhibits, to visitors using proposed information kiosks, and to Scott, no matter where she happens to be.

The Zoo can now collect and analyze data across multiple years rather than just days. Information from additional internal and external data sources such as weather, zoo attendance, moon cycles and more can be factored into the analyses. And the best part is that staffers avoid the need to set up and maintain computer systems. The success of the solution has already led Scott to consider ways to expand it further. The addition of Azure Machine Learning, for example, could enable the Zoo to anticipate their elephants' future needs. The technology can be expanded to other animals, including giraffes, ostriches, and zebras. Gorillas and other apes and monkeys pose an interesting challenge to the technology, because they also move in a third dimension when they climb trees or other structures. Scott is interested in exploring how the solution could be enhanced to take that into account. You can learn more about the Dallas Zoo solution here.

      Institutions around the world involved in the area of animal care are getting inspired by the Dallas Zoo’s pioneering work.

      Financial Fabric Helps Hedge Funds Leverage Big Data Analytics Securely, in the Cloud

      Since the financial crisis of 2008, the cost of managing hedge funds has grown in pace with the increased regulatory requirements that funds are now expected to meet. According to Preqin, a data provider for the alternative assets industry, manually gathering data from disparate silos, analyzing information, and creating reports can eat up more than 70 percent of a small or midsize hedge fund’s operating budget. On top of that, any discrepancies arising from manual data-handling processes and disjointed workflows can leave a fund vulnerable to regulatory penalties and ultimately to the loss of business.

Hedge funds used to handle many of their IT responsibilities in-house, but it is increasingly clear to modern fund operators that they need much more robust IT infrastructure, including data platforms with advanced analytics capabilities and strong cybersecurity protection, along with best practices for security, regulatory compliance and portfolio management.

      Financial Fabric is a company that offers hedge funds, institutional investors, and other financial organizations a centralized way to store, analyze and report investment data, using cloud services. To meet the needs of their customers, Financial Fabric required a technology platform that enables fund managers to make data-driven investment decisions without compromising security and privacy. The company decided to base their DataHub solution on Microsoft Azure, taking advantage of its many security features, including the ‘Always Encrypted’ capability of Azure SQL Database.

      DataHub includes a client-dedicated data warehouse that ingests information from multiple service providers and systems including prime brokers, fund administrators, order management systems, and industry data sources. In the past, analysts typically downloaded files and documents manually in various formats, and then painstakingly gathered the information into spreadsheets and other tools. Instead, in the new solution, information from diverse sources is automatically collected, cleansed, normalized and loaded to the DataHub, providing up-to-date and accurate information in one place.

      Encrypted and stored in the cloud, the information is continuously available to a hedge fund’s business users – including portfolio managers, risk managers, analysts, chief operations officers, and chief financial officers – through business intelligence tools connected to Microsoft SQL Server Analysis Services. Working with interactive Microsoft Power BI dashboards in Excel workbooks, Financial Fabric’s data science team can securely collaborate with clients and create analytics and reports. The DataHub also enables clients to quickly and easily create custom analytics and reports themselves, without IT help. They can also automate workflows such as reconciling data across trades, investment holdings or positions, and cash and margins.

      Accessible from virtually anywhere, the analytics and reports are hosted on a Microsoft SharePoint Server farm running on Azure VMs. Historically, the ability to share information on demand while keeping it secure has been an elusive goal for investment managers, but, with DataHub, they are able to securely share information with clients and data scientists to build a more data-driven business with significantly lowered risk. Financial Fabric uses Azure Active Directory and Azure Multi-Factor Authentication to control access throughout all layers of DataHub. The built-in security capabilities have played a critical role in boosting the financial sector’s confidence in cloud solutions.


      Financial Fabric’s primary goal was to solve a business challenge, not deal with technical issues. With Azure, the company and its clients can get on the fast track to data science and avoid spending months and millions of dollars buying or creating software. “We have more data scientists on our team than software developers,” says Subhra Bose, CEO at Financial Fabric. “And they’re focused on the clients’ data, calculating things like investment performance and risk exposure. We have also completely separated our platform development from the data analytics on Azure. That’s given us a tremendous amount of mileage, because we can onboard a client without writing a single line of code.”

      One of Financial Fabric’s customers, Rotation Capital, chose to bypass the traditional application-specific approach to developing an institutional infrastructure in favor of DataHub. Within a month, the firm gained a powerful, highly secure data platform with minimal investment in IT staff, software, servers, and other operational overhead. Biagio Iellimo, Controller at Rotation Capital, notes, “Software implementation in the hedge fund industry is a huge pain point. Implementations traditionally take anywhere from six months to a year and a half. So the fact that we were up and running on the Financial Fabric DataHub platform within four weeks is beyond impressive.”

      DataHub is a cost-effective, scalable and flexible solution that’s helping hedge funds like Rotation Capital take advantage of big data analytics and protect confidential information while meeting ever-changing business requirements. You can read more about the Financial Fabric solution here.

      The availability of secure, cloud-based analytics is proving to be fundamentally transformative for the financial services industry.

      CIML Blog Team

      Make work visible, integrated and accessible across the team



Around the world, teamwork is on the rise. Research suggests employees now work on nearly double the number of teams they did just five years ago. This means that, more than ever, people rely on their peers to help get things done. But a "one size fits all" approach does not work when it comes to group collaboration: different tools appeal to different groups and address unique needs.

      This is not your typical online event

      Each 90-minute session starts with an online business roundtable discussing your biggest business challenges with a trained facilitator and then transitions into a live environment in the cloud. You will receive a link to connect your own device to a remote desktop loaded with our latest and greatest technology so you can experience first-hand how Microsoft tools can solve your biggest challenges.

      U.S. customers: Register here.
Outside the U.S.? Register here.

      Why should I attend?

      During this interactive online session, you will explore:

      • How Microsoft Teams, the newest collaboration tool:
        • Keeps everyone engaged with threaded persistent chat.
        • Creates a hub for teamwork that works together with your other Office 365 apps.
        • Builds customized options for each team with channels, connectors, tabs and bots.
        • Adds your personality to your team with emojis, GIFs and stickers.
      • How to keep information secure while being productive—Make it easier to work securely and maintain compliance without inhibiting your workflow.
      • How to quickly visualize and analyze complex data—Zero in on the data and insights you need without having to involve a BI expert.
      • How to co-author and share content quickly—Access and edit documents even while others are editing and reviewing them all at the same time.
      • How to get immediate productivity gains—Most attendees leave with enough time-saving skills that time invested to attend a Customer Immersion Experience more than pays for itself in a few short days.

Space is limited. Each session is open to only 12 participants. Reserve your seat now.

      The post Make work visible, integrated and accessible across the team appeared first on Office Blogs.
