Channel: TechNet Technology News

Announcing TypeScript 2.3 RC

The TypeScript 2.3 Release Candidate is here today! This release brings more ECMAScript features, new settings to make starting projects easier, and more.

To get started with the release candidate, you can grab it through NuGet, or through npm with

npm install -g typescript@rc

You can also get TypeScript for Visual Studio 2015 (if you have Update 3). Our team is working on supporting Visual Studio 2017 in the near future, with details available on our previous blog post.

Other editor support will be coming with the proper release, but you can follow instructions to enable newer versions of TypeScript in Visual Studio Code and Sublime Text 3.

In this post we’ll take a closer look at the new --strict option along with async generator and iterator support, but for a more detailed list of what’s in this release, check out the TypeScript Roadmap.

The --strict option

By default, TypeScript’s type system is as lenient as possible to allow users to add types gradually. But have you ever started a TypeScript project with all the strictest settings you could think of?

While TypeScript has options for enabling different levels of strictness, it’s very common to start at the strictest settings so that TypeScript can provide the best experience.

The problem with this is that the compiler has grown to have a lot of different options. --noImplicitAny, --strictNullChecks, --noImplicitThis, and --alwaysStrict are just a few of the more common strictness options that you need to remember when starting a new project. Unfortunately, if you can’t remember these, it just makes TypeScript harder to use.

That’s why in TypeScript 2.3, we’re introducing the --strict flag. The --strict flag enables these common strictness options implicitly. If you ever need to opt out, you can explicitly turn these options off yourself. For example, a tsconfig.json with all --strict options enabled except for --noImplicitThis would look like the following:

{
    "compilerOptions": {
        "strict": true,
        "noImplicitThis": false
    }
}

In the future, --strict may include other strict checks that we believe will benefit all users, but which can be manually toggled off by disabling them explicitly (as mentioned above).

Downlevel generator & iterator support

Prior to TypeScript 2.3, generators were not supported when targeting ES3 & ES5. This stemmed from the fact that support for generators implied that other parts of the language, like for...of loops, could play well with iterators, which wasn’t the case. TypeScript assumed these constructs could only work on arrays when targeting ES3/ES5, because generalizing the emit would lead to drastic changes in output code. Something as conceptually simple as a for...of loop would have to handle cases that might never come up in practice and could add slight overhead.

In TypeScript 2.3, we’ve put the work in for users to start working with generators. The new --downlevelIteration flag gives users a model where emit can stay simple for most users, and those in need of general iterator & generator support can opt in. As a result, TypeScript 2.3 makes it significantly easier to use libraries like redux-saga, where support for generators is expected.
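For instance, with "downlevelIteration": true, code like the following compiles correctly even for ES5/ES3 targets (a minimal sketch; the `naturals` and `take` helpers are illustrative, not part of TypeScript):

```typescript
// With --downlevelIteration, for...of and spread work over arbitrary
// iterables (not just arrays), even when targeting ES5/ES3.
function* naturals(): IterableIterator<number> {
    let n = 0;
    while (true) yield n++;
}

function* take<T>(count: number, items: Iterable<T>): IterableIterator<T> {
    let taken = 0;
    for (const item of items) {
        if (taken++ >= count) return;
        yield item;
    }
}

// Spread over a generator also relies on --downlevelIteration on ES5.
const firstThree = [...take(3, naturals())];
console.log(firstThree); // [ 0, 1, 2 ]
```

Without the flag, targeting ES5 here would either be an error or assume array-only semantics; with it, the emitted helpers follow the full iteration protocol.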

Async generators & iterators

Along with support for regular generators & iterators, TypeScript 2.3 brings support for async generators and async iterators. You can read more about these features on the TC39 proposal, but we’ll try to give a brief explanation and example.

Async iterators are an upcoming ECMAScript feature that allows iterators to produce results asynchronously. They can be cleanly consumed from asynchronous functions with a new construct called async for loops. These have the syntax

for await (let item of items) {
    /*...*/
}

Async generators are generators which can await at any point. They’re declared using a syntax like

async function* asyncGenName() {
    /*...*/
}

Let’s take a quick look at an example that uses both of these constructs together.

// Returns a Promise that resolves after a certain amount of time.
function sleep(milliseconds: number) {
    return new Promise<void>(resolve => {
        setTimeout(resolve, milliseconds);
    });
}

// This converts the iterable into an async iterable.
// Each element is yielded back with a delay.
async function* getItemsReallySlowly<T>(items: Iterable<T>) {
    for (const item of items) {
        await sleep(500);
        yield item;
    }
}

async function speakLikeSloth(items: string[]) {
    // Awaits before each iteration until a result is ready.
    for await (const item of getItemsReallySlowly(items)) {
        console.log(item);
    }
}

speakLikeSloth("never gonna give you up never gonna let you down".split(" "))

Keep in mind that our support for async iterators relies on support for Symbol.asyncIterator to exist at runtime. You may need to polyfill Symbol.asyncIterator, which for simple purposes can be as simple as

(Symbol as any).asyncIterator = Symbol.asyncIterator || Symbol.for("Symbol.asyncIterator");

or even

(Symbol as any).asyncIterator = Symbol.asyncIterator || "__@@asyncIterator__";

If you’re targeting ES5 or earlier, you’ll also need to use the --downlevelIteration flag. Finally, your TypeScript lib option will need to include "esnext".
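Putting those settings together, a tsconfig.json for down-level async iteration might look like the following (a sketch — adjust the target and lib entries to your project):

```json
{
    "compilerOptions": {
        "target": "es5",
        "downlevelIteration": true,
        "lib": ["dom", "es5", "esnext"]
    }
}
```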

Enjoy!

Keep an eye out for the full release of TypeScript 2.3 later this month which will have many more features coming.

For our Visual Studio 2017 users: as we mentioned above, we’re working hard to ensure future TypeScript releases will be available for you soon. We apologize for this inconvenience, but can assure you that a solution will be made available.

We appreciate any and all constructive feedback, and welcome you to leave comments below and file issues on GitHub if needed.


Windows 10 Tip: See what’s new in Windows with the Microsoft Tips app


It’s called Microsoft Tips — just type “Tips” in the search box on the taskbar, find it in the Start menu or open it from the Windows Store. It’s even available offline!

Check out tips in the app to enhance your browsing experience with Microsoft Edge, learn about Cortana* and discover other ways you can personalize, customize and awesome-ize Windows.

Here’s how to get started with the Tips app:


Open the Tips app and you’ll land on the Welcome page — your home for quick tips, videos and other “Cool! I didn’t know that” info about the latest updates to Windows 10. Dive right into a settings page to adjust the look and feel of Windows, or jump to a topic within the Tips app to learn more about a particular feature.

Don’t wait to see what’s new


We get it: reading about a fancy new app or setting isn’t as fun as fiddling around with it yourself. When browsing the Tips app, keep an eye out for buttons that send you directly to apps or settings for some hands-on discovery.

Personalized tips for your Surface device


The Tips app knows what kind of PC you’re working with. If you have a Surface Book or Surface Pro, open the app and look on the left-hand side for the “Your Surface Book/Pro” category. It has info tailored to your Microsoft device, like how to change your Surface Pen settings or how to switch between laptop and tablet mode.

Have a great week!

*Cortana available in select markets

The post Windows 10 Tip: See what’s new in Windows with the Microsoft Tips app appeared first on Windows Experience Blog.

Easy Async and Await for VBs Part 1, or…


…letting your code do absolutely nothing!

We’ve all been there, one way or the other: either as users of an app, or as the developer to whom users complained. When a typical Win32 app is waiting for an operation to complete, we often get to see something like this:

[Screenshot: a hanging native window marked “Not Responding”]

In discussions about how to get a handle on such scenarios, there are all kinds of suggestions, one of the most frequent being: “You need to do this asynchronously. Just start a new thread! You can do that with Tasks.” And this is when people start introducing patterns like the following into their code, which in most cases is not only unnecessary but dangerously wrong!

So, let’s go back to the original problem for a second and figure out its root cause. Principally, your app hangs like this when the so-called Message Loop of your app gets blocked. Every app (well, web apps excluded) has some kind of message loop, no matter whether your app is a Windows Forms app, or whether you based it on WPF, UWP or even Xamarin on Android or iOS – on the latter platforms they are called Run Loops or Loopers, but they are basically the same.

For a profound understanding of Async and Await, it is helpful to know what happens behind the scenes, and when and why a message loop gets blocked. After all, one of the primary purposes of Async and Await is to address apps’ lack of responsiveness, and to achieve this we need to keep the message loop clear as much as possible. So, let’s take a closer look at a rather unusual VB demo app, whose solution is called BlockingMessageLoop. You can find this demo in a GitHub repo that you can clone or download from here. The demo displays a simple window with a button on the screen. You can click the button, which causes a loop to spin just to use up processor workload – which means the message loop gets blocked, and you will end up with a result like that in the previous screenshot. So far so simple. This demo, however, is not your typical Windows Forms app. It implements everything without the WinForms base classes, so you can see – although very simplified – what actually happens behind the scenes when you work with Forms, Controls and Events:

As you can see here, there are no classes for what would usually be the task of a Form. The demo consists only of a Module, starting with Sub Main, and that method first creates the main window by calling CreateWindow and CreateButton, which puts a Button in that window. This is how Windows development was done in pure C a couple of decades ago, and of course you could – if you wanted – still do it the “original way” in VB. What’s more important here is what Sub Main also does: it starts the app’s Message Loop, which looks like this:

All Windows apps follow this same scheme. They create the windows, place controls inside those windows (principally, controls are a special form of windows themselves), and then they start the message loop at some point, which runs as long as the main window is visible on the screen. Whenever something happens with the windows or the controls, Windows sends a message to the app informing it about the nature of the event: when the app needs to redraw the content of a window, for example, this message is WM_PAINT; when the user clicks a command button, the message is WM_COMMAND – just to name a few.

To actually receive those messages, the app calls a Windows function named GetMessage. GetMessage waits until the next message for that app arrives, and while waiting it does not use up any processor workload – the app just idles. Once the app has the next message, it needs to run two additional functions: the first – TranslateMessage – processes additional keyboard commands, and the second – DispatchMessage – sends the message to a special message-processing handler, usually called WndProc (Windows Procedure):

By the way: the DispatchMessage function of Windows knows this WndProc method because the method was passed as a delegate when the main window of the app was originally created.

So, now assume the user of your app clicks the button, and the app should do something which really needs a long time to complete. For example, calculating 100,000 digits of Pi. Or writing a big file to a memory stick. What happens if you put that code into WndProc? Well, WndProc does not return to DispatchMessage for a long time, either. Which means the message loop cannot continue, which means no additional messages are coming through – your app stops responding. This is exactly what happens if you put code for an excessive workload in a button’s Click event handler in a Windows Forms or WPF app. And when that happens, Windows applies an emergency plan to your app: if your app does not react to any Windows messages within two seconds, Windows creates a new window which has the same title as that of the hanging app plus the famous words we all so love: “not responding”. Windows copies the last known content into this new window – only in a somewhat “blurry” version. Windows then hides the hanging app’s window from the screen and pretends the new window is your app’s main window for the time being. Now the user can at least move the hanging app’s window out of the way – or so it seems. Truth is, it is not the app’s window at all, but a fake copy.
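The same principle applies to any single-threaded event loop. As an analogy sketched in TypeScript (Node.js here standing in for the Windows message loop – an illustration, not the article’s VB listing), a queued callback cannot run while synchronous code is busy:

```typescript
// A queued callback (our stand-in for a Windows message) cannot be
// processed while synchronous code monopolizes the loop.
let messageProcessed = false;
setTimeout(() => { messageProcessed = true; }, 0); // "message" queued at once

const start = Date.now();
while (Date.now() - start < 100) {
    // busy work, e.g. calculating digits of pi
}

// 100 ms later, the "message" is still sitting unprocessed in the queue:
console.log(messageProcessed); // false
```

Only when the synchronous code returns does the loop get a chance to dispatch the queued callback – exactly the way GetMessage/DispatchMessage only run between units of work in WndProc.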

Save us, DoEvents!

As Visual Basic developers of several years, in such scenarios we know what to do, don’t we? Only a few really know what it actually does internally, but we’ve all used it at some point: we just invoke DoEvents in our lengthy methods, and all of a sudden apps no longer hang, as if by magic! Well, here’s the magic:

Some magic, right? All DoEvents does is peek whether there are any new messages. And if so, it simply does the same thing the message loop does: it translates and dispatches the message. That is why the app no longer hangs when we have a long-running worker method in WndProc, because queued messages get processed by WndProc (and successively also by DefWindowProc, which takes care of things like moving or resizing the window). But be careful: since we allowed messages to be processed while the worker method is still running, the user can click the button again, which causes WndProc to be called recursively another time. The chain: user clicks button, message loop calls DispatchMessage, DispatchMessage calls WndProc, WndProc runs worker code, worker code calls DoEvents, user has clicked the button again, DoEvents calls GetMessage and DispatchMessage, DispatchMessage calls WndProc again, and so the worker code gets started a second time – of course not in parallel, just recursively. So, you want to make sure to disable your controls when the worker code starts, and only enable them again when the worker code completes. The takeaway of this scenario: keep the message loop clear if you want the most responsive app!

Wasting time by doing nothing

Now let’s look at the next demo in the GitHub repo, which is called SyncVsAsync. This demo shows what happens if you have an app with a method which uses up a lot of processor workload, and you want to process something else in addition, like putting a big file on a storage device. Let’s say, on a memory stick:

[Screenshot: the SyncVsAsync workload demo app]

When you start the app, it immediately begins with its heavy workload routine: it calculates 100,000 digits of Pi. Since it uses DoEvents while calculating, you’re still able to control the app: all Windows messages get processed, so you can not only move the window around, you can also pick a folder on a memory stick, which you should have plugged into your computer beforehand. (This demo principally also works with the internal hard drive; unfortunately, those disks are so fast these days that everything happens too quickly to spot – since a thumb drive is considerably slower, it does the trick here.)

Once you’ve picked the folder, try this while the app is doing its calculations: open the Task Manager by right-clicking the taskbar. Sort the list by name. You should see the demo app in the list and, depending on how many cores your processor has, that it is using a pretty good amount of processor workload. Place the Task Manager so you can see it and the app at the same time, click the button Write File Sync…, and observe what happens:

[Screenshot: writing the file to the memory stick while Task Manager shows the app’s workload]

As soon as you click the button, the app starts to write a binary file of about 100 MByte to the memory stick. This causes two things. First, incoming Windows messages are obviously no longer getting processed: the last update you see in the dialog is “Saving File…”, but after that everything stops. No display refresh. You can no longer move the window. But you also notice: your app no longer uses up ANY processor workload – it seems to do absolutely nothing, yet – check it out when it’s done – the file still gets written to the memory stick! So, what’s happening here? What is writing the actual file?

The truth is: not the processor. Put simply, by calling fs.Write in the code above, the processor tells some I/O chip in your tablet or PC: “Here, take the file at that memory address and with that length, and put it on the stick for me. Call me when you’re done!” Well, and then the processor sits there, suspends itself and waits to be reactivated, if you will. But one thing it also does not do: run the message loop. Or at least call DoEvents occasionally. The result: your app hangs until fs.Write internally receives that callback, and the method can continue and eventually return.

Asynchronous Operations to the Best-Practice-Rescue!

Now let’s revisit the Task.Run code from the beginning. Do you notice now how senseless this code is? That sample code is explicitly spinning off another task, and the only thing this task does is wait for an I/O operation to complete. Task.Run means: we are utilizing an additional thread, and with that we are probably using up additional resources like another processor core. And by doing that, we’re confusing two concepts: I/O-bound workload and CPU-bound workload. The latter applies when we do some expensive work inside our code, like calculating 100,000 digits of Pi. The former is what we need when we’re putting a file on a storage device, or pulling data from the network. Both can be run asynchronously. Understanding the difference between those two is essential here!

Even without Async and Await, .NET was always able to handle asynchronous scenarios. And that still works. But it gets, well, kind of messy when writing the code for that. Let’s find out what happens when we restart the demo app and this time click the second button, Write file Async…. Here’s the code for that:

Notice that we use a slightly different version of the write method here: BeginWrite initiates an asynchronous operation. We’re just telling the OS it should offload the actual writing-to-device operation to the component inside the computer which is responsible for putting the file on the memory stick. Along with that, we’re passing the callback method EndWriteFileProc.

Important: when you create a file stream, make sure you prepare that file stream for asynchronous operations. It is important to know that it does not suffice to just use the asynchronous versions of the file operation functions (BeginWrite, BeginRead, etc.). The OS needs to be prepared for asynchrony as well, if you will. You do this via the parameters you pass to the FileStream constructor: determine a buffer big enough to hold the full size of what you are going to write (or read), and pass a flag indicating that you want everything asynchronous underneath. If your code forgets this, the opposite of what you want will happen: despite using the asynchronous versions of the file operations, the actual I/O work is done synchronously – wrapped in a thread by the framework, it just feels asynchronous. But it is not.

When the operation completes, the OS calls that method, so we’re notified and can do the cleanup:

Again, we can spot a problem here: for each asynchronous operation an app needs to kick off, it must put code in two different methods. Imagine how a big code base would have looked some 10 years ago, had an app already used asynchronous calls intensively. The result: pure spaghetti code!
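The shape of that problem is the same in any callback-based API. As a hedged sketch (hypothetical names, TypeScript standing in for the VB listing): one logical operation is torn across a kick-off function and a separate completion callback:

```typescript
// Hypothetical callback-style API: the "begin" half kicks off the work,
// and a separate "end" callback does the cleanup — two methods per operation.
function beginWrite(data: string, onDone: (bytesWritten: number) => void): void {
    setTimeout(() => onDone(data.length), 0); // simulate asynchronous completion
}

function endWriteProc(bytesWritten: number): void {
    // Cleanup lives here, far away from the code that started the operation.
    console.log(`wrote ${bytesWritten} bytes`);
}

beginWrite("hello", endWriteProc);
```

Multiply this by every asynchronous call in a large code base, and the control flow quickly becomes hard to follow.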

A method with a spot which does absolutely nothing (thus letting the Message Loop loop)!

Now, there is an (even weirder) way to do this in just one method. If we adjust the kick-off method to have the same signature as the callback, and we implement a simple state machine to differentiate whether the method currently acts as the kick-off or the callback part, we can unify both methods, thus getting a much cleaner program flow:

And yes, I know: in the code above I use the evil GoTo, and yes, I could have done it with If as well. But then the separation of purposes would not have been as obvious, and that’s the most important point of this example!

In any case, we achieved the goal: the code is readably placed in one method, it awaits the callback, and while doing so it does not claim program control: between the two parts the message loop can run, and the app does not hang at any point while the file is written to its destination device.

And at last: Async and Await

Starting with Visual Studio 2012, Visual Basic and C# both introduced the Await operator to simplify such scenarios significantly. But coming up with new keywords for a programming language is not an easy task, because you always run the risk of breaking existing code. Imagine a method like this one:

What would happen to that code if Await simply became a new keyword? It would break. This is where Async comes into play. It does not do much when applied to a method. It just decorates the method and tells the compiler: “You will very likely find one or more Await operators in this method, so please build the required state machines for awaiting the asynchronous calls which return Tasks.” Which results in important takeaway #1: no Async keyword on the method’s signature, no Await allowed in the method’s body. It’s as simple as that. Just decorating a method with Async does not do anything to the method. (Well, only internally, but it does not change the nature of the method at all.)

Important takeaway #2: the Await operator awaits Tasks. (To be completely accurate, Await awaits everything that exposes the so-called awaitable pattern, and Tasks are the ones you will be dealing with most often.) Tasks are promises that a certain method will deliver a result at some point. Does that mean a new task always runs parallel to the current one (where all the UI-related stuff happens)? Not at all! It might at some point. But it absolutely does not have to run in parallel. Asynchronous just means “will be happening” or “a result will be available at some point”.
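In promise terms (a TypeScript sketch as an analogy, not the article’s VB code): awaiting just suspends the current function until the promise delivers, while the loop keeps running – no extra thread is implied:

```typescript
// Awaiting a promise suspends this function, not the whole program,
// and does not require a second thread.
function sleep(ms: number): Promise<void> {
    return new Promise(resolve => setTimeout(resolve, ms));
}

async function worker(): Promise<string> {
    await sleep(10); // I/O-style wait: no busy CPU, the event loop keeps pumping
    return "done";
}

worker().then(result => console.log(result)); // prints "done" after ~10 ms
```

While `worker` is suspended at the `await`, other queued work (the message loop’s job in the Windows picture) continues to run.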

There is a convention that a method in the .NET Framework (or in any other library, although there is no guarantee for this) returns such a promise through Task, namely when the method’s name ends with “Async”. Thus, it’s comparatively easy to figure out such methods: in our write-to-memory-stick sample we are using the Write and Flush methods. If we consult IntelliSense for our FileStream variable fs, we quickly discover the methods WriteAsync and FlushAsync – both doing the same as their synchronous counterparts, but returning Task, and thus able to be awaited.

[Screenshot: IntelliSense showing WriteAsync and FlushAsync on the FileStream variable fs]

Oh, and one more very important thing: unless you have a really good and exceptional reason, and you absolutely know what you are doing, you should never call Async methods which do not return Task or Task(Of SomeType). Or, in other words for us VBs: only call Async Functions yourself; avoid calling Async Subs directly. You may ask yourself now: if you are not supposed to call Async Subs, why implement them to begin with, when they never get called? Well, there is one type of Sub which you never call, but which gets called in your apps: event handlers! Those can, under normal circumstances, be considered safe as Async Subs, because your code does not usually call them directly; rather, they get called.

The reason for that is simple: Tasks are promises, so they include a callback mechanism for when the actual task is done. This is pretty much what we learned when we used BeginWrite and EndWrite in a previous sample. The callback method EndWriteFileProc was the promise, if you will, in that sample, just done the old-fashioned way. A Task can now encapsulate both the initiating call and the callback in one entity. And awaiting a task enqueues the processing of that promise and picks up the control flow when the promise is delivered. In the meantime, the message loop runs while the actual method just sits there and does nothing. (Again, almost the same way as in the state machine sample – take another look!)

What’s important to know: a method returning a Task (a Function of type Task) can deliver that promise back to the caller. A Sub never returns anything to the caller. The method body of an Async Sub gets executed all right, but there is no Task (promise) which can be returned – it is a one-way street, and that’s why it is also called “fire and forget” (and why it should always have an exception handler, because crashing inside an Async Sub can crash the whole app. In some scenarios there is not even an exception message dialog or crash report – your app is just gone.) Let’s put this together now:

As we just pointed out, Async Subs are OK when they have event handler characteristics. Our Click event handler can therefore be an Async Sub. What is curious at first glance, though, is the next method, WriteFileCommandProcAsync: although it is a Function of type Task, it has no Return statement, and on top of that the compiler does not complain that the method is not returning any result. But that’s OK! Since we decorated this method with Async, the method’s body holds the code for the promise. So, inside an Async Function we return the result of that promise, not the promise itself. Which means:
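In promise terms (a TypeScript sketch standing in for the VB listing, which was an image in the original post): the function body returns the result, and what the caller receives is the promise:

```typescript
// Inside an async function you return the *result*; the *promise* is what
// the caller actually gets back.
async function getAnswer(): Promise<number> {
    return 42; // the result of the promise, not the promise itself
}

const promise = getAnswer();              // the caller holds the promise immediately
console.log(promise instanceof Promise);  // true
promise.then(value => console.log(value)); // 42, delivered later
```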

That means on the caller side:

And now it’s up to you: Go, experiment! Refactor your apps to be more responsive, and discover the goodness of Async and Await!

Happy VB coding, leave your comments, stay tuned for the next part, and Tschüss from Germany!

Klaus Löffelmann

(Follow me on Twitter: @loeffelmann)

Learn how to “think like a Freak” with Freakonomics authors at Microsoft Data Insights Summit


With more than 5 million copies sold in 40 countries, Freakonomics is a worldwide phenomenon — and we’re thrilled to announce that authors Steven Levitt and Stephen Dubner will join us for a special guest keynote at Microsoft Data Insights Summit, taking place June 12–13, 2017 in Seattle.

The Microsoft Data Insights Summit is our user conference for business analysts — and the place to be for those who want to create a data-driven culture at their organization. Now in its second year, the event is packed with strong technical content, hands-on workshops, and a great lineup of speakers. Plus, attendees can meet 1:1 with the experts behind Microsoft’s data insights tools and solutions, including Microsoft Power BI, SQL Server BI, Excel, PowerApps, and Flow.

From their bestselling books, to a documentary film, to a podcast boasting 8 million monthly downloads, authors Levitt and Dubner have been creating a data culture all their own, showing the world how to make smarter, savvier decisions with data. With their trademark blend of captivating storytelling and unconventional analysis, the duo will teach you to think more productively, more creatively, and more rationally — in other words, they’ll teach you how to think like a Freak!

Levitt and Dubner will get below the surface of modern business practices, discussing the topics that matter most to today’s businesses: how to create behavior change, incentives that work (and don’t work), and the value of asking unpopular questions. Their keynote will leave the audience energized, prepared to solve problems more effectively, and ready to succeed in fresh, new ways.

If you want to learn how to use data to drive better decisions in your business, you don’t want to miss this keynote — or the rest of the Microsoft Data Insights Summit. Register now to join us at the conference, June 12–13. Hope to see you there!

Monetizing your app: Advertisement placement


App developers are free to place ads in any part of their app; this gives developers some flexibility to blend the ad experience into their app for best results. We have seen that developers who take the time to do this get the best performance from their ads and are therefore able to earn more advertising revenue.

There are essentially two major factors to consider when placing an ad:

1. Optimize for Viewability – Over the last few years, advertisers have been moving toward tracking the viewability of advertisements, and for that reason pay only for ads that are actually viewable. Advertisers are also willing to pay more for viewable impressions.

The Microsoft Ads SDK sends information back to advertisers about whether an ad was viewable. It is recommended that you place advertisements in areas of your app where they have a greater chance of being viewed – for example, near the scoreboard of your game app or in the viewable area of a scrolling text app – and ensure that the ad is not hidden by other UX elements such as a button.

Note: If you hide an advertisement behind another UX element, this is considered ‘fraud’ and it’s likely that the application will be removed from the Windows Store if detected.

2. Optimize for Clicks – Many different types of ads serve in your app. Ads can be classified by the way the advertiser pays for them – either per impression, per click or per conversion. While Microsoft Advertising pays based on an impression measure of revenue (eCPM – effective cost per thousand impressions served in your app), a number of advertisers only pay for clicks (CPC – cost per click). The effective clicks help calculate the eCPM that is finally paid out to the developer.
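As a hedged arithmetic sketch (illustrative numbers only, not actual rates), eCPM normalizes whatever the advertisers actually paid to revenue per thousand impressions:

```typescript
// eCPM = (total ad revenue * 1000) / impressions served.
// Numbers below are purely illustrative.
function eCPM(revenue: number, impressions: number): number {
    return (revenue * 1000) / impressions;
}

// e.g. $20 earned (from clicks and impressions combined) over 50,000 impressions:
console.log(eCPM(20, 50000)); // 0.4 — an eCPM of $0.40
```

This is why click-throughs matter even under an impression-based payout: CPC revenue flows into the numerator and lifts the effective eCPM.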

Ad networks also track apps that have a higher click-through rate (CTR), and these apps are generally targeted more heavily by ad campaigns overall. We see that, in general, apps that get higher revenue are apps that have a better click-through rate.

Note: You can track the impressions, clicks, CTR and revenue in the Advertising Performance section under “Analytics” in the Dev Center Dashboard.

Stay tuned for additional tips to increase ad monetization over the next few weeks.

The post Monetizing your app: Advertisement placement appeared first on Building Apps for Windows.

A Comparison of Shell and Scripting Language Security


PowerShell Security is a topic on everybody’s mind. Most of all – ours.

As PowerShell has become more popular with administrators, it has also become more popular with unauthorized administrators – also known as “attackers”. In any operating system or platform, the power and efficiency you provide to authorized administrators is also available to unauthorized ones. For example, Unix, Linux, and Mac all have dozens of powerful built-in compilers, scripting languages, and debuggers. It’s a power user’s dream, but also a liability.

The PowerShell team has recognized this double-edged sword since the introduction of PowerShell in 2006. In the last 10 years, we’ve invested greatly in both securing and hardening PowerShell. In PowerShell version 5, we really cranked up the dials on making PowerShell security transparent – the results of which we describe in our post, “PowerShell ♥ the Blue Team“.

As part of this effort, we’ve also done a deep comparative analysis on security between available shells and scripting languages. Where are we weak? What security features do other shells or scripting languages offer that PowerShell could perhaps learn from?

We broke this evaluation into seven major categories:

  • Event Logging– The engine logs audit events of important operational events.
  • Transcription– The engine logs application inputs and outputs.
  • Dynamic Evaluation Logging– The engine logs the content of all code it evaluates, including code generated or composed at runtime.
  • Code Integrity Policies– The engine allows enforcement of code integrity / application whitelisting policies, including user-authored documents / scripts.
  • Antimalware Integration– The engine actively integrates with antimalware software to evaluate the safety of code generated at runtime.
  • Local Sandboxing– The engine allows sandboxing of behavior for local and interactive use.
  • Remote Sandboxing– The engine allows sandboxing of behavior when accessed remotely.

This is the result of our analysis. We would love any feedback you have – especially if you are aware of a feature or protection we missed. Misrepresenting any of this data does nobody any good.

comparitive_security
Lee Holmes [MSFT]
Azure Management Security

Learn how to “think like a Freak” with Freakonomics authors at Microsoft Data Insights Summit


With more than 5 million copies sold in 40 countries, Freakonomics is a worldwide phenomenon — and we’re thrilled to announce that authors Steven Levitt and Stephen Dubner will join us for a special guest keynote at Microsoft Data Insights Summit, taking place June 12–13, 2017 in Seattle.

The Microsoft Data Insights Summit is our user conference for business analysts — and the place to be for those who want to create a data-driven culture at their organization. Now in its second year, the event is packed with strong technical content, hands-on workshops, and a great lineup of speakers. Plus, attendees can meet 1:1 with the experts behind Microsoft’s data insights tools and solutions, including Microsoft Power BI, SQL Server BI, Excel, PowerApps, and Flow.

From their bestselling books, to a documentary film, to a podcast boasting 8 million monthly downloads, authors Levitt and Dubner have been creating a data culture all their own, showing the world how to make smarter, savvier decisions with data. With their trademark blend of captivating storytelling and unconventional analysis, the duo will teach you to think more productively, more creatively, and more rationally — in other words, they’ll teach you how to think like a Freak!

Levitt and Dubner will get below the surface of modern business practices, discussing the topics that matter most to today’s businesses: how to create behavior change, incentives that work (and don’t work), and the value of asking unpopular questions. Their keynote will leave the audience energized, prepared to solve problems more effectively, and ready to succeed in fresh, new ways.

If you want to learn how to use data to drive better decisions in your business, you don’t want to miss this keynote — or the rest of the Microsoft Data Insights Summit. Register now to join us at the conference, June 12–13. Hope to see you there!

All-new Minecraft Marketplace coming to Pocket and Windows 10 editions


Browse, download and play cool community creations from within the game itself for the first time on Windows 10

Minecraft has partnered with heroic ‘crafters well-known to the community to build up a launch catalogue of amazing adventure maps, texture packs, minigames and more. Noxcrew, BlockWorks, Qwertyuiop The Pie, Blockception, Sphax, Eneija, Imagiverse, Polymaps and Razzleberry Fox are the folk on board at launch, but we’re opening up submissions to anyone with a registered business. For more info on how to apply, go here.

The idea is to give Minecraft creators another way to make a living from the game, allowing them to support themselves in the creation of ever-greater projects, while giving Pocket and Windows 10 players access to a growing catalogue of fun stuff – curated and supplied by the Minecraft team, safely and simply. And, of course, you can still manually download free community creations you’ve found out there on the internet, too.

Read more at the Minecraft blog!

The post All-new Minecraft Marketplace coming to Pocket and Windows 10 editions appeared first on Windows Experience Blog.


Deploying to On-Premises Environments with Visual Studio Team Services or Team Foundation Server


I hear this particular question frequently as a reason teams are concerned about adopting Visual Studio Team Services when their applications still run on-premises. The good news is that it typically takes only a quick walkthrough of how build & deployment pipelines work. A big thanks to Sachi Williamson from Northwest Cadence for today's guest blog post! – Ed Blankenship

Your company’s apps may not be hosted in the cloud yet for various reasons, such as their configuration, dependencies, or network requirements. That’s okay! What many people don’t know is that you can still take advantage of great tools like Visual Studio Team Services or Team Foundation Server to manage your deployments. You’re probably asking yourself, “How can a cloud SaaS service like Team Services deploy to our on-premises environments?” That’s what we will explore today.

For the core service, your team has a choice: use Visual Studio Team Services as a completely hosted SaaS service from Microsoft, or run it on-premises by setting up Team Foundation Server (TFS). When you build and deploy your apps through Team Services or TFS, you use agents to run the build and deployment tasks. Team Services allows you to take advantage of hosted agents for running your build and deployment pipelines. The hosted agents are perfect for many scenarios, including your automated build process. However, when you want to deploy on-premises, you will want to run the deployment steps from agents that have access to your on-premises environment. This alternative scenario is enabled by leveraging private agents.

How does an agent communicate with Team Services or TFS?

The agent communicates with Team Services or TFS to determine which pipeline tasks it needs to run, and to report log entries and job status. This communication is always initiated by the agent. All messages from the agent to Team Services or TFS happen over HTTP or HTTPS, depending on how you configure the agent. This polling model allows the agent to be configured in different topologies, as shown below. In the Team Services example, you’ll notice we included an additional scenario where you run a “private agent” in a cloud-hosted virtual machine as well.

deployonprem1
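The polling model described above can be sketched in a few lines. This is an illustrative sketch only, not the real agent protocol; `fetch_job` and `report_status` are hypothetical stand-ins for the HTTPS calls the agent makes to Team Services or TFS:

```python
# Sketch of the agent's polling model: the agent always initiates the
# connection, asks the service for work, runs it, and reports status.
# `fetch_job` and `report_status` are hypothetical stand-ins for the
# real HTTPS calls to Team Services / TFS.
from typing import Callable, Optional

def run_agent(fetch_job: Callable[[], Optional[dict]],
              report_status: Callable[[str, str], None],
              max_polls: int) -> int:
    """Poll for jobs up to max_polls times; return the number of jobs run."""
    jobs_run = 0
    for _ in range(max_polls):
        job = fetch_job()          # outbound HTTP(S) request; no inbound port needed
        if job is None:
            continue               # no work queued; poll again
        report_status(job["id"], "running")
        # ... execute the build/deployment tasks here ...
        report_status(job["id"], "succeeded")
        jobs_run += 1
    return jobs_run

# Stubbed usage: one queued job, then an empty queue.
queue = [{"id": "job-1"}]
statuses = []
ran = run_agent(lambda: queue.pop() if queue else None,
                lambda jid, s: statuses.append((jid, s)),
                max_polls=3)
print(ran, statuses)  # 1 [('job-1', 'running'), ('job-1', 'succeeded')]
```

Because every request is outbound from the agent, no inbound firewall ports need to be opened on the on-premises network.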

How does an agent communicate with target servers for deployment?

When you use the agent to deploy artifacts to a target set of servers, the agent must have “line of sight” connectivity to those servers. The hosted agent pool, by default, has connectivity from the Azure cloud to anything else running in Azure or exposed to the public Internet. For example, you may have an Azure Website that a hosted agent is able to deploy to through endpoints exposed by the Azure App Service platform.

If your on-premises environments do not have connectivity to the hosted pool (which is typically the case because of firewalls), you will want to set up and configure a private agent on servers hosted in your on-premises network. The private agents need to have connectivity to the target on-premises environments where you want to deploy, and also need access to the Internet to connect to Team Services, as shown in the following diagram. If you are using Team Foundation Server, you will connect your private agent to your Team Foundation Server.

deployonprem2

To read more on communication and deployment to target servers, check out this documentation.

How do I deploy from Team Services or TFS to on-premises environments?

Build and deployment agents can run on many platforms. Walkthroughs are available in the documentation for setting up your agent on various operating systems.

One step that you will need to take is to set up the ability for the agent to authenticate with your Team Services account or your Team Foundation Server. One approach to authenticating is creating a Personal Access Token (PAT).

deployonprem3
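As an aside, if you ever call the Team Services REST API yourself with a PAT, the token is sent as an HTTP Basic credential with a blank username. A minimal sketch (the token value is a placeholder, not a real PAT):

```python
# Build the Authorization header Team Services expects for a PAT:
# HTTP Basic auth with an empty username and the token as the password.
import base64

def pat_auth_header(pat: str) -> dict:
    token = base64.b64encode(f":{pat}".encode("ascii")).decode("ascii")
    return {"Authorization": f"Basic {token}"}

print(pat_auth_header("mypat"))  # {'Authorization': 'Basic Om15cGF0'}
```

The agent configuration tooling handles this for you; the sketch just shows what the PAT turns into on the wire.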

The next step for setting up your private agent is to download and install the agent software on the server where you want to run the deployment tasks. Team Services and TFS allow you to group many agents into “pools”, which you will use later when configuring the pipeline to decide which pool of agents to use. For this example, you can add your private agent to the “Default” pool or you can create a new “On-premises” pool.

deployonprem4

deployonprem5

Now you can start editing your release pipeline. In default pipelines, you will notice the “Run on agent” scope for each of your environments. If you select it, you will see a “deployment queue” option to choose which pool of agents you want to run these deployment tasks on. Since we want to run these tasks against on-premises environments, we should select the “On-premises” agent pool that we created in the previous step.

deployonprem6

You can now queue a new release.  Once the release starts you’ll notice that it will choose a private agent from the on-premises pool and run any deployment steps on your on-premises network.

deployonprem7

That’s all there is to it! Whether your environments are on-premises or hosted in any cloud, Visual Studio Team Services and Team Foundation Server make it simple to deploy to any of your environments using private agents.

Visual Studio for Teams of C++ Developers


In this blog post we will dive into how Visual Studio supports teams of C and C++ developers. We’ll begin by creating a small C++ program and placing it in a Git repository in Visual Studio Team Services. Next we’ll see how to commit and push updates and get updates from others. Finally, we will work with GitHub repos using the GitHub extension for Visual Studio.

Adding an Existing C++ Project to Git in Visual Studio Team Services

In this example, you will create a small sample application and use Visual Studio to create a Git repository in Visual Studio Team Services. If you have an existing project, you can use it instead.

To get started you’ll need an account on Visual Studio Team Services. Sign up for a free Visual Studio Team Services account. You can use a personal, work or school account. During the process, a new default project may be created but will not be used in this example.

  1. Download the sample project from and unzip it into a suitable working directory. You can also use one of your own C++ projects; the steps will be the same.
  2. Start Visual Studio 2017 and load the CalculatingWithUnknowns solution. Expand the Source Files node in Solution Explorer to see the solution files:
    Visual Studio Solution Explorer showing project C++ source files
  3. The blue status bar at the bottom of the Visual Studio window is where you perform Git-related tasks. Create a new local Git repo for your project by selecting Add to Source Control in the status bar and then selecting Git. This will create a new repo in the folder the solution is in and commit your code into that repo.
  4. You can select items in the status bar to quickly navigate between Git tasks in Team Explorer.
    Status bar showing four different Git tasks
    1. Up-arrow with two shows the number of unpublished commits in your local branch. Selecting this will open the Sync view in Team Explorer.
    2. Pencil with 0 shows the number of uncommitted file changes. Selecting this will open the Changes view in Team Explorer.
    3. Current repo is CalculatingWithUnknowns  shows the current Git repo. Selecting this will open the Connect view in Team Explorer.
    4. Current Git branch is master shows your current Git branch. Selecting this displays a branch picker to quickly switch between Git branches or create new branches.
  5. In the Sync view in Team Explorer, select the Publish Git Repo button under Publish to Visual Studio Team Services.
    Sync view in Team Explorer with Publish Git Repo button highlighted in red
  6. Verify your email and select your account in the Account Url drop down. Enter your repository name (or accept the default, in this case CalculatingWithUnknowns) and select Publish Repository. Your code is now in a Team Services repo. You can view your code on the web by selecting See it on the web.

As you write your code, your changes are automatically tracked by Visual Studio. Continue to the next section if you want to learn how to commit and track changes to code, push your changes and sync and get changes from other team members. You can also configure your C++ project for continuous integration (CI) with Visual Studio Team Services.

Team Explorer Home dialog highlighting the example, CalculatingWithUnknowns C++ project, was pushed and you can see it on the web.

Commit and Push Updates and Get Updates from Others

Code change is inevitable. Fortunately, Visual Studio 2017 makes it easy to connect to repositories like Git hosted in Visual Studio Team Services or elsewhere and make changes and get updates from other developers on your team.

These examples use the same project you configured in the previous section. To commit and push updates:

  1. Make changes to your project. You can modify code, change settings, edit text files or change other files associated with the project and stored in the repository – Visual Studio will automatically track changes. You can view changes by right-clicking on a file in Solution Explorer then clicking View History, Compare with Unmodified, and/or Blame (Annotate).

C++ source file differences in CalculatingWithUnknowns.cpp.

  1. Commit changes to your local Git repository by selecting the pending changes icon from the status bar.

Status bar showing one pending change in the C++ project

  1. On the Changes view in Team Explorer, add a message describing your update and commit your changes.

Team Explore Changes dialog with a branch comment and the Commit All button highlighted

  1. Select the unpublished changes status bar icon or the Sync view in Team Explorer. Select Push to update your code in Team Services/TFS.

To sync your local repo with changes from your team as they make updates:

  1. From the Sync view in Team Explorer, fetch the commits that your team has made. Double-click a commit to view its file changes.
  2. Select Sync to merge the fetched commits into your local repo and then push any unpublished changes to Team Services.
  3. The changes from your team are now in your local repo and visible in Visual Studio.

Work with GitHub repos using the GitHub Extension for Visual Studio

The GitHub Extension for Visual Studio is the easiest way to connect to your GitHub repositories in Visual Studio. With the GitHub Extension, you can clone repos in one click, create a repository and clone it in Visual Studio in one step, publish local work to GitHub, create and view pull requests in Visual Studio, create gists and more.

In this section, we walk through installation, connecting to GitHub and cloning a repo.

  1. Install the GitHub Extension for Visual Studio. If you already have Visual Studio installed without the extension, you can install it from the Visual Studio GitHub site. You can also select it as part of the Visual Studio installation process. To install (or modify) with Visual Studio 2017, run the installer and click Individual components, then click GitHub extension for Visual Studio under Code tools, then proceed with other selections and installation (or modification):

Individual components in the installer with GitHub extension for Visual Studio selected

  1. On the Connect view of Team Explorer, expand the GitHub connection and select Sign In. Provide your GitHub credentials to complete sign in.

Team Explorer Connect dialog with GitHub options including sign-in

  1. Click Clone to bring up a dialog that shows all the repositories you can access. If you want to clone one, select it and then click Clone.
  2. To create a new repo, click Create and provide information about the repository. You can choose among several Git ignore preferences and licenses and choose whether your repo is public or private. If you have a private account, you will be restricted to private repositories.

Create a GitHub Repository dialog

  1. To publish an existing project on your machine, click on the Sync tab in the Team Explorer window to get to the Publish to GitHub option.

To learn more about the extension, visit the GitHub Extension for Visual Studio page.

 

How to control PowerPoint on Windows with a Bluetooth Nintendo Switch JoyCon controller! (or a Surface Pen)


I usually use a Logitech Presentation Clicker to control PowerPoint presentations, but I'm always looking for new ways. Michael Samarin has a great app called KeyPenX that lets you use a Surface pen to control PowerPoint!

However, I've also got this wonderful Nintendo Switch and two JoyCon controllers. Rachel White reminded me that they are Bluetooth! So why not pair them to your machine and map some of their buttons to keystrokes?

Let's do it!

First, hold the round button on the black side of the controller between the SL and SR buttons, then go into Windows Settings and Add Bluetooth Device.

Add a Bluetooth Device

You can add them both if you like! They show up like Game Controllers to Windows:

Hey a JoyCon is a JoyStick to Windows!

Ah, but these are Joysticks. We need to map Joystick actions to key presses. Enter JoyToKey. It's shareware: you can try it for free, but if you keep using it you can buy JoyToKey for just $7.

Hold down a button on your Joystick/Joycon to see what it maps to. For example, here I'm clicking in on the stick and I can see that's Button 12.

Using JoyToKey to map JoyCons to PowerPoint

Map them any way you like. I mapped left and right to PageUp and PageDown so now I can control PowerPoint!

Using JoyToKey to map JoyCons to PowerPoint
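Conceptually, all JoyToKey is doing is translating joystick button events into keystrokes. A toy sketch of that idea (the button names and keys are just the ones from my setup, and `send_key` is a hypothetical stand-in for an OS-level key injector, not a JoyToKey API):

```python
# Toy sketch of joystick-button -> keystroke mapping, the idea behind
# JoyToKey. The callback passed as `send_key` is a hypothetical
# stand-in for an OS-level key injector.
BUTTON_TO_KEY = {
    "left": "PageUp",     # previous slide
    "right": "PageDown",  # next slide
}

def handle_button(button: str, send_key) -> bool:
    """Translate a button press into a keystroke; ignore unmapped buttons."""
    key = BUTTON_TO_KEY.get(button)
    if key is None:
        return False
    send_key(key)
    return True

pressed = []
handle_button("right", pressed.append)
handle_button("home", pressed.append)  # unmapped: ignored
print(pressed)  # ['PageDown']
```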

And here it is in action:

ZOMG YOU CAN CONTROL POWERPOINT WITH THE #NintendoSwitch JoyCon! /ht @ohhoe

A post shared by Scott Hanselman (@shanselman) on


So fun! Enjoy!


Sponsor: Did you know VSTS can integrate closely with Octopus Deploy? Watch Damian Brady and Brian A. Randell as they show you how to automate deployments from VSTS to Octopus Deploy, and demo the new VSTS Octopus Deploy dashboard widget. Watch now



© 2017 Scott Hanselman. All rights reserved.

End of Support for Visual Studio 2008 – in One Year


In line with our ten-year support policy, Visual Studio 2008, its associated products, runtimes, and components will cease to be supported on April 10, 2018. Though your Visual Studio 2008 applications will continue to work, we encourage you to port, migrate, and upgrade your Visual Studio projects over the next year to ensure you continue to receive support. Visit visualstudio.com to get the most up-to-date version of Visual Studio.

Microsoft will no longer provide security updates, technical support, or hotfixes when support ends on April 10, 2018, for the following products:

  • Microsoft Visual Studio 2008 (All editions)
  • Microsoft Visual C# 2008 (All editions)
  • Microsoft Visual C++ 2008 (All editions)
  • Microsoft Visual Basic 2008 (All editions)
  • Microsoft Visual Studio Team System 2008 (All editions)
  • Microsoft Visual Studio Team System 2008 Team Explorer
  • Microsoft Visual Studio Team System 2008 Team Foundation Server
  • Microsoft Visual Studio Team System 2008 Team Suite
  • Microsoft Visual Studio Team System 2008 Test Load Agent
  • Microsoft Visual Web Developer 2008 Express Edition

All later versions of Visual Studio products will continue to be supported for the duration of their established support lifecycles. More information on these products is available on the Servicing for Visual Studio and Team Foundation Server products page.

You can also check out the lifecycle information for .NET, C++ and Windows components on the Microsoft Support Lifecycle site.

Lastly, Microsoft Visual J# Version 2.0 Redistributable Package Second Edition will also cease to be supported on October 10, 2017.

The best way to continue to get full support for Visual Studio products is to upgrade to the latest versions. Visit VisualStudio.com for information on the latest Visual Studio products.

Deniz Duncan, Program Manager, Visual Studio

Deniz is a program manager in the Visual Studio release engineering team, responsible for making Visual Studio available around the world. Prior to joining Microsoft in Redmond, Deniz worked with Microsoft’s enterprise customers in Australia. She is passionate about the customer experience and ensuring we release tools & features developers need, want and love to use.

Import repositories from TFVC to Git


You can now migrate code from an existing TFVC repository to a new Git repository within the same account. To start migration, select Import Repository from the repository selector drop-down.

importrepository

Individual folders or branches can be imported to the Git repository, or the entire TFVC repository can be imported (minus the branches). Users can also import up to 180 days of history.

importrepodialog-tfvc

We strongly recommend reading our whitepapers – Centralized version control to Git and TFVC to Git before starting the migration. For more details, please see the feature documentation. Give it a try and let me know if you have questions in the comments below. Thanks!

C++ Debugging and Diagnostics


Debugging is one of the cornerstones of software development, and it can consume a significant portion of a developer’s day.  The Visual Studio native debugger provides a powerful and feature-rich experience for finding and fixing problems that arise in your applications, no matter the type of problem or how difficult it is to solve.  In fact, there are so many debugging features and tools inside Visual Studio that it can be a bit overwhelming for new users.  This blog is meant to give you a quick tour of the Visual Studio native debugger and how it can help you in all areas of your C++ development.


Breakpoints and control flow

After you have built your application in Visual Studio, you can start the debugger simply by pressing F5.  When you start debugging, there are several commands that can help you to navigate the breakpoints in your application so that you can control the state of the program and the current context of the debugger.  These commands give you flexible control over the debugger’s scope and what lines and functions of code you want to investigate.

  • Continue with [F5]: Run to the next break point.
  • Step over [F10]: Run the next line of code and then break.
  • Step into [F11]: Step into the function called on the current line of code.
  • Step out [Shift+F11]: Step out of the current function and break at the next executable line after the function call.

When hovering over a breakpoint in your code, you will see two icons appear.  The icon on the right with two circles allows you to quickly toggle the current breakpoint on or off without losing the breakpoint marker at this line of code:

breakpoint

The icon on the left will launch the list of breakpoint options. Here you can add conditions or actions to a breakpoint.

bpmenu

Sometimes you want a breakpoint to be hit only when a certain condition is satisfied, like x<=5 is true where x is a variable in the debugger scope.  Conditional breakpoints can easily be set in Visual Studio using the inline breakpoint settings window, which allows you to conveniently add conditional breakpoints to your code directly in the source viewer without requiring a modal window.  Notice that conditional breakpoints contain a “+” sign to indicate at least one condition has been added to the breakpoint.

inlinebp

There is also a set of actions that can be performed when a breakpoint is hit, like printing the process ID or the call stack. Visual Studio refers to these breakpoint actions as “tracepoints”. The inline breakpoint settings window allows you to set a variety of breakpoint actions such as printing the call stack or PID. Notice that when at least one action is assigned to a breakpoint, the breakpoint appears as a diamond shape. In the example below, we have added both a condition and an action to the breakpoint; this makes it appear as a diamond with a “+” sign inside.

inlinebp2

Function breakpoints will activate when a specified function is encountered by the debugger. Use the Debug menu and select New Breakpoint to add a function breakpoint.

functionbp

Data breakpoints will stop the debugger when the value at a specific memory address changes during debugging. Use the Debug menu and select New Breakpoint to add a data breakpoint.

Data inspection and visualization

When you are stopped at a breakpoint, the debugger has access to the variable names and values that are currently stored in memory.  There are several windows that allow you to view the contents of these objects.

  • Locals: The locals window lists all variables currently within the debugger scope, which typically includes all static and dynamic allocations made so far in the current function.
  • Autos: This window provides a list of the variables in memory that originate from:
    • The current line at which the breakpoint is set. Note that in the example below, line 79 has yet to execute; the variable is not yet initialized, so there is no value for the Autos window to display.
    • The previous three lines of code. As you can see below, when we are at the breakpoint on line 79, the previous three lines are shown; the current line awaiting execution has been detected, but its value is not available until the line executes.

code1

autos

  • Watch: These windows allow you to track variables of interest as you debug your application. Values are only available when the listed variables are in the scope of the debugger.
  • Quick Watch is designed for viewing variable contents without storing them in the Watch window for later viewing. Since the dialog is modal, it is not the best choice for tracking a variable over an entire debugging session; for cases like this the Watch window is preferable.

quickwatch

  • Memory windows: These provide a more direct view of system memory and are not restricted to what is currently shown in the debugger. They provide the ability to arrange values by bit width, for example 16, 32, or 64 bits. This window is intended primarily for viewing raw, unformatted memory contents; viewing custom data types is not supported here.

memorywindow

Custom Views of Memory

Visual Studio provides the Natvis framework, which enables you to customize the way in which non-primitive native data types are displayed in the variable windows (Locals, Autos, Watches).  We ship Natvis visualizers for our libraries, including the Visual C++ STL, ATL, and MFC.  It is also easy to create your own Natvis visualizer to customize the way a variable’s contents are displayed in the debugger windows mentioned above.

Creating a Natvis File

You can add natvis files to a project or as a top-level solution item for .exe projects.  The debugger consumes natvis files that are in a project/solution.  We provide a built-in template under Visual C++ –> Utility folder for creating a .natvis file.

newnatvis

This will add the visualizer to your project for easier tracking and storage via source control.

solnexp

For more information on how to write .natvis visualizers, consult the Natvis documentation.

Modifying Natvis Visualizers While Debugging

The following animation shows how editing a natvis entry for the Volcano type changes the debugger display in the variable windows. The top-level display string for the object is changed to show the m_nativeName instead of the m_EnglishName. Notice how the changes to the .natvis file are immediately picked up by the debugger and the difference is shown in red text.

natvisedit
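Based on that description, the edit would look something like the following .natvis entry. This is a minimal sketch: the Volcano type and its m_nativeName/m_EnglishName members come from the example above, not from any shipped library.

```xml
<?xml version="1.0" encoding="utf-8"?>
<AutoVisualizer xmlns="http://schemas.microsoft.com/vstudio/debugger/natvis/2010">
  <Type Name="Volcano">
    <!-- Changing {m_EnglishName} to {m_nativeName} here updates the
         debugger's display string for Volcano objects. -->
    <DisplayString>{m_nativeName}</DisplayString>
  </Type>
</AutoVisualizer>
```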

Diagnostic tools and performance profiling

Most profiling tools run in a special mode that is separate from the debugger itself. In Visual Studio, we have added a set of performance and diagnostic tools that can run during debugging and provide more insight into the performance and state of your apps. You can control the flow of the application to get to a problem area and then activate more powerful tools as you drill down into the problem. Instead of waiting around for the problem to happen, you have full control of the program and decide what information you want to analyze, whether it’s how much time a function spends on the CPU or the memory usage of each allocation by type. The live CPU and memory usage of your application are displayed in the graph, and debugger events are indicated along the timeline. There is a tab for using each of the included diagnostic tools: CPU Usage and Memory Usage.

dtwindow

CPU Usage

This tool allows you to view the CPU usage for each function called in a selected time range on the CPU graph. You must enable the tool by clicking the “CPU Profiling” button on the left of this tab in order to select a time range for analysis.

cpuusage

Memory Usage

This tool enables you to use the memory profiler, which for native profiling must be enabled using the Heap Profiling button so that you can capture heap snapshots.  The button on the left takes a snapshot and you can view the contents of each snapshot by clicking the blue links in the snapshot table.

snapshotreel

The Types View shows the types that were resolved from the memory snapshot including the count and total memory footprint.  You can navigate to the Instances View by double-clicking a line in this view.

typesview

The Instances View shows the individual instances of the type selected in the Types View, including each instance’s memory footprint. You can navigate back to the Types View using the back arrow to the left of the type name.

instancesview

The Stacks View shows the call stack for your program and allows you to navigate through the call path of each captured allocation. You can navigate to the Stacks View from the Types View by selecting Stacks View in the View Mode dropdown. The top section of this page shows the full execution call stack and can be sorted by callee or caller (in-order or reverse) with the control at the top right called Aggregate call stack by. The lower section will list all allocations attributable to the selected part of the call stack. Expanding these allocations will show their allocation call stacks.

stacksview

Debugging processes and devices

Attaching to Process

Any process running on your Windows machine can be debugged using Visual Studio.  If you want to view the variable types, make sure to have the debug symbols loaded for the process that you are attaching to.

attach

Remote Debugging

To remotely debug into another machine that you can connect to via your network, enable the remote debugger via the debugger dropdown.  This allows you to debug into a machine no matter how far away it is, as long as you can connect to it over a network.  You can also easily debug applications running on external devices such as a Surface tablet.

debugselector

The IP address and connection details can be managed in the debugger property page, accessed using either Alt+Enter or right-clicking the project in the Solution Explorer.

debugpp

Multi-threaded debugging

Visual Studio provides several powerful windows to help debug multi-threaded applications. The Parallel Stacks window is useful when you are debugging multithreaded applications. Its Threads View shows call stack information for all the threads in your application and lets you navigate between threads and stack frames on those threads. In native code, the Tasks View shows call stacks of task groups, parallel algorithms, asynchronous agents, and lightweight tasks.

[Screenshot: Parallel Stacks window]

There is also a Parallel Watch window designed specifically for tracking variables across different threads, showing each thread as a row and each watch (object) as a column.  You can also evaluate boolean expressions on the data and export the data to spreadsheet (.csv or Excel) for further analysis.

[Screenshot: Parallel Watch window]

Edit and continue

Edit and continue allows you to edit some sections of your code during a debugging session without rebuilding, potentially saving a lot of development time.  This is enabled by default, and can be toggled or customized using the Debugging options, accessible via the Debug menu and selecting Options.

[Screenshot: Edit and Continue debugging options]

Other resources

If you are interested in some more content and videos about debugging in Visual Studio, check out these links:

Blog posts

Related documentation

Videos

Azure #DocumentDB Service Level Agreements


Why enterprises trust us for their globally distributed applications.

Enterprise applications and massive-scale applications need a data store that is globally distributed, offers limitless scale and geographical reach, and is fast and performant. Along with enterprise-grade security and compliance, a major criterion is the level of service guarantees the database provides in terms of availability, performance, and durability. Azure DocumentDB is Microsoft’s globally distributed database service designed to enable you to build planet-scale applications, elastically scaling both throughput and storage across any number of geographical regions. The service offers guaranteed single-digit-millisecond latency at the 99th percentile, 99.99% high availability, predictable throughput, and multiple well-defined consistency models.

We recently updated our Service Level Agreements (SLAs) to comprehensively cover latency, availability, throughput, and consistency. By virtue of its schema-agnostic and write-optimized database engine, DocumentDB by default automatically indexes all the data it ingests and serves it across SQL, MongoDB, and JavaScript language-integrated queries in a scale-independent manner. As one of the foundational services of Azure, DocumentDB has been used virtually ubiquitously as a backend for first-party Microsoft services for many years. Since its general availability in 2015, DocumentDB has been one of the fastest-growing services on Azure.

[Figure: DocumentDB SLA overview]

Industry leading comprehensive SLA

Since its inception, Azure DocumentDB has always offered the best SLA in the industry, with 99.99% availability guarantees. Now, we are the only cloud service offering a comprehensive SLA for:

  • Availability: The most classical SLA. Your system will be available more than 99.99% of the time, or you get a refund.
  • Throughput: At a collection level, we guarantee that requests against your database collection are always executed according to the maximum throughput you provisioned.
  • Latency: Since speed is important, we guarantee that 99% of your requests will have a latency below 10 ms for document read or 15 ms for document write operations.
  • Consistency: We guarantee that we will honor the consistency guarantees in accordance with the consistency levels chosen for your requests.

While everyone is familiar with the notion of an SLA on availability or uptime, providing financial guarantees on throughput, latency, and consistency is a first and an industry-leading initiative. This is not only difficult to implement, but also hard to make transparent to users. Thanks to the Azure portal, we provide full transparency on uptime, latency, throughput, and the number of requests and failures. In the rare case that we are unable to honor any of these SLAs, we will provide credits of 10% to 25% of your monthly bill as a refund.

Availability SLA – 99.99%

[Screenshot: availability SLA]

The following equation shows the SLA formula for availability, given a month with 744 hours:

[Formula: Monthly Availability Percentage]

[Screenshot: availability metrics in the Azure portal]

A failed request is one that returns HTTP status code 5xx or 408 (for document read/write/query operations), as shown in the portal.
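The availability formula itself is published as an image above; as a rough illustrative sketch (the per-hour error-rate arithmetic here is my assumption for illustration, not the official SLA text), the calculation looks like this:

```python
# Illustrative sketch of a monthly availability calculation. Assumptions:
# the hourly error rate is failed requests / total requests, and monthly
# availability is 100% minus the average hourly error rate.
def monthly_availability(hourly_counts):
    """hourly_counts: list of (total_requests, failed_requests), one per hour."""
    rates = [failed / total if total else 0.0 for total, failed in hourly_counts]
    return 100.0 * (1.0 - sum(rates) / len(rates))

# A 744-hour month with a single bad hour still clears 99.99%.
counts = [(10_000, 0)] * 743 + [(10_000, 5)]
print(round(monthly_availability(counts), 4))  # 99.9999
```

With per-hour averaging, a brief burst of 5xx responses in one hour barely dents the monthly number, which is why sustained outages are what actually trigger SLA credits.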

Throughput SLA – 99.99%

The following equation shows the SLA formula for throughput, given a month with 744 hours:

[Formula: Monthly Throughput Attainment Percentage]

[Screenshot: throughput metrics in the Azure portal]

"Throughput Failed Requests" are requests that are throttled by the DocumentDB collection, resulting in an error code, before the consumed RUs have exceeded the provisioned RUs for a partition in the collection in a given second. To avoid being throttled due to misuse, we highly recommend reviewing the best practices for partitioning and scaling DocumentDB.
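In other words, a throttled request only counts against the SLA if throttling happened while you were still within your provisioned throughput. A small sketch of that exclusion rule (the tuple fields here are illustrative assumptions, not an actual API):

```python
# Illustrative: a request counts as a "Throughput Failed Request" only if it
# was throttled while consumed RUs were still within the provisioned RUs for
# that second. The field layout is assumed for illustration.
def is_throughput_failure(throttled, consumed_ru, provisioned_ru):
    return throttled and consumed_ru <= provisioned_ru

requests = [
    (True, 380, 400),   # throttled within provisioned RUs: counts against SLA
    (True, 450, 400),   # throttled after exceeding provisioned RUs: excluded
    (False, 200, 400),  # not throttled: excluded
]
print(sum(is_throughput_failure(*r) for r in requests))  # 1
```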

Consistency SLA – 99.99%

"Consistency Level" is the setting for a particular read request that supports consistency guarantees. You can monitor the consistency SLA through Azure portal:

[Screenshot: eventual consistency SLA metrics in the Azure portal]

Note: In this screenshot SLA = Actual

The following table captures the guarantees associated with the Consistency Levels. Please note:

  • "K" is the number of versions of a given document for which the reads lag behind the writes.
  • "T" is a given time interval.

 

CONSISTENCY LEVEL    CONSISTENCY GUARANTEES
Strong               Strong
Session              Read Your Own Write
                     Monotonic Read
                     Consistent Prefix
Bounded Staleness    Read Your Own Write (Within Write Region)
                     Monotonic Read (Within a Region)
                     Consistent Prefix
                     Staleness Bound < K,T
Consistent Prefix    Consistent Prefix
Eventual             Eventual

If a month has 744 hours, the SLA formula for consistency is:

[Formula: Monthly Consistency Attainment Percentage]

[Screenshot: consistency metrics in the Azure portal]

Latency SLA – P99

[Screenshot: observed read latency in the Azure portal]

For a given application deployed within a local Azure region, in a month, we sum the number of one-hour intervals during which Successful Requests submitted by an application resulted in a P99 latency greater than or equal to 10 ms for document read or 15 ms for document write operations. We call these hours "Excessive Latency Hours."

Formula7

If Monthly P99 Latency Attainment % is below 99%, we consider it a violation of the SLA and we will refund you up to 25% of your monthly bill.
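Putting the pieces together, the attainment percentage can be sketched as follows (assuming, per my reading of the text above, that attainment is simply the fraction of hours in the month that were not Excessive Latency Hours):

```python
# Sketch of the latency attainment calculation: count one-hour intervals
# whose P99 read latency is at or above the 10 ms threshold as "Excessive
# Latency Hours", then take the fraction of remaining hours.
READ_P99_THRESHOLD_MS = 10.0
TOTAL_HOURS = 744  # hours in the example month used throughout this post

def latency_attainment(hourly_p99_ms):
    excessive = sum(1 for p99 in hourly_p99_ms if p99 >= READ_P99_THRESHOLD_MS)
    return 100.0 * (TOTAL_HOURS - excessive) / TOTAL_HOURS

# 740 healthy hours and 4 breaching hours: still above the 99% bar.
print(round(latency_attainment([6.0] * 740 + [12.0] * 4), 2))  # 99.46
```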

We hope that this short blog helped you understand the broad coverage of our enterprise SLAs.

Azure DocumentDB, home for Mission Critical Applications

Azure DocumentDB hosts a growing number of customers’ mission-critical apps. Our customers come from diverse verticals such as banking and capital markets, professional services, discrete manufacturing, startups, and health solutions. However, they share a common characteristic: the need to scale out globally without compromising on speed and availability. Thanks to its architecture, Azure DocumentDB can deliver on these promises at a very low cost.

Build your first globally distributed application

Our vision is to be the database for all modern applications. We want to enable developers to truly transform the world we are living in through the apps they are building, which is even more important than the individual features we are putting into DocumentDB. Developing applications is hard; developing distributed applications at planet scale that are fast, scalable, elastic, always available, and yet simple is even harder. Yet it is a fundamental prerequisite to reaching people globally in our modern world. We spend countless hours talking to customers every day and adapting DocumentDB to make the experience truly stellar and fluid.

So what are the next steps you should take? Here are a few that come to mind:

If you need any help or have questions or feedback, please reach out to us on the developer forums on Stack Overflow. Stay up-to-date on the latest DocumentDB news and features by following us on Twitter (@DocumentDB) and join our LinkedIn Group.


Announcing public preview of Instance Metadata Service


We are excited to announce the public preview of the Instance Metadata Service in Azure’s West Central US region. The Instance Metadata Service is a RESTful endpoint that allows virtual machine instances to get information about their compute, network, and upcoming maintenance events. The endpoint is available at a well-known non-routable IP address (169.254.169.254) that can be accessed only from within the VM. The data from the Instance Metadata Service can help with your cluster setup, replica placement, supportability, telemetry, or other cluster bootstrap or runtime needs.

Previews are made available to you on the condition that you agree to the terms of use. For more information, see Microsoft Azure Supplemental Terms of Use for Microsoft Azure Previews.

Service Availability

The service is currently available to all VMs created via Azure Resource Manager in the West Central US region. As we add more regions, we will update this post and the documentation with the details.

Regions where Instance Metadata Service is available
West Central US

Detailed documentation

Learn more about Azure Instance Metadata Service

Retrieving instance metadata

The Instance Metadata Service is available for running VMs created and managed using Azure Resource Manager. To access all data categories for an instance, use the following sample code on Linux or Windows:

Linux

curl -H "Metadata: true" "http://169.254.169.254/metadata/instance?api-version=2017-03-01"

Windows

Invoke-RestMethod -Headers @{"Metadata"="true"} -Uri "http://169.254.169.254/metadata/instance?api-version=2017-03-01" -Method Get

The default output for all instance metadata is in JSON format (content type application/json).
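Equivalently, code running inside the VM can call the endpoint with nothing but the standard library. The endpoint address, required Metadata header, and api-version below come from this post; the helper names are mine:

```python
import json
import urllib.request

# Well-known, non-routable IMDS endpoint, reachable only from within the VM.
IMDS_URL = "http://169.254.169.254/metadata/instance?api-version=2017-03-01"

def build_request():
    req = urllib.request.Request(IMDS_URL)
    req.add_header("Metadata", "true")  # required, or you get a 400 Bad Request
    return req

def fetch_instance_metadata():
    # Only works when run on an Azure VM in a supported region.
    with urllib.request.urlopen(build_request(), timeout=2) as resp:
        return json.load(resp)  # default output is JSON

# On a VM: print(json.dumps(fetch_instance_metadata(), indent=2))
```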

Instance Metadata data categories

The following table lists all data categories available via Instance Metadata:

Data                          Description
location                      Azure region the VM is running in
name                          Name of the VM
offer                         Offer information for the VM image; present only for images deployed from the Azure image gallery
publisher                     Publisher of the VM image
sku                           Specific SKU for the VM image
version                       Version of the VM image
osType                        Linux or Windows
platformUpdateDomain          Update domain the VM is running in
platformFaultDomain           Fault domain the VM is running in
vmId                          Unique identifier for the VM (more info here)
vmSize                        VM size
ipv4/ipaddress                Local IP address of the VM
ipv4/publicip                 Public IP address for the instance
subnet/address                Address of the subnet
subnet/dnsservers/ipaddress1  Primary DNS server
subnet/dnsservers/ipaddress2  Secondary DNS server
subnet/prefix                 Subnet prefix, for example 24
ipv6/ipaddress                IPv6 address for the VM
mac                           VM MAC address
scheduledevents               See scheduledevents

FAQs

  • I am getting "Bad request. Required metadata header not specified." What does this mean?

    The Instance Metadata Service requires the header Metadata: true to be passed in the request. Passing this header allows access.

  • Why am I not getting compute information for my VM?

    Currently the Instance Metadata Service supports only instances created through Azure Resource Manager. Support for Cloud Services VMs will be added in the future.

  • I created my virtual machine through Azure Resource Manager a while back. Why am I not seeing compute metadata information?

    For any VM created after September 2016, add a new tag to start seeing compute metadata. For older VMs (created before September 2016), add or remove an extension on the VM to refresh the metadata.

  • Why am I getting error 500 (Internal Server Error)?

    Currently the Instance Metadata Service preview is available only in the West Central US region; please deploy your VMs there.

  • Where do I share additional questions or comments?

    Send your comments on http://feedback.azure.com.

Networking to and within the Azure Cloud, part 1


Hybrid networking is a nice thing, but how do we define it? For me, in the context of connectivity to virtual networks (ExpressRoute private peering or VPN connectivity), it is the ability to connect cross-premises resources to one or more Virtual Networks (VNets). While this all works nicely, and we know how to connect to the cloud, how do we network within the cloud? There are at least three built-in ways of doing this in Azure. In this series of three blog posts, my intent is to briefly explain:

  1. Hybrid networking connectivity options
  2. Intra-cloud connectivity options
  3. Putting all these concepts together

Hybrid Networking Connectivity Options

What are the options? Basically, there are 4 options:

  1. Internet connectivity
  2. Point-to-site VPN (P2S VPN)
  3. Site-to-Site VPN (S2S VPN)
  4. ExpressRoute

Internet Connectivity

As its name suggests, internet connectivity makes your workloads accessible from the internet, by having you expose different public endpoints to workloads that live inside the virtual network. These workloads could be exposed using an internet-facing load balancer, or simply by assigning a public IP address to the ipconfig object, a child of the NIC, which is in turn a child of the VM. This way, it becomes possible for anything on the internet to reach that virtual machine, provided the host firewall (if applicable), network security groups (NSGs), and user-defined routes allow that to happen.

So in that scenario, you could expose an application that needs to be public to the internet and be able to connect to it from anywhere, or from specific locations depending on the configuration of your workloads (NSGs, etc.).

Point-to-Site VPN or Site-to-Site VPN

These two fall into the same category. They both need your VNet to have a VPN gateway, and you can connect to it either using a VPN client on your workstation as part of the Point-to-Site configuration, or by configuring your on-premises VPN device to terminate a Site-to-Site VPN. This way, on-premises devices are able to connect to resources within the VNet. The next blog post in the series will touch on intra-cloud connectivity options.

ExpressRoute

This connectivity is well described in the ExpressRoute technical overview. Suffice it to say that, as with the Site-to-Site VPN option, ExpressRoute also allows you to connect to resources that are not necessarily in only one VNet. In fact, depending on the SKU, it can allow connections to more than one VNet: up to 10, or up to 100 with the premium add-on, depending on bandwidth. This will also be described in greater detail in the next post, Intra-Cloud Connectivity Options.

New to Sway—recording, closed caption, navigation, autoplay and view counts


Over the last few months, we’ve been on the ground asking users what features they would like to see in the Sway app. We heard all the ways you use Sway in your personal, school and work lives, and listened to tons of great recommendations on how we could make these experiences even better. As a result of this partnership, we’re proud to announce another round of updates from the Sway team.

Audio recording

The ability to add audio to Sways was the top request from educators, as students and teachers (among others) love to express their ideas and thoughts in this natural and intuitive way. Now, you can add audio recordings to make your Sway more interactive and engaging.

Please note that not all web browsers support recording. If you see the message, “This browser doesn’t support audio recording in the Sway web app,” open Sway in another browser where recording is supported, such as Edge (on Windows) or Chrome (on a Mac or Chromebook). For more information, please see “Record audio in Sway.”

Closed caption

With the Microsoft mission to empower every person and every organization on the planet to achieve more, we continue to add features to improve accessibility. We are excited to announce that authors can now associate closed caption files with their audio recordings or audio files added from their local drive or OneDrive/OneDrive for Business. Office 365 authors can also associate closed caption files with any video files uploaded from their local drive or from OneDrive for Business.

First, add a new video or audio card to your Sway—by either uploading the content, recording it or adding the content from your OneDrive/OneDrive for Business account—and then expand the card using the Details button. Next, click the Add closed caption button at the bottom of the expanded card and select your closed caption file (.vtt format) and the language it is in.

When viewing a Sway that has a video with closed captions available, viewers can simply turn closed captions on and select the language of the closed captions they’d like to see.

Navigation view

Sways are now easier to navigate. Jump back and forth between sections or get a glimpse of the Sway content—all from the new Navigation view.

When you click or tap the Navigation icon (in the bottom-right corner), the Sway fades into the background and the Navigation view appears. Your Sway title, section headers, images and text collectively form an engaging and informative navigation view.

Here’s an example of the Navigation view in the Universe Sway:

Click or tap each section tile to jump to that section, or scroll to see more sections if your Sway is longer.

Autoplay

You told us you wanted to automatically play and continuously loop a Sway for unattended cases such as billboards.

We’re happy to announce that, if you have an Office 365 subscription, you can now set your Sway to play automatically! If you are the author of the Sway, use the menu on the top right to go to Settings and turn autoplay on. Additionally, you can autoplay any Sway you are viewing by clicking the Settings gear at the top right-hand corner of the Sway. In the Autoplay settings box, set the delay and then press Start. The Sway will now play automatically.

Once the Sway is playing, you can change the delay, pause or stop playback using the controls on the bottom-right corner.

View count

We also heard from our users that they want to know how engaged their audience is with their Sways. Now, authors can see how many people have viewed their Sway. We officially started the view count on March 13, 2017, so if you see “No data,” this means that the Sway has not had any viewers since that date.

We hope you enjoy using the newest features in Sway, and as always, we look forward to your suggestions, feedback and comments on our UserVoice page.

—The Sway team

The post New to Sway—recording, closed caption, navigation, autoplay and view counts appeared first on Office Blogs.

Explore Microsoft Teams, Windows 10 and Office 365 in an interactive online session


Today’s fast-paced workplace requires you to transition between tasks seamlessly and find things quickly. As work and collaboration evolve to become more web-based and complex, changing professional styles call for tools that provide agility across the full span of a workday.

Instead of wasting time searching for your most-used features and content, you need technology that allows you to transition from email to collaboration to project work and back again with ease.

Test our latest technology on your own work

Microsoft is now offering hands-on live sessions where you’ll have the opportunity to test drive Windows 10, Office 365 and our hottest new collaboration tool: Microsoft Teams. During these small-group sessions, you’ll have the opportunity to apply these tools to your own business scenarios and see how they work for you.

Each 90-minute session starts with an online business roundtable, discussing your biggest business challenges with a trained facilitator, and then transitions into a live environment in the cloud. You will receive a link to connect your own device to a remote desktop loaded with our latest technology, so you can experience first-hand how Microsoft tools can solve your challenges.

Learn skills that will simplify your workflow immediately

During this interactive online session, you will:

  • Explore how Microsoft Teams helps you collaborate with your coworkers in different locations and time zones.
  • Discover how you can keep your information more secure without inhibiting your workflow.
  • Learn how to visualize and analyze complex data, quickly zeroing in on the insights you need.
  • See how multiple team members can access, edit and review documents simultaneously.
  • Gain skills that will save you time and simplify your workflow immediately.

Register for a free interactive session and experience for yourself how the latest Microsoft technology can help you be more productive.

Each session is limited to 12 participants.

Reserve your seat:

U.S. customers: Register here
Outside the U.S.: Register here

The post Explore Microsoft Teams, Windows 10 and Office 365 in an interactive online session appeared first on Office Blogs.

Rest easy with regulatory compliance in Windows Server 2016


Last month we learned that Windows Server 2016 has achieved Common Criteria certification for the General Purpose OS protection profile.

This international standard is especially important for our customers in the public sector, where Common Criteria certification is highly recommended or even required. That’s why Microsoft has been participating in Common Criteria for nearly two decades, dating back to Windows 2000 Server.

Deploying Windows Server 2016 can also help you meet a host of other compliance requirements and security objectives, such as ISO 27001, PCI, and FedRAMP.

What does this mean? If compliance with any of these regulatory requirements is important to your organization or industry, you can rest easy. We’ve done the work for you, mapping the security features in Windows Server 2016 to these certifications.

All you have to do is click on the appropriate link(s) below to see how Windows Server 2016 helps you get the certifications you need.

For more security guidance for the Windows Server operating system in general, check out the Windows Server security page on TechNet.
