
Run To Click Debugging in Visual Studio 2017


You have many options when navigating through your code with the debugger in Visual Studio, including setting breakpoints, stepping, and using Run to Cursor. In Visual Studio 2017 we have introduced Run to Click, a new way to more easily debug your code – point and click style. You no longer need to set temporary breakpoints or step several times to execute your code and stop on the line you want. You can now get all the benefits of Run to Cursor (Ctrl+Shift+F10) without searching through the context menu or taking your hand off the mouse for a two-handed shortcut combination. Run to Click works while debugging in any programming language in Visual Studio, including C#, VB, C++, and Python.

Point and Click Debugging

When stopped at a break state under the debugger, a light green “run execution to here” glyph subtly appears next to the line of code your mouse is hovering over.

RunToClickICONOff

Move your mouse to the glyph and click the button. Your code will run and stop on that line the next time it is hit in your code path.

RunToClickICONOn

This is especially useful if you naturally have one hand on the mouse while debugging to inspect variables with data tips in your code. You can quickly Run to Click a line of code, inspect the variables on that line, and then continue debugging, all while keeping focus and attention in that location. Run to Click works within the same method, across different methods, and inside loops!

RunToClickBasic

Special Notes

Remember that Run to Click will run the execution of your application that you are debugging until that line of code is reached.

  • If you click on a line of code that won’t be hit, the application will finish executing.
  • If you click on a line of code that requires additional user input to be reached, the application will resume and wait for that input; once the code path is triggered, you will break on the line where you performed Run to Click.
  • If you Run to Click on a line and the execution path triggers a breakpoint, the debugger will stop at any breakpoints in the path. When you hit Continue (F5), execution will resume and you will stop on the line where you triggered Run to Click (just as if you had set a breakpoint on that line).

You can turn off Run to Click by unchecking Debug > Options > Enable Run to Click.

Wrap Up

Run to Click improves your productivity by helping you get to the code you want to inspect faster. Try it out by downloading Visual Studio now (https://www.visualstudio.com/downloads/) and let me know what you think! We always want your feedback, so drop your questions and comments in the section below, or tweet @VS_Debugger.


Announcing .NET Core Tools 1.0


Today, we are releasing .NET Core Tools 1.0. It’s an exciting day and an important milestone in the .NET Core journey. The tools are supported on Windows, macOS and Linux. You can use them at the command-line, with VS Code or as part of Visual Studio 2017.

You can download the new .NET Core tools from the .NET Core Downloads page to use .NET Core at the command line.

The tools are also integrated into Visual Studio 2017, also releasing today. You need to select the .NET Core cross-platform development workload when you install Visual Studio.

You can read more about the Visual Studio 2017 release in Announcing Visual Studio 2017 General Availability… and more. Also check out Visual Studio 2017: Productivity, Performance, and Partners.

Visual Studio for Mac also includes support for .NET Core. Visual Studio for Mac Preview 4 is also releasing today.

Visual Studio Code works great with .NET Core, too. The C# extension is a must-have for .NET Core development, and is typically suggested to you as soon as you open a C# or csproj file. You can also try a more experimental extension that offers csproj intellisense.

New versions of the .NET Languages are included in Visual Studio. You can now use C# 7, Visual Basic 15 and F# 4.1. C# 7 is already supported with .NET Core (just by using the latest Roslyn!). We expect F# support in the first half of 2017 and then VB support after that.
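
For example, a C# 7 feature like out variables or pattern matching can be used in a plain .NET Core console app today; the following is a minimal sketch:

using System;

class Program
{
    static void Main()
    {
        // C# 7 out variables: declare the variable right at the call site.
        if (int.TryParse("42", out int n))
            Console.WriteLine(n);

        // C# 7 pattern matching with 'is'.
        object value = 3.14;
        if (value is double d)
            Console.WriteLine(d * 2);
    }
}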

.NET Core 1.0.4 and .NET Core 1.1.1 were also released today. Both releases include a set of reliability updates to improve the quality of .NET Core. You can download the .NET Core Runtime releases via our .NET Core Runtimes download page. If you are looking for the .NET Core SDK to get the latest tools, try the .NET Core SDK download page.

A Quick Tour of .NET Core

There are some great experiences and features that we’ve built over the last several months, a big step forward over the tools that were previously available for .NET Core.

You can take the tour of .NET Core by watching What’s new in .NET Core and Visual Studio 2017. It’s a great video, showing you a lot of the new .NET Core experience in just 8 minutes.

We are encouraging everyone doing .NET Core development to move to Visual Studio 2017, including migrating from project.json to csproj. We will not be supporting csproj and MSBuild for .NET Core in Visual Studio 2015.

Let’s take a look at some of the key improvements, starting with the command-line experience, moving to Docker and then into Visual Studio.

Command-line experience

You can very quickly create and run a .NET Core app on your favorite OS. Let’s see what the experience looks like on Windows, creating and launching a .NET Core 1.0 console app. You need the .NET Core SDK installed for this experience.

C:\>dotnet new console -o myapp
The template "Console Application" created successfully.

C:\>cd myapp

C:\myapp>dotnet restore
  Restoring packages for C:\myapp\myapp.csproj...

C:\myapp>dotnet run
Hello World!

Docker experience

You can see a similar command-line experience using Docker. In this example, I’m creating and launching an ASP.NET Core 1.1 MVC site using a .NET Core SDK Linux Docker image with Docker for Windows (I prefer the “Stable channel”). You just need Docker for Windows installed and configured for Linux containers for this experience. You do not need .NET Core installed. You can do the same thing with .NET Core and Windows containers.

C:\Users\rich>docker run -p 8000:80 -e "ASPNETCORE_URLS=http://+:80" -it --rm microsoft/dotnet

root@9ea6f0be8ef7:/# dotnet new mvc -o mvc --framework netcoreapp1.1
The template "ASP.NET Core Web App" created successfully.

root@9ea6f0be8ef7:/# cd mvc

root@9ea6f0be8ef7:/mvc# dotnet restore
  Restoring packages for /mvc/mvc.csproj...

root@9ea6f0be8ef7:/mvc# dotnet run
Hosting environment: Production
Content root path: /mvc
Now listening on: http://+:80
Application started. Press Ctrl+C to shut down.

I then navigated the web browser to http://localhost:8000 and saw the following page load, instantly. I still find this configuration amazing, loading ASP.NET sites from a Linux container, on Windows. Try it; it works!

mvc-app-template

 

Visual Studio Templates

There are new templates that you can use in Visual Studio 2017, for .NET Core, ASP.NET Core and .NET Standard. You can see them located in the new project dialog, displayed in the following screenshot.

new-project

 

Use the ASP.NET Core templates to build web sites and services, the .NET Core template for tools or .NET Core experimentation, and the .NET Standard (Class Library) template for libraries that you want to share across multiple application types (for example, .NET Core, .NET Framework and Xamarin). There are also platform-specific library templates (for example, .NET Core (Class Library) and .NET Framework (Class Library)) that let you take advantage of APIs available in those respective platforms but not in .NET Standard.
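
For illustration, the .NET Standard (Class Library) template produces a project file along these lines (the exact TargetFramework value depends on the .NET Standard version you pick; this is a minimal sketch):

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- Targeting .NET Standard lets the library be consumed from .NET Core, .NET Framework and Xamarin. -->
    <TargetFramework>netstandard1.4</TargetFramework>
  </PropertyGroup>
</Project>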

The ASP.NET Core “new project” experience includes a version picker, making it easy to select the version of ASP.NET Core that you want. The selection controls both the .NET Core target framework and ASP.NET Core package versions. See the experience in the following screenshot.

new-aspnetcore-project

Editing csproj files in Visual Studio

With .NET Core projects, you can edit project files “live” while the project is loaded. This option is available by right-clicking on the project file and selecting Edit [project-name].csproj. You can add or remove package references and change other aspects of the project file. It feels like magic compared to the existing Visual Studio “Unload Project” experience. See the new experience in the following screenshot.

edit-project-file
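
As a rough sketch, adding a package reference while the project stays loaded is just an ordinary edit like the following (the package name and version are illustrative):

<ItemGroup>
  <!-- Add or remove PackageReference items directly; no need to unload the project. -->
  <PackageReference Include="Newtonsoft.Json" Version="9.0.1" />
</ItemGroup>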

.NET Standard Project-to-Project References

.NET Standard libraries are a new project type that you can use in nearly all .NET project types. They are the replacement for and a big step forward over Portable Class Libraries. You can now reference .NET Standard projects and NuGet packages from .NET Framework, Xamarin and Universal Windows Apps. The image below shows a .NET Framework console app with a project dependency on a .NET Standard project, which has dependencies on two popular NuGet packages.

netstandard-p2p

Migrating project.json Projects to csproj

We’re now encouraging everyone to migrate to MSBuild and csproj from project.json. As I stated above, we will not be supporting any of the new .NET Core tools in Visual Studio 2015. We also won’t be updating the Visual Studio 2015 experience.

There are two experiences for migrating project.json projects. You can migrate project.json files on the command-line, with the dotnet migrate command. The command will produce a .csproj project file that you can use with the latest .NET Core tools or open in Visual Studio 2017.
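
For example, run from the directory that contains the project.json file, the flow looks roughly like this (the path is illustrative):

C:\myapp>dotnet migrate
C:\myapp>dotnet restore
C:\myapp>dotnet build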

You can also open .xproj files in Visual Studio 2017. Visual Studio will migrate your .xproj files for you, producing a .csproj file that you can use going forward for use with both Visual Studio 2017 and with the latest tools at the command-line.

Visual Studio for Mac

Visual Studio for Mac is an IDE for mobile-first, cloud-first workloads with support for building iOS, Android, and Mac apps in C# and F# with Xamarin, as well as web and server apps with .NET Core and ASP.NET Core. You’ll find the same Roslyn-powered compiler, IntelliSense code completion, and refactoring experience you would expect from a Visual Studio IDE. And, since Visual Studio for Mac uses the same MSBuild solution and project format as Visual Studio, developers working on Mac and Windows can share projects across Mac and Windows transparently.

Documentation

.NET Core documentation and ASP.NET Core documentation have been updated to document the .NET Core Tools 1.0 and Visual Studio 2017 experience.

We still have a lot of work to do to improve and expand the docs and will endeavor to do just that. Like the .NET Core product, the .NET Core docs are open source. Please don’t hesitate to file issues and improve the docs with a pull request.

What’s in the release

Let’s first define the release. We released the following today:

  • .NET Core Tools 1.0.0 – only ships in Visual Studio 2017
  • .NET Core Tools 1.0.1 – available in the SDK and via Docker SDK images
  • .NET Core Runtime 1.0.4 – available as a Runtime install or Docker image and in the .NET Core SDK
  • .NET Core Runtime 1.1.1 – available as a Runtime install or Docker image and in the .NET Core SDK

We didn’t intend to release two versions of the SDK on the same day. That would be silly! Instead, there is a good story. Support for the Fedora 24 and OpenSUSE 42.1 Linux distros was added in the .NET Core SDK 1.0.1 and is missing from the .NET Core SDK 1.0.0 release. The short version of the story is that we missed an internally-set date to update the 1.0.0 release for these two distros, so we were forced to create the 1.0.1 release in order to make Fedora and OpenSUSE developers happy.

We also updated both .NET Core 1.0 and 1.1, with patch releases. Both releases contain a variety of reliability fixes and no new features.

You can also read the .NET Core Tools 1.0 release notes to learn more about the changes.

Docker

The latest .NET Core runtime and tools are available in the following Docker SDK images:

  • 1.0.4-sdk
  • 1.0.4-sdk-nanoserver
  • 1.1.1-sdk
  • 1.1.1-sdk-nanoserver

You can also use the latest .NET Core runtime images, which are recommended for production deployments since they are much smaller than the SDK images:

  • 1.0.4-runtime
  • 1.0.4-runtime-nanoserver
  • 1.1.1-runtime
  • 1.1.1-runtime-nanoserver

We adopted changes to the Docker tag scheme we use at microsoft/dotnet. The “-projectjson” and “-msbuild” differentiation is now gone. The tags are significantly simpler, as you can see from the tags above.
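
For example, pulling an SDK image and a runtime image with the new tags looks like this:

docker pull microsoft/dotnet:1.1.1-sdk
docker pull microsoft/dotnet:1.0.4-runtime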

We have changed the tags multiple times over the last few months. These changes were required to properly represent the changing nature of the product. We believe that we will no longer need to make significant tag scheme changes going forward and that we are now at “steady state”. Thanks to all .NET Core Docker users for your patience!

As previously stated, we will no longer be producing any project.json-based Docker images.

Closing

Thanks to everyone who helped to get .NET Core Tools to 1.0. There are so many people to thank who helped the product get to this point, and we thank you for your patience as we iterated on our tools. We hope that you are happy with the product, specifically the 1.0 tools. Sincerely, thank you!

I looked back in our blog history and found the first mention of .NET Core and related tools on this blog. Enjoy that. Certainly, much of what I wrote there remains true. There are some DNX-isms that we’d still like to bring back, too.

.NET Core 1.x is just the start of this new platform. We’re very excited to see what you do with it. We’ll listen closely to your feedback and look forward to working closely with many of you on GitHub.

Again, thanks so much for using the product and for helping us build it.

Note: You can always find our released builds documented in release notes.

Binary Compatibility and Pain-free Upgrade: Why Moving to Visual Studio 2017 is almost “too easy”


Visual Studio 2017 is a major leap forward in terms of C++ functionality compared with VS 2015. We hope the new release will delight you in your day-to-day job as soon as you can upgrade.

This blog post focuses on the steps needed to upgrade from Visual Studio 2015 to 2017. As promised in our BUILD 2016 talk “6 reasons to move your C++ code to Visual Studio 2015” (click to jump to 1h:04), in this release our team made it distinctly easy to move your codebase to Visual Studio 2017. Here are the four reasons why.

Get MSVC 2015.3 toolset via the Visual Studio 2017 Installer

Just like in previous releases, the IDE supports project round-tripping: loading a C++ project that targets an older toolset inside Visual Studio 2017 does not change the project. This allows you to load it back in the previous IDE in case you need to go back, or if you still have teammates who have not fully upgraded to 2017.

In previous releases, you had to install both the latest and the old Visual Studio side by side for the latest IDE to be able to build your projects. Visual Studio 2017 now allows you to install the MSVC 2015 (v140) toolset directly, making it convenient to bootstrap a new machine as well as reducing the disk footprint of the installation by installing only the needed toolset and not the whole VS 2015 IDE.

installerplatform-toolset-selection

VC Runtime in MSVC 2017 is binary compatible with 2015

There are a lot of improvements in the C++ compilers and libraries in this release of Visual Studio that will entice you to move to Visual Studio 2017’s latest toolset (v141) – new standard conformance features, improved codegen, faster build throughput. You may be worried however that your third-party library dependencies are not ready for such a jump. We often hear from customers that this is the number one migration blocker to a new C++ toolset (whether they consume dependencies as binaries or even directly as source).

With the latest MSVC toolset however, you don’t need to worry about this at all. That’s simply because the latest VC Runtime is binary compatible with the VS 2015’s VC Runtime. In other words, if you have a library built with the v140 platform toolset, that binary and your code consuming it will continue to work even if you built your code with the v141 MSVC toolset.

Any binaries built with MSVC v141 toolset will link against the 140 version of the VC Runtime. The VCRedist is only backward compatible, so you will need to redistribute the latest VCRedist 140 available in VS 2017 with your app.

C:\src\ClockApp\Debug>dumpbin ClockApp.exe /IMPORTS | findstr .dll
mfc140ud.dll
KERNEL32.dll
USER32.dll
GDI32.dll
COMCTL32.dll
OLEAUT32.dll
gdiplus.dll
VCRUNTIME140D.dll
ucrtbased.dll

Hundreds of C++ libraries on Vcpkg are now available for Visual Studio 2017

If you haven’t heard about vcpkg yet, no worries – it’s an open-source project from Microsoft to help simplify the acquisition and building of open-source libraries on Windows. If you did use vcpkg with Visual Studio 2015 for one or more of your open-source dependencies, then you will be happy to learn that these libraries (close to 180 at the time of this writing) are now compiled using the MSVC v141 toolset and available for consumption in Visual Studio 2017 projects.

Because v141 is binary compatible with v140, all your existing packages will continue to work without recompilation; however, we do recommend recompiling when you can to enjoy the new compiler optimizations we’ve added to v141!
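
If you are setting up a new machine, acquiring a library through vcpkg is roughly the following sequence (sqlite3 is just an example package):

> git clone https://github.com/Microsoft/vcpkg
> cd vcpkg
> .\bootstrap-vcpkg.bat
> .\vcpkg install sqlite3
> .\vcpkg integrate install

The last command makes the installed libraries available to your Visual Studio projects.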

MSVC compiler version revs to 19.1 (from 19.0 in Visual Studio 2015)

Last but not least, the compiler part of the MSVC v141 toolset is revving only as a minor version in Visual Studio 2017. Note that we continue to make important progress towards C++ conformance as well as improved codegen. As these improvements are made, certain changes to make your code standards-conformant may be required on your side. All these changes are documented in our C++ conformance improvements in Visual Studio 2017 topic on docs.microsoft.com.

compiler-version
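
If you have code that keys off the compiler version, _MSC_VER moves from 1900 in Visual Studio 2015 to 1910 in Visual Studio 2017. A quick sketch of such a check:

#include <cstdio>

int main()
{
#if _MSC_VER >= 1910
    std::puts("Built with the MSVC v141 toolset (Visual Studio 2017) or later.");
#elif defined(_MSC_VER)
    std::puts("Built with an older MSVC toolset.");
#else
    std::puts("Not built with MSVC.");
#endif
    return 0;
}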

Call to action

Visual Studio 2017 comes with many new capabilities that we want you to take advantage of as soon as possible. That’s why we’ve made it probably “too easy” to migrate your projects from Visual Studio 2015. Try it today and let us know what you think!

Before you go, also check out the rest of our announcement posts for the VS 2017 launch event, download Visual Studio 2017, and share your feedback with us either in the comments below or on developercommunity.visualstudio.com.

Check for const correctness with the C++ Core Guidelines Checker


This blog post was written by Sunny Chatterjee and Andrew Pardoe

The C++ Core Guidelines focus on simple ways that you can improve the correctness and safety of your code. We introduced the C++ Core Guidelines Checkers to help automate enforcement of the C++ Core Guidelines in your code.

One of the easiest and most important changes you can make to your code is to mark immutable data as `const`. It’s not just us and the Core Guidelines who believe that: see this fantastic blog post from Bruce Dawson about the benefits of adding const to your code. (He also mentions that on MSVC removing const can make some code faster, but that reflects a compiler bug that we are fixing thanks to Bruce’s feedback.) Because const is so important, we’ve added a new C++ Core Guidelines checker for const correctness.

We created four new rules in the C++ Core Guidelines checker that cover the rules currently contained in the Constants and Immutability section of the C++ Core Guidelines. We didn’t add checks for all of the rules: we implemented a check for rule #2, “By default, make member functions const”, but removed it because it raised too many false positives on valid code; see below for details. Also, the tool will not warn that you could mark a stub function as const because we recognize that you’ll probably just remove the const later when you implement the function.

Const checker rules

Con.1: By default, make objects immutable

This rule states the general idea that we should mark objects as const unless we’re writing to them. We cover it through the more specific implementations of the subsequent rules in our checker.

Con.3: By default, pass pointers and references to consts

Our checker enforces this rule. You can pass a pointer or reference to a non-const object, but if you do so, the caller must assume the argument will be modified. If the function does not modify the object, the parameter should be marked const to make the intent explicit.

Advantages of this check
  • Makes the callee’s intention explicit to the caller about whether or not an argument will be modified.
  • Future modifications to the function body don’t change the expectations of the caller.

We use certain heuristics to reduce noise –

  • We don’t ask that arguments passed by value, or the pointer arguments themselves, be marked as const.
  • We don’t ask that unused arguments be marked as const since we don’t have enough information about their intent.
  • We don’t enforce this on virtual functions as the author may want to follow a specific derived behavior.
Examples
// Expect 26461: The input pointer argument b in function f7 can be marked as const
int f7(const int *a, int *b)
{
    return *a + *b;
}

struct S0
{
    virtual void m();
};

// Expect 26461 on 'p' but don't report on unnamed parameter.
S0 f8(int *p, int *)
{
    (p == nullptr);

    // Don't report on return UDT.
    return{};
}

Con.4: Use const to define objects with values that do not change after construction

Our checker enforces this rule. It’s similar to con.3, but applies to all variables and not just pointer or reference arguments. It helps prevent surprise from unexpected changes in object values.

Advantages of this check are pretty similar to con.3:
  • Makes it easier to reason about code when we know at the point of declaration whether an object is immutable.
  • Future modifications of the code cannot change the immutability of the object.

Like con.3, we use certain heuristics to reduce noise –

  • We avoid suggesting const usage on unused variables – they rarely add any value.
  • We don’t suggest marking the pointer or reference itself as const, since users mostly care about the data it points to.
Examples
int f5()
{
    // Expect 26496: Variable m is assigned only once, use const.
    int m = 5;
    const int a = 10;
    if (m > a)
        return m;
    return a;
}

Con.5: Use constexpr for values that can be computed at compile time and F.4: If a function may have to be evaluated at compile time, declare it constexpr

Our checker encourages programmers to declare functions that may have to be evaluated at compile time as constexpr.

Examples
// Expect 26497: could be marked constexpr if compile-time evaluation is desired
int f1(int a, int b)
{
    return a + b;
}

constexpr int f2(int a, int b)
{
    return a + b;
}

void f3()
{
   // Compile-time evaluation
    constexpr int m = f2(10, 20);
    
    // Expect 26498: This function call f2 can use constexpr if compile-time evaluation is desired.
    const int m2 = f2(10, 20);
}

Rule 2: the one we didn’t include

Con.2: By default, make member functions const

In the initial prototype, we included this check. However, after running this check on some real-world code, we decided to remove it from the shipping version of the checker. We didn’t want programmers to mark their member functions as const when they were logically non-const. For the intrepid, there’s a good discussion of logical and physical constness on the isocpp.org web page: https://isocpp.org/wiki/faq/const-correctness#logical-vs-physical-state.

Here’s an example where the member function is logically non-const. You could mark the member function bar as const, e.g., void bar() const { *data_ = GetData(); }. While it doesn’t alter the pointer data_ itself, it does alter the memory pointed to by data_. Thus this function is not logically const.

class Test
{
public:
    // This method should be marked “const” since it doesn’t change the logical state of the object.
    MyData foo() const { return *data_; } 
    
    // This method shouldn’t be blindly marked as “const”. It doesn’t alter the pointer data_ itself.
    // However, it alters the state of memory pointed-to by it.
    void bar() const { *data_ = GetData(); }

private:
    // data_ is a pointer to a “MyData” object
    MyData *data_;
};

 

Send us your feedback!

As always, we welcome your feedback. For problems, let us know via the Report a Problem option, either from the installer or the Visual Studio IDE itself. For suggestions, let us know through UserVoice. And you can always reach us through e-mail at cppcorecheck@microsoft.com.

C++ Code Analysis improvements in Visual Studio 2017 RTM


This blog post was written by Sunny Chatterjee and Andrew Pardoe

The Visual Studio 2017 RTM release includes the C++ Core Guidelines Checkers as part of the Code Analysis tools for C/C++. We have gotten a ton of useful feedback on the early previews of these checks from our external customers. Thank you for engaging with us and giving us great feedback. This feedback helped us improve the quality of the final released version of the C++ Core Guidelines checks. Some of these improvements are explained in detail in this blog post about const correctness.

Besides shipping the C++ Core Guidelines checker, we also fixed more than 150 bugs in our core analysis engine. All of these fixes are available in the Visual Studio 2017 RTM. As a result, developers should expect to see accuracy improvements in C++ code analysis. Download Visual Studio 2017 today and let us know what you think of the improvements to code analysis!

Here are some notable fixes that were frequently reported. These fixes were made as a result of direct external feedback.

  1. False positive during dereferencing null-pointer checks (C6011)
    1. https://connect.microsoft.com/VisualStudio/feedback/details/1645136/c6011-occurs-const-cast-to-const-members-after-if-statement
    2. https://connect.microsoft.com/VisualStudio/feedback/details/1981990/inappropriate-analyzer-warning-when-casting-to-reference-in-constructor
    3. http://connect.microsoft.com/VisualStudio/feedback/details/2556936/static-analysis-c6011-warning-false-positive-in-short-circuited-conditionals
    4. https://connect.microsoft.com/VisualStudio/feedback/details/2750342/static-analysis-false-positive-when-using-a-bracketed-ternary-operator
    5. https://connect.microsoft.com/VisualStudio/feedback/details/3078125/false-positive-dereferencing-null-pointer-warning-when-taking-a-named-reference
    6. https://connect.microsoft.com/VisualStudio/feedback/details/3082362/static-analysis-false-positive-when-comparing-ptr-nullptr-vs-simply-ptr
  2. False positive during uninitialized memory checks (C6001)
    1. http://connect.microsoft.com/VisualStudio/feedback/details/1858404/false-positive-in-c-static-analysis-c6001
    2. https://connect.microsoft.com/VisualStudio/feedback/details/2607792/erroneous-report-from-sal
  3. False positive around inconsistent annotation checking (C28252 and C28253)
    1. http://connect.microsoft.com/VisualStudio/feedback/details/2053524/wsutil-compiler-version-1-0095-creates-a-file-that-triggers-warnings-c28252-and-c28253-even-when-compiled-with-w0
  4. False positive during annotation parsing (C28285)
    1. http://connect.microsoft.com/VisualStudio/feedback/details/2358718/sal-analysis-warning-c28285-when-using-local-static-variables
  5. False positive during strict type match checking (C28039)
    1. https://connect.microsoft.com/VisualStudio/feedback/details/2573764/sal-false-positive-on-strict-type-match
  6. False positive when checking for local vs. global declarations on enum classes (C6244)
    1. https://connect.microsoft.com/VisualStudio/feedback/details/3101212/incorrect-static-analysis-warning-of-enum-class-enumerators-hiding-other-declarations
  7. MSBuild error MSB4018 during code analysis runs: The “MergeNativeCodeAnalysis” task failed unexpectedly
    1. https://connect.microsoft.com/VisualStudio/feedback/details/3113987/error-msb4018-the-mergenativecodeanalysis-task-failed-unexpectedly

Send us your feedback!

We hope that the C++ Code Analysis tools in Visual Studio 2017 help improve your code and make you more productive. We’d like to thank you all and as always, we welcome your feedback. Please tell us what you like and dislike about our current toolset and what you’d like to see in future releases.

For problems, let us know via the Report a Problem option, either from the installer or the Visual Studio IDE itself. And you can always reach us through e-mail at cppcorecheck@microsoft.com.

Visual Studio 2017 for C++ developers – you will love it


Here on the C++ product team at Microsoft, our mission is to make the lives of every C++ developer on the planet better. We try to do that via various avenues, including but not limited to,

  1. by enthusiastically participating with the C++ Standards committee to help the language itself be better for every C++ developer in the world,
  2. by providing the Microsoft Visual C++ (MSVC) Compiler, which we aim to be the best compiler choice on Windows for targeting Windows,
  3. by continuing to enhance the C++ extension for Visual Studio Code, which is useful to all C++ developers regardless of the platform they do their development on, and finally,
  4. by improving Visual Studio, which is the most fully-featured IDE on the planet.

It is that last investment, Visual Studio, that I want to focus on in this blog post. Specifically, we’ll focus on the latest release, Visual Studio 2017, which we are confident you are going to love! We have blogged previously about the value in VS2017, so this post will just be a summary pointing to older posts we wrote that are now all updated for the final RTM release. First, we’ll summarize the new installation experience, then how it is pain-free to upgrade, then the new capability to open any folder of code (including CMake-based solutions), then how Visual Studio is for everyone, and finally what is new for productivity and performance.

Faster installation and your hard disk will thank you

When you install VS2017, you’ll encounter a new screen that organizes your installation options by what we call “Workloads”, so you can pick the one(s) that you care about while ignoring the others. The workloads most relevant to C++ developers are: Universal Windows Platform development, Desktop Development with C++, Game development with C++, Mobile development with C++, and Linux development with C++.

This is exciting because it makes it easy for you to pick exactly what you want to have installed, without the bits that you don’t need, e.g. no ASP.NET bits if you are not also a web developer. The obvious benefits are that you end up with less visual noise while using the product, less use of your disk space, and a faster installation! For example, if you only pick the Linux development with C++ workload, you can get everything installed in under 6 minutes on a typical developer machine. The Desktop development with C++ workload, which most Visual Studio C++ users use today, takes less than 12 minutes. It is worth mentioning that what you are seeing is not only a reorganization at the UI level, but a complete rewrite of our installer.

vc2017_installing

Pain-free upgrade

Before joining Microsoft, when I was a customer, I was always excited about a new Visual Studio version, but also apprehensive about “upgrading” my projects and dealing with a few new errors/warnings on my codebase that worked fine in the “older” Visual Studio.

If you are currently using VS2013 or older, and you initially want everything to continue working as it was, there is good news for you: just use the older compiler and libraries toolset that is already on your disk with the latest Visual Studio. If you are already using VS2015, even on a clean machine (instead of side-by-side), you’ll notice in the new VS2017 acquisition experience described in the previous section that there is an option to install just the toolset that shipped with VS2015 (rather than the whole VS2015 IDE), so that you can continue using the older toolset while enjoying the benefits of the latest VS2017 IDE!

Of course, one of the many reasons you are moving to VS2017 is to use the latest toolset, with its build throughput improvements, faster generated code, and the standards conformance goodness of the latest compiler and libraries. So you will upgrade to the latest toolset, which, by the way, is binary compatible with the one that ships with VS2015. You can further take advantage of the standard version compiler switches and the /permissive- switch, which help you ease into conformance at your own pace. For any language conformance changes you encounter, you’ll find we put significant effort into making our documentation great for that eventuality.
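
For instance, a command line that opts into the latest draft standard and conformance mode might look like this (the source file name is hypothetical):

cl /std:c++latest /permissive- /W4 /EHsc main.cpp

You can start with /std:c++14 and without /permissive-, then tighten the settings as your code is ready.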

Finally, for any open source 3rd party libraries you depend on, our new package management solution vcpkg with its growing repository of libraries has your back. And if you have adopted that already for your VS2015 projects, you’ll find they automatically work when you move to VS2017.

vc2017_props

Just point Visual Studio to your code

In the previous section, we touched on upgrading projects where the assumption was that you already have Visual Studio projects, or more accurately MSBuild-based projects. What about those of you who use other C++ build systems, such as makefiles or CMake? Until this VS release, you needed to create VS projects and add your code to them, or somehow generate a VS project from your current build system. Now, you can just “Open Folder” with any code directly in Visual Studio 2017! Depending on your build system, you will automatically get some IntelliSense, navigation, building, and debugging capabilities… and you can improve those experiences by further configuring and adding information in JSON text files. For those of you who have chosen CMake, you will love the CMake support in Visual Studio: just point VS to your CMakeLists.txt files and enjoy.

vc2017_cmake

Use Visual Studio for all your projects and target platforms

When we talk to some of you, we know you keep an older version of Visual Studio side by side with the latest. As we already established earlier in this post, there should be no reason for you to use the old IDE: just use the older toolset in that project with the latest Visual Studio 2017 IDE.

Some of you use VS for one project, but not for your other projects. In the past, there were many valid reasons for that, such as you didn’t want to convert your nmake-based (or other non-MSBuild-based) codebase to MSBuild; as established earlier, now you can just point Visual Studio 2017 to any folder of C++ code and work with it.

Another reason we hear is that folks use Visual Studio to target Windows, and other IDEs to target other platforms. Again, there is no need for that now. As touched on earlier when talking about workloads, from Visual Studio you can target Android and iOS and Linux. Implicit in that statement is that you should also not be thinking about Visual Studio as the IDE that only pairs with the Microsoft compiler, instead you can use any C++ compiler you want with Visual Studio.

Some of you may prefer using editors instead of IDEs, and that is fine, check out the Visual Studio Code editor. And when you need some heavy debugging, you can easily switch to Visual Studio for that purpose (by launching the EXE as a project or attaching to the running process) and then switch back to your editor of choice.

In short, if you have chosen Windows as your development environment, regardless of what platform you target or what kind of project you are working on, Visual Studio 2017 is there to support you as a C++ developer.

vc2017_workloads

Be more productive than ever

The blog post until now has been essentially about how fast and easy it is to install the product, and how many options you have for getting your codebase into Visual Studio and making sure it builds. You are now ready to do real development, and for many of you, productivity is the main reason you use Visual Studio. It saves you a lot of time during your everyday development in what we call the tight inner loop of editing, navigating and debugging code. If you are not on VS2015 yet, the number of productivity features we added to that release will have you drooling, and VS2017 takes that even further.

Visual Studio 2017 includes enhancements to existing features and new features such as: Find All References re-designed for larger searches, Introducing Go To the successor to Navigate To, C++ IntelliSense Improvements – Predictive IntelliSense & Filtering, Bing Developer Assistant with C++ support, C++ Edit and Continue improvements, enhancements to the Memory diag tool, enhancements to debug visualizers, the new Exception Helper, Run to Click, Attach to Process Filter and Reattach, .editorconfig, and new Git features. You can also learn about more VS productivity improvements beyond just for C++ code.

Beyond that kind of productivity, the team focused on the fundamentals of performance in the IDE. When you build during the inner loop, where all you want is for the build to complete quickly after the edits you have just made so that you can run, test, and debug your code, the linker will save you time with 2x to 4x faster links thanks to the on-by-default and improved fastlink option (and watch the ch9 video). Beyond build throughput, you’ll find that VS starts up faster, loads solutions faster (video), IntelliSense is faster, and working in the IDE will generally feel faster.

vc2017_speedex

 

Smile and take credit

And last but not least, you helped us build this release with your suggestions and bug reports – thank you! In the last year, we have completed 37 C++ UserVoice items, fixed 417 bugs reported through Connect, and fixed an additional 352 feedback items. Please keep the reports coming, you can report any issues you have with Visual Studio by going to Help > Send Feedback > Report A Problem, and we take your comments very seriously – thank you again.

vc2017_feedback

In Closing

We hope you’ll love using this release as much as we liked producing it. As you can see there are so many new capabilities to take advantage of, and the goal of this summary blog post is to collect all the relevant links in one place for those of you that haven’t been following in the past year – the real content is behind the links, so scroll back up and get clicking folks.

The final push to GA Azure AD in new Azure Portal: We need your help!


Howdy folks,

Last September we shared the first preview of the new administration experience for Azure Active Directory in the new Azure portal. Since then, we’ve added lots of new functionality, including reporting, app management, conditional access, B2B, and licensing.

Many of you are using the new experience regularly; in fact, over half a million of you are using it, from almost every country in the world, with usage increasing by about 25% each month. We appreciate all your positive feedback, and we love the constructive feedback that’s helped us make an even stronger product. But there are still a LOT of you using the old portal.

Late last week we turned on another set of feature updates, and the new experience now has all of the features identity admins frequently use. With that update, we’ve entered our final push to GA the UX in the next ~60 days.

And that’s where we need your help: we need everyone to move over to using the new portal for production tasks so we can uncover any lingering last-minute issues.

http://portal.azure.com

What to expect

We took the opportunity of redesigning this experience to optimize some of our features, so you might not immediately recognize everything in the new portal. For example, since reporting is a key part of the value of Azure AD, we’ve made activity information more accessible and powerful. We’ve written a helpful article to help you transition to the new model.

There are other differences, too. Some functionality that was part of Azure AD in the classic portal will be integrated differently in the future. Azure Rights Management Services has matured into Azure Information Protection. We’ve previously shared the plans for Access Control Namespaces.

We also have a few features we’re still transitioning: Azure Active Directory Domain Services, MFA provider management, schema editing for provisioned apps, and a few reports including enterprise state roaming status, invitation summary, unlicensed usage, and MIM hybrid reports.

Let us know what you think!

Over the next month or so, as we work to make Azure Active Directory generally available in the new Azure portal, we’ll be completing the transition of the last few features, ironing out some usability issues, fixing any bugs we find, and responding to your feedback. But even when we GA, we’re not going to stop. We’ll continue working to make the experience of administering Azure Active Directory richer, more streamlined, and more efficient, and we appreciate your help. Send us your feedback in the ‘Admin Portal’ section of our feedback forum.

Best Regards,

Alex Simons (Twitter: @Alex_A_Simons)

Director of Program Management

Microsoft Identity Division

Announcing F# 4.1 and the Visual F# Tools for Visual Studio 2017



Last year, we announced F# 4.1 via a Peek in F# 4.1. As a recap, we mentioned:

  • Alpha support for .NET Core and .NET Standard
  • Language features for F# 4.1
  • A big update to the Visual F# tools for Visual Studio 2017 (called Visual Studio "15" at the time)

We’re tremendously excited to share updates in those areas.

.NET Core and .NET Standard Support for F# 4.1

Corresponding with the release of Visual Studio 2017 is preview support for using F# on .NET Core 1.0-1.1 and .NET Standard 1.6. Although this is in preview, certain pieces are RC-quality. Specifically, the following are in RC:

  • FSharp.Core as a .NET Standard NuGet package
  • The F# compiler running on .NET Core as a .NET Core console application
  • F# support in the .NET CLI and the .NET Core SDK

However, the following are not supported at this time, but will be supported in a future Visual Studio 2017 update:

  • F# support for .NET Core Visual Studio tooling
  • F# Interactive (FSI) running on .NET Core as a .NET Core console application

That means you can create an F# application on .NET Core via dotnet new in the .NET CLI, have it run on all platforms, and create NuGet packages based on .NET Standard 1.6 with F# today. Doing so is quite easy:

  1. Install .NET Core either through Visual Studio 2017 (Windows) or the platform installer for your OS.

  2. Create a new folder with a meaningful name.

  3. Open a command line in that folder and invoke dotnet new.

    dotnet new console -lang f#

  4. Use dotnet run to compile and run your application.

    dotnet run

  5. Use dotnet publish to create a publish directory with your app and its dependencies.

    dotnet publish -o publish-directory

  6. You can run the published .dll file with the dotnet command:

    dotnet path-to-dll/MyApp.dll

There are two important things to note about using F# on .NET Core today:

  1. Type Providers and fully-fledged Code Quotation compilers are not supported.
  2. Excluding (1), all other F# code runs on .NET Core 1.1 on all platforms.

.NET Core and .NET Standard are both moving towards a 2.0 release scheduled for later this year. The most significant aspect of this is bringing back the large number of APIs that were not available in 1.0, along with reverting breaking changes to certain APIs such as Reflection. This will unblock support for Type Providers and Code Quotations, but there is still an open question about System.Reflection.Emit, which is needed to support Generative Type Providers and fully-fledged Code Quotation evaluators/compilers.

We are also aiming to land full FSI support and Visual Studio tooling support for F# on .NET Core in the .NET Standard/.NET Core 2.0 time frame.

To learn more about Type Provider support, see this issue. To learn more about FSI on .NET Core, see this RFC. To track F# support on the Roslyn Project System, see this Pull Request.

Lastly, we would like to extend a very special thanks to Enrico Sada for building out support for F# in the .NET CLI and .NET Core SDK since the early days of .NET Core. This support was built entirely by Enrico and other members of the F# community in partnership with Microsoft, which is a testament to how incredible the F# OSS community is.

F# 4.1 is Released

In the previous F# 4.1 blog post, we described and provided examples of all of the language features shipping in F# 4.1. We’re really excited about F# 4.1 because, in addition to being driven by a community RFC process, it had many community contributions. To recap, here are those features:

  • Struct tuples which inter-operate with C# tuples
  • Struct annotations for Records (by Will Smith)
  • Struct annotations for Single-case Discriminated Unions
  • fixed keyword support
  • Underscores in numeric literals (by Avi Avni)
  • Caller info argument attributes (by Lincoln Atkinson and Avi Avni)
  • Result type and some basic Result functions (by Oskar Gewalli)
  • Mutually referential types and modules within the same file
  • Implicit "Module" syntax on modules which share the same name as a type
  • Byref returns, which support consuming C# ref-returning methods
  • Error message improvements (by Steffen Forkmann, Isaac Abraham, Libo Zeng, Gauthier Segay, Richard Minerich, and others)

Since then, the following have also shipped with F# 4.1:

  • IReadOnlyCollection<‘T> implemented in list<‘T>
  • Optional and DefaultParameterValue attribute support
  • Additional Option module functions
  • Statically Resolved Type Parameter improvements
  • Compiler performance, FSharp.Core performance, and error message improvements
  • Struct annotations for multi-case Discriminated Unions

Here are some examples of those features:

IReadOnlyCollection<‘T> Implemented in list<‘T>

This is a very straightforward feature: F# lists now implement IReadOnlyCollection<'T>.
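
For example (a minimal sketch):

open System.Collections.Generic

let numbers = [ 1; 2; 3 ]

// An F# list can now be used wherever an IReadOnlyCollection<'T> is expected.
let readOnly : IReadOnlyCollection<int> = numbers :> IReadOnlyCollection<int>
printfn "Count = %d" readOnly.Count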

Optional and DefaultParameterValue Attribute Support

The Optional and DefaultParameterValue attributes are used in F# for C#/VB interop so that C#/VB callers can see arguments as optional. Prior to F# 4.1, the F# compiler did not compile DefaultParameterValue correctly. Additionally, F# was unable to consume arguments defined in F# assemblies with the Optional attribute. These are now possible with F# 4.1. Here’s an example:
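
A minimal sketch (the type and argument names are illustrative):

open System.Runtime.InteropServices

type Greeter() =
    // C# and VB callers see 'name' as an optional argument that defaults to "world".
    static member Greet([<Optional; DefaultParameterValue("world")>] name: string) =
        sprintf "Hello, %s!" name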

Additional Option Module Functions

New functions were added to the Option module in F# 4.1, providing a few different utilities.
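
For example, two of the new helpers, Option.defaultValue and Option.orElse, can be used like this (a minimal sketch):

let price : int option = Some 42

// Unwrap the option with a fallback value instead of pattern matching.
let total = price |> Option.defaultValue 0

// Keep the first option that actually has a value.
let chosen = None |> Option.orElse (Some 10)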

Statically Resolved Type Parameter Improvements

Performance improvements and two major bug fixes for Statically Resolved Type Parameters (SRTP) were introduced. Prior to F# 4.1, there was a case where some usages of SRTP were capable of compiling, but shouldn’t have been. Additionally, it was possible to infer type names in SRTP syntax, but not actually specify the type name.

Compiler Performance, FSharp.Core Performance, and Error Message Improvements

Lastly, the community involvement in compiler performance, FSharp.Core performance, and error message improvements has kept going since the initial F# 4.1 post. For example, improvements were made to the name predictions that appear in error messages.

This effort by the community, particularly in predicting correct names for things, is crucial for F# adoption. We’re extremely excited about this for F# 4.1 and look forward to future improvements. We’re incredibly thankful for the efforts of the F# community here.

Struct Single-case Discriminated Unions

Annotating single-case Discriminated Unions is fully supported and stable with F# 4.1. Here’s an example:
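
A minimal sketch (the type name is illustrative):

[<Struct>]
type UserId = UserId of int

// Pattern matching works just as it does for reference-type unions.
let rawValue (UserId id) = id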

This allows you to do the same kind of typesafe domain modeling with Discriminated Unions you’ve done in the past, but this time with structs rather than reference types as the backing data type.

Struct Multicase Discriminated Unions

Annotating multi-case Discriminated Unions as structs is now also supported, with the following design caveats:

  • Each case must have a unique name
  • The type cannot be recursively defined

This feature is currently buggy and should not be used in production code. You can track the progress with this issue.

Using F# 4.1

You can use F# 4.1 with the following tools:

  • .NET Core and the .NET CLI
  • Visual Studio 2017
  • Visual Studio for Mac
  • Visual Studio Code with the Ionide plugin suite

If you’re interested in the design of the features in F# 4.1, you can see each RFC for F# 4.1 here.

Updates to the Visual F# Tools in Visual Studio 2017

Visual Studio 2017 ships with significant changes in the Visual F# Tools. We’re excited to share what’s new and what to look forward to.

What’s New

In short, the Visual F# Tools now use Roslyn Workspaces, the same IDE infrastructure that C# and Visual Basic use. This has enabled the F# community to provide many new features, leading to a much better UI experience that is closer to parity with C# and Visual Basic. Many of these features were ported over from the Visual F# Power Tools (VFPT for short), and are now "in-box". They include:

  • Find All References

https://msdnshared.blob.core.windows.net/media/2017/03/find-all-refs.png

  • Navigation Bar Support

https://msdnshared.blob.core.windows.net/media/2017/03/navbar.png

  • Syntax and Type Colorization in Hovers and Signature Help
  • IntelliSense Filters and Glyph Improvements
  • Fuzzy matching on names in IntelliSense

https://msdnshared.blob.core.windows.net/media/2017/03/colored-hovers.png

https://msdnshared.blob.core.windows.net/media/2017/03/intellisense2.png

https://msdnshared.blob.core.windows.net/media/2017/03/colorized-sig-help.png

  • Better colorization in the editor
  • Code Indentation Improvements
  • Breakpoint Resolution Improvements
  • Go to Definition Improvements
  • The ability to trigger Lightbulbs for various code fixes


https://msdnshared.blob.core.windows.net/media/2017/03/open-namespace.png

  • Semantic highlighting of tokens
  • Support in the new "Go to All" feature (ctrl+T)

https://msdnshared.blob.core.windows.net/media/2017/03/go-to-all.png

  • Roslyn-style Inline Rename
  • Many more improvements to other features

Note: Inline Rename is temporarily disabled due to a last-minute bug which caused the IDE to crash. The fix has already been completed and the feature will work in a future VS 2017 update.

To see the full list of features added, see the F# section of the Visual Studio 2017 Release Notes. As you can see, there were a tremendous number of fixes and new features added by the community. This is new for the Visual F# Tools, and something we hope to see more of in the future.

Known Issues

There are a number of known issues which are either already fixed and pending a release, or are on the immediate roadmap for getting fixed. You can see this list here. Our highest priority is to address these issues. Items which are closed have already been completed.

See VS 2017 Status and Roadmap for F# and the Visual F# Tools for more context on what we have planned.

Model for Updates

Because of the rapid nature with which the Visual F# Tools are evolving, we are introducing a new model for releases of the Visual F# Tools:

Stable and thoroughly tested bits will ship in-box with Visual Studio 2017. Updates here will correlate with updates to Visual Studio 2017 itself.

There will be nightly, signed VSIXs for those who wish to try out the latest advancements in tooling. Think of these as builds of the latest master branch for Visual F#, signed so that you don’t have to do anything crazy to get them working correctly on your machine. These will not be as well-tested as those which will update alongside Visual Studio 2017, but will come with new features.

We’re still working out the final kinks in getting nightly signed builds working. We’ll draft a new blog post with instructions on where to get this VSIX, and how to install it, when we finish up testing to make sure everything works.

Historical Context

Prior to Visual Studio 2017, tooling for F# effectively lived in its own universe. Over time, especially with the release of Visual Studio 2015 and Roslyn, this had a cost in the number of features available "in-box". The VFPT filled some gaps when compared to C# tooling features, but not everybody used the extension, and as the UI for C# and Visual Basic evolved, F# tooling was not able to take advantage of those UI advancements. Additionally, adding and improving features in the Visual F# Tools or VFPT required knowledge of various Visual Studio SDK APIs. The barrier to entry for open source developers to add value was very high.

Late last year, we "flipped the switch" on our work to sit the F# language service atop Roslyn Workspaces. As a result, we introduced a vast number of bugs and regressions. The F# community worked tirelessly to address those issues, and along the way, added a significant amount of new features and improvements to existing features.

The work also hasn’t stopped, nor has it slowed. The F# community has been relentlessly adding more features and improvements to existing features at a pace that has been almost impossible to keep up with. We’re extremely excited about this release, and even more so about forthcoming updates which will address multiple regressions, make vast memory usage improvements, and introduce even more new features. The Visual F# Tools are evolving at a pace that they have never evolved before, all thanks to the amazing F# community.

A Thank You to the F# Community

Many people from across the F# community have contributed to F# 4.1 and the Visual F# Tools for Visual Studio 2017. We acknowledge all of them and extend our warmest thanks to them and their immense efforts in making F# so amazing.

The focus of this post is on F# 4.1 and the Visual F# Tools for Visual Studio 2017. These are but part of the overall F# ecosystem, which includes components such as FSharp.Data, Ionide, the Xamarin tooling for F#, and hundreds of F# community libraries and tools. While it is difficult to thank everyone by name, we would like to mention some specific individuals who have made many contributions to the F# 4.1 language design and the Visual F# Tools. Please note this doesn’t include contributions to the other components in the F# ecosystem. We have also inevitably forgotten some specific people, so please feel free to mention them with a thank you in the comments below, or by linking to your favorite pull requests and comments.

Steffen Forkmann has contributed over 100 Pull Requests in the past year, spearheading the F# 4.1 error message improvements. He has improved compiler performance, cleaned up code, increased our test coverage, and added Lightbulb suggestions in Visual Studio 2017. Moreover, he’s been a leader in the F# open source community, leading development of tools like FAKE and Paket to create a fantastic open source ecosystem. To those who know him, it should be no surprise that he’s been such a prolific contributor.

Vasily Kirichenko has contributed the majority of new features in the Visual Studio 2017 tools. Just a casual glance at the open Pull Requests in the VisualFSharp repository will make it clear that Vasily is a truly special person in the F# community. He continues to implement entire features end-to-end, diving deep into the codebase to improve complex and incredibly difficult code. Frankly, we’re lucky to have him as a part of the F# community.

Enrico Sada is another prolific contributor who, as previously mentioned, created the .NET Core SDK bindings and .NET CLI support for F#. He has partnered with us and with the teams who own those areas to succeed there. But that’s not all. He made many significant contributions on multiple occasions to improve our test suite and CI system, which is the kind of important work that few people are willing to do. And of course, anyone who works with him knows that he’s one of the kindest individuals on the planet.

Anh-Dung Phan has been an F# developer and active tooling and open source developer for a long time. As the original author of the Visual F# Power Tools extension (VFPT), he is incredibly knowledgeable about F# tooling in Visual Studio. As the Visual F# Tools moved from RC to RTW, he fixed endless bugs and ported over features from VFPT, improving those features in the process. One such feature is the particularly involved Implement Interface analyzer and code fix, which any .NET developer knows is incredibly useful when working with interfaces. We’re incredibly happy to have accepted his many contributions.

Libo Zeng has been quite active in improving the F# compiler codebase. Libo has worked with the community in compiler error message improvements, and he has contributed multiple performance improvements to FSharp.Core and the F# compiler. Libo’s Pull Requests for performance improvements are always backed with benchmark results, making it obvious and easy to take his changes.

Avi Avni is an active member of the F# community and a mentor through the F# Software Foundation’s mentorship program. He also contributed the Underscores in Numeric Literals feature for F# 4.1 end-to-end. Anyone who has attempted to contribute compiler features will know that this is no small task. He also worked with Lincoln Atkinson to implement Caller Info Argument Attributes in F# 4.1. He has also been working on an even bigger feature to recognize when a recursive function is not tail-recursive, which may end up in F# 4.2. This advanced feature will surely take time to get right, but we’re excited that he’s so involved here, and we look forward to what can be done in the future.

Will Smith is another active member of the F# community who contributed the Struct Records feature for F# 4.1 in its entirety. Will technically works for Microsoft now, but he contributed the feature as an F# community member in his own time. He is an incredibly kind, intelligent, and helpful member of the F# community and demonstrated amazing patience as his language feature went through rounds of feedback from us over a long period of time. We’re super happy to have his contributions, and we’re sure the greater F# community is as well.

Kurt Schelfthout, as mentioned above, also implemented an F# 4.1 language feature from end-to-end. He has also been active in F# language design and managing the F# Language Suggestions and F# Language RFC repositories.

Chet Husk has been active in F# language evolution and performed the large majority of the work in moving F# Language Suggestions from UserVoice over to GitHub. This was a somewhat lengthy process, but because of his work, all of F# evolution is now done on GitHub.

Saul Rennison began contributing to the Visual F# Tools not too long ago, but his contributions have already had an impact. In addition to being active in code reviews and reporting issues, he contributed multiple fixes to the F# project system and MSBuild integration. He also contributed type colorization for FSI when run as a console app, giving us a taste of what may yet come for F# Interactive. We’re very excited that he’s so engaged in the Visual F# Tools.

Gustavo Leon made contributions deep in the F# compiler, including in the Constraint Solver (an area few dare to venture), to speed up compile times for code which makes heavy use of overload resolution. His work also improved Statically Resolved Type Parameters, and he was key in identifying tricky bugs there. He is also active in F# language evolution, contributing his ideas and feedback to F# language suggestions. His expertise is very much appreciated.

Jared Hester has been a member of the F# OSS community for quite some time, working on the Visual F# Power Tools and Ionide. In addition to extensive and detailed bug reports for IDE features, he contributed additions to Untyped Parse Tree traversal code which were necessary to implement IDE features. This kind of work is very difficult and detail-oriented, and we’re very grateful to have his contributions. He’s also very active in the F# language evolution process, providing numerous insights and feedback.

Marcus Griep has been one of the more active members of the F# OSS community in getting F# ecosystem support working on .NET Core and .NET Standard. This is crucial work which helps move us out of a "chicken and egg" situation. Additionally, he’s helped manage the F# Language Design and F# RFC repositories.

Gauthier Segay has been very active in the F# ecosystem, both in contributions and creating issues. There’s a good chance that you’ve seen him commenting on a bug somewhere. He was a contributor to the Visual F# Power Tools, and we hope to continue to see his enthusiasm for F# and the Visual F# Tools in the future.

Lastly, we would like to thank the F# Software Foundation for helping cultivate a growing F# community. Their mentorship program is one of their most exciting offerings in helping people learn F#, and the focus on community growth and empowerment has led to a surge in new members over the past two years. Anybody can join for free, so we encourage all F# developers and anyone interested in learning F# to join. By joining, you automatically get access to all the benefits of the FSSF, including the mentorship program. We’re grateful to partner with such a special organization.

Thank you!

– Visual F# Team


Helping new customers get oriented, keeping our content up-to-date

We’re always working to keep our content as fresh and accurate as possible.

We have some new content that we’d like to highlight and hopefully have you spread the word.

On-boarding content for newcomers

We recently wrote a set of overview topics to help orient visitors new to Team Services and TFS. These topics provide a framework for newbies to understand how our supported set of platforms, services, clients, and Marketplace extensions fit together to support software development teams.

You can view these topics here:

Collaborate content

We’ve recently introduced a new area to highlight content that spans features and scenarios that support team collaboration. This content supports team collaboration across the traditional hubs of Code, Work, Build & Release, and Test, which is why we’ve named this area Collaborate.

Under the Collaborate section, you’ll find content that supports sharing information, getting notifications, and supporting team chat. Here are a few of the new topics in this area:

Content updates

We also want to take this opportunity to point out the Content updates topic that we update with the release of new features every three weeks. This topic provides an index to release notes, blog posts, and content related to DevOps working with Team Services and TFS.

We started back in March 2016 and are now working our way through 2017.

We highlight topics that are brand new, usually covering new features or scenarios, as well as topics that have been significantly updated.

We appreciate your feedback and ratings

Your feedback and insights help us determine where to focus, whether to address missing scenarios or simply to correct a typo or error.

As you review a topic, please rate it and leave a note of what works well or what’s missing.

We thank you,

Kathryn

Gartner names Microsoft a leader in the Magic Quadrant for Data Management Solutions for Analytics (DMSA)

This post was authored by Rohan Kumar, General Manager, DS SQL Engineering.

We’re excited that Gartner has recognized Microsoft as a leader in the Magic Quadrant for Data Management Solutions for Analytics (DMSA). Gartner defines the DMSA as a system for storing, accessing, processing, and delivering data intended for one of the primary use cases that support analytics.[1] These use cases include supporting ongoing traditional, operational, logical, and context-independent data warehousing.[2] The DMSA thus represents an evolution from the traditional data warehousing approach. Although data warehousing continues to be a major use case, the DMSA encompasses new trends such as data lakes and context-independent data warehouses that enable data science use cases.[3]

magicQuadrantDMSA

At Microsoft we have been championing a similar evolution to make big data processing and analytics simpler and more accessible, so customers can transform data into intelligent action. We do this through SQL Server 2016 and the Cortana Intelligence Suite, which together offer a comprehensive portfolio of solutions for data warehousing, big data, and advanced analytics. SQL Server, as an example, gives organizations a breadth of capabilities to do analytics on-premises, including features like real-time operational analytics, in-memory columnstore, integration with Hadoop via PolyBase, in-database analytics with R Services built in, and fast time to market with reference architectures. In the cloud, we offer the Cortana Intelligence Suite, which includes SQL Data Warehouse, a truly elastic, scale-out data warehouse; HDInsight, a managed Hadoop service that runs Hortonworks Data Platform; Data Lake Analytics, an on-demand analytics job service; Data Lake Store, a no-limits data lake to power intelligent action; and DocumentDB, a global scale-out NoSQL service with <10 ms latency guarantees.

In providing customers with these solutions, our goal is to help them realize the full potential of their data and give them the ability to transform their business. As an example, Tangerine uses a data warehouse on-premises with Hadoop in the cloud so that it can query relational and nonrelational data to accelerate its time to insights. With such a solution, Tangerine is looking to transform the financial services industry by building a predictive, context-aware application that surfaces information based on the time of day and the customer’s location.

We’re excited that Gartner recognized both our ability to execute and the completeness of our vision by placing Microsoft in the Leaders quadrant of the Magic Quadrant for Data Management Solutions for Analytics. Gartner notes that leaders have been able to adapt rapidly to this changing market and have pursued all the primary use cases Gartner identified to support analytics.[4] The push for cloud has also affected the relative ratings among the leaders.[5] In the coming year, we will continue to focus on delivering the highest value to our customers and partners through innovations that make data warehousing, big data, and analytics even more accessible to transform data into intelligent action. You can read the full report, “Magic Quadrant for Data Management Solutions for Analytics,” here.

To learn more:


[1] Source: Gartner’s Magic Quadrant for Data Management Solutions for Analytics, 2017.
[2] Source: Gartner’s Magic Quadrant for Data Management Solutions for Analytics, 2017.
[3] Source: Gartner’s Magic Quadrant for Data Management Solutions for Analytics, 2017.
[4] Source: Gartner’s Magic Quadrant for Data Management Solutions for Analytics, 2017.
[5] Source: Gartner’s Magic Quadrant for Data Management Solutions for Analytics, 2017.

SONiC: The networking switch software that powers the Microsoft Global Cloud

Running one of the largest clouds in the world, Microsoft has gained a lot of insight into building and managing a global, high performance, highly available, and secure network. Experience has taught us that with hundreds of datacenters and tens of thousands of switches, we needed to:

  • Use best-of-breed switching hardware for the various tiers of the network.
  • Deploy new features without impacting end users.
  • Roll out updates securely and reliably across the fleet in hours instead of weeks.
  • Utilize cloud-scale deep telemetry and fully automated failure mitigation.
  • Enable our Software-Defined Networking software to easily control all hardware elements in the network using a unified structure to eliminate duplication and reduce failures.

To address these requirements, Microsoft pioneered Software for Open Networking in the Cloud (SONiC), a breakthrough for network switch operations and management. Microsoft open-sourced this innovation to the community, making it available on our SONiC GitHub Repository. SONiC is a uniquely extensible platform, with a large and growing ecosystem of hardware and software partners, that offers multiple switching platforms and various software components.

Switch Abstraction Interface (SAI) accelerates hardware innovation

SONiC is built on the Switch Abstraction Interface (SAI), which defines a standardized API. Network hardware vendors can use it to develop innovative hardware platforms that can achieve great speeds while keeping the programming interface to the ASIC (application-specific integrated circuit) consistent. Microsoft open sourced SAI in 2015. This approach enables operators to take advantage of the rapid innovation in silicon, CPU, power, port density, optics, and speed, while preserving their investment in one unified software solution across multiple platforms.

SDN

Figure 1. SONiC: one investment to unblock hardware innovation
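
To make the abstraction idea concrete, here is a minimal, purely illustrative C# sketch of what a hardware abstraction layer looks like conceptually: the switch software programs against one interface, and each ASIC gets its own implementation behind it. The real SAI is a C API with a much richer object model, so every name and type below is a hypothetical stand-in, not SAI itself.

    using System;
    using System.Collections.Generic;

    // Illustrative only: the real SAI is a C API with a far richer object model.
    // The point is the shape: one interface above, one implementation per ASIC below.
    public interface ISwitchAbstraction
    {
        void CreateRoute(string prefix, string nextHop);
        void RemoveRoute(string prefix);
    }

    // A stand-in for one vendor's ASIC; a real back end would call the vendor SDK here.
    public class DemoAsic : ISwitchAbstraction
    {
        private readonly Dictionary<string, string> _routes = new Dictionary<string, string>();

        public void CreateRoute(string prefix, string nextHop)
        {
            _routes[prefix] = nextHop;
            Console.WriteLine("Programmed " + prefix + " -> " + nextHop);
        }

        public void RemoveRoute(string prefix)
        {
            _routes.Remove(prefix);
            Console.WriteLine("Removed " + prefix);
        }
    }

    public static class Program
    {
        public static void Main()
        {
            // The switch software never talks to a specific ASIC, only to the abstraction.
            ISwitchAbstraction asic = new DemoAsic();
            asic.CreateRoute("10.0.0.0/24", "192.168.1.1");
            asic.RemoveRoute("10.0.0.0/24");
        }
    }

Swapping in a different ASIC then means writing a new implementation of the interface; nothing above it changes, which is the investment-preserving property the figure above illustrates.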

Modular design with containers accelerates software evolution

SONiC is the first solution to break monolithic switch software into multiple containerized components. SONiC enables fine-grained failure recovery and in-service upgrades with zero downtime. It does this in conjunction with Switch State Service (SWSS), a service that takes advantage of open source key-value pair stores to manage all switch state requirements and drives the switch toward its goal state. Instead of replacing the entire switch image for a bug fix, you can now upgrade the flawed container with the new code, including protocols such as Border Gateway Protocol (BGP), without data plane downtime. This capability is a key element in the serviceability and scalability of the SONiC platform.

Containerization also enables SONiC to be extremely extensible. At its core, SONiC is aimed at cloud networking scenarios, where simplicity and managing at scale are the highest priority. Operators can plug in new components, third-party, proprietary, or open sourced software, with minimum effort, and tailor SONiC to their specific scenarios.

Configuration and management tools

Figure 2. SONiC: plug and play extensibility
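
As a rough, hypothetical C# sketch of the goal-state idea behind SWSS: desired state lives in a key-value store, applied state is whatever has actually been programmed, and a reconcile step touches only the differences. SWSS itself is built on an open source key-value store and is far more sophisticated, so treat the structure and names here as assumptions for illustration only.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Illustrative sketch of goal-state reconciliation; not actual SONiC/SWSS code.
    public static class GoalStateDemo
    {
        // Desired state, e.g. written by the routing container; key = route prefix.
        private static readonly Dictionary<string, string> Desired = new Dictionary<string, string>();

        // State actually programmed into the ASIC.
        private static readonly Dictionary<string, string> Applied = new Dictionary<string, string>();

        // Drive the applied state toward the desired state, touching only the differences.
        private static void Reconcile()
        {
            foreach (var route in Desired)
            {
                string nextHop;
                if (!Applied.TryGetValue(route.Key, out nextHop) || nextHop != route.Value)
                {
                    Applied[route.Key] = route.Value;       // a real system programs the ASIC here
                    Console.WriteLine("Apply " + route.Key + " -> " + route.Value);
                }
            }

            foreach (var stale in Applied.Keys.Except(Desired.Keys).ToList())
            {
                Applied.Remove(stale);
                Console.WriteLine("Withdraw " + stale);
            }
        }

        public static void Main()
        {
            Desired["10.0.0.0/24"] = "192.168.1.1";
            Reconcile();                                    // programs only the new route
            Desired.Remove("10.0.0.0/24");
            Reconcile();                                    // withdraws only the stale route
        }
    }

Because only the difference between desired and applied state is ever touched, restarting or upgrading a single component does not require reprogramming the entire switch.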

Monitoring and diagnostic capabilities are also key for large-scale network management. Microsoft continuously innovates in areas such as early detection of failure, fault correlation, and automated recovery mechanisms without human intervention. These innovations, such as Netbouncer and Everflow, are all available in SONiC, and they represent the culmination of years of operations experience.

Rapidly growing ecosystem

SONiC and SAI have gained wide industry support over the last year. Most major network chip vendors are supporting SAI on their flagship ASICs:

The community is actively adding new extensions and advanced capabilities to SAI releases:

  • Broadcom, Marvell, Barefoot, and Microsoft are driving advanced monitoring and telemetry in SAI to enable deep visibility into the ASIC and powerful analytic capabilities.
  • Mellanox, Cavium, Dell, and Centec are contributing protocol extensions to SAI for richer protocol support and large-scale network scenarios; for example, MPLS, Enhanced ACL model, Bridge Model, L2/L3 Multicast, segment routing, and 802.1BR.
  • Dell and Metaswitch are bringing failure resiliency and performance to SAI by adding L3 fast reroute and BFD proposals.
  • The pipeline model driven by Mellanox and Broadcom, and the multi-NPU support driven by Dell, enrich the infrastructure that SAI and the network stack built on top of it can apply to.

At the Open Compute Project U.S. Summit 2017, we will demonstrate 100-gigabit switches from multiple switch hardware companies. SONiC is enabled on their latest and fastest SKUs. The platforms that support SONiC are:

With SONiC, the cloud community has choices—they can cherry-pick best-of-breed solutions. Partners are joining the ecosystem to make it richer:

  • Arista is offering containerized EOS components like EOS BGP to run on top of SONiC. The SONiC community now has easy access to Arista’s rich software suite of EOS.
  • Canonical enabled SONiC as a snap for Ubuntu. This enables MAAS to deploy SONiC to switches as well as to use SONiC when deploying servers. Unified network and server deployment is going to significantly improve the agility of operators.
  • Docker enabled using Swarm to manage the SONiC containers. With its simple and declarative service model, Swarm can manage and update SONiC at scale.
  • Mellanox is using SONiC to unleash the hardware-based packet generation capabilities in the Spectrum ASIC. This is a highly sought-after capability that will help with diagnosis and troubleshooting.

By working with the community and our partner ecosystem, we’re looking to revolutionize networking for today and into the future.

SONiC is fully open sourced on GitHub and is available to industrial collaborators, researchers, students, and innovators alike. With the SONiC containerized approach and software simulation tools, developers can experience the switch software used in Microsoft Azure, one of the world’s largest cloud platforms, and contribute components that will benefit millions of customers. SONiC will benefit the entire cloud community, and we’re very excited for the increasingly strong partner momentum behind the platform.

Enabling cloud workloads through innovations in Silicon

Today, I’ll be talking to a global community of attendees at the 2017 Open Compute Project (OCP) U.S. Summit about the exciting disruptions we see in the processor ecosystem, and how Microsoft is embracing the innovation created by these disruptions for the future of our cloud infrastructure.

The demand for cloud services continues to grow at a dramatic pace, and as a result we are always innovating and looking out for technology disruptions that can help us scale more rapidly. We see one such disruption taking shape in the silicon manufacturing industry. The economic slowdown of Moore’s Law and the tremendous scale of the high-end smartphone market have expanded the number of processor suppliers, leading to a “Cambrian” explosion of server options.

We’re announcing that we are driving innovation with ARM server processors for use in our datacenters. We have been working closely with multiple ARM server suppliers, including Qualcomm and Cavium, to optimize their silicon for our use. We have been running evaluations side by side with our production workloads, and what we see is quite compelling. The high Instruction Per Cycle (IPC) counts, high core and thread counts, connectivity options, and integration that we see across the ARM ecosystem are very exciting and continue to improve.

Also, due to the scale required for certain cloud services, i.e. the number of machines allocated to them, it becomes more economically feasible to optimize the hardware to the workload instead of the other way around, even if that means changing the Instruction Set Architecture (ISA).

When we looked at the variety of server options, ARM servers stood out for us for a number of reasons:

  1. There is a healthy ecosystem with multiple ARM server vendors, which ensures active development around technical capabilities such as core and thread counts, caches, instructions, connectivity options, and accelerators.
  2. There is an established developer and software ecosystem for ARM. We have seen ARM servers benefit from the high-end cell phone software stacks, and this established developer ecosystem has significantly helped Microsoft in porting its cloud software to ARM servers.
  3. We feel that ARM is well positioned for future ISA enhancements because its opcode sets are orthogonal. For example, with out-of-order execution running out of steam and with research looking at novel data-flow architectures, we feel that ARM designs are much more amenable to handling those new technologies without disrupting their installed software base.

We have been working closely with multiple ARM suppliers, including Qualcomm and Cavium, on optimizing their hardware for our datacenter needs. One of the biggest hurdles to enable ARM servers is the software. Rather than trying to port every one of our many software components, we looked at where ARM servers are applicable and where they provide value to us. We found that they provide the most value for our cloud services, specifically our internal cloud applications such as search and indexing, storage, databases, big data and machine learning. These workloads all benefit from high-throughput computing. 

To enable these cloud services, we’ve ported a version of Windows Server, for our internal use only, to run on ARM architecture. We have ported language runtime systems and middleware components, and we have ported and evaluated applications, often running these workloads side-by-side with production workloads.

During the OCP US Summit, Qualcomm, Cavium, and Microsoft will be demonstrating the version of Windows Server ported for our internal use running on ARM-based servers.

The Qualcomm demonstration will run on the Qualcomm Centriq 2400 ARM server processor, their recently announced 10nm, 48-core server processor with Qualcomm’s most advanced interfaces for memory, network, and peripherals.

Qualcomm Centriq 2400 ARM server processor

The demonstration with Cavium runs on their flagship 2nd generation 64-bit ThunderX2 ARMv8-A server processor SoCs for datacenter, cloud and high performance computing applications.

flagship 2nd generation 64-bit ThunderX2 ARMv8-A server processor SoCs for datacenter

Cavium (in collaboration with leading server supplier Inventec) and Qualcomm have each developed an Open Compute-based motherboard compatible with Microsoft’s Project Olympus, which allows us to seamlessly deploy these new servers in our datacenters.

We feel ARM servers represent a real opportunity, and some Microsoft cloud services already have future deployment plans on ARM servers. We are working with ARM Limited on design specifications and server standard requirements, and we are committed to collaborating with the community on open standards to advance ARM64 servers for cloud services applications.

You can read about our other announcements during the 2017 Open Compute Project Summit at this blog.

Ecosystem momentum positions Microsoft’s Project Olympus as de facto open compute standard

Last November we introduced Microsoft’s Project Olympus, our next-generation cloud hardware design and a new model for open source hardware development. Today, I’m excited to address the 2017 Open Compute Project (OCP) U.S. Summit to share how this first-of-its-kind open hardware development model has created a vibrant industry ecosystem for datacenter deployments across the globe in both cloud and enterprise.

Since opening our first datacenter in 1989, Microsoft has developed one of the world’s largest cloud infrastructures, with servers hosted in over 100 datacenters worldwide. When we joined OCP in 2014, we shared the same server and datacenter designs that power our own Azure hyper-scale cloud, so organizations of all sizes could take advantage of innovations to improve the performance, efficiency, power consumption, and costs of datacenters across the industry. As of today, 90% of the servers we procure are based on designs that we have contributed to OCP.

Over the past year, we collaborated with the OCP to introduce a new hardware development model under Project Olympus for community based open collaboration. By contributing cutting edge server hardware designs much earlier in the development cycle, Project Olympus has allowed the community to contribute to the ecosystem by downloading, modifying, and forking the hardware design just like open source software. This has enabled bootstrapping a diverse and broad ecosystem for Project Olympus, making it the de facto open source cloud hardware design for the next generation of scale computing.

Today, we’re pleased to report that Project Olympus has attracted the latest in silicon innovation to address the exploding growth of cloud services and computing power needed for advanced and emerging cloud workloads such as big data analytics, machine learning, and Artificial Intelligence (AI). This is the first OCP server design to offer a broad choice of microprocessor options fully compliant with the Universal Motherboard specification to address virtually any type of cloud computing workload.

We have collaborated closely with Intel to enable their support of Project Olympus with the next-generation Intel Xeon Processors, codenamed Skylake, and subsequent updates could include accelerators via Intel FPGA or Intel Nervana solutions.

Project Olympus 1

AMD is bringing hardware innovation back into the server market and will be collaborating with Microsoft on Project Olympus support for their next generation “Naples” processor, enabling application demands of high performance datacenter workloads.

Project Olympus 2

We have also been working on a long-term project with Qualcomm, Cavium, and others to advance the ARM64 cloud servers compatible with Project Olympus. Learn more about Enabling Cloud Workloads through innovations in Silicon.

In addition to multiple choices of microprocessors for the core computation aspects, there has also been tremendous momentum to develop the core building blocks in the Project Olympus ecosystem for supporting a wide variety of datacenter workloads.

Today, Microsoft is announcing with NVIDIA and Ingrasys a new industry standard design to accelerate Artificial Intelligence in the next generation cloud. The Project Olympus hyperscale GPU accelerator chassis for AI, also referred to as HGX-1, is designed to support eight of the latest “Pascal” generation NVIDIA GPUs and NVIDIA’s NVLink high speed multi-GPU interconnect technology, and provides high bandwidth interconnectivity for up to 32 GPUs by connecting four HGX-1 chassis together. The HGX-1 AI accelerator provides extreme performance scalability to meet the demanding requirements of fast-growing machine learning workloads, and its unique design allows it to be easily adopted into existing datacenters around the world.

Project Olympus 3

Our work with NVIDIA and Ingrasys is just one of numerous stand-out examples of how the open source strategy of Project Olympus has been embraced by the OCP community. We are pleased by the broad support across industry partners that are now part of the Project Olympus ecosystem.

Project Olympus 4

This is a significant moment as we usher in a new era of open source hardware development with the OCP community. We intend for Project Olympus to provide a blueprint for future hardware development and collaboration at cloud speed. You can learn more and view the specification for Microsoft’s Project Olympus at our OCP GitHub branch.

Azure Stream Analytics Tools for Visual Studio

Have you had a chance to try out the public preview version of the Azure Stream Analytics Tools for Visual Studio yet? If not, read through this blog post to get a sense of the Stream Analytics development experience with Visual Studio. These tools are designed to provide an integrated Azure Stream Analytics development workflow in Visual Studio. They will help you quickly author query logic and easily test, debug, and diagnose your Stream Analytics jobs.

Using these tools, you get not only a best-in-class query authoring experience, but also the power of IntelliSense (code completion), syntax highlighting, and error markers. You can now test queries on local development machines with representative sample data to speed up development iterations. Seamless integration with the Azure Stream Analytics service helps you submit jobs, monitor live job metrics, and export existing jobs to projects with just a few clicks. In addition, you can naturally leverage Visual Studio source control integration to manage your job configurations and queries.

Use Visual Studio Project to manage a Stream Analytics job

To create a brand-new job, just create a new project from the built-in Stream Analytics template. The job input and output are stored in JSON files, and the job query is saved in the Script.asaql file. Double-click the input and output JSON files inside the project, and you will find that their settings UI is very similar to the portal UI.

VS1

VS2

Using Visual Studio tools to author Stream Analytics queries

Double-click the Script.asaql file to open the query editor; its IntelliSense (code completion), syntax highlighting, and error markers make your authoring experience very efficient. Also, if a local input file is specified and defined correctly, or if you have previously sampled data from the live input sources, the query editor will suggest the column names of the input source as soon as you enter the input name.

VS3

Testing locally with sample data

Upon finishing the query authoring, you can quickly test it on your local machine by specifying local sample data as input. This speeds up your development iterations.

VS4

VS5

VS6

VS7

View live job metrics

After you validate the query test result, click “Submit to Azure” to create a streaming job under your Azure subscription. Once the job is created, you can start and monitor it inside the job view.

VS8

VS9

VS10

VS11

VS12

View and update jobs in Server Explorer

You can browse all your Stream Analytics jobs under your subscriptions in Server Explorer, expand a job node to view its job status and metrics. If needed, you can directly update job queries and job configurations, or export an existing job to a project.

VS13

How do I get started?

You first need to install Visual Studio (Visual Studio 2015, Visual Studio 2013 Update 4, or Visual Studio 2012) and the Microsoft Azure SDK for .NET version 2.7.1 or above using the Web Platform Installer. Then get the latest ASA tools from the Download Center.

More information can be found in the tutorial: Use Azure Stream Analytics Tools for Visual Studio.

Azure Data Factory February new features update

Azure Data Factory allows you to bring data from a rich variety of locations in diverse formats into Azure for advanced analytics and predictive modeling on top of massive amounts of data. We have been listening to your feedback and strive to continuously introduce new features and fixes to support more data ingest and transformation scenarios. Moving to the new year, we would like to start a monthly feature summary blog series so our users can easily keep track of new feature details and use them right away.

Here is a complete list of the Azure Data Factory updates for February. We will go through them one by one in this blog post.

  • New Oracle driver bundled with Data Management Gateway with performance enhancements
  • Service Principal authentication support for Azure Data Lake Store
  • Automatic table schema creation when loading into SQL Data Warehouse
  • Zip compression/decompression support
  • Support extracting data from arrays in JSON files
  • Ability to explicitly specify cloud copy execution location
  • Support updating the new Azure Resource Manager Machine Learning web service

New Oracle driver bundled with Data Management Gateway with performance enhancements

Introduction: Previously, to connect to an Oracle data source through Data Management Gateway, users were required to install an Oracle provider separately, which often caused them to run into issues. Now, with the Data Management Gateway version 2.7 update, a new Microsoft driver for Oracle is installed, so no separate Oracle driver installation is required. The new bundled driver provides better load throughput, with some customers observing a 5x-8x performance increase. Refer to the Oracle connector documentation page for details.

Configuration: The Data Management Gateway periodically checks for updates. You can check its version from the Help page as shown below. If you are running a version lower than v2.7, you can get the update directly from the Download Center. With Data Management Gateway version 2.7, the new driver will be used automatically in the Copy Wizard when Oracle is used as the source. Learn more about Oracle linked service properties.

gatewayversion

Service Principal authentication support for Azure Data Lake Store

Introduction: In addition to the existing user credential authentication, Azure Data Factory now supports Service Principal authentication to access Azure Data Lake Store. The token used in the previous user credential authentication mode can expire after anywhere from 12 hours to 90 days, so periodically reauthorizing the token, manually or programmatically, is required for scheduled pipelines. Learn more about the token expiration of data moving from Azure Data Lake Store using Azure Data Factory. Now, with Service Principal authentication, the key expiration threshold is much longer, so we suggest using this mechanism going forward, especially for scheduled pipelines. Learn more about the Azure Data Lake Store and Service Principal.

Configuration: In the Copy Wizard, you will see a new Authentication type option with Service Principal as default, shown below. 

serviceprincipal
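
For readers who want to see what service-principal (application ID plus key) authentication looks like in .NET, here is a small hypothetical sketch using the ADAL library. Data Factory handles this for you once the linked service contains the principal’s ID and key, so this is only to illustrate the mechanism; the tenant, application ID, key, and Data Lake Store resource URI below are placeholder assumptions to verify against the current documentation.

    using System;
    using Microsoft.IdentityModel.Clients.ActiveDirectory;   // ADAL NuGet package

    // Illustrative only: Data Factory acquires tokens for you once the linked service
    // holds the service principal's ID and key. All values below are placeholders.
    public static class ServicePrincipalTokenDemo
    {
        public static void Main()
        {
            var authority = "https://login.microsoftonline.com/<tenant-id>";
            var context = new AuthenticationContext(authority);
            var credential = new ClientCredential("<application-id>", "<application-key>");

            // Acquire a token for Azure Data Lake Store as the service principal.
            // The resource URI is an assumption; check the Data Lake Store documentation.
            var result = context.AcquireTokenAsync("https://datalake.azure.net/", credential)
                                .GetAwaiter().GetResult();

            Console.WriteLine("Token acquired, expires on " + result.ExpiresOn);
        }
    }

Unlike a cached user token, the application key does not expire on the 12-hour-to-90-day schedule described above, which is why the service principal route is the better fit for scheduled pipelines.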

Automatic table schema creation when loading into SQL Data Warehouse

Introduction: When copying data from on-premises SQL Server or Azure SQL Database to Azure SQL Data Warehouse using the Copy Wizard, if the table does not exist in the destination SQL Data Warehouse, Azure Data Factory can now automatically create the destination table using the schema from the source.

Configuration: From the Copy Wizard, in the Table mapping page, you now have the option to map to existing sink tables or create new ones using the source tables’ schema. Proper data type conversion may happen if needed to fix incompatibilities between the source and destination stores. Users will be warned about potential incompatibility issues in the Schema mapping page, as shown in the second image below. Learn more about Auto table creation.

autotablecreation1

 autotablecreation2

Zip compression/decompression support

Introduction: The Azure Data Factory Copy Activity can now unzip/zip your files with the ZipDeflate compression type, in addition to the existing GZip, BZip2, and Deflate compression support. This applies to all file-based stores, including Azure Blob, Azure Data Lake Store, Amazon S3, FTP/s, File System, and HDFS.

Configuration: You can find the option in the Copy Wizard pages as shown below. Learn more in the specifying compression section of each corresponding connector topic.

zip

Extracting data from arrays in JSON files

Introduction: The Copy Activity now supports parsing arrays in JSON files. This addresses the feedback that, previously, an entire array could only be converted to a string or skipped. You can now extract data from an array, or cross-apply objects in an array with data under the root object.

Configuration: The Copy Wizard provides you with the option to choose how a JSON array should be parsed, as shown below. In this example, the elements in the “orderlines” array are parsed into “prod” and “price” columns. For more details on configuration and examples, check the specifying JSON format section in each file-based data store topic.

json

Ability to explicitly specify cloud copy execution location

Introduction: When copying data between cloud data stores, Azure Data Factory, by default, detects the region of your sink data store and picks the geographically closest service to perform the copy. If the region is not detectable or the service that powers the Copy Activity doesn’t have a deployment available in that region, you can now explicitly set the Execution Location option to specify the region of service to be used to perform the copy. Learn more about the globally available data movement.

Note: Your data will go through that region over the wire during copy.

Configuration: The Copy Wizard will prompt for the Execution Location option in the Summary page if you fall into the cases mentioned above.

executionlocation

Support updating the new Azure Resource Manager Machine Learning web service

Introduction: You can use the Machine Learning Update Resource Activity to update the Azure Machine Learning scoring service, as a way to operationalize Machine Learning model retraining for scoring accuracy. Now, in addition to supporting the classic web service, Azure Data Factory supports the new Azure Resource Manager-based Azure Machine Learning scoring web service using a Service Principal.

Configuration: The Azure Machine Learning Linked Service JSON now supports Service Principal authentication so you can access the new web service endpoint. Learn more from the documentation on updating a scoring web service that is an Azure Resource Manager web service.

 

Above are the new features we introduced in February. Have more feedback or questions? Share your thoughts with us on the Azure Data Factory forum or feedback site; we’d love to hear more from you.


Azure Service Bus Premium Messaging now available in UK

We’re pleased to announce that Azure Service Bus Premium Messaging is now available in the UK.

Service Bus Premium Messaging supports a broader array of mission-critical cloud apps, and provides all the messaging features of Service Bus queues and topics with predictable, repeatable performance and improved availability. It is now generally available in the UK.

For more general information about Service Bus Premium Messaging, see this July 2016 blog post and this January 2017 article "Service Bus Premium and Standard messaging tiers".
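
Because the premium tier is chosen when you create the namespace rather than in your code, existing queue and topic code generally runs unchanged against a premium namespace. As a minimal sketch, here is a hypothetical send and receive against a queue using the classic .NET Service Bus client; the connection string and queue name are placeholders.

    using System;
    using Microsoft.ServiceBus.Messaging;   // WindowsAzure.ServiceBus NuGet package

    // Minimal sketch: point the connection string at a Premium Messaging namespace
    // in the UK region and the rest of the code is tier-agnostic.
    public static class PremiumQueueDemo
    {
        public static void Main()
        {
            var connectionString = "<premium-namespace-connection-string>";
            var client = QueueClient.CreateFromConnectionString(connectionString, "orders");

            client.Send(new BrokeredMessage("Hello from a Premium Messaging namespace"));

            var message = client.Receive(TimeSpan.FromSeconds(10));
            if (message != null)
            {
                Console.WriteLine(message.GetBody<string>());
                message.Complete();   // settle the message so it is removed from the queue
            }
        }
    }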

Azure Service Bus Prem Messaging

We are excited about this addition, and invite customers using this Azure region to try Azure Service Bus Premium Messaging today!

The week in .NET – Visual Studio 2017, .NET Core SDK, F# 4.1, On .NET with Phillip Carter, Happy Birthday from John Shewchuk, Pyre

Previous posts:

Visual Studio 2017, .NET Core SDK 1.0, F# 4.1

Yesterday, we had a big product launch! Visual Studio 2017 is here, and with it come the releases of .NET Core SDK 1.0 and F# 4.1. Check out the posts for all the details:

Get the bits now:

On .NET

In last week’s episode, Phillip Carter gave a tour of F#. This episode is two full hours of delicious F# aimed mainly at C# developers with no prior F# experience:

This week, we’ll have two shows. First, Scott Hunter will give a recap of yesterday’s announcements and what they mean for .NET. Second, Matt Watson from Stackify will talk about Prefix, a lightweight dev tool for the web that shows real-time logs, errors, queries, and more. We’ll stream live on Channel 9 at 9:30AM Pacific Time on Wednesday, then 10AM on Thursday. We’ll take questions on Gitter’s dotnet/home channel and on Twitter. Please use the #onnet tag. It’s OK to start sending us questions in advance if you can’t make it live during the shows.

Happy Birthday .NET!

We have another Happy Birthday .NET video for you this week. John Shewchuk is a Technical Fellow in charge of developer experience. He worked on Visual InterDev, drove the first release of Visual Studio, and was part of the architectural team for the very first and subsequent releases of .NET.

Tool of the week: FNA

FNA is a reimplementation of the Microsoft XNA libraries. The main contributor to the project, Ethan Lee, has ported two dozen XNA games already, including FEZ, Bastion, and Terraria.

Bastion running on FNA

Game of the week: Pyre

Pyre is a role-playing game in which you lead a band of exiles to freedom. To do this, you must fight through ancient competitions that are spread across a mystical purgatory. Each battle will bring you and your party closer to freedom as they gain access to new abilities. Pyre will feature both a campaign and a two-player versus mode, letting you challenge a friend to one of its fast-paced ritual showdowns.

pyre

Pyre is being created by Supergiant Games using C# and their own custom engine. It is under development but will be launching on Steam and PlayStation 4.

User group meeting of the week: Linux and microservice architecture in NC

Tonight, Wednesday, March 8, at 5:30PM in Morrisville, NC, the TRINUG.NET group holds a meeting on .NET, Linux, and microservice architecture.

.NET

ASP.NET

C#

F#

New F# Language Suggestions:

Check out F# Weekly for more great content from the F# community.

Xamarin

UWP

Azure

Data

Games

And this is it for this week!

Contribute to the week in .NET

As always, this weekly post couldn’t exist without community contributions, and I’d like to thank all those who sent links and tips. The F# section is provided by Phillip Carter, the gaming section by Stacey Haffner, the Xamarin section by Dan Rigby, and the UWP section by Michael Crump.

You can participate too. Did you write a great blog post, or just read one? Do you want everyone to know about an amazing new contribution or a useful library? Did you make or play a great game built on .NET?
We’d love to hear from you, and feature your contributions on future posts:

This week’s post (and future posts) also contains news I first read on The ASP.NET Community Standup, on Weekly Xamarin, on F# weekly, and on Chris Alcock’s The Morning Brew.

Celebrating the Women Who Inspire Us

Everyone can be part of helping to drive better outcomes for women around the world. For Bing, this means celebrating equality, and being a vehicle to share that voice.

Last year, Bing celebrated Women's History Month and International Women's Day with a quiz to test your knowledge on 'firsts for women.' On Bing in the Classroom, a series of exciting lesson plans helped students learn about the impact women are making in fields such as politics, STEM-related disciplines, and more. And perhaps you saw the Bing Homepage during this time:

Bing Homepage - Konak, Turkey

A wall in the shape of a woman’s face in Konak, Turkey was featured as the Bing homepage image in Australia, Brazil, Canada, France, Germany, Great Britain, Spain and the US.

International Women's Day

In celebration of International Women's Day, the Bing team invites you to share a greeting card with the inspirational women in your life. The official IWD campaign #BeBoldForChange calls on all of us to help forge a better working world—a more gender-inclusive world. We are excited to help you share that message through Bing. Share your story and inspire.

Bing Celebrates International Women's Day - Search results

International Women's Day - Share this card

Thank you for raising awareness for a more inclusive world with Bing!

- The Bing Team

Microsoft Mechanics Video: New Conditional Access capabilities in Azure AD and Enterprise Mobility + Security!

Howdy folks,

I’ve talked and written a lot about the vision of Identity as the New Control Plane.

This is based on the idea that as more and more of a company’s digital resources live outside the corporate network, in the cloud and on devices, a great cloud-based identity system is the best way to maintain control over and visibility into how and when users access corporate applications and data.

The conditional access system in Azure AD Premium and the Enterprise Mobility + Security suite is the engine that makes this control plane vision a reality. It gives you, the enterprise admin, the ability to create policy-based access rules for any Azure AD-connected application (SaaS apps, custom apps running in the cloud, or on-premises web applications). Azure AD evaluates these policies in real time and enforces them whenever a user attempts to access an application.

Simon May and I just filmed a short ~10 minute video for Microsoft Mechanics, where we discuss Azure AD’s Conditional Access system and the many improvements we’ve made recently, which you’ll find below. In the video I demonstrate the improved user experience, how company data is protected without impacting productivity, and the improvements we’ve made to the IT admin experience.

Contextual controls and the unified administration experience

One of the biggest improvements we’ve made is an expanded set of contextual controls, so you can adjust user access based on the type of app, specific user permissions, where the app is accessed from, and whether the user is on a compliant device.

We’ve also made it easier to implement these controls with the new unified administration experience in the Azure Portal, which provides an all-in-one admin experience across Azure AD and Microsoft Intune.

Now you can establish multiple policies per app, share policies across applications, or set default policies globally for your whole tenant. And when you set risk-based conditional access controls, machine learning continuously safeguards access to your apps and data in real time.
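
To make the idea of contextual, policy-based access concrete, here is a purely conceptual C# sketch of how sign-in signals might be turned into an access decision. This is not an Azure AD API and none of these types exist in the product; it only illustrates what evaluating the app, network location, device compliance, and risk at sign-in time means.

    using System;

    // Conceptual sketch only; not an Azure AD API.
    public enum AccessDecision { Allow, RequireMfa, Block }

    public class SignInContext
    {
        public string App { get; set; }
        public bool FromTrustedNetwork { get; set; }
        public bool DeviceCompliant { get; set; }
        public double RiskScore { get; set; }   // e.g. produced by machine-learning risk detection
    }

    public static class ConditionalAccessSketch
    {
        // One hypothetical policy: block risky sign-ins, step up auth for unmanaged access.
        public static AccessDecision Evaluate(SignInContext signIn)
        {
            if (signIn.RiskScore > 0.8) return AccessDecision.Block;
            if (!signIn.DeviceCompliant || !signIn.FromTrustedNetwork) return AccessDecision.RequireMfa;
            return AccessDecision.Allow;
        }

        public static void Main()
        {
            var decision = Evaluate(new SignInContext
            {
                App = "SharePoint Online",
                FromTrustedNetwork = false,
                DeviceCompliant = true,
                RiskScore = 0.2
            });

            Console.WriteLine(decision);   // RequireMfa: compliant device, but off the corporate network
        }
    }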

Check out today’s show to see these capabilities in action, try it out for yourself, and learn more on our documentation page. And, as always, let us know what you think! We’re listening.

Best regards,

Alex Simons (Twitter: @Alex_A_Simons)

Director of Program Management

Microsoft Identity Division

Deploy your SharePoint Server 2016 farm in Azure

SharePoint Server 2016 in Microsoft Azure is designed to leverage faster provisioning of servers, allowing you to take your first step in hosting a server-based IT workload in the cloud. To dramatically reduce the time it takes to plan, design, test, and deploy SharePoint farms in Azure, we have published articles for SharePoint Server 2016 in Microsoft Azure, including:

In addition, the SharePoint Server 2016 High Availability Farm in Azure Deployment Kit assists you in creating the Azure infrastructure and configuring the servers of the high-availability SharePoint Server 2016 farm. This kit contains Microsoft Visio and Microsoft PowerPoint figures from the support articles, the set of all the PowerShell commands to create and configure the high-availability SharePoint Server 2016 farm in Azure, and a Microsoft Excel configuration workbook that generates the customized PowerShell commands based on your settings.

