Channel: TechNet Technology News

Alert Data Management in System Center 2016 Technical Preview 5 – Operations Manager

We have received feedback from customers like you about seeing a lot of alerts in your environment, some of them unwanted. One of the major reasons for this behavior is not tuning the monitors and rules in a management pack to suit your specific monitoring requirements. The default settings (the configuration of monitors and rules) in a management pack are designed to cater to a wide set of customer monitoring requirements. Since each customer environment is different, it is best to tune the default settings of a management pack to suit your environment's specific needs. The Management Pack Life Cycle article outlines best practices and tips to review, tune, and customize a management pack for your environment.

To help you tune management packs to suit your specific monitoring requirements, we have built an “Alert Data Management” feature into System Center 2016 Technical Preview 5 – Operations Manager. You no longer need to analyze the Operations database or generate reports to understand which management packs, monitors, rules, or sources have been generating the most alerts. With this feature you get quick insights into the alerts generated by different management packs, monitors, rules, and sources, and you can take action right away by tuning thresholds or disabling a monitor or rule based on your environment's specific requirements.

You can find the System Center 2016 Technical Preview 5 here.

Your continued feedback helps shape SCOM features and make them more productive for you. Please try this feature and let us know if you would like to see anything else added to it. You can submit your feedback at our SCOM feedback site: http://systemcenterOM.uservoice.com.

 

Aditya Goda | SCOM Program Manager | Microsoft
Get the latest System Center news on Facebook and Twitter
Main System Center blog: http://blogs.technet.com/b/systemcenter/
Operations Manager Team blog: http://blogs.technet.com/momteam/


Updates to “MP Updates and Recommendations” Feature in System Center 2016 Technical Preview 5 – Operations Manager

The first version of the new “Management Pack Updates and Recommendations” feature was introduced in the Technical Preview 4 release of System Center 2016. This feature recommends management packs for download based on a scan of servers in your SCOM environment. You can read about the TP4 feature here. Since then, we have received a lot of feedback from customers like you and, based on it, enhanced the feature to make it work harder for you. Here are the changes we made to the feature in TP5.

More Workloads Supported

Added support to recognize 80+ workloads and recommend Microsoft management packs to monitor them.

 “More Information” screen

We enhanced the “View Machine details” screen and are now calling it the “More Information” screen. This shows more information about why a recommendation was displayed. For recommendations with status “Not Installed”, it displays the list of machines where the workload was discovered. For status “Update Available”, it displays a list of management pack files for which updates are available along with their version numbers. For status “Partially Installed”, it not only displays the list of MPs that may be required for all the capabilities of the MP to work but also displays the list of machines where the workload was discovered.

View Download Center Page

We were told that users would like to see the Download Center page associated with a recommendation, so we added the ability to launch the Download Center page for the management pack being recommended.

Language Settings

To better help users who use non-English management packs, we enhanced the “Get MP” experience to include language settings. If specific language MPs need to be imported, the user can select the required language and the corresponding MP files will be added to the list of MPs being imported.

Improved Performance

Our team was extremely cautious about the overhead involved with this feature, so they worked very hard to reduce the cycles required to discover the workloads. The design for discovering the presence of 65+ Windows Server roles was debated many times over until it required only 3 operations instead of the 65 it originally needed. We also improved the load time of the screen by caching data that doesn’t change frequently.

Up-to-date MP Catalog

We have enhanced the MP catalog to ensure that it stays up-to-date with Download center and always displays the latest version of management packs.

Recursively delete MPs

On a related note, if you’ve had difficulty deleting a management pack because of needing to recursively track down and delete all of the dependent management packs before deleting the originally intended management pack, please check out this blogpost.

For more information about Updates and Recommendations see the Updates and Recommendations section of How to Import an Operations Manager Management Pack. You can find the System Center 2016 Technical Preview 5 here.

Your continued feedback helps shape SCOM features and make them more productive for you. Please try this feature and let us know if you would like to see anything else added to it. You can submit your feedback at our SCOM feedback site: http://systemcenterOM.uservoice.com.

 

Aditya Goda | SCOM Program Manager | Microsoft
Get the latest System Center news on Facebook and Twitter
Main System Center blog: http://blogs.technet.com/b/systemcenter/
Operations Manager Team blog: http://blogs.technet.com/momteam/

Console UI Performance Improvements in System Center 2016 Technical Preview 5 – Operations Manager

We have made performance improvements to alert views in the Operations console to increase responsiveness. With this feature you will see the below improvements:

  • The alert view is optimized to load efficiently
  • Alert tasks and alert details in the alert view are optimized to load efficiently
  • Context menus of an alert in the alert view are optimized to load efficiently

These changes will be most noticeable in environments with a high load on the Operations Manager database.

You can find the System Center 2016 Technical Preview 5 here.

Your continued feedback helps shape SCOM features and make them more productive for you. Please try this feature and let us know if you would like to see anything else added to it. You can submit your feedback at our SCOM feedback site: http://systemcenterOM.uservoice.com.

 

Aditya Goda | SCOM Program Manager | Microsoft
Get the latest System Center news on Facebook and Twitter
Main System Center blog: http://blogs.technet.com/b/systemcenter/
Operations Manager Team blog: http://blogs.technet.com/momteam/

Introduction to .NET Framework Compatibility

This post was written by Mike Rousos, a software engineer on the .NET team.

Introduction

Beginning with the .NET Framework 4.0, all versions of the .NET Framework with a major version number of 4 (called ‘4.x’ versions) install as in-place updates. This means that only one 4.x .NET Framework is installed on a computer at a time. Installing the .NET Framework 4.5 will replace version 4.0, the .NET Framework 4.5.1 will replace version 4.5, the .NET Framework 4.6 will replace version 4.5.1, and so on.

Because of the in-place nature of these updates, applications that originally ran on the .NET Framework 4.0, for example, may need to run on 4.6 after the .NET Framework installed on the computer is upgraded. The .NET Framework 4.x releases are highly-compatible with one another, so an app which works on one 4.x .NET Framework will usually work on newer versions of the .NET Framework. However, there are some changes between 4.x versions of the .NET Framework, so apps should be tested on any versions of the .NET Framework they are expected to run on.

This article gives an overview of best practices and tools to make supporting a new .NET Framework version easier.

What Has Changed and Why?

Compatibility with previous versions of the .NET Framework is a high priority for the .NET team. In fact, all changes in the .NET Framework are reviewed by experienced engineers who assess the impact of the changes on customer apps.

Despite that, compatibility issues still exist. One reason for this is that compatibility (important as it is) is not the only priority in updating the .NET Framework. Sometimes functionality has to change to address a security hole, or to support an industry standard.

Other compatibility issues occur unintentionally. The .NET Framework team conducts thorough compatibility testing to guard against these sorts of issues, but some bugs do slip through. Even more complicated, fixing a compatibility issue that was previously introduced as a bug is, itself, a compatibility-affecting change (since some users may be depending on the unintentional newer behavior)! In these situations (addressing unintentional behavioral changes), the .NET Framework team will often use a solution known as ‘quirking’.

Quirking and Targeting

Quirking refers to the compatibility issue mitigation of having two separate code paths in the .NET Framework and choosing which path to take based on the .NET Framework version that the application is targeting. Because many .NET Framework compatibility issues are mitigated in this way, it’s possible to avoid many potential issues when running on newer .NET Framework versions by leaving the targeted .NET Framework version unchanged for an app. Quirking behavior is automatically determined based on the .NET Framework the app targets, but can be overridden by developers using application or machine configuration settings. Although many compatibility issues are mitigated through quirks, due to security considerations and technical limitations, not all compatibility issues can be quirked.

As an example, if an app targets the .NET Framework 4.5 but runs on a computer with the .NET Framework 4.5.2 installed, even though the app executes on the newer Framework, it will mimic some behaviors from 4.5 in order to minimize compatibility issues.

With the .NET Framework 4.0, 4.5, and 4.5.1 now out of support, it is worth noting that targeting these Frameworks while running on a newer .NET Framework version will continue to be supported as per the newer Framework’s support policy.

The target version is determined by consulting the TargetFramework attribute of the app’s main assembly at the time the application domain is created (typically when the managed executable starts). This attribute can be set several different ways:

  • The target framework for a project can be specified in Visual Studio.
  • The target framework for a project can be specified directly in the project file.
  • The target framework can be specified by directly applying a TargetFramework attribute in the project’s source code (see the sketch just after this list).
    • Note that MSBuild automatically adds a TargetFramework attribute based on the project’s target framework moniker, so this attribute should only be applied directly in non-MSBuild scenarios. If MSBuild is used, adjust the target framework by using the project file settings linked previously.
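
To illustrate that last option, here is a minimal sketch of applying the attribute directly (for non-MSBuild scenarios only). The framework name string uses the standard .NETFramework moniker format, and it mirrors what MSBuild would normally generate for you:

using System.Runtime.Versioning;

// Applied once at the assembly level (for example in AssemblyInfo.cs).
// Only do this in non-MSBuild scenarios; MSBuild already emits an
// equivalent attribute from the project's target framework moniker.
[assembly: TargetFramework(".NETFramework,Version=v4.6",
    FrameworkDisplayName = ".NET Framework 4.6")]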

Quirking settings are AppDomain-wide. In most cases, libraries (dll’s) will be quirked (or not quirked) according to the executable that is depending on them. Because of this, authors of shared libraries may need to make sure that their code works even without quirks applied.

Compatibility Switches

In addition to automatic quirking based on target .NET Framework, developers can enable or disable individual compatibility quirks (as well as some behaviors which are not automatically quirked) by setting compatibility switches to explicitly opt in or out of compatibility-affecting changes. These ‘compatibility switches’ can be useful for allowing a developer to target a newer .NET Framework version (in order to use new .NET functionality) while still opting out of some changes that are known to affect the app. Compatibility switches can be set in a variety of ways:

  • Through configuration file settings
  • Through environment variables
  • Programmatically in the project’s source code

Although the details of how to set compatibility switches are out of scope for this introductory blog, look for a future follow-up that digs into the topic in more detail.
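
As a small taste of the programmatic option, here is a minimal sketch using the AppContext class. The switch name below is a placeholder rather than a real .NET Framework switch; real switch names are listed alongside the documented changes they control:

using System;

// Set a (hypothetical) compatibility switch early, before any code that reads it runs.
AppContext.SetSwitch("Switch.Contoso.UseLegacyBehavior", true);

// Code that honors the switch can check whether it has been set:
bool useLegacy;
if (AppContext.TryGetSwitch("Switch.Contoso.UseLegacyBehavior", out useLegacy) && useLegacy)
{
    // take the legacy code path
}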

Compatibility issue documentation on MSDN will often mention compatibility switches, when available.

Compatibility Issue Documentation

All of the known .NET Framework compatibility issues are documented on MSDN.

Beginning with the .NET Framework 4.5.1, compatibility issues are categorized as either ‘runtime changes’ or ‘retargeting changes’.

  • Runtime changes are those that affect any apps running on the newer .NET Framework version (these changes are not quirked).
  • Retargeting changes are changes that only affect apps rebuilt to target the newer Framework. These are either quirked .NET Framework changes or changes in compilation tools. For changes between 4.0 and 4.5, the runtime/retargeting distinction is not highlighted as its own column in the tables of compatibility issues, but can be inferred from reading the descriptions.

In addition to the MSDN documentation, .NET Framework compatibility issues are available as markdown files for consumption by compatibility tooling (discussed below). The markdown files can be read directly (or in an MSDN mirror of the list) to learn about compatibility issues. The markdown files are part of an open-source GitHub repository, so please submit pull requests or create issues for any corrections. The .NET team works to keep the information in the markdown files synchronized with the information available on MSDN.

Compiler Compatibility Issues

In addition to the .NET Framework runtime and retargeting compatibility issues described above, there are also small sets of changes between versions of the C# and Visual Basic compilers that cannot be quirked, but also do not occur at runtime. For example, developers must be intentional when rebuilding apps with newer compilers, because of rare differences between how the C# 4.0 compiler generates IL and how the C# 5.0 compiler generates IL.

Because these compatibility issues only manifest themselves when re-building with a newer compiler, they won’t affect old, previously compiled binaries running on a new version of the .NET Framework. For that reason, MSDN documentation classifies these as retargeting changes. Compatibility issue markdown files will label compiler compatibility issues as a class of retargeting changes that are ‘build-time,’ which is different from a ‘quirked’ retargeting change.

Tools for Identifying Issues

The primary purpose of the compatibility issue markdown files published on GitHub is for consumption by compatibility tools. These tools ease the migration from one .NET Framework version to another. Today, there are two toolsets for compatibility analysis.

API Portability Analyzer

ApiPort (as the tool is called, for short) is a tool that scans binaries and identifies all .NET Framework APIs used. Then, it compares those APIs to the data stored in the compatibility issue markdown files and provides a report on any APIs that are used which have changed between one .NET Framework 4.x version and another. Command line options can narrow the scan (for example, by only considering changes between specified .NET Framework versions). For full documentation, please see ApiPort’s breaking change scanning usage instructions. A couple caveats to bear in mind when working with ApiPort are:

  1. Because ApiPort is only looking at which .NET APIs are called, it will report some ‘false positives’. Most .NET compatibility issues only affect very specific code paths of a given API. Simply using one of these APIs does not mean that an app will be affected by a compatibility issue. Read through the change descriptions to determine whether the reported issues are likely to manifest in your particular app.
  2. Because ApiPort is only looking at which .NET APIs are called, some compatibility issues cannot be detected by the tool. For example, using a changed XAML control in a WPF app may not be visible when only scanning IL. ApiPort is a useful tool, but is not a substitute for compatibility testing.

.NET Framework Compatibility Analyzers

The .NET Framework Compatibility analyzers are a set of Roslyn diagnostic code analyzers which use syntax trees and semantic models from source code to more intelligently decide whether a project is likely to encounter compatibility issues or not. They will still report some false positives, but should be more accurate than ApiPort.

These are available on NuGet.org as Microsoft.DotNet.FrameworkCompatibilityDiagnostics. The .NET Framework team is currently working to open-source these analyzers. Please watch this blog for more updates on the open-sourcing effort.

Reporting New Compatibility Issues

While migrating applications from one .NET Framework version to another, you may occasionally encounter a compatibility issue that isn’t documented in MSDN or in the ApiPort markdown files. If this happens, please let us know! The .NET team is continually working to keep compatibility documentation up-to-date.

Undocumented compatibility issues in the .NET Framework can be reported in either of these ways:

  • Use Visual Studio’s “Send-a-Smile” feedback feature to send details on the change.
  • Create an issue in the ApiPort repository that there is a .NET Framework compatibility issue not recorded in the tool’s markdown files. The .NET team (or community members) will be able to investigate and add documentation and support, as appropriate.

Conclusion

The .NET Framework strives to be highly-compatible with each new Framework release. Despite that, some compatibility issues are inevitable. Knowing about these changes, and knowing how to mitigate them, can help keep your applications running successfully on new versions of the .NET Framework.

Some compatibility best practices covered in this article include:

  • Don’t re-target existing projects to newer .NET Framework versions unless necessary. This will minimize compatibility issues by allowing the app to take advantage of compatibility quirks.
  • If you have any control over which .NET Framework versions are used to run your app, use newer .NET Framework versions instead of older ones. This is because many compatibility issues in 4.x versions of the .NET Framework have been fixed in subsequent versions. For example, there are fewer compatibility issues moving between 4.0 and 4.6 than between 4.0 and 4.5.
  • Make sure to test thoroughly if an app is re-built using newer compilers. This could expose it to compiler compatibility issues (though these are rare).
  • Use compatibility tools like the API Portability Analyzer and .NET Framework Compatibility Analyzers to identify potential problem areas.
  • Test your app on any .NET Framework version it is expected to run on.

Using these techniques, apps should continue to function on new versions of the .NET Framework.

Resources

Data Science and Machine Learning

New and Noteworthy Extensions for Visual Studio – April 2016

In April the community added another 100 new Visual Studio extensions to the Visual Studio Gallery. To help you enjoy this creativity from the community, every month or two I’ll be introducing some of the new extensions that caught my eye. Here are the highlights for this month:

Open in Notepad++ by Calvin A. Allen

Download Open in Notepad++ from the Visual Studio Gallery

This extension adds an option to open a file in Notepad++ to the right-click menu of Solution Explorer. We have previously seen similar extensions like Open in Sublime and Open in Visual Studio Code. Calvin based this extension on the open source code of the other two and added this helpful feature for all users of Notepad++.

Open In Notepad

File Differ by Mads Kristensen

Download File Differ from the Visual Studio Gallery

At last month’s //build conference, Mads held a session on building Visual Studio extensions. During the session he created this helpful extension to compare two files in your project. It took him less than one hour to write the code, publish it to GitHub and integrate it into continuous integration – all live on stage. You can catch the recording of the session over at Channel 9.

File Differ

And Pizza For All by Daniel Meixner

Download And Pizza For All from the Visual Studio Gallery

This extension allows you to order Pizza straight from within Visual Studio. Yum! More seriously though, this is a great example of how easy it is to integrate a website into Visual Studio. The source of the extension is available on GitHub and it’s a great starting point if you want to integrate a web application into Visual Studio.

And Pizza For All

Visual C++ For Linux Development

Download Visual C++ For Linux Development from the Visual Studio Gallery

The Visual C++ extension for Linux Development allows you to write C++ code for Linux right in Visual Studio. You can create a new project, remote compile, and debug right from within Visual Studio.

Visual C++ For Linux

Top 10 Popular Extensions from April 2016

As I mentioned in the beginning, we added a great number of new extensions in April. Out of the new ones added, here are the 10 most popular ones — give them a try!

  1. Visual C++ for Linux Development
  2. Xamarin Forms Templates
  3. Solidity
  4. File Differ
  5. Learn the Shortcut
  6. Web Accessibility Checker
  7. Grunt Snippet Pack
  8. Agent SVN
  9. vsXen
  10. Ninja Coder For MvvmCross and Xamarin Forms

Build your own

These few examples of simple integrations show a wide range of what you can build through Visual Studio’s extensibility framework. If that piqued your interest, our Integrate site has some great tutorials and videos on how to get started with Visual Studio extensions: VisualStudio.com/integrate. Take a look and let me know how it goes. I’m also hanging out in our extendvs Gitter chat as @bertique. Come on by and give me a shout.

Michael Dick, Senior Program Manager, Visual Studio
@midi2dot0

Michael Dick is a Program Manager working on the Visual Studio team. Before joining Microsoft, Michael worked at a variety of tech companies and is passionate about developer tools. He is currently focusing on the ecosystem and extensibility experience for Visual Studio.

Check Out the New Intune App SDK Support for Xamarin & Cordova!

Earlier this week the Intune team announced big news: After intensive collaboration between the Visual Studio and Xamarin teams, there are now two new tools available that will make it dramatically easier for developers to use our Intune App SDK to prevent data loss in mobile iOS and Android apps.

These two dev tools for the SDK are:

  • Intune App SDK Xamarin component
  • Intune App SDK Cordova plugin

To give you an idea of what these tools make possible, the Intune team explains:

The plugin and component were designed specifically for use when building cross-platform mobile apps on either the Xamarin or the Cordova platforms, making it easy for developers to bake in mobile application management (MAM) controls as part of their standard development process.

 

If you are a developer building a cross-platform app, you can quickly add them to your project, with very little modification to your mobile app. Both are essentially embedding our native Intune App SDK functionality (so you get the same features) but with the ease of use that they provide.

You can get started today by downloading the SDK and the plugins via github.  All the details are in the post linked above.

To see more info and links to detailed additional resources, check out the post, or visit the “What’s New in Microsoft Intune” site.  Also check out the Xamarin Evolve16 conference that is going on right now:  https://evolve.xamarin.com/.

 

Versioning NuGet packages in a continuous delivery world: part 1

On the Package Management team, we’re frequently asked how to think about versioning packages. Conceptually, it’s simple: NuGet (like many package managers) prefers semantic versioning (SemVer), which describes a release in terms of its backwards-compatibility with the last release. But for teams that have adopted continuous delivery, there’s tension between this simple concept and the reality of publishing packages.

This series of blog posts will cover strategies for resolving the tension. In this first one, we’ll cover SemVer, immutability of packages, and a really simple versioning strategy. Later posts will talk about some future thinking we’re doing in the versioning space, as well as a technique for Git users that seems to work really well.

The tension between SemVer and CD

With continuous delivery, we recommend that you produce and publish packages as an output of your CI build. Put those CI packages through your validation steps (automated testing, user acceptance testing, or whatever) and, when a particular package is deemed ready, you promote it to release status.

Did you spot the conflict? Each CI package produced needs a unique version number, so you must automatically increment the version number as you produce packages. As Xavier Decoster wrote a few years ago, this is a contradiction in terms: You’re “auto-versioning a yet unknown semantic version”. Put another way, we have a paradox: We need to pick a version number before we’ve had a chance to determine what the version number ought to be based on the contents of the package.

Brief intro to SemVer

Semantic version numbers have 3 numeric components, Major.Minor.Patch. When you fix a bug, increment patch (1.0.0 → 1.0.1). When you release a new backwards-compatible feature, increment minor and reset patch to 0 (1.4.17 → 1.5.0). When you make a backwards-incompatible change, increment major and reset minor and patch to 0 (2.6.5 → 3.0.0).

Immutability and unique version numbers

In NuGet, a particular package is identified by its name and version number. Once you publish a package at a particular version, you can never change its contents. But when you’re producing a CI package, you can’t know whether it will be version 1.2.3 or just a step along the way towards 1.2.3. You don’t want to burn the 1.2.3 version number on a package that still needs a few bug fixes.

SemVer to the rescue! In addition to Major.Minor.Patch, SemVer provides for a prerelease label. Prerelease labels are a “-” followed by whatever letters and numbers you want. Version 1.0.0-alpha, 1.0.0-beta, and 1.0.0-foo12345 are all prerelease versions of 1.0.0. Even better, SemVer specifies that when you sort by version number, those prerelease versions fit exactly where you’d expect: 0.99.999 < 1.0.0-alpha < 1.0.0-beta < 1.0.0.

Producing CI packages

Xavier’s blog post describes “hijacking” the prerelease tag to use for CI. We don’t think it’s hijacking, though. This is exactly what we do on Visual Studio Team Services to create our CI packages. We’ll overcome the paradox by picking a version number, then producing prereleases of that version. Does it matter that we leave a prerelease tag in the version number? For most use cases, not really. If you’re pinning to a specific version number of a dependent package, there will be no impact, and if you use floating version ranges with project.json, it will mean a small change to the version range you specify.

I’m going to assume you’ve got a small component you want to package up. Make sure the repo’s in your Visual Studio Team Services account. If you don’t, you can create a new class library called “MyComponent” in Visual Studio. Let the IDE add the solution to version control, then push the repo to Visual Studio Team Services. Also, if you haven’t installed Package Management in your account, do that now and create a feed to use.

Create a new Visual Studio build for the repo. You can do that from the Build hub or from the “setup a build now” badge in the repo. Add two new steps to the build: NuGet Packager and NuGet Publisher. Move the Packager step up to right after the Visual Studio Test step, and under Automatic package versioning, choose Use the date and time. Enter the version number you want to build, for example, 1.0.0.

NuGet Packager step

Leave the NuGet Publisher step at the bottom. Change its Feed type to Internal NuGet Feed. Paste in the NuGet URL from the feed you want to use.

NuGet Publisher step

The list of steps should look like the screenshot below.

Build steps for versioning

When you save and choose Queue a build, the build system will create a package with a version number like “1.0.0-ci-20160502-100256”. That’s the version number you selected followed by an ever-increasing prerelease section, in this case based on date and time.

Sharing the package with others

Once you put a particular package through its paces and decide it’s ready to release, you need to share it with others. The NuGet Publisher task works in Release Management just as it works in Team Build. While a full walkthrough is out of scope for this post, I recommend that you create a release definition that contains a NuGet Publisher task. You can point it either to another internal feed or, if you’re open-source-oriented, at NuGet.org. Queue a release off the build which produced your desired package – there, you’ve just released your first package from a CI stream!

Next time, expect more details about future investments that make versioning and releasing easier.


Windows 10, Hyper-V and Wireless – a new way to make this all work

Anyone who has used Hyper-V on a laptop is familiar with the pain of combining Hyper-V virtual networking with Wi-Fi networking.  In fact, I have written multiple blogs about this over the years.  Well, in recent builds of Windows 10 (build 14295 or later) there is a new option.  This is currently a bit hidden (no GUI, and some rather finicky PowerShell) but you can now set up virtual switches for virtual machines that use NAT.

This means that your virtual machines can use a private IP address and still access Internet resources.

More importantly – this approach is very compatible with wireless network adapters.  You can read all about how to set this up here:  https://msdn.microsoft.com/en-us/virtualization/hyperv_on_windows/user_guide/setup_nat_network

Cheers,
Ben

Installing Bash on Ubuntu on Windows 10 Insider Preview

Hello again, Allen Sudbring here, Premier Field Engineer in the Central Region. Today I want to talk about an exciting new feature coming with the Anniversary update to Windows 10 that was announced at the Build 2016 conference: the Bash command shell on Windows 10. I am extremely excited about this feature because I work with Linux and Windows VMs, and being able to manage both from one command shell window and move between CMD, PowerShell, and Bash increases productivity for both IT administrators and developers.

Microsoft has partnered with Canonical, the makers of Ubuntu, to bring the Bash shell to Windows. To make this happen, a new Windows Subsystem for Linux was created for Windows 10. Think of it sort of like reverse WINE, which is the Linux application that allows Windows programs to run on Linux. Most shell applications such as apt, git and editors like nano will work inside of this shell running in Windows.

More Information on the Windows Subsystem for Linux:

Windows Subsystem for Linux Overview

From https://blogs.msdn.microsoft.com/wsl/2016/04/22/windows-subsystem-for-linux-overview

Before I get started on how to install it, I need to put in a disclaimer that this is pre-release software that could change when the Anniversary update is released later this year. Screens and menus can also change in subsequent builds of the Windows Insider Preview.

Now that we have that out of the way, let’s look at some prerequisites for enabling the Bash shell:

  • Computer enrolled in the Windows 10 Insider Preview program in the Fast ring (NOTE: it can take up to 24 hours for the computer to register with the Insider Preview program and receive a Fast ring build)
  • Build 14136 or later from the Insider Preview program installed on the computer

Once you have confirmed that the computer is in the Fast ring and you have received build 14136 or newer, you can proceed with installing the Bash shell in Windows.

Login to the machine, and open Settings:

Click on “Update and Security”:

Click on “For Developers” in the left hand navigation pane and select the developer mode radio button:

Click Yes to enable Developer Mode:

Verify it’s enabled:

Close settings and open Control Panel and click on “Programs”:

Click “Turn Windows Features on or off”:

Check the box next to “Windows Subsystem for Linux (Beta)” and click OK:

Click the “Restart Now” button when the install is completed:

Login to the computer once it has rebooted and open a command prompt as administrator:

Type “Bash” at the command prompt and press “Y” to continue:

Bash and the support file system will be downloaded from the Windows Store:

The file system will be extracted and installed. Can take a few minutes to complete:

The final configuration step is to create a user. Enter a username when prompted:

NOTE: This step was not in the original build that was released after the Build conference. I have added it here as an update to demonstrate that the default user is no longer root.

Enter a password when prompted:

Re-enter the password when prompted.

Once the installation is completed, it will give a status of successful and open the bash prompt with the C: drive mounted:

You can get to the shell by opening a command prompt and typing Bash or click the link that is added in the Start Menu:

All normal commands work, for example sudo apt-get update, as does the nano text editor and SSH, including running a remote command on a Linux container host using SSH from Bash on Ubuntu on Windows.

Removal

If for some reason you need to remove the shell, or something isn’t working, you can open an elevated command prompt and run the following command:

lxrun /uninstall /full

About the only thing that can’t be run or supported in the shell is a graphical desktop environment such as KDE or GNOME.

We covered what the new Bash on Ubuntu shell in Windows 10 is, as well as how to install it and get started.

I didn’t think I would ever see the day of a Linux shell running on Windows that wasn’t Cygwin or a third-party application such as PuTTY. I am very excited about this and the other features coming in the Anniversary update for Windows 10, and I hope you are too!

Join us for a webinar on Windows Server 2016 this week

What are you doing on Thursday? Why not join us for a webinar? We finished our ten-post series on the ten reasons you’ll love Windows Server 2016. Two of our favorite speakers – Jeff Woolsey, principal program manager, and Matt McSpirit, senior technical evangelist – will walk through the exciting new features coming in Windows Server 2016.

Join us Thursday, May 5 at 10:00 am PST: Ten reasons you’ll love Windows Server 2016

Register now!

Hear more about these new features:

  • New built-in security features, such as Shielded VMs, to help customers add layers of protection against cyber attacks
  • New software-defined compute, storage and network virtualization to run highly cost-efficient datacenters
  • Support for application innovation with new technologies such as Windows and Hyper-V containers and Nano Server

We hope you can join us! In the meantime, check out the blog post series below.

Skype for Business Online and RMS sharing apps – now available without device enrollment.

Providing end users with a great experience in their managed apps is as important as having the right portfolio of apps. If your people don’t have a good experience with an app, they won’t use it – it’s as simple as that. That’s why Intune managed apps are designed to provide end users with the consumer-grade experience they expect, while giving you the control you need to ensure your corporate data is protected across your ecosystem of mobile devices.

Today we’re excited to announce that both the Skype for Business and RMS sharing apps now support MAM without device enrollment scenarios. MAM without device enrollment is a great option for BYOD scenarios where you want to keep corporate data safe without managing a user’s device – a win for both productivity and protection.

  • Skype for Business Online (for iOS and Android) – now available without device enrollment. We’ve also added conditional access capabilities that allow you more control over app access and data. Conditional access and MAM without device enrollment scenarios require modern authentication to be enabled. For more on this, check out this post from the Skype for Business team. See Skype for Business in action in this latest episode of EndPoint Zone with Brad Anderson.
  • RMS sharing (for Android) – now available without device enrollment. Additionally, we’ve updated the RMS sharing app with functionality that allows you to apply policy so that end users can view images, AV, and PDF files more securely; this single app can now take the place of the Microsoft Intune AV Player, Image Viewer, and PDF Viewer for Android. For more on this update, check out this post from the RMS team.

Visit the What’s New in Microsoft Intune page for more details on these and other recent developments. 

Additional resources:

Building accessible websites just got a lot easier

When building websites, it is important that they are accessible to everyone who needs to use them. Implementing web accessibility features greatly helps to achieve that. Here’s what the W3C has to say about that:

Web accessibility means that people with disabilities can use the Web. More specifically, Web accessibility means that people with disabilities can perceive, understand, navigate, and interact with the Web, and that they can contribute to the Web. Web accessibility also benefits others, including older people with changing abilities due to aging.

The W3C continues to explain why it is important:

The Web is an increasingly important resource in many aspects of life: education, employment, government, commerce, health care, recreation, and more. It is essential that the Web be accessible in order to provide equal access and equal opportunity to people with disabilities. An accessible Web can also help people with disabilities more actively participate in society.

It’s also good for business. For instance, if you run a web shop then you want to make sure that the largest number of people are able to purchase your products.

However, building websites that conform to accessibility standards such as WCAG 2.0 (by the W3C) or Section 508 (for US government compliance) has traditionally been rather cumbersome and required use of 3rd party web services such as the Wave Accessibility Checker. It is a disconnected experience that doesn’t provide a natural workflow for web developers. Instead, what is needed is a way to make accessibility features natural and easy to implement for web developers as part of their regular development process.

Enter Web Accessibility Checker!

This Visual Studio extension utilizes Browser Link for ASP.NET to run standards based accessibility checks on the live running website. There is no project specific setup required to make this work. Simply install the extension and run the website in any browser.

Under the hood, the extension uses the axe-core JavaScript library to perform the accessibility checking. The supported standards it checks for are:

  • WCAG Level A
  • WCAG Level AA
  • Section 508
  • Other best practices

When the extension finds any accessibility errors, it displays them directly inside the Error List window.

accessibility error list

The accessibility check can either run automatically when ASP.NET projects are run in the browser (F5 or View In Browser) or when manually invoked (Ctrl+Shift+M).

accessibility settings

When manually invoked, the extension will run the accessibility check on all browser instances that currently have the web project loaded. So if different pages are loaded in different browsers, the Error List will be populated with the combined set of errors.

If you haven’t already, go download Web Accessibility Checker and try it out. It works for any ASP.NET web project where Browser Link is able to connect.

Database Scoped Configuration

This release now supports a new database-level object holding optional configuration values to control the performance and behavior of the application at the database level. This support is available in both SQL Server 2016 and SQL Database V12 using the new ALTER DATABASE SCOPED CONFIGURATION (Transact-SQL) statement. This statement modifies the default SQL Server 2016 Database Engine behavior for a particular database. Several benefits are expected from using this feature:

  • Allows you to set different configuration options at the database level
    • Previously, these options could be set only at the server level or at the individual query level using query hints
    • This is especially important for Azure SQL DB, where certain options could not be configured at the database level.
  • Provides better isolation when setting different options in cases where multiple databases/applications run on a single instance.
  • Enables lower-level permissions that can easily be granted to individual database users or groups to set some configuration options
  • Allows the database configuration options to be set differently for the primary and the secondary database, which might be necessary for the different types of workloads they serve

The following options can be configured:

  • Set the MAXDOP parameter to an arbitrary value (0,1,2, …) to control the maximum degree of parallelism for the queries in the database. It is recommended to switch to db-scoped configuration to set the MAXDOP instead of using sp_configure at the server level, especially for Azure SQL DB where sp_configure is not available. This value may differ between the primary and the secondary database. For example, if the primary database is executing an OLTP workload, the MAXDOP can be set to 1, while for the secondary database executing reports the MAXDOP can be set to 0 (defined by the system). For more information on MAXDOP see Configure the max degree of parallelism Server Configuration Option
  • Set the LEGACY_CARDINALITY_ESTIMATION option to enable the legacy query optimizer Cardinality Estimation (CE) model (applicable to SQL Server 2012 and earlier), regardless of the database compatibility level setting. This is equivalent to Trace Flag 9481. This option allows to leverage all new functionality provided with compatibility level 130, but still uses the legacy CE model (version 70) in case the latest CE model impacts the query performance. For more information on CE see Cardinality Estimation
  • Enable or disable PARAMETER_SNIFFING at the database level. Disable this option to instruct the query optimizer to use statistical data instead of the initial values for all local variables and parameters when the query is compiled and optimized. This is equivalent to Trace Flag 4136 or the OPTIMIZE FOR UNKNOWN query hint
  • Enable or disable QUERY_OPTIMIZER_HOTFIXES at the database level, to take advantage of the latest query optimizer hotfixes, regardless of the compatibility level of the database. This is equivalent to Trace Flag 4199
  • CLEAR PROCEDURE_CACHE, which allows you to clear the procedure cache at the database level without impacting other databases and without requiring sysadmin permission. This command can be executed using the ALTER ANY DATABASE SCOPED CONFIGURATION permission on the database, and the operation can be executed on the primary and/or the secondary

For the T-SQL syntax and other details, see ALTER DATABASE SCOPED CONFIGURATION (Transact-SQL).
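
As a quick, hedged illustration (the linked documentation is the authoritative reference for the full syntax), the options above map to statements along these lines, run in the context of the target database:

ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 2;
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;
ALTER DATABASE SCOPED CONFIGURATION SET PARAMETER_SNIFFING = OFF;
ALTER DATABASE SCOPED CONFIGURATION SET QUERY_OPTIMIZER_HOTFIXES = ON;

-- Configure the secondary differently, for example let the system choose MAXDOP for reporting workloads
ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET MAXDOP = 0;

-- Clear the procedure cache for this database only
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;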

 

Visual Studio TACO Update 9

Update 9 of the Visual Studio Tools for Apache Cordova (TACO) is ready for you! You’ll soon see a notification in Visual Studio to install the update or you can download and install it directly, now. If this is the first you’ve heard of TACO, take a moment and learn more about our mobile developer tools for web developers.

This release includes all of the goodness of TACO Update 8, which focused on giving you more control of your dev environment and providing better guidance for plugins. Update 9 focuses on two main themes:

  • Saving you trips to the command line
  • Getting started faster, by providing more prescriptive guidance

In this post, I’ll highlight the main changes included in Visual Studio TACO Update 9. You can read about the full release in the Visual Studio TACO Update 9 release notes.

Saving you a trip to the command line

When building apps with Apache Cordova, you’re going to use plugins to access native device capabilities (e.g. the Camera). Visual Studio TACO has always had tools to help you manage these plugins. It provides several ways to install common and custom plugins and now we’ve added a new option that lets you simply add a plugin by using its id.

Typically, you may want to install a plugin by id when you want to use a custom plugin from the Cordova Plugin Repository. You could do this with the Cordova command line interface, by opening a command prompt and typing cordova plugin add followed by the plugin id (for example, cordova plugin add cordova-plugin-camera). We wanted to save you that trip over to the command line, so that you can stay focused on your code!

Now, just go to the Custom tab of the configuration designer, enter your plugin ID, and go.

Getting started faster

When first creating a project using the Cordova blank template, we’ve redesigned the start page to make it easier for you to get going with your first application. The layout and content were rearranged so that you can quickly read over the important steps for getting started, and all of our links on this page were updated to point at the latest and best information.

The VS TACO getting started screen, shown when you first create an app using the Blank template.

Feedback and Thanks

Along with the changes mentioned here, we also fixed many bugs to improve the stability and performance of Visual Studio TACO. You can read about the full release in the Update 9 release notes.

It’s hard to believe this is our ninth release since Visual Studio 2015 RTM – We couldn’t do it without your support and feedback! Thank you for all of your direct emails, discussions on Stack Overflow, and feedback shared on our documentation site.

If you haven’t done it by now, go download update 9 and let us know what you think!

Ricardo Minguez (aka Rido)

Senior Program Manager. Visual Studio Tools for Apache Cordova

Rido is a program manager for the Visual Studio tools for Apache Cordova team. He has been working with web technologies since the beginning of the browsers, and now he loves to reuse the same skills to build mobile apps. You can reach him in twitter at @ridomin.


Top Support Solutions for Windows Server 2012 R2

This is a collection of the top Microsoft Support solutions for the most common issues experienced when using Windows Server 2012 R2 (updated quarterly). Note that some content that applies to earlier versions of Windows Server is listed, because it can be also helpful with Windows Server 2012 R2 issues. 1. Solutions related to Active...

Getting Started with Roaming App Data

Users today are mobile, transitioning from one device to the next throughout the day. Increasingly, these same users expect (or even demand) to take their data with them. Fortunately for us, roaming app data makes this a reality.

Today, we kick off a two-post series that explores how to use roaming app data to give your users a truly mobile experience. In today’s blog post, we’ll explain what roaming data is, how it works, and some ways you can use it in your own app. Finally, we’ll also cover versioning and conflict resolution.

Roaming app data

Roaming app data is the way in which all Universal Windows Platform (UWP) apps keep data in sync across multiple devices. It allows you, the developer, to create apps that help users carry data such as user profiles or documents from one device to the next. Generally speaking, roaming app data breaks down into three main categories:

  • App data is the data your application requires to function. This data needs to be in sync across all devices.
  • User data is any data that the user initiates via the application that will synchronize across all devices. For example, if you use your Microsoft account to store Microsoft Office files in the cloud from your desktop and then open Microsoft Office on your laptop, you will be presented with a list of your recent files.
  • App settings are any configuration settings for an app that will stay in sync across all devices. Think of your profile settings in Visual Studio 2015.

1_RoamingAppData

Before we get started digging into the API, it’s important to understand that you have two main points of entry into your roaming app data:

RoamingSettings

The built-in ApplicationData.RoamingSettings property, which is of the type ApplicationDataContainer, stores its data as a dictionary key/value pair with a string for the key. The key of each setting can be up to 255 characters long and the value for each setting can be no more than 8K bytes in size. Bear in mind, however, that you can only store the following data types:

  • UInt8, Int16, UInt16, Int32, UInt32, Int64, UInt64, Single, Double
  • Boolean
  • Char16, String
  • DateTime,TimeSpan
  • GUID, Point, Size,Rect
  • ApplicationDataCompositeValue

Although you should choose the data type that works best for you and your specific app, ApplicationDataCompositeValue in the above list is the recommended choice. Unlike the other types, which have an 8k-byte limit, this type allows you to store up to 64K bytes. Critically, it also lets you store a subset of settings or categorize your settings.

Our first step in implementing the API is to create a variable referencing RoamingSettings to perform all of our data operations:



ApplicationDataContainer roamingSettings = ApplicationData.Current.RoamingSettings;


With the roamingSettings variable created, it is relatively straightforward to then write a value to it, like so:



roamingSettings.Values["lastViewed"] = DateTime.Now;


It is just as straightforward to read this data from the setting:



DateTime lastViewed = (DateTime)roamingSettings.Values["lastViewed"];


Keeping the same roamingSettings variable we used above, and assuming you did choose ApplicationDataCompositeValue for your data type, let’s look at how easy it becomes to then group and organize some theoretical roaming app data:



ApplicationDataCompositeValue bookComposite = new ApplicationDataCompositeValue();
bookComposite["lastViewed"] = DateTime.Now;
bookComposite["currentPage"] = 32;
roamingSettings.Values["bookCompositeSetting"] = bookComposite;


We can read the data out from these composite values just like we have done before:



ApplicationDataCompositeValue bookComposite = (ApplicationDataCompositeValue)roamingSettings.Values["bookCompositeSetting"];
DateTime lastViewed = (DateTime)bookComposite["lastViewed"];
int currentPage = (int)bookComposite["currentPage"];


The beauty of this simple API is that it empowers the developer to build rich user experiences across devices with just a little bit of boilerplate code.

Tip: Be advised, there is no direct way for you to trigger a sync from one device to another. This is handled by the OS the device is running.

RoamingFolder

We’ve looked at RoamingSettings and how to store your data using it, but what about non-structured data, like files? And what if the data type limitations of RoamingSettings are just too restrictive for what you want to accomplish? This is where RoamingFolder (of type StorageFolder) comes in handy.

Let’s use a real-world example of a To-Do app to show how this works. Assume that for the purposes of this app, it is critical to sync all of your users’ to-do items across all of their devices.

First, you need to provide a filename for storing the data—use “todo.txt”:



StorageFolder roamingFolder = ApplicationData.Current.RoamingFolder;
var filename = "todo.txt";


Then, write a Todo class that allows you to keep all information for a given to-do item organized together. It has a Task and an IsCompleted property:



class Todo
{
    public string Task { get; set; }
    public bool IsCompleted { get; set; }
}


Next, you will want to create a function to write out to-do items—call it WriteTodos(). Build the to-do items and serialize them as a JSON string using the Newtonsoft.Json library. Create the file asynchronously, overwriting it if it already exists. Finally, write out the text—also asynchronously:



async void WriteTodos()
{
    var todos = new List<Todo>();
    todos.Add(new Todo() { Task = "Buy groceries", IsCompleted = false });
    todos.Add(new Todo() { Task = "Finish homework", IsCompleted = false });
    string json = JsonConvert.SerializeObject(todos);
    StorageFile file = await roamingFolder.CreateFileAsync(filename,
                CreationCollisionOption.ReplaceExisting);
    await FileIO.WriteTextAsync(file, json);
}

Let’s see what it takes to read the file you created by creating a ReadTodos() function. You retrieve your StorageFile asynchronously and then deserialize the string as a List<Todo> object. And that’s all there is to it.



async void ReadTodos()
{
    try
    {
        StorageFile file = await roamingFolder.GetFileAsync(filename);
        string json = await FileIO.ReadTextAsync(file);
        List<Todo> todos = JsonConvert.DeserializeObject<List<Todo>>(json);
        // Perform any operation on todos variable here.
    }
    catch (Exception ex)
    {
        // Handle exception…
    }
}


Keep in mind that even if you place some files in the RoamingFolder, they may not roam if they are…

  • file types that behave like folders (e.g. files with .zip and .cab extensions)
  • files that have names with leading spaces
  • file paths that are longer than 256 characters
  • files that are empty folders
  • files with open handles

Constraints

Although we’ve covered how to use roaming app data with RoamingSettings and RoamingFolder, there are still a couple of constraints to take into account. First, it is important to remember that in order for roaming app data to work, users need to have a Microsoft account and use this same account across all devices.

Next, Microsoft account users receive a specific quota for storage, accessed through the ApplicationData.RoamingStorageQuota property (currently, this is 100KB). Once a user has reached their storage limit for a given app, all roaming will cease to work until data is removed from roaming app data.  A good rule of thumb to help your users avoid this experience is to focus on user preferences, links, and small data files for roaming data and lean on local and temporary data for everything else.
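
If you want to check the quota from code, here is a small sketch (the property reports the quota in kilobytes):

using Windows.Storage;

// RoamingStorageQuota is expressed in kilobytes (currently 100 KB).
ulong quotaInKB = ApplicationData.Current.RoamingStorageQuota;
System.Diagnostics.Debug.WriteLine("Roaming quota: " + quotaInKB + " KB");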

In case you need to remove settings and files, you’ll want to call the Remove function and pass in the key:



roamingSettings.Values.Remove("lastViewed");


You can even go a step further if you want to remove your composite value container completely:



roamingSettings.DeleteContainer("bookCompositeSetting");


You may want to provide custom data features that go beyond the constraints of roaming app data. In this case, roaming data associated with a Microsoft account may not be the best implementation. Instead, you may want to consider Microsoft Azure or another service to provide the same roaming user experience.

Syncing and Versioning your app data

Now that you’ve implemented the API and understand its constraints, let’s explore two critical features of roaming app data in greater depth—syncing and versioning:

  • Syncing is the means by which changes on one device are transmitted to the cloud for use by another device.
  • Versioning provides you, the developer, with the ability to change the structure of the data that is stored on each device. In turn, this allows you to incrementally roll-out versions of the data structure so that the end-user has a reduced chance of a poor experience.

Syncing conflicts

In talking about data syncing, it’s important to also talk about data conflicts. Consider the following example:

A user opens your task application on his desktop and starts creating a list. At the same time, the user logs onto another PC with the same account, opens your application and continues to work on the list. When the user goes back to the original PC, what is the expected behavior?

In this example, as long as the underlying roaming app data files are released and not still open, the sync will happen when the changes occur. The conflict policy for syncing is simple: the last writer wins.

It is also possible to know when a sync occurs at runtime on a given device. Simply wire up the ApplicationData.DataChanged event and you will be notified when a sync happens:



private void HandleEvents()
{
    ApplicationData.Current.DataChanged += Current_DataChanged;
}
void Current_DataChanged(ApplicationData sender, object args)
{
    // Update app with new settings
}


Versioning

The nice thing about app data versioning is that as your application matures, your app data structure may change as well. However, always bear the following in mind:

  • Your user could be several versions back on one device and current on another device
  • App data versions apply to all state information managed via the ApplicationData class
  • The app data version has no relationship to the application version; many application versions can, and likely will, use the same app data version
  • App data version numbering always starts at zero (0)

It is recommended that you use increasing version numbers as you roll out new releases. To move to a new version, you call ApplicationData.SetVersionAsync, specifying the target version number and a callback that handles any migrations from an older version to the current one. In the callback, you evaluate the local version and apply whatever migration logic is necessary, an approach very similar to Entity Framework Code First migrations. The following snippet handles multiple version changes in case the user has not used the application on a given device for a while: it loops through the entire upgrade cycle one version at a time, which keeps the number of upgrade permutations to a minimum.



ApplicationData appData = ApplicationData.Current;

// nice friendly reminder of when you last updated
// Version 2 – 2016.02.29
const uint currentAppDataVersion = 2;

void UpdateAppDataVersion(SetVersionRequest request)
{
  SetVersionDeferral deferral = request.GetDeferral();
  uint version = appData.Version;

  // Walk forward one version at a time until the app data is current.
  while (version < currentAppDataVersion)
  {
    switch (version)
    {
      case 0:
        // Migration from version 0 to version 1 goes here.
        break;
      case 1:
        // Version 1 to 2: convert the stored location string to a geolocation.
        // convertStringLocationToGeo is a helper defined elsewhere in the app.
        appData.LocalSettings.Values["locationGeo"] =
          convertStringLocationToGeo((string)appData.LocalSettings.Values["location"]);
        break;
      case 2:
        // up to date, no need to do anything
        break;
      default:
        throw new Exception("Unexpected ApplicationData Version: " + version);
    }
    version++;
  }

  deferral.Complete();
}

async void SetVersion1_Click(Object sender, RoutedEventArgs e)
{
  await appData.SetVersionAsync(currentAppDataVersion, 
    new ApplicationDataSetVersionHandler(UpdateAppDataVersion));
}


Testing

Developers can lock their device to trigger a synchronization of roaming app data. If the app data does not seem to sync within a reasonable timeframe, check the following items and make sure that:

  • Your roaming app data does not exceed the RoamingStorageQuota
  • Your files are closed and released properly
  • There are at least two devices running the same version of the app
  • Roaming has not been disabled via device policy or manually turned off

Also, be aware that roaming app data syncing doesn’t happen immediately. There will be some degree of latency between the change you make on one device and when it shows up on another.
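If you want to exercise your DataChanged handler without waiting for an actual cloud round trip, you can raise the event yourself with ApplicationData.SignalDataChanged. A minimal sketch, assuming the handler shown earlier is already wired up:

// Simulate an incoming sync locally so the Current_DataChanged handler runs.
// This only raises the DataChanged event on this device; it does not push data to the cloud.
void SimulateRoamingSync()
{
    ApplicationData.Current.SignalDataChanged();
}

This is handy for checking how your UI reacts to refreshed settings before you involve a second device.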

Wrapping Up

In this post, we have looked at how to implement roaming app data and explored how it operates. Stay tuned for our next post, which will take a closer look at the ins and outs of synchronization, including syncing different data types, the initial data load, handling offline scenarios, and resolving conflicts.

In the meantime, feel free to download the samples and start playing!

Additional Resources

Top Support Solutions for Windows 10

These are the top Microsoft Support solutions for the most common issues experienced when using Windows 10. Solutions related to installing or upgrading to Windows 10 with the free upgrade offer:

  • Windows 10 FAQ
  • Help with upgrading to Windows 10
  • Compatibility Report for Windows 10: FAQ
  • How to manage Windows 10 notification and upgrade options...

A tour through tool improvements in SQL Server 2016


This post was authored by Ayo Olubeko, Program Manager, Data Developer Group.

Two practices drive successful modern applications today – a fast time to market, and a relentless focus on listening to customers and rapidly iterating on their feedback. This has driven numerous improvements in software development and management practices. In this post, I will chronicle how we’ve embraced these principles to supercharge management and development experiences using SQL Server tooling.

SQL Server 2016 delivers many SQL tools enhancements that converge on the same goal of increasing day-to-day productivity, while developing and managing SQL servers and databases on any platform. This post provides an overview of the improvements and I’ll also drop a few hints about what’s on the way. With SQL Server 2016:

  • It’s easier to access popular tools, such as SQL Server Management Studio (SSMS) and SQL Server Data Tools (SSDT).
  • Monthly releases of new SQL tools make it easy to stay current with new features and fixes.
  • Day-to-day development is being simplified, starting with a new connection experience.
  • New SQL Server 2016 features have a fully guided manageability experience.
  • Automated build and deployment of SQL Server databases can improve your time to market and quality processes.

Finding and using the most popular SQL tools is easier than ever

We received insightful feedback from customers about how difficult it was to find and install tooling for SQL Server, so we’ve taken a few steps to ensure the experience in SQL Server 2016 is as easy as possible.

Free and simple to find and install SQL tools


The SQL Server tools download page is the unified place to find and install all SQL Server-related tools. The latest version of SQL tools doesn’t just support SQL Server 2016, but it also supports all earlier versions of SQL Server, so there is no need to install SQL tools per SQL Server version. In addition, you don’t need a SQL Server license to install and use these SQL tools.

SSMS has a new one-click installer that makes it easy to install, whether you’re on a server in your data center or on your laptop at home. Additionally, the installer supports administrative installs for environments not connected to the Internet.

All your SQL tools for Visual Studio in one installer, for whichever version of SQL Server you use

SQL Server Data Tools (SSDT) is the name for all your SQL tools installed into Visual Studio. With just one installation of SSDT in Visual Studio 2015, developers can build projects for SQL Server databases, Analysis Services, Reporting Services and Integration Services, alongside any other application in Visual Studio 2015, targeting SQL Server 2016 or older versions as needed.

SSDT replaces and unifies older tools such as BIDS, SSDT-BI and the database-only SSDT, eliminating the confusion about which version of Visual Studio to use. From Visual Studio 2015 onward, you have a simple way to install all of the SQL tools you use every day.

Easy to stay current – new features and fixes every month

One of the goals for SQL tools is to provide world-class support for your SQL estate wherever it may be. That estate might consist of SQL Servers running on-premises, in the cloud, or some hybrid of both. We support it all. To enable world-class coverage of this diverse estate, we have adopted a monthly release cadence for our SQL tools. This faster release cycle brings you additional value and improvements – whether it’s enabling functionality that takes advantage of new Microsoft Azure cloud features, issuing a bug fix to address particularly painful errors, or creating a new wizard or dialog to streamline management of your SQL Server.

These stand-alone SSMS releases include an update checker that informs you of newer SSMS releases when they become available. SSDT update notification continues to be fully integrated with Visual Studio’s notification system. You can keep up to date and learn more about the SSMS and SSDT releases at the SQL Server Release Services blog.

Day-to-day development is being simplified, starting with a new connection experience

Discover and seamlessly connect to your databases anywhere

No more need to memorize server and database names. With just a few clicks, the new connection experience in SQL Server Data Tools helps you automatically discover and connect to all your database assets using favorites, recent history, or by simply browsing SQL Servers and databases on your local PC, on your network and in Azure. You can also pin databases you frequently connect to so they’re always there when you need them. In addition, the new connection experience intelligently detects the type of connection you need, automatically configures default properties with sensible values and guides you through firewall settings for SQL Database and SQL Data Warehouse.

Streamline connections to your Azure SQL databases in SSMS

The new firewall rule dialog in SSMS allows you to create an Azure database firewall rule within the context of connecting to your database. You don’t have to log in to the Azure portal and create a firewall rule before connecting to your Azure SQL Database with SSMS. The firewall rule dialog auto-fills the IP address of your client machine and optionally lets you whitelist an IP range to allow other connections to the database.


Fully guided management experiences

SQL Server 2016 is packed with advanced new features, including Always Encrypted, Stretch Database, enhancements to in-memory table optimization and new Basic Availability Groups for AlwaysOn, just to name a few. SSMS delivers intelligent, easy-to-follow wizard interfaces that help you enable these new features and make your SQL Server and databases more secure, highly available and faster in just a few minutes. The learning curve is gentle, even though the technology under the hood that powers your business is powerful and complex.


Adopting DevOps processes with automated build and deployment of SQL Server databases

Features such as the Data-tier Application Framework (DACFx) and SSDT have helped make SQL Server the market leader in model-based database lifecycle management. DACFx and SSDT offer a comprehensive development experience by supporting all database objects in SQL Server 2016, so developers can define a database declaratively using a database project.

Using Visual Studio 2015, version control and Team Foundation Server 2015 or Visual Studio Team Services in the cloud, developers can automate database lifecycle management and truly adopt a DevOps model for rapid application and database development and deployment.

What’s coming next in your SQL tools

In the months to come, you can look forward to continued enhancements in both SSMS and SSDT that focus on increasing the ease with which you develop and manage data in any SQL platform.

To this end, SSMS will feature performance enhancements and streamlined management and configuration experiences that build on the new capabilities provided by the Visual Studio 2015 shell. Similarly, SSDT will deliver performance improvements and feature support to help database developers handle schema changes more efficiently. Learn more about tooling improvements for SQL Server 2016 in the video below.

Improvements like these can’t happen in a vacuum. Your voice and input are absolutely essential to building the next generation of SQL tools. And the monthly release cycle for our SQL tools allows us to respond faster to the issues you bring to our attention. Please don’t forget to vote on Connect bugs or open suggestions for features you would like to see built.

See the other posts in the SQL Server 2016 blogging series.

Try SQL Server 2016 RC

The week in .NET – 5/3/2016


To read last week’s post, see The week in .NET – 4/27/2016.

Evolve conference

Xamarin Evolve, the largest cross-platform mobile event in the world, happened last week. The .NET team was there to celebrate all things Xamarin with our good friends, and now colleagues. All the sessions can be watched on YouTube, with an incredible cast of speakers that includes Steve Wozniak and Grant Imahara.

On.NET

Last week on the show, we spoke with Benjamin Fistein and Jakub Míšek about Peachpie, a PHP compiler for .NET built on Roslyn.

Package of the week: Flurl

Flurl is a fun library that makes it super-easy to query remote HTTP resources, for example a remote API protected with OAuth.
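Here’s a minimal sketch of what such a call can look like with Flurl.Http’s fluent extensions (the URL, query parameter, token and Person type are placeholders, not from the original post):

// Hypothetical example: fetch a person from a protected API using Flurl.Http.
// Assumes "using Flurl;" and "using Flurl.Http;" inside an async method.
var person = await "https://api.example.com"
    .AppendPathSegment("people")
    .SetQueryParams(new { id = 42 })
    .WithOAuthBearerToken("my_oauth_token")
    .GetJsonAsync<Person>();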


Xamarin app of the week: Sqor Sports

Sqor Sports is a social network where athletes can engage directly with their fans and monetize their own brands. The Sqor team is able to innovate more, release faster, and provide a white glove experience to their celebrity athletes thanks to Xamarin.


User group meeting of the week: Seattle – Xamarin Evolve 2016 Redux!

Tonight, Tuesday May 3 at 6:00PM, at City University of Seattle, Rich Lander and Frank Krueger will help you catch up on all the amazing stuff that was shown last week in Orlando. The meeting will be hosted by the Seattle Mobile .NET Developers group.

.NET

ASP.NET

F#

Check out F# Weekly for more great content from the F# community.

Games

Game of the Week: JumpJet Rex

JumpJet Rex is an action/platformer that incorporates elements of racing. Players are immediately dropped into a tutorial level that teaches them very quickly how to use their rocket boots to fly, jump, dash and attack enemies while avoiding deadly traps. Upon completing the level, players have the opportunity to try to beat their best time by competing against a ghost version of themselves running the level. JumpJet Rex has several game modes including story, multiplayer arena, co-op and speed run.

JumpJet Rex was created by Treefortress Games using Unity and C#. It is available on Mac and Windows via Steam. More information can be found on their Made With Unity page.


And this is it for this week!

Contribute to the week in .NET

As always, this weekly post couldn’t exist without community contributions, and I’d like to thank all those who sent links and tips. You can participate too. Did you write a great blog post, or just read one? Do you want everyone to know about an amazing new contribution or a useful library? Did you make or play a great game built on .NET? We’d love to hear from you, and feature your contributions on future posts:

This week’s post (and future posts) also contains news I first read on The ASP.NET Community Standup, on F# weekly, on ASP.NET Weekly, and on Chris Alcock’s The Morning Brew.
