
How to perform a manual initial replica on DPM Modern Backup Storage


DPM can back up a variety of workloads such as SQL Server, SharePoint, Exchange, Hyper-V VMs and file servers, among others. While doing so, DPM creates an initial replica of the data and then takes subsequent backups incrementally, optimizing storage and network resources. At times, however, due to the large size of the data source, the initial replication over the network can consume a lot of time and bandwidth. In such cases, it is suggested that the initial replica be created manually to save on both.

In this blog, our guest Heyko Oelrichs explains how the initial replica can be created manually with System Center 2016 DPM and Modern Backup Storage.

 

With DPM 2016 we’ve changed the whole architecture of how DPM stores its backup data. Our new storage model, called Modern Backup Storage (MBS), utilizes VHDX files to store its backup data and removes the use of physical disks, overallocation and the need for colocation of workloads. (For more information about MBS, check out Introducing DPM 2016 Modern Backup Storage.)

This redesign changed the procedure for manually creating initial replicas, also called pre-seeding or pre-staging, with DPM 2016 and MBS. In DPM 2016 we’re no longer creating “real” replica volumes that are always accessible.

If you choose to create the initial replica manually, e.g. when your protected workload is only reachable over a slow WAN link, you will need to perform some additional steps to mount the replica VHDX.

Here we’ll show you how to do this step-by-step:

  • When adding a new datasource to protection, select “Manually” in the “Choose Replica Creation Method” dialog in the create/modify Protection Group wizard.

heyko_1

  • This results in a new protected datasource with the status: Manual replica creation pending

heyko_2

  • To figure out the replica path, select the datasource and click “Click to view details” next to “Replica path”.

heyko_3

  • This opens a new dialog window “Details of Replica Path” where you’ll find the Source and Destination Path of your Datasource.

heyko_4

  • Copy the path and save it in Notepad. It will look like the following:

E:\ on DPM2016TP5-01.contoso.local C:\Program Files\Microsoft System Center 2016\DPM\DPM\Volumes\Replica\31d8e7d7-8aff-4d54-9a45-a2425986e24c\d6b82768-738a-4f4e-b878-bc34afe189ea\Full\E-Vol\

  • The first part of the copied string is the source. The second part, separated by a space, is the destination. The destination breaks down as follows:

DPM install folder      C:\Program Files\[..]\DPM\Volumes\Replica\

Physical replica ID     31d8e7d7-8aff-4d54-9a45-a2425986e24c\

Datasource ID           d6b82768-738a-4f4e-b878-bc34afe189ea\

Path                    Full\E-Vol\

At this point, the replica is not mounted. If you look at the mount point from a command prompt or in Explorer, nothing is shown; it’s empty.

To mount this replica, you’ll need to run some PowerShell commands to manually mount the replica VHDX before you’re able to copy data.

First of all, select your protection group:

$pg = Get-DPMProtectionGroup | ? Name -eq 'DPM DBs'

heyko_5

The next step is selecting the correct Datasource in your Protection Group:

$ds = Get-DPMDatasource -ProtectionGroup $pg | ? Name -eq 'ReportServer'

heyko_6

Now you are able to mount the replica volume:

Start-DPMManualReplicaCreation -Datasource $ds

heyko_7

 

The last command mounts the replica VHDX file so that the initial replica data can be copied to it.

Note: leave your session open so you can re-use $ds later to dismount the VHDX file.

Now you can see the mounted volume under the destination path noted earlier:

heyko_8

The replica volumes contain the expected folder structure:

heyko_9

Now you can start to copy your workload data to the mounted replica volume. In our example these are SQL Server database and log files. You have to take the database offline to be able to copy these files.

You have to keep the original folder structure. What this means is shown in the following example:

heyko_10

Our Database Files are stored in

E:\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\ReportServer.mdf

That means we have to copy the data to the following destination path:

C:\Program Files\Microsoft System Center 2016\DPM\DPM\Volumes\Replica\

31d8e7d7-8aff-4d54-9a45-a2425986e24c\d6b82768-738a-4f4e-b878-bc34afe189ea\

Full\E-Vol\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\ReportServer.mdf

When you’re done, dismount the volume by using the following PowerShell command:

Stop-DPMManualReplicaCreation -Datasource $ds

This dismounts the Replica Volume VHDX File:

heyko_11

Replica volume for datasource ReportServer dismounted successfully. Run a consistency check job to start scheduled backups.

The final step is to run a consistency check:

Start-DPMDatasourceConsistencyCheck -Datasource $ds

Normal backups will be taken once the replica is healthy.

heyko_12
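For reference, here is the whole workflow again as a single hedged PowerShell sketch. The protection group name, datasource name and replica path are the example values used above and will differ in your environment:

# Select the protection group and datasource (example names from this walkthrough)
$pg = Get-DPMProtectionGroup | Where-Object Name -eq 'DPM DBs'
$ds = Get-DPMDatasource -ProtectionGroup $pg | Where-Object Name -eq 'ReportServer'

# Mount the replica VHDX so data can be copied into it
Start-DPMManualReplicaCreation -Datasource $ds

# Copy the workload data into the mounted replica path, preserving the original folder structure,
# e.g. with robocopy from the protected server or from transported media:
#   robocopy <source> "<replica path>\Full\E-Vol\<original folder structure>" /E

# Dismount the replica and run a consistency check so scheduled backups can resume
Stop-DPMManualReplicaCreation -Datasource $ds
Start-DPMDatasourceConsistencyCheck -Datasource $ds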

Hope this blog helps you perform manual replica creation with ease.

 


On the Road Again with the Tech Industry’s Best and Brightest, v4


The best part of my job is meeting with customers, media, and analysts from all over the world and any chance I have to do this and get out of the office for lunch makes it even better.

After 3 busy seasons of Lunch Break, I thought I’d seen and heard just about everything, but the Redmond Police Department had a couple of questions of their own.

 

I’m really excited to kick off season 4; new episodes every Tuesday!

Between YouTube, Facebook, and Twitter, these videos have been watched hundreds of thousands of times — and having these conversations is a lot of fun. The rental car is pretty fun, too.

You can also subscribe to these videos here, or watch past episodes at aka.ms/LunchBreak.

Demo Tuesday // Windows Server: Just Enough Administration


Welcome to our Demo Tuesday Series. Each week we will be highlighting a new product feature from the Hybrid Cloud Platform.

Don’t leave all of your keys on one ring

One of the new Privileged Identity controls in Windows Server 2016 is Just Enough Administration (JEA). It uses PowerShell to provide role-based administration. That way, IT personnel have the keys to only what they need to do their jobs, without being given full admin access. This just-enough approach limits the potential damage that can be done by malicious insiders or by criminals who have hacked a trusted admin’s credentials. Take a look:

In other words, you no longer have to give a frontline support tech the power to take control of your sensitive servers just to restart a service. Nor do you need to allow a helpdesk engineer to have full control of an executive’s PC to run remote diagnostics. There’s no reason to take those risks.

How Just Enough Administration limits risk

JEA applies a layer of least privilege:

  • Users can perform only those tasks for which they are authorized as part of their role by using Windows PowerShell constrained runspaces.
  • Users can perform required tasks without being given administrator rights on the server.
  • The tasks that users are allowed to perform, and their server access, are defined and managed from a central configuration server by using Windows PowerShell Desired State Configuration (DSC).
  • Constant logging details who has accessed the environment and what’s been changed.

And as for that Tier 1 frontline support tech who needs to help someone troubleshoot a server or desktop? They can use many of PowerShell’s diagnostic cmdlets to troubleshoot while having limited ability to add, remove, or change objects.
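To make that concrete, here is a minimal hedged sketch of what such a role could look like using the standard JEA cmdlets. The role name, Active Directory group, service name and visible cmdlets are hypothetical examples, not part of the demo above:

# Role capability: allow restarting one named service plus a couple of read-only diagnostic cmdlets
$rcFolder = "$env:ProgramFiles\WindowsPowerShell\Modules\FrontlineTech\RoleCapabilities"
New-Item -ItemType Directory -Path $rcFolder -Force | Out-Null
New-PSRoleCapabilityFile -Path "$rcFolder\FrontlineTech.psrc" -VisibleCmdlets @(
    @{ Name = 'Restart-Service'; Parameters = @{ Name = 'Name'; ValidateSet = 'Spooler' } }
    'Get-Service'
    'Get-EventLog'
)

# Constrained endpoint: map an AD group to that role and run it under a temporary virtual account
New-PSSessionConfigurationFile -Path "$env:TEMP\FrontlineTech.pssc" -SessionType RestrictedRemoteServer `
    -RunAsVirtualAccount -RoleDefinitions @{ 'CONTOSO\FrontlineTechs' = @{ RoleCapabilities = 'FrontlineTech' } }
Register-PSSessionConfiguration -Name 'FrontlineTech' -Path "$env:TEMP\FrontlineTech.pssc"

# A support tech then connects to the constrained endpoint instead of opening a full admin session:
#   Enter-PSSession -ComputerName SRV01 -ConfigurationName FrontlineTech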

Watch more of the Windows Server 2016 demo series and learn more about security options with Windows Server 2016.

The week in .NET – On .NET on Docker and new Core tooling, Benchmark.NET, Magicka


Previous posts:

On .NET

In this week’s episode, we’re running ASP.NET in a Docker image, and we look at some of the changes in the .NET Core .csproj tooling. Apologies to those of you who watched live: we had some technical difficulties, and as a consequence, we did a second recording, which is now on Channel 9.

This week, we’ll have Phil Haack on the show. Phil works for GitHub, and before that was the Program Manager for ASP.NET MVC. We’ll stream live on Channel 9. We’ll take questions on Gitter’s dotnet/home channel and on Twitter. Please use the #onnet tag. It’s OK to start sending us questions in advance if you can’t do it live during the shows.

We’ll also record a couple of additional surprise interviews in preparation for the celebration of the 15th anniversary of .NET, and the 20th anniversary of Visual Studio. Stay tuned!

Package of the week: Benchmark.NET

When done properly, benchmarking is a great way to guide your engineering choices by comparing multiple solutions to a problem known to cause performance bottlenecks in your applications. There’s a lot of methodology involved if you want to do it right, however, and it is both tricky and repetitive. And no, surrounding your code with a Stopwatch won’t cut it.

Benchmark.NET makes it very easy to decorate the code that you want to test so it can be discovered, run many times, and measured. Benchmark.NET takes care of warmup and cooldown periods as needed, and will compute mean running times and standard deviation for you. It can also generate reports in a variety of formats.

Game of the week: Magicka

Magicka is an action-adventure game set in a world based on Norse mythology. Take on the role of a wizard from a sacred order while you embark on a quest to stop an evil sorcerer who has thrown the world into turmoil. Magicka features a dynamic spell casting system that has you combining the elements to cast spells. You can also play with up to three of your friends in co-op and versus modes.

Magicka

Magicka was created by Arrowhead Game Studios using C# and XNA. It is available for Windows on Steam.

User group meeting of the week: C# 7 with Jon Skeet in Adelaide

If you’re around Adelaide on Wednesday, February 8, don’t miss the Adelaide .NET User Group’s meetup with Jon Skeet on C# 7. Jon Skeet is none other than the #1 member on StackOverflow, and an absolute authority on C#.

.NET

ASP.NET

F#

Check out F# Weekly for more great content from the F# community.

Xamarin

Azure

UWP

Games

And this is it for this week!

Contribute to the week in .NET

As always, this weekly post couldn’t exist without community contributions, and I’d like to thank all those who sent links and tips. The F# section is provided by Phillip Carter, the gaming section by Stacey Haffner, the Xamarin section by Dan Rigby, and the UWP section by Michael Crump.

You can participate too. Did you write a great blog post, or just read one? Do you want everyone to know about an amazing new contribution or a useful library? Did you make or play a great game built on .NET?
We’d love to hear from you, and feature your contributions on future posts:

This week’s post (and future posts) also contains news I first read on The ASP.NET Community Standup, on Weekly Xamarin, on F# weekly, and on Chris Alcock’s The Morning Brew.

Another Update to Visual Studio 2017 Release Candidate


Thank you for taking time to try out Visual Studio 2017 RC and sharing all your feedback. Today we have another update to Visual Studio 2017 Release Candidate which mostly contains bug fixes. Take a look at the Visual Studio 2017 Release Notes and Known Issues for the full list of what’s changed with this update.

Please try this latest update and share your feedback. For problems, let us know via the Report a Problem option in the upper right corner of the VS title bar. Track your feedback on the developer community portal. For suggestions, let us know through UserVoice.

John Montgomery, Director of Program Management for Visual Studio

@JohnMont is responsible for product design and customer success for all of Visual Studio, C++, C#, VB, .NET and JavaScript. John has been at Microsoft for 18 years working in developer technologies.

Announcing .NET Core Tools Updates in VS 2017 RC


Today, we are releasing updates to the .NET Core SDK, included in Visual Studio 2017 RC. You can also install the .NET Core SDK for use with Visual Studio Code or at the command line, on Windows, Mac and Linux. Check out the Visual Studio blog to learn more about this Visual Studio 2017 update.

The following improvements have been made in the release:

  • Templates — dotnet new has been updated and now is based on a new templating engine.
  • The location of the .NET Standard class library template, in Visual Studio, has been moved to the new .NET Standard node, based on feedback.
  • Quality — ~50 fixes have been made across the tools to improve product reliability.

The quality fixes have been made across the .NET CLI, NuGet, MSBuild and also in Visual Studio. We will continue to squash bugs as we get closer to Visual Studio 2017 RTM. Please continue sharing your feedback on the overall experience.

Getting the Release

This .NET Core SDK release is available in Visual Studio 2017 RC, as part of the .NET Core cross-platform development workload. It is also available in the ASP.NET and web development workload and as an optional component of the .NET desktop development workload. These workloads can be selected as part of the Visual Studio 2017 RC installation process. The ability to build and consume .NET Standard class libraries is available in all of the above workloads and in the Universal Windows Platform development workload.

You can also install the .NET Core SDK release for use with Visual Studio Code or for command-line use on Windows, macOS and Linux by following the instructions at .NET Core 1.0 – RC4 Download.

The release is also available as Docker images, in the dotnet repo. The following SDK images are now available:

  • 1.0.3-sdk-msbuild-rc4
  • 1.0.3-sdk-msbuild-rc4-nanoserver
  • 1.1.0-sdk-msbuild-rc4
  • 1.1.0-sdk-msbuild-rc4-nanoserver

The aspnetcore-build repo has also been updated.

Changes to Docker Images

We made an important change with this release to the tags in the dotnet repo. The latest and nanoserver tags now refer to MSBuild SDK images. The latest tag now refers to the same image as 1.1.0-sdk-msbuild-rc4, while nanoserver now refers to the same image as 1.1.0-sdk-msbuild-rc4-nanoserver. Previously, those two tags referred to the same image as 1.1.0-sdk-projectjson-rc3 and 1.1.0-sdk-projectjson-rc3-nanoserver, respectively.

This is a breaking change, since the msbuild SDK is not compatible with the project.json-based SDK. We need to start moving the .NET Core ecosystem to the msbuild SDK, sooner than expected. We had originally planned to make this change at Visual Studio 2017 RTM. The number of times the latest tag is being pulled is growing much faster than we expected, making the break worse with each passing day. As a result, we were compelled to make this change with this release.

You can continue to use the project.json images for now, listed below, to give you more time to transition to the msbuild images (see dotnet migrate). Changing to one of these more specific tags is a one-line change in a Dockerfile, as sketched after the list.

  • 1.1.0-sdk-projectjson-rc3
  • 1.1.0-sdk-projectjson-rc3-nanoserver
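If you need more time on the project.json tooling, pinning to one of these explicit tags instead of latest is enough. A minimal sketch, assuming the public Docker Hub repository name is microsoft/dotnet:

docker pull microsoft/dotnet:1.1.0-sdk-projectjson-rc3

# In a Dockerfile, the corresponding one-line change would be:
#   FROM microsoft/dotnet:1.1.0-sdk-projectjson-rc3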

Note: We are no longer updating the project.json images, so please do plan your transition to the msbuild images. For example, only the msbuild SDK images will be updated when we release the 1.0.4 and 1.1.1 runtime updates, which we expect later this quarter.

We apologize if this change breaks you. We will be providing general guidance on how to best use our tags to avoid a similar situation in the future. We’ve been learning a lot about Docker over the last several months, particularly around versioning and naming. Expect a blog post soon on this topic that addresses these issues.

Changes to Supported Linux Distros

Fedora 23 and openSUSE 13.2 recently went out of support, per their respective project lifecycles. As a result, we are no longer supporting or building for Fedora 23 and openSUSE 13.2.

We will be publishing a more formal policy on Linux distro support, in particular on managing end-of-life of distros. There will be opportunity for feedback on the policy before it is finalized.

Project Files

In the RC3 release, we made major improvements to make the csproj project files smaller. If you are using .NET Core project files created with earlier Visual Studio 2017 versions (before RC3), you should read the Updating Project Files section of the RC3 blog post to learn about changes you need to make to your project files.

dotnet new

The dotnet new command is one of the most important parts of the .NET Core tools experiences. It is useful for both new and experienced .NET Core users. I know that people who use and test the product on a daily basis use dotnet new all the time for experiments and prototypes. I do! It’s also documented on a lot of websites and markdown pages to help users get started with .NET Core. That said, we always knew that dotnet new was a little lacking and decided to improve it.

In short, we want dotnet new to have the following characteristics:

  • Powerful — expressive and scriptable command-line syntax.
  • Helpful — an interactive mode helps users pick the templates they need (think Yeoman).
  • Extensible — anyone can write templates for dotnet new!
  • Updatable — templates can be updated outside of primary delivery vehicles (e.g. Visual Studio, .NET Core SDK).
  • Platform — can be used by tools like Visual Studio and generator-aspnet (think yo aspnet).

dotnet new is now based on a new templating engine, which you can check out at dotnet/templating. It already does a great job covering what the RC3 version of dotnet new did. We’ll continue to add to it and improve it over the next several months, getting it to the point where it satisfies all the characteristics above. For the immediate term, we’re focused on ensuring that it has the right quality level for Visual Studio 2017 RTM.

Improvements

We have updated dotnet new in the RC4 release with the following features:

You can now specify a target directory for your new template, with the -o argument, such as in the following example: dotnet new console -o awesome-new-tool. If the target directory does not exist, it will be created for you. This can also be combined with the -n argument to name projects, such as in the following example: dotnet new console -n awesome-new-tool -o src/awesome.

Target frameworks now have their own argument, -f. You can specify a target framework for any template, provided it is a legal value, such as in: dotnet new console -f netcoreapp1.0. The target framework values are the same as the ones used in the project files.

Solution file management has been improved. You can now create an empty solution file with dotnet new sln and then add projects to it. You can create solution files before or after project files, depending on your preferred workflow. If you have been using the older project.json-based tooling, you can think of solution files as the replacement for global.json files.
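For example, here is a hedged sketch of the solution workflow described above. The project and solution names are placeholders, and the dotnet sln add form shown is the MSBuild-based CLI syntax, which may differ slightly in RC4:

dotnet new sln -n MySolution
dotnet new console -n HelloWorld -o src/HelloWorld -f netcoreapp1.0
dotnet sln MySolution.sln add src/HelloWorld/HelloWorld.csproj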

Important Changes

The basic dotnet new (no arguments) experience no longer defaults to creating a console template, as it did in RC3 and earlier releases. The dotnet new command will now print the available set of templates, much like dotnet new --help. In a later release, we may update dotnet new to start an interactive new template experience, which helps you select the right template based on a series of questions.

The dotnet new command line has been streamlined. To create projects from templates, you type dotnet new console or dotnet new web for the console app or MVC templates, respectively. The RC3 and earlier tools versions required a -t argument before the template name, such as dotnet new -t web.

Some of the template names changed, specifically Lib (now classlib) and Xunittest (now xunit). For RC4, you will need to use the new template names.

Walkthrough of the new template experience

You are probably curious about the new dotnet new experience. Sayed Hashimi, the Program Manager for dotnet new, wrote the following walkthrough to give you a good idea of what to expect. That said, I encourage you to install the RC4 SDK and try it out for yourself.

Sayed’s walkthrough was done on Linux. You can replicate the same experience on Windows. Just make sure to replace the Linux commands with the ones you are using in your favorite Windows shell.

Getting familiar with the new new

First, let’s get a little familiar with new by displaying the help using dotnet new --help. The result is shown below.

$ dotnet new --help
Template Instantiation Commands for .NET Core CLI.

Usage: dotnet new [arguments] [options]

Arguments:
  template  The template to instantiate.

Options:
  -l|--list         List templates containing the specified name.
  -lang|--language  Specifies the language of the template to create
  -n|--name         The name for the output being created. If no name is specified, the name of the current directory is used.
  -o|--output       Location to place the generated output.
  -h|--help         Displays help for this command.
  -all|--show-all   Shows all templates

Templates                                 Short Name      Language      Tags
--------------------------------------------------------------------------------------
Console Application                       console         [C#], F#      Common/Console
Class library                             classlib        [C#], F#      Common/Library
Unit Test Project                         mstest          [C#], F#      Test/MSTest
xUnit Test Project                        xunit           [C#], F#      Test/xUnit
Empty ASP.NET Core Web Application        web             [C#]          Web/Empty
MVC ASP.NET Core Web Application          mvc             [C#], F#      Web/MVC
Web API ASP.NET Core Web Application      webapi          [C#]          Web/WebAPI
Solution File                             sln                           Solution

Examples:
    dotnet new mvc --auth None --framework netcoreapp1.0
    dotnet new mstest --framework netcoreapp1.0
    dotnet new --help

From the help output we can see that to create a project we can execute dotnet new <template name>. The template names are displayed in the results of --help, but you can also get the names using dotnet new -l.

Creating Projects

Let’s create a new HelloWorld console app. The most basic way to create a console app is using the
command dotnet new console. The other parameters that we can specify are listed below.

  • -n|--name
  • -o|--output
  • -lang|--language

In this case we want to create a C# console app named HelloWorld in the src/HelloWorld directory. Since C# is the default language for the console app template (the default value is indicated in the help by [ ]), there is no need to pass a value to -lang. To create the project, execute dotnet new console -n HelloWorld -o src/HelloWorld. The result is shown below.

$ dotnet new console -n HelloWorld -o src/HelloWorld
Content generation time: 32.4513 ms
The template "Console Application" created successfully.

Let’s see what was generated by listing the files on disk.

$ ls -R src
./src:
HelloWorld

./src/HelloWorld:
HelloWorld.csproj   Program.cs

The HelloWorld project was created as expected in src/HelloWorld, and it consists of two files HelloWorld.csproj
and Program.cs. Let’s restore the packages and run the app using dotnet restore and then dotnet run. See the result.

$ cd src/HelloWorld/
$ dotnet restore
  Restoring packages for /Users/sayedhashimi/temp/blog/samples/src/HelloWorld/HelloWorld.csproj...
  Generating MSBuild file /Users/sayedhashimi/temp/blog/samples/src/HelloWorld/obj/HelloWorld.csproj.nuget.g.props.
  Generating MSBuild file /Users/sayedhashimi/temp/blog/samples/src/HelloWorld/obj/HelloWorld.csproj.nuget.g.targets.
  Writing lock file to disk. Path: /Users/sayedhashimi/temp/blog/samples/src/HelloWorld/obj/project.assets.json
  Restore completed in 953.36 ms for /Users/sayedhashimi/temp/blog/samples/src/HelloWorld/HelloWorld.csproj.
  NuGet Config files used:
      /Users/sayedhashimi/.nuget/NuGet/NuGet.Config
  Feeds used:
      https://api.nuget.org/v3/index.json
$ dotnet run
Hello World!

From the output we can see the packages were restored successfully and when the app was executed Hello World! was
printed to the console.

Templates with Options

Templates can expose options, which customize template output based on user input. We can see those options by calling --help on a template, such as with dotnet new mvc --help.

$ dotnet new mvc --help
Template Instantiation Commands for .NET Core CLI.

Usage: dotnet new [arguments] [options]

Arguments:
  template  The template to instantiate.

Options:
  -l|--list         List templates containing the specified name.
  -lang|--language  Specifies the language of the template to create
  -n|--name         The name for the output being created. If no name is specified, the name of the current directory is used.
  -o|--output       Location to place the generated output.
  -h|--help         Displays help for this command.
  -all|--show-all   Shows all templates

MVC ASP.NET Core Web Application (C#)
Author: Microsoft
Options:
  -au|--auth           The type of authentication to use
                           None          - No authentication
                           Individual    - Individual authentication
                       Default: None
  -uld|--use-local-db  Whether or not to use LocalDB instead of SQLite
                       bool - Optional
                       Default: false
  -f|--framework
                           netcoreapp1.0    - Target netcoreapp1.0
                           netcoreapp1.1    - Target netcoreapp1.1
                       Default: netcoreapp1.0

Here we can see that the mvc template has three specific parameters. In this case let’s create an mvc app named
MyWeb in the src/MyWeb directory targeting netcoreapp1.1. To do that we will execute
dotnet new mvc -n MyWeb -o src/MyWeb -au Individual -f netcoreapp1.1.

$ dotnet new mvc -n MyWeb -o src/MyWeb -au Individual -f netcoreapp1.1
Content generation time: 429.6003 ms
The template "MVC ASP.NET Web Application" created successfully.

Now the project has been created in the src/MyWeb directory. Let’s take a look.

$ ls -lp src/MyWeb/
total 80
drwxr-xr-x  5 sayedhashimi  staff   170 Feb  3 10:43 Controllers/
drwxr-xr-x  4 sayedhashimi  staff   136 Feb  3 10:43 Data/
drwxr-xr-x  5 sayedhashimi  staff   170 Feb  3 10:43 Models/
-rwxr--r--  1 sayedhashimi  staff  1767 Feb  3 10:43 MyWeb.csproj
-rwxr--r--  1 sayedhashimi  staff  4096 Feb  3 10:43 MyWeb.db
-rwxr--r--  1 sayedhashimi  staff   544 Feb  3 10:43 Program.cs
drwxr-xr-x  5 sayedhashimi  staff   170 Feb  3 10:43 Services/
-rwxr--r--  1 sayedhashimi  staff  3081 Feb  3 10:43 Startup.cs
drwxr-xr-x  8 sayedhashimi  staff   272 Feb  3 10:43 Views/
-rwxr--r--  1 sayedhashimi  staff   168 Feb  3 10:43 appsettings.Development.json
-rwxr--r--  1 sayedhashimi  staff   185 Feb  3 10:43 appsettings.json
-rwxr--r--  1 sayedhashimi  staff   197 Feb  3 10:43 bower.json
-rwxr--r--  1 sayedhashimi  staff   604 Feb  3 10:43 bundleconfig.json
-rwxr--r--  1 sayedhashimi  staff    61 Feb  3 10:43 runtimeconfig.template.json
-rwxr--r--  1 sayedhashimi  staff   680 Feb  3 10:43 web.config
drwxr-xr-x  8 sayedhashimi  staff   272 Feb  3 10:54 wwwroot/

Future Plans

We want to enable everyone to create templates and make it easy to share those templates. Templates will be installable as NuGet packages or from a folder. In the meantime, check out the templating wiki for info on creating templates. I was happy to see the Custom project templates using dotnet new post by one of the community members that we’ve been working with for early feedback. Here’s my favorite quote from his post:

“This new method makes creating project templates about as easy as it’s ever going to get and allows really easy sharing, versioning and personalization of project templates.”

We are also working on a way to enable templates to be updated. For critical fixes we are considering updating templates without any user interaction. For general updates we are looking to add a new --update option.

We are working on plans to integrate the templating engine with the Visual Studio family of IDEs and other template experiences, such as Yeoman. We have a vision of everyone producing templates in a single format that works with all .NET tools. Wouldn’t that be nice!?! If you’re interested in learning more about how yo aspnet relates to dotnet new see my comments on the topic.

Last, we’re hoping to update the command line experience to be interactive. In this mode we will
prompt for things like the template name, project name and the other information that you otherwise need to provide as command line arguments. We believe that interactive is the ultimate new user experience.

Summary

I’ve been asked several times recently when the .NET Core Tools will ship a final RTM release. The tools will ship as an RTM release the same day as Visual Studio 2017 RTM. We’re getting close. As I said at the start of the post, we’ve got a few more bugs to squash first and then we’ll be happy to get the release out the door for you to use.

In this release, we’ve focused on quality improvements. We also switched over to a new and more capable templating engine. For now, the new dotnet new implementation is largely a replacement of the functionality that was included in the RC3 release. In upcoming releases, you should expect to see some great new features that make you more productive at the command line. We hope to integrate this new system into Visual Studio, too, enabling us (and you!) to share templates across all .NET Core tools.

Thanks to Sayed Hashimi for the write-up on the new dotnet new implementation!

As always, please share your feedback, either in the comments, by email or on Twitter.

Thanks!

More on GVFS


After watching a couple of days of GVFS conversation, I want to add a few things.

What problems are we solving?

GVFS (and the related Git optimizations) really solves 4 distinct problems:

  1. A large number of files – Git doesn’t naturally work well with hundreds of thousands or millions of files in your working set.  We’ve optimized it so that operations like git status are reasonable, commit is fast, push and pull are comfortable, etc.
  2. A large number of users – Lots of users create 2 pretty direct challenges.
    1. Lots of branches – Users of Git create branches pretty prolifically.  It’s not uncommon for an engineer to build up ~20 branches over time and multiply 20 by, say 5000 engineers and that’s 100,000 branches.  Git just won’t be usable.  To solve this, we built a feature we call “limited refs” into our Git service (Team Services and TFS) that will cause the service to pretend that only the branches “you care about” are projected to your Git client.  You can favorite the branches you want and Git will be happy.
    2. Lots of pushes – Lots of people means lots of code flowing into the server.  Git has critical serialization points that will cause a queue to back up badly.  Again, we did a bunch of work on our servers to handle the serialized index file updates in a way that causes very little contention.
  3. Big files – Big binary files are a problem in Git because Git copies all the versions to your local Git repo, which makes for very slow operations.  GVFS’s virtualized .git directory means it only pulls down the files you need when you need them.
  4. Big .git folder – This one isn’t exactly distinct.  It is related to a large number of files and big files but, just generally the multiplication of lots of files, lots of history and lots of binary files creates a huge and unmanageable .git directory that gobbles up your local storage and slows everything down.  Again GVFS’s virtualization only pulls down the content you need, when you need it, making it much smaller and faster.

There are other partial solutions to some of these problems – like LFS, sparse checkouts, etc.  We’ve tackled all of these problems in an elegant and seamless way.  It turns out #2 is solved purely on the server – it doesn’t require GVFS and will work with any Git client.  #1, #3 and #4 are addressed by GVFS.

GVFS really is just Git

One of the other things I’ve seen in the discussions is how we are turning Git into a centralized version control system (and hence removing all the goodness).  I want to be clear that I really don’t believe we are doing that and would appreciate the opportunity to convince you.

Looking at the server from the client, it’s just Git.  All TFS and Team Services hosted repos are *just* Git repos.  Same protocols.  Every Git client that I know of in the world works against them.  You can choose to use the GVFS client or not.  It’s your choice.  It’s just Git.  If you are happy with your repo performance, don’t use GVFS.  If your repo is big and feeling slow, GVFS can save you.

Looking at the GVFS client, it’s also “just Git” with a few exceptions.  It preserves all of the semantics of Git – The version graph is a Git version graph.  The branching model is the Git branching model.  All the normal Git commands work.  For all intents and purposes you can’t tell it’s not Git.  There are three exceptions.

  1. GVFS only works against TFS and Team Services hosted repos.  The server must have some additional protocol support to work with GVFS.  Also, the server must be optimized for large repos or you aren’t likely to be happy.  We hope this won’t remain the case indefinitely.  We’ve published everything a Git server provider would need to implement GVFS support.
  2. GVFS doesn’t support Git filters.  Git filters transform file content on the fly during a retrieval (like end of line translations).  Because GVFS is projecting files into the file system, we can’t transform the file on “file open”.
  3. GVFS has limits on going offline.  In short, you can’t do an offline operation if you don’t have the content it needs.  However, if you do have the content, you can go offline and everything will work fine (commits, branches, everything).  In the extreme case, you could pre-fetch everything and then every operation would just work – but that would kind of defeat virtualization.  In a more practical case, you could just pre-fetch the content of the folders you generally use and leave off the stuff you don’t.  We haven’t built tools yet to manage your locally cached state but there’s no reason we (or you) can’t.  With proper management of pre-fetching GVFS can even give a great, full featured offline experience.

That’s all I know of.  Hopefully, if GVFS takes off, #1 will go away.  But remember, if you have a repo in GVFS and you want to push to another Git server, that’s fine.  Clone it again without the GVFS client, add a remote to the alternate Git server and push.  That will work fine (ignoring the fact that it might be slow because it’s big).  My point is, you are never locked in.  And #3 can be improved with fairly straight forward tooling.  It’s just Git.

Hopefully this sheds a little more light on the details of what we’ve done.  Of course, all the client code is in our GitHub project so feel free to validate my assertions.

Thanks,

Brian

January 2017 Leaderboard of Database Systems contributors on MSDN


We started the Leaderboard initiative in October last year. Thank you for your continued support. Many congratulations to the top-10 contributors featured on the January 2017 edition of our leaderboard!

[Leaderboard image: January 2017 top-10 contributors. The “All database systems” list is topped by Olaf Helper and also features Sunil Gure, Hilary Cotter, Alberto Morillo, Jingyang Li, Ekrem Önsoy, KevinNicholas, philfactor, Uri Dimant and davidbaxterbrowne. The “Cloud databases” list is topped by Alberto Morillo and also features davidbaxterbrowne, SamCogan, blane.nelson, Cloud Crusader, SQLMojoe, Hariharan Rajendran and others. *Cloud databases covers the MSDN forums related to Azure SQL Database, Azure SQL Data Warehouse and SQL Server on Azure Virtual Machines.]

Olaf Helper and Alberto Morillo top the Overall and Cloud databases categories this month too. 6 of this month’s Overall Top-10 (including all of the top-3) featured in last month’s Overall Top-10 as well.

The following continues to be the points hierarchy (in decreasing order of points):

[Scoring methodology image: an accepted answer is worth more points than an answer that is not accepted.]

For questions related to this leaderboard, please write to leaderboard-sql@microsoft.com.


Our team has acquired the extension, ‘Wiki’ by Agile Extensions, and plans to provide a built-in Wiki experience


wiki-banner

We’re firm believers that a vibrant extension ecosystem is critical to having a best-in-class DevOps product. Not only does it organically bring new technologies and solutions that our customers are looking for, it also builds a community around our product which is a huge contributor to the success of any platform.

A recent example of that partnership and success is the Wiki extension, which has done well in the Marketplace. We have been exploring different ways to bring more ‘social’ experiences to Team Services, and it became clear that a Wiki, which has been on the team’s backlog for quite some time, is going to be necessary to fully realize that vision.

Today, we’re excited to announce that we have acquired the Marketplace extension, ‘Wiki’, from Agile Extensions.

This decision is driven by a long-term plan to offer a built-in Wiki experience for Team Services instead of requiring people to install an extension from the Marketplace. To achieve this we will leverage the existing extension as a starting point, and deprecate the Marketplace listing once we’re ready to release our built-in solution. Acquiring the extension is our first step and I’m including some additional thoughts below. If I’ve left anything out, or if you have any unanswered questions, don’t hesitate to leave a comment here or tweet me @JoeB_in_NC.

I am a ‘Wiki’ customer, how does this impact me?

In the short term, this acquisition will not impact you and there is no action required. The extension will show up as being published by a different publisher, ‘Microsoft DevLabs’ instead of ‘Agile Extensions’, but you’ll be able to continue using it without disruption. Long-term, we will deprecate the Marketplace listing and give customers an option to migrate their Wiki content to the built-in experience.

Why not build your own and keep other Wiki experiences in the Marketplace?

Our publishers are extended members of our team; their success is our success and vice versa. In the case of Wiki, our team had been working closely with Agile Extensions on their experience. It was during that process that our team realized we needed a built-in solution. Agile Extensions had made such a good start with their extension that it made the most sense to purchase what they had already created, especially after having worked so closely with them.

Do you have more information to share about your built-in Wiki plans?

We don’t have a specific timeline to share with you but the team is busy preparing designs and determining the direction we want to go with Wiki. One of the biggest inputs to that process is the set of feedback that the current Wiki extension has received. We’ve collected all of that and Agile Extensions has been very supportive in ensuring this information is not lost. Some of the top of mind items we’ve heard from Wiki users that we’ll consider are:

  1. Welcome page: there is an inherent desire to have a customized home page that teams can author
  2. Navigation: As Wikis grow, finding the right page becomes a pain so we’ll need to land this properly
  3. Browsing: ordering pages is desirable, but authoring an ordered list of pages is painful today, so making it easier would be very valuable
  4. Authors: Pages are authored not only by developers but also by content writers and project managers, so having a simple authoring mechanism is key

These are items we know come from the feedback, but we’re always listening. Please, if you have additional thoughts about Wiki and what you want to see in Team Services then leave a comment!

January 2017 Leaderboard of Database Systems contributors on MSDN


This post was authored by Rahul Venkatraj, Program Manager, Data Platform Group

January 2017 Leaderboard of Database Systems contributors on MSDN

ODBC Driver 13.1 for Linux Released


This post is authored by Meet Bhagdev, Program Manager, Microsoft

Hi all. We are delighted to share the Production Ready Release of the Microsoft ODBC Driver 13.1 for Linux (Ubuntu, RedHat and SUSE). The new driver enables access to SQL Server, Azure SQL Database and Azure SQL DW from any C/C++ application on Linux.

Added

  • BCP API support
    • You can use the BCP functions through the ODBC driver on Linux, as described here.
  • Support for user-defined KeyStoreProvider for Always Encrypted
    • You can now use user-defined/created AE Column Master Key keystore providers. Check out code samples and more information here.
  • Ubuntu 16.10 support
    • Developed a package for Ubuntu 16.10 for an apt-get install experience.
  • Dependency on the platform unixODBC Driver Manager instead of the custom unixODBC-utf16 Driver Manager
    • This avoids conflicts with applications/software that depends on the platform unixODBC Driver Manager.

Fixed

  • msodbcsql.h (Connect issues 3115331, 3114970)
    • Missing definitions for AE, BCP and SQL Server specific types were added
  • TRUST_SERVER_CERTIFICATE connection attribute is always yes (Connect 3116639)
    • Setting the TRUST_SERVER_CERTIFICATE connection attribute to anything other than yes failed to set the attribute value. This has been corrected.
  • Fixed Connect issue 2693027 — Memory Leak
    • We detected this issue independently of the bug report using valgrind. The memory leak has been fixed.
  • Driver failure when connecting with more than 1,024 handles
    • Switched away from libio select. Driver now supports (theoretical) handle limit of 64K or platform max.
  • Intermittent communication link failure when using Azure SQL DW
    • In some high-latency scenarios over an encrypted channel, the driver could fail unexpectedly. This has been resolved.

Install the ODBC Driver for Linux on Ubuntu 15.10

sudo su
curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
curl https://packages.microsoft.com/config/ubuntu/15.10/prod.list > /etc/apt/sources.list.d/mssql-release.list
exit
sudo apt-get update
sudo ACCEPT_EULA=Y apt-get install msodbcsql=13.1.4.0-1
sudo apt-get install unixodbc-dev

Install the ODBC Driver for Linux on Ubuntu 16.04

sudo su
curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list > /etc/apt/sources.list.d/mssql-release.list
exit
sudo apt-get update
sudo ACCEPT_EULA=Y apt-get install msodbcsql=13.1.4.0-1
sudo apt-get install unixodbc-dev

Install the ODBC Driver for Linux on Ubuntu 16.10

sudo su
curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
curl https://packages.microsoft.com/config/ubuntu/16.10/prod.list > /etc/apt/sources.list.d/mssql-release.list
exit
sudo apt-get update
sudo ACCEPT_EULA=Y apt-get install msodbcsql=13.1.4.0-1
sudo apt-get install unixodbc-dev

Install the ODBC Driver for Linux on RedHat 6

sudo su
curl https://packages.microsoft.com/config/rhel/6/prod.repo > /etc/yum.repos.d/mssql-release.repo
exit
sudo yum remove unixODBC-utf16 unixODBC-utf16-devel #to avoid conflicts
sudo ACCEPT_EULA=Y yum install msodbcsql-13.1.4.0-1
sudo yum install unixODBC-devel

Install the ODBC Driver for Linux on RedHat 7

sudo su
curl https://packages.microsoft.com/config/rhel/7/prod.repo > /etc/yum.repos.d/mssql-release.repo
exit
sudo yum remove unixODBC-utf16 unixODBC-utf16-devel #to avoid conflicts
sudo ACCEPT_EULA=Y yum install msodbcsql-13.1.4.0-1
sudo yum install unixODBC-devel

Install the ODBC Driver for SLES 12

sudo su
zypper ar https://packages.microsoft.com/config/sles/12/prod.repo
zypper update
exit
sudo ACCEPT_EULA=Y zypper install msodbcsql-13.1.4.0-1
sudo zypper install unixODBC-devel

Try Our Sample

Once you install the driver on a supported Linux distro, you can use this C sample to connect to SQL Server/Azure SQL DB/Azure SQL DW. To download the sample and get started, follow these steps:

wget "https://raw.githubusercontent.com/Microsoft/sql-server-samples/master/samples/tutorials/c/linux/sample_c_linux.c"
gcc sample_c_linux.c -o sample_c_linux -lodbc -w # make sure you change the servername, username and password in the connection string
./sample_c_linux

If you installed the driver using the manual instructions found here, you will have to manually uninstall the ODBC Driver and the unixODBC Driver Manager to use the deb/rpm packages. If you have any questions on how to manually uninstall, feel free to leave a comment below.

Please file bugs/questions/issues on our Issues page. We welcome contributions/questions/issues of any kind. Happy programming!

Meet Bhagdev (meetb@microsoft.com)


January 2017 updates for Get & Transform in Excel 2016 and the Power Query add-in

$
0
0

Excel 2016 includes a powerful set of features based on the Power Query technology, which provides fast, easy data gathering and shaping capabilities and can be accessed through the Get & Transform section on the Data ribbon.

Today, we are pleased to announce six new data transformation and connectivity features that have been requested by many customers.

These updates are available as part of an Office 365 subscription. If you are an Office 365 subscriber, find out how to get these latest updates. If you have Excel 2010 or Excel 2013, you can also take advantage of these updates by downloading the latest Power Query for Excel add-in.

These updates include the following new or improved data connectivity and transformation features:

  • New OLE DB connector.
  • Enhanced “Combine Binaries” experience when importing from any folder.
  • Maximize/Restore buttons in the Navigator and Query Dependencies dialogs.
  • Support for percentage data type.
  • Improved “Function Authoring” experience.
  • Improved performance for OData connector.

New OLE DB connector

In this update, we enabled connectivity to OLE DB drivers via the new OLE DB connector. In addition to the wide range of out-of-the-box sources supported, OLE DB greatly increases the number of sources that users can now import from by using Get & Transform capabilities in Excel.

The new OLE DB connector can be found under Data > New Query > From Other Sources > From OLE DB.

The connector dialog allows users to specify a Connection String and, optionally, an SQL statement to execute. If no SQL statement was specified, users will be taken into the Navigator dialog, where they can browse and select one or multiple tables available via the selected OLE DB driver.
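For example, a typical OLE DB connection string for SQL Server looks like the following; the provider, server and database names are placeholders and will vary with the driver you use:

Provider=SQLOLEDB;Data Source=myserver;Initial Catalog=AdventureWorks;Integrated Security=SSPI;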

Get and Transform January updates 1

Enhanced “Combine Binaries” experience when importing from any folder

One of the most popular scenarios in Excel consists of leveraging one of the folder-like connectors (such as Folder, SharePoint folder, etc.) to combine multiple files with the same schema into a single logical table.

Before this release, users could combine Text or CSV files only. The combine would not work for any other supported file formats (such as Excel Workbooks, JSON files, etc.), and it would not account for transformations required on each file before combining them into a single table (such as removing the first row with header values).

With this release, we enhanced the “Combine Binaries” experience when importing from any folder so that:

  • Excel analyzes the input files from the Folder query and detects the right file format to use (i.e., Text or Excel Workbook).
  • Users can select a specific object from the list (such as a spreadsheet name) to use for data combine.
  • Excel automatically creates the following entities:
    • An example query that performs all required transformation steps in a single file.
    • A function query that parameterizes the file input to the exemplar query created in the previous step.
    • Excel then applies the created function query on each file from the original Folder query and expands the resulting data extraction as top-level columns.

With this new approach, users can easily combine all binaries within a folder if they have a homogeneous file type and column structure. Users can also easily apply additional transformations by modifying the “exemplar query” without having to worry about any additional function invocation steps, as they’re automatically generated for them.

Get and Transform January updates 2

Maximize/Restore buttons in the Navigator and Query Dependencies dialogs

The Navigator and Query Dependencies dialogs (activated from the Query Editor) support window resizing by dragging the bottom-right edge of the dialog. In this release, we made it possible to maximize/restore these dialogs by exposing Maximize and Restore icons in their top-right corner.

Get and Transform January updates 3

Support for percentage data type

With this update, we added support for percentage data types, so they can easily be used in arithmetical operations for Get & Transform scenarios. An input value such as “5%” will be automatically recognized as a percentage value and converted to a two-digit precision decimal number (i.e., 0.05), which can then be used in arithmetical operations within a spreadsheet, the Query Editor or the Data Model.

Besides automatic type recognition from non-structured sources (such as Text, CSV or HTML), users can also convert any value to percentage using the Change Type options in the Query Editor. You can do this on the Query Editor Home tab or Transform tab by clicking Data Type > Percentage, or by right-clicking a column and then selecting Change Type > Percentage.

Get and Transform January updates 4

Improved “Function Authoring” experience

We also made it easier to update function definitions without the need to maintain the underlying M code.

Here’s how it works: Create a function based upon another query using the “Create Function” command. You do this by right-clicking the Queries pane inside Query Editor. When you do that, a link will be created between the original query and the newly generated function. This way, when the user modifies the original query steps, the linked function will be automatically updated as well.

When using Query Parameters, creating a function out of a query will allow users to use Function Inputs to replace parameter values in the generalized function query.

Improved performance for OData connector

With this update, we added support for pushing Expand Record operations to be performed in the underlying OData service. This will result in improved performance when expanding records from an OData feed.

Learn more

—The Excel team

The post January 2017 updates for Get & Transform in Excel 2016 and the Power Query add-in appeared first on Office Blogs.

Come see the Windows Server team at RSA 2017


Are you going to the RSA Conference in San Francisco on Feb 13th-17th?

Microsoft is proud to be a sponsor for another year for one of the largest cybersecurity events in the world.

Stop by the Microsoft Infrastructure Security booth (#N3501) where we will have multiple demo stations. Our team will be there to answer questions and demonstrate the new features in Windows Server 2016 like Shielded VMs, Just Enough and Just in Time Administration, as well as show you how Windows Server is connected to the bigger Microsoft security picture with Operations Management Suite and Azure.

In addition to the demo stations, the Microsoft booth will feature a mini theater where our speakers will present the latest and greatest from Microsoft around security. A full agenda will be available at the booth so stop by and get a schedule for the topics that you’re interested in.

To check the Microsoft sessions as well as other information about our participation at the RSA Conference, visit our event website: www.microsoft.com/rsa.

Security is a top priority for our customers, so we’re committed to helping you have a better security posture and look forward to seeing you at RSA!

A Wiki for Team Services and TFS


One of the big areas of investment for us recently is “social” experiences.  I’m using a fairly broad definition of that term, including a focus on “me” and my stuff and capabilities that improve collaboration across my team, project, organization.  You’ve already seen lots of pieces that support this general direction:

  • The new account pages with “me” views of work, favorites, pull requests, etc.
  • A new project landing page optimized for exploring projects across your organization.
  • To some degree, the new navigation experience that cleans things up and gives us some room to continue to grow new experiences.
  • Follow and favorites experiences and significantly improved notifications.
  • The beginnings of mobile experiences so you get a great experience regardless of the device you are on.
  • Continued investment in pull requests, policies, etc.
  • Code and work item search capabilities to make it easy to find what I’m looking for.

All of these are steps on a journey to an improved collaboration experience.

As part of our work, we concluded we need to have a pretty full-featured Wiki experience to enable all the collaboration experiences we feel are needed.  Fortunately, one of our partners was already building a Wiki extension and had published it in the VS Marketplace.  We decided to purchase the extension and use it as a stepping stone to get to where we want to be with Wikis.  We’re just getting going on it now and it will probably take a couple of sprints for us to get our feet under us and start producing a preview of the “V2” of that extension.  For now, you can use the current extension in the marketplace.  It’s not complete but it’s reasonably functional.  We’ve changed the publisher to Microsoft DevLabs for now to represent the transitional state.

As we build it out, we’ll make sure it works both on Team Services and Team Foundation Server.  I hate to comment on the business model at this early a stage but we think we’ll just include it as part of your base TFS/Team Services license – so we don’t expect there will be any additional charge.  In fact, I think it won’t even be in the marketplace when we are done – it will just be built in.  Some of these things we need to settle for sure but, at a high level, we think of Wiki as a fundamental part of the experience and don’t want to have it not be available for some people.

Stay tuned for more in the next few months.  Also, watch for a feature timeline update early next month to see more of the “social” work on our roadmap.

You can read more if you are interested on our official Wiki announcement blog post.

Brian

Announcing Project Rome Android SDK


Project Rome Overview

Project Rome is a platform for creating experiences that transcend a single device and driving up user engagement – empowering a developer to create human-centric scenarios that move with the user and blur the lines between their devices regardless of form factor or platform.

We first shipped Project Rome capabilities for Remote Launch and Remote App Services in Windows 10 Anniversary Update.

Project Rome Android SDK

Today we are excited to announce the release of the Android version of the Project Rome SDK.  This Android SDK works both with Java and with Xamarin.

You can download the Project Rome SDK for Android from our GitHub repository (linked below). In an earlier blog post, we talked about Paul and his Contoso Music App. In that scenario, Paul had his UWP app, which was a music player, and he wanted to make sure that his users could carry their experience with his app as they moved between devices.

If we take that example further, we can imagine that Paul has a Contoso Music App for Android as well. Paul notices that most of his users use his app on Windows, and on Android. These are the same users logged in with the same MSA. Paul wants to make sure that his users’ experience translates well when they move between their Android and Windows devices. Paul also notices that many of his Windows users run his UWP app on their Xbox at home.

With the Project Rome Android SDK Paul can use:

  1. The Remote Systems API to discover other Windows devices that the user owns. The Remote Systems APIs allow the Contoso Music app to discover these devices on the same network and through the cloud.
  2. The Remote Launch API to launch his app on another Windows device once it has been discovered.
  3. Remote app services to control his app running on Windows from his Android device once it is launched there. This functionality is not part of today’s release, but it is coming soon in a future release of the Android SDK.

Thus, using the Project Rome Android SDK, Paul can bridge the experience gap that exists as his users move between their Android and Windows devices.

Capability Walkthrough

We will briefly walk through both a Java and Xamarin example.  We have full examples of UWP here: https://github.com/Microsoft/Windows-universal-samples/tree/dev/Samples/RemoteSystems and Android here: https://github.com/Microsoft/project-rome/tree/master/Project%20Rome%20for%20Android%20(preview%20release).


Using Java

Here are snippets in Java from our sample of how you’d use the Project Rome Android SDK.  The first step to get going with the Android SDK is to initialize the platform, where you’ll handle authentication.


// Initialize the Connected Devices Platform and supply an auth-code provider.
Platform.initialize(getApplicationContext(), new IAuthCodeProvider() {
    @Override
    public void fetchAuthCodeAsync(String oauthUrl, Platform.IAuthCodeHandler authCodeHandler) {
        performOAuthFlow(oauthUrl, authCodeHandler);
    }
});

Using OAuth you’ll retrieve an auth_code via a WebView:


public void performOAuthFlow(String oauthUrl, Platform.IAuthCodeHandler authCodeHandler) {

    WebView web;
    web = (WebView) _authDialog.findViewById(R.id.webv);
    web.setWebChromeClient(new WebChromeClient());
    web.getSettings().setJavaScriptEnabled(true);
    web.getSettings().setDomStorageEnabled(true);

    // Load the OAuth URL to get an auth_code
    web.loadUrl(oauthUrl);

    WebViewClient webViewClient = new WebViewClient() {
        boolean authComplete = false;

        @Override
        public void onPageFinished(WebView view, String url) {
            super.onPageFinished(view, url);

            if (url.startsWith(REDIRECT_URI)) {
                Uri uri = Uri.parse(url);
                String code = uri.getQueryParameter("code");
                String error = uri.getQueryParameter("error");
                if (code != null && !authComplete) {
                    authComplete = true;
                    authCodeHandler.onAuthCodeFetched(code);
                } else if (error != null) {
                    // Handle error case
                }
            }
        }
    };

    web.setWebViewClient(webViewClient);
}

Now, discover devices:


RemoteSystemDiscovery.Builder discoveryBuilder;
discoveryBuilder = new RemoteSystemDiscovery.Builder().setListener(new IRemoteSystemDiscoveryListener() {
    @Override
    public void onRemoteSystemAdded(RemoteSystem remoteSystem) {
        Log.d(TAG, "RemoteSystemAdded = " + remoteSystem.getDisplayName());
        devices.add(new Device(remoteSystem));
        // Keep the device list sorted by display name
        Collections.sort(devices, new Comparator<Device>() {
            @Override
            public int compare(Device d1, Device d2) {
                return d1.getName().compareTo(d2.getName());
            }
        });
    }
});
startDiscovery(); // Start discovery (helper from the sample)

Remote launch a URI to your device:


RemoteSystemConnectionRequest connectionRequest = new RemoteSystemConnectionRequest(remoteSystem);
String url = "http://msn.com";

new RemoteLauncher().LaunchUriAsync(connectionRequest,
        url,
        new IRemoteLauncherListener() {
            @Override
            public void onCompleted(RemoteLaunchUriStatus status) {
                // Inspect status to confirm the remote launch succeeded
                …
            }
        });

Using Xamarin

Similarly, here are snippets in Xamarin.

You will first initialize the Connected Devices Platform:


Platform.FetchAuthCode += Platform_FetchAuthCode;
var result = await Platform.InitializeAsync(this.ApplicationContext, CLIENT_ID);

Using OAuth you’ll retrieve an auth_code:


private async void Platform_FetchAuthCode(string oauthUrl)
{
    var authCode = await AuthenticateWithOAuth(oauthUrl);
    Platform.SetAuthCode(authCode);
}

Now, discover devices:


private RemoteSystemWatcher _remoteSystemWatcher;
private void DiscoverDevices()
{
    _remoteSystemWatcher = RemoteSystem.CreateWatcher();
    _remoteSystemWatcher.RemoteSystemAdded += (sender, args) =>
    {
        Console.WriteLine("Discovered Device: " + args.P0.DisplayName);
    };
    _remoteSystemWatcher.Start();
}

Finally, connect and launch URIs using LaunchUriAsync:


private async void RemoteLaunchUri(RemoteSystem remoteSystem, Uri uri)
{
    var launchUriStatus = await RemoteLauncher.LaunchUriAsync(new RemoteSystemConnectionRequest(remoteSystem), uri);
}

If you want to see the Xamarin code, please head over to https://github.com/Microsoft/project-rome/tree/master/xamarin.

Wrapping Up

Project Rome breaks down barriers across all Windows devices and creates experiences that are no longer constrained to a single device. With today’s announcement, we are bringing this capability to Android devices as well. The Remote Systems API available in Windows 10 is a key piece of Project Rome that provides exposure of the device graph and the ability to connect and command – this is fundamental for driving user engagement and productivity for applications across all devices.

To learn more and browse sample code, including the snippets shown above, please check out the Project Rome GitHub samples linked above.

The Windows team would love to hear your feedback.  Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.

The post Announcing Project Rome Android SDK appeared first on Building Apps for Windows.


Targeting the Windows Subsystem for Linux from Visual Studio


The Windows Subsystem for Linux (WSL) was first introduced at Build in 2016 and was delivered as an early beta in Windows 10 Anniversary Update. Since then, the WSL team has been hard at work, dramatically improving WSL’s ability to run an ever-increasing number of native Linux command-line binaries and tools, including most mainstream developer tools, platforms and languages, and many daemons/services* including MySQL, Apache, and SSH.

With the Linux development with C++ workload in Visual Studio 2017 you can use the full power of Visual Studio for your C/C++ Linux development. Because WSL is just another Linux system, you can target it from Visual Studio by following our guide on using the Linux workload.  This gives you a lot of flexibility to keep your entire development cycle locally on your development machine without needing the complexity of a separate VM or machine. It is, however, worth covering how to configure SSH on Bash/WSL in a bit more detail.

Install WSL

If you’ve not already done so, you’ll first need to enable developer mode and install WSL itself. This only takes a few seconds, but does require a reboot.

When you run Bash for the first time, you’ll need to follow the on-screen instructions to accept Canonical’s license, download the Ubuntu image, and install it on your machine. You’ll then need to choose a UNIX username and password. This needn’t be the same as your Windows login username and password if you prefer. You’ll only need to enter the UNIX username and password in the future when you use sudo to elevate a command, or to login “remotely” (see below).

Setting up WSL

Now you’ll have a vanilla Ubuntu instance on your machine within which you can run any ELF-64 Linux binary, including those that you download using apt-get!

Before we continue, let’s install the build-essential package so you have some key developer tools, including the GNU C++ compiler, linker, etc.:

$ sudo apt install -y build-essential

Install & configure SSH

Let’s use the ‘apt’ package manager to download and install SSH on Bash/WSL:

$ sudo apt install -y openssh-server

Before we start SSH, you will need to configure it, but you only need to do this once. Run the following command to edit the sshd config file:

$ sudo nano /etc/ssh/sshd_config

Scroll down to the “PasswordAuthentication” setting and make sure it’s set to “yes”:

Editing sshd_config in nano

Hit CTRL + X to exit, then Y to save.

Now generate SSH keys for the SSH instance:

$ sudo ssh-keygen -A

Start SSH before connecting from Visual Studio:

$ sudo service ssh start

*Note: You will need to do this every time you start your first Bash console. As a precaution, WSL currently tears down all Linux processes when you close your last Bash console!

Install & configure Visual Studio

For the best experience, we recommend installing Visual Studio 2017 RC (or later) to use Visual C++ for Linux. Be sure to select the Visual C++ for Linux workload during the installation process.

Visual Studio installer with Linux C++ workload

Now you can connect to the Windows Subsystem for Linux from Visual Studio by going to Tools > Options > Cross Platform > Connection Manager. Click Add and enter “localhost” for the hostname along with your WSL user/password.

VS Connection Manager with WSL

Now you can use this connection with any of your existing C++ Linux projects or create a new Linux project under File > New Project > Visual C++ > Cross Platform > Linux.

In the future, we’ll publish a more detailed post showing the advantages of working with WSL, particularly leveraging the compatibility of binaries built using the Linux workload to deploy on remote Linux systems.

For now, know that, starting with Windows 10 Creators Update, Bash on the Windows Subsystem for Linux (Bash/WSL) is a real Linux system from the perspective of Visual Studio.

55 countries with Real Time Traffic in Bing Maps


The Bing Maps team is happy to announce that real-time traffic flow data is now available in 55 countries. Bing Maps uses traffic flow data in two ways. The first is to provide real-time and predictive route calculations. The second is a traffic overlay of color-coded roads on the map to indicate the real-time flow of traffic. All of the interactive Bing Maps controls provide an option to overlay real-time traffic flow data on top of the map. The Bing Maps Version 8 Web Control (V8) provides this functionality through the Traffic module.



Point-based traffic incident data, such as car accidents and construction, is available in 35 countries. In addition to accessing traffic incident data through the interactive Bing Maps controls, this data can also be accessed directly through the Bing Maps REST Services Traffic API and through the Traffic Data Source in the Bing Spatial Data Services.



A complete list of countries in which traffic data is available can be found here.

If you have any questions or feedback about Bing Maps, please let us know on the Bing Maps forums or visit the Bing Maps website to learn more about the Bing Maps platform.

- Bing Maps Team


MARS future looking sweeter with Microsoft technology


Whether it’s grabbing gum at checkout, satisfying late-afternoon hunger with a Milky Way®, or even buying pet food for that unconditionally loving best friend, we’ve all been surrounded by MARS products and might not even know it! As a century-old family-owned business, MARS has certainly found its recipe for success. The company has made $35 billion in global sales by putting people first in everything it does. With 60 brands across six segments – food, drinks, chocolate, confectionery (gum), pet care and symbioscience – the company is more than just sweet treats. MARS pursues a long-term vision committed to product, technology and workplace innovation based on the company’s Five Principles – Quality, Responsibility, Mutuality, Efficiency and Freedom.

MARS has long valued a workplace that encourages mutuality and open communication among all Associates. As MARS looks at new products, services and business units to accelerate its growth, it knows a digital transformation would not only bolster its already collaborative and productive work environment but also attract and retain employees who expect a modern workplace. MARS is deploying Windows 10 to its more than 80,000 associates who work across 400 locations in 78 countries. Windows 10, along with Office 365 and Microsoft Azure, is enabling MARS to digitally transform how its Associates not only work with each other, but how they get work done.

“At MARS, we meet our goals,” says David Boersma, Senior Manager for End User Technologies, MARS, Incorporated. “This company will continue to accelerate its growth organically and through acquisition and we’re using Windows 10 to build the flexibility and capabilities we need to get there.”

— David Boersma, Senior Manager for End User Technologies, MARS, Incorporated

MARS has deployed Windows 10 to help reduce the cost and time associated with large-scale deployments. For a company that spans multiple geographies, previous upgrades had taken up to four years and cost $4 million. According to Boersma, “Windows 10, on the other hand, has been substantially quicker and with less cost.” The “Windows as a Service” model has allowed MARS to skip the lengthy upgrade cycle, which has accelerated Windows 10 deployment. Originally, MARS set out to deploy Windows 10 to 5,000 Associates in 12 months. However, it has already exceeded its goal by 110 percent by deploying Windows 10 to 12,500 Associates, and it is now looking to scale to all Associates by 2018.

By embracing the “Windows as a Service” model, MARS has been able to refocus the time and energy of its IT department from chasing the next operating system update to strengthening its collaborative culture through Office 365, including Yammer and Skype for Business.

“Rather than tying up investment and time to just get through the next product release, we can focus on enhancing key aspects of our culture like mutuality at a digital level, across divisions, borders and time zones – so we can preserve what is special about MARS and help our Associates be more productive and agile.”

— Jonathan Chong, Digital Workplace and Corporate Systems Director at MARS, Incorporated

MARS associates collaborate while using their Windows 10 devices.

This highly collaborative culture has helped create a place where many Associates stay for 25 years. With its new goals set, however, MARS wanted to digitally transform not only how its Associates work but also how it attracts the next generation of workers.

“We invest a lot of money in attracting and retaining talent. These people now expect a modern, productive work environment and this includes the type of devices we provide and the tools they use at work every day. Products like Windows 10, Skype for Business and Yammer support our engagement and talent management strategies.”

— Paul L’Estrange, CTO and Vice President of Core Services, MARS, Incorporated

MARS Associates use Skype for Business to collaborate.

MARS is seeing the positive impact Skype for Business has had on its culture and on Associate work-life balance. With Skype for Business, MARS has reduced travel, as Associates now use it to tackle projects and resolve issues in real time.

“We like a lot of human interaction and we’ve been able to use Skype for Business to help us increase that level of collaboration but not necessarily have the person fly half way around the planet. We’ve been able to reduce our overall travel which has been great from a cost perspective. But, we’ve helped positively impact people’s work life balance so they can spend more time at home yet they can still do the business that they need to do around the globe.”

— Joe Carlin, Technology Service Delivery Director, MARS, Incorporated

MARS is also seeing Associates quickly embrace Yammer, as another way to connect and share knowledge. Vittorio Cretella, CIO of MARS, Incorporated says Yammer is “taking down those walls around knowledge and making it accessible which is very valuable.”


MARS associate uses Windows 10 devices to conduct store walk-through.

Windows 10 gives MARS flexibility in the types of devices its Associates and Senior Leaders can use to get work done, whether at home, in the office or on the road. Senior Leaders and Associates can now use everything from Surface Pro to OEM devices to fit their unique work styles. Previously, an Associate conducting a store walk-through had to juggle multiple devices and paper to note any product display issues. Once the Associate returned to the office, they’d connect with their peers to resolve the issue. This process could take several days to a week, depending on their travel. Today, many issues can be resolved on the spot.

“By using Windows 10, MARS associates now finish things on the road instead of waiting until they get back to the office. For example, field associates use their Surface Pro devices in store walk-throughs, and if necessary, use Office 365 to connect with other team members and resolve display issues in one day instead of a week.”

— Joe Carlin, Technology Service Delivery Director, MARS, Incorporated

With a more on-the-go workforce, MARS has strengthened security and data protection for its Associates who are on their smartphones, tablets or their PCs from the office, at home or on the road. As a privately-owned company, MARS takes security seriously especially when it comes to its intellectual property. MARS has enabled Windows security features like BitLocker to encrypt data and Windows Defender to provide a strong layer of security and authentication.

To ensure MARS can do business securely in the cloud and fully realize its digital transformation, MARS has only just begun its journey with Microsoft Azure to enable greater efficiency and reliability. Currently, MARS has two global datacenters that run 85 percent of its business; one datacenter is 25 years old and requires significant cost and time to maintain. MARS is now testing a hybrid cloud model and recently moved its first live production application, a retail app, into Microsoft Azure, along with about 150 other workloads, with the expectation of expanding to 500-600 workloads over the next year.

We’re excited to see that with Windows 10, Microsoft Azure, and Office 365, MARS is able to foster a modern and productive workplace that will support the company’s ambitious growth strategy. For more detailed information about MARS’ deployment of Microsoft technologies, please check out the case study and video here.

The post MARS future looking sweeter with Microsoft technology appeared first on Windows For Your Business.

JSON data in clustered column store indexes


Clustered column store indexes (CCI) in SQL Server vNext and Azure SQL Database support LOB types like NVARCHAR(MAX), which allows you to store strings of any size, including JSON documents of any size. With CCI you can get 3x compression and query speedup compared to regular tables, without any application or query rewrites. In this post we will look at one experiment that compares the row-store and column store formats used to store JSON collections.

Why would you store JSON documents in CCI?

Clustered column store indexes are a good choice for analytics and storage – they provide high compression of data and faster analytic queries. In this post, we will see what benefits you can get from CCI when you store JSON documents.

I will assume that we have a single-column table with CCI that will contain JSON documents:

create table deals (
      data nvarchar(max),
      index cci clustered columnstore
);

This is equivalent to the collections that you might find in a classic NoSQL database, because they store each JSON document as a single entity and optionally create indexes on these documents. The only difference is the CLUSTERED COLUMNSTORE index on this table, which provides the following benefits:

  1. Data compression – CCI uses various techniques to analyze your data and choose optimal compression algorithms to compress it.
  2. Batch-mode analytics – queries executed on CCI process rows in batches of 100 to 900 rows, which might be much faster than row-mode execution.

In this experiment I’m using 6.000.000 JSON documents exported from the TPCH database. Rows from the TPCH database are formatted as JSON documents using the FOR JSON clause and exported into tables with and without CCI. The format of the JSON documents used in this experiment is described in the paper TPC-H applied to MongoDB: How a NoSQL database performs, and is shown in the following picture:

tpch-json

Ref: TPC-H applied to MongoDB: How a NoSQL database performs

JSON documents are stored in a standard table with a single column and in an equivalent table with CCI, and performance is compared.
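
For context, here is a minimal sketch of what such an export step might look like. The TPC-H table and column names (lineitem, orders, customer, l_extendedprice, c_mktsegment, and so on) are assumptions based on the standard TPC-H schema, not the exact script used in the experiment; the dotted aliases in FOR JSON PATH produce the nested order/customer objects that the queries later in this post rely on.

-- Hedged sketch of the export: each source row becomes one JSON document
INSERT INTO deals (data)
SELECT (
    SELECT l.l_extendedprice AS extendedprice,
           o.o_orderdate     AS [order.orderdate],
           c.c_mktsegment    AS [order.customer.mktsegment]
    FOR JSON PATH, WITHOUT_ARRAY_WRAPPER
)
FROM lineitem AS l
JOIN orders   AS o ON o.o_orderkey = l.l_orderkey
JOIN customer AS c ON c.c_custkey  = o.o_custkey;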

Compression

First, we can check the compression ratio that we get when we store JSON in a collection with CCI. We can execute the following query to get the size of the table:

exec sp_spaceused 'deals'

The results returned for the tables with and without CCI are:

  • Table with CCI 6.165.056 KB
  • Table without CCI 23.997.744 KB

The compression ratio in this case is 3.9x. Although CCI is optimized for scalar data compression, you might also get good compression on JSON data.
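
If you want to dig a bit deeper than sp_spaceused, you can look at the columnstore row groups directly. The following is a minimal sketch, assuming the deals table from above; sys.dm_db_column_store_row_group_physical_stats reports the compressed size and row count of each row group.

-- Per-row-group view of the compressed JSON data in the deals table
SELECT rg.row_group_id,
       rg.state_desc,
       rg.total_rows,
       rg.size_in_bytes / 1024 AS size_kb
FROM sys.dm_db_column_store_row_group_physical_stats AS rg
WHERE rg.object_id = OBJECT_ID('deals')
ORDER BY rg.row_group_id;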

JSON analytic

The JSON functions available in SQL Server 2016 and Azure SQL Database enable you to parse JSON text and extract values from it. You can use these values in any part of a SQL query. An example query that calculates the average extended price grouped by marketing segment is shown in the following sample:

select JSON_VALUE(data, '$.order.customer.mktsegment'), avg(CAST(JSON_VALUE(data, '$.extendedprice') as float))
from deals
group by JSON_VALUE(data, '$.order.customer.mktsegment')

Instead of joining different tables, you can just change the paths in the second parameter of the JSON_VALUE function to select the fields from the JSON that you want to analyze.
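
As an illustration, here is one possible variation of the same pattern. The $.order.orderdate path is an assumption based on the TPC-H JSON mapping shown earlier, so adjust it to whatever your documents actually contain.

-- Average extended price per order year, again without any joins
SELECT YEAR(CAST(JSON_VALUE(data, '$.order.orderdate') AS date)) AS order_year,
       AVG(CAST(JSON_VALUE(data, '$.extendedprice') AS float))   AS avg_price
FROM deals
GROUP BY YEAR(CAST(JSON_VALUE(data, '$.order.orderdate') AS date));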

In this experiment we have five simple analytic queries that calculate the average value of a price column from the JSON, grouped by other JSON values (the queries are similar to the one above). The same queries are executed both on the row-store table and on the table with CCI on an Azure SQL Database P11 instance, and the results are shown below:

Query    Column store (sec)    Row-store (sec)
Q1       11                    18
Q2       15                    33
Q3       17                    36
Q4       18                    39
Q5       21                    51

Depending on the query, you might get a 2-3x speedup in analytics on JSON data.

Conclusion

CLUSTERED COLUMNSTORE indexes provide compression and analytic query speed-up. Without any table changes or query rewrites, you can get up to 4x compression and a 3x speed-up on your queries.

SQL Server 2016 SP1 and higher versions enable you to create COLUMNSTORE indexes on any edition (even in the free edition), although in this version there is a size constraint of 8 KB on JSON documents. Within that constraint, you can use COLUMNSTORE indexes on your JSON data and get performance improvements without any additional query rewrites.

Exporting tables from SQL Server in json line-delimited format using BCP.exe


Line-delimited JSON is a common format used to exchange data between systems and to stream JSON data. SQL Server can be used to export the content of tables into line-delimited JSON format.

Line-delimited JSON is a variation of the JSON format where each JSON object is stored on its own line, delimited with new-line characters, e.g.:

{"ProductID":15,"Name":"Adjustable Race","Price":75.9900,"Quantity":50}
{"ProductID":16,"Name":"Bearing Ball","Color":"Magenta","Size":"62","Price":15.9900,"Quantity":90}
{"ProductID":17,"Name":"BB","Color":"Magenta","Size":"62","Price":28.9900,"Quantity":80}
{"ProductID":18,"Name":"Blade","Color":"Magenta","Size":"62","Price":18.0000,"Quantity":45}

Although this is not a valid JSON format, many systems use it to exchange data.

One advantage of the line-delimited JSON format compared to standard JSON is that you can append new JSON objects at the end of the file without removing the closing array bracket, as you would have to with standard JSON.

In this post I will show you how to export the content of a table shown in the following listing in line-delimited JSON format:

CREATE TABLE Product (
 ProductID int IDENTITY PRIMARY KEY,
 Name nvarchar(50) NOT NULL,
 Color nvarchar(15) NULL,
 Size nvarchar(5) NULL,
 Price money NOT NULL,
 Quantity int NULL
)

If you want to select all rows from the table in JSON format, you can use standard FOR JSON clause:

select ProductID, Name, Color, Size, Price, Quantity
from Product for json path

This query will return all rows as JSON objects, separated by commas and wrapped in [ and ].
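
For example, with the sample rows shown earlier, the output is a single JSON array along these lines (values taken from the sample data above, truncated for readability):

[{"ProductID":15,"Name":"Adjustable Race","Price":75.9900,"Quantity":50},{"ProductID":16,"Name":"Bearing Ball","Color":"Magenta","Size":"62","Price":15.9900,"Quantity":90}, ... ]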

A small modification of the query will enable you to return one object per row:

select (select ProductID, Name, Color, Size, Price, Quantity for json path, without_array_wrapper)
from Product

You can use standard bcp.exe tool to generate line delimited JSON files using this query:

bcp "select (select ProductID, Name, Color, Size, Price, Quantity for json path, without_array_wrapper) from Product" queryout .\products.json  -c -S ".\SQLEXPRESS" -d ProductCatalog -T

Note that I’m using the queryout option because I have specified a T-SQL query that will extract the data, and the -c option to generate output in character format. This option does not prompt for each field; it uses char as the storage type, without prefixes, and \r\n (newline) as the row terminator.

Running this bcp command will generate a line-delimited JSON file containing one JSON object for every row in the table.

 
