Channel: TechNet Technology News

Released: SQL Server Data Tools 17.0 RC 2


SQL Server Data Tools 17.0 Release Candidate 2 (RC 2) has just been published. You can download and install from here: https://go.microsoft.com/fwlink/?LinkID=835150.

If you’re evaluating new enhancements in Analysis Services Tabular 1400 models, be sure to download this latest version because it includes several important fixes, particularly in the modern Get Data experience.

Most noteworthy is the addition of a menu bar to the Query Editor, as shown in the following screenshot. The purpose of this menu is to provide quick and easy access to the same functions that Microsoft Excel and Power BI Desktop provide through the Query Editor ribbon.

[Screenshot: the Query Editor menu bar]

Feedback received through email via the SSASPrev alias made it clear that the Query Editor toolbar alone was not intuitive enough. See also the conversation in response to the article Introducing a Modern Get Data Experience for SQL Server vNext on Windows CTP 1.1 for Analysis Services. The ideal solution would be a ribbon in SSDT Tabular that mirrors the ribbon in Power BI Desktop. That way, there would be no friction switching back and forth between Power BI Desktop and SSDT Tabular. Unfortunately, the Visual Studio shell does not provide a ribbon infrastructure, which required us to take a different approach.

While the Query Editor menu bar isn’t a ribbon, it is still a very useful user interface element. In fact, you might find that the menu arranges the available commands in a clear, logical order and helps you discover the actions you can perform. If you want to work with commands that act on a query, look at the Query menu. If you want to remove rows or keep a range of rows, the Rows menu has you covered. Want to add or remove columns? You get the idea.

Moreover, you can work with keyboard shortcuts! Want to keep the top 10 rows in a table? Press Alt+R, then K, enter 10 in the Keep Top Rows dialog box, and then press Enter. Want to remove a selected column? Press Alt+C, then R, and the job is done. Want to display the Advanced Editor? Alt+V, E. And simply press the Alt key to discover all the available shortcut combinations. In the following screenshot, you can see that the sequence to parse the time values in a column would be Alt+T, M, T, and then P. This may not be the most convenient sequence, but it comes in handy if you find yourself performing a specific action very frequently.

[Screenshot: the Query Editor menu showing keyboard shortcut keys]

Next on our list is to implement support for shared queries, functions, and parameters, and then to enable as many data sources as possible for close parity with Power BI Desktop. So stay tuned for forthcoming releases in the coming months, keep sending us your suggestions, and report any issues to SSASPrev at Microsoft.com. Or use any other available communication channel, such as UserVoice or the MSDN forums. You can influence the evolution of the Analysis Services connectivity stack to the benefit of all our customers.


Fun fact: Quick Create handles emoji for virtual machine names and splices them into simple Unicode


Fun blog post today.  I was playing with Windows 10’s on-screen keyboard and discovered the emoticons section and this awesome set of cat emoji.

[Screenshot: the cat emoji in the Windows 10 on-screen keyboard]

WindowsKitty definitely needed that to be a VM name.  It even has a laptop!  Luckily, it turns out that the new Quick Create we added works really well with this whole set of glyphs.

[Screenshot: Quick Create with an emoji VM name]

Not only does Quick Create support all of these crazy Windows 10 emoji, it also splices them into simpler Unicode representations for Hyper-V Manager and the file system.  I really enjoyed seeing what the simplified Unicode would be – in this case, cat + computer.
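For the curious, here is a minimal C# sketch of why the “splicing” falls out naturally. It is an illustration only, not Quick Create’s actual code, and it assumes the glyph in the screenshots is the cat + zero-width-joiner (ZWJ) + laptop sequence: enumerating the code points and dropping the joiner leaves the simpler cat and computer emoji.

```csharp
// Illustration only (not Quick Create's implementation); requires .NET Core 3.0+ for System.Text.Rune.
// Assumes the ninja-cat glyph is the ZWJ sequence U+1F431 (cat face) + U+200D + U+1F4BB (laptop).
using System;

class EmojiSplice
{
    static void Main()
    {
        string vmName = "\U0001F431\u200D\U0001F4BB"; // cat + ZWJ + laptop

        foreach (System.Text.Rune rune in vmName.EnumerateRunes())
        {
            if (rune.Value == 0x200D)   // drop the zero-width joiner
                continue;

            // Prints U+1F431 (cat) and U+1F4BB (computer) -- the "simplified Unicode" name.
            Console.WriteLine($"U+{rune.Value:X} {rune}");
        }
    }
}
```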

[Screenshot: Hyper-V Manager showing the simplified cat + computer name]

Which raises the question: how do emoji VM names look in PowerShell?

[Screenshot: the emoji VM name in PowerShell]

Unfortunately, not so good – maybe someday.  If you don’t need PowerShell scripting (or love referencing VMs by GUID), maybe emoji names are for you.  It makes me smile, at least.

For further reading, check out this blog post about how Windows 10 rethinks how we treat emoji.

Have fun!

Sarah

Announcing GVFS (Git Virtual File System)


Here at Microsoft we have teams of all shapes and sizes, and many of them are already using Git or are moving that way. For the most part, the Git client and Team Services Git repos work great for them. However, we also have a handful of teams with repos of unusual size! For example, the Windows codebase has over 3.5 million files and is over 270 GB in size. The Git client was never designed to work with repos with that many files or that much content. You can see that in action when you run “git checkout” and it takes up to 3 hours, or even a simple “git status” takes almost 10 minutes to run. That’s assuming you can get past the “git clone”, which takes 12+ hours.

Even so, we are fans of Git, and we were not deterred. That’s why we’ve been working hard on a solution that allows the Git client to scale to repos of any size. Today, we’re introducing GVFS (Git Virtual File System), which virtualizes the file system beneath your repo and makes it appear as though all the files in your repo are present, but in reality only downloads a file the first time it is opened. GVFS also actively manages how much of the repo Git has to consider in operations like checkout and status, since any file that has not been hydrated can be safely ignored. And because we do this all at the file system level, your IDEs and build tools don’t need to change at all!

In a repo that is this large, no developer builds the entire source tree. Instead, they typically download the build outputs from the most recent official build, and only build a small portion of the sources related to the area they are modifying. Therefore, even though there are over 3 million files in the repo, a typical developer will only need to download and use about 50-100K of those files.

With GVFS, this means that they now have a Git experience that is much more manageable: clone now takes a few minutes instead of 12+ hours, checkout takes 30 seconds instead of 2-3 hours, and status takes 4-5 seconds instead of 10 minutes. And we’re working on making those numbers even better. (Of course, the tradeoff is that their first build takes a little longer because it has to download each of the files that it is building, but subsequent builds are no slower than normal.)

While GVFS is still in progress, we’re excited to announce that we are open sourcing the client code at https://github.com/Microsoft/gvfs. Feel free to give it a try, but please be aware that it still relies on a pre-release file system driver. The driver binaries are also available for preview as a NuGet package, and your best bet is to play with GVFS in a VM and not in any production environment.

In addition to the GVFS sources, we’ve also made some changes to Git to allow it to work well on a GVFS-backed repo, and those sources are available at https://github.com/Microsoft/git. And lastly, GVFS relies on a protocol extension that any service can implement; the protocol is available at https://github.com/Microsoft/gvfs/blob/master/Protocol.md.

Hyper-V vs. KVM for OpenStack performance


During the development of Windows Server 2016, we spent a lot of time working on delivering the best core performance as a cloud platform.  At the same time, the Cloudbase team has spent a lot of time optimizing the performance of the Hyper-V OpenStack drivers as part of their work on the Mitaka release of OpenStack.

Just recently, they sat down and did a series of OpenStack benchmarks that compared OpenStack on KVM to OpenStack on Windows Server 2012 R2 and OpenStack on Windows Server 2016.

You can read about it in this series of blog posts:

Hopefully, you will not be surprised to hear that Windows Server 2016 wins the performance race in this comparison 🙂

Cheers,
Ben

Scaling Git (and some back story)


A couple of years ago, Microsoft made the decision to begin a multi-year investment in revitalizing our engineering system across the company.  We are a big company with tons of teams – each with their own products, priorities, processes and tools.  There are some “common” tools but also a lot of diversity – with VERY MANY internally developed one-off tools (by team I kind of mean division – thousands of engineers).

There are a lot of downsides to this:

  1. Lots of redundant investments in teams building similar tooling
  2. Inability to fund any of the tooling to “critical mass”
  3. Difficulty for employees to move around the company due to different tools and processes
  4. Difficulty in sharing code across organizations
  5. Friction for new hires getting started due to an overabundance of “MS-only” tools
  6. And more…

We set out on an effort we call the “One Engineering System” or “1ES”.  Just yesterday we had a 1ES day where thousands of engineers gathered to celebrate the progress we’ve made, to learn about the current state and to discuss the path forward.  It was a surprisingly good event.

Aside… You might be asking yourself – hey, you’ve been telling us for years Microsoft uses TFS, have you been lying to us?  No, I haven’t.  Over 50K people have regularly used TFS but they don’t always use it for everything.  Some use it for everything.  Some use only work item tracking.  Some only version control.  Some build …  We had internal versions (and in many cases more than one) of virtually everything TFS does and someone somewhere used them all.  It was a bit of chaos, quite honestly.  But, I think I can safely say, when aggregated and weighed – TFS had more adoption than any other set of tools.

I also want to point out that, when I say engineering system here, I am using the term VERY broadly.  It includes but is not limited to:

  1. Source control
  2. Work management
  3. Builds
  4. Release
  5. Testing
  6. Package management
  7. Telemetry
  8. Flighting
  9. Incident management
  10. Localization
  11. Security scanning
  12. Accessibility
  13. Compliance management
  14. Code signing
  15. Static analysis
  16. and much, much more

So, back to the story.  When we embarked on this journey, we had some heated debates about where we were going, what to prioritize, etc.  You know, developers never have opinions. 🙂  There’s no way to address everything at once without failing miserably, so we agreed to start by tackling 3 problems:

  • Work planning
  • Source control
  • Build

I won’t go into detailed reasons other than to say those are foundational and so much else integrates with them, builds on them etc. that they made sense.  I’ll also observe that we had a HUGE amount of pain around build times and reliability due to the size of our products – some hundreds of millions of lines of code.

Over the intervening time those initial 3 investments have grown and, to varying degrees, the 1ES effort touches almost every aspect of our engineering process.

We put some interesting stakes in the ground.  Some included:

The cloud is the future – Much of our infrastructure and tools were hosted internally (including TFS).  We agreed that the cloud is the future – mobility, management, evolution, elasticity, all the reasons you can think of.  A few years ago, that was very controversial.  How could Microsoft put all our IP in the cloud?  What about performance?  What about security?  What about reliability?  What about compliance and control?  What about…  It took time, but we eventually got a critical mass comfortable with the idea, and as the years have passed, that decision has only made more and more sense.  Everyone is excited about moving to the cloud.

1st party == 3rd party – This is an expression we use internally that means, as much as possible, we want to use what we ship and ship what we use.  It’s not 100% and it’s not always concurrent, but it’s the direction – the default assumption, unless there’s a good reason to do something else.

Visual Studio Team Services is the foundation – We made a bet on Team Services as the backbone.  We need a fabric that ties our engineering system together – a hub from which you learn about and reach everything.  That hub needs to be modern, rich, extensible, etc.  Every team needs to be able to contribute and share their distinctive contributions to the engineering system.  Team Services fits the bill perfectly.  Over the past year, usage of Team Services within Microsoft has grown from a couple of thousand to over 50,000 committed users.  Like with TFS, not every team uses it for everything yet, but momentum in that direction is strong.

Team Services work planning – Having chosen Team Services, it was pretty natural to choose the associated work management capabilities.  We’ve on-boarded teams like the Windows group, with many thousands of users and many millions of work items, into a single Team Services account.  We had to do a fair amount of performance and scale work to make that viable, BTW.  At this point, virtually every team at Microsoft has made this transition and all of our engineering work is being managed in Team Services.

Team Services Build orchestration & CloudBuild – I’m not going to drill on this topic too much because it’s a mammoth post in and of itself.  I’ll summarize it by saying we’ve chosen the Team Services Build service as our build orchestration system and the Team Services Build management experience as our UI.  We have also built a new “make engine” (that we don’t yet ship) for some of our largest code bases that does extremely high-scale and fine-grained caching, parallelization and incrementality.  We’ve seen multi-hour builds sometimes drop to minutes.  More on this in a future post at some point.

After much backstory, on to the meat 🙂

Git for source control

Maybe the most controversial decision was what to use for source control.  We had an internal source control system called Source Depot that virtually everyone used in the early 2000’s.  Over time, TFS and its Team Foundation Version Control solution won over much of the company but never made progress with the biggest teams – like Windows and Office.  Lots of reasons I think – some of it was just that the cost for such large teams to migrate was extremely high and the two systems (Source Depot and TFS) weren’t different enough to justify it.

But source control systems generate intense loyalty – more so than just about any other developer tool.  So the argument between TFVC, Source Depot, Git, Mercurial, and more was ferocious and, quite honestly, we made a decision without ever getting consensus – it just wasn’t going to happen.  We chose to standardize on Git for many reasons.  Over time, that decision has gotten more and more adherents.

There were many arguments against choosing Git but the most concrete one was scale.  There aren’t many companies with code bases the size of some of ours.  Windows and Office, in particular (but there are others), are massive.  Thousands of engineers, millions of files, thousands of build machines constantly building it – quite honestly, it’s mind boggling.  To be clear, when I refer to Windows in this post, I’m actually painting a very broad brush – it’s Windows for PC, Mobile, Server, HoloLens, Xbox, IoT, and more.  And Git is a distributed version control system (DVCS).  It copies the entire repo and all its history to your local machine.  Doing that with Windows is laughable (and we got laughed at plenty).  TFVC and Source Depot had both been carefully optimized for huge code bases and teams.  Git had *never* been applied to a problem like this (or probably even within an order of magnitude of this) and many asserted it would *never* work.

The first big debate was – how many repos do you have – one for the whole company at one extreme or one for each small component?  A big spectrum.  Git is proven to work extremely well for a very large number of modest repos so we spent a bunch of time exploring what it would take to factor our large codebases into lots of tenable repos.  Hmm.  Ever worked in a huge code base for 20 years?  Ever tried to go back afterwards and decompose it into small repos?  You can guess what we discovered.  The code is very hard to decompose.  The cost would be very high.  The risk from that level of churn would be enormous.  And, we really do have scenarios where a single engineer needs to make sweeping changes across a very large swath of code.  Trying to coordinate that across hundreds of repos would be very problematic.

After much hand wringing we decided our strategy needed to be “the right number of repos based on the character of the code”.  Some code is separable (like microservices) and is ideal for isolated repos.  Some code is not (like Windows core) and needs to be treated like a single repo.  And, I want to emphasize, it’s not just about the difficulty of decomposing the code.  Sometimes, in big highly related code bases, it really is better to treat the codebase as a whole.  Maybe someday I’ll tell the story of Bing’s effort to componentize the core Bing platform into packages and the versioning problems that caused for them.  They are currently backing away from that strategy.

That meant we had to embark upon scaling Git to work on codebases that are millions of files, hundreds of gigabytes and used by thousands of developers.  As a contextual side note, even Source Depot did not scale to the entire Windows codebase.  It had been split across 40+ depots so that we could scale it out but a layer was built over it so that, for most use cases, you could treat it like one.  That abstraction wasn’t perfect and definitely created some friction.

We started down at least 2 failed paths to scale Git.  Probably the most extensive one was to use Git submodules to stitch together lots of repos into a single “super” repo.  I won’t go into details but after 6 months of working on that we realized it wasn’t going to work – too many edge cases, too much complexity and fragility.  We needed a bulletproof solution that would be well supported by almost all Git tooling.

Close to a year ago we reset and focused on how we would actually get Git to scale to a single repo that could hold the entire Windows codebase (including estimates of growth and history) and support all the developers and build machines.

We tried an approach of “virtualizing” Git.  Normally Git downloads *everything* when you clone.  But what if it didn’t?  What if we virtualized the storage under it so that it only downloaded the things you need?  Then a clone of a massive 300 GB repo becomes very fast.  As I perform Git commands or read/write files in my enlistment, the system seamlessly fetches the content from the cloud (and then stores it locally so future accesses to that data are all local).  The one downside to this is that you lose offline support.  If you want that, you have to “touch” everything to manifest it locally, but you don’t lose anything else – you still get the 100% fidelity Git experience.  And for our huge code bases, that was OK.

It was a promising approach and we began to prototype it.  We called the effort Git Virtual File System or GVFS.  We set out with the goal of making as few changes to git.exe as possible.  For sure we didn’t want to fork Git – that would be a disaster.  And we didn’t want to change it in a way that the community would never take our contributions back either.  So we walked a fine line doing as much “under” Git with a virtual file system driver as we could.

The file system driver basically virtualizes 2 things:

  1. The .git folder – This is where all your pack files, history, etc. are stored.  It’s the “whole thing” by default.  We virtualized this to pull down only the files we needed when we needed them.
  2. The “working directory” – the place you go to actually edit your source, build it, etc.  GVFS monitors the working directory and automatically “checks out” any file that you touch making it feel like all the files are there but not paying the cost unless you actually access them.

As we progressed, as you’d imagine, we learned a lot.  Among other things, we learned the Git server has to be smart.  It has to pack the Git files in an optimal fashion so that it doesn’t have to send more to the client than absolutely necessary – think of it as optimizing locality of reference.  So we made lots of enhancements to the Team Services/TFS Git server.  We also discovered that Git has lots of scenarios where it touches stuff it really doesn’t need to.  This never really mattered before because it was all local and used for modestly sized repos, so it was fast – but when touching it means downloading it from the server or scanning 6,000,000 files, uh oh.  So we’ve been investing heavily in performance optimizations to Git.  Many of them also benefit “normal” repos to some degree, but they are critical for mega repos.  We’ve been submitting many of these improvements to the Git OSS project and have enjoyed a good working relationship with them.

So, fast forward to today.  It works!  We have all the code from 40+ Windows Source Depot servers in a single Git repo hosted on VS Team Services – and it’s very usable.  You can enlist in a few minutes and do all your normal Git operations in seconds.  And, for all intents and purposes, it’s transparent.  It’s just Git.  Your devs keep working the way they work, using the tools they use.  Your builds just work.  Etc.  It’s pretty frick’n amazing.  Magic!

As a side effect, this approach also has some very nice characteristics for large binary files.  It doesn’t extend Git with a new mechanism like LFS does, no turds, etc.  It allows you to treat large binary files like any other file, but it only downloads the blobs you actually ever touch.

Git Merge

Today, at the Git Merge conference in Brussels, Saeed Noursalehi shared the work we’ve been doing – going into excruciating detail on what we’ve done and what we’ve learned.  At the same time, we open sourced all our work.  We’ve also included some additional server protocols we needed to introduce.  You can find the GVFS project and the changes we’ve made to Git.exe in the Microsoft GitHub organization.  GVFS relies on a new Windows filter driver (the moral equivalent of the FUSE driver in Linux) and we’ve worked with the Windows team to make an early drop of that available so you can try GVFS.  You can read more and get more resources on Saeed’s blog post.  I encourage you to check it out.  You can even install it and give it a try.

While I’ll celebrate that it works, I also want to emphasize that it is still very much a work in progress.  We aren’t done with any aspect of it.  We think we have proven the concept but there’s much work to be done to make it a reality.  The point of announcing this now and open sourcing it is to engage with the community to work together to help scale Git to the largest code bases.

Sorry for the long post but I hope it was interesting.  I’m very excited about the work – both on 1ES at Microsoft and on scaling Git.

Brian

 

 

 

.GAME’s Item System – Part 1 Challenge Explained


Last week’s episode of .GAME started preparations for an item system that we’ll be using for equipment and merchant mechanics. The episode covered some Unity fundamentals, such as methods of object rotation, tags, layers, sorting layers and filtering. We wrapped up with a development challenge, the scenario of which was: When the player moves towards a merchant, the merchant should rotate towards the player.

This post goes over the design considerations that fed into the solution I came up with. Of course, this is by no means the only way you can solve the challenge. The result that I ended up with was:

[Animation: the merchant rotating to face the player as they approach]

The Design

I left this challenge a bit vague so that you can interpret what rotating the merchant means to you. As I started to design my solution, I had to consider what it meant to me. Sure, I could just make the merchant rotate the second that the player clicks on them – but was that the experience I wanted? It certainly didn’t feel like an immersive experience for the player.

If triggered right when the player clicks on the merchant, the merchant will rotate to face the player right away. This is problematic when you consider the distance a character might be from the merchant. If the player was far away, how would the merchant know to turn to them? The player’s character could have yelled the merchant’s name, but we’re not showing anything on screen to indicate that to the player.

Now consider a scenario where the player is not only far away, but has several obstacles that they need to avoid, making them ping-pong around the scene before reaching the merchant. If triggered right away, the merchant would continue to evaluate and adjust to the player’s location. In other words, they would be constantly rotating around to face the player as the player was attempting to reach them.

Next I considered what a merchant would do when the player moved away from them. If it was a real person, they’d probably go back to doing whatever they were doing before the player arrived. From a mechanics standpoint, this means that it’d be weird if the merchant kept rotating to look at the player or continued to face the direction that they were looking when the player left.

Considering this, I added a few additional requirements to the scenario:

  1. The merchant must rotate to look at the player’s character only when they are in a certain range.
  2. The merchant should restore its rotation back to its default position when the player leaves.

The Solution

I used colliders to solve the distance requirement. Colliders are components that can be used for detecting physical collisions. If you want to perform zone-type detection, you can enable the isTrigger property. Enabling the isTrigger property causes the collider to be ignored by physics calculations and instead detect when an object has passed through it. Using a trigger, we can detect when the player object has entered the merchant’s zone and tell the merchant to rotate. We can also detect when the player object has left the zone and use that to tell the merchant to return to its default position.

Setting up the collider

The goal is to setup a circular zone around the merchant that triggers when the player enters it, like so:

[Screenshot: the sphere collider zone around the merchant in the Scene view]

To do this, you add a Sphere Collider component to the root of the Merchant game object. Set the Radius to 7 and isTrigger to true.

[Screenshot: Sphere Collider component settings]

One of the game objects needs to have a Rigidbody component to detect the collision. We’ll add that to the Merchant game object as well. Since we’re only controlling the rotation, we’ll freeze the XYZ positions under the Constraints section. We’ll also turn off gravity.

[Screenshot: Rigidbody component settings]
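If you prefer to do the same setup from code instead of the Inspector, a hedged sketch of the equivalent component configuration might look like this (the values mirror the ones above; the class name is just a placeholder):

```csharp
using UnityEngine;

// Illustrative only -- the post configures these components in the Inspector.
// Attach to the Merchant game object to add and configure the trigger zone at runtime.
public class MerchantTriggerSetup : MonoBehaviour
{
    void Awake()
    {
        // Sphere collider acting as the interaction zone trigger.
        var zone = gameObject.AddComponent<SphereCollider>();
        zone.radius = 7f;
        zone.isTrigger = true;

        // Rigidbody so trigger callbacks fire; we only rotate, so freeze position and disable gravity.
        var body = gameObject.AddComponent<Rigidbody>();
        body.useGravity = false;
        body.constraints = RigidbodyConstraints.FreezePositionX |
                           RigidbodyConstraints.FreezePositionY |
                           RigidbodyConstraints.FreezePositionZ;
    }
}
```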

Coding the new behavior

We’ll need a new script to control the NPC. To do this, create a new class called GeneralInteraction and have it derive from MonoBehaviour. To start, we’ll store the default rotation of the merchant and add the logic for rotating. This is very similar to what was reviewed in the episode, with the exception of a few additional conditions in the RotateTowards() method.

Colliders have several callbacks that can occur. The two that we’ll be working with are OnTriggerEnter, which is called when another collider enters the trigger, and OnTriggerExit, which is called when another collider exits the trigger.

In OnTriggerEnter we’ll need to do a check to ensure we’re only storing a reference to the object if it is the Player. For the check to work, we’ll need to set the Player object’s Tag to “Player”, which can be done in the Inspector:

[Screenshot: setting the Player tag in the Inspector]

We’ll use OnTriggerExit to clear out the reference to the Player object and tell our NPC to begin rotating. The RotateTowards() method (referenced above) already has the logic to determine that the merchant should rotate back to the original location.
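The full script isn’t reproduced in this post, but a minimal sketch of the idea looks something like the following (the field names and rotation speed are illustrative placeholders, not the exact code from the episode):

```csharp
using UnityEngine;

// A minimal sketch: store the default rotation, track the player while they are
// inside the trigger, and ease back to the default rotation when they leave.
public class GeneralInteraction : MonoBehaviour
{
    [SerializeField] private float rotationSpeed = 2f; // illustrative value

    private Quaternion defaultRotation;
    private Transform player; // null whenever the player is outside the trigger

    private void Start()
    {
        defaultRotation = transform.rotation;
    }

    private void Update()
    {
        RotateTowards();
    }

    private void OnTriggerEnter(Collider other)
    {
        // Only store a reference if the object that entered is tagged "Player".
        if (other.CompareTag("Player"))
            player = other.transform;
    }

    private void OnTriggerExit(Collider other)
    {
        // Clear the reference; RotateTowards() then eases back to the default rotation.
        if (other.CompareTag("Player"))
            player = null;
    }

    private void RotateTowards()
    {
        Quaternion target = defaultRotation;

        if (player != null)
        {
            Vector3 toPlayer = player.position - transform.position;
            toPlayer.y = 0f; // keep the merchant upright
            if (toPlayer == Vector3.zero)
                return;
            target = Quaternion.LookRotation(toPlayer);
        }

        transform.rotation = Quaternion.Slerp(
            transform.rotation, target, rotationSpeed * Time.deltaTime);
    }
}
```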

Fixing Adverse Behaviors

Unfortunately, this solution conflicted with the way I’d originally coded the player’s movement, which caused a few issues that needed to be fixed.

Design Issue #0: The navigation system fights with the rotation

Sometimes when the player navigates to a merchant, the merchant will start twitching as it slowly tries to complete its rotation. Technically, this was an issue that was likely to happen regardless of the challenge; I just hadn’t caught it yet. 🙂 This could have caused significant performance issues had it not been caught before being implemented on all NPC characters.

[Animation: the merchant twitching as the navigation system fights the rotation]

This happens because the navigation system is constantly trying to run and, as a result, is fighting with the Quaternion.Slerp logic. The navigation should have been stopped once it had reached its location and resumed once it had a new one.

To fix this, I needed to adjust the check used in the Update() method to consider the distance between the player game object and the NavMeshAgent’s destination and compare it against the NavMeshAgent’s stopping distance. I also added logic to stop and resume the NavMeshAgent’s pathing. You can learn more about Unity’s navigation system by watching the Unity Navigation – Part 1 and Part 2 episodes of .GAME.
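The adjusted check looks roughly like the fragment below. This is a hedged sketch rather than the project’s exact code: the idea is to compare the agent’s remaining distance against its stopping distance, stop the agent once it has arrived, and resume it whenever a new destination is set.

```csharp
using UnityEngine;
using UnityEngine.AI;

// Illustrative player-movement fragment, not the exact script from the project.
public class PlayerMovement : MonoBehaviour
{
    private NavMeshAgent agent;

    private void Start()
    {
        agent = GetComponent<NavMeshAgent>();
    }

    private void Update()
    {
        // Stop pathing once we are within stopping distance so the navigation
        // system no longer fights the Quaternion.Slerp rotation logic.
        if (!agent.pathPending && agent.remainingDistance <= agent.stoppingDistance)
        {
            agent.isStopped = true;
        }
    }

    public void Move(Vector3 destination)
    {
        // Resume pathing whenever a new destination comes in.
        agent.isStopped = false;
        agent.SetDestination(destination);
    }
}
```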

Design Issue #1: The player navigates too far from the merchant

The player always navigates to the edge of the sphere collider, which creates a robotic feel. This can also cause adverse behaviors, depending on the size of the trigger that is used.

[Animation: the player stopping at the edge of the sphere collider]

This occurs because of the original way that I was providing the path destination to the character. The Raycast was passing in the position of the hit point, rather than the position of the object that was being hit. The fix for this was pretty simple – I just needed to change this line: Move(hit.point + (transform.forward * -6)); to say Move(hit.transform.position + (transform.forward * -6));.
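In context, the click handler looks something like this sketch (placeholder names again; the -6 offset along the player’s forward vector matches the line above and keeps the destination a short distance in front of the clicked object):

```csharp
using UnityEngine;
using UnityEngine.AI;

// Illustrative click-to-move fragment showing the corrected destination.
public class ClickToMove : MonoBehaviour
{
    private NavMeshAgent agent;

    private void Start()
    {
        agent = GetComponent<NavMeshAgent>();
    }

    private void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            RaycastHit hit;
            if (Physics.Raycast(ray, out hit))
            {
                // Use the clicked object's position (not the raw hit point), offset back
                // along the player's forward vector so we stop short of the object.
                Move(hit.transform.position + (transform.forward * -6));
            }
        }
    }

    private void Move(Vector3 destination)
    {
        agent.isStopped = false;
        agent.SetDestination(destination);
    }
}
```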

Design Issue #2: The player navigates through the merchant

Fixing the last design issue introduced a new problem – now the player was navigating too close to the merchant and bumping into the object.

[Animation: the player bumping into the merchant]

This is an easy fix, as the merchant was missing the NavMeshObstacle component, which tells the navigation system to go around it. After adding the NavMeshObstacle component to the root of the Merchant game object, I set the XYZ size values to 6, enabled Carve, and disabled Carve Only Stationary.

[Animation: the corrected navigation behavior]

[Screenshot: NavMeshObstacle component settings]

Conclusion

In the end, I mostly got the behavior that I wanted. I know there are a few other areas that I’d need to address if I wanted this to be a robust system. For example, consider a scenario where the player is already in the trigger and the merchant has already completed their rotation. If the player were to move to a new position in the trigger, but never actually exit/re-enter it, the merchant would never be notified that it needs to rotate to a new location.

I hope you enjoyed reading about my approach to this challenge. I’d love to hear about how you’ve chosen to solve it. You can either tweet to me at @yecats131 or email dotgame at microsoft dot com. Be sure to check out more episodes of .GAME on Channel 9!

TFS 2017 Process Template Editor is available


I know a bunch of people have been asking for it, and now you can get it.  The TFS 2017 Process Template Editor (which, btw, is an extension to VS 2017) is now available.  You can install it from the free process template editor extension in the Visual Studio Marketplace.

Let us know if you have any issues.

Brian

 

Cluster Networks in Windows Failover Clustering


Welcome to the AskCore blog. My name is Eriq Stern, and today we are going to discuss Failover Cluster networks.  This information applies to Windows Server 2008 through Windows Server 2012 R2 Failover Clusters.

What does a Windows Failover Cluster consider a “Cluster Network”?

In a Windows Failover Cluster, Cluster Networks are created automatically by the Cluster Service for each configured subnet on any active network interfaces (NICs) on each node of the cluster. If a single node has a NIC configured with a subnet that is not configured on other nodes, a Cluster Network will be created even though it cannot be used by any nodes that do not have an active NIC on it. Disabling any NICs on a single node has no effect on Cluster Network objects if there are still active NICs on those subnets on any node. To remove or re-create a Cluster Network, all NICs on the desired subnet must be disabled or removed until the Cluster Service discovers the change and removes the related Cluster Network, which should happen within a few seconds of disabling or removing the NICs.

How are Cluster Networks configured for cluster use by default?

Cluster Networks are always enabled for Cluster use by default when created, even if the same network has been previously configured otherwise. If the network has a default gateway, then it will also be enabled for Public use.  This is because when all NICs on a subnet have been removed or disabled, the network is completely removed from the cluster database, and is re-added as a new entry if the network is rediscovered. Because of this, in some cases it may be necessary to reconfigure a previously configured network after a Cluster Network has been removed and re-added, for example after disabling and re-enabling a NIC on a single-node cluster.

How are Cluster Network names/numbers determined?

Cluster networks are automatically named “Cluster Network” followed by the next available number that is not assigned to an existing network. For example, if “Cluster Network 1” exists, a new network would be named “Cluster Network 2” – but if “Cluster Network 1” has been manually renamed, when another NIC is enabled on a different subnet, a new Cluster Network will be created and automatically named “Cluster Network 1”. The number will not be incremented based on the previously identified networks.

Manually renamed Cluster Networks:
[Screenshot: manually renamed cluster networks]

Newly added cluster network:
[Screenshot: the newly added cluster network]

I hope that this post helps you!

Thanks,
Eriq Stern
Support Escalation Engineer
Windows High Availability Group


NXTA - NexTech Africa Conference - Day 1 perspectives


I'm in Nairobi, Kenya this week attending a fantastic event called NexTech Africa. It is a free event that showcases the best of what Africa's startup community has to offer. The event is mostly focused on East Africa's tech community, but it includes delegates from all over the continent. I'm told over 1000 people are here.

My wife is Zimbabwean and we have family all over in places like South Africa, Tanzania, and Zimbabwe, and friends in a dozen other countries. I personally feel that access to technology and technical education is a fantastic way to help Africa's burgeoning middle class.

However, this trip was for listening. It's silly for me (or anyone who isn't living on the continent) to fly in and "drop the knowledge" and fly out. In fact, it's condescending. So I'm spending this week visiting startups, talking to engineers, university students, and tech entrepreneurs.

I spoke at length with the engineers at BRCK, a Kenya-based startup that has a "brick" that's a portable router, NAS, compute module, captive portal, and so much more. They can drop one of these a little outside of town and give wi-fi to an entire area. Even better, there could be hyper-local content on the devices. Folks with 30+ Mbps internet may be spoiled with HD content, but why not have a smart router download TV shows and movies that can be served (much like movies stored on an airplane's hard drive that you can watch via wi-fi while you fly) to everyone in the local area? The possibilities are endless, and they're doing all the work from hardware to firmware to software in-country with local talent.


I also visited iHub's Technology Innovation Community and saw where they teach classes to local students, have maker- and hacker-spaces, support a UXLab, and host local tech meetups. I hope to keep communicating with the new friends I've met and perhaps bring a few of them onto the podcast so you can hear their stories yourself.


These are uniquely African solutions to problems that Africans have identified they want to solve. I am learning a ton and have been thrilled to be involved. Since I focus on Open Source .NET and .NET Core, I think there's an opportunity for C# that could enable new mobile apps via Xamarin with backends written in ASP.NET Core and running on whatever operating system makes one happy.






New Lower Prices on Azure Virtual Machines and Blob Storage


At Azure, we strive to offer you the best value in one of the most cost-effective ways in the public cloud. We believe in providing a comprehensive cloud platform that not only enables customers to innovate rapidly, but to also do so at the best possible prices. To that end, today we are happy to announce significant price reductions on several Azure Virtual Machine families and Storage types. We hope this will further lower the barrier to entry for our customers and accelerate cloud transformation.

Azure Virtual Machines:

We have reduced prices on compute-optimized instances (F series) and general-purpose instances (A1 Basic) by up to 24% and 61%, respectively.

The table below shows an example of the VM price reductions in UK South. Please refer to the VM pricing page for all the regions and details.

 

Azure VMs  | Price reductions (Linux VM) | Price reductions (Windows VM)
F1 to F16  | -23%                        | -18%
A1 Basic   | -42%                        | -51%

We will also be announcing price reductions specifically for our D-series General-purpose instances in the near future.

Azure Blob Storage:

We have reduced prices on the Azure Storage offerings Hot Block Blob Storage and Cool Block Blob Storage by up to 31% and 38%, respectively. These new prices are only available to customers using Azure Blob Storage accounts. Customers who are on General Purpose Blob storage can take advantage of these prices by moving data from General Purpose Blob to an Azure Blob Storage account using tools such as AzCopy.

The table below shows an example of the Storage price reductions in UK South. Please refer to the Storage pricing page for all the regions and details.

 

Azure Storage   | Price reductions
Hot Block Blob  | -26%
Cool Block Blob | -38%

All of the new reduced prices take effect today. We are excited about these new lower prices and how they help our customers accomplish a lot more. For more details, please visit Linux Virtual Machines Pricing.

Mediterranean Shipping Company builds a global productivity network with Office 365


Today’s post was written by Fabio Catassi, chief technology officer at the Mediterranean Shipping Company.

The Mediterranean Shipping Company has been under the same family leadership since its inception 43 years ago, and while the company now manages its fleet of 480 vessels from offices in 150 countries, it’s fair to say that its caring corporate culture is as strong today as it ever was. But while we all feel connected to a large corporate family, unfortunately, our IT systems did not support that connectivity, or the global communications that we needed to compete in today’s digital economy.

Container shipping has evolved over the years to become a commodity-based business. Today, we are facing an era of shrinking profit margins and a growing pressure on the revenue side. Yet we were able to ensure that IT played its part in minimizing operational expenses, while improving our business services to employees. That’s because Office 365 delivers a cost-effective, cloud-based solution to bring everybody up to the same level of mobility and productivity across our global operations.

When I became CTO in 2005, we wanted to replace the disparate business productivity solutions we had running in 480 offices around the world with a single digital workplace for everyone. After evaluating other web-based solutions, including Google Apps for Business (now G Suite) and Amazon Web Services, we chose Office 365 cloud-based communication and collaboration services to empower all employees with the same leading-edge, yet familiar business tools. Security also played into this decision: we had various security solutions in place across our global offices, and it took a tremendous effort to ensure that everyone achieved an acceptable standard of security practices. The beauty of Office 365 is that we are deploying Office 365 Advanced Threat Protection as a single security control for all our offices. This service addresses the latest attacks that can invade a network through email attachments or embedded links. In the end, we benefit from the Office 365 constant update model and uniformity of service, plus the added bonus that Microsoft takes care of running the service in the cloud on dedicated hardware.

Today, we use Office 365 to boost mobility and productivity to differentiate our personal service from that of our competitors. The faster we share information and collaborate on behalf of our customers, the more responsive our service. This was a significant challenge before, with so many different solutions in place around the world. Now our employees access their files anywhere from online storage, IM colleagues for quick answers to questions, or spontaneously invite their team members to a video conference. When we have the same easy connectivity across the hall, or around the world, we can make good on our promise to provide global service with local knowledge. And now that regional managers are benefitting from easy-to-use data analytics and dashboard tools to decide what’s best for their customers, we can provide more informed local service.

Mobility is especially important to enable the flexible service our customers have come to expect. Today, our employees are no longer bound to a specific device. Now that people can be more productive on their own terms, we expect efficient turnaround of information among colleagues and with our customers.

Microsoft Consulting Services was invaluable in the deployment of Office 365—it complemented our small, yet nimble IT team and helped us transform how the company works on a daily basis. Despite the variety of legacy environments in place across our offices, we achieved the migration in just nine months. And with our recent subscription to add 17,000 seats of Office 365 E5, we expect a similar rapid adoption of the latest advances in cloud telephony and Office Delve, which delivers personalized content from all your Office 365 apps. At the end of the day, providing a rewarding workplace with a state-of-the-art business productivity platform reaffirms our corporate culture of encouraging long-term employees in a supportive environment—and also gives us a competitive edge where we can work leaner and more efficiently to preserve our profit margins. That’s great business value!

—Fabio Catassi

To understand how Mediterranean Shipping Company has reduced costs by 45 percent, read the full case study.


Continuous Delivery Tools Extension for Visual Studio 2017


DevOps is all about getting your applications into customers’ hands quickly. The Continuous Delivery Tools for Visual Studio is a new extension for Visual Studio 2017 that brings DevOps capabilities to the IDE. You can use the extension to set up an automated build, test, and release pipeline on Visual Studio Team Services for an ASP.NET 4 or ASP.NET Core application targeting Azure. You can then monitor your pipeline with notifications in the IDE that alert you to build failures on any CI run. The extension also helps you quickly set up a dev or test environment that builds, tests, and deploys your app on every Git push. In this post, I’ll cover some key features and walk you through configuration to get you started. You can find more details on the ASP.NET blog and ALM blog.

Configuring a continuous delivery pipeline for ASP.NET 4 and ASP.NET Core projects

To get started, your project needs to live in a Git repo. The Add to Source Control button in the status bar will set up a repo for you and push it up to your remote repo. Then, right-click on your ASP.NET project in Solution Explorer, or click on the Remote Server status icon on the status bar, and select “Configure Continuous Delivery…”. This will set up Build and Release definitions on Team Services that automatically build, test, and deploy your ASP.NET project to an Azure App Service or an Azure Container Service.

The Configure Continuous Delivery dialog lets you pick a branch from the repository to deploy to a target App Service. When you click OK, the extension creates build and release definitions on Team Services (this can take a couple of minutes), and then kicks off the first build and deployment automatically. From this point onward, Team Services will trigger a new build and deployment whenever you push changes up to the repository.

As your project matures, you can add custom tasks to automate other parts of your release pipeline, enforce required policies, set up new deployment targets, or integrate one of the many third-party services available. Learn more about setting up Continuous Integration on Visual Studio Team Services.

Failure notifications for any CI run on Visual Studio Team Services

Things move fast when you have an automated DevOps pipeline, so it’s important to have full transparency into the process. The latest build status and notifications for build and test failures appear in the new status bar icon. Notifications only appear for failures on the definition you are monitoring, so you will not get flooded with messages for definitions you aren’t currently monitoring.

To change the build definition that you are monitoring, click the Build Definition for Notifications menu item on the status bar icon.

Microsoft DevLabs Extensions

This extension is a Microsoft DevLabs extension, an outlet for experiments from Microsoft that represent some of the latest ideas around developer tools. They are designed for broad use, feedback, and quick iteration, but it’s important to note that DevLabs extensions are not supported and there is no commitment that they’ll ever make it big and ship in the product.

It’s all about feedback…

We think there’s a lot we can do in the IDE to help teams collaborate and ship high quality code faster. In the spirit of being agile we want to ship fast, try out new ideas, improve the ones that work and pivot on the ones that don’t. Over the next few days, weeks, and months we’ll update the extension with new fixes and features. Your feedback is essential to this process. Join our slack channel or ping us at VSDevOps@microsoft.com.

Ahmed Metwally, Senior PM, Visual Studio
@cd4vs

Ahmed is a Program Manager on the Visual Studio Platform team focused on improving team collaboration and application lifecycle management integration.

Dig into the free Windows Server 2016 virtual labs


We know every time we launch a new Windows Server version our customers get excited to try the new features. But while some folks love getting their hands dirty setting up new servers, storage, cables, etc., not everyone has a lot of time to do this.  With the new virtual labs for Windows Server 2016, we made it a lot quicker and easier for you to get your hands dirty on the fun part!

Today we’re announcing the availability of the new Windows Server 2016 virtual labs. These TechNet Virtual Labs provide a real-world environment along with guidance on how to try the new features. Here are the new lab scenarios you can try out:

  • Implementing Breach Resistance Security in Windows Server 2016
  • Shielded Virtual Machines
  • Building a Storage Infrastructure on Windows Server 2016
  • Installing and Managing Nano Server
  • Exploring Virtualization on Windows 10 and Windows Server 2016
  • Failover Clustering and Rolling Cluster Upgrades

Sign in with your Microsoft account, and you can access any of the individual labs in a private, sandboxed environment. It all adds up to about six hours of content, and you can come back later if you need to.

Give these new virtual labs a try. And when you are ready to dig even deeper and evaluate the full product, you can download the Windows Server 2016 evaluation media or try it in Azure.

Azure AD News: Azure MFA cloud based protection for on-premises VPNs is now in public preview!


Howdy folks,

One of the top requests we hear from customers is to be able to secure their on-premises VPNs using Azure AD and our cloud-based MFA service. Today we’re announcing the public preview of NPS Extension support in Azure MFA. This cool enhancement gives you the ability to protect your VPN using Azure MFA (which is included in Azure AD Premium) without having to install a new on-premises server.

This is another step along the road to realizing our vision of making Azure AD a complete, cloud-based “Identity Control Plane” service that makes it easy for enterprises to ensure their employees, partners, and customers have access to all the right cloud and on-premises resources while assuring the highest levels of compliance and security.

To give you the details about this release, I’ve asked Yossi Banai to write a blog about this cool new capability. His blog is below.

I hope you’ll find this update useful for improving the security of your organization!

And as always, we would love to receive any feedback or suggestions you have.

Best Regards,

Alex Simons (Twitter: @Alex_A_Simons)

Director of Program Management

Microsoft Identity Division

——————

Hello,

I’m Yossi Banai, a Program Manager on the Azure Active Directory team. As you know, multi-factor authentication is an important tool to help safeguard data and applications while meeting user demands for a simple sign-in process. With Azure Multi-factor authentication (MFA), customers currently can choose between MFA Server (an on-premises solution) and cloud-based MFA (a cloud-based solution supported and maintained by Microsoft).

While MFA Server provides a rich set of features, more and more customers are choosing to use cloud-based MFA to secure their environment, to simplify it, reduce cost, and take advantage of powerful Azure AD features such as Conditional Access and Azure AD Identity Protection.

However, since cloud-based MFA services like Azure AD have not traditionally supported RADIUS authentication, customers who wanted to secure on-premises clients such as VPN had no choice but to deploy MFA Servers on-premises. With today’s release of the NPS Extension for Azure MFA, I’m excited to announce that we have closed this gap, and added the ability to secure RADIUS clients using cloud-based MFA!

The NPS extension for Azure MFA provides a simple way to add cloud-based MFA capabilities to your authentication infrastructure using your existing NPS servers. With the NPS extension, you’ll be able to add phone call, SMS, or phone app MFA to your existing authentication flow without having to install, configure, and maintain new servers.

How does the NPS Extension for Azure MFA work?

With the NPS Extension for Azure MFA, which is installed as an extension to existing NPS Servers, the authentication flow includes the following components:

  • User/VPN Client: Initiates the authentication request.
  • NAS Server/VPN Server: Receives requests from VPN clients and converts them into RADIUS requests to NPS servers.
  • NPS Server: Connects to Active Directory to perform the primary authentication for the RADIUS requests and, if successful, passes the request to any installed NPS extensions.
  • NPS Extension: Triggers an MFA request to Azure cloud-based MFA to perform the secondary authentication. Once it receives the response, and if the MFA challenge succeeds, it completes the authentication request by providing the NPS server with security tokens that include an MFA claim issued by Azure STS.
  • Azure MFA: Communicates with Azure Active Directory to retrieve the user’s details and performs the secondary authentication using a verification method configured for the user.

The following diagram illustrates the high-level authentication request flow:

[Diagram: high-level authentication request flow]
Getting started

I encourage you to download and install the NPS extension for Azure MFA from the Microsoft Download Center and start testing this feature.

The NPS Extension for Azure MFA is available to customers with licenses for Azure Multi-Factor authentication (included with Azure AD Premium, EMS, or an MFA subscription). In addition, you will need Windows Server 2008 R2 SP1 or above with the NPS component enabled.

All users using the NPS extension must be synced to Azure Active Directory using Azure AD Connect and be registered for MFA.

To install the extension, simply run the installation package and the PowerShell script it generates, which associates the extension with your tenant. Then, configure your RADIUS client to authenticate through your NPS Server.

The fine print

This release of the NPS Extension for Azure MFA targets new deployments and does not include tools to migrate users and settings from MFA Server to the cloud.

Like with MFA Server, once you enable MFA for a RADIUS client using the NPS Extension, all authentications for this client will be required to perform MFA. If you want to enable MFA for some RADIUS clients but not others, you can configure two NPS servers and install the extension on only one of them. Configure the RADIUS clients that you want to use MFA to send requests to the NPS server with the extension installed, and the other RADIUS clients to send requests to the NPS server without it.

We appreciate your feedback

We would love to hear your feedback. If you have any suggestions for us, questions, or issues to report, please leave a comment at the bottom of this post, send a note to the NPS Extension team, or tweet with the hashtag #AzureAD.

Announcing custom domain HTTPS support with Azure CDN


We are very excited to let you know that this feature is now available with Azure CDN from Verizon. The end-to-end workflow to enable HTTPS for your custom domain is simplified via one-click enablement and complete certificate management, all at no additional cost.

It's critical to ensure the privacy and integrity of your web applications' sensitive data while it is in transit. Using the HTTPS protocol ensures that your sensitive data is encrypted when it's sent across the internet. Azure CDN has supported HTTPS for many years, but only when you used an Azure-provided domain. For example, if you create a CDN endpoint from Azure CDN (e.g. https://contoso.azureedge.net), HTTPS is enabled by default. Now, with custom domain HTTPS, you can enable secure delivery for a custom domain (e.g. https://www.contoso.com) as well.

Some of the key attributes of the custom domain HTTPS are:

  • No additional cost: There are no costs for certificate acquisition or renewal and no additional cost for HTTPS traffic. You just pay for GB egress from the CDN.

  • Simple enablement: One click provisioning is available from the Azure portal.

  • Complete certificate management: All certificate procurement or management is handled for you. Certificates are automatically provisioned and renewed prior to expiration. This completely removes the risks of service interruption as a result of a certificate expiring.

See the feature documentation for full details on how to enable HTTPS for your custom domain today!

We are working on supporting this feature with Azure CDN from Akamai in the coming months. Stay tuned.

More information

  1. CDN overview

  2. Add a custom domain

Is there a feature you'd like to see in Azure CDN? Give us feedback!


SQL Data Warehouse now supports seamless integration with Azure Data Lake Store


Azure SQL Data Warehouse is a SQL-based, fully managed, petabyte-scale cloud solution for data warehousing. SQL Data Warehouse is highly elastic, enabling you to provision in minutes and scale capacity in seconds. You can scale compute and storage independently, allowing you to burst compute for complex analytical workloads or scale down your warehouse for archival scenarios, and pay based on what you're using instead of being locked into predefined cluster configurations.

We are pleased to announce that you can now directly import or export your data from Azure Data Lake Store (ADLS) into Azure SQL Data Warehouse (SQL DW) using External Tables.

ADLS is a purpose-built, no-limits store that is optimized for massively parallel processing. With SQL DW PolyBase support for ADLS, you can now load data directly into your SQL DW instance at nearly 3 TB per hour. Because SQL DW can ingest data directly from both Windows Azure Storage Blob and ADLS, you can load data from any storage service in Azure. This gives you the flexibility to choose the storage that is right for your application.

A common use case for ADLS and SQL DW is the following. Raw data is ingested into ADLS from a variety of sources. Then ADL Analytics is used to clean and process the data into a load-ready format. From there, the high-value data can be imported into Azure SQL DW via PolyBase.

PolyBase Pipeline Support

ADLS has a variety of built-in security features that PolyBase uses to ensure your data remains secure, such as always-on encryption, ACL-based authorization, and Azure Active Directory (AAD) integration. To load data from ADLS via PolyBase, you need to create an AAD application. Read and write privileges are managed for the AAD application on a per-directory, per-subdirectory, or per-file basis. This gives you fine-grained control over which data can be loaded into SQL DW from ADLS, resulting in an easy-to-manage security model.

You can import data stored in ORC, RC, Parquet, or Delimited Text file formats directly into SQL DW using the Create Table As Select (CTAS) statement over an external table.

How to Set Up the Connection to Azure Data Lake Store

When you connect to your SQL DW from your favorite client (SSMS or SSDT), you can use the script below to get started. You will need to know your AAD Application’s client ID, OAuth2.0TokenEndpoint, and Key to create a Database Scoped Credential in SQL DW. This key is encrypted with your Database Master Key and is stored within the SQL DW. This is the credential used to authenticate against ADLS.

[Image: example script creating the Database Scoped Credential used to connect SQL DW to ADLS]
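As a hedged sketch of what such a script can look like (every server name, object name, path, and secret below is a placeholder rather than a value from the original post, and the SqlServer module's Invoke-Sqlcmd is assumed as the client), the connection setup plus a simple PolyBase load might be:

    # Placeholder values throughout; assumes a database master key already exists in the SQL DW database.
    $server   = '<yourserver>.database.windows.net'
    $database = '<yourSqlDw>'

    $statements = @(
    "CREATE DATABASE SCOPED CREDENTIAL ADLSCredential
     WITH IDENTITY = '<client_id>@<OAuth2.0TokenEndpoint>', SECRET = '<AAD application key>';",

    "CREATE EXTERNAL DATA SOURCE AzureDataLakeStore
     WITH (TYPE = HADOOP,
           LOCATION = 'adl://<yourdatalakestore>.azuredatalakestore.net',
           CREDENTIAL = ADLSCredential);",

    "CREATE EXTERNAL FILE FORMAT TextFileFormat
     WITH (FORMAT_TYPE = DELIMITEDTEXT, FORMAT_OPTIONS (FIELD_TERMINATOR = '|'));",

    "CREATE EXTERNAL TABLE dbo.DimProduct_external (ProductKey INT, ProductName NVARCHAR(50))
     WITH (LOCATION = '/clean/product/', DATA_SOURCE = AzureDataLakeStore, FILE_FORMAT = TextFileFormat);",

    # The CTAS statement is the actual load: SQL DW reads from ADLS in parallel via PolyBase.
    "CREATE TABLE dbo.DimProduct
     WITH (CLUSTERED COLUMNSTORE INDEX, DISTRIBUTION = ROUND_ROBIN)
     AS SELECT * FROM dbo.DimProduct_external;"
    )

    # Each statement is sent as its own batch for simplicity.
    foreach ($sql in $statements) {
        Invoke-Sqlcmd -ServerInstance $server -Database $database `
            -Username '<sqladmin>' -Password '<password>' -Query $sql
    }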

It’s just that simple to load data into Azure SQL Data Warehouse from ADLS.

Best Practices for loading data into SQL DW from Azure Data Lake Store

For the best experience, please look at the following guidelines:

  • Co-locate the services in the same data center for better performance and no data egress charges.
  • Split large compressed files into at least 60 smaller compressed files.
  • Use a larger resource class (medium or large) in SQL DW to load the data.
  • Ensure that your AAD Application has read access to your chosen ADLS directory.
  • Scale up your DW service level objective (SLO) when importing a large data set.

Learn more about best practices for loading data into SQL DW from Azure Data Lake Store.

Next steps

If you already have an Azure Data Lake Store, you can try loading your data into SQL Data Warehouse.

Additionally, there are great tutorials specific to ADLS to get you up and running.

Learn more

What is Azure SQL Data Warehouse?

What is Azure Data Lake Store?

SQL Data Warehouse best practices

Load Data into SQL Data Warehouse

MSDN forum

Stack Overflow forum

STL Fixes In VS 2017 RTM


VS 2017 RTM will be released soon. VS 2017 RC is available now and contains all of the changes described here – please try it out and send feedback through the IDE’s Help > Send Feedback > Report A Problem (or Provide A Suggestion).

This is the third and final post for what’s changed in the STL between VS 2015 Update 3 and VS 2017 RTM. In the first post (for VS 2017 Preview 4), we explained how 2015 and 2017 will be binary compatible. In the second post (for VS 2017 Preview 5), we listed what features have been added to the compiler and STL. (Since then, we’ve implemented P0504R0 Revisiting in_place_t/in_place_type_t/in_place_index_t and P0510R0 Rejecting variants Of Nothing, Arrays, References, And Incomplete Types.)

Vector overhaul:

We’ve overhauled vector’s member functions, fixing many runtime correctness and performance bugs.

* Fixed aliasing bugs. For example, the Standard permits v.emplace_back(v[0]), which we were mishandling at runtime, and v.push_back(v[0]), which we were guarding against with deficient code (asking “does this object live within our memory block?” doesn’t work in general). The fix involves performing our actions in a careful order, so we don’t invalidate whatever we’ve been given. Occasionally, to defend against aliasing, we must construct an element on the stack, which we do only when there’s no other choice (e.g. emplace(), with sufficient capacity, not at the end). (There is an active bug here, which is fortunately highly obscure – we do not yet attempt to rigorously use the allocator’s construct() to deal with such objects on the stack.) Note that our implementation follows the Standard, which does not attempt to permit aliasing in every member function – for example, aliasing is not permitted when range-inserting multiple elements, so we make no attempt to handle that.

* Fixed exception handling guarantees. Previously, we unconditionally moved elements during reallocation, starting with the original implementation of move semantics in VS 2010. This was delightfully fast, but regrettably incorrect. Now, we follow the Standard-mandated move_if_noexcept() pattern. For example, when push_back() and emplace_back() are called, and they need to reallocate, they ask the element: “Are you nothrow move constructible? If so, I can move you (it won’t fail, and it’ll hopefully be fast). Otherwise, are you copy constructible? If so, I’ll fall back to copying you (might be slow, but won’t damage the strong exception guarantee). Otherwise, you’re saying you’re movable-only with a potentially-throwing move constructor, so I’ll move you, but you don’t get the strong EH guarantee if you throw.” Now, with a couple of obscure exceptions, all of vector’s member functions achieve the basic or strong EH guarantees as mandated by the Standard. (The first exception involves questionable Standardese, which implies that range insertion with input-only iterators must provide the strong guarantee when element construction from the range throws. That’s basically unimplementable without heroic measures, and no known implementation has ever attempted to do that. Our implementation provides the basic guarantee: we emplace_back() elements repeatedly, then rotate() them into place. If one of the emplace_back()s throw, we may have discarded our original memory block long ago, which is an observable change. The second exception involves “reloading” proxy objects (and sentinel nodes in the other containers) for POCCA/POCMA allocators, where we aren’t hardened against out-of-memory. Fortunately, std::allocator doesn’t trigger reloads.)

* Eliminated unnecessary EH logic. For example, vector’s copy assignment operator had an unnecessary try-catch block. It just has to provide the basic guarantee, which we can achieve through proper action sequencing.

* Improved debug performance slightly. Although this isn’t a top priority for us (in the absence of the optimizer, everything we do is expensive), we try to avoid severely or gratuitously harming debug perf. In this case, we were sometimes unnecessarily using iterators in our internal implementation, when we could have been using pointers.

* Improved iterator invalidation checks. For example, resize() wasn’t marking end iterators as being invalidated.

* Improved performance by avoiding unnecessary rotate() calls. For example, emplace(where, val) was calling emplace_back() followed by rotate(). Now, vector calls rotate() in only one scenario (range insertion with input-only iterators, as previously described).

* Locked down access control. Now, helper member functions are private. (In general, we rely on _Ugly names being reserved for implementers, so public helpers aren’t actually a bug.)

* Improved performance with stateful allocators. For example, move construction with non-equal allocators now attempts to activate our memmove() optimization. (Previously, we used make_move_iterator(), which had the side effect of inhibiting the memmove() optimization.) Note that a further improvement is coming in VS 2017 Update 1, where move assignment will attempt to reuse the buffer in the non-POCMA non-equal case.

Note that this overhaul inherently involves source breaking changes. Most commonly, the Standard-mandated move_if_noexcept() pattern will instantiate copy constructors in certain scenarios. If they can’t be instantiated, your program will fail to compile. Also, we’re taking advantage of other operations that are required by the Standard. For example, N4618 23.2.3 [sequence.reqmts] says that a.assign(i,j) “Requires: T shall be EmplaceConstructible into X from *i and assignable from *i.” We’re now taking advantage of “assignable from *i” for increased performance.

Warning overhaul:

The compiler has an elaborate system for warnings, involving warning levels and push/disable/pop pragmas. Compiler warnings apply to both user code and STL headers. Other STL implementations disable all compiler warnings in “system headers”, but we follow a different philosophy. Compiler warnings exist to complain about certain questionable actions, like value-modifying sign conversions or returning references to temporaries. These actions are equally concerning whether performed directly by user code, or by STL function templates performing actions on behalf of users. Obviously, the STL shouldn’t emit warnings for its own code, but we believe that it’s undesirable to suppress all warnings in STL headers.

For many years, the STL has attempted to be /W4 /analyze clean (not /Wall, that’s different), verified by extensive test suites. Historically, we pushed the warning level to 3 in STL headers, and further suppressed certain warnings. While this allowed us to compile cleanly, it was overly aggressive and suppressed desirable warnings.

Now, we’ve overhauled the STL to follow a new approach. First, we detect whether you’re compiling with /W3 (or weaker, but you should never ever do that) versus /W4 (or /Wall, but that’s technically unsupported with the STL and you’re on your own). When we sense /W3 (or weaker), the STL pushes its warning level to 3 (i.e. no change from previous behavior). When we sense /W4 (or stronger), the STL now pushes its warning level to 4, meaning that level 4 warnings will now be applied to our code. Additionally, we have audited all of our individual warning suppressions (in both product and test code), removing unnecessary suppressions and making the remaining ones more targeted (sometimes down to individual functions or classes). We’re also suppressing warning C4702 (unreachable code) throughout the entire STL; while this warning can be valuable to users, it is optimization-level-dependent, and we believe that allowing it to trigger in STL headers is more noisy than valuable. We’re using two internal test suites, plus libc++’s open-source test suite, to verify that we’re not emitting warnings for our own code.

Here’s what this means for you. If you’re compiling with /W3 (which we discourage), you should observe no major changes. Because we’ve reworked and tightened up our suppressions, you might observe a few new warnings, but this should be fairly rare. (And when they happen, they should be warning about scary things that you’ve asked the STL to do. If they’re noisy and undesirable, report a bug.) If you’re compiling with /W4 (which we encourage!), you may observe warnings being emitted from STL headers, which is a source breaking change with /WX, but a good one. After all, you asked for level-4 warnings, and the STL is now respecting that. For example, various truncation and sign-conversion warnings will now be emitted from STL algorithms depending on the input types. Additionally, non-Standard extensions being activated by input types will now trigger warnings in STL headers. When this happens, you should fix your code to avoid the warnings (e.g. by changing the types you pass to the STL, correcting the signatures of your function objects, etc.). However, there are escape hatches.

First, the macro _STL_WARNING_LEVEL controls whether the STL pushes its warning level to 3 or 4. It’s automatically determined by inspecting /W3 or /W4 as previously described, but you can override this by defining the macro project-wide. (Only the values 3 and 4 are allowed; anything else will emit a hard error.) So, if you want to compile with /W4 but have the STL push to level 3 like before, you can request that.

Second, the macro _STL_EXTRA_DISABLED_WARNINGS (which will always default to be empty) can be defined project-wide to suppress chosen warnings throughout STL headers. For example, defining it to be 4127 6326 would suppress “conditional expression is constant” and “Potential comparison of a constant with another constant” (we should be clean for those already, this is just an example).
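As a quick sketch of how these escape hatches can be applied (assuming a developer PowerShell prompt where cl.exe is on the path; the extra flags and file name are only for illustration), both macros can be defined project-wide on the compiler command line:

    # Build at /W4 overall, but keep STL headers pushed to warning level 3,
    # and additionally suppress C4127 inside STL headers only.
    cl /W4 /EHsc /D_STL_WARNING_LEVEL=3 /D_STL_EXTRA_DISABLED_WARNINGS=4127 main.cpp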

Correctness fixes and other improvements:

* STL algorithms now occasionally declare their iterators as const. Source breaking change: iterators may need to mark their operator* as const, as required by the Standard.

* basic_string iterator debugging checks emit improved diagnostics.

* basic_string’s iterator-range-accepting functions had additional overloads for (char *, char *). These additional overloads have been removed, as they prevented string.assign(“abc”, 0) from compiling. (This is not a source breaking change; code that was calling the old overloads will now call the (Iterator, Iterator) overloads instead.)

* basic_string range overloads of append, assign, insert, and replace no longer require the basic_string’s allocator to be default constructible.

* basic_string::c_str(), basic_string::data(), filesystem::path::c_str(), and locale::c_str() are now SAL annotated to indicate that they are null terminated.

* array::operator[]() is now SAL annotated for improved code analysis warnings. (Note: we aren’t attempting to SAL annotate the entire STL. We consider such annotations on a case-by-case basis.)

* condition_variable_any::wait_until now accepts lower-precision time_point types.

* stdext::make_checked_array_iterator’s debugging checks now allow iterator comparisons allowed by C++14’s null forward iterator requirements.

* Improved static_assert messages, citing the C++ Working Paper’s requirements.

* We’ve further improved the STL’s defenses against overloaded operator,() and operator&().

* replace_copy() and replace_copy_if() were incorrectly implemented with a conditional operator, mistakenly requiring the input element type and the new value type to be convertible to some common type. Now they’re correctly implemented with an if-else branch, avoiding such a convertibility requirement. (The input element type and the new value type need to be writable to the output iterator, separately.)

* The STL now respects null fancy pointers and doesn’t attempt to dereference them, even momentarily. (Part of the vector overhaul.)

* Various STL member functions (e.g. allocator::allocate(), vector::resize()) have been marked with _CRT_GUARDOVERFLOW. When the /sdl compiler option is used, this expands to __declspec(guard(overflow)), which detects integer overflows before function calls.

* In <random>, independent_bits_engine is mandated to wrap a base engine (N4618 26.6.1.5 [rand.req.adapt]/5, /8) for construction and seeding, but they can have different result_types. For example, independent_bits_engine can be asked to produce uint64_t by running 32-bit mt19937. This triggers truncation warnings. The compiler is correct because this is a physical, data-loss truncation – however, it is mandated by the Standard. We’ve added static_casts to silence the compiler without affecting codegen.

* Fixed a bug in std::variant which caused the compiler to fill all available heap space and exit with an error message when compiling std::get<T>(v) for a variant v such that T is not a unique alternative type. For example, std::get<int>(v) when v is a std::variant<int, int>.

Runtime performance improvements:

* basic_string move construction, move assignment, and swap performance was tripled by making them branchless in the common case that Traits is std::char_traits and the allocator pointer type is not a fancy pointer. We move/swap the representation rather than the individual basic_string data members.

* The basic_string::find(character) family now works by searching for a character instead of a string of size 1.

* basic_string::reserve no longer has duplicate range checks.

* In all basic_string functions that allocate, removed branches for the string shrinking case, as only reserve does that.

* stable_partition no longer performs self-move-assignment. Also, it now skips over elements that are already partitioned on both ends of the input range.

* shuffle and random_shuffle no longer perform self-move-assignment.

* Algorithms that allocate temporary space (stable_partition, inplace_merge, stable_sort) no longer pass around identical copies of the base address and size of the temporary space.

* The filesystem::last_write_time(path, time) family now issues 1 disk operation instead of 2.

* Small performance improvement for std::variant’s visit() implementation: do not re-verify after dispatching to the appropriate visit function that all variants are not valueless_by_exception(), because std::visit() already guarantees that property before dispatching. Negligibly improves performance of std::visit(), but greatly reduces the size of generated code for visitation.

Compiler throughput improvements:

* Source breaking change: features that aren’t used by the STL internally (uninitialized_copy, uninitialized_copy_n, uninitialized_fill, raw_storage_iterator, and auto_ptr) now appear only in <memory>.

* Centralized STL algorithm iterator debugging checks.

Billy Robert O’Neal III @MalwareMinigun
bion@microsoft.com

Casey Carter @CoderCasey
cacarter@microsoft.com

Stephan T. Lavavej @StephanTLavavej
stl@microsoft.com

Announcing preview of Storage Service Encryption for File Storage


Today, we are excited to announce the preview of Storage Service Encryption (SSE) for Azure File Storage. When you enable Storage Service Encryption for Azure File Storage your data is automatically encrypted for you.

Azure File Storage is a fully managed service providing distributed and cross-platform storage. IT organizations can lift and shift their on-premises file shares to the cloud using Azure Files, by simply pointing their applications to the Azure file share path. Thus, enterprises can start leveraging the cloud without having to incur development costs to adopt cloud storage. Azure Files now offers encryption of data at rest.

Microsoft handles all the encryption, decryption and key management in a fully transparent fashion. All data is encrypted using 256-bit AES encryption, also known as AES-256, one of the strongest block ciphers available. Customers can enable this feature on all available redundancy types of Azure File Storage – LRS and GRS.

During preview, the feature can only be enabled for newly created Azure Resource Manager (ARM) Storage accounts.

You can enable this feature on an Azure Resource Manager storage account using the Azure Portal. We plan to add support for enabling encryption for File Storage through Azure PowerShell, the Azure CLI, and the Microsoft Azure Storage Resource Provider API by the end of February. There is no additional charge for enabling this feature.

Find out more about Storage Service Encryption. You can also reach out to ssediscussions@microsoft.com for additional questions on the preview.

Convert a Managed Domain in Azure AD to a Federated Domain using ADFS for On-Premises Authentication – Step by Step


Hi all! I am Bill Kral, a Microsoft Premier Field Engineer, here again, this time to give you the steps to convert your on-premises Managed domain to a Federated domain in your Azure AD tenant.

Here is the link to my previous blog on how to convert from a Federated to Managed domain:

Convert a Federated Domain in Azure AD to Managed and Use Password Sync – Step by Step

https://blogs.technet.microsoft.com/askpfeplat/2016/12/19/convert-a-federated-domain-in-azure-ad-to-managed-and-use-password-sync-step-by-step/

There are many ways to let you log on to your Azure AD account using your on-premises passwords. You can use ADFS, Azure AD Connect Password Sync from your on-premises accounts, or just assign passwords to your Azure account. In addition, Azure AD Connect Pass-Through Authentication is currently in preview, providing yet another option for logging on and authenticating.

So, why would you convert your domain from Managed to Federated? Well, maybe you finally decided to invest in an ADFS environment. Maybe your company decided that storing passwords in the cloud goes against company policy, even though the hash of the hash of the password is what is really stored in Azure AD… and you may have your reasons for doing so. Either way, we’ll discuss how to get from a Managed domain to a Federated domain in your Azure AD environment.

Let’s set the stage so you can follow along:

The on-premises Active Directory Domain in this case is US.BKRALJR.INFO

The AzureAD tenant is BKRALJRUTC.onmicrosoft.com

We are using Azure AD Connect for directory synchronization (Password Sync currently is enabled)

We have setup an ADFS environment to federate the domain with the Azure AD Tenant

Before we start, you will need the following things installed on your ADFS Server to connect to your Azure AD tenant:

Microsoft Online Services Sign-In Assistant for IT Professionals RTW

https://www.microsoft.com/en-us/download/details.aspx?id=41950

Windows Azure Active Directory Module for Windows PowerShell .msi

http://connect.microsoft.com/site1164/Downloads/DownloadDetails.aspx?DownloadID=59185

  1. First, log on to your Azure Portal and see that the “Status” of your domain is Verified and the “Single Sign-On” for your custom domain shows as Not Planned or Not Configured.


  2. Now, go to your Primary ADFS Server and let’s connect to your Azure AD Tenant.
    1. On the Primary ADFS server, open an administrator PowerShell window and import the MSOnline module

      Import-Module MSOnline

    2. Connect to your Azure AD Tenant

      Connect-MSOLService -> Enter your Azure AD credentials on the pop-up


  3. Once you are connected to your Azure AD Tenant, let’s make sure your domain is currently recognized as a “Managed” domain.

    Get-MsolDomain -Domainname domain.com
    -> Should show your domain as “Managed”


  4. Now we can make sure that the domain you are converting is currently NOT in the ADFS configuration.

    Get-MsolFederationProperty -Domainname domain.com -> Should show that domain does not exist in configuration


  5. So, now that we have connected to the Azure AD Tenant and confirmed that our domain is configured as Managed, we can get to converting it to a “Federated” domain. When done, all of your Azure AD sync’d user accounts will authenticate to your on-premises Active Directory via ADFS.
    1. While still on your ADFS server, import the ADFS module

      Import-Module ADFS

    2. Run the command to convert your domain. Now, if you have a single top-level domain, you do not need to include the -SupportMultipleDomain switch. If you currently have or are planning to add additional domains to your ADFS / Azure AD federation, you will want to use it as I have.

      Convert-MsolDomainToFederated -DomainName domain.com -SupportMultipleDomain -> (A successful updated message should be your result)


    3. Once this has completed, we can see the properties for the converted federation.

      Get-MsolFederationProperty -Domainname domain.com -> Should no longer show the domain error we saw in step 4 and should contain information for your domain under the Microsoft Office 365 “Source” entry.


  6. Now, let’s go back to the Azure Portal and take a look at what the “Single Sign-On” status is for your custom domain now that you have converted it.


    As you can see, after a refresh and a little time for your commands to work their magic, my domain now shows the “Single Sign-On” as “Configured”

  7. You can now test logging on to myapps.microsoft.com with a sync’d account in your Azure AD Tenant. You should now see a re-direction to your ADFS environment while you are being authenticated.


That is pretty much it!!! Now, at this time, if you were replicating your passwords to Azure AD (or, as most Microsoft folks like to say, the hash of the hash of the password), you may keep doing so to use as an authentication “backup” should your ADFS environment fail. This usage as a backup authentication does not happen automatically, but a PowerShell command will do the job when it is needed, as shown below!!!
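One hedged example of that fallback (using the same MSOnline module as above; the domain name is a placeholder) is to flip the domain back to Managed in Azure AD, which leaves your on-premises ADFS configuration in place:

      Connect-MsolService
      Set-MsolDomainAuthentication -DomainName domain.com -Authentication Managed

When ADFS is healthy again, re-running Convert-MsolDomainToFederated as shown in step 5 returns the domain to federated authentication.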

If you intend to disable replication of your on-premises passwords to your Azure AD Tenant, that can be accomplished through your Azure AD Connect configuration setup!!!

Once again, thanks for reading!!!


Announcing Continuous Delivery Tools for Visual Studio 2017


With the right DevOps tools, developers can run continuous integration builds that automate testing, analysis, and verification of their projects, and streamline continuous deployment to get innovative applications into users’ hands quickly. Along with the release of the Visual Studio 2017 RC.3 update, we released a DevLabs extension, Continuous Delivery Tools for Visual Studio. The current version of the extension makes it simple to set up an automated build, test, and release pipeline on Visual Studio Team Services for ASP.NET 4 and ASP.NET Core applications targeting Azure. Once a CI build definition is configured, developers will instantly get notified in Visual Studio if a build fails.

notification

By clicking the build failure notification, you can get more information on the build quality through the VSTS dashboard.

vstsdash

Deploying to other environments is straightforward: developers can copy the existing environment release configuration and modify it. For more information on how the extension works, please check the Visual Studio blog. To download and install the extension, please go to the Visual Studio Gallery.

  

Microsoft DevLabs Extensions

This is a Microsoft DevLabs extension, an outlet for experiments from Microsoft that represent some of the latest ideas around developer tools. They are designed for broad use, feedback, and quick iteration, but it’s important to note that DevLabs extensions are not supported and there is no commitment they’ll ever make it big and ship in the product.

 

It’s all about feedback…

We think there’s a lot we can do in the IDE to help teams collaborate and ship high-quality code faster. In the spirit of being agile, we want to ship fast, try out new ideas, improve the ones that work, and pivot on the ones that don’t. Over the next few days, weeks, and months we’ll update the extension with new fixes and features. Your feedback is essential to this process. If you are interested in sharing your feedback, join our Slack channel or ping us at VSDevOps@microsoft.com.

 
