Channel: TechNet Technology News

Switching between Windows Server 2016 Server Core and Desktop Experience


We have frequently been asked why we removed the ability to add and remove the Server GUI package on the Windows Server 2016 Server Core install option, as you could in Windows Server 2012 R2. This was one of those challenging functional trade-offs that sometimes need to be made during product development. Here is the context behind this change.

We prioritized consistency with the Windows client desktop over the ability to switch between Server Core and Server with Desktop. Replacing the legacy desktop in Server with the Windows 10 desktop experience meant we could no longer support the Windows Server 2012 R2 behavior.

It was our belief that consistency was the top priority for Remote Desktop customers, IT Professionals accustomed to having a consistent GUI for server management, and application developers building a consistent experience between client and server.

We are working on improvements to the remote management experience to make it easier to operate and manage Server Core without the need to switch back and forth.
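
For reference, this is what the switch looked like on Windows Server 2012 R2, using the Server-Gui-Mgmt-Infra and Server-Gui-Shell features from an elevated PowerShell prompt. This is only a reminder of the old behavior; these commands do not apply to Server Core in Windows Server 2016.

    # Windows Server 2012 R2 only: add the graphical management tools and shell to Server Core.
    # If the feature payload has been removed, you may also need -Source to point at install media.
    Install-WindowsFeature Server-Gui-Mgmt-Infra, Server-Gui-Shell -Restart

    # And to go back to Server Core, remove them again.
    Uninstall-WindowsFeature Server-Gui-Mgmt-Infra, Server-Gui-Shell -Restart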


Meet SQL Server’s biggest fan, KillaDBA


This post was authored by Jennifer Moser, Data Platform Community Lead, Microsoft


Atlanta-based Homer McEwen has been a database administrator for more than 20 years. But it’s what he does in his spare time that makes McEwen, aka KillaDBA, a little different. The husband and father of three writes and records songs about SQL Server, the platform that helps power the travel company he works for. We sat down with KillaDBA and learned about everything from his top musical influences to his reaction when he found out he’d be performing at SQL Saturday in Redmond on April 15. After talking with him and hearing his music, even we’re seeing SQL Server in a whole new light.

Q: Why did you decide to write songs about technology?

A: I’ve always loved music. I was in the choir as a kid, played drums in high school, and before becoming a developer and database administrator, I worked for an independent record label. The company I work for now, BCD Travel, uses SQL Server as its main database platform, and as a DBA, I’m responsible for understanding all of its features and benefits. I found that putting a musical framework around the things I needed to learn helped me remember them better. And if these songs helped me, I figured they could help others using the SQL Server platform. It’s also a way for me to give back to the SQL community.

Q: When did you write your first technology song?

A: I didn’t get serious until last year. I’ve written songs in the past and had talked about writing and recording technology-focused songs for years. I even teased one of my fellow DBAs that we needed to start a band at work. But it wasn’t until my birthday last summer that I made the decision to blend my passion for music and technology.

Q: What kind of feedback have you received so far?

A: I’ve gotten lots of positive feedback from people, both in technical fields and non-technical fields. It’s really overwhelming to hear people who aren’t even in IT say good things about my music. Peers, friends and even complete strangers have expressed how much they enjoy my songs. It made me realize that I might really be onto something here.

Q: Which performers are you influenced by?

A: I have a pretty wide range of influences. I love Stevie Wonder, Outkast, Adam Levine and Maroon 5, Billy Joel, Peter Gabriel and Genesis, and Earth, Wind & Fire. And Dolly Parton and Lionel Richie because of their great songwriting ability. I’m a huge fan of songwriters. It’s just so cool to create something from nothing.

Q: Are you working on any new songs right now?

A: I just posted a new song two weeks ago called “Backup and Recovery.” I’m also writing a song called “I’m in Love With an IT Girl” about a developer and DBA working on a project together and falling in love. You can check out my other songs, “Microsoft SQL Server 2016,” “Types of Indexes” and “Data Protection Song,” if you want a little sneak preview before SQL Saturday.

Q: Any long-term goals for KillaDBA?

A: I’d love to be able to do something really big with my music. Ultimately, I’d like to reach mainstream audiences and teach them about technology through my songs. Creating tutorials through song, and performing in front of IT people, would also be at the top of my list. I’ve even considered writing a musical about technology — and who knows how far that could go.

Q: What has been the most amazing thing about this experience?

A: Having the opportunity to perform for the SQL community in Seattle and sharing my passion for technology and music are the greatest things so far. Music is a powerful tool that speaks to people, and I look forward to finding new ways to use it.

Don’t miss your chance to meet KillaDBA performing live on campus April 15 at 4 p.m. We look forward to seeing you at this exciting event featuring free training for Microsoft data platform professionals, plus engaging speakers, raffles and prizes.

Do you have a SQL story? Share it below in the comments.

Latest Rev of Utilities for Microsoft Team Data Science Process (TDSP) Now Available


This post is authored by Hang Zhang, Senior Data Science Manager, Xibin Gao, Data Scientist, and Wei Guo, Data Scientist, of Microsoft.

We are excited to announce the release of version 0.12 of the Microsoft Team Data Science Process utilities. We released the Team Data Science Process (TDSP) back in September 2016, along with a set of data science utilities (version 0.1), to help boost the productivity of data scientists. In this blog post, we are happy to share our latest feature additions and enhancements.

New Features

IDEAR in Microsoft R Server, for Big Data

Microsoft R Server (MRS) is the enterprise-class analytics platform for R. It supports exploring, visualizing and analyzing big data on a single machine or on Hadoop or Spark clusters. The previously released IDEAR, in open source R, is constrained by memory size because data is loaded into memory before data exploration. We have now released IDEAR in MRS, which allows R users to explore and analyze big data interactively and generate data reports automatically. These feature changes are mostly under the hood and not necessarily visible in the user interface; IDEAR in MRS brings the same user experience as IDEAR in open source R, but with extended capabilities for handling big data. Microsoft offers a free Microsoft R Server Developer Edition. If you are using an Azure Data Science Virtual Machine (DSVM), the MRS Developer Edition comes pre-installed and you can start using IDEAR in MRS right off the bat.

IDEAR in Python 3

Since Python 2.7 will not be maintained past 2020, it makes sense to develop IDEAR in Python 3. The newly released IDEAR in Python can run in both Python 3.5 and Python 2.7. Future versions of IDEAR will only be on Python 3.x, with IDEAR in Python 2.7 getting deprecated.

IDEAR in Python 3 on Azure Notebooks Services

We also released an Azure Notebooks service version of IDEAR in Python 3.5, named IDEAR-Python-AzureNotebooks.ipynb. Using the Azure Notebooks service can save you the time and trouble of setting up Jupyter Notebook servers and installing the necessary libraries. IDEAR-Python-AzureNotebooks.ipynb reads both data and YAML files from Azure Blob Storage. The interactive data exploration, analysis and visualization capabilities are the same as IDEAR in Jupyter Notebooks (IDEAR.ipynb) – the only difference is that IDEAR-Python-AzureNotebooks.ipynb does not have functions to generate reports automatically.
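
As an illustration, staging the data and YAML files in Blob Storage with the classic Azure PowerShell storage cmdlets might look roughly like the sketch below; the storage account, key, container, and file names are placeholders, not part of the utilities.

    # Hypothetical example: upload a data file and its YAML parameter file so that
    # IDEAR-Python-AzureNotebooks.ipynb can read them from Azure Blob Storage.
    $ctx = New-AzureStorageContext -StorageAccountName 'mystorageacct' -StorageAccountKey '<storage key>'
    New-AzureStorageContainer -Name 'idear-data' -Context $ctx -ErrorAction SilentlyContinue
    Set-AzureStorageBlobContent -File '.\mydata.csv' -Container 'idear-data' -Blob 'mydata.csv' -Context $ctx
    Set-AzureStorageBlobContent -File '.\mydata.yaml' -Container 'idear-data' -Blob 'mydata.yaml' -Context $ctx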

Feature Enhancements

Checking Missing Values in IDEAR in R

Data scientists pay close attention to missing values, as they represent an important data quality consideration in data analysis. We now provide a feature to assess and visualize the severity of missing values in your data. This helps users identify which variables have the highest rates of missing values, and where the missing values happen to be (e.g. which segments of rows).


Principal Component Analysis on Mixed Data Types, in IDEAR for Open Source R

It is almost universally true that both numerical and categorical variables co-exist in data sets. Sometimes categorical variables can even dominate a data set. In this release, we used PCAmixdata to handle a mixture of categorical and numerical variables. The image below demonstrates a clear clustering pattern, colored by the variable season, when IDEAR with the PCAmixdata library is applied to the Bike Rental sample data shipped with the utilities.


Numerical Variable Histograms Grouped by Categorical Variable Levels, in IDEAR in MRS

This feature enhancement allows users to easily compare the distribution of a numerical variable conditioned on different values of a categorical variable.


Numerical Interactions Grouped by Categorical Variables, in IDEAR in MRS

Interactions between two numerical variables can be influenced by a third categorical variable. You now have the option to view the scatterplot between numerical variables grouped by the categorical variable levels.


Next Steps

You can download and play with these new features in the data science utilities, and send us your feedback or feature requests via the comments section below, on the issues tab of our GitHub repository, or via Twitter to @zenlytix. We continue to work on improving this toolset to better serve your data science project needs, so we look forward to hearing from you.

Hang, Xibin & Wei


Using Debugging Tools to Find Token and Session Leaks


Hello AskDS readers and Identity aficionados. Long time no blog.

Ryan Ries here, and today I have a relatively “hardcore” blog post that will not be for the faint of heart. However, it’s about an important topic.

The behavior surrounding security tokens and logon sessions has recently changed on all supported versions of Windows. IT professionals – developers and administrators alike – should understand what this new behavior is, how it can affect them, and how to troubleshoot it.

But first, a little background…

Figure 1 – Tokens

Windows uses security tokens (or access tokens) extensively to control access to system resources. Every thread running on the system uses a security token, and may own several at a time. Threads inherit the security tokens of their parent processes by default, but they may also use special security tokens that represent other identities in an activity known as impersonation. Since security tokens are used to grant access to resources, they should be treated as highly sensitive, because if a malicious user can gain access to someone else’s security token, they will be able to access resources that they would not normally be authorized to access.

Note: Here are some additional references you should read first if you want to know more about access tokens:

If you are an application developer, your application or service may want to create or duplicate tokens for the legitimate purpose of impersonating another user. A typical example would be a server application that wants to impersonate a client to verify that the client has permissions to access a file or database. The application or service must be diligent in how it handles these access tokens by releasing/destroying them as soon as they are no longer needed. If the code fails to call the CloseHandle function on a token handle, that token can then be “leaked” and remain in memory long after it is no longer needed.

And that brings us to Microsoft Security Bulletin MS16-111.

Here is an excerpt from that Security Bulletin:

Multiple Windows session object elevation of privilege vulnerabilities exist in the way that Windows handles session objects.

A locally authenticated attacker who successfully exploited the vulnerabilities could hijack the session of another user.
To exploit the vulnerabilities, the attacker could run a specially crafted application.
The update corrects how Windows handles session objects to prevent user session hijacking.

Those vulnerabilities were fixed with that update, and I won’t further expound on the “hacking/exploiting” aspect of this topic. We’re here to explore this from a debugging perspective.

This update is significant because it changes how the relationship between tokens and logon sessions is treated across all supported versions of Windows going forward. Applications and services that erroneously leak tokens have always been with us, but the penalty paid for leaking tokens is now greater than before. After MS16-111, when security tokens are leaked, the logon sessions associated with those security tokens also remain on the system until all associated tokens are closed… even after the user has logged off the system. If the tokens associated with a given logon session are never released, then the system now also has a permanent logon session leak as well. If this leak happens often enough, such as on a busy Remote Desktop/Terminal Server where users are logging on and off frequently, it can lead to resource exhaustion on the server, performance issues and denial of service, ultimately causing the system to require a reboot to be returned to service.

Therefore, it’s more important than ever to be able to identify the symptoms of token and session leaks, track down token leaks on your systems, and get your application vendors to fix them.

How Do I Know If My Server Has Leaks?

As mentioned earlier, this problem affects heavily-utilized Remote Desktop Session Host servers the most, because users are constantly logging on and logging off the server. The issue is not limited to Remote Desktop servers, but symptoms will be most obvious there.

Figuring out that you have logon session leaks is the easy part. Just run qwinsta at a command prompt:

Figure 2 – qwinsta

Pay close attention to the session ID numbers, and notice the large gap between session 2 and session 152. This is the clue that the server has a logon session leak problem. The next user that logs on will get session 153, the next user will get session 154, the next user will get session 155, and so on. But the session IDs will never be reused. We have 150 “leaked” sessions in the screenshot above, where no one is logged on to those sessions, no one will ever be able to log on to those sessions ever again (until a reboot,) yet they remain on the system indefinitely. This means each user who logs onto the system is inadvertently leaving tokens lying around in memory, probably because some application or service on the system duplicated the user’s token and didn’t release it. These leaked sessions will forever be unusable and soak up system resources. And the problem will only get worse as users continue to log on to the system. In an optimal situation where there were no leaks, sessions 3-151 would have been destroyed after the users logged out and the resources consumed by those sessions would then be reusable by subsequent logons.
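
If you would rather script this check than eyeball the qwinsta output, here is a rough PowerShell sketch. The parsing is naive and assumes the default English column layout, so treat it as a quick sanity check rather than a tool.

    # Rough check: pull the numeric session IDs out of qwinsta output and compare the
    # number of sessions listed with the highest session ID. A large gap suggests leaks.
    $ids = qwinsta |
        Select-Object -Skip 1 |
        ForEach-Object { ($_ -split '\s+' | Where-Object { $_ -match '^\d+$' } | Select-Object -First 1) } |
        Where-Object { $_ } |
        ForEach-Object { [int]$_ } |
        Where-Object { $_ -lt 65536 }   # skip the listener pseudo-session

    "Sessions listed    : $($ids.Count)"
    "Highest session ID : $(($ids | Measure-Object -Maximum).Maximum)"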

How Do I Find Out Who’s Responsible?

Now that you know you have a problem, next you need to track down the application or service that is responsible for leaking access tokens. When an access token is created, the token is associated to the logon session of the user who is represented by the token, and an internal reference count is incremented. The reference count is decremented whenever the token is destroyed. If the reference count never reaches zero, then the logon session is never destroyed or reused. Therefore, to resolve the logon session leak problem, you must resolve the underlying token leak problem(s). It’s an all-or-nothing deal. If you fix 10 token leaks in your code but miss 1, the logon session leak will still be present as if you had fixed none.

Before we proceed: I would recommend debugging this issue on a lab machine, rather than on a production machine. If you have a logon session leak problem on your production machine, but don’t know where it’s coming from, then install all the same software on a lab machine as you have on the production machine, and use that for your diagnostic efforts. You’ll see in just a second why you probably don’t want to do this in production.

The first step to tracking down the token leaks is to enable token leak tracking on the system.

Modify this registry setting:

HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Kernel
    SeTokenLeakDiag = 1 (DWORD)

The registry setting won’t exist by default unless you’ve done this before, so create it. It also did not exist prior to MS16-111, so don’t expect it to do anything if the system does not have MS16-111 installed. This registry setting enables extra accounting on token issuance that you will be able to detect in a debugger, and there may be a noticeable performance impact on busy servers. Therefore, it is not recommended to leave this setting in place unless you are actively debugging a problem. (i.e. don’t do it in production exhibit A.)
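
If you prefer PowerShell over regedit, the value can be created from an elevated prompt like this:

    # Create (or update) the SeTokenLeakDiag value described above. Remove it again when
    # you are finished debugging, since it adds accounting overhead to token issuance.
    $key = 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Kernel'
    New-ItemProperty -Path $key -Name 'SeTokenLeakDiag' -PropertyType DWord -Value 1 -Force | Out-Null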

Prior to the existence of this registry setting, token leak tracing of this kind used to require using a checked build of Windows. And Microsoft seems to not be releasing a checked build of Server 2016, so… good timing.

Next, you need to configure the server to take a full or kernel memory dump when it crashes. (A live kernel debug may also be an option, but that is outside the scope of this article.) I recommend using DumpConfigurator to configure the computer for complete crash dumps. A kernel dump should be enough to see most of what we need, but get a Complete dump if you can.

Figure 3 – DumpConfigurator

Then reboot the server for the settings to take effect.

Next, you need users to log on and off the server, so that the logon session IDs continue to climb. Since you’re doing this in a lab environment, you might want to use a script to automatically logon and logoff a set of test users. (I provided a sample script for you here.) Make sure you’ve waited 10 minutes after the users have logged off to verify that their logon sessions are permanently leaked before proceeding.

Finally, crash the box. Yep, just crash it. (i.e. don’t do it in production exhibit B.) On a physical machine, this can be done by holding Right Ctrl and pressing Scroll Lock twice, if you configured the appropriate setting with DumpConfigurator earlier. If this is a Hyper-V machine, you can use the following PowerShell cmdlet on the Hyper-V host:

Debug-VM -VM (Get-VM RDS1) -InjectNonMaskableInterrupt

You may have at your disposal other means of getting a non-maskable interrupt to the machine, such as an out-of-band management card (iLO/DRAC, etc.,) but the point is to deliver an NMI to the machine, and it will bugcheck and generate a memory dump.

Now transfer the memory dump file (C:\Windows\Memory.dmp usually) to whatever workstation you will use to perform your analysis.

Note: Memory dumps may contain sensitive information, such as passwords, so be mindful when sharing them with strangers.

Next, install the Windows Debugging Tools on your workstation if they’re not already installed. I downloaded mine for this demo from the Windows Insider Preview SDK here. But they also come with the SDK, the WDK, WPT, Visual Studio, etc. The more recent the version, the better.

Next, download the MEX Debugging Extension for WinDbg. Engineers within Microsoft have been using the MEX debugger extension for years, but only recently has a public version of the extension been made available. The public version is stripped-down compared to the internal version, but it’s still quite useful. Unpack the file and place mex.dll into your C:\Debuggers\winext directory, or wherever you installed WinDbg.

Now, ensure that your symbol path is configured correctly to use the Microsoft public symbol server within WinDbg:

Figure 4 – Example Symbol Path in WinDbg

The example symbol path above tells WinDbg to download symbols from the specified URL, and store them in your local C:\Symbols directory.
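
If you would rather not click through the UI, the same symbol path can also be supplied through the _NT_SYMBOL_PATH environment variable before launching WinDbg. The cache directory here is just an example:

    # "srv*<local cache>*<server>" means: download from the Microsoft public symbol
    # server and cache the symbols locally in C:\Symbols.
    $env:_NT_SYMBOL_PATH = 'srv*C:\Symbols*https://msdl.microsoft.com/download/symbols'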

Finally, you are ready to open your crash dump in WinDbg:

Figure 5 – Open Crash Dump from WinDbg

After opening the crash dump, the first thing you’ll want to do is load the MEX debugging extension that you downloaded earlier, by typing the command:

Figure 6 – .load mex

The next thing you probably want to do is start a log file. It will record everything that goes on during this debugging session, so that you can refer to it later in case you forgot what you did or where you left off.

Figure 7 – !logopen

Another useful command that is among the first things I always run is !DumpInfo, abbreviated !di, which simply gives some useful basic information about the memory dump itself, so that you can verify at a glance that you’ve got the correct dump file, which machine it came from and what type of memory dump it is.

Figure 8 – !DumpInfo

You’re ready to start debugging.

At this point, I have good news and I have bad news.

The good news is that there already exists a super-handy debugger extension that lists all the logon session kernel objects, their associated token reference counts, what process was responsible for creating the token, and even the token creation stack, all with a single command! It’s !kdexts.logonsession, and it is awesome.

The bad news is that it doesn’t work… not with public symbols. It only works with private symbols. Here is what it looks like with public symbols:

Figure 9 – !kdexts.logonsession – public symbols lead to lackluster output

As you can see, most of the useful stuff is zeroed out.

Since public symbols are all you have unless you work at Microsoft, (and we wish you did,) I’m going to teach you how to do what !kdexts.logonsession does, manually. The hard way. Plus some extra stuff. Buckle up.

First, you should verify whether token leak tracking was turned on when this dump was taken. (That was the registry setting mentioned earlier.)

Figure 10 – x nt!SeTokenLeakTracking = &lt;no type information&gt;

OK… That was not very useful. We’re getting &lt;no type information&gt; because we’re using public symbols. But this symbol corresponds to the SeTokenLeakDiag registry setting that we configured earlier, and we know that’s just 0 or 1, so we can just guess what type it is:

Figure 11 – db nt!SeTokenLeakTracking L1

The db command means “dump bytes.” (dd, or “dump DWORDs,” would have worked just as well.) You should have a symbol for nt!SeTokenLeakTracking if you configured your symbol path properly, and the L1 tells the debugger to just dump the first byte it finds. It should be either 0 or 1. If it’s 0, then the registry setting that we talked about earlier was not set properly, and you can basically just discard this dump file and get a new one. If it’s 1, you’re in business and may proceed.

Next, you need to locate the logon session lists.

Figure 12 – dp nt!SepLogonSessions L1

Like the previous step, dp means “display pointer,” then the name of the symbol, and L1 to just display a single pointer. The 64-bit value on the right is the pointer, and the 64-bit value on the left is the memory address of that pointer.

Now we know where our lists of logon sessions begin. (Lists, plural.)

The SepLogonSessions pointer points to not just a list, but an array of lists. These lists are made up of _SEP_LOGON_SESSION_REFERENCES structures.

Using the dps command (display contiguous pointers) and specifying the beginning of the array that we got from the last step, we can now see where each of the lists in the array begins:

Figure 13 – dps 0xffffb808`3ea02650 – displaying pointers that point to the beginning of each list in the array

If there were not very many logon sessions on the system when the memory dump was taken, you might notice that not all the lists are populated:

Figure 14 – Some of the logon session lists are empty because not very many users had logged on in this example

The array doesn’t fill up contiguously, which is a bummer. You’ll have to skip over the empty lists.

If we wanted to walk just the first list in the array (we’ll talk more about dt and linked lists in just a minute,) it would look something like this:

Figure 15 – Walking the first list in the array and using !grep to filter the output

Notice that I used the !grep command to filter the output for the sake of brevity and readability. It’s part of the Mex debugger extension. I told you it was handy. If you omit the !grep AccountName part, you would get the full, unfiltered output. I chose “AccountName” arbitrarily as a keyword because I knew that was a word that was unique to each element in the list. !grep will only display lines that contain the keyword(s) that you specify.

Next, if we wanted to walk through the entire array of lists all at once, it might look something like this:

Figure 16 – Walking through the entire array of lists!

OK, I realize that I just went bananas there, but I’ll explain what just happened step-by-step.

When you are using the Mex debugger extension, you have access to many new text parsing and filtering commands that can truly enhance your debugging experience. When you look at a long command like the one I just showed, read it from right to left. The commands on the right are fed into the command to their left.

So from right to left, let’s start with !cut -f 2 dps ffffb808`3ea02650

We already showed what the dps command did earlier. The !cut -f 2 command filters that command’s output so that it only displays the second part of each line separated by whitespace. So essentially, it will display only the pointers themselves, and not their memory addresses.

Like this:

Figure 17 – Using !cut to select just the second token in each line of output

Then that is “piped” line-by-line into the next command to the left, which was:

!fel -x “dt nt!_SEP_LOGON_SESSION_REFERENCES @#Line -l Next”

!fel is an abbreviation for !foreachline.

This command instructs the debugger to execute the given command for each line of output supplied by the previous command, where the @#Line pseudo-variable represents the individual line of output. For each line of output that came from the dps command, we are going to use the dt command with the -l parameter to walk that list. (More on walking lists in just a second.)

Next, we use the !grep command to filter all of that output so that only a single unique line is shown from each list element, as I showed earlier.

Finally, we use the !count -q command to suppress all of the output generated up to that point, and instead only tell us how many lines of output it would have generated. This should be the total number of logon sessions on the system.

And 380 was in fact the exact number of logon sessions on the computer when I collected this memory dump. (Refer to Figure 16.)

Alright… now let’s take a deep breath and a step back. We just walked an entire array of lists of structures with a single line of commands. But now we need to zoom in and take a closer look at the data structures contained within those lists.

Remember, ffffb808`3ea02650 was the very beginning of the entire array.

Let’s examine just the very first _SEP_LOGON_SESSION_REFERENCES entry of the first list, to see what such a structure looks like:

Figure 18 – dt _SEP_LOGON_SESSION_REFERENCES* ffffb808`3ea02650

That’s a logon session!

Let’s go over a few of the basic fields in this structure. (Skipping some of the more advanced ones.)

  • Next: This is a pointer to the next element in the list. You might notice that there’s a “Next,” but there’s no “Previous.” So, you can only walk the list in one direction. This is a singly-linked list.
  • LogonId: Every logon gets a unique one. For example, “0x3e7” is always the “System” logon.
  • ReferenceCount: This is how many outstanding token references this logon session has. This is the number that must reach zero before the logon session can be destroyed. In our example, it’s 4.
  • AccountName: The user who does or used to occupy this session.
  • AuthorityName: Will be the user’s Active Directory domain, typically. Or the computer name if it’s a local account.
  • TokenList: This is a doubly or circularly-linked list of the tokens that are associated with this logon session. The number of tokens in this list should match the ReferenceCount.

The following is an illustration of a doubly-linked list:

Figure 19 – Doubly or circularly-linked list

“Flink” stands for Forward Link, and “Blink” stands for Back Link.

So now that we understand that the TokenList member of the _SEP_LOGON_SESSION_REFERENCES structure is a linked list, here is how you walk that list:

Figure 20 – dt nt!_LIST_ENTRY* 0xffffb808`500bdba0+0x0b0 -l Flink

The dt command stands for “display type,” followed by the symbol name of the type that you want to cast the following address to. The reason why we specified the address 0xffffb808`500bdba0 is because that is the address of the _SEP_LOGON_SESSION_REFERENCES object that we found earlier. The reason why we added +0x0b0 after the memory address is because that is the offset from the beginning of the structure at which the TokenList field begins. The -l parameter specifies that we’re trying to walk a list, and finally you must specify a field name (Flink in this case) that tells the debugger which field to use to navigate to the next node in the list.

We walked a list of tokens and what did we get? A list head and 4 data nodes, 5 entries total, which lines up with the ReferenceCount of 4 tokens that we saw earlier. One of the nodes won’t have any data – that’s the list head.

Now, for each entry in the linked list, we can examine its data. We know the payloads that these list nodes carry are tokens, so we can use dt to cast them as such:

Figure 21 – dt _TOKEN*0xffffb808`4f565f40+8+8 – Examining the first token in the list

The reason for the +8+8 on the end is because that’s the offset of the payload. It’s just after the Flink and Blink as shown in Figure 19. You want to skip over them.

We can see that this token is associated to SessionId 0x136/0n310. (Remember I had 380 leaked sessions in this dump.) If you examine the UserAndGroups member by clicking on its DML (click the link,) you can then use !sid to see the SID of the user this token represents:

Figure 22 – Using !sid to see the security identifier in the token

The token also has a DiagnosticInfo structure, which is super-interesting, and is the coolest thing that we unlocked when we set the SeTokenLeakDiag registry setting on the machine earlier. Let’s look at it:

Figure 23 – Examining the DiagnosticInfo structure of the first token

We now have the process ID and the thread ID that was responsible for creating this token! We could examine the ImageFileName, or we could use the ProcessCid to see who it is:

Figure 24 – Using !mex.tasklist to find a process by its PID

Oh… Whoops. Looks like this particular token leak is lsass’s fault. You’re just going to have to let the *ahem* application vendor take care of that one.

Let’s move on to a different token leak. We’re moving on to a different memory dump file as well, so the memory addresses are going to be different from here on out.

I created a special token-leaking application specifically for this article. It looks like this:

Figure 25 – RyansTokenGrabber.exe

It monitors the system for users logging on, and as soon as they do, it duplicates their token via the DuplicateToken API call. I purposely never release those tokens, so if I collect a memory dump of the machine while this is running, then evidence of the leak should be visible in the dump, using the same steps as before.

Using the same debugging techniques I just demonstrated, I verified that I have leaked logon sessions in this memory dump as well, and each leaked session has an access token reference that looks like this:

Figure 26 – A _TOKEN structure shown with its attached DiagnosticInfo

And then by looking at the token’s DiagnosticInfo, we find that the guilty party responsible for leaking this token is indeed RyansTokenGrabber.exe:

Figure 27 – The process responsible for leaking this token

By this point you know who to blame, and now you can go find the author of RyansTokenGrabber.exe, and show them the stone-cold evidence that you’ve collected about how their application is leaking access tokens, leading to logon session leaks, causing you to have to reboot your server every few days, which is a ridiculous and inconvenient thing to have to do, and you shouldn’t stand for it!

We’re almost done, but I have one last trick to show you.

If you examine the StackTrace member of the token’s DiagnosticInfo, you’ll see something like this:

Figure 28 – DiagnosticInfo.CreateTrace

This is a stack trace. It’s a snapshot of all the function calls that led up to this token’s creation. These stack traces grew upwards, so the function at the top of the stack was called last. But the function addresses are not resolving. We must do a little more work to figure out the names of the functions.

First, clean up the output of the stack trace:

Figure 29 – Using !grep and !cut to clean up the output

Now, using all the snazzy new Mex magic you’ve learned, see if you can unassemble (that’s the u command) each address to see if it resolves to a function name:

Figure 30 – Unassemble instructions at each address in the stack trace

The output continues beyond what I’ve shown above, but you get the idea.

The function on top of the trace will almost always be SepDuplicateToken, but could also be SepCreateToken or SepFilterToken, and whether one creation method was used versus another could be a big hint as to where in the program’s code to start searching for the token leak. You will find that the usefulness of these stacks will vary wildly from one scenario to the next, as things like inlined functions, lack of symbols, unloaded modules, and managed code all influence the integrity of the stack. However, you (or the developer of the application you’re using) can use this information to figure out where the token is being created in this program, and fix the leak.

Alright, that’s it. If you’re still reading this, then… thank you for hanging in there. I know this wasn’t exactly a light read.

And lastly, allow me to reiterate that this is not just a contrived, unrealistic scenario; there’s a lot of software out there on the market that does this kind of thing. And if you happen to write such software, then I really hope you read this blog post. It may help you improve the quality of your software in the future. Windows needs application developers to be “good citizens” and avoid writing software with the ability to destabilize the operating system. Hopefully this blog post helps someone out there do just that.

Until next time,
Ryan “Too Many Tokens” Ries

The top 5 reasons to upgrade to SQL Server 2016


Upgrading your software can be daunting, we know. The fast pace of business makes it easy to tell yourself, “I’ll do it later when I have time.” We get it! But here are five key reasons to make time to upgrade to SQL Server 2016, which was named DBMS of the Year in 2016 by DBengines.com.

  1. Seamless step-up without rewriting apps. Thanks to November’s SQL Server 2016 Service Pack 1 (SP1), SQL Server now has one programming surface across all editions. If you switch from Express to Standard, or Standard to Enterprise, you don’t have to rework code to take advantage of additional features. Time saved! In addition, the change brings access to innovative features across performance, security, and analytics not previously available in Express or Standard—a great reason to upgrade applications that run on those editions. The Enterprise edition of SQL Server 2016 continues to set the industry benchmark in terms of price, performance, and scalability at unparalleled TCO.
  2. Take back your weekend. With SQL Server 2016 you won’t have to wait for weekends or after-hours to run analytical workloads. You don’t have to wait until you can extract, transform, and load (ETL) the data to your Enterprise Data Warehouse, either. Now you can run your analytics workloads simultaneously on your operational data, without losing performance, by using in-memory OLTP tables and in-memory columnstore together. This process can provide real-time operational analytics, also known as hybrid transactional/analytics processing (HTAP). Get even more from your data with in-database advanced analytics using R statistical language, so you can model and score quickly and at scale with native integration in SQL Server’s T-SQL query language. Read all the details here.
  3. Unparalleled level of data security. Rest easy, you don’t need to lose sleep over potential breaches when your data is Always Encrypted—whether at rest or in motion. With SQL Server 2016 and Azure SQL Database, your database data remains encrypted at all times: at rest, during computation, and while processing queries. This is in addition to built-in row-level security and new dynamic data-masking capabilities. This 2-minute video explains how Always Encrypted works.
  4. Free your users from their desks. Take a trip, work from home, or linger over lunch—Mobile BI in SQL Server 2016 has got you covered. You can get your critical business insights in rich and beautiful reports anywhere, anytime, online and offline, and on any device (iOS, Android, and Windows). Learn more about end-to-end mobile BI here.
  5. Upgrade without headaches. Upgrading from older versions doesn’t need to be painful. We’ve created a painless Data Migration Assistant for upgrading to SQL Server 2016 as well as Azure SQL Database. You can read all the details here, but the short version is that you can now migrate your data from an old SQL Server version to a new one, plus get help finding and fixing breaking changes from earlier versions.

Now that you’re up to speed on the benefits of SQL Server 2016, find out what’s coming next by joining us for the upcoming Data Amp online event on Wednesday, April 19, 2017. This online event begins at 8 a.m. Pacific Time and will feature keynotes by Scott Guthrie, executive vice president, Microsoft Cloud and Enterprise Group, and Joseph Sirosh, corporate vice president, Microsoft Data Group, who will demonstrate how innovations in data, intelligence, and analytics are driving digital transformation. We’ll be making important announcements and showing off the latest tech that can get the most out of your data and give you a competitive edge, so you don’t want to miss it.

Register for Data Amp today.

Managing Configuration & App Settings for Multiple Environments in Your CD Pipeline


Your continuous delivery pipeline typically consists of multiple environments. You may want to deploy changes first to a test or staging environment before deploying to a production environment. Furthermore, your production environment may itself comprise multiple scale units, each of which you may deploy in parallel or one after the other for a gradual rollout.

As a best practice, you would want to deploy the same bits and follow the same procedure to deploy those bits in every environment. The only thing that should change from one environment to the next is the configuration you want to apply. For example, the database connection strings may point to different databases for different environments, or app settings may change between environments. Configuration and app settings changes may range from simple to complex, but all generally follow the same strategy. The deployment tool stores the configuration values for each environment and performs configuration transforms at deployment time. If you have sensitive data in app settings, such as passwords, it should be stored in the deployment tool rather than in plaintext in settings files.

Visual Studio Team Services allows you to define pipelines or release definitions with configuration management and transformations for each environment.

Why manage configuration changes through Visual Studio Team Services?

Here are the top five reasons to manage configuration and app settings variations per environment in a deployment tool like Visual Studio Team Services.

  1. More maintainable

Team Services allows you to manage variables and secrets at multiple levels. For variables that are needed in many release definitions, you can store variables in a project. You can also define configuration variables that are scoped to a specific release definition or to a specific environment in a release definition. Team Services provides an easy interface to manage these variables at each of these scopes.

  2. Fewer locations to update

By storing your configuration and app settings in a deployment tool, when a setting changes you only need to update it in the deployment tool rather than in every file where the setting is hard-coded. Furthermore, by storing them at the right scope, you do not have to update all the release definitions or environments in which they are used.

  3. Fewer mistakes

When you store the appropriate settings for each environment in the environment itself in the deployment tool, you won’t accidentally point to the development connection string when in production. The values for settings are siloed per environment, so it would be hard to mix them up when using a deployment tool.

  4. More secure

When you have connection strings, passwords, or any other settings stored in plaintext in a settings file (such as a web.config file), anyone who has access to the code can potentially access a database or machine. Team Services provides a rich permissions model for who can manage and use secrets at each scope.

  5. More reliable

By automating the process of transforming configurations through a deployment tool, you’ll be able to count on the transforms always happening during a deployment and setting the appropriate values.

The concept of managing configuration and app settings for multiple environments in your continuous deployment seems straightforward enough, but how does it look in practice? It’s likely simpler than you’d expect.

What do configuration and app settings transforms look like in Visual Studio Team Services?

Let us take an example of deploying an application to an Azure website. We will look at two approaches for changing the configuration of the website in each environment: transforming the web.config file just before deploying the website, and applying the settings directly in Azure.

The simplest approach is to transform the configuration and app settings in the web package’s web.config file just before deploying the package to each environment. Using the new version of the “Azure App Service Deploy” task in Visual Studio Team Services makes it easy to implement this approach as long as you define the values of the settings that you want to replace in each environment! Here are the steps for doing this.

  • Make sure that you have set up the build process for your app correctly; then in the Release Definition for your app, add the “Azure App Service Deploy” task into each environment. In the task, select the 3.* version from the dropdown for the task, then in the File Transforms and Variable Substitution Options group, click on the checkbox next to “XML variable substitution” to enable variable replacement in any config file.
  • Define the key/value pairs in each environment, making sure that the key names match the names in the config file.
  • When the task runs in the release, it will match the name of the variable defined in Visual Studio Team Services and replace the value in any config files in the app package before deploying to Azure. In the above example, it replaces the value of appSettings key “Key1” from the web.config with the value of variable “Key1” defined in the environment.
  • After the replacement, the task re-packages the website and pushes it to Azure.
  • When the deployment of the release proceeds to the next environment in the pipeline, the same process is repeated to replace the value of “Key1” in web.config with the corresponding value defined on that environment.

In some cases, you may not want sensitive settings to be stored into your web.config file. Instead, you want those sensitive settings to be directly applied to an Azure website. In this case, your flow looks as follows:

  • Define the key/value pairs in each environment of your release definition.
  • Include a script in each environment to apply the settings directly to your Azure website. You can write such a script using the “Set-AzureRmWebApp” cmdlet (a minimal sketch follows below). In fact, the Visual Studio Marketplace has an extension that does exactly this for you. When you install this extension in your account, you will notice an additional task called “Apply variables to Azure webapp”, which takes the variables defined in your environment, following a naming convention, and applies them directly to the Azure web app.

Azure Web App Configuration Task
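
If you would rather write the script yourself instead of installing the extension, a minimal sketch using the Set-AzureRmWebApp cmdlet might look like the following. The resource group, app name, and setting names are placeholders; non-secret release variables are read here from the environment variables that the release agent exposes to scripts.

    # Hypothetical sketch: apply environment-specific settings directly to the Azure web app.
    # Note that -AppSettings replaces the whole app settings collection, so include every
    # setting you want the web app to keep, not just the ones that change.
    $settings = @{
        'Key1' = $env:KEY1
        'Key2' = $env:KEY2
    }
    Set-AzureRmWebApp -ResourceGroupName 'MyResourceGroup' -Name 'MyWebApp' -AppSettings $settings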

For more information about setting up continuous integration and continuous deployment for your ASP.NET, ASP.NET core, Node, or other apps, see these documents:

CI for ASP.NET apps
CI for ASP.NET core apps
CI/CD for Node apps
Deploy your web package to Azure websites from VSTS

The IIS web deployment tasks in Team Services support similar transformations when you want to deploy a web package to on-premises or Azure virtual machines. For more information about deploying websites to IIS servers, see:

Deploy your web package to IIS servers

If you are deploying your application to a different platform or using a different technology, you have the power of generic task execution to run any scripts as part of your deployment. For instance, you can author a script to transform your custom configuration file using PowerShell, batch, or shell scripts. Alternatively, there are many extensions available in Visual Studio Marketplace to help you with deploying your applications.

Organizing the process for configuration differences for each environment may seem challenging, but it is simple to implement using Visual Studio Team Services, and you can remove much of the pain of managing settings transforms in configuration files.

 

Many thanks to the contributors & reviewers for this post:  Sachi Williamson, Abel Wang, Martin Woodward, and Ed Blankenship

Windows 10 Creators Update and Creators Update SDK are Released


This is a big day! Today we opened access to download the Windows 10 Creators Update and, along with it, the Creators Update SDK. And today is a great day for all Windows developers to get the SDK and start building amazing apps that take advantage of new platform capabilities to deliver experiences that you and your users will love.

We are working hard to innovate in Windows and to bring the power of those innovations to Windows developers and users. We released Windows 10 Anniversary Update just eight months ago, and we’ve already seen that over 80% of Windows 10 PCs are running Anniversary Update (version 1607) or later.

With today’s release of Windows 10 Creators Update, we expect users to once again move rapidly to the latest and best version of Windows. For developers, this is the time to get ready for the next wave.

What’s New in the Creators Update

Here are just a few of the new and powerful capabilities in the Creators Update:

  • Enhancements to the visual layer (effects, animations and transitions) and elevation of many effects to the XAML layer with improved controls that make the enhancements easy to bring to apps
  • Improvements to ink, including ink analysis and improved recognition, and an ink toolbar with new effects (tilt pencil) and tools (protractor for drawing curves and circles)
  • More powerful and flexible APIs for the Surface Dial
  • Significant Bluetooth improvements with Bluetooth LE GATT Server, peripheral mode for easier discovery of Windows Devices, and support for loosely coupled Bluetooth devices (those low energy devices that do not have to be explicitly paired)
  • Better user engagement via notifications that can now be grouped by app, bind to data and contain in-line controls such as progress bars
  • Improvements to the Desktop Bridge to make it easier than ever to bring Win32 apps to Windows 10 and the Windows Store
  • The ability to have seamless cross-device experiences with Project Rome and the recently released Android SDK for Project Rome
  • More targeted and effective user acquisition via Facebook app install ads with the Windows SDK for Facebook
  • Background execution enhancements that enable tasks to do more with increased memory and time
  • Enhanced security for apps with the ability to integrate Windows Hello
  • Richer app analytics via an updated Dev Portal that enables management of multiple apps and enhanced reporting
  • Faster app downloads and updates with the ability to componentize app packages and do streaming installs
  • Increased efficiency and flexibility with the new ability in Visual Studio 2017 to run two different SDK versions side by side on the same machine
  • Significant improvements to the Windows Console and the Windows Subsystem for Linux enabling many of the most used Linux frameworks, tools and services
  • New and natural ways for users to connect and engage with apps using the Cortana Skills Kit
  • The ability for game developers to reach new audiences by publishing UWP games on the Xbox via the Xbox Live Creators Program
  • Amazing 3D experiences on HoloLens and new mixed reality headsets via the Windows Mixed Reality Platform

You can find a more complete list here along with the latest developer documentation.

We’ll be taking a close look at all of these (and a lot more) at Microsoft Build 2017, including some of the things we’ve got planned for the future.

I hope to see you there!

Get Started Now

To get started, please check out Clint Rutkas’ post for the details on how to get the latest version of Visual Studio and the SDK. And take a look at Daniel Jacobson’s blog post to see some of the improvements for UWP developers in Visual Studio 2017.

— Kevin

The post Windows 10 Creators Update and Creators Update SDK are Released appeared first on Building Apps for Windows.

Updating your tooling for Windows 10 Creators Update


We’re extremely excited today that the Windows 10 Creators Update, build 15063, has been released. Kevin Gallo went into detail about some of the new features and APIs in Creators Update for developers. I’ll go into the details of getting your system updated and configured so you can submit your apps to the Windows Store. This includes the Windows 10 Creators Update SDK, the Visual Studio 2017 UWP tooling, and the Windows Store starting to accept applications that target the Windows 10 Creators Update.

There are two primary steps you’ll need to take:

  1. Update your system to Windows 10 Creators Update, build 15063.
  2. Get Visual Studio 2017 with the updated tooling and Windows 10 Creators Update SDK.

For a more in-depth overview of the UWP Tooling updates in Visual Studio 2017, Daniel Jacobson did a fantastic write up on the Visual Studio Blog.

Update Your System

Our engineering team outlined how the rollout will happen through Windows Update. When the update is ready for your computer, you’ll receive a notification. If you want to pull the update manually, go to the software download site and select “Update Now.” Running the executable will force your system to update to the Windows 10 Creators Update.
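
Once the update has finished, one quick way to confirm that you are on build 15063 is to check the build number from PowerShell:

    # The Windows 10 Creators Update is OS build 15063.
    (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion').CurrentBuild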

Acquiring Windows 10 Creators Update SDK and Visual Studio 2017

Now that your system is on Windows 10 Creators Update, let’s install Visual Studio and the SDK. It really is straightforward.

  • Don’t have Visual Studio 2017:
    1. Head over to Windows Dev Center’s download area and select the edition of Visual Studio you want.
    2. Run the installer.
    3. Select “Universal Windows Platform development” under Workloads.
    4. Click “Install.”
  • Visual Studio 2017 is already installed:
    1. Run the Visual Studio Installer.
    2. Be sure that “Universal Windows Platform development” under Workloads is checked.
    3. Click “Update” / “Install.”

Additional useful items:

  • Want tools for C++ desktop or game development for UWP? Be sure one of these two is selected:
    • C++ Universal Windows Platform tools in the UWP Workload section
    • Desktop development with C++ Workload and the Windows 10 SDK (10.0.15063.0)
  • If you want the Universal Windows Platform tools:
    • Select the Universal Windows Platform tools workload.

Once you’ve updated your systems, recompiled and tested your app, submit your app to Dev Center!

Wrapping up

I would love to know what crazy things you’ve built with the update, so let me know by tweeting @WindowsDev.

For feedback on Visual Studio, use Report a Problem. If you have Windows API feedback or developer related feature requests, head over to https://wpdev.uservoice.com.

The post Updating your tooling for Windows 10 Creators Update appeared first on Building Apps for Windows.


Reintroducing the Team Explorer standalone installer


If you remember back to 2013 (and before), we released standalone installers for Team Explorer. In VS 2015, we did not release a standalone Team Explorer since customers had free options with Express SKUs and Community, which included Team Explorer functionality.

Customers have continued to request a standalone installer for Team Explorer for non-developers, however. And so today, with the Visual Studio 2017 Update release, the standalone Team Explorer installer is back. This is a free and freely licensed solution for non-developers who don’t need a full version of Visual Studio.

Please send any feedback by going to Help->Send Feedback.


 

Announcing the .NET Framework 4.7


Today, we are announcing the release of the .NET Framework 4.7. It’s included in the Windows 10 Creators Update. We’ve added support for targeting the .NET Framework 4.7 in Visual Studio 2017, also updated today. The .NET Framework 4.7 will be released for additional Windows versions soon. We’ll make an announcement when we have the final date.

The .NET Framework 4.7 includes improvements in several areas:

  • High DPI support for Windows Forms applications on Windows 10
  • Touch support for WPF applications on Windows 10
  • Enhanced cryptography support
  • Performance and reliability improvements

You can see the complete list of improvements and the API diff in the .NET Framework 4.7 release notes.

To get started, upgrade to Windows 10 Creators Update and then install the update to Visual Studio 2017.
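
If you need to confirm that the .NET Framework 4.7 is present on a machine, one approach is to check the Release value under the .NET setup registry key. On the Windows 10 Creators Update, the .NET Framework 4.7 reports a Release value of 460798; other OS versions will report a slightly different value once 4.7 ships for them.

    # Check the installed .NET Framework 4.x version via the Release DWORD.
    # 460798 or greater indicates .NET Framework 4.7 (or a later version) is installed.
    $release = (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full').Release
    if ($release -ge 460798) { ".NET Framework 4.7 or later is installed (Release = $release)" }
    else { ".NET Framework 4.7 is not installed (Release = $release)" }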

.NET Framework Documentation

We are also launching a set of big improvements for the .NET Framework docs today. The .NET Framework docs are now available on docs.microsoft.com. The docs look much better and are easier to read and navigate. We also have a lot of navigation and readability improvements planned for later this year. The .NET Framework docs on MSDN will start redirecting to the new docs.microsoft.com pages later this year. Some table of contents and content updates will be occurring over the next few days as we complete this large documentation migration project, so please bear with us.

The docs will also show up as open source on GitHub later this week at dotnet/docs! Updating and improving the docs will now be easier for everyone, including the .NET writing and engineering teams at Microsoft. This is the same experience we have for the .NET Core docs.

We also just released a new experience for searching .NET APIs. You can now search and filter .NET APIs for .NET Core, .NET Framework, .NET Standard and Xamarin all in one place! You can also filter by version. UWP APIs are still coming. When you do not filter searches, a single canonical version of each type is shown (not one per product and version). Try it for yourself with a search for string. The next step is to provide obvious visual cues in the docs so that you know which product and version the page you are reading applies to.

Check out the new .NET API Browser, also shown below.

api-browser

High DPI for Windows Forms

This release includes a big set of High DPI improvements for Windows Forms DPI Aware applications. Higher DPI displays have become more common for both laptops and desktop machines, and it is important that your applications look great on newer hardware. See the team walk you through Windows Forms High DPI Improvements on Channel 9.

The goal of these improvements is to ensure that your Windows Forms apps:

  • Layout correctly at higher DPI.
  • Use high-resolution icons and glyphs.
  • Respond to changes in DPI, for example, when moving an application across monitors.

Rendering challenges start at around a 150% scaling factor and become much more obvious above 200%. The new updates make your apps look better by default and enable you to participate in DPI changes so that you can make your custom controls look great, too.

The changes in the .NET Framework 4.7 are a first investment in High DPI for Windows Forms. We intend to make Windows Forms more High DPI friendly in future releases. The current changes do not cover every single control, but they provide a good experience up to a 300% scaling factor. Please help us prioritize additional investments in the comments and at microsoft/dotnet issue #374.

These changes rely on High DPI improvements in the Windows 10 Creators Update, also released today. See High DPI Improvements for Desktop App Developers in the Windows 10 Creators Update (a video) if you prefer to watch someone explain what's new. You may want to follow and reach out to @WindowsUI on Twitter.

Improvements for System DPI aware Applications

We’ve fixed layout issues with several of the controls: calendar, exception dialog box, checked list box, menu tool strip and anchor layout. You need to opt in to these changes, either as a group or by fine-tuning the set that you want to enable, giving you control over which High DPI improvements are applied to your application.

Calendar Control

The calendar control has been updated to be System DPI Aware, showing only one month. This is the new behavior, as you can see in the example below at 300% scaling.

calendar-control-display-300-correct

You can see the existing behavior below at 300% scaling.

calendar-control-display-300-incorrect

ListBox

The ListBox control has been updated to be System DPI Aware, with the desired control height. This is the new behavior, as you can see in the example below at 300% scaling.

listbox-control-display-300-correct

You can see the existing behavior below at 300% scaling.

listbox-control-display-300-incorrect

Exception Message box

The exception message box has been updated to be System DPI Aware, with the correct layout. This is the new behavior, as you can see in the example below at 300% scaling.

exception-messagebox-display-300-correct

You can see the existing behavior below at 300% scaling.

exception-messagebox-display-300-incorrect

Dynamic DPI Scenarios

We’ve also added support for dynamic DPI scenarios, which enables Windows Forms applications to respond to DPI changes after being launched. This can happen when the application window is moved to a display that has a different scale factor, when the current monitor’s scale factor is changed, or when you connect an external monitor to a laptop (docking or projecting).

We’ve exposed three new events to support dynamic DPI scenarios (a short sketch follows the list):

  • Control.OnDpiChangedBeforeParent
  • Control.OnDpiChangedAfterParent
  • Form.DpiChanged
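
Here is a minimal sketch of what reacting to a DPI change might look like, assuming the PerMonitorV2 opt-in shown later in this post. The class name and handler body are illustrative, not part of the API.

using System;
using System.Windows.Forms;

public class DpiAwareForm : Form
{
    public DpiAwareForm()
    {
        // Raised when the form moves to a monitor with a different DPI
        // (requires the PerMonitorV2 configuration shown below).
        DpiChanged += (sender, e) =>
        {
            Console.WriteLine($"DPI changed from {e.DeviceDpiOld} to {e.DeviceDpiNew}");
            // Re-scale custom-drawn content, swap image resources, and so on.
        };
    }

    protected override void OnDpiChangedAfterParent(EventArgs e)
    {
        // Called after the parent control has already handled the DPI change.
        base.OnDpiChangedAfterParent(e);
    }
}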

Ecosystem

We’ve recently been talking to control providers (for example, Telerik and GrapeCity) so that they can update their controls to support High DPI. Please do reach out to your control providers to tell them which Windows Forms (and WPF) controls you want updated to support High DPI. If you are a control provider (commercial or free) and want to chat, please reach out at dotnet@microsoft.com.

You might be wondering about WPF. WPF is inherently High DPI aware and compatible because it is based on vector graphics. Windows Forms is based on raster graphics. WPF implemented a per-monitor experience in the .NET Framework 4.6.2, based on improvements in the Windows 10 Anniversary Update.

Quick Lesson in Resolution, DPI, PPI and Scaling

Higher resolution doesn’t necessarily mean high DPI. It’s typically scaling that results in higher DPI scenarios. I have a desktop machine with a single 1080p screen that is set to 100% scaling; I won’t notice any of the features discussed here. I also have a laptop with a higher resolution screen that is scaled to 300% to make it look really good. I’ll definitely notice the high DPI features on that machine. If I hook up my 1080p screen to my laptop, then I’ll experience the PerMonitorV2 support when I move my Windows Forms applications between screens.

Let’s look at some actual examples. The Surface Pro 4 and Surface Book 2 have 267 PPI displays, which are likely scaled by default. The Dell 43″ P4317Q has 104 PPI at its native 4K resolution, which is likely not scaled by default. An 85″ 4K TV will have a PPI value of about half that. If you scale the Dell 43″ monitor to 200%, then you will have a High DPI visual experience.
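
As a rough sanity check on those numbers, PPI is just the diagonal pixel count divided by the diagonal size in inches:

PPI ≈ sqrt(width² + height²) / diagonal inches
sqrt(3840² + 2160²) / 43 ≈ 4406 / 43 ≈ 102 PPI

which lands close to the quoted 104 PPI for the Dell; the small difference comes from the panel’s exact viewable diagonal being slightly under 43 inches.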

Note: DPI and PPI are measurements that take into account screen resolution, screen size and scaling. For the purposes of this post, you can use them interchangeably.

You can try scaling your monitor temporarily with Magnifier. It is a great test tool for High DPI.

Take advantage of High DPI

You need to target the .NET Framework 4.7 to take advantage of these improvements. Use the following app.config file to try out the new High DPI support. Notice that the sku attribute is set to .NETFramework,Version=v4.7 and the DpiAwareness key is set to PerMonitorV2.

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.7" />
  </startup>
  <System.Windows.Forms.ApplicationConfigurationSection>
    <add key="DpiAwareness" value="PerMonitorV2" />
  </System.Windows.Forms.ApplicationConfigurationSection>
</configuration>

You must also include a Windows app manifest with your app that declares that it is a Windows 10 application. The new Windows Forms System DPI Aware and PerMonitorV2 DPI Aware features will not work without it. See the required application manifest fragment below. A full manifest can be found in this System DPI Aware sample.

<compatibility xmlns="urn:schemas-microsoft-com:compatibility.v1">
  <application>
    <supportedOS Id="{8e0f7a12-bfb3-4fe8-b9a5-48fd50a15a9a}" />
  </application>
</compatibility>

Please see Windows Forms Configuration to learn about how to configure each of the Windows Forms controls individually if you need more fine-grained control.

You must target and (re)compile your application with the .NET Framework 4.7, not just run on it. Applications that run on the .NET Framework 4.7 but target the .NET Framework 4.5 or 4.6, for example, will not get the new improvements. Updating the app.config file of an existing application alone will not work; re-compilation is necessary.

WPF Touch/Stylus support for Windows 10

WPF now integrates with the touch and stylus/ink support in Windows 10. The Windows 10 touch implementation is more modern and addresses customer feedback that we’ve received about the current Windows Ink Services Platform (WISP) component that WPF relies on for touch data. You can opt into the new Windows touch services with the .NET Framework 4.7. The WISP component remains the default.

The new touch implementation has the following benefits over the WISP component:

  • More reliable – The new implementation is the same one used by UWP, a touch-first platform. We’ve heard feedback that WISP has intermittent touch responsiveness issues. The new implementation resolves these.
  • More capable – Works well with popups and dialogs. We’ve heard feedback that WISP doesn’t work well with popup UI.
  • Compatible – Basic touch interaction and support should be almost indistinguishable from WISP.

There are some scenarios that don’t yet work well with the new implementation; for these, staying with WISP is the best choice:

  • Real-time inking does not function. Inking/StylusPlugins will still work, but can stutter.
  • Applications using the Manipulation engine may experience different behavior.
  • Promotions of touch to mouse will behave slightly differently from the WISP stack.

Our future work should address all of these issues and provide touch support that is completely compatible with the WISP component. Our goal is to provide a more modern touch experience that continues to improve with each new release of Windows 10.

You can opt in to the new touch implementation with the following app.config entry.

<configuration>
  <runtime>
    <AppContextSwitchOverrides value="Switch.System.Windows.Input.Stylus.EnablePointerSupport=true" />
  </runtime>
</configuration>
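
If editing app.config is not convenient, the same switch can in principle be set in code with AppContext.SetSwitch. This is only a sketch: the switch needs to be set before the stylus stack reads it (so very early in startup), and the app.config entry above remains the documented route. The App class shown here is the usual WPF application class; its name and placement are illustrative.

using System;
using System.Windows;

public partial class App : Application
{
    static App()
    {
        // Code-based equivalent of the AppContextSwitchOverrides entry above.
        // Must run before any touch/stylus input is initialized.
        AppContext.SetSwitch("Switch.System.Windows.Input.Stylus.EnablePointerSupport", true);
    }
}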

ClickOnce

The ClickOnce Team made a set of improvements in the .NET Framework 4.7.

Hardware Security Module Support

You can now sign ClickOnce manifest files with a Hardware Security Module (HSM) in the Manifest Generation and Editing Tool (Mage.exe). This improvement was the second most requested feature for ClickOnce! HSMs make certificate management more secure and easier, since both the certificate and signing occur within secure hardware.

There are two ways to sign your application with an HSM via Mage. The first is via the command line, where we’ve added two new options:

-CryptoProvider -csp
-KeyContainer -kc

The CryptoProvider and KeyContainer options are required if the certificate specified by the CertFile option does not contain a private key. The CryptoProvider option specifies the name of the cryptographic service provider (CSP) that contains the private key container. The KeyContainer option specifies the name of the key container that holds the private key.

We have also added a new Verify command, which will verify that the manifest has been signed correctly. It takes a manifest file as its parameter:

-Verify -ver
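
Putting the new options together, a signing-plus-verification run might look like the following. The file, provider and container names are placeholders; substitute the values for your own certificate and HSM.

mage -Sign MyApp.application -CertFile MyPublicCert.cer -CryptoProvider "MyHsmProviderName" -KeyContainer "MyKeyContainer"
mage -Verify MyApp.application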

The second way is to sign it via the Mage GUI, which collects the required information before signing:

Mage Signing Options

Store Corruption Recovery

ClickOnce will now detect if the ClickOnce application store has become corrupted. In the event of store corruption, ClickOnce will automatically attempt to clean up and reinstall broken applications for users. Developers or admins do not need to do anything to enable this new behavior.

API-level Improvements

There are several API-level improvements included in this release, described below.

TLS Version now matches Windows

Network security is increasingly important, particularly for HTTPS. We’ve had requests for .NET to match the Windows defaults for TLS version. This makes machines easier to manage. You opt into this behavior by targeting .NET Framework 4.7.

HttpClient, HttpWebRequest and WebClient clients all implement this behavior.
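
Apps that target the .NET Framework 4.7 pick up the operating system defaults automatically; nothing needs to be set in code. As a sketch of what the equivalent explicit setting looks like (for example, in an app that still targets an older framework but runs on 4.7), the new SystemDefault value defers the protocol choice to Windows. The class and method names here are illustrative.

using System.Net;

static class TlsDefaults
{
    public static void UseSystemDefaults()
    {
        // SecurityProtocolType.SystemDefault is new in the .NET Framework 4.7 and
        // defers TLS version selection to the operating system.
        ServicePointManager.SecurityProtocol = SecurityProtocolType.SystemDefault;
    }
}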

For WCF, MessageSecurity and TransportSecurity classes were also updated to support TLS 1.1 and 1.2. We’ve heard requests for these classes to also match OS defaults. Please tell us if you would like that behavior.

More reliable Azure SQL Database Connections

TCP is now the default protocol to connect to Azure SQL Database. This change significantly improves connection reliability.

Cryptography

The .NET Framework 4.7 has enhanced the functionality available with Elliptic Curve Cryptography (ECC). ImportParameters(ECParameters) methods were added to the ECDsa and ECDiffieHellman classes to allow an object to represent an already-established key. An ExportParameters(bool) method was also added for exporting the key using explicit curve parameters.

The .NET Framework 4.7 also adds support for additional curves (including the Brainpool curve suite), and has added predefined definitions for ease-of-creation via the new ECDsa.Create(ECCurve) and ECDiffieHellman.Create(ECCurve) factory methods.

This functionality is provided by system libraries, and so some of the new features will only work on Windows 10.

You can see an example of the .NET Framework 4.7 cryptography improvements to try out the changes yourself.
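
The linked sample has the full walkthrough; the fragment below is only a minimal sketch of the new ECDsa.Create(ECCurve) factory method, assuming you are running on Windows 10 (required for the Brainpool curves). The class name and message text are illustrative.

using System;
using System.Security.Cryptography;
using System.Text;

class EccSketch
{
    static void Main()
    {
        // Create a signing key on one of the newly supported Brainpool curves.
        using (ECDsa ecdsa = ECDsa.Create(ECCurve.NamedCurves.brainpoolP256r1))
        {
            byte[] data = Encoding.UTF8.GetBytes("Hello from the .NET Framework 4.7");
            byte[] signature = ecdsa.SignData(data, HashAlgorithmName.SHA256);
            Console.WriteLine(ecdsa.VerifyData(data, signature, HashAlgorithmName.SHA256));

            // Export the public portion of the established key.
            ECParameters publicParameters = ecdsa.ExportParameters(includePrivateParameters: false);
            Console.WriteLine(publicParameters.Curve.Oid.FriendlyName);
        }
    }
}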

Breaking Changes

Building .NET Framework 4.7 apps with Visual Studio 2017

You can start building .NET Framework 4.7 apps once you have the Windows 10 Creators Update and the Visual Studio 2017 update installed. You need to select the .NET Framework 4.7 development tools as part of updating Visual Studio 2017, as you can see highlighted in the example below.

vs2017-dotnet47-install

Windows Version Support

The .NET Framework 4.7 is available for Windows 10 Creators Update today. We expect to release it for earlier versions of Windows pretty soon. We’ll make an announcement when we have the final date.

The following Windows versions will be supported (same as .NET Framework 4.6.2):

  • Client: Windows 10 Creators Update (RS2), Windows 10 Anniversary Update (RS1), Windows 8.1, Windows 7 SP1
  • Server: Windows Server 2016, Windows Server 2012 R2, Windows Server 2012, Windows Server 2008 R2 SP1

A .NET Framework 4.7 targeting pack will be released at the same time for use with earlier versions of Visual Studio. You can always look to our .NET targeting page for targeting packs.

Closing

The improvements in the .NET Framework take advantage of new Windows 10 client features and make general improvements that apply to all Windows versions. Please do tell us what you think of these improvements.

Please try out the .NET Framework 4.7 on Windows 10 Creators Update, using the latest update to Visual Studio 2017. We’ll make the new version available for other Windows versions soon.

Team Explorer for TFS 2017


When we shipped TFS 2017 and Visual Studio 2017, we didn’t provide a “Team Explorer”-like solution. Historically, our Team Explorer installer has been available for customers who want a rich client to access version control and some work item tracking features in TFS or VS Team Services. We didn’t release it because we needed to create a new version based on the new Visual Studio installer technology introduced in VS 2017, and we just didn’t have time to do that before we released. Along with the release of VS 2017.1 today, we are now releasing a Team Explorer installer.

 

Please let us know if you have any feedback.

Brian

Enhance protection of VMs with Azure Advisor backup recommendations


We have seen a few cases where customers accidentally deleted VMs or data inside a VM running in Azure. While Azure provides protection against infrastructure-related failures, it can’t guard against user-initiated actions such as accidental deletion or a bad patch on the guest OS. Azure Backup guards against accidental deletion and guest OS-level corruption using its cloud-first approach to backup, and seamlessly enables you to restore a full VM or instantly recover files inside a VM. Customers can configure backup either from the Recovery Services vault or directly from the VM management blade. However, we have seen customers skip configuring backup and put their critical data at risk. Today we are taking a step toward making sure we advise you to protect your VMs using backup, with Advisor recommendations, which became generally available last week.

Azure Advisor is a personalized cloud consultant that helps you optimize your use of the cloud as you start your digital transformation using Azure. It analyzes your Azure usage and provides timely recommendations to help optimize and secure your deployments. It provides recommendations in four categories: High Availability, Security, Performance and Cost. With this announcement, it can provide recommendations about virtual machines that are not backed up, and with a few clicks it will let you enable backup on those virtual machines.

Advisor Backup Recommendations

Value Proposition:

Periodic recommendations – Advisor provides hourly recommendations for virtual machines that are not backed up, so you never miss backing up important VMs. You can also control recommendations by snoozing them.

Seamless experience to back up – You can seamlessly enable backup on virtual machines by clicking a recommendation and specifying a vault (where backups will be stored) and a backup policy (the schedule of backups and retention of backup copies).

Freedom from infrastructure – With Azure Backup integrated into recommendations, you do not need to provision any additional infrastructure to configure backup.

Application-consistent backup – Azure Backup provides application-consistent backup for Windows and Linux, and by configuring backup using recommendations you get a consistent backup without the need to shut down the virtual machine.

 


What’s brewing in Visual Studio Team Services: April 2017 Digest


This post series provides the latest updates and news for Visual Studio Team Services and is a great way for Azure users to keep up-to-date with new features being released every three weeks. Visual Studio Team Services offers the best DevOps tooling to create an efficient continuous integration and release pipeline to Azure. With the rapidly expanding list of features in Team Services, teams can start to leverage it more efficiently for all areas of their Azure workflow, for apps written in any language and deployed to any OS.

Git tags

We’ve now added tag support into the web experience. Instead of creating tags from the command line and pushing the tags to the repository, you can now simply go to a commit and add a tag. The tag creation dialog will also let you tag any other ref in the repo.

create tag details

Your commits will now show the tags that you have created.

show tags

The commit list view also supports a context menu. No need to go to the commit details page to create tags and create new branches.

create tag history

Soon we will add a page for tag management.

Git branch policy improvements

Branch policies provide a great way to help maintain quality in your repos by allowing you to require a passing build, require code reviewers, and more. As part of reviewing pull requests, users often leave comments. You can now ensure that all comments in pull requests are being addressed with the new Comments policy. Once enabled, active comments will block completion of the PR. Reviewers that leave comments for the PR author but optimistically approve the pull request can be sure that comments won’t be missed.

comment requirements

Sometimes you need to override policies, such as in the middle of the night when addressing an issue in production. Users bypassing pull request policies must now specify a reason. In the Complete pull request dialog, users will see a new Reason field, if they choose to bypass.

pr bypass dialog

After entering the reason and completing the pull request, the message will be displayed in the pull request’s Overview.

bypass message

Import Team Foundation Version Control into a Git repo

If you’re using Team Foundation Version Control (TFVC) and are looking for an easy way to migrate to Git, try out the new TFVC import feature. Select Import Repository from the repository selector drop-down.

import repo

Select TFVC for the source type. Individual folders or branches can be imported to a new Git repository, or the entire TFVC repository can be imported (minus the branches). You can import up to 180 days of history.

import into Git from TFVC

Team Foundation Version Control support for Android Studio, IntelliJ, and Rider

We’ve now officially released support for TFVC in Android Studio and a variety of JetBrains IDEs, such as IntelliJ IDEA and Rider EAP. Users can seamlessly develop without needing to switch back and forth from the IDE to the command line to perform their Team Services actions. The plugin also includes features that you otherwise wouldn’t get from the command-line client, such as seeing an updated status of your repository’s related builds, along with the capability to browse work items assigned to you or from your personal queries.

TFVC in IntelliJ

Currently we support:

  • Check out a TFVC repository from Team Services or Team Foundation Server 2015+
  • Execute all basic version control actions such as add, delete, rename, move, etc.
  • View local changes and history for your files
  • Create, view, and edit your workspace
  • Check in and update local files
  • Merge conflicts from updates
  • Lock and unlock files and directories
  • Add labels to files and directories
  • Configure a TFS proxy

Check out our brief demo of getting up and running inside of Android Studio. For a more comprehensive look at the plugin, check out our presentation and tutorial inside of IntelliJ.

To start using the TFVC features, download the latest version of the plugin and follow the setup steps.

Continuous delivery in the Azure portal using any Git repo

You can now configure a continuous delivery (CD) workflow for an Azure App Service for any public or private Git repository that is accessible from the Internet. With a few clicks in the Azure portal, you can set up a build and release definition in Team Services that will periodically check your Git repository for any changes, sync those changes, run an automated build and test, followed by a deployment to Azure App Service.

Start using this feature today by navigating to your app’s menu blade in the Azure portal and clicking Continuous Delivery (Preview) under the App Deployment section.

azure portal continuous delivery

Conditional build tasks

If you’re looking for more control over your build tasks, such as a task to clean things up or send a message when something goes wrong, we now support four built-in choices for you to control when a task is run:

task condition

If you are looking for more flexibility, such as a task to run only for certain branches, with certain triggers, under certain conditions, you can express your own custom conditions:

and(failed(), eq(variables['Build.Reason'], 'PullRequest'))

Take a look at the conditions for running a task.

Customizable backlog levels

You can now add backlog levels to manage the hierarchy of your work items and name them in a way that makes sense for your work item types. You can also rename and recolor existing backlog levels, such as Stories or Features. See Customize your backlogs or boards for a process for details on how to get started.

custom backlog levels

Mobile work item discussion

Our mobile discussion experience has been optimized to provide a mobile-friendly, streamlined experience for submitting a comment. Discussion is the most common action that takes place on a mobile device. We look forward to hearing what you think about our new experience!

mobile discussion


Extension of the month

If you are like us, you use open source software in your development projects. Reusing components enables great productivity gains. However, you can also reuse security vulnerabilities or violate licenses without realizing it.

The WhiteSource Bolt extension for build makes it easy to find out whether you are using vulnerable components. After installing it in your account, add it to your build definition and queue a new build. You’ll get a report like the following. In the table under the summary, you will see a list of components with issues and the recommended way to address those issues.

whitesource bolt report

If you have Visual Studio Enterprise, you get 6 months of WhiteSource Bolt for one team project included with your subscription (redeem the code from your benefits page or see this page for VS subscribers for more detailed instructions).

Have a look at the full list of new features by checking out the release notes for March 8th and March 29th.

Happy coding!

DirectQuery in SQL Server 2016 Analysis Services whitepaper


I am excited to announce the availability of a new whitepaper called “DirectQuery in SQL Server 2016 Analysis Services”. This whitepaper, written by Marco Russo and Alberto Ferrari, will take your understanding and knowledge of DirectQuery to the next level so you can make the right decisions in your next project. Although the whitepaper is written for SQL Server Analysis Services, many of the concepts are shared with Power BI.

A small summary of the whitepaper:

DirectQuery transforms the Microsoft SQL Server Analysis Services Tabular model into a metadata layer on top of an external database. For SQL Server 2016, DirectQuery was redesigned for dramatically improved speed and performance; however, it is also now more complex to understand and implement. There are many tradeoffs to consider when deciding when to use DirectQuery versus in-memory mode (VertiPaq). Consider using DirectQuery if you have either a small database that is updated frequently or a large database that would not fit in memory.

Download the whitepaper here.

What’s new in Office 365 Groups for April 2017


With more than 85 million monthly Office 365 users, there’s no such thing as a typical customer. That’s why we built Office to embrace the diverse needs of the modern workplace by giving teams their choice of tools. Even within a single organization, different teams often have different demands for the productivity tools they use every day. What’s unique about Office 365 is the ability to deliver tools that meet these diverse needs—all on a single, manageable platform.

Supporting these teams is Office 365 Groups, a membership service leveraged by millions of users, which helps teams collaborate in their app of choice, including: Outlook, SharePoint, Skype for Business, Planner, Yammer, OneNote and Microsoft Teams. Office 365 Groups helps to structure, format and store information in a way that is accessible across different applications, but remains secure and easily manageable.

Enhancements to help admins manage groups

A key benefit of Office 365 Groups is that any user in your organization can create a group and start collaborating with others in seconds. Self-service creation is great for users, but we know IT admins need to be able to easily manage groups, gain insight into their use, control their directories and ensure compliance of group data. Today, we are announcing new enhancements for administering Office 365 Groups to support these needs:

  • Restore deleted groups—If you deleted an Office 365 group, it’s now retained by default for a period of 30 days. Within that period, you can restore the group and its associated apps and data via a new PowerShell cmdlet.
  • Retention policies—Manage group content produced by setting up retention policies to keep what you want and get rid of what you don’t need. Admins can now create Office 365 Groups retention policies that apply to the group’s shared inbox and files in one step using the Office 365 Security & Compliance Center.
  • Label management—With labels, you can classify Office 365 Groups emails and documents across your organization for governance, and enforce retention rules based on that classification.

This adds to our broad set of group management tools recently rolled out to Office 365 customers:

  • Guest access—Guest access in Office 365 Groups enables you and your team to collaborate with people from outside your organization by granting them access to group conversations, files, calendar invitations and the group notebook.
  • Upgrade Distribution Groups to Office 365 Groups—The Exchange Admin Center now offers an option to upgrade eligible Distribution Groups to Office 365 Groups with one click.
  • Data classification*—You can create a customizable data classification system for Office 365 Groups, such as unclassified, corporate confidential or top secret.
  • Usage guidelines*—You can define usage guidelines for Office 365 Groups—to educate your users about best practices that help keep their groups effective, and educate them on internal content policies.
  • Azure AD Connect*— Enables group writeback to your Active Directory to support on-premises Exchange mailboxes. See “Configure Office 365 Groups with on-premises Exchange” for more information.
  • Dynamic membership*—Admins can define groups with rule-based memberships using the Azure Management Portal or via PowerShell. Group membership is usually updated within minutes as users’ properties change. This allows easy management of larger groups or the creation of groups that always reflect the organization’s structure.
  • Hidden membership—If you want group membership to be confidential (for example, if the members are students), you can hide the Office 365 group members from users who aren’t members of the group.
  • Creation policies—There may be some people in your organization that you don’t want to be able to create new groups. There are several techniques for managing creation permissions in your directory.
  • Office 365 Groups activity report—These reports include group properties, messages received and group mailbox storage over time. Note you can also leverage the SharePoint site usage report to track groups’ file storage.

A look at upcoming features

Because Office 365 is a subscription service, we’re able to continue improving the admin capabilities based on customer feedback. Here’s a look at some of the enhancements on our Roadmap for the next three months:

  • Expiry policy*—Soon, you will be able to set a policy that automatically deletes a group and all its associated apps after a specific period. The group owner(s) will receive an email notification prior to the expiration date, and they will be able to extend the expiration date if the group is still in use. Once the expiration date is reached, the group will be soft deleted for 30 days (and hence can be restored by an administrator if needed).
  • Azure AD naming policy*—Admins will be able to configure a policy for appending text to the beginning or end of a group’s name and email address no matter where the group is created, such as Outlook, Planner, Power BI, etc. Admins will be able to configure a list of specific blocked words that can’t be used in group names and rely on the native list of thousands of blocked words to keep their directories clean.
  • Default classification and classification description—Will enable admins to set default Office 365 Groups classification at the tenant level using PowerShell cmdlets. In addition, admins will be able to provide a description for each of the defined classifications.
  • Classification is available when creating or modifying a group across apps—Selecting a group classification will be available when creating or editing a group across the following Office 365 applications: Outlook, SharePoint, Planner, Yammer and StaffHub.

Learn more

See this recent presentation from Ignite Australia to learn more about Office 365 Groups, and join our Ask Us Anything session on the Microsoft Tech Community on April 13, 2017 at 9 a.m. PDT (UTC-7) to discuss these recent administration updates.

Get started with the Office 365 Groups online resources today.

—Christophe Fiessinger, @cfiessinger, senior program manager for the Office 365 Groups team

*Azure Active Directory Premium is required.



Five reasons to run SQL Server 2016 on Windows Server 2016 – No. 3: database uptime and reliability


This is the third post in a five-part blog series. Keep an eye out for upcoming posts and catch up on the first and second in the series.

In addition, join us for Microsoft Data Amp on April 19 at 8 a.m. PT. The online event will showcase how data is the nexus between application innovation and artificial intelligence. You’ll learn how data and analytics powered by the most trusted and intelligent cloud can help companies differentiate and out-innovate their competition. Microsoft Data Amp—where data gets to work.

When does 2 + 2 = 5? When two teams work hard to deliver great products individually, but also work together to make the combination more than the sum of their parts. Windows Server 2016 and SQL Server 2016 are prime examples. The development teams have collaborated closely to ensure that the very best experience for data professionals emerges when you take advantage of the synergies built into the Windows Server OS and the SQL Server data platform. In this post, we’ll share how the teams have worked together to deliver advanced functionality to improve database uptime and reliability, including effective disaster recovery across sites and domains.

Always On Availability Groups: Enhanced capabilities supporting new scenarios

Always On Availability Groups have been at the center of SQL Server availability since the 2012 release. Availability Groups establish a relationship between a set or group of databases and replicas of that group of databases on one or more replicas. This means all the databases in the group can move as a unit, eliminating the need for complex scripting solutions to do this task.

Up to now, with Windows Server Failover Cluster solutions, all nodes in the Availability Group had to reside in the same Active Directory domain. However, many organizations have multiple domains that can’t be merged, and they want to span an Availability Group across such domains. In other situations, organizations may have no Active Directory domains at all, yet still want to host disaster recovery replicas.

To give these organizations a solution, the SQL Server and Windows Server teams delivered Windows Server 2016 Failover Clusters (WSFC). Now, all nodes in a cluster no longer need to reside in the same domain—and indeed the nodes are no longer required to be in any domain at all. Instead, you can form a WSFC cluster with machines that are in workgroups.

SQL Server 2016 is able to deploy flexible Always On Availability Groups in environments with:

  • All nodes in a single domain
  • Nodes in multiple domains with full trust
  • Nodes in multiple domains with no trust
  • Nodes in no domain at all

With SQL Server 2016 and Windows Server 2016, Always On Availability Groups can include up to eight readable secondaries and can span multi-domain clusters. In addition, Active Directory authentication is no longer required. All this innovation opens up new scenarios and removes previous blocks that prevented migration from the deprecated Database Mirroring technology to Always On Availability Groups. (For details, see “Enhanced Always On Availability Groups in SQL Server 2016.” Click here for a video demo.)

Hybrid Backup and Stretch Database provide online cold data availability in Azure

SQL Server 2016 and Windows Server 2016 are architected to work smoothly with the Microsoft Azure cloud in a hybrid environment. Microsoft hybrid cloud technology provides a consistent set of tools and processes between on-premises and cloud-based environments. This means that SQL Server 2016 is designed to work in a hybrid cloud environment in which data and services reside in various locations. You get faster hybrid backups and disaster recovery that lets you back up and restore on-premises databases to Azure and place SQL Server Always On secondaries in Azure. The figures below show how Stretch Database works.

stretch-databasestretch-database2

With this flexibility come new ways to save money and address business needs. For example, storing data is a critical business requirement that can be very expensive. To reduce this cost, SQL Server 2016 introduced Stretch Database. It allows production databases to offload older (cold) data to the Microsoft Azure cloud without losing access to the data. Many enterprises need reasonably quick access to their cold data for compliance reasons, and they can now push that data to the cloud to save money on storage costs while still having ready access for compliance audits. (Blog 5 in this series will discuss SQL Server running in a Windows Server infrastructure-as-a-service virtual machine on Azure.)

This means you no longer need to rely on extremely expensive dedicated solutions from storage vendors. In SQL Server 2016, Stretch Database lets you keep as much data as you need for as long as you need, without risking business service level agreements or the high cost of traditional storage. Database administrators need only to enable the database for stretch, and the endless storage and compute capacity of Azure ensures that your data is always online.

In addition, with SQL Server Backup to URL, you can easily back up directly to Microsoft Azure Blob Storage. You no longer need to manage hardware for backups, and you get the benefit of storing your backups in flexible, reliable, and virtually limitless cloud storage. (For details, see “SQL Server 2016 cloud backup and restore enhancements.”)

Storage Replica delivers inexpensive high availability and disaster recovery

Storage Replica is a new feature in Windows Server 2016 that offers new disaster recovery and preparedness capabilities. For the first time, Windows Server delivers the ability to synchronously protect data on different racks, floors, buildings, campuses, counties, and cities. If a disaster strikes, all data will be at a safe location. Before a disaster strikes, Storage Replica lets you switch workloads to safe locations if you have a few moments warning—again, with no data loss. (Read about how customer Danske Fragtmaend takes advantage of Storage Replica for its zero-data-loss SQL Server failover strategy.)

Storage Replica enables synchronous and asynchronous replication of volumes between servers or clusters. It helps you take more efficient advantage of multiple datacenters. When you stretch or replicate clusters, you can run workloads in multiple datacenters so that nearby users and applications can get quicker data access. In addition, you can better distribute load and compute resources. Most important, you can implement this built-in functionality on commodity hardware and use it with emerging technologies such as Flash and SSD (as Danske Fragtmaend did) to build cost-effective, high-performance storage solutions that can work with existing SAN/NAS implementations—or even replace dedicated SAN/NAS solutions at a fraction of the cost.

Visit the website for more details and demos on Storage Replica.

Rolling, in-place upgrades and less downtime

Customers often tell us they want to use the latest releases of SQL Server and Windows Server, but they need the upgrade process to be less time-consuming and complex. Now they can take advantage of rolling, in-place upgrades from previous versions to SQL Server 2016 and Windows Server 2016—while dramatically minimizing downtime.

Windows Server 2016 Cluster OS Rolling Upgrade lets you upgrade the operating system of the cluster nodes from Windows Server 2012 R2 to Windows Server 2016 without stopping the Hyper-V or the Scale-Out File Server workloads. Not only can you upgrade the OS in place, but Cluster OS Rolling Upgrade works for any cluster workload, including SQL Server 2016.

For SQL Server customers, this is important because you want to move the base OS without having to reinstall and reconfigure SQL Server. Now, in a rolling approach, you can move a cluster node, perform an in-place upgrade and do a clean install while other databases are being serviced by other nodes. The in-place upgrade preserves SQL Server backup and restore history, preserves permissions and group settings, and saves about 20‒30 minutes of upgrade time per node in the cluster. You can achieve this with minimal or no interruptions to the workload that’s running on the cluster, so you can upgrade the cluster in place. With a Hyper-V or Scale-Out File Server Workload, there’s zero downtime, which means you don’t need to buy new hardware. (For details, see Cluster operating system rolling upgrade. To see a video demonstration, watch Introducing Cluster OS Rolling Upgrades in Windows Server 2016.)

Better together adds up to the best database reliability at a great price

For mission-critical workloads, you can’t settle for anything less than the best—and most cost-effective—data platform running on the OS that has built-in synergy to ensure database uptime and reliability with advanced disaster recovery across domains and sites. Without spending vast amounts of your budget on third-party storage solutions, you can get the functionality you need built into SQL Server 2016 and Windows Server 2016.

Ready to give it a try?

For more info, check out this summary of five reasons to run SQL Server 2016 with Windows Server 2016. If you missed the first two blogs in the series, catch up using the links at the top of this post.

Announcing our great lineup of featured speakers for the Microsoft Data Insights Summit


We recently shared our full session catalog for this year’s Microsoft Data Insights Summit. This week we want to highlight some of the featured speakers we’ll have at the event. The Data Insights Summit includes an amazing mix of BI experts, product partners, community members, and Microsoft product engineers – all at the conference to help you learn how to make the most of your data. We’re thrilled to spotlight Amanda Cofsky, Brian Jones, Chris Webb, Danielle Dean, Jennifer Stirrup, Justyna Lucznik, Kim Manis, Marco Russo, and Rob Collie.

Amanda is a program manager on the Power BI Desktop team, and her focus is on enabling analysts to create beautiful and insightful visualizations. You may already know her from the Power BI blogs and YouTube videos where she walks through how to use all the new and exciting features in Power BI.

Brian runs the Excel Program Management team and has been in the Office engineering organization for more than 17 years. He’s worked on Word, VBA, file formats, Office and SharePoint extensibility, Office web add-ins, Forms, and Access. Brian is passionate about building tools that help people solve problems, and sees Excel as the premier application in the Office suite for people who want to gain deeper insights and get things done.

Chris is an independent consultant and trainer specializing in Microsoft Power BI and SQL Server Analysis Services. He is the author of Power Query for Power BI and Excel and a co-author of SQL Server Analysis Services 2012: The BISM Tabular Model, Expert Cube Development with SQL Server 2008 Analysis Services, and MDX Solutions with Microsoft SQL Server Analysis Services 2005 and Hyperion Essbase. He also blogs at http://blog.crossjoin.co.uk.

Danielle is a Senior Data Scientist Lead at Microsoft Corp. in the Algorithms and Data Science Group within the Cloud and Enterprise Division. She currently leads an international team of data scientists and engineers to build predictive analytics and machine learning solutions for external companies utilizing the Cortana Intelligence Suite. Before working at Microsoft, Danielle was a data scientist at Nokia, where she produced business value and insights from big data, through data mining & statistical modeling on data-driven projects that impacted a range of businesses, products and initiatives. Danielle completed her Ph.D. in quantitative psychology with a concentration in biostatistics at the University of North Carolina at Chapel Hill in 2015, where she studied the application of multi-level event history models to understand the timing and processes leading to events between dyads within social networks.

Jennifer, recently named one of the top 10 most influential Business Intelligence female experts in the world by Solutions Review, is a Microsoft Data Platform MVP, PASS Director-At-Large, and a well-known Business Intelligence and Data Visualization expert, author, data strategist, and community advocate. She has also been peer-recognized as one of the top 100 most influential global tweeters on big data and analytics topics. As the sole owner of a boutique Business Intelligence, Business Analytics, and Data Science consultancy, Jennifer has delivered varied projects, which include leading organizations such as the National Health Service trust and private companies to the cloud, while also spearheading a Data Science program from soup to nuts for a government department.

Justyna is a Program Manager in the Business Applications Platform Group. Justyna works on developing and evangelizing solution templates which bring together the Microsoft Azure stack, as well as Power BI and PowerApps. Her most recent focus has been the Twitter and Bing News solution templates for brand and campaign managers. Prior to joining the product group, Justyna worked in the Microsoft Consulting Services in the United Kingdom. She created machine learning models for customers in the financial and digital marketing sectors, utilizing Power BI, Azure Machine Learning and R.

Kim is Group Program Manager for the Power BI Desktop team. Her team is focused on unlocking new capabilities for data analysts to model, explore and visualize their data. Before her time on the Power BI team, Kim worked on all sorts of products, ranging from productivity software to social networks and online retail.

Marco is a Business Intelligence consultant and mentor. He has worked with Analysis Services since 1999, and written several books about Power Pivot, Power BI, Analysis Services Tabular, and the DAX language. With Alberto Ferrari, he writes the content published on www.sqlbi.com, mentoring companies’ users about the new Microsoft BI technologies. Marco is also a speaker at international conferences such as Microsoft Ignite, PASS Summit, PASS BA Conference, and SQLBits.

Rob spent the first 14 years of his career at Microsoft as an engineering leader on Excel, Bing, and SQL. Subsequently he founded PowerPivotPro, the world’s first consulting company focused solely on Microsoft’s self-service BI and analytics platform (Power BI and Power Pivot). His company has led hundreds of organizations in their adoption of those tools, unlocking truly revolutionary capabilities and ROI across a wide spectrum of industries. Rob believes that this new wave of software represents the first truly impactful upgrade in the world of data since the advent of spreadsheets in the 1980s – bigger than “BI,” “Big Data,” and every other buzz phrase combined. He tries to bring a down-to-earth and honest perspective to every topic, informed by a rich range of experience over his 20-year career in data and software. Rob is the bestselling author of books such as DAX Formulas for Power Pivot and Power Pivot and Power BI, a popular speaker at both technical and financial conferences, and sincerely wants you to see, for yourself, the life-changing benefits that await you.

We hope you are as excited about our strong lineup of featured speakers as we are! Join us at the Microsoft Data Insights Summit and meet Amanda, Brian, Chris, Danielle, Jennifer, Justyna, Kim, Marco, and Rob!

April 2017 updates for Get & Transform in Excel 2016 and the Power Query add-in


Excel 2016 includes a powerful set of features based on the Power Query technology, which provides fast, easy data gathering and shaping capabilities and can be accessed through the Get & Transform section on the Data ribbon.

Today, we are pleased to announce five new data transformation and connectivity features that have been requested by many customers.

These updates are available as part of an Office 365 subscription. If you are an Office 365 subscriber, find out how to get these latest updates. If you have Excel 2010 or Excel 2013, you can also take advantage of these updates by downloading the latest Power Query for Excel add-in.

These updates include the following new or improved data connectivity and transformation features:

  • Support for the same file extensions in Text and CSV connectors.
  • ODBC and OLE DB connectors—support for Select Related Tables.
  • Enhanced Folder connector—support for “Combine” from the Data Preview dialog.
  • New Change Type Using Locale option in Column Type drop-down menu inside Query Editor.
  • New Insert Step After option in Steps pane inside Query Editor.

Support for the same file extensions in Text and CSV connectors

With this update, we revised the list of supported file extensions in the From Text and From CSV connectors. Now you can browse and select any text (*.txt), comma-separated values (*.csv) or formatted text space delimited (*.prn) file as the first step of the import flow for both connectors. Alternatively, you can switch to the All Files (*.*) filter and import data from any other file type that isn't listed.

ODBC and OLE DB connectors—support for Select Related Tables

When using the ODBC and OLE DB connectors, we enabled the Select Related Tables button in the Navigator dialog. This option—already available for other relational data sources—allows users to easily select tables that are directly related to the set of already selected tables in the Navigator dialog.

Enhanced Folder connector—support for Combine Binaries from the Data Preview dialog

In the January 2017 update, we shipped a set of enhancements to the Combine Binaries experience. You can learn more about those enhancements in this article.

This month, we made it easier for customers to access the Combine Binaries feature by allowing them to combine multiple files directly from the folder Data Preview dialog within the Get Data flow, without having to go into the Query Editor.

Note that multiple options are exposed (Combine and Combine & Load), allowing customers to further refine their data before loading it into the worksheet or Data Model.

New Change Type Using Locale option in Column Type drop-down menu inside Query Editor

In the Query Editor, it is possible to see and modify column types by using the Column Type drop-down menu in the preview area.

With this release, we added the Change Type Using Locale option to this drop-down menu (previously available by right-clicking the column header and then selecting Change Type>Using Locale). This option allows you to specify the desired column type and locale to use for the conversion, which affects how text values are recognized and converted to other data types, such as dates, numbers, etc.
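
Under the covers, this option corresponds to the optional culture argument of the Table.TransformColumnTypes function in the Power Query formula language. As a rough illustration (the Source step and OrderDate column are made-up names for this example), converting a text column of UK-formatted dates looks like this in the formula bar:

  // Parse "OrderDate" as dates using United Kingdom conventions, so a value
  // such as "13/04/2017" is read as 13 April rather than misinterpreted.
  = Table.TransformColumnTypes(Source, {{"OrderDate", type date}}, "en-GB")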

New Insert Step After option in the Steps pane inside Query Editor

With this update, we added a new context menu option in the Query Editor window to easily insert new steps in existing queries. The Insert Step After option lets users insert a new custom step right after the currently selected step, which can be the final step or any previous step within the query.

Learn more

—The Excel team

New Imagery for 41 Cities in Spain

Bing Maps works with a number of partners to provide fresh and beautiful imagery. This week, in partnership with Airbus, we're happy to release new imagery for 41 cities in Spain, covering a total of 6,000 square kilometers.

Below are a few examples of the stunning imagery of Spain:

Albacete

Albacete, the home of the Albacete Fair, is located in southeast central Spain. Celebrated over 10 days in September, the fair is known for its festive atmosphere and activities including bull fighting, concerts, theatre, parades, sporting competitions and more. The permanent fairground is known as “the circles.” It is the center of the festivities and is made up of concentric circles where vendors sell food, drink, and crafts.

Albacete Spain

Cadiz

Cadiz, located in southwestern Spain, is one of the oldest cities in Western Europe and the oldest continuously inhabited city in Spain. Home to the famous Carnival of Cadiz, the city transforms into a massive street party during the 11-day affair, teeming with partygoers in costume and fancy dress.

Cadiz Spain

Estepona

Estepona is located on the world-renowned Costa del Sol, with beaches stretching along 21 kilometers of coastline. As a popular holiday destination, Estepona is home to several golf courses, resorts, leisure parks, and beautiful beaches.

Estepona Spain

Explore even more of Spain with Bing Maps.

- Bing Maps Team
