
Search for up to two years of data with OMS Log Analytics


Summary: Modify the retention value to save data for up to two years in Log Analytics.

Good morning everyone, Richard Rundle here, and today I want to talk about how you can choose to keep your data for up to two years in Log Analytics.

Our paid pricing plans include 30 days of data retention, but you can now increase that retention to be up to 730 days (two years)!

We’re still in the process of building the user interface (UI) and adding support for setting this through PowerShell, but we know many of you are eager to keep your data for longer. This post shows how to use Azure Resource Explorer to make the change.

Change data retention

You need to be on the Standalone or OMS pricing plan to be able to change your retention. If you are on the Free pricing plan, your retention is fixed at 30 days. If you have a workspace created prior to October 1, your retention is fixed at 30 days if you are on the Standard plan, or 365 days if you are on the Premium plan.

To move from the default retention of 30 days to a longer retention, use the following steps (a scripted alternative is sketched after the steps):

  1. Open https://resources.azure.com and log in with the credentials you use to log in to portal.azure.com.
  2. To expand subscriptions, click the + (plus) symbol in the left pane.

Expand subscriptions

  3. To expand the subscription, click the + symbol.
  4. To expand resourceGroups, click the + symbol.
  5. Expand the resource group that contains your Log Analytics workspace. In the Azure portal, when you select your Log Analytics workspace, the properties page shows you the name of the resource group.
  6. Expand Microsoft.OperationalInsights. If you don't see Microsoft.OperationalInsights, there are no workspaces in this resource group. Refer to step 5 to find which resource group your workspace is in.
  7. Click the name of your workspace. If you don't see your workspace, make sure that you selected the correct resource group. Refer to step 5 to find which resource group your workspace is in.

Finding your resource group

  8. To change to Read/Write mode, click Read/Write at the top of the page.
  9. To change to Edit mode, click the Edit button. The GET button changes to PUT, and the blue button changes to Cancel.
  10. To change the value for retentionInDays, select the text and type a new value.

Changing the retentionInDays value

  11. Click the green PUT button to make the change.

The Put button

  12. Verify that the new retention value is shown.

The new retention value
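If you prefer to script the change instead of using Resource Explorer, the same update can be made with a PUT against the Azure Resource Manager REST API. The following Python sketch assumes you already have an Azure AD bearer token; the subscription, resource group, workspace name, and the api-version value are placeholders to replace with your own (the exact api-version to use is an assumption here).

import requests

subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
workspace_name = "<workspace-name>"
token = "<azure-ad-bearer-token>"
api_version = "2015-11-01-preview"  # assumed; check the current Log Analytics API version

url = (
    "https://management.azure.com"
    f"/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}"
    "/providers/Microsoft.OperationalInsights"
    f"/workspaces/{workspace_name}?api-version={api_version}"
)
headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

# Read the current workspace document, change only retentionInDays,
# and PUT the document back - the same flow Resource Explorer performs.
workspace = requests.get(url, headers=headers).json()
workspace["properties"]["retentionInDays"] = 730  # up to two years on paid plans

response = requests.put(url, headers=headers, json=workspace)
response.raise_for_status()
print(response.json()["properties"]["retentionInDays"])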

Troubleshooting

There are two errors that you might see.

The value provided for retention is invalid

This error means that you've tried to change the retention on a Free, Standard, or Premium plan. You can only change the retention if the plan is Standalone or OMS. (The OMS tier shows as "pernode" in the UI.)


You can’t do this operation because you are in ‘Read Only’ mode.

This will occur if you skip Step 8.

The "You can't do this operation because you are in 'Read Only' mode." error

If you are creating a new workspace by using Azure Resource Manager templates, see Manage Log Analytics using Azure Resource Manager templates to learn how to specify the length of time that you’d like to retain data.

That is all I have for you today. I would like to hear any feedback you have.

Please feel free to send me an e-mail at Richard.Rundle@microsoft.com with questions, comments, and suggestions.

Richard Rundle
Microsoft Operations Management Team


Announcing integration of Azure Backup into VM management blade


Today, we are excited to announce the ability to seamlessly back up virtual machines in Azure from the VM management blade using Azure Backup. Azure Backup already supports backup of classic and Resource Manager VMs (Windows or Linux) using a Recovery Services vault, running on standard storage or on Premium Storage. We also announced backup of VMs encrypted using ADE (Azure Disk Encryption) a couple of weeks ago. With this announcement, we are bringing the backup experience closer to the VM management experience, giving you the ability to back up VMs directly from the VM management blade, so that Azure now offers a backup experience natively integrated into VM management.

Azure Virtual Machines provide a great value proposition for the different kinds of workloads that want to harness the power of the cloud, with a range of VM sizes from basic configurations to powerful GPU machines to meet customer demands. Backing up VMs to protect against accidental deletions and corruption resulting from human error is a critical capability for enterprise customers, as well as for small and medium-scale customers deploying their production workloads in the cloud. This integration makes meeting that requirement seamless with a simple two-step backup configuration.

Value proposition

Azure Backup's cloud-first approach to backup puts the following cloud promises into action:

  • Freedom from infrastructure: No need to deploy any infrastructure to back up VMs
  • Cloud Economics: Customers can leverage highly available, scalable and resilient backup service at a cost-effective price
  • Infinite scale: Customers can protect multiple VMs in one go or one at a time using a Recovery Services vault
  • Pay as you go: Simple Backup pricing makes it easy to protect VMs and pay for what you use

Features

With the integration of Azure Backup into the VM management blade, customers will be able to perform the following operations directly from the VM management blade:

  • Configure Backup using simple two-step configuration.
  • Trigger an on-demand backup for backup configured VMs
  • Restore a complete VM, all disks, or files and folders inside the VM (in preview for Windows VMs) from backup data
  • View recovery points corresponding to configured backup schedule

Get started

To get started, select a virtual machine from the Virtual machines list view. Select Backup in the Settings menu.

  • Create or select a Recovery Services vault: A Recovery Services vault stores backups separately from the customer storage account to guard against accidental deletions.
  • Create or select a Backup Policy: A backup policy specifies the schedule on which backups run and how long to retain backup data.

By default, a vault and a policy are selected to make this experience even smoother. Customers have the flexibility to customize these to suit their needs.

Backup directly from VM blade

Related links and additional content

Azure Backup security capabilities for protecting cloud backups

This post talks about new security features provided by Azure Backup.

Dive into Red Hat OpenShift Container Platform on Microsoft Azure


Join Microsoft in a joint webinar with Red Hat to explore how OpenShift can help you go to market faster.

Red Hat CCSP and Cloud Evangelist Nicholas Gerasimatos and Microsoft Azure Principal PM Boris Baryshnikov will demo how to deploy OpenShift in Azure. They’ll break down capabilities like source to image and running and deploying containerized applications so that you’re ready to get started right away.

This is a great way to learn about building, deploying, and managing containerized services and applications with Red Hat OpenShift Container Platform on Microsoft Azure. You'll get an overview of how OpenShift can help provide a secure, flexible, and easy-to-manage application infrastructure.

Plus, if you attend the webinar live on November 17, you can participate in a live Q&A with Nicholas and Boris to get answers to your specific questions. Register today!

Training Deep Neural Networks on ImageNet Using Microsoft R Server and Azure GPU VMs


This post is by Miguel Fierro, Data Scientist, Max Kaznady, Data Scientist, Richin Jain, Solution Architect, Tao Wu, Principal Data Scientist Manager, and Andreas Argyriou, Data Scientist, at Microsoft.

Deep learning has made a lot of strides in the computer vision subdomain of image classification in the past few years. This success has opened up many use cases and opportunities in which Deep Neural Networks (DNNs) similar to the ones used in computer vision can generate significant business value across a wide range of applications, including security (e.g. identifying luggage at airports), traffic management (identifying cars), brand tracking (tracking how many times the brand logo on a sportsperson's apparel appears on TV), intelligent vehicles (classifying traffic signals or pedestrians), and many more.

These advances are possible for two reasons: DNNs are able to classify objects in images with high accuracy (more than 90%, as we will show in this post), and we have access to cloud resources such as Azure infrastructure, which allow us to operationalize image classification in a highly secure, scalable, and efficient way.

This is the third post in our cloud deep learning blog series featuring Microsoft R Server, MXNet and Azure cloud computing components. In our first post, we showed how to set up cloud deep learning using Azure N-series GPU VMs and Microsoft R Server. In the second post, we demonstrated an end-to-end cloud deep learning workflow and parallel DNN scoring using HDInsight Spark and Azure Data Lake Store.

In this latest post, we demonstrate how to utilize the computing power offered by multiple GPUs in Azure N-Series VMs to train state-of-the-art deep learning models on complex machine learning tasks. We show training and scoring using deep residual learning, which has surpassed human performance when it comes to recognizing images in databases such as ImageNet.

ImageNet Computer Vision Challenge

The ImageNet Large Scale Visual Recognition Challenge (ILSVRC), which began in 2010, has become one of the most important benchmarks for computer vision research in recent years. The competition has three categories:

  1. Image classification: Identification of object categories present in an image.
  2. Single object localization: Identifying a list of objects present in an image and drawing a bounding box showing the location of one instance of each object in the image.
  3. Object detection: Similar to single object localization above, but with the added complexity of having to draw bounding boxes around every instance of each object.

In this post, we focus on the first task of image classification.


Figure 1: Some of the classes annotated in the ImageNet dataset (images from Wikipedia).

The ImageNet image dataset used in the ILSVRC competition contains objects annotated according to the WordNet hierarchy (Figure 1). Each object in WordNet is described by a word or group of words called a "synonym set" or "synset". For example: "n02123045 tabby, tabby cat", where "n02123045" is the class name and "tabby, tabby cat" is the description. ILSVRC uses a subset of this dataset; specifically, we are using the ILSVRC subset from 2012, which contains 1000 classes of images.
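As a small illustration of this label format, the class name and the human-readable description can be separated at the first space. The snippet below is only a parsing sketch based on the example above.

# Sketch: split an ImageNet synset label into the WordNet class name and
# the human-readable description (format taken from the example in the text).
def parse_synset(label):
    class_name, _, description = label.partition(" ")
    return class_name, description

print(parse_synset("n02123045 tabby, tabby cat"))
# ('n02123045', 'tabby, tabby cat')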

The ImageNet data for the competition is further split into the following three datasets:

  1. Training set with 1000 classes and roughly 1200 images for each class (up to 1300 images per class), which is used to train the network (1.2 million images in total).
  2. Validation set of 50,000 images, sampled randomly from each class. It is used as the ground truth to evaluate the accuracy of each training epoch.
  3. Test set of 100,000 images selected randomly from each class. It is used to evaluate the accuracy in the competition and to rank the participants.

Training the Microsoft Research Residual DNN with Microsoft R Server and Four GPUs

The Residual Network (ResNet) architecture, introduced in 2015 by Microsoft Research (MSR), has made history as the first DNN to surpass human performance in the complex task of image classification. DNNs in the computer vision domain have been extremely successful, because they can learn hierarchical representations of an image: each layer learns successively more complex features. When stacked together, the layers are capable of very complex image identification tasks. Examples of simpler features include edges and regions of an image, whereas a good example of a more advanced feature which a DNN can learn would be one part of an object, e.g. the wheel of a car.

A DNN with a higher number of layers can learn more complex relationships but also becomes more difficult to train using optimization routines such as Stochastic Gradient Descent (SGD). The ResNet architecture introduces the concept of residual units which allow very deep DNNs to be trained: SGD attains successively better performance over time and eventually reaches convergence.



Figure 2: Schema of a residual unit.

The idea of residual units is presented in Figure 2. A residual unit has one or more convolution layers, labeled as F(x), and adds to these the original image (or the output of the previous hidden layer), giving F(x) + x. The output of a residual unit is therefore a hierarchical representation of the image, produced by the convolutions, plus the image itself. The ResNet authors believe that "it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping". The authors presented a DNN of 152 layers in 2015, the deepest architecture to date at the time.
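To make the idea concrete, here is a minimal sketch of a residual unit using MXNet's Python symbol API. It is simplified relative to the real ResNet building blocks (which also handle strides and channel changes in the shortcut path); it only illustrates the F(x) + x structure of Figure 2.

import mxnet as mx

def residual_unit(x, num_filter, name):
    # F(x): two 3x3 convolutions with batch normalization and ReLU
    f = mx.sym.Convolution(x, num_filter=num_filter, kernel=(3, 3), pad=(1, 1), name=name + "_conv1")
    f = mx.sym.BatchNorm(f, name=name + "_bn1")
    f = mx.sym.Activation(f, act_type="relu", name=name + "_relu1")
    f = mx.sym.Convolution(f, num_filter=num_filter, kernel=(3, 3), pad=(1, 1), name=name + "_conv2")
    f = mx.sym.BatchNorm(f, name=name + "_bn2")
    # F(x) + x: add the input (or the previous layer's output) back to the branch
    return mx.sym.Activation(f + x, act_type="relu", name=name + "_out")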

Training an 18-layer ResNet with 4 GPUs

We showcase the training of ResNet in one of our Azure N-series GPU VMs (NC24), which has four NVIDIA Tesla K80 GPUs. We implemented our own 18-layer ResNet network (ResNet-18 for short) using the MXNet library in Microsoft R Server – the full implementation is available here. We chose the smallest 18-layer architecture for demo purposes as it minimizes the training time (a full 152-layer architecture would take roughly 3 weeks to train). ResNet-18 is also the smallest architecture which can be trained to result in decent accuracy. There are other layer combinations which can be created: 34, 50, 101, 152, 200 and 269. The implemented architecture is based on ResNet version 2, where the authors reported a 200-layer network for ImageNet and a 1001-layer network for the CIFAR dataset.

The MXNet library natively supports workload partitioning and parallel computation across the 4 GPUs of our Azure N-series GPU VM. The images are supplied to the training process via an iterator, which allows the loading of a batch of images in memory. The number of images in the batch is called batch size. The neural network weight update is performed in parallel across all four GPUs. The memory consumed by the process on each GPU (Figure 3) depends on the batch size and the complexity of the network.
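The post's implementation is written for Microsoft R Server, but the same workload partitioning can be sketched with MXNet's Python API. The record file paths, optimizer settings, and the helper that builds the symbol are assumptions for illustration, not the authors' exact configuration.

import mxnet as mx

batch_size = 144  # the batch size used for Figure 3
train_iter = mx.io.ImageRecordIter(
    path_imgrec="imagenet_train.rec",  # assumed record file
    data_shape=(3, 224, 224),
    batch_size=batch_size,
    shuffle=True)
val_iter = mx.io.ImageRecordIter(
    path_imgrec="imagenet_val.rec",    # assumed record file
    data_shape=(3, 224, 224),
    batch_size=batch_size)

# The ResNet-18 symbol comes from the implementation linked above;
# build_resnet18 is a hypothetical helper standing in for that code.
resnet18_symbol = build_resnet18(num_classes=1000)

# One executor per GPU: MXNet splits each batch of 144 across the four K80s
# and aggregates the gradients for the weight update.
module = mx.mod.Module(symbol=resnet18_symbol,
                       context=[mx.gpu(i) for i in range(4)])
module.fit(train_iter,
           eval_data=val_iter,
           eval_metric=mx.metric.TopKAccuracy(top_k=5),
           optimizer="sgd",
           optimizer_params={"learning_rate": 0.1, "momentum": 0.9},
           num_epoch=30)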

The learning rate has a large effect on the algorithm's performance: too high and the algorithm won't converge, too low and training will take a significant amount of time. For large problems such as the one we are discussing, it is important to have a schedule for reducing the learning rate according to the epoch count. There are two methods which are commonly used (sketched after the list below):

  1. Multiply the learning rate by a factor smaller than one at the end of every epoch.
  2. Keep the learning rate constant and decrease by a factor at certain epochs.
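Both schedules can be expressed with MXNet's built-in learning rate schedulers, as in the sketch below; the step sizes and factors are arbitrary illustrative values, not the settings used for the results in this post.

import mxnet as mx

batches_per_epoch = 1_200_000 // 144  # roughly 1.2 million training images / batch size

# Method 1: multiply the learning rate by a factor smaller than one every epoch.
every_epoch = mx.lr_scheduler.FactorScheduler(step=batches_per_epoch, factor=0.94)

# Method 2: keep the learning rate constant and drop it by a factor at certain epochs.
at_epochs = mx.lr_scheduler.MultiFactorScheduler(
    step=[10 * batches_per_epoch, 20 * batches_per_epoch],  # drop at epochs 10 and 20
    factor=0.1)

# Either scheduler is passed to the optimizer, for example:
# module.fit(..., optimizer_params={"learning_rate": 0.1, "lr_scheduler": at_epochs})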


Figure 3: GPU memory consumption and utilization while training ResNet-18 on 4 GPUs with a batch size of 144.

During training, we used the top-5 accuracy (instead of the top-1 accuracy), which does not record a misclassification if the true class is among the top 5 predictions. The prediction output for each image is a vector of size 1000 with the probability of each of the 1000 classes. The top-5 results are the 5 classes with the highest probability. Top-5 accuracy was originally adopted in the ImageNet competition because of the difficulty of the task, and it is the official ranking method.
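As a reference for how the metric works, here is a minimal NumPy sketch that computes top-5 accuracy from the 1000-class probability vectors; it illustrates the metric only and is not the training code.

import numpy as np

def top5_accuracy(probs, labels):
    # probs: (N, 1000) array of class probabilities; labels: (N,) true class indices.
    # A prediction counts as correct if the true class is among the 5 highest scores.
    top5 = np.argsort(probs, axis=1)[:, -5:]
    hits = (top5 == labels[:, None]).any(axis=1)
    return hits.mean()

# Toy example with random scores for 4 images over 1000 classes.
rng = np.random.default_rng(0)
probs = rng.random((4, 1000))
labels = np.array([3, 17, 999, 42])
print(top5_accuracy(probs, labels))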

It takes roughly 3 days to train ResNet-18 for 30 epochs in Microsoft R Server on an Azure N-series NC-24 VM with four GPUs. The top-5 accuracy at epoch 30 is 95.02% on the training set and 91.97% on the validation set. Training progress is displayed in Figure 4.


Figure 4: Top-5 accuracy on the training set and validation set.

To better visualize the results, we created a top-5 accuracy confusion matrix (Figure 5). The matrix size is 1000 by 1000, corresponding to the 1000 classes of ImageNet. The rows correspond to the true class and the columns to the predicted class. The matrix is constructed as a histogram of the predictions: for each image, we record the pair (true label, prediction k) for the k=1,…,5 predictions. Bright green represents a high value in the histogram and dark green represents a low value. Since the validation set consists of 50,000 images and each contributes 5 predictions, the histogram is built from 250,000 pairs. We used the visualization library datashader and the trained ResNet-18 model at epoch 30.
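The construction of that histogram can be sketched in a few lines of NumPy; datashader is then only used to render the resulting 1000-by-1000 matrix. This is an illustrative reconstruction, not the exact code used for Figure 5.

import numpy as np

def top5_confusion(probs, labels, num_classes=1000):
    # Each image contributes 5 (true label, predicted label) pairs, so the
    # 50,000 validation images yield 250,000 increments over a 1000 x 1000 matrix.
    matrix = np.zeros((num_classes, num_classes), dtype=np.int64)
    top5 = np.argsort(probs, axis=1)[:, -5:]  # the 5 best classes per image
    for true_label, preds in zip(labels, top5):
        for pred in preds:
            matrix[true_label, pred] += 1  # row = true class, column = prediction
    return matrix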



Figure 5: Confusion matrix of top-5 accuracy in ResNet-18 using the validation set at epoch 30. The rows represent the real class and the columns the predicted class. A bright green represents a high density of points, while a dark green represents a low density.

The diagonal line in the confusion matrix represents a perfect predictor with a zero misclassification rate: the predicted class exactly matches the ground truth. Its brightness shows that most images are correctly classified among the top-5 results (in fact, more than 90% of them).

In Figure 5, we observe that there are two distinct clusters of points: one corresponding to animals and another to objects. The animal group contains several clusters, while the group corresponding to objects is more scattered. This is because, in the case of animals, certain clusters represent groups from the same species (fish, lizards, birds, cats, dogs, etc.). In other words, if the true class is a golden retriever, it is better for our model to classify the image as another breed of dog than as a cat. For objects, the misclassifications are more sporadic.

We can visualize how the results change as the number of epochs increases. This is represented in Figure 6, where we create a matrix for each model from epoch 1 to 30 using the images in the validation set.



Figure 6: GIF of the evolution of the top-5 accuracy in ResNet-18 using the validation set and epochs 1 to 30.

Operationalizing Image Classification with MXNet and Microsoft R Server

We can now use the model we have just trained to identify animals and objects in images that belong to one of the 1000 classes in our dataset, which can be done with high accuracy as indicated by our results.


Figure 7: Neko looking surprised.

As an example, we can try to predict the class of our little friend Neko in Figure 7. We obtain a good-enough prediction at epoch 28:

Predicted Top-class: “n02123045 tabby, tabby cat”

Top 5 predictions:

[1] “n02123045 tabby, tabby cat”

[2] “n02124075 Egyptian cat”

[3] “n02123159 tiger cat”

[4] “n02127052 lynx, catamount”

[5] “n02125311 cougar, puma, catamount, mountain lion, painter, panther, Felis concolor”

As we can see, the prediction is very accurate, classifying Neko as a tabby cat. Not only that, but the rest of the top-5 results are cat-like animals. This result, and the analysis of the clusters in the confusion matrix of Figure 5, suggest that the network is to some degree learning the global concept of each class in the images. Clearly, it is better for the network to misclassify Neko as an Egyptian cat than as an object.
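Scoring a single image against a saved checkpoint can be sketched as follows with MXNet's Python API. The checkpoint prefix, epoch number, and the stand-in preprocessing are assumptions; the actual scoring code in the repo is written for Microsoft R Server.

import mxnet as mx
import numpy as np

# Load the assumed epoch-28 checkpoint (the prefix "resnet-18" is a placeholder).
sym, arg_params, aux_params = mx.model.load_checkpoint("resnet-18", 28)

module = mx.mod.Module(symbol=sym, context=mx.gpu(0), label_names=None)
module.bind(data_shapes=[("data", (1, 3, 224, 224))], for_training=False)
module.set_params(arg_params, aux_params)

# Stand-in for a real, preprocessed 224x224 RGB image.
img = np.random.rand(1, 3, 224, 224).astype(np.float32)
module.forward(mx.io.DataBatch(data=[mx.nd.array(img)]))
probs = module.get_outputs()[0].asnumpy()[0]

top5 = probs.argsort()[-5:][::-1]
print("Top-5 class indices:", top5)  # map these indices to synset labels for names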

All the code for ResNet training and prediction on ImageNet can be accessed via this repo.

Summary

We showed how to train ResNet-18 on the ImageNet dataset using Microsoft R Server and Azure N-series GPU VMs. You can experiment with different hyper-parameter values and even train ResNet-152 to surpass human accuracy using our Open Source implementation.

We would like to highlight once again that it is not only important to achieve a high top-1 accuracy, but also that top-5 misclassifications should be related to the true class. Our results show that the network is learning to differentiate animals from objects, and even learning the concept of species within animals.

We can easily operationalize the trained model with a framework similar to the one we described in our previous blog post. In our next post, we will focus on adjacent domains where deep learning can be applied, such as in textual data.

Miguel, Max, Richin, Tao & Andreas.

Racing ahead with collaboration, analytics and data security using Office 365


Boosting teamwork at Renault Sport Formula One Team

Today’s Microsoft Office 365 post was written by Mark Everest, IS development manager at Renault Sport Formula One Team.

Formula One races hang in the balance based on fractions of seconds. Teams do everything they can prior to race day to ensure optimal performance, relying on thousands of data points to fine-tune everything from tire pressure and suspension readings to aerodynamic devices and engine settings. During the race, drivers rely on pit crews to make further changes based on real-time information. Pit crews must move quickly and work seamlessly as a team, getting their drivers back on the race course in a matter of seconds.

We extend that “pit crew” mindset across the entirety of our business. Our goal is to operate as an agile, tightly integrated company, where our time and energy helps shave critical seconds off the race clock, rather than wasting that time with unnecessary steps or communications delays. We adopted the Microsoft Office 365 E5 suite to boost efficiencies, make the most of our data, and streamline cross-company collaboration—from geographically scattered team members perfecting race plans together to our international legal team quickly finding and assessing the relevance of discovered content.

When it comes to working together, we want our team members to stay both fully connected and fully mobile. So, we provide them with a range of flexible communication and collaboration options that keep them productive from anywhere. These are particularly important for our engineers, technicians and other employees who travel around the world. For instance, we’re looking at using Skype for Business Cloud PBX and PSTN Calling to minimize the cost and inconvenience of international calling by easily assigning local landline numbers to our traveling employees.

We’re gaining insight into our races and our business operations through Office 365. We collect and analyze billions of data points, using the intelligence we gain to improve everything from midrace adjustments to car manufacturing processes to driver training simulations. Our engineers and analysts now query and visualize that data themselves, so they get to the answers they need faster and share it in instantly understandable ways. Team members are even analyzing how they spend their time and identifying opportunities to improve productivity.

Of course, in addition to focusing on speed, pit crews work to ensure driver safety—and we definitely extend that emphasis on security to the rest of our business. Because our data lies at the heart of what we do, we have to trust that our data—especially our proprietary information—is protected against threats, which is why we’ve adopted Office 365 Advanced Threat Protection and Advanced Security Management. We’re now guarding against targeted attacks, including those related to email attachments and phishing URLs, and we have greater visibility into employees’ anomalous online activity, so that we can block it immediately and investigate.

Just as racing technology improves over time, we at Renault Sport Formula One Team are always looking for additional ways to trim waste and accelerate our progress. Time and again, we find that Office 365 has the built-in capabilities to address our needs as they change. For example, members of our Aero Department recently asked for a tool to simplify task sharing and tracking among staff when planning work on next year’s car. We looked to Office 365 and found an existing capability that helps teams organize, assign and collaborate on tasks and keeps team members informed about current status. Without having to buy a new license or develop an in-house solution, we were able to meet that department’s needs right away. I look forward to continuing to use Office 365 to support Renault Sport Formula One Team through many more races—and many more wins—in the future.

—Mark Everest

For the complete story, read the Office 365 case study.

The post Racing ahead with collaboration, analytics and data security using Office 365 appeared first on Office Blogs.

The Riverbed Field Guide for the AD Admin


Unexpected TCP resets, intermittent "Network Path Not Found" errors, and SMB dialects being downgraded: these errors point to something very odd, and potentially very bad, happening on the network. If you are like many AD administrators, at the first sign of network impropriety, you likely engage the network team and request that the network issues be addressed. While on the call with the networking team, you may hear a very unexpected resolution to the issue. The team may request that you add their non-Windows network devices to your domain as an RODC (Read Only Domain Controller) or grant a service account permission to replicate with your domain controllers.

Before you hang up on the networking team or stop reading this blog, hear me out. WAN optimizers and other caching technologies such as BranchCache can offer substantial improvements in WAN performance. This is especially true in parts of the globe where high-speed WAN connections are either cost prohibitive or non-existent. For many companies, the benefits of these devices far outweigh the support impact and potential security implications.

This is Premier Field Engineer Greg Campbell, here to remove a little of the mystery about how Riverbed's Steelhead appliances alter what we see on the wire, how best to reduce or prevent support issues, and the significant security considerations when integrating them with Active Directory. Let's start with a little background on the devices and how they operate. Then we can address the security questions, followed by some troubleshooting tips. Here's what we are going to cover.

  • Riverbed Primer – What you need to know when looking at network captures
  • Security Implications – What you need to know before integrating the Steelheads with Active Directory
  • Troubleshooting info – Common issues you may encounter with Steelheads in the environment.

While the Steelheads can optimize HTTPS traffic (with access to the web site's private key) and MAPI traffic, the focus of this blog will be on optimizing SMB traffic.

Before we go any further please review our recently published support boundaries on this topic.

https://support.microsoft.com/en-us/kb/3192506

1 Riverbed Primer

Riverbed Technology Inc is the manufacturer of WAN optimization products, including the Steelhead branded products. Steelhead comes in many forms, including appliance devices, a soft client for Windows and Mac, and a cloud-based solution.

The Steelheads can perform three levels of traffic optimization:

  • TCP optimization – This includes optimizations at the TCP layer, such as optimizing TCP acknowledgment traffic and TCP window size.
  • Scalable data reference (data deduplication) – Instead of sending entire data sets, only changes and references to previously sent data are sent over the WAN. This is most effective for small changes to large files.
  • Application latency optimization – For applications such as file sharing (SMB), Steelhead appliances optimize the protocol by reducing protocol overhead and increasing data throughput.

The last one, application latency optimization, can provide the most impactful performance gains. However, to accomplish application latency optimization, the SMB traffic either needs to be unsigned or the Steelheads need to inject themselves into the signed client-server communication. I will address that massive can of worms I just cracked open later. For now, just remember that there are up to 3 levels of optimization that can be performed.

Three TCP Connections

Don't confuse this with the traditional 3-way handshake. It's 3 separate TCP sessions, each with its own 3-way handshake. I guess you could say it's a 9-way handshake. Here are the 3 legs of the journey (see Figure 1).

  1. Client to Steelhead (LAN) – The first session is between the client and the local Steelhead appliance operating in the client role. This traffic is not optimized and is a regular TCP session.
  2. Steelhead to Steelhead (WAN) – The second session is TLS encrypted between the two Steelhead appliances. This TLS-protected TCP session is a proxied connection that corresponds to the two outer TCP sessions to the client and the server. This traffic is invisible to the client and server and is where the bulk of the optimization takes place.
  3. Steelhead to Server (LAN) – The third session is between the Steelhead operating in the server role and the server. This traffic is not optimized and is a regular TCP session.

Figure 1 – The 3 separate TCP sessions

When looking at simultaneous (client/server) network captures, there will be completely different TCP sequence numbers, packet ID numbers, and possibly different packet sizes. This is because they are different TCP sessions. Though they are different TCP sessions, the Steelhead appliances use a feature called IP and port transparency. This means the IP and port on the client and the server are not altered by the optimization. This can be helpful when attempting to align the client-side and server-side conversations in the two network captures.

While Steelhead appliances do not have configured roles, to help explain traffic flow and AD requirements, the Steelhead nearest the client is in the "client role", or C-SH. The Steelhead nearest the server is in the "server role", or S-SH. These roles would be reversed if the traffic were reversed, for example, a server in the data center accessing the client for system management.

Steelhead Bypass Options

There are times when traffic cannot or should not be optimized by the Steelheads. During troubleshooting, it can be helpful to try bypassing the Steelheads to determine if they are involved in the issue. Because there are different ways to bypass the Steelheads, it's helpful for the AD admin to know which method of bypass was used, especially if the bypass method didn't address the issue. Most of the bypass methods do not completely bypass the Steelheads nor disable all levels of optimization. If the issue persists after bypassing has been enabled, it may be necessary to use a different bypass method.

Steelhead’s IP Blacklist

The Steelhead appliances will attempt to optimize all SMB traffic that is not already excluded. If the traffic cannot be application latency optimized, the IP addresses are put in a dynamic exclusion list called a "blacklist" for 20 minutes. The next time a connection is attempted, the Steelhead will allow the traffic to bypass latency optimization. If a single IP address appears on the blacklist several times, it will be put onto a long-term blacklist, so the first failure is avoided. The long-term blacklist will persist until the Steelhead is rebooted or it is manually cleared by an administrator.

In the case of signed SMB traffic that cannot be application latency optimized, the first attempt to connect will likely fail. In the client-side capture, you may see an unexpected TCP reset from the server. At the next attempt to connect, the Steelheads only perform the first two levels of optimization and the connection typically succeeds. The blacklist is short-lived (20 minutes). If the client application silently retries, the user may not see any issue. However, some users may report that sometimes it works and sometimes it doesn't, or they may have noticed that the first attempt fails but the second works. Keep this in mind when troubleshooting intermittent network issues.

In-Path Bypass rules and Peering rules

If the Steelhead appliances are not AD integrated, the best option is to have the Steelhead administrators exclude the traffic from optimization by adding an In-Path, pass-through rule on the Steelheads. Because domain controllers always sign SMB traffic, In-Path, pass-through rules are recommended for domain controllers. On the Steelheads operating in the client role, create an in-path rule to pass through all traffic to the domain controllers. For more information, see:

https://support.riverbed.com/bin/support/static/bkmpofug7p1q70mbrco0humch1/html/ocgj3m4oc178q0cigtfufl0m68/sh_ex_4.1_ug_html/index.html#page/sh_ex_4.1_ug_htm/setupServiceInpathRules.html

Optionally, a peering rule can be applied on the Server side Steelhead. Work with your Riverbed support professional to determine which method is recommended for your environment.

Riverbed Interceptor and Bypass Options

For troubleshooting, it may be necessary to completely bypass the Steelhead appliances for testing. If the environment is using the load distribution device "Riverbed Interceptor", the bypass rule can be added either on the Interceptor or on the Steelheads. The Interceptor balances optimization traffic across the Steelhead appliances within the site. The Interceptor can be configured to bypass all the Steelhead appliances, sending traffic un-optimized directly to the router. If the bypass rule is set on the Steelheads instead of the Interceptor, the traffic may still be TCP and SDR (scalable data reference) optimized. When deciding which bypass method to use, note the following:

  1. In-path bypass rules for the Steelheads. When configured on the Steelhead, only application latency optimization is disabled. TCP optimization and scalable data reference optimization are still active. This still provides a measure of optimization; when troubleshooting, this method alone may still leave the Steelheads introducing issues.
  2. In-path bypass rules for Interceptors. Configured on the Riverbed Interceptors, when these rules are enabled, the traffic will completely bypass the Steelhead appliances. No optimization is performed. Setting the bypass rule on the Interceptor may be required in some troubleshooting scenarios.

Figure 2 – Bypass rules configured at the interceptor

2 Security Implications

In this section, we review the security impact of integrating Steelheads with Active Directory. This integration is required to fully optimize signed SMB traffic.

Why SMB traffic Should Be Signed

Consider the scenario where a client's traffic is intercepted and relayed to another host. An adversary is acting as a man-in-the-middle (MitM), and the connection information may be used to connect to another resource without your knowledge. This is why it's important that the client have some way to verify the identity of the host it's connecting to. For SMB, the solution to this problem goes back as far as Windows NT 4.0 SP3 and Windows 98. SMB signing is the method used to cryptographically sign the SMB traffic. This is accomplished by using a session key that is derived from the authentication phase, during SMB session setup. The file server uses its long-term key, aka computer account password, to complete this phase and prove its identity.

When traffic is “application optimized”, the Steelhead appliances operate as an authorized man-in-the-middle. They are intercepting, repackaging and then unpacking traffic to and from the real server. To sign this traffic, the Steelhead operating in the server role needs access to the session key. Without AD integration, the Steelhead appliances do not have access to the server’s long-term keys and cannot obtain the session key. The Steelhead is unable to sign the SMB Session setup packets and cannot prove it is the server the client had intended to communicate with. SMB signing is doing its job and preventing the man-in-the-middle. For the Steelheads, this protection means the traffic cannot be application latency optimized.

Signing with SMB3 and Windows 10 (Secure Negotiate)

SMB3 added another layer of protection called secure negotiate. During the client/server SMB negotiation, the client and server agree on the highest supported SMB dialect. The server response contains the negotiated SMB dialect, and the field in the packet containing the dialect value is signed by the server. This step ensures that the SMB dialect cannot be downgraded to an older, weaker dialect by a man-in-the-middle. Compared to traditional SMB signing, far less of the SMB packet is signed. However, this still causes an issue for Steelheads that are not able to sign the field containing the negotiated dialect.

In Windows 8, it was possible to disable Secure Negotiate. While unadvisable, many admins opted to restore WAN performance at the expense of SMB security. In Windows 10, the capability to downgrade security by disabling Secure Negotiate is no longer available. This leaves only two options for Windows 10:

  • Integrate the Steelhead appliance (server role in the datacenter) with AD.
  • Bypass the Steelhead appliances for application optimization. This significantly reduces the effectiveness of the Steelhead appliances in some scenarios.

NOTE: When a Windows 10 client (SMB 3.1.1) is communicating with a down-level server, the client will use Secure Negotiate. However, when a Windows 10 client is communicating with a server running SMB 3.1.1, it will use the more advanced variant, Pre-Authentication Integrity. Pre-Authentication Integrity also uses signed packets and has the same Steelhead requirements as Secure Negotiate. For more information, see https://blogs.msdn.microsoft.com/openspecification/2015/08/11/smb-3-1-1-pre-authentication-integrity-in-windows-10/

RIOS versions 9.1.3 and below currently do not support Pre-Authentication Integrity. Contact your Riverbed support professional for options if this scenario applies to your environment.

For an in-depth discussion of the Secure Negotiate feature, see: https://blogs.msdn.microsoft.com/openspecification/2012/06/28/smb3-secure-dialect-negotiation/

Steelhead options for Optimizing Signed SMB Traffic

Now we know that the Steelhead needs the server's long-term key (computer account password) to sign the response and subsequently fully "application optimize" the traffic. The required permissions vary by authentication protocol (NTLM or Kerberos) and RIOS version. For this discussion, we will focus on RIOS version 7 and above.

AD Integration – Kerberos

The Steelheads use one of two methods to obtain the session keys needed to sign optimized SMB traffic. To optimize SMB traffic authenticated using NTLM, the Steelheads use a computer account with RODC privileges. More on this in the next section. To optimize traffic authenticated using Kerberos, the Steelheads do not need an RODC account but instead use a highly privileged service account along with a workstation computer account. While this mode requires far more privileges than the RODC approach, it is required for optimizing Kerberos authentication sessions. Additionally, this method does not suffer from the RODC account related issues discussed in the troubleshooting section.

As of RIOS version 7.0 and above, the "End-To-End Kerberos Mode", or eeKRB, is recommended by Riverbed. This mode replaces the deprecated "Constrained Delegation Mode" in older versions of RIOS. The account requirements for the Steelhead operating in the server role are:

  1. A service account with the following permissions on the root of each domain partition containing servers to optimize:

    “Replicate Directory Changes”

    “Replicate Directory Changes All”

  2. The Steelheads need to be joined to at least one domain in the forest as a workstation, not as an RODC.

The “Replicate Directory Changes All” permission grants the Steelhead service account access to replicate directory objects, including domain secrets. Domain secrets include the attributes where account password hashes are stored. This includes the password hashes for the file servers whose traffic is to be optimized, as well as the hashes for users, administrators, trusts, and domain controllers.

The other critical piece of the “Replicate Directory Changes All” permission is that it bypasses the RODC’s password replication policy. There is no way to constrain access only to the computer accounts to be optimized.

In most environments, all SMB traffic should already be using Kerberos for authentication. In these environments, it is not necessary to configure the Steelheads with an RODC account. This is important as it will prevent many of the issues with the RODC account approach. The security impact of this configuration is considerable and needs careful planning and risk mitigation. More on this below.

AD Integration – NTLM

To support NTLM authentication, the Steelhead operating in the server role will need to be joined as a workstation with the UserAccountControl value set to 83955712 (Dec), or 0x5011000 (Hex). This maps to the following capabilities (decoded in the sketch after this list):

  • PARTIAL_SECRETS_ACCOUNT
  • TRUSTED_TO_AUTH_FOR_DELEGATION
  • WORKSTATION_TRUST_ACCOUNT
  • DONT_EXPIRE_PASSWORD
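Because UserAccountControl is a bit mask, the mapping above can be verified directly; the hexadecimal values below are the standard userAccountControl flag bits.

# Sketch: decode UserAccountControl 0x5011000 into its component flags.
UAC_FLAGS = {
    "WORKSTATION_TRUST_ACCOUNT":      0x00001000,
    "DONT_EXPIRE_PASSWORD":           0x00010000,
    "TRUSTED_TO_AUTH_FOR_DELEGATION": 0x01000000,
    "PARTIAL_SECRETS_ACCOUNT":        0x04000000,
}

value = 0x5011000  # 83955712 decimal, as set on the Steelhead account
for name, bit in UAC_FLAGS.items():
    if value & bit:
        print(name)

assert value == sum(UAC_FLAGS.values())  # exactly these four flags make up the value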

The UserAccountControl settings grant access to the partial secrets. These are the same permissions that are used by RODCs to replicate the domain secrets for accounts included in the RODC Password Replication Policy. The flag TRUSTED_TO_AUTH_FOR_DELEGATION allows the account to perform protocol transition. KB3192506 describes the risk:

“The account can impersonate any user in the Active Directory except those who are marked as “sensitive and not allowed for delegation.” Because of Kerberos Protocol transition, the account can do this even without having the password for impersonation-allowed users.”

A UserAccountControl value of 0x5011000 will identify the account as an RODC. This causes issues for tools that query for DCs based on the UserAccountControl value. See the section Troubleshooting issues with UserAccountControl set to 0x5011000 below for more details.

Security Implications of AD Integration – What is exposed?

The decision to integrate Steelhead should be the outcome of a collaboration between a company’s security team, the company’s networking team and the company’s Active Directory team. To inform this discussion, consider the value of the data being shared with the Steelhead appliances along with the resulting “effective control” of that data. Microsoft’s current guidance regarding protection of data and securing access can be found in the Securing Privileged Access Reference. The reference defines a concept called the Clean Source Principle.

“The clean source principle requires all security dependencies to be as trustworthy as the object being secured.

“Any subject in control of an object is a security dependency of that object. If an adversary can control anything in effective control of a target object, they can control that target object. Because of this, you must ensure that the assurances for all security dependencies are at or above the desired security level of the object itself. “

Source: https://technet.microsoft.com/windows-server-docs/security/securing-privileged-access/securing-privileged-access-reference-material#a-name-csp-bm-a-clean-source-principle

In this case, the data is the secrets for all domains in the forest. Because a domain controller only holds secrets for its own domain, the service account has higher privileges than any single DC. The service account can authenticate as any user in the forest to any resource within the forest or any trusting forest where permissions are granted. This means the service account has either direct or indirect control of all data in the forest as well as any trusting forests (where permissions are granted). To put this in perspective, the service account can act as any account (user, computer, DC) to access any resources in the forest including all servers containing high value IP, email, financial reports, etc. Since the Riverbed administrators have control of the service account, these admins now have indirect control of the entire forest.

Below are some guidelines for securing privileged access. While this applies to domain controllers and domain administrators, these practices and requirements may be extended to any highly privileged device or account. When reviewing the requirements below, consider these questions:

Can the Steelhead environment, including appliances, management workstations, and all accounts with direct or indirect access, be secured to the same level as the DCs and domain administrators?

If so, what is the financial impact of both initial and ongoing operational changes needed to secure the Steelhead environment?

Finally, do the optimization benefits outweigh the increased operation cost and potential increase in risk to the forest?

To secure any accounts or appliances that have this level of privilege, the Securing Privileged Access Reference provides a good starting point.


Tier 0 administrator – manage the identity store and a small number of systems that are in effective control of it, and:

  • Can only log on interactively or access assets trusted at the Tier 0 level
  • Separate administrative accounts
  • No browsing the public Internet with admin accounts or from admin workstations
  • No accessing email with admin accounts or from admin workstations
  • Store service and application account passwords in a secure location:
    • A physical safe
    • Ensure that all access to the password is logged, tracked, and monitored by a disinterested party, such as a manager who is not trained to perform IT administration.
  • Enforce smartcard multi-factor authentication (MFA) for all admin accounts, no administrative account can use a password for authentication.
  • A script should be implemented to automatically and periodically reset the random password hash value by disabling and immediately re-enabling the attribute “Smart Card Required for Interactive Logon.”

Additional guidance comes from “Securing Domain Controllers Against Attack”:

https://technet.microsoft.com/en-us/windows-server-docs/identity/ad-ds/plan/security-best-practices/securing-domain-controllers-against-attack

Highlights include:

  • In datacenters, physical domain controllers should be installed in dedicated secure racks or cages that are separate from the general server population.
  • OS updates and patch deployments – To ensure known security vulnerabilities are not exploited, all appliances should be kept up to date in accordance with the manufacturer’s guidance. This may include running the current operating system and patch level.
  • Remote access such as RDP or SSH should be highly restricted and only permitted from secured admin workstations.

The above list is not an endorsement of Steelhead security, but serves as a starting point to plan the security of extending Tier 0 beyond the DCs and domain admins. Additional mitigations should include timely patching, a complex password that is regularly changed, and mitigations provided by Riverbed.

3 Troubleshooting Network Issues when Steelheads are involved

This section will cover common issues and troubleshooting guidance. Troubleshooting with Steelheads generally follows this flow:

  • Are Steelheads involved in the conversation? Even if Steelheads are present in the environment, they may not be causing the networking issue. They may not be deployed everywhere in the environment, or they may be configured to bypass some traffic. So even if they are in use and in the network path, they may not be negatively affecting the network traffic.
  • Are the Steelheads the cause of the network issue? Network issues have existed long before Steelheads arrived. Careful diagnosis is required. The Steelheads may be optimizing traffic but not causing the network issue. When in doubt, have the Steelhead administrators bypass the traffic for the affected machines for testing.
  • Test by bypassing the Steelheads. When the Steelheads are determined to be the cause of networking issues, they can be bypassed in one of several ways. See the bypass section for more details.

Detecting Riverbed Probe Info in captures

When troubleshooting connectivity issues, if you suspect that a Steelhead appliance is involved, there are several ways to detect Steelhead appliances. Most of them require a simultaneous capture on the client and the server. With the two methods below, the capture is only performed on one side.

Riverbed Probe Info

The capture must be taken from the server side and it must include the SYN packet from the client. Locate the SYN packet in the capture and check for TCP option 76. This packet will also show which Steelhead the client accessed, which can be helpful when engaging the networking team. Wireshark parsers show this as a Riverbed Probe Query. Use the display filter 'tcp.options.rvbd.probe'.

Internet Protocol Version 4, Src: , Dst:

Transmission Control Protocol, Src Port: , Dst Port: , Seq: 0, Len: 0

Riverbed Probe, Riverbed Probe, No-Operation (NOP), End of Option List (EOL)

No-Operation (NOP)

Riverbed Probe: Probe Query, CSH IP:

Kind: Riverbed Probe (76)

Reserved: 0x01

CSH IP:

Riverbed Probe: Probe Query Info

Probe Flags: 0x05

For a complete list of Riverbed display filters, see https://www.wireshark.org/docs/dfref/t/tcp.html

Search for filters starting with “tcp.options.rvbd”. Or enter “tcp.options.rvbd” in the Wireshark filter field and the list of available filters will be displayed.
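The same filter can be applied programmatically to a saved capture. The sketch below uses pyshark (a Python wrapper around tshark) with a placeholder file name, and assumes pyshark and tshark are installed.

import pyshark

# List packets in a server-side capture that carry the Riverbed probe TCP option (76).
capture = pyshark.FileCapture("server_side.pcap",
                              display_filter="tcp.options.rvbd.probe")
for packet in capture:
    print(packet.ip.src, "->", packet.ip.dst, "carries a Riverbed probe")
capture.close()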

Detecting Steelheads using the TTL / Hoplimit values in the IP header

The IP header of each packet contains a TTL value for IPv4; for IPv6 this is called Hoplimit. The default value for Windows systems is 128. The value is decremented by 1 each time the packet traverses a router. When the packets originate from non-Windows systems, the value is often considerably lower. For Steelheads, this value starts at 64 and is typically 60 to 63 after the packet traverses a router or two. A value of 64 does not always mean the packet traversed a Steelhead appliance, but if the packet was sent by a Windows system, it's clear the TTL was modified in route and the Steelhead appliance may be the source of that change.

IPv4 TTL Example

Ipv4: Src = XXXXX Dest = XXXXX 3, Next Protocol = TCP, Packet ID = 62835, Total IP Length = 1400

  + Versions: IPv4, Internet Protocol; Header Length = 20

  + DifferentiatedServicesField: DSCP: 0, ECN: 0

    TotalLength: 1400 (0x578)

    Identification: 62835 (0xF573)

  + FragmentFlags: 16384 (0x4000)

    TimeToLive: 64 (0x40)

    NextProtocol: TCP, 6(0x6)

    Checksum: 22330 (0x573A)

    SourceAddress: XXXXX

    DestinationAddress: XXXXX

IPv6 Hoplimit Example

+ Ethernet: Etype = IPv6,DestinationAddress:[XXXXX],SourceAddress:[XXXXX]

– Ipv6: Next Protocol = TCP, Payload Length = 272

+ Versions: IPv6, Internet Protocol, DSCP 0

PayloadLength: 272 (0x110)

NextProtocol: TCP, 6(0x6)


HopLimit: 64 (0x40)

SourceAddress: XXXXX

DestinationAddress: XXXXX

Detecting Steelheads with Simultaneous network captures

In many cases, simultaneous captures will be required to verify where the failure occurred in the conversation. In the captures, look for the following to determine if the traffic is traversing Steelhead appliances. Align the captures by IP address and port number. Then align the start of the conversation using a common packet, such as "SMB Negotiate", that is present in both captures. If the packets traverse a Steelhead, you will likely see:

  • The SMB Session ID does not match.
  • The packet size and sequence numbers for the same traffic do not match between the two captures.
  • Additionally, you may see a TCP reset at the client that was never sent by the server. This can happen if the conversation is signed and the Steelheads are not AD Integrated. See Steelhead’s IP Blacklist earlier in this blog for more info.

SMB Negotiates down to 2.02


Steelheads have a mode called "Basic Dialect" which is often used to disable client leasing and force the use of oplocks. In this mode, the Steelheads intercept and modify the available dialects supported by the server. While this behavior may not cause any issues for SMB 2.02-supported capabilities, it does mean that SMB3 capabilities will be disabled. Here is a list of capabilities that will be disabled in this mode, from https://support.microsoft.com/en-us/kb/2709568 and https://blogs.technet.microsoft.com/josebda/2015/05/05/whats-new-in-smb-3-1-1-in-the-windows-server-2016-technical-preview-2/

  • SMB Transparent Failover
  • SMB Scale Out
  • SMB Multichannel
  • SMB Direct
  • SMB Encryption including support for AES-128-GCM
  • VSS for SMB file shares
  • SMB Directory Leasing
  • SMB PowerShell
  • Cluster Dialect Fencing
  • Pre-Authentication Integrity

In this example of a network capture, a Server 2012 R2 client is connecting to a Server 2012 R2 server. The traffic is signed and not latency optimized. This is the packet that left the client. Notice the SMB dialects offered by the client are 0x0202 through 0x0302.

SMB2 (Server Message Block Protocol version 2)

SMB2 Header

Negotiate Protocol Request (0x00)

StructureSize: 0x0024

Dialect count: 4

Security mode: 0x01, Signing enabled

Reserved: 0000

Capabilities: 0x0000007f, DFS, LEASING, LARGE MTU, MULTI CHANNEL, PERSISTENT HANDLES, DIRECTORY LEASING, ENCRYPTION

Client Guid: 5c050930-680d-11e6-80d0-9457a55aefad

NegotiateContextOffset: 0x0000

NegotiateContextCount: 0

Reserved: 0000

Dialect: 0x0202

Dialect: 0x0210

Dialect: 0x0300

Dialect: 0x0302

Here is the same packet when it arrived at the server. Notice that the Steelhead has removed all available dialects except 0x0202.

Altered packet that arrives to the Server

SMB2 (Server Message Block Protocol version 2)

SMB2 Header

Negotiate Protocol Request (0x00)

StructureSize: 0x0024

Dialect count: 4

Security mode: 0x01, Signing enabled

Reserved: 0000

Capabilities: 0x0000007f, DFS, LEASING, LARGE MTU, MULTI CHANNEL, PERSISTENT HANDLES, DIRECTORY LEASING, ENCRYPTION

Client Guid: 5c050930-680d-11e6-80d0-9457a55aefad

NegotiateContextOffset: 0x0000

NegotiateContextCount: 0

Reserved: 0000

Dialect: 0x0202

Dialect: 0x0000

Dialect: 0x0000

Dialect: 0x0000

Troubleshooting

This section covers common troubleshooting scenarios where Steelheads are involved in the conversation.

Intermittent Connectivity – Knock twice to enter

While intermittent connectivity can be a transient network issue, the effect of Steelheads has a very specific pattern. The first attempt to connect fails with a "Network Path Not Found" or other generic network failure. A network capture on the client will show either an unexpected TCP reset from the server, or an unexpected Ack, Fin terminating the connection. A capture taken at the server will show that these packets were never sent from the server.

The second (or sometimes the third) attempt to connect is successful. The connection works for over 20 minutes and then may fail again. Retrying the connection twice more establishes the connection once again. This pattern occurs when the SMB session or negotiated dialect field is signed and the Steelheads are not AD integrated. The Steelhead is attempting to perform all 3 levels of optimization and is not able to perform the last one, application latency, due to the signing requirements. The Steelhead then puts the client's and server's IP addresses on a temporary bypass list with a lifetime of 20 minutes. The existing session must be torn down and a new session established with the bypass in place. The TCP reset is what triggers the tear down. The next time the client retries the connection, the Steelheads will not attempt application latency optimization and the operation succeeds.

DC Promo Failure

During promotion of a 2012 R2 domain controller using a 2012 R2 helper DC (both using SMB 3.0), the candidate DC receives an unexpected A..F (Ack, Fin) packet while checking for the presence of its machine account on the helper DC. The following is logged in the DCpromoUI log:

Calling DsRoleGetDcOperationResults

Error 0x0 (!0 => error)

Operation results:

OperationStatus : 0x6BA !0 => error

DisplayString : A domain controller could not be contacted for the domain contoso.com that contained an account for this computer. Make the computer a member of a workgroup then rejoin the domain before retrying the promotion.

After receiving the unexpected A..F packet from the helper DC, the candidate DC hits the failure and does not retry. This happens even if the promotion is using IFM (install from media). In many of these scenarios, the servers are added to the dynamic bypass list called the blacklist. However, in this scenario, retrying the operation still fails. To address this issue, manually configuring a bypass rule on the Steelheads is required.

Issues with UserAccountControl set to 0x5011000

This section covers the issues that may be present when joining the Steelhead appliances with UserAccountControl set to 0x5011000 (Hex). When this value is set on the UserAccountControl attribute, the Steelhead appears to many tools as an RODC. Because the Steelhead does not provide RODC services, several tools and processes encounter failures.

Setting UserAccountControl to 0x5011000 (Hex) is only necessary when NTLM is used between the client and the server. If Kerberos is used to authenticate the user, then the Steelhead can be joined as a regular workstation. To avoid the issues below, consider using the Steelhead to optimize traffic only when Kerberos is used.

AD Tools Detect Steelhead Accounts as DCs

Tools that rely on UserAccountControl values to enumerate DCs in a domain will find Steelhead appliances joined with UserAccountControl set to 83955712 (Dec), or 0x5011000 (Hex). Some examples are listed below, followed by a sketch of a query for finding such accounts:

  • nltest /dclist:domain.com
  • [DirectoryServices.Activedirectory.Forest]::GetCurrentForest().domains | %{$_.domaincontrollers} | ft name,OSVersion,Domain
  • The DFSR migration tool dfsrmig.exe will find the Steelhead accounts and prevent the transition to the “eliminated” state. This is because the tool expects the account to respond as a DC and report its migration state when queried. The migration state cannot transition to its final state until the Steelhead accounts are removed from the domain.
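
The same accounts can also be enumerated directly with an LDAP query on the UserAccountControl value. This is only a minimal sketch in C# (it assumes the System.DirectoryServices assembly is referenced and that "DC=contoso,DC=com" is replaced with your own domain's distinguished name):

using System;
using System.DirectoryServices;

class FindSteelheadAccounts
{
    static void Main()
    {
        // 83955712 (0x5011000) is the UserAccountControl value discussed above.
        using (var root = new DirectoryEntry("LDAP://DC=contoso,DC=com"))
        using (var searcher = new DirectorySearcher(root, "(userAccountControl=83955712)"))
        {
            searcher.PropertiesToLoad.Add("sAMAccountName");
            foreach (SearchResult result in searcher.FindAll())
            {
                // Each hit is an account that tools may mistake for an RODC.
                Console.WriteLine(result.Properties["sAMAccountName"][0]);
            }
        }
    }
}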

Pre-created RODC Accounts and Slow logon

When pre-creating a machine account for the Steelhead, the account should be created as a regular workstation, not an RODC. When an RODC account is pre-created, the process also creates a server object and an NTDS Settings object in the Sites container. The Steelhead machine account may then be discovered by the DFS service hosting SYSVOL and returned in the DFS referrals for SYSVOL. This can contribute to a slow logon experience as clients try and fail to connect to the Steelhead appliance.

The partition knowledge table (dfsutil /pktinfo) might show a list like this with the Riverbed appliance at top:

Entry: \contoso.com\sysvol

ShortEntry: \contoso.com\sysvol

Expires in 752 seconds

UseCount: 0 Type:0x1 ( DFS )

0:[\Steelhead1.contoso.com\sysvol] AccessStatus: 0xc00000be ( TARGETSET )

1:[\DC1.contoso.com\sysvol] AccessStatus: 0 ( ACTIVE )

2:[\DC2.contoso.com\sysvol] AccessStatus: 0

To prevent this issue, do not use the RODC pre-creation wizard to create Steelhead machine accounts. Create the account as a workstation and then modify the UserAccountControl value. To recover from this state, delete the NTDS Settings objects for the Steelhead accounts. Note that it will take some time for the DFS caches on the client and domain root to time out and refresh.

DSID Error Viewing the Steelhead Machine Account.

When viewing the Steelhead machine account with AD Users and Computers or ADSIEdit on Server 2008 R2, you may encounter the error below. This error occurs because the UserAccountControl values make the account appear as an RODC. The interface then attempts to query for NtdsObjectName, which it cannot find because regular workstations do not have an NTDS object.

This issue does not occur when using LDP on 2008 R2. Server 2012 R2 and the latest RSAT tools are also not affected.

Domain Join Failure when not using a Domain Admin Account

The Steelhead appliance can be joined to the domain as a workstation or as an account with RODC flags. During the operation, the credentials entered in the Steelhead UI are used to modify the UserAccountControl value as well as to add Service Principal Names. These two operations may fail, and here’s why:

UserAccountControl – With security enhancements in MS15-096, modification of the UserAccountControl value by non-admins is no longer permitted.

The Steelhead log may report:

Failed to join domain: user specified does not have admin privilege

This is caused by a security change in MS15-096 that prevents non-administrators from changing any UserAccountControl flags that alter the account type.

3072595 MS15-096: Vulnerability in Active Directory service could allow denial of service: September 8, 2015 http://support.microsoft.com/kb/3072595/EN-US

ServicePrincipalName – The SPNs being written will be for HOST/Steelhead and HOST/SteelheadFQDN. The account writing the SPNs is not the same as the Steelhead’s computer account and will not be able to update the SPNs. The Steelhead log may report:

Failed to join domain: failed to set machine spn

Entering domain administrator credentials in the Steelhead domain join UI is not recommended for security reasons. Additionally, it may not even be possible for accounts that require smartcards for logon.

Workaround

  • A workaround is to pre-create the computer account, set the correct UserAccountControl value, and give the user account that is joining the Steelheads full control over the account.
  • During domain join using the Steelhead UI, the default container “cn=computers” is used. If the pre-created account is in a different OU, the domain join must be performed using the CLI commands on the Steelhead. Refer to the Riverbed documentation for the syntax.
  • The pre-created accounts will need to have the UserAccountControl value set correctly before the Steelhead attempts to join the domain (a sketch of setting this value programmatically follows this list). Use the following values:
    • Joining as a Workstation = 69632 (Dec) or 0x11000 (Hex)
    • Joining with RODC flags = 83955712 (Dec) or 0x5011000 (Hex)
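
As an illustration of that last step, the UserAccountControl value can be set programmatically once the computer account exists. This is only a sketch, with a hypothetical account DN; ADSIEdit or any other LDAP tool works equally well:

using System.DirectoryServices;

class SetSteelheadUserAccountControl
{
    static void Main()
    {
        // Hypothetical DN for a pre-created Steelhead computer account; adjust for your environment.
        using (var computer = new DirectoryEntry("LDAP://CN=STEELHEAD1,OU=WAN Optimizers,DC=contoso,DC=com"))
        {
            // 69632 (0x11000) joins as a workstation; use 83955712 (0x5011000) when joining with RODC flags.
            computer.Properties["userAccountControl"].Value = 69632;
            computer.CommitChanges();
        }
    }
}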

Summary

For many environments, WAN optimization provides significant improvements over existing links. This is especially true for regions where high-speed, low-latency WAN capabilities are either cost prohibitive or non-existent. Supporting these capabilities in a secure way requires collaboration between groups that may have historically operated more independently.

After thoroughly evaluating the risks associated with AD integration, some environments may find the cost of risk mitigation and the operational overhead prohibitive. In some cases, it will simply not be possible to mitigate the risks to an acceptable level. In other environments, the exercise of mitigating risks may improve the overall security of the environment.

References

Performance Brief – Signed SMB311 https://splash.riverbed.com/docs/DOC-5622

Riverbed Unsigned SMB3 Performance Brief https://splash.riverbed.com/docs/DOC-3198

How WAN Optimization Works – http://www.riverbednews.com/2014/11/how-wan-optimization-works/

Technical Overview of RiOS 8.5 https://splash.riverbed.com/docs/DOC-1198

Steelhead RiOS 9.0 Technical Overview https://splash.riverbed.com/docs/DOC-5505

Configuring Steelhead In-Path Rules – https://support.riverbed.com/bin/support/static/bkmpofug7p1q70mbrco0humch1/html/ocgj3m4oc178q0cigtfufl0m68/sh_ex_4.1_ug_html/index.html#page/sh_ex_4.1_ug_htm/setupServiceInpathRules.html

Secure Negotiate – https://blogs.msdn.microsoft.com/openspecification/2012/06/28/smb3-secure-dialect-negotiation/

SMB Preauthentication Integrity –

https://blogs.msdn.microsoft.com/openspecification/2015/08/11/smb-3-1-1-pre-authentication-integrity-in-windows-10/

RODC Technical Reference Topics (Includes information on “domain secrets”)

https://technet.microsoft.com/sv-se/library/cc754218(v=ws.10).aspx

Third-party information disclaimer

The third-party products that this article discusses are manufactured by companies that are independent of Microsoft. Microsoft makes no warranty, implied or otherwise, about the performance or reliability of these products.


Combining your Skype account and your Microsoft account. You want to do this!

Howdy folks,

You may have seen our recent announcement about how you can now use your Skype name to sign into all Microsoft apps and services. You may also have noticed that sign-in screens for Microsoft accounts now mention you can enter your Skype name in addition to your email address or phone number:

This is cool! But if you’re like many Skype users, you have two accounts: a Skype account that you use to sign into Skype, and a Microsoft account (Outlook.com or Hotmail account) that you use to read your mail or access other Microsoft apps and services such as Xbox, Office 365, or OneDrive.

The good news is you can now consolidate these into a single account, which makes sign-in easier and improves the security of your account. Think of it this way:

You’ll get a single password to sign into all Microsoft apps and services – one less thing to remember!

You’ll get better account protection for your Skype account. For example, you can use two-step verification to better protect your account against compromise.

You’ll get a better account recovery experience in case you lose access to your Skype account (for example, if you forget your password).

For these reasons, we strongly recommend that if you have a Skype account, you combine it with your Microsoft account. You can do this by adding your Microsoft account email address to your existing Skype account, or by adding another email address if you don’t already have one.

Until you do so, you won’t get the added benefits and security for your Skype account.

Updating your Skype account with an email address

When you decide to update your Skype account with an email address, take the following steps. Note that this is a one-time process.

  1. Go to https://account.microsoft.com.

  2. Sign in with your Skype name.

  3. We’ll ask you to update your Skype account with an email address.

a. If you have previously linked your Skype account with a Microsoft account, we’ll find it for you –

We’ll ask you to enter the password for your Microsoft account and you’ll be done!

b. If your Skype account is already associated with a Microsoft account, we’ll find it for you:

When you click Next, we’ll ask you to enter the password for your Microsoft account and you’ll be done!

That’s it! You’re all set! You can now use your Skype name or your email address to sign into all Microsoft apps and services. Remember to use the password for your Microsoft account, regardless of whether you use your Skype name or email address to sign in.

Read more about how you can set up one account for Skype and other Microsoft services, and please share your thoughts and feedback with us!

Best Regards,

Alex Simons (Twitter: @Alex_A_Simons)

Director of Program Management

Microsoft Identity Division


The week in .NET – Mitch Muenster – Stateless

To read last week’s post, see The week in .NET – On .NET on CoreRT and .NET Native – Enums.NET – Ylands – Markdown Monster.

On .NET

Last week, we hosted the MVP Summit, and instead of having a big one-hour show, we did several mini-interviews with MVPs. The first one was published on Monday. Mitch Muenster spent 25 minutes with us talking about being a developer with autism:

This week, we’ll publish the other interviews that we recorded during the summit.

Package of the week: Stateless

Almost all applications implement processes that can be represented as workflows or state machines. Stateless is a library that enables you to represent state machine-based workflows directly in .NET code.

Version 3.0 of Stateless just came out, with support for .NET Core.
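
As a quick illustration of the kind of workflow Stateless models, here is a minimal sketch of a two-step approval process (the state, trigger, and class names are made up for the example):

using System;
using Stateless;

enum State { Draft, Submitted, Approved }
enum Trigger { Submit, Approve }

class Program
{
    static void Main()
    {
        // Start the workflow in the Draft state.
        var workflow = new StateMachine<State, Trigger>(State.Draft);

        // Declare which trigger moves the machine from one state to the next.
        workflow.Configure(State.Draft)
                .Permit(Trigger.Submit, State.Submitted);

        workflow.Configure(State.Submitted)
                .Permit(Trigger.Approve, State.Approved);

        workflow.Fire(Trigger.Submit);
        workflow.Fire(Trigger.Approve);

        Console.WriteLine(workflow.State); // Approved
    }
}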

User group meeting of the week: Introduction to TPL Dataflow in Boulder, CO

The Boulder .NET User Group holds a meeting on Tuesday, November 15 at 5:45 on TPL Dataflow, a pattern that allows for lock-free multitasking.
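
For a taste of what TPL Dataflow looks like, here is a minimal sketch of a two-block pipeline (it assumes the System.Threading.Tasks.Dataflow package is referenced):

using System;
using System.Threading.Tasks.Dataflow;

class Program
{
    static void Main()
    {
        // A TransformBlock feeding an ActionBlock, linked so completion propagates downstream.
        var square = new TransformBlock<int, int>(x => x * x);
        var print = new ActionBlock<int>(x => Console.WriteLine(x));
        square.LinkTo(print, new DataflowLinkOptions { PropagateCompletion = true });

        for (int i = 1; i <= 5; i++)
        {
            square.Post(i);
        }

        square.Complete();
        print.Completion.Wait();
    }
}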

.NET

ASP.NET

F#

Check out F# Weekly for more great content from the F# community.

Xamarin

Azure

And this is it for this week!

Contribute to the week in .NET

As always, this weekly post couldn’t exist without community contributions, and I’d like to thank all those who sent links and tips. The F# section is provided by Phillip Carter, the gaming section by Stacey Haffner, and the Xamarin section by Dan Rigby.

You can participate too. Did you write a great blog post, or just read one? Do you want everyone to know about an amazing new contribution or a useful library? Did you make or play a great game built on .NET?
We’d love to hear from you, and feature your contributions on future posts:

This week’s post (and future posts) also contains news I first read on The ASP.NET Community Standup, on Weekly Xamarin, on F# weekly, and on Chris Alcock’s The Morning Brew.

Australian Department of Human Services achieves full Windows 10 deployment

Windows 10 is enabling the Australian Department of Human Services to deliver better access to healthcare, disability and employment support.

I had the good fortune to spend part of my young life in Australia, not as a tourist but as an employee of a small tour company. I traveled to some of the most majestic cities like Sydney, Canberra, Adelaide and Melbourne, and to some of the more rural areas, from Queenstown, Tasmania and Wagga Wagga, New South Wales to Goondiwindi, Queensland. I saw first-hand the tyranny of distance in Australia. My time in Australia gave me an appreciation not only of the citizens and the country, but also of the Australian Department of Human Services’ story.

It’s tough to bridge the divide between people who have access to amazing services in a big metropolitan city, versus the people who live in a rural town and don’t have the same access. This is a familiar story to thousands of people who are living in remote and indigenous communities in Australia’s outback who often travel an entire day and up to 300 kilometers (186 miles) to access healthcare, disability, or employment support.

In the summer of 2015, we sat down with the IT staff at Australia’s Department of Human Services (DHS), who told us they wanted a more innovative way to connect with citizens in remote places and deliver important services to them much faster. The task was tough because DHS employees conduct tens of thousands of interviews each week from hundreds of different service centers.

To bridge the distance gap and make sure more citizens can receive more personal and faster access to services, DHS worked with the Windows team and Microsoft Consulting Services to upgrade 44,000 existing and new devices to Windows 10 Enterprise in just four months. It’s the largest commercial deployment of Windows 10 in the Asia-Pacific region to date – a record deployment for a government agency with the unique complexities of the DHS, not to mention an organization that has 1,500 internal line-of-business apps that required compatibility testing.

“Our last operating system upgrade of desktops and devices, took us almost three years,” commented Mike Brett, General Manager, ICT Infrastructure at DHS. “Upgrading to Windows 10 Enterprise proved surprisingly straightforward. By using Windows 10 in-place upgrades we started initially running 20,000 Windows 10 devices in just five weeks,” said Brett.

— Mike Brett, General Manager, ICT Infrastructure at DHS

“Upgrading to Windows 10 was one of the most seamless rollouts we’ve ever seen and the level of app remediation was marginal.”

— Gary Sterrenberg, CIO, Department of Human Services

As part of the upgrade to Windows 10, DHS wanted its employees to be able to conduct interview video calls with citizens, no matter where they were located across the country. Having visited a fair amount of Australia, I appreciate how immense the country is and how spread out its people are, especially across the rural outback. Previously, DHS had a solution using iPads.

“With our iPads the screen size didn’t work for us, the sound quality was poor and using Wi-Fi connections on those devices didn’t really work for us. The other requirement required use of peripherals and that didn’t work so well with iPads. In the end, the overall experience was a barrier and frustrating to our citizens.”

— Ron Barnsley, Solution Designer, DHS

By creating a universal Windows 10 line-of-business app, DHS assessors have been able to easily interview people remotely using a broader range of devices and monitors for video calling, which solved a huge challenge of not only reaching citizens, but reaching them immediately. The pilot program on Windows 10, known internally at DHS as ‘Express Plus Connect’ (EPC), can be opened on any connected Windows 10 PC on the DHS network to allow employees to set up video calls, schedule appointments and generate reports. DHS assessors simply log in with their DHS identity and can manage all aspects of an interview without customer input, and can bring additional experts into the interview for health consults.

“Previously, assessors would schedule back-to-back interviews for a specific day, then travel 200–300 kilometers inland,” says Kylie Martin, Service Center Manager for the Charters Towers Service Center, Indigenous, Regional, and Intensive Services Division at DHS. “The early pilot results in Queensland show assessors can proceed with a claim as soon as the customer reports a problem. And they aren’t just reacting more quickly, they are taking on more cases because they don’t have to arrange travel around each appointment. With EPC, people in rural places are starting to experience the same quality of service as we provide in big cities.”

— Kylie Martin, Service Center Manager for the Charters Towers Service Center, Indigenous, Regional, and Intensive Services Division at DHS

“The EPC on Windows 10 pilot provides a level of service that Australian citizens have simply never experienced before. They really feel cared for, not just by the officer in front of them, but they’re connecting with people who speak their language.”

— Gary Sterrenberg, CIO, Department of Human Services

Check out what the team at DHS has had to say about the new solution:

When I meet with Windows customers, I always ask them: what compelled you to upgrade to Windows 10? Ken Simpson, Project Leader at DHS, shared, “The improvements over Windows 8, such as the user interface and remote access, made Windows 10 very attractive, and we’ve seen fewer incidents and defects. The OS quality has really increased. But two things stood out: Windows as a Service would help make our environment easier to maintain, because software updates flow continuously from Microsoft. It would also make us more agile because we could take what Microsoft provides and repackage it to provide a better service to our employees.”

— Ken Simpson, Project Leader at DHS

Complementary to DHS’ efforts, the Australian Federal Government (AFG) is working to ensure government services are more agile in responding to political, economic, and environmental change. In fact, the AFG has committed to improving access to health services, with an emphasis on expanding access via digital channels. The National Broadband Network (NBN), an Australian national wholesale-only open-access data network, is making it possible for this digital connection between public service providers and patients to take place.

DHS is also leveraging Cortana Intelligence, using machine learning and cognitive services, to build expert systems empowering its employees to respond faster and more effectively to citizen queries to better serve Australians.

Now Australians who have experienced disaster or misfortune can get help faster with less travel, while DHS continues to improve the speed, quality, and efficiency of services in remote areas. I could not be more proud that the Windows team and Microsoft Consulting Services played a part in helping DHS meet its ongoing commitment and service to the people of Australia.

“Good on ya to the entire team at DHS and Microsoft”

For more detailed information about the Australian Department of Human Services’ Windows 10 deployment, check out the case study here.

Fake fax ushers in revival of a ransomware family

“Criminal case against you” is a message that may understandably cause panic. That’s what a recent spam campaign hopes happens, increasing the likelihood of recipients opening the malicious attachment.

We recently discovered a new threat that uses email messages pretending to be fax messages, but in truth delivers a ransomware downloader. The attachment used in this campaign, “Criminal Case against_You-O00_Canon_DR-C240IUP-4VF.rar”, is a password-protected RAR archive file that, when extracted, is a trojan detected as TrojanDownloader:JS/Crimace.A.

Figure 1. Email message masquerading as a fax but carrying TrojanDownloader:JS/Crimace.A as attachment

The malicious email ticks all the boxes to fake a fax:

  • The subject is a simple “PLEASE READ YOUR FAX T6931”
  • The message body lists fax protocol, date and time, fax channel and number of pages
  • The attachment file name spoofs a popular fax machine brand
  • The attached archive file contains a file that has the fake meta-data “—RAW FAX DATA—“

The use of a password-protected RAR file attachment is a clear attempt to evade AV scanners. The password is provided in the email message body. The archive file contains no fax, but rather Crimace, a malicious Windows Script File (.WSF) written in JScript.

When the recipient falls for the lure and opens the attachment, Crimace displays the following message to complete the fax pretense:

Figure 2. Crimace displays a message to signify the fake fax cannot be displayed

Unsuspecting victims might think that is the end of it. But Crimace goes ahead with its intention to download its payload, a ransomware detected as Ransom:Win32/WinPlock.B.

WinPlock is a family of ransomware that has been around since September 2015 but did not have significant activity until recently. The discovery of this new variant signals that it’s back to wreak havoc.

Ransom:Win32/WinPlock.B can search for and encrypt a total of 2,630 file types.

Figure 3. Ransom:Win32/WinPlock.B’s ransom note contains instructions to pay

It asks for a ransom of 0.55 Bitcoin, which the ransom note indicates as converting to ~US$386. However, using current conversion rates, it converts a little higher:

Figure 4. Bitcoin to US Dollar conversion on November 15, 2016 shows a higher rate than what is indicated in the ransom note (data from Coinbase)

Interestingly, when this ransomware family was first discovered in September 2015, it asked for a ransom of 1 Bitcoin, which at the time converted to ~US$300. The market has changed since then, with more and more ransomware families and better technologies to detect ransomware. The increase in ransom amount indicates the actors behind this ransomware family are tracking Bitcoin exchange rates and aiming for potentially bigger gains.

And, just like the fake fax that delivers Crimace, Ransom:Win32/WinPlock.B attempts to cause panic by setting a timer that gives a victim 120 hours to pay the ransom:

Figure 5. Ransom:Win32/WinPlock.B sets a timer

TrojanDownloader:JS/Crimace.A has a lot of functions to download and execute

TrojanDownloader:JS/Crimace.A arrives as a malicious .WSF file contained in a RAR archive attached to emails:

Figure 6. The attachment is a RAR archive containing a malicious .WSF file

Inspecting the .WSF file shows that it is an obfuscated script file:

Figure 7. The .WSF file in its obfuscated form

Decrypting the file reveals a lot of suspicious functions including download and execute capabilities:

  • function CheckWSFInAutorun()
  • function CheckWSFInFolder()
  • function CopyWSFToFolder()
  • function DecRequest()
  • function Download()
  • function EncRequest()
  • function Execute()
  • function GetCurrentFile()
  • function GetInstallPath()
  • function GetRandHASH()
  • function GetRandomName()
  • function GetStrHASH()
  • function GetWSFGuid()
  • function HTTPRequest()
  • function HTTPRequestRaw()
  • function IsUserAdmin()
  • function MakeAutorun()
  • function SelfDelete()
  • function UnitChange()
  • function UnitPing()
  • function UnitRequest()

The header of the file is its configuration code and is embedded in the file as an array:

Figure 8. The header of the decrypted script is the configuration code

When decrypted, the configuration includes data such as the campaign number, download links, and installation paths:

Figure 9. Decrypted configuration

Ransom:Win32/WinPlock.B encrypts 2,630 file types

Ransom:Win32/WinPlock.B is downloaded by Crimace as a Nullsoft Scriptable Install System (NSIS) package. Once executed it may create the following desktop shortcut:

Figure 10. NSIS package icon used by malware

When the malicious file is extracted from the NSIS package, it uses the following icon:

Figure 11. Icon used by malware after extraction from package

The malware’s file information also shows campaign ID as internal name and version:

Figure 12. The malware file information

When successfully executed, Ransom:Win32/WinPlock.B encrypts files with extensions in its list of 2,630. Notably, the ransom note contains an email address to contact for support. It asks for a ransom of 0.55 Bitcoin.

Figure 13. Ransom:Win32/WinPlock.B’s ransom note contains support information

The ransom note also lists websites where victims can buy Bitcoins:

Figure 14. Ransom:Win32/WinPlock.B’s ransom note lists information for acquiring Bitcoins

Clicking “Show files” lists all the encrypted files. Unlike other ransomware, Ransom:Win32/WinPlock.B does not change the extension of the encrypted files:

Figure 15. List of encrypted files

It also creates additional files to remind users that their computer is infected:

Figure 16. The malware creates additional files to indicate that files have been encrypted

Prevention and mitigation

To avoid falling prey to this new ransomware campaign, here are some tips:

For end users

  • Use an up-to-date, real-time antimalware product, such as Windows Defender for Windows 10.
  • Keep Windows and the rest of your software up-to-date to mitigate possible software exploits.
  • Think before you click. Do not open emails from senders you don’t recognize.  Upload any suspicious files here: https://www.microsoft.com/en-us/security/portal/submission/submit.aspx. This campaign uses a RAR archive file, which may be a common attachment type, but it contains a .WSF file. Be mindful of what the attachment is supposed to be (in this case, a fax) and the actual file type (a script).

For IT Administrators

Additional information

To learn more about how Microsoft protects you from ransomware, you can read the following:

 

Francis Tan Seng

MMPC

The Next Generation Database & Data Lake from Microsoft

Re-posted from the SQL Server blog.

Earlier today, at the Connect() event, which is livestreaming globally from New York City, we announced the next generation of Microsoft SQL Server and Azure Data Lake, as well as many other exciting new capabilities to help developers build intelligent applications.

Here’s a quick recap of the key announcements:

  • The next release of SQL Server, with support for Linux & Docker (preview).
  • The release of SQL Server 2016 SP1, now offering a consistent programming model across all SQL Server editions.
  • General Availability of Azure Data Lake Analytics and Azure Data Lake Store.
  • Public preview of the DocumentDB Emulator, providing a local development experience for our blazing fast, planet-scale NoSQL service.
  • General Availability of Microsoft R Server for Azure HDInsight. HDInsight is our fully managed cloud Hadoop offering, providing optimized Open Source analytic clusters for Spark, Hive, Map Reduce, HBase, Storm, and R Server, backed by a 99.9% SLA.
  • Public preview of Kafka for HDInsight, an enterprise-grade, cost-effective, Open Source streaming ingestion service that’s easy to provision, manage and use in your real-time solutions for IoT, fraud detection, click-stream analysis, financial alerts and social analytics.

Read all about these latest developments at the original post here, or by clicking on the image below.


CIML Blog Team

Package Management is generally available: NuGet, npm, and more

Today, I’m proud to announce that Package Management is generally available for Team Services and TFS 2017! If you haven’t already, install it from the Visual Studio Marketplace.

Best-in-class support for NuGet 3

NuGet support in Package Management enables continuous delivery workflows by hosting your packages and making them available to your team, your builds, and your releases. With best-in-class support for the latest NuGet 3.x clients, Package Management is an easy addition to your .NET ecosystem. If you’re still hosting a private copy of NuGet.Server or putting your packages on a file share, Package Management can remove that burden and even help you migrate.
To get started with NuGet in Package Management, check out the docs.
NuGet packages in Package Management

npm

Package Management was never just about NuGet. Accordingly, the team has been hard at work over the last few months adding support for npm packages. If you’re a developer working with node.js, JavaScript, or any of its variants, you can now use Team Services to host private npm packages right alongside your NuGet packages.
npm packages in Package Management
npm is available to every user with a Package Management license. To enable it, simply install Package Management from the Marketplace, if you haven’t already, then check out the get started docs.
npm support will also be available in an update to TFS 2017. Keep an eye on the features timeline for the latest updates.

GA updates: pricing, regions, and more

If you’ve been using Package Management during the preview period, you’ll now need to purchase a license in the Marketplace to continue using it. Your account has automatically been converted to a 60-day trial to allow ample time to do so. Look for the notice bar in the Package Management hub or go directly to the Users hub in your account to buy licenses.
The pricing for Package Management is:
  • First 5 users: Free
  • Users 6 through 100: $4 each
  • Users 101 through 1000: $1.50 each
  • Users 1001 and above: $0.50 each

Although the first 5 users are free, licenses for these users must still be acquired through the Marketplace.

Package Management is now also available in the India South and Brazil South regions.

What’s next?

With the launch of Package Management in TFS 2017, the team is now fully focused on adding additional value to the extension. Over the next year, we’ll be investing in a few key areas:
  • Package lifecycle: we want Package Management to serve not just as a repository for bits, but also as a service that helps you manage the production and release of your components. Accordingly, we’ll continue to invest in features that more closely integrate packages with Team Build and with Release Management, including more investments in versioning and more metadata about how your packages were produced.
  • Dependency management: packages come from everywhere: NuGet.org, teams across your enterprise, and teams in your group. In a world where there’s always pressure to release faster and innovate more, it makes sense to re-use as much code as possible. To enable that re-use, we’ll invest in tooling that helps you understand where your dependencies are coming from, how they’re licensed, if they’re secure, and more.
  • Refreshed experience: when we launched Package Management last November, we shipped a simple UX that worked well for the few scenarios we supported. However, as we expand the service with these new investments, we’ll be transitioning to an expanded UX that more closely matches the rest of Team Services, provides canvases for partners to extend Package Management with their own data and functionality, and gives us room to grow.
  • Maven/Ivy: as the rest of the product builds ever-better support for the Java ecosystem, it follows that Package Management should serve as a repository for the packages Java developers use most. So, we’ll be building support for Maven packages into Package Management feeds.

Announcing Code Search on Team Foundation Server 2017

Code Search is the most downloaded Team Services extension in the Marketplace! And it is now available on Team Foundation Server 2017!

Code Search provides fast, flexible, and accurate search across your code in TFS. As your code base expands and is divided across multiple projects and repositories, finding what you need becomes increasingly difficult. To maximize cross-team collaboration and code sharing, Code Search can quickly and efficiently locate relevant information across all your projects in a collection.

Read more about the capabilities of Code Search here.

Understand the hardware requirements and software dependencies for Code Search on Team Foundation Server 2017 here.

Configuring your TFS 2017 server for Code Search

1. You can configure Code Search as part of your production upgrade via the TFS Server Configuration wizard:

2. Or you can complete your production upgrade first and subsequently configure Code Search through the dedicated Search Configuration Wizard:

3. To try out Code Search, you can use a pre-production TFS instance and carry out a pre-production upgrade. In this case, configure Code Search after the pre-production upgrade is complete. See step 2 above.

4. You can even configure Code Search on a separate server dedicated to Search. In fact, we recommend this approach if you have more than 250 users or if average CPU utilization on your TFS server is higher than 50%.

 

Got feedback?

How can we make Code Search better for you? Here is how you can get in touch with us

 

Thanks,
Search team

Announcing Public Preview for Work Item Search

Today, we are excited to announce the public preview of Work Item Search in Visual Studio Team Services. Work Item Search provides fast and flexible search across all your work items.

With Work Item Search you can quickly and easily find relevant work items by searching across all work item fields over all projects in an account. You can perform full text searches across all fields to efficiently locate relevant work items. Use in-line search filters, on any work item field, to quickly narrow down to a list of work items.

Enabling Work Item Search for your Team Services account

Work Item Search is available as a free extension on the Visual Studio Team Services Marketplace. Click the install button on the extension description page and follow the instructions displayed to enable the feature for your account.
Note that you need to be an account admin to install the feature. If you are not, the install experience will allow you to request that your account admin install the feature. Work Item Search can be added to any Team Services account for free. By installing this extension through the Visual Studio Marketplace, any user with access to work items can take advantage of Work Item Search.

You can start searching for work items using the work item search box in the top right corner. Once in the search results page, you can easily switch between Code and Work Item Search.

Search across one or more projects

Work Item Search enables you to search across all projects, so you can focus on the results that matter most to you. You can scope search and drill down into an area path of choice.

Full text search across all fields

You can easily search across all work item fields, including custom fields, which enables more natural searches. The snippet view indicates where matches were found.

Now you need not specify a target work item field to search against. Type the terms you recall and Work Item Search will match them against each work item field, including title, description, tags, repro steps, etc. Matching terms across all work item fields enables you to do more natural searches.

Quick Filters

Quick inline search filters let you refine work items in seconds. The dropdown list of suggestions helps complete your search faster. You can filter work items by specific criteria on any work item field. For example, a search such as “AssignedTo: Chris WorkItemType: Bug State: Active” finds all active bugs assigned to a user named Chris.

Rich integration with work item tracking

The Work Item Search interface integrates with familiar controls in the Work hub, giving you the ability to view, edit, comment, share and much more.

Got feedback?

How can we make Work Item Search better for you? Here is how you can get in touch with us

 
Thanks,
Search team


Azure ML Available in Japan East and US East 2

This post is authored by Ted Way, Senior Program Manager at Microsoft.

Azure Machine Learning is now generally available in Japan East and US East 2. These regions have the same pricing and SLA as other Azure regions where AML is already available: US South Central, Southeast Asia, West Europe, and Germany Central.

Japan East

Customers in Japan who are under regulatory or other constraints concerning where data storage and compute need to be located can now use Azure ML in Japan. Data can be uploaded to Azure ML Studio, a model can be trained, and a web service can be deployed, all in the Azure Japan East region.

To create a new workspace in Japan East, go to the Azure Portal. Click +New > Intelligence + analytics > Machine Learning Workspace. Choose “Japan East” as the Location.


Once the workspace has been created, you’ll receive an email with a link to open the workspace. You can also go to the Azure Machine Learning Studio. Sign in, choose the “Japan East” region in the workspace selector, and select the workspace.


To migrate experiments from other regions, you can use the Copy-AmlExperiment cmdlet in PowerShell or publish an unlisted experiment in the Gallery (in the documentation search for “have it only accessible to people with the link”). This will provide access to the experiment to only people you share the URL with. Click on the link to get to the experiment, and then click “Open in Studio.” Now you can copy this experiment from the Gallery into your Japan East workspace.


If you use Free or Guest Access workspaces, they will continue to be created and operated out of the US South Central region.

US East 2

Whether it’s a closer location to reduce latency or having another region in North America for high availability, you now have another option for running your web services with the Azure US East 2 region. To publish a web service to US East 2, open a workspace in Studio running in any region and create a predictive experiment. Click “Deploy Web Service” and select “Deploy Web Service [New] Preview.”


Once you are in the new web service management portal (in preview), select “Web Services” at the top. Click on the web service you want to copy, and then select “Copy” in the tab. Choose “East US 2” as the region, and then click the “Copy” button. This will create a copy of the web service in US East 2 that you can then use just like any other Azure ML web service.


If you have any questions we look forward to hearing from you in the Azure ML forum!

Ted Way
@tedwinway

WinAppDriver - Test any app with Appium's Selenium-like tests on Windows

WinAppDriver - Appium testing Windows Apps

I've found blog posts on my site where I'm using the Selenium Web Testing Framework as far back as 2007! Today there are Selenium drivers for every web browser, including Microsoft Edge. You can write Selenium tests in nearly any language these days, including Ruby, Python, Java, and C#.

I'm a big Selenium fan. I like using it with systems like BrowserStack to automate across many different browsers on many operating systems.

"Appium" is a great Selenium-like testing framework that implements the "WebDriver" protocol - formerly JsonWireProtocol.

WebDriver is a remote control interface that enables introspection and control of user agents. It provides a platform- and language-neutral wire protocol as a way for out-of-process programs to remotely instruct the behavior of web browsers.

From the Appium website, "Appium is 'cross-platform': it allows you to write tests against multiple platforms (iOS, Android, Windows), using the same API. This enables code reuse between iOS, Android, and Windows testsuites"

Appium is a webserver that exposes a REST API. The WinAppDriver enables Appium by using new APIs that were added in Windows 10 Anniversary Edition that allow you to test any Windows app. That means ANY Windows App. Win32, VB6, WPF, UWP, anything. Not only can you put any app in the Windows Store, you can do full and complete UI testing of those apps with a tool that is already familiar to Web Developers like myself.

Your preferred language, your preferred test runner, the Appium Server, and your app

You can write tests in C# and run them from Visual Studio's Test Runner. You can press any button and basically totally control your apps.

// Launch the calculator app
DesiredCapabilities appCapabilities = new DesiredCapabilities();
appCapabilities.SetCapability("app", "Microsoft.WindowsCalculator_8wekyb3d8bbwe!App");
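// WindowsApplicationDriverUrl is the WinAppDriver endpoint; by default the service listens on http://127.0.0.1:4723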
CalculatorSession = new RemoteWebDriver(new Uri(WindowsApplicationDriverUrl), appCapabilities);
Assert.IsNotNull(CalculatorSession);
CalculatorSession.Manage().Timeouts().ImplicitlyWait(TimeSpan.FromSeconds(2));
// Make sure we're in standard mode
CalculatorSession.FindElementByXPath("//Button[starts-with(@Name, \"Menu\")]").Click();
OriginalCalculatorMode = CalculatorSession.FindElementByXPath("//List[@AutomationId=\"FlyoutNav\"]//ListItem[@IsSelected=\"True\"]").Text;
CalculatorSession.FindElementByXPath("//ListItem[@Name=\"Standard Calculator\"]").Click();

It's surprisingly easy once you get started.

public void Addition()
{
CalculatorSession.FindElementByName("One").Click();
CalculatorSession.FindElementByName("Plus").Click();
CalculatorSession.FindElementByName("Seven").Click();
CalculatorSession.FindElementByName("Equals").Click();
Assert.AreEqual("Display is 8 ", CalculatorResult.Text);
}

You can automate any part of Windows, even the Start Menu or Cortana.

var searchBox = CortanaSession.FindElementByAccessibilityId("SearchTextBox");
Assert.IsNotNull(searchBox);
searchBox.SendKeys("What is eight times eleven");

var bingPane = CortanaSession.FindElementByName("Bing");
Assert.IsNotNull(bingPane);

var bingResult = bingPane.FindElementByName("88");
Assert.IsNotNull(bingResult);

If you use "AccessibiltyIds" and refer to native controls in a non-locale specific way you can even reuse test code across platforms. For example, you could write sign in code for Windows, iOS, your web app, and even a VB6 Win32 app. ;)

Testing a VB6 app with WinAppDriver

Appium and WinAppDriver are a nice alternative to "CodedUI Tests." CodedUI tests are great, but just for Windows apps. If you're a web developer or you're writing cross-platform or mobile apps, you should check it out.


Sponsor: Help your team write better, shareable SQL faster! Discover how your whole team can write better, shareable SQL faster with a free trial of SQL Prompt. Write, refactor and share SQL effortlessly, try it now.



© 2016 Scott Hanselman. All rights reserved.
     

Announcing .NET Core Tools MSBuild “alpha”

We are excited to announce the first “alpha” release of the new MSBuild-based .NET Core Tools. You can try out the new .NET Core Tools in Visual Studio 2017 RC, Visual Studio for Mac, Visual Studio Code, and at the command line. The new Tools release can be used with both the .NET Core 1.0 and .NET Core 1.1 runtimes.

When we started building .NET Core and ASP.NET Core, it was important to have a project system that worked across Windows, Mac and Linux and worked in editors other than Visual Studio. The new project.json project format was created to facilitate this. Feedback from customers was that they loved the new project.json model, but they wanted their projects to be able to work with existing .NET code they already had. In order to do this, we are making .NET Core .csproj/MSBuild based so it can interop with existing .NET projects, and we are taking the best features of project.json and moving them into .csproj/MSBuild.

There are now four experiences that you can take advantage of for .NET Core development, across Windows, macOS and Linux.

Yes! There is a new member of the Visual Studio family, dedicated to the Mac. Visual Studio for Mac supports Xamarin and .NET Core projects. Visual Studio for Mac is currently in preview. You can read more about how you can use .NET Core in Visual Studio for Mac.

You can download the new MSBuild-based .NET Core Tools preview and learn more about the new experiences in .NET Core Documentation.

Overview

If you’ve been following along, you’ll know that the new Preview 3 release includes support for the MSBuild build system and the csproj project format. We adopted MSBuild for .NET Core for the following reasons:

  • One .NET tools ecosystem— MSBuild is a key component of the .NET tools ecosystem. Tools, scripts and VS extensions that target MSBuild should now extend to working with .NET Core.
  • Project to project references– MSBuild enables project to project references between .NET projects. All other .NET projects use MSBuild, so switching to MSBuild enables you to reference Portable Class Libraries (PCL) from .NET Core projects and .NET Standard libraries from .NET Framework projects, for example.
  • Proven scalability– MSBuild has been proven to be capable of building large projects. As .NET Core adoption increases, it is important to have a build system we can all count on. Updates to MSBuild will improve the experience for all project types, not just .NET Core.

The transition from project.json to csproj is an important one, and one where we have received a lot of feedback. Let’s start with what’s not changing:

  • One project file– Your project file contains dependency and target framework information, all in one file. No source files are listed by default.
  • Targets and dependencies— .NET Core target frameworks and metapackage dependencies remain the same and are declared in a similar way in the new csproj format.
  • .NET Core CLI Tools– The dotnet tool continues to expose the same commands, such as dotnet build and dotnet run.
  • .NET Core Templates– You can continue to rely on dotnet new for templates (for example, dotnet new -t library).
  • Supports multiple .NET Core versions— The new tools can be used to target .NET Core 1.0 and 1.1. The tools themselves run on .NET Core 1.0 by default.

There are many of you that have already adopted .NET Core with the existing project.json project format and build system. Us, too! We built a migration tool that migrates project.json project files to csproj. We’ve been using it on our own projects with good success. The migration tool is integrated into Visual Studio and Visual Studio for Mac. It is also available at the command line, with dotnet migrate. We will continue to improve the migration tool based on feedback to ensure that it’s ready to run at scale by the final release.

Now that we’ve moved .NET Core to use MSBuild and the csproj project format, there is an opportunity to share improvements that we’re making with other projects types. In particular, we intend to standardize on package reference within the csproj format for other .NET project types.

Let’s look at the .NET Core support for each of the four supported experiences.

Visual Studio 2017 RC

Visual Studio 2017 RC includes support for the new .NET Core Tools, as a Preview workload. You will notice the following set of improvements over the experience in Visual Studio 2015.

  • Project to project references now work.
  • Project and NuGet references are declared similarly, both in csproj.
  • csproj project files can be manually edited while the project is open.

Installation

You can install Visual Studio 2017 from the Visual Studio site.

You can install the .NET Core Tools in Visual Studio 2017 RC by selecting the “.NET Core and Docker Tools (Preview)” workload, under the “Web and Cloud” workload as you can see below. The overall installation process for Visual Studio has changed! You can read more about that in the Visual Studio 2017 RC blog post.

.NET Core workload

Creating new Projects

The .NET Core project templates are available under the “.NET Core” project node in Visual Studio. You will see a familiar set of projects.

.NET Core templates

Project to project references

You can now reference .NET Standard projects from .NET Framework, Xamarin or UWP projects. You can see two app projects relying on a .NET Standard Library in the image below.

project to project references

Editing CSProj files

You can now edit csproj files while the project is open, with IntelliSense. It’s not an experience we expect most of you to use every day, but it is still a major improvement. It also does a good job of showing the similarity between NuGet and project references.

Editing csproj files

Dynamic Project system

The new csproj format adds all source files by default. You do not need to list each .cs file. You can see this in action by adding a .cs file to your project directory from outside Visual Studio. You should see the .cs file added to Solution Explorer within a second.

A more minimal project file has a lot of benefits, including readability. It also helps with source control by reducing a whole category of changes and the potential merge conflicts that have historically come with it.

Opening and upgrading project.json Projects

You will be prompted to upgrade project.json-based xproj projects to csproj when you open them in Visual Studio 2017. You can see that experience below. The migration is one-way. There is no supported way to go back to project.json other than via source control or backups.

.NET Core migration

Visual Studio for Mac

Visual Studio for Mac is a new member of the Visual Studio family, focused on cross-platform mobile and cloud development on the Mac. It includes support for .NET Core and Xamarin projects. In fact, Visual Studio for Mac is an evolution of Xamarin Studio.

Visual Studio for Mac is intended to provide a very similar .NET Core development experience to the one described above for Visual Studio 2017 RC. We’ll continue to improve both experiences together as we get closer to shipping .NET Core Tools, Visual Studio for Mac and Visual Studio 2017 next year.

Installation

You can install Visual Studio for Mac from the Visual Studio site. Support for .NET Core and ASP.NET Core projects is included.

Creating new Projects

The .NET Core project templates are available under the “.NET Core” project node in Visual Studio. You will see a familiar set of projects.

.NET Core templates

You can see a new ASP.NET Core project, below.

ASP.NET Core New Project

Other experiences

Visual Studio for Mac does not yet support xproj migration. That experience will be added before release.

Visual Studio for Mac has existing support for editing csproj files while the project is loaded. You can open the csproj file by right-clicking on the project file, selecting Tools and then Edit File.

Visual Studio Code

The Visual Studio Code C# extension has also been updated to support the new .NET Core Tools release. At present, the extension has been updated to support building and debugging your projects. The extension commands (in the command palette) have not yet been updated.

Installation

You can install VS Code from visualstudio.com. You can add .NET Core support by installing the C# extension. You can install it via the Extensions tab or wait to be prompted when you open a C# file.

Debugging a .NET Core Project

You can build and debug csproj .NET Core projects.

VS Code Debugging

.NET Core CLI Tools

The .NET Core CLI Tools have also been updated. They are now built on top of MSBuild (just like Visual Studio) and expect csproj project files. All of the logic that once processed project.json files has been removed. The CLI tools are now much simpler (from an implementation perspective), relying heavily on MSBuild, but no less useful or needed.

When we started the project to update the CLI tools, we had to consider the ongoing purpose of the CLI tools, particularly since MSBuild is itself a command-line tool with its own command-line syntax, ecosystem and history. We came to the conclusion that it was important to provide a set of simple and intuitive tools that made adopting .NET Core (and other .NET platforms) easy and provided a uniform interface for both MSBuild and non-MSBuild tools. This vision will become more valuable as we focus more on .NET Core tools extensibility in future releases.

Installing

You can install the new .NET Core Tools by installing the Preview 3 .NET Core SDK. The SDK comes with .NET Core 1.0. You can also use it with .NET Core 1.1, which you can install separately.

We recommend installing the zips rather than the MSI/PKG installers if you are doing project.json-based development outside of VS.

Side by side install

By installing the new SDK, you will update the default behavior of the dotnet command. It will use MSBuild and process csproj projects instead of project.json. Similarly, dotnet new will create a csproj project file.

In order to continue using the earlier project.json-based tools on a per-project basis, create a global.json file in your project directory and add the “sdk” property to it. The following example shows a global.json that constrains dotnet to using the project.json-based tools:
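
A minimal sketch of such a global.json follows; the SDK version shown here is illustrative, so substitute the project.json-based SDK version you actually have installed (dotnet --version reports it):

{
  "sdk": {
    "version": "1.0.0-preview2-003131"
  }
}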

Templates

You can use the dotnet new command for creating a new project. It continues to support multiple project types with the -t argument (for example, dotnet new -t lib). The complete list of supported templates follows:

  • console
  • web
  • lib
  • xunittest

We intend to extend the set of templates in the future and make it easier for the community to extend the set of templates. In fact, we’d like to enable acquisition of full samples via dotnet new.

Upgrading project.json projects

You can use the dotnet migrate command to migrate a project.json project to the csproj format. This command will also migrate any project-to-project references you have in your project.json file automatically. You can check the dotnet migrate command documentation for more information.

You can see an example below of what a default project file looks like after migration from project.json to csproj. We are continuing to look for opportunities to simplify and reduce the size of the csproj format.

Existing .NET csproj files, for other project types, include GUIDs and file references. Those are (intentionally) missing from .NET Core csproj project files.

Adding project references

Adding a project reference in csproj is done using a ProjectReference element within an ItemGroup element. You can see an example below.

<ItemGroup>
  <ProjectReference Include="..\ClassLibrary1\ClassLibrary1.csproj" />
</ItemGroup>

After this operation, you still need to call dotnet restore to generate “the assets file” (the replacement for the project.lock.json file).

Adding NuGet references

We made another improvement to the overall csproj experience by integrating NuGet package information into the csproj file. This is done through a new PackageReference element. You can see an example of it below.

<PackageReference Include="Newtonsoft.Json">
  <Version>9.0.1</Version>
</PackageReference>

Upgrading your project to use .NET Core 1.1

The dotnet new command produces projects that depend on .NET Core 1.0. You can update your project file to depend on .NET Core 1.1 instead, as you can see in the example below.

The project file has been updated in two places:

  • The target framework has been updated from netcoreapp1.0 to netcoreapp1.1
  • The Microsoft.NETCore.App version has been updated from ‘1.0.1’ to ‘1.1.0’
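In csproj terms, the two edits amount to something like the following sketch (only the affected elements are shown; the rest of the project file is assumed unchanged):

<PropertyGroup>
  <OutputType>Exe</OutputType>
  <TargetFramework>netcoreapp1.1</TargetFramework>
</PropertyGroup>

<ItemGroup>
  <PackageReference Include="Microsoft.NETCore.App">
    <Version>1.1.0</Version>
  </PackageReference>
</ItemGroup>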

.NET Core Tooling for Production Apps

We shipped .NET Core 1.0 and the project.json-based .NET Core tools back in June. Many of you are using that release every day on your desktop to build your app and in production on your server/cloud. We shipped .NET Core 1.1 today, and you can start using it the same way.

Today’s .NET Core Tools release is considered alpha and is not recommended for use in production. We recommend that you use the existing project.json-based .NET Core tools (the preview 2 version) for production use, including with Visual Studio 2015.

When we ship the new msbuild-based .NET Core Tools, you will be able to open your projects in Visual Studio 2017 and Visual Studio for Mac and go through a quick migration.

For now, we recommend that you try out today’s Tools alpha release and the .NET Core Tools Preview workload in Visual Studio 2017 RC with sample projects or projects that are under source control.

Closing

Please try the new .NET Core Tools release and give us feedback. You can try out the new csproj/MSBuild support in Visual Studio 2017 RC, Visual Studio for Mac, Visual Studio Code and at the command line. You’ve got great options for .NET Core development on Windows, macOS and Linux.

To recap, the biggest changes are:

  • .NET Core csproj support is now available as an alpha release.
  • .NET Core is integrated into Visual Studio 2017 RC and Visual Studio for Mac. It can be added to Visual Studio Code via the C# extension.
  • .NET Core tools are now based on the same technology as other .NET projects.

Thanks to everyone who has given us feedback about both project.json and csproj. Please keep it coming and please do try the new release.

Announcing general availability of Release Management


Today we are excited to announce the general availability of Release Management in Visual Studio Team Services. Release Management is available for Team Foundation Server 2017 as well.

Since we announced the Public Preview of Release Management, we have been adding new features continuously and the service has been used by thousands of customers whose valuable feedback has helped us improve the product.

Release Management is an essential element of DevOps that helps your team continuously deliver software to your customers at a faster pace and with high quality. Using Release Management, you can automate the deployment and testing of your application to different environments like dev, test, staging, and production. You can use it to deploy to any app platform and target on-premises servers or the cloud.

Continuous delivery Automation flow

Release Management works cross-platform and supports different application types, from Java to ASP.NET and Node.js. It has also been designed to integrate with other ALM tools and to let you customize the release process. For example, you can integrate Release Management with Jenkins and TeamCity builds, or use Node.js sources from GitHub as artifacts to deploy directly. You can also customize deployments by using the automation tasks that are available out of the box, or write a custom automation task/extension to meet your requirements.

Automated deployments

You can design and automate release pipelines across your environments to target any platform and any application by using Visual Studio Release Management. You can trigger a release as soon as the build is available, or even schedule it. An automated pipeline helps you get to market faster and respond with greater agility to customer feedback.

release-summary

Manual or automated gates for approval workflows

You can easily configure pre- and post-deployment approvals for your deployments: fully automated deployments to dev/test environments, and manual approvals for production environments. Automatic notifications ensure collaboration and release visibility among team members. You get full auditability of releases and approvals.

RM approvals

Raise the quality bar with every release

Testing is essential for any release. You can ship with confidence by configuring testing tasks for all of your release checkpoints: performance, A/B, functional, security, beta testing, and more. Using the “Manual Intervention” task, you can even track and perform manual testing within the automated flow.

Release Quality

Deploying to Azure is easy

Release Management makes it very easy to configure your release for deploying to Azure with built-in tasks and simple configuration. You can deploy to Azure Web Apps, Docker containers, Virtual Machines, and more. You can also deploy to a range of other targets like VMware, System Center Virtual Machine Manager, or servers managed through some other virtualization platform.

End to end traceability

Traceability is critical for releases: you can track the status of releases and deployments, including commits and work items, in each environment.

Refer to the documentation to learn more about Release Management.

Try out Release Management in Visual Studio Team Services.

For any questions, comments and feedback – please reach out to Gopinath.ch AT microsoft DOT com.

Thanks

Gopinath

Release Management Team

Twitter: @gopinach

 

Essential documentation and resources for Microsoft System Center 2016 Operations Manager


We’ve seen a tremendous amount of interest in System Center 2016 Operations Manager, so I thought I’d take a minute and share a few resources that should help you get started on the right track. This isn’t a comprehensive list, but these are some of the resources that I’ve found most helpful. Whether you’re just starting to look at what OpsMgr 2016 has to offer or you’ve already rolled it out, this is a good starting point for questions you may have.

Getting Started

If you’re still in the investigative phase and looking to get more information on what Operations Manager 2016 has to offer, you can see what’s new and even download a free trial via the links below.

Product Documentation

The core product documentation for Microsoft System Center 2016 Operations Manager can be found using the links below. Topics cover everything from planning your deployment to usage and troubleshooting.

Video Learning

If you were lucky enough to attend Microsoft Ignite back in September then you may have already seen our sessions on Operations Manager, but if not you can view them on-demand here:

Staying Current

It’s important to stay up to date on what’s happening in Operations Manager and there are a few options available to do this. The best resource is our System Center Operations Manager Team Blog. Here you will find tips and tricks from our OpsMgr support engineers, announcements from the product team, information about the latest product updates and downloads, as well as notifications for every new piece of content/documentation we release.

If you subscribe to our blog via RSS (here), you will automatically be notified any time something new is posted. Personally, I use Outlook when subscribing to RSS feeds since I’m in there all day checking email already, so if you need a newsreader and already use Outlook then that might be a good solution for you. If social media is more your thing, we make the same announcements on Twitter and Facebook via the links below.

Getting Help

One of the best resources for finding answers is our Operations Manager support forum. There you will find a community of experts, MVPs, and members of the support team and the product group who collectively hold a wealth of information about how to implement, use, and troubleshoot OpsMgr in virtually any kind of environment. If you need help with a problem or maybe just need some advice, the good folks there will get you pointed in the right direction. And of course, Microsoft product support is here 24/7 to help with any Operations Manager issue or question you may have. You can find more information on the support options available here.

J.C. Hornbeck, Solution Asset PM
Microsoft Enterprise Cloud Group
