Channel: TechNet Technology News

Parsing 4GB JSON with SQL Server


SQL Server 2016 and Azure SQL Database enable you to parse JSON text and transform it into tabular format. In this post, you will see that the JSON functions can handle very large JSON text – up to 4 GB.


First, I need a very large JSON document. I’m using the TPCH database, so I will export the content of the lineitem table to a file. The JSON can be exported using the bcp.exe program:

D:\Temp>bcp "select (select * from lineitem for json path)" queryout lineitems.json -d tpch -S .\SQLEXPRESS -T -w

Starting copy...


1 rows copied.

Network packet size (bytes): 4096

Clock Time (ms.) Total     : 103438 Average : (0.01 rows per sec.)

 

The query takes all rows from the lineitem table, formats them as JSON text, and returns them as a single cell. I’m using Unicode format (the -w flag). As a result, bcp.exe generates a 4.35 GB (4,677,494,824-byte) file containing one big JSON array.

Now I will load the content of this file using OPENROWSET(BULK) and pass the content to the OPENJSON function, which will parse it, take the values from the l_discount key, and find the average value:

select avg([l_discount])
from openrowset(bulk 'D:\Temp\lineitems.json', SINGLE_NCLOB) f
 cross apply openjson(f.BulkColumn) with([l_discount] [money])

On my SQL Server 2016 Express edition, this query finished in 1 min 53 sec.
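The same pattern extends to multiple columns: OPENJSON with a WITH clause can project several values from each array element in one scan. A minimal sketch along the same lines (the extra column names are taken from the TPCH lineitem schema and may need adjusting for your export):

```sql
-- Project several lineitem columns from the same JSON file in one scan.
-- Column names follow the TPCH lineitem schema (adjust as needed).
select top 10 [l_orderkey], [l_quantity], [l_extendedprice], [l_discount]
from openrowset(bulk 'D:\Temp\lineitems.json', SINGLE_NCLOB) f
 cross apply openjson(f.BulkColumn)
  with (
   [l_orderkey] [int],
   [l_quantity] [money],
   [l_extendedprice] [money],
   [l_discount] [money]
  )
```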

Conclusion

The functions that parse JSON in SQL Server 2016 do not have any constraint on the size of the JSON document. As this example shows, they can successfully parse a 4 GB JSON document – twice the maximum size of NVARCHAR(MAX) that can be stored in a table.

 

 


CIO of Accenture Talks About How to Pitch IT Ideas to Execs & Why CIO’s Have to Evolve


Season 4 of Lunch Break kicks off with the tallest and funniest CIO in the business, Andrew Wilson from the massive, global consulting firm Accenture. The biggest question I had prior to this episode was practical: could a 6'10" person fit in the very sleek, very low supercar that my producer rented for the day?

It turns out he can... sort of.

Once Andrew had folded himself into the passenger seat, we talked about the work he’s done with one of the biggest mobile workforces on earth and what it’s taught him about 1) how IT managers can develop and successfully present a digital agenda to company leadership, and 2) how and why the job of the CIO has had to evolve.

Andrew also has some great insights about early adoption (his team was the earliest adopter of Office 365 back before it even had a name), and he offers the first of several critiques about my driving.

To learn more about how top CIOs stay secure and productive, check out this new report.

Next week, check out the second part of my drive with Andrew, where we discuss time-traveling whales (not a typo) and cloud-based security, and play IT Would You Rather.

You can also subscribe to these videos here, or watch past episodes at aka.ms/LunchBreak.

Vcpkg recent enhancements


Vcpkg simplifies acquiring and building open source libraries on Windows. Since our first release we have continually improved the tool by fixing issues and adding features. The latest version of the tool is 0.0.71; here is a summary of the changes in this version:

  • Add support for Visual Studio 2017
    • VS2017 detection
    • Fixed bootstrap.ps1 and VS2017 support
    • If both Visual Studio 2015 and Visual Studio 2017 are installed, the Visual Studio 2017 tools will be preferred over those of Visual Studio 2015
  • Improve vcpkg remove:
    • Now shows all dependencies that need to be removed instead of just the immediate dependencies
    • Add --recurse option that removes all dependencies
  • Fix vcpkg_copy_pdbs()
    • under non-English locale
  • Notable changes for building the vcpkg tool:
    • Restructure vcpkg project hierarchy. Now only has 4 projects (down from 6). Most of the code now lives under vcpkglib.vcxproj
    • Enable multiprocessor compilation
    • Disable MinimalRebuild
    • Use precompiled headers
  • Bump required version & auto-downloaded version of cmake to 3.7.2 (was 3.5.x), which includes generators for Visual Studio 2017
  • Bump auto-downloaded version of nuget to 3.5.0 (was 3.4.3)
  • Bump auto-downloaded version of git to 2.11.0 (was 2.8.3)
  • Add 7z to vcpkg_find_acquire_program.cmake
  • Enhance vcpkg_build_cmake.cmake and vcpkg_install_cmake.cmake:
  • Introduce pre-install checks:
    • The install command now checks that files will not be overwritten when installing a package. A particular file can only be owned by a single package
  • Introduce ‘lib\manual-link’ directory. Libraries placing the lib files in that directory are not automatically added to the link line.
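For illustration, the improved remove behavior looks like this from the command line; a sketch assuming vcpkg 0.0.71 is on the PATH and a zlib port is installed (the package name is just an example):

```cmd
REM Removing a package now reports the full set of packages that need to
REM be removed, not just the immediate dependencies.
vcpkg remove zlib

REM The new --recurse option removes the package together with all
REM packages that depend on it.
vcpkg remove zlib --recurse
```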

See the Change Log file for more detailed description: https://github.com/Microsoft/vcpkg/blob/master/CHANGELOG.md

As usual, your feedback and suggestions really matter. To send feedback, create an issue on GitHub, or contact us at vcpkg@microsoft.com. We have also created a survey to collect your suggestions.

Deploying IaaS VM Guest Clusters in Microsoft Azure


Authors: Rob Hindman and Subhasish Bhattacharya, Program Managers, Windows Server

In this blog I am going to discuss deployment considerations and scenarios for IaaS VM Guest Clusters in Microsoft Azure.

IaaS VM Guest Clustering in Microsoft Azure

guestclustering

A guest cluster in Microsoft Azure is a Failover Cluster comprised of IaaS VMs. It allows hosted VM workloads to fail over across the guest cluster, providing a higher availability SLA for your applications than a single Azure VM can. It is especially useful in scenarios where the VM hosting a critical application needs to be patched or requires configuration changes.

    SQL Server Failover Cluster Instance (FCI) on Azure

    A sizable SQL Server FCI install base today is on expensive SAN storage on-premises. In the future, we see this install base taking the following paths:

    1. Conversion to virtual deployments leveraging SQL Azure (PaaS): Not all on-premises SQL FCI deployments are a good fit for migration to SQL Azure.
    2. Conversion to virtual deployments leveraging Guest Clustering of Azure IaaS VMs and low-cost software-defined storage technologies such as Storage Replica (SR) and Storage Spaces Direct (S2D): This is the focus of this blog.
    3. Maintaining a physical deployment on-premises while leveraging low cost SDS technologies such as SR and S2D
    4. Preserving the current deployment on-premises

    sqlserverfci

    Deployment guidance for the second path can be found here

    Creating a Guest Cluster using Azure Templates:

    Azure templates decrease the complexity and increase the speed of your deployment to production. In addition, they provide a repeatable mechanism to replicate your production deployments. The following are recommended templates to use for your IaaS VM guest cluster deployments to Azure.

    1. Deploying Scale-out File Server (SoFS) on Storage Spaces Direct

      Find template here


    2. Deploying SoFS on Storage Spaces Direct (with Managed Disk)

      Find template here


    3. Deploying SQL Server FCI on Storage Spaces Direct

      Find template here


    4. Deploying SQL Server AG on Storage Spaces Direct

      Find template here


    5. Deploying a Storage Spaces Direct Cluster-Cluster replication with Storage Replica and Managed Disks

      Find template here


    6. Deploying Server-Server replication with Storage Replica and Managed Disks

    Find template here


    Deployment Considerations:

    Cluster Witness:

    It is recommended to use a Cloud Witness for Azure Guest Clusters.

    cloudwitness
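Configuring a Cloud Witness takes a single PowerShell cmdlet once an Azure storage account exists; a minimal sketch, where the account name and access key are placeholders for your own storage account:

```powershell
# Point the cluster quorum at an Azure storage account acting as witness.
# <StorageAccountName> and <AccessKey> are placeholders; run on a cluster node.
Set-ClusterQuorum -CloudWitness -AccountName "<StorageAccountName>" -AccessKey "<AccessKey>"
```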

    Cluster Authentication:

    There are three options for Cluster Authentication for your guest cluster:

    1. Traditional Domain Controller

      This is the default and predominant cluster authentication model where one or two (for higher availability) IaaS VM Domain Controllers are deployed.

    domainjoined

    Azure template to create a new Azure VM with a new AD Forest can be found here


    Azure template to create a new AD Domain with 2 Domain Controllers can be found here


    2. Workgroup Cluster

    A workgroup cluster reduces the cost of the deployment because no DC VMs are required. It also reduces dependencies on Active Directory, lowering deployment complexity. It is an ideal fit for small deployments and test environments. Learn more here.

    workgroup

    3. Using Azure Active Directory

    Azure Active Directory provides a multi-tenant cloud based directory and identity management service which can be leveraged for cluster authentication. Learn more here

    aad

    Cluster Storage:

    There are three predominant options for cluster storage in Microsoft Azure:

    1. Storage Spaces Direct

      s2d

      Creates virtual shared storage across Azure IaaS VMs. Learn more here

    2. Application Replication

      apprep

    Replicates data in the application layer across Azure IaaS VMs. A typical scenario is SQL Server 2012 (or higher) Availability Groups (AG).

    3. Volume Replication

    Replicates data at the volume layer across Azure IaaS VMs. This is application agnostic and works with any solution. In Windows Server 2016, volume replication is provided in-box with Storage Replica. Third-party solutions for volume replication include SIOS DataKeeper.

    Cluster Networking:

    The recommended approach to configuring the IP address for the VCO (for instance, for the SQL Server FCI) is through an Azure load balancer. The load balancer holds the IP address, which is active on one cluster node at a time. The video below walks through the configuration of the VCO through a load balancer.
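As a hedged sketch of the usual pattern (the resource name, address, probe port, and network name below are all placeholders for your deployment), the cluster IP resource is pointed at the load balancer's frontend IP and given a probe port matching the load balancer's health probe:

```powershell
# Bind the VCO's cluster IP resource to the load balancer frontend IP and
# set the probe port the Azure health probe checks; run on a cluster node.
# All values below are placeholders for your deployment.
Get-ClusterResource "SQL IP Address 1 (sqlfci)" |
  Set-ClusterParameter -Multiple @{
    "Address"    = "10.0.0.7"            # load balancer frontend IP
    "ProbePort"  = 59999                 # must match the LB health probe port
    "SubnetMask" = "255.255.255.255"
    "Network"    = "Cluster Network 1"
    "EnableDhcp" = 0
  }
```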

     

    Storage Spaces Direct Requirements:

    • Number of IaaS VMs: A minimum of 2
    • Data Disks attached to VMs:
      • A minimum of 4 data disks is required
      • Data disks must be Premium Azure Storage
      • Minimum data disk size: 512 GB
    • VM Size: The following are the guidelines for minimum VM deployment sizes.
      • Small: DS2_V2
      • Medium: DS5_V2
      • Large: GS5
      • It is recommended to run the DiskSpd utility to evaluate the IOPS provided by a VM deployment size. This will help in planning an appropriate deployment for your production environment. The following video outlines how to run the DiskSpd tool for this evaluation.

    Using Storage Replica for a File Server

    The following are the workload characteristics for which Storage Replica is a better fit than Storage Spaces Direct for your guest cluster.

    • A large number of small random reads and writes
    • Many metadata operations
    • Information Worker features that don’t work with Cluster Shared Volumes

    srcomp

    Ransomware: a declining nuisance or an evolving menace?


    The volume of ransomware encounters is on a downward trend. Are we seeing the beginning of the end of this vicious threat?

    Unfortunately, a look at the attack vectors, the number of unique families released into the wild, and the improvements in malware code reveals otherwise.

    Ransomware was arguably the biggest security story of 2016. It certainly was one of the most prevalent threats. Our monitoring of the ransomware ecosystem in 2016 shows:

    • Every quarter, more than 500 million emails sent by spam campaigns carry ransomware downloaders that attempt to install ransomware on computers
    • These ransomware downloaders found their way into 13.4 million computers
    • On the other hand, 4.5 million computers were exposed to the Meadgive and Neutrino exploit kits, whose primary payload is ransomware
    • All in all, the ransomware payload of these spam and exploit kit campaigns was observed on 3.9 million computers in 2016

    The impact of ransomware attacks extended beyond consumers as businesses and the public sector fell victim to the threat. Mainstream news coverage of attacks, including stories of a California hospital paying ransom to restore important medical files and the interruption of the San Francisco transport system, injected ransomware deeper into mainstream consciousness. In September, a Europol report cited ransomware as the biggest cyber threat, overtaking data-stealing malware and online banking trojans.

    Data from Windows Defender Antivirus shows an interesting trend: after peaking in August, when 385,000 encounters were registered, ransomware encounters dropped almost 50% in September, and they have continued to decline.


    Figure 1. Monthly encounters of ransomware payload files, excluding downloaders and other components; some industry figures combine the two

    Does this trend signal that we are seeing the end of ransomware? A look at other areas of the ransomware ecosystem reveals otherwise.

    (Note: This blog post is the second in the 2016 threat landscape review series, following a review of exploit kits. The series looks at how major areas in the threat landscape are evolving. In future blogs, we will look at how support scam malware and macro malware transformed in the past year.)

    Ransomware blocked before arrival

    To understand if ransomware is on the decline, we need to look at other areas of the infection chain, starting with attack vectors. Windows Defender Antivirus data and our research on ransomware downloaders—the primary ransomware attack vector in 2016—tell a different story.

    Trojan downloaders distributed via email campaigns

    Downloader trojans like Nemucod and Donoff install ransomware on target computers. Often taking the form of documents or shortcut files, these downloaders are distributed via email campaigns that use various social engineering tactics.

    There wasn’t a decline in the volume of emails that carry these ransomware downloaders. In the last quarter of 2016, we saw 500 million such emails. The downloader trojans ended up in at least one million computers every month in the same period. Clearly, cybercriminals have not stopped trying to infect computers with ransomware. In fact, up until the very end of 2016, we witnessed Nemucod email campaigns delivering Locky and Donoff campaigns delivering Cerber.


    Figure 2. While ransomware encounters showed a significant decline at the end of 2016, encounters of ransomware downloaders were higher on average in the second half than in the first half

    Clearly, the decline in ransomware encounters was not for lack of trying by cybercriminals. We’re still seeing huge volumes of email carrying ransomware downloader trojans. However, ransomware infections were blocked at this entry point. This is an interesting development, because in 2016 we saw ransomware operators shift from exploit kits to email as their preferred infection vector.

    Exploit kits

    The Neutrino exploit kit was used to install Locky ransomware on computers. We saw Neutrino use increase in the middle of 2016, filling the hole left by Axpergle (also known as the Angler exploit kit) when it disappeared in June. Neutrino then started scaling down in September as its operators reportedly went private, opting to cater to select cybercriminal groups.

    Another popular exploit kit, Meadgive (aka, the RIG exploit kit), primarily delivered Cerber ransomware. In 2016, we saw the use of Meadgive steadily increase, as it became a top exploit kit used to deliver malware. As late as December 2016, we detected a Meadgive campaign distributing the latest version of Cerber, primarily in Asia and in Europe.

    Although the usage of exploit kits is falling, we continue to see ransomware using exploit kits to infect computers. This is because ransomware campaigns can use exploits to elevate privileges and run potentially harmful routines with fewer restrictions.

    Attackers continue to innovate

    Another indication that we have not seen the end of ransomware is the numerous innovations in malware code we observed in 2016. Cybercriminals are continually updating their wares. For instance, toward the end of 2016, we documented significant updates to the latest Cerber version.

    These improvements in malware code reach attacks via ransomware-as-a-service, a business model that makes the latest versions of ransomware available to cybercriminals in underground forums. This model makes it easier for cybercriminals with the resources and motivation to launch attacks.

    The following are some of the improvements in ransomware behavior we saw in 2016.

    Server targeting

    The discovery of Samas ransomware in early 2016 cemented ransomware as a major problem for commercial companies. With ransomware that specifically targeted servers, IT administrators not only needed to protect endpoints, they also had to ramp up their server protection.

    Samas campaigns exploited server vulnerabilities. The campaigns searched for vulnerable networks using pen-testing tools and deployed various components to encrypt files on servers.

    Worm capabilities

    Zcryptor exhibited a capability to spread, demonstrating that some ransomware didn’t need to rely on campaigns to move from endpoint to endpoint. It identifies network drives, logical drives, and removable drives that it can use to spread. Only a few days into 2017, Spora was discovered sporting similar behavior.

    Alternative payment and contact methods

    Traditionally, ransomware demanded that victims pay in Bitcoin through underground websites in the Tor network. In what appeared to be a response to lower rates of ransom payment, cybercriminals began to explore new ways of encouraging victims to pay.

    Dereilock, for instance, told victims to contact the attackers via Skype. Telecrypt, on the other hand, used Telegram Messenger, another messaging service, as a communication channel to attackers.

    Spora went the “freemium” route – victims can decrypt a couple of files for free, or a set of files for a lower ransom, presumably to show that the decryptor works.

    Evolving social engineering tactics

    In 2016, most ransomware started displaying a countdown timer. This can pressure victims into immediately paying ransom fearing they risk permanently losing access to their files.

    When Cerber came out in March, it created waves because in addition to the usual ransom note in text and HTML formats, a VBScript converted text into an audio message demanding ransom, prompting researchers to call Cerber the “ransomware that speaks”.

    Another ransomware, CornCrypt, offered to decrypt files for free if the victim infected two other users, hoping to get the snowball effect rolling. Ultimately, the more victims there are, the higher the likelihood of finding victims who are willing to pay.

    Young ransomware families are on top

    The threat of ransomware will likely continue, as seen in the number of new ransomware families being released into the wild. Of the more than 200 active ransomware families that we track, about 50% were first discovered in 2016.

    Most of these new ransomware families use encryption ransomware. This type of ransomware has eclipsed the older lockscreen ransomware, which simply locks the computer screen without encrypting files.

    In 2016, we saw multiple ransomware families that used new methods and techniques. However, the top five ransomware families accounted for 68% of all ransomware encounters in 2016.


    Figure 3. Cerber and Locky, both discovered in 2016, were the top ransomware of the year

    Interestingly, the top two ransomware families were discovered only in 2016.

    Cerber

    Cerber was discovered in March 2016 and got its name from the extension name it used on encrypted files. From March to December, it was observed in more than 600,000 computers.

    Cerber is offered in underground forums as ransomware-as-a-service, allowing attackers to launch ransomware campaigns without actually writing malware code. Most of its behaviors are controlled by a configuration file.

    The latest version of Cerber encrypts almost 500 file types. It is known to prioritize certain folders when searching for files to encrypt.

    Cerber primarily arrives via email campaigns that spread the Donoff downloader, a malware that downloads Cerber.


    Figure 4. Cerber encounters dropped dramatically starting in September, but encounters of Donoff, which downloads Cerber, started to increase in December

    Cerber is also known to use Meadgive or the RIG exploit kit to infect computers. Meadgive was the top exploit kit by end of 2016.

    Locky

    Locky registered the second most encounters in 2016, at more than 500,000. It was discovered in February and similarly got its name from the extension it used on encrypted files. It has since used other extension names, including .zepto, .odin, .thor, .aeris, and .osiris.

    Just like Cerber, multiple campaign operators subscribe to Locky as a ransomware-as-a-service. It contains code for its encryption routine, but it can also retrieve encryption keys and ransom notes from a remote server before encrypting files.

    Locky campaigns initially used the Neutrino exploit kit to infect computers, but later campaigns used email messages carrying Nemucod, which downloaded and executed Locky.


    Figure 5. Nemucod encounters in the second half of 2016 remained steady, even though Locky encounters dropped dramatically in the same period

    Ransomware as a global threat

    Ransomware proved to be a truly global threat in 2016, having been observed in more than 200 territories. In the US alone, ransomware was encountered on more than 460,000 computers, or 15% of global encounters. Italy and Russia follow with 252,000 and 192,000 ransomware encounters, respectively. Korea, Spain, Germany, Australia, and France all registered more than 100,000 encounters.


    Figure 6. Ransomware was observed in over 200 territories

    In the US, Cerber registered the biggest number of encounters. Cerber was so big in the US that 27% of all encounters in the world were recorded there. Locky, the other major ransomware discovered in 2016, was the second most widespread ransomware family in the US.

    Italy and Russia show a different picture with older versions of ransomware being more prevalent. In Italy, Critroni, a ransomware that has been around since 2014, was the most prevalent. When Critroni first came out, its ransom note was in both English and Russian. Newer versions have added more European languages, including Italian.

    Troldesh, discovered in early 2015, was top in Russia. After encrypting files, Troldesh modifies the desktop wallpaper to show a message in both Russian and English. It asks victims to email the attackers for payment instructions.


    Figure 7. Countries with the most ransomware encounters—US, Italy, Russia, Korea, and Spain—are affected by different ransomware families, possibly as a result of localized campaigns

    Conclusion: an evolving menace requires evolving solutions

    Even though there is a dip in overall ransomware encounters, a look at the attack vectors, the number of unique families released into the wild, and the improvements in malware code reveals that we have not seen the end of this multi-component threat.

    Microsoft has built and is constantly enhancing Windows 10 to arm you with protection components built directly into the operating system itself.

    Preventing ransomware infections

    Most ransomware infections begin with email messages that carry downloader trojans. This is the primary vector that cybercriminals use to install ransomware. Office 365 Advanced Threat Protection has machine learning capability that blocks dangerous email threats, such as the millions of emails carrying ransomware downloaders that spam campaigns send.

    Some ransomware arrive via exploit kits. Microsoft Edge can protect against ransomware by preventing exploit kits from running and executing ransomware. Using Microsoft SmartScreen, Microsoft Edge blocks access to malicious websites, such as those hosting exploit kits.

    Device Guard can lock down devices and provide kernel-level virtualization-based security, allowing only trusted applications to run, effectively preventing ransomware and other dangerous software from executing.

    Detecting ransomware

    Ransomware authors may be some of the most prolific malware creators, introducing new families and continuously updating existing ones. They can also get creative in exploiting attack vectors to install ransomware in your computer.

    Windows 10 helps to immediately detect ransomware attacks at the first sign. Windows Defender Antivirus helps detect ransomware, as well as the exploit kits and trojan downloaders that install them. It uses cloud-based protection, helping to protect you from the latest threats.

    Windows Defender Antivirus is built into Windows 10 and, when enabled, provides real-time protection against threats. Keep Windows Defender Antivirus and other software up-to-date to get the latest protection.

    Responding to ransomware attacks

    Windows Defender Advanced Threat Protection (Windows Defender ATP) alerts security operations teams about suspicious activities. These include alerts for PowerShell command execution, TOR website connection, launching of self-replicated copies, and deletion of volume shadow copies. These are behaviors exhibited by some ransomware families, such as Cerber, and could be observed in future ransomware.

    Windows Defender ATP can be evaluated free of charge.

    Even more protection in Windows 10 Creators Update

    On top of these existing protection features, more security capabilities will be provided with Windows 10 Creators Update. These include Windows Defender Antivirus and Office 365 integration to create a layered protection that can help to further shrink email as an attack surface.

    Windows Defender Antivirus will strengthen context-aware detections and machine-learning capabilities that detect behavioral anomalies, providing detection capabilities at many points in the infection chain. Better integration of threat intelligence further provides faster blocking against delivery campaigns.

    Windows Defender ATP will enable security professionals to isolate compromised machines from the corporate network, stopping network outbreaks. The update will also provide an option for security professionals to specify files for quarantine and prevent subsequent execution.

    The threat of ransomware may not be going away soon, but Windows 10 will continue to improve and provide enhanced protection against this vicious threat.

    Getting personal – Jan 5


    Note: The improvements discussed in this post will be rolling out throughout the next week.

    Happy new year! You may have noticed we skipped our last deployment due to the holidays, so this deployment comes with two sprints of goodies. One of the key themes across the team is to bring more social and personal experiences into the product. You’re going to see a steady stream of those new experiences throughout the year. We’re starting the year with the first collection of these.

    Try out new features we are working on

    Your feedback is super important to us, and getting this feedback while we are still developing a feature helps ensure we create something that you and our other customers will love. At the same time, we know new features can be disruptive. Therefore, starting with this release, select features we are working on will be made available for you to opt in and try early. Opt in when you are ready and opt out at anytime.

    To see the features available for you to try, go to Preview features in the profile menu.

    preview menu

    Switch the toggle to opt in or out of a feature.

    preview toggle

    Note that some features are only available to account administrators to turn on or off for all users in the account.

    Your Team Services is even more personalized

    With this release, it is super easy for you to access artifacts that are most important for you. The redesigned account page has a personalized experience that shows the Projects, Favorites, Work, and Pull Requests you care about. This makes it a great way to start your day. You can go to one place and quickly find everything you need to do and care about.

    Projects

    The Projects page is the one stop for you to access your recently visited projects and teams. You can also browse or look up all projects and teams within your account and then quickly navigate to the relevant hub for that project.

    project homepage

    My Favorites

    The Favorites page allows you to view all your favorite artifacts in one place. So the next time you visit a project, team, repo, branch, build definition, or query of your interest, simply favorite it so that you can quickly navigate to it from the favorites page.

    homepage favorites

    My Work

    Start your day at the My work items page to easily access all the work items assigned to you across all projects. It also lets you check and access the status of all the work items that you are following or those that you recently viewed.

    homepage mywork

    My Pull Requests

    If you work on a lot of repos across multiple projects, it’s a pain to get to the pull requests that you created or those that require your review. Not anymore! The My Pull Requests page shows all pull requests that require your attention in one place.

    homepage pull requests

    How do I get there?

    You can easily navigate to your personalized account page in Team Services by clicking the Visual Studio logo on the top left of the navigation bar. You can also hover over the logo and directly navigate to your recent projects or one of the account home page pivots. We’ve also promoted Dashboards to a top-level menu item.

    homepage navigation

    How to opt in

    Like most major changes to the user experience, we are phasing this change in gradually to minimize disruption. To try out the feature, hover over your avatar image, then click Preview features. Set the toggle for New Account Landing Page to On. If you want to revert to the current experience, click the avatar image, click Preview features, then toggle New Account Landing Page to Off. In a future sprint we will switch this on by default and then remove the ability to revert to the old experience.

    homepage opt in

    Your project gets an identity

    There’s now one place to get an overview of your project. The new project page makes it easy to view and edit the project description, view and add members, and check up on the latest activity. It’s even easier to get started with a new project and leverage all the built-in DevOps functionality of Team Services.

    Improved getting started experience

    The new project page guides you to get started quickly by adding code to your repository when you choose one of the options to clone, push, import, or simply initialize a repo. You can easily get started by adding members, setting up builds, or adding work from this page.

    homepage project

    Talk about your project

    Create an identity and describe the vision and objectives of your project. The new project home page pulls data from the various hubs to give visitors a bird’s-eye view of your project activity.

    project overview

    Attachments in PR discussions

    You can now add attachments to your pull request comments. Attachments can be added by drag-and-drop or by browsing. For images, attachments can be added by simply pasting from the clipboard. Adding an attachment automatically updates the comment to include a Markdown reference to the new attachment.

    PR attachments

    Support file exclusions in the required reviewer policy

    When specifying required reviewers for specific file paths, you can now exclude paths by using a “!” prefix to the path you want to exclude. For example, you can use this to exclude a docs folder from your normally required signoff.

    file exclusions

    Highlight the PRs that have updates

    It’s now easier than ever to see the updates to your pull requests. In the PR list view, PRs with changes since you've last seen them are shown with a new updates column that shows a roll-up of the changes.

    PR updated files

    When you view a PR that has changes, you’ll see a similar summary message in the overview, where new pushes and comment threads are highlighted in blue. Clicking the View code updates link will navigate to the Files view, where a diff of the new changes since you last viewed the pull request is shown. This feature makes it easy to follow up on a PR where the author made changes in response to feedback.

    PR summary

    Branch policy for PR merge strategy

    We’ve added a new branch policy that lets you define a strategy for merging pull requests for each branch. Previously, the decision to either merge or squash was chosen by each user at the time a PR was completed. If enabled, this policy will override the user’s preferences, enforcing the requirement set by the policy.

    branch policy

    Expose merge conflict information

    If there are any files with conflicts in a pull request, the details about those conflicts will now be visible in the overview. Each conflicting file will be listed along with a short summary of the type of conflict between the source and target branches.

    merge conflicts

    Team Room deprecation

    With so many good solutions available that integrate well with TFS and Team Services, such as Slack and Microsoft Teams, we have made a decision to deprecate our Team Room feature from both TFS and Team Services. If you are working in Team Services, you will see a new yellow banner appear that communicates our plan. Later this year, we plan to turn off the Team Room feature completely.

    There are several alternatives you can use. The Team room is used both for a notification hub as well as for chat. TFS and Team Services already integrate with many other collaboration products including Microsoft Teams, Slack, HipChat, Campfire and Flowdock. You can also use Zapier to create your own integrations, or get very granular control over the notifications that show up.

    More information is available in this blog post.

    New notification settings experience

    Notifications help you and your teams stay informed about activity in your Team Services projects. With this update, it’s now easier to manage what notifications you and your teams receive.

    Users now have their own account-level experience for managing notification settings (available via the profile menu).

    notification settings

    This view lets users manage personal subscriptions they have created. It also shows subscriptions created by team administrators for all projects in the account.

    notification profile

    Learn more about managing personal notification settings.

    New delivery options for team subscriptions

    Team administrators can manage subscriptions shared by all members of the team in the Notifications hub under team settings. Two new delivery options are now available when configuring a team subscription: send the email notification to a specific email address (like the team’s distribution list), or send the notification to only team members associated with the activity.

    notification delivery

    Learn more about managing team subscriptions.

    Out of the box notifications (preview)

    Prior to this feature, users would need to manually opt in to any notifications they wanted to receive. With out-of-the-box notifications (which currently must be enabled by an account administrator), users automatically receive notifications for events such as:

    • The user is assigned a work item
    • The user is added or removed as a reviewer to a pull request
    • The user has a pull request that is updated
    • The user has a build that completes

    These subscriptions appear in the new user notifications experience, and users can easily choose to opt out of any of them.

    notifications oob

    To enable this feature for the account, an account administrator needs to go to Preview features under the profile menu, select From this account from the drop-down, then toggle on the Out of the box notifications feature.

    notifications oob opt in

    Learn more about out of the box subscriptions.

    New hosted build image

    We have deployed a new hosted build image with the following updates:

    • .NET Core 1.1
    • Android SDK v25
    • Azure CLI 0.10.7
    • Azure PS 3.1.0
    • Azure SDK 2.9.6
    • Cmake 3.7.1
    • Git for Windows 2.10.2
    • Git LFS 1.5.2
    • Node 6.9.1
    • Service Fabric SDK 2.3.311
    • Service Fabric 5.3.311
    • Typescript 2.0.6 for Visual Studio 2015
    • Permissions changes to allow building of .NET 3.5 ASP.NET Web Forms projects

    Firefox support for Test & Feedback extension

    We are happy to announce the General Availability of the Test & Feedback extension for Firefox. You can download the Firefox add-on from our marketplace site.

    Note: Support for Edge browser is also in the works; stay tuned for more updates

    Favorites for Test Plans

    You can now favorite the Test Plans you work with most frequently. In the Test Plans picker, you will see tabs for All your Test Plans and Favorites. Click the star icon to add a Test Plan to your list of favorites. The favorited Test Plans are accessible in the Test Plans picker and from the Favorites tab in the new account home page. You can also filter Test Plans by searching on the title field.

    test plans

    test favorites

    Test Impact Analysis for managed automated tests

    Test Impact Analysis for managed automated tests is now available via a checkbox in the 2.* preview version of the VSTest task.

    test impact

    If enabled, only the relevant set of managed automated tests that need to be run to validate a given code change will run. Test Impact Analysis requires the latest version of Visual Studio, and is presently supported in CI for managed automated tests.

    SonarQube MSBuild tasks

    SonarQube MSBuild tasks are now available from an extension provided by SonarSource. For more details, please read SonarSource have announced their own SonarQube Team Services / TFS integration.

    Improved experience for Code Search results

    There have been improvements to the Code Search results pane:

    • The filename is more prominent and clickable
    • We've added contextual actions:
      • Browse file
      • Download
      • Copy path
      • Get link to file

    code search

    Release Management parallel execution

    Release Management now supports a parallel execution option for a phase. Select this option to fan out a phase by using either Multi-configuration or Multi-agent as a phase multiplier option.

    parallel execution

    • Multi-configuration: Select this option to run the phase for each multi-configuration value. For example, if you wanted to deploy to two different geos at the same time, using a variable ReleasePlatform defined on the Variables tab with values "east-US, west-US" would run the phase in parallel, one with a value of "east-US" and the other "west-US”.
    • Multi-agent: Select this option to run the phase with one or more tasks on multiple agents in parallel.

    Inline service endpoints

    You can now create an endpoint within a build or release definition without having to switch to the Services tab. To do so, click the Add link next to the endpoint field.

    endpoints

    Multiple release triggers with branch and tag filters

    Release management now supports setting up CD triggers on multiple Build artifact sources. When added, a new release is created automatically when a new artifact version is available for any of the specified artifact sources.

    You can also specify the source branch for the new build to trigger a release. Additionally, tag filters can be set to further filter the builds that should trigger a release.

    triggers

    Set defaults for artifact sources in RM

    Users can define the default artifact version to deploy in a release when linking an artifact source in a definition. When a release is created automatically, the default version for all the artifact sources will be deployed.

    default artifact

    Variable groups support in RM

    Variable groups are used to group your variables and their values that you want to make available across multiple release definitions. You can also manage security for variable groups and chose who can view, edit and consume the variables from the variable groups in your release definitions.

    Open Library tab in Build & Release hub and choose + Variable group in the toolbar. Find more information about variable groups under Release definitions in Microsoft Release Management in the Visual Studio documentation.

    default artifact

    As always, if you have ideas on things you’d like to see us prioritize, head over to UserVoice to add your idea or vote for an existing one.

    Thanks,

    Jamie Cool

    Delivery Plans and mobile work item form – Jan 25


    Note: The improvements discussed in this post will be rolling out throughout the next week.

    Happy new year! You may have noticed we skipped our last deployment due to the holidays, so this deployment comes with two sprints of goodies. One of the key themes across the team is to bring more social and personal experiences into the product. You’re going to see a steady stream of those new experiences throughout the year. We’re starting the year with the first collection of these.

    Try out new features we are working on

    Your feedback is super important to us, and getting this feedback while we are still developing a feature helps ensure we create something that you and our other customers will love. At the same time, we know new features can be disruptive. Therefore, starting with this release, select features we are working on will be made available for you to opt in and try early. Opt in when you are ready and opt out at anytime.

    To see the features available for you to try, go to Preview features in the profile menu.

    preview menu

    Switch the toggle to opt in or out of a feature.

    preview toggle

    Note that some features are only available to account administrators to turn on or off for all users in the account.

    Your Team Services is even more personalized

With this release, it is super easy to access the artifacts that matter most to you. The redesigned account page has a personalized experience that shows the Projects, Favorites, Work, and Pull Requests you care about. This makes it a great way to start your day: you can go to one place and quickly find everything you need to do and care about.

    Projects

    The Projects page is the one stop for you to access your recently visited projects and teams. You can also browse or look up all projects and teams within your account and then quickly navigate to the relevant hub for that project.

    project homepage

    My Favorites

The Favorites page allows you to view all your favorite artifacts in one place. The next time you visit a project, team, repo, branch, build definition, or query that interests you, simply favorite it so you can quickly navigate back to it from the Favorites page.

    homepage favorites

    My Work

Start your day at the My work items page to easily access all the work items assigned to you across all projects. It also lets you check the status of all the work items that you are following or recently viewed.

    homepage mywork

    My Pull Requests

    If you work on a lot of repos across multiple projects, it’s a pain to get to the pull requests that you created or those that require your review. Not anymore! The My Pull Requests page shows all pull requests that require your attention in one place.

    homepage pull requests

    How do I get there?

    You can easily navigate to your personalized account page in Team Services by clicking the Visual Studio logo on the top left of the navigation bar. You can also hover over the logo and directly navigate to your recent project or one of the account home page pivots. We’ve also promoted Dashboards up as a top-level menu item.

    homepage navigation

    How to opt in

As with most major changes to the user experience, we are phasing this change in gradually to minimize disruption. To try out the feature, hover over your avatar image, then click Preview features and set the toggle for New Account Landing Page to On. To revert to the current experience, follow the same steps and set the toggle to Off. In a future sprint we will turn this on by default and then remove the ability to revert to the old experience.

    homepage opt in

    Your project gets an identity

    There’s now one place to get an overview of your project. The new project page makes it easy to view and edit the project description, view and add members, and check up on the latest activity. It’s even easier to get started with a new project and leverage all the built-in DevOps functionality of Team Services.

    Improved getting started experience

    The new project page guides you to get started quickly by adding code to your repository when you choose one of the options to clone, push, import, or simply initialize a repo. You can easily get started by adding members, setting up builds, or adding work from this page.

    homepage project

    Talk about your project

    Create an identity and describe the vision and objectives of your project. The new project home page pulls data from the various hubs to give visitors a bird’s-eye view of your project activity.

    project overview

    Attachments in PR discussions

    You can now add attachments to your pull request comments. Attachments can be added by drag-and-drop or by browsing. For images, attachments can be added by simply pasting from the clipboard. Adding an attachment automatically updates the comment to include a Markdown reference to the new attachment.

    PR attachments

    Support file exclusions in the required reviewer policy

    When specifying required reviewers for specific file paths, you can now exclude paths by using a “!” prefix to the path you want to exclude. For example, you can use this to exclude a docs folder from your normally required signoff.

    file exclusions
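As an illustrative sketch of the exclusion semantics, consider the following (the patterns, file paths, and matching helper below are hypothetical, not the service's exact matching rules):

```python
# Hypothetical sketch of the "!" exclusion semantics for required-reviewer
# path filters; the patterns and matching logic here are illustrative only.
from fnmatch import fnmatch

filters = ["/src/*", "!/src/docs/*"]  # require signoff under /src, except /src/docs

def requires_signoff(path):
    # a path needs signoff if it matches an inclusion pattern
    # and does not match any "!"-prefixed exclusion pattern
    included = any(fnmatch(path, f) for f in filters if not f.startswith("!"))
    excluded = any(fnmatch(path, f[1:]) for f in filters if f.startswith("!"))
    return included and not excluded

print(requires_signoff("/src/app.cs"))     # True: matches /src/*
print(requires_signoff("/src/docs/a.md"))  # False: excluded by !/src/docs/*
```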

    Highlight the PRs that have updates

    It’s now easier than ever to see the updates to your pull requests. In the PR list view, PRs with changes since you've last seen them are shown with a new updates column that shows a roll-up of the changes.

    PR updated files

    When you view a PR that has changes, you’ll see a similar summary message in the overview, where new pushes and comment threads are highlighted in blue. Clicking the View code updates link will navigate to the Files view, where a diff of the new changes since you last viewed the pull request is shown. This feature makes it easy to follow up on a PR where the author made changes in response to feedback.

    PR summary

    Branch policy for PR merge strategy

    We’ve added a new branch policy that lets you define a strategy for merging pull requests for each branch. Previously, the decision to either merge or squash was chosen by each user at the time a PR was completed. If enabled, this policy will override the user’s preferences, enforcing the requirement set by the policy.

    branch policy

    Expose merge conflict information

    If there are any files with conflicts in a pull request, the details about those conflicts will now be visible in the overview. Each conflicting file will be listed along with a short summary of the type of conflict between the source and target branches.

    merge conflicts

    Team Room deprecation

With so many good solutions available that integrate well with TFS and Team Services, such as Slack and Microsoft Teams, we have decided to deprecate our Team Room feature in both TFS and Team Services. If you are working in Team Services, you will see a new yellow banner communicating our plan. Later this year, we plan to turn off the Team Room feature completely.

There are several alternatives you can use. The Team Room serves as both a notification hub and a chat tool. TFS and Team Services already integrate with many other collaboration products, including Microsoft Teams, Slack, HipChat, Campfire, and Flowdock. You can also use Zapier to create your own integrations, or get very granular control over the notifications that show up.

    More information is available in this blog post.

    New notification settings experience

    Notifications help you and your teams stay informed about activity in your Team Services projects. With this update, it’s now easier to manage what notifications you and your teams receive.

    Users now have their own account-level experience for managing notification settings (available via the profile menu).

    notification settings

    This view lets users manage personal subscriptions they have created. It also shows subscriptions created by team administrators for all projects in the account.

    notification profile

    Learn more about managing personal notification settings.

    New delivery options for team subscriptions

    Team administrators can manage subscriptions shared by all members of the team in the Notifications hub under team settings. Two new delivery options are now available when configuring a team subscription: send the email notification to a specific email address (like the team’s distribution list), or send the notification to only team members associated with the activity.

    notification delivery

    Learn more about managing team subscriptions.

    Out of the box notifications (preview)

    Prior to this feature, users would need to manually opt in to any notifications they wanted to receive. With out-of-the-box notifications (which currently must be enabled by an account administrator), users automatically receive notifications for events such as:

    • The user is assigned a work item
    • The user is added or removed as a reviewer to a pull request
    • The user has a pull request that is updated
    • The user has a build that completes

    These subscriptions appear in the new user notifications experience, and users can easily choose to opt out of any of them.

    notifications oob

    To enable this feature for the account, an account administrator needs to go to Preview features under the profile menu, select From this account from the drop-down, then toggle on the Out of the box notifications feature.

    notifications oob opt in

    Learn more about out of the box subscriptions.

    New hosted build image

    We have deployed a new hosted build image with the following updates:

    • .NET Core 1.1
    • Android SDK v25
    • Azure CLI 0.10.7
    • Azure PS 3.1.0
    • Azure SDK 2.9.6
    • Cmake 3.7.1
    • Git for Windows 2.10.2
    • Git LFS 1.5.2
    • Node 6.9.1
    • Service Fabric SDK 2.3.311
    • Service Fabric 5.3.311
    • Typescript 2.0.6 for Visual Studio 2015
    • Permissions changes to allow building of .NET 3.5 ASP.NET Web Forms projects

    Firefox support for Test & Feedback extension

    We are happy to announce the General Availability of the Test & Feedback extension for Firefox. You can download the Firefox add-on from our marketplace site.

Note: Support for the Edge browser is also in the works; stay tuned for more updates.

    Favorites for Test Plans

    You can now favorite the Test Plans you work with most frequently. In the Test Plans picker, you will see tabs for All your Test Plans and Favorites. Click the star icon to add a Test Plan to your list of favorites. The favorited Test Plans are accessible in the Test Plans picker and from the Favorites tab in the new account home page. You can also filter Test Plans by searching on the title field.

    test plans

    test favorites

    Test Impact Analysis for managed automated tests

    Test Impact Analysis for managed automated tests is now available via a checkbox in the 2.* preview version of the VSTest task.

    test impact

    If enabled, only the relevant set of managed automated tests that need to be run to validate a given code change will run. Test Impact Analysis requires the latest version of Visual Studio, and is presently supported in CI for managed automated tests.

    SonarQube MSBuild tasks

SonarQube MSBuild tasks are now available from an extension provided by SonarSource. For more details, please read SonarSource's announcement of their own SonarQube Team Services / TFS integration.

    Improved experience for Code Search results

    There have been improvements to the Code Search results pane:

    • The filename is more prominent and clickable
    • We've added contextual actions:
      • Browse file
      • Download
      • Copy path
      • Get link to file

    code search

    Release Management parallel execution

    Release Management now supports a parallel execution option for a phase. Select this option to fan out a phase by using either Multi-configuration or Multi-agent as a phase multiplier option.

    parallel execution

    • Multi-configuration: Select this option to run the phase for each multi-configuration value. For example, if you wanted to deploy to two different geos at the same time, using a variable ReleasePlatform defined on the Variables tab with values "east-US, west-US" would run the phase in parallel, one with a value of "east-US" and the other "west-US".
    • Multi-agent: Select this option to run the phase with one or more tasks on multiple agents in parallel.
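The multi-configuration fan-out described above can be sketched as follows (a simplified illustration, not the actual Release Management engine; the variable value and phase body come from the ReleasePlatform example):

```python
# Simplified sketch of multi-configuration fan-out: one parallel phase run
# per value of the multiplier variable (here, ReleasePlatform).
from concurrent.futures import ThreadPoolExecutor

release_platform = "east-US, west-US"  # variable defined on the Variables tab
values = [v.strip() for v in release_platform.split(",")]

def run_phase(platform):
    # each parallel run sees its own value of the multiplier variable
    return f"deployed with ReleasePlatform={platform}"

with ThreadPoolExecutor(max_workers=len(values)) as pool:
    results = list(pool.map(run_phase, values))
print(results)
```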

    Inline service endpoints

    You can now create an endpoint within a build or release definition without having to switch to the Services tab. To do so, click the Add link next to the endpoint field.

    endpoints

    Multiple release triggers with branch and tag filters

Release Management now supports setting up CD triggers on multiple Build artifact sources. When added, a new release is created automatically when a new artifact version is available for any of the specified artifact sources.

    You can also specify the source branch for the new build to trigger a release. Additionally, tag filters can be set to further filter the builds that should trigger a release.

    triggers

    Set defaults for artifact sources in RM

    Users can define the default artifact version to deploy in a release when linking an artifact source in a definition. When a release is created automatically, the default version for all the artifact sources will be deployed.

    default artifact

    Variable groups support in RM

Variable groups are used to group variables and their values that you want to make available across multiple release definitions. You can also manage security for variable groups and choose who can view, edit, and consume the variables from the variable groups in your release definitions.

Open the Library tab in the Build & Release hub and choose + Variable group in the toolbar. You can find more information about variable groups under Release definitions in Microsoft Release Management in the Visual Studio documentation.

    default artifact

    As always, if you have ideas on things you’d like to see us prioritize, head over to UserVoice to add your idea or vote for an existing one.

    Thanks,

    Jamie Cool

    PR usability improvements & richer Github build integration – Feb 15


    Note: The improvements discussed in this post are features that will be rolling out over the next three weeks.

    We have a few feature improvements this sprint. Let’s get right into it...

    Improved support for team PR notifications

    Working with pull requests that are assigned to teams is getting a lot easier. When a PR is created or updated, email alerts will now be sent to all members of all teams that are assigned to the PR.

    This feature is in preview and requires an account admin to enable it from the Preview features panel (available under the profile menu).

    preview panel

After selecting for this account from the drop-down, switch on the Team expansion for notifications feature.

    team notification preview

    In a future release, we’ll be adding support for PRs assigned to Azure Active Directory (AAD) groups and teams containing AAD groups.

    Improved CTAs for PR author and reviewers

    For teams using branch policies, it can sometimes be hard to know exactly what action is required when you view a pull request. If the main call to action is the Complete button, does that mean it’s ready to complete? Using information about the person viewing the page and the state of configured branch policies, the PR view will now present the call to action that makes the most sense for that user.

    When policies are configured, but aren’t yet passing, the Complete button will now encourage the use of the Auto-complete feature. It’s not likely that you’ll be able to complete the PR successfully if policies are blocking, so we offer an option that will complete the PR when those policies eventually pass.

    cta in pr

    For reviewers, it’s more likely that you’ll want to approve a PR than complete it, so reviewers will see the Approve button highlighted as the main CTA if you haven’t approved yet.

    cta approve

    Once approved, reviewers will see the Complete (or Auto-complete) button highlighted as the CTA for those cases where a reviewer is also the person completing the PR.

    Actionable comments

    In a PR with more than a few comments, it can be hard to keep track of all of the conversations. To help users better manage comments, we’ve simplified the process of resolving items that have been addressed with a number of enhancements:

    • In the header for every PR, you’ll now see a count of the comments that have been resolved.

    pr header

    • When a comment has been addressed, you can resolve it with a single click.

    resolve button

    • If you have comments to add while you’re resolving, you can reply and resolve in a single gesture.

    reply and resolve

    • As comments are resolved, you’ll see the count go up until everything has been addressed.

    pr header

    • The filter in the Overview has been improved to enable filtering by various comment states and to show the count of comments for each filter option.

    pr filter

    Updates view shows rebase and force push

    In the Pull Request details view, the Updates tab has been improved to show when a force push has occurred and if the base commit has changed. These two features are really useful if you rebase changes in your topic branches before completing your PRs. Reviewers will now have enough info to know exactly what’s happened.

    updates views

    Improved commit filtering

    You can now filter the commit history results by advanced filtering options. You can filter commits by:

    • full history
    • full history with simplified merges
    • first parent
    • simple history (the default filter setting)

    filtering
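These filter options roughly mirror git's own history-simplification flags; the mapping below is an assumption based on the option names, demonstrated with plain git in a throwaway repository:

```python
# Assumed mapping of the commit filter options to plain git flags,
# demonstrated in a throwaway repository with a linear two-commit history.
import os, subprocess, tempfile

repo = tempfile.mkdtemp()
env = dict(os.environ, GIT_AUTHOR_NAME="demo", GIT_AUTHOR_EMAIL="demo@example.com",
           GIT_COMMITTER_NAME="demo", GIT_COMMITTER_EMAIL="demo@example.com")

def git(*args):
    return subprocess.run(["git", "-C", repo, *args], env=env, text=True,
                          capture_output=True, check=True).stdout

git("init", "-q")
git("commit", "-q", "--allow-empty", "-m", "first")
git("commit", "-q", "--allow-empty", "-m", "second")

simple       = git("log", "--oneline")                                         # simple history (default)
full         = git("log", "--oneline", "--full-history")                       # full history
simplified   = git("log", "--oneline", "--full-history", "--simplify-merges")  # with simplified merges
first_parent = git("log", "--oneline", "--first-parent")                       # first parent
print(len(first_parent.splitlines()))
```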

    GitHub pull request builds

    For a while we’ve provided CI builds from your GitHub repo. Now we’re adding a new trigger so you can build your GitHub pull requests automatically. After the build is done, we report back with a comment in your GitHub pull request.

    For security, we only build pull requests when both the source and target are within the same repo. We don’t build pull requests from a forked repo.

    github builds

    Maintenance for working directories

    You can now configure agent pools to periodically clean up stale working directories and repositories. This should reduce the potential for the agents to run out of disk space. The maintenance is done per agent, not per machine; so if you have multiple agents on a single machine, you may still run into disk space issues.

    agent maintenance

    Agent selection improvement

    The agent selection now takes machine activity into account when allocating an agent for a build or release. This will cause a build or release to be sent to an agent on an idle machine before selecting agents on a machine with other agents that are currently busy.

    Run tests using Agent Phases

    Using the Visual Studio Test task, you can now run automated tests using agent phases.

    We now have a unified automation agent across build, release and test. This brings in the following benefits:

    1. You can leverage an agent pool for your testing needs.
    2. Run tests in different modes using the same Visual Studio Test task, based on your needs: a single-agent run, a multi-agent distributed test run, or a multi-configuration run to run tests on, say, different browsers.

    agent phases

    For more information, refer to this post.
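A minimal sketch of the distributed mode (the round-robin slicing below is illustrative only; the actual VSTest task's test-distribution algorithm may differ):

```python
# Illustrative sketch: how a multi-agent distributed run might slice a test
# list across agents (round-robin here; the real task's algorithm may differ).
def slice_tests(tests, agent_count):
    # agent i gets every agent_count-th test starting at offset i
    return [tests[i::agent_count] for i in range(agent_count)]

slices = slice_tests(["t1", "t2", "t3", "t4", "t5"], 2)
print(slices)  # [['t1', 't3', 't5'], ['t2', 't4']]
```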

    Multiple versions of Extension tasks

    Extension authors can now create extensions with multiple versions of a given task, which enables them to ship patches for each major version they have in production.

    See Reference for creating custom build tasks within extensions.

    Extension management permissions and new email notifications

Any user or group can be given permission to manage extensions for the account. Previously, only account administrators could review extension requests or install, disable, or uninstall extensions. To grant this permission, an administrator can navigate to the Extensions admin hub by opening the Marketplace menu, selecting Manage extensions, and then clicking the Security button.

    extension permissions

    Also new this sprint, a user who requests an extension is now notified via email when that request is approved or rejected.

    Updated Package Management experience

    We’ve updated the Package Management user experience to make it faster, address common user-reported issues, and make room for upcoming package lifecycle features. Learn more about the update here, or turn it on using the toggle in the Packages hub.

    package management

    Support for AAD conditional access

    Team Services can now be explicitly selected as the target for Azure Active Directory (AAD) conditional access policy. This lets enterprises control where and how their users can access VSTS. Visit the Microsoft Azure documentation site to learn more about conditional access policy.

    aad conditional access

    Pipelines queue

    With the new Pipelines queue feature, we have provided a way to find the current state of a build/release in a pipeline.

    resource limits

    On launching the Pipelines queue, you can see the following information:

    1. Builds and releases waiting for a pipeline to execute and their position in the waiting queue.
    2. Builds and releases currently running using available pipelines.

    release in progress

While your build/release is waiting for a pipeline, you can also launch this view directly from the build/release logs page to find its current position in the pipeline queue and other details.

    We think these features will help improve your workflows while addressing feedback, but we would love to hear what you think. Please don’t hesitate to send a smile or frown through the web portal, or send other comments through the Team Services Developer Community. As always, if you have ideas on things you’d like to see us prioritize, head over to UserVoice to add your idea or vote for an existing one.

    Thanks,

    Jamie Cool


    Monitoring Nutanix with OMS – Public Preview


    Summary: The Nutanix Monitoring Solution from Comtrade Software now integrates with Operations Management Suite.

    Today we’re announcing the public preview of the Nutanix Monitoring Solution from Comtrade Software for Operations Management Suite (OMS). It enables monitoring, event analytics, and log analytics for on-premises Nutanix Enterprise Clouds. It extends OMS by:

    • Providing historic Nutanix performance metrics like cluster/host/storage/virtual machine (VM) latency, input/output operations per second (IOPS), and resource utilization
    • Identifying situations when more solid-state drive (SSD) storage resources must be added to maintain low I/O latency
    • Enabling instant identification of Nutanix hosts that run many VMs and identification of hosts that are much less loaded
    • Acting as a central point for Nutanix log and event collection and analytics to enable correlation of Nutanix log files across controller virtual machines (CVMs)

    Comtrade Software’s overall Nutanix OMS Solution consists of five solutions (Nutanix Clusters, Nutanix Hardware, Nutanix Storage, Nutanix Virtual Machines, and Nutanix Log and Event Analytics). Each solution provides monitoring and analytics capabilities for an individual Nutanix area. Deploy all of them to get complete Nutanix monitoring and analytics coverage.

    The Nutanix Clusters Solution is an entry point to Nutanix monitoring and gives you a quick, summarized overview of the health state and performance of Nutanix Clusters:

    The Nutanix Clusters Solution

     

    The Nutanix Hardware Solution provides a detailed health state and performance of Nutanix hosts that are a part of Nutanix Clusters:

    The Nutanix Hardware Solution

     

    The Nutanix Storage Solution gives you details about utilization and performance of Nutanix storage:

    The Nutanix Storage Solution

     

    The Nutanix Virtual Machines Solution provides the state and performance information of VMs running on Nutanix clusters:

    The Nutanix Virtual Machines Solution

     

    The Nutanix Log and Event Analytics Solution provides powerful correlation of log entries from different services on different hosts and is a central point for Nutanix log and event collection:

    The Nutanix Log and Event Analytics Solution

    These preview solutions require the Comtrade Data Collector, which you can download by joining the beta program on the Nutanix OMS Solution – Beta Registration webpage. For configuration details, please look at the Comtrade OMS Solution for Nutanix User Guide. The solutions will get periodically updated with new functionality. To see the latest details and updates about the solution, check out the Comtrade blog.

    Register today for SMB Live


    SMB Live Microsoft Roadshow

    This year’s SMB Live events, coming to 22 cities across the United States, include a day of new in-depth and hands-on marketing content, and a day of 200 & 300-level technical training to modernize your organization’s technical skills and marketing capabilities. You’ll leave SMB Live with the tools you need for cloud business growth and to be prepared to help modernize your SMB customers to Azure Hybrid IT, Enterprise Mobility + Security, Microsoft Dynamics 365, and more!

    Space at these SMB Live events is limited. Register today to secure your spot at the event in the city nearest you.

    Register Today!

    Day 1

    SMB Live Microsoft SMB Live

    Are you taking full advantage of the hybrid cloud opportunity with your customers? Is Azure coming up more and more in your customer conversations? Are your customers looking for a secure and efficient cloud solution? Join our experts at this free, one-day SMB Live session and see how you too can be that market leader.

    What you will take away from this training:

    • Simple ways you can create a differentiated Cloud offering, including developing repeatable solutions to grow your profitability by learning how successful partners are:
      • building a Cloud Infrastructure or Azure Hybrid IT practice to protect customers, while increasing recurring revenues. 
      • leading with security solutions leveraging Enterprise Mobility + Security (EMS)
    • How to align your sales process to the new Cloud customer buying behaviors and proven ways to maximize your revenue from your existing customer base. 
    • See how new Microsoft business productivity tools can help you drive new and recurring profits. Hear what’s new with Windows 10 Enterprise Subscription E3 for CSP, Microsoft Dynamics 365 for Financials, SQL & Windows Server 2016 and Office 365 E5 (Skype Telephony).

    What Organizations should attend:

    Experienced Cloud partners who are interested in developing a new or current cloud practice beyond Office 365 with Microsoft Azure Hybrid IT solutions.

    Who from the organization should attend Day 1?

    Business leads, sales, marketing, and/or technical business decision-makers.

    Register Today! 

    Day 2

    SMB Live Azure Spec, Price, and Bid

    Learn how to differentiate yourself from your competitors using Azure Hybrid cloud solutions and scalable workloads. This one-day technical, hands-on training will provide you with the tools and skills necessary to develop and manage your cloud practice, easily spec and estimate monthly Azure costs, and learn how powerful Azure migration offerings from other partners can help move your customers to the cloud.

    What you will take away from this training:

    • Technical Readiness – How to develop and deploy core Azure workloads with hands-on training configuring Business Continuity solutions and Business Operations
    • Accurate Azure Price Forecasting – Learn how to present an Azure solution with the use of detailed pricing structures and scenarios.
    • Understanding the Azure Solution Profitability – Review and understand the profitability potential with a view to near and long term customer relationships.
    • New and Existing Customer Assessment and Migration Resources - Resources to help you analyze your current customer base and begin migrations immediately to Azure.

    What Organizations should attend:

    Experienced Cloud partners who are interested in developing a new or current cloud practice beyond Office 365 with Microsoft Azure Hybrid IT solutions

    Who from the organization should attend day 2?

    Business / Technical Decision Makers, Pre / Post sales Technical Roles. 

    Register Today!

    Gartner positions Microsoft as a leader in BI and Analytics Platforms for ten consecutive years

    Gartner has recognized our vision and execution for the tenth consecutive year, positioning Microsoft as a Leader in the Magic Quadrant for Business Intelligence and Analytics Platforms. Also, for the second year in a row, Microsoft is placed furthest in vision within the Leaders quadrant.

    Microsoft Enterprise Services Tips to using SQL Data Warehouse effectively


    Azure SQL Data Warehouse is a SQL-based fully managed, petabyte-scale cloud solution for data warehousing. With tight integration between other Azure products, SQL Data Warehouse represents the backbone of any major cloud data warehousing solution in Azure. With decoupled compute and storage, elastic scale-out functionality, and the ability to pause and resume to meet demand, SQL Data Warehouse gives customers the flexibility, cost-savings, and performance they need, right when they need it.

    Microsoft Enterprise Services (ES) is one of the largest consulting and support businesses in the world. Microsoft ES operates in more than 100 subsidiaries around the globe, in spaces such as cloud productivity, mobility solutions, adoption services, and more. With the speed and scale at which clients’ operations transform and grow, it is paramount that ES stays ahead of the curve to meet future demand.

    The right data at the right time allows ES to make the best decisions about where to fund resources and how to best serve their customers. Traditional data warehouse reporting stacks took far too long to deliver reports and were far too inflexible to change models. Adding to the costs of complexity, maintenance, and scaling to match growing data, their traditional on-premises data warehouse was producing less value than the cost of upkeep and development, distracting from the core business value of delivering insights.

    The move to a modern data warehouse solution was becoming readily apparent. ES analytics split their workload into data processing and distribution. Core requirements for ES included easy scalability, high IO, multi-terabyte storage, row-level data security, and support for more than 200 concurrent users. Working through the host of Azure service offerings, ES landed on a solution with Azure SQL Data Warehouse handling the data processing and ad-hoc analysis workload and IaaS SQL Server 2016 Always On instances as the data distribution layer.

    Implementation

    At a high level, the implementation for ES takes audit and historical data from a variety of on-premises sources, which first lands in Azure Blob storage. PolyBase is then used to quickly load the data in parallel into Azure SQL Data Warehouse, where it is processed and transformed into dimension and fact tables. Afterwards, these dimension and fact tables are moved into Analysis Services and SQL Server IaaS instances to support fast, highly concurrent access by a variety of business users. Across this solution, Azure Data Factory acts as the orchestrating ELT framework, allowing a single interface to control the data flow for the majority of the pipeline.
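    As a rough sketch of this load path (all names, including the storage account, schemas, and columns, are hypothetical, and creating the database-scoped credential is omitted), the PolyBase portion might look like:

    ```sql
    -- Hypothetical sketch: expose blob-staged files to PolyBase, then load via CTAS.
    -- Assumes a database-scoped credential named AzureStorageCredential already exists.
    CREATE EXTERNAL DATA SOURCE AzureBlobStage
    WITH (
        TYPE = HADOOP,
        LOCATION = 'wasbs://stage@mystorageaccount.blob.core.windows.net',
        CREDENTIAL = AzureStorageCredential
    );

    CREATE EXTERNAL FILE FORMAT PipeDelimitedText
    WITH (
        FORMAT_TYPE = DELIMITEDTEXT,
        FORMAT_OPTIONS (FIELD_TERMINATOR = '|')
    );

    CREATE EXTERNAL TABLE ext.AuditEvents
    (
        EventId   bigint,
        EventDate datetime2,
        Payload   nvarchar(4000)
    )
    WITH (
        LOCATION = '/audit/',
        DATA_SOURCE = AzureBlobStage,
        FILE_FORMAT = PipeDelimitedText
    );

    -- CTAS reads all files under /audit/ in parallel across the compute nodes.
    CREATE TABLE stg.AuditEvents
    WITH (DISTRIBUTION = HASH(EventId), CLUSTERED COLUMNSTORE INDEX)
    AS SELECT * FROM ext.AuditEvents;
    ```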

    esanalytics

    8 Tips for SQL Data Warehouse

    Putting together a solution like this, while performant, is not a trivial task. Listed below is some guidance straight from the ES team on designing solutions with Azure SQL Data Warehouse:

    1. Stage the data in Azure SQL DW:

    One of the guiding principles of our warehouse has been to stage the data in its native form, i.e., the way it is available in the source. There are various reasons why we persist a copy of the source: performance, data quality, data persistence for validation, and so on. Because we stage the data, we are able to distribute it per our needs and ensure we have minimal data skew. Rarely, if at all, have we used the round-robin mechanism to store data.
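    A minimal sketch of such a staging table (table and column names are hypothetical) hash-distributes the data on the key that later joins will use, rather than relying on round-robin:

    ```sql
    -- Hypothetical staging table: persisted in its native source shape,
    -- but hash-distributed on the join key to minimize skew and data movement.
    CREATE TABLE stg.SalesOrder
    (
        SalesOrderKey int   NOT NULL,
        CustomerKey   int   NOT NULL,
        OrderDate     date  NOT NULL,
        Amount        money NOT NULL
    )
    WITH (DISTRIBUTION = HASH(SalesOrderKey), CLUSTERED COLUMNSTORE INDEX);
    ```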

    2. Arriving at a common distribution key:

    The data staged into the Azure SQL Data Warehouse instance was ingested from a 3NF data source. This allowed us to slightly change the schema from the source and include the base table’s foreign key in all tables, which are distributed on the same key. During our fact load we join the tables on this set of keys, minimizing DMS operations; in some cases, no DMS operations occur at all. This gives us an edge in terms of performance. As an example, the data in the source systems has one-to-many relationships between tables. However, in our SQL DW we have inserted a common distribution key across all tables, based on business logic, and this key gets loaded when the ELT runs.

    However, we would recommend checking the data skew before going ahead with this approach, as the distribution key must be chosen entirely based on the skew you observe. When data skew is high or the joins are not compatible, we create an interim table that is distributed on the same key as the other join table. We use CTAS to accomplish this, which incurs one DMS operation to redistribute the keys but improves performance when there are complex joins.
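    The interim-table pattern can be sketched with CTAS as follows (table and column names are hypothetical):

    ```sql
    -- Hypothetical: re-distribute one side of a skewed/incompatible join onto
    -- the other side's distribution key. The CTAS incurs one DMS operation ...
    CREATE TABLE #OrdersByCustomer
    WITH (DISTRIBUTION = HASH(CustomerKey))
    AS SELECT OrderKey, CustomerKey, Amount FROM dbo.Orders;

    -- ... so that the complex join itself is distribution-local (no DMS).
    SELECT c.CustomerName, SUM(o.Amount) AS Total
    FROM #OrdersByCustomer AS o
    JOIN dbo.Customer      AS c ON c.CustomerKey = o.CustomerKey
    GROUP BY c.CustomerName;
    ```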

    3. Vertical Partitioning of Wide Tables:

    We had a row size limitation of 32KB in Azure SQL DW. Since we had several wide tables with 150+ columns, many of them varchar(4000), we came up with an approach to vertically partition each table on the same distribution key. This helped us overcome the 32KB challenge while still providing the required performance when joining the two halves, because the distribution key was the same.

    Note: SQL Data Warehouse now supports 1MB wide rows
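    The vertical split can be sketched as two tables sharing one distribution key (all names are hypothetical), so reassembling a row is a distribution-local join:

    ```sql
    -- Hypothetical split of one wide table into a narrow "core" half and a
    -- wide "text" half, both hash-distributed on the same key.
    CREATE TABLE dbo.Claim_Core
    (
        ClaimKey  int  NOT NULL,
        ClaimDate date NOT NULL,
        Amount    money
    )
    WITH (DISTRIBUTION = HASH(ClaimKey));

    CREATE TABLE dbo.Claim_Text
    (
        ClaimKey int NOT NULL,
        Notes1   varchar(4000),
        Notes2   varchar(4000)
    )
    WITH (DISTRIBUTION = HASH(ClaimKey));

    -- Rows are reassembled with no data movement because the key matches.
    SELECT c.ClaimKey, c.Amount, t.Notes1
    FROM dbo.Claim_Core AS c
    JOIN dbo.Claim_Text AS t ON t.ClaimKey = c.ClaimKey;
    ```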

    4. Use the right resource class:

    In several cases we had complex facts that needed more resources (memory and CPU) to speed up fact loads. Not just facts; we also had dimensions with complex business rules and type 2 implementations. We designed our ELT so that less complex facts and dimensions run in the smallrc resource class, allowing for more parallelism, whereas the more complex facts that need more resources run in the largerc resource class.
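    Resource classes in SQL Data Warehouse are granted per user, so a split like the one described might look like this (the user names are hypothetical):

    ```sql
    -- Hypothetical: loads run as etl_light get less memory but more concurrency;
    -- loads run as etl_heavy get more memory per query.
    EXEC sp_addrolemember 'smallrc', 'etl_light';
    EXEC sp_addrolemember 'largerc', 'etl_heavy';
    ```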

    5. Use the primary key as distribution column for master tables:

    In the source from which we ingest data into SQL DW, there are many master tables that we look up when building our facts. In such cases, we distribute these tables, when they hold a reasonable amount of data (>1 million rows), on the primary key, which is a unique integer. This has given us the advantage of even data distribution (minimal to no data skew), making our lookup queries very fast.

    6. Using Dynamic Scale up and Scale down for saving costs:

    Our ELTs, orchestrated by ADF, are designed so that prior to the scheduled ELT kickoff, we scale our instance up from 100 DWU to 600 DWU. This has led to huge cost savings. Our ELT runs for nearly 4-5 hours, during which DWU usage is capped at 600 DWU. During month end, when there is a need for faster processing and businesses need data sooner, we have the option of scaling to 1000 DWU. All of this is done as part of our ELTs; no manual intervention is needed.
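    The scale-up and scale-down steps can be expressed in T-SQL against the logical server's master database (the warehouse name is hypothetical):

    ```sql
    -- Before the scheduled ELT kickoff: scale up for the load window.
    ALTER DATABASE MyWarehouse MODIFY (SERVICE_OBJECTIVE = 'DW600');

    -- After the ELT completes: scale back down to save cost.
    ALTER DATABASE MyWarehouse MODIFY (SERVICE_OBJECTIVE = 'DW100');
    ```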

    7. Regular Maintenance:

    In our case, we included updating statistics and index rebuilds as part of the ELT process. As soon as the dimension and fact loads are completed, we check all the CCIs for fragmentation > 5% and rebuild those indexes. Similarly, for the key tables, we update statistics to ensure the best performance.
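    For each key table, the post-load maintenance step amounts to something like the following (the table name is hypothetical):

    ```sql
    -- Hypothetical post-load maintenance for a fact table: rebuild its
    -- clustered columnstore index and refresh optimizer statistics.
    ALTER INDEX ALL ON dbo.FactSales REBUILD;
    UPDATE STATISTICS dbo.FactSales;
    ```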

    8. Leverage SQL Server 2016 for data marts:

    Azure SQL DW is the primary data processing engine, whereas we chose SQL Server 2016 running on DS14 IaaS VMs as our primary layer for data distribution. This has enabled us to leverage the high concurrency provided by SQL Server and use the power of Azure SQL DW for processing. We have more than 1,000 users who use the data provisioned from Azure SQL DW.

    Using Azure SQL Data Warehouse as part of their solution, Microsoft Enterprise Services was able to reduce run times by up to 40% and provide insights to business users with reduced latency.

    If you have not already explored this fully managed, petabyte scale cloud data warehouse service, learn more at the links below.

    Learn more

    What is Azure SQL Data Warehouse?
    SQL Data Warehouse best practices
    Video library
    MSDN forum
    Stack Overflow forum

    New Virtual Health Templates extend Skype for Business as platform for developers


    Modern healthcare providers are constantly looking for innovative ways to service and connect their patients and care teams. We are excited to announce the publication of new developer templates that extend Skype for Business as a platform for virtual healthcare. Office 365 with Skype for Business Online addresses the critical communication needs of healthcare providers, and these templates enable new mediums for care coordination for patients without requiring an Office 365 subscription.

    The Office 365 Virtual Health Templates allow you to accelerate building your virtual consult experiences by providing an open source solution using the Skype for Business SDKs announced earlier in 2016. These SDKs are powered by Office 365 and Skype for Business Online and support building web and mobile experiences that integrate presence, chat, audio and video with custom business experiences. Publishing these open source templates is a continuation of our investment in making Skype for Business and Office 365 a high-value platform for developers and partners. The templates use modern web development technologies and leverage existing Skype for Business video services, making it easier for developers to build their own portals and apps integrated with other healthcare applications like electronic health record (EHR) management systems and scheduling systems.

    “Since introducing telemedicine, built on Office 365 and Skype SDKs, we have seen a nearly 40 percent decrease in mortality rates in our Critical Care TeleHealth Program. We currently see an average of 361 patients per day via Skype across all programs, and are continuing to invest in our telehealth practice to meet the evolving needs of our providers and patients.”
    —Lonnie Buchanan, director of Enterprise Architecture at Intermountain Healthcare

     

    We have provided open source pages, site functions, authentication and connection to the Skype for Business APIs. As an open source project, we are actively encouraging participation in the community of other like-minded developers. To get more information about the developer assets being shared, please check out our blog over on dev.office.com.

    We look forward to working with developers and partners to create new applications and scenarios with Office 365 and Skype for Business. Key customers and partners are already seeing the benefits of these templates, for example:

    RingMD
    RingMD is a pre-release partner with Microsoft’s Skype for Business. With the integration of Skype’s new APIs, RingMD can now support virtual consultations through the platform. This means that every healthcare institution and practice that’s already an Office 365 customer can quickly and easily start facilitating virtual consultations through the RingMD platform with their patients and in their networks, doctor to doctor.

    Cambio
    Cambio is at the forefront of digitalizing doctor-patient visits, by integrating our EMR, Cambio COSMIC, with Skype for Business. The integrated solution is very easy to use and the time saving potential is substantial. Together with Microsoft, Cambio is launching a totally integrated platform for e-visits.

    CareFlow
    Integrating Skype for Business with CareFlow’s products is completely transformational in the way it bridges clinical records and communications and embeds video conferencing technology within clinical workflows. Clinicians across a care community can hold meetings and conversations with each other or with their patients in a way which is secure, patient-identified and recorded as part of the clinical record.

    Virtual Health Templates 2

    Want your own integration with Office 365 and Skype for Business?

    Excellent! Visit dev.office.com/skype where you’ll find documentation and code samples to help you get started. Once you jump in, tell us what you think. Give us your feedback on the API and documentation through GitHub and Stack Overflow or make new feature suggestions on UserVoice.

    Andrew Bybee, principal GPM for the Skype for Business team

    The post New Virtual Health Templates extend Skype for Business as platform for developers appeared first on Office Blogs.

    How do industry leaders set their mobile apps apart?


    Over the past few years, innovation and increasingly savvy users have dramatically raised the bar for app quality. Industry leaders have taken notice, and they’re capitalizing on three core trends to stay ahead of their users—and their competition: (1) transitioning from basic apps to amazing mobile experiences; (2) evolving from simple, data-aware functions to intelligent, data-driven apps that learn and improve; and (3) shifting from building one-off apps to creating a mobile portfolio. Discover how you can apply these same ideas to transform your mobile experiences and grow your business.

    Want more insights into the strategies discussed in this post? Read the e-book, Out-mobile the competition: Learn how to set your apps apart with mobile DevOps and the cloud.

    Shift 1: From good apps to amazing experiences

    Keep users coming back

    Standard, functional apps are no longer enough. Today’s mobile users, whether they’re customers or employees, want more than well-designed user interfaces, geolocation, and other ‘standard’ app functionality. They demand fully native mobile apps, with fast, relevant, and personalized features that work well on their device of choice.

    Bloomin’ Brands, Inc., uses mobile to redefine the casual dining experience, build customer loyalty, and ensure repeat visitors. For its Outback Steakhouse brand, the Outback Steakhouse app enables customers to quickly and easily locate and reserve tables at nearby locations, automatically check in with hosts, get seated, pay for meals, and redeem vouchers.

    See how users love the five-star apps from Outback Steakhouse:

    Treat every user like a consumer

    There’s no longer a distinction between “enterprise” and “consumer” apps. Employees expect the same seamless and convenient experiences from their workplace apps that they get from their personal apps. If IT fails to deliver, employees quickly abandon apps—and organizations are left with wasted effort and investments.

    Alaska Airlines sets the standard for treating its team members just like customers, with a portfolio of employee-facing apps that mirror Alaska’s consumer user experience. Its internal Hopper app, for example, lets employees access their travel benefits from anywhere, automatically check in, view flight status, and receive mobile boarding passes.

    See how Alaska brings the consumer experience to employees:

    Shift 2: From data-aware apps to data-driven intelligence

    Turn data into insights

    Five-star apps do more than merely allow users to enter and edit data—they’re driven by data intelligence. They locate users, track how users are interacting with their apps, understand what customers are purchasing, and much more. This information helps companies recognize trends, increase customer loyalty or employee satisfaction, and make informed business decisions.

    Nuvem Tecnologia combines mobile and the cloud to connect big data capabilities with rural farmers everywhere. Nuvem Tecnologia’s AgroSIG app modernizes complex agricultural processes for farmers throughout Brazil, centralizing farm data and eliminating error-prone paper reporting. From analyzing historical data to identifying pest problems with GPS and device photo capture, the AgroSIG app equips farmers with the information they need to make real-time decisions and improve their harvests.

    See how Nuvem drives agricultural processes with data:

    Shift 3: From individual apps to a multi-app portfolio

    Create an innovation hub

    The third major shift isn’t about the mobile app experience itself, but about the way your business creates and updates apps. You need to think about your mobile strategy as a core component of your business strategy, starting by moving from monolithic apps to an entire portfolio of role- or function-specific apps.

    Industry leaders have repeatable, automated processes, giving them the fast feedback and release cycles they need to deliver new projects and continuously improve existing apps.

    Dutch Railways embodies this trend of reimagining mobile experiences and shifting from weighty, monolithic apps that “do everything” to single-purpose, lightweight apps. Instead of simply rebuilding the existing Rail Pocket—Dutch Railways’ clunky, legacy communication system—for new devices, the development team created eight function-specific apps, including time sheets, train timetables, maintenance logs, and more. Now, Dutch Railways can quickly and easily improve apps and add new functionality.

    See how Dutch Railways creates fast, data-rich apps for more than 7,000 staff members:

    Bottom line

    In a mobile-first world, user expectations and business demands are constantly evolving. Successful industry leaders embrace and make these changes work for their businesses. Discover how you can stay competitive and deliver the new mobile experience with the e-book, Out-mobile the competition: Learn how to set your apps apart with mobile DevOps and the cloud.

    Cormac Foster, Senior Product Marketing Manager

    Cormac is responsible for mobile Product Marketing. He came to Microsoft from Xamarin, where he was responsible for Analyst Relations and thought leadership. Prior to Xamarin, Cormac held a variety of roles in software testing, research, and marketing.

    Advice to help prevent data breaches at your company


    A data breach can be your worst nightmare. Not only could it be disastrous for your company’s brand, it could lead to significant revenue losses and regulatory fines.

    Watch the latest Modern Workplace episode, “Cyber Intelligence: Help Prevent a Breach,” to get advice on how to best approach cyber security at your company from two chief information security officers (CISO)—Vanessa Pegueros, CISO at DocuSign, and Mike Convertino, CISO at F5 Networks. Learn how these seasoned security executives make decisions on security spending and how they justify security investments to skeptical executives who may not have ever experienced a security breach.

    Every company has cyber security risks and needs to be aware of them—but understanding your company’s risk profile is just the beginning. You also need to know what you are trying to protect. As Convertino explains, “The value proposition of the company needs to be the thing that you base your protections and recommendations on.” When you have a clear goal for security, it becomes easier to demonstrate the value of your security investments in tools and talent.

    You’ll also see a preview of the protection available from Office 365 Threat Intelligence, which lets you monitor and protect against risks before they hit your organization. Using Microsoft’s global presence to provide insight into real-time security threats, Threat Intelligence enables you to quickly and effectively set up alerts, dynamic policies and security solutions for potential threats.

    Watch the Modern Workplace episode to learn more.

    The post Advice to help prevent data breaches at your company appeared first on Office Blogs.


    Episode 118 on Building Contextual Bots in the SharePoint Framework—Office 365 Developer Podcast


    In episode 118 of the Office 365 Developer Podcast, Richard diZerega discusses delivering contextual bots using the SharePoint Framework and the Bot Framework “back channel.”

    Download the podcast.

    Weekly updates

    Got questions or comments about the show? Join the O365 Dev Podcast on the Office 365 Technical Network. The podcast is available on iTunes (search for “Office 365 Developer Podcast”), or add the RSS feed directly: feeds.feedburner.com/Office365DeveloperPodcast.

    About the hosts

    Richard diZerega is a software engineer in Microsoft’s Developer Experience (DX) group, where he helps developers and software vendors maximize their use of Microsoft cloud services in Office 365 and Azure. Richard has spent a good portion of the last decade architecting Office-centric solutions, many of which span Microsoft’s diverse technology portfolio. He is a passionate technology evangelist and a frequent speaker at worldwide conferences, trainings and events. Richard is highly active in the Office 365 community, a popular blogger at aka.ms/richdizz, and can be found on Twitter at @richdizz. Richard is born, raised and based in Dallas, TX, but works on a worldwide team based in Redmond. Richard is an avid builder of things (BoT), musician and lightning-fast runner.

     

    A Civil Engineer by training and a software developer by profession, Andrew Coates has been a Developer Evangelist at Microsoft since early 2004, teaching, learning and sharing coding techniques. During that time, he’s focused on .NET development on the desktop, in the cloud, on the web, on mobile devices and most recently for Office. Andrew has a number of apps in various stores and generally has far too much fun doing his job to honestly be able to call it work. Andrew lives in Sydney, Australia with his wife and two almost-grown-up children.

    Useful links

    StackOverflow

    Yammer Office 365 Technical Network

    The post Episode 118 on Building Contextual Bots in the SharePoint Framework—Office 365 Developer Podcast appeared first on Office Blogs.

    Team Services Update – Feb 15


    This week we are beginning the deployment of our sprint 113 improvements.  You can read the release notes for details.

    Among other things, there’s a bunch of nice improvements to the Pull Request experience – we continue to refine and evolve it.

    We also did a “V2” overhaul of the package management UI.  We think it’s more responsive and simpler.  We certainly appreciate any feedback you have.  For now it is an “opt-in” experience.  Eventually, it will become the default experience.

    A note on release notes…

    The way we manage release notes continues to evolve.  Several months ago, we changed our process to post release notes at the very beginning of deployments rather than “in the middle” – meaning you get earlier notice of changes that are coming, but you get the notice days, or even a week or more, before the changes actually appear in your account.

    This sprint we’ve evolved even further.  As we continue on our journey of service decomposition and autonomy, we really no longer have a single deployment wave.  We have many independent services deploying at different times.  Starting this sprint, you should now think of the release notes as describing work that is complete and will be deployed over the next 3 weeks.  I’ve decided on this small decrease in precision rather than publishing lots of smaller release notes.  Until we can actually provide personal release notes for your account, we’ll never get to fully precise release notes because every service takes about a week to deploy across the world, even if nothing goes wrong.

    **Update** I forgot to mention, one other ramification of this evolution is that some features will show up in your account before others do.  Because they are different services being deployed at different times, features show up at different times.  So, if you see some features listed in the release notes but not others, don’t worry, the rest are coming, they just haven’t hit your account yet.  By the end of 3 weeks (and usually 2 weeks) from the date of the release notes, all features listed in the release notes should be visible to you.

    As always, I’m open to feedback if you feel I’m making the wrong trade-offs.  The principles I’m currently using are:

    1. Always tell people what’s coming before it shows up in their account.
    2. Don’t tell people something is coming that doesn’t (no release note retractions).
    3. Don’t pepper people with constant release note updates – a 3 week cadence is reasonable.

    Brian

     

    SQL Server on Linux: Mission-critical HADR with Always On Availability Groups


    This post was authored by Mihaela Blendea, Senior Program Manager, SQL Server

    In keeping with our goal to enable the same High Availability and Disaster Recovery solutions on all platforms supported by SQL Server, today Microsoft is excited to announce the preview of Always On Availability Groups for Linux in SQL Server v.Next Community Technology Preview (CTP) 1.3. This technology adds to the HADR options available for SQL Server on Linux, having previously enabled shared disk failover cluster instance capabilities.

    First released with SQL Server 2012 and enhanced in the 2014 and 2016 releases, Always On Availability Groups is SQL Server’s flagship solution for HADR. It provides high availability for groups of databases on top of direct attached storage, supporting multiple active secondary replicas for integrated HA/DR, automatic failure detection, fast transparent failover, and read load balancing. This broad set of capabilities enables customers to meet the strictest availability SLA requirements for their mission-critical workloads.

    Here is an overview of the scenarios that Always On Availability Groups are enabling for SQL Server v.Next:

    Run mission-critical application using SQL Server running on Linux

    Always On Availability Groups make it easy for your applications to meet rigorous business continuity requirements. This feature is now available on all Linux OS distributions SQL Server v.Next supports — Red Hat Enterprise Linux, Ubuntu and SUSE Linux Enterprise Server. Also, all capabilities that make Availability Groups a flexible, integrated and efficient HADR solution are available on Linux as well:

    • Multi-database failover – an availability group supports a failover environment for a set of user databases, known as availability databases.
    • Fast failure detection and failover – as a resource in a highly available cluster, an availability group benefits from built-in cluster intelligence for immediate failure detection and failover action.
    • Transparent failover using the availability group listener – enables clients to use a single connection string to primary or secondary databases that does not change in case of failover.
    • Multiple sync/async secondary replicas – an availability group supports up to eight secondary replicas. The availability mode determines whether the primary replica waits (synchronous replica) or not (asynchronous replica) to commit transactions on a database until a given secondary replica has written the transaction log records to disk.
    • Manual/automatic failover with no data loss – failover to a synchronized secondary replica can be triggered automatically by the cluster or on demand by the database administrator.
    • Active secondary replicas available for read/backup workloads – one or more secondary replicas can be configured to support read-only access to secondary databases and/or to permit backups on secondary databases.
    • Automatic seeding – SQL Server automatically creates the secondary replicas for every database in the availability group.
    • Read-only routing – SQL Server routes incoming connections to an availability group listener to a secondary replica that is configured to allow read-only workloads.
    • Database-level health monitoring and failover trigger – enhanced database-level monitoring and diagnostics.
    • Disaster Recovery configurations – with Distributed Availability Groups or a multi-subnet availability group setup.
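    As a rough sketch of what creating such a group looks like in Transact-SQL on Linux (server names, database name, and port are placeholders; exact options may change during the preview), a cluster-managed availability group uses CLUSTER_TYPE = EXTERNAL so that an external resource manager such as Pacemaker owns failover:

    ```sql
    -- On the primary replica: create an availability group whose failover is
    -- orchestrated by an external cluster manager (e.g. Pacemaker).
    CREATE AVAILABILITY GROUP [ag1]
        WITH (CLUSTER_TYPE = EXTERNAL)
        FOR DATABASE [db1]
        REPLICA ON
            N'linux-node1' WITH (
                ENDPOINT_URL = N'tcp://linux-node1:5022',
                AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                FAILOVER_MODE = EXTERNAL,
                SEEDING_MODE = AUTOMATIC),
            N'linux-node2' WITH (
                ENDPOINT_URL = N'tcp://linux-node2:5022',
                AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                FAILOVER_MODE = EXTERNAL,
                SEEDING_MODE = AUTOMATIC);
    ALTER AVAILABILITY GROUP [ag1] GRANT CREATE ANY DATABASE;

    -- On each secondary replica: join the group and allow automatic seeding.
    ALTER AVAILABILITY GROUP [ag1] JOIN WITH (CLUSTER_TYPE = EXTERNAL);
    ALTER AVAILABILITY GROUP [ag1] GRANT CREATE ANY DATABASE;
    ```

    With SEEDING_MODE = AUTOMATIC, the secondary databases are created and synchronized without manual backup/restore steps.
    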

    Here is an illustration of a HADR configuration that an enterprise building a mission-critical application using SQL Server running on Linux can use to achieve: application-level protection (two synchronized secondary replicas), compliance with business continuity regulations (DR replica on remote site) as well as enhance performance (offload reporting and backup workloads to active secondary replicas):

    Fig. 1 Always On Availability Groups as an Integrated and Flexible HADR Solution on Linux

    On Windows, Always On depends on Windows Server Failover Cluster (WSFC) for distributed metadata storage, failure detection and failover orchestration. On Linux, we are enabling Availability Groups to integrate natively with your choice of clustering technology. For example, in preview today SQL Server v.Next integrates with Pacemaker, a popular Linux clustering technology. Users can add a previously configured SQL Server Availability Group as a resource to a Pacemaker cluster and all the orchestration regarding monitoring, failure detection and failover is taken care of. To achieve this, customers will use the SQL Server Resource Agent for Pacemaker available with the mssql-server-ha package, that is installed alongside mssql-server.
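    On a node where the mssql-server-ha package is installed and a Pacemaker cluster is already configured, registering the availability group as a cluster resource might look like the following sketch (resource names, AG name, and IP address are placeholders, and the exact pcs syntax can vary with the Pacemaker version):

    ```shell
    # Register the availability group as a master/slave cluster resource
    # using the SQL Server resource agent from the mssql-server-ha package.
    sudo pcs resource create ag_cluster ocf:mssql:ag ag_name=ag1 \
        --master meta notify=true

    # A virtual IP resource can act as the client connection point.
    sudo pcs resource create virtualip ocf:heartbeat:IPaddr2 ip=10.0.0.100

    # Keep the IP on the node hosting the primary, and move it only after
    # a secondary has been promoted.
    sudo pcs constraint colocation add virtualip ag_cluster-master \
        INFINITY with-rsc-role=Master
    sudo pcs constraint order promote ag_cluster-master then start virtualip
    ```

    From that point on, Pacemaker handles monitoring, failure detection, and failover of the group.
    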

    Workload load balancing for increased scale and performance

    Previously, users had to set up a cluster to load-balance read workloads for their application using readable secondary replicas. Configuring and operating a cluster carried significant manageability overhead when HA was not the goal.

    Users can now create a group of replicated databases and leverage the fastest replication technology for SQL Server to offload secondary read-only workloads from the primary replica. If the goal is to conserve resources for mission-critical workloads running on the primary, users can now use read-only routing or directly connect to readable secondary replicas, without depending on integration with any clustering technology. These new capabilities are available for SQL Server running on both Windows and Linux platforms.
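    A cluster-independent, read-scale group of this kind can be sketched in Transact-SQL with CLUSTER_TYPE = NONE (names and ports below are placeholders; because there is no cluster manager, failover is manual only):

    ```sql
    -- A read-scale availability group with no cluster manager: replicas are
    -- used to offload read-only workloads, not for automatic failover.
    CREATE AVAILABILITY GROUP [ag_read_scale]
        WITH (CLUSTER_TYPE = NONE)
        FOR DATABASE [db1]
        REPLICA ON
            N'node1' WITH (
                ENDPOINT_URL = N'tcp://node1:5022',
                AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
                FAILOVER_MODE = MANUAL,
                SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL)),
            N'node2' WITH (
                ENDPOINT_URL = N'tcp://node2:5022',
                AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
                FAILOVER_MODE = MANUAL,
                SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL));
    ```

    SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL) is what opens the secondary replicas to direct read-only connections.
    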


    Fig. 2 Group of Read-Only Replicated Databases to Load Balance Read-Only Workloads

    Note this is not a high-availability setup, as there is no “fabric” to monitor and coordinate failure detection and automatic failover. For users who need HADR capabilities, we recommend they use a cluster manager (WSFC on Windows or Pacemaker on Linux).

    Seamless cross-platform migration

    By setting up a cross-platform Distributed Availability Group, users can do a live migration of their SQL Server workloads from Windows to Linux or vice versa. We do not recommend running in this configuration in a steady state as there is no cluster manager for cross-platform orchestration, but it is the fastest solution for a cross-platform migration with minimum downtime.
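    Conceptually, the migration configuration joins two existing availability groups (one per platform) into a distributed availability group, along these lines (group and listener names are placeholders):

    ```sql
    -- A distributed availability group spanning two existing availability
    -- groups, e.g. one on Windows and one on Linux.
    CREATE AVAILABILITY GROUP [distributedag]
        WITH (DISTRIBUTED)
        AVAILABILITY GROUP ON
            N'ag_windows' WITH (
                LISTENER_URL = N'tcp://ag-windows-listener:5022',
                AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
                FAILOVER_MODE = MANUAL,
                SEEDING_MODE = AUTOMATIC),
            N'ag_linux' WITH (
                LISTENER_URL = N'tcp://ag-linux-listener:5022',
                AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
                FAILOVER_MODE = MANUAL,
                SEEDING_MODE = AUTOMATIC);
    ```

    Once the target group is synchronized, a manual failover completes the cross-platform move.
    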


    Fig. 3 Cross-Platform Live Migration Using Distributed Availability Groups

    Please visit our reference documentation on business continuity for SQL Server on Linux for more specifics on how integration with Pacemaker clustering is achieved in all supported OS flavors and end-to-end functional samples.

    Today’s announcement marks the first preview of new Always On Availability Groups capabilities: Linux platform support for HADR as well as new scenarios like creating a cluster-independent group of replicated databases for offloading read-only traffic. Availability Groups are available on all platforms and OS versions that SQL Server v.Next is running on. In upcoming releases, we are going to enhance these capabilities by providing high-availability solutions for containerized environments as well as tooling support for an integrated experience. Stay tuned!

    Get started

    You can get started with many of these capabilities today:

    Learn more

    SQL Server next version CTP 1.3 now available


    Microsoft is excited to announce a new preview for the next version of SQL Server (SQL Server v.Next). Community Technology Preview (CTP) 1.3 is available on both Windows and Linux. In this preview, we added several feature enhancements to High Availability and Disaster Recovery (HADR), including the ability to run Always On Availability Groups on Linux. You can try the preview in your choice of development and test environments now: www.sqlserveronlinux.com.

    Key CTP 1.3 enhancement: Always On Availability Groups on Linux

    In SQL Server v.Next, we continue to add new enhancements for greater availability and higher uptime. A key design principle has been to provide customers with the same HA and DR solutions on all platforms supported by SQL Server. On Windows, Always On depends on Windows Server Failover Clustering (WSFC). On Linux, you can now create Always On Availability Groups, which integrate with Linux-based cluster resource managers to enable automatic monitoring, failure detection and automatic failover during unplanned outages. We started with the popular clustering technology, Pacemaker.

    In addition, Availability Groups can now work across Windows and Linux as part of the same Distributed Availability Group. This configuration can accomplish cross-platform migrations without downtime. To learn more, you can read our blog titled “SQL Server on Linux: Mission Critical HADR with Always On Availability Groups”.

    Other Enhancements

    SQL Server v.Next CTP 1.3 also includes these additional feature enhancements:

    • Full text search is now available for all supported Linux distributions.
    • Resumable online index rebuilds enables users to recover more easily from interruption of index builds, or split an index build across maintenance windows.
    • Temporal Tables Retention Policy support enables customers to more easily manage the amount of historical data retained by temporal tables.
    • Indirect checkpoint performance improvements. Indirect checkpoint is the recommended configuration for large databases and for SQL Server 2016, and now it will be even more performant in SQL Server v.Next.
    • Minimum Replica Commit Availability Groups setting enables users to set the minimum number of replicas that are required to commit a transaction before committing on the primary.
    • For SQL Server v.Next technical preview running on Windows Server, encoding hints in SQL Server Analysis Services is an advanced feature to help optimize refresh times with no impact on query performance.
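    Two of the enhancements above can be illustrated with short Transact-SQL fragments (table and index names are hypothetical):

    ```sql
    -- Resumable online index rebuild: cap the run, pause it during business
    -- hours, and resume in the next maintenance window.
    ALTER INDEX IX_Orders_Date ON dbo.Orders
        REBUILD WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 60 MINUTES);
    ALTER INDEX IX_Orders_Date ON dbo.Orders PAUSE;
    ALTER INDEX IX_Orders_Date ON dbo.Orders RESUME;

    -- Temporal table retention policy: keep only six months of history
    -- for a system-versioned table.
    ALTER TABLE dbo.Orders
        SET (SYSTEM_VERSIONING = ON (HISTORY_RETENTION_PERIOD = 6 MONTHS));
    ```

    A paused rebuild retains the work completed so far, so the operation can be split across maintenance windows rather than restarted from scratch.
    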

    For additional detail on CTP 1.3, please visit What’s New in SQL Server v.Next, Release Notes and Linux documentation.

    Get SQL Server v.Next CTP 1.3 today!

    Try the preview of the next release of SQL Server today! Get started with the preview of SQL Server with our developer tutorials that show you how to install and use SQL Server v.Next on macOS, Docker, Windows and Linux and quickly build an app in a programming language of your choice.

    Have questions? Join the discussion of SQL Server v.Next at MSDN. If you run into an issue or would like to make a suggestion, you can let us know through Connect. We look forward to hearing from you!

    Continuous Integration for C++ with Visual Studio Team Services


    Visual Studio Team Services (VSTS) is an easy way to help your team manage code and stay connected when developing. VSTS supports continuous integration using a shared code repository that everyone on the team uses to check in code changes. Every time any code is checked in, it is fully integrated by running a full automated build. By integrating frequently, it is easier for you to discover where something goes wrong, so you can spend more time building features and less time troubleshooting.

    You can now take advantage of new documentation that makes it easier for you to use continuous integration with C++ code inside VSTS.

    Read the new doc: Build your C++ app for Windows

    Spin up a simple “hello world” application and give it a try completely for free! It could be a good way to improve your codebase by finding integration problems early.

    If you like this type of content or have any suggestions, please feel free to drop us a line, or continue the discussion in the comments below.
