Channel: TechNet Technology News

Azure Blueprint supports the UK Government’s Cloud Security Principles


Azure Government Engineering is pleased to announce the release of Azure Blueprint for the UK Government’s Cloud Security Principles. Blueprint empowers Azure customers to build the most secure cloud solutions on the most secure cloud platform.
 
Azure Blueprint for the UK Government enables UK public sector organizations to understand how solutions built on Azure implement the 14 individual Cloud Security Principles published by the National Cyber Security Centre, supporting workloads with information designated as UK OFFICIAL. The Azure Blueprint UK Government Customer Responsibilities Matrix outlines how Azure implements security controls designed to satisfy each security principle and assists customers in understanding how they may implement safeguards within their Azure solution to fulfill the requirements of each principle where they hold a responsibility.


In conjunction with this documentation release, a Blueprint compliance architecture Azure Resource Manager (ARM) template has been released on GitHub. This ARM template deploys a three-tiered network architecture that provides a baseline from which customers can build a secure environment supporting the UK Cloud Security Principles.
 
To access the Azure Blueprint UK Government Cloud documents please e-mail AzureBlueprint@microsoft.com. Additional information and Blueprint resources are available on the Azure Government Documentation site.


Accelerated Continuous Testing with Test Impact Analysis – Part 1


Continuous Testing in DevOps

In older testing strategies, large software changes were tested as a complete product after a so-called “release to QA”, running almost all tests just before release. We know the downsides to that. On the other hand, DevOps is all about a fast development-to-delivery pipeline and continuous delivery of value. Releases happen in days and weeks – not in years as they used to. In such a DevOps world, there is not going to be any continuous delivery if you do not have your testing right. Just as we have Continuous Integration (CI) and Continuous Delivery (CD), DevOps calls for Continuous Testing (CT).

Test Impact Analysis (TIA)

Can such continuous testing also be fast? Can it still be comprehensive? Should you just run all your tests all the time? At smaller scales, running through all your test suites and emitting copious test results may be tolerable, but this does not scale. At larger scales, for testing cycles to be fast, a more sophisticated view of what comprehensive testing means becomes essential. “Relevance” becomes critical – only run the most relevant tests and only report the relevant results.

The Test Impact Analysis (TIA) feature specifically enables this – TIA is all about incremental validation by automatic test selection. For a given code commit entering the pipeline, TIA will select and run only the relevant tests required to validate that commit. That test run will therefore complete faster; if there is a failure you will get to know about it sooner; and because everything is scoped by relevance, analysis will be faster as well.

TIA does come with a little overhead of its own (to build and maintain the test-to-code mapping), and so is best applied only in cases where a test run takes relatively long to complete (say, more than 10 minutes).

Enabling TIA

TIA is supported via Version 2.* (preview) of the Visual Studio Test task. If your application is a single-tier application, all you need to do is check “Run only impacted tests” in the task UI. The Test Impact data collector is configured automatically; no additional steps are required.


If your application interacts with a service in the context of IIS, you also need to configure the HTTP Proxy data collector and the Test Impact data collector to run in the context of IIS (both via the .runsettings file). Here is a sample .runsettings file that does this configuration: samplerunsettings.


Reporting

TIA is integrated into existing test reporting at both the summary and details levels, including notification emails.


Robust Test Selection

But why should you trust TIA? How do you know that quality has not been compromised in all of this? To answer that, we need to look at two aspects:
(1) which tests get automatically selected, and
(2) the policies that can condition test selection.

TIA will look at an incoming commit and select the set of relevant tests. This selection has three components:
(1) The existing tests impacted by the incoming commit.
(2) Previously failing tests – without this, over the course of several commits an earlier failing test case might just get lost. Therefore, TIA keeps track of tests that failed in the previous build and includes them in the selection.
(3) Newly added tests – what if your commit contains new tests? Such tests could uncover product bugs, so TIA selects newly added tests as well.

Taken together, this combination makes for a robust selection of tests. Additionally, we allow (and recommend) running all tests at a configured periodicity.

Scope

TIA is presently scoped to the following:

  • Dependencies
    • Requires use of the Version 2.* (preview) of the Visual Studio Test task in the build definition.
    • Requires either VS2015 Update 3 onwards OR VS 2017 RC onwards on the build agent
  • Supported
    • Managed code
    • CI and PR workflows
    • IIS interactions
    • Automated Tests (tests and application must be running on the same machine).
    • Build vNext workflow, with multiple VSTest Tasks
    • Local and hosted build agents
    • Git, GitHub, External Git, TFVC repos
  • Not yet supported
    • Remote testing (where the test is exercising an app deployed to a different machine)
    • Cross-platform scenarios (Windows only for now)
    • UWP

Next post

We will continue this series in the next post, looking at more details, specifically: manual overrides, repo considerations, using TIA in PR builds, and interaction with Code Coverage. If you would like any other aspect covered in more detail, let us know.

In the meantime, start using TIA! Looking forward to your feedback.

Breaking down a notably sophisticated tech support scam M.O.


The cornerstone of tech support scams is the deception that there is something wrong with your PC.  To advance this sham, tech support scams have long abused browsers’ full screen function. Coupled with dialogue loops, the pop-up messages that just won’t go away, and the spoofing of brands like Microsoft, tech support scam websites can be convincing.

The end-goal, of course, is to get you to call a technical support hotline, which then charges you for services you don’t need.

Recently we came across a new tech support scam website that stands out in the way it creatively uses the full screen function and dialogue boxes.

The scam is one of many websites we have discovered and blocked over the years. To achieve its end, the website uses a malicious script belonging to the Techbrolo family of support scam malware. Techbrolo is known for introducing dialogue loops and audio messages, which have now become a staple of tech support scam sites.

Anatomy of a support scam website

The scam starts like any other. You are redirected to the website by nefarious ads. When the page loads, you get a pop-up message that says your computer has been locked because of a virus infection. It asks you to immediately call a technical support number.


Figure 1. Dialogue box that pops up when the site is accessed

The website also starts playing an audio message, a tactic to further cause panic, something that we’re seeing more and more in these scams. It says:

Important security alert! Virus intrusions detected on your computer. Your personal data and system files may be at serious risk. All system resources are halted to prevent any damage. Please call customer service immediately to report these threats now.

On typical scam sites, if you click OK or close the pop-up message, a dialogue loop kicks in. The website continues to serve the pop-up messages whatever you do, effectively locking your browser.

In this new site, however, if you click OK, things start to get very interesting.

It loads a page with what appears to be a pop-up message containing the same details, including the technical support hotline. You may think at this point you’re just getting the usual dialogue loop. But, upon closer inspection, it’s not really a pop-up message, but a website element of the scam page.


Figure 2. A fake dialogue box that is really a website element

If you click OK on the fake dialogue box (or basically anywhere on the page), it goes into full screen and brings in another surprise.

At full screen, you get what looks like a browser opened to support.microsoft.com/ru-ru/en. But, alas, just like the pop-up message, the browser is just a website element.


Figure 3. A fake browser that is part of the design of the support scam website

This is how the scam site is able to spoof support.microsoft.com in the fake address bar. It even has the green HTTPS indicator to further feign authenticity. If you didn’t detect the scam at this point, you may think you were redirected to a Microsoft website and it’s serving you some messages about your PC.

Don’t fall for this. Exiting full screen puts things in perspective.


Figure 4. The support scam website outside full screen

Busting the scam

Just like all tech support scams, this new iteration is doing its best to make you think there’s something wrong with your PC. The new techniques are meant to improve its chances of you taking the social engineering bait.

The key to stopping the attack is to immediately recognize and break it. If you’re a Microsoft Edge user, there are a couple of ways to do this.

The first clue that something’s amiss is a message from Microsoft Edge. As the offending site goes into full screen, you get a notification from Microsoft Edge. You can exit the full screen at this point by clicking Exit now, and you stop the attack.


Figure 5. Alert from Microsoft Edge that the site has gone to full screen

The second clue is the change in the interface. Since the page is designed to look like Google Chrome, if you’re a Microsoft Edge user, you may catch the difference. Detecting the change in the interface may be easier said than done, but the opportunity to break the attack is there.


Figure 6. You can detect that the fake browser is different from the real one

Conclusion: Avoiding tech support scams

As this newly discovered support scam website shows, scammers are always on the lookout for opportunities to improve their tools. They can get really creative, motivated by the possibility of avoiding security solutions and ultimately increasing the chances of you falling for their trap.

Avoid tech support scam websites by being more careful when browsing the Internet. As much as you can, visit trusted websites only. Like most tech support scams, you are redirected to offending sites via malvertising (malicious ads). These ads are usually found in dubious websites, such as those hosting illegal copies of media and software, crack applications, and malware.

Get the latest protection from Microsoft by keeping your Windows operating system and antivirus up-to-date. If you haven’t, upgrade to Windows 10.

Use Microsoft Edge when browsing the Internet. It blocks known support scam sites using Microsoft SmartScreen. Microsoft Edge can also stop pop-up dialogue loops used by these sites. It also calls out when a website goes into full screen, giving you a chance to stop the attack.


Figure 7. Microsoft Edge blocks the support scam website using Microsoft SmartScreen

 

Jonathan San Jose

MMPC

Agent-based deployment in Release Management


We have been working on adding robust, in-the-box multi-machine deployment to Release Management. You can orchestrate deployments across multiple machines and perform rolling updates while ensuring high availability of the application throughout.

The agent-based deployment capability relies on the same build and deployment agents. However, unlike the current approach, where you install the build and deployment agents on a set of proxy servers in an agent pool and drive deployments to remote target servers, you install the agent on each of your target servers directly and then drive rolling deployments to those servers.

Preview

The agent-based deployment feature is currently in an early adopter phase. If you would like to participate, or if you have suggestions on how we can make agent-based deployment better, you can drop us an email.

Deployment Group

A deployment group is a logical group of targets (machines) with an agent installed on each of them. Deployment groups represent your physical environments – for example, a single-box Dev environment, a multi-machine QA environment, or a farm of machines for UAT/Prod. They also specify the security context for your physical environments.

Here are a couple of screenshots to explain how this experience is shaping up.

  • Create your ‘Deployment Group’ and register your machines with the generated cmdlet.
  • Manage: Track deployments down to each machine. Tag the machines in the group so that you can deploy to targets having specific tags.
  • Author release definition: If you are deploying to a farm of servers, you can choose how the deployment rolls out: one target at a time, half the targets at a time, all targets at once, or a custom value.
  • Deploy: View live logs for each server and download logs for all servers. Track your deployment steps for each server using the Release log view.

 

Bootstrapping agents: We have made bootstrapping the agents on the targets simpler. You can just copy and paste the cmdlet appropriate for the OS, and it will take care of downloading, installing, and configuring the agent against the deployment group. It even has an option to generate the cmdlet with a ‘Personal Access Token’ of the right scope, so that you don’t have to create one yourself.

  • If the target is an Azure VM, you can do it on demand using the Team Services extension for the VM, or use Azure PowerShell/CLI to add the extension, which will bootstrap the deployment agent. You can also automate this using a resource extension in the Azure template JSON.
  • We plan to enhance the ‘Azure Resource Group’ task to dynamically bootstrap agents on newly provisioned or pre-existing virtual machines on Azure.

With this, you can use the same proven cross-platform agent and its pull-based execution model to easily drive deployments to a large number of servers, no matter which domain they are on, without having to worry about a myriad of prerequisites.

Avoid these six mobile development pitfalls


In our previous post in this series, we talked about the three shifts you need to make to set your mobile apps apart. As you implement your winning strategy, plan to tackle the six common challenges discussed below, ranging from meeting demand to post-release improvement.

Discover how industry leaders tackle these issues in the e-book, Out-mobile the competition: Learn how to set your apps apart with mobile DevOps and the cloud.

Challenge #1: Mounting demand for apps

In response to the mobile explosion, enterprises have recognized the need to deliver exceptional mobile experiences to their business stakeholders, customers, and employees. International Data Corporation (IDC) predicts that, by 2018, the number of enterprise mobile apps will double and spending on mobility will reach 50% of enterprise IT budgets. [1]

How will your organization meet your users’ growing demand for mobile apps when it exceeds your teams’ ability to deliver?

Begin by examining your existing resources and internal processes—from team structure to technology investments—to determine if your current infrastructure allows you to quickly build and continuously deliver many high-quality apps for various device types, scenarios, and audiences.

Challenge #2: Talent shortage

Per a 2014 Forrester survey, 50% of organizations had fewer than five developers in-house, barely enough to field a single mobile team. [2] Fast forward to 2017: Do you have enough developers to cover your needs, including building, testing, and maintaining mobile apps, for each of the major platforms—or do you require additional talent? Do those developers need to be trained on new languages, tools, or platforms?

Challenge #3: Device fragmentation

Since you can’t predict your users’ preferred devices, your apps need to work well on as many form factors, operating systems, and hardware configurations as possible. While managing multiple operating system updates and device fragmentation can be costly—the number of different Android devices alone more than doubled between 2013 and 2015 [3]—it’s necessary to ensure the seamless experiences your users expect.

Challenge #4: Adequate testing and QA

Mobile quality is critical to adoption and engagement but adds another layer of complexity to the release cycle. Manual testing is not a long-term solution: it is expensive, labor intensive, can’t scale with rapid release cycles, and usually involves only a small number of available devices. Expanding device coverage using simulators may appear to lower expenses, but this provides a false sense of security, since simulators cannot faithfully reproduce real-world hardware operating conditions.

Automated UI testing and beta distribution are critical to releasing apps that meet—and exceed—users’ expectations. Teams rapidly validate that apps work as expected, identify and prioritize issues, and distribute to internal and external testers for feedback and suggestions. Development teams can catch issues early in the release process and avoid adding new features on top of faulty code, while QA teams can focus on triaging and quickly fixing issues.

Challenge #5: Innovation with security

Five-star app experiences are more than just a pretty UX. Successful apps must provide standout utilities—examples include proactive recommendations or personalized notifications based on user need, activity, or environment—while ensuring secure, reliable access to the same local or cloud-based files, services, and systems of record as traditional apps. On top of that, your users expect mobile apps to include unique capabilities like contextual push notifications and offline data sync. And all this infrastructure must be able to scale to millions of users as your app takes off.

Cloud and hybrid cloud technologies allow organizations to securely connect their systems while also giving users the on-the-go access they need to be productive. Use of cloud platforms also enables updates at scale, with offline access and push capabilities, plus critical security services like permissions management and user authentication.

Challenge #6: The “after launch”

Many teams focus on getting apps out the door but fail to account for post-launch maintenance and analysis. This mistake robs developers of a continuous feedback loop that is essential to maintaining quality. Abandoning apps (by not updating them) is essentially a waste of current investments.

Teams that capture post-release analytics better understand where their future efforts can make the most impact. By monitoring apps in the wild, your organization can understand the complete range of critical services your users expect, and you can properly identify and prioritize new features and bug fixes based on hard data and real user feedback, thereby continuously delivering value to your users and your business.

Overcome these challenges with mobile DevOps and the cloud

With Microsoft’s mobile DevOps technology and cloud platform, you can not only avoid these common pitfalls, but continuously deliver apps that users love, and drive your business forward. Mobile DevOps automates every stage of the mobile lifecycle, from planning to continuous improvement, with secure connections to cloud services which enable you to quickly deliver the capabilities your users demand and scale for any scenario.

See how Microsoft can help you make mobile your competitive advantage: Out-mobile the competition: Learn how to set your apps apart with mobile DevOps and the cloud.

Cormac Foster, Senior Product Marketing Manager

Cormac is responsible for mobile Product Marketing. He came to Microsoft from Xamarin, where he was responsible for Analyst Relations and thought leadership. Prior to Xamarin, Cormac held a variety of roles in software testing, research, and marketing.

References

1. IDC, “IDC FutureScape: Worldwide Mobility 2017 Predictions,” November 2016, http://www.idc.com/getdoc.jsp?containerId=US41334316

2. Forrester Report, “The State of Mobile App Development: Few eBusiness Teams Keep Pace With Customer App Expectations,” March 23, 2015, https://www.forrester.com/report/The+State+Of+Mobile+App+Development/-/E-RES120267

3. OpenSignal “Android Fragmentation Visualized” report, August 2015, https://opensignal.com/reports/2015/08/android-fragmentation/

Just Released – Windows Developer Evaluation Virtual Machines – February 2017 Build


We’re releasing the February 2017 edition of our evaluation Windows developer virtual machines (VMs) on Windows Dev Center. The VMs come in Hyper-V, Parallels, VirtualBox, and VMware flavors. The evaluation version will expire on 05/21/17.

The evaluation VMs contain:
• Windows 10 Enterprise Evaluation, Version 1607
• Visual Studio 2015 Community Update 3 (Build 14.0.25425.01)
• Windows developer SDK and tools (Build 14393)
• Microsoft Azure SDK for .NET (Build 2.9.6)
• Windows UWP samples (Feb 2017 Update)
• Bash on Ubuntu on Windows

If you don’t currently have a Windows 10 Pro license, you can get one from the Microsoft Store. If you just want to try out Windows 10 and UWP, use the free evaluation version of the VMs. The evaluation copies will expire after a pre-determined amount of time.

If you have feedback on the VMs, please provide it over at the Windows Developer Feedback UserVoice site.


How we are improving notifications in Team Services


Good communication is an essential ingredient to any successful development project. Whether the team is small or large, keeping everyone on the same page and informed as the project progresses helps reduce last minute surprises and ensures a smoother process overall.

Notifications, whether they arrive via email, Microsoft Teams, Slack, or some other system, push relevant information to recipients. Recipients don’t need to periodically check for new information; the information arrives when the recipient needs to be told something or when their action is required.

We have been working hard on features for Team Services that help ensure the right information is delivered to the right people at the right time. We are focused on …

  • Scale and performance: we made significant improvements to the overall performance of the pipeline. This includes faster processing of events and faster delivery of email and service hook notifications. This work was essential to ensuring notifications arrive in a timely manner, especially as the volume of activity increases and we add new and better ways for users to get notified.
  • Great out of the box experience: we want users to get notified about relevant activity in their projects, without them needing to do anything. This resulted in a set of new features including out of the box subscriptions and new delivery options for team subscriptions. These features will be enough for most users. If users or admins want to receive notifications about other activity, they can still do this via custom subscriptions.
  • Extensibility: delivering useful notifications for version control, Agile, build and release is an obvious requirement, but Team Services is more than just the parts we build. Team Services is a platform that has a great set of core services, but is then extended through third-party extensions and integrations (see the Visual Studio Marketplace). We overhauled our notification services to be extensible, and many of our own services now plug into the same extensibility points that third parties will soon be able to plug into. This will enable third-party extensions and integrations to publish events that can be delivered through various channels including email, Teams, Slack, and more.

New features

Here is a quick recap of some of the recent features we delivered. You may recognize some of this text from recent release notes, which I generously borrowed from. Note: the features discussed here are available now for Team Services and in Team Foundation Server 2017 Update 1 or later. You can learn more about notifications in earlier versions of TFS here.

Better configuration UI

It is now easier to manage what notifications you and your teams receive. Users now have their own account-level experience for managing notification settings (available via Notification settings in your profile menu).

This view lets users manage personal subscriptions they have created. It also shows subscriptions created by team administrators for all projects in the account.

Learn more about managing personal notification settings.

New delivery options for team subscriptions

Team administrators can manage subscriptions shared by all members of the team in the Notifications hub under team settings. Two new delivery options are now available when configuring a team subscription: send the email notification to a specific email address (like the team’s distribution list), or send the notification to only team members associated with the activity.

Learn more about managing team subscriptions.

Out of the box notifications (preview)

Prior to this feature, users would need to manually opt in to any notifications they wanted to receive. With out-of-the-box notifications (which currently must be enabled by an account administrator), users automatically receive notifications for events such as:

  • The user is assigned a work item
  • The user is added or removed as a reviewer to a pull request
  • The user has a pull request that is updated
  • The user has a build that completes

These subscriptions appear in the new user notifications experience, and users can easily choose to opt out of any of them.

To enable this feature for the account, an account administrator needs to go to Preview features under your profile menu, select From this account from the drop-down, then toggle on the Out of the box notifications feature.

Learn more about out of the box notifications.

Delivering certain notifications to all team members (preview)

Working with pull requests that are assigned to teams is getting a lot easier. When a PR is created or updated, email alerts will now be sent to all members of all teams that are assigned to the PR. This feature is in preview and requires an account admin to enable it from the Preview features panel (available under the profile menu). After selecting for this account, switch on the Team expansion for notifications feature.

Other stuff we are working on

We are already working on (or about to start on) a number of new features, not all of which I can disclose right now. Here are a few features we are working on:

  • Multiple recipients on the same email. Today when a particular email notification is sent to multiple users, each user receives an individual email. This makes it hard to know who else received the email and to have a conversation with the other users about the email’s contents. With this feature, a single email will be sent with each of the users on the TO line.
  • Improved custom subscription experience. It is not always easy to create a custom subscription today, especially one involving multiple conditions, with ANDs and ORs. We are planning to rebuild this experience to make it much simpler, but just as powerful (for those users that want the power).

 

Your feedback is important to us. Use the “send a smile” feature, comment on this post, or send to vssnotifications@microsoft.com.

Will Smythe

Always Set Impossible Goals


Impossible goals are like dreams: we always pursue them, hoping they will come true. In one of my recent experiences, I managed a feature crew, C++ Fast Project Load (FPL), a team of exceptional people. Personally, I’m very passionate about performance, as I believe it makes our interaction with our beloved machines much more satisfying.

As large codebases grow over time, they tend to suffer from slow load and build performance in Visual Studio. Most of the root causes originated in our project system architecture. For years, we made decent improvements (percentages), just to see them wiped out by the codebases’ steady growth. Hardware improvements like better CPUs or even SSDs helped, but they still didn’t make a huge difference.

This problem required an “Impossible Goal”, so we decided to aim very high, improving the solution load time by 10x! Crazy, no? Especially because for years, we were barely making small improvements. Goal set? Checked, now go, go, go!

A few years back, while working on the Visual Studio Graphics Debugger, I faced a similar problem: loading huge capture files, which needed rendering (sometimes under the REF driver, very slow) and took a long time, especially for complex graphics applications. At that time, I employed a caching mechanism that allowed us to scale and reuse previous computations, dramatically reducing reload time and memory consumption.

For FPL, about half a year ago, we started following a similar strategy. Luckily, we had a nice jump-start from a prototype we created three years ago, which we didn’t have time to finish back then.

This time, all the stars were finally aligned, and we were able to dedicate valuable resources to making this happen. It was an extraordinary ride, as we had to deliver, at a very fast pace, a feature that could potentially break a lot of functionality and whose merit was simply performance gains.

We started playing with very large solutions, establishing a good baseline. We had access to great real-world solutions (not always easy to find, given the IP constraints) along with our internal and generated solutions. We liked to stress the size beyond the original design limits (500 projects). This time we pushed toward an “Impossible Goal” (10x) for a good experience.

The main goals were to improve solution load times and drastically reduce memory consumption. In the original design, we always loaded projects as if we were seeing them for the first time, evaluated their values, and held them in memory, ready to be edited. Telemetry data showed that the last part was totally unnecessary, as most user scenarios were “read-only”. This was the first big requirement: to design a “read-only” project system capable of serving the needed information to the Visual Studio components that constantly query it (design-time tools, IntelliSense, extensions). The second requirement was to ensure we reuse, as much as possible, the previous loads.

We moved the “real” project load and evaluation into an out-of-proc service that uses SQLite to store the data and serve it on demand. This also gave us a great opportunity to parallelize the project loading, which in itself provided great performance improvements. The move out of process added another great benefit: reducing the memory footprint of the Visual Studio process, easily by hundreds of MB for medium-size solutions and even into the GB range for huge ones (solutions with 2,000–3,000 projects). This didn’t mean we just moved the memory usage elsewhere; we relied on the SQLite store, and we no longer had to load the heavy object model behind MSBuild.


We made incremental progress, and we used our customers’ feedback from pre-releases to tune and improve the solution. The first project type that we enabled was Desktop, as it was the dominant type, followed by the CLI project type. Project types that are not yet supported will be “fully” loaded as in earlier releases, so they will function just fine, but without the benefit of FPL.

It is fascinating how you can find accidentally introduced N^2 algorithms in places where the original design did not account for a potentially large load. They were small relative to the original long load times, but once we added the caching layers, they tended to get magnified. We fixed several of them, and that improved performance even more. We also spent significant time trying to reduce the size of high-count objects in memory, mostly in the internal representation of solution items.

From a usability point of view, we continue to allow users to edit their projects. As soon as they try to “edit”, we seamlessly load the real MSBuild-based project and delegate to it, allowing the user to make changes and save them.

We are not done yet, as we still have a lot of ground to cover. From customer feedback, we learned that we need to harden the feature so that it maintains the cache even if the timestamps on disk change, as long as the content is the same (common cases: git branch switching, CMake regeneration).


Impossible goals are like magic guidelines, giving you long-term direction and allowing you to break the molds that, let’s be fair, shackle our minds into preexisting solutions. Dream big, and pursue it! This proved a great strategy, because it allowed us to explore out-of-the-box paths, and ultimately it generated marvelous results. Don’t expect instant gratification; it takes significant time to achieve big things. However, always aim very high, as it is worth it when you look back and see how close you are to what was once an impossible dream.


Announcing general availability of Upgrade Readiness


We are pleased to announce the general availability of Upgrade Readiness, one of the solutions in the Windows Analytics suite. Upgrade Readiness helps you plan and manage the Windows upgrade process from beginning to end and enables you to adopt new Windows releases more quickly. With new Windows versions being released multiple times a year, ensuring application and driver compatibility on an ongoing basis is key to adopting new Windows versions as they are released. Upgrade Readiness not only supports upgrade management from Windows 7 and Windows 8.1 to Windows 10, but also supports Windows 10 upgrades in the Windows as a service model.

Upgrade Readiness helps you accelerate your move to Windows 10 by providing:

  • A visual workflow that guides you from pilot to production
  • Detailed computer and application inventory
  • Powerful computer-level search and drill-downs
  • Guidance and insights into application and driver compatibility issues with suggested fixes
  • Data-driven application rationalization tools
  • Application usage information that allows targeted validation and workflow to track validation progress and decisions
  • Data export to commonly used software deployment tools including System Center Configuration Manager

Here are some resources where you can learn more:

Windows analytics blog post

Upgrade Readiness documentation on TechNet

Evan Hissey
Program Manager
Microsoft System Center

 

 

Fast acquisition of vswhere


I introduced vswhere last week as an easy means to locate Visual Studio 2017 and newer, along with other products installed with our new installer, which provides faster downloads and installs – even for full installs (which have roughly doubled in size with lots of new third-party content).

vswhere was designed to be a fast, small, single-file executable you could download and even redistribute in a build pipeline or for other uses. To make it easy to acquire, I’ve published a NuGet package and made vswhere available via Chocolatey.

choco install vswhere
vswhere -latest -products * -requires Microsoft.Component.MSBuild -property installationPath

You might notice a few other surprises in that command line I’ve implemented for our first packaged release of 1.0.40.

  • You can pass a single “*” to -products to search all installed product instances. Note that the asterisk is not a wildcard that can be used for pattern matching.
  • You can specify a single property to return only that property value. In PowerShell and some other script environments that makes it very easy to capture into a variable.

You can also install using the Chocolatey package provider for PowerShell, but a bug in the provider means vswhere is not put in your PATH.

See our wiki for more information.

Azure Brings big data, analytics, and visualization capabilities to U.S. Government


To further our commitment to providing the latest in cloud innovation for government customers, we’re excited to announce the general availability of HDInsight and Power BI Pro in Microsoft Cloud for Government. HDInsight and Power BI bring exciting new capabilities to Azure Government that enable organizations to manage, analyze, and visualize large quantities of data. HDInsight unlocks the ability to build data and machine learning applications that run on Apache Spark and Hadoop. Power BI allows for the aggregation and visualization of data with easy-to-operate dashboard functionality.

We are also announcing a preview of Cognitive Services in Azure Government. We have enabled scenarios such as audio and text translation into other languages as well as facial (gender and age) and emotion recognition with Computer Vision and Emotion. If you’re interested in participating in the Azure Government Cognitive Services preview, please contact azgovfeedback@microsoft.com for more information.

With these capabilities working today, we can take data and derive insight in minutes.  Here’s a video example of these capabilities working together. In this demo, we leveraged HDInsight, Power BI and Machine Learning along with our Cognitive Services (available in preview for Azure Government) to show how you can easily build a solution to translate and analyze text and visualize the results.

Some partners are leveraging these capabilities to provide real-time dashboards for their solutions. One of them is OSIsoft, which provides business solutions that connect sensor-based data, operations, and people to enable real-time intelligence for their customers. Prabal Acharyya, WW Director of IoT Analytics at OSIsoft, expanded on Power BI’s value, saying:

“Data scientists in U.S. Government spend inordinate amounts of time each day manually scrubbing terabytes of operational data for advanced analytics and business intelligence”, says Acharyya, “OSIsoft is pleased to partner with Microsoft to deliver PI Integrator for Microsoft Azure on Microsoft U.S. Government Cloud with free and fluid access to streaming Power BI-ready data, context & insights to build Innovative Gov solutions”

Azure HDInsight

HDInsight is the only fully managed cloud Hadoop offering that provides optimized open source analytic clusters for Spark, Hive, MapReduce, HBase, Storm, Kafka, and R Server, backed by a 99.9% SLA. Each of these big data technologies and ISV applications is easily deployable as a managed cluster with enterprise-level security and monitoring.

HDInsight brings Big Data to Azure Government and broadens the landscape for building powerful data analysis solutions. Examples include:

  • Deploy a Big Data analysis cluster in minutes. No upfront costs, get started immediately.
  • Enable streaming and processing of large data sets in real time using Kafka, Storm, and Spark for HDInsight.
  • Build Machine Learning capabilities with Spark and R Server
  • Build intelligent applications that leverage big data to deliver personalized experiences

If you’re looking to get started creating powerful solutions with HDInsight for Azure Government, log in to the Azure portal or sign up for a trial.

Power BI Pro for U.S. Government

Power BI brings your Big Data solutions to life with live dashboards, interactive reports, and compelling visualizations. Power BI connects to a broad range of data wherever it lives and enables anyone to visualize and analyze data with greater speed, efficiency, and understanding.


Power BI Pro for Microsoft Cloud for Government includes:

  • Power BI service is a cloud-based business analytics service that gives you a single view of your most critical data.
  • Power BI Desktop puts visual analytics at your fingertips with intuitive report authoring; drag-and-drop to place content exactly where you want it on the flexible and fluid canvas, and quickly discover patterns as you explore a single unified view of linked, interactive visualizations.
  • Power BI Mobile helps you stay connected to your data from anywhere, anytime, and gives you a 360° view of your organization’s data on the go, right at your fingertips.

Want to get started? Sign up for Power BI Pro for Government

How Power BI Embedded Helps MB3M Provide A More Dynamic Reporting Solution To Its Customers

MB3M is a French software company founded in 2007 that builds applications for travel agencies. Its flagship product, MB3M Travel, covers a vast array of CRM, financial, and accounting needs of travel agencies. Travel managers use it to analyze travel expenses and respond to their customer needs. Learn how they upgraded their reporting capabilities and empowered developers to create interactive reports with Power BI.

Improved troubleshooting in Azure Stream Analytics with diagnostic logs


We are announcing the much-awaited public preview of diagnostic logs for Azure Stream Analytics through integrations with Azure Monitoring. You can now examine late or malformed data that causes unexpected behaviors. This helps remediate errors caused by data that does not conform to the expectations of the query.

Diagnostic logs provide rich insights into all operations associated with a streaming job. They are turned off by default and can be enabled in the “Diagnostic logs” blade under “Monitoring”. These are different from Activity logs that are always enabled and provide details on management operations performed.


Examples of data handling errors that diagnostic logs can help with include:

  • Data conversion and serialization errors in cases of schema mismatch.
  • Incompatible types including constraints such as allow null and duplicates.
  • Truncation of strings and issues with precision during conversion.
  • Expression evaluation errors such as divide by zero, overflow etc.

An example of non-conforming data being written to Azure storage is illustrated below:

{
    Diagnostic: "Encountered error trying to write 3 events: …",
    Timestamp: "7/25/2015 12:27:44Z",
    Source: "Output1",
    Output: "Output1",
    Error:
    {
        Type: "System.InvalidOperationException",
        Description: "The given value “hello world” of type string from the data source cannot be converted to type decimal of the specified target column [Amount].",
    },
    EventData:
    {
        SomeValue: "hello world",
        Count: 1
    }
}

Errors are sampled by error type and source as shown above.

Immediate access to the actual data that causes errors enables you to either quickly remediate problems or ignore the non-conforming data to make progress.

Persisting event data and operational metadata (such as occurrence time and occurrence count) in Azure Storage artifacts enables easier diagnosis and faster troubleshooting of issues. This data can also be analyzed offline using Azure Log Analytics. Routing this data to EventHub makes it possible to set up a Stream Analytics job to monitor another Stream Analytics job!

It should be noted that the usage of services such as Azure Storage, EventHub, and Log Analytics for analyzing non-conforming data will be charged based on the pricing model for those services.

We are excited for you to try out diagnostic logs. Detailed steps on using this capability can be found on the documentation page.

SharePoint Server 2016 in Azure infrastructure services


To take advantage of SharePoint’s collaboration features, Microsoft recommends SharePoint Online in Office 365. If that is not the best option for you right now, you should use SharePoint Server 2016. However, building a SharePoint Server 2016 farm in Microsoft Azure infrastructure services requires additional planning considerations and deployment steps.

For a defined path from evaluation to successful deployment, see SharePoint Server 2016 in Microsoft Azure. This new content set reduces the time it takes for you to design and deploy dev/test, staging, production, or disaster recovery SharePoint Server 2016 farms in Azure.

There are step-by-step instructions for two prescriptive dev/test environments:

1. A single-server farm running in Azure for demonstration, evaluation, or application testing.

2. An intranet farm running in Azure to experiment with client access and administration in a simulated Azure IaaS hybrid configuration.

When you are ready to begin planning the Azure environment for your SharePoint Server 2016 farm, see Designing a SharePoint Server 2016 farm in Azure. A table-based, step-by-step approach assures that you are collecting the right set of interrelated settings for the networking, storage, and compute elements of Azure infrastructure services.

When you are ready to deploy, see Deploying SharePoint Server 2016 with SQL Server AlwaysOn Availability Groups in Azure to build out a high-availability configuration.

A table-based, phased approach assures that you are creating the Azure infrastructure with the correct settings, which you can adapt or expand for your business needs.

To assist you in creating the Azure infrastructure and configuring the servers of the high availability SharePoint Server 2016 farm, use the SharePoint Server 2016 High Availability Farm in Azure Deployment Kit, a ZIP file in the TechNet Gallery that contains:

  • Microsoft Visio and Microsoft PowerPoint files with the figures for the two dev/test environments and the high-availability deployment

  • All the PowerShell command blocks to create and configure the high availability SharePoint Server 2016 farm in Azure

  • A Microsoft Excel configuration workbook that generates the PowerShell commands to create the SharePoint Server 2016 high availability farm in Azure, based on your custom settings

Comparing SELECT..INTO and CTAS use cases in Azure SQL Data Warehouse


The team recently introduced SELECT..INTO to the SQL language of Azure SQL Data Warehouse. SELECT..INTO enables you to create and populate a new table based on the result set of a SELECT statement, so users now have two options for creating and populating a table with a single statement. This post summarizes the usage scenarios for both CTAS and SELECT..INTO and the differences between the two approaches.

Look at the example of SELECT..INTO below:

SELECT *
INTO [dbo].[FactInternetSales_new]
FROM [dbo].[FactInternetSales];

The result of this query is a new round-robin distributed clustered columnstore table called dbo.FactInternetSales_new. All done and dusted in three lines of code. Great!

Let’s now contrast this with the corresponding CTAS statement below:

CREATE TABLE [dbo].[FactInternetSales_new]
WITH
( DISTRIBUTION = HASH(Product_key)
, HEAP
)
AS
SELECT *
FROM [dbo].[FactInternetSales];

The result of this query is a new hash-distributed heap table called dbo.FactInternetSales_new. Note that with CTAS you have full control over the distribution key and the organization of the table; the code is more verbose as a result. With SELECT..INTO the code is significantly reduced and the syntax might be more familiar.

With that said, there are some important differences to be mindful of when using SELECT..INTO. There are no options to control the table organization or the distribution method: SELECT..INTO always creates a round-robin distributed clustered columnstore table. It is also worth noting that there is a small difference in behavior compared with SQL Server and SQL Database, where SELECT..INTO creates a heap table (the default table structure there). In SQL Data Warehouse, the default table type is a clustered columnstore index, so we follow the pattern of creating the default table type.
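To make these defaults explicit, the SELECT..INTO example above is effectively the same as writing the following CTAS:

-- CTAS spelling out the defaults that SELECT..INTO applies implicitly
CREATE TABLE [dbo].[FactInternetSales_new]
WITH
( DISTRIBUTION = ROUND_ROBIN
, CLUSTERED COLUMNSTORE INDEX
)
AS
SELECT *
FROM [dbo].[FactInternetSales];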

Below is a summary table of the differences between CTAS and SELECT..INTO:

                   CTAS                                          SELECT..INTO
Distribution key   Any (full control)                            ROUND_ROBIN
Table type         Any (full control)                            CLUSTERED COLUMNSTORE INDEX
Verbosity          Higher (WITH section required)                Lower (defaults are fixed, so no additional coding)
Familiarity        Lower (newer syntax to Microsoft customers)   Higher (very familiar syntax to Microsoft customers)

 

Despite these slight differences and variations, there are still several reasons for including SELECT..INTO in your code.

In my mind there are three primary reasons:

  1. Large code migration projects
  2. Target object is a round robin clustered columnstore index
  3. Simple cloning of a table.

When customers migrate to SQL Data Warehouse, they are often migrating existing solutions to the platform. In these cases, the first order of business is to get the existing solution up and running on SQL Data Warehouse, and SELECT..INTO may well be good enough. The second scenario is the compact-code scenario: if a round-robin clustered columnstore table is the desired outcome, SELECT..INTO is much more compact syntactically. SELECT..INTO can also be used to create simple sandbox tables that mirror the definition of the source table. Even empty tables can be created when a WHERE 1=2 predicate is used to ensure no rows are moved – a useful technique for creating empty tables when implementing partition switching patterns, as sketched below.
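For example, a minimal sketch of that empty-table cloning pattern, reusing the sample table from earlier (the _empty table name is purely illustrative):

-- Creates an empty table with the same column definitions as the source;
-- WHERE 1 = 2 guarantees that no rows are copied.
SELECT *
INTO [dbo].[FactInternetSales_empty]
FROM [dbo].[FactInternetSales]
WHERE 1 = 2;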

Finally, customers may not even realize they require SELECT..INTO support. Many customers use off-the-shelf ISV solutions that require support for SELECT..INTO. A good example might be a rollup business intelligence tool that generates its own summary tables using SELECT..INTO on the fly. In this case, customers may be issuing SELECT..INTO queries without even realizing it.

For more information please refer to the product documentation for CTAS where the main differences are captured.


Announcing Microsoft Azure Storage Explorer 0.8.9


We released Microsoft Azure Storage Explorer 0.8.9 last week. You can download it from http://storageexplorer.com/.


Recent new features in the past two releases:

  • Automatically download the latest version when it is available
  • Create, manage, and promote blob snapshots
  • Sign in to sovereign clouds like Azure China, Azure Germany, and Azure US Government
  • Zoom In, Zoom Out, and Reset Zoom from View menu

Try it out and send us feedback via the links in the bottom left corner of the app.

Power BI Desktop March Feature Summary

We have a very exciting Power BI Desktop update for you this month! We have several highly-requested features in this month’s release, including textbox font color, several visual improvements, and previews of three highly requested features: report theming, a new matrix visual with major experience updates, and a numeric range slicer.

Launching online training and certification for Azure SQL Data Warehouse


Azure SQL Data Warehouse (SQL DW) is a SQL-based, fully managed, petabyte-scale cloud solution for data warehousing. SQL Data Warehouse is highly elastic, enabling you to provision in minutes and scale capacity in seconds. You can scale compute and storage independently, allowing you to burst compute for complex analytical workloads or scale down your warehouse for archival scenarios, and pay based on what you’re using instead of being locked into predefined cluster configurations.

We are pleased to announce that Azure SQL Data Warehouse training is now available online via the edX training portal. In this computer science course, you will learn how to deploy, design, and load data using Microsoft's Azure SQL Data Warehouse, or SQL DW. You'll learn about data distribution, compressed in-memory indexes, PolyBase for Big Data, and elastic scale.

Course Syllabus

Module 1: Key Concepts of MPP (Massively Parallel Processing) Technology and SQL Data Warehouse
This module makes a case for deploying a data warehouse in the cloud, introduces massively parallel processing and explores the components of Azure SQL Data Warehouse.

Module 2: Provisioning a SQL Data Warehouse
This module introduces the tasks needed to provision Azure SQL Data Warehouse, the tools used to connect to and manage the data warehouse and key querying options.

Module 3: Designing Tables and Loading Data
This module covers data distribution in an MPP data warehouse, creating tables and loading data.

Module 4: Integrating SQL DW in a Big Data Solution
This module introduces PolyBase for accessing big data; managing, protecting, and securing your Azure SQL Data Warehouse; and integrating your Azure SQL Data Warehouse into a big data solution.

Final Exam
The final exam accounts for 30% of your grade and will be combined with the weekly quizzes to determine your overall score. You must achieve an overall score of 70% or higher to pass this course and earn a certificate.

Note: To complete the hands-on elements in this course, you will require an Azure subscription. You can sign up for a free Azure trial subscription (a valid credit card is required for verification, but you will not be charged for Azure services).  Note that the free trial is not available in all regions. It is possible to complete the course and earn a certificate without completing the hands-on practices.

Exclusive free trial

We’re giving all our customers free access to Azure SQL Data Warehouse for a whole month! More information is available on the SQL DW Free Trial page. All you need to do is sign up with your Azure subscription details before 30th June 2017.

Azure Subscription

If you don’t have an Azure subscription, you can sign up for free. Provision the industry-leading elastic-scale data warehouse for yourself in minutes and experience how easy it is to go from ‘just data’ to ‘business insights’. Load your own data or try out a pre-loaded sample data set, and run queries with compute power of up to 1,000 DWU (Data Warehouse Units) and 12 TB of storage to experience this fully managed cloud-based service for an entire month for free.

Learn more

What is Azure SQL Data Warehouse?

What is Azure Data Lake Store?

SQL Data Warehouse best practices

Load Data into SQL Data Warehouse

MSDN forum

Stack Overflow forum

New Azure Storage JavaScript client library for browsers - Preview


Today we are announcing our newest library: Azure Storage Client Library for JavaScript. The demand for the Azure Storage Client Library for Node.js, as well as your feedback, has encouraged us to work on a browser-compatible JavaScript library to enable web development scenarios with Azure Storage. With that, we are now releasing the preview of Azure Storage JavaScript Client Library for Browsers.

Enables web development scenarios

The JavaScript Client Library for Azure Storage enables many web development scenarios using storage services like Blob, Table, Queue, and File, and is compatible with modern browsers – be it a web-based gaming experience where you store state information in the Table service, a mobile app uploading photos to a Blob account, or an entire website backed with dynamic data stored in Azure Storage.

As part of this release, we have also reduced the footprint by packaging each of the service APIs in a separate JavaScript file. For instance, a developer who needs access to Blob storage only needs to include the Blob service script rather than the entire library.

Full service coverage

The new JavaScript Client Library for Browsers supports all the storage features available in the latest REST API version 2016-05-31 since it is built with Browserify using the Azure Storage Client Library for Node.js. All the service features you would find in our Node.js library are supported. You can also use the existing API surface, and the Node.js Reference API documents to build your app!

Built with Browserify

Browsers today don’t support the require method, which is essential in every Node.js application. Hence, including JavaScript written for Node.js won’t work in browsers. One of the popular solutions to this problem is Browserify. The Browserify tool bundles your required dependencies into a single JS file for you to use in web applications. It is as simple as installing Browserify and running browserify node.js -o browser.js, and you are set. However, we have already done this for you – simply download the JavaScript Client Library.

Recommended development practices

We highly recommend use of SAS tokens to authenticate with Azure Storage since the JavaScript Client Library will expose the authentication token to the user in the browser. A SAS token with limited scope and time is highly recommended. In an ideal web application it is expected that the backend application will authenticate users when they log on, and will then provide a SAS token to the client for authorizing access to the Storage account. This removes the need to authenticate using an account key. Check out the Azure Function sample in our Github repository that generates a SAS token upon an HTTP POST request.

Use of the stream APIs is highly recommended because the browser sandbox blocks users from accessing the local filesystem. This makes APIs like getBlobToLocalFile and createBlockBlobFromLocalFile unusable in browsers. See the samples in the links below, which use the createBlockBlobFromStream API instead.

Sample usage

Once you have a web app that can generate a limited scope SAS Token, the rest is easy! Download the JavaScript files from the repository on Github and include in your code.

Here is a simple sample that can upload a blob from a given text:


1. Insert the script tags for the client library in your HTML code. Make sure the JavaScript files are located in the same folder as your page.

2. Let’s now add a few items to the page to initiate the transfer: a text input and an upload button inside the BODY tag. Notice that the button calls the uploadBlobFromText method when clicked. We will define this method in the next step.

3. So far, we have included the client library and added the HTML code to show the user a text input and a button to initiate the transfer. When the user clicks on the upload button, uploadBlobFromText will be called. Let’s define that now:
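As a minimal sketch (the container name, blob name, element ID, and the blobUri/sasToken values are illustrative, and the exact AzureStorage entry point may vary between preview builds – check the samples shipped with the library), uploadBlobFromText could look something like this:

// Assumes the client library scripts from step 1 expose the global AzureStorage object,
// and that the page from step 2 has a text input with id "text-to-upload".
var blobUri = 'https://<youraccount>.blob.core.windows.net';   // illustrative account URI
var sasToken = '<sas-token-from-your-backend>';                // illustrative limited-scope SAS token

function uploadBlobFromText() {
    // Create a Blob service object authorized with the SAS token.
    var blobService = AzureStorage.Blob.createBlobServiceWithSas(blobUri, sasToken);

    var text = document.getElementById('text-to-upload').value;

    // Upload the text as a block blob into an existing container.
    blobService.createBlockBlobFromText('mycontainer', 'myblob.txt', text, function (error, result) {
        if (error) {
            console.error('Upload failed', error);
        } else {
            console.log('Uploaded blob: ' + result.name);
        }
    });
}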

Of course, it is not that common to upload blobs from text. See the following samples for uploading from stream as well as a sample for progress tracking.


•    JavaScript Sample for Blob
•    JavaScript Sample for Queue
•    JavaScript Sample for Table
•    JavaScript Sample for File 

Share

Finally, join our Slack channel to share with us your scenarios, issues, or anything, really. We’ll be there to help!

Moving to the cloud with confidence—Deutsche Börse Group chooses Office 365


Today’s post was written by Ron Markezich, corporate vice president for Microsoft.


Exciting news for global financial services providers in highly regulated countries looking to modernize their business operations with a move to the cloud: Deutsche Börse Group (DBG) is taking a thought leadership approach by moving to Office 365 to digitally transform the way they work. Based in Frankfurt, DBG is a leading stock exchange in Germany and one of the largest exchange organizations worldwide, with offices in more than 20 locations.

To drive business agility in a digital world, DBG chose Microsoft Office 365 as the foundation for a digital transformation to enhance their ability to grow with the demands of the market, and make their employees more productive.

According to Frank Fischer, chief security officer and head of security of Deutsche Börse Group:

“Microsoft satisfied DBG’s concerns about how it controls access to data and change management of automated processes, and did so through a deep examination of the features, such as Customer Lockbox and the capabilities of Office 365. We were also impressed by Microsoft’s transparency, audit measures, regulatory compliance support and overall willingness to meet our needs—performed as part of Microsoft’s Financial Services Compliance Program.”

Microsoft’s innovative approach to helping our customers move to the Microsoft Cloud while also meeting their regulatory obligations, including our Financial Services Compliance Program, gave DBG confidence in this move. Following our work with DBG, and by adapting our contract terms, Microsoft is in a great position to address industry regulations for all global financial services customers. Germany has one of the most sophisticated regulatory systems in Europe, and having Microsoft Cloud customers in Germany shows how robust Office 365 is in meeting financial regulatory obligations.

—Ron Markezich

