Channel: TechNet Technology News

In Case You Missed It – this week in Windows Developer


Did you have a busy week this week? That’s ok, so did we.

In case you missed some of our updates, here is our weekly #ICYMI wrap up, featuring all of the blogs in a TL;DR format for your reading pleasure.

The UWP Community Toolkit Update Version 1.1

This week we released the first update to the UWP Community Toolkit based on your feedback. The update includes joining the .NET foundation, several new features, a sample app and even some documentation to get you started. Click the tweet below to read more!

Introducing the Windows Device Portal Wrapper

“Device Portal is a small web server baked into every Windows device that you can enable once you’ve turned on Developer Mode. We built Device Portal as the starting point for a new generation of diagnostic tooling for Windows – tools that work on all your devices, not just your desktop.”

Pretty cool, right? We can’t wait to see how developers start using the wrapper project and we’re eager to hear your feedback. Click the tweet below to learn more.

And that’s it! Stay tuned for more updates next week, and as always, please feel free to reach out to us on Twitter with questions and comments.

Have a great weekend!

Download Visual Studio to get started!

The Windows team would love to hear your feedback.  Please keep the feedback coming using our Windows Developer UserVoice site. If you have a direct bug, please use the Windows Feedback tool built directly into Windows 10.


Ignite Trip Report–Atlanta GA, Sept 26-30


Last week I was at the Ignite conference in Atlanta and I was responsible for running the Developer Tools track. Let me tell you it was a FUN and crazy week. Here’s a tip when you run a track (content or otherwise): Wear tennis shoes!

Highlights

    1. The dev track general session with Scott Hanselman drew over 1,200 developers into the room, was live streamed with now over 16K views, and generated a lot of buzz on Twitter
    2. Our track scored highest of the conference! We have a lot of great speakers and content :-)
    3. Our booth's location was fantastic, centered directly under the huge “Developer” section in the middle of the Microsoft Showcase, with a lot of foot traffic
    4. Our developer audience made up roughly 10% of the 23K attendees, representing our biggest fans of Microsoft developer tools, particularly Visual Studio & .NET

      General session with Scott Hanselman

We had 15 jam-packed demos and Scott was brilliant, hilarious and very entertaining while showing a whirlwind of practical demos across the Microsoft developer platform. It’s usually hard to please everyone with these overview sessions but he absolutely killed it. For the finale, Open Live Writer was published to the Microsoft Store using the Centennial Desktop App Converter.

      You can watch the session here:  Review the Microsoft application platform for developers

Breakout sessions – all on-demand

Our track consisted of 2 full pre-days, 25 breakouts, 6 theater sessions and 9 interactive labs. Most breakout rooms were filled to capacity and many people were turned away at the door and directed to overflow viewing areas. This is a testament to the popularity and relevance of the sessions and speakers. You can also head to ignite.microsoft.com to see all of the 718 (!) sessions, but I thought I would make it easy on you and put all the dev track sessions here if you missed them. I bolded my favorites. :-)
      1. Access data in .NET Core 1.0 with Entity Framework (Rowan Miller)
      2. Break out of the box with Python (Steve Dower)
      3. Build Angular 2 apps with TypeScript and Visual Studio Code (John Papa)
      4. Build cloud-ready apps that rock with open and flexible tools for Azure (Mikkel Mork Hegnhoj)
      5. Build connected Universal Windows Platform apps with .NET and Visual Studio (Daniel Jacobson)
      6. Build performance-obsessed mobile apps with JavaScript (Jordan Matthiesen)
      7. Create UI automated testing for iOS and Android Mobile Apps with Xamarin Test Cloud (James Montemagno)
      8. Develop, debug and deploy Containerized Applications with Docker (Glen Condron, Steve Lasker)
      9. Dig into C# and Visual Basic code-focused development with Visual Studio (Kasey Uhlenhuth)
      10. Dig into terabytes of your application logs with ad-hoc queries in Application Insights (Rahul Bagaria, Evgeny Ternovsky)
      11. Discuss cross-platform mobile development at Microsoft: Xamarin, Cordova, UWP and C++ (Panel) (John Papa, James Montemagno, Jordan Matthiesen, Ankit Asthana, Daniel Jacobson)
      12. Dive deep into ASP.NET Core 1.0 (Daniel Roth)
      13. Embrace DevOps for your next project with Visual Studio Team Services and HockeyApp (Donovan Brown, Joshua Weber)
      14. Explore cross-platform mobile development end-to-end with Xamarin (James Montemagno, Jason McGraw)
      15. Explore the new, cross-platform .NET Core 1.0 (Rich Lander)
      16. Explore web development with Microsoft ASP.NET Core 1.0 (Daniel Roth)
      17. Get an overview of the .NET Platform (Scott Hunter)
      18. Lead an autonomous DevOps team at Scale: a true story (Matthew Manela, Jose Rady Allende)
      19. Learn debugging tips and tricks for .NET Developers (Kaycee Anderson)
      20. Learn what’s new with Microsoft Visual C++ (Ankit Asthana, Kaycee Anderson)
      21. Manage modern enterprise applications with Microsoft Intune & HockeyApp (Thomas Dohmke, Clay Taylor)
      22. Maximize web development productivity with Visual Studio (Mads Kristensen)
      23. Monitor and diagnose web apps & services with Application Insights & SCOM (Mahesh Narayanan, Victor Mushkatin)
      24. Secure the enterprise with Microsoft Visual Studio Team Services (Rajesh Ramamurthy)
      25. Unlock the essential toolbox for production debugging of .NET Web Applications (Ido Flatow)

      And a few pictures…

[Event photos]

      Thank you everyone for all your hard work making this event a success.

      Enjoy!

      New Home for In-Box DSC Resources


      We have just released a new DSC module called PSDscResources.
      The aim of this module is to serve as the new home of the in-box PSDesiredStateConfiguration module open-sourced on GitHub.
      This allows us to accept contributions for the in-box resources from our wonderful DSC community and release these resources more frequently outside of the Windows Management Framework (WMF).

      The current release of PSDscResources contains 5 of the resources available in the in-box module:

      • Group
      • Service
      • User
      • WindowsOptionalFeature
      • WindowsPackageCab

      These resources are a combination of those in-box as well as community contributions from our experimental xPSDesiredStateConfiguration module on GitHub.
      These 5 resources have also recently been updated to meet the DSC Resource Kit High Quality Resource Module (HQRM) guidelines.

      Resources not currently included should not be affected and can still load from the in-box PSDesiredStateConfiguration module.
      As time goes on resources will gradually be added to PSDscResources.

      Because PSDscResources overwrites in-box resources, it is only available for WMF 5.1.
      Many of the resource updates provided here are also included in the xPSDesiredStateConfiguration module which is still compatible with WMF 4 and WMF 5 (though this module is not supported and may be removed in the future).

      To update your in-box resources to the newest versions provided by PSDscResources, first install PSDscResources from the PowerShell Gallery:

      Install-Module PSDscResources

      Then, simply add this line to your DSC configuration:

Import-DscResource -ModuleName PSDscResources
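
For context, here is a minimal sketch of a configuration that uses the Service resource from PSDscResources to keep the Print Spooler service running. The configuration name and output path are just examples:

Configuration SpoolerRunning
{
    # Pull the Service resource from PSDscResources instead of the in-box module
    Import-DscResource -ModuleName PSDscResources

    Node 'localhost'
    {
        Service Spooler
        {
            Name  = 'Spooler'
            State = 'Running'
        }
    }
}

# Compile the MOF and apply it
SpoolerRunning -OutputPath 'C:\Dsc\SpoolerRunning'
Start-DscConfiguration -Path 'C:\Dsc\SpoolerRunning' -Wait -Verbose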

      How to Contribute

      With this module on GitHub, you can now contribute to the future in-box DSC resources!
      There are several different ways you can help.
      You can fix bugs, add tests, improve documentation, and open issues.
      See our contributing guide for more info.

      We greatly value every effort our community puts into improving our resources!

      Questions, Comments?

      If you’re looking into using PowerShell DSC, have questions or issues with a current resource, or would like a new resource, let us know in the comments below, on Twitter (@PowerShell_Team), or by creating an issue in the main DSC Resource Kit repository on GitHub.

      Katie Keim
      Software Engineer
      PowerShell Team
      @katiedsc (Twitter)
      @kwirkykat (GitHub)


      How to reference an existing .NET Framework Project in an ASP.NET Core 1.0 Web App


      I had a reader send me a question yesterday. She basically wanted to use her existing .NET Framework libraries in an ASP.NET Core application, and it wasn't super clear how to do it.

      I have a quick question for you regarding asp.net core. We are rewriting our website using asp.net core, empty from the bottom up. We have 2 libraries written in .net 4.6 . One is our database model and repositories and the other is a project of internal utilities we use a lot. Unfortunately we cannot see how to reference these two projects in our .net core project.

It can be a little confusing. As I mentioned earlier this week, some people don't realize that ASP.NET Core 1.0 (that's the web framework bit) runs on either .NET Core or .NET Framework 4.6, aka "Full Framework."

      ASP.NET Core 1.0 runs on ASP.NET 4.6 nicely

      When you make a new web project in Visual Studio you see (today) this dialog. Note in the dropdown at the top you can select your minimum .NET Framework version. You can select 4.6.2, if you like, but I'll do 4.5.2 to be a little more compatible. It's up to you.

      File New Project

      This dialog could use clearer text and hopefully it will soon.

      • There's the regular ASP.NET Web Application at the top. That's ASP.NET 4.6 with MVC and Web API. It runs on the .NET Framework.
      • There's ASP.NET Core 1.0 running on .NET Core. That's cross platform. If you select that one you'll be able to run your app anywhere but you can't reference "Full" .NET Framework assemblies as they are just for Windows.  If you want to run anywhere you need to use .NET Standard APIs that will run anywhere.
      • There's ASP.NET Core 1.0 running on .NET Framework. That's the new ASP.NET Core 1.0 with unified MVC and Web API but running on the .NET Framework you run today on Windows.

      As we see in the diagram above, ASP.NET Core 1.0 is the new streamlined ASP.NET  that can run on top of both .NET Framework (Windows) and .NET Core (Mac/Windows/Linux).

I'll choose ASP.NET Core on .NET Framework and I'll see this in my Solution Explorer:

      Web App targeting .NET Framework 4.5.2

      I've got another DLL that I made with the regular File | New Project | Class Library.

      New Class Library

      Then I reference it the usual way with Add Reference and it looks like this in the References node in Solution Explorer. Note the icon differences.

      Adding ClassLibrary1 to the References Node in Solution Explorer

If we look in the project.json (be aware that this will change for the better in the future when project.json's functionality is merged with csproj and msbuild) you'll note that ClassLibrary1 isn't listed under the top level dependencies node, but as a framework-specific dependency like this:

{
  "dependencies": {
    "Microsoft.StuffAndThings": "1.0.0",
    "Microsoft.AspNetCore.Mvc": "1.0.1"
  },
  "frameworks": {
    "net452": {
      "dependencies": {
        "ClassLibrary1": {
          "target": "project"
        }
      }
    }
  }
}

Notice also that it's a "target": "project" dependency in this case, as I didn't build a NuGet package and reference that.

      Hope this helps!


      Sponsor: Big thanks to Telerik for sponsoring the blog this week! 60+ ASP.NET Core controls for every need. The most complete UI toolset for x-platform responsive web and cloud development. Try now 30 days for free!



      © 2016 Scott Hanselman. All rights reserved.
           

      Azure App Service improves Node.js and PHP developer experience


      In March 2015, Azure App Service entered general availability with the goal of making it easier for developers to do cool things in the cloud. This Platform as a Service (PaaS) for web and mobile developers has seen rapid growth with over 350K active customers and over one million active applications hosted on Azure. In addition to a great experience for .NET developers, it also includes support for the PHP, Node.js, Java and Python stacks as well as a number of open source web products. Today, we’re releasing a preview that introduces native Linux support for Node.js and PHP stacks.

App Service gives web and mobile developers a fully managed experience that takes away the effort of day to day management of the web server and operating system. To deliver that experience, we built on Microsoft’s unique differentiators with Windows Server. While PHP and Node.js have also been supported in App Service since the launch, we’ve heard loud and clear from some developers that having to deal with operating system compatibility quirks, like "path too long" errors with NPM, web.config files and page rendering pipelines, is too cumbersome. The preview now gives you the ability to choose Linux as an alternative to Windows as the base platform, making your web application run natively on Linux instead of Windows and thus making it easier for you to work directly with .htaccess files or avoid using modified extensions or code. This includes streamlined deployment abilities with deployment slots, custom domains, SSL configuration, continuous deployment and horizontal and vertical scaling.

App Service is used heavily by our customers for digital marketing solutions running content management systems (CMS). In fact, WordPress makes up over fifty percent of this usage, with another forty percent being other LAMP-stack CMSs such as Drupal and Joomla!. All of these require some tweaking to run on Windows. In some cases, plug-ins and extensions are not supported, which blocks deployments. With this preview, we have updated our marketplace instance of WordPress to run on Apache/Linux. We plan to have updates for Drupal and Joomla! in the future.

      Data solutions on App Service

      We are also working closely with web developers on improving your experience in App Service related to data solutions. Over the last few months, we’ve come a long way in our data solution portfolio for Web developers, including revamping our PHP client drivers for Azure SQL, a new version of the JDBC drivers, expanded support for Linux on our ODBC drivers, MongoDB protocol support in DocumentDB and an early technical preview of the new PHP on Linux SQL Server drivers. We will continue working on more data solutions that make it easier for web developers to bring great applications to market on Azure, whatever the language, stack and platform!

Azure offers many solutions for hosting MySQL. In August we announced MySQL in-app for quickly spinning up MySQL dev/test stacks on App Service, and we have a similar MySQL dev/test capability as part of the Linux preview.

      Getting started

      The preview of App Service on Linux is available today to all Azure customers. To get started, sign in or start a free trial and create an App Service instance. More information available in the App Service documentation.

      We would love to hear your feedback on this preview. Please visit our feedback page to get it in the hands of our team.

      Database collation support for Azure SQL Data Warehouse


      We’re excited to announce you can now change the default database collation from the Azure portal when you create a new Azure SQL Data Warehouse database. This new capability makes it even easier to create a new database using one of the 3800 supported database collations for SQL Data Warehouse.

      Collations provide the locale, code page, sort order and character sensitivity rules for character-based data types. Once chosen, all columns and expressions requiring collation information inherit the chosen collation from the database setting. The default inheritance can be overridden by explicitly stating a different collation for a character-based data type.
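
As an illustrative sketch of that override (the server name, credentials, table and column names below are placeholders, and this assumes the SqlServer PowerShell module for Invoke-Sqlcmd), a single column can declare a case-sensitive collation even when the database default is case insensitive:

$createTable = @"
CREATE TABLE dbo.Customers
(
    CustomerId   INT NOT NULL,
    -- This column overrides the database default collation
    CustomerName NVARCHAR(100) COLLATE SQL_Latin1_General_CP1_CS_AS NOT NULL
)
WITH (HEAP, DISTRIBUTION = ROUND_ROBIN);
"@

Invoke-Sqlcmd -ServerInstance 'yourserver.database.windows.net' -Database 'YourDataWarehouse' -Username 'youradmin' -Password 'yourpassword' -Query $createTable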

      Changing collation

To change the default collation, you simply update the Collation field in the provisioning experience.

      Provision

For example, if you wanted to change the default collation to case sensitive, you would simply change the Collation from SQL_Latin1_General_CP1_CI_AS to SQL_Latin1_General_CP1_CS_AS.

      collation

      Listing all supported collations

To list all of the collations supported in Azure SQL Data Warehouse, you can simply connect to the master database of your logical server and run the following command:

      SELECT * FROM sys.fn_helpcollations();

      This will return all of the supported collations for Azure SQL Data Warehouse. You can learn more about the sys.fn_helpcollations function on MSDN.

      Checking the current collation

      To check the current collation for the database, you can run the following T-SQL snippet:

      SELECT DATABASEPROPERTYEX(DB_NAME(), 'Collation') AS Collation;

      When passed ‘Collation’ as the property parameter, the DatabasePropertyEx function returns the current collation for the database specified. You can learn more about the DatabasePropertyEx function on MSDN.

      Learn more

Check out the many resources available for learning more about SQL Data Warehouse.

      Introducing the 2016 Future of Cloud Computing Survey - Join the cloud conversation


North Bridge, a leading venture capital firm, and Wikibon, a worldwide community of practitioners, technologists and consultants dedicated to improving technology adoption, have partnered to launch the sixth annual Future of Cloud Computing Survey.

      Microsoft participates in this survey regularly because your feedback on cloud computing is important to us and the industry. We want to hear about your plans for cloud, where it is making an impact across your organization, and what cloud technologies and capabilities you are prioritizing in your business.

      We invite you to be among the first to TAKE THE SURVEY and share it with your network. By doing so you will help all of us in the industry get a better view on what customers are doing with cloud computing and identify emerging trends.

      Results of the survey will be announced later this year and we will be back here to share the findings with you in November.

      We look forward to hearing from you!


      Congratulations to this month's Featured Data Stories Gallery submissions

      Last month we put out the call for education-themed submissions -- along with other topics that interest you -- to the Data Stories Gallery, and we got some fantastic entries! Congratulations to the grand winner and six runners-up. The inspiration topic for this month is "mystery": show us your best data story about haunted houses, Sasquatch sightings, and other spooky data!

      Announcing General Availability for Code Search


      Today, we are excited to announce the general availability of Code Search in Visual Studio Team Services. Code Search is available for Team Foundation Server “15” as well.

      What’s more? Code Search is free and included with a Basic user license.

With this release, Code Search now understands Java. Not only can you perform full text matching, but for C#, C, C++ and Java it also understands the structure of your code, allowing you to search for specific context, like class definitions, comments, properties, etc., across all your TFVC and Git projects. We’ll be adding support for additional languages in the future.

      Enabling Code Search for your VSTS account

Code Search is available as a free extension on the Visual Studio Team Services Marketplace. Click the install button on the extension description page and follow the instructions displayed to enable the feature for your account.

      Note that you need to be an account admin to install the feature. If you are not, then the install experience will allow you to request your account admin to install the feature.

      Installation of the extension triggers indexing of the source code in your account. Depending on the size of the code base, you may have to wait for some time for the index to get built.

      You can start searching for code using the search box on the top right corner or use the context menu from the code explorer.

      SearchBox

      Enabling Code Search on Team Foundation Server “15”

      Code Search is available for Team Foundation Server starting with TFS “15”. You can configure Code Search as part of the TFS Server configuration. For more details see Administer Search.

      Note that you need to be a TFS admin to configure Search as part of TFS.

      Installation of the Code Search extension triggers indexing of the source code in a collection. Installation can be initiated for all collections by a TFS admin during configuration of the Search feature or post configuration by Project Collection admins for their respective Collections. The latter can be achieved by navigating to the Marketplace from within your TFS instance. Depending on the size of the code base, you may have to wait for some time for the index to get built.

      InstallCSOnTFS

      Search across one or more projects

      Code Search enables you to search across all projects (TFVC & Git), so you can focus on the results that matter most to you.

      MultipleProjects

      Semantic ranking

Ranking ensures you get what you are looking for in the first few results. Code Search uses code semantics as one of the many signals for ranking; this ensures that the matches are laid out in a relevant manner. For example, files where a term appears as a definition are ranked higher.

      SemanticRanking

      Rich filtering

Get that extra power from Code Search that lets you filter your results by file path, extension, repo, and project. You can also filter by code type, such as definition, comment, reference, and much more. And by incorporating logical operators such as AND, OR, and NOT, you can refine your query to get the results you want.

      RichFiltering
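
As an illustrative example of combining these filters (the class, repo and file names here are hypothetical), a query like the following finds class definitions of a term in C# files in one repo while excluding test files:

class:QueueManager ext:cs repo:Fabrikam-Web NOT file:*Tests*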

      Code collaboration

      Share Code Search results with team members using the query URL. Use annotations to figure out who last changed a line of code.

      CodeCollaboration

      Rich integration with version control

      The Code Search interface integrates with familiar controls in the Code Hub, giving you the ability to look up History, compare what’s changed since the last commit or changeset, and much more.

      VersionControlIntegration

      Refer to help documentation for more details.

      Got feedback?

How can we make Code Search better for you? Get in touch and let us know.

       

      Thanks,
      Search team

      Code search and Exploratory Testing GA


Today, we are releasing the official “V1” releases of our two most popular VS Team Services extensions: Code Search and Test & Feedback.

      Both are now out of preview.

      Code Search

      Code search enables you to search across all of your repositories (both TFVC and Git) in all of your projects.  It supports simple full text search on any text file and semantic search in C#, C, C++ and Java, enabling you to look for specific code element types (like class definitions) or filter out “noise” (like comments).

      workitem

      Code search is available to everyone with a Team Services “Basic” license – including the 5 free Basic licenses in every account.  To enable it, you can go to the marketplace and install the extension.  It will take some time, depending on your codebase size, to index all of your code and make it available for search – typically less than 30 minutes.

      Code search will also be available in TFS “15” for on premises customers and is in the currently available Release Candidate 2 preview.  In the initial release of TFS 15, searches will be scoped to individual Team Project Collections.  In update 1, we plan to enable searching across all collections on your TFS server.

      Test and Feedback

      Our Test and Feedback extension enables you to test, report bugs and provide feedback on any app that you run within a browser, on any platform – this includes web apps, of course, but, with our Perfecto Mobile integration, you can also test device apps using their browser based interactive device experience.

      CreateBug

The testing extension is a browser plugin that enables you to easily take screenshots, mark them up, type comments, capture your click trail and more.  You can easily collect all of that data and file it as a bug in Team Services or TFS or, in standalone mode, save it as a report and email it to someone.

The browser plugin is currently only available for Chrome, but Firefox and Edge support is in progress.

The Test and Feedback extension can be used by Team Services/TFS Basic users, Stakeholders, or even people who don’t use TFS or Team Services at all.  In the “connected mode”, you can connect to any TFS 2015 or later version or to VS Team Services.  In “standalone mode” you can use the testing tool without even being a TFS or Team Services user – just test the app, save a report with all of your data and send it to someone.

      Brian

       

       

      Hands-on with the HP Elite x3 now available at Microsoft Stores


      Today, the HP Elite x3 goes on sale in Microsoft Stores. We’re taking a closer look and giving you an overview of some of the benefits the HP Elite x3 offers.

Announced at Mobile World Congress, the HP Elite x3 is HP’s premium new mobile device powered by Windows 10. It’s lightweight, sports Gorilla Glass 4, is water resistant and is equipped with a battery that packs a major punch. With great built-in features like Cortana*, Continuum** and Microsoft Edge, the HP Elite x3 also features a camera sensor and fingerprint reader that allow you to log in securely to the device with Windows Hello***.

      Features include:

1. Expandable storage up to 2TB via microSD card
      2. 8 MP front-facing camera for conference video calls
      3. IP67 water resistant
      4. 5.96″ diagonal edge-to-edge HD display
      5. Up to 33 hours of battery life

      This device, specifically created for business, truly embraces the idea of the mobility of experience. With the power of Continuum and HP’s accessories, the HP Elite x3 Desk Dock and HP Elite x3 Lap Dock, the HP Elite x3 can be used as three separate devices – a phone, a tablet or a PC – bringing a new class of innovation to enterprise customers looking for a powerful device for all of their apps and experiences.

A few of my personal favorite features of the HP Elite x3: the brilliant screen – it’s super crisp and sharp, perfect for working through email and watching videos; the awesome audio, thanks to the built-in B&O Play premium audio; and the 16 MP rear-facing camera for capturing photos when I’m out and about.

      To pick up your own HP Elite x3, visit your local Microsoft Store or microsoftstore.com. You can also purchase the device at HP.com.

      *Cortana available in select markets.
      **Feature requires Continuum-compatible accessories, all sold separately. External monitor must support HDMI input.
      ***Windows Hello requires specialized hardware, including fingerprint reader, illuminated IR sensor or other biometric sensors and capable devices.

      Migration to Modern Public Folders – Notes from the Field


In this blog post I wanted to go through the documented TechNet process for migrating legacy public folders to Exchange 2013/2016, expanded with real world guidance gained from field support. I have been involved in a number of legacy public folder data migrations to Exchange 2013/2016 and I wanted to pass along some of the lessons learned along the way. Please note: in case of discrepancies in the various cmdlets used, the TechNet article is the guidance you should follow.

      Note for Exchange 2007 customers reading this guidance – Exchange 2010 is the only version of Exchange explicitly mentioned in the TechNet article as this version of the guidance focuses only on Exchange 2016, and Exchange 2007 is not supported in coexistence with it. The Exchange 2010 guidance applies equally to Exchange 2007 unless otherwise noted.

      Part 0: Verify public folder replication

      This step is not included in the TechNet article, but it is vital to ensure that the data set being migrated is complete. If your public folder infrastructure only consists of one public folder database, you may proceed to Step 1, downloading the migration scripts. If, on the other hand, your public folders are replicated among two or more databases, you must ensure that at least one of those databases contains a fully replicated data set. The process to move data from legacy to modern public folders focuses on only one legacy database as the source, so our effort here will concentrate on selecting the most appropriate public folder database, then verifying it has all public folder data successfully replicated into it.

      1. First, download this script from the TechNet Script Gallery.

      2. The output of the script is an HTML page showing several metrics, most importantly the listing of each public folder replica and the replication status of each folder expressed as a percentage. Hovering over the percentage will pop up the size and item count of a specific folder replica.

[Screenshot: public folder replication report]

      3. It is important to note that we do not use the data stored in the legacy Non_IPM_Subtree folders, such as OWAScratchPad, SCHEDULE+ FREE BUSY or OFFLINE ADDRESS BOOK in the new Exchange 2013/2016 Modern Public Folders. Only concern yourself with making sure you have one public folder database with a complete set of the user data folders.

      Troubleshooting public folder replication

      This section is intended to cover the most common issue related to replication, not an exhaustive reference for the task. Bill Long’s Public Folder Replication Troubleshooting blog series is the first, best option for tracking down a difficult replication issue. If you do not find a solution here or in Bill’s blog posts, please contact Microsoft support and open a case to resolve your issue.

Not all folders have a replica in every database – in this case, use the AddReplicaToPFRecursive.ps1 script in the built-in Scripts directory of your Exchange installation. Using the -Server and -ServerToAdd switches allows you to be specific about which is the source and target database in the process, so be prepared to run the script from Server1 to Server2 and vice versa to fully replicate two public folder databases. Depending on the amount of data to be replicated, it may take some time for the target database to converge. Rerun the Get-PublicFolderReplication script at intervals to verify progress.
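
For example, to add replicas of the entire folder tree in each direction (server names here are hypothetical), you could run something like:

.\AddReplicaToPFRecursive.ps1 -Server EX2010-A -TopPublicFolder "\" -ServerToAdd EX2010-B
.\AddReplicaToPFRecursive.ps1 -Server EX2010-B -TopPublicFolder "\" -ServerToAdd EX2010-A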

      Part 1: Download the migration scripts

      1. Download all scripts and supporting files from Public Folders Migration Scripts.
      2. Save the scripts to the local computer on which you’ll be running PowerShell. For example, C:\PFScripts. Make sure all scripts are saved in the same location.

      Notes from the field:

      Yes, the migration scripts are included in the scripts directory of the Exchange installation directory (don’t forget you can use CD $Exscripts to easily change to that directory in the Exchange Management Shell), but it is recommended that you download them fresh from the link above to ensure you get the latest version.

      Part 2: Prepare the Exchange 2010 server and public folders for the migration

      Perform all steps in this section in the Exchange Management Shell on your Exchange 2010 server.

      1. Open the Exchange Management Shell on your Exchange 2010 server.

      2. For verification purposes at the end of migration, run the following commands to take snapshots of your current public folder deployment:

      1. Run the following command to take a snapshot of the original source folder structure.
        Get-PublicFolder -Recurse | Export-CliXML C:\PFMigration\Legacy_PFStructure.xml
      2. Run the following command to take a snapshot of public folder statistics such as item count, size, and owner.
        Get-PublicFolderStatistics | Export-CliXML C:\PFMigration\Legacy_PFStatistics.xml
      3. Run the following command to take a snapshot of the permissions.
        Get-PublicFolder -Recurse | Get-PublicFolderClientPermission | Select-Object Identity,User -ExpandProperty AccessRights | Export-CliXML C:\PFMigration\Legacy_PFPerms.xml
        Save the information from the preceding commands for comparison purposes after your migration is complete.

      Notes from the field:

      I’ve seen customers use some form of file difference comparison utility to great effect here. The files generated with the previous three commands will be compared to the same files generated after the migration process is completed to verify consistency. Thus far, I’ve not had a customer suffer any data loss or experience permissions issues post migration.
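
As a simple sketch of that comparison (the post-migration file name here is hypothetical – export the same statistics again from Exchange 2016 after the migration), you can diff the snapshots directly in PowerShell:

$before = Import-Clixml C:\PFMigration\Legacy_PFStatistics.xml
$after  = Import-Clixml C:\PFMigration\New_PFStatistics.xml
# Show folders whose name or item count differs between the two snapshots
Compare-Object -ReferenceObject $before -DifferenceObject $after -Property Name,ItemCount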

      3. If the name of a public folder contains a backslash (\), the public folders will be created in the parent public folder when migration occurs. Before you migrate, you need to rename any public folders that have a backslash in the name if you don’t want this to happen.

      1. To locate public folders that have a backslash in the name, run the following command.
Get-PublicFolderStatistics -ResultSize Unlimited | Where {$_.Name -like "*\*"} | Format-List Name, Identity
      2. If any public folders are returned, you can rename them by running the following command.
Set-PublicFolder -Identity "<public folder with a backslash in the name>" -Name "<new name>"

      Part 3: Generate the CSV files

      1. Open the Exchange Management Shell on your Exchange 2010 server.

2. Run the following command to create a file that maps the folder name to the folder size for each public folder you want to migrate. You’ll need to specify an accessible network share where the CSV file created by the command will be written, and you’ll need to specify the FQDN of your Exchange 2010 server.

      This command needs to be run by a local administrator and will create a CSV file that contains two columns: FolderName and FolderSize. The values for the FolderSize column will be displayed in bytes. For example, \PublicFolder01,10000.
.\Export-PublicFolderStatistics.ps1 <Folder-to-size map path> <FQDN of source server>

      Example:
.\Export-PublicFolderStatistics.ps1 "\\FileServer\Share\FolderSize.csv" "EX2010.corp.contoso.com"

      Here is a sample of what the output file will look like:

[Screenshot: sample FolderSize.csv output]

      Notes from the field:

      Nothing to fear on this one. It’s purely informational output in the form of a .csv file. This output file becomes the input file for the next script. Feel free to run this script as needed before the migration process is actually started to get a feel for the script and to parse the output.

      3. Run the following command to create the public folder-to-mailbox mapping file. This file is used to calculate the correct number of public folder mailboxes on the Exchange 2016 Mailbox server. You’ll need to specify the following parameters:

      • Maximum mailbox size in bytes This is the maximum size you want to set for the new public folder mailboxes. When specifying this setting, be sure to allow for expansion so the public folder mailbox has room to grow. In the command below, the value 20000000000 is used to represent 20 GB.
      • Folder to size map path This is the file path of the CSV file you created when running the previous command. For example, \\FileServer\Share\FolderSize.csv.
      • Folder to mailbox map path This is the file name and path of the folder-to-mailbox CSV file that you’ll create with this step. If you specify only the file name, the file will be generated in the current Windows PowerShell directory on the local computer.
.\PublicFolderToMailboxMapGenerator.ps1 <Maximum mailbox size in bytes> <Folder-to-size map path> <Folder-to-mailbox map path>
Example:
.\PublicFolderToMailboxMapGenerator.ps1 20000000000 "\\FileServer\Share\FolderSize.csv" "\\FileServer\Share\PFMailboxes.csv"

      Notes from the field:

      This part deserves substantial additional explanation.

      First, let’s talk about the size and number of public folder mailboxes to be provisioned in Exchange 2016. Using an example of a 20 GB maximum PF mailbox size (100 GB is the current maximum – see Limits for Public Folders), let’s look at preparing to migrate 100 GB of public folder data. Following our guidance to only fill the public folder mailboxes to 50% of their capacity at migration, we would need 10 public folder mailboxes created to host the data. This example would modify the example command above to:

.\PublicFolderToMailboxMapGenerator.ps1 10737418240 "\\FileServer\Share\FolderSize.csv" "\\FileServer\Share\PFMailboxes.csv"

      Finally, the output file may be manipulated prior to future steps in order to name the public folder mailboxes appropriately.

      Note: in larger modern public folder installations, it’s recommended for the hierarchy mailbox (the first PF mailbox to be created that has the writable copy of the folder hierarchy) to be isolated onto a database and server that hosts no other PF mailboxes. That hierarchy mailbox should be configured not to answer hierarchy requests from users in order to dedicate it to the task of maintaining the hierarchy for the environment.

      Here is a sample of what the initial output file will look like:

[Screenshot: initial PFMailboxes.csv output]

      Here is a modified file with the folder structure expanded and the PF mailbox names changed:

[Screenshot: modified PFMailboxes.csv with the folder structure expanded and the public folder mailbox names changed]

      While I have your attention on the screenshot of the public folder mapping file, let me advise that on one of my engagements, there were at least a dozen or so formatting errors that halted the beginning of the public folder migration (step 5 in the process). From what I could see, Exchange 2013/2016 processes the entire mapping file before starting the move of data to ensure that it is formatted correctly because the errors that exited the command to start the migration occurred very quickly, and some of the errors were hundreds of lines into the mapping file. The tedious aspect of encountering mapping file errors like this is that the command to start the migration exits immediately and must be rerun to find the next error. After restarting the command a dozen or so times, I was quite pleased when the command finally returned that the migration was queued.

      The errors I encountered were not difficult to remedy. When the command errors out, it tells you which line of the file the error is located in. My recommendation is to have a utility on hand that displays line numbers (the venerable Notepad++ comes to mind) to make it easy to navigate the file. The types of errors I saw were either missing double quotes or too many double quotation marks defining the public folder mailbox or the path of the folder. As you can see in the screenshot above, the format is “PFMailbox”,”Total path of the folder”. Look the affected line over and ensure it is correctly punctuated.
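
For reference, correctly punctuated data rows follow the format described above; for example (the mailbox and folder names are illustrative, and the header row should be left as the script generated it):

"PFMB-AMERICAS","\Americas"
"PFMB-AMERICAS","\Americas\Sales Reports"
"PFMB-EMEA","\EMEA"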

      Part 4: Create the public folder mailboxes in Exchange 2016

      Run the following command to create the target public folder mailboxes. The script will create a target mailbox for each mailbox in the .csv file that you generated previously in Step 3, by running the PublicFoldertoMailboxMapGenerator.ps1 script.

.\Create-PublicFolderMailboxesForMigration.ps1 -FolderMappingCsv PFMailboxes.csv -EstimatedNumberOfConcurrentUsers <estimated number of simultaneous user connections>

PFMailboxes.csv is the file generated by the PublicFolderToMailboxMapGenerator.ps1 script in Step 3. The estimated number of simultaneous user connections browsing a public folder hierarchy is usually less than the total number of users in an organization.

      Notes from the field:

      First comment here is to determine if you really need to bother with the script to create the public folder mailboxes. If your modern public folder mailbox count to be created is significant, the script is quite efficient, but if you are only going to be creating a few mailboxes, it may be simpler to run a few quick EMS commands to put the needed mailboxes in place.

Working through the process of manually creating the mailboxes also allows us to discuss two specific switches you should know about: -HoldForMigration and -IsExcludedFromServingHierarchy.

      • Creating the first Public Folder Mailbox. In the command below, the HoldForMigration switch prevents any client or user, except for the Microsoft Exchange Mailbox Replication service (MRS) process, from logging on to a public folder mailbox.

New-Mailbox -Name PFMB-HIERARCHY -PublicFolder -HoldForMigration -IsExcludedFromServingHierarchy:$True

      This first Public Folder Mailbox contains the writable copy of the Public Folder Hierarchy. In this example command, the mailbox is named for that purpose, and the -IsExcludedFromServingHierarchy switch will be set to $True permanently.

      Remember, it is recommended to locate the hierarchy public folder mailbox in a database with no other public folder mailboxes, mounted on a server with no other active public folder mailboxes, particularly in larger public folder environments.

      • Create the Public Folder Mailboxes that will host data (based on our example .csv file above):

New-Mailbox -Name PFMB-AMERICAS -PublicFolder -IsExcludedFromServingHierarchy:$True
New-Mailbox -Name PFMB-APAC -PublicFolder -IsExcludedFromServingHierarchy:$True
New-Mailbox -Name PFMB-EMEA -PublicFolder -IsExcludedFromServingHierarchy:$True

      The -IsExcludedFromServingHierarchy switch set to $True for these mailboxes will NOT be a permanent setting here as it is for the hierarchy mailbox. Creating a new public folder mailbox without initially excluding it from serving the hierarchy creates a situation where users may connect to it for hierarchy requests before the initial hierarchy synchronization has completed. It’s easy to determine if a public folder mailbox is ready to serve the hierarchy with a short Exchange Management Shell command:

Get-Mailbox -PublicFolder | fl name,*hierarchy*

      Note the IsHierarchyReady attribute of the newly created “PFMB3” compared to the other PF mailboxes in this output below. Once that attribute reports True, the IsExcludedFromServingHierarchy switch may be set to $False.

[Screenshot: IsHierarchyReady status for the public folder mailboxes]
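
Once a mailbox reports IsHierarchyReady as True, a single command puts it into rotation (using the PFMB3 example above):

Set-Mailbox -PublicFolder -Identity PFMB3 -IsExcludedFromServingHierarchy:$False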

      Hierarchy synchronization will occur automatically in short order, though it is possible to call the synchronizer into action immediately:

      Get-Mailbox -PublicFolder | Update-PublicFolderMailbox -InvokeSynchronizer

      A bit more on the IsExcludedFromServingHierarchy attribute from TechNet (highlight mine):

      “This parameter prevents users from accessing the public folder hierarchy on the specified public folder mailbox. For load-balancing purposes, users are equally distributed across public folder mailboxes by default. When this parameter is set on a public folder mailbox, that mailbox isn’t included in this automatic load balancing and won’t be accessed by users to retrieve the public folder hierarchy. However, if you set the DefaultPublicFolderMailbox property on a user mailbox to a specific public folder mailbox, the user will still access the specified public folder mailbox even if the IsExcludedFromServingHierarchy parameter is set for that public folder mailbox.”

      Based on this guidance, there will need to be enough public folder mailboxes provisioned to accommodate no more than an average of 2,000 concurrent users (see again the TechNet article Limits for public folders). This adds another dimension to the sizing guidelines. For example, you may well be able to place the public folder data set in as few as 5 public folder mailboxes, but if your system hosts 16,000 mailbox enabled users, a minimum of 8 public folder mailboxes would be necessary to support the user connections. Be sure to validate both size and user connection counts in your modern public folder design stage.

      Part 5: Start the public folder migration

      At this point, you’re ready to start the public folder migration. The steps below will create and start the migration batch. Depending on the amount of data in your public folders and the speed of your network connections, this could take a few hours or several days. During this stage of the migration, users will still be able to access their public folders and content on your Exchange 2010 server. In “Part 6: Complete the public folder migration (downtime required)”, you’ll run another sync to catch up with any changes made in your public folders, and then finalize the migration.

      1. Open the Exchange Management Shell on your Exchange 2016 server.
      2. Run the following command to create the new public folder migration batch. Be sure to change the path to your public folder-to-mailbox mapping file.
New-MigrationBatch -Name PFMigration -SourcePublicFolderDatabase (Get-PublicFolderDatabase -Server EX2010) -CSVData (Get-Content "\\FileServer\Share\PFMailboxes.csv" -Encoding Byte)
      3. Start the migration by using the following command.
        Start-MigrationBatch PFMigration

      The progress and completion of the migration can be viewed and managed in the EAC. Because the New-MigrationBatch cmdlet initiates a mailbox migration request for each public folder mailbox, you can view the status of these requests by using the mailbox migration page. You can get to the mailbox migration page and create migration reports that can be emailed to you by doing the following:

      1. Open the EAC by browsing to the URL of your Exchange 2016 Mailbox server. For example, https://Ex2016/ECP.
      2. Navigate to Mailbox > Migration.
      3. Select the migration request that was just created, and then click View Details in the Details pane.

      The Status column will show the initial batch status as Created. The status changes to Syncing during migration. When the migration request is complete, the status will be Synced. You can double-click a batch to view the status of individual mailboxes within the batch. Mailbox jobs begin with a status of Queued. When the job begins the status is Syncing, and once InitialSync is complete, the status will show Synced.
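
If you prefer the shell over the EAC, the same status information can be pulled with the migration cmdlets (using the batch name from the example above):

Get-MigrationBatch -Identity PFMigration | Format-List Identity,Status
Get-MigrationUser -BatchId PFMigration | Format-Table Identity,Status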

      Notes from the field:

      In the deprecated “Serial Migration” TechNet article, the following guidance was given concerning the data transfer rate (highlight mine):

      You’ll know that the command started successfully when the migration request reaches a status of Queued or InProgress. Depending on how much data is contained in the public folders, this command can take a long time to complete. If migration isn’t being throttled due to the load on the destination server, the typical data copy rate can be 2 GB to 3 GB per hour.

In my experience, it is difficult to set any real estimate on the data transfer rate, and the later TechNet article detailing the “Batch Migration” process on which this blog post is based has changed the verbiage to “this may take several hours”. The variables of network speed and available bandwidth combined with the quality of the hardware supporting both old and new versions of Exchange will have a significant impact on your data transfer rate at this stage.

      The number and size of your public folders will also affect the speed of the migration. One particular migration I was involved with saw roughly 70% of the total public folder data located in one folder (high resolution pictures). During the time that particular folder was being moved, the data transfer rate was closer to 20 GB per hour, but the data rate slowed closer to the original 2-3 GB per hour estimate once the final 30% of the data was being moved from the many, smaller remaining folders.

Under normal circumstances, the migration of data will progress to the 95% synchronization mark and autosuspend. If you leave the migration in the suspended state for any length of time, the migration process will resynchronize to 95% at intervals in the same manner as moving a mailbox configured for manual completion. In one of my public folder migrations, the process progressed to 95%, but the reported state did not show the desired “AutoSuspended”, but rather an error. Looking at the onscreen report, the move failed due to TooManyLargeItemsPermanentException and called out two items, one 12 MB in size, the other 14 MB (organization maximum message size was 10 MB). These two items were located and saved offline from the folders by the customer using Outlook. I modified the migration request to skip the large items (simply added -LargeItemLimit 2 to the command) and resumed it, and the status moved to the AutoSuspended state.

      More on the LargeItemLimit switch to allow the large items to be skipped (save the items out if you want to keep them!):

      The LargeItemLimit parameter specifies the maximum number of large items that are allowed before the migration request fails. A large item is a message in the source mailbox that exceeds the maximum message size that’s allowed in the target mailbox. If the target mailbox doesn’t have a specifically configured maximum message size value, the organization-wide value is used.

For more information about maximum message size values, see the message size limits topics on TechNet.

      Valid input for the LargeItemLimit parameter is an integer or the value unlimited. The default value is 0, which means the migration request will fail if any large items are detected. If you are OK with leaving a few large items behind, you can set this parameter to a reasonable value (we recommend 10 or lower) so the migration request can proceed.
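
As a sketch of the approach described above using the batch cmdlets (adjust the limit to the number of oversized items you are willing to leave behind; depending on your build you may instead need to modify the underlying public folder mailbox migration request):

Set-MigrationBatch -Identity PFMigration -LargeItemLimit 2
Start-MigrationBatch -Identity PFMigration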

      Part 6: Complete the public folder migration (downtime required)

      Until this point in the migration, users have been able to access the legacy public folders. The next steps will disconnect users from the Exchange 2010 public folders and will lock the folders while the migration completes its final synchronization. Users won’t be able to access public folders during this process. Also, any mail sent to mail-enabled public folders will be queued and won’t be delivered until the public folder migration is complete.

      Before you can finalize the migration, you need to lock the public folders on the Exchange 2010 server to prevent any additional changes by doing the following:

      1. Open the Exchange Management Shell on your Exchange 2010 server.
      2. Run the following command to lock the legacy public folders for finalization.
        Set-OrganizationConfig -PublicFoldersLockedForMigration:$true

      If your organization has multiple public folder databases, you’ll need to wait until public folder replication is complete to confirm that all public folder databases have picked up the PublicFoldersLockedForMigration flag and any pending changes users recently made to folders have replicated across the organization. This may take several hours.

      Once the public folders on the Exchange 2010 server have been locked, you can finalize the migration by doing the following:

      • Open the Exchange Management Shell on your Exchange 2016 server.
      • Run the following command to change the Exchange 2016 deployment type to Remote.
        Set-OrganizationConfig -PublicFoldersEnabled Remote
      • Run the following command to complete the public folder migration.
        Complete-MigrationBatch PFMigration

      When you do these steps, Exchange will perform a final synchronization between the Exchange 2010 server and Exchange 2016 server. If the final synchronization is successful, the public folders on the Exchange 2016 server will be unlocked and the status of the migration batch will change first to Completing, and then to Completed.

      Notes from the field:

      When you have progressed to this step, there is one helpful tip I’ve found to speed up the process. After running the command Set-OrganizationConfig -PublicFoldersLockedForMigration:$true, restart the information store service on the legacy Exchange server involved in the migration. This will refresh the Store cache on that server that the public folders are locked for migration, allowing you to move immediately to the Step 7 command Complete-MigrationBatch PFMigration. Considering that all your mailboxes have been moved before you migrate public folders anyway, there should be nothing to stop you from recycling the information store service. If you are unable to do this, the command to complete the migration batch in step 7 will not proceed until the information store service on the legacy server picks up the change.
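
Restarting the Information Store from an elevated shell on the legacy server is a one-liner (MSExchangeIS is the service name for the Microsoft Exchange Information Store service):

Restart-Service -Name MSExchangeIS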

      Part 7: Finalize the public folder migration (downtime required)

      First, run the following cmdlet to change the Exchange 2016 deployment type to Remote:

      1. Set-OrganizationConfig -PublicFoldersEnabled Remote

      Once that is done, you can complete the public folder migration by running the following command:

      2. Complete-MigrationBatch PFMigration

      Or, in EAC, you can complete the migration by clicking Complete this migration batch.

      When you complete the migration, Exchange will perform a final synchronization between the Exchange 2010 server and Exchange 2016. If the final synchronization is successful, the public folders on the Exchange 2016 server will be unlocked and the status of the migration batch will change to Completing, and then Completed.

      Notes from the field:

      If you have advanced to this point, it’s likely that the final 5% synchronization of data will finish smoothly and the process will reach the Completed status. The only real note here is that 5% is a relative number. If your legacy public folder installation is 17 GB as one of my customers was, that’s only about 870 MB to finish syncing. If your legacy install has 1.5 TB of data, you still have multiple hours of sync time remaining.

      Part 8: Test and unlock the public folders

      After you finalize the public folder migration, you should run the following test to make sure that the migration was successful. This allows you to test the migrated public folder hierarchy before you switch to using Exchange 2016 public folders.

      1. Open the Exchange Management Shell on your Exchange 2016 server.
      2. Run the following command to assign some test mailboxes to use any newly migrated public folder mailbox as the default public folder mailbox.
Set-Mailbox -Identity "<test user>" -DefaultPublicFolderMailbox "<public folder mailbox name>"
      3. Open Outlook 2010 or later using the test user identified in the previous step, and then perform the following public folder tests:
        • View the hierarchy
        • Check permissions
        • Create and delete public folders
        • Post content to and delete content from a public folder
      4. If everything looks okay, run the following command to unlock the public folders for all other users.
        Get-Mailbox -PublicFolder | Set-Mailbox -PublicFolder -IsExcludedFromServingHierarchy $false
      5. On the Exchange 2010 server, run the following command to indicate that the public folder migration is complete.
        Set-OrganizationConfig -PublicFolderMigrationComplete:$true
      6. After you’ve verified that the migration is complete, run the following command on the Exchange 2016 server.
        Set-OrganizationConfig -PublicFoldersEnabled Local

      Notes from the field:

      At this point, your Modern Public Folders are accessible to the users. Outlook will determine the updated location of the public folders via AutoDiscover, so it may be necessary to reopen Outlook in order to force an AutoDiscover query for the Public Folder list to show up and be accessible. Outlook performs an AutoDiscover query at startup and at regular intervals thereafter, so any open clients will eventually pick up the change under any circumstance.

      Hope this is helpful! I wanted to quickly acknowledge that I definitely had help from various people in getting this post ready for publishing. Thank you!

      Butch Waller
      Premier Field Engineer

      Windows 10 Tip: Getting started with the Windows Ink Workspace


      Today, we’re talking about how to get started with Windows Ink* in four easy steps. Windows Ink is part of the Windows 10 Anniversary Update and lets you capture ideas quickly and naturally with a pen or touch-enabled device.

      Get started with the Windows Ink Workspace

      Windows Ink Workspace

      First, find the Windows Ink Workspace, your canvas for all the ink-powered features and apps on your PC. The Workspace has built-in experiences like sketchpad, screen sketch and Sticky Notes, as well as apps optimized for pen use.

      Press the Windows Ink Workspace button in your system tray at the bottom right of your screen or click the back of your pen**! If you don’t see the icon, right-click anywhere in the system tray and click on “Show Windows Ink Workspace Button.”

      Take notes that become smart and active with Sticky Notes

      Windows Ink Workspace

      You can find Sticky Notes in the Start menu or at the top of your Windows Ink Workspace. Write an address and Maps readies it for finding a route; jot down a few items and they become an easy-to-manage checklist; scribble down an email address and it tees up in your Mail app.*** And, with Cortana, you can simply jot down a time or date with your note and it will be highlighted. You can then tap on it to create a Cortana reminder, available across all the devices you have Cortana installed on. 1

      You can even write down a flight number and click on the text when it turns blue to track the flight right in the Sticky Note***.

      Create in the sketchpad and trace with the digital ruler

      Windows Ink Workspace

      The sketchpad in the Windows Ink Workspace is a simple blank canvas where you can quickly and easily draw an idea, doodle, create, and solve problems.

You can also use the digital ruler with Windows Ink to measure distance or trace along a straight edge, just like on paper! Head to the sketchpad or screen sketch within the Windows Ink Workspace and click on the ruler icon in the upper right-hand corner of the toolbar. Then, adjust the digital ruler and use your pen (or finger, if you click on the "touch writing" icon) to draw sharp lines along its edge.

      Draw, crop and markup your desktop with screen sketch

      Windows Ink Workspace

Screen sketch lets you draw on a screen capture of your entire desktop – allowing you to collaborate on documents as you would with pen and paper, or add your personal touch to an awesome picture you saw in the Photos app and then easily share it with the rest of your world. Screen sketch is designed to be the most natural way to freely express emotions and personalize content, so you can draw, crop and mark up the entire image. Similar to the sketchpad, it's easy for you to save and share these creations with your friends and colleagues.

      If you aren’t running the Windows 10 Anniversary Update yet, learn how to get it here.

      Have a great week!

      *Touch-capable tablet or PC required. Pen accessory may be sold separately.
      **User must enable in settings and have a Bluetooth button on pen.
      1Available in select markets; experience may vary by region and device.
      ***US only

      Data Mining the 2016 Presidential Campaign Finance Data


      So you just got yourself a brand new dataset.

      Now what?
      How do you prepare a dataset for machine learning?
      Where do you even begin?

      This post is an attempt to demystify this process, using a fun example that is of topical interest.

      Dataset Selection

Since we are in the middle of the U.S. presidential elections, let's use the free and publicly accessible campaign finance dataset provided by the Federal Election Commission (FEC). Specifically, we will be using the Contributions by Individuals dataset for the 2015-2016 election cycle. Under US federal law, aggregate donations of $200 or more by an individual must be disclosed. This dataset represents the itemized transactions where individuals donate to the political campaigns of candidates, political action committees (PACs) or national party committees.


This article shows how to take a brand new dataset, formulate a data mining question, and process the data to get it ready for machine learning.

      Data Mining Process

      Every data mining solution is tailored to the data at hand and the question it is trying to answer, so there are no cookie cutter solutions. This blog post walks users through the general thought process when approaching a new dataset. We’ll begin with the data mining framework, which can help guide the “next-steps” thought process in data mining. For this exercise we’ll use a combination of R programming, SQL, and Azure Machine Learning Studio.


      Import and Conversion

      We’ll be using R to perform the initial importing. We have provided a sample R script for loading in and converting this dataset to CSV format.


      Some things to note:

• The data file is pipe ("|") delimited: this is handled with the read.table() command, which lets us specify less conventional flat file delimiters (a minimal R sketch of the import follows this list). We will also convert a copy to CSV for ourselves since it's the most versatile flat file format and will allow us to use other tools such as Excel, Power BI, or Azure ML Studio.
• The file does not have headers included: the list of headers and their descriptions lives in a separate file. The columns will have to be named after the initial read-in.
• The file is large. At a whopping ~2.8 GB, it will take about 15 minutes to read in and will require about 6 GB of RAM to load into memory with R. You may need a virtual machine for this step if you are on a particularly modest machine.
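As a rough illustration of this import, here is a minimal R sketch. The local file name (itcont.txt) and the 21 column names are assumptions based on the FEC header file described above, so adjust them to match your actual download.

# Import the pipe-delimited FEC file, attach column names, and save a CSV copy.
# File name and column names are assumptions; check them against the FEC header file.
col_names <- c("CMTE_ID", "AMNDT_IND", "RPT_TP", "TRANSACTION_PGI", "IMAGE_NUM",
               "TRANSACTION_TP", "ENTITY_TP", "NAME", "CITY", "STATE", "ZIP_CODE",
               "EMPLOYER", "OCCUPATION", "TRANSACTION_DT", "TRANSACTION_AMT",
               "OTHER_ID", "TRAN_ID", "FILE_NUM", "MEMO_CD", "MEMO_TEXT", "SUB_ID")

contrib <- read.table("itcont.txt",            # hypothetical local file name
                      sep = "|",               # the file is pipe delimited
                      header = FALSE,          # headers live in a separate file
                      col.names = col_names,
                      quote = "", comment.char = "", fill = TRUE,
                      stringsAsFactors = FALSE)

# A CSV copy makes the data usable in Excel, Power BI, or Azure ML Studio.
write.csv(contrib, "individual_contributions.csv", row.names = FALSE)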

This is a dynamically changing dataset that is updated almost daily. As of this writing (9/30/2016), the dataset is 2.8 GB with 12,575,590 rows and 21 columns. We'll now use the power of the cloud by importing the data into Azure ML Studio. We published the entirety of this experiment to the Cortana Intelligence Gallery, where you can clone it and follow along.


      Clone this experiment in Azure Machine Learning Studio

      Asking Questions of Your Data

A simplified view of ML is that you ask a question of your data, framing historical data in a way that lets the resulting predictive model answer the same question for future cases. This dataset is quite versatile and can be adapted for a host of solutions; other data miners and researchers have already asked a wide range of questions of it.

In the spirit of the election, let's set up the dataset to identify transactions to Hillary Clinton or Donald Trump. To do that we will need to identify and label each transaction that went toward Clinton or Trump (this labeling is what makes it supervised learning). From there it becomes a classification machine learning problem.

The individual transactions dataset also contains transactions to all Senate, House of Representatives, and presidential candidates running for election. Those will have to be filtered out, as explained in the next section.

      Tracking Transactions to the Clinton or Trump Campaigns

      The dataset represents individual donations to committees and each committee is identified by “committee id”. By identifying which committees are associated directly with which candidate, and filtering for just Clinton or Trump, we can identify transactions that are for their respective election campaigns. Transactions where the contributor (1) donated directly to the official candidate campaign, or (2) donated to a committee that is aligned with a single candidate, are included.


      Pulling in Reference Data

Luckily there's a committee linkage file which shows the existing relationships between committees and candidates (where a relationship exists) by "candidate id" and "committee id". There is also a candidate master file which lists every candidate by "candidate id". Lastly, there is a committee list which lists every committee by "committee id".


      Extend this experiment:
      By cross-referencing in this manner, we are ignoring issue advocacy PACs, candidate leadership PACs, and National Party Committees (such as the RNC and DNC). The political preference of donors to these committees can often be inferred by how closely a candidate is aligned with an issue or political party. However, as they have not donated directly to the campaigns we’re referencing, we’ve excluded them for our purposes here. To provide a larger data set, you may opt to include donors to these organizations as well.

      Finding Clinton or Trump Committees


A SQL query can be applied to the candidate master file to find the "candidate id" for both Clinton and Trump, filtering out the other 7,224 candidates in the list. We then perform an inner join on "candidate id" to keep only the rows (committees) that are officially aligned with Trump or Clinton. The result is that we find 6 committees officially aligned with Clinton or Trump.
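In the post this step is a SQL query inside Azure ML Studio; as a rough local equivalent, here is a dplyr sketch. The data frame names (candidate_master, committee_linkage) and the column names (CAND_ID, CAND_NAME, CMTE_ID) are assumptions about the FEC reference files, so verify them before running.

library(dplyr)

# Find Clinton's and Trump's candidate ids in the candidate master file.
target_candidates <- candidate_master %>%
  filter(grepl("CLINTON, HILLARY|TRUMP, DONALD", CAND_NAME, ignore.case = TRUE)) %>%
  select(CAND_ID, CAND_NAME)

# Inner join on candidate id keeps only the officially aligned committees.
official_committees <- committee_linkage %>%
  inner_join(target_candidates, by = "CAND_ID") %>%
  select(CMTE_ID, CAND_NAME)   # expect 6 committees here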

      Understanding the Domain


Domain expertise or research into the data is crucial to developing sound models. Upon interviewing a domain expert, we found that candidates also get direct funding from victory fund committees. A SQL query against the committee list for committee names containing the word "victory" along with either "Clinton" or "Trump" reveals the "committee ids" for both Clinton's victory fund and Trump's victory fund.
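Again, the post applies this step as SQL; a comparable R sketch is below, assuming a committee_master data frame with a CMTE_NM column holding the committee name.

library(dplyr)

# Committees whose names mention "victory" together with either candidate.
victory_funds <- committee_master %>%
  filter(grepl("VICTORY", CMTE_NM, ignore.case = TRUE),
         grepl("CLINTON|HILLARY|TRUMP", CMTE_NM, ignore.case = TRUE)) %>%
  select(CMTE_ID, CMTE_NM)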


      Scoping Down to the Data

In the end we find that there are 8 "committee ids" to look for within the individual transactions dataset. A final inner join of the individual transactions dataset with those 8 committees filters out all non-related transactions. After the filtering, the data is reduced from its original 12,575,590 rows to 1,004,015 relevant rows.
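Carrying over the hypothetical objects from the sketches above, the final filter and the class label might look roughly like this; merge() defaults to an inner join, so non-matching transactions drop out.

# Build a small lookup of the committee ids and the candidate each one backs.
watch_list <- rbind(
  data.frame(CMTE_ID = official_committees$CMTE_ID, REF = official_committees$CAND_NAME,
             stringsAsFactors = FALSE),
  data.frame(CMTE_ID = victory_funds$CMTE_ID, REF = victory_funds$CMTE_NM,
             stringsAsFactors = FALSE))
watch_list$CANDIDATE <- ifelse(grepl("TRUMP", watch_list$REF, ignore.case = TRUE),
                               "TRUMP", "CLINTON")

# Inner join: keep only transactions made to one of those committees, and label
# each row with its candidate (this becomes the supervised learning target).
clinton_trump <- merge(contrib, watch_list[, c("CMTE_ID", "CANDIDATE")], by = "CMTE_ID")
nrow(clinton_trump)   # roughly 1,004,015 rows at the time the post was written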

      Extend this experiment:
      This experiment can easily be adapted to label other candidates as well, such as Ted Cruz or Bernie Sanders.

      What’s in the Data?

The results are quite fascinating. Among the reported transactions in this dataset, only 5.8% of the 1,004,015 transactions were for Trump, representing roughly 58k transactions. Keep in mind that many low dollar value gifts (below $200) are not reported in this dataset. The data also shows that Clinton's transactions total $298 million, while Trump raised $50 million from individual contributors. This is a steep class-label asymmetry that seems abnormal for a presidential campaign and is therefore researched further in the section that follows.

      Transaction Frequency by Candidate

You can reproduce this pie chart using this R code.

      Dollar Amount Raised by Candidate

      Researching the Asymmetry

Further research into whether this asymmetry stems from an error shows that Trump has raised $127,980,237 in total this cycle, $19,370,699 of it from itemized individual donors, whereas Clinton has raised $315,353,001 in total this cycle, $200,132,873 of it from itemized donors. To put this in perspective, President Obama raised $722,393,592 in the entire 2012 cycle, $315,192,451 of it from itemized individual donors.

      It should be noted that Trump subsidized his primary campaign effort with much of his own funds, contributing a total of $52,003,469. He did not begin fundraising in earnest from individuals until after he was the presumptive Republican nominee in May of 2016.

      In the end the asymmetry between Trump and Clinton transactions does not seem like an error and must be dealt with using data mining techniques described later.

      Feature Selection

      Feature selection is normally done as the last step prior to machine learning. However, working with a large data set continuously will bog down the running times of any tool that we may use. When working with large datasets, it is recommended to perform feature selection relatively early and drop unnecessary columns.

      From the individual contributions dataset, we will use the following features as predictors or engineered into features:

Feature | Description
NAME | The donor's name; can be used to predict the donor's gender.
CITY | The city of the donor's address.
STATE | The state of the donor's address.
OCCUPATION | The occupation of the donor.
TRANSACTION_AMT | The amount of the donation in this transaction.

      The remaining columns will be dropped from the dataset.

      Extend this experiment:
The "memo" column contains important clarifications on each transaction, e.g. whether a given transaction was a refund, an earmark, etc. Research into this column can yield a more granular dataset for the model. The "image number" column also contains within it the month and day of the receipt's scan. Extracting the day of the week can yield other interesting features, such as whether or not the date was a holiday.

      Feature Engineering with External Datasets

Never look at the data in front of you in isolation; consider the world as your data source. Extra data can always be pulled in to improve the model. In this dataset we have access to contributors' names. Luckily, the US Social Security Administration keeps a record of every baby name filed at birth, along with the year and gender. By statistically aggregating first names by gender, we can predict contributors' genders from their first names. For example, 64% of all people named "Pat" born in the US between 1932 and 2012 were male, so a contributor named "Pat" would be classified as male. There are also titles such as Mr. and Ms., which are even better predictors of gender.
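The post imports a pre-built gender model table later on; as an illustration of how such a table could be derived, here is an R sketch over the SSA baby-name files. The "names" folder and the yobXXXX.txt layout of name, sex, count are assumptions about that download.

# Aggregate the SSA baby-name files into a first-name -> gender lookup.
files <- list.files("names", pattern = "^yob\\d{4}\\.txt$", full.names = TRUE)
babynames <- do.call(rbind, lapply(files, read.csv, header = FALSE,
                                   col.names = c("name", "sex", "count"),
                                   stringsAsFactors = FALSE))

counts <- aggregate(count ~ name + sex, data = babynames, FUN = sum)
gender_model <- reshape(counts, idvar = "name", timevar = "sex", direction = "wide")
gender_model$count.F[is.na(gender_model$count.F)] <- 0
gender_model$count.M[is.na(gender_model$count.M)] <- 0

# Label a name male if the majority of babies with that name were male.
gender_model$pct_male <- gender_model$count.M / (gender_model$count.M + gender_model$count.F)
gender_model$gender   <- ifelse(gender_model$pct_male >= 0.5, "M", "F")
gender_model$name     <- toupper(gender_model$name)   # FEC names are upper case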

      Extracting First Names & Titles


Currently the contributor names are full names. We will extract first names and titles with R, using the Execute R Script module within Azure Machine Learning.

      Names and titles were extracted for all but 33 rows.
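A rough sketch of that extraction is below; it assumes the NAME field follows a "LAST, FIRST MIDDLE" layout with an optional title, which you should verify against your data.

# Pull an optional title and a first name out of each NAME value.
titles <- c("MR", "MRS", "MS", "MISS", "DR")

extract_parts <- function(full_name) {
  after_comma <- sub("^[^,]*,\\s*", "", full_name)                 # drop "LASTNAME, "
  tokens <- gsub("\\.", "", strsplit(after_comma, "\\s+")[[1]])    # "MR." -> "MR"
  title <- if (length(tokens) > 0 && tokens[1] %in% titles) tokens[1] else NA
  first <- if (!is.na(title) && length(tokens) > 1) tokens[2] else tokens[1]
  c(title = title, first_name = first)
}

# vapply is slow over ~1M rows; a vectorized regex would be faster at full scale.
parts <- t(vapply(clinton_trump$NAME, extract_parts, character(2)))
clinton_trump$TITLE      <- parts[, "title"]
clinton_trump$FIRST_NAME <- parts[, "first_name"]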

      Predicting Gender by First Names

We import a gender model table with 17,597 unique names and titles and their associated predicted gender labels. From this we can perform a left outer join to look up gender values; a left outer join lets us keep transaction rows that have no matching gender lookup value.
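Continuing the sketch, the lookup itself can be a base R merge with all.x = TRUE, which behaves as a left outer join and keeps transactions whose first names are not in the model.

# Left outer join: attach the predicted gender where the first name is known.
clinton_trump <- merge(clinton_trump,
                       gender_model[, c("name", "gender")],
                       by.x = "FIRST_NAME", by.y = "name",
                       all.x = TRUE)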

      Reducing Number of Categorical Levels

      To avoid the curse of dimensionality, we have to reduce the number of categorical levels present within our dataset.

• Occupation: This column represents people's jobs. People were allowed to fill in this field themselves, free-form. As a result, there are a large number of misspellings; the word "veteran" can also be found as "veteren", "WWII vet" and "vetren". In total there are 33,456 distinct occupations in our dataset. Luckily the Bureau of Labor Statistics has done the classification work for us, mapping job titles into one of 28 occupational type buckets. We wrote an R script in Azure ML to do this bucketing.
• State:
  There are 60 states in this dataset – the 50 US states, Washington DC and 9 additional territories, representing overseas military bases, citizens working abroad, or other territories such as the Pacific Islands. However, these extra 9 states only account for 4,151 rows, less than half a percent of the data, which means there may not be enough representation to learn from these categories. We will use a SQL statement to keep only the 50 main states plus Washington DC.
• Cities: There are 11,780 distinct city names. We will run a SQL query to filter out city and state combinations that don't have at least 50 transactions, because there are not enough observations to form representation within the dataset (see the sketch after this list). This threshold becomes a tuning parameter that you can play with: increasing it may improve the overall performance of your models at the cost of losing granularity for specific cities. After the filtering, 2,489 cities remain.
      • Contribution amounts: This feature does not have to be bucketed. However, in the experiment we show that you can bucket numeric features by quantile percentages.
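As referenced in the list above, here is one way the state and city reductions might look in R; the 50-transaction threshold mirrors the description, and the data frame and column names carry over from the earlier sketches.

library(dplyr)

# Keep only the 50 US states plus DC (state.abb ships with base R).
keep_states <- c(state.abb, "DC")
clinton_trump <- clinton_trump %>% filter(STATE %in% keep_states)

# Keep only city/state combinations with at least 50 transactions;
# the threshold is a tuning parameter you can experiment with.
min_txns <- 50
clinton_trump <- clinton_trump %>%
  group_by(CITY, STATE) %>%
  filter(n() >= min_txns) %>%
  ungroup()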

      Cleaning Missing Values

All columns have missing values. For simplicity, all missing values were replaced with a separate category called "OTHER". However, it is worth experimenting with different missing-value cleansing methods at this step to see if there is a performance increase; missing categorical features can be filled with the mode, for example, or imputed via another predictive model.

      Note that any rows containing missing values of the response label have to be removed because supervised learning models can only learn if there is a label present for a given row.
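A simple version of that cleanup, carrying over the hypothetical column names from the earlier sketches (including the CANDIDATE label added when the committee filter was applied), might look like this:

# Replace missing categorical values with an explicit "OTHER" level.
categorical_cols <- c("CITY", "STATE", "OCCUPATION", "TITLE", "FIRST_NAME", "gender")
for (col in categorical_cols) {
  missing <- is.na(clinton_trump[[col]]) | clinton_trump[[col]] == ""
  clinton_trump[[col]][missing] <- "OTHER"
}

# Rows with a missing response label cannot be used for supervised learning.
clinton_trump <- clinton_trump[!is.na(clinton_trump$CANDIDATE), ]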

      Dealing with Class Imbalance
We return to the class imbalance noted earlier. Class imbalance is quite common in ML, where one class is much rarer than another; examples include medical diagnosis of tumor cells, a hormone that signifies pregnancy, or a fraudulent credit card transaction. There are many ways to combat class imbalance, such as downsampling the common class or oversampling the rare class; the approach employed here is to downsample the common class. Clinton's transactions were randomly sampled down (to 6.2%) to match Trump's 58k transactions. This sampling percentage also becomes a tuning parameter for future predictive models.
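A minimal downsampling sketch, assuming the CANDIDATE label values "CLINTON" and "TRUMP" carried over from the earlier sketches:

set.seed(42)   # make the random sample reproducible

trump_rows   <- clinton_trump[clinton_trump$CANDIDATE == "TRUMP", ]
clinton_rows <- clinton_trump[clinton_trump$CANDIDATE == "CLINTON", ]

# Randomly keep as many Clinton rows as there are Trump rows (~58k, about 6.2%).
clinton_sample <- clinton_rows[sample(nrow(clinton_rows), nrow(trump_rows)), ]
balanced <- rbind(trump_rows, clinton_sample)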

      Visualizations, Exploration, and Segmentation

      Segmentation by Gender

      If gender is segmented by Clinton or Trump we get some pretty different looking pie charts: 64% of Clinton’s transactions come from female contributors, whereas 68% of Trump’s transactions originate from males.

Clinton Transactions by Gender
Trump Transactions by Gender

      Donation Amount Frequency, Segmented by Clinton and Trump

      As stated above, any inferences we make only apply for the subset of transactions within our current dataset, because of the way we have set things up. This leaves out many transactions such as the JFC, DNC, RNC and non-publicly affiliated committee transactions.

A density plot is a nice way to visualize how the data is distributed. The x-axis represents the donation amounts and the y-axis shows the frequency of donations at that level. For example, it would seem that in this dataset about 27% of Clinton's itemized contribution transactions amount to $25 or so, whereas about 17% of Trump's contributions were for sums of $250 or thereabouts. What is more important in this segmentation is to pay attention to the ranges at which one segment is higher or lower than the other. Lower-level donations (less than $60) are almost all for Clinton, meaning that most of her itemized donations come in small amounts. Once again, keep in mind that we are missing a lot of unreported transactions, since only aggregate contributions of $200 or more from an individual donor need to be disclosed.
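If you want to reproduce a similar density plot locally, here is a ggplot2 sketch using the column names assumed in the earlier sketches; the chart in the post itself may have been produced differently.

library(ggplot2)

# Density of donation amounts, segmented by candidate; zoom in on small donations.
ggplot(balanced, aes(x = TRANSACTION_AMT, fill = CANDIDATE)) +
  geom_density(alpha = 0.4) +
  xlim(0, 500) +
  labs(x = "Donation amount (USD)", y = "Density",
       title = "Donation amounts, Clinton vs. Trump (itemized transactions)")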


      Next Steps

      There are several opportunities to enhance this project. For instance, we dropped several features that could be used for further feature engineering. To take an example, the “memo” column contains important clarifications on each transaction, such as whether a given transaction was a refund, earmark or something else that may be of interest. Research and extraction into this column can yield a more granular dataset for the model. Similarly, the “image number” column contains within it a month and day of the receipt’s scan, and extracting the day of the week could yield interesting features such as whether or not it was a holiday.

Even without these enhancements, the data is now in a state where you can feed it into one of many machine learning algorithms to build a model. To see how Azure ML can be used to build models, check out our Gallery, which features several interesting models shared by members of our community – you can get yourself jump-started there. We hope you have fun. Do let us know if you end up using this data in interesting ways.

      Cortana Intelligence & ML Blog Team


      Announcing the preview of the Office 365 adoption content pack in Power BI


      Understanding how your users adopt and use Office 365 is critical for you as an Office 365 admin. It allows you to plan targeted user training and communication to increase usage and to get the most out of Office 365. The usage reports in the new Office 365 admin center are a great starting point to understand usage. However, many of you have shared feedback with us that you want the ability to further analyze your data to understand how specific departments or regions use Office 365 or which products are used the most to communicate.

To provide you with richer and more personalized usage insights, we're combining the intelligence of the usage reports with the interactive reporting capabilities of Power BI. The new Office 365 adoption content pack enables you to visualize and analyze Office 365 usage data, create custom reports, share the insights within your organization, and pivot by attributes such as location and department. Today, we're announcing that a limited preview of the adoption content pack is now available for Office 365 customers.


      Richer adoption, collaboration and communication insights

Office 365 is all about enabling users to be more productive and to communicate and collaborate more effectively. With the adoption content pack, admins can gain deeper insights into how their users leverage Office 365 to communicate and collaborate and how this has evolved over time. This helps admins understand where to focus user training and communication going forward.

      The dashboard is split up into four areas: Adoption, Communication, Collaboration and Activation. Admins can access detailed dashboards for each area by clicking any of the metrics.

Adoption report—Helps you understand how your users have adopted Office 365 as well as how usage of the individual services has changed month-over-month. Admins can easily see how many users they have assigned a license to, how many users actively use the products and how many are first time users or returning users that use the product each month. This helps admins identify the products for which additional user training might be needed to increase adoption.


Communication report—Shows admins how users use Office 365 to communicate. The dashboard includes a communication activities report that provides details about how the usage of different communication methods—such as email or Yammer message posts—has changed over time, allowing admins to understand how their users adopt new ways of communication. Additional metrics include average number of emails sent, average number of Yammer posts read and average amount of time spent using Skype. The dashboard also shows which client apps are used to read email or to use Skype.


Collaboration report—Gives you the ability to see how people in your organization use OneDrive and SharePoint to store documents and collaborate with each other and how this is changing. Admins can also see how many users share documents internally versus externally.

      Activation report—Helps you understand Office 365 ProPlus, Project and Visio activations. Admins can see total activations across users, number of users that have activated the products, number of devices they have activated them on and the type of device.

      The adoption content pack provides you with additional reports that admins can access by clicking the tabs at the bottom of the site including the following reports:

• Yammer Usage report—Useful for organizations that are in the process of rolling out Yammer or are focused on increasing usage. The report provides helpful information about how various parts of your organization adopt Yammer as a form of communication, including how many people post messages, how many consume content by liking or reading a message and how new user activation has changed over time.
      • Skype for Business Usage report—Provides a consolidated view of Skype activity as well as with details about how many users leverage Skype to connect with others through peer-to-peer messages and how many communicate their ideas by participating or organizing video conferences.
      • OneDrive for Business Usage report—Shows admins how users leverage OneDrive to collaborate with others in new ways. They can easily see how many users use OneDrive to share files and how many utilize it mostly for file storage. The report also includes information about how many OneDrive accounts are actively being used, and how many files are stored on average.
      • SharePoint Usage report—Shows how SharePoint team sites and groups sites are being used to store files and for collaboration. The report also includes information about how many SharePoint sites are actively being used, and how many files are stored on average.
      • Office 365 Top User report—Enables admins to identify Office 365 power users and the individual products they are using. Power users can often help to drive product usage by sharing their experience about how they use the products to get their work done faster and more efficiently.

      Create, customize and share reports

      The dashboard is just a starting point to quickly get started with the adoption content pack and to interact with the data. As every organization has unique needs, we’ve ensured that admins can query the data to help you answer specific questions about your organization and create a personalized view. For example, you can filter to show only one of the products or adjust the time frame of the reports. By default, most reports provide data for the previous six months. Admins can also create additional reports or a new dashboard that show specific views.

      The adoption content pack also allows you to easily share the usage reports with anybody within your organization who might not have access to the Office 365 admin center. In addition, the adoption content pack combines usage data with the Active Directory (AD) information of your users and enables you to pivot by AD attributes such as location, department or organization.

      Sign up for the preview program

The limited preview of the adoption content pack is available for Office 365 customers today. To get access to the content pack, please send an email to O365usagePowerBIPreview@service.microsoft.com and include your tenant ID. Sign-up closes October 16, 2016 and space is limited—so please sign up now. After we have prepared your data (it can take 2-3 weeks), you will receive an email with instructions.

      The adoption content pack will become available for all customers to opt in by the end of December.

      Let us know what you think!

      Please try the new features and provide feedback in email to O365usagePowerBIPreview@service.microsoft.com. And don’t be surprised if we respond to your feedback. We truly read every piece of feedback that we receive to make sure the Office 365 administration experience meets your needs.

      —Anne Michels @Anne_Michels, senior product marketing manager for the Office 365 Marketing team

       

      The post Announcing the preview of the Office 365 adoption content pack in Power BI appeared first on Office Blogs.

      Identity Management in Retail Industry with #AzureAD


      Howdy folks,

Over the past three years we've had the privilege to work closely with many thousands of customers helping them successfully deploy and use Azure AD Premium. Over that period we have been surprised to see how many of our customers are national and global retailers.

In fact, many of the world's largest retailers are Azure AD Premium customers. These retailers have been among the most progressive organizations in the world as they worked to reinvent the way their store and warehouse staff work by leveraging the power of the cloud, smart devices and Azure AD Premium.

And lucky for us, we've learned a ton from these customers and their progressive efforts!

      Retail Industry Challenges

      Large retailers face unique challenges due to the massive scale of their store workforces, relatively high levels of staff turnover and the huge volume of devices that these store workers use and share. In addition these organizations tend to have relatively young employees and those millennial employees are among the most tech savvy and digitally connected people on the planet.

      Our largest retail customers around the world have told us that to modernize their retail operations and build omni-channel/unified commerce capabilities, they had to be able to do three things:

      • Increase employee productivity and customer responsiveness.
      • Increase collaboration across departments and supply chains.
      • Secure a wide variety of employees, customers, partners, applications and devices across their virtual organizations.

      And to do all this, they needed to provide a unique digital identity to every user and device across their organization.


To make sure these important customers can succeed, we've made service enhancements targeted specifically at their needs:

      • Managing identity lifecycle for hundreds of thousands of employees.
      • Providing easy and secure access to retail specific apps.
      • Protecting all types of users, apps and devices (shared, company owned and BYOD).

For example, we've focused our ISV recruitment efforts on the critical SaaS application categories our retail customers value, like Learning Management, Collaboration, Task Management, Supply Chain, HR, and Time Scheduling. Just this month we've added support for eight new applications in these categories:

      • Kronos
      • Anaplan
      • Cornerstone OnDemand
      • Hightail
      • Workforce Manager
      • Tangoe
      • Tidemark
      • Edigital Research

And as part of these efforts, today I'm happy to announce the availability of our new Azure AD Deployment Guides for the retail industry. These guides include a ton of best practices, lists of prerequisites for successful deployments, and proposed architectural designs based on varying levels of productivity and security needs.

      Working with Microsoft Partners

We've also been working with our top SI partners around the world, getting them ready to enable these retail use cases using these deployment best practices. If you are a large retailer, these partners can help you with pilots and production deployments. We will back-stop each of them, giving them a direct line into our engineering team.

Please ask your Microsoft account representative to get started with partner and engineering assistance on quick pilots and deployments, and about a specially priced SKU that can help you take advantage of economies of scale.

      This is just the beginning

We're just getting started on this effort, so make sure to follow this blog or follow me on Twitter (@Alex_A_Simons). We'll be publishing customer stories and highlighting new features that are particularly valuable for retailers, including things like password-less sign-in to shared devices, conditional access policies based on time schedules for store workers, and various other retail use cases.

And as always, we'd love to receive any feedback or suggestions you have.

      Best Regards,

      Alex Simons (Twitter: @Alex_A_Simons)

      Director of Program Management

      Microsoft Identity Division

P.S.: If you are interested, here's a quick overview of the new deployment guides we just published.

      Azure AD Retail Deployment Guides

Managing identity lifecycle at scale for store/seasonal workers– This guide describes how to deploy a unified cloud identity platform to manage the identity lifecycle of your seasonal and store staff in particular. Whether you have multiple sources for storing identities, such as databases, payroll systems or LDAP directories, or have no unique user identities or HR systems in place today, this guide describes how to manage identities at scale with Azure AD in all of these scenarios.


Raising productivity of store workers– This guide provides detailed instructions on configuring easy and secure single sign-on to all your applications, as well as self-service tools for password reset and group management that reduce help desk calls and let your users handle these tasks themselves. One of our grocery retail customers saved over $250K in help desk calls in the first quarter after deploying self-service password reset for their store workers.


Centralized security management for all users, devices and applications– This deployment guide provides best practices for consistent security management, monitoring and access control encompassing all types of users, applications and devices. The guide details how to configure access policies for different personas and use cases (e.g., certain users/groups get access to specific apps only when they are in the store).

      Hybrid Cloud Management Ignite recap: Microsoft Operations Management Suite overview


      Last week in Atlanta at Microsoft Ignite, we announced multiple new capabilities for Microsoft Operations Management Suite. Designed to help you gain visibility, increase control, speed response to security issues, and increase availability, the new offerings give you expanded options for managing your hybrid cloud.

As you move to the cloud, one of the key things to consider is how your management strategy needs to evolve. When managing a hybrid cloud environment, you have two options. You can manage resources from the datacenter up into the cloud, using existing toolsets, or you can bring in cloud management. There are several advantages to moving to cloud-based management. First, you reduce time to value: you can get up and running faster because you don't have to deploy anything. You also get innovation at cloud speed. With the pace of innovation today, new management toolsets are often required to handle the new technologies you bring into your environment, and management as a service lets us deliver new capabilities to you all the time, reducing the need for point solutions. In addition, you want to be flexible, with the ability to pull data from specialized toolsets as required. And finally, you need dashboards and easy search to keep it simple and minimize the overhead involved in running the management tools themselves.

      Delivered from Azure, Operations Management Suite is designed for cloud speed and cloud flexibility. If you want to learn more about how cloud-based management can work for you, check out the Ignite breakout session, Taking your management and security strategy to the cloud. In this session Jeremy Winter and Srini Chandrasekar, the engineering leads for Operations Management Suite, give their perspective on how new management capabilities can address some of the key challenges facing IT Operations teams today.

      Faster Visual Studio “15” Startup


As John mentioned last Wednesday in the Preview 5 announcement post, we are investing heavily in performance improvements in this release. This is part one of a five-part series covering performance improvements for Visual Studio "15".

      Today, I’ll walk you through a set of investments we made to improve the Visual Studio startup experience in this latest release and specifically cover the following topics:

• How you can use our new performance center to determine whether extensions or tool windows are impacting experiences such as startup, solution load, or code editing, and, of course, how to optimize them.
• How moving extensions out of the startup path with an on-demand-load approach, together with optimizing and deferring cache initializations, helped us improve startup times.

      As the Visual Studio user base has grown tremendously over the years, so has the variety of partner technologies used in Visual Studio. Unfortunately, some of the extensions significantly impact Visual Studio startup time as features and extensions load automatically at startup.

      Below is an example indicating how Visual Studio startup time could easily get 50% slower when loading extensions at startup.

      What is involved in Visual Studio startup?

      There are three different Visual Studio startup types:

      1. First launch: the very first launch of Visual Studio after setup has finished. First launch of Visual Studio is considerably slower than other startups because the Visual Studio environment is configured with various caches and prebuilt tables.
2. Normal startup: we call subsequent Visual Studio launches, following the first launch, normal startups. Such launches exclude debug instances, instances launched using command line arguments, and instances launched right after an extension or update was installed. 80% of Visual Studio startups fall into the normal startup category.
3. Configuration change: a startup that occurs after an extension or update is installed.

      These types help us identify the root cause of potential slow-downs and enable further investigations for optimizations.

      First launch improvements

      In Visual Studio 2015, a first launch startup involves scanning installed components and creating a configuration hive, initializing default settings, getting user sign-in information, and initializing caches such as the Managed Extensibility Framework (MEF), extension manager, toolbox, and font/color caches.

      In Visual Studio “15”, we have considered each of these steps to see which ones can be deferred or optimized:

• We experimented with deferring toolbox initialization in Visual Studio 2015, which had a positive impact on load time, and made the deferral permanent in Visual Studio "15".
      • Some caches like the font and color cache are no longer initialized at first launch. Instead, we significantly improved cache configuration allowing us to defer their initialization until a later time without impacting the user experience at first launch.
      • By having MEF and the extension manager service be asynchronous, we can now initialize those caches in parallel while sign-in runs.

      With all these changes, we are happy to announce that Visual Studio “15” launches up to 3x faster than Visual Studio 2015. Here is a very early read from our telemetry:

Duration | Visual Studio 2015 | Visual Studio "15"
First launch duration (80th percentile) | 215.5 sec | 80.3 sec

      Normal startup improvements

A normal startup of Visual Studio, at minimum, involves initializing core services. Every release we continue to monitor these services to ensure core initialization doesn't get longer. On top of core services, startup can be impacted by two major additions: extensions that are automatically loaded at startup, and tool windows that persist in the window layout from a previous instance. Our telemetry suggests that automatically loaded extensions, including both Microsoft and 3rd party ones, significantly increase Visual Studio startup time.

      As mentioned above, our primary focus in Visual Studio “15” has been avoiding automatic loading of extensions and providing more options for those extensions to be able to load later without impacting the user experience. As part of that work, we have delivered features like:

      • Enabling asynchronous loading support for Visual Studio packages
      • Extending metadata rules that control automatic package loading and command visibility
      • Supporting asynchronous query of core Visual Studio services

      Xamarin and Python tooling are the first to adopt the on-demand-load approach in Visual Studio “15”. You should now see that startup is considerably faster in cases where you have these extensions installed.

Another factor that impacts startup is tool windows that are visible in the IDE at startup time. These tool windows persist from the previous instance and add time depending on the size of the features they initialize; some of them are quite noticeable. We continue to focus on improving the performance of such tool windows based on our telemetry. In addition, the Manage Visual Studio Performance dialog lets you override the default persistence behavior for these tool windows going forward. You will learn more about this dialog in the next section.

      Monitoring extension and tool window performance

      While we are minimizing the need for auto loading features at startup, we also added a feature in Visual Studio “15” to help users understand the impact of their extensions and tool windows at startup and other scenarios such as solution load and editing. Yes, this feature does auto-load to monitor what happens during Visual Studio startup.

When you launch Visual Studio, you will get a one-time notification for any slower extension, describing its performance impact on startup.

      At any time, you can go to Help -> Manage Visual Studio Performance to open the dialog and see which extensions or tool windows are impacting your Visual Studio’s startup, solution load or typing performance. From the Manage Visual Studio Performance dialog, you can disable the extension.

Like extensions, you can see the impact of a tool window in the same dialog. If a tool window slows down Visual Studio startup significantly, you will also get a notification. You can override the default persistence behavior by choosing one of the options below:

• Use default behavior; no change to the current behavior. Hence, you won't get any benefit if you choose this option.
      • Do not show window at startup; tool window will never persist between instances. You can open it later from a menu.
      • Auto hide window at startup; if tool window is visible at startup, this selection will collapse its group to avoid initializing the tool window. This option is my favorite.

      You can always revert these options by going back to Manage Visual Studio Performance dialog and changing the option to Use default behavior.

      Continue to help us

While this post focuses on startup improvements, there are also significant investments in the solution load area that my colleague Will Buik will talk about in the next post. Together these changes should allow Visual Studio to start faster when opened with a solution selected from Windows Explorer as well.

      You can help us make Visual Studio a better product for you in several ways:

      • First, we monitor telemetry from all our releases, including pre-releases. Please download and use Visual Studio “15” Preview 5.
• Secondly, if you are an extension author, stay tuned; in the upcoming weeks, we will post details on how extension authors can analyze their extensions, since any extension that chooses to auto load at startup or when a solution is opened can negatively impact the performance of Visual Studio. The coming guidance will help you evaluate whether new features in Visual Studio "15" can be used to remove the need for auto loading or reduce its impact. For those extensions that need to auto load, the guidance will also help with measuring the impact on startup.

      Thanks!

      Selma

      Selma Ikiz, Program Manager, Visual Studio IDE

      Selma has been working in developer technologies since she joined Microsoft in 2009. Her major focus has been delivering performant and stable IDE to developers. She is currently working on improving the new Visual Studio Installer through telemetry.

      PASS Summit 2016: world’s biggest SQL Server event


      PASS Summit 2016 is nearly upon us. With only 4 weeks until the big event, now is the time to register!

PASS Summit 2016 is a community-driven event with three days of technical sessions, networking, and professional development. Don't miss your chance to stay up to date on the latest and greatest Microsoft data solutions along with 4,000 of your data professional and developer peers.

      What’s new this year?  So many things!

      • PASS Summit is not just for DBAs.  With nearly 1,000 developers attending the event, Microsoft has increased the number of sessions focused on application development and developer tools by 60%.

      • While many people attend PASS Summit to grow fundamental database skills, we know that many attendees are very experienced, senior data professionals so we increased the number of deep technical sessions by half.

      • We have also added a new type of session called a Chalk Talk. These are Level 500 sessions with Microsoft senior program management hosting open Q&A in a collegiate style setting.  Seating is limited to 50 so you’ll want to get there early to claim your spot.

In addition to these enhancements, Microsoft has also increased its investment in sending employees onsite to talk with attendees. They'll be easy to spot – all 500 Microsoft employees onsite will be wearing bright fuchsia t-shirts. You can find them in big numbers at the Day 1 keynote, the Microsoft booth, SQL Clinic, Wednesday's Birds of a Feather luncheon, Thursday's WIT luncheon, and of course in our big booth in the Expo Hall.

      Have a technical challenge or need architecture advice?

SQL Clinic is the place to be. SQL Clinic is the hub of technical experts from SQLCAT, the Tiger Team, CSS, and others. Whether you are looking for SQL Server deployment support, have a troublesome technical issue, or are developing an application, the experts at SQL Clinic will have the right advice for you.

      Click here to register today!

      Are you a member of a PASS chapter or virtual chapter?  If so, remember to take advantage of the $150 discount code.  Contact your chapter leader for details.

      Sending your whole team? There is also a great group discount for companies sending five or more employees.

      Once you get a taste for the learning and networking waiting for you at PASS Summit, we invite you to join the conversation by following @SQLServer on Twitter as well as @SQLPASS and #sqlsummit. We’re looking forward to an amazing event, and can’t wait to see everyone there!

      Stay tuned for regular updates and highlights on Microsoft and PASS activities planned for this year’s conference.
