Channel: TechNet Technology News

TFS/Team Services Q4 Roadmap update


A few days ago we published our Q4 update to the TFS/Team Services roadmap.  We should have done it six weeks ago, but Connect(); in mid-November kept us really busy and it fell through the cracks.  Sorry about that.  We should be updating it again in mid-January.

As a reminder, everything here is just our best estimate, and things change.  We try to give about a six-month time horizon so that you know what we are working on and planning now.  Beyond that, it really is just a backlog review exercise with no real ability to predict much concretely.

I’m working on getting a few blog posts written to drill into some detail on the more provocative ones.

If you have any questions, let me know.

Brian


DevOps at Connect();


A little late on this, but the video of my presentation (with Jamie Cool) is on Channel9 now: https://channel9.msdn.com/Events/Connect/2016?sort=status&direction=desc&c=Brian-Harry&term=

I’ve been saying for a couple of years that TFS and Team Services are a great DevOps solution for your whole team – regardless of the technology or platforms they work on.  I decided to finally put my money where my mouth is and do a big talk centered on a DevOps story for a Java app running on Linux, deployed to a Docker container.  There are a few other cool demos, too.

Check it out.

Brian

Prepare for the new MCSA: Windows Server 2016 certification


We’ve got great news for IT Pros who’ve been waiting for the MCSA: Windows Server 2016 certification! It will be available soon, and we have some practical instructor-led courses to help you get ready for the exams:

  • 20740: Installation, Storage, and Compute with Windows Server 2016. This course is designed for professionals who will be responsible for managing storage and compute by using Windows Server 2016 and who need to understand the available and applicable scenarios, requirements, and storage and compute options.
  • 20741: Networking with Windows Server 2016. In this course, learn the fundamental networking skills required to deploy and support Windows Server 2016. It covers IP fundamentals, remote access technologies, and more advanced content, including software-defined networking.
  • 20742: Identity with Windows Server 2016. For IT Pros who want to learn to deploy and configure Active Directory Domain Services (AD DS) in a distributed environment, this course teaches how to implement Group Policy, how to perform backup and restore, and how to monitor and troubleshoot Active Directory–related issues with Windows Server 2016.

Take these courses to help you prepare for the three corresponding exams, which are currently in development and available as betas. Practice tests for the exams will be available shortly after each beta period ends. Pass all three exams to earn your MCSA: Windows Server 2016 certification.

Do you already hold an MCSA: Windows Server 2012 or MCSA: Windows Server 2008 certification? Want to upgrade to the new 2016 certification? You can do so through a single upgrade exam.

Dive into exam prep this month! Get ready to demonstrate your ability to accomplish the technical tasks covered in these exams, get certified (or upgrade your current certification), and take your career to the next level with the new MCSA: Windows Server 2016 certification.

How We Share Machine Learning, Analytics & Data Science at Microsoft


We recently concluded the Fall 2016 edition of the Machine Learning, Analytics & Data Science (MLADS) conference, Microsoft’s largest internal gathering of employees focused on this very important field. This latest edition was the sixth in a very popular series that we launched in spring 2014, and with over 3,000 employees participating over the course of the two-day event in Redmond, it was a testament to the rapid growth of this community at Microsoft and the expanding investments Microsoft is making in this area.

The conference itself is just one aspect of the very large community we are fostering within the company – many thousands strong – unified by their passion for this space and its potential for customer impact, and by their desire to network and learn from one another. We run a twice-yearly call for content from the community, and the submissions from that process feed into the final MLADS conference programs.

The latest conference featured over 100 tutorials and talks covering the whole gamut of topics: Big Data platforms such as Azure Data Lake; Deep Learning tools and techniques, including the Cognitive Toolkit (Microsoft’s open source Deep Learning framework); the Microsoft Bot Framework; commonly used techniques such as classification, regression, time series and anomaly detection; security analytics; R, Python, notebooks and much more. There was even a hands-on tutorial on Julia, taught by one of the co-founders of that language. Additionally, over 60 demos and posters were showcased at our evening MLADS reception. Attendees also availed themselves of 1:1 consultation sessions with expert ML engineers and data scientists to get their questions answered.


A panel of judges shortlisted the top submissions for our Distinguished Contribution Awards, a unique internal recognition program for top-class work in this field. Additionally, select conference submissions were chosen for publication in our internal Microsoft Journal of Applied Research. The latest conference also featured several external speakers, including presenters from the University of Washington, Julia Computing, Algorithmia and NASA, and included informal lunchtime meetups on important related efforts such as Women in Machine Learning, Women in Data Science, Ethics in Machine Learning and the Microsoft Professional Program in Data Science.

We opened to packed keynote sessions each day of the conference: Christopher Bishop, Distinguished Scientist and Director of the Microsoft Cambridge (UK) Labs, talked about Embracing Uncertainty, highlighting how fundamental uncertainty is to the entire ML/AI revolution, and thus the need for mathematical rigor around probability and decision making, to ensure that predictions offered by software systems maximize the value created for customers and users.

Christopher Bishop, Distinguished Scientist and Director of the Microsoft Cambridge (UK) Labs

Joseph Sirosh, Corporate Vice President of the Data Group at Microsoft, used his keynote to highlight the massive impact that the Intelligent Cloud is having on the human condition. Joseph’s talk covered three key design patterns for creating intelligent apps – Intelligent Databases, Intelligent Data Lakes, and Deep Intelligence – and included demos illustrating these patterns based on Microsoft’s partnerships with top companies such as Stack Overflow, Uber, Lowe’s, eSmart Systems and the LV Prasad Eye Institute.

Additionally, Xuedong Huang (XD), Distinguished Engineer, talked about the Cognitive Toolkit (formerly CNTK), including an impressive demo of real-time captioning of his own talk, enabled by the breakthrough technology his team is creating in the realm of speech recognition.


Xuedong Huang, Distinguished Engineer, Microsoft

The conference was book-ended by a Startup Showcase, at which employees were able to connect with founders and engineers from the latest cohort of startups that are part of the Microsoft Accelerator program in Seattle, and a closing plenary panel on the very important topic of Privacy, Ethics & Machine Learning.

Between our large MLADS conferences, our community also has a regular cadence of in-person and online events, including quarterly ML-focused hackathons. Through these investments, practitioners from across the global Microsoft community are able to share knowledge, help one another and enhance Microsoft’s products and services through the power of cutting-edge techniques in big data, ML, advanced analytics and data science.

CIML Blog Team
Follow us on Twitter

SC 2016 DPM Capacity Planner


SC 2016 DPM introduced Modern Backup Storage, which changes the way data is backed up and stored. Modern Backup Storage delivers 50% storage savings and 3x faster backups by leveraging the latest Windows Server 2016 technologies, such as ReFS block cloning, VHDX, and allocate-on-write.

Further, DPM 2016 comes with Workload-Aware Storage, which enables you to direct backups of certain kinds of workloads to certain volumes. Hence, you can now store your more frequently backed up SQL and SharePoint data on expensive, high-performance volumes, while storing less frequently backed up data on lower-performance storage. This further optimizes storage consumption while decreasing storage spend.

Here is the Backup Storage Capacity Planner to help you provision storage for DPM 2016 with these storage savings and efficiencies. Based on inputs such as the size, kind and policy of backups, the Planner suggests the amount of storage that will be needed to store the backups on disk and in Azure.

3 Simple steps to plan Backup Storage Requirements


 

Provisioning Resources using SC 2016 DPM Capacity Planner

Once you have planned the storage, begin by adding volumes to SC 2016 DPM and using MBS. The best practice is to add your disks to a storage pool, create virtual hard disks with the Simple layout, and create a volume on each. This volume can be given to SC 2016 DPM and extended as and when needed.
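As a minimal PowerShell sketch of that best practice (the pool, disk, and volume names below are placeholders, not DPM requirements):

# Pool all available physical disks
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "DPMPool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

# Create a virtual hard disk with Simple layout, then carve a ReFS volume out of it for DPM
New-VirtualDisk -StoragePoolFriendlyName "DPMPool" -FriendlyName "DPMDisk" -ResiliencySettingName Simple -UseMaximumSize
Get-VirtualDisk -FriendlyName "DPMDisk" | Get-Disk | Initialize-Disk -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem ReFS -NewFileSystemLabel "DPMVolume"

The resulting volume can then be added to DPM and extended later by expanding the virtual disk.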

Get Modern Backup Storage now!

You can get DPM 2016 up and running in ten minutes by downloading the evaluation VHD.  Questions? Reach out to us at AskAzureBackupTeam@microsoft.com.

If you are new to Azure Backup and want to enable it for long-term retention, refer to Preparing to backup workloads to Azure with DPM.  Click for a free Azure trial subscription.

Here are some additional resources.

New to Office 365 in November—new collaboration capabilities and more


Today’s post was written by Kirk Koenigsbauer, corporate vice president for the Office team.

This month, we’re announcing several updates to the Office apps to help you easily and effectively collaborate with others. This includes real-time co-authoring in PowerPoint, the ability to upload attachments to the cloud directly from Outlook, and other collaboration capabilities.

Expanding real-time co-authoring to PowerPoint

When we launched Office 2016 last year, we said we would expand real-time co-authoring to more of our native apps beyond Word on Windows desktops. We are excited to announce that real-time co-authoring is now available in PowerPoint on Windows desktops, allowing you to see what others are typing as it happens on a given slide.


Real-time co-authoring is now available in PowerPoint on Windows desktops.

Availability: Real-time co-authoring is currently available in PowerPoint Mobile on Windows tablets, and it is now also available in PowerPoint on Windows desktops for Office 365 subscribers in the Office Insider program.

Use Outlook to move attachments to the cloud and share with others

Outlook already lets you attach cloud-based documents to an email, helping you better collaborate with others using a single version of the file. Now you can easily transform a traditional attachment into a shared cloud document right within Outlook. Upload a file to your own OneDrive or a shared OneDrive as part of an Office 365 Group. Then specify sharing permissions for the email recipients. Get started with cloud attachments in a few easy steps.


Easily transform a traditional attachment into a shared cloud document in Outlook.

Availability: Uploading attachments to the cloud is currently available in Outlook on the web, and it is now also available in Outlook on Windows desktops for all Office 365 subscribers.

Stay on top of changes to shared documents with mobile notifications

We’re introducing notifications in Word, Excel and PowerPoint on mobile devices to alert you to activity with your shared cloud documents. Notifications let you know when changes are being made while you are away from a document, so you can stay connected and know when you need to act. Together with the integrated activity feed already available on Windows desktops, you can collaborate more confidently. Commercial customers and consumers receive notifications when documents are shared with others. Consumers are additionally notified when documents are edited. We’ll continue enhancing notifications to provide more detail and transparency around activity in shared documents in the future.


Mobile notifications alert you to activity with your shared cloud documents.

Availability: Sharing and editing notifications are now available for consumers in Word, Excel and PowerPoint on Android and Windows Mobile for Office Insiders. Sharing and editing notifications for consumers on iOS are coming with next month’s updates. Sharing notifications for commercial customers in all Office mobile apps will be available on all platforms in the coming months.

Find, open and save documents more easily with Shared with Me and Recent Folders

We’ve added a Shared with Me tab in Word, Excel and PowerPoint. Just like in OneDrive, this view makes it easy to find and open documents that others have shared with you without leaving the app you’re working in. We’ve also added a Recent Folders list in the Recent tab, making it easier to navigate to the right place to find or save your files as desired.


The Shared with Me tab helps you find and open documents more easily.

Availability: The Shared with Me tab is now available in Word, Excel and PowerPoint on Windows desktops and Macs for all Office 365 subscribers, as well as on iOS and Android. It is coming soon for Windows Mobile. The Recent Folders list is now available in Word, Excel and PowerPoint on Windows desktops for Office 365 subscribers in the Office Insider program.

Other Office 365 updates this month

We also have a few additional updates this month.

Learn more about what’s new for Office 365 subscribers this month at: Office 2016 | Office for Mac | Office Mobile for Windows | Office for iPhone and iPad | Office on Android. If you’re an Office 365 Home or Personal customer, be sure to sign up for Office Insider to be the first to use the latest and greatest in Office productivity. Commercial customers on both Current Channel and Deferred Channel can also get early access to a fully supported build through First Release. This site explains more about when you can expect to receive the features announced today.

—Kirk Koenigsbauer


Merging intelligence with productivity—a demo tour of recent Office app updates


On today’s Microsoft Mechanics, we look at the latest in intelligent Office app experiences spanning PowerPoint, Excel, Word, Outlook and the browser. Ben Walters presents how Office can bring in intelligence via the Microsoft Graph, Azure Machine Learning and Delve to save you time.

Ben kicks off his demonstration with PowerPoint Designer, a new way to automatically design slide layouts when importing pictures or other visual elements. Designer now also works with text-based lists to apply visual formats. Going beyond slide design and formatting, Ben also highlights PowerPoint QuickStarter, which uses the Microsoft Graph to pull in content ideas and photos, creating a great starting point for your presentations.

In Word, we take a look at Word Tap, a new feature that uses intelligence with Delve to bring in recent content like charts and graphs. Within Word, Ben also shows how machine learning can help improve writing by looking for word redundancies, jargon, gender-specific language and more.

In Excel, we explore the new Azure Machine Learning add-in to run text sentiment analysis, then use map charts to give a visual representation of the data the add-in returns.

In Outlook, we’re using intelligence and machine learning to help assess the importance of incoming email via the new Focused Inbox. The MyAnalytics add-in also helps track how your email is being read, reply rates and the timing of these activities. For Outlook Mobile, we’re applying intelligence to find important events like flight information using Action Cards, and scheduling calendar events is much easier on the phone using integrated free/busy information for meeting attendees.

In the browser, Office 365 not only uses intelligence from Delve to surface important content you’ve been working on; you can also customize the App Launcher to highlight the apps most important to you and link directly to Office 365 capabilities using the intelligent Tell Me engine from Office 365 Help.

While this is a glimpse of some of the capabilities rolling out to Office via First Release and the Office Insider program, there is much more coming. Of course, to see all of this in action, you’ll want to check out today’s show.

—Jeremy Chapman


Harness the Power of the Redesigned Start Page


In Visual Studio 2017 RC, we brought you a faster installation, better performance, and new productivity features. One of these productivity features is a redesigned Start Page that prioritizes the actions that help you get to code and start working faster.

Most Recently Used (MRU) List

We’ve heard from you that the MRU is the most valuable part of the Start Page, so we thought it was time to give it the prominence it deserves. To help you quickly find what you’re looking for, each MRU item now displays an icon denoting it as a project, solution or folder; a file path for local items; and a remote URL for remote items not yet on disk. With the addition of open-folder functionality, we’ve added support for recently opened folders. The new MRU also brings you grouping by date, a longer history, and a pinned group that lives at the top of the MRU to give you easy access to your most important items.

To help you be more productive across multiple machines, we’ve also added a roaming element to the MRU. If you clone a repository hosted on a service, such as Visual Studio Team Services or GitHub, Visual Studio will now roam this item to any Visual Studio instance that is associated with the same personalization account. You can then select the roamed item from the MRU to clone it down and continue your work in that code base.

MRU with pinned remote repository, folder and projects

Create New Project

Whether you’re new to Visual Studio or a seasoned user, chances are you’ll want to create a new project at some point. In our research, we found that the “New Project…” command was one of the most used features of the Start Page, but after talking with many of you, we discovered you often created the same few projects for experimentation. To help speed up this process, the Start Page now allows you to search for the specific project type you’d like to create, eliminating the need to click around the New Project dialog to find what you’re looking for. The Start Page also remembers what you’ve recently created and lets you create your project directly from the Start Page, bypassing the steps of finding and selecting the template in the New Project dialog. If you sign in to Visual Studio, this list will also roam with you across your devices.

Recent Project Templates

Search for web project templates

Open

Whether your code lives locally, on an on-premises TFS server, hosted in VSTS or shared on GitHub, we wanted to simplify finding, downloading and opening that project.

We’ve added the ability to open a folder, while preserving the Open Project/Solution command, so you can open a code base with or without a solution file from within VS.

For code on TFS or hosted in VSTS, support comes out of the box: you can clone a repository simply by clicking the Visual Studio Team Services item underneath the “Checkout from” header. This area is also third-party extensible, and GitHub is one service provider that has already taken advantage of this extension point. If you install the updated GitHub extension, you’ll notice GitHub appears alongside VSTS. We’re working to onboard more service providers so that you can easily connect to your code, regardless of the service you use to host your projects.

Open from VSTS, GitHub or from local disk

Developer News

We’ve worked on making sure the news stays fresh and relevant, with fewer gaps between posts. While some users read Start Page news to start their day, we heard it isn’t for everyone. To let you reclaim space and focus on your code, you can now collapse the news section. Don’t worry about missing the latest post; a little badge appears whenever something new comes in to keep you up to date on all our latest news.

Collapsed Developer News with Alert Badge

Show Start Page

From early feedback, we’ve heard some confusion about where the command to show the Start Page has moved. We envision the Start Page becoming the starting point for your experience, so we’ve moved this command into the File menu for quicker access.

Thank you

With our latest Start Page, we aim to make your experience more productive, more functional and more personalized, drawing value from quick access to key actions, the ability to roam important elements like repositories and project templates, and a new design that lets you focus on getting to your code.

Download Visual Studio 2017 RC today and share your feedback. For problems, let us know via the Report a Problem option in the upper right corner, either from the installer or the Visual Studio IDE itself. Track your feedback on the developer community portal. For suggestions, let us know through UserVoice.

Allison Buchholtz-Au, Program Manager, Visual Studio Platform

Allison is a Program Manager on the Visual Studio Platform team, focusing on streamlining source control workflows and supporting both our first and third party source control providers.


The week in .NET – Cosmos on On.NET, GongSolutions.WPF.DragDrop, Transistor


To read last week’s post, see The week in .NET – .NET Core, ASP.NET Core, EF Core 1.1 – Docker – Xenko.

On .NET

Last week, Chad Z. Hower, a.k.a. Kudzu, was on the show to talk about Cosmos, a C# open source managed operating system.

This week, we’ll speak with Xavier Decoster and Maarten Balliauw about MyGet. The show is on Wednesday this week and begins at 10AM Pacific Time on YouTube. We’ll take questions on the video’s integrated chat.

Package of the week: GongSolutions.WPF.DragDrop

The GongSolutions.WPF.DragDrop library is an easy-to-use drag-and-drop framework for WPF. It supports MVVM, multi-selection, and visual feedback adorners.

WPF.DragDrop

Game of the week: Transistor

Transistor is a sci-fi action RPG that follows the story of Red, a famous singer who is under attack. Even though Red manages to escape, it is not without losses. Fortunately, Red immediately comes into possession of a weapon known as the Transistor. As foes are defeated, new Functions are unlocked for the weapon, giving players the ability to configure thousands of possible combinations. Transistor features a unique strategic approach to combat, beautiful graphics and a rich story.

Transistor

Transistor was created by Supergiant Games using C# and their own custom engine. It is currently available on Steam, PlayStation 4 and the Apple App Store.

User group meeting of the week: Electrical Engineering for Programmers in NYC

Tonight, Tuesday November 29 at 6:00PM at the Microsoft Reactor in NYC, the Microsoft Makers and App Developers group holds a meeting on Electrical Engineering for programmers.

.NET

ASP.NET

F#

There are also new F# language proposals this week.

Check out F# Weekly for more great content from the F# community.

Xamarin

Azure

And more Azure links on Azure Weekly, by Chris Pietschmann.

Data

Games

And this is it for this week!

Contribute to the week in .NET

As always, this weekly post couldn’t exist without community contributions, and I’d like to thank all those who sent links and tips. The F# section is provided by Phillip Carter, the gaming section by Stacey Haffner, and the Xamarin section by Dan Rigby.

You can participate too. Did you write a great blog post, or just read one? Do you want everyone to know about an amazing new contribution or a useful library? Did you make or play a great game built on .NET? We’d love to hear from you, and feature your contributions in future posts.

This week’s post (and future posts) also contains news I first read on The ASP.NET Community Standup, on Weekly Xamarin, on F# weekly, and on Chris Alcock’s The Morning Brew.

Empowering Developers with AI & Deep Learning


Re-posted from the Azure blog.

Deep Learning is powering many of the biggest breakthroughs in Artificial Intelligence in recent times, spanning areas such as speech recognition, language understanding and computer vision.

Indeed, Deep Learning is now changing the very customer experience around many Microsoft products, including HoloLens, Skype, Cortana, Office 365, Bing and more. Our Deep Learning based language translation in Skype was recently named one of the 7 greatest software innovations of the year by Popular Science, and this technology has now helped machines achieve human-level parity in conversational speech recognition.


Deep Learning is now a core part of Microsoft’s development platform offerings, an extensive toolset.
The applications of this technology are so far-reaching that the new mantra of “Deep Learning in every software” may well become a reality within this decade.

Through our intelligent algorithms, cloud infrastructure and partnerships with leading organizations such as NVIDIA and OpenAI, we are making Microsoft Azure the fastest and most versatile AI platform in the world, and helping developers and customers everywhere realize the holy grail of a truly intelligent cloud.

Learn more at the original post by Joseph Sirosh here.

CIML Blog Team

Docker on a Synology NAS - Also running ASP.NET and .NET Core!


I love my Synology NAS (Network Attached Storage) device. It has sat quietly in my server closet for almost 5 years now, and it was a fantastic investment. I've filled it with 8TB of inexpensive Seagate drives, and it does its own flavor of RAID to give me roughly 5TB of storage (which, for my house, is effectively infinite). In my house it's just \\SERVER or http://server. It's a little GNU/Linux machine that is easier to manage and maintain (and generally deal with) and just chills in the closet. It's a personal cloud.

It also runs:

  • Plex - It's a media server with over 15 years of home movies and photos. It's even more magical when used with an Xbox One. It transcodes videos that then download to my Windows tablets or iPad...then I watch them offline on the plane.
  • VPN Server - I can remotely connect to my house. Even stream Netflix when I'm overseas.
  • Surveillance Station - It acts as a DVR and manages streams from a dozen cameras both inside and outside the house, scanning for motion and storing nearly a week of video.
  • Murmur/Mumble Server - Your own private VOIP chat service. Used for podcasts, gaming, private calls that aren't over Skype, etc.
  • Cloud Sync/Backup - I have files in Google Drive, Dropbox, and OneDrive...but I have them entirely backed up on my Synology with their Cloud Sync.

Every year my Synology gets better with software upgrades. The biggest and most significant upgrade has been the addition of Docker and the Docker ecosystem – there is first-class support for Docker on Synology. Some Synology devices are cheaper and use ARM processors; make sure you get one with an Intel processor for best compatibility. Get the best one you can and you'll find new uses for it all the time! I have the 1511 (now 1515) and it's amazing.

ASP.NET Core on Docker on Synology

A month ago Glenn Condron and I did a Microsoft Virtual Academy on Containers and Cross-Platform .NET (coming soon!), and we made this little app and put it in Docker. It's “glennc/fancypants,” which means I can easily run it anywhere with just:

docker run glennc/fancypants

Sometimes a Dockerfile for ASP.NET Core can be as basic as this:

FROM microsoft/aspnetcore:1.0.1
ARG source=.
WORKDIR /app
EXPOSE 80
COPY $source .
ENTRYPOINT ["dotnet", "WebApplication4.dll"]

You could certainly use Docker Compose and have your Synology running Redis, MySql, ASP.NET Core, whatever.
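As a rough sketch, a docker-compose.yml along those lines might look like this (the services, images, and ports are illustrative, not from the MVA demo):

version: '2'
services:
  web:
    image: glennc/fancypants   # the ASP.NET Core app from above
    ports:
      - "8080:80"
    depends_on:
      - redis
  redis:
    image: redis:3.2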

Even better, since Synology has such a great UI, here is Glenn's app in the Synology web-based admin tool:

Docker on Synology - Node and ASP.NET Core Apps 

I can SSH into the Synology (you'll need to SSH in as root, or set up Docker to allow another user to avoid this) and run docker commands directly, or I can use their excellent UI – it's really one of the nicest Docker UIs I've seen. I was able to get ASP.NET Core and the Node.js Ghost blog running in minutes with modest RAM requirements.
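For example (the hostname here matches my setup; yours will differ):

ssh root@server
docker ps                                  # list the same containers the Synology UI manages
docker run -d -p 8080:80 glennc/fancypants # start a container directly from the shell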


Once Containers exist in Docker on Synology you can "turn them on and off" like any service.

ASP.NET Core on Docker on Synology

This also means that your Synology can now run any Docker-based service, like a private version of GitLab (good instructions here)! You could then (if you like) do cool domain mappings like gitlab.hanselman.com:someport and have your Synology do the work. The Synology could also run Jenkins or Travis, which makes my home server fit nicely into my development workflow without using any compute resources on my main machine (or any cloud resources at all!)

The next step for me will be to connect to Docker running on Synology remotely from my Windows machine, then setup "F5 Docker Debugging" in Visual Studio.


Anyone else using a Synology?

* My Amazon links pay for tacos. Please use them.


Sponsor: Big thanks to Octopus Deploy! Do you deploy the same application multiple times for each of your end customers? The team at Octopus have taken the pain out of multi-tenant deployments. Check out their latest 3.4 release!


© 2016 Scott Hanselman. All rights reserved.
     

VS Team Services Update – Nov 28


This week we are deploying our sprint 109 payload.  You can read the release notes for details.

There are a few things I’m particularly excited about.

Build task versioning – We had a live site incident a few months ago because we rolled out an update to a build task on our hosted pools and it broke pretty much everyone’s builds.  That caused me to dig in a bit to understand how we were managing validation and rollout of build tasks.  What I learned is that we just didn’t have a sufficient mechanism to manage this.  With this deployment, we are rolling out a new build task versioning capability that enables an author to roll out a new version of their task without affecting people currently using it.  Specifically, we introduced the notion of major and minor versions of tasks.  You can lock your build definition to a major version, and an author can create a new major version without affecting existing builds; build definitions can be updated to the new major version when you are ready to test it.  You cannot currently disable minor version updates – the primary reason is that we want a way to forcibly push security fixes, for instance.  We’re still exploring options for further lockdown for very controlled environments.

Following a pull request – We added following for work items a while back, and now we have following for pull requests.  I’ve found that to be an awesome way to track a PR conversation.

Linux hosted build pool – I’m really excited to be providing first-class Linux support, including Docker.

As I mentioned last time, this is our last deployment for 2016.  This deployment should finish up by Dec 2nd.  We will skip the sprint 110 deployment (because it would run into Christmas) and pick back up with the sprint 111 deployment in mid-January.

Thanks,

Brian

 

Tips and Tricks from PowerShell Core Validation


It has been a privilege for the CAT team to work with customers and the PowerShell team to validate early builds and experiences with PowerShell Core. Some of the customers involved were key influences on the whitepaper, The Release Pipeline Model Applied to Windows Server and Microsoft Cloud. As a result, validation has included many experiences from outside the traditional Microsoft tool chain, such as Vagrant and Jenkins.

For this blog post, I wanted to share some of the learnings that I gained during the validation experience. This is not meant to be a complete picture of PowerShell Core. Rather, a glimpse at some exciting new possibilities.

Voyage of Discovery

In my mind, the best thing about PowerShell as a scripting language is the never-ending journey of exploration. When I need to accomplish a task and I’m unfamiliar with the area of technology, I rely on Get-Command. If I need to gather information, I start the learning process by looking for commands that begin with the verb Get, so I open the console and type Get-Command Get-*. When I need to create something new, I look for commands that start with the verb New. When I need to change a value, I am probably looking for commands that start with Set. If I plan to run something, the commands probably start with Invoke, and so forth.

I view myself as a novice with Linux-based servers. I am certainly not an expert, but I am by no means a rookie either. Still, it is a muscle I don’t regularly exercise, so to accomplish even the most routine tasks I require a search engine. My testing began by connecting to an Ubuntu 14.04 server running on Azure using SSH. I followed the documented steps from the PowerShell repo to install .NET Core and PowerShell Core. Then, just as I would when remotely connected to Windows Server Core or Nano, I executed Get-Command to start looking at which cmdlets are available and how I could combine them to perform operations wizardry.

The “ah ha!” moment for me has been combining PowerShell Core with the existing Linux tools and scripting languages. Now the same work patterns that I describe above are applicable on platforms where my domain knowledge has less depth, but I am exploring topics using a familiar approach.

A simple proof of concept

I wanted to experiment with simple “operations tasks” that I handle daily on Windows based machines. The bare bones list that came to mind was:

  • getting the IPv4 address of the machine
  • finding out which DNS name servers it is pointing to
  • checking the drive configuration

I purposefully chose these areas because (at the time of this writing) there are not yet cmdlets available. The challenge would be understanding how to weave together the cmdlets at my disposal until the tasks become simple.

Getting text using Select-String

Of the three challenges, the simplest solution was checking which DNS name servers the machine points to. This also allows me to demonstrate a core concept that will be an important skill: getting text.

The existing tools and scripting languages on Linux are GREAT at working with text output – that is a fundamental skill. These tools, such as the cat command, could simply be ‘wrapped’ by PowerShell functions if needed. I wanted to be stubborn and try working with the native PowerShell cmdlets for pulling information out of text. The Select-String cmdlet has helped a lot in this pursuit.

If you are not familiar with Select-String, try the command Get-Help Select-String -detailed from the console. This will provide you with more information. Get-Help is a handy way to learn more about cmdlets as you are experimenting.

get-content /etc/resolv.conf | select-string nameserver

nameserver 10.0.0.5

In this example, the one-line command simply captures the contents of the file resolv.conf and returns each line that includes the word ‘nameserver’. Of course, it will not always be so simple. You might need to capture multiple lines of text and extract specific values. A good trick to know is the -Context parameter for Select-String. The value is the number of lines before and after the matching text that you want returned; separate the values with a comma. So, to return any line with the word ‘nameserver’ and the line after it, see the example below.

get-content /etc/resolv.conf | select-string nameserver -context 0,1

> nameserver 10.0.0.5
  search dns.example.com

Getting text using Regular Expressions

As I mentioned above, there are Linux tools to handle this – great tools, in fact, such as grep, awk, and sed. The challenge I gave myself was to use the style and approach to regular expressions that I am most familiar with from PowerShell script examples, including both the -match operator and the .NET method Regex.Matches.


The task of getting the IPv4 address of a server provides a nice example. It is also an opportunity to work with text that is output from a command.

$ipInfo = ifconfig | select-string 'inet'
$ipInfo = [regex]::matches($ipInfo,"addr:\b(?:\d{1,3}\.){3}\d{1,3}\b") | ForEach-Object value
$ipInfo

addr:10.0.0.2
addr:127.0.0.1

Although that is the output I was looking for, it would be nice to clean up the “addr:” text in front of each address. If you are unfamiliar with manipulating strings using PowerShell, pass the result of Select-String to Get-Member using a pipeline operator |.

$ipInfo | Get-Member

Among the available options are split and replace.

$ipInfo.replace('addr:','')

10.0.0.2
127.0.0.1

Finally, I don’t really need to see the localhost address in every output, so I can apply a filter using the Where-Object cmdlet.

$ipInfo.replace('addr:','') | Where-Object {$_ -ne '127.0.0.1'}

10.0.0.2

Finally, we can take this work and create a nice function that we can re-use in the future.

function Get-IPAddress {
    $ipInfo = ifconfig | select-string 'inet'
    $ipInfo = [regex]::matches($ipInfo,"addr:\b(?:\d{1,3}\.){3}\d{1,3}\b") | ForEach-Object value
    $ipInfo.replace('addr:','') | Where-Object {$_ -ne '127.0.0.1'}
}

Get-IPAddress

10.0.0.2

Objects

All this working with text may feel unfamiliar to experienced PowerShell users, since a core concept in PowerShell is working with objects. The last example, getting a list of available disks, provided an opportunity to try this. Also, since getting disk information is a privileged operation, I got a chance to try the sudo command within a function in PowerShell Core.

Building on the ideas we have explored so far, I need to find the command that provides text output with disk information, and then I need to manipulate that text. The new concept this example introduces is taking that text and putting it in an object using the New-Object cmdlet. The text string values are simply passed to the -Property parameter as a hashtable.

Note the use of square brackets to index into an array. For example, $array[0] returns just the first value.

Function Get-DiskInfo {
    $disks = sudo parted -l | Select-String "Disk /dev/sd*" -Context 1,0
    $diskinfo = @()
    foreach ($disk in $disks) {
        $diskline1 = $disk.ToString().Split("`n")[0].ToString().Replace('  Model: ','')
        $diskline2 = $disk.ToString().Split("`n")[1].ToString().Replace('> Disk ','')
        $i = New-Object psobject -Property @{'Friendly Name' = $diskline1; Device=$diskline2.Split(': ')[0]; 'Total Size'=$diskline2.Split(':')[1]}
        $diskinfo += $i
    }
    $diskinfo
}

Get-DiskInfo
[sudo] password for psuser:

Friendly Name            Total Size Device
-------------            ---------- ------
Msft Virtual Disk (scsi)  31.5GB    /dev/sda
Msft Virtual Disk (scsi)  145GB     /dev/sdb

Now you can work with specific properties of the object. Recall you can always pipe the object to Get-Member to see which properties and methods are available.

(Get-DiskInfo).Device
/dev/sda
/dev/sdb

Performing the same work across platforms

For the final challenge, is it now possible to author a function in PowerShell that can be run across Windows, Linux, and macOS? There is a simple solution that makes this very straightforward.

PowerShell Core includes variables:

  • $IsLinux
  • $IsOSX
  • $IsWindows

So in your function you can perform Linux-type work in one IF block and Windows-type work in another, and as long as you return an object with the same properties, you can work with the results using the same cmdlets. In testing, I found it handy to check whether I was on Linux or OSX and, if not, use cmdlets from modules I expected to find in Windows, since the $IsWindows variable is not available (at the time of this writing) in the version of PowerShell that ships with Windows.

function Get-Something {
    if ($IsLinux -or $IsOSX) {
        # Get this information using approaches for Linux and/or OSX
        $something = get-content /file | select-string text
        }
    else {
        # Get this information using cmdlets already available in Windows
        $something = get-whatever -parameter value
    }

    $something | Where-Object {$_ -ne 'excluded value'}
}

For more information

For more information on this topic and for expansion on the examples used above, see ‘Learning PowerShell’ in the docs folder of the PowerShell repo. While you are there, you might also visit the ‘demos’ folder at the root of the repository for the latest examples of using PowerShell in cross-platform environments.

Thank you!
Michael Greene
Principal Program Manager
ECG CAT

APIs for Developers Series - Bing Search APIs

Coming out of the MVP Global Summit 2016 earlier this month, we were happy to see the excitement and enthusiasm around our Bing Search APIs and how they could help enhance our partners’ experiences. To broaden this conversation, we’ve created this series where we’ll delve into the details of the Bing APIs to answer any unknowns and help you get started.

In this blog post we will discuss some of the new additions to the Bing Search APIs and show how they can be lit up in new scenarios to help enrich your apps and experiences.

For those unfamiliar, the Bing Search APIs (including Web Search, Image Search, Video Search and News Search as one collection) give developers the ability to bring the knowledge and intelligence of web search right into their experiences, and intelligence is always at the forefront of our conversations with partners.

Built on the same technology stack as Bing.com, developers benefit from the security, scalability, relevance and ranking improvements that hundreds of millions of monthly users rely on. We are committed to supporting this ecosystem and will be pushing out regular feature improvements as part of the API lifecycle on an ongoing basis with a new version planned every 12 months.

This collection of APIs is also a great starting point for developers creating mobile apps. They are REST APIs that follow the latest structured data standards (Schema.org, JSON-LD), making them easy to implement and allowing your users to find relevant results from billions of webpages, images, videos and news items with one call and a few lines of code.

Bing Web Search API

With the Bing Web Search API, you can provide search results for billions of web pages, images, videos and news items with a single API call.  Web results include the most commonly clicked links on the destination website; these deep links help users complete their tasks more quickly and navigate to where they want to go with one click. The news answer, image answer, video answer and related searches are also part of the API.
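As a sketch, a basic Web Search call from PowerShell (assuming a Cognitive Services subscription key; the v5.0 endpoint shown was current as of this writing, and the query is illustrative) might look like:

$headers = @{ 'Ocp-Apim-Subscription-Key' = '<your-key>' }
$uri = 'https://api.cognitive.microsoft.com/bing/v5.0/search?q=sailing+dinghies&count=10&safeSearch=Moderate'
$result = Invoke-RestMethod -Uri $uri -Headers $headers
$result.webPages.value | Select-Object name, url   # the webPages answer holds the web results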
 


For example, app developer WildWorld uses the intelligence of the Bing APIs to power search for their online social-shopping community where people passionate about outdoor recreation can browse users’ photos, identify the tagged products, and then search for them online at the best prices. The scalability of the Bing APIs helps WildWorld deliver the comprehensive and relevant results their users expect and need.

Other Bing Web Search API features include the adult intent signal, which helps determine whether a query will return adult content so you can customize the safe search level of the results; filtered answers; pagination; bolding; and query alteration and spell suggestions, where commonly misspelled terms are automatically corrected.

Bing Image Search API

The Bing Image Search API offers powerful image search capabilities and can provide results that span from a narrow search topic to trending images to visually similar images and more. Extra filters and parameters are available, such as size, license, style, freshness, and color. The license filter helps users find images in the public domain or under licenses that permit them to use the image.

Image insights allow you to get deeper information about a specific image based on machine learning and entity recognition. Safe search is another powerful feature that allows you to adjust settings to help filter out inappropriate search results, such as explicit adult and racy content.


 
There are also new features, such as merchants and recipes, that expose a list of retailers that sell the product shown in an image. For images of food, we have tagged a number of sources that provide a recipe with instructions on how to make that specific item.

These updates open up a lot of new scenarios. Mobile app maker Cardinal Blue is taking advantage of the flexibility, power and safe search capabilities of the Bing Image Search API in their PicCollage app, which allows users to combine their photos, videos, captions, stickers and special effects to create unique collages and images to share with friends.

Bing Video Search API

With the Bing Video Search API, you can add advanced video search features, including video previews, trending video results, filters for free or paid options and other useful metadata, such as creator, encoding format, video length and view count.



Motion thumbnails (i.e., video previews) are a new addition that helps drive engagement. You can create an experience for your users similar to Bing.com, where they can view a short preview of a video by hovering over the thumbnail. As a result, users may stay longer as they preview the results, and this feature helps them quickly find the video they are looking for.

 
Bing News Search API

The Bing News Search API allows you to help users find relevant news results by topic, geographical location and rich article metadata. News articles can be searched by category/market and trending news (top results in the relevant topic).



The Bing Search APIs are another example of how we at Bing are continuing to build and improve APIs that allow developers to harness the knowledge and intelligence of the web for their users.

To get started for free or purchase via Azure, please visit the Bing for Partners website. If you have feedback or questions about our APIs, please comment on Bing Listens.

- The Bing Team
 

Evolving the Test Platform – Part 3: .NET Core, convergence, and cross-plat


[This is the 3rd post in the series on evolving the Visual Studio Test Platform. You can read the earlier posts here:
Evolving the Visual Studio Test Platform – Part 2,
Evolving the Visual Studio Test Platform – Part 1]

As .NET Core draws an ever-growing community of developers with existing assets and experiences, it is essential to support a consistent tools ecosystem. Thus, the “alpha” release of the MSBuild-based .NET Core Tools shipping with Visual Studio 2017 RC introduces support for the MSBuild build system and the .csproj project format – both familiar and key components of the .NET tools ecosystem. The need for a consistent testing experience follows naturally, and so the .NET Core testing experience/ecosystem is now converged with the Visual Studio Test Platform.

Concretely, this means the following:
(1) Evolved test platform: vstest is now fully cross-platform (Windows/Linux/Mac), and dotnet test now leverages this evolved Visual Studio Test Platform.
(2) Converged adapters: adapter writers can maintain a single converged adapter that serves Visual Studio, the vstest.console.exe CLI and the dotnet CLI.
(3) Converged user experience: the experience of using vstest.console.exe (the same options and capabilities) on the .NET Framework is now available on .NET Core as well.

Let’s see how …

Evolved test platform

The evolved vstest engine is presently named Microsoft.TestPlatform.V2 and is packaged as a VSIX extension. Launch Tools | Extensions and Updates… and you will notice that it is already bundled with Visual Studio.

C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\Extensions is the folder where extensions are installed.

Converged Test Adapter for .NET Framework and .NET Core

When we first introduced MSTest V2 support, there were two adapters – dotnet-test-mstest to target .NET Core, and MSTest.TestAdapter to target the .NET Framework. The dotnet-test-mstest adapter understands the project.json-based project format. With .NET Core now supporting .csproj, that adapter is no longer required; the MSTest.TestAdapter can be used instead!

Launch Visual Studio 2017 RC, and create a Unit Test Project (.NET Core). This creates an MSTest V2 based solution targeting .NET Core. Now, look at the references.


The solution is self-contained – it references the vstest engine, the MSTest V2 test framework, and the MSTest.TestAdapter to discover/run such tests using the engine. And it targets .NET Core.

The same approach can be used by other test frameworks as well. For example, xUnit.net also had two adapters – dotnet-test-xunit to target .NET Core, and xunit.runner.visualstudio to target the .NET Framework – but with .NET Core now supporting .csproj, dotnet-test-xunit is no longer required either; the xunit.runner.visualstudio adapter can be used instead. Here is a .NET Core unit test solution using the xUnit.net test framework:


The solution is self-contained – it references the vstest engine, the xUnit.net test framework, and the xunit.runner.visualstudio test adapter to discover/run such tests using the engine. This same adapter is used when targeting both the .NET Framework and .NET Core.

[Note: the evolved test platform has the notion of a translation layer that enables interfacing with IDEs to perform common tasks like discover tests, run tests, get the results to display, etc. This will be released soon on NuGet. In the meantime, we worked closely with the xUnit.net team to implement this support using early bits. We look forward to working with the community to extend this support.]

Let’s continue with this solution in our brief tour of the functionality.

Testing using the Visual Studio IDE

The below figure illustrates testing from within the Visual Studio IDE.


This is the same experience provided for tests targeting the .NET Framework.

Testing using vstest.console.exe

C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\Extensions is the folder where extensions are installed, remember? Locate vstest.console.exe under this folder.
Navigate to the folder containing the xUnit.net test project, and use vstest.console.exe to run the tests. Notice that vstest.console.exe takes the DLL(s) as input – just like it does for tests targeting the .NET Framework. Go ahead, try this out and see what other switches can be used!
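For instance, a hypothetical invocation (the project name and output path are placeholders for your own):

vstest.console.exe bin\Debug\netcoreapp1.0\XUnitTestProject.dll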

Testing using the .NET Core CLI

You can try these steps with the latest dotnet-cli builds, since a critical bug fix couldn’t make it into the RC release.
Install the latest .NET Core SDK Preview build from here: https://github.com/dotnet/core/blob/master/release-notes/preview3-download.md. I downloaded it to the folder D:\temp\dotnet on my machine.
Use dotnet test to run the tests.
The command automatically picks up the .csproj file(s) and discovers and runs the appropriate tests using the referenced xunit.runner.visualstudio adapter.
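For instance, from the folder containing the test project (a sketch; it assumes the preview SDK's dotnet is on your PATH):

dotnet restore
dotnet test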

Testing using VSTS

To run the tests using VSTS, I set up a local build agent with Visual Studio 2017 and these same .NET Core SDK bits.
Here is the build definition:

Queue up a build and you should see the report.

Summary

The .NET Core testing experience/ecosystem is now converged with the Visual Studio Test Platform. This is a huge benefit to everyone – same adapters, a common invocation model, and a much broader feature set than the old CLI. As mentioned earlier, we will release the translation layer to NuGet soon, and we look forward to working with you to get more adapters interfacing with the test platform.

Stay tuned!


Increasing the size limit of Sitemaps file to address evolving webmaster needs


For years, the sitemaps protocol defined at www.sitemaps.org stated that each sitemap file (or each sitemap index file) may contain no more than 50,000 URLs and must be no larger than 10MB (10,485,760 bytes). While most sitemaps are under this 10MB limit, these days our systems occasionally encounter sitemaps exceeding it. Most often this happens when sitemap files list very long URLs, or have attributes listing long extra URLs (such as alternate-language URLs or image URLs), which inflates the size of the sitemap file.

To address these evolving needs, we are happy to announce that we are increasing the sitemap file and sitemap index file size limit from 10MB to 50MB (52,428,800 bytes).

Webmasters can still compress their sitemap files using gzip to reduce bandwidth; however, each sitemap file, once uncompressed, must still be no larger than 50MB. The updated file size limit is now reflected in the sitemap protocol on www.sitemaps.org.
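For example, a sitemap can be compressed from the command line with gzip (the filename is illustrative):

gzip -k sitemap.xml    # produces sitemap.xml.gz; the uncompressed file must still be 50MB or less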

This update only impacts the file size; each sitemap file and index file still cannot exceed the maximum of 50,000 URLs.

Having sitemaps updated daily, listing all and only relevant URLs, is key to webmaster SEO success. To help you, please find our most recent sitemaps blog posts:

  • Sitemaps Best Practices Including Large Web Sites
  • Sitemaps – 4 Basics to Get You Started

 
Fabrice Canel
Principal Program Manager
Microsoft - Bing

Windows Server 2016 Operating System Management Pack


We want to let you know that we are releasing an updated version of the “Microsoft System Center 2016 Management Pack for Windows Server Operating System”. Some of the changes in this new MP are:

  • New object types have been added for Nano Server to help users differentiate and manage them accurately
  • A new monitor has been added to alert when the Storport miniport driver times out a request
  • Various bugs related to Nano Server have been fixed, such as issues with Nano Server Cluster Disk and Nano Server Cluster Shared Volumes health discoveries, and with Free Space monitors on Nano Server
  • Multiple customer and regression bugs have been fixed; please refer to the Management Pack guide for details

You can find all the details as well as a download link here: https://www.microsoft.com/en-us/download/details.aspx?id=54303. Provide feedback on our UserVoice website.

Announcing general availability of preview features and new APIs in Azure Search


Today we are announcing the general availability (GA) of several features as well as a new REST API version and .NET SDK that support the new GA features. All GA features and APIs are covered by the standard Azure service level agreement (SLA) and can be used in production workloads.

New GA APIs

  • Blob Indexer allows you to parse and index text from common file formats such as Office documents, PDF, and HTML. NOTE: CSV and JSON support is still in preview.
  • Table Indexer enables you to ingest data from an Azure Table store.
  • Custom Analyzers enable you to take control over the process of lexical analysis performed over the content of your documents and query terms. For example, custom analyzers enable support for phonetic search. For more information, read more about custom analyzers.
  • Analyze API allows you to test built-in and custom analyzers to see how an analyzer breaks text into tokens (see the sketch after this list).
  • ETag Support allows you to safely update index definitions, indexers, and data sources concurrently.
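As a minimal sketch, calling the Analyze API from PowerShell (the service name, index name, and api-key below are placeholders):

$headers = @{ 'api-key' = '<admin-api-key>'; 'Content-Type' = 'application/json' }
$body = '{ "text": "The quick brown fox", "analyzer": "standard" }'
Invoke-RestMethod -Method Post -Headers $headers -Body $body `
    -Uri 'https://<service>.search.windows.net/indexes/<index>/analyze?api-version=2016-09-01'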

New REST API Version (2016-09-01) and .NET SDK

The following REST API and .NET SDKs are GA:

  • Service REST API version (2016-09-01), which includes all of the GA features noted in the previous section.
  • A .NET SDK equivalent of the Service REST API (2016-09-01).
  • A .NET SDK for management operations (Microsoft.Azure.Management.Search), which is the first .NET SDK for Search service and api-key management.

The existing preview API (2015-02-28-Preview) is still active and continues to support existing preview features such as moreLikeThis, and CSV and JSON support in Blob indexer. We look forward to your continued feedback on these and other new features as we work to add more GA features.

Learn More

Learn more about the new REST API version, including steps to upgrade from an older version of the REST API to the GA version, and download the GA .NET SDK.

Live Dependency Validation in Visual Studio 2017

Last month we announced that Visual Studio “Dev15” Preview 5 now supported Live Dependency Validation. In this blog post, I'll give an update on the experience and go deeper into how the validation works.

On-demand video about dependency validation

During the Connect(); 2016 event, we published an on-demand video which explains in detail why you'd want to use Dependency Validation and how to do so: Validate architecture dependencies with Visual Studio. The video starts with a quick review of the topic – unwanted dependencies are part of your technical debt. Then I recap what the experience was in previous versions of Visual Studio Enterprise and show how it could be improved, and I demo the new real-time dependency validation experience that you, as a developer, will benefit from in Visual Studio 2017. I also demo how easy it is to create a dependency diagram. Finally, I describe how users of the current experience can migrate to the new one.
I encourage you to watch this 10-minute video, which will give you a good overview of the scenario and the feature.

Updates on the experience improvements

The demos in the video were done with Visual Studio 2017 RC. We have fixed a number of bugs since “Preview 5” and improved the user experience:

  • When upgrading projects to enable live validation, a dialog now shows progress,
  • When updating a project for live dependency validation, the version of the NuGet package is now upgraded to be the same for all projects (and is the highest version in use),
  • VB.NET is now fully supported on par with C#,
  • The addition of a new Dependency Validation project now triggers a project update,
  • The dependency validation Roslyn analyzer is now better:
    • It’s less verbose, as we no longer report when you declare an implicitly-typed variable (var in C#): because the type of the variable is guaranteed to be the same as the type on the right-hand side, we were effectively reporting the same error twice.
    • It now reports on delegate invocations.
  • You are now advised to enable “full solution analysis” when using live dependency validation: a gold bar appears in the Error List and, when clicked, opens the options page pointing you at the option to enable or disable “full solution analysis” (for C# and VB). You can permanently dismiss this gold bar if you are not interested in seeing all the architectural issues in your solution.
    If you don’t enable “full solution analysis”, the analysis will only be done for the files being edited (this behavior is common to all Roslyn analyzers).

Known issues

We were not able to fix everything for RC but we are still working on it. The experience will be even better for RTW. In Visual Studio 2017 RC:

  • Saving a Dependency Validation diagram does not trigger re-analysis. This is annoying because, if you want to immediately see how changing the diagram affects the issues in the code, the only workaround is to close and reload the solution.
  • The experience is not very good when you enable the “Lightweight Solution Load” option.
  • Deleting a Dependency Validation diagram does not remove the corresponding links from C# and VB projects.

These have all been fixed for RTW.

Real-time Dependency Validation error messages

It’s now possible to click on the error code of a dependency validation error message and get an explanation of the issue. The figure below shows the default severity of the rules in the ruleset editor.

For your convenience, I’ve summarized below which condition produces which error.

  • DV0001 – Invalid Dependency. Reported when a code element (namespace, type, or member) mapped to one layer references a code element mapped to another layer, but there is no dependency arrow between these layers in the dependency validation diagram containing them. This is a dependency constraint violation.

  • DV1001 – Invalid namespace name. Reported on a code element associated with a layer whose “Allowed Namespace Names” property does not contain the namespace in which this code element is defined. This is a naming constraint violation. Note that “Allowed Namespace Names” is a semicolon-separated list of the namespaces in which code elements associated with the layer are permitted to be defined.

  • DV1002 – Dependency on unreferenceable namespace. Reported on a code element associated with a layer that references another code element defined in a namespace listed in the “Unreferenceable Namespaces” property of the layer. This is a naming constraint violation. Note that “Unreferenceable Namespaces” is a semicolon-separated list of namespaces that must not be referenced by code elements associated with the layer.

  • DV1003 – Disallowed namespace name. Reported on a code element associated with a layer whose “Disallowed Namespace Names” property contains the namespace in which this code element is defined. This is a naming constraint violation. Note that “Disallowed Namespace Names” is a semicolon-separated list of the namespaces in which code elements associated with the layer must not be defined.
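To make DV0001 concrete, here is a hypothetical example (the layer and namespace names are mine, not the product's). Assume a Dependency Validation diagram with two layers, Presentation mapped to the MyApp.Presentation namespace and Data mapped to MyApp.Data, and no dependency arrow from Presentation to Data:

```csharp
namespace MyApp.Data
{
    public class CustomerRepository
    {
        public string LoadName(int id) => "customer-" + id;
    }
}

namespace MyApp.Presentation
{
    public class CustomerView
    {
        public string Render(int id)
        {
            // DV0001 (Invalid Dependency): CustomerRepository is mapped to
            // the Data layer, and the diagram has no Presentation -> Data arrow.
            var repository = new MyApp.Data.CustomerRepository();
            return repository.LoadName(id);
        }
    }
}
```

Adding a Presentation → Data dependency arrow on the diagram (or removing the reference from the code) clears the issue live in the editor, without rebuilding.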

Differences between Layer validation in Visual Studio 2010-2015 and Live Dependency Validation in Visual Studio 2017

Old layer validation issues vs new Dependency Validation issues

As we explained last month, the Dependency Validation experience leverages the existing Layer diagram format with some improved terminology for the layer properties. The error messages generated by the Roslyn analyzers are similar, but we have used different error codes because the results can be slightly different. The old issues have an error code starting with AV (Architecture Validation errors), whereas the new issues have an error code starting with DV (Dependency Validation errors).

Why can there be differences?

To be fair, you might not notice these differences; however, there is a case where you will: when you have a Visual Studio 2015 solution in which you had enabled the legacy Layer validation, and you then migrate this solution with Visual Studio 2017 Enterprise to enable live dependency validation and commit your changes. If you, or someone on your team, later opens this solution with Visual Studio 2015 and builds it, Visual Studio 2015 will start showing both the old issues (AV) and the new issues (DV). In most cases they will be the same, but not always, so we thought it would be good to explain the differences a bit more. The underlying reason is that the legacy (Layer) validation was performed against the binary code, whereas the new validation is performed against source code. In the same way that you sometimes must add a reference to an assembly declaring a base interface for a class you use (even though you don't reference this base interface yourself), the compiler sometimes generates constructs which are not actually used in source but which create a dependency in the binary that is not visible in the source code. So the legacy, binary-based validation caught issues which logically should not be there, whereas the new real-time dependency validation won't. This is a bit subtle, and such cases are quite rare. (One case I encountered was calling a method and passing as a parameter a delegate whose return type was an offending type.)

What dependency issues are caught in Live Dependency Validation?

Live Dependency Validation for assemblies, types, and namespaces is pretty simple: either they are allowed or they are disallowed. The case of generic types is more complex but makes sense: it's enough for the generic type to have at least one unwanted type argument for an issue to be reported on it.

As far as members are concerned (the sketch after this list illustrates these rules):

  • Method declarations: we check the return type and the parameter types
  • Method invocations:
    • we check the parent type of the method being called
    • we check the return type, even if it is ignored, i.e. not assigned to anything
    • we do not check the types of parameters to a method
    • However, we do check the values of arguments that are passed to a method. This means that if you pass an object whose type is not allowed by at least one of the dependency diagrams in the solution, you will get an error. You won’t, however, get an error if you pass null.
  • Property reads and writes: we check the containing type and the return type, i.e. they are treated as a kind of method invocation
  • Fields: we check the field type
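The following hypothetical snippet illustrates these rules; assume Forbidden.Widget is mapped to a layer that the layer containing MyApp.Core may not depend on:

```csharp
namespace Forbidden
{
    public class Widget { }
}

namespace MyApp.Core
{
    public class Example
    {
        // Flagged at the declaration: the return type...
        public Forbidden.Widget Make() { return new Forbidden.Widget(); }

        // ...and the parameter type are both checked.
        public void Consume(Forbidden.Widget w) { }

        public void Run()
        {
            var w = Make();   // flagged once for the return type; 'var' itself
                              // is no longer reported separately
            Consume(w);       // flagged: the argument's type is disallowed
            Consume(null);    // not flagged: null carries no type dependency
        }
    }
}
```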

For delegates, things get a little more complicated: sometimes we want to validate them as if they were a type, and in other cases as if they were a method invocation.

Tips to author a Dependency Validation diagram

I’ll finish this blog post by sharing a tip on how to author Dependency Validation diagrams. You can, of course, add layers through the toolbox and map them to code elements using the Layer Explorer. But if you have an existing solution, you can do it more quickly by dragging from the Visual Studio explorers and dropping onto a Dependency Validation diagram. You might not know, though, that this can also be done by copy/paste or drag and drop from a Code Map.

Drag and Drop from explorers

As shown in the video, a recommended way to add layers to a dependency validation diagram is to drag and drop assemblies, references, types, and members from the Solution Explorer, the Class View, or the Object Browser. It’s also possible to drag and drop namespaces from the Class View or the Object Browser (they are the only explorers showing namespaces).

Drag and Drop or Copy / Paste from a Code Map

You might also be interested in creating a Dependency Validation diagram that corresponds to the current structure of the solution. To do that you can:

  • Generate a Code Map for Solution (from the Architecture menu)
  • Personally, I use the Code Map Filter window to filter out solution folders and “Test Assets” (as I mostly care about enforcing dependencies in product code).
  • On the generated Code Map, you may also remove the “External” node (or expand it to show external assemblies), depending on what you want to do. In the figure below I removed “External”. If you are interested in enforcing namespace dependencies, as I did, you can expand the assemblies you care about (and delete the other ones from the Code Map).
  • Finally, when you are happy that the Code Map shows the content you care about, create a new Dependency Validation diagram (from the Architecture menu), select all the nodes on the Code Map (either by Ctrl+A, or by rubber-band selection, which you trigger by pressing the Shift key before you click, drag, and release), and drag and drop or copy/paste them onto the new Dependency Validation diagram. You will (in RTM, and after doing a bit of manual layout) get the diagram as in the picture below.
  • Note that in Visual Studio 2017 RC, you’ll need to press the Shift key to get one layer per Code Map node when doing the drag and drop. And you won’t see the dependencies. This was fixed for RTM.
  • From there you have the current architecture, and you can decide what you want the architecture to be and modify the Dependency Validation diagram accordingly.

[Figure: the resulting Dependency Validation diagram]

In closing

As usual we’d love to hear your feedback. Don’t hesitate to send feedback directly from Visual Studio (Help | Send Feedback) to report a problem or provide a suggestion.

Cognitive Toolkit Helps Win 2016 CIF International Time Series Competition

This post is authored by Slawek Smyl, Senior Data & Applied Scientist at Microsoft.

The Computational Intelligence in Forecasting (CIF) International Time Series Competition was one of ten competitions held at the IEEE World Congress on Computational Intelligence (IEEE WCCI) in Vancouver, Canada, in late July this year. I am excited to report that my CIF submission won the first prize! In this blog post, I would like to provide an overview of:

  1. The CIF competition.
  2. My winning submission and its algorithms.
  3. The tools I used.

CIF International Time Series Competition

This competition started in 2015 with the goal of getting an objective comparison of existing and new Computational Intelligence (CI) forecasting methods (Stepnicka & Burda, 2015). The IEEE Computational Intelligence Society defines these methods as “biologically and linguistically motivated computational paradigms emphasizing neural networks, connectionist systems, genetic algorithms, evolutionary programming, fuzzy systems, and hybrid intelligent systems in which these paradigms are contained” (IEEE Computational Intelligence Society, 2016).

In the CIF 2016 competition, there were 72 monthly time series of medium length (up to 108 points). 24 of them were bank risk analysis indicators, and 48 were generated. In the large majority of cases, the contestants were asked to forecast 12 future monthly values (i.e. up to 1 year ahead), but for some shorter series the forecasting horizon was smaller, at 6 future values.

The error metric was the average sMAPE over all time series. The sMAPE for a single time series is defined as:

sMAPE = (100% / h) · Σ_{t=1..h} |y_t − ŷ_t| / ((|y_t| + |ŷ_t|) / 2)

where h is the maximum forecasting horizon (6 or 12 here), y_t is the true value at horizon t, and ŷ_t is the forecast for horizon t.
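A direct translation into code (a minimal sketch; y holds the true values and f the forecasts over the horizon h = y.Length):

```csharp
using System;

static class Metrics
{
    // sMAPE as defined above: the mean, over the horizon, of the absolute
    // error divided by the mean magnitude of truth and forecast.
    public static double SMape(double[] y, double[] f)
    {
        double sum = 0.0;
        for (int t = 0; t < y.Length; t++)
            sum += Math.Abs(y[t] - f[t]) / ((Math.Abs(y[t]) + Math.Abs(f[t])) / 2.0);
        return 100.0 * sum / y.Length;
    }
}
```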

Up to 3 submissions, without feedback, were allowed.

Four statistical methods were used for the purposes of comparison – Exponential Smoothing (ETS), ARIMA, Theta, and Random Walk (RW) – as well as two ensembles using simple and Fuzzy Rule-based averages.

Winning Submission, Algorithms & Results

The organizers received 19 submissions from 16 participants: 8 NN-based, 3 fuzzy, 3 hybrid methods, and 2 others (kernel-based, Random Forest). Interestingly, Computational Intelligence methods performed relatively better on real as opposed to generated time series: on the real-life series, the best statistical method was ranked 6th. On the generated time series, a statistical algorithm called “Bagged ETS” won. One can speculate that this is because statistical algorithms generally assume Gaussian noise around a true value, but this assumption is incorrect for many real-life series; e.g., they tend to have “fatter” tails – in other words, the probability of outliers is higher. Overall, two of my submissions took 1st and 2nd place. I will now describe my winning submission, a Long Short-Term Memory (LSTM)-based neural network applied to de-seasonalized data.

Data Preprocessing

Forecasting time series with machine learning algorithms or neural networks is usually more successful with data preprocessing. This is typically done with a moving (or “rolling”) window along the time series. At each step, constant-size features (inputs) and outputs are extracted, and therefore each series can be a source of many input/output records. For networks with squashing functions like the sigmoid, normalization is often helpful, and it is even more necessary if one tries to train a single network of this kind on many time series differing in size (amplitude). Finally, for seasonal time series, although in theory a neural network should be able to deal with them well, it often pays off to remove the seasonality during the data preprocessing step.

All of the above was present in the preprocessing used by my winning submission. It started with applying a logarithm and then the stl() function of R. STL decomposes a time series into seasonal, trend, and irregular components. The logarithm transformation provides two benefits: firstly, it is part of the normalization; secondly, it converts STL’s normally additive split into a multiplicative one (remember that log(x·y) = log(x) + log(y), so an additive decomposition of the logged series corresponds to a multiplicative decomposition of the original one), and multiplicative seasonality is a safer assumption for non-stationary time series. The graph in Figure 1 illustrates this decomposition.


Figure 1

So, after subtracting the seasonality, a moving window was applied covering 15 months (points) in the case of the 12-month-ahead forecast. It’s worth stressing that the input is a 15-long vector and the output is a 12-long vector, so we are forecasting the whole year ahead at once. This approach works better than trying to forecast just one month ahead – to get the required 12-step (month) ahead forecast, one would need to forecast 12 times and use each previous forecast as input 11 times, which leads to instability of the forecast.


Figure 2

Then the last value of the trend inside the input window (the big filled dot in Figure 2) is subtracted from all input and output values for normalization, and the values are stored as part of this time series’ records. The input and output windows then move forward one step and the normalization step is repeated. Two files are created: training and validation. The procedure continues until the last point of the input window is positioned at lengthOfSeries-outputSize-1, e.g. here 53-12, in the case of the training file, or until the last point of the output window reaches the last point of the series, in the case of the validation file. The validation file contains the training file, but actually only the last record of each series is used – the rest, although later forecasted, is discarded and used only as a “warm-up” region for the recurrent neural network (LSTM).
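A simplified sketch of this windowing step (the names and the 15/12 window sizes are illustrative; series is the log-transformed, de-seasonalized series and trend its STL trend component, both of the same length):

```csharp
using System;
using System.Collections.Generic;

static class Windows
{
    // Slides the input/output windows one step at a time; each pair is
    // normalized by subtracting the last trend value inside the input
    // window (the big filled dot in Figure 2).
    public static IEnumerable<Tuple<double[], double[]>> Make(
        double[] series, double[] trend, int inputSize, int outputSize)
    {
        for (int start = 0; start + inputSize + outputSize <= series.Length; start++)
        {
            double level = trend[start + inputSize - 1];
            var input = new double[inputSize];
            var output = new double[outputSize];
            for (int i = 0; i < inputSize; i++)
                input[i] = series[start + i] - level;
            for (int i = 0; i < outputSize; i++)
                output[i] = series[start + inputSize + i] - level;
            yield return Tuple.Create(input, output);
        }
    }
}
```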

The data preprocessing described here is relatively simple; it could have been more sophisticated, e.g., one could use artifacts of other statistical algorithms, like Exponential Smoothing, as features.

Neural Network Architecture

A typical time series preprocessing as described above has an obvious weakness: data outside of the input window has no influence on the current derived features. Such missing information may be important. On the other hand, extending the size of the input window may not be possible due to the shortness of the series. This problem can be mitigated by using Recurrent Neural Networks (RNN) – these have internal directed cycles and can “remember” some past information. LSTM networks (Hochreiter & Schmidhuber, 1997) can have a long memory and have been successfully used in speech recognition and language processing over the last few years. The winning submission used a single LSTM network (layer) and a simple linear “adapter” layer without bias. The whole solution is shown in Figure 3 below.

Figure 3

Microsoft Cognitive Toolkit and R

The neural network was run using Microsoft Cognitive Toolkit, formerly known as CNTK (Microsoft, 2016). Cognitive Toolkit is Microsoft’s open-source neural network toolkit, available for Windows and Linux. It is highly scalable and can run on a CPU, on one or more GPUs in a single computer, and on a cluster of servers, each running several GPUs. This scalability was actually not needed for this project – the entire training took just a few minutes on a PC with a single GPU. The highlight, rather, was the ease of creating and experimenting with neural network architectures. Microsoft Cognitive Toolkit shines here – it allows you to express almost arbitrary architectures, including recurrent ones, through mathematical formulas describing the feed-forward flow of the signal. For example, the figure below shows the beginning of the definition of an LSTM network; note how easy it is to get a past value in a recurrent network, and how straightforward the translation from mathematical formulas to code is. Preprocessing and post-processing were done with R.


Figure 4

Please note that, in my current implementation of the system described in the Cortana Intelligence Gallery tutorial, you will not find the formulas shown in Figure 4: I rewrote my original script to use a preconfigured LSTM network (September 2016, Microsoft Cognitive Toolkit version 1.7). The configuration file is much simpler now, and you can access it here. To download the dataset, visit http://irafm.osu.cz/cif.

Slawek

Bibliography

Hochreiter, S. & Schmidhuber, J., 1997. Long short-term memory. Neural Computation, 9(8), pp. 1735–1780.

IEEE Computational Intelligence Society, 2016. Field of Interest. [Online] Available at: http://cis.ieee.org/field-of-interest.html

Microsoft, 2016. CNTK. [Online] Available at: https://github.com/Microsoft/CNTK

Stepnicka, M. & Burda, M., 2015. Computational Intelligence in Forecasting – The Results of the Time Series Forecasting Competition. Proc. FUZZ-IEEE.

Stepnicka, M. & Burda, M., n.d. [Online] Available at: http://irafm.osu.cz/cif
