Channel: TechNet Technology News

10 reasons you’ll love Windows Server 2016


Windows Server 2016 is the cloud-ready operating system built to support your current workloads and allow you to transition to the cloud. We’ve taken all of our learnings from Azure and built them right in, packing it full of exciting new innovations and features. Here are ten we think you’ll love:

Image1

1. Control the keys to your sensitive data.

Improve security by limiting access to the IT environment with Just Enough Administration and Just-in-Time Administration. Control who gets access to what, and for how long. With Windows Server 2016, you can define which keys each admin has access to, even setting temporary permissions.

Image2

2. Manage your servers from anywhere, even your mobile device.

Windows Server 2016 has a new toolset hosted in the cloud called Server Management Tools. It’s a web-based remote GUI that allows you to manage your servers—physical or virtual, datacenter or cloud—from just about anywhere, even your mobile device.

Image3

3. Deploy servers in an exact configuration and keep them that way.

Automate tasks and manage settings to set up and keep servers configured properly. We’ve enhanced PowerShell Desired State Configuration and given you the ability to define, deploy, and manage your software environment using a single console. We’ve also added elements of open source software that make it easier to test your code.

Image4

4. Easily handle the 9:00 A.M. logon storm.

We’ve improved and strengthened our Remote Desktop Services platform, allowing partners to build secure, customized apps. Graphics improvements have increased compatibility and performance across the board.

Image5

5. Do-it-yourself storage.

Software-defined storage used to be exclusive to storage industry vendors. With Windows Server Storage Spaces, we’ve included all the features you traditionally expect, directly in the operating system. This means greater performance, without the premium cost.

Image6

6. Upgrade without the downtime.

Rolling Cluster Upgrades and Mixed Mode Cluster allow you to upgrade and manage your servers without taking them down. This is designed to help reduce the impact of management operations on your workload.

Image7

7. Click, click, done—just like in Azure.

Windows Server 2016 Software-Defined Networking is based on clear and concise policy management, cutting the time spent on infrastructure. This Azure-inspired network virtualization feature gives you the centralized control to configure network resources. Deploy new workloads more quickly and use network segmentation to increase security.

Image8

8. Move beyond passive security.

Traditional, passive perimeter security is becoming less and less effective. Once someone bypasses your wall, they are free to do whatever they want. With Windows Server 2016, that is no longer true. Add new layers of security to your environment to control privileged access, protect virtual machines, and harden the platform against emerging threats.

Image9

9. Use Containers to streamline app deployment from keyboard to production.

We are excited to announce that open source Containers are now built into Windows Server 2016, helping to accelerate your app deployment. Use Containers to streamline existing apps and to create new microservices, whether on-premises or in any cloud.

Image10

10. A super small server that packs a big punch.

Nano Server is a new deployment option for Windows Server 2016 that loads an image 25x smaller than Windows Server 2016 with the desktop experience. It brings only the elements that the specific workload needs, resulting in faster boot times and simpler operations.

BONUS: Move to the cloud for less using your existing licenses

With the Azure Hybrid Use Benefit, you can use on-premises Windows Server licenses that include Software Assurance to earn special pricing for new Windows Server virtual machines in Azure—whether you’re moving a few workloads or your entire datacenter. Start saving now.

Ready to dive deeper?

This e-book will tell you everything you want to know about Windows Server 2016.

Get the Ultimate Guide to Windows Server 2016.


The “Internet of Stranger Things” Wall, Part 2 – Wall Construction and Music


Overview

I do a lot of woodworking and carpentry at home. Much to my family’s chagrin, our house is in a constant state of flux. I tend to subscribe to the Norm Abram school of woodworking, where there are tools and jigs for everything. Because of this, I have a lot of woodworking and carpentry tools around. It’s not often I get to use them for my day job, but I found just the way to do it.

In part one of this series, I covered how to use Windows Remote Wiring to wire up LEDs to an Arduino and control them from a Windows 10 UWP app. In this post, we’ll get to constructing the actual wall.

This post covers:

  • Constructing the Wall
  • Music and UWP MIDI
  • Fonts and Title Style

The remainder of the series will be posted this week. Once they are up, you’ll be able to find the other posts here:

  • Part 1 – Introduction and Remote Wiring
  • Part 2 – Constructing the wall and adding Music (this post)
  • Part 3 – Adding voice recognition and intelligence

If you’re not familiar with the wall, please go back and read part 1 now. In that, I described the inspiration for this project, as well as the electronics required.

Constructing the Wall

In the show Stranger Things, “the wall” that’s talked about is an actual wall in a living room. For this version, I considered a few different sizes for the wall. It had to be large enough to be easily visible during a keynote and other larger-room presentations, but small enough that I could fit it in the back of the van, or pack in a special box to (expensively) ship across the country. That meant it couldn’t be completely spread out like the one in the TV show. But at the same time, the letters still had to be large enough so that they looked ok next to the full-size Christmas lights.

Finally, I didn’t want any visible seams in the letter field, or anything that would need to be rewired or otherwise modified to set it up. Seams are almost impossible to hide well once a board has traveled a bit. Plus, demo and device-heavy keynote setup is always very time-constrained, so I needed to make sure I could have the whole thing set up in just a few minutes. Whenever I come to an event, the people running it are stunned by the amount of stuff I put on a table. I typically fill a 2×8 table with laptops, devices, cameras, and more.

I settled on using a 4’ x 4’ sheet of ½” plywood as the base, with poplar from the local home store as reinforcement around the edges. I cut the plywood sheet into 32” and 16” pieces to make it easier to ship, and also so it would easily fit in the back of the family van for the first event we drove to.

The wallpapered portion of the wall ended up being 48” wide and 32” tall. The remaining paneled portion is just under 16” tall. The removable bottom part turned out to be quite heavy, so I left it off when shipping to Las Vegas for DEVintersection.

To build the bottom panel, I considered getting a classic faux wood panel from the local Home Depot and cutting it to size for this. But I really didn’t want a whole 4×8 sheet of fake wood paneling laying around an already messy shop. So instead I used left-over laminate flooring from my laundry room remodel project and cut it to length. Rather than snap the pieces tight together, I left a gap, and then painted the gaps black to give it that old 70s/80s paneling look.

picture1

picture2

The size of this version of the wall does constrain the design a bit. I didn’t try to match the same layout that the letters had in the show, except for having the right letters on the right row. The wall in the show is spaced out enough that you could easily fill a full 4×8 sheet and still look a bit cramped.

The most time-consuming part of constructing the wall was finding appropriately ugly wallpaper. Not surprisingly, a search for “ugly wallpaper” doesn’t generally bring up items for sale :). In the end, I settled for something that was in roughly the same ugliness class as the show wallpaper, but nowhere near an actual match. If you use the wallpaper I did, I suggest darkening it a bit with a tea stain or something similar. As-is, it’s a bit too bright.

Note that the price has gone up significantly since I bought it (perhaps I started an ugly wallpaper demand trend?), so I encourage you to look for other sources. If you find a source for the exact wallpaper, please do post it in the comments below!

Another option, of course, is to use your art skills and paint the “wallpaper” manually. It might actually be easier than hanging wallpaper on plywood, which, as it turns out, is not as easy as it sounds. In any case, do the hanging in your basement or some other place that will be ok with getting wet and glued-up.

Here it is with my non-professional wallpaper job. It may look like I’m hanging some ugly sheets out to dry, but this is wallpaper on plywood.

picture3

When painting the letters on the board, I divided the area into three sections vertically, and used a leftover piece of flooring as a straight edge. That helped there, but didn’t do anything for my letter spacing / kerning.

To keep the paint looking messy, I used a cheap 1” chip brush as the paint brush. I dabbed on a bit extra in a few places to add drips, and went back over any areas that didn’t come out quite the way I wanted, like the letter “G.”

picture4

Despite measuring things out, I ran out of room when I got to WXYZ and had to squish things together a bit. I blame all the white space around the “V”. There’s a bit of a “Castle of uuggggggh” thing going on at the end of the painted alphabet.

picture5

Once the painting was complete, I used some pre-colored corner and edge trim to cover the top and bottom and make it look a bit more like the show. I attached most trim with construction glue and narrow crown staples (and cleaned up the glue after I took the above photo). If you want to be more accurate and have the time, use dark stained pine chair rail on the bottom edge, between the wallpapered section and the paneled section.

Here you can see the poplar one-by support around the edges of the plywood. I used a combination of 1×3 and 1×4 that I had around my shop. Plywood, especially plywood soaked with wallpaper paste, doesn’t like to stay flat. For that reason, as well as for shipping reasons, the addition of the poplar was necessary.

picture6

You can see some of the wiring in this photo, so let’s talk about that.

Preparing and Wiring the Christmas Lights

There are two important things to know about the Christmas lights:

  1. They are LEDs, not incandescent lamps.
  2. They are not actually wired in a string, but are instead individually wired to the control board.

I used normal 120v AC LED lights. LEDs, like regular incandescent lamps, don’t really care about AC or DC, so it’s easy enough to find LEDs to repurpose for this project. I just had to pick ones which didn’t have a separate transformer or anything odd like that. Direct 120v plug-in only.

The LED lights I sacrificed for this project are Sylvania Stay-Lit Platinum LED Indoor/Outdoor C9 Multi-Colored Christmas Lights. They had the right general size and look. I purchased two packs for this because I was only going to use the colors actually used on the show and also because I wanted to have some spares for when the C9 housings were damaged in transit, or when I blew out an LED or two.

There are almost certainly other brands that will work, as long as they are LED C9 lamps and the wires are wrapped in a way that you can unravel.

When preparing the lamps, I cut the wires approximately halfway between the two lamps. I also discarded any lamps which had three wires going into them, as I didn’t want to bother trying to wire those up. Additionally, I discarded any of the lumps in the wires where fuses or resistors were kept.

picture7

For one evening, my desk was completely covered in severed LED Christmas lamps.

Next, I figured out the polarity of the LED leads and marked them with black marker. It’s important to know the anode from the cathode here because wiring in reverse will both fail to work and likely burn out the LED, making subsequent trials also fail. Through trial and error, I found that the little notch on the inside of the lamp always pointed the same way, and that it was in the same position relative to the outside clip.

Once marked, I took note of the colors used on the show and following the same letter/color pairings, drilled an approximately ¼” hole above each letter and inserted both wires for the appropriate colored lamp through to the back. Friction held them in place until I could come through with the hot glue gun and permanently stick them there.

From there, I linked each positive (anode) wire on the LEDs together by twisting the wires together with additional lengths of wire and taping over them with electrical tape. The wire I used here was spare wire from the light string. This formed one continuous string connecting all the LED anodes together.

Next, I connected the end of that string to the +3.3v output on the Arduino. 3.3v is plenty to run these LEDs. The connection is not obvious in the photos, but I used a screw on the side of the electronics board and wired one end to the Arduino and the other end to the light string.

Finally, I wired the negative (cathode) wires to their individual terminals on the electronics board. I used a spool of heavier stranded wire here that would hold up to twisting around the screw terminals. For speed, I used wire nuts to connect those wires to the cathode wire on the LED. That’s all the black wire you see in this photo.

picture8

To make it look like one string of lights, I ran a twisted length of the Christmas light wire pairs (from the same light kit) through the clips on each lamp. I didn’t use hot glue here, but just let it go where it wanted. The effect is such that it looks like one continuous strand of Christmas lights; you only see the wires going into the wall if you look closely.

picture9

I attached the top and bottom together using 1×3 maple boards that I simply screwed to both the top and bottom, and then disassembled when I wanted to tear it down.

gif1

The visuals were all done at that point. I could have stopped there, but one of my favorite things about Stranger Things is the soundtrack. Given that a big part of my job at Microsoft is working with musicians and music app developers, and with the team which created the UWP MIDI API, I knew I had to incorporate that into this project.

Music / MIDI

A big part of the appeal of Stranger Things is the John Carpenter-style, mostly analog synthesizer soundtrack by the band Survive (with some cameos by Tangerine Dream). John Carpenter, Klaus Schulze and Tangerine Dream have always been favorites of mine, and I can’t help but feel a shiver when I hear a good fat synth-driven soundtrack. They have remained my inspiration when recording my own music.

So, it would have been just wrong of me to do the demo of the wall without at least some synthesizer work in the background. Playing it live was not an option and I wasn’t about to bring a huge rig, so I sequenced the main arpeggio and kick drum in my very portable Elektron Analog Four using some reasonable stand-ins for the sounds.

At the events, I would start and clock the Analog Four using a button on the app and my Windows 10 UWP MIDI Library clock generator. The only lengthy part of this code is where I check for the Analog Four each time. That’s a workaround because my MIDI library, at the time of this writing, doesn’t expose the hardware add/remove event. I will fix that soon.


private void StartMidiClock()
{
    // I do this every time rather than listen for device add/remove
    // because my library didn't raise the add/remove event in this version
    SelectMidiOutputDevices();

    _midiClock.Start();

    System.Diagnostics.Debug.WriteLine("MIDI started");
}

private void StopMidiClock()
{
    _midiClock.Stop();

    System.Diagnostics.Debug.WriteLine("MIDI stopped");
}


private const string _midiDeviceName = "Analog Four";
private async void SelectMidiOutputDevices()
{
    _midiClock.OutputPorts.Clear();

    IMidiOutPort port = null;

    foreach (var descriptor in _midiWatcher.OutputPortDescriptors)
    {
        if (descriptor.Name.Contains(_midiDeviceName))
        {
            port = await MidiOutPort.FromIdAsync(descriptor.Id);

            break;
        }
    }

    if (port != null)
    {
        _midiClock.OutputPorts.Add(port);
    }
}

For this code to work, I just set the Analog Four to receive MIDI clock and MIDI start/stop messages on the USB port. I had already programmed the sequence into the synth, so all I need to do is kick it off.

If you want to create a version of the sequence yourself, the main riff is a super simple up/down arpeggio of these notes:

picture10

You can vamp on top of that to bring in more of the sound from what S U R V I V E made. I left it as it was and simply played the filter knob a bit to bring it in. A short version of that may be found on my personal SoundCloud profile here.

There are many other components to the music, including a muted kick drum type of sound, a bass line, some additional melody and some other interesting effects, but I hope this helps get you started.

If you’re interested in the synthesizers behind the music, and a place to hear the music itself, check out this tour of S U R V I V E ’s studio.

The final thing that I needed to include here was a nod to the visual style of the opening sequence of the show.

Fonts and Title Style

If you want to create your own title card in a style similar to the show, the font ITC Benguiat is either the same one used, or a very close match. It’s readily available to anyone who wants to license it. I licensed it from Fonts.com for $35 for my own project. The version I ended up using was the regular book font, but I think the Condensed Bold is probably a closer fit.

Even though there are tons of pages, sites, videos, etc. using the title style, be careful about what you do here, as you don’t want to infringe on the show’s trademarks or other IP. When in doubt, consult your lawyer. I did.

picture11

That’s using just the outline and glow text effects. You can do even better in Adobe Photoshop, especially if you add in some lighting effects, adjust the character spacing and height, and use large descending capital letters, like I did at the first event. But I was able to put together the above mockup quickly in PowerPoint using the ITC Benguiat font.

If you don’t want to license a font and then work with the red glow in Adobe Photoshop, you can also create simple versions of the title card at http://makeitstranger.com/

None of that is required for the wall itself, but can help tie things together if you are presenting several related and themed demos like I did. Consider it a bit of polish.

With that, we have the visuals and sound all wrapped up. You could use the wall as-is at this point, simply giving it text to display. That’s not quite enough for what I wanted to show, though. Next up, we need to give the bot a little intelligence, and save on some typing.

Resources

Questions or comments? Have your own version of the wall, or used the technology described here to help rid the universe of evil? Post below and follow me on Twitter @pete_brown.

Most of all, thanks for reading!

Work out loud on Yammer—7-day challenge


yammer-wol-1

Join us and millions of teams during International Working Out Loud Week, November 7–13, by creating a space for your team to work out loud on Yammer! Here’s how working out loud on Yammer can help your team:

  • You and your teammates become more aligned with what matters—People start to reveal their assumptions, ask questions and share ideas.
  • You get more work done—You won’t feel overwhelmed by too many status meetings.
  • You’ll make better decisions, faster—Your team will spend less time deliberating.

As part of Office 365, Yammer helps your team get work done. Did you know that your team can create and edit documents, Excel sheets and presentations right from a Yammer group? This means keeping all your messages, files and updates in one place, where everyone can see what’s going on without the usual back-and-forth. Also, Yammer makes it easy for everyone to read the conversations that are relevant to them and skip what’s not.

Here are a few more tips to have a go at successfully working out loud on Yammer:

  • Create a Yammer group—Feel free to make it private! Your teammates will feel more comfortable working out loud if they know who’s reading their messages.
  • Share early. Share often—That includes updates, files and questions about your projects. Don’t wait until that deck, doc or mockup feels “done” to get feedback.
  • Start a discussion—Avoid slowing your team’s progress by sending long monologues via email. Talk about each other’s updates on Yammer.
  • Tsk-tsk, no cheating—Whenever you think about sending an email, post a message to the Yammer group instead.
  • Celebrate your teammates—“Like” your teammates’ messages and give them “praise” for solving problems or completing tasks.

That’s it. You can kick off November 7 on Yammer by sharing and printing the infographic below. Let us know how it goes! #WOL #Yammer #wolweek

The post Work out loud on Yammer—7-day challenge appeared first on Office Blogs.

Forza Horizon 3 Windows 10 demo now available


Windows 10 players can now get a taste of the epic automotive adventure that is Forza Horizon 3. Starting today, the Forza Horizon 3 Windows 10 PC demo is available in the Windows Store for no additional charge.

Forza Horizon 3

In Forza Horizon 3, you’re the boss of the Horizon Festival – the biggest and best car and music festival in the world. Your goal is to expand the festival to new locations throughout Australia, and you’ll get your first taste of what that experience is like in the demo. Along the way, you’ll be able to explore a portion of Forza Horizon 3’s map while taking a guided tour through some of the best features the game has to offer.

Xbox One Demo in HDR

In addition to the release of the Forza Horizon 3 demo for Windows 10 PC, today we’re releasing an update for the Forza Horizon 3 Xbox One demo. This update will add HDR support to the demo, giving players on Xbox One S (and viewing on an HDR display) the ability to experience the fun and beauty of the Forza Horizon 3 demo in all its HDR glory.

The Forza Horizon 3 demo for Windows 10 PC is now available on the Windows Store. Get the demo here and read more over at Xbox Wire!

11/7 Webinar: Announcing Microsoft Flow General Availability

A lot of people think the Power BI team is strictly focused on the visualization of data, but Power BI actually sits within a larger organization called Business Application Platform Innovation (BAPI), whose charter is not just to make data actionable through visualization, but also to simplify the automation and acquisition of that data. To that end, the team has built Microsoft PowerApps for acquisition and Microsoft Flow for automation.

On November 7, 2016, we'll be hosting a webinar to announce Microsoft Flow general availability. In this webinar, Group Program Manager Stephen Siciliano will show how Microsoft Flow is an easy-to-use product that helps you set up automated workflows between your favorite apps and services. You can synchronize files, get notifications, collect data, and much more. For preview customers already familiar with Microsoft Flow, Stephen will also cover what changes at release and how you can take advantage of Microsoft Flow for production applications.

Evolving the workplace with WeWork


The way we work continues to evolve. Every day, teams of all sizes are setting out to create and collaborate, each one dynamic, increasingly global and cross-functional with its own diverse work styles. Finding the right tools and the right space to drive creation and collaboration that leads to success can be challenging.

Microsoft and WeWork are tackling this challenge by combining smart tools and spaces that inspire people to pursue their passions and achieve more together.

Beginning in 2017, WeWork will adopt Office 365 for its corporate employees, providing them a universal toolkit for collaboration, designed for the unique work style of every team. As part of this rollout, WeWork will begin using Microsoft’s latest collaboration offerings—including Microsoft Teams and Surface Hub.

WeWork already offers Office 365 to its members; now WeWork and Microsoft are working together to find new ways to use Office 365 to enhance physical workspaces, including making it easy to reserve conference rooms through Outlook.

Also, starting in November, Microsoft employees based in New York, Atlanta, Philadelphia and Portland will be able to work from WeWork spaces globally, providing them a space to connect and collaborate with other businesses and partners.

WeWork embodies a new generation’s view of work where community and mobility are key. With Office 365 made for this new work reality, teams big and small will be even more empowered to achieve more together.

See the WeWork blog for more information.

The post Evolving the workplace with WeWork appeared first on Office Blogs.

Microsoft Access now included in Office 365 Business and Business Premium with new enhancements


As businesses of all sizes come to realize the value of data analytics to inform decision-making, many are also discovering the need for database solutions like Microsoft Access to help collect, organize and share data, as well as create reports that deliver valuable insights.

That’s why we’re pleased to announce today that Microsoft Access is now included in the Office 365 Business and Business Premium plans—designed to meet the needs of small and mid-size businesses. We’re also introducing an additional set of data sources that can be integrated with Access for Office 365 ProPlus, E3 and E5 subscribers.

Read on to learn more.

Database management for companies of all sizes—large and small

Access is a great database management solution for small businesses because it makes collecting and storing data accessible on the desktop—without requiring support from an IT administrator. Access enables users to develop business applications, collect and analyze data from multiple sources, and track any kind of data, from a customer contact list to robust asset management.

Soon, Access will be rolling out to Office 365 Business and Business Premium subscribers. Access will be automatically installed for these customers as part of their next regular Office client update, rolling out between December 1, 2016 and January 30, 2017. Access will continue to be included in the Office 365 ProPlus, E3 and E5 plans.

*Please note: Customers who have updates set to the Deferred Channel will receive this update in June 2017. To learn more about the Deferred Channel, see Overview of update channels for Office 365 ProPlus.

New data sources in Access

A set of new enterprise data connectors will roll out to Microsoft Access in early 2017. These new connectors include OData Feed, Dynamics CRM, Salesforce and Amazon Redshift and will be available for customers with Office 365 ProPlus, E3 and E5 plans. These new connectors will enable customers to integrate and extend Access into other line of business solutions and databases.

This is just the beginning—there are even more new data sources on the way. In the meantime, we welcome your feedback about Access. Please share your suggestions or submit requests for desired data sources on the Access UserVoice site.

The post Microsoft Access now included in Office 365 Business and Business Premium with new enhancements appeared first on Office Blogs.

Getting personal – speech and inking (App Dev on Xbox series)


The way users interact with apps on different devices has gotten much more personal lately, thanks to a variety of new Natural User Interface features in the Universal Windows Platform. These UWP patterns and APIs make it easy for developers to bring capabilities into their apps that enable more human technologies. For the final blog post in the series, we have extended the Adventure Works sample to add support for ink on devices that support it, and to add support for speech interaction where it makes sense (including both synthesis and recognition). Make sure to get the updated code for the Adventure Works sample from the GitHub repository so you can refer to it as you read on.

And in case you missed the blog post from last week on how to enable great social experiences, we covered how to connect your app to social networks such as Facebook and Twitter, how to enable second screen experiences through Project “Rome”, and how to take advantage of the UWP Maps control and make your app location aware. To read last week’s blog post or any of the other blog posts in the series, or to watch the recordings from the App Dev on Xbox live event that started it all, visit the App Dev on Xbox landing page.

Adventure Works (v3)

picture1

We are continuing to build on top of the Adventure Works sample app we worked with in the previous two blog posts. If you missed those, make sure to check them out here and here. As a reminder, Adventure Works is a social photo app that allows the user to:

  • Capture, edit, and store photos for a specific trip
  • Auto-analyze and auto-tag friends using Cognitive Services vision APIs
  • View albums from friends on an interactive map
  • Share albums on social networks like Facebook and Twitter
  • Use one device to remote control slideshows running on another device using Project “Rome”
  • And more…

There is always more to be done, and for this final round of improvements we will focus on two sets of features:

  1. Ink support to annotate images and enable natural text input, as well as the ability to use inking as a presentation tool in connected slideshow mode.
  2. Speech Synthesis and Speech Recognition (with a little help from cognitive services for language understanding) to create a way to quickly access information using speech.

More Personal Computing with Ink

Inking in Windows 10 allows users with ink-capable devices to draw and annotate directly on the screen with a device like the Surface Pen – and if you don’t have a pen handy, you can use your finger or a mouse instead. Windows 10 built-in apps like Sticky Notes, Sketchpad and Screen sketch support inking, as do many Office products. Besides preserving drawings and annotations, inking also uses machine learning to recognize and convert ink to text. OneNote goes a step further by recognizing shapes and equations in addition to text.

picture2

Best of all, you can easily add inking functionality to your own apps, as we did for Adventure Works, with one line of XAML markup to create an InkCanvas. With just one more line, you can add an InkToolbar to your canvas that provides a color selector as well as buttons for drawing, erasing, highlighting, and displaying a ruler. (In case you have the Adventure Works project open, the InkCanvas and InkToolbar implementation can be found in PhotoPreviewView.)
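
If you prefer to set things up in code-behind, the equivalent is also only a few lines. Here’s a minimal sketch (the RootPanel container name is a hypothetical placeholder, not from the Adventure Works code):

// Create the canvas and accept pen, mouse, and touch input
var inkCanvas = new InkCanvas();
inkCanvas.InkPresenter.InputDeviceTypes =
    CoreInputDeviceTypes.Pen | CoreInputDeviceTypes.Mouse | CoreInputDeviceTypes.Touch;

// Attach the toolbar (color selector plus pen, eraser, highlighter, and ruler buttons)
var inkToolbar = new InkToolbar { TargetInkCanvas = inkCanvas };

// "RootPanel" stands in for whatever panel hosts your page content
RootPanel.Children.Add(inkCanvas);
RootPanel.Children.Add(inkToolbar);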

The InkCanvas allows users to annotate their Adventure Works slideshow photos. This can be done both directly as well as remotely through the Project “Rome” code highlighted in the previous post. When done on the same device, the ink strokes are saved off to a GIF file which is then associated with the original slideshow image.

picture3

When the image is displayed again during later viewings, the strokes are extracted from the GIF file, as shown in the code below, and inserted back into a canvas layered on top of the image in PhotoPreviewView. The code for saving and extracting ink strokes is found in the InkHelpers class.


var file = await StorageFile.GetFileFromPathAsync(filename);
if (file != null)
{
    using (var stream = await file.OpenReadAsync())
    {
        inker.InkPresenter.StrokeContainer.Clear();
        await inker.InkPresenter.StrokeContainer.LoadAsync(stream);
    }
}
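
For completeness, the save path is the mirror image of the load shown above. Here’s a minimal sketch; the file handling is an assumption rather than the exact Adventure Works code (inker is the InkCanvas):

// InkStrokeContainer.SaveAsync writes the strokes as a GIF with embedded ISF stroke data
var file = await ApplicationData.Current.LocalFolder.CreateFileAsync(
    "annotation.gif", CreationCollisionOption.ReplaceExisting);

using (var stream = await file.OpenAsync(FileAccessMode.ReadWrite))
{
    await inker.InkPresenter.StrokeContainer.SaveAsync(stream);
}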

Ink strokes can also be drawn on one device (like a Surface device) and displayed on another one (an Xbox One). In order to do this, the Adventure Works code actually collects the user’s pen strokes using the underlying InkPresenter object that powers the InkCanvas. It then converts the strokes into a byte array and serializes them over to the remote instance of the app. You can find out more about how this is implemented in Adventure Works by looking through the GetStrokeData method in SlideshowSlideView control and the SendStrokeUpdates method in SlideshowClientPage.
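
One plausible shape for that stroke-to-bytes conversion is sketched below; the method name is hypothetical, and the actual Adventure Works implementation may differ in detail:

private async Task<byte[]> GetStrokeBytesAsync(InkCanvas inker)
{
    // Save the strokes into an in-memory stream, then copy that stream out as a byte array
    using (var stream = new InMemoryRandomAccessStream())
    {
        await inker.InkPresenter.StrokeContainer.SaveAsync(stream);

        var bytes = new byte[stream.Size];
        using (var reader = new DataReader(stream.GetInputStreamAt(0)))
        {
            await reader.LoadAsync((uint)stream.Size);
            reader.ReadBytes(bytes);
        }

        return bytes;
    }
}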

It is sometimes useful to save the ink strokes and the original image to a new file. In Adventure Works, this is done to create a thumbnail version of an annotated slide for quick display as well as for uploading to Facebook. You can find the code used to combine an image file with an ink stroke annotation in the RenderImageWithInkToFileAsync method in the InkHelpers class. It uses the Win2D DrawImage and DrawInk methods of a CanvasDrawingSession object to blend the two together, as shown in the snippet below.


CanvasDevice device = CanvasDevice.GetSharedDevice();
CanvasRenderTarget renderTarget = new CanvasRenderTarget(device, (int)inker.ActualWidth, (int)inker.ActualHeight, 96);

var image = await CanvasBitmap.LoadAsync(device, imageStream);
using (var ds = renderTarget.CreateDrawingSession())
{
    var imageBounds = image.GetBounds(device);
                
    //...

    ds.Clear(Colors.White);
    ds.DrawImage(image, new Rect(0, 0, inker.ActualWidth, inker.ActualHeight), imageBounds);
    ds.DrawInk(inker.InkPresenter.StrokeContainer.GetStrokes());
}

Ink Text Recognition

picture4

Adventure Works also takes advantage of Inking’s text recognition feature to let users handwrite the name of their newly created Adventures. This capability is extremely useful if someone is running your app in tablet mode with a pen and doesn’t want to bother with the onscreen keyboard. Converting ink to text relies on the InkRecognizer class. Adventure Works encapsulates this functionality in a templated control called InkOverlay which you can reuse in your own code. The core implementation of ink to text really just requires instantiating an InkRecognizerContainer and then calling its RecognizeAsync method.


var inkRecognizer = new InkRecognizerContainer();
var recognitionResults = await inkRecognizer.RecognizeAsync(_inker.InkPresenter.StrokeContainer, InkRecognitionTarget.All);

You can imagine this being very powerful when the user has a large form to fill out on a tablet device and they don’t have to use the onscreen keyboard.
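
To turn those results into a usable string, each InkRecognitionResult exposes its ranked text candidates. A minimal sketch, assuming the recognitionResults variable from the snippet above:

var builder = new StringBuilder();

foreach (var result in recognitionResults)
{
    // GetTextCandidates returns candidates ranked by confidence; take the top one
    var candidates = result.GetTextCandidates();
    if (candidates.Count > 0)
    {
        builder.Append(candidates[0]);
        builder.Append(" ");
    }
}

string recognizedName = builder.ToString().Trim();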

More Personal Computing with Speech

There are two sets of APIs that are used in Adventure Works that enable a great natural experience using speech. First, UWP speech APIs allow developers to integrate speech-to-text (recognition) and text-to-speech (synthesis) into their UWP apps. Speech recognition converts words spoken by the user into text for form input, for text dictation, to specify an action or command, and to accomplish tasks. Both free-text dictation and custom grammars authored using Speech Recognition Grammar Specification are supported.

Second, Language Understanding Intelligent Service (LUIS) is a Microsoft Cognitive Services API that uses machine learning to help your app figure out what people are trying to say. For instance, if someone wants to order food, they might say “find me a restaurant” or “I’m hungry” or “feed me”. You might try a brute force approach to recognize the intent to order food, listing out all the variations on the concept “order food” that you can think of – but of course you’re going to come up short. LUIS lets you set up a model for the “order food” intent that learns, over time, what people are trying to say.

In Adventure Works, these features are combined to create a variety of speech related functionalities. For instance, the app can listen for an utterance like “Adventure Works, start my latest slideshow” and it will naturally open a slideshow for you when it hears this command. It can also respond using speech when appropriate to answer a question. LUIS, in turn, augments this speech recognition with language understanding to improve the recognition of natural language phrases.

picture5

The speech capabilities for our app are wrapped in a simple assistant called Adventure Works Aide (look for AdventureWorksAideView.xaml). Saying the phrase “Adventure Works…” will invoke it. It will then listen for spoken patterns such as:

  • “What adventures are in <location>.”
  • “Show me <user>’s adventure.”
  • “Who is closest to me.”

Adventure Works Aide is powered by a custom SpeechService class. There are two SpeechRecognizer instances that are used at different times, first to recognize the “Adventure Works” phrase at any time:


_continousSpeechRecognizer = new SpeechRecognizer();
_continousSpeechRecognizer.Constraints.Add(new SpeechRecognitionListConstraint(new List<string>() { "Adventure Works" }, "start"));
var result = await _continousSpeechRecognizer.CompileConstraintsAsync();
//...
await _continousSpeechRecognizer.ContinuousRecognitionSession.StartAsync(SpeechContinuousRecognitionMode.Default);

and then to understand free-form natural language and convert it to text:

_speechRecognizer = new SpeechRecognizer();
var result = await _speechRecognizer.CompileConstraintsAsync();
SpeechRecognitionResult speechRecognitionResult = await _speechRecognizer.RecognizeAsync();
if (speechRecognitionResult.Status == SpeechRecognitionResultStatus.Success)
{
    string str = speechRecognitionResult.Text;
}

As you can see, the SpeechRecognizer API is used both for listening continuously for specific constraints throughout the lifetime of the app and for converting any free-form speech to text at a specific time. The continuous recognition session can be set to recognize phrases from a list of strings, or it can even use a more structured SRGS grammar file, which provides the greatest control over speech recognition by allowing multiple semantic meanings to be recognized at once. However, because we want to understand every variation the user might say and use LUIS for our semantic understanding, we can use free-form speech recognition with the default constraints.

Note: before using any of the speech APIs on Xbox, the user must give your application permission to access the microphone. Not all APIs automatically show the permission dialog currently, so you will need to invoke the dialog yourself. Check out the CheckForMicrophonePermission function in SpeechService.cs to see how this is done in Adventure Works.

When the continuous speech recognizer recognizes the key phrase, it immediately stops the continuous session, shows the UI for the AdventureWorksAide to let the user know that it’s listening, and starts listening for natural language.


await _continousSpeechRecognizer.ContinuousRecognitionSession.CancelAsync();
ShowUI();
SpeakAsync("hey!");
var spokenText = await ListenForText();

Subsequent utterances are passed on to LUIS, which uses training data we have provided to create a machine learning model that identifies specific intents. For this app, we have three different intents that can be recognized: showuser, showmap, and whoisclosest (but you can always add more). We have also defined an entity for username, so LUIS can provide us with the name of the user when the showuser intent has been recognized. LUIS also provides several pre-built entities that have been trained for specific types of data; in this case, we are using an entity for geography locations in the showmap intent.

picture6

To use LUIS in the app, we used the official NuGet library, which allowed us to register specific handlers for each intent when we send over a phrase.


var handlers = new LUISIntentHandlers();
_router = IntentRouter.Setup(Keys.LUISAppId, Keys.LUISAzureSubscriptionKey, handlers, false);
var handled = await _router.Route(text, null);

Take a look at the HandleIntent method in the LUISAPI.cs file and the LUISIntentHandlers class, which handles each intent defined in the LUIS portal and is a useful reference for future LUIS implementations.

Finally, once the text has been processed by LUIS and the intent has been processed by the app, the AdventureWorksAide might need to respond back to the user using speech, and for that, the SpeechService uses the SpeechSynthesizer API:


_speechSynthesizer = new SpeechSynthesizer();
var syntStream = await _speechSynthesizer.SynthesizeTextToStreamAsync(toSpeak);
_player = new MediaPlayer();
_player.Source = MediaSource.CreateFromStream(syntStream, syntStream.ContentType);
_player.Play();

The SpeechSynthesizer API can specify a specific voice to use for the generation based on voices installed on the system, and it can even use SSML (speech synthesis markup language) to control how the speech is generated, including volume, pronunciation, and pitch.
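
As an illustration of the SSML route, SpeechSynthesizer also exposes SynthesizeSsmlToStreamAsync. The markup below is a hypothetical example, not taken from Adventure Works:

string ssml =
    "<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>" +
        "Will is <prosody rate='slow' pitch='low'>hiding</prosody> in the Upside Down." +
    "</speak>";

// Same playback path as the plain-text version above
var ssmlStream = await _speechSynthesizer.SynthesizeSsmlToStreamAsync(ssml);
_player = new MediaPlayer();
_player.Source = MediaSource.CreateFromStream(ssmlStream, ssmlStream.ContentType);
_player.Play();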

The entire flow, from invoking the Adventure Works Aide to sending the spoken text to LUIS, and finally responding to the user is handled in the WakeUpAndListen method.

There’s more

Though not used in the current version of the project, there are other APIs that you can take advantage of for your apps, both as part of the UWP platform and as part of Cognitive Services.

For example, on desktop and mobile devices, Cortana can recognize speech or text directly from the Cortana canvas and activate your app or initiate an action on behalf of your app. It can also expose actions to the user based on insights about them, and with user permission it can even complete the action for them. Using a Voice Command Definition (VCD) file, developers have the option to add commands directly to the Cortana command set (commands like “Hey Cortana, show adventure in Europe in Adventure Works”). Cortana app integration is also part of our long-term plans for voice support on Xbox, even though it is not supported today. Visit the Cortana portal for more info.

In addition, there are several speech and language related Cognitive Services APIs that are simply too cool not to mention:

  • Custom Recognition Service – Overcomes speech recognition barriers like speaking style, background noise, and vocabulary.
  • Speaker Recognition – Identify individual speakers or use speech as a means of authentication with the Speaker Recognition API.
  • Linguistic Analysis – Simplify complex language concepts and parse text with the Linguistic Analysis API.
  • Translator – Translate speech and text with a simple REST API call.
  • Bing Spell Check – Detect and correct spelling mistakes within your app.

The more personal computing features provided through Cognitive Services are constantly being refreshed, so be sure to check back often to see what new machine learning capabilities have been made available to you.

That’s all folks

This was the last blog post (and sample app) in the App Dev on Xbox series, but if you have a great idea that we should cover, please let us know; we are always looking for cool app ideas to build and features to implement. Make sure to check out the app source on our official GitHub repository, read through some of the resources provided, read through some of the other blog posts or watch the event if you missed it, and let us know what you think through the comments below or on Twitter.

Happy coding!

Resources

Previous Xbox Series Posts


This Week on Windows: Forza, Messenger, Minecraft and more


We hope you enjoyed this week’s episode of This Week on Windows! Head over here to read more about the launch of Minecraft: Education Edition or how you can get started with Windows Hello. Here’s what’s new in the Windows Store this week:

Forza Horizon 3 – Alpinestars Car Pack – $6.99, or included in Forza Horizon 3 Car Pass

The Alpinestars Car Pack brings seven new cars to Forza Horizon 3. Leading off the pack are a pair of beloved drifting legends, the 1998 Nissan Silvia K’s and the 1990 Mazda Savanna RX-7—two of the most requested cars in the Forza community. The Alpinestars Car Pack is included as part of the Forza Horizon 3 Car Pass, which is available for purchase in the Windows Store. Get Forza Horizon 3 here and read more over at Xbox Wire!

Election 2016 Collection in the Windows Store

Election 2016 collection in the Windows Store

Tuesday is the big day: after more than a year of campaigning, Americans will cast their votes and we’ll select a new president. In honor of the occasion, we’ve put together an Election Collection of apps so you can watch live, track the voting, and stay in the loop (even on the go) as the day unfolds. Here are a couple to get you started!

CBS News


CBS reporters will be across the country as the votes are cast and counted, so expect extensive coverage here on Election Day, along with national and international reports and programming. Enjoy live streaming of the 24/7 CBS News channel, too, and all regular CBS news and current affairs programming – from 60 Minutes and Face the Nation to the CBS Evening News.

Twitter


Follow the national dialogue here on Election Day, with friends and neighbors, celebrities and national figures. It’s America’s water cooler, and there will be plenty of hopes, dreams, insight, updates and opinions expressed. And it’s a chance to put in your two cents, too!

Batman: The Telltale Series – $4.99 an episode

Batman

From the award-winning creators of The Walking Dead comes Batman: The Telltale Series, a gritty, violent journey into the fractured psyche of the Caped Crusader’s alter ego, Bruce Wayne. In this interactive, episodic game series, based on DC Comics’ iconic Batman, your actions and choices will determine the fate of our hero – and you will discover the powerful and far-reaching consequences of your choices as the Dark Knight. Get Episode 1 today.

Facebook Messenger – Free update

Facebook Messenger

Facebook Messenger has been keeping Facebook friends in touch for a long time, but now you can message anyone in your phone book from your PC – just enter a number to add a new contact. Best of all, talk to each other or start a group call with voice and video – and talk for as long as you want, even with people in other countries. Make calls even to Messenger users on other platforms.

Alicia Keys, HERE – Buy for $9.99

Alicia Keys, HERE

Alicia Keys’ new album, HERE, is out today and available in the Windows Store. You can buy the album for $9.99 or listen with a free 30-day trial of Groove Music Pass.*

Have a great weekend!

*30-day trial continues to a paid monthly subscription unless cancelled. Credit card required. Groove Music Pass sold separately and in select markets. Catalog size and availability varies by market and over time.

Multi-Factor Authentication in Exchange and Office 365


Multi-Factor Authentication (MFA), which includes two-factor authentication (2FA), is designed to protect Exchange Server and Office 365 against account and email compromise.

Microsoft has evaluated recent reports of a potential bypass of 2FA. We have determined that the technique described is not a vulnerability and the potential bypass does not exist on properly configured systems.

The reported technique does not pose a risk to Exchange Server or Office 365:

  • In Exchange Server, authentication configuration settings for client endpoints are not shared across protocols. Supported authentication mechanisms are configured independently for each protocol endpoint. Multi-Factor Authentication is available when using OAuth as the authentication mechanism. Before implementing MFA with Exchange Server, it is important that all client protocol touchpoints are identified and configured to support MFA via OAuth.
  • In Office 365, when Azure MFA is enabled within a tenant, it is applied to all supported client protocol endpoints. Exchange Web Services (EWS) is one such enabled Office 365 client endpoint, as are Outlook on the Web (OWA) and Outlook client access. Office 365 users may experience a small delay in activation of MFA on all protocols due to propagation of configuration settings and credential cache expiration.

Additional information on enabling OAuth in Office 365 and Exchange Server can be found on Office.com and MSDN.

The Exchange Team

Watch Satya unveil Microsoft Teams


Two days ago, we announced Microsoft Teams, the chat-based workspace in Office 365. It is a new experience that brings together people, conversations, content and the tools they need—all in one place and integrated with familiar Office applications.

As Satya says, “Every individual is different, and so is the case with every team.” We see great opportunity in helping them achieve more together—and Microsoft Teams is the open, digital environment we created to make that happen.

Watch Satya unveil Microsoft Teams in this video.

Want an exclusive look at the new product in action? Don’t miss our Modern Workplace webcast, November 15 at 8 a.m. PST / 4 p.m. GMT, titled “Bridging the Generation Gap: How to create cohesive teams.” Register here.

The post Watch Satya unveil Microsoft Teams appeared first on Office Blogs.

FishBaitism: #Windows10 lately... https://t.co/7QImYKK56v

grahamehorner: @windowsinsider oh dear just got a #BSOD on current preview release on fast ring? It the first #BSOD I've seen in #Windows10 😭

NitroViizon: RT @Xbox: .@CallofDuty #InfiniteWarfare [M] has arrived. Get it today on #XboxOne or #Windows10: https://t.co/Ny5gaLNs3B https://t.co/yrjeI…


9 Things You Should Do To Optimize HBase Performance in HDInsight


Reposted from Channel 9 and the Azure HDInsight blog.

Apache HBase is a fantastic high-end open source NoSQL big data engine that’s built on Hadoop and modeled after Google BigTable. HBase provides random access and strong consistency for large amounts of unstructured and semi-structured data in a schema-less database organized by column families.

Azure HDInsight is an Apache Hadoop distribution powered by the cloud. HDInsight handles any amount of data, scaling from terabytes to petabytes on demand. It lets you spin up any number of nodes at any time, and we charge only for the compute and storage that you use. HDInsight is Microsoft’s offering of Apache Hadoop, Spark, R, HBase, and Storm cloud services, made super easy.

HBase gives you many options to get great performance in HDInsight. The Channel 9 video below discusses nine things you should do to get better performance from your HBase HDInsight cluster:


A summary of these recommendations is also available from this earlier blog post on the same topic.

CIML Blog Team

The “Internet of Stranger Things” Wall, Part 3 – Voice Recognition and Intelligence


Overview

I called this project the “Internet of Stranger Things,” but so far, there hasn’t been an internet piece. In addition, there really hasn’t been anything that couldn’t be easily accomplished on an Arduino or a Raspberry Pi alone. I wanted this demo to have more moving parts to improve the experience and also to demonstrate some cool technology.

First is voice recognition. Proper voice recognition typically takes a pretty decent computer and a good OS. This isn’t something you’d generally do on an Arduino alone; it’s simply not designed for that kind of workload.

Next, I wanted to wire it up to the cloud, specifically to a bot. The interaction in the show is a conversation between two people, so this was a natural fit. Speaking of “natural,” I wanted the bot to understand many different forms of the questions, not just a few hard-coded questions. For that, I wanted to use the Language Understanding Intelligent Service (LUIS) to handle the parsing.

This third and final post covers:

  • Adding Windows Voice Recognition to the UWP app
  • Creating the natural language model in LUIS
  • Building the Bot Framework Bot
  • Tying it all together

You can find the other posts here:

If you’re not familiar with the wall, please go back and read part one now. In that, I describe the inspiration for this project, as well as the electronics required.

Adding Voice Recognition

In the TV show, Joyce doesn’t type her queries into a 1980s-era terminal to speak with her son; she speaks aloud in her living room. I wanted to have something similar for this app, and the built-in voice recognition was a natural fit.

Voice recognition in Windows 10 UWP apps is super simple to use. You have the option of using the built-in UI, which is nice but may not fit your app’s style, or simply letting the recognition happen while you handle events.
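
For comparison, the built-in UI route is a single call once the recognizer is compiled. A minimal sketch:

var recognizer = new SpeechRecognizer();
await recognizer.CompileConstraintsAsync();

// RecognizeWithUIAsync shows the system listening UI and returns the final result
var result = await recognizer.RecognizeWithUIAsync();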

There are good samples for this in the Windows 10 UWP Samples repo, so I won’t go into great detail here. But I do want to show you the code.

To keep the code simple, I used two recognizers. One is for basic local echo testing, which is especially useful if connectivity in a venue is unreliable. The second is for capturing the text to send to the bot. You could use a single recognizer and then just check some sort of app state in the events to decide whether you were doing something for local echo or for the bot.

First, I initialized the two recognizers and wired up the two events that I care about in this scenario.


SpeechRecognizer _echoSpeechRecognizer;
SpeechRecognizer _questionSpeechRecognizer;

private async void SetupSpeechRecognizer()
{
    _echoSpeechRecognizer = new SpeechRecognizer();
    _questionSpeechRecognizer = new SpeechRecognizer();

    await _echoSpeechRecognizer.CompileConstraintsAsync();
    await _questionSpeechRecognizer.CompileConstraintsAsync();

    _echoSpeechRecognizer.HypothesisGenerated +=
                   OnEchoSpeechRecognizerHypothesisGenerated;
    _echoSpeechRecognizer.StateChanged += 
                   OnEchoSpeechRecognizerStateChanged;

    _questionSpeechRecognizer.HypothesisGenerated +=
                   OnQuestionSpeechRecognizerHypothesisGenerated;
    _questionSpeechRecognizer.StateChanged += 
                   OnQuestionSpeechRecognizerStateChanged;

}

The HypothesisGenerated event lets me show real-time recognition results, much like when you use Cortana voice recognition on your PC or phone. In that event handler, I just display the results. The only real purpose of this is to show that some recognition is happening, in a way similar to how Cortana shows that she’s listening and parsing your words. Note that the hypothesis and state events come back on a non-UI thread, so you’ll need to dispatch them as I did here.


private async void OnEchoSpeechRecognizerHypothesisGenerated(
        SpeechRecognizer sender,
        SpeechRecognitionHypothesisGeneratedEventArgs args)
{
    await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
    {
        EchoText.Text = args.Hypothesis.Text;
    });
}

Next is the StateChanged event. This lets me alter the UI based on what is happening. There are lots of good practices here, but I took an expedient route and simply changed the background color of the text box. You might consider running an animation on the microphone or something similar when recognition is happening.


private SolidColorBrush _micListeningBrush = 
                     new SolidColorBrush(Colors.SkyBlue);
private SolidColorBrush _micIdleBrush = 
                     new SolidColorBrush(Colors.White);

private async void OnEchoSpeechRecognizerStateChanged(
        SpeechRecognizer sender, 
        SpeechRecognizerStateChangedEventArgs args)
{
    await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
    {
        switch (args.State)
        {
            case SpeechRecognizerState.Idle:
                EchoText.Background = _micIdleBrush;
                break;

            default:
                EchoText.Background = _micListeningBrush;
                break;
        }
    });
}

I have equivalent handlers for the two events for the “ask a question” speech recognizer as well.

Finally, some easy code in the button click handler kicks off recognition.


private async void DictateEcho_Click(object sender, RoutedEventArgs e)
{
    var result = await _echoSpeechRecognizer.RecognizeAsync();

    EchoText.Text = result.Text;
}
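
If you prefer the built-in system UI mentioned earlier over handling everything silently, the call is nearly identical. Here’s a minimal sketch; the handler name and the prompt strings are illustrative, not from the original app.


private async void DictateEchoWithUI_Click(object sender, RoutedEventArgs e)
{
    // Optional: customize the system listening flyout before starting
    // (these strings are placeholders, not from the original app)
    _echoSpeechRecognizer.UIOptions.AudiblePrompt = "Ask the wall something";
    _echoSpeechRecognizer.UIOptions.ExampleText = "Where are you hiding?";

    // RecognizeWithUIAsync shows the built-in flyout while it listens
    var result = await _echoSpeechRecognizer.RecognizeWithUIAsync();

    EchoText.Text = result.Text;
}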

The end result looks and behaves well. The voice recognition is really good.

gif1

So now we can talk to the board from the UWP PC app, and we can talk to the app using voice. Time to add just a little intelligence behind it all.

Creating the Natural Language Model in LUIS

The backing for the wall is a bot in the cloud. I wanted the bot to be able to answer questions, but I didn’t want to have the exact text of the question hard-coded in the bot. If I wanted to hard-code them, a simple web service or even local code would do.

What I really want is the ability to ask questions using natural language and map those questions (or Utterances, as they’re called in LUIS) to specific master questions (or Intents in LUIS). That way, I can ask a question several different ways and still get back an answer that makes sense. My colleague Ryan Volum helped me figure out how LUIS worked. You should check out his Getting Started with Bots Microsoft Virtual Academy course.

So I started thinking about the types of questions I wanted answered, and the various ways I might ask them.

For example, when I want to know the location of where Will is, I could ask, “Where are you hiding?” or “Tell me where you are!” or “Where can I find you?” When checking to see if someone is listening, I might ask, “Are you there?” or “Can you hear me?” As you can imagine, hard-coding all these variations would be tedious, and would certainly miss out on ways someone else might ask the question.

I then created those in LUIS, with each master question as an Intent and each phrasing I could think of trained as an Utterance mapped to that Intent. Generally, the more utterances I add, the better the model becomes.

picture1

The above screen shot is not the entire list of Intents; I added a number of other Intents and continued to train the model.

For a scenario such as this, training LUIS is straightforward. My particular requirements didn’t include any entities or regular expressions, or any connections to a document database or Azure Search. If you have a more complex dialog, LUIS has a ton of power to make the model as robust as you need, and to train it with errors and utterances found in actual use. If you want to learn more about LUIS, I recommend watching Module 5 in the Getting Started with Bots MVA.

Once my LUIS model was set up and working, I needed to connect it to the bot.

Building the Bot Framework Bot

The bot itself was the last thing I added to the wall. In fact, in my first demo of the wall, I had to type the messages into the app instead of sending them out to a bot. Interesting, but not exactly what I was looking for.

I used the generic Bot Framework template and instructions from the Bot Framework developer site. This creates a generic bot, a simple C# web service controller, which echoes back anything you send it.

Next, following the Bot Framework documentation, I integrated LUIS into the bot. First, I created the class which derived from LuisDialog, and added in code to handle the different intents. Note that this model is changing over time; there are other ways to handle the intents using recognizers. For my use, however, this approach worked just fine.

The answers from the bot are very short, and I keep no context. Responses from the Upside Down need to be short enough to light up on the wall without putting everyone to sleep reading a long dissertation letter by letter.


namespace TheUpsideDown
{
    // Reference: 
    // https://docs.botframework.com/en-us/csharp/builder/sdkreference/dialogs.html

    // Partial class is excluded from project. It contains keys:
    // 
    // [Serializable]
    // [LuisModel("model id", "subscription key")]
    // public partial class UpsideDownDialog
    // {
    // }
    // 
    public partial class UpsideDownDialog : LuisDialog<object>
    {
        // None
        [LuisIntent("")]
        public async Task None(IDialogContext context, LuisResult result)
        {
            string message = $"Eh";
            await context.PostAsync(message);
            context.Wait(MessageReceived);
        }


        [LuisIntent("CheckPresence")]
        public async Task CheckPresence(IDialogContext context, LuisResult result)
        {
            string message = $"Yes";
            await context.PostAsync(message);
            context.Wait(MessageReceived);
        }

        [LuisIntent("AskName")]
        public async Task AskName(IDialogContext context, LuisResult result)
        {
            string message = $"Will";
            await context.PostAsync(message);
            context.Wait(MessageReceived);
        }

        [LuisIntent("FavoriteColor")]
        public async Task FavoriteColor(IDialogContext context, LuisResult result)
        {
            string message = $"Blue ... no Gr..ahhhhh";
            await context.PostAsync(message);
            context.Wait(MessageReceived);
        }

        [LuisIntent("WhatIShouldDoNow")]
        public async Task WhatIShouldDoNow(IDialogContext context, LuisResult result)
        {
            string message = $"Run";
            await context.PostAsync(message);
            context.Wait(MessageReceived);
        }

        ...

    }
}

Once I had that in place, it was time to test. The easiest way to test before deployment is to use the Bot Framework Channel Emulator.

First, I started the bot in my browser from Visual Studio. Then, I opened the emulator and plugged in the URL from the project properties, and cleared out the credentials fields. Next, I started typing in questions that I figured the bot should be able to handle.

picture2

It worked great! I was pretty excited, because this was the first bot I had ever created, and not only did it work, but it also had natural language processing. Very cool stuff.

Now, if you notice in the picture, there are red circles on every reply. It took a while to figure out what was up. As it turns out, the template for the bot includes an older version of the NuGet bot builder library. Once I updated that to the latest version (3.3 at this time), the “Invalid Token” error that local IIS was throwing went away.

Be sure to update the bot builder library NuGet package to the latest version.

Publishing and Registering the Bot

Next, it was time to publish it to my Azure account so I could use the Direct Line API from my client app, and also so I could make the bot available via other channels. I used the built-in Visual Studio publish (right click the project, click “Publish”) to put it up there. I had created the Azure Web App in advance.

picture3

Next, I registered the bot on the Bot Framework site. This step is necessary to be able to use the Direct Line API and make the bot visible to other channels. I had some issues getting it to work at first, because I didn’t realize I needed to update the credential information in the web.config of the bot service. The BotId field in the web.config can be almost anything. Most tutorials skip telling you what to put in that field, and it doesn’t match up with anything on the portal.
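
For reference, the credential entries live in the appSettings section of the bot service’s web.config. Here’s roughly what that block looks like in the v3 template; every value below is a placeholder, not a real key.


<appSettings>
  <!-- BotId can be almost any identifier; the app ID and password come from the registration portal -->
  <add key="BotId" value="(your bot id)" />
  <add key="MicrosoftAppId" value="(app id from registration)" />
  <add key="MicrosoftAppPassword" value="(password from registration)" />
</appSettings>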

picture4

As you can see, there are a few steps involved in getting the bot published and registered. For the Azure piece, follow the same steps as you would for any Web App. For the bot registration, be sure to follow the instructions carefully, and keep track of your keys, app IDs, and passwords. Take your time the first time you go through the process.

You can see in the previous screen shot that I have a number of errors shown. Those errors were because of that NuGet package version issue mentioned previously. It wasn’t until I had the bot published that I realized there was an error, and went back and debugged it locally.

Testing the Published Bot in Skype

I published and registered the bot primarily to be able to use the Direct Line channel. But it’s a bot, so it makes sense to test it using a few different channels. Skype is a pretty obvious one, and is enabled by default, so I hit that first.

picture5

Through Skype, I was able to verify that it was published and worked as expected.

Using the Direct Line API

When you want to communicate to a bot from code, a good way to do it is using the Direct Line API. This REST API provides an additional layer of authentication and keeps everything within a structured bot framework. Without it, you might as well just make direct REST calls.

First, I needed to enable the Direct Line channel in the bot framework portal. Once I did that, I was able to configure it and get the super-secret key which enables me to connect to the bot. (The disabled field was a pain to try and copy/paste, so I just did a view source, and grabbed the key from the HTML.)

picture6

That’s all I needed to do in the portal. Next, I needed to set up the client to speak to the Direct Line API.

First, I added the Microsoft.Bot.Connector.DirectLine NuGet package to the UWP app. After that, I wrote a pretty small amount of code for the actual communication. Thanks to my colleague, Shen Chauhan (@shenchauhan on Twitter), for providing the boilerplate in his Hunt the Wumpus app.


private const string _botBaseUrl = "(the url to the bot /api/messages)";
private const string _directLineSecret = "(secret from direct line config)";


private DirectLineClient _directLine;
private string _conversationId;


public async Task ConnectAsync()
{
    _directLine = new DirectLineClient(_directLineSecret);

    var conversation = await _directLine.Conversations
            .NewConversationWithHttpMessagesAsync();
    _conversationId = conversation.Body.ConversationId;

    System.Diagnostics.Debug.WriteLine("Bot connection set up.");
}

private async Task<string> GetResponse()
{
    var httpMessages = await _directLine.Conversations
                  .GetMessagesWithHttpMessagesAsync(_conversationId);

    var messages = httpMessages.Body.Messages;

    // our bot only returns a single response, so we won't loop through
    // First message is the question, second message is the response
    if (messages?.Count > 1)
    {
        // select latest message -- the response
        var text = messages[messages.Count-1].Text;
        System.Diagnostics.Debug.WriteLine("Response from bot was: " + text);

        return text;
    }
    else
    {
        System.Diagnostics.Debug.WriteLine("Response from bot was empty.");
        return string.Empty;
    }
}


public async Task<string> TalkToTheUpsideDownAsync(string message)
{
    System.Diagnostics.Debug.WriteLine("Sending bot message");

    var msg = new Message();
    msg.Text = message;


    await _directLine.Conversations.PostMessageAsync(_conversationId, msg);

    return await GetResponse();
}

The client code calls the TalkToTheUpsideDownAsync method, passing in the question. That method fires off the message to the bot, via the Direct Line connection, and then waits for a response.

Because the bot sends only a single message, and only in response to a question, the response comes back as two messages: the first is the message sent from the client, the second is the response from the service. This helps to provide context.

Finally, I wired it to the SendQuestion button on the UI. I also wrapped it in calls to start and stop the MIDI clock, giving us a bit of Stranger Things thinking music while the call is being made and the result displayed on the LEDs.


private async void SendQuestion_Click(object sender, RoutedEventArgs e)
{
    // start music
    StartMidiClock();

    // send question to service
    var response = await _botInterface.TalkToTheUpsideDownAsync(QuestionText.Text);

    // display answer
    await RenderTextAsync(response);

    // stop music
    StopMidiClock();
}

With that, it is 100% complete and ready for demos!

What would I change?

If I were to start this project anew today and had a bit more time, there are a few things I might change.

I like the voice recognition, Bot Framework, and LUIS stuff. Although I could certainly make the conversation more interactive, there’s really nothing I would change there.

On the electronics, I would use a breadboard-friendly Arduino, not hot-glue an Arduino to the back. It pains me to have hot-glued the Arduino to the board, but I was in a hurry and had the glue gun at hand.

I would also use a separate power supply for the LEDs. This is especially important if you wish to light more than one LED at a time, as the Arduino eventually will not be able to supply the current that many LEDs draw.

If I had several weeks, I would have my friends at DF Robot spin a board that I design, rather than use a regular breadboard or even a solderable breadboard. I generally prefer to get boards spun for projects, as they are more robust, and DF Robot can do this for very little cost.

Finally, I would spend more time to find even uglier wallpaper.

Here’s a photo of the wall, packaged up and ready for shipment to Las Vegas (at the time of this writing, it’s in transit), waiting in my driveway. The box was 55” tall, around 42” wide, and 7” thick, but only about 25 lbs. It has ¼” plywood on both faces, as well as narrower pieces along the sides. In between the plywood is 2” thick rigid insulating foam. Finally, the corners are protected with the spongier corner foam that came with the box.

It costs a stupid amount of money to ship something like that around, but it’s worth it for events. 🙂

picture7

After this, it’s going to Redmond where I’ll record a video walkthrough with Channel 9 during the second week of November.

What Next?

Windows Remote Wiring made this project quite simple to do. I was able to use the tools and languages I love to use (like Visual Studio and C#), but still get the IO of a device like the Arduino Uno. I was also able to use facilities available to a UWP app, and call into a simple bot of my own design. In addition to all that, I was able to use voice recognition and MIDI all in the same app, in a way that made sense.

The Bot Framework and LUIS stuff was all brand new to me, but was really fun to do. Now that I know how to connect app logic to a bot, there will certainly be more interactive projects in the future.

This was a fun project for me. It’s probably my last real maker project of the fall/winter, as I settle into the fall home renovation work and also gear up for the NAMM music event in January. But luckily, there have been many other posts here about Windows 10 IoT Core and our maker and IoT-focused technology. If this topic is interesting to you, I encourage you to take a spin through the archives and check them out.

Whatever gift-giving and receiving holiday you celebrate this winter, be sure to add a few Raspberry Pi 3 devices and some Arduino Uno boards on your list, because there are few things more enjoyable than cozying up to a microcontroller or some IoT code on a cold winter’s day. Oh, and if you steal a strand or two of lights from the tree, I won’t tell. 🙂

Questions or comments? Have your own version of the wall, or have you used the technology described here to help rid the universe of evil? Post below, and follow me on Twitter @pete_brown.

Most of all, thanks for reading!

Call of Duty: Infinite Warfare now available for Xbox One or Windows 10

Today, we’re excited to announce that Call of Duty: Infinite Warfare, the latest entry in one of the best-selling video game franchises of all time, is now available in the Xbox and Windows Stores.

Call of Duty: Infinite Warfare returns to the roots of the franchise with large-scale war and a focus on cinematic, immersive military storytelling. Infinite Warfare will take players on an unforgettable journey as they engage in battles from Earth to beyond our atmosphere against a relentless enemy faction that threatens our very way of life.

Starting today you can get Call of Duty: Infinite Warfare – Digital Deluxe Edition, which includes Call of Duty: Infinite Warfare, Call of Duty: Modern Warfare Remastered, and the Call of Duty: Infinite Warfare Season Pass for one great price. Head over to Xbox Wire to read more!

Kinect demo code and new driver for UWP now available

Here’s a little memory test: Do you recall this blog, which was posted back in May and promised to soon begin integrating Kinect for Windows into the Universal Windows Platform? Of course you do! Now we are pleased to announce two important developments in the quest to make Kinect functionality available to UWP apps.

First, by popular demand, the code that Alex Turner used during his Channel 9 video is now available on GitHub as part of the Windows universal samples. With this sample, you can use the Windows.Media.Capture.Frames APIs to enumerate the Kinect sensor’s RGB/IR/depth cameras and then use MediaFrameReader to stream frames. This API lets you access the pixels of each individual frame directly, in a highly efficient way.
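
To give a feel for the flow, here is a minimal sketch of that enumerate-then-stream pattern; the depth-stream selection and the empty handler body are illustrative choices, not code lifted from the sample.


using System;
using System.Linq;
using System.Threading.Tasks;
using Windows.Media.Capture;
using Windows.Media.Capture.Frames;

public async Task StartDepthReaderAsync()
{
    // Find a source group that exposes a depth stream (the Kinect sensor, for example)
    var groups = await MediaFrameSourceGroup.FindAllAsync();
    var group = groups.FirstOrDefault(g => g.SourceInfos.Any(
        i => i.SourceKind == MediaFrameSourceKind.Depth));
    if (group == null) return; // no depth-capable sensor found

    var capture = new MediaCapture();
    await capture.InitializeAsync(new MediaCaptureInitializationSettings
    {
        SourceGroup = group,
        SharingMode = MediaCaptureSharingMode.SharedReadOnly,
        MemoryPreference = MediaCaptureMemoryPreference.Cpu,
        StreamingCaptureMode = StreamingCaptureMode.Video
    });

    // Create a reader over the depth source and handle frames as they arrive
    var depthSource = capture.FrameSources.Values.First(
        s => s.Info.SourceKind == MediaFrameSourceKind.Depth);
    var reader = await capture.CreateFrameReaderAsync(depthSource);
    reader.FrameArrived += (sender, args) =>
    {
        using (var frame = sender.TryAcquireLatestFrame())
        {
            // frame?.VideoMediaFrame exposes the pixel data for this frame
        }
    };
    await reader.StartAsync();
}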

These new functionalities debuted in the Windows 10 Anniversary Update, and the structure of the APIs should be familiar to those who’ve been using the Kinect SDK for years. But these new APIs are designed to work not only with the Kinect sensor but with any other sensor capable of delivering rich data streams—provided you have a matching device driver.

Which brings us to our second announcement: We have now enabled the Kinect driver on Windows Update. So if you’d like to try out this new functionality now, simply go to the Device Manager and update the driver for the Kinect sensor. In addition to enabling the new UWP APIs described above, the new driver also lets you use the Kinect color camera as a normal webcam. This means that apps which use a webcam, such as Skype, can now employ the Kinect sensor as their source. It also means that you can use the Kinect sensor to enable Windows Hello for authentication via facial recognition.

picture1

Another GitHub sample demonstrates how to use the new spatial-correlation APIs, such as CameraIntrinsics or DepthCorrelatedCoordinateMapper, to process RGB and depth camera frames for background removal. These APIs take advantage of the fact that the Kinect sensor’s color and depth cameras are spatially correlated by calibration and depth frame data. This sample also shows how to access the Kinect sensor’s skeletal tracking data through a custom media stream in UWP apps with newly introduced APIs.

Finally, we should note that the Xbox Summer Update also enables these Kinect features through Windows.Media.Capture.Frames for UWP apps. Thus, apps that use the Kinect sensor’s RGB, infrared, and/or depth cameras will run on Xbox with the same code, and Xbox can also use the Kinect RGB camera as a normal webcam for Skype-like scenarios.

Judging from requests, we’re confident that many of you are eager both to explore the demo code and to download the new driver. When you do, we want to hear about your experiences—what you liked, what you didn’t, and what enhancements you want to see. So send us your feedback!

Please note that, if you have technical questions about this post or would like to discuss Kinect with other developers and Microsoft engineers, we hope you will join the conversation on the Kinect for Windows v2 SDK forum. You can browse existing topics or ask a new question by clicking the Ask a question button on the forum webpage.

The Kinect for Windows Team

FBI warning: Ransomware attacks skyrocketing

Cybercriminals collected $209 million in the first three months of 2016 by extorting businesses and institutions to unlock computer servers.1 And that estimate is probably low, considering many companies fail to report such attacks for a variety of reasons. This type of crime has grown rapidly and is quickly becoming a favorite of attackers because it is so easy to execute. An attack like this on your business can have disastrous effects, many of which aren’t seen until after the ransom is paid.

What is Ransomware?

Simply put, it’s a type of malware that gets into a computer or server and encrypts files, making them inaccessible. The goal is to shut down your ability to do normal business. The attacker then demands a ransom for the key to unlock your data.

One recently publicized attack underscores how difficult it can be to decide what to do. An L.A.-area hospital was targeted, and hundreds of patients’ lives were put at risk. The attackers achieved their infiltration through a simple targeted phishing email; one click on an attachment locked up the hospital’s medical records. The hospital had very little recourse and ended up paying $17,000 to the attackers for the key to its own data2. In this case, paying the ransom was an easy choice with real health concerns in the mix, but that’s also what made the hospital an ideal target. If you get hit with a ransomware attack, your organization will have an extremely difficult decision to make.

You have two choices, and neither is ideal.

Choice A: Pay the ransom

This is certainly the easiest way to get back up and running, but it only increases the likelihood you’ll be attacked again. Additionally, you are funneling money to organized crime or potentially even terror organizations. In some cases, companies paid the ransom only to have the attackers ask for more.

Choice B: Work to recover your systems

If you choose not to pay the ransom, you’ll need to recover the locked data yourself. If you do not have a clear recovery protocol in place, you may be locked out of your data and systems for a while. That forces you to weigh the impact on your business against the ransom ask, which is exactly what the attackers want.

FBI guidelines: How to protect your company3

While ransomware attacks may have spiked, the tactics for preventing them are not new. It’s the same as for all types of malware: educate your employees on proper email protocol, keep hardware and software patched and up to date (especially on your endpoints), and manage access to your privileged accounts.

That said, as with all malware, it’s nearly impossible to stop everything. Per the FBI, your best defense against this type of attack is having a strong backup policy. Not just backup. A backup policy. That means you:

  • Regularly back up data. This is the simplest and most effective way to recover critical data.
  • Secure your backups. That means storing them somewhere that is not connected to the original data, such as in the cloud or physically offline.
  • Run recovery drills. The only way to know for sure if your system will work is to test it in real-life situations.

To us, this just further underscores the need to have a strong recovery plan that includes backup and disaster recovery (DR). Many companies, once they have a DR solution in place, are choosing to use less and less backup to save costs. The problem is, while incredibly useful, disaster recovery faithfully replicates your current environment. If that environment is compromised, so is your DR.

When you have a solution like Operations Management Suite and Azure Backup, you don’t need to take that risk. Azure Backup gives you an extremely cost-effective and secure way to store your backups in the cloud. It preserves recovery points for up to three days, giving you a way to restore quickly after an attack is discovered, and tools like two-factor authentication and deferred delete prevent destructive operations against your backups. It’s a few simple steps that could save you from a disastrous attack.

Learn more

See how integrated cloud backup and disaster recovery provide you with greater security on our Protection and Recovery page.

Free trial

Try Operations Management Suite for yourself and see how it can give you increased visibility and control across your entire hybrid environment. Get your free trial >
1Fitzpatrick, David, and Griffin, Drew. Cyber-extortion losses skyrocket, says FBI. CNN Money. 2016. http://money.cnn.com/2016/04/15/technology/ransomware-cyber-security/

2Staff Report. LA Hospital Paid 17K Ransom to Hackers of Its Computer Network. NBC Los Angeles. 2016. http://www.nbclosangeles.com/news/local/Hollywood-Presbyterian-Paid-17K-Ransom-to-Hackers-369199031.html

3FBI Public Service Announcement. Ransomware Victims Urged To Report Infections To Federal Law Enforcement. September, 2016. https://www.ic3.gov/media/2016/160915.aspx
