Channel: TechNet Technology News

Migrate your existing Windows C++ projects to MSBuild


If your project targets only the Windows platforms (desktop or UWP), you should consider using MSBuild as your C++ build system. If you plan to expand beyond these platforms, however, consider using CMake to specify your build. To learn more, read about the CMake support in Visual Studio.

Using MSBuild has the benefit that from a single codebase you can easily target all the Windows platforms that VS supports today, and you can leverage the C++ Project System that provides file and project management functionality. This makes it easy to manage your project as it grows (easily adding project references between projects, configuring PCHs, configuring compiler and linker switches across multiple projects, etc.).

This article covers the high-level steps needed to migrate your existing C++ code targeting Windows to use MSBuild. You can read about other C++ project types in the guide for Bringing your C++ code to Visual Studio.

Step 1. Run the Project from Existing Code wizard: Launch “File>New>Project from existing code…” and follow the wizard steps to create a new VS project for your sources. On the “Specify Project Settings” step, make sure you select “Use Visual Studio” before configuring any other option that might apply (e.g. using MFC, ATL or CLR).


Step 2. Translate compile options to VS: For this step, it’s recommended to turn up the verbosity of MSBuild (from Tools>Options>Projects and Solutions>Build and Run, change “MSBuild project build output verbosity” to “Detailed”). This step will list the compiler command lines that MSBuild uses to run the build. You can compare this against a log of the previous build system you were using.

Differences can be reconciled in the Project Properties dialog (right-click the project in Solution Explorer and select “Properties”) under Configuration Properties > C/C++ > All Options. In the search box, you can look up a specific switch to find the property that maps to it, then adjust properties until MSBuild ends up calling the compiler with the same command line your previous build system used.
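To make the comparison concrete, the switch diff can be scripted. The sketch below is illustrative only: the two log files stand in for the compiler line captured from your previous build system and from MSBuild’s detailed output, and the file names are hypothetical.

```shell
# Hypothetical logs: one compiler line from the old build system and one from
# MSBuild's detailed output (in practice, paste the real cl.exe lines here).
printf 'cl.exe /nologo /O2 /W4 /EHsc main.cpp\n' > build-old.log
printf 'cl.exe /nologo /O2 /W3 /EHsc main.cpp\n' > build-msbuild.log

# Extract the switches (tokens starting with / or -), de-duplicate, and diff:
# "<" lines appear only in the old build, ">" lines only in the MSBuild one.
tr ' ' '\n' < build-old.log     | grep -E '^[/-]' | sort -u > old-switches.txt
tr ' ' '\n' < build-msbuild.log | grep -E '^[/-]' | sort -u > new-switches.txt
diff old-switches.txt new-switches.txt || true
```

Here the diff would flag /W4 versus /W3 as the switch to reconcile in the property pages.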


Step 3. Use Shared code projects: If you want to expand your project to target more platforms in addition to Windows, follow the instructions in the “Cross-platform code sharing with Visual C++” article to move your C++ code into a shared code project and share it among C++ projects targeting Windows, UWP, Android or iOS.


Step 4. Consume 3rd party C++ libraries. If your project is depending on any open-source C++ libraries today, chances are that you will find them in vcpkg’s catalog. Vcpkg can easily integrate with MSBuild projects and can simplify both the build process for these 3rd party open-source libraries as well as the consumption in your own projects. To learn more about vcpkg, check out the Getting started with vcpkg introductory post.
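The typical vcpkg flow looks roughly like the following. This is a sketch based on vcpkg’s standard bootstrap and integration commands; zlib is just an example library, and on Windows you would run bootstrap-vcpkg.bat instead of the shell script.

```shell
# Get vcpkg and bootstrap it
git clone https://github.com/Microsoft/vcpkg.git
cd vcpkg
./bootstrap-vcpkg.sh

# Install an example library, then hook vcpkg into MSBuild so that
# installed headers and libs resolve automatically in Visual Studio builds
./vcpkg install zlib
./vcpkg integrate install
```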

What’s next

If you’re new to Visual Studio, learn more by reading the Getting Started with Visual Studio for C and C++ Developers topic (coming soon!) and the rest of the posts in this Getting Started series aimed at C++ users who are new to Visual Studio. Download Visual Studio 2017 today, try it out and share your feedback.


Bring your C++ code to Visual Studio


C++ has been around for a long time and throughout its history many tools have been built to make life easier for C++ developers. This has led to a diverse C++ ecosystem in terms of the editing tools, build systems, coding conventions, and C++ libraries that we use in our day-to-day work. As a C++ developer, you are probably accustomed to using a variety of tools from different vendors for different purposes. Rest assured that you will not trade-in your flexibility in how you develop your C++ projects once you start using Visual Studio. Visual Studio provides industry-leading development tools for C++ for any platform you’re targeting.

Depending on a few characteristics of your C++ project, this document will guide you through the recommended steps to get started with Visual Studio. Read each chapter to see if it fits the description of your project. This post is part of a Getting Started series aimed at C++ users who are new to Visual Studio.

Cross-platform C++ applications and libraries

Building with CMake

If your project targets multiple platforms, you are likely to use CMake to specify your build. The steps needed to move to Visual Studio are very simple in this case – just open the folder containing your CMakeLists.txt files and let Visual Studio do the rest. To learn more about using CMake in Visual Studio, read the CMake support in Visual Studio page.
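For reference, the command-line equivalent of what Visual Studio drives for you is the usual CMake out-of-source flow (a generic sketch, not specific to any project):

```shell
# From the folder containing CMakeLists.txt: configure into a build/ folder,
# then compile with whatever generator/toolchain CMake picked by default
mkdir build && cd build
cmake ..
cmake --build .
```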

Targeting Qt Framework or building with QMake

Qt framework is a cross-platform C++ framework; it is ideal for building desktop, mobile and even embedded solutions. While you can use CMake to target Qt (in which case you should review the above topic), Qt also offers its own Qt-optimized build system called qmake that supports non-Qt C++ projects as well. If you are using qmake, learn how to import your .pro projects into Visual Studio.

Building with a cross-platform C++ build system (make, ninja, gyp, scons, gradle, etc.)

There are many build systems that support C++ today for cross-platform scenarios. It is outside the scope of this document to recommend one over another. But regardless of which build system your project uses today, you can open it inside Visual Studio and with minimal configuration you can become productive. With any of these build systems, you can enable all or any of the following Visual Studio capabilities:

  • C++ editing (e.g. IntelliSense, code navigation)
  • Building
  • C++ debugging (e.g. Windows process debugging, attaching, remote debug, etc.)

To learn how to move to Visual Studio, read more about Open Folder support in Visual Studio 2017.

Linux C++ applications (including targeting server, cloud, IoT)

Are you developing a server-side component or a containerized binary running on Linux or maybe a critical component for an IoT device? Visual Studio provides support for targeting Linux out-of-the-box. You can edit, build and debug your C++ projects either by using a remote Linux machine or using the built-in Windows 10 Linux subsystem support. For a step-by-step guide to porting your projects to Visual Studio read Bring your existing C++ Linux projects to Visual Studio.

Android C++/Java applications

Using Eclipse

You can use Visual Studio to develop both your C++-only projects as well as C++/Java JNI-based projects targeting Android. If you’re currently using Eclipse, you can move to Visual Studio via our Eclipse Android Project Import Wizard. Follow the link to learn more about migrating your Eclipse Android projects to Visual Studio.

Using Gradle

Whether you already have a gradle-based build for your Android project, or you are just getting started targeting Android, Visual C++ provides the support you need to build Gradle projects. Visual C++ also offers a great editing and debugging experience for both your C++ and Java source code. To learn more, read about building your Android applications in Visual Studio using Gradle.

iOS Objective-C/C++ applications

If you’re targeting iOS and writing a lot of C++ code, you should consider importing your Xcode projects into Visual Studio. Visual Studio not only provides an easy way to import these projects, but also allows opening them back in Xcode if you need to make non-C++ edits (e.g. storyboarding, UI design). Follow this link to learn more about migrating your Xcode iOS projects to Visual Studio.

Windows C++ application

If your project targets Windows, you should consider using MSBuild as your C++ build system. With MSBuild, you can target from a single codebase all the platforms that Visual Studio supports today. You also get access to the C++ Project System, which provides file and project management functionality that makes it easy to manage your project as it grows (easily adding references between projects, configuring PCHs, and configuring compiler and linker switches across multiple projects). Learn more about migrating your C++ project to MSBuild.

What’s next

If you’re new to Visual Studio, learn more by reading the Getting Started with Visual Studio for C and C++ Developers topic (coming soon!) and the rest of the posts in this Getting Started series aimed at C++ users who are new to Visual Studio. Download Visual Studio 2017 today, try it out and share your feedback.

If your C++ development scenarios are not covered today by Visual Studio, don’t hesitate to reach out to us at visualcpp@microsoft.com. We would love to learn more about them.

Deploy PHP application to Azure App Service using VSTS


This blog post shows how you can deploy a new PHP application from Visual Studio Team Services or Microsoft Team Foundation Server to Azure App Service.

Download the sample

  • Fork the Hello World sample app repository to your GitHub account
    https://github.com/RoopeshNair/php-docs-hello-world

Create a web app

  • From the Azure portal, go to App Services > + Add
  • Select Web Apps, then click “Create” and provide the App Name, Subscription and Resource Group details. Once the deployment succeeds, set the PHP version to “7.0” under “Application settings”.

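If you prefer scripting the web app creation, the same setup can be sketched with the Azure CLI; the resource names below (my-rg, my-plan, my-php-app) are placeholders:

```shell
# Create a resource group, an App Service plan, and the web app itself
az group create --name my-rg --location westus
az appservice plan create --name my-plan --resource-group my-rg --sku S1
az webapp create --name my-php-app --resource-group my-rg --plan my-plan

# Pin the runtime to PHP 7.0, matching the portal setting described above
az webapp config set --name my-php-app --resource-group my-rg --php-version 7.0
```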

Setup Release

  1. Open the Releases tab of the Build & Release hub, open the + drop-down in the list of release definitions, and choose Create release definition
  2. In the DEPLOYMENT TEMPLATES dialog, select the “Deploy PHP App to Azure App Service” template and choose OK.


  3. Click “Choose Later” for the artifact to be deployed.


  4. Configure the Azure App Service Deployment task:
      • Azure Subscription: Select a connection from the list under Available Azure Service Connections. If no connections appear, choose Manage, select New Service Endpoint | Azure Resource Manager, and follow the prompts. Then return to your release definition, refresh the Azure Subscription list, and select the connection you just created.
        Note: If your Azure subscription is defined in an Azure Government Cloud, ensure your deployment process meets the relevant compliance requirements. For more details, see Azure Government Cloud deployments.
      • App Service Name: the name of the App Service (the part of the URL without .azurewebsites.net)
      • Deploy to Slot: make sure this is cleared (the default)
      • Virtual Application : leave blank
      • Package or Folder: click the “…” button
      • Click on “Link to an artifact source”


      • Select GitHub as the artifact source type and point it to your GitHub repository (forked earlier). You may need to create a GitHub service endpoint first.


      • Select the repo root as the folder to deploy
      • Advanced (optional):
        • Deployment script: The task gives you additional flexibility to run a deployment script on the Azure App Service. For example, you can run a script to update dependencies (e.g. via the composer extension) on the Azure App Service instead of packaging the dependencies in the build step.
        • Take App Offline: If you run into locked file problems when you test the release, try selecting this check box.
  5. Type a name for the new release definition and, optionally, change the name of the environment from Default Environment to QA. Also, set the deployment condition on the environment to “Automatically start after release creation”.
  6. Save the new release definition. Create a new release and verify that the application has been deployed correctly.

    Related Topics

    1. Configure PHP in Azure App Service Web Apps

    DockerCon 2017: Powering new Linux innovations with Hyper-V isolation and Windows Server


    This post was authored by John Gossman, Azure Lead Architect and Linux Foundation Board Member.

    With over 900,000 containerized applications in the Docker Hub, there has never been a better time to be a developer. However, a barrier remained: Linux images run on a Linux host and Windows images on a Windows host, requiring multiple infrastructures and more complex development tooling. Today at DockerCon 2017, Microsoft showcased how we will remove this barrier with Linux containers running natively on Windows Server through our Hyper-V isolation technology. This will enable developers to build with Windows and IT administrators hosting Windows Server to run any container image regardless of platform.

    When we announced and launched Hyper-V Containers, it was because some customers desired additional, hardware-based isolation for multi-tenant workloads, and to support cases where customers may want a different kernel than the one the container host is using (for example, different kernel versions). We are now extending this same Hyper-V isolation technology to deliver Linux containers on Windows Server. This will give the same isolation and management experience for Windows Server Containers and Linux containers running side by side on the same host.

    Tens of thousands of developers depend on Docker Community Edition (CE) on their Windows 10 laptops each day as they build, ship and run Linux and Windows containers. Microsoft has a long history of working in the Docker community, collaborating to bring container technologies to Windows and Microsoft Azure. This project is being launched today, at DockerCon, so that we can continue that legacy of working with the community to deliver innovative solutions in open source.

    More than three years ago, we helped contribute Hyper-V support to the Docker Machine and boot2docker projects, which served as the early foundation of Moby and LinuxKit. Over the last year, we’ve continued working hand-in-hand to bring Windows container support into Docker CE, first with Microsoft adding support for Windows Server Containers on Windows 10 using Hyper-V isolation and then Docker adding support to switch between Linux and Windows. We are now looking forward to continuing that collaboration in the open source LinuxKit and Docker projects to provide even better Windows and Linux container support. We are also committed to building support for this feature as part of the ongoing containerd project, in line with the goals of an industry-standard cross-platform container runtime.

    “Beginning with the very first DockerCon in June 2014, Microsoft’s ongoing strong commitment to Docker and open source has been singular,” said Scott Johnston, COO, Docker, Inc. “Microsoft’s new Hyper-V Linux containers, announced today at DockerCon, and its collaboration with Docker’s LinuxKit and containerd together represent a unique, innovative solution for developers building heterogeneous, hybrid cloud applications.”

    In the spirit of providing customers with a choice, we will also enable customers to choose the Linux distributions they want to use to host their Linux containers. Microsoft will be open sourcing the required integration code and we have been working with leading Linux vendors who will be providing container OS images. We are happy to share that Canonical, Intel, Red Hat and SUSE will also support this project.

    “Canonical is proud of a longstanding relationship with Microsoft to bring Ubuntu and the best of the open source world to the Windows ecosystem. We have teamed together to deliver Ubuntu images to the Microsoft Azure cloud platform and Azure Container Service, and Ubuntu as the Bash experience in the Windows Subsystem for Linux on the Windows Desktop, and now in the form of a minimal, secure, Ubuntu container OS image.”

    – Dustin Kirkland, Head of Product, Canonical

    “We are excited to collaborate closely with Microsoft to optimize and include the Clear Linux OS for Intel Architecture as an option for customers to use within their new Linux containers running natively on Windows Server through Hyper-V isolation technology.”

    – Arjan van de Ven, Sr. Principal Engineer, Intel Corporation

    “Through both our upstream open source contributions and through Red Hat Enterprise Linux Atomic Host and Red Hat OpenShift, Red Hat is committed to bringing production-ready container solutions to enterprise customers. The cloud is hybrid and customers want to be able to adopt heterogeneous technologies. Through this aligned vision with Microsoft, we look forward to bringing Red Hat Enterprise Linux containers to Hyper-V users.”

    – Jim Totton, Vice President and General Manager, Platforms Business Unit, Red Hat

    “Microsoft is investing in Linux containers on Windows Server, and if security and containers are important to you, keep reading. This collaboration is a natural step for SUSE, as we are investing in secure, rootless containers for our CaaS Platform solution. SUSE is excited to be a part of this announcement and will actively collaborate with Microsoft to enable our joint customers with SUSE-based Hyper-V isolated containers that run natively on Windows Server.”

    – Dr. Gerald Pfeifer, VP of Products and Technology Programs, SUSE

    We look forward to working with all of you on this project over the coming months.

    Visual Studio 2017 Preview 15.2


    Today we are releasing Visual Studio 2017 Preview 15.2. For information on what this preview contains, please refer to the Visual Studio 2017 Preview release notes.

    If you haven’t heard about our new Preview releases, do take a few minutes to learn about them on the Visual Studio Preview page on visualstudio.com.

    As always, we welcome your feedback. For problems, let us know via the Report a Problem option in the upper right corner, either from the installer or the Visual Studio IDE itself. Track your feedback on the developer community portal. For suggestions, let us know through UserVoice.

     

    Sarika Calla, Principal Program Manager, Visual Studio

    Sarika Calla runs the Visual Studio release engineering team with responsibility for making Visual Studio releases available to our customers around the world.

    Introducing Groups in Outlook for Mac, iOS and Android


    More than 10 million people rely on Groups in Outlook every month to work together and get things done. Groups is proving useful to our customers. And for that, we couldn’t be more thankful. Groups in Outlook offers huge improvements over traditional distribution lists, with a shared space for group conversations, calendars, files and notebooks, the convenience of self-service membership and much more.

    Today, we’re pleased to announce Groups is now rolling out to Outlook for Mac, iOS and Android. Groups is already available in Outlook for Windows and on the web—so now you can access your group conversations and content no matter which platform you use.

    With these updates, you can:

    • View your group list.
    • Read and reply to group conversations.
    • Add group events to your personal calendar.
    • View unread messages sent to the group.
    • View group details within the group card (Outlook for iOS and Android only).

    There is more to come as we continue to work on making Groups better in response to your input, so stay tuned.

    Recently released updates for Groups in Outlook

    In addition to bringing groups to more Outlook apps, we’ve released several new features for Groups in Outlook on other platforms, too.

    Give guest access—Last fall, we updated Outlook on the web to give you the ability to set up guest access for people outside your organization, set group classification as defined by Office 365 admins, and view usage guidelines. Now, these same capabilities are available in Outlook for Windows.

    Invite people to join—One of our most requested improvements was an easier way to invite multiple people to join a group. We’ve released the Invite to join feature to Outlook on the web, which lets you create invitation links and share them with others via email or other channels, giving them a quick way to join the group.

    Multi-delete conversations—Group owners can now multi-select conversations and delete them from the group conversations space in Outlook for Windows.

    Send email as a group—Office 365 admins can grant send-as and send-on-behalf-of permissions to members of a group using the Exchange admin center. Group members who have these permissions can then send emails as the group, or on behalf of the group, from Outlook for Windows and Outlook on the web.

    What’s next

    We’re always listening to your feedback as we deliver new Groups capabilities to Outlook. Here are a few of your key requests we are going to tackle next:

    • Add appointments to a group calendar in Outlook for Windows—When adding an event to a group calendar, you will have the option to do so without sending an invite to everyone in the group.
    • Addition of Mail Contacts as guests—You will be able to easily add Mail Contacts in your company’s directory as a guest in a group.

    Thanks for the feedback, and please keep it coming via our UserVoice site.

    —The Outlook team

     

    Frequently asked questions

    Q. Now that Groups support is being added to Outlook for iOS and Android, what happens to the standalone Outlook Groups app?

    A. Customers gave us feedback that they wanted Groups available directly in Outlook for iOS and Android. The Outlook Groups app will still be available while we continue to enhance Groups experiences in Outlook, such as adding support for group files, calendar and notebooks.

    Q. Why am I not seeing Groups yet?

    A. Groups is rolling out to Outlook for Mac, iOS and Android and will be available for eligible users in the coming weeks. Even if you are using the latest build of Outlook for Mac, iOS and Android, Groups will only be available to those who have joined or been added to a group. Once we add the ability to create and join groups on Mac, iOS and Android, every Office 365 user will see Groups in Outlook.

    Q. Is Groups available to Outlook.com users?

    A. Groups is for commercial users of Office 365 and is not available for Outlook.com.

    Q. Why am I not seeing all my groups in Outlook for Mac?

    A. Outlook for Mac currently shows your 10 most active groups. We’re working on making all groups visible in a future update.

    Q. What about Outlook for Windows 10 Mobile?

    A. We’re working on the best way to integrate Groups in Outlook for Windows 10 Mobile. In the meantime, the Outlook Groups app for Windows 10 Mobile helps customers stay on top of all group activities, including conversations, files, calendar and notebook.

    Q. Where can I find more about managing Groups in Outlook for my organization?

    A. If you are responsible for managing and supporting Outlook for your company, take a look at our IT pro documentation and check out our recently released improvements for administering Groups.

    Q. What is coming next for Groups?

    A. Stay tuned to the Office 365 Roadmap to see what is on the way.

    The post Introducing Groups in Outlook for Mac, iOS and Android appeared first on Office Blogs.

    A week with Microsoft Edge: How to organize the web


    Yesterday, we introduced you to a series of blog posts we’re publishing throughout this week about Microsoft Edge. Each day we’ll be sharing a new video and blog post to introduce you to the best of Microsoft Edge.

    Today, we’re showing you the Microsoft Edge features that can help you keep the web organized.

    We all spend a lot of time browsing the web, and it’s more important than ever to have the tools you need to stay organized, productive, and focused on the web. With the Windows 10 Creators Update, we’ve added new features to make this easier than ever – including the new tab preview bar and a simple way to set your tabs aside.

    You can catch up on yesterday’s blog post about getting started with Microsoft Edge below.

    Stay tuned this week for more on Microsoft Edge!

    The post A week with Microsoft Edge: How to organize the web appeared first on Windows Experience Blog.

    Deep Learning on the New Ubuntu-Based Data Science Virtual Machine for Linux


    Authored by Paul Shealy, Senior Software Engineer, and Gopi Kumar, Principal Program Manager, at Microsoft.

    Deep learning has received significant attention recently for its ability to create machine learning models with very high accuracy. It’s especially popular in image and speech recognition tasks, where the availability of massive datasets with rich information make it feasible to train ever-larger neural networks on powerful GPUs and achieve groundbreaking results. Although there are a variety of deep learning frameworks available, getting started with one means taking time to download and install the framework, libraries, and other tools before writing your first line of code.

    Microsoft’s Data Science Virtual Machine (DSVM) is a family of popular VM images published on the Azure marketplace with a broad choice of machine learning and data science tools. Microsoft is extending it with the introduction of a brand-new offering in this family – the Data Science Virtual Machine for Linux, based on Ubuntu 16.04LTS – that also includes a comprehensive set of popular deep learning frameworks.

    Deep learning frameworks in the new VM include:

    • Microsoft Cognitive Toolkit
    • TensorFlow
    • H2O
    • MXNet
    • NVIDIA DIGITS
    • Theano
    • Torch, including PyTorch
    • Keras

    The image can be deployed on VMs with GPUs or CPU-only VMs. It also includes OpenCV, matplotlib and many other libraries that you will find useful.

    Run dsvm-more-info at a command prompt or visit the documentation for more information about these frameworks and how to get started.

    Sample Jupyter notebooks are included for most frameworks. Start Jupyter or log in to JupyterHub to browse the samples for an easy way to explore the frameworks and get started with deep learning.

    GPU Support

    Training a deep neural network requires considerable computational resources, so training can be made significantly faster by running on one or more GPUs. Azure now offers NC-class VM sizes with 1-4 NVIDIA K80 GPUs for computational workloads. All deep learning frameworks on the VM are compiled with GPU support, and the NVIDIA driver, CUDA and cuDNN are included. You may also choose to run the VM on a CPU if you prefer, and that is supported without code changes. And because this is running on Azure, you can choose a smaller VM size for setup and exploration, then scale up to one or more GPUs for training.

    The VM comes with nvidia-smi to monitor GPU usage during training and help optimize parameters to make full use of the GPU. It also includes NVIDIA Docker if you want to run Docker containers with GPU access.
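    For example, you might keep an eye on utilization during a long training run. The commands below are a sketch using standard nvidia-smi and NVIDIA Docker invocations; the nvidia/cuda image name is just an example.

```shell
# Print GPU utilization and memory every 5 seconds while training runs
nvidia-smi --query-gpu=utilization.gpu,memory.used,memory.total --format=csv -l 5

# Run a container with GPU access through NVIDIA Docker
nvidia-docker run --rm nvidia/cuda nvidia-smi
```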

    Data Science Virtual Machine

    The Data Science Virtual Machine family of VM images on Azure includes the DSVM for Windows, a CentOS-based DSVM for Linux, and an Ubuntu-based DSVM for Linux. These images come with popular data science and machine learning tools, including Microsoft R Server Developer Edition, Microsoft R Open, Anaconda Python, Julia, Jupyter notebooks, Visual Studio Code, RStudio, xgboost, and many more. A full list of tools for all editions of the DSVM is available here. The DSVM has proven popular with data scientists as it helps them focus on their tasks and skip mundane steps around tool installation and configuration.


    To try deep learning on Windows with GPUs, the Deep Learning Toolkit for DSVM contains all tools from the Windows DSVM plus GPU drivers, CUDA, cuDNN, and GPU versions of CNTK, MXNet, and TensorFlow.

    Get Started Today

    We invite you to use the new image to explore deep learning frameworks or for your machine learning and data science projects – DSVM for Linux (Ubuntu) is available today through the Marketplace. Free Azure credits are available to help get you started.

    Paul & Gopi


    Office Online Server April 2017 release


    We are excited to announce our second major update to Office Online Server (OOS), which includes support for Windows Server 2016 as well as several improvements. OOS allows organizations to provide users with browser-based versions of Word, PowerPoint, Excel and OneNote, among other capabilities offered in Office Online, from their own datacenter.

    In this release, we officially offer support for Windows Server 2016, which has been highly requested. If you are running Windows Server 2016, you can now install OOS on it. Please verify that you have the latest version of the OOS release to ensure the best experience.

    In addition, this release includes the following improvements:

    • Performance improvements to co-authoring in PowerPoint Online.
    • Equation viewing in Word Online.
    • New navigation pane in Word Online.
    • Improved undo/redo in Word Online.
    • Enhanced W3C accessibility support for users who rely on assistive technologies.
    • Accessibility checkers for all applications to ensure that all Office documents can be read and authored by people with different abilities.

    We encourage OOS customers to visit the Volume License Servicing Center to download the April 17, 2017 release. You must uninstall the previous version of OOS to install this release. We only support the latest OOS version—with bug fixes and security patches available from Microsoft Updates Download Center.

    Customers with a Volume Licensing account can download OOS from the Volume License Servicing Center at no cost and will have view-only functionality—which includes PowerPoint sharing in Skype for Business. Customers that require document creation and edit and save functionality in OOS need to have an on-premises Office Suite license with Software Assurance or an Office 365 ProPlus subscription. For more information on licensing requirements, please refer to our product terms.

    The post Office Online Server April 2017 release appeared first on Office Blogs.

    3 tips for how sales managers can use Office 365 to meet their goals


    If you’re like Jenn Schaal, a busy sales manager for an international trade association, your day is filled with client connections, budgets to hit and leads to generate—on top of day-to-day fires to put out. Hitting the end goal—whether it’s a sales target or a client win—takes time and effort. Office 365 helps shave off time throughout the day, so sales managers can meet their goals.

    Taking a note from her powerlifting hobby, Jenn wanted to carry over the sense of feeling strong and in charge into her job. She found three ways Office 365 helps her save time, work smarter and be more efficient on the go.

    1. Take the hassle out of travel with OneDrive for Business

    Travel is almost synonymous with sales, and while it’s invaluable for meeting clients or attending trade events, it can be challenging to stay in sync with reports and presentations being updated back in the office. Jenn knows this all too well. With most of her clients in different markets, she needs to have all the important info in one convenient spot while on the road. With OneDrive for Business, Jenn can pull up files or media kits to share and knows exactly where to find what she needs.

    Plus, OneDrive for Business does more than let you view and edit your documents from anywhere. For example:

    • If your computer dies, gets lost or is stolen on that important sales trip, you can use someone else’s device and sign in as yourself. Office 365 remembers your most recent documents, so you’ll always have access to client presentations or reports.
    • Stuck somewhere without internet access but want to get some work done? OneDrive for Business helps you get around your Wi-Fi troubles and work offline by syncing up to 20,000 files and folders into your library.
    • For large ongoing projects, instead of sending specific files, it can be easier to share a whole folder with your client, so that they always have access to the latest reports on OneDrive for Business.

    2. Stay on top of what matters with Outlook

    Like many people, Jenn spends most of her day in Outlook: emailing clients, setting up meetings, viewing the team calendar and sending and receiving the latest sales reports. She likes having the same functionality and rich features in the Outlook app, which is especially useful for travel and staying connected on the go.

    Power-user tips for smarter emailing and calendaring with Outlook help you get even more done:

    • Sending a large presentation to a client and don’t want to overflow their inbox? Share as a OneDrive cloud attachment and not only free up space, but make last-minute tweaks without having to re-send the file.
    • Use Outlook Customer Manager to track customers and see all related info—email, meetings, calls, notes, files, tasks, deals and deadlines—in a convenient focused list view. Deals and even customers can be prioritized and then easily shared with other team members.

    3. Stay in touch with your team or clients with Skype for Business

    Whether you have a few remote team members, like Jenn, or multiple sales offices across the country, staying connected and aligned on business priorities is key. With Skype for Business, Jenn can easily turn a messaging chat into a call to quickly resolve a problem or close a deal.

    There are other ways that Skype can help bridge the distance between teams or clients:

    • Want to have a call with a client not on Skype for Business? No problem! Meet with up to 250 people—all they need is a phone or internet connection to get started.
    • Don’t just talk, have a truly interactive meeting by sharing your screen and annotating PowerPoint for real-time collaboration. Then share it all with anyone who couldn’t make it via a recording. You can also use a whiteboard, polls, Q&A and built-in IM during your sales meetings for instant feedback.
    • Help build connections and relationships with remote clients through video calls. Enjoy industry-leading HD video for online meetings that feel top quality and trustworthy. Focus more on the people in your call, with added features like automatic cropping and head tracking.

    Whether it’s storing, syncing and sharing files in OneDrive for Business; smarter emailing and calendaring with the Outlook app; or audio, HD video and web conferencing with Skype for Business, there are many reasons to become a champion for the latest productivity technology within your team or company. Learn how you can get more out of your day with Office 365. Watch the full story of how Jenn simplified her job, and spread the word within your organization.

    The post 3 tips for how sales managers can use Office 365 to meet their goals appeared first on Office Blogs.

    Top three capabilities to get excited about in the next version of SQL Server



    We announced the first public preview of SQL Server v.Next in November 2016, and since then we’ve had lots of customer interest, but a few key scenarios are generating the most discussion.

    If you’d like to learn more about SQL Server v.Next on Linux and Windows, please join us for the upcoming Microsoft Data Amp online event on April 19 at 8 AM Pacific. It will showcase how data is the nexus between application innovation and intelligence—how data and analytics powered by the most trusted and intelligent cloud can help companies differentiate and out-innovate their competition.

    In this blog, we discuss three top things that customers are excited to do with the next version of SQL Server.

    Scenario 1: Give applications the power of SQL Server on the platform of your choice

    With the upcoming availability of SQL Server v.Next on Linux, Windows, and Docker, customers will have the added flexibility to build and deploy more of their applications on SQL Server. In addition to Windows Server and Windows 10, SQL Server v.Next supports Red Hat Enterprise Linux (RHEL), Ubuntu, and SUSE Linux Enterprise Server (SLES). SQL Server v.Next also runs on Linux and Windows Docker containers, opening up even more possibilities to run on public and private cloud application platforms like Kubernetes, OpenShift, Docker Swarm, Mesosphere DC/OS, Azure Stack, and Open Stack. Customers will be able to continue to leverage existing tools, talents, and resources for more of their applications.

    SQL Server v.Next Linux Docker

    Some of the things customers are planning for SQL Server v.Next on Windows, Linux, and Docker include migrating existing applications from other databases on Linux to SQL Server; implementing new DevOps processes using Docker containers; developing locally on the dev machine of choice, including Windows, Linux, and macOS; and building new applications on SQL Server that can run anywhere—on Windows, Linux, or Docker containers, on-premises, and in the cloud.

    Scenario 2: Faster performance with minimal effort

    SQL Server v.Next further expands the use cases supported by SQL Server’s in-memory capabilities, In-Memory OLTP and In-Memory ColumnStore. These capabilities can be combined on a single table delivering the best Hybrid Transactional and Analytical Processing (HTAP) performance available in any database system. Both in-memory capabilities can yield performance improvements of more than 30x, enabling the possibility to perform analytics in real time on operational data.

    In v.Next, natively compiled stored procedures (In-Memory OLTP) now support JSON data as well as new query capabilities. For the column store, both building and rebuilding a nonclustered column store index can now be done online. Another critical addition to the column store is support for LOBs (Large Objects).

    With these additions, the parts of an application that can benefit from the extreme performance of SQL Server’s in-memory capabilities have been greatly expanded! We also introduced a new set of features that learn and adapt from an application’s query patterns over time without requiring actions from your DBA.

    Scenario 3: Scale out your analytics

    In preparation for the release of SQL Server v.Next, we are enabling the same High Availability (HA) and Disaster Recovery (DR) solutions on all platforms supported by SQL Server, including Windows and Linux. Always On Availability Groups is SQL Server’s flagship solution for HA and DR. Microsoft has released a preview of Always On Availability Groups for Linux in SQL Server v.Next Community Technology Preview (CTP) 1.3.

    SQL Server Always On availability groups can have up to eight readable secondary replicas. Each of these secondary replicas can have its own replicas as well. When daisy chained together, these readable replicas can create massive scale-out for analytics workloads. This scale-out scenario enables you to replicate around the globe, keeping read replicas close to your Business Analytics users. It’s of particular interest to users with large data warehouse implementations. And, it’s also easy to set up.
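    On the client side, a connection reaches one of those readable secondaries by declaring read intent: adding ApplicationIntent=ReadOnly to the connection string lets the availability group listener route the connection to a readable secondary replica. A minimal sketch of assembling such a connection string follows; the server and database names are placeholders, and in production code you would typically use SqlConnectionStringBuilder instead of string concatenation:

    ```csharp
    using System;
    using System.Collections.Generic;

    static class ReadScaleConnection
    {
        // Builds an ADO.NET-style connection string. Including
        // ApplicationIntent=ReadOnly tells an availability group listener
        // to route the connection to a readable secondary replica.
        public static string Build(string server, string database, bool readOnly)
        {
            var parts = new List<string>
            {
                "Server=" + server,
                "Database=" + database,
                "Integrated Security=true"
            };
            if (readOnly)
                parts.Add("ApplicationIntent=ReadOnly");
            return string.Join(";", parts);
        }
    }
    ```

    Connections without read intent continue to land on the primary, so reporting workloads can opt in to the read replicas without affecting transactional traffic.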

    In fact, you can now create availability groups that span Windows and Linux nodes, and scale out your analytics workloads across multiple operating systems.

    SQL Server v.Next HA

    In addition, a cross-platform availability group can be used to migrate a database from SQL Server on Windows to Linux or vice versa with minimal downtime. You can learn more about SQL Server HA and DR on Linux by reading the blog SQL Server on Linux: Mission-critical HADR with Always On Availability Groups.

    To find out more, you can watch our SQL Server on Linux webcast. Find instructions for acquiring and installing SQL Server v.Next on the operating system of your choice at www.microsoft.com/sqlserveronlinux. To get your SQL Server app on Linux faster, you can nominate your app for the SQL Server on Linux Early Adopter Program, or EAP. Sign up now to see if your application qualifies for technical support, workload validation, and help moving your application to production on Linux before general availability.

    To find out more about SQL Server v.Next and get all the latest announcements, register now to attend Microsoft Data Amp—where data gets to work.

    Voodoo Vince: Remastered launches today on Windows 10 and Xbox One


    Voodoo Vince: Remastered launches today on Windows 10 and Xbox One

    “13 years, 6 months and 25 days. That’s how much time has passed since Voodoo Vince first appeared on the original Xbox in September of 2003.

    The half-life of a game is unbelievably short. A game doesn’t get much time before it’s relegated to the past. The march of technology and player demographics weren’t on our side in 2003. Voodoo Vince was quickly lost in the shuffle and seemed like it would remain in obscurity.

    This revival of Vince would have been far less likely without the encouragement of Phil Spencer and the amazing team over at ID@Xbox. ID@Xbox made it possible for a small team to publish a game independently. Phil reminded me of what a game can mean to players when he described playing Voodoo Vince with his kids in a 2014 interview.

    The result is as much a restoration as it is a remastering. It’s easy to look at our trailer and screenshots and conclude that Voodoo Vince looks like it did back in the day. I take that as a compliment, but my goal was to update the game carefully so it looks like we think it did.

    Our villain Kosmo calls Vince a “mere pile of cloth and thread” near the beginning of the game. He’s technically correct. But Kosmo overlooks the fact that Vince, like any team effort, is greater than the sum of his parts. I’m excited that the world has a fresh chance to discover that.”

    Head over to Xbox Wire to read more from Clayton Kauzlaric!

    The post Voodoo Vince: Remastered launches today on Windows 10 and Xbox One appeared first on Windows Experience Blog.

    Introducing Power Throttling


    Most people running Windows like having multiple apps running at the same time – and often, what’s running in the background can drain your battery. In this latest Insider Preview build (Build 16176), we leveraged modern silicon capabilities to run background work in a power-efficient manner, thereby enhancing battery life significantly while still giving users access to the powerful multitasking capabilities of Windows. With “Power Throttling”, when background work is running, Windows places the CPU in its most energy-efficient operating modes – work gets done, but minimal battery power is spent on that work.

    You may remember some of our January power experiments we mentioned in Build 15002’s release notes. Power Throttling was one of those experiments, and showed up to 11% savings in CPU power consumption for some of the most strenuous use cases. We’ve been hard at work making improvements and listening to Windows Insider feedback since then, so this capability should help many of you see a nice boost in battery life!

    Figure 1 – Task Manager shows which processes are Power throttled

    Note: Power Throttling is currently available only for processors with Intel’s Speed Shift technology, available in Intel’s 6th-gen (and beyond) Core processors – we’re working on expanding support to other processors as well over the next few months.

    How does it work? To give great performance to the apps you’re using, while at the same time power throttling background work, we built a sophisticated detection system into Windows. The OS identifies work that is important to you (apps in the foreground, apps playing music, as well as other categories of important work we infer from the demands of running apps and the apps the user interacts with). While this detection works well for most apps, if you happen to notice an app that is negatively impacted by Power Throttling, we really want to know! You can do three things:

    1. Provide feedback! Please run the Feedback Hub and file feedback under the Power and Battery > Throttled Applications category

    2. Control Power Throttling system-wide using the Power Slider. Windows works hardest to keep the processor in its efficient ranges when you’ve selected “Battery Saver” or “Recommended”, and Power Throttling turns off completely when you’ve selected “Best Performance”.

    Power Slider

    3. Opt individual apps out from Power Throttling:

    • Go to Battery Settings (Settings > System > Battery).
    • Click on “Battery Usage by App”.
    • Select your app.
    • Toggle “Managed by Windows” to “Off”.
    • Uncheck the “Reduce work app does when in background” checkbox.

    Note that benchmark results may vary with power throttling turned on. While most benchmarks run fine and produce great performance results, some benchmark processes may be affected by throttling. Our general recommendation is to always run performance benchmarks while plugged in, as power throttling does not apply in that case.

    Developer note: Power Throttling is designed to work well with applications out of the box, but we recognize that in some cases, application developers may be able to achieve additional power savings by having more fine-grained control over Power Throttling. We will have APIs to provide that fine-grained control in upcoming flights. Please make sure to watch out for API updates on MSDN.

    *Power throttling is a temporary working name for this capability and may change during the course of the development cycle for the next release of Windows.

    Thanks,
    Bill @billkar44

    The post Introducing Power Throttling appeared first on Windows Experience Blog.

    The week in .NET – Happy birthday .NET with Robin Cole, TinyORM, 911 Operator


    Previous posts:

    On .NET

    This week on the show, we’ll speak with Don Schenck about Red Hat. We’ll take questions on Gitter, on the dotnet/home channel and on Twitter. Please use the #onnet tag. It’s OK to start sending us questions in advance if you can’t do it live during the show.

    Happy birthday .NET with Robin Cole

    In February we got together with many Microsoft alumni and current employees for a huge .NET Birthday bash. We spoke to Robin Cole, who joined Microsoft in 2005 and worked on many projects, including Expression and Visual Studio. In this quick interview, she shares her thoughts on developers and designers and the exciting future ahead.

    Package of the week: TinyORM

    TinyORM is a new micro-ORM for .NET that automates connection and transaction management and is simple and easy to use correctly.

    Game of the Week: 911 Operator

    911 Operator is an indie simulation game. Ever wanted to see what it was like to be a 911 operator? Well, now you can! In 911 Operator, you’ll manage emergency lines by answering incoming calls and reacting appropriately. Give first aid instructions, dispatch emergency responders or even choose to ignore the call, which could very well be from a prankster. In 911 Operator, you can play in any city of the world by using Free Play mode to download real maps, which of course includes real addresses, streets and emergency infrastructure.

    911 Operator

    911 Operator was created by Jutsu Games using C# and Unity. It is available on Steam for PC, Mac and Linux.

    Meetup of the week: Global Azure Bootcamp in Miami, FL

    The dotnetmiami user group hosts their Global Azure Bootcamp this Saturday at 9:00AM in Miami.

    .NET

    ASP.NET

    C#

    F#

    New F# language Suggestions:

    There was a major F# conference two weeks ago, F# eXchange. You can view all of the talks online here. If you wish to see all the new and exciting areas where F# is going, please watch them. They’re entirely free.

    Check out F# Weekly for more great content from the F# community.

    VB

    Xamarin

    Microsoft Engineering is offering a limited number of technical sessions to help your team build better apps faster, and avoid the common pitfalls in going mobile. The Go Mobile Tech Workshops are dedicated sessions for your team covering everything from your technology stack and architecture to the latest in Visual Studio 2017 and DevOps best practices. These workshops help your team get ahead with current projects and prepare for what is coming next in app development.

    Apply here.

    Azure

    UWP

    Game Development

    And this is it for this week!

    Contribute to the week in .NET

    As always, this weekly post couldn’t exist without community contributions, and I’d like to thank all those who sent links and tips. The F# section is provided by Phillip Carter, the gaming section by Stacey Haffner, the Xamarin section by Dan Rigby, and the UWP section by Michael Crump.

    You can participate too. Did you write a great blog post, or just read one? Do you want everyone to know about an amazing new contribution or a useful library? Did you make or play a great game built on .NET?
    We’d love to hear from you, and feature your contributions on future posts.

    This week’s post (and future posts) also contains news I first read on The ASP.NET Community Standup, on Weekly Xamarin, on F# weekly, and on The Morning Brew.

    Deep Learning with Caffe2 on the Azure Data Science Virtual Machine


    This post is authored by Gopi Kumar, Principal Program Manager, and Paul Shealy, Senior Software Engineer, at Microsoft.

    With the availability of ultra-fast GPUs (Graphics Processing Units), compute-intensive deep learning algorithms are becoming increasingly popular. Deep learning algorithms are particularly versatile at deriving insights from large amounts of information across rich formats such as text, audio, video and more, thereby bringing cognitive intelligence to core business and consumer applications. Deep learning is also becoming more accessible to data scientists and software engineers through new frameworks that provide a clean abstraction to build deep neural network models and consume them easily.

    Caffe has been one such early and popular open source deep learning framework. Facebook has now rewritten and greatly improved it in its latest iteration, Caffe2, which is built with expression, speed and modularity in mind. Another benefit of Caffe2 is its support for consuming deep learning models from a wide variety of platforms, including mobile devices, thereby bringing the power of AI to devices with relatively modest resources. Microsoft and Facebook have worked together to bring Caffe2 to Azure on the Data Science Virtual Machine (DSVM), which can run on either GPU- or CPU-based virtual machines in the cloud.

    Microsoft has offered the Data Science Virtual Machine on the Azure marketplace in both Windows and Linux editions. For nearly a year and a half, the DSVM has been a very popular tool among data scientists, engineers and business intelligence professionals alike, across a broad range of organizations. The DSVM provides a host of popular tools for machine learning and data science. These tools are pre-installed and pre-configured, allowing users to focus more of their time on analytics and data science work, and not on mundane IT tasks such as the installation or configuration of software. Since the DSVM runs on Azure, you automatically get the benefits of the cloud: scaling up and down as your needs change, freedom from managing infrastructure, redundancy and distribution across data centers around the globe, security, and the ability to pay for just what you use.

    Our new release of the DSVM, which for the first time is offered on the Ubuntu Linux distribution, has been enhanced to combine the best data science and machine learning tools we have offered in the past with the most popular deep learning and AI tools in a single VM. We are happy to be an early cloud partner for Caffe2. This will help users build deep learning models on the powerful Nvidia K80-based GPU hardware on Azure, reuse prebuilt models from the community, and consume them close to the source of the data, be it enterprise applications running in the cloud or on-premises, or desktop and mobile applications. In addition to pre-installing and configuring Caffe2 on the DSVM, we also provide a few example Jupyter notebooks to help you get started quickly.

    Creating your DSVM is simple. Just visit the product page at http://aka.ms/dsvm/ubuntu and click the “Get it Now” button. Within about five minutes, you will have a ready-to-use instance of the DSVM with all the pre-installed tools.


    We believe that the Data Science Virtual Machine, with its broad choice of built-in deep learning and machine learning tools, and utilities to access and explore data from a variety of sources, saves you time from having to curate, install and maintain software, helps you get started from examples, allows you to run rapid experiments and develop production-ready models and applications that you can then deploy on the cloud or on-premises.

    We invite you to try the Data Science Virtual Machine for your deep learning and AI needs, whether you’re just getting started or a seasoned expert. Free Azure trials are available, giving you $200 in free credits for a period of 30 days.

    Paul & Gopi
    You can follow Gopi on Twitter @zenlytix

    Resources


    Building a Telepresence App with HoloLens and Kinect


    When does the history of mixed reality start? There are lots of suggestions, but 1977 always shows up as a significant year. That’s the year millions of children – many of whom would one day become the captains of Silicon Valley – first experienced something they wouldn’t be able to name for another decade or so.

    It was the plea of an intergalactic princess that set off a Star Wars film franchise still going strong today: “Help me, Obi-Wan Kenobi, you’re my only hope.” It’s a fascinating validation of Marshall McLuhan’s dictum that the medium is the message. While the content of Princess Leia’s message is what we have an emotional attachment to, it is the medium of the holographic projection – today we would call it “augmented reality” or “mixed reality” – that we remember most vividly.

    While this post is not going to provide an end-to-end blueprint for your own Princess Leia hologram, it will provide an overview of the technical terrain, highlight some of the technical hurdles and point you in the right direction. You’ll still have to do a lot of work, but if you are interested in building a telepresence app for the HoloLens, this post will help you get there.

    An external camera and network connection

    The HoloLens is equipped with inside-out cameras. In order to create a telepresence app, however, you are going to need a camera that can face you and take videos of you – in other words, an outside-in camera. This post is going to use the Kinect v2 as an outside-in camera because it is widely available, very powerful and works well with Unity. You may choose to use a different camera that provides the features you need, or even use a smartphone device.

    The HoloLens does not allow third-party hardware to plug into its mini-USB port, so you will also need some sort of networking layer to facilitate inter-device communication. For this post, we’ll be using the HoloToolkit’s sharing service – again, because it is just really convenient to do so and even has a dropdown menu inside of the Unity IDE for starting the service. You could, however, build your own custom socket solution as Mike Taulty did or use the Sharing with UNET code in the HoloToolkit Examples, which uses a Unity provided networking layer.

    In the long run, the two choices that will most affect your telepresence solution are what sort of outside-in cameras you plan to support and what sort of networking layer you are going to use. These two choices will determine the scalability and flexibility of your solution.

    Using the HoloLens-Kinect project

    Many telepresence HoloLens apps today depend in some way on Michelle Ma’s open-source HoloLens-Kinect project. The genius of the app is that it glues together two libraries, the Unity Pro plugin package for Kinect and the HoloToolkit sharing service, and uses them in unintended ways to arrive at a solution.

    Even though the Kinect plugin for Unity doesn’t work in UWP (and the Kinect cannot be plugged into a HoloLens device in any case), it can still run when deployed to Windows or when running in the IDE (in which case it is using the .NET 3.5 framework rather than the .NET Core framework). The trick, then, is to run the Kinect integration in Windows and then send messages to the HoloLens over a wireless network to get Kinect and the device working together.

    On the network side, the HoloToolkit’s sharing service is primarily used to sync world anchors between different devices. It also requires that a service be instantiated on a PC to act as a communication bus between different devices. The sharing service doesn’t have to be used as intended, however. Since the service is already running on a PC, it can also be used to communicate between just the PC and a single HoloLens device. Moreover, it can be used to send more than just world anchors – it can really be adapted to send any sort of primitive values – for instance, Kinect joint positions.

    To use Ma’s code, you need two separate Unity projects: one for running on a desktop PC and the other for running on the HoloLens. You will add the Kinect plugin package to the desktop app. You will add the sharing prefab from the HoloToolkit to both projects. In the app intended for the HoloLens, add the IP address of your machine to the Server Address field in the Sharing Stage component.

    The two apps are largely identical. On the PC side, the app takes the body stream from the Kinect and sends the joint data to a script named BodyView.cs. BodyView creates spheres for each joint when it recognizes a new body and then repositions these joints whenever it receives updated Kinect data.

    
    private GameObject CreateBodyObject(ulong id)
    {
        // Create a parent object for the tracked body and one sphere per Kinect joint.
        GameObject body = new GameObject("Body:" + id);
        for (int i = 0; i < 25; i++)
        {
            GameObject jointObj = GameObject.CreatePrimitive(PrimitiveType.Sphere);
    
            jointObj.transform.localScale = new Vector3(0.3f, 0.3f, 0.3f);
            jointObj.name = i.ToString();
            jointObj.transform.parent = body.transform;
        }
        return body;
    }
    
    
    private void RefreshBodyObject(Vector3[] jointPositions, GameObject bodyObj)
    {
        // Move each joint sphere to its latest position from the Kinect body stream.
        for (int i = 0; i < 25; i++)
        {
            Vector3 jointPos = jointPositions[i];
    
            Transform jointObj = bodyObj.transform.FindChild(i.ToString());
            jointObj.localPosition = jointPos;
        }
    }
    
    

    As this is happening, another script called BodySender.cs intercepts this data and sends it to the sharing service. On the HoloLens device, a script named BodyReceiver.cs gets this intercepted joint data and passes it to its own instance of the BodyView class that animates the dot man made up of sphere primitives.

    The code used to adapt the sharing service for transmitting Kinect data is contained in Ma’s CustomMessages2 class, which is really just a straight copy of the CustomMessages class from the HoloToolkit sharing example with a small modification that allows joint data to be sent and received:

    
    
    public void SendBodyData(ulong trackingID, Vector3[] bodyData)
    {
        // If we are connected to a session, broadcast our info
        if (this.serverConnection != null && this.serverConnection.IsConnected())
        {
            // Create an outgoing network message to contain all the info we want to send
            NetworkOutMessage msg = CreateMessage((byte)TestMessageID.BodyData);
    
            msg.Write(trackingID);
    
            foreach (Vector3 jointPos in bodyData)
            {
                AppendVector3(msg, jointPos);
            }
    
            // Send the message as a broadcast
            this.serverConnection.Broadcast(
                msg,
                MessagePriority.Immediate,
                MessageReliability.UnreliableSequenced,
                MessageChannel.Avatar);
        }
    }
    
    

    Moreover, once you understand how CustomMessages2 works, you can pretty much use it to send any kind of data you want.
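    Outside of Unity the HoloToolkit types aren’t available, but the pattern CustomMessages2 relies on is plain binary serialization: write each primitive field in a fixed order, and read the fields back in exactly the same order on the receiving side. A stand-alone sketch of that pack/unpack round trip in plain .NET follows; the Vec3 struct here is a hypothetical stand-in for Unity’s Vector3:

    ```csharp
    using System;
    using System.IO;

    struct Vec3
    {
        public float X, Y, Z;
        public Vec3(float x, float y, float z) { X = x; Y = y; Z = z; }
    }

    static class JointMessage
    {
        // Pack a tracking ID and joint positions into a buffer, mirroring
        // SendBodyData's msg.Write(trackingID) + AppendVector3 calls.
        public static byte[] Pack(ulong trackingId, Vec3[] joints)
        {
            var ms = new MemoryStream();
            using (var w = new BinaryWriter(ms))
            {
                w.Write(trackingId);
                w.Write(joints.Length);
                foreach (var j in joints)
                {
                    w.Write(j.X); w.Write(j.Y); w.Write(j.Z);
                }
            }
            return ms.ToArray();
        }

        // Read the fields back in exactly the order they were written.
        public static Vec3[] Unpack(byte[] buffer, out ulong trackingId)
        {
            using (var r = new BinaryReader(new MemoryStream(buffer)))
            {
                trackingId = r.ReadUInt64();
                var joints = new Vec3[r.ReadInt32()];
                for (int i = 0; i < joints.Length; i++)
                    joints[i] = new Vec3(r.ReadSingle(), r.ReadSingle(), r.ReadSingle());
                return joints;
            }
        }
    }
    ```

    The same shape extends to any additional payload – a gesture ID, a color, or a quaternion per joint – as long as you add a matching write/read pair on each end and keep the field order identical.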

    Be one with The Force

    Another thing the Kinect is very good at is gesture recognition. HoloLens currently supports a limited number of gestures and is constrained by what the inside-out cameras can see – mostly just your hands and fingers. You can use the Kinect-HoloLens integration above, however, to extend the HoloLens’ repertoire of gestures to include the user’s whole body.

    For example, you can recognize when a user raises her hand above her head simply by comparing the relative positions of these two joints. Because this pose recognition only requires the joint data already transmitted by the sharing service and doesn’t need any additional Kinect data, it can be implemented completely on the receiver app running in the HoloLens.

    
    private void DetectGesture(GameObject bodyObj)
    {
        string HEAD = "3";
        string RIGHT_HAND = "11";
    
        // detect gesture involving the right hand and the head
        var head = bodyObj.transform.FindChild(HEAD);
        var rightHand = bodyObj.transform.FindChild(RIGHT_HAND);
            
        // if right hand is half a meter above head, do something
        if (rightHand.position.y > head.position.y + .5)
            _gestureCompleteObject.SetActive(true);
        else
            _gestureCompleteObject.SetActive(false);
    }

    In this sample, a hidden item is shown whenever the pose is detected. It is then hidden again whenever the user lowers her right arm.

    There is a rich literature on building custom Kinect v2 gestures, and the SDK even provides a tool for recording and testing them, called the Visual Gesture Builder, that you can use to create unique HoloLens experiences. Keep in mind that while many gesture solutions can be run directly in the HoloLens, in some cases, you may need to run your gesture detection routines on your desktop and then notify your HoloLens app of special gestures through a further modified CustomMessages2 script.

    As fun as dot man is to play with, he isn’t really that attractive. If you are using the Kinect for gesture recognition, you can simply hide him by commenting a lot of the code in BodyView. Another way to go, though, is to use your Kinect data to animate a 3D character in the HoloLens. This is commonly known as avateering.

    Unfortunately, you cannot use joint positions for avateering. The relative sizes of a human being’s limbs are often not going to be the same as those on your 3D model, especially if you are trying to animate models of fantastic creatures rather than just humans, so the relative joint positions will not work out. Instead, you need to use the rotation data of each joint. Rotation data, in the Kinect, is represented by an odd mathematical entity known as a quaternion.

    Quaternions

    Quaternions are to 3D programming what midichlorians are to the Star Wars universe: They are essential, they are poorly understood, and when someone tries to explain what they are, it just makes everyone else unhappy.

    The Unity editor doesn’t expose quaternions directly. Instead, it shows rotations around the X, Y and Z axes (pitch, yaw and roll) when you manipulate objects in the Scene view. These are also known as Euler angles.

    There are a few problems with this, however. Using the editor, if I try to rotate the arm of my character using the yellow drag line, it will actually rotate both the green axis and the red axis along with it. Somewhat more alarming, as I try to rotate along just one axis, the Inspector window shows that my rotation around the Z axis is also affecting the rotation around the X and Y axes. The rotation angles are interlocked in such a way that even the order in which you make changes to the X, Y and Z rotation angles affects the final orientation of the object you are rotating. Another notorious property of Euler angles is that they can end up in a state known as gimbal lock.
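    The order dependence is easy to demonstrate with plain rotation matrices. This Python sketch (an illustration, not Unity code) applies the same two 90-degree rotations in opposite orders and gets two different orientations:

```python
import math

def rot_x(a):
    # 3x3 rotation matrix for angle a (radians) around the X axis
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    # 3x3 rotation matrix for angle a (radians) around the Y axis
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

ninety = math.pi / 2
xy = matmul(rot_x(ninety), rot_y(ninety))  # rotate about Y, then about X
yx = matmul(rot_y(ninety), rot_x(ninety))  # rotate about X, then about Y
print(xy == yx)  # False: swapping the order yields a different orientation
```

    A quaternion, by contrast, encodes one unambiguous rotation, which is why it is the safer currency for passing joint orientations around.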

    These are some of the reasons that avateering is done using quaternions rather than Euler angles. To better visualize how the Kinect uses quaternions, you can replace dot man’s sphere primitives with arrow models (there are plenty you can find in the asset store). Then grab the orientation for each joint, convert it to a quaternion type (quaternions have four components rather than the three of Euler angles) and apply it to the rotation property of each arrow.

    
    private static Quaternion GetQuaternionFromJointOrientation(Kinect.JointOrientation jointOrientation)
    {
        return new Quaternion(
            jointOrientation.Orientation.X,
            jointOrientation.Orientation.Y,
            jointOrientation.Orientation.Z,
            jointOrientation.Orientation.W);
    }

    private void RefreshBodyObject(Vector3[] jointPositions, Quaternion[] quaternions, GameObject bodyObj)
    {
        for (int i = 0; i < 25; i++)
        {
            Vector3 jointPos = jointPositions[i];

            Transform jointObj = bodyObj.transform.FindChild(i.ToString());
            jointObj.localPosition = jointPos;
            jointObj.rotation = quaternions[i];
        }
    }
    
    

    These small changes result in the arrow man below who will actually rotate and bend his arms as you do.

    For avateering, you basically do the same thing, except that instead of mapping identical arrows to each rotation, you need to map specific body parts to these joint rotations. This post uses the male model from the Vitruvius avateering tools, but you are welcome to use any properly rigged character.

    Once the character limbs are mapped to joints, they can be updated in pretty much the same way arrow man was. You need to iterate through the joints, find the mapped GameObject, and apply the correct rotation.

    
    private Dictionary<int, string> RigMap = new Dictionary<int, string>()
    {
        {0, "SpineBase"},
        {1, "SpineBase/SpineMid"},
        {2, "SpineBase/SpineMid/Bone001/Bone002"},
        // etc ...
        {22, "SpineBase/SpineMid/Bone001/ShoulderRight/ElbowRight/WristRight/ThumbRight"},
        {23, "SpineBase/SpineMid/Bone001/ShoulderLeft/ElbowLeft/WristLeft/HandLeft/HandTipLeft"},
        {24, "SpineBase/SpineMid/Bone001/ShoulderLeft/ElbowLeft/WristLeft/ThumbLeft"}
    };
    
    private void RefreshModel(Quaternion[] rotations)
    {
        for (int i = 0; i < 25; i++)
        {
            if (RigMap.ContainsKey(i))
            {
                Transform rigItem = _model.transform.FindChild(RigMap[i]);
                rigItem.rotation = rotations[i];
            }
        }
    }

    This is a fairly simplified example, and depending on your character rigging, you may need to apply additional transforms on each joint to get them to the expected positions. Also, if you need really professional results, you might want to look into using inverse kinematics for your avateering solution.

    If you want to play with working code, you can clone Wavelength’s Project-Infrared repository on GitHub; it provides a complete avateering sample using the HoloToolkit sharing service. If it looks familiar, that’s because it’s based on Michelle Ma’s HoloLens-Kinect code.

    Looking at point cloud data

    To get even closer to the Princess Leia hologram message, we can use the Kinect sensor to send point cloud data. Point clouds are a way to represent the depth information collected by the Kinect. Following the pattern established in the previous examples, you will need a way to turn Kinect depth data into a point cloud on the desktop app. After that, you will use shared services to send this data to the HoloLens. Finally, on the HoloLens, the data needs to be re-formed into a 3D point cloud hologram.
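    The desktop-side conversion from a depth frame to a point cloud is essentially a pinhole-camera back-projection. This Python sketch shows the idea for one pixel; the intrinsics (FX, FY, CX, CY) below are made-up stand-ins, since the real values come from the sensor's own calibration (in the Kinect SDK, the CoordinateMapper handles this for you):

```python
# Back-project one depth pixel (u, v, depth in mm) into a camera-space point.
# These intrinsics are assumed placeholder values, not real Kinect calibration.
FX, FY = 365.0, 365.0   # assumed focal lengths in pixels
CX, CY = 256.0, 212.0   # assumed principal point (the depth frame is 512x424)

def depth_pixel_to_point(u, v, depth_mm):
    """Return an (x, y, z) camera-space point in meters."""
    z = depth_mm / 1000.0        # Kinect depth is reported in millimeters
    x = (u - CX) * z / FX        # shift to the optical center, scale by depth
    y = (v - CY) * z / FY
    return (x, y, z)

# A pixel at the principal point maps straight down the optical axis:
print(depth_pixel_to_point(256, 212, 2000))  # -> (0.0, 0.0, 2.0)
```

    Run over every pixel of a 512x424 depth frame, this yields up to ~217,000 points per frame, which is why point cloud streams are usually decimated or compressed before being sent to the HoloLens.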

    The point cloud example above comes from the Brekel Pro Point Cloud v2 tool, which allows you to read, record and modify point clouds with your Kinect.

    The tool also includes a Unity package that replays point clouds, like the one above, in a Unity for Windows app. The final steps of transferring point cloud data over the HoloToolkit sharing server to HoloLens is an exercise that will be left to the reader.

    If you are interested in a custom server solution, however, you can give the open source LiveScan 3D – HoloLens project a try.

    HoloLens shared experiences and beyond

    There are actually many ways to orchestrate communication for the HoloLens, of which, so far, we’ve mainly discussed just one. A custom socket solution may be better if you want direct HoloLens-to-HoloLens communication without going through a PC-based broker like the sharing service.

    Yet another option is to use a framework like WebRTC for your communication layer. This has the advantage of being an open specification, so there are implementations for a wide variety of platforms such as Android and iOS. It is also a communication platform that is used, in particular, for video chat applications, potentially giving you a way to create video conferencing apps not only between multiple HoloLenses, but also between a HoloLens and mobile devices.

    In other words, all the tools for doing HoloLens telepresence are out there, including examples of various ways to implement it. It’s now just a matter of waiting for someone to create a great solution.

    The post Building a Telepresence App with HoloLens and Kinect appeared first on Building Apps for Windows.

    Cleaning up the Visual Studio 2017 package cache

    Now that you can disable or move the package cache for Visual Studio 2017 and other products installed with the new installer, packages are removed for whatever instance(s) you are installing, modifying, or repairing.

    If you have a lot of instances and want to clean all of them up easily from the command line – perhaps scripting it for users in an organization – you can combine tools such as vswhere or the VSSetup PowerShell module with the installer at %ProgramFiles(x86)%\Microsoft Visual Studio\Installer\vs_installer.exe.

    Batch script with vswhere

    You can get the installation path for all instances and call the installer for each one: either disable the cache (only necessary once, but for simplicity the script passes it for each instance) and modify – which essentially just removes the package payloads – or re-enable the cache and repair, which re-downloads the packages.

    Note that the following sample is intended for use within a batch script. If you type it directly on the command line, use only one “%”. Run it from an elevated command prompt to avoid being prompted to elevate each time vs_installer.exe is launched.

    @echo off
    setlocal enabledelayedexpansion
    for /f "usebackq delims=" %%i in (`vswhere -all -property installationPath`) do (
      if /i "%1"=="cache" (
        set args=repair --cache
      ) else (
        set args=modify --nocache
      )
      start /wait /d "%ProgramFiles(x86)%\Microsoft Visual Studio\Installer" vs_installer.exe !args! --installPath "%%i" --passive --norestart
      rem use delayed expansion so the exit code is read after each launch, not when the loop is parsed
      if "!ERRORLEVEL!"=="3010" set REBOOTREQUIRED=1
    )
    if "%REBOOTREQUIRED%"=="1" (
      echo Please restart your machine
      exit /b 3010
    )

    PowerShell script with VSSetup

    While you can also use vswhere within PowerShell easily (e.g. vswhere -format json | convertfrom-json), this example uses the VSSetup PowerShell module you can easily obtain in Windows 10 with: install-module -scope currentuser VSSetup.

    Put the following example into a script and run it from an elevated PowerShell host to avoid being prompted to elevate each time vs_installer.exe is launched.

    param (
      [switch] $Cache
    )
    $startArgs = if ($Cache) {
      'repair', '--cache'
    } else {
      'modify', '--nocache'
    }
    $rebootRequired = $false
    get-vssetupinstance -all | foreach-object {
      # avoid assigning to the automatic $args variable
      $vsArgs = $startArgs + '--installPath', "`"$($_.InstallationPath)`"", '--passive', '--norestart'
      # Start-Process does not set $LASTEXITCODE; use -PassThru and read ExitCode
      $p = start-process -wait -passThru -filePath "${env:ProgramFiles(x86)}\Microsoft Visual Studio\Installer\vs_installer.exe" -args $vsArgs
      if ($p.ExitCode -eq 3010) {
        $rebootRequired = $true
      }
    }
    if ($rebootRequired) {
      "Please restart your machine"
      exit 3010
    }

    Both of these examples will remove all instances’ packages or put them back, depending on the command-line arguments you pass to the scripts.

    Editing a .VMCX file

    In Windows Server 2016 we moved from using .XML for our virtual machine configuration files to a binary format that we call .VMCX.

    There are many benefits to this, but one of the downsides is that it is no longer possible to easily edit a virtual machine configuration file that is not registered with Hyper-V. Fortunately, we provide all the APIs you need to do this without editing the file directly.

    This code sample takes a virtual machine configuration file that is not registered with Hyper-V. It then:

    • Loads the virtual machine into memory – without actually importing it into Hyper-V
    • Changes some settings on the virtual machine
    • Exports this changed virtual machine to a new .VMCX file

    Using this method you can make any changes you need to a .VMCX file without actually having to import the virtual machine. The key piece of information here is that when you perform a traditional import of a virtual machine, you use ImportSystemDefinition to create a planned virtual machine (an in-memory copy), which you then realize to complete the import operation. If you do not want to import the virtual machine but just want to edit it, you can instead modify the planned virtual machine and pass it into ExportSystemDefinition to create a new configuration file.

    Cheers,
    Ben

    Meet the Azure Analysis Services team at upcoming user group meetings

    Come meet the Analysis Services team in person as they answer your questions on Analysis Services in Azure. Learn about the new service and features available now.

    The success of any modern data-driven organization requires that information is available at the fingertips of every business user, not just IT professionals and data scientists, to guide their day-to-day decisions. Self-service BI tools have made huge strides in making data accessible to business users. However, most business users don’t have the expertise or desire to do the heavy lifting that is typically required – finding the right sources of data, importing the raw data, transforming it into the right shape, and adding business logic and metrics – before they can explore the data to derive insights. With Azure Analysis Services, a BI professional can create a semantic model over the raw data and share it with business users so that all they need to do is connect to the model from any BI tool and immediately explore the data and gain insights. Azure Analysis Services uses a highly optimized in-memory engine to provide responses to user queries at the speed of thought.

    SQL Saturday Silicon Valley – April 22nd
    Microsoft Technology Center, 1065 La Avenida, Mountain View, CA
    Group Site
    Register Now

    Boston Power BI User Group – April 25th 6:30pm – 8:30pm
    MS Office 5 Wayside Road, Burlington, MA
    Group Site
    Register Now

    New York Power BI User Group – April 27th 6pm-8:30pm
    MS Office Times Square, NY
    Group Site
    Register Now

    Philadelphia Power BI User Group – May 1st 3pm-6pm
    MS Office Malvern, PA
    Group Site
    Register Now

    Philadelphia SQL User Group – May 2nd
    Group Site
    Registration: Coming Soon!

    New to Azure Analysis Services? Find out how you can try Azure Analysis Services or learn how to create your first data model.

    Announcing the general availability of Service Map solution

    Modern applications are complex, multi-tier, business-service systems that might span multiple datacenters and cloud hosting environments. To manage these applications and ensure they meet their SLA, you need an end-to-end view that ties together the different application components and infrastructure services.

    I am excited to announce the general availability of Service Map, which enables you to automatically discover and build a common reference map of dependencies across servers, processes, and third-party services in real-time. Using Service Map, you can isolate problems and accelerate root-cause analysis by visualizing process and server dependencies. You can manage incidents and improve SLAs by viewing cascading alerts, failed connections, load-balancing issues, and rogue clients. You can also make sure that nothing is left behind during migrations with the help of detailed process and server dependency inventory.

    One customer, Inframon, uses Service Map to help meet stringent compliance requirements and view their entire network, end-to-end.  “The standard view in OMS is great, but Service Map is the first view we go to. It’s like the instant, let’s see what the breadth of the problem is, and then work through it,” says Gordon McKenna, CEO and Microsoft MVP.  “We can also see patch and alert information. It just gives us that fuller picture of the services we’re offering, how we’re actually managing them, and how well we’re looking after these hosts.”

    Comprehensive view of app components and infrastructure services

    When performance issues and outages occur, a top challenge is isolating the source of the problem. Without visibility to how systems and application components are interconnected, team members join a bridge call, each with their own tools and data, and often the finger-pointing begins.

    By providing automatic discovery of dependencies across any workload with zero pre-definition required, Service Map removes the guesswork that’s required to isolate the problem domain. With a common reference point, teams can quickly focus on the problem area, reduce mean time to resolution (MTTR), and involve fewer resources.

    Comprehensive view of app components and infrastructure services

    Improves SLA with proactive incident management

    Better together with OMS Solutions

    Through Service Map’s automatic dependency discovery and mapping, users can visualize the data from multiple OMS solutions such as Log Analytics, Change Tracking, Update Management, and Security, all in context. Rather than looking at individual types of data, you can now see all data that’s related to the systems you care about most, as well as data on their dependencies.

    Service Map’s automatic dependency discovery and mapping

    Azure migration assurance

    In addition to enhancing your troubleshooting and root-cause analysis, Service Map helps to expedite your app and workload migrations, accelerating your transition to the cloud. Service Map helps you eliminate the guesswork of problem isolation, identify surprise connections and broken links in your environment, and perform Azure migrations knowing that critical systems and endpoints won’t be left behind. Service Map provides a REST API that makes it easy to pull data on dependencies into your existing tools and processes.

    This solution is available as part of Insight & Analytics in Microsoft Operations Management Suite (OMS). Try today by activating your free account.

    To learn more, visit the documentation page and the Insight & Analytics webpage.

    Nick Burling
    Principal Program Manager
    Enterprise Cloud Management Team
