Channel: TechNet Technology News

Azure Search now available in UK


We’re pleased to announce Azure Search is now available in the UK.

Azure Search is a search-as-a-service that helps customers build sophisticated search experiences into web and mobile applications. It is now generally available in the UK, deployed to the UK South region.

Learn more about Azure Search and view our documentation.

We are excited about this addition, and invite customers using this Azure region to try Azure Search today!


Azure brings 5 new services to Canada


Since the beginning of the year, we’ve deployed multiple new services in Canada. Please find below a brief summary of recently deployed services.

Available now

HDInsight is the only fully managed cloud Hadoop offering that provides optimized open source analytic clusters for Spark, Hive, MapReduce, HBase, Storm, Kafka, and R Server, backed by a 99.9% SLA. Each of these big data technologies and ISV applications is easily deployable as a managed cluster with enterprise-level security and monitoring.

Learn more about HDInsight.

Azure Functions is an event-based serverless compute experience that accelerates your development. It scales based on demand, and you pay only for the resources you consume. Azure Functions’ numerous triggers and bindings, such as HTTP, storage, queues, and event streams, let you quickly build solutions with less code.

Learn more about Azure Functions.

Managed Disks makes managing your VM disks much simpler. With Managed Disks, customers only need to specify the desired disk type (Standard or Premium) and the disk size, and Azure will create and manage the disk for them. In addition, Managed Disks comes with enhanced VM scale set (VMSS) capabilities, such as defining scale sets with attached data drives and creating a scale set with up to 1,000 VMs from an Azure platform or marketplace image.

Learn more about Managed Disks in our General Availability Announcement.

Azure Site Recovery contributes to your BCDR strategy by orchestrating replication of on-premises virtual machines and physical servers. You replicate servers and VMs from your primary on-premises datacenter to the cloud (Azure) or to a secondary datacenter.

Learn more about Azure Site Recovery.

For the past few months, Azure Backup required service registration through PowerShell. This is no longer required, and you can use Backup directly in the Azure Portal. All subscriptions that were registered previously will continue to work without any intervention. In addition, Hybrid Backup (on-premises to Azure backup) is now deployed and is also available in the Azure Portal. Azure Backup can be used to back up, protect, and restore your data in the Microsoft cloud. Azure Backup replaces your existing on-premises or off-site backup solution with a cloud-based solution that is reliable, secure, and cost-competitive.

Learn more about Azure Backup.

Azure Command Line 2.0 now generally available


Back in September, we announced Azure CLI 2.0 Preview. Today, we’re announcing the general availability of the vm, acs, storage, and network commands in Azure CLI 2.0. These commands provide a rich interface for a large array of use cases, from disk and extension management to container cluster creation.

Today’s announcement means that customers can now use these commands in production, with full support from Microsoft through both our Azure support channels and GitHub. We don’t expect breaking changes for these commands in new releases of Azure CLI 2.0.

This new version of the Azure CLI should feel much more native to developers familiar with command-line experiences in the bash environment on Linux and macOS. It offers simple commands with smart defaults for the most common operations, supports tab completion, and produces pipe-able output for use with text-parsing tools like grep, cut, and jq, as well as the popular JMESpath query syntax. It’s easy to install on the platform of your choice and easy to learn.

During the preview period, we’ve received valuable feedback from early adopters and have added new features based on that input. The number of Azure services supported in Azure CLI 2.0 has grown and we now have command modules for sql, documentdb, redis, and many other services on Azure. We also have new features to make working with Azure CLI 2.0 more productive. For example, we’ve added the "--wait" and "--no-wait" capabilities that enable users to respond to external conditions or continue the script without waiting for a response.

We’re also very excited about some new features in Azure CLI 2.0, particularly the combination of Bash and CLI commands, and support for new platform features like Azure Managed Disks.

Here’s how to get started using Azure CLI 2.0.

Installing the Azure CLI

The CLI runs on Mac, Linux, and of course, Windows. Get started now by installing the CLI on whatever platform you use. Also, review our documentation and samples for full details on getting started with the CLI and on accessing Azure services from scripts.

Here’s an example of the features included with the "vm" command:

[Image: vm command example]

Working with the Azure CLI

Accessing Azure and starting one or more VMs is easy. Here are two lines of code that create a resource group (a way to group and manage Azure resources) and a Linux VM using Azure’s latest Ubuntu image in the westus2 region of Azure.

az group create -n MyResourceGroup -l westus2
az vm create -g MyResourceGroup -n MyLinuxVM --image UbuntuLTS 

Using the public IP address of the VM (shown in the output of the vm create command, or available separately via the "az vm list-ip-addresses" command), connect directly to your VM from the command line:

ssh 

For Windows VMs on Azure, you can connect using remote desktop ("mstsc " from Windows desktops).
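
As a sketch of the lookup-and-connect flow described above (the VM name matches the earlier example, but the admin user name azureuser is hypothetical; substitute the one you configured):

```shell
# Look up the VM's public IP address; tsv output gives the bare value
ip=$(az vm list-ip-addresses -g MyResourceGroup -n MyLinuxVM \
    --query "[0].virtualMachine.network.publicIpAddresses[0].ipAddress" \
    --output tsv)

# Connect over SSH (replace azureuser with your VM's admin user name)
ssh azureuser@$ip
```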

The "vm create" command is a long-running operation, and it may take some time for the VM to be created, deployed, and available for use on Azure. In most automation scripts, waiting for this command to complete before running the next one is fine, because its result may be used by the next command. In other cases, however, you may want to continue running commands while a previous one is still waiting on results from the server. Azure CLI 2.0 now supports a new "--no-wait" option for such scenarios.

az vm create -n MyLinuxVM2 -g MyResourceGroup --image UbuntuLTS --no-wait

As with resource groups and virtual machines, you can use Azure CLI 2.0 to create other resource types in Azure using the same "az <resource> create" naming pattern.

For example, you can create managed resources on Azure like WebApps within Azure AppServices:

# Create an Azure AppService that we can use to host multiple web apps 
az appservice plan create -n MyAppServicePlan -g MyResourceGroup

# Create two web apps within the appservice (note: name param must be a unique DNS entry)
az appservice web create -n MyWebApp43432 -g MyResourceGroup --plan MyAppServicePlan
az appservice web create -n MyWebApp43433 -g MyResourceGroup --plan MyAppServicePlan

Read the CLI 2.0 reference docs to learn more about the create command options for various Azure resource types. The Azure CLI 2.0 lets you list your Azure resources and provides different output formats.

--output | Description
json     | JSON string. json is the default. Best for integrating with query tools, etc.
jsonc    | Colorized JSON string.
table    | Table with column headings. Shows only a curated list of common properties for the selected resource type, in human-readable form.
tsv      | Tab-separated values with no headers. Optimized for piping to other text-processing commands and tools like grep, awk, etc.

You can use the "--query" option with the list command to find specific resources, and to customize the properties that you want to see in the output. Here are a few examples:

# list all VMs in a given Resource Group
az vm list -g MyResourceGroup --output table

# list all VMs in a Resource Group whose name contains the string ‘My’
az vm list --query "[?contains(resourceGroup,'My')]" --output tsv

# same as above but only show the 'VM name' and 'osType' properties, instead of all default properties for selected VMs
az vm list --query "[?contains(resourceGroup,'My')].{name:name, osType:storageProfile.osDisk.osType}" --output table

Azure CLI 2.0 supports management operations against SQL Server on Azure. You can use it to create servers, databases, data warehouses, and other data sources; and to show usage, manage administrative logins, and run other management operations.

# Create a new SQL Server on Azure
az sql server create -n MySqlServer -g MyResourceGroup --administrator-login  --administrator-login-password  -l westus2

# Create a new SQL Server database
az sql db create -n MySqlDB -g MyResourceGroup --server-name MySqlServer -l westus2

# list available SQL databases on Server within a Resource Group
az sql db list -g MyResourceGroup --server-name MySqlServer

Scripting with the new Azure CLI 2.0 features

The new ability to combine Bash and Azure CLI 2.0 commands in the same script can be a big time saver, especially if you’re already familiar with Linux command-line tools like grep, cut, jq and JMESpath queries.

Let’s start with a simple example that stops a VM in a resource group using the VM’s resource ID (or multiple IDs separated by spaces):

az vm stop --ids <resource-id> [<resource-id> ...]

You can also stop a VM in a resource group using the VM’s name. Here’s how to stop the VM we created above:

az vm stop -g MyResourceGroup -n MyLinuxVM

For a more complicated use case, let’s imagine we have a large number of VMs in a resource group, running Windows and Linux.  To stop all running Linux VMs in that resource group, we can use a JMESpath query, like this:

os="Linux"
rg="resourceGroup"
ps="VM running"
rvq="[].{resourceGroup: resourceGroup, osType: storageProfile.osDisk.osType, powerState: powerState, id:id}| [?osType=='$os']|[?resourceGroup=='$rg']| [?powerState=='$ps']|[].id"
az vm stop --ids $(az vm list --show-details --query "$rvq" --output tsv)

This script issues an "az vm stop" command, but only for the VMs returned by the JMESpath query (defined in the rvq variable). The os, rg, and ps variables supply the filter values: each VM’s storageProfile.osDisk.osType, resourceGroup, and powerState properties are compared against them, and the IDs of all matching VMs are returned (in tsv format) for use by the "az vm stop" command.

Azure Container Services in the CLI

Azure Container Service (ACS) simplifies the creation, configuration, and management of a cluster of virtual machines that are preconfigured to run container applications. You can use Docker images with DC/OS (powered by Apache Mesos), Docker Swarm or Kubernetes for orchestration.

The Azure CLI supports the creation and scaling of ACS clusters via the az acs command. You can discover full documentation for Azure Container Services, as well as a tutorial for deploying an ACS DC/OS cluster with Azure CLI commands.
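
As an illustrative sketch of the az acs command (the cluster and resource-group names are hypothetical, and exact flags may vary by CLI version):

```shell
# Create a Kubernetes-orchestrated ACS cluster, generating SSH keys if needed
az acs create -n MyK8sCluster -g MyResourceGroup \
    --orchestrator-type kubernetes --generate-ssh-keys

# Scale the cluster's agent pool to 5 nodes
az acs scale -n MyK8sCluster -g MyResourceGroup --new-agent-count 5
```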

Scale with Azure Managed Disks using the CLI

Microsoft recently announced the general availability of Azure Managed Disks to simplify the management and scaling of virtual machines. You can create a virtual machine with an implicit managed disk for a specific disk image, and also create managed disks from blob storage or standalone with the az vm disk command. Updates and snapshots are easy as well; check out what you can do with Managed Disks from the CLI.
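
For instance (the disk name is hypothetical, the VM name matches the earlier example, and exact flags may vary by CLI version), a standalone managed disk can be created and attached to an existing VM:

```shell
# Create an empty 10 GB managed disk in the resource group
az disk create -n MyDataDisk -g MyResourceGroup --size-gb 10

# Attach the managed disk to an existing VM as a data disk
az vm disk attach -g MyResourceGroup --vm-name MyLinuxVM --disk MyDataDisk
```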

Start using Azure CLI 2.0 today!

Whether you are an existing CLI user or starting a new Azure project, it’s easy to get started with the CLI at http://aka.ms/CLI and master the command line with our updated docs and samples. Check out topics like installing and updating the CLI, working with Virtual Machines, creating a complete Linux environment including VMs, Scale Sets, Storage, and network, and deploying Azure Web Apps, and let us know what you think!

Azure CLI 2.0 is open source and on GitHub.

In the next few months, we’ll provide more updates. As ever, we want your ongoing feedback! Customers using the vm, storage and network commands in production can contact Azure Support for any issues, reach out via StackOverflow using the azure-cli tag, or email us directly at azfeedback@microsoft.com.

Exchange 2007 reaches end of life on April 11th. What’s your plan to move?


On April 11, 2017, Exchange Server 2007 will reach End of Life. If you haven’t already begun your migration from Exchange 2007 to Office 365 or Exchange 2016, you need to start planning now.

End of life means that Microsoft will no longer provide the following for Exchange 2007:

  • Free or paid assisted support (including custom support agreements)
  • Bug fixes for issues that are discovered and that may impact the stability and usability of the server
  • Security fixes for vulnerabilities that are discovered and that may make the server vulnerable to security breaches
  • Time zone updates

Your installation of Exchange 2007 will continue to run after this date. However, because of the changes listed above, we strongly recommend that you migrate from Exchange 2007 as soon as possible.

To learn about your options for migrating from Exchange 2007 to Office 365 or a newer version of Exchange Server, check out Exchange 2007 End of Life Roadmap.

If you have other Office 2007 servers or clients, such as SharePoint Server 2007, PerformancePoint Server 2007, Office Communications Server, Project Server 2007, or Office 2007 client applications, check out Resources to help you upgrade from Office 2007 servers and clients for information about their end of life dates and upgrade options.

Exchange Team

Learning Tools for Word Online and OneNote Online now available, plus new languages


We are pleased to share that Learning Tools are now available in new languages to all customers with Office 365 or a personal OneDrive account. Previously, we announced that we would deliver Learning Tools to Word Online and OneNote Online, so users on any device can experience the same reading and writing benefits people see using Learning Tools for OneNote. To access these new Learning Tools in Word Online or OneNote Online, go to the View menu and click Immersive Reader. In addition, Word 2016 has Learning Tools built into Read Mode.

Learning Tools includes a modified reader view that utilizes techniques proven to help people read more effectively, such as:

  • Read Aloud—Reads text aloud with simultaneous highlighting that improves decoding, fluency and comprehension while sustaining the reader’s focus and attention.
  • Spacing—Optimizes font spacing in a narrow column view to improve reading fluency for users who suffer from visual crowding issues.
  • Syllables—Shows the breaks between syllables to enhance word recognition and decoding.
  • Parts of Speech*—Supports writing instruction and grammar comprehension by identifying verbs, nouns and adjectives.

The Immersive Reader features help everyone on any device improve their reading skills, including people with dyslexia, dysgraphia, or ADHD, emerging readers, and those across the broad range of unique student abilities.

To get started, check out our support page.


Additionally, Learning Tools are available in more languages, including new Text-to-Speech languages and voices, Syllables languages, and Parts of Speech.*

We look forward to hearing about the impact of Learning Tools on students or anyone who uses the tools to read and write more effectively in the applications they love!

*Parts of Speech and Comprehension mode are not available in Learning Tools Immersive Reader for Word desktop.

The post Learning Tools for Word Online and OneNote Online now available, plus new languages appeared first on Office Blogs.

#AzureAD Connect Health: Monitoring for Windows Server AD DS and Sync Error Reports are GA + simplified licensing


Howdy folks,

It’s a big day for Azure AD! I’m happy to let you know that:

  • Azure AD Connect Health for Windows Server AD DS is now GA!
  • Azure AD Connect Health Sync Error Reports is now GA!
  • Based on your feedback, we’ve simplified the Azure AD Connect Health licensing model.

I’ve invited two program managers from my team, Varun Karandikar and Arturo Lucatero, to give you all the details here. Their blog is below.

As you read through these updates and begin exploring, share your feedback with us. We’re always listening!

Best regards,

Alex Simons (Twitter: @Alex_A_Simons)

Director of Program Management

Microsoft Identity Division

---

Hello everyone,

We couldn’t be more thrilled to share the latest updates on Azure AD Connect Health with you! Before we get started, we’d recommend that if you haven’t tried this service yet, you really should! Please visit our documentation page, and give it a try! (Note: Azure AD Connect Health requires Azure AD Premium licenses)

General Availability of Connect Health for Windows Server AD

You can now monitor your on-premises Active Directory (AD DS) infrastructure from the cloud using Connect Health for AD DS!

In the six months Connect Health for AD DS lived in preview, we received all kinds of feedback from the community. You told us about areas that needed polishing, capabilities that were working well, and new features you would like to see included. Your feedback has been invaluable in helping us improve our offering and get to general availability status.

Here are some of the updates we made during the preview:

  • The Domain Controllers dashboard contains more information. Adding OS Name was one of the most popular requests we received.
  • Support for monitoring Read-Only Domain Controllers and identifying RODCs in the Domain Controllers dashboard.
  • Big performance improvements to the portal. The main dashboards load ten times faster, leading to a smooth experience for forests with 100+ Domain Controllers.
  • A new entry point to the Performance Monitors Collection. Now you can easily pin the monitors collection to your Azure dashboard.
  • Alert coverage for all the essential services running on your DCs. If an essential service like Kerberos Key Distribution Center or Netlogon stops, you will quickly be notified about it.
  • Refinements to existing alerts to minimize noisy notifications. Improving the detection logic of alerts is highly important and something we’re always investing in.

On behalf of the entire Connect Health team, we thank everyone who has deployed this feature, reported issues, and sent feedback, and we encourage others to do the same!

General Availability of Sync Error Reports

You may encounter object-level sync errors while syncing data from your on-premises AD to Azure AD. With the Sync Error Reports within Azure AD Connect Health for Sync, it’s now easy to get all the relevant information about sync errors in one place. This reduces the time required to fix errors and helps your users embrace the cloud.

The Sync Error Reports are now generally available to all Azure AD Premium customers using Azure AD Connect (version 1.1.281.0 or higher). Here are a few key points to note about them:

  • Provide an overview of errors based on error type and root cause.
  • Allow you to download the report with all errors as a single CSV.
  • Make it easy to understand the root cause and steps to fix the error.
  • Offer a side-by-side comparison of objects for errors caused by duplicates.
  • Allow you to delegate report access to users who are not global admins via Role Based Access Control.
  • Provide weekly email notifications.

Here’s a demo of the report available in the new Azure Portal:

Licensing Update

We also heard your feedback regarding our licensing model and that it was complicated to understand and to manage. In response, we made the following changes to make it simpler:

  • First Connect Health agent requires at least one Azure AD Premium license.
  • Each additional agent requires 25 additional incremental AADP licenses.
  • Agent count is equivalent to the total number of agents registered per role (AD FS, Azure AD Connect, AD DS) per server.
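
One reading of the model above, sketched as arithmetic (the agent count is a hypothetical example):

```shell
# Example: 3 registered agents (say, one each for AD FS, Azure AD Connect, and AD DS).
# The first agent needs 1 AADP license; each additional agent needs 25 more.
agents=3
licenses=$(( agents == 0 ? 0 : 1 + (agents - 1) * 25 ))
echo $licenses   # 51 licenses for 3 agents
```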

You can also find this information on the Azure AD Pricing page.

Congratulations! You are officially caught up with Azure AD Connect Health news.

Now it’s time for that last request: please share your thoughts on Azure AD Connect Health! Comments, questions, and suggestions are strongly encouraged and extremely important to us. Post below, in our discussion forum, or send us a note at askaadconnecthealth@microsoft.com. We look forward to hearing from you.

Thanks for reading!

Varun, Arturo and The Azure AD Connect Health Team

eBay makes a bid for flexibility in the workplace using Office 365



Today’s Office 365 post was written by Ron Markezich, corporate vice president at Microsoft.

eBay joined the Office 365 family of enterprise customers a few years ago, and I’m excited that the groundbreaking online retailer is now looking to take the next step toward advanced productivity by investing in Office 365 E5 for communication and analytics.

I recently heard from Rami Mazid, head of Global Infrastructure and End Users Services at eBay, about how Office 365 dovetails with the company’s internal goals. “At eBay, we were built on the belief that we can empower and connect people to create more opportunity, specifically through commerce. Within our own company, we’ve mirrored that same belief by leveraging technology and tools that enable our employees to easily connect and collaborate with one another. We’re using cloud-based technologies like Office 365 to give our employees the flexibility to work how they want, helping them forge bonds with colleagues across geographical boundaries for greater productivity. And we’re doing this in a way that’s both manageable and trustworthy for us as an enterprise.”

Smart companies look for ways to empower employees with flexible productivity options without compromising on security, privacy or regulatory compliance. eBay continues to foster a culture of collaboration by bringing standardized yet flexible technology to its approximately 30,000 employees in 62 locations across the globe.

Innovators also look for ways to enable productivity quickly. After eBay made the decision to adopt Office 365, the company engaged with the Microsoft FastTrack team for help migrating mailboxes to Exchange Online. Because Microsoft FastTrack supports customers with everything from remediation efforts to onboarding, companies like eBay can put new capabilities into the hands of their employees faster.

And by enabling its employees to work together effectively and efficiently through Office 365, eBay is making it possible to serve customers in a more comprehensive way. We’re excited to see what the company does next!

—Ron Markezich

The post eBay makes a bid for flexibility in the workplace using Office 365 appeared first on Office Blogs.

Webinar: On-premises conditional access with EMS and NetScaler


The demand for a modern mobile user experience isn’t just a matter of convenience: people do their best work when they have the freedom to access their corporate email and documents from anywhere, on any device. But increasing freedom and mobility also raises the stakes for IT, requiring you to balance the need to protect your corporate data with the expectations and needs of your users.

Join us for a free one-hour webinar with Citrix NetScaler Unified Gateway expert Akhilesh Dhawan and David Randall from Microsoft Intune to learn about a product integration between Microsoft EMS and Citrix NetScaler that provides on-premises conditional access to corporate resources and data.

The integration of Citrix NetScaler Unified Gateway with Microsoft Enterprise Mobility + Security lets you:

  • Give your employees the highly productive mobile experience they expect.
  • Ensure that only the right users on compliant devices have access to your corporate data and resources.


The live webinar is taking place on March 1, 2017 at 11 AM PT.

REGISTER NOW to learn more about this integration and to see how it works!


Update 1702 for Configuration Manager Technical Preview Branch – Available Now!


Hello everyone! We are happy to let you know that update 1702 for the Technical Preview Branch of System Center Configuration Manager has been released. Technical Preview Branch releases give you an opportunity to try out new Configuration Manager features in a test environment before they are made generally available. This month’s new preview features include:

  • Azure Active Directory Domain Services support – You can install a ConfigMgr site on an Azure virtual machine that is connected to Azure Active Directory Domain Services, and use the site to manage other Azure virtual machines connected to the same domain.
  • Improvements for in-console search – Based on UserVoice feedback, we have added several improvements to in-console search, including searching by Object Path, preservation of search text, and preservation of your decision to search sub-nodes.
  • Windows Update for Business integration – You can now use Windows Update for Business assessment results as conditional rules in a Conditional Access compliance policy.
  • Customize the high-risk deployment warning – You can now customize the Software Center warning shown when running a high-risk deployment, such as a task sequence to install a new operating system. The default string regarding data may not apply in scenarios like an in-place upgrade.
  • Close executable files at the deadline when they would block application installation – If executable files are listed on the Install Behavior tab for a deployment type and the application is deployed to a collection as required, a more intrusive notification informs the user, and the specified executable files are closed automatically at the deadline.

This release also includes the following improvements for customers using System Center Configuration Manager connected with Microsoft Intune to manage mobile devices:

  • Non-Compliant Apps Compliance Settings – Add iOS and Android applications to a non-compliant apps rule in a compliance policy to trigger conditional access if the devices have those applications installed.
  • PFX Certificate Creation and Distribution and S/MIME Support – Admins can create and deploy PFX certificates to users. These certificates can then be used for S/MIME encryption and decryption on devices the user has enrolled.
  • Android for Work Support – You can now manage Android for Work devices. This enables you to enroll devices, approve and deploy apps, and configure policies for Android for Work devices.

Update 1702 for Technical Preview Branch is available in the Configuration Manager console. For new installations please use the 1610 baseline version of Configuration Manager Technical Preview Branch available on TechNet Evaluation Center.

We would love to hear your thoughts about the latest Technical Preview! To provide feedback or report any issues with the functionality included in this Technical Preview, please use Connect. If there’s a new feature or enhancement you want us to consider for future updates, please use the Configuration Manager UserVoice site.

Thanks,

The System Center Configuration Manager team

Configuration Manager Resources:

Documentation for System Center Configuration Manager Technical Previews

Try the System Center Configuration Manager Technical Preview Branch

Documentation for System Center Configuration Manager

System Center Configuration Manager Forums

System Center Configuration Manager Support

Download the Configuration Manager Support Center

Getting Started with a Mixed Reality Platformer Using Microsoft HoloLens


The platform game genre has undergone constant evolution, from its earliest incarnations in Donkey Kong and Pitfall to recent variations like Flappy Bird. Shigeru Miyamoto’s Super Mario Bros. is recognized as the best platform game of all time, setting a high bar for everyone who came after. The Lara Croft series built on Shigeru’s innovations by taking the standard side-scrolling platformer and expanding it into a 3D world. With mixed reality and HoloLens, we all have the opportunity to expand the world of the platform game yet again.

Standard video game conventions undergo a profound change when you put a platformer in a mixed reality environment. First of all, instead of sitting in a chair and moving your character inside your display screen, you physically follow your character as he moves around the real world. Second, the obstacles your protagonist encounters aren’t just digital ones but also physical objects in the real world, like tables and chairs and stacks of books. Third, because every room you play in effectively becomes a new level, the mixed reality platform game never runs out of levels and every level presents unique challenges. Instead of comparing scores for a certain game stage, you will need to compare how well you did in the living room—or in Jane’s kitchen or in Shigeru’s basement.

In this post, you will learn how to get started building a platform game for HoloLens using all free assets. In doing so, you will learn the basics of using Spatial Mapping to scan a room so your player character can interact with it. You will also use the slightly more advanced features of Spatial Understanding to determine characteristics of the game environment. Finally, all of this will be done in the Unity IDE (currently 5.5.0f3) with the open source HoloToolkit.

Creating your game world with Spatial Mapping

How does HoloLens make it possible for virtual objects and physical objects to interact?  The HoloLens is equipped with a depth camera, similar to the Kinect v2’s depth camera, that progressively scans a room in order to create a spatial map through a technique known as spatial mapping. It uses this data about the real world to create 3D surfaces in the virtual world. Then, using its four environment-aware cameras, it positions and orients the 3D reconstruction of the room in correct relation to the player. This map is often visualized at the start of HoloLens applications as a web of lines blanketing the room the player is in. You can also sometimes trigger this visualization by simply tapping in the air in front of you while wearing the HoloLens.

To play with spatial mapping, create a new 3D project in Unity. You can call the project “3D Platform Game.” Create a new scene for this game called “main.”

Next, add the HoloToolkit Unity package to your app. You can download the package from the HoloToolkit project’s GitHub repository. This guide uses HoloToolkit-Unity-v1.5.5.0.unitypackage. In the Unity IDE, select the Assets tab. Then click on Import Package -> Custom Package and find the download location of the HoloToolkit to import it into the scene.

The HoloToolkit provides lots of useful helpers and shortcuts for developing a HoloLens app. Under the HoloToolkit menu, there is a Configure option that lets you correctly rig your game for HoloLens. After being sure to save your scene and project, click on each of these options to configure your scene, your project and your capability settings. Under capabilities, you must make sure to check off SpatialPerception—otherwise spatial mapping will not work. Also, be sure to save your project after each change. If for some reason you would prefer to do this step manually, there is documentation available to walk you through it.

To add spatial mapping functionality to your game, all you need to do is drag the SpatialMapping prefab into your scene from HoloToolkit -> SpatialMapping -> Prefabs. If you build and deploy the game to your HoloLens or HoloLens Emulator now, you will be able to see the web mesh of surface reconstruction occurring.

Congratulations! You’ve created your first level.

Adding a protagonist and an Xbox Controller

The next step is to create your protagonist. If you are lucky enough to have a Mario or a Luigi rigged model, you should definitely use that. In keeping with the earlier promise to use only free assets, however, this guide will use the complimentary Ethan asset.

Go to the Unity menu and select Assets -> Import Package -> Characters. Copy the whole package into your game by clicking Import. Finally, drag the ThirdPersonController prefab from Assets -> Standard Assets -> Characters -> ThirdPersonCharacter -> Prefabs into your scene.

Next, you’ll want a Bluetooth controller to steer your character. Newer Xbox One controllers support Bluetooth. To get one to work with HoloLens, you’ll need to closely follow these directions in order to update the firmware on your controller. Then pair the controller to your HoloLens through the Settings -> Devices menu.

To support the Xbox One controller in your game, you should add another free asset. Open the Asset Store by clicking on Window -> Asset Store and search for Xbox Controller Input for HoloLens. Import this package into your project.

You can hook this up to your character with a bit of custom script. In your scene, select the ThirdPersonController prefab. Find the Third Person User Control script in the Inspector window and delete it. You’re going to write your own custom control that depends on the Xbox Controller package you just imported.

In the Inspector window again, go to the bottom and click on Add Component -> New Script. Name your script ThirdPersonHoloLensControl and copy/paste the following code into it:


using UnityEngine;
using HoloLensXboxController;
using UnityStandardAssets.Characters.ThirdPerson;

public class ThirdPersonHoloLensControl : MonoBehaviour
{

    private ControllerInput controllerInput;
    private ThirdPersonCharacter m_Character;
    private Transform m_Cam;                
    private Vector3 m_CamForward;            
    private Vector3 m_Move;
    private bool m_Jump;                      

    public float RotateAroundYSpeed = 2.0f;
    public float RotateAroundXSpeed = 2.0f;
    public float RotateAroundZSpeed = 2.0f;

    public float MoveHorizontalSpeed = 1f;
    public float MoveVerticalSpeed = 1f;

    public float ScaleSpeed = 1f;


    void Start()
    {
        controllerInput = new ControllerInput(0, 0.19f);
        // get the transform of the main camera
        if (Camera.main != null)
        {
            m_Cam = Camera.main.transform;
        }

        m_Character = GetComponent<ThirdPersonCharacter>();
    }

    // Update is called once per frame
    void Update()
    {
        controllerInput.Update();
        if (!m_Jump)
        {
            m_Jump = controllerInput.GetButton(ControllerButton.A);
        }
    }


    private void FixedUpdate()
    {
        // read inputs
        float h = MoveHorizontalSpeed * controllerInput.GetAxisLeftThumbstickX();
        float v = MoveVerticalSpeed * controllerInput.GetAxisLeftThumbstickY();
        bool crouch = controllerInput.GetButton(ControllerButton.B);

        // calculate move direction to pass to character
        if (m_Cam != null)
        {
            // calculate camera relative direction to move:
            m_CamForward = Vector3.Scale(m_Cam.forward, new Vector3(1, 0, 1)).normalized;
            m_Move = v * m_CamForward + h * m_Cam.right;
        }


        // pass all parameters to the character control script
        m_Character.Move(m_Move, crouch, m_Jump);
        m_Jump = false;
    }
}

This code is a variation on the standard controller code. Now that it is attached, it will let you use a Bluetooth enabled Xbox One controller to move your character. Use the A button to jump. Use the B button to crouch.

You now have a first level and a player character you can move with a controller: pretty much all the necessary components for a platform game. If you deploy the project as is, however, you will find that there is a small problem. Your character falls through the floor.

This happens because, while the character appears as soon as the scene starts, it actually takes a bit of time to scan the room and create meshes for the floor. If the character shows up before those meshes are placed in the scene, he will simply fall through the floor and keep falling indefinitely because there are no meshes to catch him.

How ‘bout some spatial understanding

In order to avoid this, the app needs a bit of spatial smarts. It needs to wait until the spatial meshes are mostly completed before adding the character to the scene. It should also scan the room and find the floor so the character can be added gently rather than dropped into the room. The SpatialUnderstanding prefab will help you accomplish both of these requirements.

Add the Spatial Understanding prefab to your scene. It can be found in Assets -> HoloToolkit -> SpatialUnderstanding -> Prefabs.

Because the SpatialUnderstanding game object also draws a wireframe during scanning, you should disable the visual mesh used by the SpatialMapping game object by deselecting Draw Visual Mesh in its Spatial Mapping Manager script. To do this, select the SpatialMapping game object, find the Spatial Mapping Manager in the Inspector window and uncheck Draw Visual Mesh.

You now need to add some orchestration to the game to prevent the third person character from being added too soon. Select ThirdPersonController in your scene. Then go to the Inspector panel and click on Add Component -> New Script. Call your script OrchestrateGame. While this script could really be placed anywhere, attaching it to the ThirdPersonController will make it easier to manipulate your character’s properties.

Start by adding HideCharacter and ShowCharacter methods to the OrchestrateGame class. This allows you to make the character invisible until you are ready to add him to the game level (the room).


    private void ShowCharacter(Vector3 placement)
    {
        var ethanBody = GameObject.Find("EthanBody");
        ethanBody.GetComponent<SkinnedMeshRenderer>().enabled = true;
        m_Character.transform.position = placement;
        var rigidBody = GetComponent<Rigidbody>();
        rigidBody.angularVelocity = Vector3.zero;
        rigidBody.velocity = Vector3.zero;        
    }

    private void HideCharacter()
    {
        var ethanBody = GameObject.Find("EthanBody");
        ethanBody.GetComponent<SkinnedMeshRenderer>().enabled = false;
    }

When the game starts, you will initially hide the character from view. More importantly, you will hook into the SpatialUnderstanding singleton and handle its ScanStateChanged event. Once the scan is done, you will use spatial understanding to correctly place the character.


    private ThirdPersonCharacter m_Character;

    void Start()
    {
        m_Character = GetComponent<ThirdPersonCharacter>();
        SpatialUnderstanding.Instance.ScanStateChanged += Instance_ScanStateChanged;
        HideCharacter();
    }
    private void Instance_ScanStateChanged()
    {
        if ((SpatialUnderstanding.Instance.ScanState == SpatialUnderstanding.ScanStates.Done) &&
            SpatialUnderstanding.Instance.AllowSpatialUnderstanding)
        {
            PlaceCharacterInGame();
        }
    }

How do you decide when the scan is completed? You could set up a timer and wait for a predetermined length of time to pass. But this might provide inconsistent results. A better way is to take advantage of the spatial understanding functionality in the HoloToolkit.

Spatial understanding is constantly evaluating surfaces picked up by the spatial mapping component. You will set a threshold to decide when you have retrieved enough spatial information. Every time the Update method is called, you will evaluate whether the threshold has been met, as determined by the spatial understanding module. If it is, you call the RequestFinishScan method on SpatialUnderstanding to get it to finish scanning and set its ScanState to Done.


    private bool m_isInitialized;
    public float kMinAreaForComplete = 50.0f;
    public float kMinHorizAreaForComplete = 25.0f;
    public float kMinWallAreaForComplete = 10.0f;
    // Update is called once per frame
    void Update()
    {
        // check if enough of the room is scanned
        if (!m_isInitialized && DoesScanMeetMinBarForCompletion)
        {
            // let service know we're done scanning
            SpatialUnderstanding.Instance.RequestFinishScan();
            m_isInitialized = true;
        }
    }

    public bool DoesScanMeetMinBarForCompletion
    {
        get
        {
            // Only allow this when we are actually scanning
            if ((SpatialUnderstanding.Instance.ScanState != SpatialUnderstanding.ScanStates.Scanning) ||
                (!SpatialUnderstanding.Instance.AllowSpatialUnderstanding))
            {
                return false;
            }

            // Query the current playspace stats
            IntPtr statsPtr = SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceStatsPtr();
            if (SpatialUnderstandingDll.Imports.QueryPlayspaceStats(statsPtr) == 0)
            {
                return false;
            }
            SpatialUnderstandingDll.Imports.PlayspaceStats stats = SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceStats();

            // Check our preset requirements
            if ((stats.TotalSurfaceArea > kMinAreaForComplete) ||
                (stats.HorizSurfaceArea > kMinHorizAreaForComplete) ||
                (stats.WallSurfaceArea > kMinWallAreaForComplete))
            {
                return true;
            }
            return false;
        }
    }

Once spatial understanding has determined that enough of the room has been scanned to start the level, you can use spatial understanding one more time to determine where to place your protagonist. First, the PlaceCharacterInGame method, shown below, tries to determine the Y coordinate of the room floor. Next, the main camera object is used to determine the direction the HoloLens is facing in order to find a coordinate position two meters in front of the HoloLens. This position is combined with the Y coordinate of the floor in order to place the character gently on the ground in front of the player.


    private void PlaceCharacterInGame()
    {
        // use spatial understanding to find the floor
        SpatialUnderstandingDll.Imports.QueryPlayspaceAlignment(SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceAlignmentPtr());
        SpatialUnderstandingDll.Imports.PlayspaceAlignment alignment = SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceAlignment();

        // find the position 2 meters in front of the camera
        var inFrontOfCamera = Camera.main.transform.position + Camera.main.transform.forward * 2.0f;

        // place the character on the floor 2 meters ahead
        ShowCharacter(new Vector3(inFrontOfCamera.x, alignment.FloorYValue, inFrontOfCamera.z));

        // hide the surface meshes
        var customMesh = SpatialUnderstanding.Instance.GetComponent<SpatialUnderstandingCustomMesh>();
        customMesh.DrawProcessedMesh = false;
    }

You complete the PlaceCharacterInGame method by making the meshes invisible to the player. This reinforces the illusion that your protagonist is running into and jumping over objects in the real world. The last thing needed to finish this game, level design, is unfortunately too complex to cover in this post.

Because this platform game has been developed in mixed reality, however, you have an interesting choice to make as you design your level. You can do level design the traditional way, using 3D models. Alternatively, you can do it using real-world objects that the character must run between and jump over. The best approach may even involve mixing the two.

Conclusion

To paraphrase Shakespeare, all the world’s a stage and every room in it is a level. Mixed reality has the power to create new worlds for us—but it also has the power to make us look at the cultural artifacts and conventions we already have, like the traditional platform game, in entirely new ways. Where virtual reality is largely about escapism, the secret of mixed reality may simply be that it makes us appreciate the things we already have by giving us fresh eyes with which to look at them.

The post Getting Started with a Mixed Reality Platformer Using Microsoft HoloLens appeared first on Building Apps for Windows.

Happy 25th Birthday MFC!


February 26th marks the 25th anniversary for the Microsoft Foundation Classes (MFC). Join us in wishing MFC a big Happy Birthday!


MFC saw the light of day on February 26th 1992 and it has been a very large part of the Microsoft C++ legacy ever since. While Visual C++ 1.0 would only ship one year later (with MFC 2.0), in 1992 MFC 1.0 was laying the foundation as part of the Microsoft C/C++ 7.0 product. Here’s a snippet of that announcement that we dusted off from the Microsoft archives:

SANTA CLARA, Calif. — Feb.26, 1992
Microsoft Debuts C/C++ 7.0 Development System for Windows 3.1
High-Performance Object Technology Produces Smallest, Fastest Code for Windows 3.0, 3.1 Applications

“Microsoft C/C++ has been crafted with one goal in mind — to help developers build the best C/C++ applications possible for Microsoft Windows,” said Bill Gates, Microsoft chairman and CEO. “The combination of a great C++ compiler and the Microsoft Foundation Class framework gives programmers the benefits of object orientation for Windows with the production code quality they expect from Microsoft.”

[…]
C/C++ 7.0 provides a number of new object-oriented technologies for building Windows-based applications:

[…]
Microsoft Foundation Classes provide objects for Windows, with more than 60 C++ classes that abstract the functionality of the Windows Application Programming Interface (API). The entire Windows API is supported. There are classes for the Windows graphics system, GDI; Object Linking and Embedding (OLE) and menus. The framework allows easy migration from the procedural programming methodology of C and the Windows API to the object-oriented approach of C++. Developers can add object-oriented code while retaining the ability to call any Windows API function directly at any time; a programmer can take any existing C application for Windows and add new functionality without having to rewrite the application from scratch.

In addition, the foundation classes simplify Windows message processing and other details the programmers must otherwise implement manually. The foundation classes include extensive diagnostics. They have undergone rigorous tuning and optimization to yield very fast execution speeds and minimal memory requirements.

[…]
C++ source code is included for all foundation classes. More than 20,000 lines of sample code are provided in 18 significant Windows-based applications to demonstrate every aspect of the foundation classes and programming for Windows, including use of OLE.

Win32 APIs have been evolving with Windows, release after release. Through the years, MFC has stayed true to the principles outlined above by Bill Gates: to provide a production-quality, object-oriented way of doing Windows programming in C++. As Win32 development slowed in recent years and made room for more modern UI frameworks, MFC development slowed with it. Nevertheless, we’re thrilled to see so many developers being productive with MFC today.

The Microsoft C++ team is very proud of the MFC legacy and fully committed to having your MFC apps, old or new, continue to rock on any Windows desktop and in the Windows Store through the Desktop Bridge. Thank you to all of you who have shared ideas, bug reports and code with us over the years. A special thanks to all the Microsoft and BCGSoft team members, present or past, who through the years have contributed to the MFC library, Resource Editor, MFC Class Wizard and other MFC-related features in Visual Studio. It’s been a great journey and we look forward to our next MFC adventures!

That’s our story, what’s yours? To share your story about MFC and/or Visual C++, find us on twitter at @visualc and don’t forget to use hashtag #MyVSStory


The Microsoft C++ Team

Enterprise Ethereum Alliance




We are proud to announce our participation as a launch partner with the Enterprise Ethereum Alliance in addition to making the first reference implementations of Ethereum available in a public cloud.  Ethereum was the first blockchain supported in Azure and it is evolving to address the needs of enterprises globally.  Focusing on requirements like privacy, permissions and a pluggable architecture while retaining its public roots, Ethereum continues to widen the scope of what developers, businesses and consortiums can achieve.

While Azure and Project Bletchley are independent of any particular blockchain system, Ethereum and Enterprise Ethereum are supported by Azure middleware services like Cryptlets, Azure Active Directory for identity, data services via the Cortana Analytics Suite, and Key Vault for key management, along with operations, deployment and rich tooling.  A large partner community offering industry solutions based on Smart Contracts, Cryptlets and SaaS offerings provides a valuable consortium data tier, outlined in my previous blog post about Smart Contract architecture and Cryptlets.

You can now deploy your own implementation of this platform on Azure: Quorum: Enterprise Ethereum Alliance Reference Implementations

The CIO of eBay Talks About Racing Carl Lewis & What He Looks for When Promoting Managers


Today I go for a ride with Dan Morales, the CIO of eBay.

We had a great chat, but I did start things off with a bit of a curveball: as a guy with a really common name, there are 3,073 Dan Moraleses on LinkedIn, so I try to get to the bottom of which one Dan really is. It turns out he is not the guy preparing tax returns for celebrity pets, but he did race Michael Johnson and Carl Lewis in college.

Dan has some great advice about the qualities he looks for when he promotes IT managers to leadership roles, and he also talks about the steps his organization took to simultaneously go mobile and boost productivity. His vision is to make his workforce equally productive no matter what device is being used: laptop, tablet, or phone.

To learn more about how top CIOs stay secure + productive, check out this new report.

Next week, I wrap up this discussion with Dan by talking about the pros/cons of failing in a way that actually means you were successful (but is also still kind of failing). It makes sense in context, I promise.

You can also subscribe to these videos here, or watch past episodes here: aka.ms/LunchBreak.

Demo Tuesday // How to avoid storage network overload with Storage QoS


Welcome to our new Demo Tuesday series. Each week we will be highlighting a new product feature from the Hybrid Cloud Platform.

Every systems administrator who manages I/O-intensive workloads is acutely aware of what happens when storage controllers become overloaded: applications come to a screeching halt.

This is especially problematic for database administrators managing busy transactional workloads across multiple virtual machines. When storage devices come under load, you need to be able to utilize every available IOPS to maximize performance.

Windows Server 2016 with Storage Quality of Service (QoS) offers greater control over how you reserve IOPS for each virtual machine, enabling you to make more efficient use of storage bandwidth across virtual machines. Admins can set not only the maximum IOPS limit for each VM but also, in a feature unique to Windows Server 2016, the minimum IOPS that should be allocated for each VM. This level of fine-tuned control means IT can optimize the performance of all available storage resources. Take a look at this brief demo:

Storage QoS offers two important advantages: 1) it helps you to get more bang for your storage buck by better utilizing the storage you already own, and 2) it makes storage performance more reliable, even for workloads with uneven traffic across virtual machines.
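For admins who want to try this themselves, the workflow can be sketched with the Storage QoS cmdlets that ship with Windows Server 2016. The policy, VM, and disk names below are hypothetical, and the commands assume a Scale-Out File Server or Storage Spaces Direct deployment:

```powershell
# Create a policy that guarantees 500 IOPS and caps usage at 2,000 IOPS.
New-StorageQosPolicy -Name "SqlGold" -PolicyType Dedicated -MinimumIops 500 -MaximumIops 2000

# Assign the policy to every virtual disk attached to a busy SQL VM (hypothetical name).
$policy = Get-StorageQosPolicy -Name "SqlGold"
Get-VM -Name "SqlVM01" | Get-VMHardDiskDrive | Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId

# Verify that each flow is meeting its reservation.
Get-StorageQosFlow | Sort-Object -Property InitiatorIOPS -Descending | Select-Object -First 5
```

Note the policy types: a Dedicated policy applies the minimum and maximum to each assigned disk individually, while an Aggregated policy shares a single allocation across all disks assigned to it.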

Watch more of the Windows Server 2016 demo series and learn more about software-defined storage options in Windows Server 2016.

Office 365 news in February—new and improved intelligent services


Today’s post was written by Kirk Koenigsbauer, corporate vice president for the Office team.

This month, we released several improvements to our cloud-powered intelligence services. These services use machine learning and advanced algorithms to create magical experiences that save you time and help produce polished, gorgeous content. And they will just get better each month as we deliver continued innovation and further tune our intelligent recommendations as more people use them.

Jumpstart presentations with QuickStarter in PowerPoint

QuickStarter is now available in PowerPoint. Announced in September, QuickStarter reinvents how you create presentations by helping you conquer blank slides. Simply type in your topic and QuickStarter gets you going with a curated outline, recommendations on what categories to include, information to research further, and associated images tagged with Creative Commons licenses. Check out QuickStarter in action to see what it can do!


Jumpstart presentations with QuickStarter in PowerPoint.

Availability: QuickStarter is now available in PowerPoint on Windows desktops, for Office 365 subscribers in Office Insider Fast.

Writing assistance all in one place with Editor

Editor, your digital writing assistant in Word, is now even more helpful. The new Editor pane gives you additional information from its advanced spelling, grammar and writing style recommendations. It also makes it easy to scan your whole document. This experience replaces the Spelling & Grammar pane and incorporates inclusive design best practices to be accessible for the visually impaired. Learn more about Editor.


The new Editor pane helps you learn to improve your writing by providing even more information and context.

Availability: The new Editor pane is now available in Word on Windows desktops, for Office 365 customers in Office Insider Fast.

Remember commitments with Cortana’s help

Cortana now works with Outlook to remember things you said you’d do in email. For example, Cortana will automatically recognize if you promise to send your boss a report by the end of the week and then proactively remind you so you can follow through at just the right time. Learn more and get started in this Windows blog.

Availability: Cortana’s suggested reminders are now available in the U.S. on Windows 10, for customers using an Outlook.com email address or an Office 365 work or school account. Support for iOS, Android and other email services is coming soon.

Intuitive commands at your fingertips with Touch Bar support in Office for Mac

Touch Bar support is now available in Word, Excel and PowerPoint on Mac. As previously announced, the most common commands are intelligently placed at your fingertips based on what you’re doing in the document, spreadsheet or presentation. For example, one tap can put you into distraction-free Focus Mode in Word or an enhanced slideshow experience with thumbnails in PowerPoint. Touch Bar support in Outlook for Mac is coming soon.


Access the most common Office commands in the Touch Bar, based on what you’re doing in the document.

Availability: Touch Bar support is now available in Word, Excel and PowerPoint on Mac, for all Office 365 subscribers and Office for Mac 2016 customers. Touch Bar support is coming soon in Outlook on Mac.

New Office 365 capabilities help you proactively manage security and compliance risk

Earlier this month, we announced several new capabilities in Office 365 that help you manage risk and stay ahead of threats. We introduced a new security analytics tool—Office 365 Secure Score—which helps you understand your organization’s security configuration and actions you can take to enhance security and reduce risk. In addition, Office 365 Threat Intelligence helps you stay ahead of cyber threats by leveraging billions of data points from the Microsoft Intelligent Security Graph. It offers information about malware families inside and outside your organization and integrates seamlessly with other Office 365 security features, so you’ll be able to see analysis, malware frequency and security recommendations related to your business. We also introduced Office 365 Advanced Data Governance, which helps you find and retain important data while eliminating redundant, obsolete and trivial data that could cause risk if compromised. It does this by applying machine learning to intelligently deliver proactive policy recommendations; classify data based on automatic analysis of factors like the type of data, its age and the users who have interacted with it; and take action. Read this month’s security and compliance blog for more.

Availability: Office 365 Secure Score is now generally available to commercial customers. Commercial customers can contact their Microsoft account representative to sign up for the private preview of Office 365 Threat Intelligence, or register for the limited preview of Advanced Data Governance.

Learn more about what’s new for Office 365 subscribers this month at: Office 2016 | Office for Mac | Office Mobile for Windows | Office for iPhone and iPad | Office on Android. If you’re an Office 365 Home or Personal customer, be sure to sign up for Office Insider to be the first to use the latest and greatest in Office productivity. Commercial customers on both Current Channel and Deferred Channel can also get early access to a fully supported build through First Release. This site explains more about when you can expect to receive the features announced today.

—Kirk Koenigsbauer

The post Office 365 news in February—new and improved intelligent services appeared first on Office Blogs.


Homegrown Trailers hits the road with Windows 10


Growing up in the Pacific Northwest, I’ve been a lifelong adventurer who enjoys camping in the great outdoors with few amenities and the ability to disconnect from the urban world. When I had the first of my two daughters ten years ago, I knew things would change and they did. As the girls got older, I realized that the amount of things you have to bring to keep the kiddos comfortable can be overwhelming. I soon discovered that I was the only one who could deal with a lack of privacy, pitching tents and sleeping on the ground. I knew there had to be a better way to enjoy the outdoors with my family, and as a sustainability consultant, I was determined to find the greenest path. Four years ago, after an exhaustive search of the RV market, my daughter asked me, “Daddy, what if we build a treehouse on wheels?” It dawned on me that I needed to build a trailer that was capable of going off-grid, comfortable with some key amenities, and made with sustainable materials.

About a year ago, Eric Gertsman, co-founder and chief marketing officer and I launched Homegrown Trailers based in Kirkland, Wash., a social purpose corporation that produces sustainable, hand-crafted travel trailers.

As most small business owners who are starting out can relate, it’s all about finding the right technology fast so you can get your business off the ground. We ended up cobbling together software and hardware technologies from our personal lives or that seemed to meet our immediate needs. We definitely weren’t thinking big-picture of where we wanted to be and how technology could help get us there.

We started with a hodgepodge of technology. I had been using a Google Chromebook and Google’s G Suite of apps to collaborate on projects and edit documents in real-time while Eric and the rest of the team were using Microsoft devices and tools. While I found Google’s line of products to be simple, I soon realized that simplicity came with limited functionality, which impacted the business’ ability to achieve goals and grow. As our business picked up, we became more and more frustrated that we had to constantly transition documents back into Microsoft Word or Excel to effectively communicate and collaborate amongst our team and with partners. As co-founder and CEO, I really started to feel my productivity take a hit because I couldn’t crunch numbers or create the kind of graphs I needed in Google Sheets like I could with Microsoft Excel. I also became frustrated at the inability to effectively edit and create new documents when off-grid or on a slow Wi-Fi connection, and editing images or graphically heavy documents often crashed my Chromebook. And when it came to hardware, I found myself buying a new Chromebook at least every year due to the amount of use, which is not in line with my sustainability values.

With a fast-growing small business and a goal to produce a couple hundred trailers a year in the not too distant future, we quickly understood that technology is critical to our success and we needed a change. We needed a more productive, secure and robust set of software and hardware technology that could keep up with our operational and production demands. We recently turned to Microsoft and SADA Systems, a Microsoft National Solutions Provider, to help digitally transform each employee’s personal work experience and how we work with each other.


Homegrown Trailers team uses Windows 10 via their Surface Pro 4 device to collaborate and edit documents in real-time.

SADA Systems led us through a series of discovery sessions to discuss our business processes and how Microsoft technology can improve our day-to-day operations. The SADA team then completed our migration and deployment, and provided training services to ensure we optimized the vast features of the Microsoft cloud solutions. After having a mishmash of technologies for a few years, it has been refreshing to have one set of software and hardware technologies (Windows 10, Office 365, Dynamics 365 and Windows devices) that are intuitive to use and work seamlessly together. We appreciate the operating speed and how intuitively everything works together in Windows 10, especially the ease of collaboration and communication that comes with using its suite of tools. Windows Ink has been a unique addition that allows me to notate documents and engineering plans very quickly on my Surface Pro 4 when working with the production team. I can then send immediate plan updates to our engineers, designers, regulators and other stakeholders.

It has also been really encouraging to see our communication and collaboration grow over the last few months because of Microsoft products like SharePoint, Microsoft Planner and Skype for Business. Because we’re a small yet mighty team with limited financial resources, we need to be as efficient as possible so we can grow to meet market demand. At the same time, we need to watch our bottom line while continually pushing the envelope on innovation. With Microsoft Planner, we can now assign tasks to create the kind of workflow needed for production, sales, and marketing efforts. Specifically, it has enhanced our weekly team meetings by ensuring that we’re not duplicating tasks or getting ahead of ourselves by pushing forward on a project that is dependent on another project being complete. But, it has also helped us ensure the team is staying focused, making and sharing their progress as well as meeting the most important goals each day. Another great way our team is staying connected is through the power of shared calendars via Microsoft Outlook. Our previous system didn’t allow us to effectively see what each other were up to and when to best optimize our time as a team.

Homegrown Trailers uses Microsoft Planner to assign tasks to create the kind of workflow needed for production, sales and marketing efforts.

I’ve also noticed a huge difference in ease of access to materials we are all working on. We can now easily post documents in SharePoint and work collaboratively without the hassle of clogging our email inboxes with large attachments, dealing with the headache of version control or wasting valuable time converting files back and forth. Personally, I’ve seen my productivity improve because I can now access documents from anywhere on any device, which is big since I am always on the go working with my team and in meetings with customers and partners. We are in the midst of launching Microsoft Dynamics 365 for Sales to streamline our customer interactions and improve the hand-off from our sales to production teams. With all of these tools at our fingertips, we can now avoid redundancies in our work and no longer have to battle technology inefficiencies so we can focus on building trailers and getting them to our passionate customers.

My team actually just drove across the country with one of our trailers to the Daytona 500 in Daytona Beach, Florida to meet with other small business owners who are looking to digitally transform their business just like us. While this may not be the first place you would think to see a Homegrown Trailer, we were excited to share both our story and commitment to building a fully electric and sustainable travel trailer. With Microsoft on our side, we have seen an immediate improvement to our operations and were delighted to share our early learnings and commitment to a more sustainable future especially as the NASCAR community works to do the same.

I was able to join the team for two legs of the drive, and I have to add that I love how Skype for Business on our Surface Pro 4 devices has kept those of us on the road for the Daytona 500 connected to the team back home, no matter where we are. We have been able to push forward projects that would not have been possible without the technology, and I was even able to walk my production team through some troubleshooting via Skype while flying across the country.

Homegrown Trailers employees build sustainable, hand-crafted travel trailers in its Kirkland, Wash., facility.

Today, I am back in the office and my team is hitting the open road back to Washington. We are excited to have been able to take advantage of this opportunity, share our experiences with other entrepreneurs, and continue growing our business with Microsoft at our side so that many others can get out there and experience the world in a Homegrown Trailer.

You can follow us on Twitter at @TrailerGreen, on Facebook, on Instagram, on LinkedIn or learn more at www.homegrowntrailers.com.

The post Homegrown Trailers hits the road with Windows 10 appeared first on Windows For Your Business.

The week in .NET – On .NET with Eric Mellino, Happy Birthday from Scott Hunter, OzCode

Previous posts:

On .NET

In last week’s episode, Kendra interviewed Eric Mellino to talk about his CrazyCore game and game engine:

This week, Phillip Carter will be on the show to walk an F# beginner and experienced C# dev (me) through the Tour of F#. We’ll stream live on Channel 9. We’ll take questions on Gitter’s dotnet/home channel and on Twitter. Please use the #onnet tag. It’s OK to start sending us questions in advance if you can’t do it live during the shows.

Happy Birthday .NET!

A couple weeks ago, we got together with the Microsoft Alumni Network and threw a big .NET 15th birthday bash with former .NET team members & rock stars. We caught up with Scott Hunter, Director of Program Management for .NET, to share his excitement about the past, present, and future of .NET. Great times!

Tool of the week: OzCode

OzCode is a great Visual Studio debugging extension that makes debugging LINQ considerably easier. Drill down into expressions, analyze the data set at any spot in a fluent call chain, analyze complex Boolean expressions, predict exceptions, and much, much more.

Debugging LINQ with OzCode

User group meeting of the week: making toys in F# in NYC

The NYC F# .NET user group has a hackathon tonight at 6:30PM where you’ll explore toy making in F# with Steve Goguen.

Update to last week’s package of the week

In last week’s post, I showed NeinLinq, a way to use custom code with LINQ in a more general way than is possible out of the box. The next day, I got a message from Damien Guard about a better way to achieve the same thing, which he developed with David Fowler in 2009. Check out his blog post from back then.

.NET

ASP.NET

C#

F#

New F# Language Suggestions:

Check out F# Weekly for more great content from the F# community.

Xamarin

UWP

Azure

Data

Games

And this is it for this week!

Contribute to the week in .NET

As always, this weekly post couldn’t exist without community contributions, and I’d like to thank all those who sent links and tips. The F# section is provided by Phillip Carter, the gaming section by Stacey Haffner, the Xamarin section by Dan Rigby, and the UWP section by Michael Crump.

You can participate too. Did you write a great blog post, or just read one? Do you want everyone to know about an amazing new contribution or a useful library? Did you make or play a great game built on .NET?
We’d love to hear from you, and feature your contributions on future posts:

This week’s post (and future posts) also contains news I first read on The ASP.NET Community Standup, on Weekly Xamarin, on F# weekly, and on Chris Alcock’s The Morning Brew.

SC 2016 DPM UR2: Protect Microsoft 2016 workloads to Cloud using Azure Backup

System Center Data Protection Manager (SC DPM) is an enterprise backup solution providing app-consistent backups for workloads such as SQL Server, SharePoint, Exchange, and file servers, running in physical or virtualized environments. In addition, you can also back up your Hyper-V and VMware VMs (with SC 2012 R2 DPM UR11). These backups can be stored on disks for short-term retention, and on Azure for long-term retention and offsite-copy compliance needs. To back up to Azure, all you need to do is register on the Azure Portal, and download the MARS agent and the vault credential file. You can set it up on DPM in five minutes.

Further, SC 2016 DPM, with Modern Backup Storage, gives you the freedom to store your high-churn workload backups on faster storage such as SANs, and low-churn workload backups on slower storage such as JBODs. This considerably reduces your backup TCO, with up to 50% storage savings and faster backups. These savings are compounded by compression of the backups sent to Azure, which saves about 30% on average.
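The two savings figures above can be sketched with back-of-the-envelope arithmetic. The workload size below is hypothetical, purely for illustration:

```shell
# Illustrative arithmetic only: raw_gb is a made-up workload size.
raw_gb=1000
awk -v raw="$raw_gb" 'BEGIN {
  # ~50% storage savings on local disk with Modern Backup Storage
  printf "On-disk backup storage: %d GB (vs %d GB raw)\n", raw * 50 / 100, raw
  # ~30% average compression on backup data sent to Azure
  printf "Data sent to Azure:     %d GB (vs %d GB raw)\n", raw * 70 / 100, raw
}'
```

For 1,000 GB of backup data this works out to 500 GB on disk and 700 GB transferred to Azure; actual savings depend on workload churn and data compressibility.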

Key Features

Here are some key features in SC 2016 DPM UR2:

  1. Protect your 2016 Server Workloads: The much-awaited features are now here. You can begin protecting your SQL 2016, SharePoint 2016, and Exchange 2016 workloads with SC 2016 DPM. Further, you can also have your SC 2016 DPM DB stored on SQL 2016. To upgrade the SQL version of the DPM DB, just follow the simple steps here, and you are good to go.
  2. Auto protection to cloud: If auto protection is enabled for a SQL instance, SC 2016 DPM UR2 and SC 2012 R2 DPM UR12 now auto-protect your SQL databases to Azure. This means that DPM detects any new SQL DBs on your protected server and creates an offsite copy for them, to meet offsite-copy and compliance needs.
  3. Security Features for your Backups: SC 2016 DPM UR2 and SC 2012 R2 DPM UR12 deliver enhanced security for your cloud backups. These features provide an additional layer of protection, and the ability to recover in case of attacks. They are built on three pillars of security: prevention, alerting, and recovery.

With this, SC DPM asks for a security PIN whenever a critical change, such as modification of the passphrase, is triggered. This prevents attackers from re-encrypting the data and backups: only users with valid credentials can perform this operation, safeguarding your backups and, more importantly, your access to them. You are alerted when the operation is complete. In case your cloud backups are deleted, they are retained for a longer period and can be restored. Further, strict enforcement of a minimum retention range helps ensure that you can always restore your data from multiple recovery points.

Related links and additional content

Get SC 2016 DPM up and running in 10 minutes using the Evaluation VHD, and begin backing up to Azure with a free Azure trial subscription.

Deprecating SHA1 Certificates in System Center Operations Manager for UNIX/Linux Monitoring

Communication between the System Center Operations Manager Management Server and the UNIX/Linux agents is secured with TLS/SSL. UNIX and Linux agents employ server authentication certificates (i.e., “agent certificates”) for the TLS/SSL channel, and these certificates are signed by an Operations Manager Management Server’s “signing certificate.” As of System Center 2016 RTM, both agent certificates and signing certificates are generated with the sha1WithRSAEncryption signing algorithm. With System Center 2012 R2 Operations Manager UR12 and System Center 2016 Operations Manager UR2, use of SHA1 certificates is deprecated in favor of SHA256 certificates by default. Customers can now update and re-sign the certificates on currently deployed agents by following the procedure below.
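Outside of the console, openssl can show a certificate's signature algorithm directly, which is the property these update rollups change. The snippet below generates a throwaway self-signed certificate as a stand-in for an agent certificate; the file paths and CN are purely illustrative:

```shell
# Create a throwaway SHA-256-signed certificate standing in for an agent cert.
openssl req -x509 -newkey rsa:2048 -nodes -sha256 -days 1 \
  -subj "/CN=linux-agent.example.com" \
  -keyout /tmp/agent.key -out /tmp/agent.pem 2>/dev/null

# Inspect the signing algorithm; a pre-update agent certificate would report
# sha1WithRSAEncryption here instead of sha256WithRSAEncryption.
openssl x509 -in /tmp/agent.pem -noout -text | grep "Signature Algorithm" | head -n 1
```

The same `openssl x509` inspection can be pointed at a certificate exported from an agent to confirm whether it still needs the update.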

  1. Install SCOM 2012 R2 UR12 – https://support.microsoft.com/en-us/help/3209587/system-center-2012-r2-om-ur12 (or) SCOM 2016 UR2 – https://support.microsoft.com/en-us/help/3209591/update-rollup-2-for-system-center-2016-operations-manager
  2. Import the UNIX/Linux Management packs for SCOM 2012 R2/SCOM 2016 UR2 – https://www.microsoft.com/en-in/download/details.aspx?id=29696
  3. Certificates can be updated from SHA1 to SHA256 in one of the following ways:

Option 1:

Use the PowerShell script UpdateXplatCertificates.ps1. When run without any parameters, it updates the certificates for all agents:

.\UpdateXplatCertificates.ps1

This script can be downloaded from here.

Option 2:

To update the certificates for specific agents, use the following command:

.\UpdateXplatCertificates.ps1 -AgentsDisplayName "<agent1>","<agent2>"

Option 3:

Certificates can be updated through the SCOM Console:

Console –> Monitoring –> UNIX/Linux Computers –> select the server.
On the right task pane, under UNIX/Linux Computer Tasks, there are two tasks that can be performed:

  1. Verify Certificate Signature – This task verifies the signature algorithm of the agent’s signed certificate, which is helpful in identifying SHA1 certificates that require an update. Click Verify Certificate Signature to run the task and view the results.

  2. UNIX/Linux Update Certificate Task – This task updates the certificate from SHA1 to SHA256. Select the server whose certificate you wish to update and click UNIX/Linux Update Certificate Task in the task pane.

Please note:

 Existing certificates will not be invalidated or deleted. Once you have updated the certificates for all monitored servers, the old certificates should be deleted manually.

 Once SCOM 2012 R2 UR12 or SCOM 2016 UR2 is installed, SHA256 certificates will be used by default for newly discovered servers.

 You need to update certificates in the same way for high-availability configurations as well.

Azure Stack TP3 Delivers Hybrid Application Innovation and Introduces Pay-as-you-Use Pricing Model

Building innovative applications on cloud technologies is critical for organizations to accelerate growth and create differentiated customer experiences. Applications leveraging cloud technologies with pay-as-you-use pricing are now standard. Our goal is to ensure that organizations choosing hybrid cloud environments have this same flexibility and innovation capability to match their business objectives and application designs. This is why we are extending Azure technologies on-premises with Azure Stack, and today we are announcing several updates:

  • TP3 available for download: Technical Preview 3 (TP3) is available for download today, with new features that enable more modern application capabilities, operation in locations without connections to Azure, and infrastructure and security enhancements.
  • Packaging and pricing model: Azure Stack brings the cloud economic model on-premises with pay-as-you-use pricing.
  • Roadmap Update: Shortly after TP3, Azure Functions will be available to run on TP3, followed by Blockchain, Cloud Foundry, and Mesos templates. Continuous innovation will be delivered to Azure Stack up to general availability and beyond. TP3 is the final planned major Technical Preview before Azure Stack integrated systems will be available for order in mid-CY17.

Extending Azure on-premises

Azure Stack enables three unique hybrid cloud scenarios for organizations looking to build new apps and/or renovate existing apps across cloud and on-premises environments:

  • Consistent hybrid application development: Organizations investing in people, processes, and applications can do so knowing that those investments transfer between Azure and Azure Stack. Individuals looking to develop skills can take those skills to any organization using Azure. Consistency between Azure and Azure Stack means organizations can draw from a worldwide pool of talent that can be productive on day one, easily moving from one project, team, DevOps process, or organization to another. The APIs, Portal, PowerShell cmdlets, and Visual Studio experiences are all the same.
  • Azure services available on-premises: Infrastructure and Platform services fuel the next generation of application innovation. Delivering Azure IaaS and PaaS services on-premises empowers organizations to adopt hybrid cloud computing based on their business and technical requirements. They have the flexibility to choose the right combination of public, service provider, and on-premises deployment models. If they decide an app should be deployed in another location, they can easily move it without any modifications.
  • Purpose-built systems for operational excellence: To help organizations focus on work that drives their business, Azure Stack is delivered through integrated systems that are designed to continuously incorporate Azure innovation in a predictable, non-disruptive manner.

Hybrid use cases for Azure and Azure Stack

As we talk to customers about their cloud strategy, we hear that hybrid will be their steady-state operating model, and they are looking to augment that strategy with Azure Stack in a few key scenarios:

  1. Edge and disconnected solutions: Address latency and connectivity requirements by processing data locally in Azure Stack and then aggregating in Azure for further analytics, with common application logic across both. 
  2. Modern applications across cloud and on-premises: Apply Azure web & mobile services, containers, serverless, and microservice architectures to update and extend legacy applications with Azure Stack, while using a consistent DevOps process across on-premises and cloud.
  3. Cloud applications that meet every regulation: Develop and deploy applications in Azure, with full flexibility to deploy on-premises with Azure Stack to meet your regulatory or policy requirements, with no code changes needed.

Customers who have factory floor automation, remote use needs like cruise ships and mines, or requirements for isolation, like government systems, can all adopt modern designs, developing in the cloud and deploying in their locations. 

What’s new in Azure Stack TP3

With Azure Stack TP3, we’ve worked with customers to improve the product through numerous bug fixes, updates, and deployment reliability & compatibility improvements from TP2. With Azure Stack TP3 customers can:

  • Deploy with ADFS for disconnected scenarios
  • Start using Azure Virtual Machine Scale Sets for scale out workloads
  • Syndicate content from the Azure Marketplace to make available in Azure Stack
  • Use Azure D-Series VM sizes
  • Deploy and create templates with Temp Disks that are consistent with Azure
  • Take comfort in the enhanced security of an isolated administrator portal
  • Take advantage of improvements to IaaS and PaaS functionality
  • Use enhanced infrastructure management functionality, such as improved alerting

Roadmap Update

As part of our continuous innovation model, we will be adding Azure Functions, VM Extension syndication, and multi-tenancy shortly after TP3. This will be followed by new workloads such as Blockchain, Cloud Foundry, and Mesos templates. We will continue to refresh TP3 until we reach GA in mid-CY17.

In mid-CY17, the Proof of Concept (POC) deployment will be renamed to the Microsoft Azure Stack Development Kit. This single server dev/test tool enables customers to prototype and validate hybrid applications. It is a key piece of the continuous innovation model that Azure Stack will use to bring new functionality from Azure quickly to customers. It provides a way for new updates to be distributed early to customers so that they can experiment, learn and provide feedback.  

TP3 is our final planned major Technical Preview before GA. The Azure Stack Development Kit will be released as GA first; at the same time, we will release the software to our hardware partners so that they can finish the last mile of co-engineering work required to deliver multi-server Azure Stack integrated systems in mid-CY17.

After GA, we will continuously deliver additional capabilities through frequent updates. The first round of updates after GA are focused on two areas: 1) enhanced application modernization scenarios and 2) enhanced system management and scale. These updates will continue to expand customer choice of IaaS and PaaS technologies when developing applications, as well as improve manageability and grow the footprint of Azure Stack to accommodate growing portfolios of applications.

Extending cloud economics to on-premises with pay-as-you-use pricing

Azure Stack brings the cloud economic model on-premises, with pay-as-you-use pricing. As with Azure, there are no upfront licensing fees for using Azure services in Azure Stack, and customers only pay when they use the services. Services are transacted in the same way as they are in Azure, with the same invoices and subscriptions. Services will typically be metered on the same units as Azure, but prices will be lower, since customers operate their own hardware and facilities. For scenarios where customers are unable to have their metering information sent to Azure, we will also offer a fixed-price “capacity model” based on the number of cores in the system.
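To make the capacity model concrete, here is a toy calculation; the per-core rate below is entirely made up, since actual pricing comes from Microsoft and hardware partner account representatives:

```shell
# Hypothetical fixed-price capacity model: cost scales with core count.
cores=16                 # cores in the Azure Stack system
rate_per_core=10         # made-up monthly rate per core (not a real price)
awk -v c="$cores" -v r="$rate_per_core" \
  'BEGIN { printf "Monthly capacity-model cost: %d units\n", c * r }'
```

At the assumed rate, a 16-core system comes to 160 units per month; the point is only that the capacity model bills on cores rather than on metered service usage.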

Customers will acquire Azure Stack hardware from our hardware partners, Dell EMC, HPE, Lenovo and (later in the year) Cisco. We are excited to work with our hardware partners to provide a flexible range of buying options, including pay-as-you-go, for the hardware that underpins the integrated systems.

Customers can reach out to their Microsoft and hardware partner account representatives for detailed pricing information.

Final thoughts and next steps

Every company in every industry around the world is transforming from an organization that simply uses digital technology into a digital organization. We are dedicated to helping organizations grow by creating continually evolving products for their customers.

Azure and the Azure Stack integrated systems enable businesses to focus on investing energy and talent on turning their application portfolio into a strategic differentiator for their business. This approach enables customer choice and flexibility of deploying and operating their application where it best meets their business needs. IT can deliver far greater value by empowering development teams with self-service provisioning and cloud services while partnering with them to establish DevOps workflows that meet business policies and requirements.

Learn more about Azure Stack and download Azure Stack TP3.

Jeffrey Snover
Azure Infrastructure and Management Technical Fellow
Follow me on Twitter at @jsnover
