Channel: TechNet Technology News

Three ways to better secure your hybrid environment


As more organizations adopt a hybrid cloud model for IT, it's no surprise they're encountering new security challenges. With more surface area to cover, more mission-critical assets to protect, and more sophisticated threats to defend against, security issues become increasingly complex.

And with the average cost of a data breach to a single company now $3.8 million and rising, it's easy to understand why security remains top of mind.

To help, we've created a webinar that provides suggestions on addressing these new security challenges in a hybrid cloud environment. The webinar also offers a high-level overview of the changing role IT operations management plays in security.

There are essentially three areas where hybrid cloud management and security can make a real difference for your organization.

1. Bring IT and security operations together

Often, IT operations teams work with one set of tools and procedures while security teams use an entirely different set. This lack of integration between the systems that detect threats and the systems that respond to them helps explain why it now takes organizations an average of 146 days to discover a data breach.

Bringing IT and security together can have a profound impact on your efficiency in fighting security threats. For example, IT may be investigating a performance issue that, at its root, really turns out to be a security issue like a brute force attack. With an integrated approach, you can quickly pass off that information and take action.

2. Ensure good security hygiene

Once you have your team in place, it's time to get your data in line. With your data in one place, you can more easily tackle the basics that sometimes get overlooked: identifying systems with missing or outdated security updates and incomplete configurations, and taking a hard look at anomalous network traffic and user behavior.


3. Enable rapid threat detection and response

The advantage of having your teams integrated and your data in one place is that you can become more efficient in how you handle security events. You can improve your threat detection capabilities and reduce the amount of time it takes to investigate and recover from attacks. With the right tools, you can quickly search data, see a map of suspicious traffic, understand a single threat, or get a comprehensive picture of your entire system.

Watch Three Ways to Better Secure your Hybrid Cloud Environment

We cover these points in greater detail and a whole lot more. Learn about all the steps you can take to get better visibility of your security posture and how Operations Management Suite can help.

Watch the webinar


CES 2017: MSI launches VR-ready gaming PCs


Brand-new VR-ready gaming laptops, gaming motherboards, ultimate gaming graphics cards equipped with TWIN FROZR VI, gaming desktop PCs and Gaming Gear peripherals will bring gamers all over the world the latest and greatest technologies. With Windows 10, these devices have your personal digital assistant, Cortana, built in, are equipped with the Xbox app and DirectX 12, and can take advantage of Xbox Play Anywhere.

Let’s take a closer look at what MSI announced today:

MSI gaming laptops with performance, resolution and multimedia


Thanks to the new Intel 7th generation CPUs and NVIDIA GeForce GTX 1050 Ti & GTX 1050 gaming graphics cards, the gap between laptops and desktops narrows further, delivering a faster and smoother VR experience to all gamers. All of MSI's gaming notebooks equipped with the latest NVIDIA Pascal graphics cards are VR-ready, and the Intel 7th generation CPU will be added to all of MSI's gaming product lines, allowing VR users to run their VR applications even faster and smoother.

MSI also enhances the audio experience with ESS SABRE HiFi technology. Now, MSI GT83VR, GT73VR, GS73VR, GS63VR and GS43VR gaming laptops are ready to support the finest audio quality through SABRE HiFi DAC to your high-end headsets.

Nahimic VR sound enhancement in MSI Gaming

Now, with Nahimic VR with 7.1 sound provided exclusively on MSI gaming notebooks (on selected models), you can enjoy VR like never before.

The Nahimic 2+, which will be available for all new gaming notebooks announced at CES, will support more features and higher compatibility for VR games. It has a refined profile interface and, together with Nahimic VR by Nahimic 2+ technology, will be optimized for HTC VIVE kits in early 2017.

MSI GAMING Motherboards – Personalize your gaming rig with RGB Mystic Light


The next-generation MSI GAMING motherboards are packed with unique features such as M.2 Shield to protect against thermal throttling on super-fast M.2 SSDs and VR Boost to deliver a smooth VR experience with lower latency and less chance of motion sickness.

The Z270 GAMING M7 gaming motherboard is an incredibly versatile and complete foundation for a high-end gaming system. Features such as Audio Boost 4 PRO with Nahimic 2 provide gamers the purest sound experience and a competitive edge on the battlefield. All the heatsinks and covers on the Z270 GAMING M7 motherboard feature RGB Mystic Light, which lets you change the colors and effects with a single click of the mouse.

The Z270 GAMING PRO CARBON motherboard is sure to offer gamers a true personalized experience with RGB Mystic Light. Using the Gaming App, gamers can easily control the LEDs by smartphone. Mystic Light Sync ensures any additional LEDs on cases, keyboard or other accessories synchronize colors and effects.

The Z270 TOMAHAWK gaming motherboard is built to ensure perfect performance during long gaming sessions with features such as twin Turbo M.2, GAMING LAN and Audio Boost.

Mighty Imposing Gaming Desktop PCs – Aegis, Nightblade and Trident Series


Unmatched in performance and extraordinary in design, the Aegis Series are at the top of MSI’s gaming desktop line-up. Built on the legacy of the first Nightblade gaming desktop series, the new Nightblades feature both an updated exterior and interior design. The Trident series has made its mark as the world’s smallest VR-ready PC to date. Now upgraded with the new Intel 7th generation CPU, Trident 3 is the pinnacle of small form factor, yet high-performance PC gaming.

MSI GAMING Graphics Card Series

Core Frozr is the best choice for gamers looking for a complete MSI-style setup with top-notch acoustic performance. For gamers looking to gain more graphics power while staying with their current laptop, MSI GUS (Graphics Upgrade Solution) is the answer, elevating graphics performance to the highest level. Using a Thunderbolt interface, it supports both the latest-generation NVIDIA GeForce and AMD Radeon graphics cards.

Immerse GH70 GAMING Headset, Clutch GM70 GAMING Mouse, Vigor GK80 and GK70 GAMING Keyboards

Recognizing the importance of true, detailed Hi-Fi sound reproduction in games, the upcoming Immerse GH70 GAMING Headset features certified Hi-Res drivers with Virtual 7.1 Surround sound and color effects through MSI Mystic Light Sync. The Clutch GM70 GAMING mouse, with an incredibly precise 18000 DPI optical sensor and full RGB Mystic Light, can be used both wired and wirelessly by simply disconnecting the cable for ultimate comfort. Finally, the Vigor GK80 and GK70 GAMING keyboards are two new high-end mechanical gaming keyboards featuring Cherry MX switches, RGB Mystic Light and a multitude of hotkeys, designed to stand out from the crowd and offer comfort and all the tools a true gamer needs.

Pricing and availability will be announced at a later date.

MSI continues to push the limits of what’s possible to offer PC gamers. To learn more about today’s news, visit us.msi.com.

*Cortana available in select markets

The post CES 2017: MSI launches VR-ready gaming PCs appeared first on Windows Experience Blog.

CES 2017: Samsung unveils new gaming PC — the Samsung Notebook Odyssey



Ahead of CES 2017 this week, Samsung unveiled its new portable gaming PC powered by Windows 10, the Samsung Notebook Odyssey, featuring high performance and a dynamic display. Available in 17.3-inch and 15.6-inch models, the Samsung Notebook Odyssey packs power into a beautiful design with premium features for a premium gaming experience. Together with Windows 10, the Samsung Notebook Odyssey offers gamers a great gaming experience with features like the Xbox app, Xbox Play Anywhere and the ability to stream Xbox One games to the device. Samsung is also showcasing its recently updated Samsung Notebook 9 at CES this week. The Notebook 9 is equipped with the latest 7th Generation Intel Core i7 processor, as well as faster storage and memory.

Samsung Notebook Odyssey Product Specifications

[Image: Samsung Notebook Odyssey product specifications]

Samsung Notebook Odyssey – high performance that packs a punch


The Samsung Notebook Odyssey includes exclusive features built specifically for intense and casual gamers alike. Advanced technology such as the HexaFlow Vent, a cooling and ventilation system that helps the device stay cool at all times, allows gamers to play for long periods without worrying about the device overheating and shutting down unexpectedly. The device is also built for maximum performance and endless play: gamers can open the HexaFlow Vent, located on the bottom panel, to upgrade the storage and memory.

For optimal gameplay, the Samsung Notebook Odyssey is equipped with an intelligent and robust processor. Powered by a 7th Generation Intel Core i7 processor (quad-core, 45W), both models of the Samsung Notebook Odyssey offer lightning-fast performance with premium graphics technologies.

New advanced display for high-quality vivid images

In addition to its high-performing engine, the Samsung Notebook Odyssey 15-inch offers a beautiful viewing experience, including a backlight that reaches 280 nits of brightness for crystal-clear images. Its mesmerizing screen also includes an anti-glare surface treatment to minimize reflection, allowing gamers to focus on their next move without distractions.

The Samsung Notebook Odyssey transports gamers into faraway lands and exciting scenes with life-like color contrast and wide-view angle display. It also offers high-quality vibrant images to provide an immersive and dynamic cinema-quality HDR video experience.

Innovative design and function with enhanced usability

Without compromising on function, the Samsung Notebook Odyssey features an innovative design that's both functional and refined, providing users with optimal usability. The Samsung Notebook Odyssey is easily portable, allowing gamers to take their games with them on the go.

Beyond its sleek design, the Samsung Notebook Odyssey's keyboard includes advanced features, including ergonomically curved keycaps (0.5mm volcano keycaps on the 17.3-inch model, 0.3mm crater keycaps on the 15.6-inch model) and backlit WASD keys (available on the 15.6-inch model only) to provide users with optimal interaction. For personalization and easy access to crucial keys, users can also choose the backlight color of individual keycaps on the Samsung Notebook Odyssey 17.3-inch.

More details on this new device will be unveiled later this spring.

Updates to the Samsung Notebook 9 15-inch

Samsung recently announced updates to the Samsung Notebook 9 15-inch that will be available in the US this year, and this week it's on display at Samsung's booth, #15006, in the Las Vegas Convention Center.


 

The 15-inch Samsung Notebook 9 combines powerful performance and the best viewing experience in a sleek and light design. It's powered by Windows 10 and offers great features like Cortana, your personal digital assistant; Microsoft Edge for faster, more secure browsing; and comprehensive security with features like Windows Hello. Whether you're at home or on the road, the Notebook 9 makes the perfect companion for any situation. The Notebook 9 15-inch is equipped with the latest 7th Generation Intel Core i7 processor, features a vibrant full HD display, has a built-in fingerprint reader that enables password-free sign-in with Windows Hello, and is packed with a battery that can last as long as 15 hours on a single charge (based on data from MobileMark 14). This device starts at $1,399.99 — for more information, visit news.samsung.com.

Other features include:

  • Powerful graphics: equipped with an NVIDIA 940MX graphics card and a 2.0 GPU boost, the Samsung Notebook 9 provides a powerful graphics experience.
  • Ultra-light and portable at 2.73 lbs: the Samsung Notebook 9 15-inch is effortless to carry and easy to travel with. It's the perfect combination of performance and portability.
  • Vibrant full-HD display: the Samsung Notebook 9 was created with a stunningly immersive full-HD display with bright, vivid and accurate colors. Featuring Samsung RealView Display, the Samsung Notebook 9 elevates the viewing experience to new heights with support for a high level of brightness, true-to-life colors and a wide viewing angle. Also, Outdoor Mode instantly boosts the clarity of the display with a quick shortcut command when working in direct sunlight.
  • Security is always top of mind, which is why the Samsung Notebook 9 includes a built-in fingerprint sensor as well as support for Windows Hello, which enables secure sign-in with a single touch.
  • USB Type-C: the latest USB Type-C port enables charging of the Samsung Notebook 9 through a cell-phone charger, and enables quick data transfer and connectivity to displays and other devices.

Samsung’s news today is nothing short of exciting. It’s great to see our partners like Samsung create hardware that offers customers premium Windows 10 experiences. All of their new Windows 10 devices will be on display at their booth #15006 in the Las Vegas Convention Center. To learn more about Samsung’s news today, head over to news.samsung.com.

The post CES 2017: Samsung unveils new gaming PC — the Samsung Notebook Odyssey appeared first on Windows Experience Blog.

Deprecation of the Team Rooms in Team Services and TFS


Modern development teams heavily depend on collaboration. People want (and need) a place to monitor activity (notifications) and talk about it (chat). A few years back, we recognized this trend and set out to build the Team Room to support these scenarios. Since that time, we have seen more solutions to collaborate emerge in the market. Most notably, the rise of Slack. And more recently, the announcement of Microsoft Teams.

With so many good solutions available that integrate well with TFS and Team Services, we have made a decision to deprecate our Team Room feature from both TFS and Team Services.

Timeline for Deprecation

Please find below the timeline for deprecation for both TFS and Team Services.

[Image: deprecation timeline announcement banner]

Team Services

If you are working in Team Services, you will see a new yellow banner appear in early January that communicates our plan. Later this year, we plan to turn off the Team Room feature completely.

TFS

If you are using TFS installed on-premises, the announcement banner will appear when you install TFS 2017 Update 1. The Team Room will be removed from the product with the next major version of TFS.

Alternatives

The Team Room serves both as a notification hub and as a chat tool. TFS and Team Services already integrate with many other collaboration products, including Microsoft Teams, Slack, HipChat, Campfire and Flowdock. You can also use Zapier to create your own integrations, or get very granular control over the notifications that show up.
Which solution is best for you depends mostly on whether one of these tools is already in use in your organization, and on personal preference.

Another alternative is to install the Activity Feed by Dave Smits. It allows you to add a widget to the team’s dashboard to show the activity in the product.

If you have any questions or feedback, feel free to comment on this blog post or send email to teamroom_feedback@microsoft.com.

Getting Started with Always Encrypted using PowerShell


In the previous articles from the Always Encrypted blog series, we demonstrated how to configure Always Encrypted using SQL Server Management Studio. In this article, we will show you how to configure Always Encrypted from the command line, using PowerShell.

Prerequisites

To try the examples in this article, you need:

  • A database, named Clinic, hosted in SQL Server 2016 or in Azure SQL Database. The database should contain the Patients table with the following schema. To make things more interesting, make sure the table contains some data (a sample insert appears after this list).
    CREATE TABLE [dbo].[Patients](
     [PatientId] [int] IDENTITY(1,1), 
     [SSN] [char](11) NOT NULL,
     [FirstName] [nvarchar](50) NULL,
     [LastName] [nvarchar](50) NULL, 
     [MiddleName] [nvarchar](50) NULL,
     [StreetAddress] [nvarchar](50) NULL,
     [City] [nvarchar](50) NULL,
     [ZipCode] [char](5) NULL,
     [State] [char](2) NULL,
     [BirthDate] [date] NOT NULL,
     PRIMARY KEY CLUSTERED ([PatientId] ASC) ON [PRIMARY] )
    GO
  • The SqlServer PowerShell module, which you can obtain by downloading and installing the latest version of SQL Server Management Studio.
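
If the Patients table is empty, here is one way to seed it with a test row (a minimal sketch; the server name matches the examples below, and the patient values are made up):

    Invoke-Sqlcmd -ServerInstance "myserver" -Database "Clinic" -Query "INSERT INTO dbo.Patients (SSN, FirstName, LastName, BirthDate) VALUES ('795-73-9838', N'Catherine', N'Abel', '1996-09-10')"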

Step 1: Configure a Column Master Key

In this step, we will create a column master key and a metadata object describing the column master key in the database.

  1. Open a PowerShell window.
    Note: In a production environment, you should always run tools (such as PowerShell) that provision and use Always Encrypted keys on a machine different from the machine hosting your database. Remember, the primary purpose of Always Encrypted is to protect your data in case the environment hosting your database is compromised. If your keys leak to the machine hosting the database, an attacker can obtain them and the benefit of Always Encrypted will be defeated.
  2. Create a column master key. In this example, we will use a certificate, stored in the Current User certificate store, as a column master key.
    $cert = New-SelfSignedCertificate -Subject "AlwaysEncryptedCert" -CertStoreLocation Cert:CurrentUser\My -KeyExportPolicy Exportable -Type DocumentEncryptionCert -KeyUsage DataEncipherment -KeySpec KeyExchange
  3. Import the SqlServer PowerShell module.
    Import-Module "SqlServer"
  4. Connect to your database. There are multiple ways to connect to the database using the SqlServer module. Here, we will use the most universal method, using SMO, which works for both SQL Server and Azure SQL Database.
    $serverName = "myserver"
    $databaseName = "Clinic"
    $connStr = "Server = " + $serverName + "; Database = " + $databaseName + "; Integrated Security = True"
    $connection = New-Object Microsoft.SqlServer.Management.Common.ServerConnection
    $connection.ConnectionString = $connStr
    $connection.Connect()
    $server = New-Object Microsoft.SqlServer.Management.Smo.Server($connection)
    $database = $server.Databases[$databaseName]
  5. Create a SqlColumnMasterKeySettings object that contains information about the location of your column master key. SqlColumnMasterKeySettings is an object that exists in memory (in PowerShell).
    $cmkSettings = New-SqlCertificateStoreColumnMasterKeySettings -CertificateStoreLocation "CurrentUser" -Thumbprint $cert.Thumbprint
  6. Create a column master key metadata object, describing your column master key in the database. Notice that we are passing the name of the column master key object, the database object and the SqlColumnMasterKeySettings object created above to the New-SqlColumnMasterKey cmdlet.
    $cmkName = "CMK1"
    $cmk = New-SqlColumnMasterKey -Name $cmkName -InputObject $database -ColumnMasterKeySettings $cmkSettings

    Note that the above cmdlet simply executes the CREATE COLUMN MASTER KEY Transact-SQL statement against the target database.

  7. Verify the properties of the column master key metadata object you created.
    Write-Host $cmk.Name
    Write-Host $cmk.KeyStoreProviderName 
    Write-Host $cmk.KeyPath

    The output from the above commands should look like this.

    
    CMK1
    MSSQL_CERTIFICATE_STORE
    CurrentUser/my/228D7A09B1632899518FF5034D38000197F607A2

    Note that the column master key metadata object you created in the database does not contain the actual certificate (the column master key) – it only describes the location of the certificate.

Step 2: Configure a Column Encryption Key

In this step, we will create a column encryption key and a metadata object describing the column encryption key in the database.

  1. Create a column encryption key and its metadata in the database.
    $cekName = "CEK1"
    $cek = New-SqlColumnEncryptionKey -Name $cekName -InputObject $database -ColumnMasterKey $cmkName

    The above executes a fairly complex workflow:

    • Generates a column encryption key (in memory of PowerShell), which is a 256-bit random number.
    • Retrieves the metadata, describing the specified column master key ($cmkName), from the database.
    • Encrypts the generated column encryption key with the column master key (which is a certificate, generated in Step 1 and stored on the machine, where PowerShell is running).
    • Creates a metadata object, describing the column encryption key in the database. To achieve that, the cmdlet executes the CREATE COLUMN ENCRYPTION KEY Transact-SQL statement against the target database.
  2. Verify the properties of the column encryption key metadata object you created in the database.
    Write-Host $cek.Name
    Write-Host $cek.ColumnEncryptionKeyValues.Length
    Write-Host $cek.ColumnEncryptionKeyValues[0].ColumnMasterKeyName 
    Write-Host $cek.ColumnEncryptionKeyValues[0].EncryptedValue 

    The output from the above commands should look like this.

    CEK1
    1
    CMK1
    1 110 0 0 1 99 0 117 0 114 0 114 0 101 0 110 0 116 0 117 0 115 0 101 0 114 0 47 0 109 0 121 0 47 ...

    Note that the ColumnEncryptionKeyValues property is an array of objects, each of which contains an encrypted value of the column encryption key protected with a given column master key. Normally (and this is the case above), the column encryption key has just one encrypted value. However, during a column master key rotation, it can have up to two values. An encrypted value is a byte array.

    Also note that, as in the case of the column master key metadata, the column encryption key metadata does not contain the actual key in plaintext – only its encrypted value and other information describing the key and its encrypted value.
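
If you later need to rotate the column master key, the SqlServer module covers that as well. A minimal sketch (assuming a second column master key, CMK2, has already been created the same way as CMK1 in Step 1):

    # Re-encrypt every value of the column encryption keys currently protected
    # with CMK1 using CMK2; during the rotation each key carries two encrypted values.
    Invoke-SqlColumnMasterKeyRotation -SourceColumnMasterKeyName "CMK1" -TargetColumnMasterKeyName "CMK2" -InputObject $database
    # Once all client applications have access to CMK2, drop the old values.
    Complete-SqlColumnMasterKeyRotation -SourceColumnMasterKeyName "CMK1" -InputObject $database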

Step 3: Encrypt Selected Columns

Now that we have provisioned the Always Encrypted keys, it is time to encrypt some columns.

  1. Create an array of SqlColumnEncryptionSettings objects, each of which describes target encryption settings for one column in the target database. In our example, we want to encrypt two columns in the Patients table: SSN and BirthDate. Hence, we create an array with two elements. Each SqlColumnEncryptionSettings object  specifies the type of encryption (Randomized or Deterministic) for the target column, and the name of the metadata object describing the column encryption key to be used to encrypt the column.
    $ces = @()
    $ces += New-SqlColumnEncryptionSettings -ColumnName "dbo.Patients.SSN" -EncryptionType "Deterministic" -EncryptionKey "CEK1"
    $ces += New-SqlColumnEncryptionSettings -ColumnName "dbo.Patients.BirthDate" -EncryptionType "Randomized" -EncryptionKey "CEK1"
  2. Encrypt the columns.
    Set-SqlColumnEncryption -InputObject $database -ColumnEncryptionSettings $ces -LogFileDirectory .

    To apply the specified target encryption settings for the database, the Set-SqlColumnEncryption cmdlet transparently:

    • Creates a new temporary (initially empty) table, which has the same schema as the Patients table, but the two specified columns (SSN and BirthDate) are configured as encrypted.
    • Downloads all data from the Patients table.
    • Uploads the data back to the temporary table. The data gets encrypted on upload.
    • Replaces the original table with the temporary table.

    The above workflow can take a long time, depending on the size of the data. By default (and this is the case in the above example), Set-SqlColumnEncryption locks the target table, making it unavailable for write transactions throughout the entire operation. In SSMS 17.0 and later versions, Set-SqlColumnEncryption also supports an online mode, which minimizes the duration of downtime. We will discuss the online mode in a later blog post.
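
    For reference, here is a hedged sketch of invoking that online mode (assuming the -UseOnlineApproach and -MaxDowntimeInSeconds parameters that shipped with the SqlServer module in SSMS 17.0 and later):

    # Keep the Patients table available for writes during the migration and
    # cap the final switchover downtime at roughly two minutes.
    Set-SqlColumnEncryption -InputObject $database -ColumnEncryptionSettings $ces -UseOnlineApproach -MaxDowntimeInSeconds 120 -LogFileDirectory .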

  3. Review the log file that was generated in the specified log file directory (.). Please note that log file generation is only supported in SSMS 17.0 and later versions.
    
    12/12/2016 5:34:59 PM INFO MainThread Logger initialized.
    12/12/2016 5:34:59 PM INFO MainThread Acquiring database model and preparing data migration.
    12/12/2016 5:35:10 PM INFO [dbo].[Patients] Data migration for table '[dbo].[Patients]' started.
    12/12/2016 5:35:10 PM INFO [dbo].[Patients] Processing Table '[dbo].[Patients]'. 100.00 % done.
    12/12/2016 5:35:10 PM INFO MainThread Finalizing data migration.
    12/12/2016 5:35:11 PM INFO MainThread Deploying the specified encryption settings completed in 0d:0h:0m:11s.

Step 4: Verify Encryption

Now, let’s verify that the data in the SSN and BirthDate columns are indeed encrypted.

  1. Query the table containing encrypted columns.
    
    Invoke-Sqlcmd -Query "SELECT TOP(3) * FROM Patients" -ConnectionString $connStr

    Note that the values of the SSN and BirthDate columns appear as byte arrays, as the columns are encrypted.

    [Screenshot: query results showing encrypted SSN and BirthDate values]

  2. Let’s query the table again, but this time let’s decrypt the values stored in the encrypted columns. We can achieve this by adding Column Encryption Setting = Enabled to the connection string. This instructs the client driver to decrypt the data retrieved from the encrypted columns.
    $connStr = $connStr + "; Column Encryption Setting = Enabled"
    Invoke-Sqlcmd -Query "SELECT TOP(3) * FROM Patients" -ConnectionString $connStr

    Note that the values of the SSN and BirthDate columns now appear in plaintext. The Invoke-Sqlcmd cmdlet can successfully decrypt the data, as it runs on the machine containing the column master key.
    [Screenshot: query results showing decrypted SSN and BirthDate values]
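
    Since the SSN column uses deterministic encryption, equality lookups are also possible – but only through a parameterized query, so that the client driver can transparently encrypt the parameter value before sending it to the server. A minimal sketch (the SSN value is made up; $connStr is the connection string with Column Encryption Setting = Enabled from above):

    $conn = New-Object System.Data.SqlClient.SqlConnection($connStr)
    $conn.Open()
    $cmd = $conn.CreateCommand()
    $cmd.CommandText = "SELECT FirstName, LastName FROM dbo.Patients WHERE SSN = @SSN"
    # The parameter type and size must match the column (char(11)) so the
    # driver can encrypt the value the same way the stored data was encrypted.
    $null = $cmd.Parameters.Add("@SSN", [System.Data.SqlDbType]::Char, 11)
    $cmd.Parameters["@SSN"].Value = "795-73-9838"
    $reader = $cmd.ExecuteReader()
    while ($reader.Read()) { Write-Host $reader["FirstName"] $reader["LastName"] }
    $conn.Close()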

Summary

In this blog post, we have demonstrated how to configure Always Encrypted using PowerShell, including:

  1. Configuring certificates as column master keys.
  2. Configuring column encryption keys.
  3. Encrypting columns.
  4. Querying encrypted columns.

Please note that steps 1-3 are equivalent to running the Always Encrypted Wizard, which we demonstrated in SSMS Encryption Wizard – Enabling Always Encrypted in a Few Easy Steps.

For more information about using PowerShell for setting up Always Encrypted, please see Configure Always Encrypted using PowerShell.

Protection and recovery of Citrix XenDesktop and XenApp using Azure Site Recovery


I am excited to announce support for the protection and recovery of Citrix XenDesktop and XenApp environments in Azure using Azure Site Recovery (ASR). We have been working closely with Citrix to validate and provide guidance on leveraging ASR to build a robust, enterprise-grade DR solution for recovery to Azure of on-premises XenDesktop and XenApp environments running on VMware/Hyper-V.

With ASR, you can protect and recover the essential components in your on-premises XenDesktop and XenApp environment including:

  • Citrix Delivery Controller
  • StoreFront Server
  • XenApp Master Virtual Delivery Agent (VDA)
  • XenApp License Server
  • AD DNS Server
  • SQL Database Server

Additionally, ASR provides you the ability to:

  • Recover to an application consistent point in time, which is useful to recover your multi-tiered Citrix VDI environment to an application-consistent state.
  • Use flexible recovery plans to customize the order of recovery by grouping together machines that need to failover together, add automation scripts, and manual actions to be executed on a failover.
  • Perform non-disruptive recovery testing that lets you test the failover of your Citrix VDI farm to Azure without impacting ongoing replication or the performance of your production environment.

Detailed step-by-step guidance for building a disaster recovery solution using ASR has been developed in close collaboration with Citrix. The whitepaper from Citrix detailing this guidance can be downloaded.

Ready to start using ASR? Check out additional product information to start replicating your workloads to Microsoft Azure using Azure Site Recovery today. You can use the powerful replication capabilities of Site Recovery for 31 days at no charge for every new physical server or virtual machine that you replicate.

Azure Site Recovery, as part of Microsoft Operations Management Suite, enables you to gain control and manage your workloads no matter where they run (Azure, AWS, Windows Server, Linux, VMware or OpenStack) with a cost-effective, all-in-one cloud IT management solution. Existing System Center customers can take advantage of the Microsoft Operations Management Suite add-on, empowering them to do more by leveraging their current investments. Get access to all the new services that OMS offers, with a convenient step-up price for all existing System Center customers. You can also access only the IT management services that you need, enabling you to on-board quickly and have immediate value, paying only for the features that you use.

Brad Anderson’s Lunch Break / s3 e8 / Mark Russinovich, CTO, Azure


After a great first half of our discussion before the holiday break, Mark Russinovich (CTO, Microsoft Azure) and I spend the second part of our drive talking about his successful side-career as a novelist, the amazing rate of public cloud adoption by enterprises over the last year, and we play the Is This a Real Startup? game.

We also talk about what the usage patterns of financial institutions can teach us about the quality and security of the public cloud, and Mark tells the story of the moment he knew his career was going to be in tech.


To learn more about Microsoft Enterprise Mobility + Security, visit: http://www.microsoft.com/ems.

In the next episode, I hit the road with Brad Strock, the CIO of PayPal.

You can subscribe to these videos here, or watch every past episode at aka.ms/LunchBreak.

 

Breaking down EMS Conditional Access: Part 2


This post is the second in a three-part series detailing Conditional Access from Microsoft Enterprise Mobility + Security.

Today, the typical employee connects an average of four devices to their corporate network. Usually they're connecting from their own mobile device or PC, but that's not always the case. Maybe they use their daughter's iPad in a pinch, or log on from a friend's house, or use a hotel kiosk to connect. You might be OK with allowing access in some cases, but in other circumstances you may want to provide access only to certain employees, only to specific data, or only from known and compliant devices.

Device-based conditional access from Microsoft Enterprise Mobility + Security (EMS) helps you make sure that only compliant mobile devices and PCs (those that meet the standards you've set) have access to corporate data.

Device Compliance

Device compliance policies help you protect company data by making sure the devices used to access your data or sensitive apps comply with your specific requirements or standards. Administrators can set these policies to enforce device compliance requirements before users attempt to access company resources. These can include settings for device enrollment, domain join, passwords and encryption, as well as for the OS platform running on the device.

You can use compliance policy settings in Microsoft Intune to create a set of rules for evaluating the compliance of employee devices. When devices don't meet the conditions set in the policies, the end user is guided through the process of enrolling the device and fixing the issue that prevents the device from being compliant.

Conditional access policies are a set of rules that can restrict or allow access to a specific service based on whether the user meets the requirements you define. When you use a conditional access policy in combination with a device compliance policy, only users with compliant devices (in addition to any other rules you've set) will be allowed to access the service. Since both policies are applied at the user level, any device from which the user tries to access services will be checked for compliance.

Conditional Access Policy Scenario

In this scenario, IT has applied a policy that blocks unmanaged devices from accessing and opening files stored on OneDrive for Business. Devices need to be enrolled first, before the location can be accessed.

EMS + Lookout, providing additional mobile endpoint security

Lookout's deep integration with EMS gives you real-time visibility into mobile device risks, including advanced mobile threats and app data leakage, which can inform your conditional access policies. Lookout provides visibility across all three mobile risk vectors: app-based risks (such as malware), network-based risks (such as man-in-the-middle attacks), and OS-based risks (such as malicious OS compromise).

The integration between Lookout and EMS makes it easy to apply this threat intelligence to your conditional access policies. If a device is found to be non-compliant due to a mobile risk identified by Lookout, access is blocked and the user is prompted to resolve the issue with one-step guidance from Lookout before they can regain access. Note that Lookout licenses must be purchased separately from EMS.


Device-based conditional access to on-premises resources

EMS conditional access capabilities help you secure access to both your cloud and on-premises resources. Our customers often manage broad and complex networks, so with that in mind, we've built partnerships with popular network access providers such as Cisco ISE, Aruba ClearPass, and Citrix NetScaler. Now you can extend your Intune conditional access capabilities to work with these networks.

Partner network providers can implement checks for Intune-managed and compliant devices as a requirement before allowing user access through either your wireless or virtual private network. When you extend device compliance policies to network providers, you can ensure that only managed and compliant devices will be able to connect to your on-premises corporate network.

EMS offers you some great access simplifications: you can still enable secure access to on-premises applications without VPNs, DMZs, or on-premises reverse proxies by leveraging the Azure Active Directory Application Proxy. Best of all, all of this can be done without installing or maintaining additional on-premises infrastructure or opening your company firewall to route traffic through it. Conditional access capabilities will work for this scenario as well.



Released: Exchange Server Role Requirements Calculator 8.4


Today, we released an updated version of the Exchange Server Role Requirements Calculator.

This release focuses on bug fixes for the DAG auto-calculation functionality introduced in 8.3, as well as support for ReplayLagMaxDelay.

In addition to allowing you to configure the ReplayLagMaxDelay value (the default is 24 hours), the calculator has also been updated to ensure that the SafetyNetThreshold is configured to a value equal to or greater than the sum of ReplayLagTime + ReplayLagMaxDelay.
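
To make the relationship concrete, here is a hedged Exchange Management Shell example (server and database names are illustrative and not part of the calculator itself): a lagged copy with a one-day ReplayLagTime and the default 24-hour ReplayLagMaxDelay needs Safety Net to hold messages for at least two days.

    # Configure a lagged database copy; ReplayLagMaxDelay allows replay to be
    # deferred up to an additional 24 hours.
    Set-MailboxDatabaseCopy -Identity "DB1\MBX2" -ReplayLagTime 1.00:00:00 -ReplayLagMaxDelay 1.00:00:00
    # Size Safety Net to at least ReplayLagTime + ReplayLagMaxDelay = 2 days.
    Set-TransportConfig -SafetyNetHoldTime 2.00:00:00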

For all the other improvements and bug fixes, please review the Release Notes, or download the update directly.

As always, we welcome feedback; please report any issues you encounter while using the calculator by emailing strgcalc AT microsoft DOT com.

Ross Smith IV
Principal Program Manager
Office 365 Customer Experience

How six lines of code + SQL Server can bring Deep Learning to ANY App



The What Part

Deep Learning is one of today's hottest buzzwords. The recent results and applications are incredibly promising, spanning areas such as speech recognition, language understanding and computer vision. Indeed, Deep Learning is now changing the very customer experience around many of Microsoft’s products, including HoloLens, Skype, Cortana, Office 365, Bing and more. Deep Learning is also a core part of Microsoft’s development platform offerings with an extensive toolset that includes: the Microsoft Cognitive Toolkit, the Cortana Intelligence Suite, Microsoft Cognitive Services APIs, Azure Machine Learning, the Bot Framework, and the Azure Bot Service. Our Deep Learning based language translation in Skype was recently named one of the 7 greatest software innovations of the year by Popular Science, and this technology has now helped machines achieve human-level parity in conversational speech recognition. To learn more about our Deep Learning journey, I encourage you to read the recent blog post From “A PC on every desktop” to “Deep Learning in every software”.

The applications of Deep Learning technology are truly so far-reaching that the new mantra, Deep Learning in every software, may well become a reality within this decade. The venerable SQL Server DBMS is no exception. Can SQL Server do Deep Learning? The answer is an enthusiastic “yes!” With the public preview of the next release of SQL Server, we’ve added significant improvements to R Services inside SQL Server, including a very powerful set of machine learning functions that are used by our own product teams across Microsoft. This brings new machine learning and deep neural network functionality with increased speed, performance and scale to database applications built on SQL Server. We have just recently showcased SQL Server running more than one million R predictions per second, using SQL Server as a Machine Learning Model Management System, and I encourage you all to try out the R examples and machine learning templates for SQL Server on GitHub.

In this blog, I want to address the finer points of the matter – the what, the why and the how of Deep Learning in SQL Server. With this clarity, it will be easier to see the road forward for data-driven machine intelligence using a data platform as powerful as SQL Server.

The Why Part

Today, every company is a data company, and every app is a data app.

When you put intelligence (AI, ML, AA, etc.) close to where the data lives, every app becomes an intelligent app. SQL Server can help developers and customers everywhere realize the holy grail of deep learning in their applications with just a few lines of code. It enables data developers to deploy mission-critical operational systems that embed deep learning models. So here are the 10 whys for deep learning in SQL Server.

The 10 Whys of Deep Learning inside SQL Server

  1. By pushing intelligence close to where your data lives (i.e., SQL Server), you get security, compliance, privacy, encryption, master data services, availability groups, advanced BI, in-memory, virtualization, geo-spatial, temporal, graph capabilities and other world-class features.
  2. You can do both near “real-time intelligence” or “batch intelligence” (similar in spirit to OLTP and OLAP, but applied to Deep Learning and intelligence).
  3. Your apps built on top of SQL Server don’t need to change to take advantage of Deep Learning, and a multitude of apps (web, mobile, IoT) can share the same deep learning models without duplicating code.
  4. You can exploit a number of functionalities that come in machine learning libraries (e.g., MicrosoftML) that will drive the productivity of your data scientists, developers, DBAs and business overall. This might be faster and far more efficient than building it in-house.
  5. You can develop predictable solutions that can evolve/scale up as you need. With the latest service pack of SQL Server, many features that were only available in the Enterprise Edition are now available in the Standard/Express/Web Edition of SQL Server. That means you can do Deep Learning using a standard SQL Server without high costs.
  6. You can use heterogeneous external data sources (via Polybase) for training and inference of deep models.
  7. You can create versatile data simulations and what-if scenarios inside SQL Server and then train a variety of rich Deep Learning models in those simulated worlds to enable intelligence even with limited training data.
  8. You can operationalize Deep Learning models in a very easy and fast way using stored procedures and triggers.
  9. You get all the tools, monitoring, debugging and ecosystem around SQL Server applicable to intelligence. SQL Server can literally become your Machine Learning Management System and handle the entire life cycle of DNN models along with data.
  10. You can generate new knowledge and insights from the data you are already storing, without any impact on your transactional workload (via the HTAP pattern).

Let’s be honest, nobody buys a DBMS for the sake of a DBMS. People buy it for what it enables them to do. By putting deep learning capabilities inside SQL Server, we can scale artificial intelligence and machine learning both in the traditional sense (scale of data, throughput, latency) and in terms of productivity (a low barrier to adoption and a lower learning curve). The value it brings comes in many shapes and forms – time, better experiences, productivity, lower dollar cost, higher dollar revenue, opportunity, higher business aspirations, thought leadership in an industry, and more.

Real-life applications of Deep Learning running inside SQL Server span banking, healthcare, finance, manufacturing, retail, e-commerce and IoT. With applications like fraud detection, disease prediction, power consumption prediction and personal analytics, you have the ability to transform existing industries and apps. That also means that whatever workloads you are running on SQL Server – CRM, ERP, DW, OLTP, BD – you can add Deep Learning to them almost seamlessly. Furthermore, it’s not just about doing deep learning standalone; it’s about combining it with all the kinds of data and analytics that SQL Server is so great at (e.g., processing structured data, JSON data, geo-spatial data, graph data, external data, temporal data). All that is really left to add to this mix is… your creativity.


The How Part

Here is a great scenario to show all of this in reality. I am going to use an example of predicting galaxy classes from image data, using the power of Microsoft R and its new MicrosoftML package for machine learning (built by our Algorithms and Data Science team). And I am going to do all this in SQL Server with R Services on a readily available Azure NC VM. I am going to classify images of galaxies and other celestial objects into 13 different classes based on the taxonomy created by astronomers – mainly ellipticals and spirals, and then various sub-categories within them. The shape and other visual features of galaxies change as they evolve. Studying the shapes of galaxies and classifying them appropriately helps scientists learn how the universe is evolving. It is very easy for us humans to look at these images and put them in the right buckets based on their visual features. But to scale this to the 2 trillion known galaxies, I need help from machine learning and techniques like deep neural networks – so that is exactly what I am going to use. It’s not a big leap to imagine that instead of astronomy data, we have healthcare data, financial data or IoT data, and we are trying to make predictions on that data.

An app

Imagine a simple web app that loads images from a folder and then classifies them into different categories – spiral or elliptical, and then sub-types within those categories (e.g., is it a regular spiral or does it have a handlebar structure in the center).

Use Deep Neural Nets

The classification can be done incredibly fast on vast amounts of images. Here is an example output:

[Screenshot: example classification output]
The first two columns are the elliptical types and the others are of different spiral types.

So how does this simple app do this amazingly complex computation?

The code for such an app actually isn’t doing much – it just writes the paths of the new files to classify into a database table (the rest of the app code is plumbing, page layout, etc.).

SqlCommand Cmd = new SqlCommand("INSERT INTO [dbo].[GalaxiesToScore] ([path] ,[PredictedLabel]) "
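
For illustration, here is a hedged PowerShell equivalent of what that line boils down to (the connection string and file path are made up; the table name comes from the post):

    $connStr = "Server=myserver;Database=Galaxies;Integrated Security=True"
    # Queue a new image for scoring; the trigger described below picks it up.
    Invoke-Sqlcmd -ConnectionString $connStr -Query "INSERT INTO dbo.GalaxiesToScore ([path], [PredictedLabel]) VALUES (N'C:\galaxies\new\img_0001.jpg', NULL)"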

 


What is happening in the database?

Prediction and operationalization part:

Let’s look at the table where the app writes the image paths. It contains a column with paths to the galaxy images, and a column to store the predicted classes of galaxies. As soon as a new row of data gets entered into this table, a trigger gets executed.

[Screenshot: the GalaxiesToScore table with image paths and predicted labels]
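
A hedged sketch of that wiring (the trigger name is hypothetical; the table and procedure names come from the post):

    # Fire the scoring procedure whenever new image paths are inserted.
    Invoke-Sqlcmd -ConnectionString $connStr -Query "CREATE TRIGGER dbo.trgScoreNewGalaxies ON dbo.GalaxiesToScore AFTER INSERT AS EXEC dbo.PredictGalaxiesNN;"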

 

The trigger in turn invokes a stored procedure – PredictGalaxiesNN as shown below (with R script portion embedded inside the stored proc):

[Screenshot: the PredictGalaxiesNN stored procedure with the embedded R script]

This is where the magic happens – in these few lines of R code. This R script takes two inputs – the new rows of data (that have not been scored yet) and the model that is stored in a table as varbinary(max). I will talk about how the model got there in a minute. Inside the script, the model gets de-serialized and is used by the familiar scoring function (rxPredict) in this line:

scores <- rxPredict(modelObject = model_un, data = InputDataSet,  extraVarsToWrite="path")

to score the new rows and then write the scored output out. This is a new variant of rxPredict that understands the ML algorithms included in the new MicrosoftML package. This line

    library("MicrosoftML")

loads the new package that contains the new ML algorithms. In addition to DNNs (the focus of this blog), there are five other powerful ML algorithms in this package – fast linear learner, fast tree, fast forest, one-class SVM for anomaly detection, and regularized logistic regression (with L1 and L2). So, with just 6-7 lines of R code, you can enable any app to get intelligence from a DNN-based model. All the app needs to do is connect to SQL Server. By the way, you can now very easily generate a stored procedure for R code using the sqlrutils package.
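
Putting the pieces together, here is a hedged sketch of what a scoring procedure of this shape could look like (the dbo.Models table, its columns and the unscored-row filter are assumptions; the R body mirrors the lines discussed above, and a real procedure would also write PredictedLabel back to the table):

    $createProc = "
    CREATE PROCEDURE dbo.PredictGalaxiesNN
    AS
    BEGIN
        -- Assumption: the serialized model lives in a dbo.Models table.
        DECLARE @model varbinary(max) =
            (SELECT model FROM dbo.Models WHERE name = 'GalaxyDNN');
        EXEC sp_execute_external_script
            @language = N'R',
            @script = N'
                library(''MicrosoftML'')
                model_un <- unserialize(model)  # de-serialize the stored model
                scores <- rxPredict(modelObject = model_un, data = InputDataSet, extraVarsToWrite = ''path'')
                OutputDataSet <- scores',
            @input_data_1 = N'SELECT [path] FROM dbo.GalaxiesToScore WHERE PredictedLabel IS NULL',
            @params = N'@model varbinary(max)',
            @model = @model;
    END"
    Invoke-Sqlcmd -ConnectionString $connStr -Query $createProc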

What about training the model?

Where was the model trained? The model was trained in SQL Server as well. However, it does not have to be trained on SQL Server – it could have been trained on a separate machine running a standalone R Server, on-premises or in the cloud. Today these new ML algorithms are available in the Windows version of R Server, and support for other platforms is coming soon. I just chose to do the training in the SQL Server box here, but I could have done it outside as well. Let’s look at the stored proc with the training code.

Training code:

The model training is done in these lines of code.

[Screenshot: the model training code]

The new function rxNeuralNet, from the MicrosoftML package, trains a DNN. The code looks similar to other R and rx functions – there is a formula, an input dataset and some other parameters. One of the parameters here is “netDefinition = netDefinition”. This is where the neural network is defined.

Network definition:

Here is the DNN definition in this portion of the code:

[Screenshot: the Net# definition of the deep neural network]

Here, a deep neural net is defined using the Net# specification language that was created for this purpose. It has 1 input layer, 1 output layer and 8 hidden layers. It starts with an input layer of 50x50 pixels and 3 colors (RGB) of image data. The first hidden layer is a convolution layer, where we specify the kernel (a small sub-part of the image) size and how many times we want the kernel to map to other kernels (convolve). There are other layers for more convolutions, and for normalization and pooling, which help stabilize the network. Finally, the output layer maps to one of the 13 classes. In about 50 lines of Net# specification, I have defined a complex neural network. Net# is documented on MSDN.

Training data size/GPU:

Here is the R code to do the training.

[Screenshot: the rxNeuralNet training call]

Some other lines to note here: “training_rows = 238000”. This model was trained on 238K images that we got from the Sloan Digital Sky Survey dataset. We then created two variants of each image with 45- and 90-degree rotations. In all, there were about 700K images to train on. That’s a lot of image data – so how long did it take to train? We were able to train this model in under 4 hours. This is a decent-sized machine – 6 cores and 56GB of RAM – but it also has a powerful NVIDIA Tesla K80 GPU. In fact, it is an Azure VM – the new NC-series GPU VM, readily available to anyone with an Azure subscription. We were able to leverage GPU computation by specifying one simple parameter: acceleration = “gpu”. Without the GPU, training takes roughly 10X more time.

The What Are You Waiting For Part

So, with just a few lines of R code using algorithms from the MicrosoftML package, I was able to train a DNN on tons of image data and operationalize the trained model in SQL Server using R Services, such that any app connected to SQL Server can get this type of intelligence easily. That’s the power of Microsoft R and its MicrosoftML package combined with SQL Server. This is just the beginning; we are working on adding more algorithms on our quest to democratize the power of AI and machine learning. You can download the MicrosoftML: Algorithm Cheat Sheet here to help you choose the right machine learning algorithm for a predictive analytics model.


Don’t wait, go ahead and give it a try.

@rimmanehme

3 techniques for successful cloud collaboration


What’s your business’s motivation for implementing cloud collaboration solutions? The ones we hear most frequently are increased productivity, accelerated decision-making and improved sales. But here’s the surprise: According to the 2016 Connected Enterprise Report, one in four IT groups aren’t measuring cloud collaboration results by whether business goals were achieved. They’re not even checking whether users adopted the solution. They’re just measuring whether the tool was implemented.

But implementation isn’t a useful measure of success. Sure, it’s the prerequisite for effective collaboration, but it’s just the first step. Your enterprise can only see a full return on its investment in collaboration technologies when employees actually use the tools—so user adoption is the first measure to focus on.

Putting resources into ensuring adoption pays off. The same report shows that when businesses fully implemented and adopted collaboration technologies, they reported some exciting benefits:

  • Accelerated decision-making—85 percent say that using collaboration technology has met or exceeded their expectations.
  • More efficient business processes—79 percent say collaboration has met or exceeded their expectations.
  • Improved customer service—86 percent say collaboration has met or exceeded their expectations.

Three techniques to increase adoption

Want comparable benefits for your enterprise? Here are three techniques our customers have used to increase adoption of collaboration tools and get great business results. Each company’s story is a little different, but they’ve all improved productivity and agility.

  1. Start with a familiar interface—For fashion group BCBGMAXAZRIAGROUP (BCBG), successful adoption was simpler because they started with a familiar interface.

After evaluating Google Apps and Microsoft Office 365, the IT team at BCBG was concerned that the unfamiliar Google interface would create a training and adoption challenge. “We needed something our employees could adopt now with minimum disruption to day-to-day business,” says Kent Fuller, director of IT Infrastructure Services at BCBG. “We have a lot of infrastructure transformation going on, and Google would have introduced new challenges.” With a familiar interface and credits for online training materials included with subscriptions, BCBG employees can easily adopt Office 365.

One of the main benefits has been productive collaboration. With an updated, advanced messaging and productivity environment, BCBG employees can send and receive email messages faster; have the right tools to produce better documents, spreadsheets and presentations; and collaborate more effectively with colleagues, customers, suppliers and other partners. “With Office 365, we can build a more effective, more comprehensive collaboration environment than we could have with Google Apps,” says Fuller.

  2. Turn executives into enthusiasts—For the KCOM Group, a national communications and information services organization in the United Kingdom, the rollout of its collaboration solution started at the top.

The company originally chose 40 senior executives to evaluate the Office Communications Server 2007 instant messaging, but the number of early users expanded organically and rapidly. “The technology sells itself once you start to use it,” says Dean Branton, group CIO at the KCOM Group and director of customer operations. “The senior team members immediately decided they wanted their direct reports using it as well, and then their extended teams, and then their personal assistants. Before we knew it, we had rolled out by stealth.”

“Now,” says Bill Halbert, executive chairman of the KCOM Group, “We are more flexible, more agile, and we can make quicker decisions, because it is much easier to find the information we need.”

  3. Let the experts help build your plan—The process of building an Office 365 adoption plan was a little different for Mott MacDonald, a global consulting company. The Microsoft FastTrack Team helped the company with its adoption plan by providing both self-service resources and expert advice. “The FastTrack adoption methodology is really beneficial,” says Simon Denton, the business architect responsible for Office 365 implementation at Mott MacDonald. “It sets out quite clearly the steps we needed to go through to define principles and scenarios. Once we did that, we knew adoption would come easily. We based our entire adoption plan on the FastTrack documentation. It gave us a really good foundation.”

For example, Mott MacDonald encouraged adoption of its new Yammer enterprise social network with a “30 Days of Yammer” campaign, which involved all the staff and more than doubled the number of active and engaged users. Employees started using it to break down barriers within the organization much more quickly than anyone had expected.

The most important step: Start!

As soon as you roll out your collaboration solution, start measuring and tracking user adoption, and move decisively to address any hitches in the process. Implementing a suite solution and preparing your IT team to get employees up and running can be a daunting task—but Office 365 lets you move at your own pace. With our suite of available tools, you decide whether to migrate employees over in groups or by program. The ability to implement a steady rollout enables your teams to work at their own pace, allows you to save costs, and increases productivity by helping your business adapt to new streamlined solutions over time. Additionally, FastTrack for Office 365 provides customers with hands-on support to drive deployment and adoption at their own speed.

The post 3 techniques for successful cloud collaboration appeared first on Office Blogs.

Episode 112 with Andrew Connell on technical training—Office 365 Developer Podcast


In episode 112 of the Office 365 Developer Podcast, Richard diZerega and Andrew Coates talk with Andrew Connell about technical training.

Download the podcast.


Got questions or comments about the show? Join the O365 Dev Podcast on the Office 365 Technical Network. The podcast is available on iTunes (search for “Office 365 Developer Podcast”), or add the RSS feed directly: feeds.feedburner.com/Office365DeveloperPodcast.

About Andrew Connell


Andrew Connell is a full stack web developer with a focus on Microsoft Azure and Office 365—specifically the Office 365 APIs, SharePoint, Microsoft .NET Framework/.NET Core, Angular, Node.js and Docker. He’s received Microsoft’s MVP award every year since 2005. Andrew has helped thousands of developers through the various courses he’s authored and taught, both in person and online. Recently he launched Voitanos, his own platform for on-demand video training. You can also follow Andrew on his blog (www.andrewconnell.com) and on Twitter @andrewconnell. Also, check out some of the numerous projects he’s involved in on GitHub or listen to his popular weekly podcast, The Microsoft Cloud Show, which focuses on Microsoft cloud services such as Azure and Office 365 as well as the competitive cloud landscape.

About the hosts

Richard is a software engineer in Microsoft’s Developer Experience (DX) group, where he helps developers and software vendors maximize their use of Microsoft cloud services in Office 365 and Azure. Richard has spent a good portion of the last decade architecting Office-centric solutions, many of which span Microsoft’s diverse technology portfolio. He is a passionate technology evangelist and a frequent speaker at worldwide conferences, trainings and events. Richard is highly active in the Office 365 community, a popular blogger at aka.ms/richdizz, and can be found on Twitter at @richdizz. Richard was born and raised in Dallas, TX, where he is still based, but works on a worldwide team based in Redmond. Richard is an avid builder of things (BoT), musician and lightning-fast runner.

 

A Civil Engineer by training and a software developer by profession, Andrew Coates has been a Developer Evangelist at Microsoft since early 2004, teaching, learning and sharing coding techniques. During that time, he’s focused on .NET development on the desktop, in the cloud, on the web, on mobile devices and most recently for Office. Andrew has a number of apps in various stores and generally has far too much fun doing his job to honestly be able to call it work. Andrew lives in Sydney, Australia with his wife and two almost-grown-up children.

Useful links

StackOverflow

Yammer Office 365 Technical Network

The post Episode 112 with Andrew Connell on technical training—Office 365 Developer Podcast appeared first on Office Blogs.

Moving eBird to the Azure Cloud


Re-posted from the Azure Data Lake & HDInsight blog.

Hosted by the Cornell Lab of Ornithology, eBird is a citizen science project that allows birders to submit observations to a central database. Birders seek to identify and record the birds that they discover, and can also report how much effort it took to find those birds. eBird’s web and mobile apps make data recording and interaction super convenient. eBird has accumulated over 350 million records of birds all over the world in the past 14 years.

What’s more, birds are strong indicators of environmental health. They use a variety of habitats, respond to seasonal and environmental cues in specific ways, and undergo dramatic migrations across the globe. By understanding their distribution, abundance and movements across large geographic areas over long periods of time, researchers can build models of these patterns, monitor trends and identify conservation priorities.

Species distribution model showing the abundance of tree swallows throughout an entire year. The model was generated using information collected entirely by eBirders. (Image courtesy of eBird and the Cornell Lab of Ornithology.)

Although the eBird project was providing research opportunities at a scale that would have been inconceivable otherwise, it ran into challenges with data growth and the time it took to run analytics models. The project, which has thus far captured 25 million hours of bird observation, faced exponential growth in data volumes. The mid-sized high-performance computers being used to run these analytics models were taking up to 3 weeks to process the results for a single species. That made it very inefficient to generate the results that the researchers needed for the 700-odd species of birds that regularly inhabit North America.

Thanks to a recent collaboration between the Cornell Lab and Microsoft, this project and the associated machine learning workflow were migrated to the fully managed, highly scalable Azure HDInsight (Hadoop) service, a key component of the Microsoft Cortana Intelligence Suite. As a result of this partnership, researchers were able to scale their clusters sufficiently to reduce analysis run times to as little as 3 hours, generating results across more species dramatically faster. This, in turn, provides more timely results for conservation staff to then use in their planning process. They have also been able to run models on dozens more species than they would have otherwise.

The complete solution is built on Azure Storage, HDInsight, Microsoft R Server, Linux Ubuntu, Apache Hadoop MapReduce and Spark.
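To make the scale-out concrete, here is a minimal, hypothetical sketch of how independent per-species model runs might be fanned out across a Spark cluster like the HDInsight one described above. The species list, the fit_species_model function and its output are illustrative placeholders, not code from the eBird workflow (where the actual modeling runs in Microsoft R Server):

```python
# Hypothetical sketch: distribute independent per-species model runs
# across a Spark cluster. All names and logic here are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ebird-species-models").getOrCreate()

# In practice this would be the ~700 species that regularly inhabit North America.
species = ["Tree Swallow", "Wood Thrush", "Baltimore Oriole"]

def fit_species_model(name):
    # Stand-in for the real statistical model; in the eBird workflow this
    # step would invoke Microsoft R Server modeling code for one species.
    return (name, "model fitted")

# Each species is an independent task, so adding worker nodes shortens
# wall-clock time roughly linearly, which is the effect behind the
# weeks-to-hours improvement described above.
results = (spark.sparkContext
                .parallelize(species, len(species))
                .map(fit_species_model)
                .collect())

for name, status in results:
    print(name, status)
```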


You can read the full write-up in the original post on the Azure Data Lake & HDInsight blog.

Taking advantage of the scalability, manageability and open-source support of the Microsoft Azure cloud platform, the researchers behind eBird hope to drive further innovation and accelerate their research and conservation efforts, working closely with the community.

CIML Blog Team

CES 2017: Dell adds convertible XPS 13 model, unveils 8K monitor and more


At the annual Consumer Electronics Show (CES), Dell unveiled a series of innovations built with Windows 10 to help people maximize productivity and creativity while enjoying lifelike viewing experiences. The new products include the Dell Canvas, a horizontal smart workspace with touch, totem and pen capabilities; a 13-inch 2-in-1; XPS and Precision All-in-Ones; the Dell UltraSharp 32 Ultra HD 8K Monitor and Dell 27 Ultrathin Monitor; Dell’s most powerful VR-ready mobile workstation; a new Inspiron gaming line; a wireless charging 2-in-1 for the ultimate “no wires” experience; and more.

Here’s a closer look at what Dell announced today:

Dell Canvas – work at the speed of thought


The Dell Canvas channels the innovative possibilities of Windows 10 and the upcoming Windows 10 Creators Update into a new category of smart workspace technology. The versatile 27-inch QHD smart workspace expands productivity for content developers and designers so you can create, communicate and express your ideas as naturally as you do with pen on paper. Through the use of touch, digital pen and totems or dials, the Dell Canvas allows you to turn drawings into part of the digital workflow with Windows Ink or by marking up webpages in Microsoft Edge. Powered by virtually any Windows 10 PC, Dell Canvas also plugs seamlessly into software solutions from partners including Adobe, Autodesk, AVID, Dassault Systems, SolidWorks and Microsoft to unleash the creative genius in everyone.

Pricing and Availability: The Dell Canvas will be available March 30 on Dell.com in the U.S. starting at $1,799.

Updates to the XPS lineup – XPS 13 adds convertible model; XPS 15 more powerful


Dell introduced a brand new XPS 13 2-in-1 that offers a 360-degree hinge for multiple productivity and viewing options, so you can use it as a laptop or flip the display to use it as a tablet. It also features up to 15 hours of battery life and eye-popping clarity with a gorgeous QHD+ (5.7M pixels) InfinityEdge touch display. The fanless design keeps it silent and it can be configured with 7th Gen Intel Core vPro processors and Dell BIOS and manageability software – all backed by Dell’s global ProSupport services.

Other features include:

  • Thermal design that manages internal and external surface temperatures, so the XPS 13 2-in-1 keeps its cool while tackling your most challenging tasks
  • Superior materials and build quality – durable and stunning – machined aluminum, carbon fiber, bonded Gorilla Glass NBT and steel hinges wrapped in machined aluminum
  • Leading-edge connectivity, all USB Type-C with one Thunderbolt 3 connector – charging and display functionality with either port; faster downloads and one-cable docking solutions for a clean desk setup; a USB-C to USB-A adapter is included in the box
  • Superior usability with a standard backlit keyboard and precision touchpad, as well as a fingerprint reader that allows you to log in securely and easily with Windows Hello

Dell XPS 15 (Model 9550) Touch notebook computer hero shot.

Dell has also updated the XPS 15, which offers powerhouse performance and a stunning display in a 15-inch laptop, with 7th Gen Intel Core processors, NVIDIA Pascal architecture – GeForce GTX 1050 graphics and a fingerprint reader that enables Windows Hello for password-free logins with the swipe of a finger.

Pricing and Availability: The Dell XPS 13 2-in-1 is available exclusively on Dell.com and at Best Buy in the U.S. starting at $999.99. The Dell XPS 15 notebook is available on Dell.com in the U.S. starting at $999.99.

XPS & Precision all-in-ones with Windows 10  


The new XPS 27 AIO takes sound to a whole new level with built-in audio quality that previously required an external sound bar. With full-frequency-range high-fidelity audio built in, it delivers the best sound available in an AIO, with 10 speakers pumping out sound at 50W per channel. The viewing experience is equally impressive with a beautiful 4K Ultra HD (3840 x 2160) edge-to-edge touch display supporting 100 percent Adobe RGB color gamut. With Windows Hello, you can log in with a look using the infrared camera, or ask Cortana for things from across the room using the device’s far-field processing from Waves and its quad-array mics.

The Precision AIO delivers more performance options with Intel Xeon processors, AMD Radeon Pro graphics capable of powering VR and outstanding reliability with ISV certifications for top programs like AVID and SolidWorks, as well as leading security and manageability software.

Pricing and Availability: Dell XPS 27 AIO is available on Dell.com in the U.S. starting at $1,499.99. The Dell Precision AIO (5720) will be available April 6 on Dell.com in the U.S. starting at $1,599.

Dell’s first VR-ready mobile workstation – the Precision 7720

Dell Precision 7710 Non-Touch mobile workstation.

The world of virtual reality is expanding in near-infinite directions and the creative pioneers driving all this content need the best tools. Enter the Dell Precision 7720 mobile workstation—Dell’s first VR-ready mobile workstation designed specifically for VR content creation. It’s Dell’s most powerful mobile workstation, thanks to the latest 7th Gen Intel Core and Intel Xeon processors and NVIDIA Pascal Quadro professional graphics. Like all Dell Precision workstations, it’s developed specifically to work with and optimize the latest software suites, and – since it’s a mobile workstation – you can now develop VR content from anywhere you find inspiration.

Pricing and Availability: Dell Precision 7720 VR-ready mobile workstation will be available on Dell.com in the U.S. starting at $1,699.

Dell UltraSharp 32 Ultra HD 8K monitor and Dell 27 Ultrathin monitor


The Dell UltraSharp 32 Ultra HD 8K Monitor is a near-borderless 32-inch 8K resolution display. With more than 1 billion colors, 33.2M pixels of resolution, 100 percent Adobe RGB and sRGB color gamut and an unprecedented 280 ppi, the Dell UltraSharp 32 Ultra HD 8K monitor delivers four times more content than Ultra HD 4K resolution and 16 times more content than Full HD.

Additionally, Dell is announcing several new monitors with its new HDR feature, which enhances the visual experience through a wider range of colors for exceptionally vivid images, higher clarity with visibly vibrant textures, and increased contrast to capture a multitude of natural shades and hues. New monitors with the HDR feature include the Dell 27 Ultrathin Monitor, featuring the world’s thinnest overall profile, a sleek, modern design aesthetic, Quad HD technology and USB Type-C connectivity, as well as the Dell 24 and 27 InfinityEdge Monitors, which also feature dual 6W external speakers professionally tuned by award-winning Waves MaxxAudio.

Pricing and Availability: Available March 23 on Dell.com in the U.S., the Dell 27 Ultrathin Monitor (S2718D) starts at $699.99, and the Dell UltraSharp 32 Ultra HD 8K Monitor (UP3218K) starts at $4,999. Available Feb. 23 on Dell.com in the United States, the Dell 24 InfinityEdge Monitor (S2418HX) starts at $289.99, and the Dell 27 InfinityEdge Monitor (S2718HX) starts at $379.99.

The ultimate “no wires” experience coming soon with a wireless charging 2-in-1


With WiTricity magnetic resonance wireless charging technology, Dell will deliver a truly wireless experience in the 12-inch Latitude 7285 2-in-1, available later this year. When combined with a charging mat and WiGig wireless dock, you can take the Latitude 2-in-1 with you without disengaging any wires or a physical dock. And when you get back to your desk and set the 2-in-1 on the charging mat, it begins charging, automatically reconnects to the WiGig dock and content appears on the external display. No wires involved!

Pricing and availability: The Latitude 7285 2-in-1 featuring wireless charging capabilities will be available summer of 2017. Pricing will be announced during Dell EMC World in May.

Dell consumer and commercial devices refresh


Dell’s line of XPS, Inspiron and Alienware consumer devices and OptiPlex, Latitude and Precision commercial products are being upgraded with performance enhancements, thanks to 7th Gen Intel Core processors, USB-C with Thunderbolt 3 connectivity options and updated professional graphics from NVIDIA and AMD.

Precision and Latitude products built for business also feature 7th Gen Intel Core vPro and Xeon processors and new ultrathin notebook and 2-in-1 designs—thin and light professional products that don’t compromise on productivity, security and manageability. The Latitude 7000 Series Ultrabooks and an award-winning 12-inch Latitude 5000 Series 2-in-1 are also added to the lineup. The detachable Latitude 5285 2-in-1 weighs less than two pounds, features a unique auto-deploy kickstand that extends up to 150 degrees for multiple viewing angles and has multiple connectivity options.

For desktop lovers looking for space-saving designs, Dell has redefined the desktop experience with the new OptiPlex 5250 AIO and an updated line of OptiPlex small and micro form factor desktops. Great for pairing with a desktop or laptop, new monitors include the Dell 24 Monitor for Video Conferencing, perfect for Skype, which features a two-megapixel Full HD IR camera with privacy shutter, a noise-canceling microphone and dual 5W speakers built in, and the Dell 24 Touch Monitor, which features 10-point touch with an anti-glare touch screen and height-adjustment, tilt and swivel capabilities for an ideal touch experience.

Pricing and Availability: New Latitude notebooks, including the Dell Latitude 5285 2-in-1, will be available on Dell.com in the U.S. The Latitude 5285 2-in-1 starts at $899. Available Feb. 7 on Dell.com in the U.S., the OptiPlex 5050 Micro starts at $599 and the OptiPlex 5250 AIO starts at $879. Available Jan. 12 on Dell.com in the U.S., the Dell 24 Touch Monitor (P2418HT) starts at $399.99, and the Dell 24 Monitor for Video Conferencing (P2418HZ) starts at $329.99.

It’s great to see partners like Dell push the limits of what’s possible to offer customers innovative devices that light up Windows 10. To learn more about Dell’s news from today, visit dell.com/CESpressroom.

The post CES 2017: Dell adds convertible XPS 13 model, unveils 8K monitor and more appeared first on Windows Experience Blog.

New video for Cloud Management Gateway


Teaching coding from the Metal Up or from the Glass Back?


* Stock photo by WOCInTech Chat used under CC

Maria on my team and I have been pairing (working in code and stuff together) occasionally in order to improve our coding and tech skills. We all have gaps and it's a good idea to go over the "digital fundamentals" every once in a while to make sure you've got things straight. (Follow-up post on this topic tomorrow.)

As we were whiteboarding and learning and alternating teaching each other (the best way to make sure you know a topic is to teach it to another person), I was getting the impression that, well, we weren't feeling each other's style.

Now, before we get started, yes, this is a "there's two kinds of people in this world" post. But this isn't age, background, or gender related from what I can tell. I just think folks are wired a certain way. Yes, this is a post about generalities.

Here's the idea. Just like there are kinesthetic learners and auditory learners and people who learn by repetition, in the computer world I think that some folks learn from the metal up and some folks learn from the glass back.

Learning from the Metal Up

Computer Science instruction starts from the metal, most often. The computer's silicon is the metal. You start there and move up. You learn about CPUs, registers, you may learn Assembly or C, then move your way up over the years to a higher level language like Python or Java. Only then will you think about Web APIs and JSON.

You don't learn anything about user interaction or user empathy. You don't learn about shipping updates or test driven development. You learn about algorithms and Turing. You build compilers and abstract syntax trees and frankly, you don't build anything useful from a human perspective. I wrote a file system driver in Minix. I created new languages and built parsers and lexers.

  • When you type cnn.com and press enter, you can pretty much tell what happens, from the address bar down to the electrons. AND I LOVE IT.
  • You feel like you own the whole stack and you understand computers like your mechanic friends understand internal combustion engines.
  • You'll open the hood of a car and look around before you drive it.
  • You'll open up a decompiler and start poking around to learn.
  • When you learn something new, you want to open it up and see what makes it tick. You want to see how it relates to what you already know.
  • If you need to understand the implementation details then an abstraction is leaking.
  • You know you will be successful because you can have a FEEL for the whole system from the computer science perspective.

Are you this person? Were you wired this way or did you learn it? If you teach this way AND it lines up with how your students learn, everyone will be successful.

Learning from the Glass Back

"Learning to code" instruction starts from the monitor, most often. Or even the user's eyeballs. What will they experience? Let's start with a web page and move deeper towards the backend from there.

You draw user interfaces and talk about user stories and what it looks like on the screen. You know the CPU is there and how it works, but CPU internals don't light you up. If you wanted to learn more, you know it's out there on YouTube or Wikipedia. But right now you want to build an application for PEOPLE and the nuts and bolts are less important.

  • When you type cnn.com and press enter, you know what to expect, and the intermediate steps are an implementation detail.
  • You feel like you own the whole experience and you understand people and what they want from the computer.
  • You want to drive a car around a while and get a feel for it before you pop the hood.
  • You'll open F12 tools and start poking around to learn.
  • When you learn something new, you want to see examples of how it's used in the real world so you can build upon them.
  • If you need to understand the implementation details then someone in another department didn't do their job.
  • You know you will be successful because you can have a FEEL for the whole system from the user's perspective.

Are you this person? Were you wired this way or did you learn it? If you teach this way AND it lines up with how your students learn, everyone will be successful.

Conclusion

Everyone is different and everyone learns differently. When teaching folks to code you need to be aware of not only their goals, but also their learning style. Be aware of their classical learning style AND the way they think about computers and technology.

My personal internal bias sometimes has me asking "HOW DO YOU NOT WANT TO KNOW ABOUT THE TOASTER INTERNALS?!?!" But that not only doesn't ship the product, it minimizes the way that others learn and what their educational goals are.

I want to take apart the toaster. That's OK. But someone else is more interested in getting the toast to make a BLT. And that's OK.




© 2016 Scott Hanselman. All rights reserved.

Check Out What ConfigMgr Customers were Doing During the 2016 Holiday Break


Happy New Year!!! I hope all of you had the opportunity to relax and spend time with your loved ones.

Earlier this week, as we all came back from a little time off, the first thing everyone on my team did was dig into the telemetry and signal that had come in over the last two weeks. It didn't take much digging to see something incredible.

What we found was an answer to a question we had posed before everyone left for break: How many upgrades will there be during the last two weeks of the year?

This question is significant because we know that many organizations lock down significant changes at the end of the year, and we also know that there are many organizations that use this time when the offices slow down to perform upgrades and make changes.

All of these factors had us wondering what would happen with the upgrades to ConfigMgr Current Branch.

Below is a chart showing the 26k+ unique organizations that have upgraded to Current Branch. Take a look at the chart and then let's dig into what we learned from it.

[Chart: cumulative count of organizations on ConfigMgr Current Branch over time, broken down by version]

The yellow section at the top of the chart represents tenants that are on ConfigMgr 1610. We released 1610 in early December but did not make it available to all Current Branch customers until December 8. As you look at the yellow area, you can see exactly when we made it generally available: it's where the slope of the line accelerates (Point A on the chart). You can see that throughout December the pace at which organizations upgraded to 1610 was both consistent and fast.

The next big data point is the slope of the cumulative number of tenants, especially during the last two weeks of the year (Point B). Notice that the number of customers on ConfigMgr Current Branch flattened during those two weeks. This means that there were very few customers upgrading from ConfigMgr 2007 or 2012 to Current Branch during those two weeks.

One final piece of the picture to consider: Look at the slope of the line of customers upgrading to 1610 vs. the slope of the lines going to 1606 and 1602. The slope (i.e., the rate of customers upgrading to that update) accelerates with each release!!! As an engineer, this is one of the most exciting data points possible.

Here's everything we learned in a nutshell:

• The upgrade from 2007 or 2012 to Current Branch is considered to be a significant update, and very few organizations were willing/able to make that change during the last two weeks of the year.
• Customers who were already on Current Branch were more than willing to upgrade during that time to 1610, since it is significantly easier, safer, and faster to update Current Branch (directly in console!) than to upgrade from 2007/2012 to Current Branch. Once on Current Branch, customers see updates as low risk.
• The fact that the rate at which people upgrade accelerates with each release tells us that once a customer upgrades to Current Branch, their confidence in the reliability and simplicity of keeping up-to-date increases. As an engineer building products/services, this is the most rewarding data that I find in the chart!
• One of the other really interesting pieces of data is the average time it takes for an upgrade to complete. Based on what we see here, that amount of time is, on average, shrinking by 50% with each upgrade.
• Bearing in mind that every ConfigMgr deployment is, of course, unique (they are all different sizes, with different numbers of groups and nested groups), we see that the full upgrade from 2012 to 1511 required about 15 hours, the upgrade to 1602 was about 8 hours, the upgrade to 1606 was about 4 hours, and the average upgrade to 1610 takes about 2 hours. Wow.

I have a request for the customers still on 1511: Set aside the time to update to 1610 this week. Try it, you'll like it.

As you do this, your confidence in the stability and simplicity will grow and you'll be confident in keeping up with the updates. After all, 1702 is just around the corner. Here is the link to the directions on how to do the in-console upgrade right now.

For customers still on 1602: hey, you've upgraded once, now just do it again!

Add Intelligence to Any SQL App, with the Power of Deep Learning


Re-posted from the SQL Server blog.

Recent results and applications involving Deep Learning have proven to be incredibly promising, and across a diverse set of areas too, including speech recognition, language understanding, computer vision and more. Deep Learning is changing customer expectations and experiences around a variety of products and mobile apps, whether we're aware of it or not. That's definitely true of Microsoft apps you're likely to be using every day, such as Skype, Office 365, Cortana or Bing. As we've mentioned before, our Deep Learning based language translation in Skype was recently named one of the 7 greatest software innovations of 2016 by Popular Science, a true technological milestone, with machines now sitting at or above human parity when it comes to recognizing conversational speech.

As a result of these developments, it's only a matter of time before intelligence powered by Deep Learning becomes an expectation of any app.

In a new blog post, Rimma Nehme addresses the question of how easy it might be for your typical SQL Server developer to integrate Deep Learning into their app. This question is especially timely in light of the recent enhancement to SQL Server 2016 through the integration of R Services, with powerful ML functions, including deep neural networks (DNNs), as a core part of it.

Can we help you turn any SQL app into a truly 'intelligent' app, and ideally with just a few lines of code?

To find out, read the original blog post; the answer may surprise you.
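As a rough illustration of the pattern, and not code from Rimma's post, the sketch below shows how a client app might call SQL Server 2016's sp_execute_external_script to run R in-database; the connection string, table name and toy R snippet are all hypothetical placeholders:

```python
# Hypothetical sketch: invoke in-database R from a SQL Server 2016 client.
# Connection details, the table and the R snippet are placeholders only.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 13 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"
)

tsql = """
EXEC sp_execute_external_script
    @language = N'R',
    @script = N'OutputDataSet <- data.frame(score = mean(InputDataSet$value));',
    @input_data_1 = N'SELECT value FROM dbo.Readings'
WITH RESULT SETS ((score FLOAT));
"""

# The app consumes the R result like any other query result set; a trained
# DNN from R Server could be swapped in for the toy mean() above without
# changing this client-side code.
row = conn.cursor().execute(tsql).fetchone()
print("score:", row.score)
```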


CIML Blog Team

VS Team Services Update – Jan 6


Next week we will be rolling out our sprint 110 and 111 updates (we didn't do a 110 deployment due to the holidays). You can check out the release notes for details. Please bear with us – these changes are going to roll out a bit slower than usual. As I write this, we are waiting for a major snow storm to hit North Carolina and we are expecting a pretty interrupted work schedule into early next week. As such, most people probably won't have access to these changes until mid next week (~Jan 11th).

When you do get them, you'll find there's quite a lot of new stuff. The thing I'm most excited to hear your feedback about is all the UX changes we've been making to make Team Services more personal and more approachable – the new account-level experience and the new project home page and associated navigation changes, especially. Please check it out and give us your feedback.

We've also implemented a consistent way for you to learn about and enable the various preview features that we introduce. Particularly with the bigger UX changes we've been making, we're increasingly introducing them as "opt-in" changes so we can collect feedback and refine them before we roll them out broadly. We hope it also gives you some opportunity to decide when is a convenient time to absorb the changes. This release introduces two previews – one for the account experience and one for some notifications changes. We've got more previews like that coming in the next few sprints.

Check it out and let us know what you think!

Brian

New Windows 10 devices unveiled at CES 2017 unlock the creator in each of us


Happy New Year!

And thank you for making 2016 an amazing year for Windows 10 across PCs, tablets, all-in-1’s, 2-in-1’s, mixed reality, and Xbox One. As we move toward the release of the Windows 10 Creators Update, I’ve spent the week at the annual Consumer Electronics Show (CES) in Las Vegas, seeing all kinds of innovation firsthand. It’s incredible to see the creativity of our partners and so many teams around the world who are building devices for each of us.

Empowering you to create and play continues to be at the core of how we build Windows, whether you’re a student, educator, gamer, creative or mobile professional. This week our partners announced some fantastic new devices at CES.

Acer, Dell, HP, Lenovo, LG, MSI, Samsung, and Toshiba introduced Windows 10 devices in a variety of form factors, such as 2-in-1 convertibles, laptops, desktops and gaming PCs.

Key themes among partner devices include VR-ready gaming PCs featuring the latest NVIDIA graphics cards, increased power and performance with Intel’s latest 7th Gen Core i7 processor, lighter devices with longer battery life for greater portability than ever, OLED and 4K screens for the most stunning visuals, infrared cameras and fingerprint readers that unlock Windows Hello, and much more.

Here are some highlights:

Acer launched cutting-edge gaming PCs that deliver performance, design innovation, and immersive experiences

The Predator 21 X gaming laptop powered by Windows 10

Acer introduced the highly anticipated curved-screen Predator 21 X gaming laptop. It offers a curved 21-inch IPS display and delivers a truly immersive gaming experience, especially when combined with the notebook’s Tobii eye-tracking technology. The curved screen features NVIDIA G-SYNC technology, ensuring smooth and sharp gameplay. Acer also introduced a VR-ready PC: the Aspire GX desktop, which delivers stunning 4K resolution visuals and can support up to four displays at once for a maximized gaming experience. To learn more about what Acer announced at CES, visit this link.

Dell delivered first-of-a-kind PC and peripheral innovations that empower creators, and introduced the new XPS 13 2-in-1

Dell Canvas

Dell unveiled a series of innovations built with Windows 10 to redefine personal computing – including the Dell Canvas, a horizontal smart workspace with touch, totem and pen capabilities; a 13-inch 2-in-1; XPS and Precision All-in-Ones; the Dell UltraSharp 32 Ultra HD 8K Monitor; Dell’s most powerful VR-ready mobile workstation, and a wireless charging 2-in-1 for the ultimate “no wires” experience. The Dell Canvas with Windows 10 is a new category of smart workspace technology that expands creative productivity for content developers and designers; the 27-inch QHD smart workspace can be used at an angle or flat on a desk so you can create, communicate, and express your ideas as naturally as you do with pen on paper. Powered by virtually any Windows 10 device, Dell Canvas also plugs seamlessly into software solutions from partners including Adobe, Autodesk, AVID, Dassault Systems, SolidWorks, and Microsoft to unleash the creative genius in everyone.

Dell also introduced an all new XPS 13 2-in-1 that offers a 360-degree hinge for multiple productivity and viewing options, up to 15 hours of battery life, and eye-popping clarity with a gorgeous QHD+ (5.7M pixels) InfinityEdge touch display. The fanless design keeps it silent and it can be configured with 7th Gen Intel Core vPro processors. To learn more about what Dell announced at CES, visit this link.

HP delivered power and innovative design in the redesigned Sprout Pro, and their latest 2-in-1 for business

The new Sprout Pro by HP powered by Windows 10

HP introduced the redesigned Sprout Pro, an immersive all-in-one PC that incorporates advanced technologies and new features that empower you to create highly visual content and interactive experiences by blending the physical and digital worlds. HP has evolved Sprout Pro’s software to make it easy to interact between Windows 10 Pro and Sprout’s unique HD resolution projector, touch mat, and 2D/3D cameras. The Sprout Pro now features a hefty Intel Core i7 processor, 1TB of SSHD storage, up to 16GB of RAM, NVIDIA GeForce GTX 960M graphics for faster 3D scanning, and Windows 10 Pro with its innovative and secure experiences.

HP also announced the HP EliteBook x360 1030, an incredibly thin 14.9mm business-class convertible featuring the iconic Elite design and durability. This device is built for performance with Intel 7th Generation Core processors and runs Windows 10 Pro. Integrated collaboration capabilities enhance productivity with dedicated conferencing keys, audio features that bring new life to meetings, and an optional 13.3-inch diagonal 4K UHD display with webcam and pen support that makes creating content easier than ever. To learn more about what HP announced at CES, visit this link.

Lenovo introduced a new gaming brand and powerful state-of-the-art gaming laptops

Lenovo launched a dedicated sub-brand for Lenovo gaming, called Lenovo Legion. Lenovo’s new Legion offerings come in the form of two powerful gaming laptops designed for mainstream and enthusiast players. The Lenovo Legion Y720 and Y520 laptops offer gamers state-of-the-art technology to fully immerse in the game. These new PCs allow for a greater gaming experience in every sense—powering VR through the latest NVIDIA graphics, better sound with Dolby Atmos, and increased power with Intel’s latest 7th Gen Core i7 processor. In addition to these new gaming laptops, Lenovo announced updates to the ThinkPad X1 family of devices that are thinner, lighter, and more powerful than previous models. To learn more about what Lenovo announced at CES, visit this link.

LG delivered superior portability and battery life

LG Gram laptops powered by Windows 10

The feather-light LG Gram laptops feature 7th generation Intel Core i7 processors, up to 512GB SSD, max 16GB DDR4 Dual Channel Memory and Full HD IPS panels that support In-Touch Display technology. The newest LG Gram laptops come in three different screen sizes – the 13.3-inch and 14.0-inch models weigh in at 940 grams and 970 grams, respectively, while the largest model weighs only 1,090 grams despite its impressive 15.6-inch screen. Super slim bezels frame the screen to create a sophisticated, near-edgeless touchscreen that is great for interacting naturally with Windows 10 and using Windows Ink to make notes and annotate directly on webpages in Microsoft Edge. To learn more about what LG announced at CES, visit this link.

Game on with new MSI Laptops that pack the power of traditional desktops


MSI launched a brand-new lineup of gaming devices powered by Windows 10, the latest Intel 7th generation CPUs, and NVIDIA GeForce GTX 10 GPUs. Brand-new VR-ready gaming laptops, gaming motherboards, ultimate gaming graphics cards equipped with TWIN FROZR VI, gaming desktop PCs and Gaming Gear peripherals bring gamers all over the world the latest and greatest technologies. New MSI GT83VR, GT73VR, GS73VR, GS63VR, and GS43VR gaming laptops are powered by Windows 10, Intel 7th generation CPUs and NVIDIA GeForce GTX1050 Ti & GTX1050 gaming graphics cards, feature enhanced audio experiences, and offer the power of a desktop, including fast and smooth VR experiences, to gamers on the go. To learn more about what MSI announced at CES, visit this link.

Samsung unveiled their first gaming PC powered by Windows 10, the Samsung Notebook Odyssey


This week Samsung unveiled their first-ever gaming PC, the Samsung Notebook Odyssey powered by Windows 10. Available in 17.3-inch and 15.6-inch models, the Samsung Notebook Odyssey packs power into a beautiful design with premium features for a premium gaming experience. Powered by a 7th Generation Intel Core i7 processor (Quad Core 45W), both models of the Samsung Notebook Odyssey offer lightning-fast performance with premium graphics technologies. In addition to its high-performing engine, the Samsung Notebook Odyssey 15-inch offers a beautiful viewing experience, including a backlight that goes up to 280 nits in brightness for crystal-clear images. Samsung also showcased their recently updated Samsung Notebook 9 15-inch laptop. The Notebook 9 15-inch is equipped with the latest 7th Generation Intel Core i7 processor, features a vibrant Full HD display, has a built-in fingerprint reader that enables password-free sign-in with Windows Hello, and is packed with a battery that can last as long as 15 hours on a single charge. To learn more about what Samsung announced at CES, visit this link.

Toshiba introduced a powerful premium 2-in-1 for business

The Portégé X20W with Windows 10

Toshiba introduced the new Portégé X20W, a premium 2-in-1 convertible PC running Windows 10 Pro. Measuring 15.4mm thin and weighing less than 2.5 pounds, the Portégé X20W has a battery life of up to 16 hours and features multi-directional microphones to support Cortana, a pair of IR cameras for easy and secure sign-in with Windows Hello, and a touchscreen for digital inking with Windows Ink or marking up webpages in Microsoft Edge. The Portégé X20W is designed for mobile professionals, educators, and students. To learn more about what Toshiba announced at CES, visit this link.

These great new devices are just the beginning of how Windows will enable more creativity in 2017. As we announced in October, the Windows 10 Creators Update will let customers access amazing mixed reality content with the Universal Windows Platform catalog on head-mounted displays from hardware partners like Acer, ASUS, Dell, HP, Lenovo, and 3Glasses. And as we announced in December, with Windows 10 on ARM, we are addressing the growing desire for more lightweight, portable, more battery-efficient PCs. By partnering Qualcomm hardware with the Windows 10 Creators Update and emulation technology, we’re opening the doors for our partners to build innovative devices that will create diversity in the marketplace and bring more choice to people and businesses that are looking for always-connected mobile computing experiences.

I want to thank all our partners for the great work they are doing to develop hardware that is designed to maximize all the benefits of the Windows 10 Creators Update. These partnerships ensure an entire ecosystem of devices to choose from so that each of us can unleash our inner creator, and 2017 is shaping up to bring even more exciting and compelling options than ever before.

Terry

The post New Windows 10 devices unveiled at CES 2017 unlock the creator in each of us appeared first on Windows Experience Blog.
