
Announcing SQL Server Management Studio 16.5 Release


Today, we are pleased to announce the latest generally available (GA) quality release of SQL Server Management Studio (SSMS) 16.5. This update includes fixes for high-DPI and reconnection issues, among many others. Check out the list of fixes below.

Get it here:                                                          

Download SSMS 16.5 release

  • The version number for the latest release is 13.0.16000.28

New in this release

  1. Fixed an issue where a crash could occur when clicking a database with a table name containing “;:”.
  2. Fixed an issue where changes made to the Model page in the AS Tabular Database Properties window would script out the original definition. Microsoft Connect Item: 3080744
  3. Fixed an issue where temporary files were added to the “Recent Files” list. Microsoft Connect Item: 2558789
  4. Fixed an issue where the “Manage Compression” menu item was disabled for user table nodes in the Object Explorer tree. Microsoft Connect Item: 3104616
  5. Fixed an issue where the user could not set the font size for Object Explorer, Registered Servers, Template Explorer, and Object Explorer Details. These explorers now use the Environment font. Microsoft Connect Item: 691432
  6. Fixed an issue where SSMS always reconnected to the default database when a connection was lost. Microsoft Connect Item: 3102337
  7. Fixed many high-DPI issues in Policy Management and the query editor window, including the execution plan icons.
  8. Fixed an issue where the option to configure fonts and colors for Extended Events was missing.
  9. Fixed SSMS crashes that occurred when closing the application or when it attempted to show an error dialog.

Please visit the SSMS download page for additional details, and to see the full changelog.

Known Issues
The full list of known issues is available in the release notes.

Contact us
As always, if you have any questions or feedback, please visit our forum or Microsoft Connect page. You can also tweet our Engineering Manager at @sqltoolsguy on Twitter. We are fully committed to improving the SSMS experience and look forward to hearing from you!


Recommendations to speed C++ builds in Visual Studio


In this blog, I will discuss features, techniques and tools you can use to reduce build time for C++ projects. The primary focus of this post is to improve developer build time for the Debug Configuration as a part of your Edit/Build/Debug cycle (inner development loop). These recommendations are a result of investigating build issues across several projects.

Developers invoke build frequently while writing and debugging code, so improvements here can have a large impact on productivity. Many of the recommendations focus on this stage, but others also carry over to build lab scenarios and to clean builds with optimizations for end-to-end functional testing, performance testing, and release.

Our recommendations are summarized in the Recommendations section below.

Before We Get Started

First, I want to highlight the search feature in project settings, which makes it easy to locate and modify any project setting.

  1. Bring up the project properties and expand sub groups for the tool you are interested in.
  2. Select the “All options” sub group and search for the setting by name or by its command line switch, e.g., Multi-processor or /MP, as shown in the figure below.
  3. If you cannot find the setting through search, select the “Command Line” sub group and specify the switch under Additional Options.

Recommendations

Specific recommendations include:

  • DO USE PCH for projects
  • DO include commonly used system, runtime and third party headers in PCH
  • DO include rarely changing project specific headers in PCH
  • DO NOT include headers that change frequently
  • DO audit PCH regularly to keep it up to date with product churn
  • DO USE /MP
  • DO remove /Gm in favor of /MP
  • DO resolve conflicts with #import and use /MP
  • DO USE linker switch /incremental
  • DO USE linker switch /debug:fastlink
  • DO consider using a third party build accelerator

Precompiled Header

Precompiled headers (PCH) reduce build time significantly but require effort to set up and maintain for the best results. I have investigated several projects that either didn’t have a PCH or had one that was out of date. Once a PCH was added or updated to reflect the current state of the project, compile time for individual source files in the project dropped by 4-8x (from ~4s to <1s).

An ideal PCH is one that includes headers that meet the following criteria:

  • Headers that don’t change often.
  • Headers included across a large number of source files in the project.

System (SDK), runtime, and third party library headers generally meet the first criterion and are good candidates to include in the PCH. Creating a PCH with just these files can significantly improve build times. In addition, you can include your project specific headers in the PCH if they don’t change often.
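
As a rough illustration, a PCH header for such a project might look like the sketch below; the file and header names are illustrative, not a prescription:

    // stdafx.h (often named pch.h) - a minimal precompiled header sketch.
    #pragma once

    // System / SDK headers: large and rarely changing.
    #include <windows.h>

    // C++ runtime / standard library headers used across many source files.
    #include <string>
    #include <vector>
    #include <map>
    #include <memory>

    // Rarely changing project-specific headers (illustrative):
    // #include "common/Logging.h"

    // Do NOT add headers that change frequently: any edit to a header in
    // the PCH forces the PCH, and every file that uses it, to rebuild.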

The Wikipedia article on the topic, or a search for ‘precompiled headers’, is a good starting point to learn about PCH. In a future blog post I will talk about PCH in more detail, as well as tools to help maintain PCH files.

Recommendation:
  • DO USE PCH for projects
  • DO include commonly used system, runtime and third party headers in PCH
  • DO include rarely changing project specific headers in PCH
  • DO NOT include headers that change frequently
  • DO audit PCH regularly to keep it up to date with product churn

/MP – Parallelize compilation of source files

The /MP switch invokes multiple instances of cl.exe to compile project source files in parallel. See the documentation for /MP for a detailed discussion of the switch, including conflicts with other compiler features. In addition to the documentation, this blog post has good information about the switch.

Resolving conflicts with other compiler features
  • /Gm (enable minimal rebuild): I recommend using /MP over /Gm to reduce build time.
  • #import: The documentation for /MP discusses one option to resolve this conflict. Another option is to move all #import directives to the precompiled header, as shown in the sketch after this list.
  • /Yc (create precompiled header): /MP does not help with creating the precompiled header, so this is not an issue.
  • /EP, /E, /showIncludes: These switches are typically used to diagnose issues and so should not be a concern.
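
For the #import conflict, a sketch of the second option follows; the type library name is purely illustrative, and whether this fits depends on your project:

    // stdafx.h - moving all #import directives into the precompiled header
    // compiles the generated type-library headers only once and sidesteps
    // the conflict between #import and /MP.
    #pragma once

    #import "MyComDependency.tlb" no_namespace raw_interfaces_only  // illustrative

    #include <windows.h>
    #include <string>
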
Recommendation:
  • DO USE /MP
  • DO remove /Gm in favor of /MP
  • DO resolve conflicts with #import and use /MP

/incremental – Incremental link

Incremental linking significantly speeds up link times. With this feature turned on, the linker processes just the diffs between two links to generate the image, speeding up link times by 4-10x in most cases after the first build. In VS 2015 this feature was enhanced to handle additional common scenarios that were previously not supported.

Recommendation:
  • DO USE linker switch /incremental

/debug:fastlink – Faster generation of debug information

The linker spends significant time collecting and merging debug information into one PDB. With this switch, debug information stays distributed across the input object and library files. Link time for medium and large projects can speed up by as much as 2x. Blog posts from the Visual C++ team discuss this feature in detail.

Recommendation:
  • DO USE linker switch /debug:fastlink

Third party build accelerators

Build accelerators analyze MSBuild projects and create a build plan that optimizes resource usage. They can optionally distribute builds across machines. Following are a couple of build accelerators that you may find beneficial.

  • Incredibuild: A link to install the VS extension is available under New project > Build accelerators. Visit their website for more information.
  • Electric Cloud: Visit their website for a download link and more information.

In addition to improving build time, these accelerators help you identify build bottlenecks through build visualization and analysis tools.

Recommendation:
  • DO consider using a third party build accelerator

Sign up to get help

After you have tried out the recommendations and need further help from the Microsoft C++ team, you can sign up here. Our product team will get in touch with you.

If you run into any problems like crashes, let us know via the Report a Problem option, either from the installer or the Visual Studio IDE itself. You can also email us your query or feedback if you choose to interact with us directly! For new feature suggestions, let us know through User Voice.

Bing helps developers connect more naturally with people

Artificial Intelligence (AI) is central to our ambitions as a company. Through this intelligence, we want to empower everyone to achieve more. Central to that mission is leveraging the knowledge of Bing and the services that Bing has created to make knowledge accessible and usable by all.
 
In this first post on how Bing technology is being used by developers across the industry, we touch upon our work to enable more natural and conversational experiences that are designed around people.
 
Today we are witnessing a shift towards more natural, conversational computing. While this shift is still early and evolving, many groups already see opportunities – from developers who view this as the new platform to create on with bots, to businesses who see a new way to connect with customers, to individuals who are eager for more natural ways to discover, access and interact digitally.
 
Our services continue to evolve through our direct experience delivering new intelligent experiences at scale. Our popular chatbots Xiaoice in China and Rinna in Japan already have more than 90 million users. Built using Bing technology, Xiaoice has formed strong emotional connections with its users. Cortana, the personal digital assistant, also built upon Bing services, is used globally by more than 133 million people each month.
 
Beyond Microsoft’s first party experiences, we focus on applying Bing technology to create building blocks for developers to understand the user (intents, contexts, disambiguation), extract knowledge (insights, facts, information) and intelligence (natural language, safe search). Below are just a couple of examples.
 
A search engine is traditionally known for its vast index of the web, a digital map of the planet, a comprehensive index of public images and videos, and all the news you ever wanted. Ask it anything, and you typically get back useful information that answers your query across all these domains.
 
Bing additionally has a multi-domain knowledge graph. To put our knowledge graph into perspective, think about a person, place, or thing—say a sports team. That sports team has a name, upcoming game schedule, a team roster with individual player statistics, news, pictures, videos, maps to the venues they play, and weather forecasts for the days they will be playing. All this information is associated with that one sports team, which, in our knowledge graph, is one node/entity. And we have billions—all interlinked to each other so they can be conversationally traversed to bring knowledge into experiences, including conversations.
 

   

 

The Bing Sportscaster bot on Facebook Messenger taps into Bing’s knowledge and intelligence to keep users up-to-date with news, facts, scores, schedules and more about their favorite teams.
 
When we launched Bing, we recognized the value of integrating deep smarts and controls into our search stack, to enable experiences which work better for people.
 
One example is image analysis and the safe search settings that are valued by parents and educators. We use AI to detect and filter out inappropriate search results, including both explicit adult and racy content. We offer this capability to developers as part of our Bing Search API.
 


This is just a taste of the capabilities that we make available to developers. Developers and businesses are using Bing services right now to build new natural and conversational experiences for people.
 
It is an exciting time to be working to enable this changing technology landscape.
 
In our next post for this series, we will take a deeper look at how Bing Search APIs help power our partners’ solutions.
 
For more information on the services we make available to power chatbots, apps and new business opportunities, please visit our partner website or contact us. We are always happy to connect!
 
Check out some of our posts from the past for more information about Bing services.

- The Bing Team
 

 

Office 2013 can now block macros to help prevent infection


In response to the growing trend of macro-based threats, a new feature in Office 2016 allows an enterprise administrator to block users from running macros in Office documents that originated from the Internet.

This feature was documented back in March: New feature in Office 2016 can block macros and help prevent infection, and the predominant customer request we received was for this feature to be added to Office 2013.

We are pleased to announce that, as of September 2016, this feature is now part of Office 2013 – and it works in the same way as it does in Office 2016.

Administrators can enable this feature for Word, Excel, and PowerPoint by configuring it under the respective application’s Group Policy Administrative Templates for Office 2013.

For more information on how this feature works, and some background information on how macros can be abused for malware, see our blog from March 2016.

Network Virtualization in the Windows Server 2016 Software Defined Networking (SDN) Stack


By Jason Messer, Senior Program Manager

Using VXLAN for Encapsulation and OVSDB for Policy Distribution

Windows Server 2016 is the perfect platform for building your Software-Defined Data Center (SDDC), with new layers of security and Azure-inspired innovation for hosting business applications and infrastructure. A critical piece of this SDDC is the new Software Defined Networking (SDN) stack, which provides agility, dynamic security, and hybrid flexibility by enforcing network policy in the Hyper-V Virtual Switch using the Azure Virtual Filtering Platform (VFP) switch extension. Instead of programming network configurations into a physical switch using CLI, NetConf, or OpenFlow, network policy is delivered from the new Microsoft Network Controller to the Hyper-V hosts using the OVSDB protocol and programmed into the VFP extension of the vSwitch by a Host Agent, which enforces the policy. By creating overlay virtual networks (VXLAN tunnels / logical switches) and endpoints which terminate in the vSwitch, each Hyper-V host becomes a software VXLAN Tunnel End Point (VTEP).

Note: This is a technical post focusing on networking protocols and some implementation details.

Virtual Networking

Overlays, VXLAN, virtual networking, HNV, encapsulation, NVGRE, logical switch… why should you care about all these esoteric networking terms? Maybe you have heard hard-core networking types mention these in passing or have customers asking how Microsoft’s network virtualization solution compares with other solutions. Why should you care? Because just as compute and storage have been virtualized, traditional networking devices and services are also being virtualized for greater flexibility.

Server hardware is now virtualized through software to mimic CPUs, memory, and disks to create virtual machines. Network hardware is also being virtualized to mimic switches, routers, firewalls, gateways, and load balancers to create virtual networks. Not only do virtual networks provide isolation between workloads, tenants, and business units but they also allow IT and network administrators to configure networks and define policy with agility while realizing increased flexibility in where VMs are deployed and workloads run.

Virtual networks still require physical hardware and IP networks to connect servers and VMs together. However, the packets transmitted between VMs across these physical links are encapsulated within physical network IP packets to create an overlay network (see Figure 1). This means that the original packet from the VM, with its MAC and IP addresses, TCP/UDP ports, and data, remains unchanged and is simply placed inside an IP packet on the physical network. The physical network underneath is then known as the underlay or transport network – traditionally Microsoft has called this the HNV Provider (or PA) network.

Figure 1 – VXLAN Encapsulation
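
For concreteness, here is the 8-byte VXLAN header from RFC 7348 sketched as a C++ struct. This is an illustration of the wire format described above, not code from the Windows stack:

    #include <cstdint>

    // VXLAN header (RFC 7348): 8 bytes carried in a UDP datagram
    // (destination port 4789) on the underlay network; the original
    // (inner) Ethernet frame from the VM follows immediately after it.
    #pragma pack(push, 1)
    struct VxlanHeader {
        uint8_t flags;         // 0x08 => the VNI field is valid
        uint8_t reserved1[3];
        uint8_t vni[3];        // 24-bit VNI: ~16.7 million segments
        uint8_t reserved2;
    };
    #pragma pack(pop)

    // Extracts the 24-bit virtual subnet identifier.
    inline uint32_t GetVni(const VxlanHeader& h) {
        return (uint32_t(h.vni[0]) << 16) | (uint32_t(h.vni[1]) << 8) | h.vni[2];
    }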

The idea of network virtualization to guarantee isolation has been around for some time in the form of VLANs. VLANs allow network traffic to be “tagged” with an identifier to create logical network segments and segregate network traffic into broadcast and isolation domains. However, VLANs are largely static configurations programmed on individual switch ports and network adapters. Anytime a server or VM moves, the VLAN configuration must be updated (sometimes in multiple places) and the IP addresses of that workload or VM may need to be changed as well. Moreover, since VLANs use a 12-bit field for the network identifier, there is a limit of 4096 logical network segments which can be created.

Hyper-V Network Virtualization (HNV)

The network virtualization solution in Windows Server 2012 and 2012 R2 – Hyper-V Network Virtualization (HNV) – used an encapsulation format known as NVGRE (RFC 7637) to create overlay networks based on network policy managed through SCVMM and programmed through WMI / PowerShell. Another popular industry encapsulation protocol is VXLAN (RFC 7348), which includes guidance on how to distribute or exchange VXLAN Tunnel Endpoint (VTEP) information between virtual and physical devices – e.g., hardware VTEPs.

Note: The HNV solution in Windows Server 2012 and 2012R2 which used NVGRE for encapsulation and WMI/PowerShell for management is still available in Windows Server 2016. We strongly recommend customers move to Windows Server 2016 and the new SDN stack, as the bulk of development and innovation will occur on this stack as opposed to HNVv1.

In talking with customers, we observed some confusion around which encapsulation protocol to use (VXLAN or NVGRE), which was taking focus away from the higher-level value of network virtualization. Consequently, in Windows Server 2016 (WS2016), we support both NVGRE and VXLAN encapsulation protocols, with the default being VXLAN. We also built the Microsoft Network Controller as a programmable surface to create and manage these virtual networks and apply fine-grained policies for security, load balancing, and Quality of Service (QoS). A distinct management-plane – either PowerShell scripts, System Center Virtual Machine Manager (SCVMM), or the Microsoft Azure Stack (MAS) components – programs network policy through the RESTful API exposed by the Microsoft Network Controller. The Network Controller then distributes this policy to each of the Hyper-V hosts using the OVSDB Protocol and a set of schemas to represent virtual networks, ACLs, user-defined routing, and other policy.

Like VLANs, both NVGRE and VXLAN provide isolation by including an identifier (e.g., the VXLAN Network Identifier – VNI) to identify the logical network segment (virtual subnet). In WS2016, multiple VNIs (or virtual subnets) can be combined within a routing domain so that isolation between tenant virtual networks is maintained, thereby allowing for overlapping IP address spaces. A network compartment is created on the Hyper-V host for each routing domain, with a Distributed Router used to route traffic between virtual subnets for a given tenant. Admins can also create User-Defined Routes to chain virtual appliances into the traffic path for increased security and functionality. Unlike physical networks with VLANs, where policy is closely tied to location and the physical port to which a server (hosting a VM) is attached, a network endpoint (VM) is free to move across the datacenter while ensuring that all policy moves along with it.
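
As a toy model (not the actual WS2016 implementation), the relationship between routing domains, virtual subnets, and user-defined routes can be pictured like this:

    #include <cstdint>
    #include <vector>

    // Toy model of tenant isolation; names and types are illustrative.
    struct VirtualSubnet {
        uint32_t vni;        // VXLAN Network Identifier for this subnet
        uint32_t prefix;     // subnet prefix, e.g. 10.0.1.0
        uint8_t  prefixLen;  // e.g. 24
    };

    struct UserDefinedRoute {
        uint32_t prefix;     // destination to match (longest prefix wins)
        uint8_t  prefixLen;
        uint32_t nextHop;    // e.g. a virtual appliance to chain through
    };

    // One routing domain per tenant. The distributed router on each host
    // consults only that tenant's domain, so tenant A's 10.0.0.0/24 can
    // coexist with tenant B's identical 10.0.0.0/24.
    struct RoutingDomain {
        std::vector<VirtualSubnet>    subnets;
        std::vector<UserDefinedRoute> userDefinedRoutes;
    };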

As network equipment manufacturers build support for VXLAN into their NIC cards, switches, and routers, they can support:

  • Encapsulation Task Offloads to offload operations from the OS and Host CPU onto the NIC Card
  • ECMP Spreading using the UDP source port as a hash to distribute connections

Microsoft has worked with the major NIC vendors to ensure support exists for both NVGRE and VXLAN task offloads in Windows Server 2016 NIC drivers. These offloads take the processing burden off the host CPU and instead perform functions such as LSO and inner checksums on the physical NIC card itself. Moreover, Microsoft conforms to the standard VXLAN UDP source port hash over the inner packet, so ECMP spreading for different connections will just work with ECMP-enabled routers.
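
A rough sketch of why the source-port hash enables ECMP spreading: the VTEP derives the outer UDP source port from a hash of the inner packet's flow fields, so each flow lands on a stable but distinct path through ECMP routers. This is illustrative, not the actual NIC or driver code:

    #include <cstdint>
    #include <functional>

    // Illustrative only: derive the outer UDP source port from the inner
    // flow's 5-tuple so ECMP routers spread different flows across paths.
    // RFC 7348 suggests using the dynamic port range (49152-65535).
    uint16_t OuterUdpSourcePort(uint32_t srcIp, uint32_t dstIp,
                                uint16_t srcPort, uint16_t dstPort,
                                uint8_t protocol) {
        uint64_t key = (uint64_t(srcIp) << 32) ^ dstIp;
        key ^= (uint64_t(srcPort) << 24) ^ (uint64_t(dstPort) << 8) ^ protocol;
        uint64_t h = std::hash<uint64_t>{}(key);
        return uint16_t(49152 + (h % 16384));  // 65535 - 49152 + 1 = 16384
    }

Because the hash covers only the inner flow fields, every packet of a given connection keeps the same outer source port, and therefore the same path, which avoids packet reordering.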

VXLAN Implementation in Windows Server 2016

TCP/IP stacks rely on the Address Resolution Protocol (ARP) and port learning performed by traditional layer-2 switches to determine the MAC address of the remote hosts and the ports on a switch to which they are connected. Overlay Virtual Networking encapsulates the VM traffic’s packet headers and data (inner packet) inside of a Layer-3 IP (outer) packet transmitted on the underlay (physical) network. Therefore, before a VM attached to a virtual network can send a unicast packet destined for another VM in the same virtual subnet, it must first learn or be told the remote VM MAC address as well as the VTEP IP address of the host on which the VM is running. This allows the VM to place the correct Destination MAC address in the inner packet’s Ethernet header and for the host to build the encapsulated packet with the correct Destination IP address to deliver the packet to the remote host.

The VXLAN RFC talks about different approaches for distributing the VTEP IP to VM MAC mapping information:

  1. Learning-based control plane
  2. Central authority / directory-based lookup
  3. Distribution of this mapping information to the VTEPs by the central authority

In a learning-based control plane, encapsulated packets with an unknown destination (VTEP IP) are sent out via broadcast or to an IP multicast group. This requires a mapping between a multicast group and each virtual subnet (identified by a VXLAN Network Identifier (VNI)), such that any VTEP which hosts VMs attached to this VNI registers for the group through IGMP. The VM MAC addresses and remote hosts’ IP addresses (VTEP IPs) are then discovered via source-address learning. A clear disadvantage of this approach is that it places a lot of unnecessary traffic on the wire, which most network administrators try to avoid.

Based on our learnings in Azure, Microsoft chose the distribution by a central authority (i.e. Microsoft Network Controller) approach to send out the VM MAC : VTEP IP mapping information to avoid the unnecessary broadcast/multicast network traffic. The Microsoft Network Controller (OVSDB Client) communicates with the Hyper-V Hosts (VTEPs) using the OVSDB protocol with policy represented in schemas persisted to a Host Agent’s database (OVSDB Server). A local ARP responder on the host is then able to catch and respond to all ARP requests from the VMs to provide the destination MAC address of the remote VM. The Host Agent database also contains the VTEP IP address of all hosts attached to the virtual subnet. The Host Agent programs mapping rules into the VFP extension of the Hyper-V Virtual Switch to correctly encapsulate and send the VM packet based on the destination VM.
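
Conceptually, the state the Network Controller pushes down reduces to a per-virtual-subnet lookup table on each host. The sketch below is a toy model of that mapping, not the actual Host Agent or VFP code:

    #include <array>
    #include <cstdint>
    #include <unordered_map>

    // Toy model: per-VNI mapping of destination VM MAC -> VTEP (host) IP.
    using MacAddress = std::array<uint8_t, 6>;

    struct MacHash {
        size_t operator()(const MacAddress& m) const {
            size_t h = 0;
            for (uint8_t b : m) h = h * 131 + b;  // simple byte-wise hash
            return h;
        }
    };

    using VtepTable = std::unordered_map<MacAddress, uint32_t, MacHash>;

    // Returns true and sets vtepIp when the destination VM's host is known,
    // i.e. the controller has already pushed the mapping for this subnet.
    bool ResolveVtep(const VtepTable& table, const MacAddress& dstMac,
                     uint32_t& vtepIp) {
        auto it = table.find(dstMac);
        if (it == table.end()) return false;  // no mapping: cannot encapsulate
        vtepIp = it->second;
        return true;
    }

A miss in this table is exactly the case a learning-based control plane would handle with broadcast or multicast; here the Network Controller fills the table proactively instead.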


Figure 2 – Network Controller – Host Agent Communication

Looking Ahead

At present, Microsoft’s implementation of VXLAN does not support interoperability with third party hardware VTEPs due to a difference in OVSDB schemas. We created a custom OVSDB schema to convey additional policy information, such as ACLs and service chaining, which was not available in the initial hardware_VTEP schema. However, support for the core protocols (VXLAN, OVSDB) is in place in the platform for us to bring in support for hardware VTEPs in the future. Our current thinking on implementation is that we will support the hardware_VTEP schema in our Network Controller and distribute mapping information to the hardware VTEPs. We do not think that a learning-based control plane is the right solution, due to the increased amount of multicast/broadcast traffic required on the network – network admins are already trying to limit this L2 traffic, which by some accounts consumes 50% of network capacity.

If this is something of interest, please do reply in the comments field below and let us know. We’d love to speak with you.

SDN Network Virtualization Key Features

  • Create thousands of logical network segments
  • User-Defined Routing (UDR) for Virtual Appliances
  • Multi-tenancy support through individual routing domains
  • VXLAN and NVGRE encapsulation
  • Distributed Router on each Hyper-V host
  • Integration with Software Load Balancer (SLB), Gateways, QoS, and Distributed Firewall
  • Network virtualization policy programmed through the Microsoft Network Controller using OVSDB

 

RESOURCES

Documentation

Deployment Scripts

Blogs

Kevin Gallo gives the developer perspective on today’s Windows 10 Event


Did you see the Microsoft Windows 10 Event this morning?  Satya, Terry, and Panos talked about some of the exciting new features coming in the Windows 10 Creators Update and announced some amazing new additions to our Surface family of devices. If you missed the event, be sure to check it out here.

As a developer, my first question when I see new features or new hardware is “What can I do with that?” We want to take advantage of the latest and coolest platform capabilities to make our apps more useful and engaging.

There were several announcements today that offer exciting opportunities for Windows developers.  Three of these that I want to tell you about are:

  • 3D in Windows 10, along with the first VR headsets capable of mixed reality, coming with the Windows 10 Creators Update.
  • The ability to put the people you care about most at the center of your experience—right where they belong—with Windows MyPeople.
  • Surface Dial, a new input peripheral designed for the creative process that integrates with Windows and is complementary to other input devices like pen. It gives developers the ability to create unique multi-modal experiences that can be customized based on context. The APIs work in both Universal Windows Platform (UWP) and Win32 apps.

Rather than write a long blog post, I decided to go down to our Channel 9 studios and record a video that gives my thoughts and provides what I hope will be a useful developer perspective on today’s announcements. Here’s my conversation with Seth Juarez from Channel 9:

My team and I are working hard to finish the platform work that will fully support the Windows 10 Creators Update, but you can start experimenting with many of the things we talked about today. Windows Insiders can download the latest flight of the SDK and get started right away.

If you want to dig deeper on the Surface Dial, check out the following links:

Stay tuned to this space for more information in the coming weeks as we get closer to the release of the Windows 10 Creators Update. In the meantime, we always love to hear from you and welcome your feedback at the Windows Developer Feedback site.

Free ASP.NET Core 1.0 Training on Microsoft Virtual Academy


This time last year we did a Microsoft Virtual Academy class on what was then called "ASP.NET 5." It made sense to call it 5 since 5 > 4.6, right? But since then ASP.NET 5 has become .NET Core 1.0 and ASP.NET Core 1.0. It's 1.0 because it's smaller, newer, and different. As the .NET "full" framework marches on, on Windows, .NET Core is cross-platform and for the cloud.

Command line concepts like dnx, dnu, and dnvm have been unified into a single "dotnet" driver. You can download .NET Core at http://dot.net and along with http://code.visualstudio.com you can get a web site up and running in 10 minutes on Windows, Mac, or many flavors of Linux.

So, we've decided to update and refresh our Microsoft Virtual Academy class. In fact, we've done three days of training: Introduction, Intermediate, and Cross-Platform. The introduction day is out and it's free! We'll be releasing the other two days of training very soon.

NOTE: There's a LOT of quality free courseware for learning .NET Core and ASP.NET Core. We've put the best at http://asp.net/free-courses and I encourage you to check them out!

Head over to Microsoft Virtual Academy and watch our new, free "Introduction to ASP.NET Core 1.0." It's a great relaxed pace if you've been out of the game for a bit, or you're a seasoned .NET "Full" developer who has avoided learning .NET Core thus far. If you don't know the C# language yet, check out our online C# tutorial first, then watch the video.


And help me out by adding a few stars there under Ratings. We're new. ;)


Sponsor: Do you deploy the same application multiple times for each of your end customers? The team at Octopus have taken the pain out of multi-tenant deployments. Check out their latest 3.4 release!



© 2016 Scott Hanselman. All rights reserved.
     

General availability of Azure Search S3, S3 High Density, and new regions


Today, we are excited to announce general availability of the S3 tier of service! The S3 tier of Azure Search is designed to handle large volumes of documents and heavy traffic. Identically priced and backed by the same high performance CPUs and SSD storage, the S3 tier comes in two configurations: S3 and S3 High Density (HD).

S3

The standard S3 tier of Azure Search is suited for customers with large numbers of documents and can handle many hundreds of queries per second. With each partition supporting 120 million documents (or 200 GB), it is possible to search up to 1.4 billion documents (or 2.4TB) in a single search service while still maintaining low latency for high query volumes.

S3 High Density

The S3 HD tier is targeted at ISVs and SaaS providers who build applications which support a large number of relatively small indexes in a single search service. In fact, Microsoft Dynamics 365 has been using Azure Search’s S3 HD tier to power its search experience for thousands of its online customers:

“Dynamics 365 (online) took a huge leap forward and paved the way for significant future search innovation by leveraging Azure Search. Delivering a high level of performance and functionality, the S3 HD SKU makes it straightforward and cost-effective to manage Azure Search for thousands of Dynamics 365 customers.”

- Mike Carter, Principal Program Manager, Dynamics 365

In a single S3 HD service, there can be up to 3,000 indexes, each supporting up to 1 million documents, which makes it an attractive option for supporting numerous smaller applications with many low-cost indexes. You can read more about multitenant applications and Azure Search.

Tier           | SLA | Storage                               | Max indexes                         | Documents hosted                                | Scale-out limits
Free           | No  | 50 MB                                 | 3                                   | 10,000                                          | n/a
Basic          | Yes | 2 GB                                  | 5                                   | 1 million                                       | Up to 3 replicas/service
Standard S1    | Yes | 25 GB/partition                       | 50                                  | 15 million/partition (max 180 million/service)  | Up to 36 units/service
Standard S2    | Yes | 100 GB/partition                      | 200                                 | 60 million/partition (max 720 million/service)  | Up to 36 units/service
Standard S3    | Yes | 200 GB/partition                      | 200                                 | 120 million/partition (max 1.4 billion/service) | Up to 36 units/service
Standard S3 HD | Yes | 200 GB/partition (max 600 GB/service) | 1,000/partition (max 3,000/service) | 1 million/index (max 200 million/partition)     | Up to 12 replicas/service and 3 partitions/service

S3 HD does not currently support indexers. Pricing information can be found here.

New Regions: Canada Central and West Central US

With a truly global presence, Azure Search makes it possible to serve applications across the world with minimal latency. Now available in Canada Central and West Central US, Azure Search can be provisioned in 14 regions: East US, West US, North Central US, South Central US, West Central US, North Europe, West Europe, East Asia, Southeast Asia, Japan West, Brazil South, Australia East, Central India, and Canada Central.

Try it out!

For more information on these new Azure Search tiers and pricing, please visit our pricing page or create your own Search service.

Learn more about Azure Search and view documentation.


An important milestone in enterprise integration – launch of Microsoft BizTalk Server 2016


Today marks the release of Microsoft BizTalk Server 2016. This is an important milestone that not only reinforces strong on-premises application integration capabilities, but also provides the flexibility and control for our customers to adopt cloud applications as and when it makes sense for their business. We realize that every company is undergoing digital transformation amid the proliferation of applications, data and services. Whether your applications run in the cloud or on-premises, you should have the flexibility to seamlessly connect applications, unlock data and automate business processes anywhere.

Earlier this year, I shared our commitment to build a comprehensive hybrid integration platform. Today, Microsoft is the only vendor that provides a truly hybrid integration platform, offering a consistent experience to our customers and partners whether they are looking to connect on-premises or cloud-native applications. This consistent experience is enabled through Microsoft BizTalk Server and Azure Logic Apps, which form the foundation of our vision.

Why upgrade to BizTalk Server 2016?

With BizTalk Server 2016, customers can automate mission critical business processes, leverage support for the latest first-party platforms, and gain new capabilities within the BizTalk Administration console. With this release, customers also have the flexibility to adopt a hybrid approach in their digital transformation journey by choosing to connect to SaaS applications, or by running BizTalk Server on Azure with full support in production environments. Below are some key capabilities that I want to highlight:

  • Hybrid connectivity: With BizTalk Server 2016, customers can now connect to cloud-native applications, web and mobile back-end systems, and custom applications through the Azure Logic Apps adapter. This acts as a bridge to rapidly integrate numerous SaaS applications using pre-built, out-of-the-box connectors. The Azure Logic Apps adapter uses enterprise messaging in the cloud across partners and vendors and leverages the Microsoft Cloud to help you build holistic integration solutions. Customers can now take advantage of Azure services like Functions, Cognitive Services, Machine Learning and more to gain actionable intelligence on their data and make informed business decisions.
  • Integration with the latest Microsoft products such as Windows Server 2016, Visual Studio 2015, SQL Server 2016 and Office 2016. Leverage first-class integration experience with the latest Microsoft products and services to enable seamless integration through BizTalk Server 2016.
  • SQL Server 2016 Always On Availability Groups offer a highly available solution in Azure and on-premises. The traditional log shipping mechanism is now enhanced with an automated, faster and better way to achieve high availability and disaster recovery.

Customers are already reaping the benefits of hybrid cloud with this new release of BizTalk Server. Abid Nasim, CIO of Generalsoft Corporation, which develops custom software, noted, “As of date, we’ve migrated all of our integrations and presently running regression tests. We are finding incredible potential for the Logic Apps adapter. This appears to be the best BizTalk release since BizTalk 2006 R2.”

Here is what Steef-Jan Wiggers, principal consultant, Macaw (Unit of Macaw Business Solutions) had to say about BizTalk Server 2016, “BizTalk Server 2016 makes your enterprise communicate with any application, data and business process anywhere. It offers bigger, better, more capabilities, features and connectivity options.”

I want to thank our customers and partners who have participated in this journey with us as we evolve BizTalk Server into a product that continues to be the gold standard for enterprise integration in a hybrid cloud world. Learn more about other customers that benefit from the hybrid cloud experience by leveraging Microsoft’s integration products from Mexia, one of Microsoft’s gold partners.

We are committed to helping customers in their journey of digital transformation by helping them build holistic solutions using our hybrid integration platform. Learn more about the cool new features of BizTalk Server 2016 and be sure to check out this blog post by Codit, another one of our gold partners. As always, we want to hear from you – please share your comments and continue to engage with our teams.

Advanced Analytics with Power BI Embedded and R

Today we are announcing support for R visuals in Power BI Embedded. R visuals not only enhance Power BI Embedded with advanced analytics depth but also offer developers endless visualization flexibility. Check out our demo to see how the technology works!

Skype for Business announces new Mac client and new mobile sharing experiences


Today, we are pleased to announce that Skype for Business Mac is now publicly available for download. The Mac client offers edge-to-edge video and fully immersive content sharing and viewing. The result is a great, first-class experience for Mac users.


We’ve also updated the Skype Operations Framework (SOF) assets to help customers Plan, Deliver and Operate the new Mac client. You will find the latest documentation and updated training on the SOF website, and you can read more about what has changed in this SOF blog post. We expect the latest help documentation to be available soon at support.office.com.

Enhancements to Skype for Business mobile apps on Android and iOS

We are also announcing new capabilities in Skype for Business apps for iOS and Android—including the ability to present PowerPoint files in a meeting, and a faster, more reliable content sharing approach.

Present in a meeting from your mobile app—Now you can present content right from your Android or iOS device. No more emailing files and links back and forth when you present from your phone or tablet. Sharing a PowerPoint deck in a meeting is as easy as selecting the file from your favorite cloud drive and presenting right from your phone. On Android, you can also share a file stored on the device itself. With swipe gestures, you can easily transition between slides. Once shared, the PowerPoint file also becomes available in the meeting’s content bin for other participants to download or present.

Video-based Screen Sharing for mobile devices—We’re also continuing to enhance the content viewing experience with Skype for Business on mobile devices by using Video-based Screen Sharing (VbSS) for content viewing in the iOS and Android apps. The initial setup is much faster and the experience more reliable, while network bandwidth is consumed efficiently. It provides a seamless viewing experience, especially if you are sharing animated content such as CAD models. Learn more about VbSS and how it can enhance your meeting experience.

Stay tuned for upcoming updates, such as CallKit integration on iOS. If you haven’t yet checked out the Skype for Business mobile apps for Android and iOS, visit Skype for Business Apps & Downloads so you can download the apps and experience meetings on the go today!

—Paul Cannon & Praveen Maloo

The post Skype for Business announces new Mac client and new mobile sharing experiences appeared first on Office Blogs.

How to avoid mobile app security scares


Workplace mobility has freed employees from their desks. Effective collaboration no longer depends on whether coworkers are in the same room, building or even country. The mobile devices, apps and policies that enable mobility have created a new generation of workers. But as the electronic net widens, organizations face increased security risks.

Security breaches are not to be taken lightly. What seems like a small security issue can cost your company big. According to the Ponemon Institute (via IBM) the average total cost of a single data breach is $3.79 million. Make sure you’re taking the necessary steps to protect your organization.

Manage BYOD and minimize Shadow IT

As more employees seek mobility in the workplace, bring-your-own-device (BYOD) policies are becoming commonplace. According to the Aberdeen Group (via SearchSecurity), 77 percent of enterprise respondents have launched mobility initiatives in response to pressure from executives seeking increased productivity. But unauthorized app usage might open the door to cybersecurity threats.

According to SearchSecurity, “The BYOD trend means employees use their personal smartphones and tablets to work from anywhere, and many of them download mobile apps to do so.” Unfortunately, these apps may or may not keep your data secure. The result is Shadow IT—the use of apps within an organization without the approval (or even knowledge) of corporate IT—which can make organizations vulnerable to threats. Despite proactive organizations’ IT policies outlining BYOD best practices, employee adherence is not guaranteed.

To best protect your information, monitor your network for threats, create a BYOD policy, provide a list of apps that are approved for employee use and continually communicate with and educate employees on the importance of safe mobile strategies.

Invest in mobile app security

In a perfect world, developers are aware of the potential implications of how their applications access data and interact with other apps, and they design them to be secure by default. Unfortunately, in the real world, developers and software companies devote millions of dollars to mobile application development but focus little money on security. Because mobile apps are typically created with very little security oversight, vulnerabilities can open the door to severe threats. According to an IBM report by the Ponemon Institute (via TechTarget), $34 million on average is spent on mobile app development, but only 6 percent of this is for security.

Ensure that the apps your employees use to access company data and information are developed with security in mind, and avoid finding out the hard way—when a data breach has already happened.

Know where to watch for vulnerabilities and threats

According to Microsoft’s 2016 Trends in Cybersecurity, 44.2 percent of all disclosed vulnerabilities are found in applications other than web browsers and operating system applications (mainly mobile apps).

“Many security teams focus their efforts on patching operating systems and web browsers. But vulnerabilities in those two types of software usually account for a minority of the publicly disclosed vulnerabilities. The majority of vulnerabilities are in applications,” the e-book states. “Security teams need to spend appropriate time on assessing and patching these vulnerabilities. Otherwise, they could be missing the bulk of vulnerabilities in their environments.”

Application-based threats (like malware and spyware) are not the only places where a lack of mobile security leaves you vulnerable. There are also web-based threats (such as phishing, drive-by downloads and browser exploits), network threats (from network exploits and Wi-Fi sniffing), and physical threats (when devices are lost or stolen).

True mobile security looks at the big picture and continually monitors your network for threats. Broaden your security approach to include these vulnerable entry points.

Put mobile security first

Mobile access is no longer limited to remote workers and frequent travelers—having access to company information from mobile devices is the standard in today’s workforce. Despite its many benefits, BYOD culture has opened businesses up to additional cyber threats that can’t be ignored. To successfully maintain top-notch security, implement proactive security measures to ensure minimal threats to your business. Consider a solution designed to deliver the enterprise-grade security you require to access the cloud with confidence. When you make mobile security a priority at your organization, you can stop breaches at the source.

Related content

The post How to avoid mobile app security scares appeared first on Office Blogs.

Accelerate your eDiscovery analysis workflow with one click


Does your legal department often complain about how long it takes to run an analysis for eDiscovery investigations? We released two new features for Office 365 Advanced eDiscovery—Express Analysis and Export with analytics to Excel—to make it easier and faster for organizations to quickly find, analyze and review relevant information related to investigations, legal matters and regulatory requests.

eDiscovery is applicable to a wide variety of scenarios where you need to sort through a set of unstructured data to find the small number of files that may be relevant. The amount of data you need to sort through depends on the breadth and complexity of the case. For large legal matters or regulatory data requests, it could be tens of millions of files, while internal investigations might involve only a few thousand.

Express Analysis

With a click of a button, you can now run Advanced eDiscovery analytics (specifically near-duplicate detection, email threading and themes) and export the results. Express Analysis accelerates the analytics workflow, allowing you to quickly minimize and organize your dataset and export it to your desired location. No additional configuration or multiple steps are required, which significantly simplifies the process.


Export with analytics and view in Excel

The new Export with analytics feature in Advanced eDiscovery allows you to view your analyzed results directly in Excel. Excel’s familiar interface makes manipulating the results easy for anyone.

The exported file includes all the metadata associated with the documents—such as Sender, Recipient, Date and other email/file-related information—as well as all the Advanced eDiscovery analytics information, including email threading, near-duplicates and the key themes in the document.

There is also a “For review” column that flags if an email or document needs to be looked at by the reviewer or if it is redundant information and can be defensibly skipped. Finally, a hyperlink is provided for easy access to navigate the actual data so you can quickly determine relevance.

Having all this analytics information conveniently packaged up in a file that can be opened in Excel is great for smaller investigations and legal matters, as you can quickly review and tag the analyzed data without having to use more advanced tools.


Future enhancements

Office 365 has a rich set of in-place eDiscovery capabilities that help organizations investigate and meet the wide variety of information requests directly from the Security & Compliance Center. To further improve your eDiscovery process, in the coming months we will be delivering additional eDiscovery enhancements, such as Unified Case Management, Search and Export Analytics, optical character recognition and new intelligent analytics features in the Security & Compliance Center.

To take advantage of Advanced eDiscovery Express Analysis and Export with analytics to Excel, simply go to the Security & Compliance Center in your tenant.

—The Office 365 team

The post Accelerate your eDiscovery analysis workflow with one click appeared first on Office Blogs.

October 2016 updates for Get & Transform in Excel 2016 and the Power Query add-in


Excel 2016 includes a powerful new set of features based on the Power Query technology, which provides fast, easy data gathering and shaping capabilities and can be accessed through the Get & Transform section on the Data ribbon.

Today, we are pleased to announce three new data transformation and connectivity features that have been requested by many customers.

These updates are available as part of an Office 365 subscription. If you are an Office 365 subscriber, find out how to get these latest updates. If you have Excel 2010 or Excel 2013, you can also take advantage of these updates by downloading the latest Power Query for Excel add-in.

These updates include the following new or improved data connectivity and transformation features:

  • Query Parameters support.
  • Improved Web connector—web page previews.
  • Query Editor improvements—option to Merge/Append as new query.

Query Parameters support

With this update, users can now create and manage parameters for their queries within the Excel workbook. The new “Manage Parameters” dialog is available on the ribbon under the Home tab within the Query Editor.


The new dialog allows users to create new parameters; give them a meaningful name and description; and specify the expected parameter type, allowed values, default value and current value.


Once one or more parameters are available in the current workbook, users can reference those parameters in their queries via Query Editor. Referencing parameters is supported via the Data Source dialogs, Filter Rows, Keep Rows (top/bottom, etc.), Remove Rows (top/bottom, etc.), Replace Values, Add Conditional Columns dialog and more.

In addition, parameters can be loaded to the grid or to the Data Model just like any other query, allowing references from Excel formulas or DAX measures.


An in-depth tutorial on query parameters will be coming to the Excel blog soon. Stay tuned.

Improved Web connector—web page previews

One of the most distinctive Get & Transform connectors is the Web connector. With the Web connector, users can easily import data that is formatted as HTML tables on websites, or even pull data from web APIs.

When using the Web connector for “scraping” data from HTML pages, a very common challenge is that the Navigator view, which is based on a list of tables, is not very helpful in identifying the desired tables. This is particularly hard with web pages that contain many tables, often with unrepresentative table names.


With this update, we’re introducing a new mode in the Navigator dialog that allows users to preview tables on the web pages “in context” and select the desired tables by just clicking on them within the Web View preview. This results in a much more intuitive and seamless user experience for selecting tables from a web page.


To access this mode, click the Web View button at the top of the Navigator dialog. Users can also switch back to the classic data-centric view by selecting the Table View option.

Query Editor improvement—option to Merge/Append as new query

Within the Query Editor, users can easily merge (join) or append (union) multiple tables, allowing them to mash up data from multiple sources into a single table. The Merge/Append operations are on the ribbon under the Home tab inside Query Editor.


In previous versions of the Query Editor, Merge/Append operations were always applied as new steps within the current query. Starting with this update, users can decide whether to apply these operations as a new step in the current query (old behavior) or whether the output of the Merge/Append operation should be created as a new query (new behavior).


How do I get started?

Excel 2016 provides a powerful set of capabilities for fast, easy data gathering and shaping, which is available under the Get & Transform section on the Data ribbon. Updates outlined in this blog are available as part of an Office 365 subscription. If you are an Office 365 subscriber, find out how to get these latest updates. If you have Excel 2010 or Excel 2013, you can also take advantage of these updates by downloading the latest Power Query for Excel add-in.

—The Excel team

The post October 2016 updates for Get & Transform in Excel 2016 and the Power Query add-in appeared first on Office Blogs.

Announcing the launch of Microsoft BizTalk Server 2016


Today, we are announcing the release of Microsoft BizTalk Server 2016. This marks the tenth major release of a product that has been serving application integration customer needs in the market for the past 15 years. This release not only highlights key on-premises application integration capabilities that help customers automate mission critical business processes, but also showcases our strong commitment to the hybrid integration platform.

We realize customers have different business needs. In addition to running workloads on-premises, many businesses want to run some workloads and applications in the cloud. Our goal is not only to provide flexibility and agility to our customers, but also to provide a consistent experience whether you are looking to integrate applications, data and processes on-premises or in the cloud. With the release of BizTalk Server 2016, customers can seamlessly connect to cloud applications through Azure Logic Apps. Customers can now connect to SaaS applications faster, enable enterprise cloud messaging across vendors and partners, and take advantage of first-class integration with Azure services, including Azure Functions, Machine Learning and Cognitive Services, via the Logic Apps adapter, all from the comfort of BizTalk Server 2016.

To learn more about why our customers want to upgrade to BizTalk Server 2016 and hear our customer comments about the new release, please check out Frank Weigel’s recent Azure blog post.


A deep-dive into Cluster OS Rolling Upgrades in Windows Server 2016


The story of Failover Clustering in Windows Server goes back to Windows NT! The feature was developed largely as a way to keep servers and their underlying applications and workloads running even if a node in the cluster fails, helping meet high uptime requirements and Service Level Agreements. Over time, new capabilities like live migration have made a service running on a cluster even more resilient and agile, letting cluster nodes be drained and rebalanced while apps, processes and services continue to run. But one challenge that remained with cluster management until now was how to upgrade the underlying host operating system on cluster nodes.

Windows Server 2016 clustering allows for a mixed OS mode – meaning that in an individual cluster, your nodes can run both Windows Server 2012 R2 and Windows Server 2016. Mixed OS mode in turn enables you to drain cluster nodes and upgrade their operating systems one by one until all cluster nodes are on Windows Server 2016; then a simple PowerShell cmdlet (Update-ClusterFunctionalLevel) is used to update the cluster functional level. This week Microsoft Mechanics teams up with Rob Hindman from the Windows Server engineering team to explain and demonstrate step-by-step how clusters can now be upgraded in place without standing up a net-new cluster and migrating workloads into it.

The process can involve taking a node offline, upgrading it, re-adding the node, and repeating that process for each server in the cluster. If a cluster node cannot be taken offline due to capacity or SLA constraints, you can also temporarily add a node to the cluster, follow the process above, and then remove the added node when finished. Rob explains these options in detail in the show.

While the process seems pretty straightforward, it might be a little too manual or script-reliant to use on larger clusters or if you have several clusters in need of an upgrade. For this, you can use System Center Virtual Machine Manager and the built-in support for Cluster OS Rolling Upgrade, which will automate the entire process and upgrade the functional level. Matt McSpirit demonstrates this on the show. To learn more, watch this week’s show and check out Rob’s recent blog.

Get started with Cluster OS Rolling Upgrade in Windows Server 2016! Download the evaluation version!

Healthcare Analytics with Cortana Intelligence


This post is authored by Shaheen Gauher, PhD, Data Scientist at Microsoft.

U.S. healthcare spending is expected to reach $4.8 trillion in 2021, accounting for one-fifth of the U.S. economy! The total annual cost of healthcare for a typical family of four with employer-provided PPO insurance coverage, as estimated by Milliman, is about $25,000 in 2016. Meanwhile, medical debt and bankruptcies still threaten the solvency of many American families. According to some estimates, wasteful spending constitutes one-third to nearly one-half of all U.S. health spending. There is an urgent need to disrupt the status quo and find smart ways to reduce costs and optimize resources while still focusing on better patient care.

Using the power of the cloud and advanced predictive analytics, we can tap the vast amounts of rich data available to us to generate tremendous insights and discover solutions to some of these problems.

Healthcare

Healthcare solutions are needed in two primary areas – managing finances and managing patient health. 

  • Finance solutions would address problems such as managing the revenue cycle and predicting claim amounts and payment dates, as well as resource allocation and inventory management problems, such as forecasting the demand for medicines or equipment to ensure adequate availability while cutting down on waste.
  • Managing patient health can include solutions for reducing emergency visits, predicting patient readmissions, predicting crisis situations such as cardiac arrests in the ICU, predicting the propensity for diseases such as diabetes or breast cancer, and even predicting whether surgery is advantageous or required in some situations.

Using comprehensive patient data – from hospital visits, clinical data, lab results, integrated Electronic Medical Records, and more – we can build models that learn from past treatment regimens, detect patterns in symptoms, flag potential issues, recommend preventive screening and support proactive treatment. A shift from a reactive paradigm to a proactive one using machine learning and advanced data analytics can improve patient health and reduce costs across the spectrum. As they say – a stitch in time saves nine!

What’s more, we can supplement these models with additional data sources, such as behavioral and social data feeds, to enable personalized patient care, which can help in many ways, including reducing the need for office visits and providing early warnings about potentially life-threatening issues.

A primary requirement at the start of a data science project is to identify a sharp question that needs to be addressed, and then identify the data that can help answer that question (and then ask more questions!). For example, if the question we are trying to answer is, “When will this claim be paid?” we need to know how the company defines a paid claim. Is the claim closed and considered paid when the balance is paid in full, or when it falls below a certain amount? Is the starting point for a claim the day the patient visited the doctor or the day the claim was submitted? Is the data consistently collected and reported by all providers? Do we want to know when this claim will be paid by primary insurance or by secondary insurance? In summary, the more clarity we have around the end objective, the better the feature engineering we can do, and the more accurate the model and end results.

Machine Learning in Healthcare – Some Use Cases

Predict At-Risk Patients in ICU, at Cleveland Clinic
Cleveland Clinic, a non-profit academic medical center providing clinical and hospital care, teamed up with Microsoft to use predictive and advanced analytics to identify potential at-risk patients under ICU care. Using the data collected from monitoring units in ICUs over a period of time, they use Azure Machine Learning to predict if a patient will need to be administered a vasopressor in the near future to prevent a cardiac failure. A timely prediction can mitigate a crisis and facilitate timely intervention by medical professionals. Given the shortage of primary care physicians and nurses, ML models like these can provide a helping hand and an extra set of eyes for overworked professionals, especially in a crisis situation.

Accelerate Claim Automation & Revenue, at GAFFEY Healthcare
GAFFEY has targeted workflow processes, helping its customers speed up payment collections while keeping labor costs lower by eliminating non-value-added touches.

Identify Students at Risk for Dyslexia, at Optolexia
Using a repository of eye-tracking data and an analytical engine built with cloud-based Microsoft Azure ML, Optolexia aims to help schools identify students at risk for dyslexia significantly earlier than current screening tests, ensuring that students can receive appropriate treatments early, which helps boost their learning skills and improve academic performance.

Identify Asthma, at Aerocrine
Using Microsoft Azure, Aerocrine captures and manages real-time data about FeNO devices, used by hospitals and asthma clinics around the globe to identify asthma and monitor patients’ progress in controlling the disease. The company hopes to use the solution to achieve its ultimate goal – helping physicians diagnose asthma and assisting patients in managing their symptoms. 

Deliver Personalized Healthcare, with Dartmouth-Hitchcock ImagineCare
Dartmouth-Hitchcock has ushered in a new age of proactive, personalized healthcare using Cortana Intelligence Suite. Their ImagineCare solution is built on Cortana Intelligence Suite, Microsoft’s machine learning, big data and perceptual intelligence offering. It will change the way people interact with the healthcare system, putting patients at the center and ultimately changing the way we all think about our health. 

Predict Real-Time Patient Risk Factors, at Medical Information Records
Medical Information Records LLC, a leading provider of medical software technology, is using Azure ML to deliver real-time predictive analytics capabilities. Anesthesiologists are able to predict real-time patient risk factors and proactively take preventive steps.

While there are tremendous opportunities for using ML and data science in the healthcare domain, the challenges are equally pressing. One of the main challenges has to do with the quality and consistency of data collection. Healthcare data comes from various sources, and oftentimes the data collected is incomplete or inconsistent across sources. Combining these data sources meaningfully to create a complete timeline of events often results in chunks of missing data. There is also a need for open and honest sharing of information while still maintaining data anonymity and privacy requirements.

As more healthcare providers realize the potential of ML and analytics, awareness around data requirements is growing. Providers are investing in relevant data capture, data warehousing and data cataloging efforts now more than ever. As a fully managed big data and advanced analytics suite, Cortana Intelligence can help organizations transform data into intelligent action. Right from ingesting data from various sources to building custom ML models, retraining models with the latest data, consuming models via automatically generated REST APIs, and getting real-time predictions and state-of-the-art visualizations, we have all the tools needed to create and deploy end-to-end solutions with fast turnaround times.
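To make the REST consumption point concrete, here is a hypothetical sketch of scoring a single record against a published Azure ML web service from PowerShell. The endpoint URL, API key and input column names below are placeholders and will differ for your own service; the request shape follows the Inputs/GlobalParameters format that Azure ML request-response services generate.

    # Hypothetical call to a published Azure ML scoring endpoint.
    $endpoint = "https://<region>.services.azureml.net/workspaces/<ws>/services/<svc>/execute?api-version=2.0"
    $apiKey   = "<your-api-key>"

    # One patient record; the column names are placeholders for your schema.
    $body = @{
        Inputs = @{
            input1 = @{
                ColumnNames = @("Age", "NumPriorVisits", "HasDiabetes")
                Values      = @(, @("67", "4", "1"))
            }
        }
        GlobalParameters = @{}
    } | ConvertTo-Json -Depth 6

    $response = Invoke-RestMethod -Uri $endpoint -Method Post -Body $body `
        -ContentType "application/json" `
        -Headers @{ Authorization = "Bearer $apiKey" }

    # The scored result comes back in the Results collection.
    $response.Results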

We are excited by our partnership with leading-edge customers and partners in our push for data-driven transformation in healthcare, and we can see how this will lead to better outcomes, both for patients and for healthcare providers.

Shaheen
@Shaheen_Gauher

 


Active Directory Forest Functional Levels for Exchange Server 2016


Our September 2016 release blog included a statement that is causing some confusion with customers. The confusion relates to our support of Windows Server 2016 with Exchange Server 2016. The blog included a statement that read, “Domain Controllers running Windows Server 2016 are supported provided Forest Functional Level is Windows Server 2008R2 or Later.” We would like to provide additional clarity on what this statement means, and more importantly what it doesn’t.

Question #1: If I want to deploy Exchange Server 2016, must my Active Directory environment use Forest Functional Level 2008R2 or later?

Answer: No. Exchange Server 2016 is supported in environments configured to Forest Functional Level 2008 and later.

Question #2: If I want to install Exchange Server on a server running Windows Server 2016, does my Active Directory environment need to advance Forest Functional Level to 2008R2 or later?

Answer: No. Exchange Server 2016 installation on Windows Server 2016 is supported if Active Directory is configured to Forest Functional Level 2008 and later.

Question #3: What is the real requirement you are calling out here?

Answer: If you are running Exchange 2016 anywhere in your environment, and if any of the Domain Controllers used by Exchange are running Windows Server 2016, then the Forest Functional Level must be raised to 2008R2 or later.

In our experience, customers who keep their Domain Controllers deployed at the latest OS revision level also enjoy the highest levels of reliability, security and functionality, so this requirement should not be a deployment blocker.
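If you want to check where your forest stands before deploying, the Active Directory module for Windows PowerShell makes both the query and the raise straightforward. A minimal sketch, assuming a forest named contoso.com (a placeholder):

    # Check the current forest functional level.
    Import-Module ActiveDirectory
    (Get-ADForest).ForestMode

    # Raise the forest functional level to Windows Server 2008 R2. Confirm
    # first that no domain controllers run anything older, because lowering
    # the level again afterward is generally not supported.
    Set-ADForestMode -Identity "contoso.com" -ForestMode Windows2008R2Forest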

Question #4: Why is 2008R2 Forest Functional Level or later required?

Answer: Advancing the directory to a higher level of functionality requires domain controllers running older operating systems to be retired. Our goal is to make certain that Exchange Server uses the highest level of security settings reasonably possible, including newer cryptographic standards, and Windows Server 2008 no longer meets the minimum standard we require and that customers request. Customers who deploy the latest versions of Exchange and Windows Server often do so to improve the security of their overall ecosystem, and we want to make certain Exchange Server functions correctly under those assumptions. Limiting the use of old standards allows Exchange Server to meet the requirements of current security standards.

Question #5: Will Exchange Setup block installing Exchange Server 2016 if I am using Windows Server 2016 on a Domain Controller but have not raised the Forest Functional Level?

Answer: At this time, there is no Setup block. This pre-requisite is a soft requirement enforced by policy only. If a customer calls into support and is using Windows Server 2016 Domain Controllers with Exchange Server 2016 and they have not raised the Forest Functional Level to the minimum value, we may ask them to do so as part of root cause elimination.

Question #6: When will Exchange Setup force the use of 2008R2 Forest Functional Level for an Exchange Server installation?

Answer: The minimum supported Forest Functional Level will be raised to 2008R2 in Cumulative Update 7 for all Exchange Server 2016 deployments, at which point it becomes a hard requirement enforced by Exchange Setup. We know that customers need time to plan and carry out the necessary migration and decommissioning of Active Directory servers; Cumulative Update 7 ships in the 3rd quarter of 2017, one year after the first announcement.

For a complete list of Exchange requirements please see this TechNet article.

The Exchange Team

The October Edition of “The Endpoint Zone” — aka 1610!


This month’s Endpoint Zone is packed!

In this episode, Simon and I dive into the massive amount of news from Ignite, we recap the foundation session, and then there's a long chunk of live demos (AADIP, MAM-CA, CA for Windows, Lookout integration, and more). There's even a sneak peek at the upcoming improvements to the Azure console for Intune and EMS.

 

To stay on top of everything that’s happening, keep an eye on the EMS blog and follow me on Twitter!

Also: if you want to go into even greater depth on any of the topics we touch on here, I recommend this roster of sessions from Ignite:

Technical Preview of Power BI reports in SQL Server Reporting Services now available



It’s finally here! Following our announcement on Tuesday and our subsequent session at PASS Summit 2016 this week, we’re pleased to announce that the Technical Preview of Power BI reports in SQL Server Reporting Services is now available for you to try in the Azure Marketplace.

What is the Technical Preview?

As we brainstormed creative ways to let people try this functionality as early as possible, we had three very specific goals we wanted to achieve:

  • Provide access to the new functionality publicly as early as possible while ensuring the end-user experience was something you’d find valuable
  • Create a self-contained experience and environment that allowed users of any skill level an easy way to get started
  • In no way disrupt or delay the initial preview of a downloadable and installable version

By using the Azure Marketplace to distribute this early technical preview, we feel we have not only met those goals, but also established a repeatable way to distribute content in the future. For users who would prefer to run this technical preview on an on-premises server, you can provision a virtual machine, download its image as a .vhd file, and run it locally using Hyper-V.
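As a rough sketch of that on-premises path, the disk behind the VM can be pulled down with the Azure PowerShell module once the VM is stopped and deallocated; the resource group name, blob URI and local path below are placeholders:

    # Download the preview VM's disk for use with local Hyper-V.
    # Stop and deallocate the VM before downloading its VHD.
    Save-AzureRmVhd -ResourceGroupName "ssrspreview-rg" `
        -SourceUri "https://<storageaccount>.blob.core.windows.net/vhds/<disk>.vhd" `
        -LocalFilePath "D:\VHDs\ssrspreview.vhd"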

How to get started

You’ll need to have a Microsoft Azure subscription to get started with the technical preview.  If you don’t already have an Azure account, you can create your free Azure account and get started with $200 in credit. To learn more about the free option please visit: https://azure.microsoft.com/en-us/free/.  There is no cost associated with the technical preview software – you only need to pay for the Azure infrastructure costs.

By either searching the Azure Marketplace for “Reporting Services” or using the link we provided earlier, you will arrive at the screen for provisioning your virtual machine.


After clicking “Create”, you will begin the process of creating your new machine.


In Step 1, you will provide the following information; default options are pre-selected where applicable:

  • VM Name
  • Admin username/password (Password must be at least 12 characters)
  • Analysis Services Server Mode (Tabular or Multidimensional)
  • Subscription Name (can only change if you have more than one Azure subscription)
  • Resource Group Name
  • Virtual Machine Location (select the region closest to your physical location for best results)

You will see a green checkmark to confirm any manual items you’ve entered are valid.


Once you’ve finished step 1, click OK to move to step 2.


In step 2, it is strongly recommended that users less experienced with Azure leave the selections for Virtual Machine Size, Storage Accounts, Public IP address and Domain Name label as the default options. Click OK to proceed to the next step.

Step 3 gives you the opportunity to review your selections.  If they look good, click OK to proceed to the final step.


Step 4 is the last step before purchasing.  Simply read through the terms of use and click “Purchase”.  That will provision the virtual machine, storage, demo content and other items needed to properly run your environment in Azure.  Depending on the region you selected, it will take approximately 10-20 minutes to finish provisioning and be available for use.

Access your new virtual machine

To access your new VM, you can download and use the Remote Desktop shortcut from the Azure portal.  The easiest way to access this is by navigating to the Virtual Machine section and clicking the “Connect” button in the toolbar.
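If you prefer to skip the download, you can also launch the Remote Desktop client directly against the VM's public DNS name or IP address (the address below is a placeholder):

    # Connect straight from a command prompt; replace with your VM's address.
    mstsc /v:"ssrspreview.southcentralus.cloudapp.azure.com"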


You will be prompted to save the file, which is a shortcut you can use now and in the future to connect to your virtual machine. Select a location on your PC to save it, then double-click the downloaded file to connect. Once you do, a dialog box may appear asking if you wish to connect even though the remote computer can’t be identified. This is expected; click Connect to proceed.


A pop-up dialog will appear where you enter the admin username and password you chose during the setup process.

image

The domain needs to match the name of the machine you are connecting to, so there is a trick to connect properly: enter the username qualified with the VM name, in the form <VM name>\<username>.

This makes sure the connection doesn’t try to use the domain you’re currently on. After entering your password, click OK. Another prompt will launch, asking if you’d like to verify the certificate provider and proceed connecting to the machine. Again, this is expected, and you can simply click “Yes” to continue.


You should now be connected to your VM via Remote Desktop and can begin using the virtual machine.

Opening the web portal and viewing your first Power BI report

When you first launch the machine, you should have the following items on your desktop:

  • A folder with three sample Power BI Desktop files
  • A shortcut to Power BI Desktop
  • A shortcut to the SQL Server Reporting Services Team Blog
  • A shortcut to SQL Server Data Tools
  • A shortcut to the report server web portal on your VM


Double-click on the SSRS Preview icon to launch Internet Explorer.  You will see several pieces of demo content already pre-loaded onto the machine, including three sample Power BI reports.


Click the “Sample Customer Overview Report”.  This will launch the report in your web browser so you can view it with the same rich, interactive experience as in the Power BI service.

More to come tomorrow

We hope you enjoy this technical preview of these exciting new features. We’ll be back with part two tomorrow to cover more of the new functionality, including creating and editing reports with Power BI Desktop, plus how to post comments on published reports.

Try it now and send us your feedback
