Azure IoT supports new security hardware to strengthen IoT security
Microsoft’s commitment to leadership in IoT security continues as Azure IoT improves the level of trust and confidence in securing IoT deployments. Azure IoT now supports the Device Identity Composition Engine (DICE) and many different kinds of Hardware Security Modules (HSMs). DICE is an upcoming Trusted Computing Group (TCG) standard for device identification and attestation that enables manufacturers to use silicon gates to create device identities rooted in hardware, making security part of the DNA of new devices from the ground up. HSMs are the core security technology used to secure device identities and provide advanced functionality such as hardware-based device attestation and zero-touch provisioning.
Combating a spate of Java malware with machine learning in real-time
In recent weeks, we have seen a surge in emails carrying fresh malicious Java (.jar) malware that use new techniques to evade antivirus protection. But with our research team’s automated expert systems and machine learning models, Windows 10 PCs get real-time protection against these latest threats.
Attackers are constantly changing their methods and tools. We know from many years of research into malware and cybercriminal operations that cybercriminals have go-to programming languages for their malicious activities, but they switch from time to time to slip past security solutions. For instance, we recently tracked how cybercriminals have changed how they use NSIS installers in order to evade AV and deliver ransomware.
To help deliver real-time protection, our researchers use the Microsoft intelligent security graph, a robust automated system that monitors threat intelligence from a wide network of sensors. This system includes machine learning models, which drive proactive and predictive protection against fresh threats.
Tracking malicious email campaigns
Our sensors first picked up signs of the Java spam campaigns at the start of the year. Our automated tools, which can sort and classify massive volumes of malicious emails, showed us actionable intelligence about the surge of Java malware-bearing emails.
These emails use various social engineering techniques to lure recipients to open malicious attachments. Many of the emails are in Portuguese, but we’re also seeing cases in English. They pretend to be notifications for billing, payment, pension, or other financial alerts.
Here are the most popular subject line and attachment file name combinations used in the email campaigns:
Subject | Attachment file name
Segue em anexo Oficio Numero: | Decisão-Judicial.zip |
Serviços de Cobranças Imperio adverte, Boleto N | 2Via_Boleto_N |
“Cobrança Extrajudicial” Imperio Serviços de Cobranças | 2Via_Boleto_N |
Payment Advice | Payment Advice.rar |
Curriculum Vitae | Curriculum_ |
FGTS Inativo – | SALDO_FGTS_MP_ |
FGTS Inativo – | FGTS_-_MP_ |
Extrato_FGTS_disponivel_em_sua_conta_inativa_de_N | FGTS_Disponivel_N |
NEW PURCHASE ORDER (TOP URGENT) | BLUERHINETECHNOLOGY_EXPORT_PURCHASE_ORDER.zip |
NF-e | NF-e- |
Figure 1. Most popular subject line and attachment file name combinations in email campaigns
The attachments are usually .zip or .rar archive files that contain the malicious .jar files. The choice of .jar as attachment file type is an attempt by cybercriminals to stay away from the more recognizable malicious file types: MIME, PDF, text, HTML, or document files.
Figure 2. Sample malicious email carrying Java malware in a .zip file
Tracking updates in malicious code
In addition to information about the email campaigns, our monitoring tools also showed another interesting trend: throughout the run of the campaigns, an average of 900 unique Java malware files were used every day. At one point, there were 1,200 unique malicious Java files in a single day.
Figure 3. Volume of unique Java malware used in email campaigns
These Java malware files are variants of old malware with updated code that attempt to evade detection by security products.
The most notable change we saw in these new variants of Java malware is in the way they obfuscate malicious code. For instance, we saw the following obfuscation techniques:
- Using a series of append operators and a string decryption function
Figure 4. Sample obfuscated Java malware code
- Using overly long variable names, making them effectively unreadable
Figure 5. Sample obfuscated Java malware code
- Using excessive code, making code tracing more difficult
Figure 6. Sample obfuscated Java malware code
Obfuscated code can make analysis tedious. We use automated systems that detonate the attachments, effectively bypassing obfuscation. When malware is detonated, we see the malicious intent and gain intelligence that we can use to prevent attacks.
Our tools log malicious behaviors observed during detonation and use these to detect new and unknown attachments. These malicious behaviors include:
Figure 7. Sample Java malware trace logs
From threat intelligence to real-time protection
Through automated analysis, machine learning, and predictive modeling, we’re better able to deliver protection against the latest, never-before-seen malware. These expert systems give us visibility and context into attacks as they happen, allowing us to deliver real-time protection against the full range of threats.
Context-aware detonation systems analyze millions of potential malware samples and gather huge amounts of threat intelligence. This threat intelligence enriches our cloud protection engine, allowing us to block threats in real time. In addition to the Java malware, we also detect the payloads, which are usually online banking Trojans like Banker and Banload, or Java remote access Trojans (RATs) like Jrat and Qrat.
Figure 8. Automated systems feed threat intelligence to cloud engines and machine learning models, which result in real-time protection against threats
Threat intelligence from the detonation system constantly enhances our machine learning models. New malicious file identifiers from the analysis of the latest threats are added to machine learning classifiers, which power predictive protection.
This is how we use automation, machine learning, and the cloud to deliver protection technologies that are smarter and stronger against new and unknown threats. We automatically protect Windows PCs against more than 97% of Java malware in the wild.
Figure 9. Breakdown of Java malware detection methods
Conclusion: Real-time protection against relentless threats
The email campaigns distributing Java malware account for a small portion of cybercriminal operations that deliver new malware and other threats. Cybercriminals are continuously improving their tools and modus operandi to evade system protections.
Our research team is evolving how we combat cybercrime by augmenting human capacity with a combination of sensors, automated processes, machine learning, and cloud protection technologies. Through these, we are better able to monitor and create solutions against these threats.
These protections are available in the security technologies that are built into Windows 10. And with the Creators Update, up-to-date computers get the latest security features and proactive mitigation.
Windows Defender Antivirus provides real-time protection against threats like Java malware and their payloads by using automation, machine learning, and heuristics.
In enterprise environments, Office 365 Advanced Threat Protection blocks malicious emails from spam campaigns, such as those that distribute Java malware, using machine learning capabilities and threat intelligence from the automated processes discussed in this blog.
Device Guard locks down devices and provides kernel-level virtualization-based security, allowing only trusted applications to run.
Windows Defender Advanced Threat Protection alerts security operations teams about suspicious activities on devices in their networks.
It is also important to note that Oracle has been enforcing stronger security checks in Java for legitimate applications. For instance, starting with Java 7 Update 51, Java does not allow applications that are unsigned, are self-signed, or are missing permission attributes to run. Oracle will also start blocking .jar files signed with MD5, instead requiring signing with SHA1 or stronger.
However, the Java malware files discussed in this blog are equivalent to executable files (as opposed to Java applets). Here are some additional tips to defend against Java malware in enterprise environments:
- Remove JAR in file type associations in the operating system so that .jar files don’t run when double-clicked; .jar files must be manually executed using command line
- Restrict Java to execute only signed .jar files
- Manually verify signed .jar files
- Apply email gateway policy to block .jar as attachments
Duc Nguyen, Jeong Mun, Alden Pornasdoro
Microsoft Malware Protection Center
Office 365 ProPlus updates
Today’s post was written by Ron Markezich, corporate vice president for the Office commercial marketing team.
Office 365 ProPlus delivers cloud-connected and always up-to-date versions of our most valuable enterprise apps. Today, we’re making three important announcements related to ProPlus: changes to the Office 365 system requirements; improvements to the ProPlus update model, including alignment with Windows 10; and new tools and programs to manage ProPlus application compatibility.
Changes to the Office 365 system requirements
Office 365 ProPlus is the very best way to experience the Office 365 services. For IT, ProPlus delivers the most secure and most complete suite of productivity apps available. And because the apps are cloud-connected and always up-to-date, they’re continually getting better—with new security features, new telemetry and new management capabilities. For end users, ProPlus brings the Office 365 services to life. When a modern app is connected to a modern service, magic happens. People can collaborate in new ways. Apps can simplify mundane tasks. And advanced security services can protect users as they work.
When customers connect to Office 365 with a legacy version of Office, they’re not enjoying all that the service has to offer. The IT benefits—particularly security—are cut short. And the end user experience in the apps is limited to the features shipped at a point in time. To ensure that customers are getting the most out of their Office 365 subscription, we are updating our system requirements.
- Office 365 ProPlus or Office perpetual in mainstream support required to connect to Office 365 services. Starting October 13, 2020, Office 365 ProPlus or Office perpetual in mainstream support will be required to connect to Office 365 services. Office 365 ProPlus will deliver the best experience, but for customers who aren’t ready to move to the cloud by 2020, we will also support connections from Office perpetual in mainstream support.
- Applies to Office 365 commercial services only. This update does not change our system requirements or support policies for the Office perpetual clients, Office perpetual clients connecting to on-premises servers, or any consumer services.
- More than three years’ notice. We’re providing more than three years’ notice to give IT time to plan and budget for this change. Until this new requirement goes into effect in 2020, Office 2010, Office 2013 and Office 2016 perpetual clients will still be able to connect to Office 365 services.
Visit our Office 365 Tech Community to learn more and to ask the experts your questions.
Improvements to the Office 365 ProPlus update model, including alignment with Windows 10
Moving to Office 365 ProPlus requires an initial upgrade and ongoing management of regular updates. Customers quickly see the benefits of the move, but they’ve also asked us to simplify the update process—and to improve the coordination between Office and Windows. To respond to this feedback, we’re pleased to announce that we will align the Office 365 ProPlus and Windows 10 update model. This change will make planning and managing updates for both Office and Windows easier for customers using the Secure Productive Enterprise.
Targeting September 2017, we will make the following changes to the Office 365 ProPlus update model:
- Two updates a year. We will reduce the Office 365 ProPlus update cadence from three to two times a year, with semi-annual feature updates to Windows 10 and Office 365 ProPlus targeted for March and September.
- 18 months of support. We will extend the support period for Office 365 ProPlus semi-annual updates from 12 to 18 months (starting from first release) so IT professionals can choose to update once or twice a year.
- System Center Configuration Manager support. System Center Configuration Manager will support this new aligned update model for Office 365 ProPlus and Windows 10, making it easier to deploy and update the two products together.
See the upcoming changes to the Office 365 ProPlus update management article to learn more.
New tools and programs to manage Office 365 ProPlus application compatibility
One of the biggest concerns customers have about the move to a new version of Office is application compatibility. Office add-ins and VBA solutions often play a significant role in key business processes, and application compatibility is an important consideration in both upgrades and updates. To help customers manage ProPlus application compatibility, we’re pleased to announce four new investments.
- Upgrade assessment tools. Starting today, we’re offering a limited preview of new tools that will catalogue the add-ins and VBA solutions in use in your organization, identify potential issues with the upgrade to Office 365 ProPlus, and recommend steps for remediation.
- Application compatibility testing. For each new Office 365 ProPlus release, we will perform compatibility testing of the most common third-party add-ins, identify potential issues, and take steps to remediate.
- Office 365 ProPlus monitoring services. We will provide new services to monitor your ProPlus deployment and provide visibility into the usage and stability of apps and add-ins.
- Reporting, tracking and resolving issues. We will improve our existing service for reporting, tracking and resolving application compatibility issues—and partner with customers and ISVs to find the best approach to remediation.
You can learn more about the upgrade assessment tools today, and we’ll have more to share on our application compatibility testing program, the new Office 365 ProPlus monitoring services and the new service for reporting, tracking and resolving issues in the coming months.
We are here to help
If you’re connecting to Office 365 with a legacy version of Office, you’re not enjoying all that the service has to offer—and we’re here to help. For more information on Office 365 ProPlus deployment, refer to the Office 365 ProPlus Deployment Guide and the ProPlus Ignite On-Demand sessions. And when you’re ready to make the move to ProPlus, the Microsoft FastTrack customer success service will help you with the details. Visit the FastTrack website to learn more and submit a request for help with planning, assessment and deployment. ProPlus is the very best way to experience the Office 365 services, and we’re committed to helping you upgrade with confidence.
—Ron Markezich
The post Office 365 ProPlus updates appeared first on Office Blogs.
Resumable Online Index Rebuild is in public preview for SQL Server 2017 CTP 2.0
We are delighted to announce that Resumable Online Index Rebuild is now available for public preview in the SQL Server 2017 CTP 2.0 release. With this feature, you can resume a paused index rebuild operation from where it was paused rather than having to restart the operation at the beginning. In addition, this feature rebuilds indexes using only a small amount of log space. You can use the new feature in the following scenarios:
- Resume an index rebuild operation after an index rebuild failure, such as after a database failover or after running out of disk space. There is no need to restart the operation from the beginning. This can save a significant amount of time when rebuilding indexes for large tables.
- Pause an ongoing index rebuild operation and resume it later. For example, you may need to temporarily free up system resources to execute a high priority task or you may have a single maintenance window that is too short to complete the operation for a large index. Instead of aborting the index rebuild process, you can pause the index rebuild operation and resume it later without losing prior progress.
- Rebuild large indexes without using a lot of log space and without holding a long-running transaction that blocks other maintenance activities. This helps log truncation and avoids out-of-log errors that are possible with long-running index rebuild operations.
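The scenarios above map to the new RESUMABLE options on ALTER INDEX. A minimal sketch, assuming a hypothetical IX_Orders_Date index on a dbo.Orders table:

```sql
-- Start a resumable online rebuild, capped at a 60-minute window
ALTER INDEX IX_Orders_Date ON dbo.Orders
REBUILD WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 60 MINUTES);

-- Pause the rebuild to free up resources for a higher-priority task
ALTER INDEX IX_Orders_Date ON dbo.Orders PAUSE;

-- Inspect paused or running resumable operations
SELECT name, percent_complete, state_desc
FROM sys.index_resumable_operations;

-- Resume from where the rebuild left off (or use ABORT to cancel)
ALTER INDEX IX_Orders_Date ON dbo.Orders RESUME;
```

If MAX_DURATION elapses before the rebuild completes, the operation pauses itself rather than failing, and can be resumed in the next maintenance window.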
Read the articles: The following articles provide detailed and updated information about this feature:
Public preview information: For public preview communication on this topic, please contact the ResumableIDXPreview@microsoft.com alias.
To try SQL Server 2017: Get started with the preview of SQL Server 2017 on macOS, Docker, Windows, and Linux.
Graph Data Processing with SQL Server 2017
SQL Server is trusted by many customers for enterprise-grade, mission-critical workloads that store and process large volumes of data. Technologies like in-memory OLTP and columnstore have also helped our customers to improve application performance many times over. But when it comes to hierarchical data with complex relationships or data that share multiple relationships, users might find themselves struggling with a good schema design to represent all the entities and relationships, and writing optimal queries to analyze complex data and relationships between the tables. SQL Server uses foreign keys and joins to handle relationships between entities or tables. Foreign keys only represent one-to-many relationships and hence, to model many-to-many relationships, a common approach is to introduce a table that holds such relationships. For example, Student and Course in a school share a many-to-many relationship; a Student takes multiple Courses and a Course is taken by multiple Students. To represent this kind of relationship one can create an “Attends” table to hold information about all the Courses a Student is taking. The “Attends” table can then store some extra information like the dates when a given Student took this Course, etc.
Over time, applications tend to evolve and get more complex. For example, a Student can start “Volunteering” in a Course or start “Mentoring” other Students. This adds new types of relationships to the database. With this approach, it is not always easy to modify existing tables to accommodate evolving relationships. Analyzing data connected by means of foreign keys or multiple junction tables involves writing complex queries with joins across multiple tables, and this is no trivial task. The queries can quickly get complex, resulting in complex execution plans and degraded query performance over time.
We live in an era of big data and connected information; people, machines, devices, businesses across the continents are connected to each other more than ever before. Analyzing connected information is becoming critical for businesses to achieve operational agility. Users are finding it easier to model data and complex relationships with the help of graph databases. Native graph databases have risen in popularity, being used for social networks, transportation networks, logistics, and much more. Graph database scenarios can easily be found across several business disciplines, including supply chain management, computer or telecommunication networks, detecting fraud attacks, and recommendation engines.
At Microsoft, we believe that there should be no need for our customers to turn to a new system just to meet their new or evolving graph database requirements. SQL Server is already trusted by millions of customers for mission-critical workloads, and with graph extensions in SQL Server 2017, customers get the best of both relational and graph databases in a single product, including the ability to query across all data using a single platform. Users can also benefit from other cutting-edge technologies already available in SQL Server, such as columnstore indexes, advanced analytics using SQL Server R Services, high availability, and more.
Graph extensions available in SQL Server 2017
A graph schema or database in SQL Server is a collection of node and edge tables. A node represents an entity—for example, a person or an organization—and an edge represents a relationship between the two nodes it connects. Figure 1 shows the architecture of a graph database in SQL Server.
Figure 1: SQL graph database architecture
Create graph objects
With the help of T-SQL extensions to DDL, users can create node or edge tables. Both nodes and edges can have properties associated with them. Users can model many-to-many relationships using edge tables. A single edge type can connect multiple types of nodes with each other, in contrast to foreign keys in relational tables. Figure 2 shows how node and edge tables are stored internally in the database. Since nodes and edges are stored as tables, most of the operations supported on tables are available on node and edge tables, too.
Figure 2: Person Node and Friends Edge table.
The CREATE TABLE syntax guide shows the supported syntax for creation of node and edge tables.
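For example, the Person node and Friends edge tables from Figure 2 could be created along these lines (column names are illustrative):

```sql
CREATE TABLE Person (
    ID INTEGER PRIMARY KEY,
    Name VARCHAR(100)
) AS NODE;

CREATE TABLE Friends (
    StartDate DATE
) AS EDGE;
```

The engine automatically adds an implicit $node_id column to each node table, and implicit $from_id and $to_id columns to each edge table, to identify nodes and the pairs of nodes each edge connects.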
Query language extensions
To help search a pattern or traverse through the graph, a new MATCH clause is introduced that uses ASCII-art syntax for pattern matching and navigation. For example, consider the Person and Friends node tables shown in Figure 2; the following query will return friends of “John”:
SELECT Person2.Name
FROM Person Person1, Friends, Person Person2
WHERE MATCH(Person1-(Friends)->Person2)
AND Person1.Name = 'John';
The MATCH clause takes a search pattern as input. This pattern traverses the graph from one node to another via an edge. Edges appear inside parentheses and nodes appear at the two ends of the arrow. Please refer to the MATCH syntax guide to find out more ways in which MATCH can be used.
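To populate the tables used by a query like this, nodes are inserted like ordinary rows, while edge rows reference the implicit $node_id values of the nodes they connect. A sketch, assuming the Person and Friends tables from Figure 2:

```sql
INSERT INTO Person (ID, Name) VALUES (1, 'John'), (2, 'Mary');

-- An edge row connects two nodes through their $node_id values
INSERT INTO Friends ($from_id, $to_id, StartDate)
VALUES (
    (SELECT $node_id FROM Person WHERE ID = 1),
    (SELECT $node_id FROM Person WHERE ID = 2),
    '2017-01-01'
);
```

With this data in place, the MATCH query shown earlier returns Mary as a friend of John.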
Fully integrated in SQL Server engine
Graph extensions are fully integrated in the SQL Server engine. Node and edge tables are just new types of tables in the database. The same storage engine, metadata, query processor, etc., is used to store and query graph data. All security and compliance features are also supported. Other cutting-edge technologies like columnstore, ML using R Services, HA, and more can also be combined with graph capabilities to achieve more. Since graphs are fully integrated in the engine, users can query across their relational and graph data in a single system.
Tooling and ecosystem
Users benefit from the existing tools and ecosystem that SQL Server offers. Tools like backup and restore, import and export, BCP, and SSMS “just work” out of the box.
FAQs
How can I ingest unstructured data?
Since we are storing data in tables, users must know the schema at the time of creation. Users can always add new types of nodes or edges to their schema. But if they want to modify an existing node or edge table, they can use ALTER TABLE to add or delete attributes. If you expect any unknown attributes in your schema, you could either use sparse columns or create a column to hold JSON strings and use that as a placeholder for unknown attributes.
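As a sketch of the JSON-placeholder approach, using a hypothetical Device node table and the built-in JSON functions available since SQL Server 2016:

```sql
CREATE TABLE Device (
    ID INTEGER PRIMARY KEY,
    Name VARCHAR(100),
    Properties NVARCHAR(MAX)  -- JSON string holding attributes not known up front
) AS NODE;

INSERT INTO Device (ID, Name, Properties)
VALUES (1, 'Sensor-A', N'{"vendor": "Contoso", "firmware": "2.1"}');

-- Query ad-hoc attributes without changing the schema
SELECT Name, JSON_VALUE(Properties, '$.vendor') AS Vendor
FROM Device
WHERE ISJSON(Properties) = 1;
```

New well-known attributes can later be promoted to real columns with ALTER TABLE, while rarely used ones stay in the JSON placeholder.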
Do you maintain an adjacency list for faster lookups?
No. We are not maintaining an adjacency list on every node; instead we are storing edge data in tables. Because SQL Server is a relational database, storing data in the form of tables was a more natural choice for us. In native graph databases that use adjacency lists, you can only traverse in one direction; if you need to traverse in the reverse direction, you must maintain an adjacency list at the remote node too. Also, with adjacency lists, a large query that spans your graph essentially always performs a nested loop lookup: for every node, find all the edges, from there find all the connected nodes and edges, and so on.
Storing edge data in a separate table allows us to benefit from the query optimizer, which can pick the optimal join strategy for large queries. Depending on the complexity of query and data statistics, the optimizer can pick a nested loop join, hash join, or other join strategies — as opposed to always using nested loop join, as in the case of an adjacency list. Each edge table has two implicit columns, $from_id and $to_id, which store information about the nodes that it connects. For OLTP scenarios, we recommend that users create indexes on these columns ($from_id, $to_id) for faster lookups in the direction of the edge. If your application needs to perform traversals in reverse direction of an edge, you can create an index on ($to_id, $from_id).
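The recommended indexes can be created directly on the implicit edge columns. A sketch, assuming the Friends edge table from Figure 2:

```sql
-- Faster lookups in the direction of the edge
CREATE INDEX IX_Friends_From ON Friends ($from_id, $to_id);

-- Supports traversals in the reverse direction of the edge
CREATE INDEX IX_Friends_To ON Friends ($to_id, $from_id);
```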
Is the new MATCH syntax supported on relational tables?
No. The MATCH clause works only on graph node and edge tables.
Can I alter an existing table into a node or edge table?
No. In the first release, ALTER TABLE to convert an existing relational table into a node or edge table is not supported. Users can create a node table and use INSERT INTO … SELECT FROM to populate data into the node table. To populate an edge table from an existing table, proper $from_id and $to_id values must be obtained from the node tables.
What are some table operations that are not supported on node or edge tables?
In the first release, node or edge tables cannot be created as memory-optimized, system-versioned, or temporary tables. Stretching or creating a node or edge table as external table (PolyBase) is also not supported in this release.
How do I find a node connected to me, an arbitrary number of hops away, in my graph?
The ability to recurse through a combination of nodes and edges an arbitrary number of times is called transitive closure. For example, find all the people connected to me through three levels of indirection, or find the employee chain for a given employee in an organization. Transitive closure is not supported in the first release. A recursive CTE or a T-SQL loop may be used to work around these types of queries.
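A recursive CTE workaround might look like the following sketch, which joins on the implicit edge columns directly. It assumes the Person and Friends tables from Figure 2, limits the recursion to three hops, and does not guard against cycles:

```sql
-- Find names reachable from John via Friends edges, up to 3 hops away
WITH Reachable (node_id, hops) AS (
    SELECT F.$to_id, 1
    FROM Person AS P
    JOIN Friends AS F ON F.$from_id = P.$node_id
    WHERE P.Name = 'John'
    UNION ALL
    SELECT F.$to_id, R.hops + 1
    FROM Reachable AS R
    JOIN Friends AS F ON F.$from_id = R.node_id
    WHERE R.hops < 3
)
SELECT DISTINCT P.Name
FROM Reachable AS R
JOIN Person AS P ON P.$node_id = R.node_id;
```

For graphs that may contain cycles, the path visited so far would need to be tracked (for example, in an accumulated string column) to avoid infinite recursion.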
How do I find ANY Node connected to me in my graph?
The ability to find any type of node connected to a given node in a graph is called polymorphism. SQL graph does not support polymorphism in the first release. A possible workaround is to write queries with a UNION clause over a known set of node and edge types. However, this workaround is only practical for a small set of node and edge types.
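For a known set of edge types, the UNION workaround can be sketched as follows, reusing the Person and Friends tables from Figure 2 (the Colleagues edge table is hypothetical):

```sql
SELECT P2.Name
FROM Person P1, Friends F, Person P2
WHERE MATCH(P1-(F)->P2) AND P1.Name = 'John'
UNION
SELECT P2.Name
FROM Person P1, Colleagues C, Person P2
WHERE MATCH(P1-(C)->P2) AND P1.Name = 'John';
```

Each additional node or edge type adds another branch to the query, which is why this approach only scales to a small schema.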
Are there special graph analytics functions introduced?
Some graph databases provide dedicated graph analytical functions like “shortest path” or “page rank.” SQL Graph does not provide any such functions in this release. Again, T-SQL loops and temp tables may be used to write a workaround for these scenarios.
Thank you for reading this post! We are excited to announce the first version of graph extensions to SQL Server. To learn more, see this article on Graph processing with SQL Server 2017. Stay tuned for more blog posts and updates on SQL graph database!
Try SQL Server 2017
Get started with the preview of SQL Server 2017 on macOS, Docker, Windows, and Linux using these links:
Text Mining to Improve the Health of Millions of Citizens
By Kenji Takeda, Director of Azure for Research at Microsoft.
Doctors face daily decisions about the best care for their patients, and their own clinical experience can be enhanced using evidence-based medicine, such as through clinical trial data. As David Tovey, Editor-in-Chief, Cochrane, explained, “Before evidence-based medicine came along, people were reliant on the expertise of a doctor, the level of knowledge or understanding that he or she had. And this meant that treatments frequently took many, many years to come from research into practice.”
One of the most robust ways of synthesizing research evidence across healthcare trials is through a systematic review. This involves finding, examining, and analyzing clinical trial data and research reports in a methodical way, to pull together high-quality summaries of how effective healthcare interventions are. This provides critical evidence to decision-makers at the international, national and local level, to make sure citizens receive the medical and social care they deserve. While this is a rigorous approach, it can take up to three years to produce a major systematic review, which limits our ability to use up-to-date research to guide decision-making.
Cochrane is a not-for-profit organization that creates, publishes and maintains systematic reviews of health care interventions, with more than 37,000 contributors working in 130 countries. The Cochrane Transform Project is using AI and machine learning to text mine thousands of reports to automatically select ones to include in systematic reviews. This saves weeks of monotonous work, freeing up the expert reviewers to spend their time and energy on high-level analysis. Researchers at University College London are using Azure Machine Learning to develop and deploy their text mining classifiers as a cloud service at scale, customized for different clinical assessment groups, in ways that were previously impossible. This is helping to make decisions around healthcare interventions faster and more accurate for millions of people around the world.
Teaching a Machine to Read Research Studies
The evidence pipeline developed by Cochrane is a ‘surveillance’ system that helps Cochrane find relevant research as soon as it is published. Research enters the pipeline through routine and specified searches of the health and social care literature and is then classified using machine learning. The three key types of classifier are:
- Study type, e.g. is it a randomized controlled trial;
- Review group, e.g. dementia, hypertension, pregnancy and childbirth, stroke, urology, etc.;
- PICO, which covers:
  - Patient, population or problem, including characteristics such as demographics and risk factors;
  - Intervention under consideration for the patient or population;
  - Comparison with other interventions, e.g. placebo, or a different drug;
  - Outcome, e.g. quality of life, adverse effects, morbidity.
Cochrane evidence pipeline
The first stage in the pipeline is to identify research studies that are Randomized Controlled Trials (RCTs), so that irrelevant studies can be filtered out quickly. To build these classifiers, a training dataset was created using the Cochrane Crowd citizen science platform, which enables anyone to contribute by helping to categorize medical research. A classifier was built using more than 300,000 records from Cochrane Crowd, including over 30,000 clinical trials. 60-80% of the studies have scores less than 0.1, so if we trust the machine and automatically exclude these citations, we retain 99.897% of the RCTs (i.e. we lose about 0.1% but make significant gains in manual workload reduction).
Randomized control trial classifier performance
Azure Machine Learning is used to provide text mining AI capabilities to speed up reviewing of clinical trial reports and research papers on healthcare interventions. The team easily moved their existing research methods in R to the cloud with Azure ML. A key advantage is that they can quickly create customized ML models for different end users, e.g. groups looking at different clinical/medical conditions. “We’ve got a series of different classifiers which are running up on the Azure Machine Learning platform, where we progressively narrow the scope of what a particular citation is looking at. We have a study type classifier – the RCT classifier. I’ve just developed a systematic reviews classifier, which tells us about the type of study, the type of research we’re looking at. Then we go into a domain-specific classifier. There are 50-odd Cochrane review groups, and we have a classifier for every review group. So for the musculoskeletal group, the heart group, all these different review groups, we can say which review group a particular citation is useful for,” said Professor James Thomas, lead researcher at UCL.
The classification algorithms were originally written in R, and these could be seamlessly inserted into the Azure ML workflow. Over time, the researchers have moved to Python and found it trivial to replace the R algorithms with Python ones in Azure ML. “Initially I developed these models in R, which didn’t work as well as Python. So I was able to keep the top and the bottom of the process and just swap out the machine learning section in the middle.” Azure SQL Database is used to store the Cochrane database of trials and research studies, which is then used by Azure ML. These services are deployed through Azure ML APIs to help clinical assessment groups using the Cochrane Register of Studies (CRS) online service select the studies that will be included in their reviews.
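The “keep the top and the bottom, swap the middle” approach Thomas describes can be sketched as a pipeline with a pluggable scoring step. All names below are illustrative, not Azure ML’s actual API:

```python
# A minimal pluggable pipeline in the spirit of the quote above: the
# ingestion and publishing stages stay fixed while the scoring step in the
# middle can be swapped (e.g. an R model replaced by a Python one).
# All function names here are hypothetical.

def ingest(raw_records):
    # Top of the process: normalize incoming citation records.
    return [r.strip().lower() for r in raw_records]

def publish(scores):
    # Bottom of the process: round scores for the review database.
    return [round(s, 2) for s in scores]

def run_pipeline(raw_records, score_step):
    return publish(score_step(ingest(raw_records)))

# Two interchangeable "middle" steps sharing the same interface.
def keyword_scorer(texts):
    return [1.0 if "randomized" in t else 0.0 for t in texts]

def length_scorer(texts):
    return [min(len(t) / 100.0, 1.0) for t in texts]

records = ["Randomized trial of X  ", "Case report on Y"]
print(run_pipeline(records, keyword_scorer))  # [1.0, 0.0]
print(run_pipeline(records, length_scorer))
```

Because every scorer takes and returns the same shapes, swapping one implementation for another leaves the rest of the workflow untouched.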
“What I like about building these classifiers using Azure Machine Learning is that I can deploy them as Web services at the click of a button and I don’t have to maintain the virtual machine when I need to call them,” Thomas added. The team is improving the PICO classifier, which is quite challenging, and using Azure N-Series GPU instances to speed up training of the neural network they are developing.
Cochrane Register of Studies (CRS)
Helping Millions of Patients and Citizens Receive the Best Healthcare
The same research group at UCL is also working with the National Institute for Health and Care Excellence (https://www.nice.org.uk/) in the U.K., which provides health care delivery guidance across the U.K. National Health Service (NHS) for more than 65 million people. The processes and technologies are similar – to use AI to build research ‘surveillance’ systems – and this time they will be used to update clinical guidelines. This work is deployed to hundreds of researchers worldwide through the EPPI Reviewer cloud service, which runs on Azure, using the APIs deployed through Azure ML. It will directly speed up adoption of new health care interventions at national and international scale, for example, by facilitating the clearing of new drugs for widespread use across the NHS. The result is that decisions about which interventions are best will be faster and more reliable, thanks to Microsoft’s intelligent cloud.
Azure ML is augmenting systematic reviewers’ capabilities, allowing them to focus on high-level analysis of research studies instead of the monotonous work of sifting through thousands of studies by hand. This new approach is leading to a system that is proactive, classifying research as it is published, instead of relying on reviewers to do manual online searches before they can start analyzing the trial data. “What we’re now doing is getting more granular and identifying which reviews a particular study might be relevant for. There are thousands of reviews in the Cochrane Library, and what we’re looking to do is be able to identify which review a new piece of research is relevant for, which will mean that the authors of that review can be alerted to its presence and potentially update the review if it looks as though the research would change the review’s findings,” explained Thomas.
It is a critical advance, as currently Cochrane reviewers alone manually screen around three to four million citations every year. These developments at NICE and in the Cochrane Transform project are making major leaps to accelerate and improve the quality of healthcare decision-making globally.
Kenji
Resources
Venue management firm reduces costs, improves collaboration with move to the cloud
Today’s blog post was written by John Hornby, chief operating officer at the NEC Group, a venue management company based in the United Kingdom.
The NEC Group got its start in 1976 when Queen Elizabeth II officially opened the National Exhibition Centre in Birmingham, England. Since then, we’ve grown to become one of the world’s leading venue management companies. The NEC Group now operates five major venues in Birmingham, plus conference facilities at Resorts World Birmingham at the NEC, one of the largest integrated leisure and entertainment complexes in the United Kingdom. Each year, we welcome approximately 7 million people to more than 750 events—from concerts to conventions. Our company also operates a national ticketing agency, an award-winning catering business, a leading hospitality brand and several other businesses. Combined, our business activities contribute more than £2.1 billion (approximately US$2.6 billion) annually to the local economy.
With so many businesses and venues to oversee, storing and managing data has always been a challenge. Several years ago, we started operating internal datacenters, a decision that made perfect sense when it was much more cost-effective to keep our corporate information in-house. Today, however, both the costs and the risks associated with managing data internally are increasing at an alarming rate.
We also have approximately 1,500 employees spread across many different locations. Most of them don’t sit at a desk with a PC at their fingertips. They spend their days and nights in workshops, kitchens, conference centers and arenas. And to do their jobs well, they need to be able to stay connected, access data and collaborate with coworkers from wherever their duties take them.
Since the day we first opened our doors, we’ve made it our mission to look forward, never back. To explore every new opportunity and embrace cutting-edge technology. To be a leader, not a follower. It was in that spirit that we made the decision to implement Microsoft Office 365 cloud-based services. To help us make the transition, we relied, in part, on a number of Microsoft FastTrack* resources.
FastTrack provided useful scenarios, best practices and other resources that helped us create an internal process designed to ensure seamless Office 365 migration, onboarding and adoption. We also used FastTrack resources to build our successful ambassador program, which raises awareness and promotes widespread adoption of Office 365 throughout the organization. Our employee ambassadors, who have a high level of Office 365 skill, volunteer to share their knowledge with coworkers and encourage them to use the new services.
With help from Microsoft and our Office 365 capabilities, we are reaching our business goals. By moving our data to the cloud, we are lowering our operational costs, reducing risks to our company information and gradually ending our dependence on internal datacenters. With Office 365, we can foster real-time collaboration and support mobile productivity for our employees. For example, now they can efficiently collaborate on event plans anytime, anywhere, or quickly touch base with an onsite catering manager or lighting technician using almost any device. And because our data is now stored in the cloud and Office 365 is always up to date, our IT staff has more time to focus on adding value to the business.
Taking advantage of FastTrack resources was a good move for everyone at the NEC Group. Because of the information and insight FastTrack provided, we never felt like we were reinventing the wheel. The process of moving our data to the cloud went very smoothly, and with Office 365, our employees do their jobs better, and their lives get a little easier every day.
—John Hornby
Read the full story of NEC Group’s adoption of Office 365 with FastTrack. To learn more about Microsoft FastTrack, visit FastTrack.microsoft.com and become familiar with what our customer success service has to offer. For more information about the NEC Group, visit necgroup.co.uk/.
*FastTrack is available to customers with 50 seats and above with eligible plans. Refer to FastTrack Center Benefit for Office 365 for eligibility details.
The post Venue management firm reduces costs, improves collaboration with move to the cloud appeared first on Office Blogs.
A Week with Microsoft Edge: The browser built for books and reading
Next up in our series, A Week with Microsoft Edge, we’re talking about how the browser is built for books and reading. With the Windows 10 Creators Update, we’ve worked to make reading in Microsoft Edge a great experience and have expanded the type of content you can read right within the browser. You can read PDF files, e-books in the EPUB file format, and books you’ve downloaded from the Windows Store, right within Microsoft Edge. You can also simplify the layout of web pages with Reading view, save pages to read later in your reading list, and more.
Here are some tips for a great reading experience in Microsoft Edge.
You can catch up on other blog posts in this week’s series below.
The post A Week with Microsoft Edge: The browser built for books and reading appeared first on Windows Experience Blog.
Just released – Windows developer evaluation virtual machines – April 2017 build
We’re releasing the April 2017 edition of our evaluation Windows developer virtual machines (VMs) on Windows Dev Center. The VMs come in Hyper-V, Parallels, VirtualBox and VMware flavors. The evaluation version will expire on 07/07/17.
The evaluation VMs contain:
- Windows 10 Creators Update – Enterprise Evaluation, Version 1703
- Visual Studio 2017 with the Universal Windows Platform (15063 SDK) and Azure workloads enabled
- Windows UWP samples (March 2017 Update)
- Windows Subsystem for Linux
If you want a non-evaluation version of the VM, we have those as well. They do require a Windows 10 Pro license, which you can get from the Microsoft Store.
If you have feedback on the VMs, please provide it over at the Windows Developer Feedback UserVoice site.
The post Just released – Windows developer evaluation virtual machines – April 2017 build appeared first on Building Apps for Windows.
This Week on Windows: New Windows 10 PCs, Forza Horizon 3, and more
We hope you enjoyed today’s episode of This Week on Windows! Head over here to see our series, “A Week with Microsoft Edge,” all about the browser designed for Windows 10. Check out five ways to get started with the Paint 3D app, learn more about the new creativity apps with unique capabilities for Surface Dial, or, keep reading to catch up on all of this week’s news.
In case you missed it:
HP announces new Pavilion convertible PCs powered by Windows 10 at Coachella
HP revealed a new lineup of Pavilion laptops powered by Windows 10 with sophisticated designs, innovative technologies, and original features built to take advantage of the Windows 10 Creators Update. These devices include dual storage options, USB Type-C, and the best of Windows 10: 3D in Windows 10, your digital personal assistant – Cortana* – a more secure browser with Microsoft Edge, and more.
Other features include:
- Up to 10 hours of battery life and HP Fast Charge (90% in 90 minutes) on select models
- Latest 7th Gen Intel Core i3-i7 processors
- Storage options: dual storage up to 256 GB SSD + 1 TB HDD, or single storage up to 512 GB SSD or up to 2 TB HDD, on select models
- Choice of AMD Radeon & NVIDIA GeForce Discrete Graphics
- Enhanced 2x2 802.11ac Wi-Fi option
- HP Wide Vision camera or optional IR camera which supports Windows Hello
- Exceptional audio with dual speakers, HP Audio Boost, and tuning by the experts at B&O Play – perfect for calls with Skype
These new PCs are available now beginning at $399 USD. Head over to HP.com to learn more!
Here’s what’s new in the Windows Store:
Explore a post-apocalyptic “Minecraft” wasteland as Vault Boy, Nick Valentine, Fawkes the philosophical supermutant and others made famous through a retro-sci-fi series with the “Fallout” Mash-Up Pack. While console gamers have enjoyed the pack since December, it’s now available on “Minecraft: Pocket Edition” and “Minecraft: Windows 10 Edition,” bringing with it 44 skins, as well as appropriately dystopian textures and music.
Forza Horizon 3 Porsche Car Pack
Down Under’s premier road race just got a lot more thrilling. In Forza Horizon 3, home to Australia’s Horizon Festival, the back roads are filled with legendary and amazing vehicles from all over the world. Now, a new Porsche Car Pack ($6.99) joins the excitement, letting players experience prime examples of the breadth and depth of Porsche automotive history. Head over to ForzaMotorsport.net for more details on each of the cars featured in the pack, or Xbox Wire for more!
Buy Voodoo Vince: Remastered for $14.99
Vince the voodoo doll is back after a 13-year absence in Voodoo Vince: Remastered ($14.99), now in high definition and headed to the bayous of Louisiana and the streets of New Orleans. He’s searching for his keeper, Madam Charmaine, and as usual he’s ready to take on whatever monsters or villains get in his way. Read more over at Xbox Wire!
TV Spotlight – Better Call Saul
In search of a new TV obsession? This month’s TV Spotlight is on Better Call Saul, the critically acclaimed Breaking Bad spinoff which chronicles the backstory of slippery criminal lawyer Saul Goodman. Binge watch and save on seasons 1 and 2 now, then dive into season 3 in the Movies & TV section of the Windows Store.
The Fate of the Furious – Preorder + Soundtrack
Preorder The Fate of the Furious from the Movies & TV section of the Windows Store while the film is still in theaters and get the soundtrack now. Learn more on The Fire Hose.
Have a great weekend!
The post This Week on Windows: New Windows 10 PCs, Forza Horizon 3, and more appeared first on Windows Experience Blog.
Episode 127 on the new Script Lab Office add-in with Michael Zlatkovsky and Bhargav Krishna—Office 365 Developer Podcast
In Episode 127 of the Office 365 Developer Podcast, Richard diZerega and Andrew Coates talk with Michael Zlatkovsky and Bhargav Krishna about the new Script Lab Office add-in.
Weekly updates
- SharePoint Patterns and Practices – April 2017 Release by the PnP team
- SharePoint PnP Webcast – Managing the “modern” experiences in SharePoint Online by the PnP team
- Using Azure Functions with the Microsoft Graph and BING Translator API’s by Jeremy Thake
- Using Chrome Profiles to manage multiple identities by Jeremy Thake
- SharePoint time, is not your time, is not their time by Julie Turner
- How to generate SharePoint Framework bundles for multiple tenants by Wictor Wilen
Show notes
- Script Lab, a Microsoft Garage Project
- Script Lab in Office Store
- Script-Lab on GitHub
- Office JS Snippets on GitHub
- E-Book: “Building Office Add-ins using Office.js”
Got questions or comments about the show? Join the O365 Dev Podcast on the Office 365 Technical Network. The podcast is available on iTunes (search for “Office 365 Developer Podcast”), or you can subscribe directly via the RSS feed at feeds.feedburner.com/Office365DeveloperPodcast.
About Michael Zlatkovsky
I’m a developer on the Office Extensibility Team at Microsoft, working on the Office.js APIs and the tooling that surrounds them. I love API design work and feel fortunate to have played a part in the rebirth of the Office 2016 wave of Office.js APIs. In my spare time, I have been writing a book about Office.js key concepts, which has been a fun way of expanding upon my answers on StackOverflow. The book is available in e-book form at leanpub.com/buildingofficeaddins.
About Bhargav Krishna
I have been a web developer at Microsoft since 2013. I currently work for Microsoft Teams and love cutting edge tech, learning new frameworks, tools, platforms etc. Outside of work, I am an avid gamer and you can find me online with @wrathofzombies on Xbox, GitHub, Twitter and Facebook.
About the hosts
Richard is a software engineer in Microsoft’s Developer Experience (DX) group, where he helps developers and software vendors maximize their use of Microsoft cloud services in Office 365 and Azure. Richard has spent a good portion of the last decade architecting Office-centric solutions, many that span Microsoft’s diverse technology portfolio. He is a passionate technology evangelist and a frequent speaker at worldwide conferences, trainings and events. Richard is highly active in the Office 365 community, a popular blogger at aka.ms/richdizz, and can be found on Twitter at @richdizz. Richard was born, raised and is based in Dallas, TX, but works on a worldwide team based in Redmond. Richard is an avid builder of things (BoT), musician and lightning-fast runner.
A civil engineer by training and a software developer by profession, Andrew Coates has been a developer evangelist at Microsoft since early 2004, teaching, learning and sharing coding techniques. During that time, he’s focused on .NET development on the desktop, in the cloud, on the web, on mobile devices and most recently for Office. Andrew has a number of apps in various stores and generally has far too much fun doing his job to honestly be able to call it work. Andrew lives in Sydney, Australia with his wife and two almost-grown-up children.
Useful links
- Office 365 Developer Center
- Office 365 main blog
- dev.office.com blog
- Slack channel
- StackOverflow
- Yammer Office 365 Technical Network
- O365 Dev Podcast
- O365 Dev Apps Model
- O365 Dev Tools
- O365 Dev APIs
- O365 Dev Migration to App Model
- O365 Dev Links
- UserVoice
The post Episode 127 on the new Script Lab Office add-in with Michael Zlatkovsky and Bhargav Krishna—Office 365 Developer Podcast appeared first on Office Blogs.
vswhere is now installed with Visual Studio 2017
Starting in the latest preview release of Visual Studio version 15.2 (26418.1-Preview), you can now find vswhere installed in “%ProgramFiles(x86)%\Microsoft Visual Studio\Installer” (on 32-bit operating systems before Windows 10, you should use “%ProgramFiles%\Microsoft Visual Studio\Installer”).
While I initially made vswhere.exe available via NuGet and Chocolatey for easy acquisition, some projects do not use package managers, and most projects do not want to commit binaries to a Git repository (since every version, with little compression, would be downloaded to every clone without a filter like Git LFS).
So starting with build 15.2.26418.1* you can rely on vswhere.exe being installed. We actually install it with the installer, so even if you install a product like Build Tools you can still rely on vswhere.exe being available in “%ProgramFiles(x86)%\Microsoft Visual Studio\Installer”.
* A note about versions: the display version is 15.2.26418.1, but package and binary versions may be 15.0.26418.1. This is an artifact of how we do versioning, but we are looking to fix the “installationVersion” property reported by vswhere.exe to match the display version, which you can currently see as part of the “installationName” property, as in the following example.
instanceId: 881fd1f9
installDate: 4/20/2017
installationName: VisualStudioPreview/15.2.0-Preview+26418.1.d15rel
installationPath: C:\Program Files (x86)\Microsoft Visual Studio\Preview\Enterprise
installationVersion: 15.0.26418.1
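Because the default output is plain name: value text (vswhere also supports a -format json switch for machine consumption), it is straightforward to parse in a script. A minimal Python sketch, using the sample output above:

```python
# Parse vswhere's default text output (one "name: value" pair per line)
# into a dict so a build script can look up properties such as
# installationPath or installationVersion.

def parse_vswhere(text):
    props = {}
    for line in text.splitlines():
        name, sep, value = line.partition(": ")
        if sep:  # skip blank or malformed lines
            props[name.strip()] = value.strip()
    return props

# Sample output as shown in the post above.
sample = """instanceId: 881fd1f9
installDate: 4/20/2017
installationName: VisualStudioPreview/15.2.0-Preview+26418.1.d15rel
installationPath: C:\\Program Files (x86)\\Microsoft Visual Studio\\Preview\\Enterprise
installationVersion: 15.0.26418.1"""

props = parse_vswhere(sample)
print(props["installationVersion"])  # 15.0.26418.1
```

In a real build script you would feed this the captured stdout of vswhere.exe instead of a hard-coded sample.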
C++ Code Editing and Navigation in Visual Studio
Visual Studio comes packed with a set of productivity tools to make it easy for C++ developers to read, edit, and navigate through their code. In this blog post we will dive into these features and go over what they do. This post is part of a series aimed at new users to Visual Studio.
This blog post goes over the following concepts:
- Basic Editor Features
- Quick Info and Parameter Info
- Scroll Bar Map Mode
- Class View
- Generate Graph of Include Files
- View Call Hierarchy
- Peek Definition
- Open Document
- Toggle Header/Code File
- Solution Explorer
- Go To Definition / Declaration
- Find / Find in Files
- Find All References
- Navigation Bar
- Go To
- Quick Launch
- Basic Editor Features
- Change Tracking
- IntelliSense
- Quick Fixes
- Refactoring Features
- Code Style Enforcement with EditorConfig
Reading and Understanding Code
If you’re like most developers, chances are you spend more time looking at code than modifying it. With that in mind, Visual Studio provides a suite of features to help you better visualize and understand your project.
Basic Editor Features
Visual Studio automatically provides syntax colorization for your C++ code to differentiate between different types of symbols. Unused code (e.g. code under an #if 0) is more faded in color. In addition, outlines are added around code blocks to make it easy to expand or collapse them.
If there is an error in your code that will cause your build to fail, Visual Studio adds a red squiggle where the issue is occurring. If Visual Studio finds an issue with your code but the issue wouldn’t cause your build to fail, you’ll see a green squiggle instead. You can look at any compiler-generated warnings or errors in the Error List window.
If you place your cursor over a curly brace, ‘{’ or ‘}’, Visual Studio highlights its matching counterpart.
You can zoom in or out in the editor by holding down Ctrl and scrolling with your mouse wheel or selecting the zoom setting in the bottom left corner.
The Tools > Options menu is the central location for Visual Studio options, and gives you the ability to configure a large variety of different features. It is worth exploring to tailor the IDE to your unique needs.
You can add line numbers to your project by going to Tools > Options > Text Editor > All Languages > General or by searching for “line num” with Quick Launch (Ctrl + Q). Line numbers can be set for all languages or for specific languages only, including C++.
Quick Info and Parameter Info
You can hover over any variable, function, or other code symbol to get information about that symbol. For symbols that can be declared, Quick Info displays the declaration.
When you are writing out a call to a function, Parameter Info is invoked to clarify the types of parameters expected as inputs. If there is an error in your code, you can hover over it and Quick Info will display the error message. You can also find the error message in the Error List window.
In addition, Quick Info displays any comments that you place just above the definition of the symbol that you hover over, giving you an easy way to check the documentation in your code.
Scroll Bar Map Mode
Visual Studio takes the concept of a scroll bar much further than most applications. With Scroll Bar Map Mode, you can scroll and browse through a file at the same time without leaving your current location, or click anywhere on the bar to navigate there. Even with Map Mode off, the scroll bar highlights changes made in the code in green (for saved changes) and yellow (for unsaved changes). You can turn on Map Mode in Tools > Options > Text Editor > All Languages > Scroll Bars > Use map mode for vertical scroll bar or by searching for “map” with Quick Launch (Ctrl + Q).
Class View
There are several ways of visualizing your code. One example is Class View. You can open Class View from the View menu or by pressing Ctrl + Shift + C. Class View displays a searchable set of trees of all code symbols and their scope and parent/child hierarchies, organized on a per-project basis. You can configure what Class View displays from Class View Settings (click the gear box icon at the top of the window).
Generate Graph of Include Files
To understand dependency chains between files, right-click while in any open document and choose Generate graph of include files.
You also have the option to save the graph for later viewing.
View Call Hierarchy
You can right-click any function call to view a recursive list of its call hierarchy (both functions that call it, and functions that it calls). Each function in the list can be expanded in the same way. For more information, see Call Hierarchy.
Peek Definition
You can check out the definition of a variable or function at a glance, inline, by right-clicking it and choosing Peek Definition, or pressing Alt+F12 with the cursor over that symbol. This is a quick way to learn more about the symbol without having to leave your current position in the editor.
Navigating Around Your Codebase
Visual Studio provides a suite of tools to allow you to navigate around your codebase quickly and efficiently.
Open Document
Right-click on an #include directive in your code and choose Open Document, or press Ctrl+Shift+G with the cursor over that line, to open the corresponding document.
Toggle Header/Code File
You can switch between a header file and its corresponding source file or vice versa, by right-clicking anywhere in your file and choosing Toggle Header / Code File or by pressing its corresponding keyboard shortcut: Ctrl+K, Ctrl+O.
Solution Explorer
Solution Explorer is the primary means of managing and navigating between files in your solution. You can navigate to any file by clicking it in Solution Explorer. By default, files are grouped by the projects that they appear in. To change this default view, click the Solutions and Folders button at the top of the window to switch to a folder-based view.
Go To Definition/Declaration
You can navigate to the definition of a code symbol by right-clicking it in the editor and choosing Go To Definition, or pressing F12. You can navigate to a declaration similarly from the right-click context menu, or by pressing Ctrl+F12.
Find / Find in Files
You can run a text search for anything in your solution with Find (Ctrl+F) or Find in Files (Ctrl+Shift+F).
Find can be scoped to a selection, the current document, all open documents, the current project, or the entire solution, and supports regular expressions. It also highlights all matches automatically in the IDE.
Find in Files is a more sophisticated version of Find that displays a list of results in the Find Results window. It can be configured even further than Find, such as by allowing you to search external code dependencies, filter by file types, and more. You can organize Find results in two windows or append results from multiple searches together in the Find Results window. Individual entries in the Find Results window can also be deleted if they are not needed.
Find All References
Find All References displays a list of references to the chosen symbol. For more information on Find All References, check out our blog post, Find All References Re-designed for Larger Searches.
Navigation Bar
You can navigate to different symbols around your codebase by using the navbar that is above the editor window.
Go To
Go To (Ctrl + T) is a code navigation feature that can be used to navigate to files, code symbols or line numbers. For more information, take a look at Introducing Go To, the Successor to Navigate To.
Quick Launch
Quick Launch makes it easy to navigate to any window, tool, or setting in Visual Studio. Simply type Ctrl+Q or click on the search box in the top-right corner of the IDE and search for what you are looking for.
Authoring and refactoring code
Visual Studio provides a suite of tools to help you author, edit, and refactor your code.
Basic Editor Features
You can easily move lines of code up and down by selecting them, holding down Alt, and pressing the Up/Down arrow keys.
To save a file, press the Save button at the top of the IDE, or press Ctrl+S. Generally, though, it’s a good idea to save all your changed files at once by using Save All (Ctrl+Shift+S).
Change Tracking
Any time you make a change to a file, a yellow bar appears on the left to indicate that unsaved changes were made. When you save the file, the bar turns green.
The green and yellow bars are preserved as long as the document is open in the editor. They represent the changes that were made since you last opened the document.
IntelliSense
IntelliSense is a powerful code completion tool that suggests symbols and code snippets for you as you type. C++ IntelliSense in Visual Studio runs in real time, analyzing your codebase as you update it and providing contextual recommendations based on the characters of a symbol that you’ve typed. As you type more characters, the list of recommended results narrows down.
In addition, some symbols are omitted automatically to help you narrow down on what you need. For example, when accessing a class object’s members from outside the class, you will not be able to see private members by default, or protected members (if you are not in the context of a child class).
After you have picked out the symbol you want to add from the drop-down list, you can autocomplete it with Tab, Enter, or one of the other commit characters (by default: {}[]().,:;+-*/%&|^!=?@#\).
TIP: If you want to change the set of characters that can be used to complete IntelliSense suggestions, search for “IntelliSense” in Quick Launch (Ctrl + Q) and choose the Text Editor -> C/C++ -> Advanced option to open the IntelliSense advanced settings page. From there, edit Member List Commit Characters with the changes you want. If you find yourself accidentally committing results you didn’t want or want a new way to do so, this is your solution.
The IntelliSense section of the advanced settings page also provides many other useful customizations. The Member List Filter Mode option, for example, has a dramatic impact on the kinds of IntelliSense autocomplete suggestions you will see. By default, it is set to Fuzzy, which uses a sophisticated algorithm to find patterns in the characters that you typed and match them to potential code symbols. For example, if you have a symbol called MyAwesomeClass, you can type “MAC” and find the class in your autocomplete suggestions, despite omitting many of the characters in the middle. The fuzzy algorithm sets a minimum threshold that code symbols must meet to show up in the list.
If you don’t like the fuzzy filtering mode, you can change it to Prefix, Smart, or None. While None won’t reduce the list at all, Smart filtering displays all symbols containing substrings that match what you typed. Prefix filtering on the other hand purely searches for strings that begin with what you typed. These settings give you many options to define your IntelliSense experience, and it’s worth trying them out to see what you prefer.
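As a rough illustration of the three modes, here is a Python sketch in which Prefix, Smart, and Fuzzy are reduced to simple string predicates (the real fuzzy matcher scores and ranks candidates against a threshold rather than just accepting them):

```python
# Simplified sketches of the IntelliSense filter modes described above.
# Fuzzy is reduced to a case-insensitive in-order subsequence test, which
# is enough to show why typing "MAC" can surface MyAwesomeClass.

def prefix_match(query, symbol):
    # Prefix mode: the symbol must begin with what you typed.
    return symbol.lower().startswith(query.lower())

def smart_match(query, symbol):
    # Smart mode (simplified): the typed text appears as a substring.
    return query.lower() in symbol.lower()

def fuzzy_match(query, symbol):
    # Fuzzy mode (simplified): each typed character appears in order,
    # with any characters allowed in between.
    it = iter(symbol.lower())
    return all(ch in it for ch in query.lower())

symbols = ["MyAwesomeClass", "MacroTable", "ClassMap"]
print([s for s in symbols if fuzzy_match("MAC", s)])
# ['MyAwesomeClass', 'MacroTable']
```

Note how "MAC" fuzzy-matches MyAwesomeClass even though prefix and substring matching would both miss it.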
IntelliSense doesn’t just suggest individual symbols. Some IntelliSense suggestions come in the form of code snippets, which provide a basic example of a code construct. Snippets are easily identified by the square box icon beside them. In the following screenshot, “while” is a code snippet that automatically creates a basic while loop when it is committed. You can choose to toggle the appearance of snippets in the advanced settings page.
Visual Studio 2017 provides two new IntelliSense features to help you narrow down the total number of autocomplete recommendations: Predictive IntelliSense, and IntelliSense filters. Check out our blog post, C++ IntelliSense Improvements – Predictive IntelliSense & Filtering, to learn more about how these two features can improve your productivity.
If you ever find yourself in a situation where the list of results suggested by IntelliSense doesn’t match what you’re looking for, and you have already typed some valid characters, you can unfilter the list by clicking the Show more results button in the bottom-left corner of the drop-down list (which looks like a plus sign, +) or by pressing Ctrl + J. This refreshes the suggestions and adds some new entries. If you’re using Predictive IntelliSense, an optional mode that uses a stricter filtering mechanism than usual, you may find the list-expansion feature even more useful.
Quick Fixes
Visual Studio sometimes suggests ways to improve or complete your code, in the form of lightbulb pop-ups called Quick Fixes. For example, if you declare a class in a header file, Visual Studio will offer to create a definition for it in a separate .cpp file.
Refactoring Features
Do you have a codebase that you’re not happy with? Have you found yourself needing to make sweeping changes but are afraid of breaking your build or feel like it will take too long? This is where the C++ refactoring features in Visual Studio come in. We provide a suite of tools to help you make code changes. Currently, Visual Studio supports the following refactoring operations for C++:
- Rename
- Extract Function
- Change Function Signature
- Create Declaration/Definition
- Move Function Definition
- Implement Pure Virtuals
- Convert to Raw String Literal
Many of these features are called out in our announcement blog post, All about C++ Refactoring in Visual Studio. Change Function Signature was added afterward, but functions exactly as you’d expect – it allows you to change the signature of a function and replicate changes throughout your codebase. You can access the various refactoring operations by right-clicking somewhere in your code or using the Edit menu. It’s also worth remembering Ctrl + R, Ctrl + R to perform symbol renames; it’s easily the most common refactoring operation.
In addition, check out the C++ Quick Fixes extension, which adds a host of other tools to help you change your code more efficiently.
For additional information, check our documentation on Writing and refactoring code in C++.
Code Style Enforcement with EditorConfig
Visual Studio 2017 comes with built-in support for EditorConfig, a popular code style enforcement mechanism. You can create .editorconfig files and place them in different folders of your codebase, applying code styles to those folders and all subfolders below them. An .editorconfig file supersedes any .editorconfig files in parent folders and overrides any formatting settings configured via Tools > Options. You can set rules around tabs vs. spaces, indent size, and more. EditorConfig is particularly useful when you are working on a project as part of a team: for example, it keeps a developer from checking in code indented with tabs when the team convention is spaces. EditorConfig files can easily be checked in as part of your code repo to enforce your team’s style.
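As a sketch, a minimal .editorconfig that enforces spaces over tabs for C++ files might look like the following (the exact set of properties you need will vary by team):

```ini
# Top-most EditorConfig file; stops the search in parent folders.
root = true

# Apply these rules to C++ sources and headers.
[*.{cpp,h}]
indent_style = space
indent_size = 4
```

Placing this at the repo root applies the rules to the whole codebase; a second .editorconfig in a subfolder would override it for that subtree.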
Learn more about EditorConfig support in Visual Studio
Keyboard Shortcut Reference
For a full set of default key bindings for Visual Studio C++ developers, take a look at our Visual Studio 2017 Keyboard Shortcut Reference.
Conclusion
Lastly, you can find additional resources on how to use Visual Studio in our official documentation pages at docs.microsoft.com. In particular, for developer productivity, we have the following set of articles available:
- Writing Code in the Code and Text Editor – goes over more features in this area.
- Writing and refactoring code (C++) – provides some C++ productivity tips.
- Finding and Using Visual Studio Extensions – many community contributors submit both free and paid extensions that can improve your development experience.
Released: Public Preview for SQL Server vNext Replication Management Pack (6.7.40.0)
We are happy to announce that public preview for SQL Server vNext Replication Management Pack is ready. Please install and use this public preview and send us your feedback (sqlmpsfeedback@microsoft.com).
You can download the public preview at: https://www.microsoft.com/download/details.aspx?id=55098
This management pack was built from the ground up in accordance with best practices for SQL Server vNext. The monitoring provided by the management pack includes performance, availability, and configuration monitoring, as well as performance and events data collection. All monitoring workflows have predefined thresholds and complementary knowledge base articles. You can integrate the monitoring of SQL Server vNext Replication components into your service-oriented monitoring scenarios. In addition to health monitoring capabilities, this management pack includes dashboards, diagram views, state views, performance views, and alert views that enable near real-time diagnostics and remediation of detected issues. The management pack automatically selects the monitoring type used by the management pack for SQL Server vNext to monitor the appropriate SQL Server instance. Replication objects discovered and monitored by the management pack are as follows:
- Distributor
- Publisher
- Subscriber
- Publication
- Subscription
Feature Summary
The following list gives an overview of the features introduced by the Microsoft System Center Operations Manager Management Pack for SQL Server vNext Replication. Please refer to the Microsoft SQL Server vNext Replication Management Pack Guide for more details. Full functionality will be available with SQL Server vNext GA; this CTP release only covers a subset of monitors and rules. We will work toward full functionality as we release new CTPs.
- Agentless monitoring is now available along with traditional agent monitoring. Agentless monitoring target is defined by SQL Server vNext Monitoring Pool.
- Usage of scripts is discontinued in favor of .NET Framework modules.
- SQL Server Dynamic Management Views and Functions are now used for getting information on health and performance. Previously some of these monitors were using WMI and other system data sources.
We are looking forward to hearing your feedback.
Bing refines its copyright removals process
Enhanced visibility
This new feature provides webmasters with more visibility into how DMCA takedowns impact their sites and gives them the opportunity to either address the infringement allegation or remove the offending material. All requests will be evaluated in a new appeals process.
More information
For more information on Bing’s copyright infringement policies and how Bing delivers search results, visit Bing’s copyright infringement policies. Bing also provides full transparency of takedown requests in a bi-annual Content Removal Requests Report with associated FAQs. Access the latest version here: Bing Content Removal Requests Report.
-The Bing Webmaster Tools Team
#AzureAD Mailbag: Azure AD App Proxy, Round 2
Hey everyone, Ian Parramore here. Long time no post for us on these mailbags. You might be wondering what happened and why we didn’t have a post for almost two months. I can tell you who is to blame: Mark. Now that we’ve got that out of the way, today we’re going to dive in a little bit on some of the most common questions we’ve seen around the Azure AD Application Proxy. For those of you not familiar with this awesome feature, Application Proxy provides single sign-on (SSO) and secure remote access for web applications hosted on-premises. These on-premises web applications can now be integrated with Azure AD, allowing your end users to access your on-premises applications the same way they access O365 and other SaaS apps integrated with Azure AD. You don’t even need to change the network infrastructure or require a VPN to provide this solution for your users. To learn more about Application Proxy and how to get started, see our documentation. Now let’s dig into some of your questions.
Question 1:
I’m trying to set up Kerberos constrained delegation as discussed in this article but am struggling to understand the PrincipalsAllowedToDelegateToAccount method. Do you have some more insights you can share on this?
Answer 1:
PrincipalsAllowedToDelegateToAccount is specifically used where the Connector servers are in a different domain from the web application service account, which requires the use of Resource-based Constrained Delegation.
If the Connector servers and the web application service account are in the same domain, you can use Active Directory Users and Computers to configure the delegation settings on each of the Connector machine accounts, allowing them to delegate to the target SPN.
If the Connector servers and the web application service account are in different domains, we need to use Resource-based delegation, where the delegation permissions are configured on the target web server / web application service account.
This is a relatively new method of Constrained Delegation, introduced in Windows Server 2012, which supports cross-domain delegation by allowing the resource (web service) owner to control which machine and service accounts are allowed to delegate to it. There is no UI to assist with this configuration, so we need to use PowerShell.
Each Azure AD Application Proxy Connector machine account needs to be granted permissions to delegate to the web application service account.
When validating your configuration, you can check the PrincipalsAllowedToDelegateToAccount setting using the following PowerShell:
Get-ADUser -Identity sharepointserviceaccount -Properties "PrincipalsAllowedToDelegateToAccount"
The following output shows 2 machine accounts with permissions to delegate to the sharepointserviceaccount corresponding to our 2 Connector servers:
If one or more of your Connector servers do not have permissions to delegate to the target web application service account then you will see errors similar to the following:
In the article you’ll see the following sample PowerShell commands:
$connector= Get-ADComputer -Identity connectormachineaccount -server dc.connectordomain.com
Set-ADUser -Identity sharepointserviceaccount -PrincipalsAllowedToDelegateToAccount $connector
This is fine, but it grants only one Connector delegation rights to the sharepointserviceaccount.
If you only specify one of two Azure AD App Proxy connectors, access to the app will only succeed when traffic routes through that connector.
Where you have more than one Connector the first command would ideally look something like this:
$connectors = Get-ADComputer -Filter {Name -like "*appproxyname*"} -Server dc.connectordomain.com
This command assumes the connectors have a similar name and that the wildcards will return more than one computer account. For example, in my environment I have two connectors, MSFTPM-AAP1 and MSFTPM-AAP2. So I would run:
$connectors = Get-ADComputer -Filter {Name -like "*aap*"} -Server dc.connectordomain.com
This returns both servers and sets them in the $connectors variable. I can then run the second command to set the attribute appropriately on my resource server:
Set-ADUser -Identity sharepointserviceaccount -PrincipalsAllowedToDelegateToAccount $connectors
We can then use the following PowerShell to re-validate the setting:
Get-ADUser -Identity sharepointserviceaccount -Properties "PrincipalsAllowedToDelegateToAccount"
Note that the above examples use Set-ADUser/Get-ADUser when getting and setting the PrincipalsAllowedToDelegateToAccount attribute. This is because the web application is running under a service account.
If the web application were running under a machine context, we would need to use Set-ADComputer/Get-ADComputer instead. This may be relevant in a test environment with only a single web server, but in a load-balanced web server deployment we would expect the services to be running under a common service account.
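For example, if the web application ran under the machine account of a web server (the server name WEBSRV01 below is hypothetical), the equivalent commands would be:

```powershell
# Delegation is granted on the computer object instead of a service account.
# WEBSRV01 is a placeholder; $connectors is populated as shown above.
Set-ADComputer -Identity WEBSRV01 -PrincipalsAllowedToDelegateToAccount $connectors

# Re-validate the setting on the computer object:
Get-ADComputer -Identity WEBSRV01 -Properties PrincipalsAllowedToDelegateToAccount
```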
When populating the $connectors variable we always use Get-ADComputer, as we are specifically interested in the Connector machine accounts.
For further information about Kerberos Constrained Delegation and Resource-based Constrained Delegation, please see the following whitepaper: http://aka.ms/kcdpaper
Question 2:
Should I create a dedicated account to register the connector with the Azure AD Application Proxy?
Answer 2:
There’s no reason to. Any global admin account will work fine. The credentials entered during installation are not used after the registration process. Instead, a certificate is issued to the connector which will be used for authentication from that point forward. You can see this certificate in the personal store of the computer account:
Question 3:
How can I monitor the performance of the Azure AD Application Proxy connector?
Answer 3:
There are Performance Monitor counters that are installed along with the connector. To view them do the following:
1. Start -> Type “Perfmon” -> Enter
2. Select Performance Monitor and click the green “+” icon:
3. Select and add the Microsoft AAD App Proxy Connector counters:
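If you prefer PowerShell to the Perfmon UI, you can enumerate the same counters with Get-Counter. The counter-set name below is an assumption based on the display name above; verify the exact name on your connector server first.

```powershell
# List the available App Proxy connector counter sets and their counters.
# The set name may differ; confirm it with: Get-Counter -ListSet *
Get-Counter -ListSet "Microsoft AAD App Proxy Connector*" |
    Select-Object -ExpandProperty Counter
```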
Question 4:
Can only IIS-based apps be published? What about web apps running on non-Windows web servers? Does the connector have to be installed on a server with IIS installed?
Answer 4:
Woah, this is a 3 for 1!
No, there is no IIS requirement for apps that are published.
Yes, you can publish web apps running on servers other than Windows Server. That said, you may or may not be able to use pre-authentication with a non-Windows server, depending on whether the web server supports Negotiate (Kerberos authentication).
The server the connector is installed on does not have to have IIS installed.
Question 5:
Does the Azure AD App Proxy connector have to be on the same subnet as the resource?
Answer 5:
There is no requirement for the connector to be on the same subnet. It does however need name resolution to the resource as well as the necessary network connectivity (routing to the resource, ports open on the resource, etc.). If you want a more detailed discussion on connector location, please see our blog.
Question 6:
I’ve published the App Proxy application, and I’m able to log in, but the application is not displaying as expected. Why isn’t it working?
Answer 6: If you’re able to log in but the application isn’t displaying properly, there are two common possible causes.
First, verify that all the pages referenced by the application are in the path you published. For example, we see many cases where the published path is contoso/myapp/register/, but the web page references resources under different paths, e.g. contoso/myapp/style.css. Because the path containing the style sheet has not been published, the application is unable to find it when loading.
One way to check if this may be the problem is to look at a Fiddler trace or use the Network tab in the F12 Developer tools in Internet Explorer or Edge browsers to get an overview of the request/response pairs and associated HTTP status codes as you load a web page. You can use the output to identify if you are getting any 404 errors, and if so, whether the resources with the 404 errors are in the published path.
In the above example, publishing contoso/myapp/ instead of contoso/myapp/register/ would solve the problem.
Also, make sure to check whether your application uses hard-coded internal links to other applications, to unpublished sites, or for its own internal namespace.
This can be problematic where the internal and external FQDNs in use are different and the web server generates links based on its internal name. Our general recommendation is to use the same internal and external FQDN and protocol (https is preferred, though http is allowed) where possible, to reduce the chance of any problems.
For sites that contain links to other internal sites or applications, you would need to identify these and then ensure the relevant applications and sites are also published and available externally through Application Proxy. If these links are fully qualified, please use the custom domains feature to make sure they will work. If not, look for an upcoming announcement in the coming months on some new Application Proxy capabilities in this area! Please check the Enterprise Mobility and Security blog for announcements.
As with the first cause, you can use a tool such as Fiddler, or the Network tab in the F12 Developer tools, to review the traffic and identify request failures with a 404 status.
Thanks for reading.
For any questions you can reach us at
AskAzureADBlog@microsoft.com, the Microsoft Forums and on Twitter @AzureAD, @MarkMorow and @Alex_A_Simons
-Ian Parramore, Harshini Jayaram, and Mark Morowczynski
The future of work on Modern Workplace
Register to watch the latest Modern Workplace episode, “The Future of Work: Build, attract, connect,” which aired April 11, 2017.
- Jacob Morgan—speaker, futurist and author—presents five factors affecting the future of work. He describes how, as a futurist, he helps people not be surprised by what the future will bring.
- Angela Oguntala—futurist, designer and director at Greyspace—adds her perspective on how companies need to think about the future differently. Too often people confidently state, “This is what the future will be,” and organizations listen to them. But instead of trying to predict the future, organizations should consider different options and how their core processes could be affected.
Also, see a demo of the new Microsoft Teams enhancements, available to all Office 365 business users, and get an exclusive tour of the Microsoft Envisioning Center, where you’ll see how Microsoft is planning for the future of productivity.
Watch the Modern Workplace episode to learn more.
The post The future of work on Modern Workplace appeared first on Office Blogs.
Windows Developer Awards: Honoring Windows Devs at Microsoft Build 2017
As we ramp up for Build, the Windows Dev team would like to thank you, the developer community, for all the amazing work you have done over the past 12 months. Because of your efforts and feedback, we’ve managed to add countless new features to the Universal Windows Platform and the Windows Store in an ongoing effort to constantly improve. And thanks to your input on the Windows Developer Platform Backlog, you have helped us prioritize new UWP features.
In recognition of all you have done, this year’s Build conference in Seattle will feature the first-ever Windows Developers Awards given to community developers who have built exciting UWP apps in the last year and published them in the Windows Store. The awards are being given out in four main categories:
- App Creator of the Year – This award recognizes an app leveraging the latest Windows 10 capabilities. Some developers are pioneers, the first to explore and integrate the latest features in Windows 10 releases. This award honors those who made use of features like Ink, Dial, Cortana, and other features in creative ways.
- Game Creator of the Year – This award recognizes a game by a first-time publisher in the Windows Store. Windows is the best gaming platform, and it’s easy to see why. From Xbox to PCs to mixed reality, developers are creating the next generation of gaming experiences. This award recognizes developers who went above and beyond to publish innovative, engaging and magical games to the Windows Store over the last year.
- Reality Mixer of the Year – This award recognizes the app demonstrating a unique mixed reality experience. Windows Mixed Reality lets developers create experiences that transcend the traditional view of reality. This award celebrates those who choose to mix their own view of the world by blending digital and real-world content in creative ways.
- Core Maker of the Year – This award recognizes a maker project powered by Windows. Some devs talk about the cool stuff they could build; others just do it. This award applauds those who go beyond the traditional software interface to integrate Windows into drones, Pis, gardens, and robots to get stuff done.
In addition to these, a Ninja Cat of the Year award will be given as special recognition. Selected by the Windows team at Microsoft, this award celebrates the developer or experience that we believe most reflects what Windows is all about, empowering people of action to do great things.
Here’s what we want from you: help us by voting for the winners of these four awards on the awards site. Take a look and tell us who you think has created the most compelling apps. Once you’ve voted, check back anytime to see how your favorites are doing. Voting ends on 4/27, so get your Ninja votes in quickly.
The post Windows Developer Awards: Honoring Windows Devs at Microsoft Build 2017 appeared first on Building Apps for Windows.
Introducing a new experience for Gmail accounts in Windows 10 Mail & Calendar apps
Over the past year we’ve introduced many new features in Windows 10 Mail & Calendar apps for users with Outlook.com accounts—such as easily tracking travel and shipping deliveries, making emails more actionable, helping you easily track your favorite sports events, faster search, and more. We’re now excited to bring these features to our users with Gmail accounts, so you can enjoy the best of what Windows 10 Mail & Calendar have to offer.
We’ll roll out the improved experience gradually to users with Gmail accounts in the Windows Insiders program over the next several weeks. We’d love for you to provide feedback on the experience in this phase. After we’ve incorporated your feedback, we’ll proceed to roll out the updates to all Windows 10 users.
Bringing new features to your Gmail account
Mail & Calendar apps have long supported connecting to and managing your Gmail account. But up until now, some capabilities were only available to those with an Outlook.com or Office 365 email address. With these updates, our latest features will be available for your Gmail account, including Focused Inbox and richer experiences for travel reservations and package deliveries.
To power these new features, we’ll ask your permission to sync a copy of your email, calendar and contacts to the Microsoft Cloud. This allows the new features to light up, and changes to sync back and forth with Gmail, such as the creation, editing or deletion of emails, calendar events and contacts. But your experience in Gmail.com or apps from Google will not change in any way.
How to get started
The new experience will first be made available to Mail & Calendar users who are part of the Windows Insider program – though not all Insiders will see the new experience on their Gmail account right away. We will expand the rollout gradually over the next few weeks. You’ll know the new experience is available for your account when you are prompted to update your Gmail account settings. If you miss the first prompt, we will remind you again in a few weeks.
We’re excited to bring an improved Mail & Calendar experience to your Gmail account, and welcome your feedback as we fine-tune the new experience in the coming weeks. You can provide feedback on this and other Mail & Calendar features at any time by going to Settings > Feedback in the app.
— Windows 10 Mail & Calendar team
The post Introducing a new experience for Gmail accounts in Windows 10 Mail & Calendar apps appeared first on Windows Experience Blog.
Update 1704 for Configuration Manager Technical Preview Branch – Available Now!
Hello everyone! We are happy to let you know that update 1704 for the Technical Preview Branch of System Center Configuration Manager has been released. Technical Preview Branch releases give you an opportunity to try out new Configuration Manager features in a test environment before they are made generally available. This month’s new preview features include:
- Secure Boot inventory data – Hardware inventory can now determine whether the device has Secure Boot enabled (enabled by default).
- Run Task Sequence step – This is a new step in the task sequence to run another task sequence, which creates a parent-child relationship between two task sequences.
- Reload boot images with latest Windows PE version – During the “Update Distribution Points” wizard on a boot image, you can now reload the version of Windows PE in the selected boot image.
This release also includes the following improvements for customers using System Center Configuration Manager connected with Microsoft Intune to manage mobile devices:
- Android app configuration support – Administrators can create an app configuration policy for Android applications deployed with Google Play.
Update 1704 for Technical Preview Branch is available in the Configuration Manager console. For new installations please use the 1703 baseline version of Configuration Manager Technical Preview Branch available on TechNet Evaluation Center.
We would love to hear your thoughts about the latest Technical Preview! To provide feedback or report any issues with the functionality included in this Technical Preview, please use Connect. If there’s a new feature or enhancement you want us to consider for future updates, please use the Configuration Manager UserVoice site.
Thanks,
The System Center Configuration Manager team
Configuration Manager Resources:
Documentation for System Center Configuration Manager Technical Previews
Try the System Center Configuration Manager Technical Preview Branch
Documentation for System Center Configuration Manager
System Center Configuration Manager Forums