Channel: TechNet Technology News

DocumentDB: API for MongoDB now generally available


Today, we are excited to announce that DocumentDB: API for MongoDB is generally available. The API for MongoDB allows developers to experience the power of the DocumentDB database engine with the comfort of a managed service and the familiarity of the MongoDB SDKs and tools. Alongside general availability, we are introducing a suite of new features that improve the availability, scalability, and usability of the service.

What is API for MongoDB?

DocumentDB: API for MongoDB is a flavor of DocumentDB that enables MongoDB developers to use familiar SDKs, tool chains, and libraries to develop against DocumentDB. MongoDB developers can now enjoy the advantages of DocumentDB, which include auto-indexing, no server management, limitless scale, enterprise-grade availability backed by service level agreements (SLAs), and enterprise-grade customer support.

What’s new?

From preview to general availability, we have reached a few important milestones. We are proud to introduce a number of major feature releases:

  • Sharded Collections
  • Global Databases
  • Read-only Keys
  • Additional portal metrics

Sharded Collections – By specifying a shard key, API for MongoDB automatically distributes your data across multiple partitions to scale out both storage and throughput. Sharded collections are an excellent option for applications that ingest large volumes of data or require high-throughput, low-latency access to data. Sharded collections can be scaled in a matter of seconds in the Azure portal, and they can grow to a nearly limitless amount of both storage and throughput.
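
For example, a sharded collection can be created with the shardCollection command from the MongoDB shell; this is a sketch, and the database, collection, and shard key names below are placeholders:

db.runCommand({
    shardCollection: "mydb.orders",   // <database>.<collection> to shard (placeholder names)
    key: { customerId: "hashed" }     // the partition (shard) key
})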

Global Databases – API for MongoDB now allows you to replicate your data across multiple regions to deliver high availability. You can replicate your data across any of Azure's 30+ datacenters with just a few clicks from the Azure portal. Global databases are a great option for delivering low-latency requests across the world or in preparation for disaster recovery (DR) scenarios. Global databases support both manual and policy-driven failovers for full user control.

Read-only Keys – API for MongoDB now supports read-only keys, which allow only read operations against the API for MongoDB database.

Portal Metrics – To improve visibility into the database, we are proud to announce that we have added additional metrics to the Azure portal. For all API for MongoDB databases, we provide metrics on the number of requests, request charges, and failed requests. Supplementing the portal metrics, we have also added a custom command, GetLastRequestStatistics, which allows you to programmatically determine a command's request charge.
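
For example, from the MongoDB shell you can retrieve the request charge of the previous operation along these lines (a sketch; the exact shape of the response may vary):

db.runCommand({ getLastRequestStatistics: 1 })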

API for MongoDB metrics in the Azure portal

What’s next?

General availability is just the beginning of the features and improvements we have in store for DocumentDB: API for MongoDB. In the near future, we will be releasing support for unique indexes and a couple of major performance improvements. Stay tuned!

In addition to API for MongoDB's general availability, we are announcing a preview Spark connector. Visit our GitHub repo for more information.

We hope you take advantage of these new features and capabilities. Please continue to provide feedback on what you want to see next. Try out DocumentDB: API for MongoDB today by signing up for a free trial and creating an API for MongoDB account.

Stay up-to-date on the latest Azure DocumentDB news and features by following us on Twitter @DocumentDB.


Announcing new capabilities of HDInsight and DocumentDB at Strata


This week in San Jose, Microsoft will be at Strata + Hadoop World, where we will be announcing new capabilities of Azure HDInsight, our fully managed analytics platform for running open-source analytics workloads at scale with enterprise-grade security and SLAs, and of Azure DocumentDB, our planet-scale, fully managed NoSQL database service. Our vision is to deeply integrate both services and make it seamless for developers to process massive amounts of data with low latency and global scale.

DocumentDB announcements

DocumentDB is Microsoft's globally distributed database service designed to enable developers to build planet-scale applications. DocumentDB allows you to elastically scale both throughput and storage across any number of geographical regions. The service offers guaranteed single-digit millisecond latency at the 99th percentile, 99.99% high availability, predictable throughput, and multiple well-defined consistency models—all backed by comprehensive SLAs for latency, availability, throughput, and consistency. By virtue of its schema-agnostic and write-optimized database engine, DocumentDB, by default, automatically indexes all the data it ingests and serves SQL, MongoDB, and JavaScript language-integrated queries in a scale-independent manner. As one of the foundational services of Azure, DocumentDB has been used ubiquitously as a backend for first-party Microsoft services for many years. Since its general availability in 2015, DocumentDB has been one of the fastest-growing services on Azure.

Real-time data science with Apache Spark and DocumentDB

At Strata, we are pleased to announce the Spark connector for DocumentDB. It enables real-time data science and exploration over globally distributed data in DocumentDB. Connecting Apache Spark to Azure DocumentDB accelerates our customers' ability to solve fast-moving data science problems, where data can be quickly persisted and retrieved using DocumentDB. The Spark to DocumentDB connector efficiently exploits DocumentDB's native managed indexes and enables push-down predicate filtering, updateable columns during analytics, and advanced analytics and data science against fast-changing, globally distributed data, across IoT, data science, and analytics scenarios. The Spark to DocumentDB connector uses the Azure DocumentDB Java SDK. Get started today and download the Spark connector from GitHub!


General availability of high-fidelity, SLA-backed MongoDB APIs for DocumentDB

DocumentDB is architected to natively support multiple data models, wire protocols, and APIs. Today we are announcing the general availability of DocumentDB's API for MongoDB. With this, existing applications built on top of MongoDB can seamlessly target DocumentDB and continue to use their MongoDB client drivers and toolchain. This allows customers to easily move to DocumentDB while continuing to use the MongoDB APIs, and to gain comprehensive enterprise-grade SLAs, turnkey global distribution, security, compliance, and a fully managed service.


HDInsight announcements

Cloud-first with Hortonworks Data Platform 2.6

Microsoft's cloud-first strategy has already shown success with customers and analysts, with Microsoft recently named a Leader in the Forrester Big Data Hadoop Cloud Solutions Wave and a Leader in the Gartner Magic Quadrant for Data Management Solutions for Analytics. Operating a fully managed cloud service like HDInsight, backed by an enterprise-grade SLA, enables customers to deploy the latest bits of Hadoop and Spark on demand. To that end, we are excited that the latest Hortonworks Data Platform 2.6 will be available on HDInsight even before its on-premises release. Hortonworks' commitment to being cloud-first is especially significant given the growing importance of the cloud for Hadoop and Spark workloads.

"At Hortonworks we have seen more and more Hadoop related work loads and applications move to the cloud. Starting in HDP 2.6, we are adopting a “Cloud First” strategy in which our platform will be available on our cloud platforms – Azure HDInsight at the same time or even before it is available on traditional on-premises settings. With this in mind, we are very excited that Microsoft and Hortonworks will empower Azure HDInsight customers to be the first to benefit from our HDP 2.6 innovation in the near future."
- Arun Murthy, co-founder, Hortonworks

Most secured Hadoop in a managed cloud offering

Last year at Strata + Hadoop World Conference in New York, we announced the highest levels of security for authentication, authorization, auditing, and encryption natively available in HDInsight for Hadoop workloads. Now, we are expanding our security capabilities across other workloads including Interactive Hive (powered by LLAP) and Apache Spark. This allows customers to use Apache Ranger over these popular workloads to provide a central policy and management portal to author and maintain fine-grained access control. In addition, customers can now analyze detailed audit records in the familiar Apache Ranger user interface.

New fully managed, SLA-backed Apache Spark 2.1 offering

With the latest release of Apache Spark for Azure HDInsight, we are providing the only fully managed, 99.9% SLA-backed Spark 2.1 cluster in the market. Additionally, we are introducing capabilities to support real-time streaming solutions with Spark integration to Azure Event Hubs and leveraging the structured streaming connector in Kafka for HDInsight. This will allow customers to use Spark to analyze millions of real-time events ingested into these Azure services, thus enabling IoT and other real-time scenarios. We made this possible through DirectStreaming support, which improves the performance and reliability of Spark streaming jobs as it processes data from Event Hubs. The source code and binary distribution of this work is now available publicly on GitHub.

New data science experiences with Zeppelin and ISV partnerships

Our goal is to make big data accessible for everybody. We have designed productivity experiences for different audiences, including data engineers working on ETL jobs with Visual Studio, Eclipse, and IntelliJ support; data scientists performing experimentation with Microsoft R Server and Jupyter notebook support; and business analysts creating dashboards with Power BI, Tableau, SAP Lumira, and Qlik support. As part of HDInsight's support for the latest Hortonworks Data Platform 2.6, Zeppelin notebooks, a popular workspace for data scientists, will support both Spark 2.1 and Interactive Hive (LLAP). Additionally, we have added popular independent software vendors (ISVs) Dataiku and H2O.ai to our existing set of ISV applications available on the HDInsight platform. Through the unique design of HDInsight edge nodes, customers can spin up these data science solutions directly on HDInsight clusters, which are integrated and tuned out of the box, making it easier for customers to build intelligent applications.

Enabling Data Warehouse scenarios through Interactive Hive

Microsoft has been involved from the beginning in making Apache Hive run faster, with our contributions to Project Stinger and Tez that sped up Hive query performance by up to 100x. We announced support for Hive using LLAP (Live Long and Process) to speed up query performance by up to an additional 25x. With support for the newest version of Apache Hive 2.1.1, customers can expect sub-second query performance, enabling data warehouse scenarios over all enterprise data without the need for data movement. Interactive Hive clusters also support popular BI tools, which is useful for business analysts who want to run their favorite tools directly on top of Hadoop.

Announcing SQL Server CTP 1.4

Microsoft is excited to announce that Community Technology Preview (CTP) 1.4 of the next version of SQL Server is now available on both Windows and Linux. This preview offers enhancements to SQL Server v.Next on Linux. Another enhancement to SQL Server v.Next on Windows and Linux is support for resumable online index rebuilds, which extends flexibility in index maintenance scheduling and recovery. You can try the preview in your choice of development and test environments now. For additional detail on CTP 1.4, please visit What's New in SQL Server v.Next, the Release Notes, and the Linux documentation.
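
As an illustration, a resumable online index rebuild can be started, paused, and resumed with T-SQL along these lines (a sketch; the index and table names are placeholders):

-- Start an online index rebuild that can be paused and resumed
ALTER INDEX IX_Customer_Name ON dbo.Customer
    REBUILD WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 60 MINUTES);

-- Pause the rebuild and resume it later, for example in the next maintenance window
ALTER INDEX IX_Customer_Name ON dbo.Customer PAUSE;
ALTER INDEX IX_Customer_Name ON dbo.Customer RESUME;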

Earlier today, we also announced a new online event that will take place next month - Microsoft Data Amp. During the event, Scott Guthrie and Joseph Sirosh will share some exciting new announcements around investments we are making that put data front and center of application innovation and artificial intelligence. I encourage you to check out Mitra Azizirad’s blog post to learn more about Microsoft Data Amp and save the date for what’s going to be an amazing event.

This week the big data world is focused on Strata + Hadoop World in San Jose, a great event for the industry and community. We are committed to making the innovations in big data and NoSQL natively available, easily accessible, and highly productive as part of our Azure services.

Released: System Center Management Pack for SQL Server and Dashboards (6.7.20.0)


We are happy to announce that updates to SQL Server Management Packs have been released!

Downloads available:

Microsoft System Center Management Pack for SQL Server 2016

Microsoft System Center Management Pack for SQL Server 2014

Microsoft System Center Management Pack for SQL Server (2008-2012)

Microsoft System Center Management Pack for SQL Server Dashboards

A Note About Dashboard Performance

Sometimes you may run into a situation where dashboards open rather slowly and take quite a lot of time to become ready. The root of the issue lies in the large amount of data written to the Data Warehouse throughout the day. All of this data must be processed when you open any dashboard, which may cause the dashboards to freeze. The issue is most frequent if you open the dashboards after a period of inactivity. To mitigate this issue, it is recommended to enable the special “DW data early aggregation” rule. The rule distributes the Data Warehouse processing load throughout the day, which results in a quicker start for the dashboards.

By default, the rule has a 4-hour launch interval, which works for most environments. If the dashboards have not reached the desired performance level in your environment, decrease the interval. The more frequently the rule runs, the more responsive the dashboards become. However, do not decrease the interval below 15 minutes, and do not forget to override the rule's timeout value so that it always stays lower than the interval value.

Please see below for the new features and improvements. Most of them are based on your feedback. More detailed information can be found in guides that can be downloaded from the links above.

New SQL Server 2008-2012 MP Features and Fixes

  • Implemented some enhancements to data source scripts
  • Fixed issue: The SQL Server 2012 Database Files and Filegroups get undiscovered upon Database discovery script failure
  • Fixed issue: DatabaseReplicaAlwaysOnDiscovery.ps1 connects to a cluster instance using node name instead of client access name and crashes
  • Fixed issue: CPUUsagePercentDataSource.ps1 crashes with “Cannot process argument because the value of argument “obj” is null” error
  • Fixed issue: Description field of custom user policy cannot be discovered
  • Fixed issue: SPN Status monitor throws errors for servers not joined to the domain
  • Fixed issue: SQL Server policy discovery does not ignore policies targeted to system databases in some cases
  • Fixed issue: GetSQL20XXSPNState.vbs fails when domain controller is Read-Only
  • Fixed issue: SQL ADODB “IsServiceRunning” function always uses localhost instead of server name
  • Increased the length restriction for some policy properties in order to make them match the policy fields
  • Updated the Service Pack Compliance monitor to reflect the latest published Service Packs for SQL Server

New SQL Server 2014 and 2016 MP Features and Fixes

  • Implemented some enhancements to data source scripts
  • Fixed issue: DatabaseReplicaAlwaysOnDiscovery.ps1 connects to a cluster instance using node name instead of client access name and crashes
  • Fixed issue: CPUUsagePercentDataSource.ps1 crashes with “Cannot process argument because the value of argument “obj” is null” error
  • Fixed issue: Description field of custom user policy cannot be discovered
  • Fixed issue: SPN Status monitor throws errors for servers not joined to the domain
  • Fixed issue: SQL Server policy discovery does not ignore policies targeted to system databases in some cases
  • Fixed issue: Garbage Collection monitor gets generic PropertyBag instead of performance PropertyBag
  • Fixed issue: GetSQL20XXSPNState.vbs fails when domain controller is Read-Only
  • Fixed issue: SQL ADODB “IsServiceRunning” function always uses localhost instead of server name
  • Increased the length restriction for some policy properties in order to make them match the policy fields
  • Updated the Service Pack Compliance monitor to reflect the latest published Service Packs for SQL Server

SQL Server Dashboards MP

  • No changes since 6.7.15.0. The version number has been bumped to 6.7.20.0 to match the current version of SQL Server MPs.

We are looking forward to hearing your feedback.

 

Microsoft Teams featured on Good Morning America—watch now


Good Morning America’s “Boosting Your Business” segment, sponsored by Microsoft, provides entrepreneurs and small businesses with simple advice and tools to help them grow.

On March 15, Good Morning America brought in Maxie McCoy, a career expert, to give tips and tricks to help businesses across the country be more productive and collaborative. To demonstrate some of these tips, Maxie visited WeWork, the hugely successful shared-office-space startup, and talked about how the WeWork Creator Awards team can work together in a new way using Microsoft Teams, a new chat-based workspace in Office 365.

Maxie gave the Creator Awards team advice on aligning their vision, delegating responsibility and communicating clearly within team workspaces. She showed them how Microsoft Teams creates a secure hub for teamwork, helping them communicate and collaborate more effectively.

Unique vision and unquestionable talent have made the WeWork team into what it is today. Microsoft Teams gives them a new way to work together and continue to grow.

Watch the segment now:

Check out Microsoft Teams to see how your team can be more productive and collaborative as well.
Download and read “The Ultimate Guide to Chat-Based Tools.”

The post Microsoft Teams featured on Good Morning America—watch now appeared first on Office Blogs.

Over 300,000 Square Kilometers of Imagery Released in Italy and Switzerland


We are excited to announce the release of new imagery in Italy and Switzerland. This latest imagery release includes 297,000 square kilometers in Italy and 38,000 square kilometers in Switzerland.

Below are examples of the beautiful imagery of Italy and Switzerland now available:

Italy

Palmanova, located in northeastern Italy, is a concentric city in the shape of a star. It is an example of a star fort from the Renaissance period and was built with both military and societal considerations incorporated into its plans.

Palmanova, Italy

Switzerland

The city of Bern, referred to as the federal city, is the fourth most populous city in Switzerland. Its historic Old Town was designated a UNESCO World Heritage Site in 1983.

Bern, Switzerland

Explore even more points of interest in Italy and Switzerland on Bing Maps.

To learn more about how you can incorporate maps in your apps, visit www.microsoft.com/maps.  

- Bing Maps Team

Test drive our hottest new collaboration tool—Microsoft Teams


Today’s fast-paced workplace requires you to transition between tasks seamlessly and find things quickly. As work and collaboration evolve to become more web-based and complex, changing professional styles call for tools that provide agility across the full span of a workday.

Instead of wasting time searching for your most-used features and content, you need technology that allows you to transition from email to collaboration to project work and back again with ease.

Get the chance to test our latest technology on your own work

Microsoft offers hands-on live sessions where you’ll have the opportunity to test drive Windows 10, Office 365 and our hottest new collaboration tool: Microsoft Teams. During these small-group sessions, you’ll have the opportunity to apply these tools to your own business scenarios and see how they work for you.

Each 90-minute session starts with an online business roundtable discussing your biggest business challenges with a trained facilitator and then transitions into a live environment in the cloud. You will receive a link to connect your own device to a remote desktop loaded with our latest technology, so you can experience first-hand how Microsoft tools can solve your challenges.

Learn skills that will simplify your workflow immediately

During this interactive online session, you will:

  • Explore how Microsoft Teams helps you collaborate with your co-workers in different locations and time zones.
  • Discover how you can keep your information more secure without inhibiting your workflow.
  • Learn how to visualize and analyze complex data, quickly zeroing in on the insights you need.
  • See how multiple team members can access, edit and review documents simultaneously.
  • Gain skills that will save you time and simplify your workflow immediately.

Register for a free interactive session and experience for yourself how the latest Microsoft technology can help you be more productive.

Each session is limited to 12 participants.

Reserve your seat:

U.S. customers: Register here
Outside the U.S.: Register here

The post Test drive our hottest new collaboration tool—Microsoft Teams appeared first on Office Blogs.

What makes a Data Source a Data Source?


It should be obvious, and it is — at least at the Tabular 1200 compatibility level: A data source definition in a Tabular model holds the connection information for Analysis Services to connect to a source of data such as a database, an OData feed, or a file. That’s straightforward. However, at the Tabular 1400 compatibility level, this is no longer so trivial. At the Tabular 1400 compatibility level, a data source definition can include a native query and even a contextual M expression on top of the connection information, which opens interesting capabilities that didn’t exist previously and redefines to some degree the nature of a data source definition.

Let’s take a closer look at a data source definition in a Tabular 1400 model, such as the following definition for a SQL Server-based data source:
Data Source with default contextExpression
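
As a rough sketch, such a structured data source definition in TMSL might look like the following; the server and database names are placeholders, and the exact property layout may differ:

{
  "type": "structured",
  "name": "SQL/myserver;AdventureWorksDW",
  "connectionDetails": {
    "protocol": "tds",
    "address": {
      "server": "myserver",
      "database": "AdventureWorksDW"
    },
    "query": null
  },
  "contextExpression": "...",
  "credential": {
    "AuthenticationKind": "Windows"
  }
}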
The two important properties are the query parameter in the connectionDetails, which can hold a native source query, and the contextExpression parameter, which can take an M expression. The default “…” simply stands for an expression that takes the data source definition as is without wrapping it into a further M context. You can find a more elaborate example at the end of this article. For now, just note that you won’t see the contextExpression in your data source definitions yet. A forthcoming release of SSAS and SSDT Tabular will enable this feature.

The query parameter, on the other hand, already exists in the metadata. It’s just that SSDT Tabular does not let you enter a source query through the user interface (UI) when defining a data source. This is intentional to maintain the familiar separation of connection information on data sources and source queries on table partitions. Equally, there are currently no plans to expose a contextExpression designer in the UI.

The following screenshot shows the Power BI Desktop UI in the background for a SQL Server data source with a textbox to enter a SQL query in comparison to SSDT Tabular in the foreground, which doesn’t offer this textbox.

Power BI Desktop UI vs SSDT UI

For most data modelling scenarios, a clear separation of connection information and source queries is advantageous. After all, multiple tables and partitions can refer to a single data source definition in SSDT. It doesn’t seem very useful to restrict a data source to a single result set by means of a source query, such as “SELECT * FROM dimCustomer”, defined through the data source’s query parameter. Instead, it would be more useful to specify the query when importing a table by using the Value.NativeQuery function, as the following screenshot illustrates.

Using Value.NativeQuery to specify a native source query for a table.
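
The M expression behind such a table partition might look like this sketch (the data source reference, table name, and query text are illustrative):

let
    Source = #"SQL/myserver;AdventureWorksDW",
    dimCustomer = Value.NativeQuery(Source, "SELECT * FROM dimCustomer")
in
    dimCustomer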

This way, the data source remains available for importing further tables from the same source. On the other hand, if you do need a data source with a very narrow scope, you can set the query parameter manually by using the Tabular Model Scripting Language (TMSL).

If it’s clearly not recommended to use the query parameter in a data source definition, then why did we come up with yet another such parameter called contextExpression? Well, this brings us back to the starting point: What makes a Data Source a Data Source?

A data source definition can range from broad to narrow in scope.

A data source can be defined along a varying degree of detail, as shown above. On one extreme, you could define a data source that is so narrow it returns a single character, such as by using the following source query: “SELECT TOP 1 Left(firstName, 1) FROM dimCustomer”. Not very useful, but still a source of data. On the other extreme, a data source could be so broad that the tables you import on top of it require redundant statements that could be avoided with a more precise data source definition. For example, by using Tabular Object Model (TOM) or TMSL, you could define a SQL Server data source that only specifies the server name but no database name. Any tables importing from this data source would now require an M expression that includes a line to navigate to the desired database first before importing a source table, such as “AdventureWorksDW = Source{[Name=”AdventureWorksDW”]}[Data]”. Perhaps even more extreme, some data sources can be defined so broadly that they don’t even include information about the data source type. For example, any file-based data source can be considered of type File, while in fact a better definition would be a Microsoft Access database, Microsoft Excel workbook, comma-separated values file, and so forth. This is where the contextExpression comes in. It adds context information to narrow down a very broad data source definition to make it more meaningful.

The following abbreviated data source definition for an Access database shows the contextExpression in action. The connectionDetails merely define a File data source, which is too broad. What we want to define is an Access data source, so the contextExpression takes the File data source and wraps it into an Access.Database() function. As mentioned earlier, the placeholder expression “…” stands for the data source definition without the additional context.

contextExpression for Access.Database
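
Conceptually, the context expression is just a small M snippet that wraps the underlying definition, along the lines of this sketch, where "..." stands for the File data source described above:

let
    AccessDatabase = Access.Database(...)
in
    AccessDatabase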

By using a context expression, SSDT Tabular can define data sources that build on other data sources. Through TOM or TMSL, you can also edit the context expression to build more sophisticated definitions, yet this is generally not recommended. Also, unfortunately, TOM and TMSL do not provide an API for editing an M expression. This may come at some point in the future, but for now it’s not a priority.

And this is it for a quick glance at the upcoming contextExpression feature. As always, please send us your feedback and suggestions by using ProBIToolsFeedback or SSASPrev at Microsoft.com. Or use any other available communication channels such as UserVoice or MSDN forums. You can influence the evolution of the Analysis Services connectivity stack to the benefit of all our customers.

Introducing Microsoft Data Amp


This post was authored by Mitra Azizirad, Corporate Vice President, Cloud Application Development & Data Marketing, Microsoft

Today, I am excited to announce that on April 19, we will host a new online event, Microsoft Data Amp.

Microsoft Data Amp is inspired by you, our customers and partners, who every day are transforming applications and industries by using data in innovative ways to predict, take action, and create new business opportunities. We continue to accelerate our pace of innovation to enable you to meet the demands of a dynamic marketplace and harness the incredible power of data, more securely and faster than before.

Next month at Microsoft Data Amp, Executive Vice President Scott Guthrie and Corporate Vice President Joseph Sirosh will share how Microsoft’s latest innovations put data, analytics and artificial intelligence at the heart of business transformation. The event will include exciting announcements that will help you derive even more value from the cloud, enable transformative application development, and ensure you can capitalize on intelligence from any data, any size, anywhere, across Linux and other open source technologies.

Customers and partners, in industries from healthcare to retail, will illustrate how they are innovating, evolving and reshaping their businesses by infusing data into the heart of their solutions and applications. Microsoft Data Amp will also feature demos and deep dives on new scenarios enabled by a broad array of new data and analytics technologies, from SQL Server to Azure Machine Learning.

I encourage you to save the date, and I look forward to you joining us for Microsoft Data Amp on April 19.


Mitra Azizirad, Corporate Vice President, Cloud Application Development & Data Marketing, Microsoft

With an expansive technical, business and marketing background, Azizirad has led multiple and varied businesses across Microsoft for over two decades. She leads product marketing for Microsoft’s developer, data and artificial intelligence offerings spanning Visual Studio, SQL Server, Cortana Intelligence Services, .NET, Xamarin and associated Azure data, cognitive and developer services.


Survey: File Server Usage


Hi folks,

To prioritize and plan for investments in vNext experiences for Windows Server, we could use input from you! We would like to understand more about how you are utilizing Windows File Server, especially as it relates to the size of your datasets. This survey should take approximately 2-5 minutes to complete. We appreciate your feedback!

Click here to take our survey!

Thanks,

The Windows Server Storage Team

Microsoft Data Amp – New Online Event, on April 19th


Re-posted from the Microsoft SQL Server blog.

We are excited to announce a new online event that will take place next month, Microsoft Data Amp. Scott Guthrie, Executive Vice President for Cloud + Enterprise, and Joseph Sirosh, Corporate Vice President, Data Group, will share exciting new announcements around the investments we are making in machine learning, artificial intelligence and intelligent apps.



Learn more about Microsoft Data Amp at this original blog post from Mitra Azizirad, Corporate Vice President for Cloud Application Development & Data Marketing. And be sure to save the date, so you can be among the first to hear about our announcements next month.

CIML Blog Team

ZEIT now deployments of open source ASP.NET Core web apps with Docker


ZEIT is a new cloud service and "now" is the name of their deployment tool. ZEIT World is their DNS service. If you head over to https://zeit.co/ you'll see a somewhat cryptic animated gif that shows how almost impossibly simple it is to deploy a web app with ZEIT now.

ZEIT works with .NET Core and ASP.NET

You can make a folder, put an index.html (for example) in it and just run "now." You'll automatically get a website with an autogenerated name and it'll be live. It's probably the fastest and easiest deploy I've ever seen. Remember when Heroku (then Azure, then literally everyone) started using git for deployment? Clearly being able to type "now" and just get a web site on the public internet was the next step. (Next someone will make "up" which will then get replaced with just pressing ENTER on an empty line! ;) )

Jokes aside, now is clean and easy. I appreciate their organizational willpower to make an elegant and simple command line tool. I suspect it's harder than it looks to keep things simple.

All of their examples use JavaScript and node.js, but they also support Docker, which means they support open source ASP.NET Core on .NET Core! But do they know they do? ;) Let's find out.

And more importantly, how easy is it? Can I take a site from concept to production in minutes? Darn tootin' I can.

First, make a quick ASP.NET Core app. I'll use the MVC template with Bootstrap.

C:\Users\scott\zeitdotnet>dotnet new mvc
Content generation time: 419.5337 ms
The template "ASP.NET Core Web App" created successfully.

I'll do a quick dotnet restore to get the packages for my project.

C:\Users\scott\zeitdotnet>dotnet restore
Restoring packages for C:\Users\scott\zeitdotnet\zeitdotnet.csproj...
Generating MSBuild file C:\Users\scott\zeitdotnet\obj\zeitdotnet.csproj.nuget.g.props.
Generating MSBuild file C:\Users\scott\zeitdotnet\obj\zeitdotnet.csproj.nuget.g.targets.
Writing lock file to disk. Path: C:\Users\scott\zeitdotnet\obj\project.assets.json
Restore completed in 2.93 sec for C:\Users\scott\zeitdotnet\zeitdotnet.csproj.

NuGet Config files used:
C:\Users\scott\AppData\Roaming\NuGet\NuGet.Config
C:\Program Files (x86)\NuGet\Config\Microsoft.VisualStudio.Offline.config

Feeds used:
https://api.nuget.org/v3/index.json
C:\LocalNuGet
C:\Program Files (x86)\Microsoft SDKs\NuGetPackages\

Now I need to add a Dockerfile. I'll make one in the root that looks like this:

FROM microsoft/aspnetcore
LABEL name="zeitdotnet"
ENTRYPOINT ["dotnet", "zeitdotnet.dll"]
ARG source=.
WORKDIR /app
EXPOSE 80
COPY $source .

Note that I could have ZEIT build my app for me if I used the aspnetcore Dockerfile that includes the .NET Core SDK, but that would not only make my deployment longer, it would also make my docker images a LOT larger. I want to include JUST the .NET Core runtime in my image, so I'll build and publish locally.

ZEIT now is going to need to see my Dockerfile, and since I want my app to include the binaries (I don't want to ship my source in the Docker image up to ZEIT) I need to mark my Dockerfile as "Content" and make sure it's copied to the publish folder when my app is built and published.

I'll add an entry to my project's csproj file. If I were using Visual Studio, this is the same as right-clicking on the Properties of the Dockerfile, setting it to Content, and then "Always Copy to Output Directory."
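
The csproj entry is likely something along these lines (a sketch; element names and values may differ slightly in your project):

<ItemGroup>
  <!-- Treat the Dockerfile as content and copy it to the output/publish folders -->
  <Content Include="Dockerfile">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
    <CopyToPublishDirectory>Always</CopyToPublishDirectory>
  </Content>
</ItemGroup>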

Now I'll just build and publish to a folder with one command:

C:\Users\scott\zeitdotnet>dotnet publish
Microsoft (R) Build Engine version 15.1.548.43366
Copyright (C) Microsoft Corporation. All rights reserved.

zeitdotnet -> C:\Users\scott\zeitdotnet\bin\Debug\netcoreapp1.1\zeitdotnet.dll

And finally, from the .\bin\Debug\netcoreapp1.1\ folder I run "now." (Note that I've installed now and signed up for their service, of course.)

C:\Users\scott\zeitdotnet\bin\Debug\netcoreapp1.1\publish>now
> Deploying ~\zeitdotnet\bin\Debug\netcoreapp1.1\publish
> Ready! https://zeitdotnet-gmhcxevqkf.now.sh (copied to clipboard) [3s]
> Upload [====================] 100% 0.0s
> Sync complete (196.18kB) [2s]
> Initializing…
> Building
>▲ docker build
> ---> 035a0a1401c3
> Removing intermediate container 289b9e4ce5d9
> Step 6 : EXPOSE 80
> ---> Running in efb817308333
> ---> fbac2aaa3039
> Removing intermediate container efb817308333
> Step 7 : COPY $source .
> ---> ff009cfc48ea
> Removing intermediate container 8d650c1867cd
> Successfully built ff009cfc48ea
>▲ Storing image
>▲ Deploying image
> Deployment complete!

Now has put the generated URL in my clipboard (during deployment you'll get redirected to a lovely status page) and when it's deployed I can visit my live site. But, that URL is not what I want. I want to use a custom URL.

I can take one of my domains and set it up with ZEIT World's DNS but I like DNSimple (ref).

I can add my domain as an external one after adding a TXT record to my DNS to verify I own it. Then I setup a CNAME to point my subdomain to alias.zeit.co.

C:\Users\scott\Desktop\zeitdotnet>now alias https://zeitdotnet-gmhcxevqkf.now.sh http://zeitdotnet.hanselman.com
> zeitdotnet.hanselman.com is a custom domain.
> Verifying the DNS settings for zeitdotnet.hanselman.com (see https://zeit.world for help)
> Verification OK!
> Provisioning certificate for zeitdotnet.hanselman.com
> Success! Alias created:
https://zeitdotnet.hanselman.com now points to https://zeitdotnet-gmhcxevqkf.now.sh [copied to clipboard]

And that's it. It even has a nice SSL certificate that they applied for me. It doesn't terminate SSL all the way into the Docker container's Kestrel web server, but for most things that aren't banking it'll be just fine.

All in all, a lovely experience. Here's my Hello World ASP.NET Core app running in ZEIT and deployed with now at http://zeitdotnet.hanselman.com (if you are visiting this long after this was published, this sample MAY be gone.)

I am still learning about this (this whole exercise was about 30 total minutes and asking Glenn Condron a docker question) so I'm not clear how this would work in a large multi-container deployment, but as long as your site is immutable (don't write to the container's local disk!) ZEIT says it will scale your single containers. Perhaps docker-compose support is coming?


Sponsor: Did you know VSTS can integrate closely with Octopus Deploy? Join Damian Brady and Brian A. Randell as they show you how to automate deployments from VSTS to Octopus Deploy, and demo the new VSTS Octopus Deploy dashboard widget. Register now!



© 2017 Scott Hanselman. All rights reserved.
     

Need help tracking KPIs? Need a Scorecard to show your Boss?


We are starting development on a Scorecard\KPI tracking service which will enable you to set thresholds, organize, visualize and share KPIs to constantly improve the quality of your service.

Are you interested in participating in our Alpha? Join here: https://www.surveymonkey.com/r/AzureScorecard

Satya Vel

 

Ransomware operators are hiding malware deeper in installer packages


We are seeing a wave of new NSIS installers used in ransomware campaigns. These new installers pack significant updates, indicating a collective move by attackers to once again dodge AV detection by changing the way they package malicious code. These changes are observed in installers that drop ransomware like Cerber, Locky, and others.

Cybercriminals have been known to hide malware in Nullsoft Scriptable Install System (NSIS) installer files. As antivirus software effectively detects these installer files, cybercriminals are once again updating their tools to penetrate computers.

The new malicious NSIS installers visibly attempt to look as normal as possible by incorporating non-malicious components that usually appear in legitimate installers:

  • More non-malicious plugins, in addition to the installation engine system.dll
  • A .bmp file that serves as a background image for the installer interface, to mimic legitimate ones
  • A non-malicious uninstaller component uninst.exe

The most significant change, however, is the absence of the usual randomly named DLL file, which was previously used to decrypt the encrypted malware. This change significantly reduces the footprint of malicious code in the NSIS installer package.


Figure 1. Comparison of contents of old NSIS installers and the updated installers, highlighting the absence of the randomly named DLL file in the updated version

The adoption of these updated NSIS installers by cybercriminals is quite significant, as reflected in the uptick in the number of unique NSIS installers that drop ransomware starting last month.


Figure 2. There is an increase in volume of unique NSIS installers that drop ransomware

Updated NSIS malware installers

In older versions of malicious Nullsoft installers, the package contained a malicious DLL that decrypts and runs the encrypted data file, which contains both the encrypted payload and decryption code.

In the new version, the malicious DLL is absent. Instead, the Nullsoft installation script is in charge of loading the encrypted data file in memory and executing its code area.

The installation script itself is obfuscated:


Figure 3. Installation script

After loading the encrypted data file into memory, the script gets the offset to the code area (12137):


Figure 4. Part of the code that shows the offset

It then issues a call:


Figure 5. Part of the code that shows the call to the encrypted data file

The code area in the encrypted data file is the first decryption layer:


Figure 6. Data file after first decryption

The script then further decrypts the code, eventually decrypting and running the final payload.

By constantly updating the contents and function of the installer package, the cybercriminals are hoping to penetrate more computers and install malware by evading antivirus solutions.

NSIS installers in ransomware campaigns

Given the pervasiveness of NSIS installers that distribute ransomware, they are likely part of a distribution network used by attackers to install their malware.

These NSIS installers are used in campaigns that deliver malware, most notably ransomware. The campaigns usually take this scheme:

  1. The attack vector is email. Email messages are crafted to mimic invoice delivery notifications.
  2. The email messages contain any of the following malicious attachments:
    • JavaScript downloaders
    • JavaScript downloaders in .zip files
    • .LNK files that contain PowerShell scripts
    • Documents with malicious macro code
  3. The malicious attachment, when opened, downloads the NSIS installer.
  4. The NSIS installer then decrypts and runs the malware.

We have seen the NSIS installers deliver the following malware, which include notorious ransomware families, in recent campaigns:

Real-time security solutions for constantly evolving threats

Cybercriminals will stop at nothing to attempt sidestepping security solutions in order to install malware on your computer. The fact that we’re seeing these innovations in cybercriminal operations that deliver ransomware reveals that they are highly motivated to achieve their ultimate goal: to siphon money off their victims. Unfortunately, for enterprises, the damage of successful malware infection can be so much more than just cash.

At Microsoft, we monitor the threat landscape very closely to detect movements like updated infection techniques. We do this so that we can make sure we provide the best possible protection for our customers. Understanding attacker techniques not only allows us to create solutions for specific attacks but lets us see trends for which more heuristic solutions are needed.

To get the latest protection from Microsoft, upgrade to Windows 10. Keeping your computers up-to-date gives you the benefits of the latest features and proactive mitigation built into the latest versions of Windows.

Enable Windows Defender Antivirus to detect these new NSIS installers. Windows Defender Antivirus uses cloud-based protection, helping to protect you from the latest threats.

For enterprises, use Device Guard to lock down devices and provide kernel-level virtualization-based security, allowing only trusted applications to run, effectively preventing these NSIS installers from executing and downloading their payload.

Use Office 365 Advanced Threat Protection, which has machine learning capability that blocks dangerous email threats, such as the emails carrying scripts that download these malicious installers.

Finally, monitor your network with Windows Defender Advanced Threat Protection, which alerts security operations teams about suspicious activities. Evaluate it for free.

 

Andrea Lelli
MMPC

doAzureParallel: Take advantage of Azure’s flexible compute directly from your R session


Users of the R language often require more compute capacity than their local machines can handle. However, scaling up their work to take advantage of cloud capacity can be complex, troublesome, and can often distract R users from focusing on their algorithms.

We are excited to announce doAzureParallel, a lightweight R package built on top of Azure Batch that allows you to easily use Azure's flexible compute resources right from your R session.

At its core, the doAzureParallel package is a parallel backend for the widely popular foreach package that lets you execute multiple processes across a cluster of Azure virtual machines. In just a few lines of code, the package helps you create and manage a cluster in Azure and register it as a parallel backend to be used with the foreach package.

doAzureParallel diagram

With doAzureParallel, there’s no need to manually create, configure, and manage a cluster of individual virtual machines. Instead, this package makes running your jobs at scale no more complex than running your algorithms on your local machine. With Azure Batch’s autoscaling capabilities, you can also increase or decrease the size of your cluster to fit your workloads, helping you to save time and/or money.

doAzureParallel also uses the Azure Data Science Virtual Machine (DSVM), allowing Azure Batch to easily and quickly configure the appropriate environment in as little time as possible.

There is no additional cost for these capabilities – you only pay for the Azure VMs you use.

doAzureParallel is ideal for running embarrassingly parallel work such as parametric sweeps or Monte Carlo simulations, making it a great fit for many financial modelling algorithms (back-testing, portfolio scenario modelling, etc).

Installation / Pre-requisites

To use doAzureParallel, you need to have a Batch account and a Storage account set up in Azure. More information on setting up your Azure accounts.

You can install the package directly from Github. More information on install instructions and dependencies.

Getting Started

Once you install the package, getting started is as simple as few lines of code:

Load the package:

library(doAzureParallel)

Set up your parallel backend (which is your pool of virtual machines) with Azure:

# 1. Generate a pool configuration json file.
generateClusterConfig("pool_config.json")

# 2. Edit your pool configuration file.
# Enter your Batch account & Storage account information and configure your pool settings

# 3. Create your pool. This will create a new pool if your pool hasn’t already been provisioned.
pool <- makeCluster("pool_config.json")

# 4. Register the pool as your parallel backend
registerDoAzureParallel(pool)

# 5. Check that your parallel backend has been registered
getDoParWorkers()

Run your parallel foreach loop with the %dopar% keyword. The foreach function will return the results of your parallel code.

number_of_iterations <- 10
results <- foreach(i = 1:number_of_iterations) %dopar% {
    # This code is executed, in parallel, across your Azure pool.
    myAlgorithm(…)
}

When developing at scale, it is always recommended that you test and debug your code locally first. Switch between %dopar% and %do% to toggle between running in parallel on Azure and running in sequence on your local machine.

# run your code sequentially on your local machine
results <- foreach(i = 1:number_of_iterations) %do% { … }

# use the doAzureParallel backend to run your code in parallel across your Azure pool
results <- foreach(i = 1:number_of_iterations) %dopar% {…}

After you finish running your R code at scale, you may want to shut down your pool of VMs to make sure that you aren’t being charged anymore:

# shut down your pool
stopCluster(pool)


Monte Carlo Pricing Simulation Demo

The following demo will show you a simplified version of predicting a stock price after 5 years by simulating 5 million different outcomes of a single stock.

Let's imagine that Contoso's stock price moves, on average, by a factor of 1.001 each day (a daily gain of about 0.1%), with a volatility of 0.01. Given a starting price of $100, we can use a Monte Carlo pricing simulation to figure out what price Contoso's stock will be after 5 years.

First, define the assumptions:

mean_change = 1.001
volatility = 0.01
opening_price = 100

Create a function that simulates the movement of the stock price for one possible outcome over 5 years by taking the cumulative product of draws from a normal distribution using the variables defined above.

simulateMovement <- function() {
    days <- 1825 # ~ 5 years
    movement <- rnorm(days, mean=mean_change, sd=volatility)
    path <- cumprod(c(opening_price, movement))
    return(path)
}

On our local machine, simulate 30 possible outcomes and graph the results:

simulations <- replicate(30, simulateMovement())
matplot(simulations, type='l') # plots all 30 simulations on a graph

doAzureParallel - demo image 1

To understand where Contoso's stock price will be in 5 years, we need to understand the distribution of the closing price for each simulation (as represented by the lines). But instead of looking at the distribution of just 30 possible outcomes, let's simulate 5 million outcomes to get a massive sample for the distribution.

Create a function to simulate the movement of the stock price for one possible outcome, but only return the closing price.

getClosingPrice <- function() {
    days <- 1825 # ~ 5 years
    movement <- rnorm(days, mean=mean_change, sd=volatility)
    path <- cumprod(c(opening_price, movement))
    closingPrice <- path[length(path)] # last element of the path, i.e. the price after 5 years
    return(closingPrice)
}

Using the foreach package and doAzureParallel, we can simulate 5 million outcomes in Azure. To parallelize this, let's run 50 iterations of 100,000 outcomes each:

closingPrices <- foreach(i = 1:50, .combine='c') %dopar% {
    replicate(100000, getClosingPrice())
}

After running the foreach package against the doAzureParallel backend, you can look at your Azure Batch account in the Azure Portal to see your pool of VMs running the simulation.

doAzureParallel - demo image 2

As the nodes in the heat map change color, we can see the cluster busy working on the pricing simulation.

When the simulation finishes, the package automatically merges the results of each simulation and pulls them down from the nodes, so that the results are ready to use in your R session.

Finally, we'll plot the results to get a sense of the distribution of closing prices over the 5 million possible outcomes.

# plot the 5 million closing prices in a histogram
hist(closingPrices)

doAzureParallel - demo image 3

Based on the distribution above, Contoso's stock price will most likely move from the opening price of $100 to a closing price of roughly $500 over the 5-year period.

 



We look forward to you using these capabilities and hearing your feedback. Please contact us at razurebatch@microsoft.com for feedback or feel free to contribute to our Github repository.


Dive into Power BI at Summit EMEA

I invite you to join me at Summit EMEA and learn about how you can maximize the value in your data by using the amazing capabilities in Power BI. At the event, you’ll hear what’s new with Power BI, discover new ways to deliver insights for your business, hear from power users and experts, and learn how your peers are using their data. Data is at the heart of all business and Summit EMEA is a great opportunity to stay informed about the latest data innovations.

Instant File Recovery from Azure Linux VM backup using Azure Backup – Preview


We earlier announced Instant file recovery from Azure Windows VM backups, which enables you to restore files instantly from the Azure Recovery Services vault with no additional cost or infrastructure. Today, we are excited to announce the same feature for Azure Linux VM backups, in preview. If you are new to Azure Backup, you can start backing up directly from the Azure IaaS VM blade and start using this feature.

Value proposition:

  • Instant recovery of files – Now instantly recover files from the cloud backups of Azure VMs. Whether it’s accidental file deletion or simply validating the backup, instant restore drastically reduces the time to recover your data.
  • Mount application files without restoring them - Our iSCSI-based approach allows you to open/mount application files directly from cloud recovery points to application instances, without having to restore them. For example, in the case of a backup of an Azure Linux VM running MongoDB, you can mount BSON data dumps from the cloud recovery point and quickly validate the backup or retrieve individual items, such as tables, without having to download the entire data dump.

Learn how to instantly recover files from Azure Linux VM backups:

 

 

Basic requirements

The downloaded recovery script can be run on a machine that meets the following requirements.

  • The OS of the machine where the script is run (the recovery machine) should support/recognize the underlying file system of the backed-up Linux VM.
  • Ensure that the OS of the recovery machine is compatible with the backed-up VM and matches one of the versions in the following table:
    Linux OS        Versions
    Ubuntu          12.04 and above
    CentOS          6.5 and above
    RHEL            6.7 and above
    Debian          7 and above
    Oracle Linux    6.4 and above
  • The script requires Python and Bash components to execute and to provide a secure connection to the recovery point.
    Component       Version
    Python          2.6.6 and above
    Bash            4 and above
  • Only users with root level access can view the paths mounted by the script.

 

Advanced configurations

Recovering files from LVM/Software RAID Arrays:

If you are using LVM/RAID arrays in the backed-up Linux VM, you cannot run the script on the same virtual machine due to disk conflicts. Run the script on any other recovery machine (meeting the basic requirements mentioned above), and it will attach the relevant disks as shown in the output below.

Script output showing the attached LVM/RAID disks

The following additional commands need to be run by the user to make LVM/RAID Array partitions visible and online.

For LVM Partitions:

$ pvs   -  This will list all the volume groups under this physical volume

$ lvdisplay   -  This will list all logical volumes, names and their paths in a volume group

$ mount -  Now mount the logical volumes to a path of your choice.
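
For example, assuming lvdisplay reported a volume group named AppVG containing a logical volume named datalv (placeholder names), the mount step would look like this:

$ mkdir /mnt/recovered   -  Create a mount point of your choice

$ mount /dev/AppVG/datalv /mnt/recovered   -  Mount the logical volume path reported by lvdisplay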

 

For RAID Arrays:

$ mdadm --detail --scan   -  This will display details about all RAID disks on this machine. The relevant RAID disk from the backed-up VM will be displayed with its name.

If the RAID Disk has physical volumes, mount the disk directly to view all the volumes within it.

$ mount [RAID Disk Path] [/mountpath]

If the RAID disk was used to configure LVM over and above it, then re-use the process defined for LVM above and supply the volume name as an input.

 


Never Miss a Score with Bing Sportscaster


Never miss a score with the Bing Sportscaster bot.

To chat with our Sportscaster bot on Facebook Messenger, search ‘Bing Sportscaster’ and start the conversation. To get all the news you want about your favorite teams and players, tell Sportscaster your favorite team’s name and the bot takes it from there.


Curious about the schedule and who Bing predicts is going to win?


Too busy for a deep dive, but want to know if your team advanced to the next round? Follow them and the Bing Sportscaster will message you when there is a scoring play or interesting news.


Let the Bing Sportscaster show you what it can do with college hoops. Tell it who you like, and let the scores and rankings follow you.

In addition to the Bing Sportscaster bot, keep your schedule clear of any conflicts at game time by leveraging Outlook’s Interesting calendars.

Outlook Calendar - Interesting calendar in the ribbon

With a few simple clicks Outlook will add the entire playoff schedule, so you can stay focused on the game.

- The Bing Team


On-demand webinar: Accelerate application delivery with Docker Containers and Windows Server 2016


Adoption of Windows Server 2016 is well underway, and one area of innovation getting a lot of interest is the new Windows Server and Hyper-V Containers, which work with the Docker Engine that comes with each license of Windows Server 2016. Whether your organization is creating new applications or modernizing traditional applications, please join us for this on-demand webinar to explore how containers can accelerate application delivery in your organization.

Webinar hosts Taylor Brown from Microsoft and Mike Coleman from Docker will:

  • Dive into the architecture of the new container technology.
  • Distinguish how Virtual Machines and containers are different.
  • Talk about development and deployment experiences and best practices.
  • Explore Docker, the engine used to create and run Windows application containers.
  • Walk through technical demos and application deployment scenarios.

No need to wait, start watching now!

Work smarter—not harder—on the next Office Small Business Academy


Is your business falling victim to silent killers of productivity? Register now for the next episode of Office Small Business Academy, “Work Smarter: Productivity Tools for Your Business,” airing March 28 at 9 a.m. PT / 12 p.m. ET, and learn how to develop processes and key disciplines that will allow your business to work smarter, not harder.

  • Robert Sher, CEO to CEO founding principal and Forbes columnist, will present his article “The 7 silent growth killers,” which outlines what can destroy your organization’s productivity, along with common pitfalls to watch out for.
  • Jeff Haden, Inc. Magazine contributing editor, will discuss how you can streamline operations using technology solutions and will share a dose of practical advice you can use today—whether you’re running a full-grown company or a thriving startup.
  • Plus, discover how one business is using Microsoft Teams as their one-stop-shop for all things collaboration, helping them stay connected across time zones and teams.

Sign up for free!

For more information, visit the Office Small Business Academy home page.

Related content

Watch these recent episodes of the Small Business Academy:

The post Work smarter—not harder—on the next Office Small Business Academy appeared first on Office Blogs.

Visual Studio 2017 Update Preview
