
Award-winning GamePho app and Windows 8.1 empower gamers with multi-screen, converged experiences


Emre Taş recently received support from App Campus to develop and launch the two-match puzzle game, Witch Potion, in the Windows Phone Store. Taş is the CEO of the Turkish development firm Alictus. Impressed with the features and performance of the Windows Phone 8.1 platform, he is excited about porting his award-winning motion-controller app, GamePho, to Windows. So far, he is in love with the possibilities of the converged platform and is eagerly awaiting the release of the Miracast feature. I caught up with him to learn more about his innovative GamePho app and how he plans to use Windows 8.1 and Windows Phone 8.1 to take his game to new levels.

Tell us about GamePho. Why are you excited about porting the game to Windows Phone 8.1?

“The convergence of the Windows platforms has eliminated difficulties for developers and has certainly simplified our game’s cross-platform operation.”

Witch Potions screenshot

GamePho is a motion-controller app; it enables gamers to play motion-sensing games on a PC with a smartphone rather than having to buy an additional device. GamePho detects the smartphone in 3D as it moves, allowing the user to control the game on a second screen.

It was quite easy to port GamePho to Windows Phone 8.1, and we were pleasantly surprised when our motion detection algorithms worked flawlessly the first time we tried them after the transition.

Most developers find it burdensome to have to develop games that will deliver a consistent experience across platforms because they usually encounter problems with even the most basic projects. The convergence of the Windows platforms has eliminated such difficulties for developers and has certainly simplified our game’s cross-platform operation. We love the converged app model, and it is particularly relevant to GamePho’s core dual-screen philosophy. Miracast has the potential to simplify our technical operation even further, and that’s exciting.

With the converged app model, we can bundle GamePho into Windows Phone 8.1 games, and users will end up with the game on each of their Windows devices. Some users will play on a big screen using their phones as controllers while others will just enable the GamePho feature in Windows Phone 8.1. We’re creating different yet lively experiences for both screens. For instance, users can play their GamePho-enabled shooter games on Windows Phone and then start playing on a PC, using a phone as a gun, and friends can join in with their gun-phones too.

We think our vision aligns with Microsoft’s, especially with regard to second-screen experiences and convergence. We are using Windows Phone 8.1 on our development devices, including entry-level devices, and I can vouch that no other mobile operating system performs so well on such hardware. The operation is so fluid that one can barely tell the difference between an entry-level device and a high-end device. Windows Phone 8.1 provides more possibilities to engage users than any other platform.

Which Windows Phone 8.1 features do you plan to use?

“We plan to use [geofencing] to connect players in the same geography and encourage them to play the game together on one of those bigger screens using their Windows Phones as controllers.”

We’ll use many of the new features to increase engagement. For instance, we’ll use the Action Center and Azure to send customized push notifications across different devices, knowing we won’t bother our users since they can choose which notifications to receive on each device. And we’ll make it easier for our users to engage on those multiple devices—including bigger screens and smart TVs—by using Miracast once it’s available.

The geofencing API is fascinating too. We plan to use it to connect players in the same geography and encourage them to play the game together on one of those bigger screens using their Windows Phones as controllers. We’re also planning to get creative with the new Live Tile templates so we can visually inform users about their game progress, giving them reason to keep coming back.

What is your advice to developers?

“The benefits and opportunities to engage your users [on Windows Phone 8.1] are endless compared to other platforms.”

No matter what you are building, keep your audience in mind. Stay in touch with your potential users and get their constant feedback. Design your games according to their needs and desires, and don’t be afraid to redesign. Your final app or game may be very different from what you set out to build because it has to become what the users think it should be.

Don’t hesitate to develop for or port to Windows Phone 8.1. Support for the platform is great, and the benefits and opportunities to engage your users are endless compared to other platforms.


Restrict Team Foundation Build permissions


Do you have code that should be seen by only a subset of members of your on-premises team project collection? Do you use Team Foundation Build (TFBuild)? If so, you must create some custom groups to reduce the risk that unauthorized team project collection members can use a build process to bypass team project permissions.

Problem

For example, you administer the following team projects:

You want only the members of each team project to be able to read the code it contains, as shown above. However, by default, TFBuild controllers are collection-scoped resources, and so have permission to access all code in the collection. This means people who are not members of a team project could use a build process to obtain the code it contains.

For example, Johnnie is a member only of TfvcProjectA, but it is in the same team project collection as TfvcProjectB. So he could create a build process that delivers him the content from TfvcProjectB. Specifically, he can:

  • Map the path to the code on the Source Settings tab.
  • Check in a unit test that copies the code to a folder he can access.
  • Customize the build process to copy the code to a folder he can access.

Solution

To prevent this kind of access, implement some custom groups and deny the Project Collection Build Service Accounts group all permissions. For example, suppose you are running four build servers as NETWORK SERVICE:

The following diagram details the membership and permission settings:

For the reasoning behind denying permissions to members of the Project Collection Build Service Accounts group, see the Q&A at the end of this article.

Note: This guidance applies only to on-premises Team Foundation servers. We don't support this scenario for Visual Studio Online team projects.

Create the collection-level groups

From the security page of the team project collection, create the collection-level groups.

For each of your collection-level groups, grant the following permissions:

Add each build service account to one (and only one) of the collection-level groups.

Q: Where can I get the name of the build service account? A: See Deploy and configure a build server.

Modify the Project Collection Build Service Accounts group:

  • Remove all its members.
  • Set all the permissions to Deny.

Grant work item permissions

From the areas page of each of the team projects served by the collection-level group, grant work item permissions:

Set all the permissions of the Project Collection Build Service Accounts group to Deny.

Grant build permissions

From the build page of each of the team projects served by the collection-level group, grant build permissions:

Set all the permissions of the Project Collection Build Service Accounts group to Deny.

Grant version control permissions

Which kind of version control does your team project have?

TFVC version control

From the version control page of each of the team projects served by the collection-level group, grant TFVC version control permissions:

Set all the permissions of the Project Collection Build Service Accounts group to Deny.

Git version control

From the version control page of each of the team projects served by the collection-level group, grant Git version control permissions:

Set all the permissions of the Project Collection Build Service Accounts group to Deny.

Create the project-level groups

From the security page of each of your team projects, create a project-level group:

For each project-level group, grant the following permissions:

Add the appropriate collection-level group to the project-level group:

Q&A

Q: Why must I deny permissions to members of the Project Collection Build Service Accounts group?

A: To mitigate the risk of unauthorized access to team project resources, you should set all permissions of this group to Deny. Even if you personally are careful not to add members to the group, this could happen:

  • Automatically by the system. If another team project collection administrator deploys a build server, TFS automatically adds the build service account to the group.
  • Manually by another team project collection administrator who is not aware of the collection-wide access that this TFS group enables.

Questions? Suggestions? Feedback?

I invite you to post:

Success with Enterprise Mobility: “Managed Everything” for Small Enterprises



In the last post I discussed how the “Managed Everything” model works for Large Enterprises, and, in this post, I’ll look at how this approach is also ideal for small organizations.

Because of their size, margins, and competition with larger organizations, small enterprises have a lot to gain from cloud-based scale and extensibility. This is why so many successful small enterprises are often early (and successful) adopters of new technologies that empower their business and allow the core workforce to focus on the fundamentals of the business.

Small enterprises rely heavily on BYOD to keep their workforces optimally productive (and therefore competitive), and this means that they need easy-to-use yet powerful management and protection. These management and protection solutions also need to avoid being a major line item on the budget, since a small company’s resources need to go a lot of other places besides infrastructure maintenance. The infrastructure expenditures that are made need to be not just easy to use but ready to use quickly (preferably in an hour or less). This is one of the reasons why I believe so strongly that SaaS offerings are the perfect solution for small organizations. There are no up-front infrastructure costs and no heavy setup and configuration. Right now we’re seeing broad adoption of our Intune management service in this segment – with well over 10,000 unique small organizations subscribed and actively using Intune to manage PCs and mobile devices.

In a small enterprise, lost time and money can be disproportionately damaging to the bottom line, and this means that small organizations need their workforce to be able to consistently connect and stay productive no matter where they go, no matter which device they use. These organizations also can’t afford downtime anywhere in their infrastructure – whether that’s the result of a disaster or the time spent updating the software when new device/platform updates are available.

Everything in the last two paragraphs is proactively addressed with the Enterprise Mobility Suite. Any organization currently using Office 365 already has access to many of the capabilities needed for a widespread mobility management solution because Intune is built on top of Azure AD (just like O365). To see just how thoroughly you can tie together SCCM, Intune, AD, Azure AD, and Office, read all the good news in this post from earlier in the Success with Enterprise Mobility series. Intune can be up and running in just an hour.

This means that small enterprises get the simplicity and efficiency they need by having a single solution for MDM and PC management. It’s a solution that’s powerful, fast, seamless, and thorough.

Whenever I travel to speak at conferences or visit our offices around the world, I always set aside time to meet with customers in that region, and the feedback I get from the small enterprises I meet with is pretty unanimous:

  • They benefit from our continuous innovation in our services (and how we quickly share this value in the form of updates).
  • Their organizations and people keep getting more efficient because cloud-based solutions offer a rapid and continuous delivery of new value.
  • They no longer need to deploy or maintain expensive on-premises infrastructure.
  • They can still get huge value from their existing investments in SCCM while also taking advantage of the cloud.
  • Microsoft is seen as their preferred long-term partner.

To see for yourself the value I’ve described in this post, there are a couple of great “Getting Started” resources you can check out right now to help set up a pilot of your own. This overview is a step-by-step guide to setting up Intune for device management, and this overview helps with the next step of configuring Intune for the needs of your organization, including how to automate device enrollment, identity management, etc. I really encourage everyone to take the time to test drive these tools and evaluate just what you can do with them at your disposal.

* * *

If you’re a small business, Intune offers a really big value. With Intune you can deploy and manage all of your organization’s personal and corporate devices – and, if you combine it with the SCCM deployment you already own, the value of that investment grows exponentially.

This is something that’s really unique and really exciting about the solutions Microsoft has put into the market. Our customers and partners benefit from layered protection (discussed here), enterprise-grade management, and control over everything going on within those apps and devices.

With this “Managed Everything” approach, enterprises of any size can empower their end-users to use the apps they love, and the IT team can implement the right controls to ensure that the data going to and from those devices is secure and within policy. This is a huge benefit, and, as I’ve said many times before, it’s one more reason it’s a great time to be in IT!

New poster for Cloud Ecosystem: Microsoft Azure, Windows Server 2012 R2 and System Center 2012 R2


For those who enjoy the Server Posterpedia posters, there is a new one that just became available for download. It’s called the "Cloud Ecosystem: Microsoft Azure, Windows Server 2012 R2 and System Center 2012 R2" poster and it depicts both public and on-premises cloud technologies.

Here’s a little thumbnail to give you an idea of what it looks like:

[Poster thumbnail]

 

As you probably can’t read the small font above :-), here are some details on what this poster includes:

  • Microsoft Azure Services including Service Categories, Compute Services, Data Services and App Services
  • System Center 2012 R2 including App Controller, Virtual Machine Manager, Operations Manager, Configuration Manager, Service Manager, Orchestrator, Data Protection Manager and Azure Pack
  • Windows Intune
  • Windows Server 2012 R2 including Storage Spaces, Data Deduplication, Resilient File System, SMB Transparent Failover, Storage Quality of Service, Generation 2 Virtual Machines, Online VHDX Resize, Enhanced Session Mode, Live Migration, Failover Clustering, Cluster Shared Volumes, Scale-Out File Server, Shared Virtual Hard Disks, Hyper-V Extensible Switch, Remote Desktop Services, SMB Direct, SMB Multi-channel and NIC Teaming.

You can get the new poster from the download center at  http://www.microsoft.com/en-us/download/details.aspx?id=43718.

You can also get this and other posters using the free Server Posterpedia app from the Windows Store at http://aka.ms/sposterpedia.

Stay up-to-date with Internet Explorer


As we shared in May, Microsoft is prioritizing helping users stay up-to-date with the latest version of Internet Explorer. Today we would like to share important information on migration resources, upgrade guidance, and details on support timelines to help you plan for moving to the latest Internet Explorer browser for your operating system.

Microsoft offers innovative and transformational services for a mobile-first and cloud-first world, so you can do more and achieve more; Internet Explorer is core to this vision.  In today’s digital world, billions of people use Internet-connected devices, powered by cloud service-based applications, spanning both work and life experiences.  Running a modern browser is more important than ever for the fastest, most secure experience on the latest Web sites and services, connecting anytime, anywhere, on any device.

Developer and User Benefits

Developers benefit when users stay current on the latest Web browser. Older browsers may not support modern Web standards, so browser fragmentation is a problem for Web site developers. Web app developers, too, can work more efficiently and create better products and product roadmaps if their customers are using modern browsers. Upgrading benefits the developer ecosystem.

Users also benefit from a modern browser that enables the latest digital work and life experiences while decreasing online risks. Internet Explorer 11, our latest modern browser, delivers many benefits:

  • Improved Security – Outdated browsers represent a major challenge in keeping the Web ecosystem safer and more secure, as modern Web browsers have better security protection. Internet Explorer 11 includes features like Enhanced Protected Mode to help keep customers safer. Microsoft proactively fixes many potential vulnerabilities in Internet Explorer, and our work to help protect customers is delivering results: According to NSS Labs, protection against malicious software increased from 69% on Internet Explorer 8 in 2009 to over 99% on Internet Explorer 11. It should come as no surprise that the most recent, fully-patched version of Internet Explorer is more secure than older versions.
  • Productivity – The latest Internet Explorer is faster, supports more modern Web standards, and has better compatibility with existing Web apps. Users benefit by being able to run today’s Web sites and services, such as Office 365, alongside legacy Web apps.
  • Unlock the future – Upgrading and staying current on the latest version of Internet Explorer can ease the migration to Windows 8.1 Update and the latest Windows tablets and other devices, unlocking the next generation of technology and productivity.

Browser Migration Guidance

Microsoft recommends enabling automatic updates to ensure an up-to-date computing experience—including the latest version of Internet Explorer—and most consumers use automatic updates today. Commercial customers are encouraged to test and accept updates quickly, especially security updates. Regular updates provide significant benefits, such as decreased security risk and increased reliability, and Windows Update can automatically install updates for Internet Explorer and Windows.

For customers not yet running the latest browser available for their operating system, we encourage upgrading and staying up-to-date for a faster, more secure browsing experience. Beginning January 12, 2016, the following operating system and browser version combinations will be supported:

Windows Platform               Internet Explorer Version
Windows Vista SP2              Internet Explorer 9
Windows Server 2008 SP2        Internet Explorer 9
Windows 7 SP1                  Internet Explorer 11
Windows Server 2008 R2 SP1     Internet Explorer 11
Windows 8.1                    Internet Explorer 11
Windows Server 2012            Internet Explorer 10
Windows Server 2012 R2         Internet Explorer 11

After January 12, 2016, only the most recent version of Internet Explorer available for a supported operating system will receive technical support and security updates. For example, customers using Internet Explorer 8, Internet Explorer 9, or Internet Explorer 10 on Windows 7 SP1 should migrate to Internet Explorer 11 to continue receiving security updates and technical support. For more details regarding support timelines on Windows and Windows Embedded, see the Microsoft Support Lifecycle site.
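If you need to inventory which version of Internet Explorer a machine is running before planning the migration, a quick registry check can help. The following is a minimal sketch, assuming the documented version values under HKLM\SOFTWARE\Microsoft\Internet Explorer (“svcVersion” on Internet Explorer 10 and later, with “Version” as the fallback on older releases); verify the value names against your own environment.

# Hedged sketch: report the locally installed Internet Explorer version.
$ieKey = 'HKLM:\SOFTWARE\Microsoft\Internet Explorer'
$props = Get-ItemProperty -Path $ieKey
$ieVersion = if ($props.svcVersion) { $props.svcVersion } else { $props.Version }
Write-Output "Installed Internet Explorer version: $ieVersion"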

As some commercial customers have standardized on earlier versions of Internet Explorer, Microsoft is introducing new features and resources to help customers upgrade and stay current on the latest browser. Customers should plan for upgrading to modern standards—to benefit from the additional performance, security, and productivity of modern Web apps—but in the short term, backward compatibility with legacy Web apps may be a cost-effective, if temporary, path. Enterprise Mode for Internet Explorer 11, released in April 2014, offers enhanced backward compatibility and enables you to run many legacy Web apps during your transition to modern Web standards. 

Today we are announcing that Enterprise Mode will be supported through the duration of the operating system lifecycle, to help customers extend their existing Web app investments while staying current on the latest version of Internet Explorer. On Windows 7, Enterprise Mode will be supported through January 14, 2020. Microsoft will continue to improve Enterprise Mode backward compatibility, and to invest in tools and other resources to help customers upgrade and stay up-to-date on the latest version of Internet Explorer.
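Enterprise Mode is normally rolled out through Group Policy, which points Internet Explorer 11 at an Enterprise Mode site list. The snippet below is a rough, registry-based illustration only; the EnterpriseMode key and SiteList value reflect the published policy settings, but treat them as assumptions and prefer Group Policy in production.

# Hedged sketch: point Enterprise Mode at a site list via the policy registry key.
$emKey = 'HKLM:\SOFTWARE\Policies\Microsoft\Internet Explorer\Main\EnterpriseMode'
New-Item -Path $emKey -Force | Out-Null
Set-ItemProperty -Path $emKey -Name SiteList -Value 'http://intranet/EnterpriseModeSiteList.xml'   # illustrative URL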

Browser Migration Resources

Microsoft offers numerous online support resources for customers and partners who wish to migrate to the latest version of Internet Explorer.

  1. Modern.IE – For developers updating sites to modern standards, Modern.IE provides a set of tools, best practices, and prescriptive guidance. An intranet scanner is available for download, for assessing Web apps within corporate networks.
  2. Internet Explorer TechCenter – The Internet Explorer TechNet site includes technical resources to deploy, maintain and support Internet Explorer. Enterprise Mode for Internet Explorer 11 is covered in detail, to help customers extend Web app investments by leveraging this new backward compatibility feature.
  3. Internet Explorer Developer Center – The MSDN developer site includes resources related to application development for Internet Explorer.
  4. Microsoft Assessment and Planning (MAP) Toolkit – This is an agentless inventory and planning tool that can assess your current browser install base.

For customers and partners who want hands-on guidance, Microsoft has a number of deployment and compatibility services available to assist with migrations. These services include:

  1. Microsoft Services Support – Gain the most benefit from your IT infrastructure by pairing your business with Microsoft Services Premier Support. Our dedicated support teams provide continuous hands-on assistance and immediate escalation for urgent issues, which speeds resolution and helps you keep your mission-critical systems up and running.
  2. Microsoft Consulting Services – Fast and effective deployment of your Microsoft technologies shortens the time it takes to see value from your investments; and when your people use those technologies to their fullest extent, they help grow their skills and your business. Microsoft Services consultants work with your organization to deploy and adopt Microsoft technologies efficiently and cost-effectively, and we can help you minimize risk in your most complex initiatives. Our expertise on the Microsoft platform and collaboration with our global network of partners and technical communities fuel our ability to help you consider just what else is possible through your innovation and Microsoft technologies and solutions.
  3. Internet Explorer Migration Workshop – The Microsoft Services Internet Explorer Migration Workshop helps customers understand the migration process to the latest version of Internet Explorer, using a structured workshop targeted towards IT professionals and developers. Your subject matter experts will quickly learn how to evaluate compatibility issues and remediation techniques. For more information, contact your Microsoft Services representative or visit www.microsoft.com/services.
  4. Find a Microsoft partner on Pinpoint – Connect with a certified IT specialist in your area who knows how to help you upgrade to the most current version of Internet Explorer [and the .NET Framework], with minimal disruption to your business and applications.

By offering better backward compatibility and resources to help customers upgrade, Microsoft is making it easier than ever before for commercial customers to stay current on the latest version of Internet Explorer. In addition to modern Web standards, improved performance, increased security, and greater reliability, migrating to Internet Explorer 11 also helps unlock upgrades to Windows 8.1 Update, services like Office 365, and the latest Windows devices.

— Roger Capriotti, Director, Internet Explorer

Moving to the .NET Framework 4.5.2


A few months ago we announced the availability of the .NET Framework 4.5.2, a highly compatible, in-place update to the .NET 4.x family (.NET 4, 4.5, and 4.5.1). The .NET Framework 4.5.2 was released only a few short months after .NET 4.5.1 and gives you the benefits of greater stability, reliability, security, and performance without any action beyond installing the .NET 4.5.2 update; that is, there is no need to recompile your application to get these benefits.

The quick pace at which we’re evolving and shipping means the latest fixes, features, and innovations are available in the latest version and not in legacy versions. To that end, we are making it easier than ever before for customers to stay current on the .NET Framework 4.x family of products with highly compatible, in-place updates for the .NET 4.x family.

We will continue to fully support .NET 4, .NET 4.5, .NET 4.5.1, and .NET 4.5.2 until January 12, 2016; this includes security updates as well as non-security technical support and hotfixes. Beginning January 12, 2016, only .NET Framework 4.5.2 will continue to receive technical support and security updates. There is no change to the support timelines for any other .NET Framework version, including .NET 3.5 SP1, which will continue to be supported for the duration of the operating system lifecycle.

We will continue to focus on .NET and as we outlined at both TechEd NA and Build earlier in 2014, we are working on a significant set of technologies, features and scenarios that will be part of .NET vNext, our next major release of the .NET Framework coming in 2015.

For more details on the .NET Framework support lifecycle, visit the Microsoft Support Lifecycle site.

If you have any questions regarding compatibility of the .NET Framework, you may want to review the .NET Application Compatibility page. Should you have any questions that remain unanswered, we’re here to help: engage with Microsoft Support through your regular channels for a resolution. Alternatively, you can also write to us at netfxcompat_at_microsoft.com.


We have outlined a few Q&A below to help address any questions you may have.

Will I need to recompile/rebuild my applications to make use of .NET 4.5.2?

No, .NET 4.5.2 is a compatible, in-place update on top of .NET 4, .NET 4.5, and .NET 4.5.1. This means that applications built to target any of these previous .NET 4.x versions will continue running on .NET 4.5.2 without change. No recompiling of apps is necessary.
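If you want to confirm that a machine has already picked up the in-place update, you can inspect the Release value that .NET 4.x setup writes to the registry. The following is a minimal sketch; 379893 is the documented Release number for the .NET Framework 4.5.2, but verify the exact thresholds against the .NET version-detection documentation.

# Hedged sketch: check whether .NET Framework 4.5.2 (or later) is installed.
$ndpKey  = 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full'
$release = (Get-ItemProperty -Path $ndpKey -ErrorAction SilentlyContinue).Release
if ($release -ge 379893) {
    Write-Output ".NET Framework 4.5.2 or later is installed (Release = $release)."
} else {
    Write-Output ".NET Framework 4.5.2 is not installed (Release = $release)."
}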

Are there any breaking changes in .NET 4.5.2? Why do you include these changes?

There are a very small number of changes in .NET 4.5.2 that are not fully compatible with earlier .NET versions.  We call these runtime changes. We include these changes only when absolutely necessary in the interests of security, in order to comply with industry wide standards, or in order to correct a previous incompatibility within .NET. Additionally, there are a small number of changes included in .NET 4.5.2 that will only be enabled if you choose to recompile your application against .NET 4.5.2; we call these changes retargeting changes.

More information about application compatibility including both .NET runtime and retargeting changes across the various versions in the .NET 4.x family can be found here.

Microsoft products such as Exchange Server, SQL Server, Dynamics CRM, SharePoint, and Lync are built on top of .NET. Do I need to make any updates to these products if they are using .NET 4, 4.5 or 4.5.1?

Newer versions of products such as Exchange, SQL Server, Dynamics CRM, SharePoint, and Lync are based on .NET 4 or .NET 4.5. Because .NET 4.5.2 is a compatible, in-place update on top of .NET 4, 4.5, and 4.5.1, even a large software application such as Exchange that was built using .NET 4 will continue to run without any changes when the .NET runtime is updated from .NET 4 or .NET 4.5 to .NET 4.5.2. That said, we recommend you validate your deployment by updating the .NET runtime to .NET 4.5.2 in a QA/pre-production environment before rolling it out to production.

What about .NET 3.5 SP1? Is that no longer available?

No, this announcement does not affect versions prior to .NET 4. The .NET 3.5 SP1 version is installed side-by-side with the .NET 4.x versions, so updates to one do not have an impact on the other. You can continue to use .NET 3.5 SP1 beyond January 12, 2016.

Introducing the Azure PowerShell DSC (Desired State Configuration) extension


Earlier this year Microsoft released the Azure VM Agent and Extensions as part of the Windows Azure Infrastructure Services. VM Extensions are software components that extend the VM functionality and simplify various VM management operations; for example, the VMAccess extension can be used to reset a VM’s password, or the Custom Script extension can be used to execute a script on the VM.

Today, we are introducing the PowerShell Desired State Configuration (DSC) Extension for Azure VMs as part of the Azure PowerShell SDK. You can use new cmdlets to upload and apply a PowerShell DSC configuration on an Azure VM enabled with the PowerShell DSC extension. The PowerShell DSC extension will then call into PowerShell DSC to enact the received configuration on the VM.

If you already have the Azure PowerShell SDK installed, you will need to update to version 0.8.6 or later.

Once you have installed and configured Azure PowerShell and authenticated to Azure, you can use the Get-AzureVMAvailableExtension cmdlet to see the PowerShell DSC extension.

PS C:\>Get-AzureVMAvailableExtension -Publisher Microsoft.PowerShell
Publisher                  : Microsoft.Powershell                                                                       
ExtensionName              : DSC                                                                                        
Version                    : 1.0                                                                                        
PublicConfigurationSchema  :                                                                                            
PrivateConfigurationSchema :                                                                                            
SampleConfig               :                                                                                            
ReplicationCompleted       : True                                                                                       
Eula                       : http://azure.microsoft.com/en-us/support/legal/                                            
PrivacyUri                 : http://www.microsoft.com/                                                                  
HomepageUri                : http://blogs.msdn.com/b/powershell/                                                        
IsJsonExtension            : True                                                                                       

Executing a simple scenario

One scenario in which this new extension can be used is the automation of software installation and configuration upon a machine’s initial boot-up.

As a simple example, let’s say you need to create a new VM and install IIS on it. For this, you would first create a PowerShell script that defines the configuration (NOTE: I saved this script as C:\examples\IISInstall.ps1):

configuration IISInstall
{
    node ("localhost")
    {
        WindowsFeature IIS
        {
            Ensure = "Present"
            Name = "Web-Server"                      
        }
    }
}

Then you would use Publish-AzureVMDscConfiguration to upload your configuration to Azure storage. Publish-AzureVMDscConfiguration is one of the new cmdlets in the Azure PowerShell SDK. The example below uses all the default values, but later in this post we’ll go over more details of how this works.

PS C:\>Publish-AzureVMDscConfiguration -ConfigurationPath C:\examples\IISInstall.ps1

This cmdlet creates a ZIP package that follows a predefined format that the PowerShell Desired State Configuration Extension can understand and then uploads it as a blob to Azure storage. The ZIP package in the above example was uploaded to

https://examples.blob.core.windows.net/windows-powershell-dsc/IISInstall.ps1.zip

“examples” in this URI is the name of my default Azure storage account, “windows-powershell-dsc” is the default storage container used by the cmdlet, and “IISInstall.ps1.zip” is the name of the blob for the file I just published.

Now my sample configuration is available for VMs to use, so let’s write a script that creates a VM that uses our sample configuration (NOTE: I saved this script as C:\examples\example-1.ps1):

$vm = New-AzureVMConfig -Name "example-1" -InstanceSize Small -ImageName "a699494373c04fc0bc8f2bb1389d6106__Windows-Server-2012-R2-201407.01-en.us-127GB.vhd" 

$vm = Add-AzureProvisioningConfig -VM $vm -Windows -AdminUsername "admin_account" -Password "Bull_dog1"

$vm = Set-AzureVMDSCExtension -VM $vm -ConfigurationArchive "IISInstall.ps1.zip" -ConfigurationName "IISInstall" 

New-AzureVM -VM $vm -Location "West US" -ServiceName "example-1-svc" -WaitForBoot

New-AzureVMConfig, Add-AzureProvisioningConfig, and New-AzureVM are the existing Azure cmdlets used to create a VM. The new kid on the block is Set-AzureVMDscExtension:

$vm = Set-AzureVMDSCExtension -VM $vm -ConfigurationArchive "IISInstall.ps1.zip" -ConfigurationName "IISInstall"

This cmdlet injects a DSC configuration into the VM configuration object ($vm in the example). When the VM boots, the Azure VM agent will install the PowerShell DSC Extension, which in turn will download the ZIP package that we published previously (IISInstall.ps1.zip), execute the “IISInstall” configuration that we included as part of IISInstall.ps1, and then invoke PowerShell DSC by calling the Start-DscConfiguration cmdlet.

Now, let’s go ahead and execute the sample script (NOTE: if you get an error telling you that the VM vhd is not available or you don’t have access to it, that likely means that the image referenced on line 1 of the script has been updated, and you will need to find the new image name. You can do so by enumerating the available images with Get-AzureVMImage and picking the image that you wish to use. See Azure SDK documentation for more details on this. In my case, I will use a 2012-R2 machine).

PS C:\>C:\examples\example-1.ps1
OperationDescription          OperationId                             OperationStatus                         
--------------------          -----------                             ---------------                         
New-AzureVM                   9cfb922d-db5b-cdd0-9c74-1a4e34b91e28    Succeeded                               
New-AzureVM                   17acca22-c6ff-cb5a-8116-a41ff9764d35    Succeeded                               

Our sample configuration was very simple: it just installed IIS. As a quick verification that it executed properly, we can logon to the VM and verify that IIS is installed by visiting the default web site (http://localhost):

[Screenshot: default IIS welcome page at http://localhost]

That is the PowerShell DSC Extension in a nutshell.

And now for the gory details…

Publish-AzureVMDscConfiguration

As the previous example illustrated, the first step in using the PowerShell Desired State Configuration Extension is publishing. In this context, publishing is the process of creating a ZIP package that the extension can understand and uploading that package to Azure blob storage. This is accomplished using the Publish-AzureVMDscConfiguration cmdlet.

Why use a ZIP package for publishing? Publish-AzureVMDscConfiguration will parse your configuration looking for Import-DSCResource statements and will include a copy of the corresponding modules along with the script that contains your configuration. For example, let’s take a look at the ZIP package produced by a configuration that creates an actual website instead of just installing IIS. This new example is the FourthCoffee website, which you may have already seen in other DSC blog posts or demos. The FourthCoffee demo has a dependency on the DSC resource xWebAdministration, which is included in the DSC Resource Kit Wave 5.

(NOTE: I saved this script as C:\examples\FourthCoffee.ps1)

configuration FourthCoffee
{
    Import-DscResource -Module xWebAdministration            

    # Install the IIS role
    WindowsFeature IIS 
    { 
        Ensure          = "Present" 
        Name            = "Web-Server" 
    } 
 
    # Install the ASP .NET 4.5 role
    WindowsFeature AspNet45 
    { 
        Ensure          = "Present" 
        Name            = "Web-Asp-Net45" 
    } 
 
    # Stop the default website
    xWebsite DefaultSite 
    { 
        Ensure          = "Present" 
        Name            = "Default Web Site" 
        State           = "Stopped" 
        PhysicalPath    = "C:\inetpub\wwwroot" 
        DependsOn       = "[WindowsFeature]IIS" 
    } 
 
    # Copy the website content
    File WebContent 
    { 
        Ensure          = "Present" 
        SourcePath      = "C:\Program Files\WindowsPowerShell\Modules\xWebAdministration\BakeryWebsite"
        DestinationPath = "C:\inetpub\FourthCoffee"
        Recurse         = $true 
        Type            = "Directory" 
        DependsOn       = "[WindowsFeature]AspNet45" 
    } 

    # Create a new website
    xWebsite BakeryWebSite 
    { 
        Ensure          = "Present" 
        Name            = "FourthCoffee"
        State           = "Started" 
        PhysicalPath    = "C:\inetpub\FourthCoffee" 
        DependsOn       = "[File]WebContent" 
    } 
}

To inspect the ZIP package created by the publish cmdlet I used the -ConfigurationArchivePath parameter, which saves the package to a local file instead of uploading it to Azure storage (NOTE: I typed the command below in two separate lines using the ` character; the >>> characters are PowerShell’s prompt):

PS C:\>Publish-AzureVMDscConfiguration C:\examples\FourthCoffee.ps1 `
>>> -ConfigurationArchivePath C:\examples\FourthCoffee.ps1.zip

When I look at the ZIP package using the File Explorer I can see that it contains my configuration script and a copy of the xWebAdministration module:

[Screenshot: contents of FourthCoffee.ps1.zip]

That copy comes from the xWebAdministration module that I already installed on my machine under “C:\Program Files\WindowsPowerShell\Modules”. The publish cmdlet requires that the imported modules are installed on your machine, and that they are located somewhere in $PSModulePath.

(NOTE: To simplify the example, I slightly altered the xWebAdministration module so it included the files needed for the website as part of the xWebAdministration module, in the “BakeryWebsite” directory)

[Screenshot: contents of the xWebAdministration module folder]

The two previous examples use a PowerShell script file (.ps1) to define the configuration that will be published. You can also define it in a PowerShell module file (.psm1). Alternatively, if the configuration you want to publish is part of a larger module, you can create the ZIP package manually and simply copy in the directories for the module that defines your configuration and any modules referenced by your configuration. For example, if the configuration of our example were defined within a PowerShell module named FourthCoffee, the ZIP package would include these two directories: the FourthCoffee module folder and the dependent DSC resource module folder for xWebAdministration.

[Screenshot: contents of FourthCoffee.zip]
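If you do build the package by hand, any tool that produces a standard ZIP file will do. The sketch below uses the .NET ZipFile class and assumes you have staged the FourthCoffee module folder and the xWebAdministration module folder under a working directory; the paths are illustrative only.

# Hedged sketch: build the ZIP package manually from a staging folder that contains
# the FourthCoffee module folder and the xWebAdministration module folder.
Add-Type -AssemblyName System.IO.Compression.FileSystem
[System.IO.Compression.ZipFile]::CreateFromDirectory(
    'C:\examples\FourthCoffeePackage',   # staging folder (assumed layout)
    'C:\examples\FourthCoffee.zip')      # resulting package, ready to publish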

Once you have a local ZIP package (either created manually, or using the publish cmdlet), you can upload it to Azure storage with the publish cmdlet:

PS C:\>Publish-AzureVMDscConfiguration C:\examples\FourthCoffee.ps1.zip

ContainerName and StorageContext parameters

By default Publish-AzureVMDscConfiguration will upload the ZIP package to Azure blob storage using “windows-powershell-dsc” as the container and picking up the default storage account from the settings of your Azure subscription.

You can change the container using the –ContainerName parameter:

PS C:\>Publish-AzureVMDscConfiguration C:\examples\FourthCoffee.ps1.zip `
>>> -ContainerName mycontainer

And you can change the storage account (and authentication settings) using the –StorageContext parameter (you can use the New-AzureStorageContext cmdlet to create the storage context).
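For example, a minimal sketch of publishing to a non-default storage account might look like the following; the storage account name and container are placeholders.

# Hedged sketch: publish the package to an explicit storage account and container.
$key     = (Get-AzureStorageKey -StorageAccountName 'mystorageaccount').Primary
$context = New-AzureStorageContext -StorageAccountName 'mystorageaccount' -StorageAccountKey $key
Publish-AzureVMDscConfiguration C:\examples\FourthCoffee.ps1.zip `
    -ContainerName 'mycontainer' `
    -StorageContext $context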

Set-AzureVMDSCExtension

Once a configuration has been published, you can apply it to any Azure virtual machine using the Set-AzureVMDSCExtension cmdlet. This cmdlet injects the settings needed by the PowerShell DSC extension into a VM configuration object, which can then be applied to a new VM, as in our first example, or to an existing VM. Let’s use this cmdlet again to update the VM we created previously (NOTE: the first example used the configuration defined in C:\examples\IISInstall.ps1; now we will update this machine with the configuration defined in C:\examples\FourthCoffee.ps1; the script that we will use was saved as C:\examples\example-2.ps1)

$vm = Get-AzureVM -Name "example-1" -ServiceName "example-1-svc"

$vm = Set-AzureVMDSCExtension -VM $vm -ConfigurationArchive "FourthCoffee.ps1.zip" -ConfigurationName "FourthCoffee" 

$vm | Update-AzureVM
PS C:\>C:\examples\example-2.ps1
OperationDescription              OperationId                             OperationStatus                         
--------------------              -----------                             ---------------                         
Update-AzureVM                    afa38e1a-5717-cac6-a6e7-6f72d0af51d2    Succeeded                                                                                                                       

In our first example we were working with a new VM, so the Azure VM agent first installed the PowerShell DSC Extension and then it invoked it using the information provided by the Set-AzureVMDSCExtension cmdlet. In this second example we are working on an existing VM on which the extension is already installed so the Azure VM agent will skip the installation part and just invoke the PowerShell DSC Extension with the new information provided by the set cmdlet.

The extension will then

  • download the ZIP package specified by the –ConfigurationArchive parameter and expand it to a temporary directory
  • remove the .zip extension from the value given by –ConfigurationArchive and look for a PowerShell script or module with that name and execute it (in our second example, it will look for FourthCoffee.ps1)
  • look for and execute the configuration named by the -ConfigurationName parameter (in this case “FourthCoffee”)
  • invoke Start-DscConfiguration with the output produced by that configuration
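In other words, once the package is unpacked on the VM the extension ends up doing roughly what you would do by hand. A rough local equivalent, with illustrative paths and assuming the xWebAdministration module is installed locally, would be:

# Hedged sketch: approximately what the extension does after unpacking the package on the VM.
. C:\examples\FourthCoffee.ps1                   # load the script that defines the configuration
FourthCoffee -OutputPath C:\examples\Mof         # compile the named configuration to a MOF document
Start-DscConfiguration -Path C:\examples\Mof -Wait -Verbose   # hand the MOF to the Local Configuration Manager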

To verify that our second configuration was applied successfully we can again check the default website:

[Screenshot: FourthCoffee website]

Configuration Arguments

DSC configurations are very similar to PowerShell advanced functions and can be parameterized for greater flexibility. The PowerShell DSC extension provides support for configuration arguments via the –ConfigurationArgument parameter of Set-AzureVMDSCExtension.

As a very simple example, let’s change our last script in such a way that the name of the website is a parameter to the FourthCoffee configuration. The updated configuration has been saved as C:\examples\FourthCoffeeWithArguments.ps1; notice that we have added the $WebSiteName parameter (lines 4-7), which is used as the Name property of the BakeryWebSite resource (line 51).

configuration FourthCoffee
{
    [CmdletBinding()]
    param(
        [Parameter(Mandatory=$true, Position=0)]
        [string] 
        $WebSiteName
    )

    Import-DscResource -Module xWebAdministration            

    # Install the IIS role
    WindowsFeature IIS 
    { 
        Ensure          = "Present" 
        Name            = "Web-Server" 
    } 
 
    # Install the ASP .NET 4.5 role
    WindowsFeature AspNet45 
    { 
        Ensure          = "Present" 
        Name            = "Web-Asp-Net45" 
    } 
 
    # Stop the default website
    xWebsite DefaultSite 
    { 
        Ensure          = "Present" 
        Name            = "Default Web Site" 
        State           = "Stopped" 
        PhysicalPath    = "C:\inetpub\wwwroot" 
        DependsOn       = "[WindowsFeature]IIS" 
    } 
 
     # Copy the website content
    File WebContent 
    { 
        Ensure          = "Present" 
        SourcePath      = "C:\Program Files\WindowsPowerShell\Modules\xWebAdministration\BakeryWebsite"
        DestinationPath = "C:\inetpub\FourthCoffee"
        Recurse         = $true 
        Type            = "Directory" 
        DependsOn       = "[WindowsFeature]AspNet45" 
    } 

    # Create a new website
    xWebsite BakeryWebSite 
    { 
        Ensure          = "Present" 
        Name            = $WebSiteName
        State           = "Started" 
        PhysicalPath    = "C:\inetpub\FourthCoffee" 
        DependsOn       = "[File]WebContent" 
    } 
}

Our third example publishes the new configuration script and updates the VM that we created previously (I saved this script as C:\examples\example-3.ps1):

Publish-AzureVMDscConfiguration C:\examples\FourthCoffeeWithArguments.ps1

$vm = Get-AzureVM -Name "example-1" -ServiceName "example-1-svc"

$vm = Set-AzureVMDscExtension -VM $vm `
        -ConfigurationArchive "FourthCoffeeWithArguments.ps1.zip" `
        -ConfigurationName "FourthCoffee" `
        -ConfigurationArgument @{ WebSiteName = "FourthCoffee" }

$vm | Update-AzureVM
PS C:\>C:\examples\example-3.ps1
OperationDescription              OperationId                             OperationStatus                         
--------------------              -----------                             ---------------                         
Update-AzureVM                    2b6f18e7-42f2-c216-8199-edfa06b52e33    Succeeded                               

The value of the –ConfigurationArgument parameter on line 8 of C:\examples\example-3.ps1 is a hashtable that specifies the arguments to the FourthCoffee configuration, i.e., a string specifying the name of the website (this corresponds to the $WebSiteName parameter on line 7 of C:\examples\FourthCoffeeWithArguments.ps1).

Configuration Data

Configuration data can be used to separate structural configuration from environmental configuration (see this blog post for an introduction to those concepts). The PowerShell DSC extension provides support for configuration data via the –ConfigurationDataPath parameter of Set-AzureVMDSCExtension.

Let’s create another variation of the FourthCoffee configuration: IIS and ASP.NET will always be installed by the configuration, but the FourthCoffee website will be installed only if the role of the VM is “WebServer”. The updated configuration has been saved as C:\examples\FourthCoffeeWithData.ps1; the check for the VM’s role is on line 20:

configuration FourthCoffee
{
    Import-DscResource -Module xWebAdministration            

    # Install the IIS role
    WindowsFeature IIS 
    { 
        Ensure          = "Present" 
        Name            = "Web-Server" 
    } 
 
    # Install the ASP .NET 4.5 role
    WindowsFeature AspNet45 
    { 
        Ensure          = "Present" 
        Name            = "Web-Asp-Net45" 
    } 
 
   # Setup the website only if the role is "WebServer"
    Node $AllNodes.Where{$_.Role -eq "WebServer"}.NodeName
    {
        # Stop the default website
        xWebsite DefaultSite 
        { 
            Ensure          = "Present" 
            Name            = "Default Web Site" 
            State           = "Stopped" 
            PhysicalPath    = "C:\inetpub\wwwroot" 
            DependsOn       = "[WindowsFeature]IIS" 
        } 
 
        # Copy the website content
        File WebContent 
        { 
            Ensure          = "Present" 
            SourcePath      = "C:\Program Files\WindowsPowerShell\Modules\xWebAdministration\BakeryWebsite"
            DestinationPath = "C:\inetpub\FourthCoffee"
            Recurse         = $true 
            Type            = "Directory" 
            DependsOn       = "[WindowsFeature]AspNet45" 
        } 

        # Create a new website
        xWebsite BakeryWebSite 
        { 
            Ensure          = "Present" 
            Name            = "FourthCoffee"   # this variant has no $WebSiteName parameter, so the name is fixed
            State           = "Started" 
            PhysicalPath    = "C:\inetpub\FourthCoffee" 
            DependsOn       = "[File]WebContent" 
        } 
    }
}

The configuration data has been saved as C:\examples\FourthCoffeeData.psd1:

@{
    AllNodes = @(
        @{
            NodeName = "localhost";
            Role     = "WebServer"
        }
    );
}

And the script that publishes and applies this new configuration is C:\examples\example-4.ps1:

Publish-AzureVMDscConfiguration C:\examples\FourthCoffeeWithData.ps1

$vm = Get-AzureVM -Name "example-1" -ServiceName "example-1-svc"

$vm = Set-AzureVMDscExtension -VM $vm `
        -ConfigurationArchive "FourthCoffeeWithData.ps1.zip" `
        -ConfigurationName "FourthCoffee" `
        -ConfigurationDataPath C:\examples\FourthCoffeeData.psd1

$vm | Update-AzureVM
PS C:\>C:\examples\example-4.ps1
OperationDescription              OperationId                             OperationStatus                         
--------------------              -----------                             ---------------                         
Update-AzureVM                    fa6a525b-c411-c213-8f57-69dc2a09df1c    Succeeded                               

The value of  the –ConfigurationDataPath parameter on line 8 of C:\examples\example-4.ps1 is the path to a local .psd1 file containing the configuration data. A copy of this file will be uploaded to Azure blob storage and then downloaded to the VM by the PowerShell DSC Extension and passed along to the FourthCoffee configuration. This file is uploaded to the default container (“windows-powershell-dsc”) and storage account; similarly to the Publish-AzureVmDscConfiguration cmdlet, the Set-AzureVMDscExtension cmdlet includes parameters –ContainerName and –StorageContext that can be used to override those defaults.
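As a hedged illustration, overriding both defaults on the set cmdlet could look like this; the storage account name, key, and container are placeholders.

# Hedged sketch: apply a configuration whose package and data live in a non-default container and account.
$vm      = Get-AzureVM -Name "example-1" -ServiceName "example-1-svc"
$key     = (Get-AzureStorageKey -StorageAccountName 'mystorageaccount').Primary
$context = New-AzureStorageContext -StorageAccountName 'mystorageaccount' -StorageAccountKey $key
$vm = Set-AzureVMDscExtension -VM $vm `
        -ConfigurationArchive "FourthCoffeeWithData.ps1.zip" `
        -ConfigurationName "FourthCoffee" `
        -ConfigurationDataPath C:\examples\FourthCoffeeData.psd1 `
        -ContainerName 'mycontainer' `
        -StorageContext $context
$vm | Update-AzureVM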

Acquiring remote access to our VM

Since we know the name of our VM, we can simply use the Azure SDK cmdlets to get its RDP endpoint and kick off a remote desktop session to it. See the Azure PowerShell SDK documentation for more information.

$vm = Get-AzureVM -ServiceName "example-1-svc" -Name "example-1"
$rdp = Get-AzureEndpoint -Name "RDP" -VM $vm 
$hostdns = (New-Object "System.Uri" $vm.DNSName).Authority 
$port = $rdp.Port 
Start-Process "mstsc" -ArgumentList "/V:$hostdns`:$port /w:1024 /h:768" 

Reading logs

Let’s say that I wish to check in detail that everything went well on my VM. How would I do that? I can log in to the VM and check the local logs. The locations of interest are the following two folders on the VM’s hard drive:

C:\Packages\Plugins\Microsoft.Powershell.DSC\1.0.0.0

C:\WindowsAzure\Logs\Plugins\Microsoft.Powershell.DSC\1.0.0.0

You may find that your VM has a newer version of the PowerShell DSC extension, in which case the version number at the end of the path might be slightly different.

“C:\Packages\Plugins\Microsoft.Powershell.DSC\1.0.0.0” contains the actual extension files. You generally don’t need to worry about this location. However, if an extension failed to install for some reason and this folder isn’t present, that is a critical issue.

Now let’s start digging into the logs: C:\WindowsAzure\Logs. This folder contains general Azure logs that were captured for us. If for some reason the DSC extension failed to deploy, or there was some general infrastructure error, it would appear here in the “WaAppAgent.*.log” files.

The lines of interest in these files are as follows. Note that your log may look slightly different.

  • [00000003] [07/28/2014 23:57:33.02] [INFO]  Beginning installation of plugin Microsoft.Powershell.DSC.
  • [00000003] [07/28/2014 23:59:47.25] [INFO]  Successfully installed plugin Microsoft.Powershell.DSC.
  • [00000009] [07/29/2014 00:02:51.02] [INFO]  Successfully enabled plugin Microsoft.Powershell.DSC.
  • [00000009] [07/29/2014 00:02:51.03] [INFO]  Setting the install state of the handler Microsoft.Powershell.DSC_1.0.0.0 to Enabled
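A quick, hedged way to pull those plugin-related entries out of the agent logs (assuming the default log location) is to search them from PowerShell:

# Hedged sketch: surface the DSC plugin entries from the guest agent logs.
Select-String -Path 'C:\WindowsAzure\Logs\WaAppAgent.*.log' -Pattern 'Microsoft.Powershell.DSC' |
    Select-Object -Last 10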

Now we know that the DSC extension was successfully installed and enabled. We continue our analysis by going to the DSC extension logs: “C:\WindowsAzure\Logs\Plugins\Microsoft.Powershell.DSC\1.0.0.0” contains various logs from the DSC extension itself.

PS C:\WindowsAzure\Logs\Plugins\Microsoft.Powershell.DSC\1.0.0.0> dir

    Directory: C:\WindowsAzure\Logs\Plugins\Microsoft.Powershell.DSC\1.0.0.0

Mode                LastWriteTime     Length Name
----                -------------     ------ ----
-a---         7/29/2014  12:28 AM       1613 CommandExecution.log
-a---         7/28/2014  11:59 PM       1429 CommandExecution_0.log
-a---         7/29/2014  12:01 AM       2113 CommandExecution_1.log
-a---         7/29/2014  12:02 AM       1613 CommandExecution_2.log
-a---         7/29/2014  12:28 AM      13744 DSCBOOT_script_20140729-002759.log
-a---         7/29/2014  12:03 AM     473528 DSCLOG_metaconf__20140729-000322.json
-a---         7/29/2014  12:28 AM     713196 DSCLOG_metaconf__20140729-002823.json
-a---         7/29/2014  12:03 AM     608050 DSCLOG__20140729-000311.json
-a---         7/29/2014  12:28 AM     713196 DSCLOG__20140729-002826.json

As you can see, there are a number of logs present.

“CommandExecution*.log” are logs written by Azure infrastructure as it enabled the DSC extension.

“DSCBOOT_script*.log” is a high-level log from the script that applied our configuration. It is fairly concise. If everything went well, towards the end of the log you should be able to see a line such as this:

VERBOSE: [EXAMPLE-1] Configuration application complete.

If we wish to dig deeper into the DSC logs, the rest of the logs tell a much more detailed story. The “DSCLOG_*.json” logs are DSC ETL logs converted to JSON format. If the configuration completed successfully, you should be able to see an event like this one:

    {
        "EventType":  4,
        "TimeCreated":  "\/Date(1406593703182)\/",
        "Message":  "[EXAMPLE-1]: LCM:  [ End    Set      ]    in  14.1745 seconds.",
“DSCLOG_metaconf*.json” files are the logs written if your PowerShell DSC configuration had a meta-config that modified PowerShell DSC Local Configuration Manager properties, such as this:

      LocalConfigurationManager
      {
          ConfigurationID = "646e48cb-3082-4a12-9fd9-f71b9a562d4e"
          RefreshFrequencyMins = 23
      }

You would see a similar event if the meta-configuration was applied successfully:

    {
        "EventType":  4,
        "TimeCreated":  "\/Date(1406593703182)\/",
        "Message":  "[EXAMPLE-1]: LCM:  [ End    Set      ]    in  14.1745 seconds.",
        ...
    }

 

More info

Here are some additional resources about PowerShell DSC, Azure VM agent and extensions:

Desired State Configuration Blog Series – Part 1, Information about DSC (by Michael Green, Senior Program Manager, Microsoft)

VM Agent and Extensions - part 1 (by Kundana Palagiri, Senior Program Manager, Windows Azure)

VM Agent and Extensions - Part 2 (by Kundana Palagiri, Senior Program Manager, Windows Azure)

Automating VM Customization tasks using Custom Script Extension (by Kundana Palagiri, Senior Program Manager, Windows Azure) 

What’s the buzz on Advisor?


Have you heard about the System Center Advisor Preview Service at TechEd, or read some of the customer blogs? Need a single solution to gather, correlate, and search all your log data, forecast your data center capacity needs, or check whether your servers have the latest software updates? Advisor may be your answer. This service is FREE for the preview period. Here’s a quick two-minute video to bring you up to speed on what the buzz is all about.


We would love to have you join the rapidly growing Advisor community. If you have any questions or need help onboarding, please feel free to email me (Satya) at scdata@microsoft.com.

Prerequisite:
- Operations Manager 2012 SP1 with Update Rollup 6, or Operations Manager 2012 R2 with Update Rollup 2. Details on how to onboard can be found here.

Note: No additional hardware is required to leverage this service, and there is no impact to your on-premises OpsMgr environment.

Looking forward to having you join this new family.


 


Troubleshooting WMI Series coming


Hello AskPerf! My name is Jeffrey Worline, and I wanted to let you know that I will be writing several troubleshooting WMI blog posts over the next few days. These will not be your typical blog posts; they will follow more of a KB/TechNet format.

There have been a lot of internal discussions here at Microsoft among the product group, developers, and myself concerning how WMI issues are handled out in the field. Whenever there is an issue with WMI itself, or a suspected issue with WMI, the first words that always seem to come out are that “WMI is corrupted.” Oh no, say it isn’t so. What do we do? Grab the “Big Hammer” and rebuild the repository! Excuse my humor while we have a little fun, as the upcoming blogs are going to be more straightforward and dry.

While rebuilding the repository may seem like a good idea and a quick fix at times, what most do not realize is that if you suspect WMI or repository corruption, rebuilding the repository is the last thing you should do without verifying this is truly the case. Deleting and rebuilding the repository can cause damage to Windows and/or to your installed applications. That last sentence is worth repeating: “Deleting and rebuilding the repository can cause damage to Windows and/or to your installed applications.” When it becomes necessary to rebuild the repository, and this has been verified, there are right and wrong ways to go about it.

Other steps should be taken first to eliminate other possibilities or to confirm we have repository corruption before taking such actions as rebuilding the repository. There are also other proactive steps that can be taken as alternatives to rebuilding the repository if such an occasion arises.

So, the following blog posts will address various scenarios surrounding WMI, how to troubleshoot them, and the data we should collect to help resolve WMI related issues. These posts will equip you with the information you need to address WMI issues in the correct manner, and to help prevent you from causing more damage by just using the proverbial “Big Hammer” approach.

With that, here is a list of the Posts/Topics you will see in the coming days: 

  • WMI: Common Symptoms and Errors
  • WMI: Repository Corruption, or Not?
  • WMI: Missing or Failing WMI Providers or Invalid WMI Class
  • WMI: High Memory Usage by WMI Service or Wmiprvse.exe
  • WMI: How to troubleshoot High CPU Usage by WMI Components
  • WMI: How to Troubleshoot WMI High Handle Count

NOTE: The above bullet points will become active links once each post becomes available.

-Jeff

Fixing the Touch Screen in Windows 8.1 on my old HP TouchSmart with NextWindow Drivers


We've got an older HP TouchSmart all-in-one computer that we use as the "Kitchen PC." It's basically a browsing, email, YouTube, and recipes machine. It's a lovely machine, really. I've actually seen them at Goodwill, in fact, for cheap. If you can pick one up inexpensively, I recommend it.

Mine was starting to get sick, so I opened it up (a challenge, but OK if you count all the screws) and replaced the hard drive. It came with a 500 GB 5400 RPM full-size SATA drive as I recall, but that was on its last legs. I happened to have a first-gen 64 GB Intel laptop SSD around, so I used some 3M Command double-sided tape, basically taped this tiny drive to the inside of the thing, and reinstalled Windows. This time, however, instead of the Windows Vista that it came with, I put on Windows 8.1.

You'd think I'd be asking for trouble. In fact, it's amazing. Literally everything worked, first try, with ZERO third-party drivers. Bluetooth, wireless, graphics, everything worked, and worked immediately. Nothing was banged out in Device Manager. Even the touch screen worked, but only with one point of touch. That meant no pinch-to-zoom in browsers or maps. Cool, but I wanted to see if I could fix it.

These HP TouchSmarts had touch screens made by a New Zealand company called NextWindow, which recently went out of business. Their website includes a few drivers, but not the one I needed.

I've mirrored them here because I don't trust that their website will be around long.

Here's the actual driver I needed for the touch screen. It doesn't appear to be available anywhere else, so I'm mirroring it here, as-is. It's the HID (Human Interface Device) driver for the NextWindow 1900 touch screen, version 1.4 from May 24th, 2012. It works with NextWindow 2150 and 2700 touchscreens as well, and it works under Windows XP, Vista, Windows 7, and now Windows 8 and 8.1!

This gave my HP TouchSmart completely new life with proper multitouch. It's now paved with a clean Windows 8.1 installation, just one third-party driver, and NO HP crapware.

Hope this helps you, random internet visitor.

© 2014 Scott Hanselman. All rights reserved.
     

Desired State Configuration (DSC) Nodes Deployment and Conformance Reporting Series (Part 1): Series agenda, Understanding the DSC conformance endpoint


In any configuration management process, once the configuration is applied to the target systems, it is necessary to monitor those systems for configuration drift. This is an important step in Desired State Configuration (DSC), and it is addressed differently depending on the chosen deployment mode for DSC - push mode or pull mode.

In the push method for configuration delivery, the configuration MOF file is copied to the target machine, manually or via another solution, and the Start-DscConfiguration cmdlet provides an immediate indication of success or failure of the configuration change.
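For reference, a push-mode run can be as simple as the following sketch (MyConfig is a hypothetical configuration already defined in the session, and SERVER01 a hypothetical target node):

# Compile the configuration to .\MyConfig\<NodeName>.mof, then push it and wait for the result
MyConfig -OutputPath .\MyConfig
Start-DscConfiguration -Path .\MyConfig -ComputerName SERVER01 -Wait -Verbose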

Now, in large-scale deployments, it is likely you will want to look at the pull method of configuration delivery. In this mode, the target systems download their configuration from a pull service, based on their ConfigurationID. The pull service can either be an SMB file share or a REST-based pull service endpoint. When using the REST-based pull service endpoint, we can also deploy a conformance endpoint that provides the last configuration run status from each target node, to facilitate monitoring of the configuration enact process.

This blog series will focus on some examples of how to optimize pull mode configuration deployment, and how to report on the health of DSC nodes in such an environment.


This whole series has been a joint effort with guest blogger Ravikanth Chaganti. It is our first series of posts with Ravi, but he’s been a PowerShell MVP since 2010 and is a regular contributor to PowerShellMagazine.com. He also publishes on his own blog. More specifically in this series, Ravi is behind the updated configuration to simultaneously deploy the pull service and conformance endpoints, and the guidance on how to query and report with the conformance endpoint. It’s been a pleasure to work on this content with Ravi, and we look forward to any potential collaboration in the future!


Blog post series agenda

There are 4 blog posts in this series:

  1. Series agenda, Understanding the DSC conformance endpoint (this post)
  2. How to deploy a pull service endpoint and automate the configuration of the DSC nodes
  3. How to leverage the conformance endpoint deployed alongside the pull service endpoint, to report accurately on configuration deployment and application: Are my DSC nodes downloading and applying the configurations without encountering any errors?
  4. Some options to determine if the nodes are conformant with the given configuration: Are my DSC nodes conformant with the configuration they are supposed to enforce?

Note : This last blog post will be published at a later date

Understanding the conformance endpoint

You may wonder why there are separate posts to report on the status of configuration being downloaded/applied (blog post #3), and to report on enforcement (upcoming blog post #4). This relates to something that is critical to understand before implementing the conformance endpoint: as of today, the conformance endpoint retrieves status about nodes as they download and apply configurations – or fail to do so. While this first level of information is important (configuration application should work if this first process is “green”), it does not provide status about whether a node is actually compliant with the configuration it is supposed to enforce. Blog post #4 will be published later in the series, and will look at new capabilities coming soon in Windows Management Framework (WMF) and DSC to surface the actual configuration health and drifts, as well as sample ways to work with them, with both the conformance server and other systems, so stay tuned!

Desired State Configuration (DSC) Nodes Deployment and Conformance Reporting Series (Part 2): Deploying a pull service endpoint and automating the configuration of the DSC nodes


In this post, we will cover how a pull service endpoint can be installed, and how nodes can be configured to point to this server and retrieve their DSC configurations.

There are already a few blog posts regarding the installation of the pull service endpoint, including this post that shows a snippet on how to deploy pull service and conformance endpoints via a DSC configuration…So, you might wonder why we’re having a new one here!

Well, today’s post…

  • includes an updated working snippet that combines both deployments (pull service endpoint and conformance endpoint), also updated to include the needed Windows Authentication dependencies that have been discussed in the blog comments
  • also covers one example of how to overcome one of the challenges when configuring nodes for pull service endpoint, which is managing the GUIDs for the nodes.

So, here are the steps we are going to go through in this post:

  1. Check prerequisites to install the pull service endpoint
  2. Deploy/configure the pull service endpoint
  3. Provisioning configurations for the nodes
  4. Configuring the nodes to point to the pull service endpoint
  5. Checking nodes are applying the configuration
    • We’ll do this last step manually and on a single node in this post, and then move to the capabilities offered by the conformance endpoint to do this at scale in a larger environment, in the 3rd blog post

Checking prerequisites to install the pull service endpoint

Windows Management Framework (WMF) 4.0 is a prerequisite to leverage DSC so, to make things easier, we will be deploying our pull service and conformance endpoints on a Windows Server 2012 R2 machine, which includes WMF 4.0 out of the box.

You will also need the DSC Resource Kit from this link.

The DSC Resource Kit comes as a zipped package, and you just have to copy its content into the $env:ProgramFiles\WindowsPowerShell\Modules folder on the future pull/conformance server.
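For example, assuming the Resource Kit has already been unzipped to C:\Temp\DSC Resource Kit (a hypothetical path), a sketch of that copy looks like this:

# Copy the Resource Kit modules into place and confirm the resource we need is discoverable
Copy-Item -Path 'C:\Temp\DSC Resource Kit\*' -Destination "$env:ProgramFiles\WindowsPowerShell\Modules" -Recurse -Force
Get-DscResource -Name xDscWebService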


Configuring the pull service endpoint

Here is the script you would need to run on the server, from ISE for example. In our situation, this was run on a server called DSCSERVER, as seen near the end of the script where the configuration is generated for "DSCSERVER".

configuration Sample_xDscWebService 
{ 
    param 
    ( 
        [string[]]$NodeName = 'localhost', 
 
        [ValidateNotNullOrEmpty()] 
        [string] $certificateThumbPrint = "AllowUnencryptedTraffic"
    ) 
 
    Import-DSCResource -ModuleName xPSDesiredStateConfiguration 
 
    Node $NodeName 
    { 
        WindowsFeature DSCServiceFeature 
        { 
            Ensure = "Present" 
            Name   = "DSC-Service"            
        } 
 
        WindowsFeature WinAuth 
        { 
            Ensure = "Present" 
            Name   = "web-Windows-Auth"            
        } 
 
        xDscWebService PSDSCPullServer 
        { 
            Ensure                  = "Present" 
            EndpointName            = "PullSvc" 
            Port                    = 8080 
            PhysicalPath            = "$env:SystemDrive\inetpub\wwwroot\PSDSCPullServer" 
            CertificateThumbPrint   = $certificateThumbPrint         
            ModulePath              = "$env:PROGRAMFILES\WindowsPowerShell\DscService\Modules" 
            ConfigurationPath       = "$env:PROGRAMFILES\WindowsPowerShell\DscService\Configuration"            
            State                   = "Started" 
            DependsOn               = "[WindowsFeature]DSCServiceFeature"                        
        } 
 
        xDscWebService PSDSCComplianceServer 
        {  
            Ensure                  = "Present" 
            EndpointName            = "DscConformance" 
            Port                    = 9090 
            PhysicalPath            = "$env:SystemDrive\inetpub\wwwroot\PSDSCComplianceServer" 
            CertificateThumbPrint   = "AllowUnencryptedTraffic" 
            State                   = "Started" 
            IsComplianceServer      = $true 
            DependsOn               = @("[WindowsFeature]DSCServiceFeature","[WindowsFeature]WinAuth","[xDSCWebService]PSDSCPullServer") 
        } 
    } 
} 

Sample_xDscWebService -NodeName "DSCSERVER"
Start-DscConfiguration -Path .\Sample_xDscWebService -Wait -Verbose




A few notes regarding this script:

  • This configuration simultaneously deploys the conformance endpoint that we will use later in the blog post series, to see how the nodes are doing when downloading and applying their assigned DSC configurations.
  • The conformance endpoint uses Windows Authentication and therefore the WinAuth Windows feature needs to be installed. In our configuration script, we used the DependsOn property to take care of the dependencies for the conformance endpoint
  • Note that the xDSCWebService still refers to the conformance endpoint as “compliance endpoint” (and actually enforces it in the URL, even if you were to rename PSDSCComplianceServer to another value). DSC components are being transitioned to the updated “conformance endpoint” name, which we prefer to use now and throughout this blog post series.
  • Finally, the last few lines are just here to apply the configuration.

Here is the output of the script running, with the future URIs highlighted, for the two web services:

[screenshot]

 

We can also see that the content for the two websites has been created in the WWWROOT folder on the server:

[screenshot]

Finally, running Get-DscConfiguration shows that the configuration has been applied, if we still had any doubts.

[screenshot]
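If you prefer a quick command-line check over the browser, the following sketch (reusing the endpoint names and ports from the configuration above) should return an HTTP 200 for both endpoints; the conformance endpoint uses Windows Authentication, hence -UseDefaultCredentials:

Invoke-WebRequest -Uri 'http://localhost:8080/PullSvc/PSDSCPullServer.svc' -UseBasicParsing
Invoke-WebRequest -Uri 'http://localhost:9090/DscConformance/PSDSCComplianceServer.svc' -UseDefaultCredentials -UseBasicParsing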


Provisioning configurations for the DSC nodes

On the DSC server, here is a script that will do the following:

- The script receives a list of nodes to configure, in the $Computers array – in this sample it is hard-coded, but you could very well query Active Directory, a CMDB, a custom database, etc.

- For each node, it generates a GUID that makes the configuration unique to that node, and compiles a MOF file for the node.

  • The configuration applied here is called “TestConfig” and is defined at the top of the script. It is just a very basic sample configuration that ensures the content of a shared folder is copied locally to the temp folder on the node
  • Also note how the node/GUID association is added to a CSV file. This will be important when we configure the node in the next step, and is there to ensure the node has a location to query for its GUID when configuring its LCM, without any manual intervention. The CSV approach makes it easy to show the content as a blog post sample; needless to say, leveraging a database or a more reliable/secure approach would be preferred, as discussed in the community.

- Finally, a checksum is generated for each MOF file, and all generated files are copied to the pull service configuration store, so that they are available to the future nodes

Configuration TestConfig {

    Param(
        [Parameter(Mandatory=$True)]
        [String[]]$NodeGUID
    )
   
    Node $NodeGUID {

        File ScriptPresence {
            Ensure = "Present"
            Type = "Directory"
            Recurse = $True
            SourcePath = "\\storagebox\SourceFiles\SCCM Toolkit"
            DestinationPath = "C:\Temp\DSCTest"
        }

    }
}

$Computers = @("DSCNODE1", "DSCNODE2")

write-host "Generating GUIDs and creating MOF files..."
foreach ($Node in $Computers)
    {
    $NewGUID = [guid]::NewGuid()
    $NewLine = "{0},{1}" -f $Node,$NewGUID
    TestConfig -NodeGUID $NewGUID
    $NewLine | add-content -path "$env:SystemDrive\Program Files\WindowsPowershell\DscService\Configuration\dscnodes.csv"
    }

write-host "Creating checksums..."
New-DSCCheckSum -ConfigurationPath .\TestConfig -OutPath .\TestConfig -Verbose -Force

write-host "Copying configurations to pull service configuration store..."
$SourceFiles = (Get-Location -PSProvider FileSystem).Path + "\TestConfig\*.mof*"
$TargetFiles = "$env:SystemDrive\Program Files\WindowsPowershell\DscService\Configuration"
Move-Item $SourceFiles $TargetFiles -Force
Remove-Item ((Get-Location -PSProvider FileSystem).Path + "\TestConfig\")



When the script runs, it creates the MOF files and shows the checksums (because of the –Verbose switch):

[screenshot]

The files are present in the DSC pull service configuration store, including our CSV file:

[screenshot]

And here is the content of the CSV file:

[screenshot]


 

Applying configuration on the DSC nodes

The goal here will be to be as dynamic as possible, so that a single generic PS1 file can be sent to the DSC nodes and “discover” the configuration to apply. The script could be sent via the method of your choice, including software distribution tools like Configuration Manager, as part of the System Center suite.

Here is the script we will be using:

Configuration SimpleMetaConfigurationForPull 
{ 

    Param(
        [Parameter(Mandatory=$True)]
        [String]$NodeGUID
    )

     LocalConfigurationManager 
     { 
       ConfigurationID = $NodeGUID;
       RefreshMode = "PULL";
       DownloadManagerName = "WebDownloadManager";
       RebootNodeIfNeeded = $true;
       RefreshFrequencyMins = 15;
       ConfigurationModeFrequencyMins = 30; 
       ConfigurationMode = "ApplyAndAutoCorrect";
       DownloadManagerCustomData = @{ServerUrl = "http://DSCSERVER.contoso.com:8080/PullSvc/PSDSCPullServer.svc"; AllowUnsecureConnection = "TRUE"}
     } 
} 

$data = import-csv "\\dscserver\c$\Program Files\WindowsPowershell\DscService\Configuration\dscnodes.csv" -header("NodeName","NodeGUID")

SimpleMetaConfigurationForPull -NodeGUID ($data | where-object {$_."NodeName" -eq $env:COMPUTERNAME}).NodeGUID -Output "." 
$FilePath = (Get-Location -PSProvider FileSystem).Path + "\SimpleMetaConfigurationForPull"
Set-DscLocalConfigurationManager -ComputerName "localhost" -Path $FilePath -Verbose


Some important parts of the script are:

- The configuration block: this sets the LCM to pull mode and specifies which pull service endpoint to use. It also specifies whether we should just monitor DSC configurations or try to auto-correct them; in this sample, we apply and auto-correct. The refresh and consistency-check frequencies are also specified here.

- In the configuration, note that we need to specify the GUID for the ConfigurationID parameter. This is why we created that CSV file: the script imports it and matches on the local computer name to “discover” which GUID to use.

  • Note: The CSV file is directly accessed via the administrative share, to keep things simple in this sample. In reality, it would likely be on a secured share elsewhere. Or, as we discussed earlier, you might be using a custom database or a CMDB to store this data instead of this CSV sample.

- The last two lines compile the LCM meta-configuration and apply it with Set-DscLocalConfigurationManager

 

This is the output of the script running on a node:

[screenshot]

When we display the LCM configuration, we can see that the pull service endpoint is now configured in the LCM:

[screenshot]


Checking that configurations are being applied to nodes

After the interval (or, for testing purposes, you can force things with a reboot or via scripting), we can see the configuration being pulled in the event log – this is for the node called DSCNODE1, and you can see how the GUID matches what we generated previously.

[screenshot]

Note how the node did not need to pull specific modules in this case, but the pull service endpoint can provide modules when a node needs them to apply a specific configuration.

[screenshot]
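Incidentally, if you do not want to wait for the refresh interval while testing, one scripted way to trigger a pull is to start the built-in DSC consistency scheduled task. This is a sketch; the task path and name below are the defaults we have seen on WMF 4.0 / Windows Server 2012 R2 systems:

# Run the LCM consistency check (which performs the pull) on demand
Start-ScheduledTask -TaskPath '\Microsoft\Windows\Desired State Configuration\' -TaskName 'Consistency'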

Finally, we can confirm that the folder was created, with content copied by DSC. And if we were to delete this folder, it would be copied again by DSC.

[screenshot]

Note : You can also leverage the xDscDiagnostics module for some of these, as needed.
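For example, with the xDscDiagnostics module available on the node, the following sketch summarizes recent DSC runs and drills into one of them:

Import-Module xDscDiagnostics
Get-xDscOperation -Newest 5          # one line per recent DSC run, with success/failure
Trace-xDscOperation -SequenceId 1    # detailed events for a specific run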

We’ve now checked that everything is working on a single node. In the next post in this series, we will look at how the conformance endpoint can be used to look at the status of configuration downloads/applications across nodes.


Blog post series agenda

  1. Series agenda, Understanding the DSC conformance endpoint
  2. How to deploy a pull service endpoint and automate the configuration of the DSC nodes (this post)
  3. How to leverage the conformance endpoint deployed alongside the pull service endpoint, to report accurately on configuration deployment and application: Are my DSC nodes downloading and applying the configurations without encountering any errors?
  4. Some options to determine if the nodes are conformant with the given configuration: Are my DSC nodes conformant with the configuration they are supposed to enforce?

Note : This last blog post will be published at a later date

Desired State Configuration (DSC) Nodes Deployment and Conformance Reporting Series (Part 3): Working with the conformance endpoint


This blog post covers how to deploy/configure, and work with the conformance endpoint. It includes details about the type of information returned, as well as sample ways to generate reports with meaningful data.


Configuring Conformance Endpoint

Similar to the pull service endpoint, we can use the xDscWebService resource from the DSC Resource Kit to configure a conformance endpoint. The configuration used to deploy both endpoints is available in the first post of this series. Note the requirements (namely the xPSDesiredStateConfiguration module, also included in the DSC Resource Kit – this is explained in the first blog post as well).

Note: As of today, the conformance endpoint needs to be deployed on the same system as the pull service endpoint. This is because the status of each target node gets stored in an Access database (devices.mdb) on the system that is configured as the pull service endpoint. The conformance endpoint queries the same database for the target node status.


Exploring Conformance Endpoint

Once the configuration is complete, we can access the conformance endpoint at http://<server>:<port>/<endpoint name>/PSDSCComplianceServer.svc. So, from our example, this URL will be http://localhost:9090/DscConformance/PSDSCComplianceServer.svc. If everything worked as expected, we should see output from the endpoint similar to what is shown here:

[screenshot]

The Status method provides the configuration run status for each target node that is configured to receive configuration from the pull service endpoint. If we access the Status method, we will see browser output similar to what is shown below:

[screenshot]

Make a note of the highlighted section (bottom-right corner) in the previous screenshot. This shows how many target systems are available in the pull service inventory. If a pull client hasn’t received any configuration from the pull service endpoint, it won’t get listed in the Status method output. The output that we see in the browser isn’t particularly helpful by itself. However, we can use this browser view to understand more about the Status method and what type of output we can expect from it. This is done using the metadata operation.

To see the meta-data from the Status method, append $metadata to the conformance endpoint URL.
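For example (a sketch, reusing the endpoint URL from our deployment; the single quotes keep PowerShell from expanding $metadata as a variable):

Invoke-WebRequest -Uri 'http://localhost:9090/DscConformance/PSDSCComplianceServer.svc/$metadata' -UseDefaultCredentials -UseBasicParsing |
    Select-Object -ExpandProperty Content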

[screenshot]

 

The XML output seen in the above screenshot gives an overview of all the properties that will be part of the Status method output. Here is a quick summary of these properties and what they mean.

  • TargetName – IP address of the pull client
  • ConfigurationId – GUID configured as the ConfigurationID in the meta-configuration of the pull client
  • ServerChecksum – Value from the configuration MOF checksum file on the pull service endpoint
  • TargetCheckSum – Value of the checksum from the target system
  • NodeCompliant – Boolean value indicating whether the last configuration run was successful
  • LastComplianceTime – Last time the pull client successfully received the configuration from the pull service
  • LastHeartbeatTime – Last time the pull client connected to the pull service
  • Dirty – Boolean value indicating whether the target node status is recorded in the database
  • StatusCode – Describes the node status. Refer to the PowerShell team blog for a complete list of status codes.

We can see the values of these properties by using the Invoke-RestMethod cmdlet to query the OData endpoint.

$response = Invoke-RestMethod -Uri 'http://localhost:9090/DscConformance/PSDSCComplianceServer.svc/Status' -UseDefaultCredentials -Method Get -Headers @{Accept="application/json"}
$response.value
 

In the above example, I have specified the –UseDefaultCredentials switch parameter. This is required because the conformance endpoint uses Windows Authentication. The Value property from the web service response includes the output from the Status method for each target node.

[screenshot]


Understanding Conformance status and Reporting

As you see in this output, everything seems to be working fine in my deployment, and all target systems are in perfect sync with the pull service endpoint. Once again, as explained in the introduction for this blog post series, the NodeCompliant property in the output does not indicate whether the target system is in the desired state. It only indicates whether the last configuration run was successful. So, let us test that by placing a buggy configuration MOF for one of the target nodes. For demonstration purposes, I will create a configuration script that includes a custom DSC resource that does not exist on the target system. When this configuration is received on the target node, it should fail because of the missing resource module.

Configuration DummyConfig {
    Import-DscResource -ModuleName DSCDeepDive -Name HostsFile
    Node '883654d0-ee7b-4c87-adcd-1e10ea6e7a61' {
        HostsFile Demo {
            IPAddress = "10.10.10.10"
            HostName = "Test10"
            Ensure = "Present"
        }
    }
}

DummyConfig -OutputPath "C:\Program Files\WindowsPowerShell\DscService\Configuration"
New-DscCheckSum -ConfigurationPath "C:\Program Files\WindowsPowerShell\DscService\Configuration\883654d0-ee7b-4c87-adcd-1e10ea6e7a61.mof" -OutPath "C:\Program Files\WindowsPowerShell\DscService\Configuration"
 

Once I changed the configuration on the pull service endpoint, I ran the scheduled task manually to pull the configuration, and it failed because the HostsFile resource does not exist on the pull service endpoint for the target system. So, at this moment, if we look at the Status method again, we should see that the NodeCompliant status is set to False. To get the right StatusCode value, we need to run the scheduled task again or wait for the pull client to connect to the pull service again.

[screenshot]

As you see in this output, the NodeCompliant state is set to False and the StatusCode is set to 10. So, what is StatusCode 10? From the PowerShell team blog, I understand that it means there was a failure in getting the resource module. Wouldn’t it be good if I could see the text description of the code instead of an integer value? Also, the IP address as a TargetName won’t make much sense to me. So, when I generate a report, I’d like to see the computer name of the target system instead of the IP address. How can we achieve that?

Yes, with a little bit of PowerShell!

$statusCode = @{
    0='Configuration was applied successfully'
    1='Download Manager initialization failure'
    2='Get configuration command failure'
    3='Unexpected get configuration response from pull service endpoint'
    4='Configuration checksum file read failure'
    5='Configuration checksum validation failure'
    6='Invalid configuration file'
    7='Available modules check failure'
    8='Invalid configuration Id In meta-configuration'
    9='Invalid DownloadManager CustomData in meta-configuration'
    10='Get module command failure'
    11='Get Module Invalid Output'
    12='Module checksum file not found'
    13='Invalid module file'
    14='Module checksum validation failure'
    15='Module extraction failed'
    16='Module validation failed'
    17='Downloaded module is invalid'
    18='Configuration file not found'
    19='Multiple configuration files found'
    20='Configuration checksum file not found'
    21='Module not found'
    22='Invalid module version format'
    23='Invalid configuration Id format'
    24='Get Action command failed'
    25='Invalid checksum algorithm'
    26='Get Lcm Update command failed'
    27='Unexpected Get Lcm Update response from pull service endpoint'
    28='Invalid Refresh Mode in meta-configuration'
    29='Invalid Debug Mode in meta-configuration'
}

$response = Invoke-RestMethod -Uri 'http://localhost:9090/DscConformance/PSDSCComplianceServer.svc/Status' -UseDefaultCredentials -Method Get -Headers @{Accept="application/json"}
$response.value | Select @{Name='TargetName';Expression={[System.Net.Dns]::GetHostByAddress($_.TargetName).HostName}}, ConfigurationId, NodeCompliant, @{Name='Status';Expression={$statusCode[$_.StatusCode]}} | 
Format-List

Note: These status codes can also be found in this blog post.

[screenshot]

Note: In case you are wondering, the computer names used in this specific demo are different than the ones in the previous blog post, because this sample was created in a different environment. It does however work with any pull and conformance endpoint, as long as you use the appropriate URI.

So, what we see now is more meaningful. In this demonstration, I have only four target systems. But, when you have more target systems, it will be good to see some sort of visual indication for systems that have issues applying or working with configurations. For starters, we can use PowerShell and some simple HTML to build a basic report.

Function Get-DscConformanceReport {
    param (
        $Uri = 'http://localhost:9090/DscConformance/PSDSCComplianceServer.svc/Status'
    )
    $response = Invoke-RestMethod -Uri $Uri -UseDefaultCredentials -Method Get -Headers @{Accept="application/json"}
    $NodeStatus = $response.value | 
                  Select @{Name='TargetName';Expression={[System.Net.Dns]::GetHostByAddress($_.TargetName).HostName}}, ConfigurationId, NodeCompliant, @{Name='Status';Expression={$statusCode[$_.StatusCode]}}

    #Construct HTML (non-compliant rows are highlighted in red)
    $HtmlBody = "<html><body><table border='1'>"
    $TableContent = "<tr><th>TargetName</th><th>ConfigurationId</th><th>NodeCompliant</th><th>Status</th></tr>"
    foreach ($Node in $NodeStatus) {
        if (-not ([bool]$Node.NodeCompliant)) {
            $TableContent += "<tr bgcolor='red'>"
        } else {
            $TableContent += "<tr>"
        }
        $TableContent += "<td>$($Node.TargetName)</td><td>$($Node.ConfigurationId)</td><td>$($Node.NodeCompliant)</td><td>$($Node.Status)</td></tr>"
    }
    $HtmlBody += $TableContent + "</table></body></html>"

    #Generate HTML file and open it
    $HtmlBody | Out-File "$env:Temp\DscReport.HTML" -Force
    Start-Process -FilePath iexplore.exe -ArgumentList "$env:Temp\DscReport.HTML"
}
 

What the function generates is not a fancy HTML report. It just highlights all rows with NodeCompliant set to False. I am pretty sure that people with good JavaScript skills can beautify this report and include many other details.

[screenshot]

In the first release of DSC, the conformance endpoint gives only the limited information we have seen so far. For starters, the current functionality is good for understanding whether the configuration run itself completed, how many target systems are available in the deployment, and so on. Blog post #4 in this series should be published in a few weeks, and will look at new capabilities coming soon in Windows Management Framework (WMF) and DSC to surface the actual configuration health and drifts, as well as sample ways to work with them. In the meantime, you can work around the current limitations by using the CIM methods offered by the LCM and build custom reports based on those results. And, if you are familiar with writing ASP.NET web applications and services, you can deploy your own endpoints to go beyond what the conformance endpoint provides.


Blog post series agenda

  1. Series agenda, Understanding the DSC conformance endpoint
  2. How to deploy a pull service endpoint and automate the configuration of the DSC nodes
  3. How to leverage the conformance endpoint deployed alongside the pull service endpoint, to report accurately on configuration deployment and application (this post): Are my DSC nodes downloading and applying the configurations without encountering any errors?
  4. Some options to determine if the nodes are conformant with the given configuration: Are my DSC nodes conformant with the configuration they are supposed to enforce?

Note : This last blog post will be published at a later date

WMI: Common Symptoms and Errors


Quota Violation Issues: Memory and/or Handle

Symptoms

  • General WMI-based scripts or applications fail or fail intermittently
  • Applications such as SMS/SCCM produce errors on server and/or inventories fail or fail intermittently
  • Applications such as Exchange or SQL fail on server or fail intermittently
  • Unable to connect to specific namespace via WBEMTEST or unable to query specific classes in a namespace. May be intermittent.
  • WMI appears to be hung or non-responsive
  • Unable to run msinfo32 or tasklist
  • Events for unexpected termination/crash of wmiprvse.exe
  • Lower than normal available memory on the system
  • Out of memory errors when running certain WMI tasks
  • 0x80041033 -- SHUTDOWN of the target provider
  • 0x80041006 – OUTOFMEMORY
  • 0x80041013 -- fail to load the provider
  • 0x800705af -- The paging file is too small for this operation to complete
  • 0x8004106C-- Quota violation
  • Event logs contain WMI-related events
  • WMI service crashing

Event Source: Service Control Manager
Event ID: 7031
Description: The Windows Management Instrumentation service terminated unexpectedly

  • Handle Quota Violation

Source: Microsoft-Windows-WMI
Event 5612 Wmiprvse.exe exceeding handle quota limit Event
WMI has stopped WMIPRVSE.EXE because a quota reached a warning value. Quota: %1 Value: %2 Maximum value: %3 WMIPRVSE PID: %4

  • Memory quota violations do not log an event the way handle quota violations do. Check whether any instance of Wmiprvse.exe is using roughly 500 MB of memory or more: the default per-process limit is 512 MB on Vista and newer, so once an instance approaches 500 MB, any new allocation that would push it over the 512 MB limit will fail. All instances of Wmiprvse.exe combined have a 1 GB limit; for Windows XP and Server 2003 the per-process default is 128 MB. Exceeding the default is a quota violation. Also check for the Windows Management Instrumentation service crashing. A quick spot-check is shown below.
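A minimal PowerShell sketch for that spot-check; it simply lists WmiPrvSE.exe instances by working set so you can compare them against the default quotas described above:

Get-Process -Name WmiPrvSE -ErrorAction SilentlyContinue |
    Sort-Object WorkingSet64 -Descending |
    Select-Object Id, Handles, @{Name='WorkingSetMB';Expression={[math]::Round($_.WorkingSet64/1MB,1)}}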

For memory quota violations, refer to the Ask Perf blog: WMI: High Memory Usage by WMI Service or Wmiprvse.exe (COMING SOON)

For handle quota violations, refer to the Ask Perf blog: How to Troubleshoot WMI High Handle Count (COMING SOON)

Missing or Failing WMI Providers or Invalid WMI Class

Symptoms

  • General WMI-based scripts or applications fail
  • Applications such as SMS/SCCM produce errors on server and/or inventories fail
  • Applications such as Exchange or SQL fail on server
  • Unable to connect to specific namespace via WBEMTEST or unable to query specific classes in a namespace
  • WMI functionality appears normal on local node but unable to connect to/from machine via WMI scripts, tools or applications
  • Opening the WMI (Local) properties in the wmimgmt.msc console shows errors like the following

Failed to initialize all required WMI classes
Win32_processor: WMI: Invalid namespace
Win32_WMISetting: WMI: Invalid namespace
Security information: Successful
Win32_OperatingSystem: WMI: Invalid namespace
Win32_processor: WMI: Invalid namespace
Win32_WMISetting: WMI: Invalid namespace
Win32_OperatingSystem: WMI: Invalid namespace

  • WBEM_E_NOT_FOUND 0x80041002
  • WBEM_E_PROVIDER_FAILURE 0x80041004
  • WBEM_E_INVALID_NAMESPACE 0x8004100E
  • WBEM_E_INVALID_CLASS 0x80041010
  • WBEM_E_PROVIDER_NOT_FOUND 0x80041011
  • WBEM_E_INVALID_PROVIDER_REGISTRATION 0x80041012
  • WBEM_E_PROVIDER_LOAD_FAILURE 0x80041013

Refer to Ask Perf blog: WMI: Missing or Failing WMI Providers or Invalid WMI Class (COMING SOON)

High Memory Usage by WMI Service or Wmiprvse.exe

Symptoms

  • Wmiprvse.exe memory quota violations – refer to top of blog
  • Lower than normal available memory on the system
  • Delayed or slow logons to the box
  • Excessive or slow return times to queries to WMI or scripts that are running that call to WMI
  • Spinning donut when trying to bring up WMI (Local) properties in wmimgmt.msc console or using Wbemtest (Windows Management Instrumentation Tester) built in tool
  • Sluggish or slow responding system
  • Server hang
  • Unable to run msinfo32 or tasklist
  • Svchost process housing WMI service (winmgmt) exhibiting high memory usage or leak
  • Instance of wmiprvse reaching or exceeding 512 mb on Vista and newer, or 128 mb on XP or Windows 2003: Quota Violation issue
  • Large repository C:\Windows\System32\Wbem\Repository folder and objects.data file is 1gb or larger
  • Cluster management tools not working

Refer to Ask Perf blog: WMI: High Memory Usage by WMI Service or Wmiprvse.exe (COMING SOON)

High CPU Usage by WMI Components

Symptoms

  • High cpu usage by svchost hosting WMI service (winmgmt)
  • High cpu usage by wmiprvse
  • System sluggish or slow performance

Refer to Ask Perf blog: How to troubleshoot High CPU Usage by WMI Components (COMING SOON)

High Handle Count on WMI Components

Symptoms

  • Refer to top of blog: Quota Violation Issues: Memory and/or Handle 
  • Following Event being logged

Source: Microsoft-Windows-WMI
Event 5612 Wmiprvse.exe exceeding handle quota limit Event
WMI has stopped WMIPRVSE.EXE because a quota reached a warning value. Quota: %1 Value: %2 Maximum value: %3 WMIPRVSE PID: %4

  • Resource depletion type messages
  • Error message: No more threads can be created in the system
  • Message when trying to RDP

The User Profile Service service failed the logon

 

  • High handle count on svchost process hosting WMI service (winmgmt)
  • Cluster management tools not working
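A quick, coarse spot-check for the handle counts mentioned above (a sketch; it lists the WmiPrvSE.exe and svchost.exe instances with the most handles, without identifying which svchost hosts the WMI service):

Get-Process -Name WmiPrvSE, svchost -ErrorAction SilentlyContinue |
    Sort-Object Handles -Descending |
    Select-Object -First 5 Name, Id, Handles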

Refer to Ask Perf blog: How to Troubleshoot WMI High Handle Count (COMING SOON)

Repository Corruption

Symptoms

  • General WMI-based scripts or applications fail
  • Applications such as SMS/SCCM produce errors on server and/or inventories fail
  • Applications such as Exchange and SQL fail on server
  • WINMGMT.MSC Security tab shows missing, repeating and/or gibberish namespace entries
  • Unable to connect to specific or possibly any namespace via WBEMTEST
  • Winmgmt.msc console WMI (local) properties Security tab shows missing, repeating and/or gibberish namespace entries
  • Unable to connect to root\default or root\cimv2 namespaces. Fails returning error code 0x80041002 pointing to WBEM_E_NOT_FOUND
  • Opening Computer Management and Right Click on Computer Management (Local) and select Properties, you get the following error: "WMI: Not Found" or it hangs trying connect
  • Wbemtest (Windows Management Instrumentation Tester) built in tool hangs
  • Error 0x80041010 WBEM_E_INVALID_CLASS
  • Any events with Source = Microsoft-Windows-WMI; look for any of the following WMI event IDs: 28, 65, 5600, 5601, 5614, as any of these can be indicators of repository corruption
  • WMI namespaces end up missing

Refer to Ask Perf blog: WMI: Repository Corruption, or Not? (COMING SOON)

Next up: WMI: Repository Corruption, or Not?

Update on the Autohosted Apps Preview Program—part 2


On May 16, 2014 we notified you that the Office 365 Autohosted Apps Preview program—a program to determine how best to provide a friction-free deployment experience for creating apps for SharePoint—would end on June 16, 2014 and developers would not be able to create new autohosted apps in SharePoint. This decision was based on our commitment to working with the Azure and Visual Studio teams to incorporate developers’ feedback and evolve the hosting concept for apps, so that we could deliver a more seamless experience for developers. We want to update you on these changes and how they affect you.

Because you can no longer install autohosted apps in SharePoint Online, the capability to create autohosted apps and deploy them into your production SharePoint Online site has been removed from the Office Developer Tools in Visual Studio 2013 Update 3, released August 4, 2014.

Existing apps are not being shut down or removed at this time, and they are still supported by Visual Studio. We will post an update notifying you when existing apps will be removed from the service, so you’ll have time to transition your autohosted apps to the new, provider-hosted model.

If you are currently using autohosted preview apps in a production environment, we recommend that you follow this step-by-step MSDN article on how to transition autohosted preview apps to provider-hosted apps, the new model we’re improving. If you want to just pull the data from your app, please contact the relevant Office support resource for assistance.

We’re excited about the possibilities for the provider-hosted app model, and are actively working on the various components necessary to deliver new capabilities that better address developers’ needs. This model will provide the features you requested, including: streamlined deployment and management, the ability to leverage the full power of Azure, and easy scaling for apps.

As always, we’re interested in your feedback and comments. Give us your suggestions on our UserVoice site.

FAQ

Why are you ending the Autohosted Apps Preview program?

Autohosted Apps was a preview program to determine how best to provide a seamless deployment experience for creating apps for SharePoint. We have gathered the valuable feedback we needed to improve these capabilities and create a first-class experience for both developers and customers.

When will my Autohosted Apps Preview program be shut down in the service?

We will provide more details about a specific date at a later time, but we will not shut down any of the apps currently running in the service until the end of 2014.

When will you have an autohosted apps experience available in production?

We’re working diligently with the Azure and Visual Studio teams to have the next iteration of this type of friction-free hosting model ready to announce by the end of 2014. We will announce updates on  Office blogs.

The post Update on the Autohosted Apps Preview Program—part 2 appeared first on Office Blogs.


Edge Show - Log Management and Other Intelligence Packs in System Center Advisor

WMI: Repository Corruption, or Not?


Scenario

Windows Management Instrumentation failing due to repository being corrupted

The WMI repository (%windir%\System32\Wbem\Repository) is the database that stores meta-information and definitions for WMI classes; in some cases the repository also stores static class data as well. If the repository becomes corrupted, the WMI service will not be able to function correctly.

Before grabbing that proverbial hammer and just rebuilding your repository, ask yourself, “Is the WMI repository OK?”

Common symptoms that lead to this question are: provider load failure, access denied, class not found, invalid namespace, and namespace not found to mention a few.

If you suspect WMI or repository corruption, rebuilding the repository is the last thing you should do without verifying this is truly the case. Deleting and rebuilding the repository can cause damage to Windows or to installed applications. Other steps should be taken first to eliminate other possibilities, or to confirm that we really have repository corruption. Note that an overly large repository also creates problems, and can sometimes be misinterpreted as a corrupt repository, which is not always the case. If issues are due to a large repository, rebuilding the repository is currently the only method available to reduce it back to a working size.

Since I mentioned “large repository”, let me set some guidelines up front. There is no hard and fast number, per se, as to when you will start feeling performance problems with a large repository. As a guideline, if the objects.data file, located in %windir%\System32\Wbem\Repository, is 1 GB or larger, then I would recommend rebuilding your repository to reduce it back down to a working and manageable size. If the size is between 600-900 MB, and you are not seeing any noticeable performance issues, then I would recommend against rebuilding the repository.
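A quick way to check that size from PowerShell (a sketch using the default repository path):

Get-Item "$env:windir\System32\Wbem\Repository\OBJECTS.DATA" |
    Select-Object Name, @{Name='SizeMB';Expression={[math]::Round($_.Length/1MB,0)}}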

If WMI is corrupted, you can receive various errors and symptoms, depending on what activity was being done at the time. Below are a few errors and symptoms that could indicate that the repository is corrupted:

  1. Unable to connect to root\default or root\cimv2 namespaces. Fails returning error code 0x80041002 pointing to WBEM_E_NOT_FOUND.
  2. When you open Computer Management, right-click Computer Management (Local), and select Properties, you get the following error: "WMI: Not Found", or it hangs trying to connect
  3. 0x80041010 WBEM_E_INVALID_CLASS
  4. Trying to use wbemtest, it hangs
  5. Schemas/Objects missing
  6. Strange connection/operation errors (0x8007054e):

get-cimclass : Unable to complete the requested operation because of either a catastrophic media failure or a data structure corruption on the disk.

At line:1 char:1

+ get-cimclass -Namespace root\cimv2\TerminalServices

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ CategoryInfo : NotSpecified: (:) [Get-CimClass], CimException

+ FullyQualifiedErrorId : HRESULT 0x8007054e,Microsoft.Management.Infrastructure.CimCmdlets.GetCimClassCommand

Check the Windows Application log for events in the past week where Source = Microsoft-Windows-WMI.  Look for any of the following WMI event IDs: 28, 65, 5600, 5601, 5614. Any of these could indicate a WMI repository issue or core infrastructure problem.

If you do not find any of these events logged, your next action is to use the built in repository checker. From an elevated command prompt run "winmgmt /verifyrepository". If the repository has an issue, it will respond "repository is not consistent".

If the repository check comes back as “consistent”, then look at my other Ask Perf blogs for applicability:

WMI: Missing or Failing WMI Providers or Invalid WMI Class (COMING SOON)

WMI: High Memory Usage by WMI Service or Wmiprvse.exe (COMING SOON)

How to troubleshoot High CPU Usage by WMI Components (COMING SOON)

WMI Self-Recover

When the WMI service restarts or detects repository corruption, the self-recovery procedure will trigger automatically, using one of two approaches:

  1. AutoRestore: if the VSS backup mechanism has taken snapshots of the repository on the system (for example, via the Windows 7 “previous versions” feature), WMI will use the AutoRestore approach to restore the most recent valid backup image from the version queue, if possible.
    • Events: 65 (restore started) / 66 (restore succeeded, with the VSS path)
  2. AutoRecovery: the rebuild process generates fresh repository images based on the registered MOFs (listed at HKLM\Software\Microsoft\WBEM\CIMOM: AutoRecover Mofs).
    • Event: 5616 (recovery complete); you may also see many event 63 warnings about LocalSystem registration of providers.

Note: Under almost no circumstances should you use the script that rebuilds the WMI repository from the MOF files.

The script is inherently flawed, for two reasons:

    1. If you navigate to the %systemroot%\system32\wbem folder and list the MOF files, you will find MOFs named (some provider name)_uninstall.mof. When you mofcomp those, they remove the classes in the MOF. The script mofcomps everything, so it can very easily install and then immediately uninstall the classes, resulting in the classes not being accessible.
    2. Replaying mofs is often sequence dependent. Example: classes in mof1 can depend on or have associations with classes in mof2. If they aren't present, MOFCOMP will not insert the classes. It's extremely difficult to know what is / is not the right sequence, so any script that simply MOFCOMPs everything is not going to be fully successful.

In addition to causing damage to your system that's almost impossible to fix correctly, if you take that approach you will blow away all information that could be used to determine the root-cause.

If the repository check (winmgmt /verifyrepository) comes back as inconsistent, your first action is to run “winmgmt /salvagerepository”, followed by “winmgmt /verifyrepository” again to see if it now comes back as consistent.

If it still comes back inconsistent, then you need to run “winmgmt /resetrepository.” Before running this, please read the important note below for Server 2012.

Force-recovery process -- rebuild based on the registry list of AutoRecover Mofs

 

  1. Check whether the registry value HKLM\Software\Microsoft\WBEM\CIMOM: 'Autorecover Mofs' is empty or not (note: on some OSs the first line is empty, so open the value to review its contents)
  2. If the above registry value is empty, copy/paste the value from another machine with the same system/OS onto the suspect machine
  3. Run the following command from a command prompt with admin rights: “winmgmt /resetrepository”
  4. If you get the error noted below, stop all services that depend on the WMI service by running the following command: 

net stop winmgmt /y

then run

winmgmt /resetrepository

WMI repository reset failed
Error code:     0x8007041B
Facility:       Win32
Description:    A stop control has been sent to a service that other running services are dependent on

NOTE: Applies to Server 2012

We have encountered some issues when running the mofcomp command on Windows Server 2012 which have caused the Cluster namespace to be removed, due to the cluswmiuninstall.mof contained in the c:\windows\system32\wbem folder. It has also caused class names for Cluster Aware Updating (CAU) to be unregistered, because of its cauwmiv2.mof, also in the wbem folder. The same could affect other namespaces that have an uninstall-type .mof in the wbem folder, beyond the two mentioned above.

Furthermore, the uninstall .mof files for servers running Microsoft Cluster are also part of the AutoRecover list that is used when you run the winmgmt /resetrepository command, which will end up having the same effect of first installing the Cluster namespace and then uninstalling it, just as if you had run a script to rebuild the repository that contained the “for” command to recompile all of the MOFs in the WBEM folder.

Take the following actions to confirm whether the uninstall problem for this scenario exists on your server. If it doesn’t, then you can run winmgmt /resetrepository; otherwise, follow my directions below for manually accomplishing the rebuild.

  1. Open regedit, navigate to hklm\software\microsoft\wbem\cimom, and open “Autorecover MOFs”
  2. Copy the data from that string value, and paste it into notepad
  3. Do a search for ClusWmiUninstall.mof. If the cluster provider uninstall has autorecover, it will be listed here
  4. If found, continue to the manual rebuild below; if not found, go ahead and use the winmgmt /resetrepository command

How to manually rebuild the repository on a Server 2012 cluster machine when the cluster provider uninstall MOF is in the AutoRecover list

First, ensure you have run winmgmt /verifyrepository to confirm that the repository is “inconsistent”, and that you have tried winmgmt /salvagerepository to see if it resolves your issue.

Change the startup type for the Windows Management Instrumentation (WMI) service to Disabled.

 

  1. Stop the WMI service; you may need to stop services dependent on the WMI service first, before it will allow you to successfully stop the WMI service
  2. Rename the repository folder:  C:\WINDOWS\system32\wbem\Repository to Repository.old
  3. Open a CMD Prompt with elevated privileges
  4. Cd windows\system32\wbem
  5. Run following command to re-register all the dlls: for /f %s in ('dir /b /s *.dll') do regsvr32 /s %s
  6. Set the WMI Service type back to Automatic and restart WMI Service
  7. cd /d c:\  (go to the root of the C: drive; this is important)
  8. Run the following command, specifically adapted for 2012 clustered servers, to recompile the MOFs: "dir /b *.mof *.mfl | findstr /v /i uninstall > moflist.txt & for /F %s in (moflist.txt) do mofcomp %s"
  9. Restart WMI service

As a final note, if you run into a recurring corruption issue in your environment with WMI, try to exclude the WBEM folder and all subfolders under it from AV scanning. AV scanning is known to cause corruption and other issues in WMI.

Other repository recovery solutions:

Note: in the following solutions (1 & 2), if the repository backup images are large (>100 MB), restoring the repository will take some time.

  1. Use the WMI AutoRestore capability to recover the repository image quickly and keep it in sync with a previous known-good state.
  2. Enable VSS backup-related features for storing image snapshots
    • e.g. Volume Shadow Copy (VSS), or check for any valid copies listed under Local Disk (C:) Properties >> Shadow Copies
  3. Make sure the registry has the following setting: HKLM\Software\Microsoft\WBEM\CIMOM: AutoRestoreEnabled=1
  4. Frequently snapshot restore points of the system (if needed, refer to the following PowerShell script)

# Build a WQL filter for the system drive (e.g. DriveLetter='C:')
$filterStmt = [String]::Format("DriveLetter='{0}'", $env:SystemDrive)

# Get the system drive volume info
$vlm = Get-CimInstance -ClassName Win32_Volume -Filter $filterStmt

# Create a shadow copy of that volume
$res = Invoke-CimMethod -ClassName Win32_ShadowCopy -MethodName Create -Arguments @{ Volume = $vlm.DeviceID }

if ($res.ReturnValue -eq 0)
{
    # Success: return the shadow copy that was just created
    Get-CimInstance -ClassName Win32_ShadowCopy -Filter ("ID='" + $res.ShadowID + "'")
}
else
{
    # Failure: dump the result object for troubleshooting
    $res | Format-Custom
}

    • AutoRestore only searches the top three queued snapshots for the latest valid backup; if no valid one is found, AutoRecovery will apply.
    • To restore manually from other snapshots in the queue:
      1. On a Server SKU, look at the 'Previous Versions' tab of the repository folder to find the expected backup path
      2. Stop the WMI service: net stop winmgmt /y
      3. Replace all of the files in the %windir%\system32\wbem\Repository folder with the files from the backup path found in step 1

    Note: The WMI service is set to start automatically, and if it comes back to life you will not be able to replace the files. The service needs to be in a stopped state (if the WMI service is running at that point, repeat steps 2–3).

    ex. Directory of \\localhost\C$\@GMT-2014.03.13-01.02.49\Windows\system32\wbem\repository

    03/11/2014  11:53 AM    <DIR>          .
    03/11/2014  11:53 AM    <DIR>          ..
    03/12/2014  05:30 PM         4,759,552 INDEX.BTR
    03/12/2014  05:30 PM            90,640 MAPPING1.MAP
    03/12/2014  03:26 PM            90,640 MAPPING2.MAP
    03/12/2014  05:24 PM            90,640 MAPPING3.MAP
    03/12/2014  05:30 PM        27,541,504 OBJECTS.DATA

    4. Run the following wmic command to bring the WMI service back to life: wmic os (a PowerShell sketch of steps 2–4 follows)
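
    A rough PowerShell sketch of steps 2–4, where $backupPath is the 'Previous Versions' path found in step 1 (the value below is only a placeholder taken from the example listing, not a real snapshot on your machine):

    # Placeholder: substitute the snapshot path from the 'Previous Versions' tab
    $backupPath = '\\localhost\C$\@GMT-2014.03.13-01.02.49\Windows\system32\wbem\repository'

    # Stop WMI and anything that depends on it
    & net stop winmgmt /y

    # Replace the live repository files with the snapshot copies
    Copy-Item -Path "$backupPath\*" -Destination (Join-Path $env:windir 'System32\wbem\Repository') -Force

    # Any WMI query will bring the service back to life
    & wmic os get Caption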

    You should set up a regular scheduled task to back up the latest repository (a sample scheduled-task sketch follows these bullets):

        • winmgmt /backup <path to backup file>
        • tracing EVT: 67,68

      You could also schedule restores as necessary

      • winmgmt /restore <path to backup file> 1
      • tracing EVT: 65,66
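
      As one way to automate the backup, here is a hedged sketch using the ScheduledTasks cmdlets; the task name, schedule, and backup path are arbitrary choices, not anything the post above prescribes:

      # Ensure C:\WmiBackups exists, or point the argument at a folder of your choosing
      $action  = New-ScheduledTaskAction -Execute 'winmgmt.exe' -Argument '/backup C:\WmiBackups\wmi-repository.rec'
      $trigger = New-ScheduledTaskTrigger -Daily -At 3am
      Register-ScheduledTask -TaskName 'WMI Repository Backup' -Action $action -Trigger $trigger -User 'SYSTEM' -RunLevel Highest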

      If the issue is not a repository issue, and the objects are not retrievable:

      • Re-install the product. This is the first place to start.
      • If there is a specific provider that is not showing up, you can re-run mofcomp of a single provider. See Ask the Performance Team Blog article WMI: Missing or Failing WMI Providers or Invalid WMI Class (COMING SOON)

      If the issue persists or keeps returning, you will need to open a Support Incident Case with Microsoft for further assistance.

      Reference this blog when you open the Support Incident Case with Microsoft; it will help the engineer understand what actions have already been taken and will help us track the effectiveness of the blog.

      Next up: WMI: Missing or Failing WMI Providers or Invalid WMI Class

      Pie in the Sky (August 8th, 2014)


      It has been a really long, really busy week, so not many links today. Looking forward to the weekend and a short week next week. I'll be on vacation next Friday, so no links next week.

      Cloud

      Client/mobile

      Misc.

      Enjoy!

      - Larry

      MDT 2013 – Part II: Create a Deployment Task Sequence and Deploy a Custom Windows Server 2012 R2 Base OS Image


      Hey there, Mike Hildebrand here with Part Two of this MDT series. To catch up, go here and read Part One and then come back …

      Picking up where we left off, we had MDT set up and running and we'd captured an image of a reference server system.

      In Part Two, we'll create a Task Sequence with some custom elements and use it to deploy the reference image we created in Part One onto a physical and a virtual machine.

      Let's get crackin…

       

      Create a Deployment Task Sequence

      Open the MDT Workbench UI and right-click "Task Sequences" and choose "New Task Sequence" to launch the Wizard:

      1. As mentioned in Part One, use a solid naming standard for your Task Sequence IDs and Names, and use the description/comments fields to capture who/what/when
        1. Compare the screenshot below to the same screen in Part One – one has a little info; one has a lot of info
        2. Charity's Tip here – "keep it simple"

      2. Choose "Standard Server Task Sequence" for the Template…

      3. Choose the customized WIM that was captured earlier…

      4. Depending on your media and licensing situation, you can enter a key here or plan to use a KMS system for activation…
        1. Charity's Tip – if your company has Volume License Keys, DON'T enter one here – you'll only need a Product Key here if you're using MSDN or retail box software that came with a product key
        2. Volume License Media will activate against a properly configured KMS or AD-Based Activation system once the deployment completes
          1. Charity has a great post on KMS here - http://blogs.technet.com/b/askpfeplat/archive/2013/02/04/active-directory-based-activation-vs-key-management-services.aspx

      5. Enter the desired information for these fields…

      6. Enter a password for the local Administrator account or choose to get prompted for one when you build a system
        1. This should be considered a 'one-time use' password, used only by this specific build to auto-login during the deployment process.
          1. The password information is obfuscated in the deployment process but it should be considered 'discoverable'
        2. Do not use one of your corporate standard passwords here
        3. Be sure you execute your normal local administrator password change processes on the deployed OS once the deployment is complete

      7. Review the Summary and click Next… then Finish (if you prefer to script this step, see the PowerShell sketch below).
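
      The Workbench wizards can show an equivalent PowerShell command via the 'View Script' option; a rough sketch of what that looks like for this task sequence is below. The deployment share path, IDs, names, and OS path are all placeholders from my lab, and the Server.xml template name is my assumption for the Standard Server Task Sequence, so verify everything against what your own Workbench generates:

      # Load the MDT module and map the deployment share (paths are placeholders)
      Import-Module "C:\Program Files\Microsoft Deployment Toolkit\bin\MicrosoftDeploymentToolkit.psd1"
      New-PSDrive -Name "DS001" -PSProvider MDTProvider -Root "D:\DeploymentShare"

      # Create the deployment task sequence from the captured custom WIM
      Import-MDTTaskSequence -Path "DS001:\Task Sequences" `
          -Name "Deploy WS2012R2 Custom Base" `
          -Template "Server.xml" `
          -ID "WS2012R2-01" `
          -Version "1.0" `
          -Comments "Who/what/when details go here" `
          -OperatingSystemPath "DS001:\Operating Systems\WS2012R2 Custom Base Image" `
          -FullName "IT" -OrgName "Contoso" -HomePage "about:blank" -Verbose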

       

      Customize the New Deployment Task Sequence

      We now have a Task Sequence but we need to add some additional Task Sequence "Actions" to meet our needs.

      I'm going to use the 'Apply Network Settings' Action to specify a particular IP address for a DNS server, but use DHCP to get the IP address for the machine:

      1. Right-click the new Task Sequence in the MDT UI and choose 'Properties' then click the Task Sequence Tab.
      2. Click the "Add" drop-down
      3. Choose "Settings" > "Apply Network Settings"

      4. Use the Up/Down green arrow buttons to position the Apply Network Settings action AFTER the PostInstall group of actions.
      5. Then, use those same buttons to move the Restart Computer action that was in the Postinstall group to AFTER your Apply Network Settings action (see the screenshot below)
        1. These steps make sure the NIC settings get applied at a point when the deployment is far enough along that the OS can 'see' the NIC but before an OS reboot might prevent the settings from sticking.

      6. Highlight the "Apply Network Settings" action and click the "star" button to start the Network Settings Wizard:

      7. Give the entry a name – 'NIC' in the example here
      • This names the first network adapter in the OS, usually defined in the OS as 'Local Area Connection'

      8.  Leave 'Obtain an IP address automatically' selected

      9.  Select the DNS tab

        • Select 'Use the following DNS Servers'
        • Click the yellow 'star' and enter the IP address of the desired DNS server in the blank
        • Click Add > OK

      10.  Note the entry now on the "Properties" tab. Click Apply.

      11.  Click the OS Info tab

      12.  Click 'Edit Unattend.xml'

        • The first time you do this to a given OS image, the image is 'cataloged' and this takes several minutes …

      13.  Eventually, Windows System Image Manager will open the Unattend.xml file associated with this Task Sequence.

      14.  Expand/drill-down in the Answer File section (middle pane of the UI) until you get to '4 specialize'

      15.  Drill-down into that as shown in the screenshot to get to the 'copyProfile' setting and set it to "true"

        1. This is the magic moment where many of our customizations from our base image will get copied over to the Default User profile during the new OS deployment (most-notably, the customized Start screen layout)
        2. Any user who logs into the new system will get a profile created that is based on that Default user profile and those settings/configs will be set

      • Unrelated-but-interesting note: TimeZone shown here is a legacy setting and doesn't apply to OSes beyond Vista
      16.  Review any warnings you get at the bottom, then close the dialog box … in my case, the warnings were neither applicable nor a concern.

       

      Deploy the Image via the Customized Deployment Task Sequence

      Burn the bootable "LiteTouchPE" ISO image for the appropriate architecture to a USB stick for a physical deployment or simply mount the ISO in a VM for a virtual deployment

      • Note – this ISO does NOT include the entire WIM/image, this is just a boot image to initiate pulling down the customized build/image
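
      For the virtual-deployment case, if your lab happens to run Hyper-V, attaching the boot ISO can be scripted roughly like this (the VM name and ISO path are hypothetical):

      # Hypothetical VM name and ISO path; adjust for your lab
      Set-VMDvdDrive -VMName "RefDeploy01" -Path "D:\DeploymentShare\Boot\LiteTouchPE_x64.iso"
      Start-VM -Name "RefDeploy01"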


       

      For this first deployment example, I deployed to a virtual machine (booted via the mounted ISO file).

      1. Click the 'Run the Deployment Wizard…' button…

      2. Enter credentials to allow connection back to the MDT server/share and click OK:

      3. The MDT Rules are then processed…

      4. Choose the custom Deployment Task Sequence we created earlier
        1. Note the details that display here for the name and "comments" we provided when we created the Task Sequence.
        2. If I'd listened to Charity, it would be simple, yet clearly informative – do whatever works for you and your org/standards

      5. Enter the desired settings for Computer name and domain join info
        1. You can define the OU, too – nice touch, MDT dev team!
          1. Just make sure you use the correct LDAP path in "distinguished name" format (for example, OU=Member Servers,DC=contoso,DC=com)

      6. Note, per the Rules I defined in Part One, locale settings are indeed grayed out and my time zone defaults to CST (I am able to change it, if desired).

      7. Expand the details and review them – this simple step saved me from wasted time on several occasions – then click Begin.
        1. Note: if you dig deeper into MDT, you can automate or pre-populate most (all) of the prior prompts via the customsettings.ini and/or bootstrap.ini files
        2. That is more advanced and out of scope for this blog series.

      8. The deployment begins…

      9. There will be several reboots and auto-logins as the process continues…

       

       

       

       

      • NOTE: I saw this fly-out appear during the build process but I didn't touch anything and the deployment process handled it fine, continuing on without user intervention:

       

      Finally (hopefully?), you'll end up here, logged in and with a "Success" dialog box.

      Click Finish to close the box and review your deployment:

        • Desktop color? Check!
        • Desktop icons? Check!
        • BG info? Check!

       

      Start screen layout? Check!

        • Remember, this comes from the DEPLOYMENT Task Sequence edit of the Unattend.XML we did above - where we set the 'CopyProfile' entry to TRUE. This is NOT edited/set in the 'CAPTURE' Task Sequence from Part One
        • This took me a bit to comprehend - I thought I needed to 'capture' the Start screen
        • However, the Start screen and other profile settings are included in a captured OS.
        • It is when we DEPLOY the OS that we need to tell the deployment process "copy the profile."

      Note - you can only deploy a pre-defined customized Start screen on certain versions of Windows:

        • Any GUI Windows Server 2012 or newer version
        • Windows 8.x Enterprise
        • Windows 8.x Pro that is domain-joined

       

      Addendum – "What if I want to enter static IP settings on a given build?"

      Well, the MDT folks have you covered here, too. This ain't their first build rodeo :)

      For this second deployment example, I deployed to a physical machine booted from the LiteTouch ISO burned to USB media.

      • Most of the screen-shots here (except the last one) are from a VM due to the ease of capturing them.

         

      1. At the Welcome screen, click 'Configure with Static IP Address…' at the bottom…

      2. Uncheck the 'enable DHCP' box and that will light up the NIC settings fields.
      3. Enter the NIC settings you want and click Finish…

      4. Click the "Run the Deployment Wizard …" button and continue on…

      5. Once the deployment completes, you'll see the static NIC settings you defined above are now set on your system. Awwwwwe YEEEAAAAHHH

       

      Well folks, there you have it. MDT 2013 via a fairly simple, yet realistic "customized" Server OS deployment process. I hope you find MDT as amazing as I did - and I only scratched the surface.

      If you haven't explored this tool yet but are curious, I hope the MDT posts on our blog get you going - you can do some great things without too much over-head or ramp-up time.

      Kudos to the folks who work on the MDT (or BDD as it was once called) now and over the years! Here's a blog you should check out by folks who live and breathe deployment: http://blogs.technet.com/b/deploymentguys/

      Super-thanks to Charity, Kyle and Joao for their help in nailing down some of the technical details for this series, as well as my enterprising buddy, "Half-marathon" Crawford.

      Cheers!

      SMB3 PowerShell changes in Windows Server 2012 R2: Simpler setting of ACL for the folder behind a share

       

      Introduction

       

      Windows Server 2012 R2 introduced a new version of SMB. Technically it’s SMB version 3.02, but we continue to call it just SMB3. The main changes are described at http://technet.microsoft.com/en-us/library/hh831474.aspx.

      With this new release, we made a few changes in SMB PowerShell to support the new scenarios and features.

      This includes a few new cmdlets and some changes to existing cmdlets, with extra care taken not to break any of your existing scripts. This blog post outlines one of the seven sets of changes related to SMB PowerShell in Windows Server 2012 R2.

       

      Simpler setting of ACL for the folder behind a share

       

      In Windows Server 2012, the SMB share has a property that facilitates applying the share ACL to the file system folder used by the share.

      Here’s the syntax for a share named Share1:

       

      • (Get-SmbShare -Name Share1).PresetPathAcl | Set-Acl

       

      In Windows Server 2012 R2, we have improved this scenario by providing a proper cmdlet to apply the share ACL to the file system used by the share.

      Here’s the new syntax for the same share:

       

      • Set-SmbPathAcl -ShareName Share1
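
      A quick way to sanity-check the result is to read back the ACL on the folder behind the share; this is just a verification sketch using the Share1 example above:

      # Read the folder path behind the share and display its ACL
      $share = Get-SmbShare -Name Share1
      Get-Acl -Path $share.Path | Format-List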

       

      Notes

       

      1) The Windows Server 2012 syntax continues to work with Windows Server 2012 R2, but the new syntax is much simpler and therefore recommended.

      2) There is a known issue with Windows Server 2012 R2 Preview that causes this new cmdlet to fail when using non-Unicode languages. As a workaround, you can use the old syntax.

      3) This blog post is an updated version of the September 2013 post at  http://blogs.technet.com/b/josebda/archive/2013/09/03/what-s-new-in-smb-powershell-in-windows-server-2012-r2.aspx focused on a single topic.
