Windows Server 2012 Hyper-V: Convert VHD to VHDX

Hyper-V Virtual Hard Disk Format Overview

Updated: February 29, 2012

Applies To: Windows Server 2012

As enterprise workloads for virtual environments grow in size and in performance demands, virtual hard disk (VHD) formats need to accommodate them. Hyper-V in Windows Server 2012 introduces a new version of the VHD format called VHDX, which is designed to handle current and future workloads.

VHDX has a much larger storage capacity than the older VHD format. It also provides data corruption protection during power failures and optimizes structural alignments of dynamic and differencing disks to prevent performance degradation on new, large-sector physical disks.

The new VHDX format in Windows Server 2012 addresses the technological demands of an evolving enterprise by increasing storage capacity, protecting data, and ensuring quality performance on large-sector disks.

The main new features of the VHDX format are:

  • Support for virtual hard disk storage capacity of up to 64 TB.
  • Protection against data corruption during power failures by logging updates to the VHDX metadata structures.
  • Improved alignment of the virtual hard disk format to work well on large sector disks.

The VHDX format also provides the following features:

  • Larger block sizes for dynamic and differencing disks, which allows these disks to attune to the needs of the workload.
  • A 4-KB logical sector virtual disk that allows for increased performance when used by applications and workloads that are designed for 4-KB sectors.
  • The ability to store custom metadata about the file that the user might want to record, such as operating system version or patches applied.
  • Efficiency in representing data (also known as “trim”), which results in smaller file size and allows the underlying physical storage device to reclaim unused space. (Trim requires physical disks directly attached to a virtual machine or SCSI disks, and trim-compatible hardware.)
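Several of these features can be exercised directly from PowerShell. As a minimal sketch, a dynamic VHDX with 4-KB logical sectors can be created with the New-VHD cmdlet (the path and size below are placeholders):

 # Create a dynamic VHDX whose logical sector size is 4 KB,
 # for workloads designed around 4-KB sectors.
 New-VHD -Path C:\temp\VHDs\LargeSector.vhdx -SizeBytes 127GB -Dynamic -LogicalSectorSizeBytes 4096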

 

With Windows Server 2012, Microsoft released a new virtual disk format called VHDX, which improves on the older VHD format in a number of ways.

Back in October I wrote a blog post on the improvements of the VHDX format in the Windows Server 8 Developer Preview. At that time VHDX supported a maximum size of 16 TB; with the release of the Windows Server 8 Beta (Windows Server 2012 beta), the maximum size increased to 64 TB.


You can download the VHDX Format Specification.

To use these new features, you have to convert your existing VHDs to the new VHDX format. You can do this in two different ways: with Hyper-V Manager or with Windows PowerShell.

Convert VHD to VHDX via Windows PowerShell

To convert a VHD to a VHDX with Windows PowerShell, you can simply use the Convert-VHD cmdlet:

 Convert-VHD TestVHD.vhd -VHDFormat VHDX -DestinationPath C:\temp\VHDs\TestVHDX.vhdx -DeleteSource

Of course, you can convert the VHDX back to a VHD using the following command:

 Convert-VHD TestVHDX.vhdx -VHDFormat VHD -DestinationPath C:\temp\VHDs\TestVHD.vhd -DeleteSource


Convert VHD to VHDX via Hyper-V Manager

  1. Start Hyper-V Manager and click “Edit Disk…”.
  2. Select the VHD you want to convert.
  3. Select “Convert”.
  4. Select the target format, in this case VHDX.
  5. Select the location for the new VHDX file.
  6. Check the summary and click “Finish”.

 

As with the PowerShell command, you can also convert a VHDX back to a VHD, but you have to make sure that the VHDX is not larger than 2 TB, because that is the maximum size the VHD format supports.
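Before converting back, it can help to verify the size from PowerShell first. A minimal sketch using the Get-VHD cmdlet (the paths are placeholders):

 # The VHD format tops out at 2 TB, so check before converting back.
 $disk = Get-VHD -Path C:\temp\VHDs\TestVHDX.vhdx
 if ($disk.Size -le 2TB) {
     Convert-VHD $disk.Path -VHDFormat VHD -DestinationPath C:\temp\VHDs\TestVHD.vhd
 } else {
     Write-Warning "Disk is larger than the 2 TB VHD limit."
 }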

Link to original posts:

http://technet.microsoft.com/en-us/library/hh831446.aspx

http://www.thomasmaurer.ch/2012/05/windows-server-2012-hyper-v-convert-vhd-to-vhdx/

Aviraj Ajgekar already did a post on this TechNet blog about how you can convert a VHD to VHDX via Hyper-V Manager.


How to configure VM Monitoring in Windows Server 2012


Windows Server 2012 Hyper-V: VM Monitoring


Overview

Do you have a large number of virtualized workloads in your cluster? Have you been looking for a solution that allows you to detect if any of the virtualized workloads in your cluster are behaving abnormally? Would you like the cluster service to take recovery actions when these workloads are in an unhealthy state? In Windows Server 2012, there is a great new feature in Failover Clustering called “VM Monitoring”, which does exactly that – it allows you to monitor the health state of applications that are running within a virtual machine and then reports that to the host level so that it can take recovery actions. You can monitor any Windows service (such as SQL or IIS) in your virtual machine or ANY ETW event occurring in your virtual machine. When the condition you are monitoring gets triggered, the Cluster Service logs an event in the error channel on the host and takes recovery actions.

In this blog, I will provide a step by step guide of how you can configure VM Monitoring using the Failover Cluster Manager in Windows Server 2012.

Note: There are multiple ways to configure VM Monitoring. In this blog, I will cover the most common method. In a future blog, I will cover the many different flexible options for configuring VM Monitoring.

In Windows Server 2012 Failover Clustering, Microsoft offers a new feature called VM Monitoring. This feature allows you to monitor the health of applications running inside the guest operating system of a Hyper-V virtual machine. So how exactly does this work, and what happens when a service fails?

When a monitored service fails, the recovery settings of the service take action first.

For the first failure, the service is restarted by the Service Control Manager inside the guest operating system; if the service fails a second time, it is again restarted via the guest operating system. On the third failure, the Service Control Manager takes no action, and the Cluster service running on the Hyper-V host takes over the recovery actions.

 


 

The Cluster service monitors the service through periodic health checks. When the Cluster service recognizes a failed service, it changes the status of the virtual machine to unhealthy, which triggers the following recovery actions:

  • An event log entry with Event ID 1250 is created in the host event log. This event can be picked up by monitoring software such as System Center Operations Manager, which also allows you to run other actions or trigger System Center Orchestrator runbooks.
  • The virtual machine state is changed to “Application in VM Critical”.
  • The virtual machine is restarted on the same node; if the service fails again, the virtual machine is restarted and failed over to another node in the cluster.
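If you want to pick up Event ID 1250 from PowerShell rather than a monitoring product, here is a hedged sketch with Get-WinEvent (the log and provider names are assumptions; adjust them to where your cluster writes the event):

 # Query the host for recent FailoverClustering 1250 events,
 # which signal an "Application in VM Critical" state.
 Get-WinEvent -FilterHashtable @{
     LogName      = 'System'
     ProviderName = 'Microsoft-Windows-FailoverClustering'
     Id           = 1250
 } -MaxEvents 10 | Select-Object TimeCreated, Message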

Of course, you can configure the recovery settings in the cluster.


Configuring VM Monitoring

Pre-requisites

Before you can configure monitoring from the Failover Cluster Manager on a Management Console the following pre-steps are required:

1)      Configure the guest operating system running inside the virtual machine

a)      The guest operating system running inside the virtual machine must be running Windows Server 2012

b)      Ensure that the guest OS is a member of the same domain as the host, or of a domain with a trust relationship with the host domain.

2)      Grant the cluster administrator permissions to manage the guest

a)      The administrator running Failover Cluster Manager must be a member of the local administrators group in the guest

3)      Enable the “Virtual Machine Monitoring” firewall rule on the guest

a)      Open the Windows Firewall console

b)      Select “Allow an app or feature through Windows Firewall”

c)       Click on “change settings” and enable the “Virtual Machine Monitoring” rule.

Note:

You can also enable the “Virtual Machine Monitoring” firewall rule using the Windows PowerShell® cmdlet Set-NetFirewallRule:

 Set-NetFirewallRule -DisplayGroup "Virtual Machine Monitoring" -Enabled True

Configuration

VM Monitoring can be easily configured using the Failover Cluster Manager through the following steps:

1)      Right click on the Virtual Machine role on which you want to configure monitoring

2)      Select “More Actions” and then the “Configure Monitoring” options

3)      You will then see a list of services that can be configured for monitoring using the Failover Cluster Manager

  

Note:

You will only see services listed that run in their own process, e.g. SQL or Exchange. The IIS and Print Spooler services are exempt from this rule. You can, however, set up monitoring for any NT service using the Windows PowerShell® cmdlet Add-ClusterVMMonitoredItem, with no restrictions:

 Add-ClusterVMMonitoredItem -VirtualMachine TestVM -Service spooler

How does VM Monitoring work?

When a monitored service encounters an unexpected failure, the sequence of recovery actions is determined by the Recovery actions on failure for the service. These recovery actions can be viewed and configured using Service Control Manager inside the guest. In the example below, on the first and second service failures, the service control manager will restart the service. On the third failure, the service control manager will take no action and defer recovery actions to the cluster service running in the host.
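Those per-service recovery actions can also be set from a command line inside the guest. A hedged sketch using sc.exe (the Print Spooler and the intervals are just examples; an empty action segment means "take no action"):

 # Restart the spooler on the first and second failures, do nothing on the
 # third, and reset the failure counter after one day (86400 seconds).
 sc.exe failure spooler reset= 86400 actions= restart/60000/restart/60000//0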

The cluster service monitors the status of clustered virtual machines through periodic health checks. When the cluster service determines that a virtual machine is in a “critical” state, i.e., an application or service inside the virtual machine is in an unhealthy state, the cluster service takes the following recovery actions:

1)      Event ID 1250 is logged on the host

a.       This event can be monitored with tools such as System Center Operations Manager to trigger further customized actions 

2)      The virtual machine status in Failover Cluster Manager will indicate that the virtual machine is in an “Application Critical” state.

Note:  

  •          Verbose information is logged to the Cluster debug log for post-mortem analysis of failures.
  •          The StatusInformation resource common property for a virtual machine in “Application Critical” state has the value 2 as compared to a value of 0 during normal operation. The Windows PowerShell® cmdlet Get-ClusterResource can be used to query this property.

Get-ClusterResource "TestVM" | fl StatusInformation

3)      Recovery action is taken on the virtual machine in “Application Critical” state

a.       The virtual machine is first restarted on the same node

Note: The restart of the virtual machine is forced but graceful

b.      On the second failure, the virtual machine is restarted and failed over to another node in the cluster.

Note: The decision on whether to failover or restart on the same node is configurable and determined by the failover properties for the virtual machine.
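Those failover properties live on the virtual machine's cluster group and can be inspected or changed with the cluster cmdlets. A minimal sketch (the group name "TestVM" is a placeholder):

 # See how many failures are tolerated (FailoverThreshold) within what
 # window in hours (FailoverPeriod) before the group is left failed.
 Get-ClusterGroup "TestVM" | Format-List FailoverThreshold, FailoverPeriod

 # Example: allow two failures within six hours.
 (Get-ClusterGroup "TestVM").FailoverThreshold = 2
 (Get-ClusterGroup "TestVM").FailoverPeriod = 6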

 

That’s the VM Monitoring feature in Windows Server 2012 in a nutshell!

 

Subhasish Bhattacharya
Program Manager
Clustering & High Availability
Microsoft

Also refer to: Guest Clustering and VM Monitoring in Windows Server 2012

Eight Things to Like About Server 2012


In a recent newsletter I mentioned the “many wondrous things in Server 2012” or something like that, and my friend Rory Hamaker (who apparently only reads some of my newsletters) dropped me a line asking when I was going to get around to telling you all about those things.  There’s not space to do that here — it’ll take several days of class, a boatload of newsletters, or a book or two — but here are eight likeable things about Windows Server 2012, and with any luck you might not yet know some of them.

Password Peekaboo

Times have changed, and apparently it’s no longer very bright to use some of my favorite old passwords.  Turns out other people also use passwords like “password,” “batman” or “stopmakingmemakeuppasswordsallthetime.”  Who knew?  Okay, I did, if you read my stuff regularly, and of course it’s pretty much essential to have longer passwords and, better, passphrases.  So let’s suppose my password is “apepperandbattery.”  (Get it?  A modification of “assault and..” ah, well, it seemed clever at the time.  Also, notice none of that ridiculous “complex” password nonsense, also known as “passwords you can’t remember,” “passwords that you must type so slowly that anyone can see you type them,” and “the Help Desk Full Employment Act of 2012.”)  Now, that’s a perfectly fine password, but on some of today’s mushy keyboards, I’m not always 100 percent sure what I typed, and have always wished that every password dialog box had a check box that said, “listen, I promise I’m the only person in the room and so please let me see what I’ve just punched in.”  Well, apparently Microsoft heard my prayer.  Check out this little “eye” icon, the thing just to the left of the blue square with the right-pointing arrow:

Click and hold it, and you see what you’ve typed.  Hey, until we all go for smart cards — and do look into the Server 2012 “virtual smart card,” a topic for another day — this will have to do.

Hyper-V Replication

Running at least one important server?  Of course you are.  Running it as a virtual machine?  Naturally.  Worried about its host system failing?  You’d be crazy not to.  Can’t afford a Hyper-V cluster?  Neither can most of us.

Hyper-V Replication lets you protect an important VM (for this example, let’s call it WEB1) sitting on a Hyper-V server (call it HV1) by setting up a second Hyper-V server (call it HV2) to stand by, ready to take over with a nearby, frequently-updated copy of WEB1’s VHD files.  It doesn’t automatically fail over if HV1 dies — you’ve got to type a command or click a couple of buttons — but it’s quick, and it works.

There are a variety of ways to set up Hyper-V Replication, but in the simplest case, you

  1. Configure HV2 to be ready to replicate with HV1.  It’s a few check boxes in HV1’s Settings or a PowerShell command.
  2. Do the initial replication of HV1’s VHD(s) from HV1 to HV2.  VHDs are usually big — tens of gigabytes — and while it’s certainly possible to tell Hyper-V to do the initial copy over the Internet, choking your Internet bandwidth for an hour or so just to move a VHD that you may or may not ever need might not be the smartest idea.  Alternatively, you can just put a copy of the VHD(s) on an external hard drive, overnight it to wherever HV2 is, and then just ask someone at the remote site to plug the hard drive into HV2 and copy it over to that Hyper-V server. Or, for many of us, HV1 and HV2 are on the same site, which simplifies things further. Whether you stuff the VHD over the ‘Net, overnight it on some kind of media, or just copy it to another Hyper-V box on your intranet, you need to get that “initial replication” out of the way to get Hyper-V replication started.
  3. With HV2 set up as a replication host for WEB1 from HV1, just sit back and let the Hyper-Vs replicate.  HV1 periodically replicates changes to WEB1’s VHDs over to HV2, and in my experience it does it quite frequently — every few minutes or so.
  4. When HV1 dies, taking its virtual server “WEB1” with it, you then need only fire up the Hyper-V Manager, point it at HV2 and say, “you’re on, understudy!” and WEB1 will be back up and running in seconds.
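The same setup can be scripted with the Hyper-V replication cmdlets. A hedged sketch using the names from the example above (HV1, HV2, and WEB1 are placeholders, and your port and authentication choices may differ):

 # On HV2: allow it to receive replicas over Kerberos/HTTP.
 Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos

 # On HV1: enable replication of WEB1 to HV2 and push the initial copy.
 Enable-VMReplication -VMName WEB1 -ReplicaServerName HV2 -ReplicaServerPort 80 -AuthenticationType Kerberos
 Start-VMInitialReplication -VMName WEB1

 # When HV1 dies: on HV2, bring the replica online ("you're on, understudy!").
 Start-VMFailover -VMName WEB1
 Complete-VMFailover -VMName WEB1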

Of course, there are lots more details — will the server’s old static IP addresses work in the new location? — but that’s the outline.  It’s not fancy clustering, it doesn’t replace backing up VHDs, it’s sort of rough-and-ready, but honestly any small outfit using Hyper-V and not implementing this is just plain crazy. 

Dead-Simple Clusters

And speaking of fault tolerance…

Once upon a time, we thought we could create reliable server systems just by spending a lot of money on one server.  Then we realized that there’s no such thing as “always works,” and so started creating “clusters,” groups of servers that fail now and then, but almost never all fail at the same time, presenting an always-up face to the user community.  For example, if you’re reading this, I’m guessing that you subscribed to my mailing list.  The pages on my web site that let you subscribe or unsubscribe can’t work without a SQL database named “Readers” that I run on my server.  Now, I must admit that it’s not exactly the most heavily-worked database on the planet… I only do a mailing about once every three or four weeks… but what if the Readers database were something that absolutely needed to be up 99.99% of the time?

Well, as I said before, in the old days, I’d just resolve to buy the best server, with hand-picked RAM chips, extensive CPU cooling, the best hard drives money could buy, and so on… a single system built of good enough components that we could expect, say, a mean time between failures in the millions of hours.  But that’s expensive and hard to do — once I’ve purchased the server hardware, where do I get “more reliable electricity” or “dependable Internet?”  Nah, in the long run it just makes sense to hedge my bets and get backups for everything.  Dual power supplies.  Connections to more than one ISP.  Generators and UPSes.  Teamed NICs.  And… clusters. It’s sort of the server version of “two [or more] heads are better than one.”  It’s a great idea and has worked very well in a number of situations — ask anyone who built a cluster of VAX VMS systems in the mid-90s. (I think Ticketron may still be using those old boxes because hey, they never break).

Before clusters, I’d have that one high-powered SQL server reading and writing the database to and from an internally-attached hard disk.  In a clustered world, in contrast — at least, in a pre-Server-2012 clustered world — I buy two or more SQL servers and put the data not on any of their local hard drives, but instead on a separate network-attached special-purpose computer called a Storage Area Network or SAN device. To the SQL server’s operating system, however, the SAN device looks like a local hard disk for all intents and purposes.  Three things, however, distinguish a SAN disk from a local disk.  First, SANs are usually significantly more expensive on a dollar-per-gigabyte basis than local SAS or SATA drives.  Second, we put up with that price because hunks of data on a SAN can be simultaneously accessed by two or more computers — like our SQL servers — as so-called “shared storage.”  That’s important because I want to store my database in a place wherein both of my clustered SQL servers can get to it equally quickly, so if one of the SQL servers goes down then the other can pick up where it left off with as little switchover time as is possible.  While the idea of SANs has been around since the mid-90s, we’ve really only seen them in PC networks in any numbers since 2005 or so, and their existence has been a key ingredient in the growth of clusters.  Before that, we’d share data between two SQL servers by connecting them both to a bunch of hard drives via something called “asynchronous SCSI,” and I still remember the electric shocks you could get if you disconnected an ASCSI cable at the wrong moment!  So for clusters, we love SANs… we just hate having to pay for them.  Third, we can get a bit more reliability from a SAN if we spend a bunch more money to get a couple of SAN boxes that will replicate to each other, making our SAN not a single point of failure.

With a setup like that, I’d fire up my two SQL servers, put my Readers database files on the SAN, and run a wizard to create a cluster, using the Failover Cluster Manager.  The Cluster Manager does a bit of magic making my two SQL servers look like just one server to the other systems on the network, and what was once sql1.bigfirm.com and sql2.bigfirm.com becomes just sqlcluster.bigfirm.com or something like that. The result is that if one of those SQL boxes goes down — or just needs to reboot to apply a patch — then the other’s still up, and the imaginary “sqlcluster.bigfirm.com” machine looks like a pretty reliable device.

It’s a great answer, but man, is it expensive.  Besides the cost of SANs, you’d need Enterprise or Datacenter to build a cluster, and they were expensive.  The version of SQL Server that you need — Enterprise — is expensive, way out of the reach of most small to medium-sized organizations.  Heck, you’ll have an easier time finding a Unitarian at an “Invade Iraq now” rally than you will finding under-250-person shops with clustered SQL or Hyper-V.

With Server 2012, things get a bit more cluster-friendly for organizations of all sizes, for two important reasons.  First, as you already know if you read my newsletters, all of those nifty capabilities in the $4000 Enterprise Server made their way into the now-$882 Standard Server.  Second, you needn’t get a SAN to get shared storage, as you can store those shared SQL database files (or, if you wanted to make a Hyper-V cluster, some shared VHDX files) on a simple Server 2012 shared folder — the kind we’ve been building for decades.

Just to make that clear, let me restate it.  What was two copies of Enterprise on expensive systems coupled with a dedicated SAN device (or devices) is now three generic boxes each running a copy of Standard Server 2012.  Furthermore, the actual process of configuring a cluster has gotten a bit easier with every version of Server released in the past ten years.  I’m not exaggerating when I say that a few weeks ago I set up a couple of Hyper-V 2012 servers at home prior to getting on the road and, one morning, remoted from my hotel room back to my house, thinking, “I wonder how hard building a 2012 cluster could be?”  So, I fired up 2012’s Failover Cluster Manager, clicked a few times, and in three minutes had transformed my two Hyper-V boxes to a Hyper-V cluster watching over a virtual machine… worked the first time.  Of course, once I shut down one of the Hyper-V boxes to see if the failover would work properly — and it did! — I realized that my arms aren’t 3000 miles long, and so getting one of the Hyper-V boxes turned back on was a little hard.  Oh well.
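For the record, that three-minute cluster build boils down to just a couple of cmdlets. A minimal sketch with placeholder names (HV1, HV2, the cluster name, the IP address, and the witness share are all examples):

 # Sanity-check the nodes, then create the cluster.
 Test-Cluster -Node HV1, HV2
 New-Cluster -Name HVCLUSTER -Node HV1, HV2 -StaticAddress 192.168.1.50

 # Use a plain SMB file share as the witness instead of SAN storage.
 Set-ClusterQuorum -NodeAndFileShareMajority \\FS1\ClusterWitness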

It’s hard to say what 2012’s biggest feature is, but “cheap, simple clustering” is clearly on the short list, and the folks who worked on Server know that too.  Weeks (and, in one case, months) before Microsoft made their everything-goes-in-Standard-Server announcement, three different Microsoft employees leaked the news to me.  That’s how exciting it is.  If you’ve not messed with clusters or wanted to but couldn’t afford it, 2012 should change that.

Awesome New “Metro” Server UI

Bored with that same old Windows 2000-ish-looking GUI?  Well, you are going to love the new cryptic, Start Button-less user interf… okay, I’m kidding.  Actually, I like Metro, I’m just not a huge Start Screen fan and, yes, I know — many of you have already emailed me to tell me that you aren’t either.  Well, let me rephrase that.  A quick look around Amazon shows a number of $300-ish monitors that support multi-touch via an extra USB port, which sounds nice until you realize that you’d have to re-plumb your KVM infrastructure.  Guess I’ll wait for the Kinect… then I can just mount a video projector on the ceiling, wave my hands around and look like I’m running the NCC 1401-R or something like that.  (Okay, maybe that would be fun.) 

More PowerShell.  About Ten Times More.  You’ll Love It.  Really.

Okay, here I’m not kidding.  There are about ten times more PowerShell cmdlets in Server 2012 than there were for Server 2008R2, cmdlets to do all kinds of things.  And yes, I know, most of you haven’t gotten the PowerShell religion yet, and part of the reason for that is that we all so hate reading Help files.  Well, PowerShell for 2012 makes that a bit easier with a new cmdlet, “show-command.”  You pass it the name of a cmdlet that you want to use and it pops up a GUI with a simple form that you can fill out which then results in a working PowerShell command.  For example, suppose I want to create a virtual machine with “new-vm” but don’t want to have to figure out how to use the silly thing.  I just type

show-command new-vm

And a dialog box pops up that looks like this:

The asterisked fields are the ones that new-vm insists upon, and there are only two — NewVHDPath (where to store the virtual hard drive files for the new VM) and NewVHDSizeBytes (what’s the largest size in bytes that the VHD can grow to).  Fill in “c:\vhds\vm1\vm1.vhdx” for the first parameter and 40000000000 for the second parameter, click “Run” and show-command runs this on the command line:

new-vm -NewVHDPath c:\vhds\vm1\vm1.vhdx -NewVHDSizeBytes 40000000000

And in no time, you’ve got a VM running, and I know what you’re wondering — “Bytes?  Really?  Bytes? I gotta type 40000000000 to get a 40 gig drive?”  You can also add common suffixes, and so could have typed

new-vm -NewVHDPath c:\vhds\vm1\vm1.vhdx -NewVHDSizeBytes 40GB

Ever since PowerShell 1.0 came out, Microsoft has tried to make PoSH something easy to experiment with and find out about because, again, we’ve all been made to feel stupid by rotten CLI tool documentation in the past, and that doesn’t really motivate us to return to the command line.  For that reason, I understand why so many folks are allergic to CLI tools, but please permit me to take a moment to suggest that maybe PowerShell’s worth giving a look.  First, there’s show-command, a tool that I have to admit has made me use PowerShell’s Help quite a bit less in my 2012 work.  Second, PowerShell’s Help really is better than most of what you’ve seen, particularly when you add the “-example” parameter.  The examples in the PoSH cmdlets that I’ve used are syntactically correct over 97% of the time and are often non-trivial enough that they provide some creative suggestions on how to use the tool.  And third, there’s “reflectivity.”

Back when Jeff Snover first talked about PowerShell, he realized that people resist or lack the time to learn CLI tools, so he imagined Windows GUI tools all featuring a text window.  Whenever you pushed a GUI button to  create a user, restart a service, install an SSL certificate, or whatever, then in that text field, the corresponding PowerShell command would appear.  You could then copy that and save to a text file or whatever for reuse later — there’s a new feature in PowerShell 3.0 called “snippets” that would accommodate things like that — and thus slowly learn PowerShell in a series of sort of “just-in-time micro-training” sessions.  I’ve always liked the idea of GUI-to-CLI training wheels ever since I saw dBase IV do that years ago, and so couldn’t wait to see it in Windows GUI tools.  Unfortunately, the whole notion hasn’t caught on beyond Exchange and Virtual Machine Manager, although I can happily report that we Active Directory administrators in Server 2012 can now experience reflectivity in the new version of the Active Directory Administrative Center (ADAC).  Called the “PowerShell History Viewer,” it’s just a click of a button in the new ADAC.

So the gauntlet’s down, Microsoft coders!  How about a PowerShell History Viewer in regular old Hyper-V?  In the networking sections of the Control Panel?  (The networking team created a pile of new network cmdlets… it’d be nice to learn them the easy way.)  Or how about you storage guys?  Storage Spaces is cool, but sorting through the great new cmdlets is just a bit time-consuming now and then.

Network De-Duping

“De-duping,” a phrase made common in the IT business by storage folks, refers to anything that saves space by removing repetitive data.  For example, Exchange has had a de-duper called “single instance store” for ages, a tool that (if I understand it right, I’m not an Exchange guy) finds identical messages in an information store and eliminates them.  For example, imagine a case where someone sends the same 10 MB email to 100 people on a distribution list in an organization, chewing up 1GB in that information store.  SIS causes Exchange to only store one of these copies, replacing the other 99 with just a sort of “IOU” and a pointer to the one remaining copy.  It’s a sensible way to save space on a disk.

Windows Server 2012 added de-dup to its storage stack (it’s a role service under the file services role, and it works happily alongside the new Storage Spaces), and in short you just create a volume, enable de-dup (it’s a check box), and forget it.  The de-dup on a volume is nice in that it’ll find a chunk of data of any size and de-dup it, rather than simply de-duping identical files.  Again, de-dup is a nice feature with a bunch of knobs and buttons and I’ll be covering it in detail in the future, but what I’ve found sort of interesting is the network de-duplication.
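If you’d rather skip the GUI check box, the same per-volume de-dup can be driven from PowerShell.  A sketch, assuming the drive letter is yours to pick and the Data Deduplication role service isn’t installed yet:

```powershell
# One-time: install the Data Deduplication role service
Install-WindowsFeature -Name FS-Data-Deduplication

# Enable de-dup on a data volume, then check back later for savings
Enable-DedupVolume -Volume "D:"
Get-DedupStatus -Volume "D:" | Format-List Volume, FreeSpace, SavedSpace
```

Savings only appear after the background optimization jobs have had a chance to run, so don’t expect `SavedSpace` to move immediately.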

So let’s say you’ve got 20 VHDs, all system drives for virtual machines running Windows Server 2012.  You want to copy them from one Hyper-V server to another.  Now, each of these virtual servers does different things, but those things aren’t all that different, at least as far as the files on the VHD are concerned — a server running IIS and a server acting as a domain controller are about a 97 percent match in terms of files.  So even though these 20 VHDs are for different kinds of servers — web servers, DCs, WSUS boxes, file servers, DirectAccess servers — from a simple list-of-files-on-disk point of view, they’re all pretty much identical.  Thus, when you use robocopy, xcopy or (heaven forbid) Explorer to copy these 20 VHDs, which may collectively be three quarters of a terabyte in size, you’ll see that it seems amazingly fast, because you didn’t have to transfer 700-ish gigabytes… just about 40 or so.  There’s not a lot on the Web about it, and I haven’t been able to find any registry entries or the like to control it, but watch this space — there could be some neat stuff here.

4K Native Sector Support

You may know that data space on hard disks, DVDs, USB sticks, floppies, or basically any kind of data storage device has a “quantum,” a smallest area that can be written called a “sector.”  On floppies and hard disks since the 1970’s, sectors have typically been 512 bytes, and that made pretty good sense, as sector size ends up affecting a device’s smallest possible file size and thus the smallest practical file size for CP/M, DOS, Windows or Linux would have been 512 bytes.  (They’re all larger for other reasons.) What that means is that because “sector=smallest file size possible,” each of those different operating systems storing a four-byte file must of necessity actually take up 512 bytes on disk, effectively wasting 508 bytes.  No one would care about that much waste nowadays and in fact every modern file system that I know of would waste more than 508 bytes in a 512-byte file, but in the 80’s, people worried about that kind of stuff.  (Then again, a 320K floppy disk cost about a dollar in the early 80’s and in that light, the concern seems quite reasonable.)

Anyway, times have changed.  Storage is much cheaper so we don’t care about a little waste, but we do care about seeing ever-improving performance.  As any communication channel — including the ones from hard disks to CPUs — gets faster and cleaner, it makes sense to make the data blocks transferred on that channel bigger to deliver better overall throughput.  (Also, the fact that we’re crunching more and more data onto a given physical area on a disk platter makes engineering small 512-byte sectors inefficient.)  That’s why many hard disks nowadays actually use much larger, 4096-byte (4K) sectors.  Older versions of Windows don’t understand 4K sectors, though, and so to keep Windows happy, 4096-byte drives must waste some time emulating 512-byte sector drives.  Even with the overhead of 512-byte emulation, though, 4K sectors still make for sufficiently increased performance that it’s worth making and selling 4K drives in 512-byte drives’ clothing, so to speak.  Whether you know it or not, you’ve been buying so-called “512e” hard drives for quite some time, so it’s a shame that you’ve not been able to extract the full speed of those drives.  As you’ve probably guessed by now, however, there’s good news:  Windows 8 and Server 2012 finally come with native support of 4K drives.  That should offer better performance at a somewhat lower price.
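You can see what a given drive reports on Windows 8/Server 2012 with fsutil.  A sketch, run from an elevated prompt (the drive letter is an example); a 512e drive reports 512 logical bytes but 4096 physical, while a 4K native drive reports 4096 for both:

```powershell
# Show logical vs. physical sector sizes for the disk behind C:
fsutil fsinfo sectorinfo C:
# In the output, compare:
#   LogicalBytesPerSector               (what the OS addresses)
#   PhysicalBytesPerSectorForAtomicity  (what the platter actually writes)
```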

Real, Simple, Cheap, In-the-Box DHCP Failover

DHCP’s great.  A simple protocol that hands out IP addresses to all of our Internet-connected devices.  Easy to set up, simple to administer.  But there’s just one little annoyance:  fault tolerance.  How do you provide for the uncommon but annoying situation where your DHCP server’s down for some reason?  Well, we all know the 80/20 thing, where you give 20 percent of a scope to a second DHCP server, but it’s not really kosher from an RFC point of view and it’s kind of kludgy to manage.  Of course, you could set up a cluster — you know, two Enterprise boxes and all that — but it’s expensive.  What to do?

A draft RFC cooked up back in 2003 about DHCP failover never really went anywhere, but it had some good ideas, and so the DHCP folks at Microsoft put it in Server 2012.  It’s simple and easy to set up, letting you create pairs of DHCP servers that look out for each other, either failing over entire scopes or sharing the responsibility for handing out IP addresses in a given scope.  If you start installing 2012 systems into your network and intend to have them replace existing Windows-based DHCP servers, I strongly recommend that you look into DHCP failover as soon as possible.  The GUI makes it simple, but there are some good command-line tools to set it up as well.  (Hint:  from PowerShell, type “get-command *-dhcpserverv4failover”.)
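Following that hint, a minimal load-balanced setup might look like the sketch below.  The server names, scope ID, and shared secret are examples, not anything prescribed:

```powershell
# Create a 50/50 load-balanced failover relationship for one scope
Add-DhcpServerv4Failover -ComputerName dhcp1.contoso.com `
    -Name "HQ-Failover" -PartnerServer dhcp2.contoso.com `
    -ScopeId 10.0.0.0 -LoadBalancePercent 50 `
    -SharedSecret "S3cret!" -AutoStateTransition $true

# Verify the relationship and its state
Get-DhcpServerv4Failover -ComputerName dhcp1.contoso.com
```

For a hot-standby arrangement instead of load balancing, the same cmdlet takes a reserve-percentage parameter in place of the load-balance split.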

Microsoft Windows Server 2012 Tips – Thanks to Jason Boche

One of the benefits of working for Dell Compellent is having the privilege to collaborate with some very smart people who are subject matter experts in areas of technology I don’t get to spend as much time on as I’d like.  I get to share information with team members about vSphere, as well as Exchange, SQL, *nix, Oracle, and you might have guessed it… Microsoft Windows (including Hyper-V).  One of my colleagues has been working with Windows Server 2012 lately and he drew up a quick guide on some of his findings.  Not only was he gracious enough to share it with his teammates, he was more than happy to share it with the community when asked.  When I say community, of course I’m referring to readers of this blog.  So without further ado, here are some Windows Server 2012 (and perhaps even Windows 8) tips to get you started.

Navigating the New Server 2012 GUI

The look and feel of the Server 2012 GUI is quite different than Server 2008. While most of the familiar options and features are still available, the process of getting to them is quite different, and in some cases, more difficult.

1)      The “Start” button no longer exists in Server 2012.  To expose Start, jiggle your mouse in the lower left corner of the desktop and the Start option will appear as shown above.  This is a bit cumbersome in RDP sessions and takes some getting used to.

2)      The Start Menu presents applications and other options as tiles.

3)      To access Lock and Sign out, click on the User in the upper right for a drop-down menu.

4)      To access All Applications, right-click on any tile under Start, and then an options bar will appear at the bottom of the screen.  On this options bar, click on All Apps in the lower right.

5)      Under All Apps, you can find all the rest of the familiar (but now more difficult to find) options such as Command Prompt and Run.  To make these more easily accessible, pin them to the taskbar.

6)      Another hidden menu exists off the right side of the desktop.  To access it, move your mouse to the far right or lower-right corner of the screen and hold it there for a couple of seconds.  Again, this is cumbersome in RDP sessions and takes some getting used to.

7)      The Restart and Shut down options are now buried a few layers deep, so accessing them is a bit tedious.  Some customization suggestions below will help alleviate this.

8)      To stop the Server Manager window from automatically starting every time you log on, edit the Server Manager Properties and check the box Do not start Server Manager automatically at logon.
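That same setting can be scripted; under the hood it is a per-user registry value.  A sketch, using the value name Server Manager writes in Server 2012:

```powershell
# Stop Server Manager from opening at logon for the current user
New-ItemProperty -Path "HKCU:\Software\Microsoft\ServerManager" `
    -Name "DoNotOpenServerManagerAtLogon" -PropertyType DWord -Value 1 -Force
```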

Customizations to Facilitate Better User Experience with Server 2012

You may find yourself a little frustrated with the changes introduced with the Server 2012 GUI because many apps/options/tools have been relocated and are therefore more difficult (and more time consuming) to find.

Below are some quick and simple customization changes to “restore” some of the Server 2008 look/feel/agility to the 2012 GUI.

1)      The first step is to install the Desktop Experience feature, found under Features.  Once installed, the (My) Computer icon can be added back to the desktop.

a)      Launch Server Manager from the taskbar.

b)      Click on Add roles and features to launch the Add Roles and Features Wizard.  Under Features, check the box for Desktop Experience and then complete the wizard (requires a reboot).
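If you prefer, the wizard clicks boil down to one cmdlet.  A sketch:

```powershell
# Install Desktop Experience and reboot automatically to finish
Install-WindowsFeature -Name Desktop-Experience -Restart
```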

c)       After rebooting, right-click the Desktop and choose Personalize, then Change Desktop Icons, and add the desired icons such as Computer and Control Panel.

d)      Right-click on the Desktop again, and under View, set icon size to Small, and set the Auto Arrange and Sort By options according to your preference.

2)      Customize the taskbar by pinning shortcuts for I.E., Run, Command Prompt, and other frequently used apps (as found under Start and All Apps) that you want to be quickly accessible.  For directions on how to access the Start and All Apps menus, see the navigation section above.

3)      Right click on the taskbar, select Properties, and select Use Small taskbar buttons, and under the Toolbars tab, add the Desktop toolbar.

4)      If you desire to add the Background Info (BGI) utility to your Windows 2012 server desktop, then complete the following steps:

  • From your network share or software repository containing BGInfo, copy the folder BGInfo to C:\BGInfo.  Edit the BGInfo.bgi config file to customize the BGInfo settings if desired.  (This is the latest 64-bit version of BGInfo.)

  • To automatically refresh BGInfo each time you log on to the server, add a registry key (string value) called BGInfo with a value of C:\BGInfo\LaunchBGI.bat to HKLM\Software\Microsoft\Windows\CurrentVersion\Run.
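That registry edit can be scripted rather than done by hand in regedit.  A sketch, assuming the C:\BGInfo layout described above:

```powershell
# Run LaunchBGI.bat at every logon by adding it to the machine Run key
New-ItemProperty -Path "HKLM:\Software\Microsoft\Windows\CurrentVersion\Run" `
    -Name "BGInfo" -Value "C:\BGInfo\LaunchBGI.bat" -PropertyType String -Force
```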

  • If using mRemote, change the Display Wallpaper setting to Yes under the configuration settings for your server (the default setting is No).  Otherwise the BGInfo screen will not be passed to your display.

5)      To work around the cumbersome process of having to navigate to log-off, shutdown, or reboot commands under the hidden menus, place shortcuts to these operations on the Server 2012 desktop.  To make this process quick and easy, pre-defined shortcuts can be saved on a network share and copied down to each server installation.
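If you are building those shortcuts from scratch rather than copying pre-made ones down, these are the underlying commands the shortcuts can point at (the `/t 0` switch skips the countdown):

```powershell
shutdown /s /t 0    # Shut down immediately
shutdown /r /t 0    # Restart immediately
logoff              # Log off the current session
```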

  • From the network share, copy the desktop shortcuts to Libraries\Documents\Public Documents on your 2012 server.

  • Once copied, open the Desktop_Icons folder, and copy and paste the icons found there to the public desktop (a hidden folder), which can be accessed at C:\Users\Public\Desktop (manually type this path in Windows Explorer to get to it).
  • Add or create other shortcuts as desired here so they will show on the public desktop.
  • By placing them on the public desktop, they will be there for all users, and will be preserved even when the server is sysprepped.
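The copy itself is easy to script as part of a build routine.  A sketch; the share path is an example, not a real location:

```powershell
# Pull the pre-built shortcuts down to the public (all-users) desktop
Copy-Item "\\fileserver\builds\Desktop_Icons\*.lnk" `
    -Destination "C:\Users\Public\Desktop" -Force
```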

6)      When finished, your desktop will look something like this:

  • (My) Computer and Control Panel icons added to the desktop
  • Shutdown, Logoff, and Restart icons (which are shortcuts to the shutdown command) added to the desktop.  This is much quicker than having to access these options from the hidden menus on the left or right sides of the desktop, and it skips having to provide a reason for shutting down.
  • Shortcut to launch Disk Manager added to the desktop (add other shortcuts as desired)
  • Shortcuts to I.E., Run, and Command Prompt added to the taskbar
  • Desktop toolbar added to the taskbar
  • Background Info (BGInfo) provides for a blue background with the server name and other essential server specs on the desktop.  This will automatically refresh at each logon due to adding LaunchBGI.bat to Run in the system registry, and it can be refreshed manually at any time by clicking on the LaunchBGI icon on the public desktop.

Sysprep Suggestions

1)      When building a new gold image of a Windows 2012 server, include the above customizations before running Sysprep so that cloned copies boot with these modifications in place.  Most of the changes will be preserved in the sysprep image, saving configuration time.

2)      Other suggested modifications you may want to consider making to a Windows 2012 image before sysprepping it for use as a gold image include:

  1. Enable RDP
  2. Install Adobe Reader
  3. Using Roles and Features, install .Net 3.5 (set the path to <driveletter or UNC path>\sources\sxs when prompted); Failover Clustering, MPIO, and Hyper-V
  4. Disable the firewall
  5. Disable I.E. security
  6. Disable User Account Control security (set to never notify)
  7. Fully patch the server
  8. If a physical server, run the applicable driver and firmware management/update utility to apply the latest drivers and firmware.
  9. Set the time zone to Central
  10. Install JRE (version of your choice, both the 32bit and 64bit versions)
  11. Other apps and features as desired
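Several of the items above can be folded into one script run against the image before sysprep.  A sketch; the feature names are as they appear in Server 2012, and the .NET source path is an example:

```powershell
# 1) Enable RDP
Set-ItemProperty "HKLM:\System\CurrentControlSet\Control\Terminal Server" `
    -Name fDenyTSConnections -Value 0

# 3) .NET 3.5 (needs install media), plus clustering, MPIO, and Hyper-V
Install-WindowsFeature NET-Framework-Core -Source "D:\sources\sxs"
Install-WindowsFeature Failover-Clustering, Multipath-IO, Hyper-V `
    -IncludeManagementTools

# 4) Disable the firewall on all profiles
Set-NetFirewallProfile -Profile Domain,Private,Public -Enabled False

# 9) Set the time zone to Central
tzutil /s "Central Standard Time"
```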

Windows Server 2012: IT Pros Will Need WS-MAN Remoting Skills (And Not Just for PowerShell) — Redmondmag.com

I’m seeing a worrying trend in the world of Microsoft IT. Let’s politely call it the “head in the sand” phenomenon. My theory is that it comes from such a long period — around a decade, really — of relatively few major OS-level changes, especially in the Server version of Windows. Not that Windows 2008 didn’t feature improvements over 2003, or that R2 didn’t improve upon that, but they were largely incremental changes. They were easy to understand, easy to incorporate, or if they didn’t interest you, easy to ignore.

That’s not the case with Windows Server 2012, and I’m worried because I’m not seeing IT decision makers and IT teams really engaged with what’s coming. The “oh, we’re not moving to 2012” argument doesn’t hold a lot of water with me because you never know. It’s easy to have one or two servers creep in, often to support some other need, and before long you’ve got a lot of ’em.

Specifically, I’m worried about the lack of attention being paid to WS-MAN.

WS-MAN: Not Just for PowerShell
WS-MAN is the protocol that underlies PowerShell Remoting, and it’s been available for Windows XP, Windows Vista, Windows 7, Windows Server 2008, Windows Server 2003 and Windows Server 2008 R2 for a few years now. I think many IT shops have felt comfortable ignoring it because it didn’t push itself on you. If you wanted it, you learned about it before using it; if you didn’t want it, you just ignored it.

That goes away for Windows Server 2012. It enables PowerShell Remoting — and thus WS-MAN — by default, because it needs it. Server Manager, you see, has been rebuilt to run on top of PowerShell. And even if you open Server Manager right on the server console, it still needs Remoting to “talk to itself” and make configuration changes. That pattern will grow more and more common as Microsoft starts shifting management tools to PowerShell. In fact, Remoting makes it much easier for developers to create rich GUIs, built on PowerShell, that manage remote servers. By not distinguishing between “local” and “remote,” developers ensure a consistent experience either way — and help enable headless servers, a direction in which Microsoft is most assuredly heading.

So the idea of “well, we don’t use Remoting, so we’ll shut it off” doesn’t work anymore — it’d be about as effective as just shutting off Ethernet.  You can’t manage new servers without it, so it’s time to start focusing on understanding WS-MAN and creating a place for it in your environment.  Now, while you’ve got time to plan, rather than later, when it’s a foregone conclusion and it has just snuck its way — uncontrolled and unmanaged — into your environment.

Learning WS-MAN
Start by reading “Secrets of PowerShell Remoting,” a free guide I put together with the help of fellow MVP Tobias Weltner. There’s even an entire chapter on WS-MAN’s security architecture, and answers to common security-related questions.

Practice setting up Remoting on your existing machines, even in a lab, so that you can become familiar with it. After all, if Win2012 is going to make you use Remoting, you might as well take advantage of it for other servers too — and reduce your management overhead.
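A lab warm-up along those lines might be as simple as the sketch below; the server name is an example:

```powershell
# On each machine you want to manage (from an elevated prompt):
Enable-PSRemoting -Force

# From your admin box: confirm WS-MAN answers, then try a session
Test-WSMan -ComputerName server01
Invoke-Command -ComputerName server01 -ScriptBlock { Get-Service WinRM }
Enter-PSSession -ComputerName server01
```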

Don’t think of WS-MAN as another protocol to deal with — think of it as enabling fewer protocols, as it starts to phase out Remote Procedure Calls (RPCs) and the other scattershot protocols that Windows has relied upon for years.

Will there be security concerns about WS-MAN? Assuredly. Interestingly, many of the questions and concerns I’ve heard raised have substantially poorer answers when asked of our existing management protocols. When it comes to WS-MAN, people ask about the security of credentials, the privacy of the communications, and so on — but I’ve never heard those questions raised about RPCs, which is what’s mainly running your network right now. Keep that in mind: it’s completely reasonable to ask the hard questions, but don’t set a bar for security that you’ve never, ever met before without at least acknowledging that you’re doing so.

And keep in mind that WS-MAN isn’t optional. I’ve had folks tell me that their “IT security will never allow it.” Doesn’t matter what IT security thinks: This thing is coming and it’s mandatory for server management. Wrap your head around it now or later — although “now” will let you learn the protocol and make it a welcome part of your environment.

Is Microsoft Crazy?
Maybe. Have you seen Ballmer jumping around at conferences? That’s crazy. But more to the point, is Microsoft crazy in introducing a new management protocol that supports encryption, compression, delegated authentication, secure delegation of credentials, mutual authentication and that only requires a single HTTP(S) port rather than entire ranges?

Um… doesn’t sound crazy.

Is Microsoft crazy for replacing a set of 20-year-old protocols with something newer, more manageable and more extensible? Yes — in much the same way that replacing MS-DOS with Windows was crazy.

I’m not here to justify what MS is doing with the product; that’s up to MS. I’m here to help people understand where they’re going, so that we can be prepared. You don’t have to like it, or agree with it, but you will have to deal with it. Better, I think, to start understanding it now than to wait until it’s snuck in and is an uncontrolled part of the environment.