Windows Server 2012 Hyper-V: Convert VHD to VHDX

Hyper-V Virtual Hard Disk Format Overview

Updated: February 29, 2012

Applies To: Windows Server 2012

As enterprise workloads for virtual environments grow in size and in performance demands, virtual hard disk (VHD) formats need to accommodate them. Hyper-V in Windows Server 2012 introduces a new version of the VHD format called VHDX, which is designed to handle current and future workloads.

VHDX has a much larger storage capacity than the older VHD format. It also provides data corruption protection during power failures and optimizes structural alignments of dynamic and differencing disks to prevent performance degradation on new, large-sector physical disks.


The new VHDX format in Windows Server 2012 addresses the technological demands of an evolving enterprise by increasing storage capacity, protecting data, and ensuring quality performance on large-sector disks.

The main new features of the VHDX format are:

  • Support for virtual hard disk storage capacity of up to 64 TB.
  • Protection against data corruption during power failures by logging updates to the VHDX metadata structures.
  • Improved alignment of the virtual hard disk format to work well on large sector disks.

The VHDX format also provides the following features:

  • Larger block sizes for dynamic and differencing disks, which allows these disks to attune to the needs of the workload.
  • A 4-KB logical sector virtual disk that allows for increased performance when used by applications and workloads that are designed for 4-KB sectors.
  • The ability to store custom metadata about the file that the user might want to record, such as operating system version or patches applied.
  • Efficiency in representing data (also known as “trim”), which results in smaller file size and allows the underlying physical storage device to reclaim unused space. (Trim requires physical disks directly attached to a virtual machine or SCSI disks, and trim-compatible hardware.)
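To put the capacity jump in perspective: the VHD format tops out at 2,040 GB, while VHDX goes to 64 TB. A quick sketch of that limit check (the helper function is purely illustrative, not any Hyper-V API):

```python
# Documented maximum sizes of the two virtual hard disk formats.
VHD_MAX_BYTES = 2040 * 1024**3   # VHD tops out at 2,040 GB (just under 2 TB)
VHDX_MAX_BYTES = 64 * 1024**4    # VHDX supports up to 64 TB

def fits(format_max: int, requested_bytes: int) -> bool:
    """Return True if a disk of the requested size fits within the format limit."""
    return requested_bytes <= format_max

# A 10 TB data disk is only possible with VHDX:
ten_tb = 10 * 1024**4
print(fits(VHD_MAX_BYTES, ten_tb))   # False
print(fits(VHDX_MAX_BYTES, ten_tb))  # True
```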


With Windows Server 2012, Microsoft released a new virtual disk format called VHDX. VHDX improves on the older VHD format in a number of ways.

Back in October I wrote a blog post on the improvements of the VHDX format in the Windows Server 8 Developer Preview. Back then VHDX supported a maximum size of 16 TB; with the release of the Windows Server 8 Beta (the Windows Server 2012 beta), the maximum size changed to 64 TB.


You can download the VHDX Format Specification.

To use these new features, you have to convert your existing VHDs to the new VHDX format. You can do this in two different ways: with Hyper-V Manager or with Windows PowerShell.

Convert VHD to VHDX via Windows PowerShell

To convert a VHD to a VHDX with Windows PowerShell, you can simply use this command:

 Convert-VHD TestVHD.vhd -VHDFormat VHDX -DestinationPath C:\temp\VHDs\TestVHDX.vhdx -DeleteSource 

Of course you can convert the VHDX back to a VHD using the following command:

 Convert-VHD TestVHDX.vhdx -VHDFormat VHD -DestinationPath C:\temp\VHDs\TestVHD.vhd -DeleteSource 


Convert VHD to VHDX via Hyper-V Manager

  1. Start Hyper-V Manager and click “Edit Disk…”.
  2. Select the VHD you want to convert.
  3. Select “Convert”.
  4. Select the target format, in this case VHDX.
  5. Select the location for the new VHDX.
  6. Check the summary and click Finish.


As with the PowerShell command, you can also convert a VHDX back to a VHD, but you have to make sure that the VHDX is not bigger than 2 TB, the maximum size of the VHD format.

Link to original posts:

Aviraj Ajgekar already did a post on the TechNet blog about how you can convert a VHD to VHDX via Hyper-V Manager.

Understanding Network Location and Tags in SCVMM

Network Location: 

This represents the logical network identity for an individual physical network adapter. The Network Location Awareness (NLA) service provider on the Windows-based virtual machine host is used to return this information for every active network connection. NLA first identifies the logical network by DNS domain name. If that fails, information stored in the registry is used. If that information is not available, the subnet address is used. VMM stores the identified logical network as part of the virtual machine host object, and you can query the network adapter of the virtual machine host for this information. When deploying a virtual machine to a host, VMM matches the desired logical network with the actual logical network advertised by the host. You can also override the default logical network discovered using NLA and specify your own logical network string.
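The discovery order described above (DNS domain name first, then registry information, then subnet) is a simple fallback chain. A minimal sketch of that logic; the function and field names here are illustrative, not VMM's actual API:

```python
def resolve_network_location(adapter: dict) -> str:
    """Resolve a logical network name for an adapter using the
    NLA-style fallback order: DNS domain -> registry -> subnet."""
    for candidate in (adapter.get("dns_domain"),
                      adapter.get("registry_location"),
                      adapter.get("subnet")):
        if candidate:
            return candidate
    return ""  # disconnected or undiscoverable adapters get an empty location

# A domain-joined adapter resolves by DNS domain; one with no domain
# or registry data falls back to its subnet address.
nic = {"dns_domain": None, "registry_location": None, "subnet": "10.0.1.0/24"}
print(resolve_network_location(nic))  # 10.0.1.0/24
```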

Network Tag: 

You can decorate a virtual network with additional metadata called Network Tag. This property is user specified and so is not automatically populated as Network Location is. Tagging a virtual network is important if you need to differentiate two networks that return the same logical network information.


For more information see the following topics in the SCVMM 2008 TechNet library:

How to Configure Network Adapters for a Virtual Machine

Network location. 

This setting is used by placement to determine equivalence between virtual networks across different hosts. Virtual networks determine their location from the host network adapter associated with them. You can link multiple host network adapters to multiple virtual networks and have one address set as the location. This allows a virtual machine to move and retain the correct connectivity.

Network tag. 

A tag is a virtual network property that allows you to define precise constraints on the network access of a virtual machine. For example, a host may have two network adapters, both with connectivity to the same network, but one dedicated to a Backup network. The Backup virtual network can be configured to have “Backup” as its tag.


Frequently Asked Questions: Virtual Networks in VMM

How are network locations created, and can I change them?

For Windows-based computers in an Active Directory domain, network locations are automatically discovered as the Domain Location. For network adapters that are disconnected, are not connected to a domain, or are on a host that is not in a domain, the network location is empty. You can configure different network locations for physical network adapters on a host by configuring settings on the Hardware tab of the Host Properties dialog box or by using the Set-VMHostNetworkAdapter cmdlet with the -NetworkLocation parameter in the Windows PowerShell – Virtual Machine Manager command shell.

During virtual machine placement, how does VMM consider the networking needs of a virtual machine?

While creating or deploying a virtual machine, you can specify the network location to which to connect the virtual machine. During virtual machine placement, VMM checks the virtual machine’s networking requirements against the networks configured on all managed hosts. Any host that does not have a network adapter configured with the specified location receives a zero-star rating.
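The placement check described above amounts to a simple filter: a host that cannot provide the requested network location is rated zero. A sketch of that rule (the star scale and parameter names are illustrative, not VMM's actual rating algorithm):

```python
def rate_host(host_locations: set, required_location: str, base_rating: int = 5) -> int:
    """Give a host a zero-star rating if no adapter on it is configured
    with the network location the virtual machine requires."""
    if required_location not in host_locations:
        return 0
    return base_rating

# A host advertising the required location keeps its rating;
# a host without it is excluded from placement with zero stars.
print(rate_host({"corp.contoso.com", "Backup"}, "corp.contoso.com"))  # 5
print(rate_host({"Backup"}, "corp.contoso.com"))                      # 0
```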


Planning for disaster recovery with Microsoft Hyper-V


Eric Beehler, Contributor

Anyone who has participated in a disaster recovery exercise is familiar with the pains of rebuilding an infrastructure from scratch. The little nuances of reinstalling and restoring application servers, bringing supporting services live and the possibility of bad or missing data can all cause an exercise to fail.

Virtualization removes the guesswork from standing up new production systems since you can use the systems you already have – the ones running in production and in good working order.

While VMware’s disaster recovery tools work fairly well, what if you are a Hyper-V shop and want to use Microsoft’s virtual machines (VMs) in your disaster recovery planning?

That’s possible too – if you take the appropriate steps to implement a plan of action.

Understanding your approach to disaster recovery

The first step in disaster recovery planning is figuring out your environment’s requirements. Is your organization dependent on a near-immediate Recovery Point Objective (RPO) — the point your data is restored to — or is your RPO around the mark of your last backup?

In addition, your Recovery Time Objective (RTO) — the time it takes to come back online — will impact your requirements for infrastructure and specialized software. Most likely, you have “tiered” your applications into categories of recovery — from mission-critical servers at a Tier 1 to nice-to-haves at a Tier 3 or Tier 4, for example.

Hyper-V can be supported in all of these scenarios, but the implementations — and their costs — are very different.

Non-critical servers and cold sites

Let’s start with the easiest of the servers — those that are considered non-critical or have a long RPO/RTO. This also includes a plan based on a cold-site scenario, where you would have to rebuild your servers and network from scratch and rely on backups at the cold-site recovery location.

The Hyper-V server can recover via a host server backup and a VM backup. The VM backup restoration is like a normal machine recovery, except that when a host is restored, all of the virtual hard disks (VHDs) and VM settings are restored as they were when you backed up. All the VMs must use Integration Services in order to access the Volume Shadow Copy Service (VSS) and have the VHDs come back in a consistent state. Certain applications, such as Microsoft Exchange, don’t support this kind of backup because of the risk of inconsistent databases. These applications may require your tried-and-true restoration methods. (Note that you need to reconfigure your network, even when restoring the host, so good documentation is necessary.)

A set of quality scripts that transfer backed-up hosts or VMs are good targets for your disaster recovery planning. It’s important to determine how the restore will get done and where those hosts land before you initiate it. Picking and choosing after you get there will only lead to confusion and a failed recovery.

Critical servers and aggressive recovery objectives

For applications that must be up — and stay up — turn to clustering or failover load balancing from one geographic location to another. This is an option for storage that’s being replicated across locations. SAN-based replication is done to off-site storage, which may or may not be going to a full remote data center. The storage is attached to the host when a disaster is declared.

With Hyper-V virtual machines — unlike with standard physical servers — VHDs can live on the replicated SAN. This allows for failover: simply bring the replicated SAN online and attach those VHDs to the host configurations with little data loss. Again, care must be taken because some applications will be in an unstable state and may require a true restore or another method of recovery.

In Hyper-V R2, you can use Cluster Shared Volumes (CSVs) with GeoClusters. This allows you to set up a Hyper-V cluster and span it across locations. While the benefit of this is automated failover, the downside is that it’s complex and requires certain hardware. In other words, you need to have a warm site for failover with server and SAN equipment to match. The key to this technology is that the storage is replicated across sites. Once again, though, there isn’t a Microsoft solution for this storage problem. Fortunately, products from SteelEye Technology, EMC and other vendors address this issue.

The future cloud option

Another option on the horizon involves using Microsoft’s Windows Azure, which may be headed toward supporting a form of disaster recovery without a site.

While a VHD can currently be moved into the Azure cloud, other issues — such as private networking and time of recovery — are still being worked on. In the future, although you may not be able to replicate your entire infrastructure, Windows Azure could prove to be valuable for certain components of the datacenter like websites and SQL Server databases.

Overall, while it’s true that the disaster recovery solutions for Microsoft Hyper-V haven’t yet reached the level of VMware, remember there is also quite a bit of scripting involved in a VMware restoration solution. Disaster recovery — even a low cost plan — can be developed with a little testing and a good understanding of your requirements. Become familiar with your options, as knowing how you manage your existing Hyper-V environment and planning the right strategy based on RTO and RPO objectives are crucial to success.

More on backup and recovery for virtualization

Avoid the big mistakes when backing up virtual servers

Top four Hyper-V virtualization problems that plague admins

Citrix Essentials automates disaster recovery process

Eric Beehler
 has been working in the IT industry since the mid-90’s, and has been playing with computer technology well before that. His experience includes over nine years with Hewlett-Packard’s Managed Services division, working with Fortune 500 companies to deliver network and server solutions and, most recently, I.T. experience in the insurance industry working on highly-available solutions and disaster recovery. He currently provides consulting and training through his co-ownership in Consortio Services, LLC.

Configuring VLANs for a flexible Hyper-V environment


One of the problems with today’s siloed IT departments is that they don’t scale very well to changing technology. The truth is, we are entering an age of tremendous overlap in IT, and virtualization is no exception.

When it comes to new Hyper-V virtualization projects, for example, system administrators are typically concerned with server issues like processing power and storage I/O. They usually don’t worry too much about the network.

But with virtual machines, the network doesn’t stop when the cable plugs into the network interface card (NIC). It extends into the inner workings of our hosts and virtual machines via the virtual switch. How you utilize this technology can affect the performance of your external storage network and even the security of your host. Additionally, how you approach the virtual network will determine the success of your virtual infrastructure as you expand and scale up your Hyper-V deployment.

When addressing a host, you need to determine your virtual network needs. Yes, some virtual machine deployments are simple, but if you are dealing with iSCSI storage, servers that require security or features like Live Migration, you need to consider how the configuration of the virtual network switch will impact performance and security.

Like physical switches, virtual switches have ports that connect the virtual NICs on the VMs. The virtual switch lives on the host, otherwise known as the parent partition. You can use it to do 802.1q trunking and tag virtual LANs (VLANs) to specific ports. What you can’t do is access the virtual switch as you would a Cisco physical switch because it doesn’t have its own operating system. Although the VMware platform has the option to add this kind of feature by purchasing a Cisco virtual switch, in many cases it hardly seems necessary.

What you do need is an understanding of your current network configuration, how specific settings are assigned and what a configuration change will do to your connectivity.

Working with VLANs

Let’s start with the VLAN, which is used in the network to provide separate subnets within the same physical infrastructure. Even though many server NICs have long supported VLAN tagging, it is rarely used. Now that you are hosting several servers on a single physical server, you need to be able to send traffic from a single NIC to separate subnets. So unless you are going to provide a separate physical NIC for virtual servers on each subnet, you’ll want to get familiar with VLANs. Note that when both the physical switch port and the virtual switch on your Hyper-V server support VLANs, you can break up that traffic without the need for separate physical ports.

You can set a VLAN ID for each virtual NIC from a virtual machine. This NIC then connects to a specific virtual switch that in turn connects to a physical NIC. All of the VLAN ID tagging will be passed through that physical port when trunking is enabled. You can also set a VLAN ID on the virtual switch, but it only signifies the VLAN ID used by the parent partition via that virtual switch. There is even a way to set your virtual NICs to talk on multiple VLANs if necessary, but that is a Windows Management Instrumentation (WMI) change not exposed in the GUI.

If you decide to connect virtual NICs using different VLAN IDs to the same virtual switch, you will not be able to network them together because the virtual switch does not do layer 3 routing. Instead, that traffic will pass to the external network. Once there, VLAN routing rules of the external network will determine how those machines communicate with each other.
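Because the virtual switch is a layer-2 device with no routing, the isolation described above can be modeled as one simple rule: a frame is delivered only to ports on the same VLAN, plus the trunk port that carries all VLANs out to the external network. A toy model of that rule (not Hyper-V code; port names and the VLAN-None-means-trunk convention are assumptions for illustration):

```python
def deliver(frame_vlan: int, source_port: str, ports: dict) -> list:
    """Return the ports that receive a frame on a VLAN-aware layer-2 switch:
    same VLAN ID only; a trunk port (VLAN None) carries every tagged VLAN."""
    return [p for p, vlan in ports.items()
            if p != source_port and (vlan == frame_vlan or vlan is None)]

ports = {"vm1": 10, "vm2": 10, "vm3": 20, "trunk": None}
# vm1 reaches vm2 (same VLAN) and the trunk toward the external network;
# vm3 on VLAN 20 is unreachable without an external router.
print(deliver(10, "vm1", ports))  # ['vm2', 'trunk']
```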

When working with VLANs it’s important to understand that you cannot make up your own standards when sending this tagging to your physical network. The VLAN IDs need to be set up at the switch and router level with full involvement of your network engineers. They will often have a predetermined configuration for VLANs that you will need to follow depending on the network need of the particular virtual machine.

Figure 1. Managing virtual network switches

In order to pass all of that VLAN tagging data to the virtual switch, you must use a physical NIC that supports 802.1q trunking and have a port on the switch where trunking is enabled. This trunking passes all of the VLANs, but the switch doesn’t allow the traffic to mix. They are really on different segments, so the trunking protocol has to be enabled to see the VLAN-tagged packets. Also, remember to set the VLAN of the switch to the VLAN the parent partition needs to communicate on.

Figure 2. The Virtual Network Manager

Virtual network rules to live by

Best practice dictates that your parent partitions should not mingle with the virtual machines. Putting the host on the same network as the VMs may expose the entire host system to security issues, which may in turn expose those virtual machines on the host. When configuring for this scenario, you should uncheck the box that reads “Allow management operating system to share this network adapter.” This means you need to give the host its own dedicated NIC apart from the NIC used to push virtual machine traffic.

So how many physical NICs do you really need on a typical Hyper-V server? If you plan to use trunking, iSCSI storage, Live Migration and employ best practices for security, you should have four. If you want to play it safe with NIC teaming, multiply this by two. That gives you:

  • a NIC for virtual machines using VLANs for any network segmentation needed
  • a NIC for management of the host
  • a NIC for the iSCSI connection, which leaves plenty of bandwidth free to send those storage packets
  • a NIC for Live Migration bandwidth needs, for when a machine moves all the contents of RAM from one machine to the other without exposing those contents to the regular network

VLANs and trunking add flexibility to where your virtual machines are hosted. Take advantage of these features, but remember to keep the settings straight with your networking team. You don’t want to cause an outage because of a misconfiguration. Make sure you are communicating your needs to them so that they can provide you with matching configurations on their physical switches.

Eric Beehler has been working in the IT industry since the mid-90’s, and has been playing with computer technology well before that. He currently provides consulting and training through his co-ownership in Consortio Services, LLC.

More on deploying Hyper-V

Five mistakes to avoid

Provisioning for success

Planning for disaster recovery

Security best practices

How does Storage Migration actually work? – as explained in Virtual PC Guy’s Blog – MSDN Blogs



How does Storage Migration (in Windows Server 2012) actually work?

The process followed by Hyper-V internally to perform a storage migration is actually quite simple to explain (though obviously quite tricky to actually make work in code) and is as follows:

Step 1: We start with a virtual machine that is reading and writing to a virtual hard disk file (.VHDX in the diagram, but storage migration is supported for both .VHDX and .VHD files).


Step 2: After the user selects to perform a storage migration, we immediately create a new virtual hard disk in the requested destination.  We continue to read and write to the source virtual hard disk – but any new write operations are also mirrored to the new virtual hard disk.


Step 3: We perform a single-pass operation to copy the data from the source virtual hard disk to the destination virtual hard disk.  While this copy is happening, we still continue to mirror writes to both disks.  We also keep track of uncopied blocks that have already been updated through a mirrored write – and make sure to not needlessly copy that data again.


Step 4: Once the copy operation is complete – we switch the virtual machine to be running only on the destination virtual hard disk.


Step 5: We delete the source virtual hard disk and the migration is now complete.


One important note to make here – we are very careful to not delete the source virtual hard disk until after the virtual machine is successfully running on the destination virtual hard disk.  This way if there is an error at any point in the storage migration – we can always fail back to running off of the source virtual hard disk.
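The five steps above boil down to: mirror new writes to both files while a single background pass copies the remaining blocks, skipping any block the mirror has already written. A compact simulation of that bookkeeping (a sketch only, not Hyper-V's implementation; disks are modeled as block-number-to-data dictionaries):

```python
def migrate(source: dict, dest: dict, live_writes: list) -> dict:
    """Simulate the copy phase of a storage migration: live writes are
    mirrored to both disks, and the single background pass skips any
    block the mirroring has already brought up to date."""
    mirrored = set()
    for block, data in live_writes:     # writes arriving mid-migration...
        source[block] = data            # ...still land on the source disk
        dest[block] = data              # ...and are mirrored to the destination
        mirrored.add(block)
    for block, data in source.items():  # one pass over the source disk
        if block not in mirrored:       # never recopy a mirrored block
            dest[block] = data
    return dest                         # only now is the source safe to delete

src = {0: "a", 1: "b", 2: "c"}
print(migrate(src, {}, [(1, "B")]) == {0: "a", 1: "B", 2: "c"})  # True
```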

Does a dynamic VHD have any space overhead – Server Fault


Basically, does a full (say) 1TB VHD file occupy more than 1TB of disk space on the host device?



Yes, there is space overhead and it depends on the type of VHD.

For a fixed VHD, there is a 512 byte footer after the raw disk image.

For a dynamic VHD, it’s a little more complicated. Besides the 512-byte footer (a copy of which is also stored at the front of the file), a dynamic VHD contains a 1,024-byte dynamic disk header, a block allocation table (BAT) with a 4-byte entry per data block, and a sector bitmap preceding each allocated block (512 bytes per block at the default 2 MB block size). The size of the file is therefore the actual data written plus this bookkeeping.

In your example, the fixed VHD would consume 1 TB + 512 bytes of total space. A fully allocated dynamic VHD of the same size would consume slightly more: roughly 1 TB plus about 258 MB of headers, BAT entries and per-block bitmaps.

You can read about the VHD specification here (Word .doc).
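To make the arithmetic concrete, here is a back-of-the-envelope calculation of the overhead based on the structures in the published VHD specification: a 512-byte footer (mirrored at the front of dynamic files), a 1,024-byte dynamic disk header, a 4-byte BAT entry per block, and one sector bitmap per allocated block, assuming the default 2 MB block size. A sketch, not a parser for real VHD files:

```python
SECTOR = 512

def fixed_vhd_size(data_bytes: int) -> int:
    """A fixed VHD is the raw disk image plus one 512-byte footer."""
    return data_bytes + SECTOR

def dynamic_vhd_size(data_bytes: int, block_size: int = 2 * 1024**2) -> int:
    """Worst-case (fully allocated) size of a dynamic VHD per the spec."""
    blocks = -(-data_bytes // block_size)    # ceiling division
    footer_copy = SECTOR                     # footer copy at the start of the file
    header = 1024                            # dynamic disk header
    bat = -(-blocks * 4 // SECTOR) * SECTOR  # 4-byte BAT entries, sector-padded
    bitmaps = blocks * SECTOR                # one 512-byte bitmap per 2 MB block
    return footer_copy + header + bat + bitmaps + data_bytes + SECTOR

one_tb = 1024**4
print(fixed_vhd_size(one_tb) - one_tb)    # 512
print(dynamic_vhd_size(one_tb) - one_tb)  # 270534656 bytes, about 258 MB
```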

ALL VHD files have overhead. A dynamic (growing) one does not however incur substantially more overhead than a preallocated (monolithic) one.

The chief difference is that the monolithic/preallocated style has a better chance of being contiguous on disk, which may improve performance. Additionally it avoids the underlying OS calls to expand the file, which improves write performance (more noticeable before it hits its maximum size).