Eight Things to Like About Server 2012
In a recent newsletter I mentioned the “many wondrous things in Server 2012” or something like that, and my friend Rory Hamaker (who apparently only reads some of my newsletters) dropped me a line asking when I was going to get around to telling you all about those things. There’s not space to do that here — it’ll take several days of class, a boatload of newsletters, or a book or two — but here are eight likeable things about Windows Server 2012, and with luck some of them will be new to you.
Times have changed, and apparently it’s no longer very bright to use some of my favorite old passwords. Turns out other people also use passwords like “password,” “batman” or “stopmakingmemakeuppasswordsallthetime.” Who knew? Okay, I did, if you read my stuff regularly, and of course it’s pretty much essential to have longer passwords and, better, passphrases. So let’s suppose my password is “apepperandbattery.” (Get it? A modification of “assault and…” ah, well, it seemed clever at the time. Also, notice none of that ridiculous “complex” password nonsense, also known as “passwords you can’t remember,” “passwords that you must type so slowly that anyone can see you type them,” and “the Help Desk Full Employment Act of 2012.”) Now, that’s a perfectly fine password, but on some of today’s mushy keyboards, I’m not always 100 percent sure what I typed, and I’ve always wished that every password dialog box had a check box that said, “listen, I promise I’m the only person in the room, so please let me see what I’ve just punched in.” Well, apparently Microsoft heard my prayer. Check out this little “eye” icon, the thing just to the left of the blue square with the right-pointing arrow:
Click and hold it, and you see what you’ve typed. Hey, until we all go for smart cards — and do look into the Server 2012 “virtual smart card,” a topic for another day — this will have to do.
Running at least one important server? Of course you are. Running it as a virtual machine? Naturally. Worried about its host system failing? You’d be crazy not to. Can’t afford a Hyper-V cluster? Neither can most of us.
Hyper-V Replication lets you protect an important VM (for this example, let’s call it WEB1) sitting on a Hyper-V server (call it HV1) by setting up a second Hyper-V server (call it HV2) to stand by, ready to take over with a nearby, frequently-updated copy of WEB1’s VHD files. It doesn’t automatically fail over if HV1 dies — you’ve got to type a command or click a couple of buttons — but it’s quick, and it works.
There are a variety of ways to set up Hyper-V Replication, but in the simplest case, you
- Configure HV2 to be ready to replicate with HV1. It’s a few check boxes in HV1’s Settings or a PowerShell command.
- Do the initial replication of WEB1’s VHD(s) from HV1 to HV2. VHDs are usually big — tens of gigabytes — and while it’s certainly possible to tell Hyper-V to do the initial copy over the Internet, choking your Internet bandwidth for an hour or so just to move a VHD that you may or may not ever need might not be the smartest idea. Alternatively, you can just put a copy of the VHD(s) on an external hard drive, overnight it to wherever HV2 is, and then ask someone at the remote site to plug the hard drive into HV2 and copy it over to that Hyper-V server. Or, for many of us, HV1 and HV2 are on the same site, which simplifies things further. Whether you stuff the VHD over the ‘Net, overnight it on some kind of media, or just copy it to another Hyper-V box on your intranet, you need to get that “initial replication” out of the way to get Hyper-V replication started.
- With HV2 set up as a replication host for WEB1 from HV1, just sit back and let the Hyper-Vs replicate. HV1 periodically replicates changes to WEB1’s VHDs over to HV2, and in my experience it does it quite frequently — every few minutes or so.
- When HV1 dies, taking its virtual server “WEB1” with it, you then need only fire up the Hyper-V Manager, point it at HV2 and say, “you’re on, understudy!” and WEB1 will be back up and running in seconds.
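For the PowerShell-inclined, the steps above can be sketched in a handful of cmdlets from Server 2012’s Hyper-V module. The server and VM names are this article’s examples; the Kerberos/port-80 authentication settings are just one reasonable configuration, not the only one:

```powershell
# On HV2: agree to accept replicas (the "few check boxes" step)
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos -ReplicationAllowedFromAnyServer $true

# On HV1: pair WEB1 with HV2, then kick off the initial replication
Enable-VMReplication -VMName WEB1 -ReplicaServerName HV2 `
    -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName WEB1

# Later, when HV1 dies: on HV2, tell the understudy it's on
Start-VMFailover -VMName WEB1
Complete-VMFailover -VMName WEB1
Start-VM -Name WEB1
```

Start-VMInitialReplication sends the whole VHD over the wire by default, so for the sneakernet option described above you’d point it at external media instead.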
Of course, there are lots more details — will the server’s old static IP addresses work in the new location? — but that’s the outline. It’s not fancy clustering, it doesn’t replace backing up VHDs, it’s sort of rough-and-ready, but honestly any small outfit using Hyper-V and not implementing this is just plain crazy.
And speaking of fault tolerance…
Once upon a time, we thought we could create reliable server systems just by spending a lot of money on one server. Then we realized that there’s no such thing as “always works,” and so started creating “clusters,” groups of servers that fail now and then, but almost never all fail at the same time, presenting an always-up face to the user community. For example, if you’re reading this, I’m guessing that you subscribed to my mailing list. The pages on my web site that let you subscribe or unsubscribe can’t work without a SQL database named “Readers” that I run on my server. Now, I must admit that it’s not exactly the most heavily-worked database on the planet… I only do a mailing about once every three or four weeks… but what if the Readers database were something that absolutely needed to be up 99.99% of the time?
Well, as I said before, in the old days, I’d just resolve to buy the best server, with hand-picked RAM chips, extensive CPU cooling, the best hard drives money could buy, and so on… a single system built of good enough components that we could expect, say, a mean time between failures in the millions of hours. But that’s expensive and hard to do — once I’ve purchased the server hardware, where do I get “more reliable electricity” or “dependable Internet?” Nah, in the long run it just makes sense to hedge my bets and get backups for everything. Dual power supplies. Connections to more than one ISP. Generators and UPSes. Teamed NICs. And… clusters. It’s sort of the server version of “two [or more] heads are better than one.” It’s a great idea and has worked very well in a number of situations — ask anyone who built a cluster of VAX VMS systems in the mid-90s. (I think Ticketron may still be using those old boxes because hey, they never break).
Before clusters, I’d have that one high-powered SQL server reading and writing the database to and from an internally-attached hard disk. In a clustered world, in contrast — at least, in a pre-Server-2012 clustered world — I buy two or more SQL servers and put the data not on any of their local hard drives, but instead on a separate network-attached special-purpose computer called a Storage Area Network or SAN device. To the SQL server’s operating system, however, the SAN device looks like a local hard disk for all intents and purposes. Three things, however, distinguish a SAN disk from a local disk. First, SANs are usually significantly more expensive on a dollar-per-gigabyte basis than local SAS or SATA drives. Second, we put up with that price because hunks of data on a SAN can be simultaneously accessed by two or more computers — like our SQL servers — as so-called “shared storage.” That’s important because I want to store my database in a place where both of my clustered SQL servers can get to it equally quickly, so if one of the SQL servers goes down then the other can pick up where it left off with as little switchover time as is possible. While the idea of SANs has been around since the mid-90s, we’ve really only seen them in PC networks in any numbers since 2005 or so, and their existence has been a key ingredient in the growth of clusters. Before that, we’d share data between two SQL servers by connecting them both to a bunch of hard drives via something called “asynchronous SCSI,” and I still remember the electric shocks you could get if you disconnected an ASCSI cable at the wrong moment! So for clusters, we love SANs… we just hate having to pay for them. Third, we can get a bit more reliability from a SAN if we spend a bunch more money to get a couple of SAN boxes that will replicate to each other, making our SAN not a single point of failure.
With a setup like that, I’d fire up my two SQL servers, put my Readers database files on the SAN, and run a wizard to create a cluster, using the Failover Cluster Manager. The Cluster Manager does a bit of magic making my two SQL servers look like just one server to the other systems on the network, and what was once sql1.bigfirm.com and sql2.bigfirm.com becomes just sqlcluster.bigfirm.com or something like that. The result is that if one of those SQL boxes goes down — or just needs to reboot to apply a patch — then the other’s still up, and the imaginary “sqlcluster.bigfirm.com” machine looks like a pretty reliable device.
It’s a great answer, but man, is it expensive. Besides the cost of SANs, you needed the Enterprise or Datacenter edition of Windows Server to build a cluster, and they were expensive. The version of SQL Server that you need — Enterprise — is also expensive, way out of the reach of most small to medium-sized organizations. Heck, you’ll have an easier time finding a Unitarian at an “Invade Iraq now” rally than you will finding under-250-person shops with clustered SQL or Hyper-V.
With Server 2012, things get a bit more cluster-friendly for organizations of all sizes, for two important reasons. First, as you already know if you read my newsletters, all of those nifty capabilities in the $4000 Enterprise Server made their way into the now-$882 Standard Server. Second, you needn’t get a SAN to get shared storage, as you can store those shared SQL database files (or, if you wanted to make a Hyper-V cluster, some shared VHDx files) on a simple Server 2012 shared folder — the kind we’ve been building for decades.
Just to make that clear, let me restate it. What was two copies of Enterprise on expensive systems coupled with a dedicated SAN device (or devices) is now three generic boxes each running a copy of Standard Server 2012. Furthermore, the actual process of configuring a cluster has gotten a bit easier with every version of Server released in the past ten years. I’m not exaggerating when I say that a few weeks ago I set up a couple of Hyper-V 2012 servers at home prior to getting on the road and, one morning, remoted from my hotel room back to my house, thinking, “I wonder how hard building a 2012 cluster could be?” So, I fired up 2012’s Failover Cluster Manager, clicked a few times, and in three minutes had transformed my two Hyper-V boxes into a Hyper-V cluster watching over a virtual machine… worked the first time. Of course, once I shut down one of the Hyper-V boxes to see if the failover would work properly — and it did! — I realized that my arms aren’t 3000 miles long, and so getting one of the Hyper-V boxes turned back on was a little hard. Oh well.
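That “clicked a few times” has a command-line equivalent, too. A minimal sketch using the FailoverClusters cmdlets, assuming the Failover Clustering feature is installed on both nodes; the cluster name and address here are made-up examples:

```powershell
# Sanity-check the candidate nodes first; the validation report
# flags anything cluster-unfriendly before you commit
Test-Cluster -Node HV1, HV2

# Create the cluster: two real boxes become one "imaginary machine"
New-Cluster -Name HVCLUSTER -Node HV1, HV2 -StaticAddress 192.168.1.50

# Make an existing VM a clustered, highly available role
Add-ClusterVirtualMachineRole -VirtualMachine WEB1
```

For the no-SAN scenario, the VM’s VHDx files would live on an ordinary Server 2012 file share rather than shared-storage hardware, just as described above.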
It’s hard to say what 2012’s biggest feature is, but “cheap, simple clustering” is clearly on the short list, and the folks who worked on Server know that too. Weeks (and, in one case, months) before Microsoft made their everything-goes-in-Standard-Server announcement, three different Microsoft employees leaked the news to me. That’s how exciting it is. If you’ve not messed with clusters or wanted to but couldn’t afford it, 2012 should change that.
Awesome New “Metro” Server UI
Bored with that same old Windows 2000-ish-looking GUI? Well, you are going to love the new cryptic, Start Button-less user interf… okay, I’m kidding. Actually, I like Metro, I’m just not a huge Start Screen fan and, yes, I know — many of you have already emailed me to tell me that you aren’t either. Well, let me rephrase that. A quick look around Amazon shows a number of $300-ish monitors that support multi-touch via an extra USB port, which sounds nice until you realize that you’d have to re-plumb your KVM infrastructure. Guess I’ll wait for the Kinect… then I can just mount a video projector on the ceiling, wave my hands around and look like I’m running the NCC-1701-R or something like that. (Okay, maybe that would be fun.)
More PowerShell. About Ten Times More. You’ll Love It. Really.
Okay, here I’m not kidding. There are about ten times more PowerShell cmdlets in Server 2012 than there were for Server 2008 R2, cmdlets to do all kinds of things. And yes, I know, most of you haven’t gotten the PowerShell religion yet, and part of the reason for that is that we all so hate reading Help files. Well, PowerShell for 2012 makes that a bit easier with a new cmdlet, “show-command.” You pass it the name of a cmdlet that you want to use and it pops up a GUI with a simple form that you can fill out which then results in a working PowerShell command. For example, suppose I want to create a virtual machine with “new-vm” but don’t want to have to figure out how to use the silly thing. I just type
show-command new-vm
And a dialog box pops up that looks like this:
The asterisked fields are the ones that new-vm insists upon, and there are only two — NewVHDPath (where to store the virtual hard drive files for the new VM) and NewVHDSizeBytes (what’s the largest size in bytes that the VHD can grow to). Fill in “c:\vhds\vm1\vm1.vhdx” for the first parameter and 40000000000 for the second parameter, click “Run” and show-command runs this on the command line:
new-vm -NewVHDPath c:\vhds\vm1\vm1.vhdx -NewVHDSizeBytes 40000000000
And in no time, you’ve got a VM running, and I know what you’re wondering — “Bytes? Really? Bytes? I gotta type 40000000000 to get a 40 gig drive?” You can also add common suffixes, and so could have typed
new-vm -NewVHDPath c:\vhds\vm1\vm1.vhdx -NewVHDSizeBytes 40GB
Ever since PowerShell 1.0 came out, Microsoft has tried to make PoSH something easy to experiment with and find out about because, again, we’ve all been made to feel stupid by rotten CLI tool documentation in the past, and that doesn’t really motivate us to return to the command line. For that reason, I understand why so many folks are allergic to CLI tools, but please permit me to take a moment to suggest that maybe PowerShell’s worth giving a look. First, there’s show-command, a tool that I have to admit has made me use PowerShell’s Help quite a bit less in my 2012 work. Second, PowerShell’s Help really is better than most of what you’ve seen, particularly when you add the “-example” parameter. The examples in the PoSH cmdlets that I’ve used are syntactically correct over 97% of the time and are often non-trivial enough that they provide some creative suggestions on how to use the tool. And third, there’s “reflectivity.”
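If you want to kick the tires on those first two points, the whole routine is two commands at the prompt (New-VM is just the example this article has been using; any cmdlet name works):

```powershell
# Pop up the fill-in-the-blanks GUI for a cmdlet you don't know yet
Show-Command New-VM

# Skip straight to the worked examples, which is where the good stuff is
Get-Help New-VM -Examples
```

Note that on a fresh Server 2012 box you may need to run Update-Help once to pull down the full help content.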
Back when Jeff Snover first talked about PowerShell, he realized that people resist or lack the time to learn CLI tools, so he imagined Windows GUI tools all featuring a text window. Whenever you pushed a GUI button to create a user, restart a service, install an SSL certificate, or whatever, then in that text field, the corresponding PowerShell command would appear. You could then copy that and save to a text file or whatever for reuse later — there’s a new feature in PowerShell 3.0 called “snippets” that would accommodate things like that — and thus slowly learn PowerShell in a series of sort of “just-in-time micro-training” sessions. I’ve always liked the idea of GUI-to-CLI training wheels ever since I saw dBase IV do that years ago, and so couldn’t wait to see it in Windows GUI tools. Unfortunately, the whole notion hasn’t caught on beyond Exchange and Virtual Machine Manager, although I can happily report that we Active Directory administrators in Server 2012 can now experience reflectivity in the new version of the Active Directory Administrative Center (ADAC). Called the “PowerShell History Viewer,” it’s just a click of a button in the new ADAC.
So the gauntlet’s down, Microsoft coders! How about a PowerShell History Viewer in regular old Hyper-V? In the networking sections of the Control Panel? (The networking team created a pile of new network cmdlets… it’d be nice to learn them the easy way.) Or how about you storage guys? Storage Spaces is cool, but sorting through the great new cmdlets is just a bit time-consuming now and then.
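Until that History Viewer gauntlet gets picked up, at least discovering those new cmdlet piles is easy. A quick sketch, using module names as shipped in Server 2012:

```powershell
# Survey what the networking team shipped
Get-Command -Module NetAdapter, NetTCPIP, DnsClient

# And the Storage Spaces-related verbs
Get-Command -Module Storage -Noun *StoragePool*, *VirtualDisk*
```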
“De-duping,” a phrase made common in the IT business by storage folks, refers to anything that saves space by removing repetitive data. For example, Exchange has had a de-duper called “single instance store” for ages, a tool that (if I understand it right, I’m not an Exchange guy) finds identical messages in an information store and eliminates them. For example, imagine a case where someone sends the same 10 MB email to 100 people on a distribution list in an organization, chewing up 1GB in that information store. SIS causes Exchange to only store one of these copies, replacing the other 99 with just a sort of “IOU” and a pointer to the one remaining copy. It’s a sensible way to save space on a disk.
Windows Server 2012 added de-dup to its new Storage Spaces technologies, and in short you just create a volume, enable de-dup (it’s a check box), and forget it. De-dup on a volume is nice in that it’ll find a chunk of data of any size and de-dup it, rather than simply de-duping identical files. Again, de-dup is a nice feature with a bunch of knobs and buttons and I’ll be covering it in detail in the future, but what I’ve found sort of interesting is the network de-duplication.
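The check box has a command-line equivalent as well. A minimal sketch, assuming the de-dup feature is available on the server and E: is the data volume in question (a hypothetical drive letter):

```powershell
# De-dup ships as a file-services feature; install it once per server
Install-WindowsFeature -Name FS-Data-Deduplication

# Turn it on for a volume and let the background optimizer do its thing
Enable-DedupVolume -Volume "E:"

# Or nudge it to run now rather than waiting for the schedule
Start-DedupJob -Volume "E:" -Type Optimization

# Later: see how much space the IOU-and-pointer trick has clawed back
Get-DedupStatus
```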
So let’s say you’ve got 20 VHDs, all system drives for virtual machines running Windows Server 2012. You want to copy them from one Hyper-V server to another. Now, each of these virtual servers does different things, but those things aren’t all that different, at least as far as the files on the VHD are concerned — a server running IIS and a server acting as a domain controller are about a 97% match in terms of files. So even though these 20 VHDs are for different kinds of servers — web servers, DCs, WSUS boxes, file servers, DirectAccess servers… from a simple list-of-files-on-disk point of view, they’re all pretty much identical. Thus, when you use robocopy, xcopy or (heaven forbid) Explorer to copy these 20 VHDs, which may collectively be three quarters of a terabyte in size, you’ll see that it seems amazingly fast, because you didn’t have to transfer 700-ish gigabytes… just about 40 or so. There’s not a lot on the Web about it, and I haven’t been able to find any registry entries or the like to control it, but watch this space, there could be some neat stuff there.
4K Native Sector Support
You may know that data space on hard disks, DVDs, USB sticks, floppies, or basically any kind of data storage device has a “quantum,” a smallest area that can be written, called a “sector.” On floppies and hard disks since the 1970s, sectors have typically been 512 bytes, and that made pretty good sense, as sector size ends up setting a device’s smallest possible file size, and thus the smallest practical file size for CP/M, DOS, Windows or Linux was 512 bytes. (They’re all larger for other reasons.) What that means is that because “sector = smallest file size possible,” each of those operating systems storing a four-byte file must of necessity actually take up 512 bytes on disk, effectively wasting 508 bytes. No one would care about that much waste nowadays, and in fact every modern file system that I know of would waste more than 508 bytes storing a 512-byte file, but in the ’80s, people worried about that kind of stuff. (Then again, a 320K floppy disk cost about a dollar in the early ’80s and, in that light, the concern seems quite reasonable.)
Anyway, times have changed. Storage is much cheaper so we don’t care about a little waste, but we do care about seeing ever-improving performance. As any communication channel — including the one from hard disk to CPU — gets faster and cleaner, it makes sense to make the data blocks transferred on that channel bigger to deliver better overall throughput. (Also, the fact that we’re crunching more and more data onto a given physical area on a disk platter makes engineering small 512-byte sectors inefficient.) That’s why many hard disks nowadays actually use much larger, 4,096-byte (4K) sectors. Older versions of Windows don’t understand 4K sectors, though, and so to make Windows happy, 4096-byte drives must waste some time emulating 512-byte sector drives. Even with the overhead of 512-byte emulation, though, 4K sectors still make for sufficiently increased performance that it’s worth making and selling 4K drives in 512-byte drives’ clothing, so to speak. Whether you know it or not, you’ve been buying so-called “512e” hard drives for quite some time, so it’s a shame that you’ve not been able to extract the full speed of those drives. As you’ve probably guessed by now, however, there’s good news: Windows 8 and Server 2012 finally come with native support for 4K drives. That should offer better performance at a somewhat lower price.
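Curious which kind of drives you’ve already got? Windows 8 and Server 2012 will tell you; roughly speaking, a 512e drive reports 512 logical/4096 physical, while a 4K-native drive reports 4096 for both:

```powershell
# Per-disk sector sizes via the new Storage module
Get-Disk | Select-Object Number, FriendlyName, LogicalSectorSize, PhysicalSectorSize

# Or the old-school way, per volume (look for "Bytes Per Physical Sector")
fsutil fsinfo ntfsinfo C:
```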
Real, Simple, Cheap, In-the-Box DHCP Failover
DHCP’s great. A simple protocol that hands out IP addresses to all of our Internet-connected devices. Easy to set up, simple to administer. But there’s just one little annoyance: fault tolerance. How do you provide for the uncommon but annoying situation where your DHCP server’s down for some reason? Well, we all know the 80/20 thing, where you give 20 percent of a scope to a second DHCP server, but it’s not really kosher from an RFC point of view and it’s kind of kludgy to manage. Of course, you could set up a cluster — you know, two Enterprise boxes and all that — but it’s expensive. What to do?
A draft RFC cooked up back in 2003 about DHCP failover never really went anywhere, but it had some good ideas, and so the DHCP folks at Microsoft put it in Server 2012. It’s simple and easy to set up, letting you create pairs of DHCP servers that look out for each other, either failing over entire scopes or sharing the responsibility for handing out IP addresses in a given scope. If you start installing 2012 systems into your network and intend to have them replace existing Windows-based DHCP servers, I strongly recommend that you look into DHCP failover as soon as possible. The GUI makes it simple, but there are some good command-line tools to set it up as well. (Hint: from PowerShell, type “get-command *-dhcpserverv4failover.”)
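That get-command hint turns up the whole cmdlet family. As a sketch, here’s what pairing two DHCP servers in share-the-load mode might look like; the server names, scope, and shared secret are all made-up examples:

```powershell
# Pair DHCP1 with DHCP2 for the 10.0.0.0 scope, splitting leases 50/50
Add-DhcpServerv4Failover -ComputerName DHCP1 -PartnerServer DHCP2 `
    -Name "DHCP1-DHCP2" -ScopeId 10.0.0.0 `
    -LoadBalancePercent 50 -SharedSecret "apepperandbattery"

# Check on the relationship afterward
Get-DhcpServerv4Failover -ComputerName DHCP1
```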