Summary : Scott Lowe lists nine things to keep in mind when using the boot from SAN option when architecting a datacenter infrastructure.
There are a number of decisions that the IT department must make when architecting a datacenter infrastructure; one of those decisions is whether to support local boot storage in a server or use a shared storage device.
In this post, I discuss some of the potential pros and cons associated with the boot from SAN option.
- Less expensive servers. With boot from SAN, you no longer need to buy servers with expensive storage controllers and hard disks. However, some of these savings are offset by the need to include a storage adapter that has boot capabilities, such as a Fibre Channel adapter or an iSCSI adapter.
- Easier server maintenance. Without local storage, there is one less component to worry about in the server. The servers become nothing more than interchangeable compute nodes. If one of these compute nodes fails, it’s much easier to replace the server–simply connect the new server and attach it to the SAN-based boot volume. The new system may require new drivers for some components, but that’s not a horrible problem to overcome. Quick and easy server replacement is a major advantage that shouldn’t be overlooked.
- More robust storage. In most enterprise environments that use SAN-based shared storage, the storage infrastructure is intentionally built to be rock solid since it holds, in many cases, everything. This means that the SAN probably has more disks, more capacity, faster storage processors, and other redundancies that might not be found on a local server. Further, if one of the connected servers starts to run low on disk space, we simply provision more from the array.
- Additional availability opportunities. Many SANs feature SAN-based replication capabilities, which means that the contents of the SAN can be automatically replicated to a secondary data center. Although individual servers can also be replicated, doing so requires additional software that may not be a standard operating method in an environment, thereby creating a process exception that can be difficult to support.
- New upgrade options. When the time comes to move to a new operating system across the entire organization, you can more easily stage the upgrade by creating all of the OS images on the SAN and then connecting individual servers to new boot images.
- Need for a more advanced storage adapter. Boot from SAN requires some method by which a server can be connected to a storage device before the operating system has had an opportunity to load drivers. In a Fibre Channel environment, this is the job of the BIOS on the adapter. In addition, it's possible to procure iSCSI adapters that support booting from iSCSI-based SANs. In an environment that is already using shared storage, the servers are likely already provisioned with these storage adapters, so boot from SAN may not impose any additional adapter cost.
- Potential for boot storms. If there are many servers booting from shared storage, it's possible that all servers could attempt to boot at once — for example, after a site-wide power failure. This situation can overload the storage since boot time is an I/O-intensive process from a storage perspective. As such, planning is key to make sure that all SAN-based services remain within the performance parameters the SAN can support.
- Additional complexity. Local server-based storage is a known entity that people have been managing for decades. Although many use boot from SAN, it’s not as common as local storage and requires new ways of thinking. Further, special care needs to be taken to make sure that the SAN boot environment operates as expected. The exact method can vary from SAN vendor to SAN vendor and from HBA to HBA.
- All eggs in one basket. In a properly architected storage environment, this shouldn’t be a problem, but in a SAN environment that isn’t rock solid, a boot from SAN initiative could prove to be undesirable, as it would expose the organization to additional risk from the loss of that single piece of hardware. For example, if the SAN does not have redundant everything (including controllers, power supplies, and the like), the risk is too high.
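As a back-of-the-envelope planning aid for the boot-storm concern above, the sketch below divides servers into staggered boot batches so that concurrent boots stay within an IOPS budget. All of the figures and the function name are illustrative assumptions, not measurements from any particular SAN:

```python
# Rough boot-storm planning sketch. All figures are illustrative
# assumptions -- substitute measured values from your own environment.

def boot_batches(num_servers, iops_per_booting_server, san_iops_budget):
    """Return (batch_size, num_batches) so that concurrent boots stay
    within the share of SAN IOPS reserved for boot traffic."""
    batch_size = max(1, san_iops_budget // iops_per_booting_server)
    num_batches = -(-num_servers // batch_size)  # ceiling division
    return batch_size, num_batches

# Example: 40 servers, roughly 800 IOPS each while booting, and
# 6,000 IOPS of headroom reserved for boot traffic.
size, batches = boot_batches(40, 800, 6000)
print(f"Boot in {batches} batches of up to {size} servers")  # 6 batches of up to 7
```

In practice the stagger can be enforced with blade-chassis power-on delays or by scripting sequential power-on through the management controller.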
Coming up next
At Westminster College, we use a Xiotech Emprise 5000 storage array with 4 Gbps Fibre Channel. The unit has a total of 40 disks and is rock solid. We've recently begun implementing boot from SAN with the Xiotech unit and our Dell blade servers, and it has been a great experience. Although there is some additional upfront configuration, we have much more flexibility. I'll describe the entire process in my next article.
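For readers planning a similar Fibre Channel setup, the upfront configuration typically begins with collecting each server's HBA port names (WWPNs) for switch zoning and LUN masking. A minimal sketch for Linux hosts follows; it reads the standard `/sys/class/fc_host` sysfs layout, but verify that path and the output format on your own distribution and HBA driver:

```python
# List Fibre Channel HBA port names (WWPNs) on a Linux host, as
# needed for SAN zoning and LUN masking before booting from SAN.
# Assumes the standard /sys/class/fc_host sysfs layout.
import glob
import os

def fc_wwpns(sysfs_root="/sys/class/fc_host"):
    """Map each fc_host entry (e.g. 'host2') to its WWPN string."""
    wwpns = {}
    for path in sorted(glob.glob(os.path.join(sysfs_root, "host*", "port_name"))):
        host = os.path.basename(os.path.dirname(path))
        with open(path) as f:
            wwpns[host] = f.read().strip()  # e.g. 0x10000000c9123456
    return wwpns

if __name__ == "__main__":
    for host, wwpn in fc_wwpns().items():
        print(f"{host}: {wwpn}")
```

Those WWPNs are then entered into the fabric zoning configuration and the array's LUN-masking rules so that each blade sees only its own boot volume.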
Scott Lowe has spent 15 years in the IT world and is currently vice president and CIO of Westminster College in Fulton, Missouri.
This material is somewhat dated. Any Tier 1 server (and most major brands) purchased within the last few years will have included an Ethernet NIC that is iSCSI-boot capable out of the box. If it shipped with a server-class Broadcom or Intel NIC, it is already iSCSI boot capable.
NFS advantage over SAN in cloud computing | Malaysia VMware Communities.
This blog post highlights the usability of NFS given the better technology available today.
One of the key areas they highlighted was the storage protocol they chose: NFS rather than iSCSI or FC SAN, mainly because of the flexibility to shrink or resize volumes on the fly. I agree with their statement; my personal view is that for large-scale deployments, NFS is much easier to manage than iSCSI and FC environments.
NFS is purely Ethernet/IP based, and it is certified and proven in many large-scale deployments worldwide. With 10 Gb Ethernet today, throughput on NFS is no longer a performance concern. If you have not yet evaluated NFS as an option in your deployment, you may want to give it a try.