How to transition IT infrastructure from physical to virtual
Migrate a small group of physical servers to virtual machines
By Paul Venezia | PC World | Published: 17:00, 22 November 2010
After finally getting the go-ahead to proceed with a project to virtualise a small business infrastructure, it may seem that the hard part is actually making it all happen. In many cases, however, the hardest part is getting the budget together to acquire all the necessary hardware and software; actually making the switch is the easier task.
The most important part of migrating from a physical to a virtual infrastructure is making sure that you have all the pieces in place before you move a single server, before you put anything into production, and even before you start testing. Much like laying out all the tools necessary to put together a table from IKEA makes the task easier, ensuring that you have everything you need before you embark on this journey will make the process smoother and quicker, and will greatly improve the quality of the finished product.
To that end, it's important to be fully aware of the features and limitations of the virtualisation solution you choose. Where the budget won't stretch to the higher-end features, you must understand what concessions have been made.
For instance, you may have licences for live virtual server migrations between hosts, but not for automated load balancing or high availability, or you may have to forgo advanced memory optimisation and similar features.
In the case of the former, you'll need to balance virtual servers across multiple physical hosts manually, and manually restart those servers on a surviving host should a physical host fail. In the case of the latter, you'll need more memory per physical host than you would otherwise, because advanced memory sharing isn't available.
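As a back-of-the-envelope illustration of that memory trade-off, the sketch below totals up guest RAM per host with and without page sharing. The VM sizes, the 2GB hypervisor overhead and the 25 per cent sharing estimate are illustrative assumptions, not vendor figures.

```python
# Rough host-RAM planning when advanced memory sharing (page
# deduplication) is NOT licensed. All figures are assumptions for
# illustration only.

def ram_required(vm_ram_gb, hypervisor_overhead_gb=2.0, sharing_ratio=0.0):
    """Total physical RAM (GB) a host needs for the given virtual servers.

    sharing_ratio is the fraction of guest RAM assumed to be reclaimed
    by page sharing; use 0.0 when the feature isn't available.
    """
    guest_total = sum(vm_ram_gb)
    return guest_total * (1.0 - sharing_ratio) + hypervisor_overhead_gb

# Hypothetical small-business workload: five virtual servers.
vms = [4, 4, 8, 2, 2]  # GB each

without_sharing = ram_required(vms)                    # feature not licensed
with_sharing = ram_required(vms, sharing_ratio=0.25)   # assumed 25% savings

print(f"Without sharing: {without_sharing:.0f} GB")  # 22 GB
print(f"With sharing:    {with_sharing:.0f} GB")     # 17 GB
```

Even at a modest assumed sharing ratio, the gap widens as you add virtual servers, which is why the missing feature translates directly into buying more RAM per host.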
There are several other examples, but these are the most common. In smaller infrastructures, the lack of these features isn't as critical as it might be otherwise, due to the smaller number of virtual servers and the general lack of unbalanced or highly variable workloads. Either way, it's important to understand what you have in your toolkit before you start.
Building the network
It's critically important that you have adequate physical server horsepower, ethernet switching and storage available. There is a plethora of small, cheap storage devices on the market that can handle a virtualised workload, and multi-core servers are very reasonably priced.
If at all possible, make sure that you have a reasonable level of redundancy in whatever solution you choose, such as redundant power supplies and protective RAID levels, with a minimum of RAID5. If the infrastructure is small enough that there is no plan for shared storage, it's absolutely critical that the physical host server or servers be outfitted with battery-backed RAID controllers and, ideally, an internal RAID6 array.
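The trade-off between those two RAID levels is easy to put in numbers: RAID5 gives up one disk's worth of capacity for parity and survives a single disk failure, while RAID6 gives up two and survives two. A small sketch, using an assumed six-disk array of 1TB drives:

```python
# Usable capacity vs fault tolerance for the RAID levels discussed
# above. Disk count and size are illustrative assumptions.

def raid_usable(disks, disk_tb, level):
    """Return (usable capacity in TB, disk failures tolerated)."""
    if level == 5:
        return (disks - 1) * disk_tb, 1   # one disk's worth of parity
    if level == 6:
        return (disks - 2) * disk_tb, 2   # two disks' worth of parity
    raise ValueError("only RAID5 and RAID6 are modelled here")

for level in (5, 6):
    usable, tolerated = raid_usable(disks=6, disk_tb=1.0, level=level)
    print(f"RAID{level}: {usable:.0f} TB usable, "
          f"survives {tolerated} disk failure(s)")
```

For a host with no shared storage to fall back on, trading that one extra disk of capacity for the ability to ride out a second failure is usually worth it.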
Also note that if you do forgo shared storage, you won't be able to take advantage of features such as live migration, nor will you be able to quickly boot downed virtual servers that reside on the local storage of a failed physical host.
On the ethernet switching side, ensure that you have a switch capable of link aggregation and, if you're planning on using iSCSI storage, check the switch's iSCSI support, specifically its support for jumbo frames. Not all gigabit switches are created equal, and some can hamper iSCSI performance. Seek out switches that explicitly state iSCSI compliance; these should always include jumbo frame support.
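The reason jumbo frames matter for iSCSI is simple arithmetic: the per-frame headers are a fixed cost, so a larger MTU carries proportionally more payload per frame (and generates far fewer frames for the host to process). The sketch below uses the standard Ethernet, IP and TCP header sizes, assuming no VLAN tag or header options.

```python
# Payload efficiency of standard vs jumbo frames. Header sizes are the
# standard ones (Ethernet header + FCS = 18 bytes, basic IPv4 and TCP
# headers = 20 bytes each); options and VLAN tags are not modelled.

ETH_OVERHEAD = 18   # 14-byte Ethernet header + 4-byte FCS
IP_HEADER = 20
TCP_HEADER = 20

def payload_efficiency(mtu):
    """Fraction of on-the-wire bytes that is actual payload."""
    payload = mtu - IP_HEADER - TCP_HEADER
    wire_bytes = mtu + ETH_OVERHEAD
    return payload / wire_bytes

print(f"MTU 1500: {payload_efficiency(1500):.1%} payload")  # 96.2%
print(f"MTU 9000: {payload_efficiency(9000):.1%} payload")  # 99.4%
```

The efficiency gain looks small on paper, but cutting the frame count by roughly six times also cuts per-frame processing on the hosts and switch, which is where poorly built gigabit switches tend to fall down.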
Building the network is simple once these pieces are assembled. For a shared storage solution, each physical host should have a minimum of four network interfaces: two configured for failover on the storage side, so traffic can shift to the surviving link if one fails, and two configured for link aggregation on the front-end side. For non-shared deployments, you can get away with only two aggregated front-end interfaces.
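The four-interface layout described above can be written down as a simple checklist per host; a quick sketch like this, with hypothetical interface names, makes it easy to sanity-check that no network role is left on a single link:

```python
# Per-host interface layout as described above. Interface names are
# hypothetical; substitute your own hardware's device names.

host_nics = {
    "storage": ["eth0", "eth1"],    # failover pair to the iSCSI network
    "frontend": ["eth2", "eth3"],   # aggregated pair to the LAN
}

def redundant(layout):
    """True only if every network role has at least two physical links."""
    return all(len(nics) >= 2 for nics in layout.values())

print(redundant(host_nics))  # True
```

A non-shared deployment would drop the "storage" entry and keep just the two aggregated front-end interfaces.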
You should configure the storage array similarly, with multiple links to the network, so that the failure of any single link won't take it offline.
Once this network is built, you're ready to install the virtualisation software on the physical hosts, and link to your shared storage, if applicable.