Touted for reducing energy consumption and making efficient use of servers and floor space, virtualization can also be the foundation for a smart DR plan.
I don't know whether it's because of improved communications of the 21st century (which has raised our awareness), population increases, or just bad luck, but it seems that there are more disasters lately than there were a few years back. I guess when you grasp the full significance of our clinging to a little ball--the Earth--wobbling around in a very unfriendly void, you can't help but feel a little insecure.
That insecurity/awareness may be behind the increasing number of people and businesses taking measurable strides to prevent losing the data on their computers. Undoubtedly, our increased dependence on computers and the rapidly growing amount of data stored on them are also driving people to safeguard their data. A greater number of backup solutions is available today as well, so choosing an elegant one is becoming more common. The leading high availability (HA) and disaster recovery (DR) company in the IBM midrange space reports that its business continues to expand worldwide, perhaps a function of the improved economies in developing countries.
One approach to HA and DR that is just beginning to gain traction is using virtualization as the foundation for a disaster recovery plan. Current approaches--and there are many--all have drawbacks. Whether you are talking about tape backup, image capture, replication, or clustering, each has its own downside. As recovery time and recovery point objectives improve, the cost goes up, not to mention the complexity. The result often is that only a portion of an organization's server infrastructure is included in the DR plan for a prompt and full recovery. In most organizations, email servers, internal Web servers, and batch reporting solutions are unlikely to receive the same level of protection as online order processing applications, though most people probably don't realize it. It's not uncommon for an organization to spend the lion's share of its DR budget protecting just the few computing assets seen as mission-critical.
IT departments, looking to extend disaster recovery capabilities to a broader number of servers, are beginning to seek alternatives that don't bust the budget. What is emerging as a cost-effective, high-performing alternative is server virtualization. Major networking companies are beginning to develop workload portability technologies that let organizations build a disaster recovery solution that is affordable and flexible while providing rapid restore times and a level of protection much broader than many of today's solutions can practically handle.
Many of today's technology companies, particularly those interested in selling customers new and better hardware, offer costly server clustering and high-end data replication solutions. Clustering is just beginning to catch on, and it does work, but it's generally affordable only for large institutions. Maintaining a mirror image of the production data center as a recovery source, even one that can double as a development environment, is a bit of a luxury even given today's reduced hardware costs. Small and medium-sized businesses generally aren't interested in complex and expensive solutions.
Virtualization is a technology that is gaining rapid acceptance in the IBM i world, and PowerVM is the best-selling software product today among Power Systems users. While IBM and others are touting the benefits of virtualization as a means of consolidating servers to reduce energy usage and save space in the data center, virtualization also can be used as an effective approach to disaster recovery. Because a single physical server now can be configured to run several virtual machines through support from the installed hypervisor software layer, workloads that encapsulate data, applications, and operating systems can be moved around and configured with relative ease. Being able to move, copy, protect, and replicate an entire server workload can not only reduce costs and streamline operations, but also present options for developing affordable disaster recovery solutions.
An IT administrator can now have a warm standby environment tucked away in a secondary location as a protected workload ready for initialization or boot-up when disaster strikes. More practically, an organization can have multiple workloads replicated on a standby virtual recovery site at an offsite location. Software is available today that will automatically replicate workloads between primary and recovery sites. Users might look into offerings from PlateSpin Ltd. (which supports virtualization recovery solutions), VMware Site Recovery Manager, Microsoft Hyper-V, and IBM Director's Virtualization Manager. Instead of just replicating application data, virtualization makes it relatively easy to create a bootable virtual machine on local media or a remote recovery server that is accessed over a wide area network. Without taking the production workload offline, it is possible to replicate and transfer a workload to an offline virtual machine.
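The replication step described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual API: the paths, the `replicate_workload` function, and the checksum shortcut are all assumptions. A real deployment would snapshot at the storage or hypervisor layer and ship the image over a WAN with delta-transfer tooling.

```python
import hashlib
import shutil
from pathlib import Path

def replicate_workload(image: Path, recovery_dir: Path) -> Path:
    """Copy a point-in-time snapshot of a guest's disk image into a
    recovery location, without pausing the production guest.

    Returns the path of the replica, which a standby hypervisor could
    boot if the primary site fails.
    """
    recovery_dir.mkdir(parents=True, exist_ok=True)
    replica = recovery_dir / image.name

    # Skip the transfer when the replica already matches the source --
    # a crude stand-in for rsync-style delta replication.
    if replica.exists():
        src = hashlib.sha256(image.read_bytes()).hexdigest()
        dst = hashlib.sha256(replica.read_bytes()).hexdigest()
        if src == dst:
            return replica

    # Copy via a temporary file so a crash mid-transfer never leaves a
    # half-written image where the standby expects a bootable one.
    tmp = replica.with_suffix(".part")
    shutil.copy2(image, tmp)
    tmp.replace(replica)
    return replica
```

The write-then-rename step matters in practice: the standby site must only ever see complete images, or a failover could boot a corrupt guest.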
In the case of a system failure or disaster, the virtualized recovery server takes over for the production server immediately and remains active until the primary server can be brought back online. The virtual disaster recovery solution promises good recovery time and an acceptable recovery point at an affordable cost. This should allow organizations to protect a greater share of their infrastructure than may be otherwise affordable.
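The takeover logic above can be sketched as a simple monitor loop. This is a minimal illustration under stated assumptions: the `primary_alive` probe and `boot_standby` activation callables stand in for real monitoring and hypervisor APIs, which the article does not specify.

```python
import time
from typing import Callable

def failover_monitor(primary_alive: Callable[[], bool],
                     boot_standby: Callable[[], None],
                     misses_allowed: int = 3,
                     interval_s: float = 5.0) -> None:
    """Poll the production server and boot the replicated standby VM
    after `misses_allowed` consecutive failed probes, so one dropped
    heartbeat does not trigger a false failover."""
    misses = 0
    while True:
        if primary_alive():
            misses = 0  # primary is healthy; reset the counter
        else:
            misses += 1
            if misses >= misses_allowed:
                boot_standby()  # promote the warm standby
                return
        time.sleep(interval_s)
```

Requiring several consecutive misses is the usual guard against transient network blips; the trade-off is a slightly longer recovery time for a genuine outage.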
Organizations interested in reviewing and upgrading their disaster recovery options may want to explore further ways that virtualization can meet their objectives. In a future issue, we will explore how to plan and implement a virtualized recovery solution.
MC Press Online