If you are like many IT professionals, the mere mention of high availability (HA) systems conjures visions of big bucks. The holy grail of computer services, the famed "five nines" availability (roughly five minutes of downtime per year), can be very costly. But that need not always be the case. The open-source community has free tools available that can provide you with caviar-class reliability on a tuna fish budget. This month, we look at HA and take a 30,000-foot view of what open source can provide.
Why High Availability?
As computers become more integral to our businesses, the value of time lost to computer or network unavailability increases. Indeed, some businesses (such as the busy e-commerce sites eBay.com and Amazon.com) can find themselves out huge sums of money for even a few minutes of downtime. People tend to judge a company's quality by the performance of whatever interface they use to interact with it--whether the interaction is human-to-human or human-to-computer, as in the case of the World Wide Web.
Most of us do not work for companies like eBay or Amazon. But our customers still judge the quality of our company by the same criteria. And our company's employees judge the quality of our (the IT group's) product by the same criteria. Inaccessible service does not make for a positive image!
How Much High Availability Do I Need?
The strategy used to achieve HA is straightforward: Identify single points of failure and then design a "Plan B" to address each of them. The simplest plan would be to make everything in your system completely redundant, but from a financial standpoint, that option is not practical for most companies. Instead, you need to determine how much one hour of downtime would cost your company. Next, consider that 90% availability means losing roughly 876 hours in one year, while 99.99999% ("seven nines") availability means losing only about 3 seconds. Then, calculate how much money you can justify spending to reduce your exposure.
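If you want to run those numbers for other targets, the arithmetic is simple: multiply the allowed failure fraction by the 8,760 hours in a year. Here is a quick back-of-the-envelope check from a shell prompt, using the standard bc calculator:

    # Downtime per year = (1 - availability) x 8,760 hours
    echo "(1 - 0.90) * 8760" | bc -l          # one nine:    876 hours
    echo "(1 - 0.999) * 8760" | bc -l         # three nines: about 8.76 hours
    echo "(1 - 0.99999) * 8760 * 60" | bc -l  # five nines:  about 5.26 minutes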
Fortunately, any currently available commodity-level hardware can easily achieve "one nine" (90%) availability. Going to three nines (99.9%, or about 9 hours lost per year) can be as simple as choosing good-quality hardware (such as the IBM iSeries and xSeries machines) with husky, redundant power supplies, well-designed cooling systems, RAID controllers for the inevitable disk failures, and uninterruptible power supplies for when the lights go out.
More Than Hardware
Buying top-quality hardware with some redundancy is only part of the HA puzzle. The remaining pieces are software-based, which, remarkably, can be more expensive than the hardware on which they run. There is a very good reason for this: The complexity involved in building redundant software installations can be daunting. Take, for instance, the case of a database server. As soon as node number two is added, a need for synchronization between it and the master node becomes apparent. The synchronization requirement for this scenario (where one node fails over to another) is simple compared to that for a multi-node system where all member nodes service requests (for performance reasons).
IBM provides software to address this complexity, and it is not unreasonably priced for what it does. But if you start adding up your costs, you'll find that the hardware prices are easily outdistanced by the software prices. That fact is even more evident if you move to the Microsoft SQL Server software, since the Intel-based hardware on which it runs tends to be less expensive than a comparable iSeries in terms of the acquisition cost.
Although databases are arguably one of the trickier facilities to synchronize, all of the other facilities require some thought as well. For instance, what about the user names and passwords on your authentication servers? Even something as simple as the static documents served on your intranet requires some planning if you want them to remain readily available.
So far, I've only discussed scenarios where the storage on each node is discrete. If you are using a shared device--perhaps an external SCSI array or a NAS server--then you have another consideration. What happens if the master node is taken offline for some reason, a slave node takes over, and then the master is restored without the slave knowing it? This situation is known in the HA world as a "split brain," and if it occurs, you are looking at potential corruption of your data as both nodes attempt to write to the storage device. This is not good!
The Open-Source Projects
As you can well imagine, the open-source community has approached the HA problem in many ways. Let me go down the list and talk about the HA solutions to these problems, starting with database replication.
The two big commercial databases that can be hosted on the Linux platform are IBM's DB2 and Oracle, either of which can be replicated. Naturally, Microsoft's SQL Server is unavailable for the Linux platform. In the open-source database world, we have two major players: PostgreSQL and MySQL. PostgreSQL, the more advanced of the two in terms of the features you've come to know and love from DB2 for iSeries, does not include replication in the base product, but commercial extensions are available that will do the job. MySQL, on the other hand, does include a replication feature. If you are willing to forgo transactions (perhaps you want to run a content management system, such as the PostNuke package described in "The Linux Letter: Nuke It!," that uses MySQL as its database), then you're all set.
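To give you a feel for how little is involved, here is a minimal sketch of MySQL's master/slave replication setup. The host name, server IDs, and credentials are examples only, and you would still need to create the replication account on the master and seed the slave with a copy of the master's data; consult the MySQL manual for the full procedure.

    # /etc/my.cnf on the master: enable the binary log and assign a server ID
    [mysqld]
    log-bin
    server-id = 1

    # /etc/my.cnf on the slave: a unique ID plus the master's coordinates
    [mysqld]
    server-id       = 2
    master-host     = db1.example.com
    master-user     = repl
    master-password = secret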
When I mentioned authentication servers, I'm sure that many of you thought, "Duh. I have a backup domain controller (BDC) in my network." OK. If you are using a Microsoft primary domain controller (PDC), point taken. But if you use Samba to offer authentication services to your Windows clients (as I do), you have other alternatives. First, as of Version 3, one Samba server can act as a BDC for another Samba server acting as the PDC. And if you are using OpenLDAP for authentication, you are again covered: LDAP supports replication by design!
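As a rough illustration of what replication looks like in OpenLDAP, the slapd.conf sketch below pushes changes from a master directory to a replica using the slurpd daemon that ships with it. The host names, DNs, and password are placeholders; check the OpenLDAP Administrator's Guide for the exact directives your version supports.

    # slapd.conf on the master: log changes and name the replica to push them to
    replogfile /var/lib/ldap/replog
    replica uri=ldap://ldap2.example.com:389
            binddn="cn=replicator,dc=example,dc=com"
            bindmethod=simple credentials=secret

    # slapd.conf on the replica: accept updates only from the replicator DN
    # and refer clients' write attempts back to the master
    updatedn  "cn=replicator,dc=example,dc=com"
    updateref ldap://ldap1.example.com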
The final and easiest item on my list is that of static pages and files. Two quick solutions come to mind: 1) use rsync to keep one machine in sync with another (see "The Linux Letter: Getting in Rsync") or 2) use the Distributed Replicated Block Device (DRBD). The latter is a fascinating open-source project: it provides a special Linux block device that, whenever it is written to, replicates the data over TCP/IP to a slave node. In other words, you get real-time synchronization easily and cheaply. Be sure to visit that project's Web site for more information.
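For the rsync route, a single command run from cron on the primary machine is often all it takes. This is only a sketch; the paths, host name, and schedule are examples, and you would normally pair it with SSH keys so that it can run unattended.

    # Mirror the intranet document root to the standby machine, deleting files
    # on the standby that no longer exist on the primary
    rsync -az --delete -e ssh /var/www/ standby.example.com:/var/www/

    # A crontab entry to repeat the mirror every ten minutes
    */10 * * * *  rsync -az --delete -e ssh /var/www/ standby.example.com:/var/www/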
Split-Brain Syndrome and Takeover
The biggest obstacle to failover support is determining when the master node pukes and the slave needs to be brought online. Let's say that you have an Internet site that needs HA. This hypothetical site is interactive, uses a database that is updated continuously, and shares a hard drive over a SCSI chain. When the master fails, how do we get incoming requests for the original machine routed to the backup? And how do we keep the two from becoming split-brained if the master somehow comes back online?
The answers to these questions are found at the High-Availability Linux Project Web site. At the site, you'll find a product called Heartbeat. In essence, it is a Linux daemon (service to you Windows professionals) that allows machines to monitor the health of each other's services and react when a failure is detected. If our aforementioned site goes down for some reason, then our backup will be automatically brought online within seconds.
Accomplishing this requires only simple, inexpensive hardware. First, both machines should have two network interface cards (NICs). The first NIC in each machine provides access to the Internet. The second NIC in each machine is connected to its partner's second NIC with a crossover cable; it is this private network that Heartbeat uses to communicate with its partner. An additional level of protection (what if one of the secondary NICs fails?) can be gained by also connecting the two machines via one of their serial communications ports, so the Heartbeat daemons have redundant communication paths.
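To give you an idea of what the configuration looks like, here is a minimal sketch of Heartbeat's /etc/ha.d/ha.cf reflecting exactly that setup: heartbeats over the private crossover link, with the serial cable as a backup path. The interface name, timings, and node names are examples; both machines need matching copies of this file (plus an authkeys file for authentication).

    # /etc/ha.d/ha.cf (identical on both nodes)
    bcast eth1            # heartbeats over the private crossover network
    serial /dev/ttyS0     # redundant heartbeat path over the serial cable
    keepalive 2           # seconds between heartbeats
    deadtime 30           # declare the partner dead after 30 silent seconds
    node web1 web2        # the uname -n of each cluster member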
If the master fails, the secondary machine will quickly start the corresponding services, perform IP takeover (so that incoming requests are now routed to it), and resume operations as though nothing had happened.
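The resources that move between the machines, including the service IP address that gets taken over, are listed in /etc/ha.d/haresources. The sketch below (again, with example names and addresses) says that web1 normally owns the address 192.168.1.10 and the httpd service; if web1 dies, web2 claims both.

    # /etc/ha.d/haresources (identical on both nodes)
    # preferred-owner   service-IP      init script(s) to start and stop
    web1                192.168.1.10    httpd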
"What about split-brain syndrome?" you ask. There are two traditional methods to avoid that. One of the features of Heartbeat is a technique known as "STONITH," an acronym for the indelicate "Shoot The Other Node In The Head." For a computer, that means cutting off its power supply. That can be accomplished in two ways. You can use a network addressable power switch (which allows you to remotely control the power of a device via an Ethernet connection and TCP/IP) and have the secondary machine kill the power on the primary machine. The other option, as either a primary or redundant means, is to connect the second communications port of the slave to the control port of the UPS that is protecting the primary machine, and vice versa. That way, the machine can cut off the power to its partner via its UPS. For sites that have a large UPS covering the data room, the former option will probably be the most convenient.
An Executive Overview
I have only scratched the surface of Linux HA in this article; there is much more that this software can do for you. I have also kept the "how-to" content to a few brief sketches because, frankly, each of you has a unique set of requirements and a unique implementation, and one short article cannot possibly do this topic justice. You will, however, find a plethora of material, such as case studies and links to other resources, at the sites I mentioned. Once you spend a little time researching HA and get your feet wet, you'll find all kinds of places to shore up your systems.
That's all for this month. Get those projects rolling so that your system will be available to receive the next issue of MC Mag Online!
Barry L. Kline is a consultant and has been developing software on various DEC and IBM midrange platforms for over 20 years. Barry discovered Linux back in the days when it was necessary to download diskette images and source code from the Internet. Since then, he has installed Linux on hundreds of machines, where it functions as servers and workstations in iSeries and Windows networks. He also co-authored the book Understanding Linux Web Hosting with Don Denoncourt. Barry can be reached at