
Going Back in Time: A New Approach to AIX HA and DR


A comprehensive HA/DR environment includes the ability to "go back in time" using snapshots and continuous data protection.


The term "high availability" can be confusing in the IBM Power Systems realm because the definition of the term is different in the IBM i and AIX worlds. In IBM i shops, HA is achieved by setting up a backup system and using an HA product to replicate applications, along with business and system data, from the production server to the backup in real-time or near real-time. The result is a hot-standby backup server that is fully ready to take over operations at any time.


This contrasts with AIX environments, where HA usually refers to a clustered configuration in which two or more nodes in the cluster share a common data store. In this environment, a secondary node can take over operations if the primary node fails or needs to be taken offline for maintenance; however, the configuration does not inherently shield operations from a failure of the shared data store.
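To make that limitation concrete, the minimal Python sketch below models the two failure domains; the class and node names are purely illustrative and are not actual AIX or PowerHA cluster configuration syntax.

# Illustrative only: node-level failover covers a node outage,
# but every node still depends on the one shared data store.

class SharedStore:
    def __init__(self):
        self.online = True

class Node:
    def __init__(self, name, store):
        self.name, self.store, self.up = name, store, True

    def can_serve(self):
        # A node can run the workload only if it is up AND the shared store is reachable.
        return self.up and self.store.online

store = SharedStore()
cluster = [Node("nodeA", store), Node("nodeB", store)]

cluster[0].up = False                        # primary node fails: covered
print(any(n.can_serve() for n in cluster))   # True  -> a surviving node takes over

store.online = False                         # shared data store fails: not covered
print(any(n.can_serve() for n in cluster))   # False -> the whole cluster is down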


Typically, the data store is protected with technologies such as RAID or hardware-based disk mirroring. But because these forms of redundancy operate only locally, they do nothing to protect data from destruction in a disaster. Consequently, companies that want a higher standard of availability augment traditional AIX HA with data replication.


Optimal availability can be achieved by replicating data to a system that is sufficiently distant from the production server such that a disaster is unlikely to affect both servers. Then, when a disaster strikes, users can be switched to this backup system, without the need for lengthy recovery operations.


An article that was published here about a year ago, "AIX Clustering Versus Replication: Why Settle for Just One?", examined this coupling of traditional AIX clustered HA with replication. Please see that article for a more detailed discussion of the topic.


The definition of "disaster recovery" is usually identical in AIX and IBM i shops. In both cases, DR traditionally refers to the backing up of data to tapes. This is typically done nightly. Then, in the event of a disaster, data and applications can be recovered from the tapes.


Tape-based recovery of a large data center can be a lengthy, labor-intensive, error-prone process. Furthermore, it risks losing any data that was added or updated after the previous night's backup tapes were created, because that data will not yet have been backed up. As a result of these issues, some companies have moved to disk-based DR alternatives, such as backing up data to disk rather than to tape, or continuously replicating data to disks at a remote location. Nevertheless, many companies still use tape-based backups as their only DR technology.


Even ignoring the problems of tape-based backups mentioned in the preceding paragraph, traditional HA and DR technologies do not offer a complete solution. Both technologies allow recovery to only a very limited number of points in time. In the case of HA, recovery can be performed to only the point of failure. In the case of DR, the only available recovery points are the times when the currently existing backup tapes were created.


Also, because it takes a long time to load data from tape, backup tapes are typically used only for disaster recovery or, possibly, for restoring data items after they've been corrupted or accidentally deleted. Thus, because they normally don't serve any purposes other than DR, tape-based backups are usually considered to offer only insurance against a disaster. Consequently, they often provide little or no value unless and until a disaster strikes.


An HA configuration that incorporates replication offers an advantage in this regard. Read-only operations, such as query and reporting functions and tape-based backup tasks, can be performed on the replica server, eliminating these loads from the production server and data store. However, because the replica must always be in perfect synch with the production database, the replica normally cannot be used for read/write tasks.

Snapshots Deliver HA/DR ROI

An HA solution goes beyond a mere insurance policy because it protects against downtime not only from rare, unforeseeable events such as disasters, but also from events that are guaranteed to happen regularly, namely scheduled maintenance. Nonetheless, an HA/DR solution can deliver even greater value by incorporating an additional capability: snapshotting.


There are two generic types of snapshots: traditional and virtual. A traditional snapshot copies the full production data store, or a predefined segment of it, and stores that copy, often in a flat file. Using such a snapshot in an online application is often difficult because the data generally has to be loaded back into a database first.


A virtual snapshot technology is much more flexible and useful than a traditional snapshot. It creates a virtual, yet fully functional read/write copy of data at a point in time.


To be effective and efficient, a snapshot facility must augment an HA solution, rather than replace it. This is essential because taking snapshots on the production server can severely impact the performance of operational applications. In contrast, when using an HA solution that includes replication to maintain a real-time or near real-time data replica on a backup server, snapshots can be created using the replica data, thereby eliminating any impact on the production systems.


It is important to note that snapshots are independent of the HA replica data store. Consequently, after it has been created, a snapshot can be used for any purpose, including read/write operations, without fear of damaging the hot-standby replica database that is required for HA purposes. No matter what happens to the snapshot, the backup server will still be ready to take over operations if necessary.


These features allow virtual snapshots to be employed to serve a wide variety of productive business purposes, including the following:

  • Snapshots are commonly used to offload tape backup operations, removing the backup-processing load from the production server. This eliminates the need for a backup window during which some or all applications must be shut down or curtailed. In addition, the snapshot can be created at a clean recovery point, thereby reducing the likelihood of data corruption or referential integrity issues during recovery operations. Furthermore, because the tape backups are created on the recovery system, there is no need to transport tapes offsite for protection, provided that the recovery system is sufficiently distant from the production system.
  • Snapshots can also be used for reporting, business intelligence, and data mining purposes. Because these tasks, in particular business intelligence and data mining, can be very resource intensive, using snapshots to offload that processing to the recovery system can significantly improve production application performance.

    In addition, there are times when reports are required to show the state of the business as of a particular point in time. Quarter-end and year-end reporting are good examples of this. Because snapshots can provide a frozen view of data, they provide a simple way to produce these reports while the business continues to operate normally.
  • It is impossible to know for certain whether you are fully prepared to recover from a disaster until you test your recovery processes, resources, and data. That testing can be a problem when using a traditional DR or HA solution. Loading tape-based backups onto a backup server to test that they are useable for recovery purposes is a lengthy, cumbersome, labor-intensive process. As a result, disaster recovery readiness is rarely tested in these environments.

    When you use a remote, replicated HA server as your disaster recovery solution, recovery testing is simpler but risky. Testing is easy because the secondary server is a ready-to-run backup of the primary server. To test its readiness, you can simply perform a role-swap. What was the backup server then takes over the production role. If it's not ready to take on that role, the failure will quickly become apparent.

    However, the danger in this testing strategy is obvious. There's no point in testing something if you are 100 percent certain it's going to work. If a test of this nature fails, production operations may stop until users can be switched back to the functioning primary server.

    Simply using the backup server as a test environment without switching production operations to that server isn't practical because the testing process must perform updates on the backup database to ensure that it is fully useable. Thus, after the test is finished, the backup data will no longer be synchronized with the production server.

    A snapshot can provide the answer to this testing dilemma. Because the snapshot is independent of the replica data store, you can perform non-intrusive disaster-recovery readiness testing against the snapshot. Then, when you are finished testing, the snapshot can be discarded. And, because the snapshot is a point-in-time replica of the backup data store, it provides an accurate test of recovery readiness.
  • Snapshots also provide an easy way to quickly create test environments for developers. Because the snapshot is a copy of the production data store, it ensures that tests are performed on real-world data, without threatening the integrity of the production or backup data stores.
  • Likewise, a snapshot can be used as an isolated "sandbox" training system for new employees. This can minimize employee ramp-up time and ensure that practice activities—and mistakes—do not impact production operations.

When evaluating snapshot options, look for one that incorporates Copy-On-Write technology. This technology incurs very low disk-use overhead, typically on the order of a few percent of the protected data set's size. Consequently, snapshots created using this technology typically consume considerably less space than, for example, full disk mirrors.
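As a rough illustration of why copy-on-write overhead stays small, the Python sketch below (hypothetical names; real snapshot facilities work at the block or volume level, not on in-memory dictionaries) preserves a block's original contents only the first time that block is overwritten, so the snapshot's space consumption grows with the amount of changed data rather than with the size of the data set.

# Illustrative copy-on-write snapshot: unchanged blocks are never copied.
class CowSnapshot:
    def __init__(self, volume):
        self.volume = volume   # the live, changing data
        self.saved = {}        # pre-snapshot images, captured on first overwrite

    def on_write(self, block, new_data):
        if block not in self.saved:
            self.saved[block] = self.volume[block]   # preserve the original once
        self.volume[block] = new_data                # then apply the change

    def read(self, block):
        # Snapshot view: preserved copy if the block changed, live data otherwise.
        return self.saved.get(block, self.volume[block])

volume = {0: "jan", 1: "feb", 2: "mar"}
snap = CowSnapshot(volume)
snap.on_write(1, "feb-restated")
print(snap.read(1), volume[1])                              # feb feb-restated
print(len(snap.saved), "of", len(volume), "blocks copied")  # 1 of 3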

Continuous Data Protection Completes the HA/DR Picture

Adding one more technology, Continuous Data Protection (CDP), to the HA/DR mix creates an exceptionally highly available, completely recoverable data infrastructure.


The problem with traditional DR or even HA solutions is that they offer only "point-in-time" recovery options. In the case of tape-based DR solutions, those points in time are when the backup tapes are created, typically nightly. Consequently, the tapes alone can't help you if you need to recover data to its state at some point in the middle of the day.


In the case of traditional HA, the only possible recovery point is "right now." Because the HA product maintains a real-time replica of production data, if some data is corrupted or accidentally deleted on the production database, that corruption or deletion will likely be immediately replicated to the backup data store. Thus, the backup will not offer a way to repair the damage.


This point-in-time limitation of traditional DR and HA offerings is a serious problem because one of the most commonly required recovery tasks is restoring data that was corrupted or accidentally deleted. This can happen at any time of day and may result from operator error, a computer virus, the malicious activity of a disgruntled employee, or many other circumstances. CDP provides a way to recover from these events.


CDP technology monitors file activity on the production system, captures every write operation, and stores each write individually in logs that can be used to, in effect, undo and redo data activity.
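The sketch below gives a loose sense of what such a journal might look like; this is illustrative Python only, with made-up names, whereas real CDP products intercept writes at the file-system or volume-driver layer.

import time

data = {}      # stand-in for the production data store
journal = []   # one entry per write: (timestamp, key, before_image, after_image)

def cdp_write(key, value):
    # Record the before- and after-image of every write, then apply it.
    journal.append((time.time(), key, data.get(key), value))
    data[key] = value

def undo_last_write():
    # "Go back in time" one step by restoring the before-image.
    ts, key, before, _after = journal.pop()
    if before is None:
        data.pop(key, None)
    else:
        data[key] = before

cdp_write("balance", 100)
cdp_write("balance", 250)
undo_last_write()
print(data["balance"])   # 100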


Vendors offer two flavors of CDP: near CDP and true CDP. With near CDP, data writes are not continuously transmitted to the CDP logs. Instead, they are batched and transmitted periodically, possibly during system or network slow periods or after a file is closed. The disadvantage of near CDP is that recovery points may be infrequent and there is no way to recover data to its state at a point in time between those recovery points.


True CDP, on the other hand, transmits writes to the CDP logs continuously, meaning that data can be recovered to any point in time. Because the CDP logs are disk-based and because the CDP tool can automate the recovery processes, recovery from disasters, data corruptions, and accidental data deletions can be performed rapidly, completely, and with little chance of human error.
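Continuing the journal sketch above, recovering to a chosen moment amounts to replaying the logged after-images up to that point in time (again, hypothetical Python rather than any vendor's recovery API).

def recover_to(journal, point_in_time):
    # Rebuild the data store by replaying after-images recorded
    # at or before the requested recovery point.
    restored = {}
    for ts, key, _before, after in journal:
        if ts > point_in_time:
            break
        restored[key] = after
    return restored

log = [(1.0, "row1", None, "a"),
       (2.0, "row1", "a", "b"),
       (3.0, "row2", None, "c")]
print(recover_to(log, 2.5))   # {'row1': 'b'} -- the state as of "time 2.5"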


Thus, in the common HA/DR lingo, an HA/DR solution that includes CDP can meet the most stringent of recovery time objectives (RTOs) and recovery point objectives (RPOs).

Full-Spectrum HA/DR

As stated above, what are labeled as HA solutions in AIX environments solve only a limited range of problems. They can keep operations going if the primary server has to be taken offline for maintenance, such as to upgrade the hardware, operating system, or application software. However, because the clustered nodes in an AIX HA configuration share a common data store, this solution does not, in itself, protect against data destruction or operational downtime due to a disaster. And if that shared data store needs to be taken offline for any reason, applications running in the cluster will stop as well.


Furthermore, this environment serves only as an HA platform. It does nothing to support secondary purposes, such as generating isolated environments that can be used for creating tape-based backups, testing, or performing offline query and reporting functions.


Adding replication to the mix protects the production data store by maintaining a ready-to-run backup server complete with its own up-to-the-moment copy of production data. If this backup system is sufficiently remote from the production server, it also serves to protect against data destruction and downtime due to disasters.


By augmenting this with a snapshot facility, you can increase the return on your investment by making it possible to quickly and easily create copies of production data that can be used for additional purposes without any impact on either the production or backup data stores.


Finally, adding CDP creates a full-spectrum HA/DR solution that protects against all eventualities, while also allowing you to fully leverage the value inherent in your enterprise data.
