
Respond Deftly to Change by Implementing a Dynamic Infrastructure

High Availability / Disaster Recovery

Businesses evolve and grow. Technologies advance. Consequently, organizations need dynamic infrastructures that allow them to quickly react to change.

 

Repetition breeds complacency. As a result, after decades of reiteration, some people no longer pay as much attention as they should to the old saying, "the only constant is change." But, for better or worse, the last couple of years, which carried us over a peak and into a low valley in the economy, have made the truth of that adage abundantly clear.

 

Many of the changes that businesses have undergone recently have, to say the least, not been entirely positive. Despite signs of "green shoots" in the economy, still fresh in our memories are screaming headlines about massive layoffs, crashing housing markets, large business losses, financial institution failures, banks that would have failed were it not for government bailouts, and corporate bankruptcy filings.

 

And as the economy recovers, we will confront yet more changes. Fortunately, most of them will be for the better, but there will, no doubt, be some bumps in the road ahead.

 

Even in the midst of an economic downturn, some organizations achieve triumphs in spite of bleak conditions, and others plant the seeds for their future success. For instance, some companies acquire businesses with market capitalizations that are perceived to have fallen below their long-term value. Other companies, rather than retrenching, take advantage of competitors' stumbles to aggressively capture greater market share. And still others need to search for ways to cut back or gain efficiencies through improvements to their operations.

 

When times start to improve, everyone scrambles to take advantage of the emerging opportunities. Under these conditions, new market initiatives and even new lines of business may be launched as a result of increased optimism.

 

The upshot is that, through good times and bad, the old adage about change being constant, hackneyed though it may be, remains true. What's more, the pace of change is accelerating. And IT often finds itself at the leading edge of the effort to accommodate that rapid change, regardless of its source.

 

For example, from time to time, new technologies offer benefits that are too good to forgo. Significant growth in the quantity of information that businesses receive, generate, store, analyze, and report on requires new technologies and tactics for managing that information. Corporate mergers and acquisitions create a need for IT to integrate or replace systems. New business initiatives require new applications. Ubiquitous networking opens opportunities to improve efficiencies through increased automated supply-chain interaction. The list of IT transformation drivers is virtually endless.

 

No organization is immune to change. Therefore, the companies that achieve the greatest success are the nimble ones that can adapt to and take advantage of those changes as quickly and inexpensively as possible. Consequently, planning for a dynamic IT infrastructure that is capable of readily facilitating a highly agile enterprise should be a major objective of every IT department.

 

There is an important point to keep in mind as you plan for a dynamic infrastructure. Many of the significant external transformations that organizations will face down the road cannot be predicted with any great accuracy. Thus, it is not adequate to plan for only specific future states. Instead, you need an architecture that is sufficiently flexible to accommodate any business or technology requirement that might come your way.

Heterogeneous Data Bridges

One of the ways to maintain this flexibility is to incorporate versatile bridges within your IT infrastructure. These connecting pieces are, for the most part, platform-agnostic. A prime example is a heterogeneous data replicator.

 

In the strictest sense of the word, "replicator" is somewhat of a misnomer for many products in this category because data replicators copy the meaning of data but not necessarily its form. For example, apart from replicators included under the covers of high availability (HA) software, which are special cases of replication designed for a specific purpose, data replicators typically accommodate differences in data types and formats between the source and target databases.

 

Field-type mappings aren't usually visible to users, but sophisticated data replicators also facilitate transformations that accommodate differing user requirements. For example, a ZIP or postal code on the source database may be copied to the target, but the code might also be used to populate a "region" field on the target that doesn't exist on the source. A single date column on the source database may be split into year, month, and day columns on the target. American measures may be converted into metric measures. The list of possible transformations knows few limits.
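
To make the idea concrete, here is a minimal sketch, in Python, of the kind of row-level transformation a replicator might apply. The column names, the ZIP-to-region lookup, and the transform_row function are all hypothetical, invented only for illustration; real replication products define such mappings through their own configuration tooling.

from datetime import date

# Hypothetical sketch of row-level transformation during replication.
# Column names and the ZIP-to-region lookup are illustrative only.
ZIP_TO_REGION = {"9": "West", "6": "Central", "0": "East"}

def transform_row(source_row: dict) -> dict:
    """Map a source row to the target schema, adding derived columns."""
    order_date: date = source_row["order_date"]
    zip_code: str = source_row["zip_code"]
    return {
        # Straight copies
        "customer_id": source_row["customer_id"],
        "zip_code": zip_code,
        # Derived "region" column that does not exist on the source
        "region": ZIP_TO_REGION.get(zip_code[0], "Other"),
        # Single source date column split into year, month, and day
        "order_year": order_date.year,
        "order_month": order_date.month,
        "order_day": order_date.day,
        # American measure converted to metric (pounds to kilograms)
        "weight_kg": round(source_row["weight_lb"] * 0.453592, 3),
    }

if __name__ == "__main__":
    row = {"customer_id": 42, "zip_code": "90210",
           "order_date": date(2010, 11, 26), "weight_lb": 12.5}
    print(transform_row(row))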

 

The word "heterogeneous" in "heterogeneous data replicator" refers to another important capability of these tools. The source and target systems can run on different hardware and operating systems and use different database management systems.

 

Heterogeneous data replicators support enterprise flexibility by allowing applications to be integrated at the data level, without concern for the system platforms and without the need to code complex interfaces. The result is that, for example, after a corporate merger, the IT department can integrate the predecessor systems—or replace one or both of them—at a pace of its choice. In the meantime, the old systems can be run in parallel and share data transparently using the data replicator.

 

In addition, when a new business requirement arises, the company can choose a best-of-breed application to fulfill that requirement. A heterogeneous data replicator can then integrate the new application with other enterprise applications at the data level, even when the various applications run on disparate computing platforms.

Hardware Upgrades Without Downtime

The Capacity on Demand offerings on IBM Power Systems provide affordable scalability by allowing organizations to activate idle processors and memory resources either temporarily to accommodate spikes in system demand or permanently to accommodate ongoing business activity growth. Using Capacity on Demand, you pay for additional processors and memory resources only when you need them.

 

Capacity on Demand can defer the need for new hardware, but business evolution and growth, combined with technology advances too beneficial to pass up, will eventually leave you with little choice but to upgrade your physical servers. When this happens, the downtime required to complete the upgrade can be exceptionally costly, particularly for organizations that support 24x7 operations.

 

Data replication offers a way to avoid most of this downtime. The new hardware can be brought in before the old hardware is removed. The replicator can then copy all of the application and system data and objects from the old server to the new one. The IT department can then take as long as necessary to ensure that the new system is set up and configured properly, while the replicator keeps it fully synchronized with the old system until the switchover is complete.

 

Once the new hardware is in place and fully tested, the only downtime that users will experience during the upgrade is the time it takes to switch from the old to the new system, which may be as little as a few minutes, or even seconds, depending on the environment and the software involved.
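
The sequence can be pictured with the rough Python sketch below. Every helper function is a hypothetical stand-in for whatever operations your replication product actually provides; the point is simply that the initial copy, configuration, and testing all happen while users keep working on the old system, and only the final switch is visible to them.

import time

# Hypothetical outline of a replication-assisted hardware migration.
# The helper functions are stand-ins for product-specific operations.

def start_replication(source, target):
    print(f"Replicating all data and objects from {source} to {target}")

def replication_lag_seconds(source, target):
    return 0  # stand-in: a real product would report the actual lag

def switch_users(old_server, new_server):
    print(f"Quiescing {old_server} and redirecting users to {new_server}")

def migrate(old_server, new_server, max_lag=1):
    start_replication(old_server, new_server)        # initial copy plus ongoing sync
    # ...configure and test the new system at leisure; replication keeps it current...
    while replication_lag_seconds(old_server, new_server) > max_lag:
        time.sleep(5)                                # wait for the replica to catch up
    switch_users(old_server, new_server)             # the only user-visible downtime

if __name__ == "__main__":
    migrate("OLD-PROD", "NEW-PROD")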

 

When upgrading, the old and new servers typically run on the same platform, although possibly using different versions of the operating system. Consequently, the data replicator used to support the upgrade does not have to support heterogeneous replication. Thus, because HA software includes homogeneous data replication as an inherent component, companies that already have HA software may be able to use it to keep the old and new systems synchronized during the upgrade.

Software Upgrades Without Downtime

When upgrading the operating system, the database management system, or an application, an organization doesn't necessarily have a second system available. Nonetheless, it is still possible to complete these sorts of upgrades with little or no downtime thanks to the partitioning capabilities of IBM Power Systems.

 

Each partition acts as a virtual server. Consequently, HA software or standalone replicators that can replicate between independent servers can also replicate between partitions.

 

When IT upgrades system or application software, the upgrade can be installed in one partition, while the old software continues to run as normal in another partition. While the new version is being installed, a replicator can keep the data in the two partitions synchronized. Then, when the new software is fully implemented and tested, users can be switched to the partition containing the upgraded software.

Database Reorganizations Without Downtime

Database reorganizations are the bane of many IT departments. Records deleted from a file are only logically deleted. They are physically deleted only when the database is reorganized. Thus, even if an organization's information content did not expand over time, its storage requirements would grow nonetheless.

 

But, of course, information content does grow. In many organizations, the combination of general business expansion and the growth in the types of data collected results in explosive increases in the volume of stored data. Regular database reorganizations are necessary to keep this mushrooming of storage requirements in check.

 

Some people suggest that because the per-terabyte cost of storage has dropped dramatically over the years, storage costs are less of a concern than they once were. There is some truth in this—although the declining per-terabyte cost is at least partly offset, and sometimes overwhelmed, by the exploding growth in data volumes—but the cost of storage devices is not the only concern.

 

Logically deleted records still exist as far as the physical storage device is concerned. When a query is issued against a database, all of the physical records, whether logically deleted or not, are brought into the buffers. The deleted records are then filtered out.

 

Processing these logically deleted records consumes both disk I/O and processor resources. Thus, as the proportion of logically deleted records in a database increases, application response times lengthen, possibly to an unacceptable level.
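
A back-of-the-envelope model makes the cost concrete. The numbers in the short Python sketch below are invented, but they show how the proportion of logically deleted records inflates the physical page reads behind a full pass over the same set of active records.

# Invented numbers: how logically deleted records inflate the physical
# I/O required to read the same set of active records.

def physical_page_reads(active_records, deleted_pct, records_per_page=50):
    """Pages that must be read to return only the active records."""
    total_records = active_records / (1 - deleted_pct)
    return round(total_records / records_per_page)

active = 10_000_000
for deleted_pct in (0.0, 0.25, 0.50):
    pages = physical_page_reads(active, deleted_pct)
    print(f"{deleted_pct:>4.0%} deleted -> {pages:,} page reads "
          f"for the same {active:,} active records")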

 

The optimal frequency of database reorganizations depends on both the frequency of record deletions and the cost of doing reorganizations. This latter factor often leads many companies to defer database reorganizations far longer than would be advisable in the absence of high reorganization costs. Yet the resulting database atrophy can threaten the agility of the organization.

 

The primary cost of file reorganizations is the downtime that has been traditionally required to perform them. In the past, it was necessary to shut down applications while the databases they use were being reorganized. As organizations moved toward around-the-clock operations to take advantage of opportunities afforded by globalization and the Internet, this downtime became that much more costly.

 

Fortunately, new tools that have been introduced into the market over the past few years make it possible to reorganize databases with only minimal downtime.

 

There are two generic ways to reorganize databases while applications remain active. Some vendors offer both methods as options within a single product. The mirrored-file method copies the file to be reorganized into a new library and reorganizes it as it is being copied. The copy is kept in sync with production changes until it can replace the production file. In contrast, the in-place, reorganize-while-active method reclaims space occupied by all deleted records, without the need to copy or synchronize files. And, unlike traditional reorganization functions, this newer reorganization technology can be performed with minimal impact on production operations.
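
As an illustration of the mirrored-file approach, the Python sketch below uses a simple in-memory list as a stand-in for a physical file: copy only the active records into the mirror, apply the changes that arrived while the copy was running, and then swap the mirror in during a brief exclusive window. Real products, of course, work against the database itself and typically replay journal entries to capture concurrent changes.

# Illustrative-only sketch of the mirrored-file reorganization method.
# A list of dicts stands in for a physical file; real tools work against
# the database and use journal entries to capture concurrent changes.

def mirrored_reorg(records, pending_changes):
    # Step 1: copy only active (not logically deleted) records to the mirror.
    mirror = [r.copy() for r in records if not r["deleted"]]

    # Step 2: apply the changes that arrived while the copy was running.
    for change in pending_changes:
        if change["op"] == "add":
            mirror.append(change["record"])
        elif change["op"] == "delete":
            mirror = [r for r in mirror if r["id"] != change["id"]]

    # Step 3: brief exclusive window -- swap the mirror in for the original.
    return mirror

if __name__ == "__main__":
    file_records = [{"id": 1, "deleted": False},
                    {"id": 2, "deleted": True},
                    {"id": 3, "deleted": False}]
    changes = [{"op": "add", "record": {"id": 4, "deleted": False}},
               {"op": "delete", "id": 1}]
    print(mirrored_reorg(file_records, changes))   # records 3 and 4 remain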

 

Obviously, in-place reorganization requires less storage space than the mirrored-file method, but it is not ideal in some circumstances. The mirrored-file method is typically used if triggers execute actions when records are added or deleted, referential integrity constraints are defined, journaling is used for data warehousing purposes, or you want to reorganize all members in a file at the same time.

 

For both the in-place and mirrored-file methods, the tool needs only a brief period of exclusive file use. This very short period—often just a few seconds—when applications will not be able to access the files being reorganized does not have to coincide with or immediately follow any of the other reorganization processes. Instead, it can be deferred to a time when it will have the least impact on the organization.

Dynamic Implies Resilient

Regardless of the occurrence of unplanned events, such as disasters and hardware failures, or planned events, such as scheduled maintenance, the organization has to be able to keep functioning. Consequently, a dynamic infrastructure must be one that can keep the business going no matter what the world throws at it.

 

HA software facilitates a resilient IT infrastructure that provides a high level of protection against downtime, both planned and unplanned. How much protection it offers depends on the HA topology.

 

HA software maintains up-to-date replicas of production servers. These replicas can be located in the same room, on opposite sides of the globe from each other, or even in different partitions on the same system.

 

If the remote backup server is located far enough from the production server such that a single disaster will almost certainly not affect both servers, then this topology offers business resiliency in all circumstances. Even if a disaster destroys the primary data center, operations can still continue virtually uninterrupted. In addition, the remote replica server is also available to keep the business running through any planned maintenance events such as the upgrades or migrations described above.

 

A replica server can also help to make the IT infrastructure more scalable and versatile. For example, because the backup server contains a complete, current copy of all data, nightly backup tapes can be created there rather than on the production server. This removes the processing load from the production machine and eliminates the downtime that is often required when creating backup tapes.

 

It's not just backup jobs that can be moved to the replica server. Read-only functions, such as queries and batch reporting, can also be run there, thereby removing their processing and disk I/O load from the production systems.
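
One way to picture this offloading is as a simple routing rule placed in front of the two servers: anything that updates data goes to production, while read-only work is directed at the replica. The class, workload names, and server names in this Python sketch are hypothetical and purely illustrative.

# Hypothetical sketch: route read-only work (queries, reports, backups)
# to the replica while all updates continue to go to production.

class WorkloadRouter:
    def __init__(self, production_server, replica_server):
        self.production_server = production_server
        self.replica_server = replica_server

    def target_for(self, workload):
        """Pick a server based on whether the workload modifies data."""
        read_only = {"query", "batch_report", "tape_backup"}
        return self.replica_server if workload in read_only else self.production_server

if __name__ == "__main__":
    router = WorkloadRouter("PRODSYS", "HAREPLICA")
    for job in ("order_entry", "query", "batch_report", "tape_backup"):
        print(f"{job:13s} -> {router.target_for(job)}")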

 

Systems used solely as backup servers are often considerably underutilized until they are called upon to take on the production role, but this doesn't have to be so. You can use Power Systems partitioning to run multiple virtual servers on each of the two physical systems. That way, some of the partitions on each system can run production servers, while others back up the production servers running on the other machine.

 

When the architecture is designed this way, both systems can be less powerful than would otherwise be required. When either system shuts down or must be taken offline for maintenance, the organization may not be able to run at full capacity on the single remaining system, but this drawback may be outweighed by lower hardware costs.
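
A quick sizing comparison, using invented numbers, illustrates the trade-off: a dedicated standby must match production capacity but sits largely idle, whereas in the mutual-backup topology each system needs only enough capacity to absorb the other's workload at a reduced service level during an outage.

# Invented numbers only: rough sizing comparison of a dedicated-backup
# topology versus a mutual-backup topology built on partitions.

production_load_cores = 16       # cores needed for all production work combined

# Dedicated backup: a full-sized production box plus an idle standby of equal size.
dedicated_total = production_load_cores * 2

# Mutual backup: each box normally runs half the production load; during an
# outage the survivor runs everything at a reduced (here, 75%) service level.
degraded_fraction = 0.75
mutual_box = production_load_cores * degraded_fraction
mutual_total = mutual_box * 2

print(f"Dedicated backup topology: {dedicated_total:.0f} cores across two systems")
print(f"Mutual backup topology:    {mutual_total:.0f} cores across two systems "
      f"(runs at {degraded_fraction:.0%} capacity during an outage)")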

 

The above are merely examples of the technologies and tactics that can serve to make an IT infrastructure and, in turn, the organization more dynamic. The gamut of options is too large to describe in full here.

 

The point is this: when you design or redesign your IT infrastructure, look at each component and ask yourself, "Will this design and the technologies we use to implement it allow us to continue to operate with optimal efficiency and effectiveness if tomorrow looks considerably different from today?" If the answer is "no," it is usually a good idea to search for ways to change that answer to "yes."

Craig Johnson

Craig Johnson, vice president of research and development at Vision Solutions, has over 14 years of experience in developing high availability and disaster recovery products and features targeted specifically for enterprise-class customers. Mr. Johnson manages a team that works closely with Vision's customers, business partners, and IBM to develop leading-edge high availability solutions.

 

Mr. Johnson has 25 years of experience in the software and high-technology industry and also serves as an advisor to the University of Minnesota on its Management of Technology (MOT) advisory board. In addition, Craig has written or co-written a number of articles on high availability topics and technologies and is a recognized expert in the area.

 

Prior to joining Vision, Mr. Johnson worked on a variety of software engineering projects and technologies at NCR Comten, Unisys, and IBM. He holds a Bachelor of Science degree in computer science and business finance from Minnesota State University in Mankato.
