Companies today are very aware of the high costs associated with managing stored data and keeping this data available to business-critical applications. These management costs are escalating at a time when corporate IT organizations are looking to streamline operations to ensure that infrastructure investments are leading to increases in productivity and profitability. At the same time, pressure to manage more infrastructure resources with fewer personnel is at an all-time high.
Highly centralized data centers serving the needs of both internal departments and external customers are now regarded as the path to address these problems by centralizing procurement and administration of complex systems. These data centers inherently contain a massive amount of storage, on the order of tens to hundreds of terabytes, and a heterogeneous set of server platforms suited to the needs of each application or department.
The one-size-fits-all approach to storage no longer works. Storage requirements now vary by application and even by user within an application. Storing files, for instance, has different requirements for performance, recoverability, and scalability than storing Web content, internal email, or mission-critical database instances does. Based upon the relative value to the business, data storage requirements have become very diverse and complex.
This article will shed light on how to get the most out of your existing storage infrastructure, how to cost-effectively scale storage resources for future growth, and how to build an information infrastructure that is secure, manageable, and reliable enough to support mission-critical applications.
Improving Storage Utilization Drives Down Costs and Complexity
As the cost of storage hardware continues to decline, many organizations choose to simply purchase more storage capacity as their data requirements increase. Unfortunately, such a tactic is not an ideal long-term solution. Beyond the immediately apparent hardware and labor costs, adding capacity also demands additional floor space, maintenance, and administrative staff. And every added disk array is another potential point of failure.
Worse, as companies add storage capacity, they accumulate systems from various hardware vendors, each with its own operating software and utilities. The environment becomes more complex with each addition, and this complexity adds expense. The IT department must employ administrators who are proficient in multiple storage technologies and must ensure that those administrators are available at all times to address problems as they arise.
The most cost-effective storage, consequently, is the storage that has already been purchased. Although the concept sounds simple, analyst studies have shown that companies typically purchase and deploy excess capacity, leaving their utilization rates at 20–40 percent (Gartner Group 2005 Data Center Conference). At 25 percent utilization, for example, every usable terabyte effectively costs four times its purchase price. This means the real total cost of ownership of storage is several times higher than anticipated, which minimizes any chance of seeing a return on investment.
Understanding how to increase utilization of existing storage resources, and only then justifying new purchases, should be the primary goal before evaluating and buying additional storage. This not only eases the pain of data growth, but also provides important benefits for data backup and recovery. In addition, it aligns IT with changing business needs.
Defining Storage Utilization
Companies that feel they have high levels of storage utilization probably haven't run the numbers lately. They may look at how full their disk arrays are and assume that, because they're at 70–80 percent of capacity, their storage utilization is acceptable. But simply reviewing overall disk capacity usage fails to address what is being stored.
The majority of storage devices contain a lot of data that is of little or no immediate business value. Much of it may be non-business-related files such as MP3s. More often, many of the files may be duplicated, old, or rarely accessed. Of the files that are clearly business-related, many may not have been used in the last 90 days or even the past year. So the question to ask is this: What percentage of capacity that is being used has ongoing business value?
When companies analyze storage utilization from this standpoint, the results are usually surprising, if not shocking. One financial institution in particular recently determined, after careful analysis, that its actual storage utilization was only 8 percent. Other companies confirmed estimates in the single digits as well.
The challenge, then, is to better manage where and how this information is stored.
Virtualization Enables Storage Pooling for Better Utilization
The ability to pool storage into logical volumes has been around for some time. Yet the technology is still somewhat underutilized. Consider a situation in which a particular disk array (A) is only 50 percent full. If an application that uses another array (B) needs more storage capacity, but it can't get any from A, the administrator has to consider buying more capacity while array A sits half-empty.
Storage virtualization helps improve this utilization problem by enabling administrators to pool all storage into logical groups that can be reallocated quickly, even in real time, based on demand. The best virtualization software can do this across any storage array from a variety of vendors, running under a variety of operating systems, from a single management interface.
When storage resources are virtualized, they appear to administrators as a single resource. For example, two 72GB drives can be combined to create a virtual 144GB disk or volume. Data can be moved transparently across vendors and operating systems to utilize available capacity. Storage management tools also enable IT shops to classify data by age or type so that less-valuable or less-current data can be moved automatically to less-costly storage (more about this tiered approach later). Storage utilization improves. Capital costs shrink. Additionally, new tools enable users to migrate data between operating systems—from AIX to Linux, for example, or from Linux to Solaris.
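As a conceptual illustration of pooling, the short Python sketch below models how two physical arrays from different vendors can be presented as one logical pool from which volumes are allocated wherever capacity is free. The Array and StoragePool classes, and every name in the example, are hypothetical; actual virtualization products expose this capability through their own management interfaces.

```python
# Hypothetical model of storage pooling; not any vendor's API.

class Array:
    """A single physical disk array."""

    def __init__(self, name, vendor, capacity_gb):
        self.name = name
        self.vendor = vendor
        self.capacity_gb = capacity_gb
        self.used_gb = 0

    @property
    def free_gb(self):
        return self.capacity_gb - self.used_gb


class StoragePool:
    """Presents many physical arrays as one logical pool of capacity."""

    def __init__(self, arrays):
        self.arrays = arrays

    @property
    def total_free_gb(self):
        return sum(a.free_gb for a in self.arrays)

    def allocate(self, size_gb):
        """Carve a logical volume out of whichever array has the most room."""
        for array in sorted(self.arrays, key=lambda a: a.free_gb, reverse=True):
            if array.free_gb >= size_gb:
                array.used_gb += size_gb
                return f"{size_gb}GB volume placed on {array.name} ({array.vendor})"
        raise RuntimeError("pool exhausted: time to buy more capacity")


# Two 72GB arrays from different vendors appear as a single 144GB pool.
pool = StoragePool([Array("A", "VendorX", 72), Array("B", "VendorY", 72)])
print(pool.total_free_gb)   # 144
print(pool.allocate(60))    # lands on whichever array has the most free space
```

The point of the model is simply that capacity stranded on a half-empty array becomes usable by any application drawing from the pool.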
Not only does this storage pooling improve storage utilization, but it also makes administrators instantly more productive, freeing them to spend more time on other tasks, such as building business applications.
Creating a Tiered Storage Infrastructure
Another useful response to the utilization problem has been to segregate data into multiple tiers according to the cost of hardware, thereby freeing up expensive high-performance storage (such as Fibre Channel-based arrays) by migrating older, less-used data to lower-cost storage such as SATA. This data migration can be done based on the file's age, size, owner, or other attributes. And it can be done in reverse if a file that was once unimportant suddenly becomes very important.
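As a rough illustration of age-based migration, the Python sketch below moves files that have not been accessed within a given window from a primary tier to a lower-cost tier. The mount points and the 90-day threshold are assumptions chosen for the example; commercial tiering software applies far richer policies (size, owner, file type) and can promote files back to the primary tier when they become active again.

```python
# Hypothetical age-based tiering sketch. Paths and threshold are assumptions.
import os
import shutil
import time

PRIMARY_TIER = "/mnt/tier1"     # e.g., a Fibre Channel array (assumed mount point)
SECONDARY_TIER = "/mnt/tier2"   # e.g., a SATA array (assumed mount point)
AGE_LIMIT_DAYS = 90

def migrate_stale_files(src_root, dst_root, age_limit_days):
    """Move files not accessed within age_limit_days to the lower-cost tier."""
    cutoff = time.time() - age_limit_days * 86400
    for dirpath, _dirnames, filenames in os.walk(src_root):
        for name in filenames:
            src = os.path.join(dirpath, name)
            # Last-access time is the aging criterion in this sketch.
            if os.stat(src).st_atime < cutoff:
                rel = os.path.relpath(src, src_root)
                dst = os.path.join(dst_root, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)
                print(f"moved {rel} to secondary tier")

if __name__ == "__main__":
    migrate_stale_files(PRIMARY_TIER, SECONDARY_TIER, AGE_LIMIT_DAYS)
```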
Tiered architectures can reduce storage capital and operating expenses by hosting less-critical and stale data on lower-cost storage devices. A tiered storage strategy allows snapshot backups and point-in-time copies to be hosted on multiple tiers, replicates mirrored data to less-costly storage, and uses dynamic storage tiering for active, policy-based movement of data.
Tiering storage is about recapturing high-rent primary disk space and redirecting data that doesn't belong on a higher class of storage to secondary or tertiary targets. Implementing a tiered storage infrastructure enables organizations to better utilize existing resources, reduce management complexities, and reduce overall costs.
Simple, Cost-Effective Data Replication and Migration
In addition to these savings and efficiencies, storage management solutions can eliminate other pain points and improve the lives of IT administrators in a number of important ways. Describing all those benefits is beyond the scope of this article, but several benefits are worth highlighting.
Heterogeneous storage management tools can replicate data over large distances to secondary (or remote) sites more efficiently, greatly reducing (or eliminating) the threat of data loss and downtime caused by a disaster at the primary site. This capability is fast and efficient when data needs to be recovered after a disaster. Because this replication can be done from a high-end array to a low-cost array, the utilization of expensive storage arrays is improved and the capital cost of replication is driven down.
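To illustrate the idea (and not any particular product's replication engine), the hypothetical sketch below copies new or changed files from a primary mount point to a lower-cost target that could sit at a secondary site. The paths are assumptions; real heterogeneous replication typically works at the volume or block level and handles write ordering, consistency, and network transport.

```python
# Hypothetical file-level replication sketch; paths are assumptions.
import filecmp
import os
import shutil

PRIMARY = "/mnt/primary"      # assumed high-end array at the primary site
REPLICA = "/mnt/dr_lowcost"   # assumed low-cost array at the secondary site

def replicate(src_root, dst_root):
    """Copy files that are missing or have changed on the replica."""
    for dirpath, _dirnames, filenames in os.walk(src_root):
        for name in filenames:
            src = os.path.join(dirpath, name)
            dst = os.path.join(dst_root, os.path.relpath(src, src_root))
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            if not os.path.exists(dst) or not filecmp.cmp(src, dst, shallow=True):
                shutil.copy2(src, dst)

if __name__ == "__main__":
    replicate(PRIMARY, REPLICA)
```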
Another capability of some storage management solutions is the ability to migrate data for consolidation or for switching operating systems. Enabling better migration allows administrators to better utilize server resources as well.
Moving Toward Increased Storage Utilization
Organizations can immediately realize fundamental cost efficiencies by identifying and reclaiming unused storage and reconfiguring the overall storage infrastructure.
Understanding current capacity and future growth will provide financial benefits, reducing capital expenses for new storage resources, while giving IT organizations visibility into which business units are consuming storage.
To analyze current storage utilization, ask some basic questions:
- How much storage really exists? In large data centers and IT departments, it may be difficult to know exactly how many storage devices exist and what their capacities are.
- How much of that capacity—on a device-by-device basis—is actually filled with data?
- How much of that data is actually important and has been accessed (over the last 30, 60, or 90 days)?
In the course of this complex discovery process, organizations will be able to arrive at a number of conclusions, such as how much total storage capacity exists; how much capacity falls into high-end, mid-range, and low-end categories; how much capacity in each category is currently unused; and what percentage of current storage actually consists of data with current business value.
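A simple script can begin to answer these questions. The Python sketch below reports, for each mount point in an assumed inventory, how full the device is and what share of the stored data has been accessed in the last 90 days; both the mount-point list and the window are placeholders to be replaced with an organization's own inventory and retention policy.

```python
# Hypothetical utilization report; mount points and the 90-day window are assumptions.
import os
import shutil
import time

MOUNT_POINTS = ["/mnt/tier1", "/mnt/tier2"]   # replace with your own inventory
RECENT_DAYS = 90

def report(mount, recent_days):
    """Print capacity used and the share of data accessed recently."""
    total, used, _free = shutil.disk_usage(mount)
    cutoff = time.time() - recent_days * 86400
    data_bytes = 0
    recent_bytes = 0
    for dirpath, _dirnames, filenames in os.walk(mount):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue                      # skip files that vanish mid-scan
            data_bytes += st.st_size
            if st.st_atime >= cutoff:
                recent_bytes += st.st_size
    pct_full = 100 * used / total
    pct_active = 100 * recent_bytes / data_bytes if data_bytes else 0
    print(f"{mount}: {pct_full:.0f}% of capacity filled, "
          f"{pct_active:.0f}% of stored data accessed in the last {recent_days} days")

for mount in MOUNT_POINTS:
    report(mount, RECENT_DAYS)
```

Numbers like these separate "the array is 70 percent full" from "only a fraction of what is stored has current business value," which is the distinction this analysis is meant to expose.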
Managing Risk, Cost, and Complexity
IT organizations face a difficult balancing act: they must keep the business responsive to the market while operating efficiently. Companies are continually seeking new ways to innovate and to align IT with the changing needs of the business, and the work of managing risk, cost, and complexity across the enterprise is never done. Innovation drives market strategy, and efficiency frees the resources that make innovation possible. To unlock that innovation, IT organizations must get greater productivity from existing resources and staff, and decision-makers must keep abreast of technologies that drive down IT costs and streamline management of the storage environment. Only when efficiency and innovation are both realized does the IT budget deliver its full value.
Danny Milrad is the Senior Product Marketing Manager for the Storage and Server Management Group of Symantec.