Making the Most of Existing Storage Resources

Companies today are very aware of the high costs associated with managing stored data and keeping this data available to business-critical applications. These management costs are escalating at a time when corporate IT organizations are looking to streamline operations to ensure that infrastructure investments are leading to increases in productivity and profitability. At the same time, pressure to manage more infrastructure resources with fewer personnel is at an all-time high.

Highly centralized data centers serving the needs of both internal departments and external customers are now regarded as the path to address these problems by centralizing procurement and administration of complex systems. These data centers inherently contain a massive amount of storage—on the order of tens and hundreds of terabytes—and a heterogeneous set of server platforms suited to the needs of each application or department.

The one-size-fits-all approach to storage no longer works. Storage requirements now vary by application and even by user within an application. Storing files, for instance, has different requirements for performance, recoverability, and scalability than storing Web content, internal email, or mission-critical database instances does. Based upon the relative value to the business, data storage requirements have become very diverse and complex.

This article will shed light on how to get the most out of your existing storage infrastructure, how to cost-effectively scale storage resources for future growth, and how to build an information infrastructure that is secure, manageable, and reliable enough to support mission-critical applications.

Improving Storage Utilization Drives Down Costs and Complexity

As the cost of storage hardware continues to decline, many organizations choose to simply purchase more storage capacity as their data requirements increase. Unfortunately, such a tactic is not an ideal long-term solution. Besides the immediately apparent hardware and labor costs of such an approach, adding capacity also demands additional floor space, real estate, maintenance, and administrators. And every added disk array is an added potential point of system failure.

Worse, as companies add storage capacity, they accumulate systems from various hardware vendors, each with its own operating software and utilities. The environment becomes more complex with each addition, and this complexity adds expense. The IT department must have administrators who are proficient in multiple storage technologies and ensure that they are available at all times to address problems when they arise.

The most cost-effective storage, consequently, is the storage that has already been purchased. Although the concept sounds simple, analyst studies have shown that companies typically purchase and deploy excess capacity, leaving their utilization rates at 20–40 percent (Gartner Group 2005 Data Center Conference). This means the real total cost of ownership of storage is far higher than anticipated, undermining any chance of seeing a return on investment.

Understanding how to increase utilization of existing storage resources should be the primary goal before evaluating and purchasing any new capacity. This not only eases the pain of data growth, but also provides important benefits for data backup and recovery. In addition, it aligns IT with changing business needs.

Defining Storage Utilization

Companies that feel they have high levels of storage utilization probably haven't run the numbers lately. They may look at how full their disk arrays are and assume that, because they're at 70–80 percent of capacity, their storage utilization is acceptable. But simply reviewing overall disk capacity usage fails to address what is being stored.

The majority of storage devices contain a lot of data that is of little or no immediate business value. Much of it may be non-business-related files such as MP3s. More often, many of the files may be duplicated, old, or rarely accessed. Of the files that are clearly business-related, many may not have been used in the last 90 days or even the past year. So the question to ask is this: What percentage of capacity that is being used has ongoing business value?

When companies analyze storage utilization from this standpoint, the results are usually surprising, if not shocking. One financial institution in particular recently determined, after careful analysis, that its actual storage utilization was only 8 percent. Other companies confirmed estimates in the single digits as well.

The challenge, then, is to better manage where and how this information is stored.

Virtualization Enables Storage Pooling for Better Utilization

The ability to pool storage into logical volumes has been around for some time. Yet the technology is still somewhat underutilized. Consider a situation in which a particular disk array (A) is only 50 percent full. If an application that uses another array (B) needs more storage capacity, but it can't get any from A, the administrator has to consider buying more capacity while array A sits half-empty.

Storage virtualization helps solve this utilization problem by enabling administrators to pool all storage into logical groups that can be reallocated quickly, or even in real time, based on demand. The best virtualization software can do this across storage arrays from a variety of vendors, running under a variety of operating systems, from a single management interface.

When storage resources are virtualized, they appear to administrators as a single resource. For example, two 72GB drives can be combined to create a virtual 144GB disk or volume. Data can be moved transparently across vendors and operating systems to utilize available capacity. Storage management tools also enable IT shops to classify data by age or type so that less-valuable or less-current data can be moved automatically to less-costly storage (more about this tiered approach later). Storage utilization improves. Capital costs shrink. Additionally, new tools enable users to migrate data between operating systems—from AIX to Linux, for example, or from Linux to Solaris.
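To make the pooling idea concrete, here is a minimal sketch in Python. The class and array names are invented for illustration and do not correspond to any actual product API; the point is simply that two half-empty arrays, once pooled, can satisfy an allocation request larger than either array's free space.

```python
from dataclasses import dataclass

@dataclass
class Array:
    """One physical disk array, with a fixed capacity."""
    name: str
    capacity_gb: int
    used_gb: int = 0

    @property
    def free_gb(self) -> int:
        return self.capacity_gb - self.used_gb

class VirtualPool:
    """Presents several heterogeneous arrays as a single logical resource."""
    def __init__(self, arrays):
        self.arrays = arrays

    @property
    def total_free_gb(self) -> int:
        return sum(a.free_gb for a in self.arrays)

    def allocate(self, size_gb: int) -> dict:
        """Satisfy a request from whichever arrays have free space,
        spanning arrays when no single one can hold it."""
        if size_gb > self.total_free_gb:
            raise ValueError("pool exhausted")
        placement = {}
        for a in sorted(self.arrays, key=lambda a: a.free_gb, reverse=True):
            take = min(a.free_gb, size_gb)
            if take:
                a.used_gb += take
                placement[a.name] = take
                size_gb -= take
            if not size_gb:
                break
        return placement

# Two 72GB drives appear as one 144GB virtual volume.
pool = VirtualPool([Array("A", 72), Array("B", 72)])
print(pool.total_free_gb)   # 144
print(pool.allocate(100))   # request spans both arrays
```

In a real product the placement logic is far more sophisticated (striping, QoS, thin provisioning), but the administrator's view is the same: one pool, one free-space number.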

Not only does this storage pooling improve utilization; administrators also become instantly more productive and can spend more time on other tasks, such as building business applications.

Creating a Tiered Storage Infrastructure

Another useful response to the utilization problem has been to segregate data into multiple tiers according to the cost of hardware, thereby freeing up expensive high-performance storage (like fibre-channel-based storage) by migrating older, less-used data to lower-cost storage like SATA. This data migration can be done based on the file's age, size, owner, or other attributes. And it can be done in reverse if a file that was once unimportant suddenly becomes very important.
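As a rough illustration of such an age-based policy, the sketch below plans a migration in Python. The tier names and age thresholds are invented assumptions for the example, not recommendations from any vendor.

```python
import time

DAY = 86400  # seconds

# Hypothetical policy: days since last access -> storage tier.
TIERS = [
    (30, "tier1-fibre-channel"),
    (180, "tier2-sata"),
    (float("inf"), "tier3-archive"),
]

def place(age_days: float) -> str:
    """Pick the first tier whose age threshold covers the file."""
    for limit, tier in TIERS:
        if age_days <= limit:
            return tier

def plan_migration(files, now=None) -> dict:
    """files: iterable of (path, last_access_epoch) pairs.
    Returns a mapping of path -> target tier; the actual move
    (and the reverse move, if a file becomes hot again) would be
    carried out by the storage management software."""
    now = now if now is not None else time.time()
    return {path: place((now - atime) / DAY) for path, atime in files}
```

The same mechanism works in reverse: re-running the plan after a file is accessed again places it back on the higher tier.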

Tiered architectures can reduce storage capital and operating expenses by hosting less-critical and stale data on lower-cost devices. A tiered storage strategy lets organizations host snapshot backups and point-in-time copies on multiple tiers, replicate mirrored data to less-costly storage, and use dynamic storage tiering for active, policy-based movement of data.

Tiering storage is about recapturing high-rent primary disk space and redirecting data that doesn't belong on a higher class of storage to secondary or tertiary targets. Implementing a tiered storage infrastructure enables organizations to better utilize existing resources, reduce management complexities, and reduce overall costs.

Simple, Cost-Effective Data Replication and Migration

In addition to these savings and efficiencies, storage management solutions can eliminate other pain points and improve the lives of IT administrators in a number of important ways. Describing all those benefits is beyond the scope of this article, but several benefits are worth highlighting.

Heterogeneous storage management tools can replicate data over large distances to secondary (or remote) sites more efficiently, greatly reducing (or eliminating) the threat of data loss and downtime caused by a disaster at the primary site. This capability is fast and efficient when data needs to be recovered after a disaster. Because this replication can be done from a high-end array to a low-cost array, the utilization of expensive storage arrays is improved and the capital cost of replication is driven down.

Another capability of some storage management solutions is the ability to migrate data for consolidation or for switching operating systems. Enabling better migration allows administrators to better utilize server resources as well.

Moving Toward Increased Storage Utilization

Organizations can immediately realize fundamental cost efficiencies by identifying and reclaiming unused storage and reconfiguring the overall storage infrastructure.

Understanding current capacity and future growth will provide financial benefits, reducing capital expenses for new storage resources, while giving IT organizations visibility into which business unit is consuming storage.

To analyze current storage utilization, ask some basic questions:

  • How much storage really exists? In large data centers and IT departments, it may be difficult to know exactly how many storage devices exist and what their capacities are.
  • How much of that capacity—on a device-by-device basis—is actually filled with data?
  • How much of that data is actually important and has been accessed (over the last 30, 60, or 90 days)?

In the course of this complex discovery process, organizations will be able to arrive at a number of conclusions, such as how much total storage capacity exists; how much capacity falls into high-end, mid-range, and low-end categories; how much capacity in each category is currently unused; and what percentage of current storage actually consists of data with current business value.
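The distinction between "how full the disks are" and "how much stored data is actually in use" can be made concrete with a small Python sketch. The field names and the 90-day window are illustrative assumptions; in practice the access times would come from a storage reporting tool or a filesystem scan.

```python
import time

DAY = 86400  # seconds

def utilization_report(files, capacity_bytes, window_days=90, now=None):
    """files: iterable of (size_bytes, last_access_epoch) pairs.
    Reports how full the device is versus what share of the stored
    data was actually touched inside the access window."""
    now = now if now is not None else time.time()
    used = active = 0
    for size, atime in files:
        used += size
        if now - atime <= window_days * DAY:
            active += size
    return {
        "used_pct": round(100 * used / capacity_bytes, 1),
        "active_pct_of_used": round(100 * active / used, 1) if used else 0.0,
        "active_pct_of_capacity": round(100 * active / capacity_bytes, 1),
    }
```

A device can easily report 80 percent "used" while this kind of analysis shows only a single-digit percentage of capacity holding data with current business value, which is exactly the gap the financial institution mentioned earlier discovered.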

Managing Risk, Cost, and Complexity

IT organizations face a difficult balancing act: ensuring market responsiveness while operating efficiently. Companies continually seek new ways to innovate and to align IT closely with the changing needs of the business, and IT's job is never done when it comes to managing risk, cost, and complexity across the enterprise. Efficiency frees the resources that make innovation possible, and innovation in turn drives market strategy. To unlock that innovation, IT organizations must proactively attain greater productivity from existing resources and staff, and decision-makers must keep abreast of the technologies that can drive down IT costs and streamline management of the storage environment. Only when both efficiency and innovation are achieved can full value be derived from the IT budget.

Danny Milrad is the Senior Product Marketing Manager for the Storage and Server Management Group of Symantec.
