Is cloud computing an energy saver or an energy glutton? Compared to data centers as we’ve known them, it’s the former, but some projections say it could be the latter by 2030. Resolving this environmental conundrum will likely shape cloud computing’s future over the next few years.
Cloud computing has enabled a transition in business computing from a world where most transactions were handled by in-house infrastructure to one in which a majority of compute-intensive activities are handled by independent Cloud Service Providers (CSPs). A Gartner study published in June 2022 estimated that in 2021 alone, the cloud Infrastructure as a Service (IaaS) market grew by more than 41 percent, with 80 percent of that market held by just five companies (Amazon, Microsoft, Alibaba, Google, and Huawei). This migration has, of course, been under way for a number of years.
On the green side, consolidating numerous data centers into the regional megacenters favored by CSPs has cut the total megawattage required to handle the world’s computing needs. An often-quoted 2013 study by Lawrence Berkeley National Laboratory and Northwestern University (funded by Google) found that if the software apps used by 86 million U.S. workers moved to the cloud, that step alone could save enough energy to power Los Angeles for a year. Cloud computing has already contributed significantly to reducing energy consumption compared to the days when every business handled its computerized transactions in-house, and that consolidation continues apace.
On the other hand, cloud data centers consume energy constantly. They run 24/7 to satisfy myriad clients in a world where it’s business hours somewhere on the globe every minute of every day, handling everything from short-lived service requests to long-running dataset analyses and everything in between. That situation is unlikely to change, and it points to rising power needs every year for the foreseeable future, given the growing popularity of Artificial Intelligence, the Internet of Things, and the recent introduction of 5G technology, to name just a few of the most prominent growth areas. Eventually, though, the vast majority of enterprises will have completed their conversion to cloud computing, after which the energy savings from consolidation will presumably level off.
Energy Innovation Policy & Technology, which describes itself as a “non-partisan energy and climate policy think tank,” said in a 2020 research synopsis that global data center energy use may have doubled since 2010, reaching roughly 205 terawatt-hours in 2018. Mere extrapolation from that metric points to significantly higher global demand for electricity by 2030 just for corporate data systems, which presumably include the global cloud. Given the rapid growth of computing in consumer services and entertainment devices such as virtual-reality headsets, “smart” homes and cities, and the tracking of every website any of us visits or buys from, that could be a conservative estimate. The power demands of computing in general, and of cloud computing and its allied applications in particular, are likely to increase over the long term, and with them will come the other ramifications of generating more power, such as rising carbon dioxide production and related concerns.
In support of the idea that power demand will increase, AMD CEO Lisa Su, in a keynote at the IEEE International Solid-State Circuits Conference in February 2023, stated that some supercomputers could conceivably require as much as 500 megawatts apiece within a decade. She went on to cite energy efficiency as one of the most important challenges facing computing in the next ten years. One could argue about the exact impact of supercomputing on the cloud, but her point illustrates where informed expectations are heading.
So Which Way Is the “Power Curve” Really Bending?
As these few examples illustrate, there’s no consensus yet on whether computing in general, and cloud computing in particular, will start to starve the globe of electricity by 2030 and beyond. One can find posts, articles, and books pointing in both directions, and as with so many things, the degree of danger lies in the eye of the beholder.
What does seem certain, whether or not you’re inclined to weigh environmental impacts heavily, is that, barring some catastrophe, energy demand for cloud and non-cloud computing will rise significantly over the next ten years. After all, it’s human nature to want more of any good thing.
When that happens, it’s a safe bet that, at the very least, cloud service providers will look for ways, explicit or implicit, to have their customers foot the power bill. For that reason alone (though there are of course others), it makes good economic and business sense to consider adopting “green” energy practices for cloud and other forms of computing.
Energy-Saving Techniques Among CSPs
A metric called Power Usage Effectiveness (PUE) was introduced in 2007 by The Green Grid, a non-profit consortium of data-center operators, and was later adopted as an international standard (ISO/IEC 30134-2:2016). PUE is the ratio of a data center’s total energy consumption to the energy consumed by its computing equipment alone. Although the standard provides a measuring stick for comparing the efficiency of one data center to another, reported figures notoriously draw their measurement boundaries inconsistently: some operators make honest efforts to include facility-wide overheads such as lighting and cooling, while others count only a subset of them, and which ratings do which is often obscured, intentionally or not. That renders strict comparisons of one data center’s PUE to another’s problematic and potentially unreliable.
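The formula itself is simple: total facility energy divided by the energy consumed by the IT equipment alone, with 1.0 as the theoretical ideal. A minimal sketch in Python, using hypothetical meter readings:

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy (ideal value: 1.0)."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical example: a center drawing 1,500 kWh overall while its servers,
# storage, and network gear consume 1,000 kWh has a PUE of 1.5 -- that is,
# half a kilowatt-hour of overhead (cooling, lighting, power conversion)
# for every kilowatt-hour of actual computing.
print(pue(1500.0, 1000.0))  # 1.5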
Green Cloud Computing (often called “Green Cloud” for short) refers to a group of practices that aim to mitigate the power consumption of the large data centers cloud computing requires. Green Cloud differs from the more general concept of “green computing,” which includes such tasks as producing chips and other computer-related equipment in ways that reduce the carbon footprint of manufacturing operations.
Some of the most basic Green Cloud practices concern how providers site and operate their service centers. Recently there has been a move to build centers in northern locales such as Sweden, where the climate helps dissipate the excess heat of massive server farms; to locate new centers near hydroelectric plants, underwater, or underground; and to channel waste heat from a center into warming nearby buildings rather than consuming still more power. This practice usually also includes careful attention to a center’s floor plan and other design features that minimize energy use. In addition, some projects run artificial intelligence apps on in-plant systems that analyze power use and recommend ways to eliminate excess and waste (a toy sketch of the idea follows below). There are also flexible hardware devices that let centers control server voltages under predefined circumstances.
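As a rough illustration of what such an analysis looks for, the Python sketch below flags servers that draw near-peak power while doing little work, making them candidates for consolidation or power capping. All server names, readings, and thresholds here are hypothetical.

# Flag servers that consume substantial power at very low utilization.
samples = [
    # (server, average CPU utilization %, average power draw in watts)
    ("srv-01", 78.0, 310.0),
    ("srv-02", 6.0, 290.0),   # high draw, little work: wasted energy
    ("srv-03", 4.0, 120.0),
    ("srv-04", 55.0, 250.0),
]

UTIL_FLOOR = 10.0   # below this, the server is effectively idle
POWER_CEIL = 200.0  # above this, idling is expensive

for name, util, watts in samples:
    if util < UTIL_FLOOR and watts > POWER_CEIL:
        print(f"{name}: {watts:.0f} W at {util:.0f}% utilization -- "
              f"candidate for consolidation or power capping")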
A second general area of recommendations involves software engineering practices that help cloud clients adapt their applications for more efficient operation. These include selecting the best-suited languages for applications running in the cloud, optimizing application configurations, and reducing excessive database disk reads (see the sketch below). Containerization, which packages all parts of an application into a single software entity, can provide faster access and more efficient execution. Edge computing techniques, which place smaller computing devices nearer to the sources of data gathering, can reduce data transmission across the Internet. Finding ways to help customers migrate Infrastructure as a Service (IaaS) platforms with minimal re-engineering is another idea.
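One of those techniques, reducing excessive database reads, can be as simple as memoizing a read-heavy lookup so that repeated requests for the same record never touch the disk twice. A minimal Python sketch, with fetch_from_db as a hypothetical stand-in for a real database call:

from functools import lru_cache

def fetch_from_db(customer_id: int) -> dict:
    # Hypothetical stand-in for a real query; in production this would
    # hit the database (and its disks).
    print(f"disk read for customer {customer_id}")
    return {"id": customer_id, "name": f"customer-{customer_id}"}

@lru_cache(maxsize=4096)
def get_customer(customer_id: int) -> dict:
    # Only cache misses reach fetch_from_db; repeat lookups are served
    # from memory, cutting disk I/O and the energy behind it.
    return fetch_from_db(customer_id)

get_customer(42)  # triggers the one and only disk read
get_customer(42)  # served from the cache; no second read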
The culmination of the software engineering approach is cloud-native application development, in which applications are built with cloud execution in mind from the start. Such applications are composed of components that aren’t tied to any specific hardware, rely on concepts such as microservices and containers, and scale more efficiently to meet the requirements of any workload.
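What that looks like in miniature: the Python handler below keeps no state between requests, so any number of identical replicas can be started or stopped as load demands, letting capacity (and power) track the workload. The port and the HOSTNAME environment variable are illustrative choices, not requirements.

from http.server import BaseHTTPRequestHandler, HTTPServer
import json
import os

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Everything needed to answer arrives with the request or from the
        # environment; nothing is remembered between calls, so a load
        # balancer can route each request to any replica.
        body = json.dumps({"replica": os.environ.get("HOSTNAME", "local"),
                           "path": self.path}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), Handler).serve_forever()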
Closely allied with this area is the need to oversee resource consumption by virtual machines (VMs) efficiently. Because workloads shift frequently between VMs, the energy each one needs fluctuates, and the hypervisor must continually rebalance resources so that power isn’t being supplied to capacity a VM isn’t actually using. Similarly, load balancing between servers can yield significant energy savings over time, particularly when it can be achieved with minimal networking support, such as by sharing one terminal server among multiple VMs. VMs can also be migrated off lightly loaded servers onto servers with spare capacity, so the emptied server can be idled and its power draw eliminated (a toy version appears below). Managing VMs is easier than managing physical servers and provides more flexibility in scheduling work overall.
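A toy version of that consolidation step, assuming hypothetical hosts with VM loads expressed as percentages of host capacity:

# Migrate VMs off lightly loaded hosts onto hosts with spare capacity,
# then power the emptied hosts down. All hosts and loads are hypothetical.
hosts = {"host-a": [5, 10], "host-b": [40, 25], "host-c": [30]}
CAPACITY = 80        # never pack a host beyond 80% of capacity
IDLE_THRESHOLD = 20  # hosts loaded below this are candidates to drain

for src in [h for h, vms in hosts.items() if sum(vms) < IDLE_THRESHOLD]:
    for vm in list(hosts[src]):
        # First fit: find any other host with room for this VM.
        dest = next((h for h, vms in hosts.items()
                     if h != src and sum(vms) + vm <= CAPACITY), None)
        if dest is not None:
            hosts[src].remove(vm)
            hosts[dest].append(vm)
    if not hosts[src]:
        print(f"{src} drained -- can be powered down")

print(hosts)  # {'host-a': [], 'host-b': [40, 25, 5, 10], 'host-c': [30]}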
Workflow optimization is another source of savings. Techniques include reconfiguring network routes to minimize traffic, automating repetitive tasks, and reducing routine storage and server caching. Scheduling certain jobs to run during periods of lower demand, where appropriate, and spreading work to servers in other time zones can also even out server use and level electrical demand (see the sketch below).
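For the scheduling piece, the decision can be as simple as deferring batch jobs until a low-demand window. A small Python sketch; the window hours are hypothetical:

from datetime import datetime, timedelta

OFF_PEAK_START, OFF_PEAK_END = 1, 5  # 01:00-05:00 local time, for example

def next_run(now: datetime) -> datetime:
    """Run a deferrable job now if we're inside the off-peak window;
    otherwise wait for the start of the next one."""
    if OFF_PEAK_START <= now.hour < OFF_PEAK_END:
        return now
    start = now.replace(hour=OFF_PEAK_START, minute=0, second=0, microsecond=0)
    return start if now < start else start + timedelta(days=1)

print(next_run(datetime(2023, 6, 1, 14, 30)))  # 2023-06-02 01:00:00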
Service centers can also reduce toxic waste from electronic and electrical equipment through careful management of hardware resources. For example, uninterruptible power supplies (UPSs) now come in “dynamic” models as well as the older static versions. A dynamic UPS uses a flywheel rather than batteries, eliminating the environmental impact of periodically disposing of the batteries used in static UPSs.
Energy-Saving Techniques for Cloud User Companies
Some of the major cloud providers, particularly Amazon’s AWS, offer a “tags” service that lets client companies break down and analyze their costs and cloud usage. By tracking use this way, those responsible for cloud governance at a user company can make informed decisions about user policies that reduce wasteful overuse of services and resources (a sketch follows below).
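For AWS specifically, cost-allocation tags can be queried programmatically through the Cost Explorer API. A sketch using boto3, in which the tag key “team” and the date range are hypothetical (the tag must first be activated as a cost-allocation tag in the billing console, and the call requires appropriate credentials):

import boto3

# Group one month's costs by the hypothetical cost-allocation tag "team".
ce = boto3.client("ce")
result = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-06-01", "End": "2023-07-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "team"}],
)

for group in result["ResultsByTime"][0]["Groups"]:
    tag_value = group["Keys"][0]  # e.g. "team$analytics"
    cost = group["Metrics"]["UnblendedCost"]["Amount"]
    print(f"{tag_value}: ${float(cost):,.2f}")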
On the end-user side of the equation, encouraging a remote-worker policy reduces the need for office space and energy resources at client companies. It also reduces physical commuting, cutting resource use that way as well.
Choosing a CSP
Client companies that want to see their cloud-services bills rise as slowly as possible over the next decade should consider patronizing providers using some of these practices—and perhaps questioning why other potential providers aren’t doing so.