Ask IT executives what’s driving initiatives to build their own private cloud infrastructures and they will tell you that it’s all about operational efficiency within IT. It’s about agility — and driving down labor costs. Having already virtualized servers, enterprises are now working on software defined storage and networking in a bid to eliminate the need for manual configuration, to enable user self-service when fulfilling IT infrastructure requests, and to allow infrastructure to respond automatically to the collective needs of all applications running in the data center.
The motivation to cut labor costs makes sense. After all, labor is one of the biggest parts of the IT operating budget. But the next biggest cost is power and cooling, and not all enterprises are taking that into account — at least not yet. CIOs focus on software defined servers, storage and networks. They don’t always realize that they should wrap power and cooling into their software defined data center infrastructures.
Data center energy costs aren’t always part of the IT budget, but with data center power consumption rising, an architecture that can dynamically optimize power and cooling loads will become an important consideration for the business as private cloud architectures evolve. Conversely, leaving it out of the planning horizon could be risky.
So far, not much has been done to extend private cloud automation all the way from the application layer down through server, storage and networking and into power and cooling systems. Fully software defined data centers (SDDCs) could step down server processors as application workload requirements change, as well as shut down power distribution paths and relocate virtual machines into one area of the data center during off-peak, evening hours to save energy. While power and cooling system vendors have made data available through APIs, the data is often unused in enterprise-grade software defined data center systems. OpenStack, for example, one of the most popular open source architectures upon which enterprises are building SDDCs, doesn’t even touch upon the issue.
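The off-peak scenario described above boils down to a bin-packing decision: gather the VM loads scattered across the data center, pack them onto as few racks as will hold them, and step down whatever is left idle. The sketch below illustrates that logic with a simple greedy first-fit placement; the rack names, load figures, and capacity threshold are all hypothetical, and a real system would draw these inputs from vendor power and cooling APIs rather than a hard-coded dictionary.

```python
# Hypothetical sketch: consolidating VM loads onto fewer racks during
# off-peak hours so idle racks (and their power distribution paths) can
# be stepped down. All names and numbers are illustrative assumptions.

def plan_offpeak_consolidation(racks, rack_capacity):
    """Greedy first-fit: pack all VM loads into as few racks as possible.

    racks: dict mapping rack name -> list of VM load values (arbitrary units)
    rack_capacity: maximum total load a single rack can host
    Returns (placement, racks_to_power_down).
    """
    # Largest loads first improves how tightly the greedy pass packs.
    all_vms = sorted(
        (load for loads in racks.values() for load in loads), reverse=True
    )
    placement = {}  # rack name -> loads assigned after consolidation
    rack_names = list(racks)
    for vm in all_vms:
        for name in rack_names:
            if sum(placement.get(name, [])) + vm <= rack_capacity:
                placement.setdefault(name, []).append(vm)
                break
    # Any rack that received no load is a candidate for power-down.
    powered_down = [name for name in rack_names if name not in placement]
    return placement, powered_down

# Example: three lightly loaded racks consolidate onto two overnight,
# leaving one rack eligible for power-down.
racks = {"rack-a": [30, 20], "rack-b": [25, 15], "rack-c": [20, 10]}
placement, idle = plan_offpeak_consolidation(racks, rack_capacity=100)
print(idle)  # → ['rack-c']
```

In practice the interesting engineering is not the packing heuristic but the feedback loop: the same telemetry that triggers consolidation in the evening has to trigger redistribution before the morning peak.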
Internet giants like Google and Microsoft, which have built custom cloud infrastructures from the ground up, have been working with power and cooling system vendors for some time. Enterprise customers, however, have been mostly silent. “The Internet giants are asking us to do more than our enterprise customers are, especially on converged IT infrastructure,” says a spokesperson at Schneider Electric.
However, some forward-thinking enterprise IT organizations are already laying the groundwork. Kevin Humphries, senior vice president of IT at FedEx Corporate Services, says that storage and networking currently dominate the discussion because those have been the hardest parts of the data center to automate. But ultimately, all architectural layers will be software defined, from the application layer all the way down to power distribution, heating, cooling, even white space. “The way we envision the evolution of our data centers and infrastructure, software rules everything,” he says.
Das Kamhout, principal engineer at Intel, says the chip maker started with a custom-built private cloud a few years ago, when commercial software defined networking and storage products weren’t ready for prime time. Since then it has moved to a software defined data center architecture built using OpenStack-compliant components. Using low-level power and cooling system data to make decisions about moving resource pools or stepping down servers to save energy is coming. “Today it’s in the very basic stages,” Kamhout says, but it will evolve as private cloud automation tools continue to advance.