Getting IT pros to give up old habits is one of the hardest things about building a new private cloud architecture.
Why do we need different boxes for servers, storage, and network switches in the datacenter? They're all just computers, says David Reilly, the global technology infrastructure executive for Bank of America. Why can't companies fill their datacenters with white-box machines stuffed with x86 chips and a ton of memory, controlled by software that can make a given box an in-memory storage device today, a software-defined switch tomorrow, and a server next week?
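To make that idea concrete, here is a minimal, purely illustrative Python sketch of a controller treating one commodity box as a role that software can reassign. Nothing here reflects Bank of America's actual system; the class and role names are hypothetical.

```python
from enum import Enum
from typing import Optional

class NodeRole(Enum):
    """Roles a generic white-box x86 node could take on under software control."""
    IN_MEMORY_STORAGE = "in-memory storage device"
    SOFTWARE_SWITCH = "software-defined switch"
    APP_SERVER = "application server"

class WhiteBoxNode:
    """One commodity x86 box; its role is whatever software says it is."""

    def __init__(self, hostname: str) -> None:
        self.hostname = hostname
        self.role: Optional[NodeRole] = None

    def assign_role(self, role: NodeRole) -> None:
        # A real controller would push a new software image and config here;
        # this sketch just records the decision.
        print(f"{self.hostname}: reprovisioning as {role.value}")
        self.role = role

node = WhiteBoxNode("dc1-rack4-node17")
node.assign_role(NodeRole.IN_MEMORY_STORAGE)  # today
node.assign_role(NodeRole.SOFTWARE_SWITCH)    # tomorrow
node.assign_role(NodeRole.APP_SERVER)         # next week
```

The point of the design is that the hardware never changes; only the software assigned to it does.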
This radical departure from today's datacenter approach isn't just idle salon chatter. Bank of America, the country's second-largest bank with about $2.1 trillion in assets, has a team of people right now exploring how to reinvent the bank's datacenters using a private cloud architecture.
The hardest part of getting to this kind of total reset of the datacenter, Reilly says, is persuading technologists to throw out their old ways of doing things and think more ambitiously. That's why Bank of America has created a separate team to develop the company's next-generation architecture, so team members can focus on big ideas such as having only one kind of hardware. "It's not the technical piece. It's: Why stop there, why not go further, why not do more?" Reilly says.
[Read how more clouds are moving from the idea phase to working: Private Cloud Adoptions On A Roll.]
The bank wants that sort of blank-sheet thinking from its tech vendors, too. Reilly won't name the vendors it's working with, but he says the team stood up two platforms for its private cloud environment, one proprietary and one based on OpenStack. The vendors it's working with are those embracing software-driven architecture and nonproprietary hardware.
"The hardware side of what they do is something they need to start to let go of," Reilly says. The bank is running its pilot on two platforms to keep its vendor options open, while "encouraging our large partners to feel like this is something of a burning platform that we want everyone to respond to."
Bank of America has about 200 workloads running on pilot versions of the new architecture, and it plans to put about 7,000 workloads into production this year. That volume still represents a small slice of the bank's computing, but if it delivers strong results, it sets the stage for major adoption in 2015.
The business goal is to dramatically cut costs — up to 50% from today's best-case datacenter costs, Reilly says — and let BofA respond more quickly to changing business needs, such as a spike in demand for network capacity or computing power (or, just as important, a drop in demand when the bank wants less capacity).
Different technology, different skills
Bank of America's vision for a more flexible, responsive private cloud architecture is similar in concept to what other companies, from FedEx to Fidelity, are putting in place. The application tells the infrastructure what it needs — the computing power, the resiliency and recoverability, the geographic dispersion and restrictions, the security and regulatory requirements. The cost of all those elements would also be clear, so as business leaders work with developers to create apps, they understand the infrastructure involved and can weigh the related benefits and costs.
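A rough sketch of what such an application-declared manifest might look like follows, with transparent cost attached. The field names and unit prices are invented for illustration; the article describes the concept, not a concrete schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class WorkloadSpec:
    """Hypothetical manifest an application hands to the infrastructure."""
    vcpus: int
    memory_gb: int
    recoverable: bool           # resiliency / recoverability
    allowed_regions: List[str]  # geographic dispersion and restrictions
    regulated: bool             # security / regulatory requirements

    def monthly_cost_estimate(self) -> float:
        # Made-up unit prices, purely to show how cost could be surfaced
        # to business leaders alongside the requirements themselves.
        cost = self.vcpus * 25.0 + self.memory_gb * 4.0
        if self.recoverable:
            cost *= 1.5    # e.g., a second copy in another datacenter
        if self.regulated:
            cost += 200.0  # e.g., an audited, segregated environment
        return cost

spec = WorkloadSpec(vcpus=8, memory_gb=64, recoverable=True,
                    allowed_regions=["us-east"], regulated=True)
print(f"Estimated monthly cost: ${spec.monthly_cost_estimate():,.2f}")
```

Making the price visible at design time is what lets business leaders trade requirements against costs before the app is built.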
BofA expects to provision and de-provision that capacity more quickly and in much smaller increments. Its private cloud is also meant to let some workloads eventually move to public cloud environments, whether Amazon Web Services, CenturyLink, Verizon, AT&T, IBM, or other third parties. Certain sensitive data and workloads will always stay on premises.
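Since one of the bank's two pilot platforms is based on OpenStack, here is a minimal sketch of that provision/de-provision cycle using the openstacksdk library. The cloud profile name and the image, flavor, and network IDs are placeholders, not details of the bank's environment.

```python
import openstack

# Connect using a named profile from clouds.yaml (hypothetical name).
conn = openstack.connect(cloud="private-pilot")

# Provision one small increment of capacity.
server = conn.compute.create_server(
    name="workload-pilot-001",
    image_id="IMAGE_UUID",
    flavor_id="FLAVOR_UUID",
    networks=[{"uuid": "NETWORK_UUID"}],
)
server = conn.compute.wait_for_server(server)
print(f"{server.name} is {server.status}")

# De-provision it just as quickly when demand drops.
conn.compute.delete_server(server)
```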
Changing to a private cloud architecture and a software-centric datacenter will require different skills. Infrastructure pros today define themselves by the gear they run: "I run my company's million-port Cisco network," or "I manage our 50,000 servers." In a cloud model, as those technical silos get blown away, infrastructure pros will need more software skills.
"The infrastructure professional will look much more like the software development professional," Reilly says. The bank will need to hire and retrain people. "You're trying to bring everyone with you. Some people make that journey, and some people don't."
Reilly shares these ideas in his calm British accent, but when he talks of the current state as a "burning platform" and calls the private cloud shift "very much a when and not an if," his sense of urgency is unmistakable. "We think it's as big a move as the mainframe to distributed computing was," he says. "We think it's that large a shift in the industry. The opportunity for us is to get there first."
The race is on. In our InformationWeek Private Cloud report published in November, 17% of the companies we surveyed said they use private cloud for all apps, 30% for some apps, and 30% are testing or developing a private cloud. Only 23% said their companies had no interest.
What companies mean by "private cloud," of course, varies. Is it merely a highly virtualized datacenter that permits workload shifting? Or is it the kind of rethink Reilly lays out? "Everybody's looking at this," he says. "I just think we have a higher level of ambition."
Private clouds are moving rapidly from concept to production. But some fears about expertise and integration still linger. Also in the Private Clouds Step Up issue of InformationWeek: The public cloud and the steam engine have more in common than you might think. (Free registration required.)
Chris Murphy is editor of InformationWeek magazine and Global CIO columnist on IT strategy issues. He has been covering technology leadership and strategy issues for InformationWeek since 1999. Before that, he was editor of the Budapest Business Journal, a business newspaper … View Full Bio