Nebula CEO Chris Kemp says IT must develop private clouds for the next generation of applications. Otherwise, it risks irrelevancy.
I write this post after spending a week at the OpenStack Summit in Hong Kong. My company, Nebula, has not yet launched in Asia, so I took the opportunity to participate in the sessions and talk with the developers, leaders, and users in the OpenStack community. It was the perfect opportunity to reflect on where we are today and on the future of the project that I helped start just over three years ago.
My conclusion? OpenStack has captured the hearts of developers, but not the minds of enterprise IT.
As the CTO of NASA and CIO at Ames Research Center, I had the opportunity to deeply immerse myself in an organization where thousands of old applications ran on tens of thousands of servers across thousands of networks in hundreds of datacenters.
While NASA may have a larger and more complex IT footprint than many organizations, all large enterprises seek to run their old applications in an environment that looks and acts like the original computers and networks they were designed to run on.
[Want to learn more about OpenStack? See Google App Engine Swings But OpenStack Is King.]
As servers continue to get bigger and faster while software stays much the same, we've seen servers get virtualized, then storage. Once we virtualize the network, we will finally be able to faithfully simulate the tangled mess of physical infrastructure that is today's enterprise datacenter. At that point, most software will be able to run on a single, homogeneous system. As processors, storage, and networks continue to get exponentially faster and denser, it's conceivable that the contents of an entire datacenter could be virtualized and run on a single computer.
In short, virtualization maximizes the efficiency of running yesterday’s PC-era-inspired software on today’s PC-era-derived hardware.
This is a great thing, and it should keep much of the world's software running without intervention or modification for many decades to come. But this model has very little to do with OpenStack, Nebula, Amazon Web Services, or cloud computing in general.
OpenStack is an open-source reference implementation for infrastructure-as-a-service. OpenStack's community of developers is defining how physical computing, networking, and storage infrastructure is mapped to a set of logical services, in a way that will form a new open foundation for a new generation of software that runs on service-driven, scale-out infrastructure.
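To make "service-driven" concrete, here is a minimal sketch (my illustration; the endpoint URL, credentials, and image and flavor IDs are placeholders) of how an application or operator requests a server from an OpenStack cloud: authenticate with the Identity service, discover the Compute endpoint from the service catalog, and ask for a logical server without ever naming a physical machine.

```python
# Minimal sketch: booting a server through OpenStack's REST APIs, to illustrate
# infrastructure exposed as logical, API-driven services. The endpoint URL,
# credentials, and image/flavor IDs below are placeholders, not real values.
import requests

KEYSTONE_URL = "https://cloud.example.com:5000/v2.0"   # hypothetical identity endpoint
CREDENTIALS = {
    "auth": {
        "tenantName": "demo",
        "passwordCredentials": {"username": "demo", "password": "secret"},
    }
}

# 1. Authenticate against the Identity service (Keystone) to get a token and the
#    service catalog, which maps service types to API endpoints.
resp = requests.post(f"{KEYSTONE_URL}/tokens", json=CREDENTIALS)
resp.raise_for_status()
access = resp.json()["access"]
token = access["token"]["id"]
compute_url = next(
    svc["endpoints"][0]["publicURL"]
    for svc in access["serviceCatalog"]
    if svc["type"] == "compute"
)

# 2. Ask the Compute service (Nova) for a new server -- a purely logical request,
#    with no knowledge of which physical host will actually run it.
server_request = {
    "server": {
        "name": "web-01",
        "imageRef": "IMAGE_UUID",    # placeholder image ID
        "flavorRef": "FLAVOR_ID",    # placeholder flavor ID
    }
}
resp = requests.post(
    f"{compute_url}/servers",
    headers={"X-Auth-Token": token},
    json=server_request,
)
resp.raise_for_status()
print("Requested server:", resp.json()["server"]["id"])
```

The same pattern applies to networks, volumes, and object storage: infrastructure is consumed entirely through versioned APIs rather than through tickets and hand-provisioned hardware.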
The first enterprises to adopt OpenStack — Internet companies like Yahoo and eBay; research institutions like Xerox PARC and CERN; service providers like AT&T and Comcast; government agencies like NASA and NSA — retain some of the most talented computer scientists and engineers in the world. Increasingly, these organizations are using OpenStack to power new, highly strategic, and often very large applications.
Efficiently building large-scale systems is becoming increasingly critical for almost every business (or government) that extracts value from better understanding all of our web logs, GPS location data, social media graphs, financial transactions, retail transactions, stock market transactions, electronic health records, genomic data, photographs, videos, satellite imagery, and of course the data from all of the sensors in our cell phones, wristbands, watches, televisions, cars, and so on.
At Nebula, I have the opportunity to talk to thousands of organizations about our product, and it's clear that the chasm between "enterprise IT" and "mission" organizations at most enterprises is growing larger and larger. Business units that operate computing infrastructure outside of corporate IT are often known as "shadow IT" in older enterprises. At tech companies here in Silicon Valley, this kind of "shadow IT" is referred to as "technical operations," or TechOps.
At top Internet companies, TechOps is home to some of the most talented (and well compensated) engineers in the world. These teams operate very differently from corporate IT. They do not manage servers, VMs, or software — at least, not the way most CIOs think of it. They often do not (and never intend to) virtualize anything. In TechOps, very small teams deploy new software on fleets of hundreds or thousands of servers, often several times a day. Working closely with software engineers, these teams strive to increase the velocity at which new features can be deployed, and often ensure all features are tested at scale.
A new generation of infrastructure that powers new mobile and web applications and puts large amounts of data to good use (in most cases, at least) is being developed. Today, most of that development takes place on public clouds like Amazon Web Services, Google Cloud, and Microsoft Azure.
This new generation of cloud applications, and the public clouds they are being built on, has captured the hearts of developers, but the mind of enterprise IT is still focused on providing "reasonable accommodation" for old applications.
Enterprise IT must either watch as its most strategic and critical applications are built on public clouds, or immediately invest in real, standards-based, API-driven private clouds.
The longer enterprise IT waits before providing a true private cloud, the larger the chasm grows between where the business has been and where it’s going, and the greater the risk that IT will lose the hearts and minds of the innovators that are essential to the cycle of reinvention and crucial to the success of every enterprise.