Many virtual machines in the cloud are spun up to run a single application. Often, the resources consumed by that application are dwarfed by the footprint of the operating system in terms of memory, disk space and CPU utilization. So why run a whole OS just to run one application? That is the problem containers were created to solve.
Containers-as-a-service (CaaS) is a kind of infrastructure-as-a-service specifically aimed at efficiently running a single application. A container is a form of operating system virtualization that is more efficient than typical hardware virtualization. It provides the computing resources needed to run an application as though it were the only application running in the operating system, with a guarantee of no conflicts with other application containers running on the same machine. For agencies and enterprises moving applications to the cloud, containers represent a smarter and less expensive way to get there.
In traditional hardware virtualization, a hypervisor (either hosted or bare metal) can run a number of guest operating systems. Each operating system acts as though it is in charge of the entire machine. With containers (currently implemented on Linux, BSD and Solaris), applications can be virtualized more efficiently and run as though they control the full OS user space. For instance, a container can be rebooted and can have its own root access, IP addresses, memory, processes, files, applications, system libraries and configuration files.
An important distinction in this operating-system-level virtualization is that OS virtualization does not mean virtualizing the kernel; only the system libraries and binaries are separated to permit isolation between containers. This sharing of the kernel across containers is analogous to a hypervisor but is far more efficient, and it does not allow different guest operating systems. A container is an isolation unit within one OS (typically Linux).
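The structural difference can be sketched as a toy model (all class names here are illustrative, not any real virtualization API): every VM bundles its own private guest OS, while every container holds a reference to the one shared host kernel.

```python
# Toy model contrasting hardware virtualization with OS-level
# virtualization. All names are illustrative, not a real API.

class Kernel:
    """A kernel instance; the host kernel is shared by all containers."""
    def __init__(self, name):
        self.name = name

class VirtualMachine:
    """Each VM carries a full private guest OS, duplicated per VM."""
    def __init__(self, app):
        self.app = app
        self.guest_kernel = Kernel("guest-linux")  # a copy per VM

class Container:
    """A container holds only user-space pieces; the kernel is shared."""
    def __init__(self, app, host_kernel):
        self.app = app
        self.kernel = host_kernel  # a reference, not a copy

host_kernel = Kernel("linux-host")
vms = [VirtualMachine(f"app{i}") for i in range(3)]
containers = [Container(f"app{i}", host_kernel) for i in range(3)]

# Every VM has a distinct kernel object; every container shares one.
assert len({id(vm.guest_kernel) for vm in vms}) == 3
assert len({id(c.kernel) for c in containers}) == 1
```

The single shared kernel is exactly why containers cannot mix guest operating systems, and also why they carry so little overhead.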
The obvious advantage of Linux containers is that they are far more efficient in terms of memory, disk space and CPU utilization than hardware virtualization, because they avoid the OS overhead of each virtual machine. You can run many more containers on the same hardware than you can run virtual machines. Additionally, there is no boot time with Linux containers, so spinning up a new container is an order of magnitude faster than booting a complete operating system.
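A back-of-the-envelope calculation makes the density advantage concrete. The overhead figures below are assumptions chosen for illustration, not benchmarks of any particular hypervisor or container runtime:

```python
# Illustrative capacity estimate; the overhead numbers are assumed
# values, not measurements of any real hypervisor or runtime.

HOST_RAM_MB = 64 * 1024      # a 64 GB host
APP_RAM_MB = 512             # memory the application itself needs
VM_OS_OVERHEAD_MB = 1024     # full guest OS per VM (assumed)
CONTAINER_OVERHEAD_MB = 32   # shared-kernel container overhead (assumed)

vms_per_host = HOST_RAM_MB // (APP_RAM_MB + VM_OS_OVERHEAD_MB)
containers_per_host = HOST_RAM_MB // (APP_RAM_MB + CONTAINER_OVERHEAD_MB)

print(vms_per_host)         # 42
print(containers_per_host)  # 120
```

Under these assumptions the same host runs nearly three times as many containers as VMs, and the gap widens further as the per-application memory need shrinks.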
What does this mean for the future of infrastructure-as-a-service? Containers are a more efficient competitor to hardware virtualization, and many platform-as-a-service implementations, including Heroku, OpenShift, dotCloud and CloudFoundry, use containers. Additionally, some private cloud IaaS implementations, such as OpenStack and CloudStack, offer support for containers. So containers are a viable new form of virtualization that will continue to grow and influence the direction of cloud computing.
Additionally, as cost competition in the IaaS space heats up, CaaS could become a factor in competitiveness simply through its greater efficiency and performance compared with hardware virtualization technologies.
There are some ramifications of CaaS to keep in mind:
— Since most CaaS activity is on the Linux operating system, CaaS will strengthen, if not cement, Linux's leadership position in the cloud. With most cloud providers, Linux operating systems are a less expensive alternative to Windows and run on smaller configurations that require less memory and disk space. Additionally, Web applications are usually platform-neutral and therefore run equally well on Linux or Windows operating systems. Obviously, if the application is restricted to Windows technologies (like ASP.NET), it must run on a Windows operating system. However, Windows instances take up to nine times longer than Linux instances to start up, according to one performance study.
— CaaS enables real-time cloud-native applications. Demonstrating cloud-based applications can be tricky with traditional virtual machines because each can take up to five minutes to start up. That startup time is mostly the boot time of the operating system. Containers eliminate that boot time and start up in seconds. That improvement in start time positions containers to become a new base unit for distributed applications, as opposed to using threads. Why? Containers offer a higher degree of isolation and looser coupling than threads. The isolation provides a higher degree of reliability, in the same way that Google Chrome chose process isolation over threads to improve reliability. In distributed cloud applications, reliability and loose coupling are the centerpieces of a strong application.
— CaaS will spread to all major operating systems. Obviously, this prediction relies on another prediction: the cloud is inevitable. A testament to this growth in CaaS interest is the set of recent CaaS implementations springing up, including Google's lmctfy (let me container that for you), Heroku's Dyno and CloudFoundry's Warden. These come in addition to other containers such as Docker, lxc, OpenVZ, BSD Jails and Solaris Zones. MacOS also has something called an App Sandbox (a clone of the Java sandbox concept), and Windows has an application sandbox concept as well. However, it is important to note that while there are some similarities between a sandbox and a container, the two are different. A sandbox usually revolves around security protections for an application rather than the broader requirements of application isolation.
— The CaaS concept will continue to adapt, akin to the way Java Virtual Machines evolved into a type of application sandbox for Java bytecode-based applications, or the way J2EE Web containers and EJB (Enterprise JavaBeans) containers evolved into higher-level types of containers. All of these isolation concepts are important; they support different but intersecting audiences and may help forge the next understanding of what is needed to efficiently and safely run applications in the cloud. The whole container/app sandbox/app engine concept will continue to improve and evolve.
Finally, CaaS is a crucial element of the evolution of cloud computing. In my new book, The Great Cloud Migration, I discuss the role and manifestations of this cloud evolution and its impact on migrating applications to the cloud. CaaS isn't the only way in which clouds are evolving. Other areas of evolution include the blurring lines between PaaS and IaaS, how the "Internet of Things" is influencing the cloud, and cloud interoperability. Of these, CaaS is the most significant change because it affects the foundational components of cloud computing. Such disruption is a good thing because it pushes the cloud to greater levels of efficiency and bigger avenues for disruption of traditional IT.