While many developers are enthusiastic about the way containers can speed up deployments, administrators and operators may be a bit more wary, given the considerable amount of retooling that their internal systems may need to go through to support container-based pipelines.
That is why the emerging Containers as a Service (CaaS) approach may prove popular with both camps.
CaaS changes the dynamic for how containers are perceived by operations teams that must otherwise build out platforms to manage complex environments. At scale, containers bring with them the need for new tooling, services and platforms. And most businesses lack the expertise to manage complex, scaled-out platforms built on container-based clusters.
Colm Keegan, senior analyst for Enterprise Strategy Group, explained to The New Stack last year that developers have largely driven the use of containers and have done so somewhat independently. Operations teams haven’t had much exposure to container technology. They’re unfamiliar with how containers work and the tooling required to manage application-centric architectures. “As a consequence of that, developers, out of necessity, started managing these environments themselves,” Keegan said.
CaaS abstracts the complexities of building a scaled-out platform. Further, it means that IT can sanction it as an enabled service and put all the needed controls — such as security and governance — around it, providing the service not only in the local data center but also possibly across public clouds.
Birth of a True Product Category
The year 2015 brought widespread interest from operations organizations about how to best deploy and manage containers.
Figure 1: The New Stack’s 2016 container orchestration survey looked at end users’ and vendors’ perspectives on what functionality they would expect from Containers as a Service.
In the past two years, many cloud service providers have developed their own CaaS platforms. CaaS is becoming the new Platform as a Service (PaaS), writes Janakiram MSV, an analyst and writer for The New Stack, in an article for Forbes.
These services point to how CaaS, like other “as a service” offerings, can mean many different things. It’s important to note that Containers as a Service is still a hotly disputed area: many say that what we’ve traditionally called container services is also a type of Containers as a Service. Docker, however, suggests a different set of criteria for a CaaS solution, central to which is being cloud infrastructure agnostic. We think the larger market’s interpretation of this product category still has room to be clarified.
Janakiram MSV writes that Azure Container Service is an interface between Azure’s infrastructure assets and the orchestration layer, allowing block storage, VMs and other assets to be mapped to the orchestrator. The goal: give the developer the ability to launch a container cluster from the command line as simply as they can launch an “HDInsight Hadoop cluster on Linux.”
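As a rough illustration of that command-line simplicity, a cluster could be stood up with a couple of Azure CLI calls. This is a hypothetical sketch: the resource group and cluster names are placeholders, and the `az acs` commands shown here have since been superseded by newer Azure offerings.

```shell
# Hypothetical sketch: create a resource group, then an ACS-managed cluster.
# Names are placeholders; "az acs" reflects the Azure CLI of that era.
az group create --name containers-rg --location eastus
az acs create \
  --resource-group containers-rg \
  --name demo-cluster \
  --orchestrator-type kubernetes \
  --generate-ssh-keys
```

The point is less the exact flags than the shape of the workflow: one declarative command yields a full container cluster wired to the underlying VMs and storage.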
Google Container Engine (GKE) likewise acts as a thin layer bridging Google Compute Engine and the orchestration engine powered by Kubernetes. GKE schedules containers into the cluster and manages them automatically based on user-defined requirements, such as CPU and memory. Because it’s built on the Kubernetes system, it offers the flexibility to take advantage of on-premises, hybrid or public cloud infrastructure.
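A minimal sketch of that flow, assuming placeholder cluster and workload names: create the cluster with `gcloud`, then hand Kubernetes a container with explicit CPU and memory requests and let it pick a node with enough free capacity.

```shell
# Hypothetical sketch: provision a GKE cluster, then schedule a container
# with user-defined resource requests. All names are placeholders.
gcloud container clusters create demo-cluster --num-nodes 3
gcloud container clusters get-credentials demo-cluster

# Kubernetes places the pod on a node that can satisfy these requests.
kubectl run web --image=nginx --requests='cpu=100m,memory=128Mi'
```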
According to Janakiram MSV, clusters on GKE are composed of virtual machines. As he writes, this is pretty conventional, as most infrastructure is a pool of virtual machines.
Amazon EC2 Container Service (ECS) runs on the EC2 Infrastructure as a Service (IaaS) layer to deliver hosted container services. Amazon’s service is entirely proprietary. It supports Docker containers and allows the user to run applications on a managed cluster of Amazon Elastic Compute Cloud (EC2) instances.
With ECS, you do not need to install, operate or scale your own cluster management infrastructure. With simple API calls, you can launch and stop container-enabled applications and discover the state of your cluster. Amazon ECS also works with other AWS services, letting you take advantage of familiar features such as Elastic Load Balancing, EBS volumes, EC2 security groups and IAM roles.
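Those API calls map directly onto the AWS CLI. The sketch below is hypothetical — the cluster name, task definition file and task family are placeholders — but it shows the register-run-inspect loop the service exposes:

```shell
# Hypothetical sketch of the ECS workflow via the AWS CLI.
aws ecs create-cluster --cluster-name demo-cluster

# Register a task definition describing the container(s) to run
# (webapp-task.json is a placeholder file).
aws ecs register-task-definition --cli-input-json file://webapp-task.json

# Launch the task on the managed cluster of EC2 instances.
aws ecs run-task --cluster demo-cluster --task-definition webapp

# Discover the state of the cluster.
aws ecs describe-clusters --clusters demo-cluster
```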
IBM launched IBM Containers on Bluemix in June 2015, providing one of the first hosted Docker container services. Bluemix is IBM’s cloud platform, providing access to PaaS, IaaS and now CaaS to meet the user’s requirements. With IBM Containers, the user’s Docker experience begins with deploying containers, as opposed to deploying a cluster of virtual machines as Docker hosts. IBM Containers allows administrators to specify user quotas and determine which images from the private, hosted Docker registry can be deployed. The offering also provides integrated monitoring and logging at the container level, overlay networking, scalable groups with an integrated load balancer and auto-recovery, and storage volumes for persistent data needs. It adds content and policy vulnerability insight for every Docker image in the registry, along with the ability to bind to any of the 150+ services in the Bluemix catalog.
Joyent has Triton, a service that treats containers natively and allows containers to run on bare metal while eliminating the VM abstraction layer, which Bryan Cantrill, chief technical officer at Joyent, calls “an unnecessary layer of fat.” The challenge, he said in a story on The New Stack, has been that the Linux substrate used for containers — namely namespaces and cgroups — was not designed for multi-tenancy, creating security issues that make it either difficult or unwise to deploy Linux containers on bare metal. The Triton service instead uses Joyent’s SmartOS substrate, which builds on security work done by Sun Microsystems in the open source variant of Solaris.
Rackspace launched Carina, a CaaS offering that Rackspace hosts and manages for the user. Carina’s web-based console is designed to instantiate and deploy scalable container images. From there, Rackspace’s administrators take over the roles of scheduling and orchestration, scaling up workloads as necessary and assuming responsibility for all the maintenance.
Carina uses the Open Container Initiative format, allowing it to absorb containers and follow a customer’s instructions about how they are to be deployed, including their choice of bare metal or VM-based infrastructure.
OpenStack’s Nova scheduler recognizes host aggregates or clusters of physical servers designed to be separated from other availability zones. Servers within these host aggregates may be a particular “flavor,” which (theoretically) enables Windows hosting on OpenStack. It also allows for container hosting.
Within these host aggregate clusters, Magnum pools Nova instances into what it calls bays. Individual bays can be orchestrated using Docker Swarm, Kubernetes or Mesos. As Carina is built out, users will be able to choose their preferred orchestration engine.
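In the Magnum CLI of that era, the split looked roughly like this: a baymodel selects the container orchestration engine (COE), and a bay pools Nova instances under it. The sketch is hypothetical — image, keypair, network and flavor names are placeholders:

```shell
# Hypothetical sketch: a baymodel picks the orchestration engine (COE),
# then a bay pools Nova instances under that model. Names are placeholders.
magnum baymodel-create --name swarm-model --coe swarm \
  --image-id fedora-atomic --keypair-id mykey \
  --external-network-id public --flavor-id m1.small

magnum bay-create --name demo-bay --baymodel swarm-model --node-count 2
```

Swapping `--coe swarm` for `kubernetes` or `mesos` is how a user would express a preference of orchestration engine.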
In early 2016, Docker released software for running CaaS operations in-house with the Docker Datacenter (DDC). It provides a way to deploy applications without worrying about issues that might arise from moving a codebase from development into production. The integrated package is a collection of open source technologies for deploying and managing containers.
“It’s not just the recognition that we need to manage these containers, but offering a suite of products and having a very opinionated point of view about what’s required to manage those successfully,” Scott Johnston, Docker chief operating officer, said.
Applications are driving this next wave, and operations wants to support it, not get in the way of it. Users need products that can provide governance over those compute, network and storage resources, and at the same time provide the flexibility, agility and portability of applications that the development team is producing.
Ops is writing the checks, but they’re trying to be a partner in this workflow that is largely driven by the application development teams.
Johnston used security as an example, saying 2016 will bring more requirements for end-to-end security solutions. Teams need security to be a consideration from the beginning, not something that’s only a consideration for production environments. If you’re waiting that long to consider the security of your containers, it can never be more than a patch or a quick fix.
Image signing, a security-related technology, will be an early draw. As a containerized application moves through development, different departments, such as QA, can sign it, providing a trail showing where the software has traveled.
“That allows the Ops team to make real-time decisions — and ultimately automated decisions — on how to deploy that application,” Johnston said. One container may have a “super-secret IP,” so it must stay in-house, whereas a simple test application can be deployed to a public cloud. This early consideration for security is a critical investment in how your stack performs in the future. “Security is a great example of how doing things at the beginning of the pipeline gives you huge benefits downstream.”
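In Docker’s tooling, this kind of signing is exposed through Docker Content Trust: with it enabled, pushes are signed and pulls verify signatures. A minimal sketch, with placeholder registry and image names:

```shell
# Hypothetical sketch: enable Docker Content Trust so pushes are signed
# and pulls verify signatures. Registry/image names are placeholders.
export DOCKER_CONTENT_TRUST=1

docker push registry.example.com/team/app:1.0   # signs the image on push
docker pull registry.example.com/team/app:1.0   # fails if unsigned or tampered
```

Because verification happens at pull time, an Ops team can enforce policy automatically: only images carrying the required signatures ever reach production hosts.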
Image signing is just one example of how Containers as a Service provides the agility that developers are looking for and the control that operations is looking for. Johnston sees similar developments with networking and storage.
“Enterprises are saying, ‘Don’t cause me to break my app as it moves through the application stages from dev to QA to prod. Let my app maintain its logical relationships and swap out the actual implementation as it goes through those stages,’” Johnston said. “This is putting pressure on storage and networking vendors to produce implementation plugins, and has given us great feedback in terms of what those APIs for networking and storage, from an application viewpoint, need to look like.”
Interoperability for the Future
Cantrill sees the real challenge as clearing up the confusion surrounding containers. The impediments to containers are the seemingly rival solutions in all the different parts of the stack; many don’t understand how the pieces compete with, or fit in with, one another, Cantrill said. Interoperability has been a tertiary concern.
As a result, interoperability will be a much larger concern in 2016, he predicted. There is a critical need to define how these solutions interface or developers won’t be able to innovate beneath that level of integration.
Cantrill says this concern for interoperability is not what some vendors want to hear because they’d like to think they can provide that next “magic stack” as a singular solution, without the help of others. Cantrill and many others don’t see this one-solution approach being feasible — orchestration and management will need the collaboration of many tools.
Summary
The introduction of Containers as a Service and container services to the marketplace, and their growing user base, represents a maturity in container management that appeals to both developer and operations teams. It raises the possibility of a radically shorter lifecycle for adopting containers across the entire business, instead of a slow crawl between development, operations and true usage in production. Offering container services allows for pre-packaged security and governance, providing the service in the local data center and across public clouds.
As this new type of product and service evolves, it’s important that users and vendors look to define the functionality they expect and know their teams need. Meeting functional needs in an interoperable package will be critical to the future of the entire orchestration market.
Docker, IBM and Joyent are sponsors of The New Stack.
Feature Image via Unsplash.