Wednesday, January 21, 2015

Don't skeuomorph your containers

Containers were initially pitched as more or less just another form of partitioning: a way to split large systems into smaller ones so that workloads too small to require a complete system of their own could coexist without interfering with one another. Server/hardware virtualization is the most familiar form of partitioning today but, in its x86 form, it was only the latest in a long series of partitioning techniques initially applied mostly to mainframes and Unix servers.

The implementation details of these various approaches differed enormously and even within a single vendor—nay, within a single system design—multiple techniques hit different points along a continuum which mostly traded off flexibility against degree of isolation between workloads. For example, the HP Superdome had a form of physical partitioning using hardware, a more software-based partitioning approach, as well as a server virtualization variant for HP-UX on the system’s Itanium processors. 

But, whatever their differences, these approaches didn’t really change much about how one used and interacted with the individual partitions. They were like the original pre-partitioned systems; there were just more of them, and they were correspondingly smaller. Indeed, that was sort of the point. Partitioning was fundamentally about efficiency and was logically just an extension of the resource management approaches that had historically allowed multiple workloads to coexist.


At a financial industry luncheon discussion I attended last December, one of the participants coined a term that I promptly told him I was going to steal. And I did. That term was “skeuomorphic virtualization,” which he used to describe hardware/server virtualization. Skeuomorphism is usually discussed in the context of industrial design. Wikipedia describes a skeuomorph as "a derivative object that retains ornamental design cues from structures that were necessary in the original.” The term has entered the popular lexicon because of the shift away from shadows and other references to the physical world, such as leather-patterned icons, in recent versions of Apple’s iOS.

However, the concept of skeuomorphism can be thought of as applying more broadly—to the idea that existing patterns and modes of interaction can be retained even though they’re not necessarily required for a new technology. In the case of “skeuomorphic virtualization,” a hypervisor abstracts the underlying hardware. While this abstraction was employed over time to enable new capabilities like live migration that were difficult and expensive to implement on bare metal, virtualized servers still largely look and feel like physical ones to their users. Large pools of virtualized servers do require new management software and techniques—think the VMware administrator role—but the fundamental units under management still have a lot in common with a physical box: independent operating system instances that are individually customizable and which are often relatively large and long-lived. Think of all the work that has gone into scaling up individual VMs in both proprietary virtualization and open source KVM/Red Hat Enterprise Virtualization. 

In fact, I’ll go so far as to argue that the hardware virtualization approach largely won out over the alternatives circa 2000 because of skeuomorphism. Hardware virtualization let companies use their servers more efficiently by placing more workloads on each one. But it also let them continue to run whatever hodgepodge of operating system versions they had and to continue to treat individual instances as unique “snowflake” servers if they so chose. The main OS virtualization (a.k.a. containers) alternative at the time—SWSoft’s Virtuozzo—wasn’t as good a match for highly heterogeneous enterprise environments because it required all the workloads on a server to run atop a single OS kernel. In other words, it imposed requirements that went beyond the typical datacenter reality of the day. (Lots more on that background.)

Today, however, as containers enjoy a resurgence of interest, it would be a mistake to continue to treat this form of virtualization as essentially a different flavor of physical server. As my Red Hat colleague Mark Lamourine noted on a recent podcast:

One of the things I've hit so far, repeatedly, and I didn't really expect it at first because I'd already gotten myself immersed in this was that everybody's first response when they say, "Oh, we're going to move our application to containers," is that they're thinking of their application as the database, the Web server, the communications pieces, the storage. They're like, "Well, we'll take that and we'll put it all in one container because we're used to putting it all on one host or all in one virtual machine. That'll be the simplest way to start leveraging containers." In every case, it takes several days to a week or two for the people looking at it to suddenly realize that it's really important to start thinking about decomposition, to start thinking about their application as a set of components rather than as a unit.

In other words, modern containers can be thought of and approached as “fat containers” that are essentially a variant of legacy virtual machines. But it’s far more fruitful and useful to approach them as something fundamentally new and enabling, part and parcel of an environment that includes containerized operating systems, container packaging systems, container orchestration like Kubernetes, DevOps practices, microservices architectures, “cattle” workloads, software-defined everything, and pervasive open source: in short, a new platform for cloud apps.
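To make the contrast concrete, here’s a minimal, hypothetical sketch in Kubernetes-style YAML; the manifests, image names, and pod layout are illustrative only, not drawn from any real deployment. The skeuomorphic approach bakes the whole stack into one image that gets managed much like a VM. The decomposed approach gives each component its own container.

    # Skeuomorphic "fat container": the entire application stack in one
    # image, treated much like a virtual machine.
    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp-fat
    spec:
      containers:
      - name: everything
        image: myapp-all-in-one    # web server + app code + database together
    ---
    # Decomposed: one container per component. In practice each tier would
    # likely get its own pod and service so it can be scaled and updated
    # independently; everything shares a pod here only for brevity.
    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp-decomposed
    spec:
      containers:
      - name: web
        image: myapp-web           # just the web tier
      - name: app
        image: myapp-backend       # just the application logic
      - name: db
        image: postgres            # stock database image

The point isn’t the particular syntax; it’s that the unit of design and management shifts from “a server” to “a set of cooperating components.”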
