Virtual Containerization

AlexGr alerts us to a piece by Jeff Gould up on Interop News. Quoting: "It's becoming increasingly clear that the most important use of virtualization is not to consolidate hardware boxes but to protect applications from the vagaries of the operating environments they run on. It's all about 'containerization,' to employ a really ugly but useful word. Until fairly recently this was anything but the consensus view. On the contrary, the idea that virtualization is mostly about consolidation has been conventional wisdom ever since IDC started touting VMware's roaring success as one of the reasons behind last year's slowdown in server hardware sales."
  • Contain (Score:3, Informative)

    by Anonymous Coward on Tuesday July 24, 2007 @08:15AM (#19967869)
    The word is contain, people, not containerization.
  • I'd say it's both (Score:5, Informative)

    by Toreo asesino ( 951231 ) on Tuesday July 24, 2007 @08:23AM (#19967945) Journal
    I've used virtualisation both for containerisation and to consolidate boxes...

    At my previous company, we invested in two almighty servers with absolute stacks of RAM in a failover cluster. They ran 4-5 virtual servers for critical tasks... each virtual machine was stored on a shared RAID5 array. If anything critical happened to the real server, the virtual servers would be switched over to the next real server and everything was back up again in seconds. The system was fully automated too, and frankly, it saved having to buy several not-so-meaty boxes while not losing much redundancy and giving very quick scalability (want one more virtual server? 5 minute job. Want more performance? Upgrade the redundant box and switch over the virtual machines).

    The system worked a treat, and frankly, the size & power of the fewer, bigger, more important servers gave me a constant hard-on.
  • Really about rPath (Score:3, Informative)

    by rowama ( 907743 ) on Tuesday July 24, 2007 @08:23AM (#19967963)
    In case you're interested, the article is really a review of rPath, a virtual appliance builder based on a custom-tailored GNU/Linux...
  • PHP 6 (Score:4, Informative)

    by gbjbaanb ( 229885 ) on Tuesday July 24, 2007 @08:26AM (#19967989)
    I read somewhere (possibly on the PHP bug system) that they were considering scrapping most of the security features we've all grown to... well, hate, really, and replacing them all with a virtualisation system. I did think at the time that the virtualisation system they'd implement to keep PHP-based vhosts separate and secure would be to run Apache in many virtual OSes.

    I suppose jailing applications is a well-known way of securing them; this really just improves on that, but with much more overhead. I wonder if anyone is thinking about providing "lightweight" virtualisation for applications instead of the whole OS?
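    One crude way to picture "lightweight" containment of a single application, rather than a whole guest OS, is plain Unix resource limits around an untrusted task. This is a minimal sketch, assuming Linux and CPython; the helper names (`worker`, `run_jailed`) and the limit values are illustrative, and this is nothing like a full jail or namespace setup:

```python
import multiprocessing as mp
import resource

def worker(q, limit_bytes, alloc_bytes):
    # Cap the child's total address space -- a crude containment wall.
    resource.setrlimit(resource.RLIMIT_AS, (limit_bytes, limit_bytes))
    try:
        _ = bytearray(alloc_bytes)
        q.put("escaped")
    except MemoryError:
        q.put("contained")

def run_jailed(limit_bytes, alloc_bytes):
    # The "container": an ordinary child process with its own rlimits,
    # so a runaway allocation kills the task, not the host.
    q = mp.Queue()
    p = mp.Process(target=worker, args=(q, limit_bytes, alloc_bytes))
    p.start()
    result = q.get()
    p.join()
    return result

if __name__ == "__main__":
    # A 1 GiB jail vs. a 4 GiB allocation: the app is stopped, the host is fine.
    print(run_jailed(1 << 30, 1 << 32))
```

    A real lightweight approach would layer namespaces, chroot/jails, or seccomp on top; the point is only that containing one application need not cost a whole virtualised OS.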
  • by assantisz ( 881107 ) on Tuesday July 24, 2007 @08:33AM (#19968051)
    Solaris has Zones [sun.com] for that exact purpose. Lguest [ozlabs.org], I believe, offers something similar for Linux.
  • by ls671 ( 1122017 ) on Tuesday July 24, 2007 @08:37AM (#19968099) Homepage

    This is kind of obvious. I used to use more machines for security reasons; now I use fewer machines, but they are more powerful. Server consolidation implies that applications which used to run on different hardware for security and stability reasons will now run on the same hardware within different VMs. So how can they say that "protecting applications from the vagaries of the operating environments" is opposed to "consolidating hardware boxes"?

    "Consolidating hardware boxes" implies "protecting applications from the vagaries of the operating environments"; you just do it with fewer machines.

    I use virtualization because it leaves me with fewer physical servers to manage; "protecting applications from the vagaries of the operating environments" was already done before virtualization. So virtualization doesn't help me "protect applications from the vagaries of the operating environments", it helps me because I have fewer servers to manage.

  • Makes sense to me (Score:4, Informative)

    by jimicus ( 737525 ) on Tuesday July 24, 2007 @08:49AM (#19968227)
    I run a whole bunch of virtual servers and that's exactly what I'm doing.

    It's fantastically handy to be able to install and configure a service in the knowledge that no matter how screwed up the application (or, for that matter, how badly I screw it up), it's much harder for that application to mess up other services on the same host - or, for that matter, for existing services to mess up the application I've just set up.

    Add to that - anyone who says "Unix never needs to be rebooted" has never dealt with the "quality" of code you often see today. The OS is fine, it's just that the application is quite capable of rendering the host so thoroughly wedged that it's not possible to get any app to respond, it's not possible to SSH in, it's not even possible to get a terminal on the console. But yeah, the OS itself is still running fine apparently, so there's no need to reboot it.

    This way I can reboot virtual servers which run one or two services rather than physical servers which run a dozen or more services.

    Granted, I could always run Solaris or AIX rather than Linux, but then I'll be replacing a set of known irritations with a new set of mostly unknown irritations, all with the added benefit that so much Unix software never actually gets tested on anything other than Linux these days that I could well find myself with just as many issues.
  • by GiMP ( 10923 ) on Tuesday July 24, 2007 @08:52AM (#19968243)
    > What do you run the virtual machine on - an OS!!

    Unless you're running Xen... that is, unless you consider Xen an OS. But this brings us back to the question, "what is an OS?"

    Xen is a kernel for managing virtualized guests; it sits at ring 0, where a traditional OS normally resides. Xen requires that a single guest machine be set up to boot by default, and that guest receives special privileges for the purpose of managing Xen. This special guest is called the "dom0", but it is, for all other intents and purposes, just another virtual machine.
  • by cwills ( 200262 ) on Tuesday July 24, 2007 @09:57AM (#19968967)
    IBM's mainframe VM operating system has been available since the late '60s. It too went through the same phases that are happening now with VMware, Xen, etc. Initially VM was used for hosting multiple guest systems (a good history: VM and the VM community, past, present, and future [princeton.edu] - pdf warning), but a small project (the Cambridge Monitor System - CMS) quickly became an integral part of VM. CP provided the virtualization and CMS provided a simple single-user operating system platform.

    Within a VM system, one will now find three types of systems running in the virtual machines.

    1. Guest systems, such as Linux, z/OS, z/VSE, or even z/VM
    2. General users running CMS in a PC-like environment (sorry, no GUIs, and yes, there are arcane references to card punches, readers, etc. -- but then, why does Linux still have TTYs?). In the heyday before PCs, CMS provided an excellent end-user environment for development, as well as a general computing platform.
    3. And finally Service Virtual Machines (SVMs).

    It is these Service Virtual Machines that correspond to the topic of the original post. An SVM usually provides one specific function, and while there may be interdependence between SVMs (for example, the TCPIP SVM that provides the TCP/IP stack and each of the individual TCP/IP services), they are pretty much isolated from each other. A failure in a single SVM, while disruptive, usually doesn't impact the whole system.

    One of the first SVMs was the Remote Spooling Communications Subsystem (RSCS). This service allowed two VM systems to be linked together via some sort of communication link -- think UUCP.

    The power of SVMs is in the synergy between the hypervisor and a lightweight platform for implementing services. The lightweight platform itself doesn't provide much in terms of services: there is no TCP/IP stack, no "log in" facility (it relies on the base virtual machine login console), and maybe not even any paging (the base VM system manages the huge address space). Instead, a lightweight platform provides a robust file system, memory management, and task/program management. In IBM's z/VM product, CMS is an example of a lightweight platform. The Group Control System (GCS) is another (GCS was initially introduced to provide a platform to support VTAM, which was ported from MVS).

    Part of the synergy between the hypervisor and the SVMs is that the hypervisor needs to provide a fast, low-overhead inter-virtual-machine communication path that is not built on the TCP/IP stack. In other words, communication between two virtual machines should not require that each virtual machine contain its own TCP/IP stack with its own IP address. Think more along the lines of the IPC or pipe model between the SVMs.

    Since the SVM itself does not carry a full suite of services, maintenance and administration are done via meta-administration; in other words, you maintain the SVM service from outside the SVM itself. There is no need to "log into" the SVM to make changes. Instead of each SVM providing its own syslog facility, a common syslog facility is shared among all the SVMs. Instead of each SVM doing its own paging, you simply define the virtual machine size to meet the storage requirements of the application and let the hypervisor manage real storage and paging.

    Maybe a good analogy would be taking a Linux kernel and implementing a service by using the init= kernel parameter to invoke a simple setup (mounting the disks) and then running just the code needed to perform the service. Communication with other services would go over hypervisor pipes between the different SVMs. So one would have a TCP/IP SVM that provides the TCP/IP network stack to the outside world, and a web server SVM that provides just the HTTP protocol and a base set of applications, using a hypervisor pipe to talk to the TCP/IP stack. The web server SVM, in turn, would use hypervisor pipes to talk to the individual application SVMs.
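    The pipe-instead-of-TCP/IP idea above can be sketched in ordinary process terms. This is a hedged analogy only: Python's `multiprocessing.Pipe` stands in for a hypervisor-provided channel, and the names (`tcpip_svm`, `serve_one`) are made up for illustration:

```python
import multiprocessing as mp

def tcpip_svm(conn):
    # Stands in for the TCP/IP SVM: the one service that owns the network
    # stack. It talks to its peer over a hypervisor-style channel, not sockets.
    while True:
        request = conn.recv()
        if request is None:
            break
        conn.send("routed:" + request)

def serve_one(request):
    # Pipe() plays the hypervisor-provided path between two "SVMs":
    # no per-machine TCP/IP stack, no IP address, just a point-to-point link.
    parent, child = mp.Pipe()
    p = mp.Process(target=tcpip_svm, args=(child,))
    p.start()
    parent.send(request)
    reply = parent.recv()
    parent.send(None)  # shut the peer "SVM" down
    p.join()
    return reply

if __name__ == "__main__":
    print(serve_one("GET /"))  # the web-server side talks only through the pipe
```

    The design point mirrors the comment above: each "service machine" stays minimal because the channel, scheduling, and memory management all come from the layer underneath.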

  • Re:Contain (Score:2, Informative)

    by Hal_Porter ( 817932 ) on Tuesday July 24, 2007 @10:02AM (#19969039)
    You'll never be able to accumulatarize consultancy dollars if you speak like some hick from the Mid West. Take your Mactop to your favourite ReCaPrO, get yourself a vegan skinny hicaf latte and start learning the lingo from the blargocube.
  • Re:Contain (Score:3, Informative)

    by Bohnanza ( 523456 ) on Tuesday July 24, 2007 @10:19AM (#19969231)
    "Containment" would even work.
  • Re:Buzzword alert! (Score:3, Informative)

    by Courageous ( 228506 ) on Tuesday July 24, 2007 @12:42PM (#19971445)
    Enterprise Management Associates conducted a survey of big users of virtualization and asked them to rank the importance of certain functions of virtualization to their organizations. The ranking came out thus:

    1. Disaster Site Operations (specifically the use case where main operations are still on metal but the disaster site is virtual; there are fewer physical boxes than operating systems, so this is a consolidation case, just not the usual one).

    2. Increased Agility (as in, clone virtual machines to deploy servers fast).

    3. Classic Consolidation.

    4. Increased Availability (virtual machines seen as more reliable due to the uniform driver model).

    5. Decreased Cost of Administration.

