
Virtual Containerization 185

AlexGr alerts us to a piece by Jeff Gould up on Interop News. Quoting: "It's becoming increasingly clear that the most important use of virtualization is not to consolidate hardware boxes but to protect applications from the vagaries of the operating environments they run on. It's all about 'containerization,' to employ a really ugly but useful word. Until fairly recently this was anything but the consensus view. On the contrary, the idea that virtualization is mostly about consolidation has been conventional wisdom ever since IDC started touting VMware's roaring success as one of the reasons behind last year's slowdown in server hardware sales."
This discussion has been archived. No new comments can be posted.

  • by saibot834 ( 1061528 ) on Tuesday July 24, 2007 @08:07AM (#19967797)
    The great thing about virtual machines is that you basically can do whatever you want with them. Things you'd normally never do to your computer.

    It's only lacking a feature of throwing the virtual computer out of the window.
    • Re: (Score:2, Insightful)

      by MalHavoc ( 590724 )

      It's only lacking a feature of throwing the virtual computer out of the window.


      You sort of get this feature with Parallels - the ability to drag a virtual server into a trash bin is almost as satisfying, and far less expensive.
    • Really. You can run applications in their own protected space, sealed off from the 'real' computer. I do this a lot -- I have QEMU-virtualized Windows XP and Linux machines that I can try all kinds of garbage in. I just back up the image file, and when/if I totally mess the thing up -- 'cp winxp-qemu.img.old winxp-qemu.img', for instance. Nice and simple.
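      A minimal shell sketch of the copy-then-restore workflow described above (the `winxp-qemu.img` names follow the comment; the qcow2 overlay at the end is an optional alternative, not something the commenter mentions):

      ```sh
      # Keep a known-good copy before experimenting.
      cp winxp-qemu.img winxp-qemu.img.old

      # ...break things inside the VM to your heart's content...

      # Roll back by restoring the clean copy.
      cp winxp-qemu.img.old winxp-qemu.img

      # Alternative: a qcow2 overlay backed by the pristine image avoids copying
      # the whole file each time; discarding the experiment is just deleting the overlay.
      qemu-img create -f qcow2 -b winxp-clean.img winxp-scratch.qcow2
      rm winxp-scratch.qcow2
      ```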

    • by niceone ( 992278 ) * on Tuesday July 24, 2007 @08:18AM (#19967901) Journal
      The great thing about virtual machines is that you basically can do whatever you want with them. Things you'd normally never do to your computer.

      Same as virtual girlfriends.
    • Re: (Score:2, Insightful)

      The summary says that the nice thing about virtualization is that it can "...protect applications from the vagaries of the operating environments they run on." I would have thought that the really great thing about virtualization is that it can protect operating environments from the vagaries of applications which are run on them. This is especially handy when you just want to try out a new bit of software etc.
  • by Anonymous Coward on Tuesday July 24, 2007 @08:07AM (#19967801)
    Sure, containerization might sound like a good idea... but if you find the word 'containerization' ugly NOW, wait until you see what furry abominations grow in the containers you forget about at the back of the work server for 2 months. >_>
    • Re: (Score:3, Insightful)

      by mdd4696 ( 1017728 )
      Wouldn't a better word for "containerization" be "encapsulation"?
      • by JonTurner ( 178845 ) on Tuesday July 24, 2007 @08:51AM (#19968233) Journal
        Isn't this de facto evidence that the sandboxing, which was supposed to be a key component of both Java's and .NET's security models, has either failed to deliver on its promises or simply isn't well enough engineered to provide protection against rogue applications?

        As has been said before, we need a way to grant applications permission to use resources. We have that, to some degree, with firewalls and apps like ZoneAlarm/LittleSnitch which ask for permission before an application is allowed to "call home", but what about other resources -- for example, being able to access only a particular directory, or to install a system-level event hook which acts as a keylogger?
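        A rough Linux-side sketch of that idea, for illustration only: run the untrusted program as a dedicated user, deny that user outbound network access with iptables' owner match, and give it write permission on just the one directory it legitimately needs. The user name, paths, and the "sandboxed-app" program are hypothetical.

        ```sh
        # Hypothetical service account for the untrusted application.
        useradd --system --shell /usr/sbin/nologin sandboxed-app

        # Deny outbound traffic originating from that user (the "call home" case).
        iptables -A OUTPUT -m owner --uid-owner sandboxed-app -j REJECT

        # Grant write access only to the one directory the app is supposed to use.
        install -d -o sandboxed-app -m 0700 /srv/sandboxed-app/data
        ```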
          • Indeed, or chroot jails? Sun's containerization solution [sun.com]
            • Sun's containerization (or OpenVZ, to a similar extent) is exactly what we want from our OSes. 90% of our problems in the server space come not from the overly broad power of our operating systems and frameworks, but from our default policy of "grant everything, and deny only the bad stuff". If we treated firewalls the way we treat our application servers, well, we're seeing exactly what the result is.

            Java and .NET sandboxing does work, to an extent, but other than the web arena, it doesn't apply to server ho
          • Re: (Score:3, Insightful)

            by Sancho ( 17056 )
            chroot jails tend to be restrictive. You can't access all your entries in /dev, or if you can, you've removed a lot of the protection afforded by the jail in the first place.

            Virtualization (or containerization... how awful!) generally allows this. Want to play with your hard drive driver? No problem.

            Of course, it fails when you actually /do/ want direct access to the hardware. Can't test that new Nvidia driver in a containerized OS.
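            For contrast with a full VM, the kind of jail being discussed looks roughly like this (a sketch with illustrative paths; only the device nodes you explicitly create are visible inside, which is exactly the restriction being complained about):

            ```sh
            # Build a minimal jail root (library names vary by distro/arch).
            mkdir -p /srv/jail/bin /srv/jail/lib /srv/jail/dev
            cp /bin/sh /srv/jail/bin/
            cp /lib/libc.so.6 /lib/ld-linux.so.2 /srv/jail/lib/

            # Expose only the devices you choose to.
            mknod /srv/jail/dev/null c 1 3

            # Enter the jail.
            chroot /srv/jail /bin/sh
            ```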
        • by TheRaven64 ( 641858 ) on Tuesday July 24, 2007 @09:00AM (#19968323) Journal
          I think it's more evidence that operating systems suck. The whole point of a modern operating system is to allow you to run multiple programs at once, without them interfering with each other. This is why we have filesystems (with permissions) rather than letting each process write to the raw device. This is why we have pre-emptive multitasking rather than letting each process use as much CPU as it wants. This is why we have protected memory, instead of letting processes trample each others' address space.

          If you can't trust your OS to enforce the separation between processes, then you need to start re-evaluating your choice of OS.

          • by afidel ( 530433 ) on Tuesday July 24, 2007 @09:27AM (#19968605)
            That's funny, because ALL OSes suck (in fact all hardware and software suck, some just suck less). Even on the S/390 (née z/OS) mainframes from IBM there is compartmentalization both in hardware and software. If an OS that's been around for over 40 years running the largest companies in the world isn't always trusted to enforce separation of processes I don't see how any other OS stands a chance.
            • by Sloppy ( 14984 )

              If an OS that's been around for over 40 years running the largest companies in the world isn't always trusted to enforce separation of processes I don't see how any other OS stands a chance.
              Good point. But the others, at least in theory, have one advantage: hindsight. I wouldn't rule out the possibility that someday, someone will get it right.
            • There are a few people who agree with you on the OS feeling. Like these folks:
              Three Dead Trolls: Every OS sucks! [youtube.com]
          • by pla ( 258480 ) on Tuesday July 24, 2007 @09:55AM (#19968943) Journal
            If you can't trust your OS to enforce the separation between processes, then you need to start re-evaluating your choice of OS.

            And for the most part, modern OSs handle that well. They do allow for a certain degree of IPC, but mostly, two processes not strongly competing for the same resources can run simultaneously just fine.

            The problem arises in knowing what programs need what access... The OS can't make that call (without resorting to running 100% signed binaries, and even then, I personally lump plenty of "legitimate" programs in the "useless or slightly malicious" category), and we obviously can't trust the applications to say what they need. Most programs, for example, will need to at least write in their own directory, many need access to your home dir, some create files in your temp directory, some need to write in random places around the machine, some need logging access, some even need to write directly to system directories. Some programs need network access, but the majority don't (even though they may want to use it - I don't care if Excel wants to phone home, I don't use any of its features that would require network access and would prefer to outright block them). How does the OS know which to consider legitimate and which to disallow?

            The concepts of chroot (and now registry) jails and outbound firewalling work well, as long as the user knows exactly what resources a given program will need access to; but even IT pros often don't know that ahead of time, and many well-behaved programs still snoop around in places you wouldn't expect.

            The problem mentioned by the GP, with the likes of Java and .NET, arises from them still running on the real machine - they may waste CPU cycles running on a virtual CPU with what amounts to chroot'ed memory, but all of their actions still occur on the real system. Deleting a file really deletes a file.

            "real" VMs basically avoid the entire issue by letting even a highly malicious program do whatever it wants to a fake machine. They can have full unlimited access, but any damage ends when you halt the VM. Repair of worst-case destruction requires nothing more than overwriting your machine image file with a clean version (you could argue the same for a real machine, but "copy clean.vm current.vm" takes a hell of a lot less time than installing Win2k3, MSSQL, IIS, Exchange, and whatever else you might have running on a random server, from scratch.



            Or, to take your argument one layer lower, I would tend to consider XP the untrusted app, and VMWare the OS.
            • It seems to me that instead of multiple OSes running under a single VM, a single OS should be running, which runs each application under a sort-of VM so that they're all isolated from each other. After all, isn't the point of an OS the ability to multitask, and run separate applications (by separate users) without them interfering with each other? On an old VMS mainframe, for instance, 100 users could be simultaneously using the machine, and one user's dumb actions wouldn't affect the others.

              It seems that
              • by dave562 ( 969951 )
                Microsoft has tried to implement that with their Volume Shadow Copy service. Unfortunately it only works on network shares in a domain.
        • I think the growing need for virtualisation as a safety/management measure reveals major flaws in the fundamental design philosophy of both operating systems and languages. Specifically, it is becoming abundantly clear now that our existing methods of breaking software into modular components simply don't work. If they worked, we wouldn't need to draw boxes around things at the physical or virtual server level in order to guarantee containment.

          I think basically the problem is that our languages still think
      • Or even sequestration?
      • It's the worldwide system of intermodal freight transport using ISO standard containers. [wikipedia.org] It revolutionized shipping starting in the mid 1950s. You wouldn't be buying cheap Chinese crap at Wal-Mart without it. Perhaps the authors are trying to play on this connotation?
      • Or "compartmentalization". Or "containment".
  • Contain (Score:3, Informative)

    by Anonymous Coward on Tuesday July 24, 2007 @08:15AM (#19967869)
    The word is contain, people, not containerization.
    • Contain contains a conceptual context that must be decontextualized and dereified. Its reality becomes process, not product, in the virtual world of containerization. In short, Contain has lost its content.
      --beatnik avatar.
    • Re: (Score:2, Informative)

      by Hal_Porter ( 817932 )
      You'll never be able to accumulatarize consultancy dollars if you speak like some hick from the Mid West. Take your Mactop to your favourite ReCaPrO, get yourself a vegan skinny hicaf latte and start learning the lingo from the blargocube.
    • Re: (Score:3, Informative)

      by Bohnanza ( 523456 )
      "Containment" would even work.
      • Comment removed based on user account deletion
        • Yes, I thought 'containment' straight away.

          Was this written by GWB, or is there a real semantic difference between 'containment' and 'containerization'?
      • by MsGeek ( 162936 )
        Yes, as in containment of hazardous or radioactive waste. Which Windows can be fruitfully compared to. Windows 2K as a guest OS on top of Mac OS X is as good as it gets. You can even prevent it from accessing the Internet.
  • by inflex ( 123318 ) on Tuesday July 24, 2007 @08:20AM (#19967917) Homepage Journal
    As a software developer, being able to take snapshots, clone, pause, rewind (via snapshots) and back up makes VM'ing worth the cost in CPU/performance.

    It's proved so useful that I'm sincerely considering doing the same for my actual WWW server, so that if at any given time things go -bad- on the device I can either roll back or transparently transfer to another machine. The latter, due to the (mostly) hardware-agnostic nature of the VM setup, makes disaster recovery just that much simpler (sure, you still have to set up the host, but at least it's a simpler process than redoing every tiny little trinket again).
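    To make the snapshot/rollback idea concrete, here is how it looks with qemu-img's internal snapshots (the poster may well be on VMware, where the GUI does the same job; the image name is a placeholder and the VM should be powered off):

    ```sh
    qemu-img snapshot -c before-upgrade www-server.qcow2   # create a named snapshot
    # ...apply the risky change, watch things go -bad-...
    qemu-img snapshot -a before-upgrade www-server.qcow2   # roll the disk back
    qemu-img snapshot -l www-server.qcow2                  # list existing snapshots
    ```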
    • by Nursie ( 632944 ) on Tuesday July 24, 2007 @08:28AM (#19968017)
      ... that develops applications, mostly in C, I also find it extremely useful, especially when installing software. Some installers change the state of the system, some problems only occur first time round. There is nothing else like the ability to take your blank windows VM, copy it, install stuff, screw around with it in every possible way and then when you're done just delete the thing. They also allow you to install stuff you just don't want on your native box, but need to develop against.

      And you still have that blank windows install to clone again when you need it.

      VMs are a fantastic dev tool.
      • by inflex ( 123318 ) on Tuesday July 24, 2007 @08:38AM (#19968121) Homepage Journal
        I was nodding my head in agreement. Writing installers for your apps often takes longer than the app itself (or they're larger!), so yes, (also a C developer myself) being able to test the install, roll-back, try again... brilliant stuff.
        • by Wiseazz ( 267052 )
          And let's not forget the uninstall - it is frustrating in the extreme to test a complex install/uninstall cycle on a real machine. I never get the uninstall right the first time through, potentially leaving scattered remains of your app around to hose your next attempt at installing/testing.

          The easier it is to test these things, the more likely you're going to end up with a quality product. If it takes me a half hour to install, test, uninstall, test, clean-up, etc., etc. then it's likely I'm not goi
        • Also for QA. (Score:4, Interesting)

          by antdude ( 79039 ) on Tuesday July 24, 2007 @10:51AM (#19969659) Homepage Journal
          Many QA people, including myself, use VMs as well. Very useful with buggy builds. The best part is sharing the image. I can send a copy of my image to a developer with the reproduced issues without having him/her come over to see it on my real machine. We still use real machines for testing, but VMs are useful.
          • ... trying to reproduce problems. Snapshots are SO convenient in VMware v4+; take a snapshot before the problem occurs to skip all the steps before (e.g., install, configure, update).
      • Indeed! I was programming an app which required me to test it on a completely clean windows box, as well as different patch levels (vanilla, SP1, SP2, current) for both Home and Pro versions, which meant that I'd have to reinstall after each test run. With being able to install each from CD, snapshot the clean machine, and then zip a copy of the folder and drop it over to my server in case I killed or corrupted the initial snapshot, I could have a clean machine after each run within a few seconds. Furthe
  • I'm sorry, that's an attempt to jump on the virtualization bandwagon. Use that word these days and people throw money at you.

    Application isolation is not virtualization; it's nothing more than shimming the application with band-aid APIs that fix deficiencies in the original APIs. Calling it virtualization is a marketing and VC-focused strategy; it has nothing to do with the technology.
    • Sure, but if you use a VM for each application, you have easy containerization.
    • No, virtualization allows application instantiation, and therefore 'containerizes' the application instance as an atomic/discrete entity for manipulation.

      It also abstracts the instance from a physical hardware location, provided the hardware resource needs are uniform. It also permits throttling application resources, or conversely, changing application resource capacities in a nearly ad hoc way.

      If you accept this premise, containers are an effect of virtualization and a mathematical relationship shows containers as
  • I'd say it's both (Score:5, Informative)

    by Toreo asesino ( 951231 ) on Tuesday July 24, 2007 @08:23AM (#19967945) Journal
    I've used virtualization both for containerisation and to consolidate boxes...

    At my previous company, we invested in two almighty servers with absolute stacks of RAM in a failover cluster. They ran 4-5 virtual servers for critical tasks... each virtual machine was stored on a shared RAID5 array. If anything critical happened to the real server, the virtual servers would be switched to the next real server and everything was back up again in seconds. The system was fully automated too, and frankly, it saved having to buy several not-so-meaty boxes while not losing much redundancy and giving very quick scalability (want one more virtual server? 5-minute job. Want more performance? Upgrade the redundant box and switch over the virtual machines).

    The system worked a treat, and frankly, the size & power of the fewer, bigger, more important servers gave me a constant hard-on.
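    For illustration, here is roughly what that failover looks like with Xen live migration of a guest whose disk lives on shared storage (the poster doesn't say which hypervisor was actually used, and the host/guest names are made up):

    ```sh
    xm list                               # guests running on this box
    xm migrate --live critical-vm host2   # move a running guest to the standby box

    # If the original host has died outright, just start the guest from the
    # shared array on the surviving box instead:
    xm create /etc/xen/critical-vm.cfg
    ```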
    • by swb ( 14022 )
      And one enables the other. You really want to be able to dedicate boxes to specific services, but you also can't have a zillion boxes. VMs allow some slack to at least get the most annoying (*cough*BES*cough*) and least cooperative stuff on their own boxes.
    • Re: (Score:3, Interesting)

      In fact I'd say that in my data center the driver used to be containerization and is increasingly consolidation. The reasons are radically increased power costs and increasingly complex disaster recovery issues. Virtualization offers significant advantages in both areas.
  • I've had an x86 box at home since the 80s, and only this year did putting XP Pro on a QEMU disk image with a Samba share _finally_ get me to rigidly separate the OS, which I can zip, tar and burn to DVDs for backup, from the data on the Samba share, which I can back up regularly. Now to benefit from the example and get more professional about the rest of the Linux machines in the home.

  • Really about rPath (Score:3, Informative)

    by rowama ( 907743 ) on Tuesday July 24, 2007 @08:23AM (#19967963)
    In case you're interested, the article is really a review of rPath, a virtual appliance builder based on a custom-tailored GNU/Linux...
  • PHP 6 (Score:4, Informative)

    by gbjbaanb ( 229885 ) on Tuesday July 24, 2007 @08:26AM (#19967989)
    I read somewhere (possibly on the PHP bug system) that they were considering scrapping most of the security features we've all grown to... well, hate really, and replacing them all with a virtualisation system. I did think at the time that the virtualisation system they'd implement to keep PHP-based vhosts separate and secure would be to run Apache in many virtual OSes.

    I suppose jailing applications is a well-known way of securing them; this really just improves on that, but with much more overhead. I wonder if anyone is thinking about providing "lightweight" virtualisation for applications instead of the whole OS?
    • It's already done. It's called an operating system.
    • What you are looking for:

      Virtuozzo. OpenVZ. Solaris Containers. BSD Jails. Linux has something (at least one!) too, I forget the name.

      In terms of Enterprise class features, Virtuozzo is the best of them, and comparatively cheap.

      C//
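      As a sketch of what that "lightweight" per-application approach looks like with OpenVZ, one of the options listed above (the container ID, template and limits are arbitrary examples):

      ```sh
      vzctl create 101 --ostemplate centos-5 --hostname php-vhost1
      vzctl set 101 --diskspace 2G:2.2G --save   # disk quota, soft:hard
      vzctl set 101 --cpulimit 25 --save         # cap CPU for this vhost
      vzctl start 101
      vzctl exec 101 ps aux                      # run a command inside the container
      ```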
  • It's all about (Score:5, Insightful)

    by suv4x4 ( 956391 ) on Tuesday July 24, 2007 @08:30AM (#19968033)
    It's becoming increasingly clear that the most important use of virtualization is not to consolidate hardware boxes but to protect applications from the vagaries of the operating environments they run on. It's all about 'containerization,'

    Don't trust "it's all about" or "it turns out that to the contrary" or "set to fully replace" statements, especially when there's lack of evidence of what is claimed.

    Hosting services use virtualization to offer 10-20 virtual servers per physical machine, and I and many people I know use virtual machines to test configurations we can't afford to have separate physical machines for.

    So even though it's also about "containerization" (is "isolation" a bad word all of a sudden?), it's not ALL about it.
    • I suppose you think when someone claims to be able to eat a horse, they actually have the capacity to devour an entire equine. Relax, it's a figure of speech. [wikipedia.org]
      • by suv4x4 ( 956391 )
        I suppose you think when someone claims to be able to eat a horse, they actually have the capacity to devour an entire equine. Relax, it's a figure of speech.

        My ancestors used to consume equine for subsistence, so if someone would say they can eat a horse I'll expect them to. If not, I'll kill them damn liars, and ceremonially drink wine from their skull (something my ancestors used to do a lot too).

        My ancestors also didn't know the concept of hyperbole, much like readers of tech news.
        • by spun ( 1352 )
          Great. Now I have this picture in my head of a horde of barbarian dorks, as if one had crossed the cast of the "What's in your wallet?" commercials with the stars of "Revenge of the Nerds." Chainmail pocket protectors. Helmets with horn-rim glasses. Slide rules in scabbards. Run away! Run away!
  • by assantisz ( 881107 ) on Tuesday July 24, 2007 @08:33AM (#19968051)
    Solaris has Zones [sun.com] for that exact purpose. Lguest [ozlabs.org], I believe, offers something similar for Linux.
    • by Sancho ( 17056 )
      You don't get anything similar on Linux, and generally speaking, these alternatives can't run a proprietary OS.

      We've had UML and chroot for quite a while in Linux, but it's equally limited. With virtualization, I can run Windows on my Linux box, which is (to me) where the real use is.
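      For reference, the basic lifecycle of the Zones mentioned above looks roughly like this (the zone name and path are illustrative):

      ```sh
      zonecfg -z webzone 'create; set zonepath=/zones/webzone; commit'
      zoneadm -z webzone install
      zoneadm -z webzone boot
      zlogin webzone        # get a shell inside the zone
      ```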
  • Buzzword alert! (Score:5, Insightful)

    by drspliff ( 652992 ) on Tuesday July 24, 2007 @08:33AM (#19968053)
    With virtualization like Linux-VServer, Xen, VMware etc. there are two main reasons why people are using it.

      1) Consolidation
      2) "Containerization" or whatever their calling it today.

    The company that I work for is using multiple virtual servers to keep applications separate and to be able to migrate them from machine to machine more easily, which is a common use for VMware (e.g. the appliance trend). So you're trading performance and memory usage for security and robustness/redundancy.

    Across maybe 100-200 servers, the number of vservers we have hosting customer applications is astonishing (probably around 1200 to 1500, which is a bit of a nightmare to maintain). When an application starts to use more resources, its vserver is moved over to a machine with fewer vservers on it, and gradually to its own server, which in the long run saves money & downtime.

    The other major industry using them is the hosting industry, allowing customers a greater amount of personalization than the one-size-fits-all cpanel hosting companies. This is the real area where consolidation has increased, biting into the hardware market's possible sales, because thousands of customers are now leasing shared resources instead of leasing actual hardware.

    Either way, the number of new (virtual) machines and IP addresses, all managed by different people, is becoming a management nightmare. Now everybody can afford a virtual dedicated server on the internet regardless of their technical skills, which often ends up as a bad buy (lack of memory and resource constraints compared to shared hosting on a well-maintained server).
    • Re: (Score:3, Informative)

      by Courageous ( 228506 )
      Enterprise Management Associates conducted a survey of big users of Virtualization, and asked them to rank order the importance of certain functions of virtualization to their organizations. It was ranked thus:

      1. Disaster Site Operations (specifically the use case where main operations are still on metal, but the disaster site is virtual; this is a use case where there are fewer physical boxes than there are operating systems, so it is a consolidation case, just not the usual one).

      2. Increased Agility (as

  • What do you run inside a virtual machine - an OS!!

    What do you run the virtual machine on - an OS!!

    So, any application now has to withstand two OSes, not just one. Isolation can be an important part of virtualization, but it's about isolating applications from each other, not from the OS.
    • Re: (Score:3, Informative)

      by GiMP ( 10923 )
      > What do you run the virtual machine on - an OS!!

      Unless you're running Xen, unless you consider Xen an OS. But this brings us back to the question, "what is an OS?"

      Xen is a kernel for managing virtualized guests; it sits at Ring 0, where a traditional OS normally resides. Xen requires that a single guest machine is set up to be booted by default, which receives special privileges for purposes of managing Xen. This special guest is called the "dom0", but is for all other intents and purposes -- just
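      From inside that privileged dom0, the management view is simply the xm toolstack (the names, IDs and sample output below are illustrative):

      ```sh
      xm list
      # Name        ID   Mem VCPUs State  Time(s)
      # Domain-0     0   512     2 r----- 1234.5
      # guest-web    3   256     1 -b----  321.0

      xm create /etc/xen/guest-web.cfg   # boot another unprivileged guest (domU)
      ```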
  • by jkrise ( 535370 ) on Tuesday July 24, 2007 @08:36AM (#19968079) Journal
    From the referenced article:

    why did Intel just invest $218.5 million in VMware? Does Craig Barrett have a death wish? Or maybe he knows something IDC doesn't? There has got to be a little head scratching going on over in Framingham just now.
    As I replied to an earlier thread on the Linux kernel being updated with 3 VMs, this sounds very fishy and intriguing. Virtualisation is simply a technique of emulating the hardware in software - memory, registers, interrupts, instruction sets etc. If VMs will only emulate standard instructions and functions, then the Intel processors will be useless as a platform for reliable DRM or Trustworthy Computing purposes, where the hardware mfr. controls the chip - not the customer or software developer. If the virtualisation vendor is also secretive and opaque about his software, that is ideal for Intel because they will now be able to re-implement the secretive features in the VM engines.

    The obvious explanation for Barrett's investment (which will net Intel a measly 2.5% of VMware's shares after the forthcoming IPO) is that Intel believes virtualization will cause people to buy more, not less, hardware.
    True virtualisation will cause the opposite effect - people will buy less hardware. It is simply amazing that Windows 98 for instance, can deliver the same (and often better) end-user experience and functionality that Vista does, but with only 5% CPU MHz, RAM and Disk resources. And so virtualisation will allow 20 Windows 98 instances on hardware required for a single instance of Vista without degrading the user experience.

    That can be a chilling thought to companies like Intel, Microsoft or Oracle. Also, the carefully woven, convoluted DRM and TCPA architectures that consume gazillions of instructions and slow down performance to a crawl... will simply be impossible if the virtualisation layer simply ignores these functions in the hardware. Which is why I felt it very strange for the Linux kernel team to get involved in porting these VMs in order to allow Vista to run as a guest OS. It shouldn't have been a priority item [slashdot.org] for the kernel team at all, IMO.
    • True virtualisation will cause the opposite effect - people will buy less hardware.

      But every desktop user is going to have a CPU in their machine, and the number of CPUs in the big server farms isn't going to change much because they pile on capacity to suit the application. Odd sites like the one I work at will use VMware where they have a requirement for a calendar server running Linux 2.2 (I am not making this up) and don't want to waste a box on it. Fair enough, but that's not a big market to lose.

    • by GiMP ( 10923 )

      True virtualisation will cause the opposite effect - people will buy less hardware.

      Perhaps, though for myself, this is untrue. I run a hosting provider. Back in the day, we simply needed a few large hosting machines and that was sufficient -- providers could pile accounts onto machines. Even medium-sized companies could get by with less than 10 shared-hosting servers.

      However, that has changed with VPS... We can only fit a few customers onto each machine. The more customers we have, the more virtual mac

      • by afidel ( 530433 )
        You should see 16-core machines with 64GB for a "reasonable" price by fall. The HP DL585G2 will be upgradable to AMD Barcelona. The estimated cost of a four-socket quad-core machine with 64GB of RAM is about $30K by my estimates; that's a 20% premium over a similar machine with near-top-of-the-line quad dual-cores today. I know IBM and Dell both have four-socket machines that are prequalified for Barcelona upgrades as well.
    • by drew ( 2081 )

      . And so virtualisation will allow 20 Windows 98 instances on hardware required for a single instance of Vista without degrading the user experience.

      That is of course assuming that you don't consider running Windows 98 to be a degraded user experience. Heck why not run Windows 3.1? You could probably run 100 instances of that for the same hardware requirements as Vista.
    • ...why did Intel just invest $218.5 million in VMware?

      Intel has over 60,000 computers in their data centers. Over 40,000 of those servers run VMWare.

      Maybe they did it for the discount? Ha.

      Joe.

    • by Wolfrider ( 856 )
      [[
      It is simply amazing that Windows 98 for instance, can deliver the same (and often better) end-user experience and functionality that Vista does, but with only 5% CPU MHz, RAM and Disk resources. And so virtualisation will allow 20 Windows 98 instances on hardware required for a single instance of Vista without degrading the user experience.
      ]]

      --Win98 is not even supported by MS anymore -- go with Win2kpro instead.
       
      // Ya, srsly
    • It is simply amazing that Windows 98 for instance, can deliver the same (and often better) end-user experience and functionality that Vista does, but with only 5% CPU MHz, RAM and Disk resources.

      Absolutely.

      That's why I play World of Warcraft on my Windows 98 box with DirectX 9 and my Nvidia 8800 GTX video card.

      w00t!
  • by ls671 ( 1122017 )

    This is kind of obvious. I used to use more machines for security reasons; now I use fewer machines, but they are more powerful. When you do server consolidation, it implies that applications that used to run on different hardware for security and stability reasons will now be running on the same hardware within different VMs. So how can they say "protect applications from the vagaries of the operating environments" is opposed to "consolidating hardware boxes"?

    "Consolidating hardware boxes" implies "protect applic

    • by DaveCar ( 189300 )

      I think it's like when you have an application that is certified with a particular OS configuration and set of patches, etc., and another application which would require a conflicting setup. You can run all your applications in a known good setup and not worry about updates for one application (and the OS dependencies which it drags in) affecting another. You could freeze your package manager at a certain configuration for an application so random OS updates don't go breaking things. Those kinds of vagaries.
  • Node Locking (Score:4, Interesting)

    by Pvt_Ryan ( 1102363 ) on Tuesday July 24, 2007 @08:44AM (#19968167)
    I use VMware servers for software that is node-locked. Node-locked software is usually keyed to a machine's MAC address, and I find that using VMs reduces downtime in the event of either the host or the client failing. In the case of the host, if we can recover the VM we just copy it to another host and run it. In the case of the client dying, the great thing is I just create a new VM and change its MAC address to match the dead one, then reinstall my licence files, saving me from having to reregister all of the licences to the "new" machine. Hardware consolidation also plays a large part in my use of VMs, but the main reason is recoverability -- so much so that all my DCs are on VMs, so if their host dies (hardware other than HDD) then I can either pull the disks and put them in another machine, or, if my replication has succeeded more recently, just start my backup copy of the DC and let it update from the domain. Total downtime is about 15 min tops.
    • by Mendy ( 468439 )
      If you were doing this purely for the ability to change the MAC, you don't have to; most network card drivers have an option that allows the MAC to be overridden.
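      To make the parent's approach concrete: on a Linux guest the MAC can be overridden at the interface level, and with VMware guests the same thing is usually done in the .vmx file (the addresses below are made up, and the .vmx keys are given from memory, so treat them as an assumption):

      ```sh
      ip link set dev eth0 down
      ip link set dev eth0 address 00:0c:29:12:34:56
      ip link set dev eth0 up

      # VMware .vmx equivalent (static MACs are expected in the 00:50:56 range):
      #   ethernet0.addressType = "static"
      #   ethernet0.address     = "00:50:56:12:34:56"
      ```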
  • Is there actually any data somewhere on why companies are turning to virtualization? We are doing it for stability of applications to a very small degree, but also for ease of development and backup, and in large part to consolidate and use hardware more efficiently. What about you -- why are you considering/using/investigating virtualization?

  • Makes sense to me (Score:4, Informative)

    by jimicus ( 737525 ) on Tuesday July 24, 2007 @08:49AM (#19968227)
    I run a whole bunch of virtual servers and that's exactly what I'm doing.

    It's fantastically handy to be able to install and configure a service in the knowledge that no matter how screwed up the application (or, for that matter, how badly I screw it up), it's much harder for that application to mess up other services on the same host - or, for that matter, for existing services to mess up the application I've just set up.

    Add to that - anyone who says "Unix never needs to be rebooted" has never dealt with the "quality" of code you often see today. The OS is fine, it's just that the application is quite capable of rendering the host so thoroughly wedged that it's not possible to get any app to respond, it's not possible to SSH in, it's not even possible to get a terminal on the console. But yeah, the OS itself is still running fine apparently, so there's no need to reboot it.

    This way I can reboot virtual servers which run one or two services rather than physical servers which run a dozen or more services.

    Granted, I could always run Solaris or AIX rather than Linux, but then I'll be replacing a set of known irritations with a new set of mostly unknown irritations, all with the added benefit that so much Unix software never actually gets tested on anything other than Linux these days that I could well find myself with just as many issues.
  • So in the future we will not release rpm packages and setup files but VM images to our customers? Ok, why not. It could ease deployment of highly customizable enterprise software. So you basically deploy all the OS config with it. Sounds cool. No more telling the sysadmin to open ports, create mount points, set permissions, install init scripts, update this and that library, etc.
  • by Thumper_SVX ( 239525 ) on Tuesday July 24, 2007 @09:06AM (#19968393) Homepage
    Well, yes and no.

    As I keep telling people when I work with virtualization, it does not necessarily lead to server consolidation in the logical sense (as in instances of servers); rather, it tends to lead to server propagation. This is probably expected; generally I/O will be lower for a virtual machine than for a physical machine, thus requiring the addition of another node for load balancing in certain circumstances. However, this is not always the case.

    Virtualization DOES help lead to BOX consolidation; as in it helps reduce the physical server footprint in a datacenter.

    Let me give you my viewpoint on this; generally virtualization is leveraged as a tool to consolidate old servers onto bigger physical boxes. Generally, these old servers (out of warranty, breaking/dying and so on) have lower I/O requirements anyway, so they often see a speed boost going to the new hardware... or at the very least performance remains consistent. However, where new applications are being put on virtual platforms, quite often the requirements of the application cause propagation of servers because of the I/O constraints. This is generally a good thing, as it encourages developers to write "enterprise ready" applications that can be load balanced instead of focusing on stand-alone boxes with loads of I/O or CPU requirements. This is good for people like me as it provides a layer of redundancy and scalability that otherwise wouldn't be there.

    However, the inevitable cost of this is management. While you reduce physical footprint, there are more server instances to manage, thus you need a larger staff to manage your server infrastructure... not to mention the specialized staff managing the virtual environment itself. This is not in itself a bad thing, and generally might lead to better management tools, too... but this is something that needs to be considered in any virtualization strategy.

    Generally in a Wintel shop, more new applications get implemented in most companies these days. This is particularly true since most older applications have been or need to be upgraded to support newer operating systems (2003 and the upcoming 2008). This means that the net effect of all I've mentioned is an increase in server instances even while the footprint decreases.

    "Containerization" (yuck!) is not new by the way. This is just someone's way of trying to "own" application isolation and sandboxing. People have done that for years, but I definitely see more of it now that throwing up a new virtual machine is seen as a much lower "cost" than throwing up a new physical box. The reality of this is that virtualization is VERY good for companies like Microsoft who sell based on the instances of servers. It doesn't matter if it's VMWare or some other solution; licensing becomes a cash cow rapidly in a virtualized environment.

    Where I work we've seen about a 15% net server propagation in the process of migrating systems so far. Generally, low-load stuff like web servers virtualizes very well, while I/O-intensive stuff like SQL does not. However, a load-balanced cluster pair of virtual machines on different hardware running SQL can outperform SQL running on the same host hardware as a single instance... this means that architecture changes are required, and more software licenses are needed, but the side effect is a more redundant, reliable and scalable infrastructure... and this is definitely a good thing.

    I am a big believer in virtualization; it's somewhat harking back to the mainframe days, but this isn't a bad thing either. The hardware vendors are starting to pump out some truly kick-ass "iron" that can support the massive I/O that VM's need to be truly "enterprise ready". I am happy to say that I've been on the leading edge of this for several years, and I plan to stay on it.
    • by darkuncle ( 4925 )
      ----
      However, the inevitable cost of this is management. While you reduce physical footprint, there are more server instances to manage, thus you need a larger staff to manage your server infrastructure... not to mention the specialized staff managing the virtual environment itself. This is not in itself a bad thing, and generally might lead to better management tools, too... but this is something that needs to be considered in any virtualization strategy.
      ----

      This is completely wrong - the increased scalabil
      • OK, I'll concede you're probably correct in some instances. However, I don't know if you're dealing with virtualizing Windows servers or something else. Typically, Windows servers without a good systems management infrastructure will tend to increase the workload (patching, maintenance, troubleshooting bluescreens and so forth)... so the server propagation in our environment has caused an increase in administrators as well. I'm spearheading a project to improve our management infrastructure at the moment, s
        • by darkuncle ( 4925 )
          Windows (assorted versions), FreeBSD (>= 6.x July 2006 or later, prior to that there was a scsi enumeration bug between FreeBSD and ESX that made running 6-prerelease impossible), OpenBSD, RHEL 4 (a few Debian as well). Approaching 4 figures on windows VMs, and we have a single person that manages patching, anti-virus and whatnot.

          It's not just that VMware removes (or rather, greatly reduces) the hardware administration requirements, it's that it makes managing infrastructures _much_ more scalable. Change
  • I prefer "encapsulation" myself
    • You'd think that the noun pertaining to the verb "contain" would be "containment". But that would just be too easy. Since the software is running inside a container, then obviously the buzzword must include the whole of "container".
  • The reason we run VI3 is so that we can deploy servers on demand. There's no need to prep hardware. You just right-click and deploy. And, yes, the initial impetus was to consolidate from about 20 hardware servers down to two. We now run about 40 virtual servers on 4 octo-core servers. Consolidation is definitely at work here. "Containerization" is a stupid word, as it's entirely possible through non-virtual deployment (100% probable, in fact). Virtualization is about flexibility in stack deployment,
  • by cwills ( 200262 ) on Tuesday July 24, 2007 @09:57AM (#19968967)
    Since the late 60s IBM's mainframe VM operating system has been available. It too went through the same phases that are happening now with VMware, Xen, etc. Initially VM was used for hosting multiple guest systems (a good history -> VM and the VM community, past present, and future [princeton.edu] - pdf warning), but quickly a small project (Cambridge Monitoring System - CMS) became an integral part of VM. CP provided the virtualization and CMS provided a simple single-user operating system platform.

    Within a VM system, one will now find three types of systems running in the virtual machines.

    1. Guest systems, such as Linux, z/OS, z/VSE, or even z/VM
    2. General users using CMS in a PC like environment (sorry no GUI's, and yes there are arcane references to card punches, readers, etc. -- but question -- why does linux still have TTYs?). In the heyday before PC's, CMS provided an excellent end user environment for development, as well as a general computing platform.
    3. And finally Service Virtual Machines (SVMs).

    It is these Service Virtual Machines that equate to the topic of the original post. An SVM usually provides one specific function, and while there may be interdependence between SVMs (for example the TCPIP SVM that provides the TCP/IP stack and each of the individual TCP/IP services), they are pretty much isolated from each other. A failure in a single SVM, while disruptive, usually doesn't impact the whole system.

    One of the first SVMs was the Remote Spooling Communication Subsystem (or RSCS). This service allowed two VM systems to be linked together via some sort of communication link -- think UUCP.

    The power of SVMs is in the synergy between the hypervisor system and a lightweight platform for implementing services. The lightweight platform itself doesn't provide much in terms of services. There is no TCP/IP stack, no "log in" facility (it relies only on the base virtual machine login console), and maybe not even any paging memory (letting the base VM system manage a huge address space). Instead, a lightweight platform will provide a robust file system, memory management, and task/program management. In IBM's z/VM product, CMS is an example of a lightweight platform. The Group Control System (GCS) is another example (GCS was initially introduced to provide a platform to support VTAM - which was ported from MVS).

    Part of the synergy between the hypervisor and the SVMs is that the hypervisor needs to provide a fast, low-overhead intra-virtual-machine communication path that is not built upon the TCP/IP stack. In other words, communication between two virtual machines should not require that each virtual machine contain its own TCP/IP stack with its own IP address. Think more along the lines of using the IPC or PIPE model between the SVMs.

    Since the SVM itself is not a full suite of services, maintenance and administration are done via meta-administration; in other words, you maintain the SVM service from outside the SVM itself. There is no need to "log into" the SVM to make changes. Instead of the SVM providing a sys-log facility, a common sys-log facility is shared among all the SVMs. Instead of each SVM doing paging, simply define the virtual machine size to meet the storage requirements of the application, and let the hypervisor manage the real storage and paging.

    Maybe a good analogy would be taking a Linux kernel and implementing a service by using the init= parameter in the kernel to invoke a simple setup (mounting the disks) and run just the code needed to perform the service. Communication with other services would be provided via hypervisor PIPEs between the different SVMs. So one would have a TCP/IP SVM that provides the TCP/IP network stack to the outside world, and a web server SVM that provides just the HTTP protocol and a base set of applications, using a hypervisor PIPE to talk to the TCP/IP stack; within the web server SVM, hypervisor PIPEs would be used to talk to the individual application SVMs.
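    The Linux analogy in that last paragraph, sketched: boot a kernel straight into a single service process instead of a full init system (the paths and the service binary are hypothetical):

    ```sh
    qemu-system-x86_64 \
      -kernel vmlinuz \
      -hda service-rootfs.img \
      -append "root=/dev/hda ro init=/usr/local/bin/my-service" \
      -m 64 -nographic
    ```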

  • It's all about 'containerization,' to employ a really ugly but useful word

    How about just "containment". That way, rampant verbification won't overrunerrize things.

  • is not consolidation (although that's popular with the CFO). In fact, I'd say personally it's not even the ability to build vastly more scalable and redundant infrastructures (although that's a close second). My favorite feature of virtualization is how much easier it makes life for sysadmins (a selfish perspective, but entirely valid, as sysadmins are the ones who will be doing the management work, whether it's on physical gear or virtual). Need a new server? Clone one from a template and you've got add'l
  • I love that some guy made up this new buzzword. After all, there are no other words in existence today which can convey the same meaning!

    Well, except for compartmentalization which I guess has been used alongside words like virtualization & partitioning in computer science for ages. :)
  • If we were using an OS with decent memory protection and scheduling (VMS, among others), there would be no need to use an extra layer of software to run more than one task on one box. Back in the day, I supported several hundred users on each individual machine in a VAX cluster, doing everything from large finite element analyses, CAD for large engineering projects, large Oracle database activities, program development in several languages, word processing and office automation, and accounting and financi
  • Containerization is nothing new. In fact, application isolation (that being the proper name) was a primary selling point for Win95, for MacOS5, for OS/2, for OS/2 Warp, for NeXTstep, .NET, for Java, and Geos. This is nothing new. The "consensus belief," if it really did forget about this aspect of things - about which I retain intense doubts - is just forgetting history.
  • As a computer scientist, I'd like to welcome you all to the nearly 40-year-old world of virtual machines, hypervisors, and extreme flexibility. Though I've only been using them personally for ~20 years.

    Server CPUs have for all practical purposes always had VMs. Intel resisted adding the needed hardware support to its consumer chips for a very, very long time, to avoid exactly what we see happening now.

    And yes, VMware rocks harder than a fox with socks.
  • There have been several times when I've wanted to try some high-level OSS package for a quick test run. Some "Exchange killers" come to mind, but I've never gotten beyond the install docs because there are literally 50 dependencies and who knows how many config changes required just to install the entire stack of software. Now, thanks largely to VMware's free VMware Server, there are tons of pre-configured builds for all of the major OSS applications.

    It really brings down the knowledge and time required f
