
An Overview of Virtualization 119

IndioMan writes to point us to an overview of virtualization — its history, an analysis of the techniques used over the years, and a survey of Linux virtualization projects. From the article: "Virtualization is the new big thing, if 'new' can include something over four decades old. It has been used historically in a number of contexts, but a primary focus now is in the virtualization of servers and operating systems. Much like Linux, virtualization provides many options for performance, portability, and flexibility."
  • virtuosity (Score:4, Funny)

    by User 956 ( 568564 ) on Tuesday January 02, 2007 @02:49PM (#17434746) Homepage
    IndioMan writes to point us to an overview of virtualization -- its history, an analysis of the techniques used over the years, and a survey of Linux virtualization projects.

    That article had the virtue of being a virtual cornucopia of information.
  • by Anonymous Coward
    Virtually anyone can do it!
  • QEMU (Score:3, Informative)

    by the.metric ( 988575 ) on Tuesday January 02, 2007 @03:12PM (#17435038)
Just wanted to point out that qemu can also do virtualisation on Linux, just like VMware, with a closed-source kernel module. It works quite well too. http://fabrice.bellard.free.fr/qemu/qemu-accel.html [bellard.free.fr]
    • Re: (Score:3, Interesting)

      by dignome ( 788664 )
Kernel Virtual Machine - http://kvm.sf.net/ [sf.net] It requires a processor with Intel's VT or AMD's SVM technology (the CPU flags will read vmx for VT or svm for AMD-V; a quick check is sketched below). The developers are looking for people to test optimizations that have just gone in. http://thread.gmane.org/gmane.comp.emulators.kvm.devel/657/focus=662 [gmane.org] It uses a slightly modified qemu.
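      One quick way to look for those flags on a Linux host (just a sketch of the standard procfs check, not anything from the KVM docs) is:

          # Look for hardware virtualization flags: vmx = Intel VT, svm = AMD-V.
          # No output means the CPU (or a BIOS setting) isn't exposing either.
          egrep -o 'vmx|svm' /proc/cpuinfo | sort -u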
    • Just wanted to point out that qemu can also do virtualisation on Linux, just like Vmware, with a closed-source kernel module.

      The article lists the following solutions on Linux: Bochs, QEMU, VMware, z/VM, Xen, UML, Linux-VServer, and OpenVZ. I'm not sure why you felt the need to mention QEMU specifically. It does, however, seem like one of the more promising solutions. Have you used it in production?

      • Re:QEMU (Score:5, Informative)

        by julesh ( 229690 ) on Tuesday January 02, 2007 @03:42PM (#17435376)
        I'm not sure why you felt the need to mention QEMU specifically.

        I suspect because the article incorrectly describes it as an emulator, while it is capable of full virtualization if the plugin the GP post linked to is used.

        It's not the only such mistake in the article: Xen is described as performing paravirtualization, but it too is capable of full virtualization in some cases (i.e., when it is supported by the hardware).
        • I suspect because the article incorrectly describes it as an emulator, while it is capable of full virtualization if the plugin the GP post linked to is used.

          The article does one better and specifically mentions KVM+Qemu, but don't take my word for it.
          • by julesh ( 229690 )
            The article does one better and specifically mentions KVM+Qemu, but don't take my word for it.

            Which is entirely different to qemu with qemu-accel; the latter is *not* a kernel module.
      • by eno2001 ( 527078 )
I've used QEMU in "production" both at home and at work, for what it's worth. As good as it is, it's really more a desktop virtualization solution IMHO, as you can't run multiple instances of QEMU on a normal box and expect full-blown performance even with the kqemu module. My work application was to let some of my users run a test workstation environment to test some software that couldn't have multiple instances of different versions running. These users were on Windows 2000 boxes and were running virtua
I haven't played with it in a while, but QEMU was considerably slower than VMware when I last gave it a spin. I figured it was a performance penalty from emulating the whole x86 where VMware could run closer to the bare metal (on an underlying x86, that is) for near-native performance. Am I mistaken?
      • Re: (Score:2, Informative)

        by megaditto ( 982598 )
As I recall, qemu had an option to do fast emulation, but it only worked on certain architectures (e.g. on PowerPC you could only use the slow x86 emulation path). You would also need to run it as root and be OK with an occasional kernel panic.

  • Apple (Score:5, Interesting)

    by 99BottlesOfBeerInMyF ( 813746 ) on Tuesday January 02, 2007 @03:12PM (#17435042)

This article is an okay overview of many of the ways virtualization is now being used. As an aside, has anyone else noticed Apple seems to be missing the boat this time? They're certainly benefiting from virtualization, with several players in the market providing emulation solutions and tools now that they are on Intel, but Apple themselves seem to have done nothing and not even provided a strategy. Servers are moving to more virtual servers on one real machine, but OS X's license forbids it from fulfilling that role. Tools for using OS X as a thin client for accessing remote virtual machines are likewise weak. Apple hasn't even provided a virtual machine for their customers to emulate old Macs so that users can run OS 9 apps on the new Intel machines, and they restrict redistribution of their ROM files so that third parties are unable to do this. No mention of adding VM technology to OS X has been heard, despite its inclusion in the Linux kernel among others.

    Does Apple have something against VM technology? Are they simply behind the times and failing to see the potential?

    • Re: (Score:2, Interesting)

The answer here is simple. Virtualization is useless in a desktop multimedia user environment. Video cards, sound cards and the like are bus-mastering devices, and you cannot virtualize hardware access unless you own the whole environment. In simple terms, virtualization is only useful in the server arena and useless on the desktop.
      • Re:Apple (Score:5, Interesting)

        by 99BottlesOfBeerInMyF ( 813746 ) on Tuesday January 02, 2007 @03:26PM (#17435214)

Virtualization is useless in a desktop multimedia user environment. Video cards, sound cards and the like are bus-mastering devices and you cannot virtualize hardware access unless you own the whole environment. In simple terms, virtualization is only useful in the server arena and useless on the desktop.

As someone with two VMs running on my OS X laptop right now, I'd have to disagree with you. As for sound and video cards: the sound works just fine, and at least two companies I know of are working on support for allowing hosted OSes full access to video card acceleration.

    • Re:Apple (Score:5, Insightful)

      by diamondsw ( 685967 ) on Tuesday January 02, 2007 @03:27PM (#17435218)
      Let's see, why don't they do virtualization...
They don't want you to use OS X in a VM, as it makes it trivial to use it on generic PCs, which eliminates the vast majority of their revenue.
They don't include virtualization software themselves as Parallels and VMware are doing a good job if you need such a thing, and they don't want to alienate them.
      • And not strictly virtualization, but you mentioned it - they don't want to make it easy to use OS 9. It's been dead to them for years (and porting Classic to Intel would not have been easy, given the way Rosetta works). Meanwhile, they do nothing to hinder or help SheepShaver and others; the ROM files needed are available from Apple's website (although not easy to find).


      None of this is hard to figure out. Yes, there are reasons it would be nice, but it's pretty obvious why they're not too keen on it.
      • Re:Apple (Score:4, Insightful)

        by 99BottlesOfBeerInMyF ( 813746 ) on Tuesday January 02, 2007 @03:45PM (#17435398)

They don't want you to use OS X in a VM, as it makes it trivial to use it on generic PCs, which eliminates the vast majority of their revenue.

I'm sure that is true, but do they have a plan for what happens when/if the industry moves toward virtual machines on the server? Are they just going to let OS X Server die, or try to target only really small businesses? What about thin client support? If more and more VMs start running on big hardware and exporting to thin clients, do they have a plan to provide better support for those clients? Integrate with those UIs? Are they just assuming none of this will happen?

        They don't include virtualization software themselves as Parallels and VMWare are doing a good job if you need such a thing, and they don't want to alienate them.

It is perfectly understandable not to include a VM in their workstation, but that does not preclude kernel-level support for virtualization, including APIs and hooks for interoperability. What about hooks for supporting virtual machines like Parallels, but treating the apps as more "native", with Windows or Linux binaries showing up as icons in OS X?

        And not strictly virtualization, but you mentioned it - they don't want to make it easy to use OS 9.

Depending upon how access to OS 9 apps is accomplished, it certainly is virtualization. I certainly understand not including it in the default install to discourage the use of OS 9 apps, but making it hard to find and install your own VM of this sort is counterproductive, in my opinion. Even the PS3 provides a way to run PS2 games.

        Meanwhile, they do nothing to hinder or help SheepShaver and others; the ROM files needed are available from Apple's website (although not easy to find).

SheepShaver is useless without ROMs, whose discovery, extraction, and installation are well beyond the capabilities of even many advanced users. Apple does not allow the SheepShaver project to redistribute those ROMs or include them in a pre-built binary. That certainly hinders the project a lot and prevents it from ever being user-friendly enough to attract a significant body of developers. It seems like a tiny bit of privilege from Apple would go a long way here, but they withhold it.

It just seems like VM is a very promising new technology that MS and Linux distros are leaping at, and which is finally evolving a few standards. Ignoring it on so many fronts seems dangerous to me, akin to MS ignoring the internet until the final hour. Ignoring some of the fronts on which VM is making inroads is one thing, but ignoring them all seems almost like a cultural bias. I wonder if maybe the term is taboo at Apple, since they are worried about it on one front and have applied a policy a little too liberally.

        • by Moofie ( 22272 )
          "but treating the apps as more "native" with Windows or Linux binaries showing up as icons in OS X?"

          OK, your other points aside, I got a full-body shivering skeeve with a bit of throw up in my mouth when I had a mental picture of usually-very-nicely designed OSX icons sitting right next to the crap you get on Windows and Linux.

          Sandbox==good. That way you can keep all the turds in the same place.
          • OK, your other points aside, I got a full-body shivering skeeve with a bit of throw up in my mouth when I had a mental picture of usually-very-nicely designed OSX icons sitting right next to the crap you get on Windows and Linux.

            Be that as it may, it makes sense for end users and maintaining a consistent paradigm is important especially for novice users.

            Sandbox==good. That way you can keep all the turds in the same place.

            I'm all for sandboxing applications, even native ones. I think it is going to ha

And you think Apple would tell you about it if they *were* investigating VM support? They rarely announce *anything* until it's a certainty, or in rare exceptions when a new feature in one product demands a new product that doesn't yet exist (e.g. the new iTV thingy they've been talking about... /me drools).
        • Are they just going to let OS X server die, or try to target only really small businesses?

          Funny, that's all they do now. I don't see them shipping any servers beyond pizza boxes.

          that does not preclude kernel level support for virtualization, including API's and hooks for interoperability

          Given how fast Parallels implemented it, I'd say those hooks are in place. Parallels wrote a kernel extension (as did Fusion) and they have virtualization. Or does this go back to "OS X in a VM" again?

          What about hooks for su
        • by 4D6963 ( 933028 )

SheepShaver is useless without ROMs, whose discovery, extraction, and installation are well beyond the capabilities of even many advanced users. Apple does not allow the SheepShaver project to redistribute those ROMs or include them in a pre-built binary. That certainly hinders the project a lot and prevents it from ever being user-friendly enough to attract a significant body of developers. It seems like a tiny bit of privilege from Apple would go a long way here, but they withhold it.

If I recall c

As for the consumer virtualization... personally, it makes me throw up a little bit in my mouth every time I think about it.

In my humble opinion, OS 9 needed to get the boot. Very few people depend on classic apps, and maintaining the classic environment is just another damn thing to chew up development resources. Yet, this is becoming more a topic of emulation than virtualization.

      The big thing for consumers is Windows virtualization on OS X. It's a cool concept, yet it's a human factors nightmare and not
      • In my humble opinion, OS 9 needed to get the boot.

I think excluding the ability to run OS 9 by default is a reasonable decision. But it's nice to have a more gradual deprecation. They went from supporting OS 9 apps out of the box with every machine to, in a single step, not supporting them even as an option and intentionally making it hard for others to provide that functionality.

        Very few people depend on classic apps, and maintaining the classic environment is just another damn thing to chew up deve

    • by jafac ( 1449 )
      Does Apple have something against VM technology? Are they simply behind the times and failing to see the potential?

      I wouldn't say that. They played a key role in helping Connectix get Virtual PC off the ground. (Though I wish they had bought Connectix instead of letting Microsoft do it - but then again, given the tight CPU-specific reliance in Connectix, they got screwed TWICE; once when IBM bailed on the endian-opcode for the G5, and then again with the switch to intel).

      On the server side, I don't think
      • On the server side, I don't think that OS X is inherently a good platform for virtualization, given the performance issues with the kernel

        Virtualization will never be as fast as the "real thing" but for the most part, it doesn't need to be. I don't have any major issues with the speed of VMs on OS X, only with the feature set.

        I don't begrudge them on that point. That was a clusterfuck from the start. Honestly, I don't run any more OS 9 apps at all. I still have an occasional old non-carbonized game or

  • Since I'm not a server admin, I've always wondered about the use and importance of this "virtualization" I've been hearing so much about. TFA is a pretty useful overview of the topic, and I'm glad it was posted here.

No, it's not red-hot, breaking news, but valuable stuff like this is why I keep coming back to this mess (/., I mean). If you're like me and spend most of your Slashdot time reading the comments, take the time to read TFA.
    • by moco ( 222985 ) on Tuesday January 02, 2007 @03:42PM (#17435382)

Since I'm not a server admin, I've always wondered about the use and importance of this "virtualization" I've been hearing so much about.

For the home user, virtualization can be used as a separate PC to surf the net without fear of malware; when you are done surfing, just restore the VM to the "clean" state (think "your pr0n browsing PC"). You can also use it to test software before contaminating your host PC with stuff you decide not to keep. I visualize it as a sandbox to play in before messing with the "real" system.

Check out the VMware Player appliances [vmware.com]; there are lots of good ideas there. Many of them are for business use, but there are several that can be used at home.

For the developer/tester, virtualization provides a set of target operating systems to test and debug the software on without needing the actual physical hardware.

      Of course, in the data center it is the next big thing, too many advantages to list here.
      • Thanks, moco. I will check on the vmware player appliances. Since I've got a computer that has a fair amount of overhead, virtualization does make sense in several circumstances. "Virtualization" is a pretty broad name. When I work on my DAW, I use a lot of virtual instruments (Gigasampler, NI Battery, etc.), which give me the capability of making nearly (virtually) the same sounds in the same way as the instruments they represent. That's what virtualization has always meant to me.

        But I've only now sta
      • by bazorg ( 911295 )
        A home user willing to change his/her OS of choice may also be interested in VMware player or such:

        1) you can try out your new operating system from within your normal OS of choice, without breaking anything (including the bank)

        2) for a Windows 98/XP user, there may be benefits from keeping your old, familiar OS running in a virtual machine on top of Linux (or OS X).

        This is useful to transfer files from NTFS partitions, from Outlook PST files or simply to go back to some old application you've grown to lik

    • "Since I'm not a server admin, I've always wondered about the use and importance of this "virtualization" I've been hearing so much about."

To further Moco's post, take for example old software on much older hardware. It takes a lot of care, attention and spare parts to keep an old Compaq NT4 box we have running, not to mention power and air conditioning. Its only purpose is to keep an old FoxPro-based software archive running, for historical records purposes. No one wants to spend the money to get
  • by RAMMS+EIN ( 578166 ) on Tuesday January 02, 2007 @03:16PM (#17435094) Homepage Journal
    "Virtualization is the new big thing, if 'new' can include something over four decades old."

    You just wait. Next thing you know we'll be running Lisp machines under our virtualization software. And then there's going to be a new remake of ADVENT and the Great Worm. And a new AI summer.
  • by anomalous cohort ( 704239 ) on Tuesday January 02, 2007 @03:18PM (#17435122) Homepage Journal

I work at a small ISV which just bought SourceForge, Enterprise Edition [sourceforge.net], which is an Apache/JBoss/PostgreSQL/CentOS app for managing the SDLC. For a company of our size, they package this as a VMware image. Installation is incredibly easy. I can definitely see how free virtualization can be a big boon to companies selling and/or consuming web applications for small deployments.

    • > For a company of our size, they package this as a VMWare image.

FWIW, there are also GForge [gforge.org] VMware appliances out there [spisser.it]. I can see how it'd make a normal installation easier to troubleshoot, too, if you had a VMware installation for comparison.

      And of course, it's awesome that they both run on PostgreSQL, great stuff [blogs.com]!
  • by Anonymous Coward on Tuesday January 02, 2007 @03:19PM (#17435134)
The article seems a bit outdated. Reading it makes it sound like hardware virtualization (Intel VT-x / AMD's AMD-V, ex-Pacifica) isn't there yet. As a user who has been running both para-virtualized guests and hardware-virtualized guests under Xen for months, I disagree with how Xen is presented. Xen can also run unmodified guests (and is very good at it): I've tried several unmodified Linux versions (though para-virt is faster than hardware-virt, so I run para-virtualized Linux guests), and I'm currently running an unmodified Windows XP Pro 32-bit under Xen. I also installed W2K3 Server under Xen. Performance is incredible; there's no comparison, for example, with VMware running fully-virtualized Windows, which is way slower than hardware-virt (maybe VMware now allows using VT-x/AMD-V? I honestly don't know, I haven't checked). As a second note, para-virt under Xen is even faster than hardware-virt; it just feels native.

    The article seems a bit light on qemu too.

    ... # xm info | grep xen_caps
    xen_caps : xen-3.0-x86_32, hvm-3.0-x86_32
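    For reference, an HVM guest on that kind of setup boils down to a small config in the classic xm style. This is only a sketch assuming Xen 3.0-era syntax; the name, disk volume, bridge and paths below are made up:

        # /etc/xen/winxp -- illustrative only
        kernel       = "/usr/lib/xen/boot/hvmloader"   # HVM firmware loader
        builder      = "hvm"                           # use VT-x/AMD-V hardware virtualization
        device_model = "/usr/lib/xen/bin/qemu-dm"      # qemu provides the emulated devices
        name         = "winxp"
        memory       = 512
        disk         = [ 'phy:/dev/vg0/winxp,ioemu:hda,w' ]
        vif          = [ 'type=ioemu, bridge=xenbr0' ]
        boot         = "c"
        vnc          = 1

    Started with something like "xm create /etc/xen/winxp".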

    • Re: (Score:1, Informative)

      by Anonymous Coward
      Performance is incredible, there's no comparison, for example, with VMWare running fully-virtualized Windows, which is way slower than hardware-virt (nowadays maybe that VMWare allows to use VT-x/AMD-V?

VMware has supported VT/Pacifica from the beginning. The other statement, about hardware virtualization being fast now (without MMU support), is not true. That is the reason VMware recommends using software virtualization even on products that have full support for VT/Pacifica.

  • OSes Targeting VMs (Score:4, Interesting)

    by RAMMS+EIN ( 578166 ) on Tuesday January 02, 2007 @03:21PM (#17435152) Homepage Journal
An idea that I've been toying with lately: what if we had operating systems targeting virtual machines, especially ones that expose a simplified interface rather than trying to emulate a real machine? Instead of having to duplicate drivers for every piece of hardware in every OS, drivers would only need to be developed for the virtualization environment, and operating systems would only have to support the interface exposed by the VM.
    • by julesh ( 229690 )
      Well, that's the idea of paravirtualization: make an easier target to emulate and then port the OS to it. And yes, people are doing OS development with paravirtualized targets as their initial architecture. Xen is quite popular among the hobbyist-OS community.
    • So then the virtual machine needs to understand the underlying hardware, in order to provide useful virtualized devices. Thus, you have to write a virtual machine for every native platform, which is responsible for providing a unified interface to the underlying hardware.

Hey, maybe instead what you could do is create a machine that virtualized the devices and provided some sort of interface that programs could just access directly! Of course, you'd need some additional interfaces for process creation a
      • ``So then the virtual machine needs to understand the underlying hardware, in order to provide useful virtualized devices. Thus, you have to write a virtual machine for every native platform, which is responsible for providing a unified interface to the underlying hardware.''

        Yes, but that's a lot less work than writing a driver for every OS and device...at least as long as there are more OSes than virtual machines.

        ``Hey, maybe instead what you could do is create a machine that virtualized the devices and pr
And sold it on: http://www.vitanuova.com/inferno/ [vitanuova.com]

It's great, and you can learn Dennis Ritchie's favourite language: Limbo
    • by cyroth ( 103888 )
Great idea, but there is one small problem. The virtual server has to talk to the hardware directly; to do that you really need some form of OS to do that job for you, and it still needs to know how to interface with the hardware.
There is a good reason you can't run something like ESX Server on any old box.
Even with a simple interface you still need to be able to write to memory / disk, whatever.

      Something to think about before you throw too much money behind your idea.
  • Missing Mac On Linux (Score:4, Informative)

    by also-rr ( 980579 ) on Tuesday January 02, 2007 @03:25PM (#17435200) Homepage
MOL [maconlinux.org] is a true work of genius. Even on pretty old PPC hardware it functions with almost no slowdown (Linux host, OS X and Linux clients). Compared to contemporaries it had no equal - the current generation of products on x86 are just starting to catch up. I'm most impressed with the way my PowerBook can sleep (close the lid) under Linux and all of the hosted sessions quietly pause themselves with no problems. They even resume a network connection perfectly on waking up.

I'm glad to see similar things happening on x86, finally, as it's one of the things that really made PPC-based machines special. (There is some documentation for MOL and Kubuntu here [revis.co.uk].)
  • Is it theoretically possible to virtualize a few copies of the OS+BIOS+etc. for each program launched to further isolate one program from crashing/infecting others? Or maybe that'd be way too resource intensive?
    • ``Is it theoretically possible to virtualize a few copies of the OS+BIOS+etc. for each program launched to further isolate one program from crashing/infecting others?''

Sure, you could run every program in its own VM. But why would you want to? You say this further isolates them from one another, but I don't think that's really true. Off the top of my head, there are 3 ways programs can affect each other:

      1. Through shared memory. *nix systems isolate processes from one another with the aid of a memory manag
    • Is it theoretically possible to virtualize a few copies of the OS+BIOS+etc. for each program launched to further isolate one program from crashing/infecting others? Or maybe that'd be way too resource intensive?

It is theoretically possible, and it is resource intensive. A more workable solution is simply to build strict application-level restrictions into the kernel to prevent applications from interacting, and this seems to be the way modern OSes are heading. A VM is useful for running legacy application

  • Virtual appliances (Score:2, Insightful)

    by fatnicky ( 991652 )
    Virtual appliances will drastically change the way tech sales is handled in 2007. Instead of a sales rep promising their product can perform, they'll now be immediately asked to put their VM where their mouth is.

    I for one look forward to vendors coming in and pitching me their software. The ones that can instantly show me the product in a virtualized session running on their laptop will be the ones that we write the check out to.

    I for one look VERY MUCH forward to placing our systems on virtualized resour
  • *Another* Layer? (Score:4, Insightful)

    by timeOday ( 582209 ) on Tuesday January 02, 2007 @03:43PM (#17435388)
    In practice, I really like virtualization because it allows me to boot up Linux and run MS Exchange and Office, and most other (non 3d) Windows software using VMWare.

    But in theory, it bothers me. The basic idea (as I see it) is to provide an isolated environment for applications to run. But that's what the OS was/is supposed to do in the first place, and typesafe languages (like Java) also do much of the same thing once again! (E.g. I see no inherent reason for virtual to physical address translation when running Java applications). The biggest commercial application I see for virtualization is server consolidation. Why not just run all those server processes within the same OS? Yes there are good reasons, but is virtualization really the most efficient solution to those problems?

    Maybe virtualization is the best compromise given the legacy that computing currently has, but I wonder if some clever researchers have expressed a vision of how all the same ends could be accomplished much more simply and consistently. Or do all these layers upon layers of abstraction really provide necessary degrees of freedom?

    • Re: (Score:3, Insightful)

      by Cheeze ( 12756 )
      Virtualization is probably best used in test environments. Copy a normalized disk image and run from the copy to see how the software interacts with the hardware or operating system. When you're done testing, no need to mess with any hardware, just delete the disk image.

It should also be good for those environments that do not require much performance. If you are running a specialized java app and you consolidate 2 hardware solutions into a virtualized solution, you might need to have 2 different java compi
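      In QEMU terms that workflow is about two commands; a sketch with made-up image names (the -snapshot flag tells qemu to discard writes instead of touching the master image):

          # Keep a pristine "normalized" image and only ever boot copies of it
          cp base-image.qcow2 scratch.qcow2
          qemu -hda scratch.qcow2 -m 256        # test against the copy...
          rm scratch.qcow2                      # ...then just delete it

          # Or let qemu throw the changes away itself
          qemu -hda base-image.qcow2 -m 256 -snapshot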
    • Why not just run all those server processes within the same OS?

Because not all server processes were written for the same OS.

Granted, if the source code is available you might be able to recompile it, but if it is not then you don't have much of a choice but to use the OS it was written for.

You could take the one-box-per-OS approach, but hardware has become fast enough to make this a moot point, and you'll save money on the electric bill alone to make it worth your while.
    • by jimicus ( 737525 )
I like to think of it as a sticking-plaster to solve an age-old problem: software sucks.

      Sometimes two pieces of software don't play nicely together, sometimes they're not available for your OS of choice (or the OS of your choice no longer supports any hardware you can still purchase), sometimes it brings the whole system crashing to the ground and you'd like to limit the damage it can do to a single virtual machine running one process rather than a physical machine running many. Or maybe you don't expect ev
    • ``is virtualization really the most efficient solution to those problems? ''

      No.

      ``Maybe virtualization is the best compromise given the legacy that computing currently has, but I wonder if some clever researchers have expressed a vision of how all the same ends could be accomplished much more simply and consistently.''

      They have. Just to name one example, one of the ideas for TUNES was to have a kernel-less design: just communicating processes. I am told that crossing the kernel-user space barrier is quite co
    • Re: (Score:3, Informative)

      by Abcd1234 ( 188840 )
      Well, in the server space, virtualization gives you immense flexibility, thanks to simplified backup and restore, migration, failover, and so forth, not to mention enhanced security and auditability. Yes, you could implement those features for every service you run, but why go through that effort when it's more easily done with a virtualized environment?

      For developers, VMs are fantastic. Not only does it let you target multiple architectures easily, it also makes it possible to create, backup, restore, an
    • by moco ( 222985 )
I get your point, and you are right: virtualization is not the optimal solution, but it is the best we have. It would require a tremendous amount of cooperation between too many entities that have different and conflicting goals to come up with a standard interface so any service can run on any server that runs any operating system.

      So for now you are left with autonomous "units of processing" defined as an Operating System with its applications and you can't go any deeper without lots of effort (and cost).
    • Re:*Another* Layer? (Score:5, Interesting)

      by QuantumRiff ( 120817 ) on Tuesday January 02, 2007 @06:31PM (#17437220)
For us, the nicest thing about virtualization is the disaster recovery. If our building burns down, we can quite literally get any PC we can find with a ton of RAM, load Virtual Server, and load the hosts right back up. Much, much faster than going and configuring all the weird drivers and RAID cards, partitions, etc. on a normal non-virtualized system. On the same note, if one of my servers goes down, I can quickly load up the VM on another box, which means I can take all the time in the world to get the original server back up, so I don't have to worry about the really expensive "4-hour" support plans, but can go with the much cheaper "next-day" support plans. I also keep a VM copy of our web server handy (the web server isn't on a VM yet, because of the speed issues), so that when I need to take down the real, faster web server, I change one DNS setting, and all my users notice is that the web is running a little slower...
and had serious issues with stability and networking. I detailed my experience at my blog [blogspot.com] if anyone is interested. I also tried using the Linux version of Parallels and had similar issues. If anyone has gotten Vista to run under Xen, I would be very interested in your feedback.
My experience (limited though it is) with virtualization makes me doubt the objectivity of TFA. Specifically, the comments regarding VMware appear to be pretty far off, given my experience that MS Windows (XP at least) runs faster in a virtual environment than it does natively, which is the opposite of what the article claims.
  • This may be a little off-topic, but I noticed that the article claims that Xen runs on FreeBSD. I was under the impression that Xen support on FreeBSD was still a work in progress, which the Wikipedia article [wikipedia.org] seems to confirm. Can anybody comment on this?
  • I have a hard time coming up with cases where virtualization is that useful. If you run an ISP and want to give root-level access to your hosted accounts, sure. If you want to run a few different OSs on your desktop, sure.

    But, virtualization is so often touted as a way to consolidate servers. I keep asking myself "Who are these people that have that many servers with so little load, that many servers that they could consolidate (making a single COMPLETE point of failure), and have
    • Re: (Score:2, Informative)

      by Anonymous Coward
      You'll probably never see this comment, but so be it...

      Where we've found it incredibly useful is in 3 cases:

1) Any server where end users may cause damage. In our case, we have a remote desktop available to the end users via Citrix. Yes, it's secured -- but there's always SOME chance that an end user will mess it up somehow. By running the Citrix server in a virtualized non-persistent environment, any damage can be corrected by simply rebooting the virtual Citrix box.

      2) Very low utilization applications
AC is very right. Except perhaps that 1) should be 1) Fault tolerance. Same thing as what AC said, plus: if you have some service set up and the server croaks, it's a breeze to get that service up and running again in a heartbeat. A case where I wished I had dealt with a virtualized install: the print server dies after a blackout, and of course this is the same day that they HAVE to deliver a bid/proposal (small consulting company). Instead of hunting down all install options and docs for the network printer/copier took
        • So you are imaging VMWare, instead of imaging disk drives - same thing as far as I can figure.
          • So you are imaging VMWare, instead of imaging disk drives - same thing as far as I can figure.

            With less complexity. You can move it to a shared server for the duration when you're fixing the old server/getting a new one. Or if/when you move it to a virgin server there is less need to match the replacement server in absolute physical terms... the abstraction of using virtualization makes things easy. Personally I wouldn't feel that comfortable with imaging a disk drive from one server with hardware configur

    • But, virtualization is so often touted as a way to consolidate servers. I keep asking myself "Who are these people that have that many servers with so little load, that many servers that they could consolidate (making a single COMPLETE point of failure), and haven't already done so?"

      While I know this isn't what you're talking about; I'm currently using VMWare to host multiple servers for educational reasons. I simply do not have enough spare systems to mimic a few servers doing their normal tasks. VMWare
    • Re: (Score:1, Informative)

      by Anonymous Coward
      Sure Steve, Virtualization isn't a panacea, and you've cited some of the places where it doesn't fit.

      However the more common example by far in the enterprise is the deployment of a fairly small app that consumes a fraction (say ~10 pct) of a server's worth of CPU capacity. For all sorts of reasons, including ownership and independence, these tend to run on their own dedicated server.

      Real cool example? Let's start with your scenario, a small ISP with a web server, mail server, and DNS server. To avoid the
Ummm. It is useful because REAL hardware costs lots of money to purchase, run, and maintain. Think HVAC costs for starters. Also, there is no reason you can't have HA using VMs. IIRC VMware has a failover feature that fails over to a VM running on identical hardware in another location.
Let's see: I am at a non-profit, and I have two old HP 700MHz dual-Xeon servers. These servers are mostly used for file and print, but there are some funky programs on them. I can migrate them to one new server (use MS FSMT, then fiddle with funky settings for a bunch of programs), or I can get two new servers and use VMware's Physical2Virtual tool to make the old machines virtual. If I do the virtual thing, I can set one of my new servers up at the main site, and one at the remote site across the Gig wirel
I know this is probably never going to be seen, but you've obviously never worked in a large company that runs their apps on Windows servers. There the norm is to run one app per box, which has led to a massive proliferation of physical servers across the average datacenter. I know, I work in one such environment and have worked in several others over the years.

      Even in UNIX this makes some sense; I run three UNIX boxes at home. One of them is a physical box that acts as my primary file / print / SQL server whi
  • Big New Thing??? (Score:3, Interesting)

    by eno2001 ( 527078 ) on Tuesday January 02, 2007 @04:32PM (#17435904) Homepage Journal
Uhm... I started using it on the PC platform in 1998/99 with VMware on Red Hat 7. I was amazed when I saw I could boot a Windows 98 system simultaneously with my already running Linux system on a lowly Pentium MMX 233 with 32 megs of RAM. Then I found out that what I thought was new back then was something the big iron world had enjoyed for decades, having originated in the 60s. It was just new to x86, is all, come 1998/99. Since then, I've moved on to Xen for Linux, which is rather amazing in terms of performance and flexibility if you paravirtualize the system. I've got three VMs running on an old Pentium II-era Celeron at 400 MHz with 384 megs of RAM. That system has enough horsepower to do the following for my network:

    Internal: DHCP, DNS, postfix SMTP server for internal clients, Squid proxy, OpenVPN MySQL DB, DBMail IMAP services that use MySQL as the backend. All in 128 megs of RAM. And they all perform smoothly and quickly.

    External: DNS, postfix SMTP server for spam filtering and relaying to the virtual internal SMTP server, OpenVPN server. All in 64 megs of RAM.
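    For anyone who wants a concrete picture, a paravirtualized domU like the internal one above comes down to a small xm config. This is only a sketch, assuming the classic Xen 3.x format; the kernel path, volume and bridge names are made up:

        # illustrative domU config for the "internal" services VM
        kernel  = "/boot/vmlinuz-2.6-xen"       # Xen-aware (paravirtualized) guest kernel
        ramdisk = "/boot/initrd-2.6-xen.img"
        name    = "internal"
        memory  = 128                           # the 128 megs mentioned above
        disk    = [ 'phy:/dev/vg0/internal,xvda,w' ]
        vif     = [ 'bridge=xenbr0' ]
        root    = "/dev/xvda ro"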

    I plan to add an Asterisk PBX to that same box for a third VPN so I can have private VoIP with my OpenVPN users (all friends and family as I'm talking about a system at home, not at work).

I've, of course, also played with Virtual PC, Virtual Server, and QEMU, and poked at OpenVZ. For me, a decent virtualization solution has to be able to run other OSes to count as good, which is why certain virtualization solutions don't do much for me. If I need access to Windows, I want to be able to do it without wasting good hardware on it. That's why User-Mode Linux and Linux-VServer (more akin to chroot jails) do absolutely nothing for me, other than when I'm building a Gentoo box. But this is not the big new thing. It's only that MS is making waves with it now... typical.
  • I, for one, welcome our virtualized overlords. And their virtualized management systems.
  • by LordMyren ( 15499 ) on Tuesday January 02, 2007 @05:19PM (#17436420) Homepage
The original plan for microkernels was to create more componentized runtime environments, so you could dynamically create virtualized OSes as a collection of the active components you needed. Very much like chroot, but extending way beyond file systems and into running libraries, kernel modules, and devices. The tooling was never there, but many people had starry eyes for essentially a mix-and-match environment that would let you configure and cobble together operating environments at will, and maintain strong privilege separation.

It really is a pity we gave the whole project up and decided to just implement YET ANOTHER page table in hardware, rather than try to solve the PIC code layout and IPC performance issues and wrestle with building a new dynamic, component-based environment. I think we'd see virtualization on a much more pervasive level and a much stronger conception of mobile code, stretching all the way to embedded devices. As it is, the hardware-virtualized environments are so isolated from each other that a) there is no reason to run it on embedded systems (since integration is all application level, tracing through pretty meaty stacks) (watchdog systems aside), and b) it would impose colossal power consumption needs for mobile devices, since it has to run each OS separately.

Virtualization as we know it is a terrible, terrible excuse for Unix never having built itself a sufficiently dynamic and configurable environment. Two thumbs down. As cool as running multiple OSes is, it should not have been necessary in the first place.

    LordMyren
OK, I thought CTSS was a task-switching layer on top of the basic OS, FMS. The article goes on to talk about OS-level virtualization and yet doesn't mention TopView or DESQview?

Hands up, who doesn't remember running up QEMM and DESQview to run their BBSes back in the '80s?

  • Article missing MDF (Score:3, Interesting)

    by Anthony ( 4077 ) * on Tuesday January 02, 2007 @05:35PM (#17436586) Homepage Journal
Amdahl was the first to offer physical machine partitioning, in the mid-to-late eighties. IBM finally came out with PR/SM (Processor Resource/Systems Manager) some years later. MDF provided complete isolation of resources between two or more partitions. There were no shared channels; intercommunication was done with a Channel-To-Channel connector. This provided secure, isolated development/test/QA systems at a reasonable price. It was all managed at the macrocode level, which had a Unix-like shell.
The article describes VMware as a full virtualization solution: "A hypervisor sits between the guest operating systems and the bare hardware as an abstraction layer." Is this really how it works? The hypervisor runs on the bare hardware? I thought VMware was launched as an application under the hosting OS, which is then able to load guest OSes. So it does not sit between the bare hardware and the guest OS, but rather between the host OS and the guest OS. See the PDF datasheet for VMware Server [vmware.com] which shows this
    • VMware ESX runs on "bare metal" - see this page: http://www.vmware.com/products/vi/esx/ [vmware.com]
      • by larstr ( 695179 )
ESX can't be installed in BEA's Bare Metal (http://www.google.com/search?hl=en&q=bare+metal+java) operating system or any other OS ;) ESX does however provide a kernel that is written from scratch to support virtualization. This removes much of the overhead you see with "normal" operating systems, but it also limits the amount of drivers available. As this kernel does not have a user interface, ESX also ships with a virtual machine based on Red Hat that has extended access to administer the vmk
      • by jaseuk ( 217780 )
        "bare metal" meaning a kernel module in a relatively standard redhat linux distribution.

        Jason.
Depends who you ask. VMware themselves swear blind that VMware itself runs in its own memory space, using Linux only as a bootstrap in much the same way as NetWare used DOS as a bootstrap. Whether or not you subscribe to that opinion... well, that's another matter :)
The new enterprise VMware server runs on hardware.

It's great for deployment servers.

Top notch.
    • In addition to what these guys are saying, you're probably thinking of VMWare Workstation [vmware.com].
I have been toying with this idea for a long time. I would love to see a service where I can upload a VMware image and have them host it for me. It's my black box running whatever I want it to run. They wouldn't need to know, or care, what's inside. I would love to see such a service pop up.
It could be like conventional hosting as far as billing goes (charge for disk space/bandwidth), but I would like to have complete control of what's inside the black box.
I find many uses for it, like moving my home Asterisk server
The next step for virtualization will be processors that make the instruction set virtual, and most importantly, specific hardware acceleration that makes this practical. The processors will obviously be designed for an x86 default instruction set, but it will be possible to execute foreign instruction sets like PowerPC or 68000. What can't be handled by the processor will be handled by a hypervisor process, though much more slowly, obviously. The big impact of this is that VM byte code can be run dir
  • ... The same concept was used in the 1960s for Basic Combined Programming Language (BCPL), an ancestor of the C language.

    And here I thought BCPL stood for Bitchin' Camaro Propulsion Language...
