Linux Gains Two New Virtualization Solutions

An anonymous reader writes "The upcoming 2.6.23 kernel has gained two new virtualization solutions. According to KernelTrap, both Xen and lguest have been merged into the mainline kernel. These two virtualization solutions join the already merged KVM, offering Linux multiple ways to run multiple virtual machines each running their own OS."
  • just asking...
    • Re: (Score:2, Insightful)

      KVM (which has been in the kernel since 2.6.20) already runs Windows.
      • Re: (Score:3, Interesting)

        by zlatko ( 222385 )
        Absolutely, running Windows XP on Linux [linuxinsight.com] is both easy to set up and performs quite well. I'm quite amazed by KVM technology for both reasons. This is not to say that Xen is bad, but it seems so much harder to set up that I haven't even tried. KVM is dead simple.
    • by init100 ( 915886 ) on Saturday July 21, 2007 @02:22PM (#19939501)

      You mean Lguest? FTA:

      Lguest doesn't do full virtualization: it only runs a Linux kernel with lguest support.

      So the answer is no, Lguest does not run Windows. Xen runs Windows, but only if you have a VT-capable processor. Like Lguest, Xen can run Linux without a VT-capable processor.
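
      For anyone unsure whether their box qualifies: the CPU advertises these extensions through CPUID, and the same flags show up as "vmx"/"svm" in /proc/cpuinfo. A rough sketch in C (my own illustration, not from the article), using GCC's cpuid.h -- note that the flag being present doesn't guarantee the BIOS hasn't disabled it:

          /* Check for Intel VT (vmx) or AMD-V (svm) support. */
          #include <stdio.h>
          #include <cpuid.h>

          int main(void)
          {
              unsigned int eax, ebx, ecx, edx;

              /* CPUID leaf 1, ECX bit 5: Intel VMX */
              if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
                  printf("Intel VT (vmx) present\n");

              /* CPUID extended leaf 0x80000001, ECX bit 2: AMD SVM */
              if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
                  printf("AMD-V (svm) present\n");

              return 0;
          }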

      • by baadger ( 764884 )
        For those wondering, the best (IMHO) FOSS solution for running Windows on Linux without a VT-capable processor is VirtualBox [virtualbox.org]
    • That was a perfectly legitimate question (and one that I'd have asked, too). Right now, most people install VMWare to run Windows on their Linux hosts. I'd be quite pleased to be able to run it using standard, Free, built-in functionality.

  • Why? (Score:4, Interesting)

    by realdodgeman ( 1113225 ) on Saturday July 21, 2007 @09:57AM (#19937657) Homepage
    Wouldn't one be enough? Or maybe they could have merged all the features into one VM.

    I think this will confuse users. Choice is good, yes, but 3 VMs in the kernel? Sounds like overkill.
    • Re:Why? (Score:5, Insightful)

      by QuantumG ( 50515 ) <qg@biodome.org> on Saturday July 21, 2007 @10:13AM (#19937761) Homepage Journal
      Yeah, like all those file systems the kernel supports. What's with that? You only need one. Man. Choice is good and all, but it sounds like overkill.

      Don't get me started on buses.. PCI, USB, SCSI, IDE, how many do you need?!

      • Re: (Score:2, Informative)

        by evilbessie ( 873633 )
        IDE is not a bus; don't confuse it with ATA (more recently SATA and PATA). IDE == Integrated Drive Electronics.
        • by init100 ( 915886 )

          The electrical interface of IDE is certainly a bus, since it connects more than one device to each channel. On the other hand, SATA is not a bus, it is a point-to-point link, which connects exactly one device to each channel.

    • Xen has all the features that KVM and lguest have. That's the problem: Xen is extremely complex, and the patches to support it are very invasive. This is why KVM beat it to being merged. LWN infamously predicted Xen could get merged as early as 2.6.10; lguest, by contrast, was only created a few months ago and weighs in at a mere 5000 lines of code.

      Xen does some really cool things, but it has a lot of human overhead in terms of management and maintenance that the other two don't have. Now you get to pick the r
  • by Tribbin ( 565963 ) on Saturday July 21, 2007 @10:00AM (#19937673) Homepage
    What are the pros of having two implementations of, seemingly, the same solution?
    • The more people who use both solutions, the quicker the kernel team can figure out which one works better, and go with that.
      • by QuantumG ( 50515 ) <qg@biodome.org> on Saturday July 21, 2007 @10:45AM (#19937945) Homepage Journal
        Actually, it doesn't work like that. What actually happens is that the code which is maintained poorly gets dropped. So if there are dedicated people working on KVM but no-one actually working on lguest, eventually something will change that results in lguest not working anymore. Eventually people will drop the broken code from their tree until someone fixes it. If no-one fixes it, then it'll never be picked up again. There's no "oh, lguest is actually faster than KVM, we should all work on that".. it's individuals making their own decisions on what to work on (be it that they find it interesting, or they find that bit of code more pretty, or they are paid by someone to work on it) and those individuals are responsible for what happens to that code.

        As long as N solutions are maintained there will be N solutions in the kernel. A solution won't be dropped because it performs worse.. or any other "technical" reason.

        • Good point...but I believe that, over time, the one that most users choose will end up being the most actively maintained.
        • by bfields ( 66644 )

          Actually, it doesn't work like that. What actually happens is that the code which is maintained poorly gets dropped.

          That's a pretty unfortunate situation if the unmaintained code is still actually used by someone. Even if another alternative has come along with a superset of the given features, if they provide different system interfaces--so if it would mean rewriting scripts or applications or retraining users--then the migration can be a pain. And you want people to be able to drop a new kernel into a

          • Actually, it doesn't work like that. What actually happens is that the code which is maintained poorly gets dropped.

            That's a pretty unfortunate situation if the unmaintained code is still actually used by someone.

            (...)

            That said, yeah, if someone notices that filesystem FooFS has been completely broken for ages and nobody has even noticed, then that's a pretty good argument for dropping it. But even then it's not just because it's unmaintained, it's because at that point you're pretty sure nobody really gives a crap about it.

            The Linux kernel *almost never* drops support for any devices/filesystems unless (a) it's INCREDIBLY obsolete and NO ONE is using it, or (b) it's been superseded by something clearly better and there's a straightforward upgrade path.

            For example, if you read the kernel changelog summaries on LWN.net, you'll see that support for IBM PC/XT hard disks was only dropped in the last couple years... although they have been obsolete since the late 80s and perhaps literally no one has used them for 5-10 years. And

    • Re: (Score:2, Informative)

      by sekra ( 516756 )
      It's not the same solution, because lguest and KVM have different goals. While KVM tries to use as much hardware virtualization support as possible to gain full speed, lguest avoids those functions so it can run on more hardware. Xen tries to do everything and is thus a bit more bloated, but also has more functionality. Choice is good; just take the solution which fits your requirements best.
    • "The happy theme of today's kvm is the significant performance improvements, brought to you by a growing team of developers. I've clocked kbuild at within 25% of native. This release also introduces support for 32-bit Windows Vista. "

      I can't understand why the Linux kernel development team had 'Windows Vista support' as one of the items on their agenda at all. Virtualisation, as I understand it, is basically an abstraction of the hardware that is performed in software. Should not all operating systems be designed to work with standard instruction sets, interrupts, registers and memory?

      Why should it be the job of a particular kernel or its VM component to satisfy specific requirements of a specific version of another kernel (the Vista k

      • Re: (Score:3, Informative)

        by QuantumG ( 50515 )
        The people who work on this stuff really wouldn't call themselves kernel developers, but ok, whatever. Associating any of the VM stuff with Linus is even more retarded.. what they do in their own modules is none of his fault or concern. Anyway, some people want to run Vista in a VM on Linux. These VM solutions don't try to virtualize every nook and cranny of the x86 hardware. Vista uses the system level x86 hardware in a slightly different way to XP. As such, it takes some changes to make Vista work.

        Should it not be the other way round - i.e. for closed-source Vista to be compatible and optimised for the open-source Linux kernel?

        Y

        • by jkrise ( 535370 )

          The people who work on this stuff really wouldn't call themselves kernel developers, but ok, whatever. Associating any of the VM stuff with Linus is even more retarded.. what they do in their own modules is none of his fault or concern.

          The announcement about these VMs came from Linus himself. Besides, it is Linus who decides which components get into the main kernel tree, so he is answerable for any decisions made.

          Anyway, some people want to run Vista in a VM on Linux. These VM solutions don't try to virtualize every nook and cranny of the x86 hardware. Vista uses the system level x86 hardware in a slightly different way to XP. As such, it takes some changes to make Vista work.

          If Vista has any idiosyncrasies, it should be the job of the overpaid, bloated development team in Redmond to iron out the kinks and make it standards-compliant. Why should it be a concern of the Linux kernel development team? Besides, how did these developers gain access to the quirky behaviour of Vista?

          • by QuantumG ( 50515 )

            The announcement about these VMs came from Linus himself. Besides, it is Linus who decides which components get into the main kernel tree, so he is answerable for any decisions made.

            Linus puts whatever he wants into his tree, yes. His tree is the de facto "main" kernel tree, yes.

            If Vista has any idiosyncrasies, it should be the job of the overpaid, bloated development team in Redmond to iron out the kinks and make it standards-compliant. Why should it be a concern of the Linux kernel development team? Besides, how did these developers gain access to the quirky behaviour of Vista?

            What standards are you talking about exactly? The Intel x86 hardware documentation? I can assure you they are writing their code to those "standards", otherwise their code wouldn't work.

            If anything the virtualization guys are the ones who are not implementing the "standards".. as not everything that will run on an x86 processor will run the same way under virtualization. That's simply because it's a lot of

      • Virtualisation, as I understand it, is basically an abstraction of the hardware that is performed in software. Should not all operating systems be designed to work with standard instruction sets, interrupts, registers and memory?

        Ideally. My impression is it doesn't actually work very well without some "cheating" (optimization). For instance, VMWare works a lot better if you use the special VMWare video driver on the guest instead of sticking with generic VESA or whatever. Also some timing related issues,

    • by Chris Snook ( 872473 ) on Saturday July 21, 2007 @01:09PM (#19938927)
      These aren't even close to the same solution. KVM provides hardware-assisted virtualization, with Linux as the hypervisor. Lguest provides Linux-in-Linux paravirtualization (no hardware support), and is extremely lightweight (5000 lines of code, total), but lacks many advanced features. Xen provides both paravirtualization and full virtualization, runs under a custom hypervisor intended to run multiple different OSes (Linux, Solaris, Windows, etc.) simultaneously, and has a plethora of sophisticated features, such as live migration (and all the maintenance headache of the correspondingly huge codebase).

      They each fill very different niches, so there are very good reasons for having all 3 in the kernel.
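
      To make the "Linux as the hypervisor" point concrete: KVM shows up as a character device, /dev/kvm, and a userspace VMM (qemu-kvm in practice) drives it entirely through ioctl()s. A hedged sketch of the first steps (my own illustration, not the article's; real VMMs go on to add memory and vCPUs with further ioctl()s against the returned fd):

          /* Open KVM and create an empty VM; constants from <linux/kvm.h>. */
          #include <stdio.h>
          #include <fcntl.h>
          #include <unistd.h>
          #include <sys/ioctl.h>
          #include <linux/kvm.h>

          int main(void)
          {
              int kvm = open("/dev/kvm", O_RDWR);
              if (kvm < 0) {
                  perror("open /dev/kvm");  /* no VT/AMD-V, or module not loaded */
                  return 1;
              }

              printf("KVM API version: %d\n",
                     (int)ioctl(kvm, KVM_GET_API_VERSION, 0));

              int vmfd = ioctl(kvm, KVM_CREATE_VM, 0);
              if (vmfd >= 0)
                  printf("created an empty VM, fd=%d\n", vmfd);

              close(kvm);
              return 0;
          }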
  • ...why should virtualization technology be incorporated into the kernel, and not kept outside, as a third-party app? Shouldn't the kernel be essentially a library and some low-level support (multi-tasking, handling certain interrupts, that sort of stuff)? I've never really even considered bash, or even ls, as part of the kernel. Am I just really mistaken, or is the word kernel used more broadly than that?
    • Re: (Score:3, Insightful)

      by QuantumG ( 50515 )
      The hardware support for virtualization is in the kernel.

      Just like the hardware support for webcams is in the kernel.

      • See, now, that would make sense. So it's not the entire virtualization programs, just hardware hooks and drivers, basically? Meaning that there still needs to be a separate program to take care of actually running things and whatnot?
        • Re: (Score:3, Informative)

          by QuantumG ( 50515 )
          Yes. Thing is, bare x86 metal can do virtualization.. you just gotta be creative. There's a lot of ways to do it, utilizing different parts of the hardware. So there's some solutions that work great for some things and some solutions that work great for others. It's like having two drivers for the same bit of hardware and choosing which one to use based on how you're using the device.

          Then there's para-virtualization.. modifying the kernel of the guest OS so you don't even need anything in the kernel. W
    • You're thinking of a microkernel. Most modern operating systems use a monolithic model, in which a large number of system services run in the kernel's memory space, but they use dynamic linking to achieve a degree of modularity. That said, the Linux kernel's internal API is fairly fluid, so any code that runs in kernelspace has to be maintained quite regularly to keep up with the changes. Merging your code into the main tree makes this much easier.

      Bash and ls are still userspace. All of these
    • Anything that uses VT or AMD-V needs to run in ring-0 (when not in VMX mode, at kernel boot, and then in a special VMX mode that can be entered from there), and so needs to be in the kernel. Adding virtualisation support is basically (warning: oversimplification) adding support for a new type of process. These have to be able to do things like respond to interrupts raised in the process (for example, to handle system calls). Solutions like VMWare get around this by re-writing interrupt instructions to ju
  • by JustNiz ( 692889 ) on Saturday July 21, 2007 @11:08AM (#19938123)
    So do any of these solutions support 3D graphics (nvidia) hardware?
    The only reason I currently have a windows partition at all is for gaming.

    Being able to run Windows 3D games in a VM would allow me to move to a Linux-only box and also give me a nice way of:
    * managing the way Windows keeps grabbing disk space
    * removing the need to reinstall/reactivate Windows every 6 months or so
    * limiting the damage Windows viruses can do
    * limiting all the phone-home comms with Microsoft that Windows keeps doing
    • Re: (Score:3, Informative)

      by QuantumG ( 50515 )
      No. But if/when there is ever an open source nvidia kernel driver with 3d support that isn't completely broken and is integrated into the kernel, you might see some people take an interest in virtualizing it.

      Probably the first thing they'll do is make it so X running in a virtual machine can share the same DRM (Direct Rendering Manager) as X running on the host. Of course, that's not much good to a Windows guest.

    • Re: (Score:3, Interesting)

      by EvilRyry ( 1025309 )

      So do any of these solutions support 3D graphics (nvidia) hardware?
      The only reason I currently have a windows partition at all is for gaming.

      I recently read an article on the progress of just this. It sounds pretty cool and the initial results are impressive. This, combined with the DX->OpenGL Wine code that I'm sure will be open-sourced by the makers of Parallels (there was just a Slashdot story on this), makes for an exciting future for providing hardware acceleration to guest applications.

      More information: http://www.cs.toronto.edu/~andreslc/vmgl/ [toronto.edu]

      • by baadger ( 764884 )
        Parallels' patches to Wine have already been released; apparently they weren't very exciting...
    • Not very well. Xen with PCI pass-through might work here, but that requires having a dedicated graphics card for each OS. 3D video generally involves some amount of writing directly from userspace to hardware, without any kernel interaction after initial setup. This is difficult to do right in all cases with virtualization, but they are working on it.

      Just buy Cedega and be done with it.
  • by GiMP ( 10923 ) on Saturday July 21, 2007 @12:02PM (#19938475)
    Each of Xen, KVM, lguest, and UML can be considered virtualization products but they are all vastly different. Below I describe each of these products in relation to their inclusion to the Linux kernel.

    Xen - the Linux kernel gains code allowing it to run as a guest underneath the Xen kernel, all through software. Linux's support for Xen does not make Linux a virtualization platform, only a GUEST of the Xen kernel, which sits at Ring-0 (though a "dom0" Linux system can interact intimately with the Xen kernel, it actually sits at Ring-1). I should note that the Xen kernel also supports hardware-virtualized domains, though this is unrelated to the patches to Linux.

    KVM - the Linux kernel supports virtualization of guests through hardware extensions; this requires supported hardware. Linux becomes the Ring-0 kernel.

    lguest - (my understanding is) a Linux kernel with the lguest module loaded can act as a hypervisor, booting further Linux kernels as guests. Linux sits at both Ring-0 (supervisor) and Ring-1 (guests). This is experimental, with limited features, and only supports Linux guests.

    UML - the Linux kernel becomes a userspace program. This allows Linux to run as an ordinary executable application. With UML, Linux can be compiled for a Linux or Microsoft Windows target. The executing OS sits at Ring-0 and the UML program sits at Ring-1. This has the advantage of requiring no modifications to the host OS and is very portable (you could email an entire Linux system to a friend without requiring anything installed on their system), but the disadvantage of poor performance.

    From a high level, the products UML, Xen, and lguest are actually very similar in function. They act as architectures to which Linux can be compiled in order to make it a guest OS of another Ring-0 kernel. These architectures provide the targets of a kernel module (lguest), a userspace program (UML), or a xen-domU guest (Xen). On the other hand, KVM is the only patch that is intended to add support to Linux to act as a Ring-0 kernel on behalf of guest systems -- and even then, KVM can be viewed more as a hardware driver for the processor extensions.
    • by _Knots ( 165356 ) on Saturday July 21, 2007 @01:42PM (#19939209)
      Slight corrections:

      The UML program sits at ring-3 on x86 machines: it's just a normal user program using the ptrace() mechanism and extensions [except when the host has been patched with SKAS, but even then it's just a "normal user program". Rumor has it that SKAS might eventually make it into mainline, but its time in 'real soon now' is starting to rival Duke Nukem Forever's.] Rings 1 and 2 are odd, rarely used features of the x86 (IIRC the current virtualization craze and OS/2 are the notable consumers), derived from MULTICS. For processors with only two (user & supervisor) modes, identify ring 0 with supervisor mode and the other rings with user mode.
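
      For anyone curious what "just a normal user program using the ptrace() mechanism" means in practice, here is a minimal sketch (mine, not UML's code) of the underlying trick: a plain ring-3 tracer stopping its child at every system call, which is the hook UML builds on to redirect guest syscalls into its own kernel.

          /* Trace every syscall of a child process via ptrace().
           * Illustrative only; UML layers a whole kernel on top of this. */
          #include <stdio.h>
          #include <unistd.h>
          #include <sys/ptrace.h>
          #include <sys/types.h>
          #include <sys/wait.h>

          int main(void)
          {
              pid_t child = fork();
              if (child == 0) {
                  ptrace(PTRACE_TRACEME, 0, NULL, NULL);
                  execl("/bin/true", "true", (char *)NULL);
                  _exit(1);
              }

              int status, stops = 0;
              waitpid(child, &status, 0);       /* child stops at execve() */
              while (WIFSTOPPED(status)) {
                  stops++;                      /* entry and exit both stop */
                  ptrace(PTRACE_SYSCALL, child, NULL, NULL);
                  waitpid(child, &status, 0);
              }
              printf("~%d syscalls intercepted from /bin/true\n", stops / 2);
              return 0;
          }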

      It is a little odd to say that Linux "becomes" the Ring-0 kernel under KVM. It was already running in ring 0.
    • by Per Wigren ( 5315 ) on Saturday July 21, 2007 @01:44PM (#19939239) Homepage
      Yes, they are all very different but at the same time quite similar from a user's perspective. All of them (unless I've missed something) more or less emulate a whole machine. This means you have to mess with disk images or dedicated drives/partitions/LVs, allocate a fixed amount of RAM to the guest, among other things.

      Personally I like the approach of OpenVZ [openvz.org] and VServer [linux-vserver.org] better. The main OS and the guests all share the same kernel, share the RAM, and their root filesystems can be just subdirectories of the host's filesystem. From inside the virtual server you don't notice that, though. You only see your own processes and everything works as if it were a dedicated server. You can run iptables, reboot, and do just about everything you could normally do in Xen/KVM/VMWare, including live migration of virtual servers to other physical hosts. chroot on steroids.

      I really hope OpenVZ and/or VServer will be merged at some point. VServer seems to keep up with current kernel releases, so that wouldn't be too hard to merge, I guess. OpenVZ usually has a lag of something like half a year.
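
      Since "chroot on steroids" sums it up nicely, here is the plain-chroot baseline for comparison -- a hedged sketch (the guest path is hypothetical; it needs root and a populated root filesystem there) of the filesystem isolation that OpenVZ/VServer then extend with process, user, network and resource isolation:

          /* Minimal chroot(2) "container": filesystem isolation only.
           * Unlike OpenVZ/VServer, processes, users and the network
           * remain fully shared with the host. */
          #include <stdio.h>
          #include <unistd.h>

          int main(void)
          {
              const char *jail = "/var/guests/guest1";   /* hypothetical path */

              if (chroot(jail) != 0 || chdir("/") != 0) {
                  perror("chroot");
                  return 1;
              }

              execl("/bin/sh", "sh", (char *)NULL);      /* the guest's /bin/sh */
              perror("execl");
              return 1;
          }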
  • I think all the Xen users out there will agree with me when I say "yes!!!!!!!!!!!". I'm actually quite impressed, given what is involved in maintaining Xen in the kernel, that this happened as soon as it did.
  • I keep an old PII clunker kicking around to run Galactic Civilizations V2.5, an OS/2-only game. I'd really like to get rid of it, but keep OS/2 for the game. With QEMU and Virtualbox, I've occasionally managed to "install OS/2" but the VM crashes when trying to do much more than merely bring up the OS/2 desktop. I'd be interested in any working solutions. TIA.
  • Anyone attempting to compile a full Linux kernel with every conceivable feature that doesn't clash with another turned on, non-modular, will be able to measure the build time in months...

    Unless they're running a virtualized cluster of machines! :-)
