Linux 3.0 Will Have Full Xen Support 171

Posted by timothy
from the complex-mellow-calm dept.
GPLHost-Thomas writes "The very last components that were needed to run Xen as a dom0 have finally reached kernel.org. The Xen block backend was one major feature missing from 2.6.39 dom0 support, and it's now included. Posts on the Xen blog, at Oracle and at Citrix celebrate this achievement."

  • by GauteL (29207) on Friday June 03, 2011 @03:28AM (#36328826)

    ... what??

  • by pasikarkkainen (925729) on Friday June 03, 2011 @03:53AM (#36328916)
    Actually you have been able to run a newer kernel on EC2 for a long time! Xen domU (guest VM) support has been in the upstream Linux kernel since version 2.6.24. The upcoming Linux kernel 3.0 adds Xen dom0 support, which is the *host* support, i.e. Linux kernel 3.0 can run on the Xen hypervisor (xen.gz) as the "management console", providing various backends (virtual networks, virtual disks) that allow you to launch Xen VMs.
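    A hedged sketch of how you can tell which of those roles a given kernel is playing: `/sys/hypervisor/type` and `/proc/xen/capabilities` are the standard interfaces a Xen-enabled kernel exposes; the `SYSFS` override is not part of any Xen convention, it's only there so the logic can be exercised on a non-Xen box.

```shell
#!/bin/sh
# Report whether this kernel is running as Xen dom0, a Xen domU, or on
# bare metal, based on what a Xen-enabled kernel exposes to userspace.
SYSFS="${SYSFS:-/sys}"

xen_role() {
    if [ ! -r "$SYSFS/hypervisor/type" ] ||
       [ "$(cat "$SYSFS/hypervisor/type")" != "xen" ]; then
        echo "bare metal (no Xen hypervisor)"
    elif grep -q control_d /proc/xen/capabilities 2>/dev/null; then
        # Only the privileged "management console" domain reports control_d.
        echo "Xen dom0"
    else
        echo "Xen domU"
    fi
}

xen_role
```

    On a 3.0 dom0 this should print "Xen dom0"; on an EC2 guest of that era, "Xen domU".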
  • Re:Now all I need... (Score:4, Informative)

    by glwtta (532858) on Friday June 03, 2011 @04:00AM (#36328950) Homepage
    16 cores and 32 GB of RAM

    That's, uh, not exactly all that out there, these days.
  • Re:KVM vs XEN (Score:4, Informative)

    by pasikarkkainen (925729) on Friday June 03, 2011 @04:18AM (#36329014)
    Xen has features that KVM doesn't have (by design). For example Xen "stubdomains" and "driver domains", full memory address space separation between domains, etc.. and of course it's good to have multiple opensource virtualization platforms, competition is a good thing!
  • Re:KVM vs XEN (Score:4, Informative)

    by GPLHost-Thomas (1330431) on Friday June 03, 2011 @04:22AM (#36329036)
    The Why Xen [xen.org] PDF might explain it well. Under KVM, guests run inside the host operating system. In Xen, the hypervisor starts a special Linux kernel (the dom0) that only takes care of drivers for the guests. The designs are really different, and have different features. For example, in Xen you can have your dom0 run on 2 cores, leaving the rest for the guests (I'm not sure that is possible in KVM), and if you want to avoid any possible CPU starvation, you can even have the guests stay off the cores that the dom0 is using. The CPU scheduler is also very different (and there's more than one available...).
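    For the curious, the dom0 pinning described above is done with Xen boot parameters plus per-guest config. A sketch under stated assumptions: `dom0_max_vcpus` and `dom0_vcpus_pin` are real Xen hypervisor command-line options and `cpus` is a real guest-config key, but the core numbers here are made up for illustration.

```
# Appended to the Xen hypervisor line in GRUB: give dom0 two dedicated
# vCPUs and pin them so they never float onto guest cores.
dom0_max_vcpus=2 dom0_vcpus_pin

# In a guest's config file: keep this VM on cores 2-7, i.e. off the
# two cores dom0 is pinned to.
cpus = "2-7"
```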
  • by Superken7 (893292) on Friday June 03, 2011 @04:36AM (#36329086) Journal

    I'm not sure if you are trolling on purpose, or if you don't understand what this news is all about. But I'll bite.

    You see, Linux runs on almost any kind of hardware: from embedded systems in toasters to phones, desktop computers, laptops, and big servers. Even most supercomputers to date run Linux. There are a _lot_ of different users who use Linux in many different ways.

    Xen is a technology that virtualizes machines, mainly intended for the data center and cloud computing environments.

    This is NOT intended for end users in any way. Your mom does NOT have to know that Xen even exists, just like Windows users don't need to know what IIS or Apache is in order to browse the web.

    Would you also say that Windows and OS X are "way too complicated for people" because you read a Slashdot story about some geeky kernel detail of Windows/OS X?
    Surely "no user should need to know, or care about, this sort of thing."

    They don't. And neither do you need to care about Xen. I'm not sure why someone like you is reading and posting on /., because this is usually "news for nerds", as the site indicates. :)

    As many slashdotters would say about your reasoning behind your post: "You are doing it wrong." ;)

  • Re:KVM vs XEN (Score:3, Informative)

    by pasikarkkainen (925729) on Friday June 03, 2011 @04:40AM (#36329106)
    Actually the designs are pretty different. Take a look at these slides: http://www.slideshare.net/xen_com_mgr/why-xen-slides [slideshare.net] . That should explain the differences. Xen is also multi-OS, i.e. you can also use BSD/Solaris in addition to Linux as a Xen host, while KVM is Linux-only as the host.
  • Re:KVM vs XEN (Score:5, Informative)

    by pasikarkkainen (925729) on Friday June 03, 2011 @05:24AM (#36329284)
    Xen is a secure baremetal hypervisor (xen.gz), around 2 MB in size, and it's the first thing that boots on your computer from GRUB. After the Xen hypervisor has started, it boots the "management console" VM, called "Xen dom0", which is most often Linux but could also be BSD or Solaris. The upstream Linux kernel v3.0 can run as Xen dom0 without additional patches. Xen dom0 has some special privileges, like direct access to hardware, so you can run device drivers in dom0 (= use native Linux kernel device drivers for disk/net etc.), and dom0 then provides virtual networks and virtual disks for the other VMs through the Xen hypervisor.

    Xen also has the concept of "driver domains", where you can dedicate a piece of hardware to some VM (with Xen PCI passthru) and run the driver for that hardware in the VM instead of in dom0, adding further separation and security to the system. Xen "driver domain" VMs can provide virtual network and virtual disk backends for other VMs.

    KVM, on the other hand, is a loadable module for the Linux kernel which turns the Linux kernel into a hypervisor. The difference is that in KVM all the processes (sshd, apache, etc.) running on the host Linux and the VMs share the same memory address space. So KVM has less separation between the host and the VMs, by design. VMs in KVM are processes on the host Linux, not "true" separated VMs.
  • by Lemming Mark (849014) on Friday June 03, 2011 @05:47AM (#36329350) Homepage

    It's partly historical and partly because Xen is structured differently to lots of other virtualisation systems.

    "Domain" is to "virtual machine" as "process" is to "program". i.e. it's a running instance of a virtual machine. If you kill a VM and restart it, it's the same VM but a different domain. In practice VM and domain are blurred a bit when people talk, though.

    Domain 0 is a bit like the host OS, but for technical reasons it's not exactly that.

  • by martyros (588782) on Friday June 03, 2011 @05:49AM (#36329358)

    What is Xen? Xen is a virtualization project whose software is run by four of the top five major cloud providers (including Amazon, Rackspace, &c); a commercial version from Citrix is run by thousands of sites worldwide, including large companies like Tesco, SAP, &c. It's also the approved way of running Oracle databases in a virtual machine.

    What does that have to do with Linux? The Xen project is focused on virtualization. But Xen still needs to run on systems with all manner of devices. There are several ways they could have handled this. One is to try to put drivers for all of the devices in Xen. This would require a huge amount of work, mostly copying new device drivers and device fixes from Linux and porting them over to Xen. It would be a colossal waste of time: they would be duplicating effort of what Linux already does well, instead of doing what they want to do -- work on virtualization.

    So what they do instead is run Xen as the hypervisor, but leverage the device drivers in Linux. They do this by creating a special VM, called "domain 0" or "dom0", which is booted first after Xen boots, that has drivers to control all of the devices. This domain is a version of Linux that is designed to be able to work with Xen to control and drive devices, while allowing Xen to control memory, CPU, and interrupts (the key hardware required to do virtualization).

    Xen has been out for years. Why is this just being announced? The Xen project started out of a University research project. As is typical, they were trying to answer the question "what is possible?", and as a result, felt free to completely rip out and rewrite large sections of Linux code. This code was not upstream-able -- changes were made that were (rightly) not acceptable to the Kernel community.

    Since that time, the Xen community has maintained branches of Linux with these intrusive, non-upstreamable patches, and used these branches as domain 0. At the same time, they have worked to try to get support for Linux-as-domain-0 into the mainline tree. This has been a long process, and something that has been a sore point for users of Xen for some time.

    But as of Linux 3.0, all of the functionality required to use the mainline kernel tree as a basic dom0 with Xen is in. This means that if you install Xen, you'll be able to use the same kernel you booted with natively as the dom0 for Xen. It means that distributions won't have to maintain two separate kernels, one for booting bare metal, and one for booting on Xen. And it means not having to maintain the xen-linux fork, which has been a lot of painful work for the Xen community.
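    Concretely, "the same kernel you booted with natively" means a GRUB entry along roughly these lines is all a distribution needs to ship; a sketch under stated assumptions (the menu-entry name, paths, memory size, and root device are illustrative, not from any particular distro):

```
menuentry 'Linux 3.0, with Xen hypervisor' {
    # Xen boots first as a multiboot kernel...
    multiboot  /boot/xen.gz dom0_mem=1024M
    # ...then loads the ordinary distro kernel and initrd as its dom0.
    module     /boot/vmlinuz-3.0.0  root=/dev/sda1 ro console=tty0
    module     /boot/initrd.img-3.0.0
}
```

    Remove the `multiboot` line and boot `vmlinuz-3.0.0` directly, and the very same kernel image runs on bare metal.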

  • Re:Meanwhile (Score:5, Informative)

    by zefrer (729860) <zefrer@@@gmail...com> on Friday June 03, 2011 @06:04AM (#36329400) Homepage

    Just had to reply to this... Sun forked Xen 3.1 something like 4 years ago, yes. That same fork, Xen 3.1, is what is still being used today in Solaris, and Sun had previously (pre-buyout) said they would not merge any newer versions of Xen.

    So while Solaris can claim Xen dom0 support, it is nowhere near the capabilities of the current Xen 4.0, and with no plans to update you're stuck on 3.1 with support only coming from, now, Oracle. Yeah, awesome.

  • Re:KVM vs XEN (Score:4, Informative)

    by martyros (588782) on Friday June 03, 2011 @06:11AM (#36329412)
    There doesn't have to be a battle -- there's room in the OSS world for two technologies. Xen and KVM are different technologies. For most desktop users, KVM is probably the best option; but on big servers, Linux running KVM has to mix scheduling between VMs and ordinary processes. Since Xen runs VMs exclusively, it can focus on scheduling algorithms that work well for VMs.
  • Re:KVM vs XEN (Score:5, Informative)

    by TheRaven64 (641858) on Friday June 03, 2011 @07:49AM (#36329710) Journal

    Not sure which Xen book you read, but the grandparent makes a lot of errors and I'd be surprised if a book was that inaccurate. Mine [amazon.co.uk] is slightly out of date, but at least was accurate at the time of printing (technical review was done by the original Xen developer).

    Let's start at the end. KVM VMs and userspace Linux applications do not share the same address space. This isn't even true if you remove KVM - userspace processes have isolated address spaces. KVM requires the CPU have virtualisation extensions, which means (among other things) nested page tables. This means that there is hardware-enforced separation between the pages. The guest OS sees page tables that map from virtual to pseudophysical address space, but thinks that they map from virtual to physical. The host (Linux) sets the mapping from these pseudophysical pages to real memory pages and the CPU enforces this mapping. Xen uses exactly the same mechanism in HVM mode (it uses some other tricks in paravirtual mode).

    The part about driver domains is correct, but it's worth noting that Xen will use VT-d or equivalent to protect against malicious use. Linux can't give a userspace program direct access to the disk controller, because if it did then a rogue DMA command could compromise the kernel. Xen uses the IOMMU to ensure that each peripheral may only issue DMAs to memory owned by its driver domain. The Solaris VM that you have accessing your block device and exporting virtual disks from ZVOLs, for example, can trample its own address space with rogue DMAs, but it can't touch any memory in other VMs.

    This means that Xen (in theory) has a smaller attack profile than KVM. Xen is basically a microkernel, and it enforces low privilege on the services (OS instances) that provide drivers and the management console. With KVM, the entire kernel runs in privileged mode. It's fairly common these days for the management console domain to have either no network access, or highly-restricted access, and be separated from the driver domains. If there is a flaw in the network stack in Linux and an attacker compromises it, then with KVM they now have access to all of your VMs. With Xen, they control that driver domain, and they can inject packets into the other VMs, but they are no more able to compromise them than they would be if they controlled the router one hop away.
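    For reference, handing a device to such a driver domain is just a couple of lines of guest config plus hiding the device from dom0; a sketch under stated assumptions (the `pci` key is real Xen guest-config syntax, but the BDF address here is made up, and the exact dom0-side hiding mechanism varies by kernel version):

```
# Hide the NIC from dom0's own driver at boot (mechanism varies: pciback
# in older Xen kernels, xen-pciback in newer ones), then in the driver
# domain's config file assign the device to it:
pci = [ '0000:03:00.0' ]
```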

    KVM recently gained support for live migration (this has been stable in Xen for a long time; they were doing demos of live-migrating a Quake 2 server with clients connected as far back as the early 2000s), but it doesn't have any of the high-availability features that Xen 4 includes. Those allow you to do things like run two instances of the same VM on different machines and transparently fail over when one dies.

  • Re:Meanwhile (Score:5, Informative)

    by diegocg (1680514) on Friday June 03, 2011 @08:06AM (#36329780)

    'VMWare lobby', WTF? The real problem was things like this [lkml.org] and this [lkml.org]:

    The fact is (and this is a _fact_): Xen is a total mess from a development
    standpoint. I talked about this in private with Jeremy. Xen pollutes the
    architecture code in ways that NO OTHER subsystem does. And I have never
    EVER seen the Xen developers really acknowledge that and try to fix it.

    Thomas pointed to patches that add _explicitly_ Xen-related special cases
    that aren't even trying to make sense. See the local apic thing.

    So quite frankly, I wish some of the Xen people looked themselves in the
    mirror, and then asked themselves "would _I_ merge something ugly like
    that, if it was filling my subsystem with totally unrelated hacks for some
    other crap"?

    Seriously.

    If it was just the local APIC, fine. But it may be just the local APIC
    code this time around, next time it will be something else. It's been TLB,
    it's been entry_*.S, it's been all over. Some of them are performance
    issues.

    I dunno. I just do know that I pointed out the statistics for how
    mindlessly incestuous the Xen patches have historically been to Jeremy. He
    admitted it. I've not seen _anybody_ say that things will improve.

    Xen has been painful. If you give maintainers pain, don't expect them to
    love you or respect you.

    So I would really suggest that Xen people should look at _why_ they are
    giving maintainers so much pain.

                    Linus

    BTW, I have absolutely no doubt that NetBSD and Solaris merged Xen faster than anyone else.

  • by Anonymous Coward on Friday June 03, 2011 @09:07AM (#36330164)

    I still get 15 points and this on a fairly regular schedule. Most of the time I don't use them or only partially, though.
