Linux 3.0 Will Have Full Xen Support 171
GPLHost-Thomas writes "The very last components that were needed to run Xen as a dom0 have finally reached kernel.org. The Xen block backend was one major feature missing from 2.6.39 dom0 support, and it's now included. Posts on the Xen blog, at Oracle and at Citrix celebrate this achievement."
I hopefully speak for lots of people when I say (Score:1, Informative)
... what??
Re:Vanilla kernel on EC2 (Score:5, Informative)
Re:Now all I need... (Score:4, Informative)
That's, uh, not exactly all that out there, these days.
Re:KVM vs XEN (Score:4, Informative)
Re:KVM vs XEN (Score:4, Informative)
Re:This is the reason why... (Score:4, Informative)
I'm not sure if you are trolling on purpose, or if you don't understand what this news is all about. But I'll bite.
You see, Linux runs on almost any kind of hardware: from embedded systems in toasters to phones, desktop computers, laptops, and big servers. Even most supercomputers to date run Linux. There are a _lot_ of different users who use Linux in many different ways.
Xen is a technology that virtualizes machines, mainly intended for the data center and cloud computing environments.
This is NOT intended for end users in any way. Your mom does NOT have to know that Xen even exists, just like Windows users don't need to know what IIS or Apache is in order to browse the web.
Would you also say that Windows and OS X are "way too complicated for people" because you read a Slashdot story about some geeky kernel detail of Windows/OS X?
Surely "no user should need to know, or care about this sort of thing."
They don't. Neither do you need to know about Xen. I'm not sure why someone like you is reading and posting on /., because this is usually "news for nerds", as the site's tagline says. :)
As many slashdotters would say about your reasoning behind your post: "You are doing it wrong." ;)
Re:KVM vs XEN (Score:3, Informative)
Re:KVM vs XEN (Score:5, Informative)
Re:This is the reason why... (Score:4, Informative)
It's partly historical and partly because Xen is structured differently to lots of other virtualisation systems.
"Domain" is to "virtual machine" as "process" is to "program"; i.e., a domain is a running instance of a virtual machine. If you kill a VM and restart it, it's the same VM but a different domain. In practice, "VM" and "domain" get blurred a bit when people talk, though.
Domain 0 is a bit like the host OS, but for technical reasons it's not exactly that.
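The process/program analogy shows up directly in Xen's `xl` toolstack. An illustrative session (the guest name, config path, and numbers are made up, and this obviously needs a Xen host):

```
$ xl list                        # dom0 is always domain ID 0
Name        ID   Mem VCPUs  State  Time(s)
Domain-0     0  2048     4  r----    120.3
guest1       3  1024     2  -b---     42.7
$ xl destroy guest1              # kill the VM...
$ xl create /etc/xen/guest1.cfg  # ...and start it again
$ xl list                        # same VM, but a fresh domain (new ID)
Name        ID   Mem VCPUs  State  Time(s)
Domain-0     0  2048     4  r----    121.8
guest1       4  1024     2  -b---      0.1
```

The domain ID works like a PID: it identifies one running instance, and the restarted guest gets a new one.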
Re:I hopefully speak for lots of people when I say (Score:5, Informative)
What is Xen? Xen is a virtualization project whose hypervisor is used by four of the top five major cloud providers (including Amazon, Rackspace, &c); a commercial version from Citrix is used by thousands of sites worldwide, including large companies like Tesco, SAP, &c. It's also the approved way of running Oracle databases in a virtual machine.
What does that have to do with Linux? The Xen project is focused on virtualization. But Xen still needs to run on systems with all manner of devices. There are several ways they could have handled this. One is to try to put drivers for all of the devices in Xen. This would require a huge amount of work, mostly copying new device drivers and driver fixes from Linux and porting them over to Xen. It would be a colossal waste of time: they would be duplicating what Linux already does well, instead of doing what they actually want to do -- work on virtualization.
So what they do instead is run Xen as the hypervisor, but leverage the device drivers in Linux. They do this by creating a special VM, called "domain 0" or "dom0", which is booted first after Xen boots, that has drivers to control all of the devices. This domain is a version of Linux that is designed to be able to work with Xen to control and drive devices, while allowing Xen to control memory, CPU, and interrupts (the key hardware required to do virtualization).
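A quick way to see this split in practice: a kernel booted as the control domain advertises that role under /proc/xen. A minimal sketch (runs anywhere, since the file only exists when booted under Xen):

```shell
# /proc/xen/capabilities contains "control_d" only in dom0.
# On bare metal, or inside an ordinary guest (domU), the file is
# missing or lacks the flag, so the else branch fires.
if grep -q control_d /proc/xen/capabilities 2>/dev/null; then
    echo "running as dom0"
else
    echo "not running as dom0"
fi
```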
Xen has been out for years. Why is this just being announced? The Xen project started out of a University research project. As is typical, they were trying to answer the question "what is possible?", and as a result, felt free to completely rip out and rewrite large sections of Linux code. This code was not upstream-able -- changes were made that were (rightly) not acceptable to the Kernel community.
Since that time, the Xen community has maintained branches of Linux with these intrusive, non-upstreamable patches, and used these branches as domain 0. At the same time, they have worked to try to get support for Linux-as-domain-0 into the mainline tree. This has been a long process, and something that has been a sore point for users of Xen for some time.
But as of Linux 3.0, all of the functionality required to use the mainline kernel tree as a basic dom0 with Xen is in. This means that if you install Xen, you'll be able to use the same kernel you booted with natively as the dom0 for Xen. It means that distributions won't have to maintain two separate kernels, one for booting bare metal, and one for booting on Xen. And it means not having to maintain the xen-linux fork, which has been a lot of painful work for the Xen community.
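Concretely, booting this way means GRUB loads the Xen hypervisor as the kernel and the ordinary distribution kernel as its first module. A sketch of a GRUB (legacy) entry, with illustrative paths, versions, and options:

```
title Xen / Linux 3.0 (dom0)
    root (hd0,0)
    kernel /boot/xen.gz dom0_mem=1024M
    module /boot/vmlinuz-3.0 root=/dev/sda1 ro console=tty0
    module /boot/initrd-3.0.img
```

The same /boot/vmlinuz-3.0 can still be booted directly, without the xen.gz line, on bare metal -- which is the whole point.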
Re:Meanwhile (Score:5, Informative)
Just had to reply to this. Sun forked Xen 3.1 something like four years ago, yes. That same fork, Xen 3.1, is what is still being used in Solaris today, and Sun had said previously (pre-buyout) that they would not merge to any newer version of Xen.
So while Solaris can claim Xen dom0 support, it is nowhere near the capabilities of current Xen 4.0, and with no plans to update you're stuck on 3.1, with support coming only from, now, Oracle. Yeah, awesome.
Re:KVM vs XEN (Score:4, Informative)
Re:KVM vs XEN (Score:5, Informative)
Not sure which Xen book you read, but the grandparent makes a lot of errors and I'd be surprised if a book was that inaccurate. Mine [amazon.co.uk] is slightly out of date, but at least was accurate at the time of printing (technical review was done by the original Xen developer).
Let's start at the end. KVM VMs and userspace Linux applications do not share the same address space. This isn't even true if you remove KVM - userspace processes have isolated address spaces. KVM requires the CPU have virtualisation extensions, which means (among other things) nested page tables. This means that there is hardware-enforced separation between the pages. The guest OS sees page tables that map from virtual to pseudophysical address space, but thinks that they map from virtual to physical. The host (Linux) sets the mapping from these pseudophysical pages to real memory pages and the CPU enforces this mapping. Xen uses exactly the same mechanism in HVM mode (it uses some other tricks in paravirtual mode).
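Both KVM and Xen's HVM mode depend on those CPU extensions being present; on Linux you can check for them via the flags in /proc/cpuinfo. A small sketch (vmx is Intel VT-x, svm is AMD-V):

```shell
# KVM, and Xen in HVM mode, need vmx (Intel) or svm (AMD).
# Without either flag, Xen can still run paravirtual guests;
# KVM cannot run at all.
if grep -q -E 'vmx|svm' /proc/cpuinfo 2>/dev/null; then
    echo "hardware virtualisation available"
else
    echo "no vmx/svm flag found"
fi
```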
The driver domains are correct, but it's worth noting that Xen will use VT-d or equivalent to protect against malicious use. Linux can't give a userspace program direct access to the disk controller, because if it did then a rogue DMA command could compromise the kernel. Xen will use the IOMMU to ensure that each peripheral may only issue DMAs to memory owned by the driver domain. The Solaris VM that you have accessing your block device and exporting virtual disks from ZVOLs, for example, can trample its own address space with rogue DMAs, but it can't touch any memory in other VMs.
This means that Xen (in theory) has a smaller attack profile than KVM. Xen is basically a microkernel, and it enforces low privilege on the services (OS instances) that provide drivers and the management console. With KVM, the entire kernel runs in privileged mode. It's fairly common these days for the management console domain to have either no network access, or highly-restricted access, and be separated from the driver domains. If there is a flaw in the network stack in Linux and an attacker compromises it, then with KVM they now have access to all of your VMs. With Xen, they control that driver domain, and they can inject packets into the other VMs, but they are no more able to compromise them than they would be if they controlled the router one hop away.
KVM recently gained support for live migration (this has been stable in Xen for a long time - they were doing demos of live-migrating a Quake 2 server with clients connected back in the early 2000s), but it doesn't have any of the high-availability stuff that Xen 4 includes. That allows you to do things like run two instances of the same VM on different machines and transparently fail over when one dies.
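For reference, a live migration with Xen's xl toolstack is a single command (the hostname and guest name below are placeholders, and both hosts need access to the guest's storage):

```
# Move the running guest to another Xen host; the VM keeps running
# during the memory copy and only pauses briefly at the switch-over.
xl migrate guest1 otherhost.example.com
```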
Re:Meanwhile (Score:5, Informative)
'VMWare lobby'? WTF? The real problems were things like this [lkml.org] and this [lkml.org]:
The fact is (and this is a _fact_): Xen is a total mess from a development
standpoint. I talked about this in private with Jeremy. Xen pollutes the
architecture code in ways that NO OTHER subsystem does. And I have never
EVER seen the Xen developers really acknowledge that and try to fix it.
Thomas pointed to patches that add _explicitly_ Xen-related special cases
that aren't even trying to make sense. See the local apic thing.
So quite frankly, I wish some of the Xen people looked themselves in the
mirror, and then asked themselves "would _I_ merge something ugly like
that, if it was filling my subsystem with totally unrelated hacks for some
other crap"?
Seriously.
If it was just the local APIC, fine. But it may be just the local APIC
code this time around, next time it will be something else. It's been TLB,
it's been entry_*.S, it's been all over. Some of them are performance
issues.
I dunno. I just do know that I pointed out the statistics for how
mindlessly incestuous the Xen patches have historically been to Jeremy. He
admitted it. I've not seen _anybody_ say that things will improve.
Xen has been painful. If you give maintainers pain, don't expect them to
love you or respect you.
So I would really suggest that Xen people should look at _why_ they are
giving maintainers so much pain.
Linus
BTW, I have absolutely no doubt that NetBSD and Solaris merged Xen faster than anyone else.
Re:I hopefully speak for lots of people when I say (Score:0, Informative)
I still get 15 mod points, and this happens on a fairly regular schedule. Most of the time I don't use them, or use only some of them, though.