Xen To Become Linux Foundation Collaborative Project
jrepin writes "The Linux Foundation, the nonprofit organization dedicated to accelerating the growth of Linux, today announced the Xen Project is becoming a Linux Foundation Collaborative Project. Linux Foundation Collaborative Projects are independently funded software projects that harness the power of collaborative development to fuel innovation across industries and ecosystems. The Xen Project is an open source virtualization platform licensed under the GPLv2 with a similar governance structure to the Linux kernel. Designed from the start for cloud computing, the project has more than a decade of development and is being used by more than 10 million users. As the project experiences contributions from an increasingly diverse group of companies, it is looking to The Linux Foundation to be a neutral forum for providing guidance and facilitating a collaborative network."
Re: (Score:1)
But if something is at -1, I don't think it will show up in the search results at all.
Here's something a friend told me. They posted a horrid post, and included a pretty unique string. It got modded -1 of course. They then tried to search for it again, and got no results.
Try it and see.
Wouldn't KVM... (Score:3)
Re:Wouldn't KVM... (Score:5, Interesting)
Wouldn't KVM be the most natural fit for a Linux virtualization project? Or are we talking about something other than the Xen virtualization project here?
Xen has been around longer, as I understand it, and at one time I used it in para-virtualization mode because running Linux VMs on the hardware I had at the time, which had no virtualization assist, was very painful, performance-wise. I still have one VM host running para-virtualized.
For a while it appeared that Red Hat - one of Xen's initial promoters - was going to drop Xen for KVM, but they seem to have been retreating from that. At any rate, recent RHEL kernels are easier for me to work with using Xen than KVM, for the most part. Don't take that as meaning much, however, since Xen is where I have a lot more practice.
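For anyone who hasn't touched para-virtualization: a PV guest on a setup like that is typically described by a small config file and started with the Xen toolstack. A minimal sketch, assuming an LVM-backed disk and the xl toolstack; the guest name, volume group, and bridge are placeholders, not anything from the parent post:

```
# write a minimal PV guest config (all values here are examples), then start it
cat > /etc/xen/guest1.cfg <<'EOF'
name       = "guest1"
memory     = 1024
vcpus      = 1
bootloader = "pygrub"                            # boot the kernel installed inside the guest image
disk       = ['phy:/dev/vg0/guest1-disk,xvda,w']
vif        = ['bridge=xenbr0']
EOF
xl create /etc/xen/guest1.cfg                    # 'xm create' on older toolstacks
xl list                                          # the new domU should show up alongside dom0
```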
Re: (Score:2)
RHEV uses KVM [wikipedia.org]. With that said, I believe (haven't checked recently though) that Red Hat still supports running RHEL as a guest inside of a KVM hypervisor.
Re: (Score:2)
(A Xen hypervisor, rather)
Re: (Score:2)
If "Xen host" means "dom0", there are a whole string of OS releases in the Red Hat family that cannot run a dom0. They keep saying that the "next" Fedora release will be able to host a dom0, but they've been saying that since about Fedora 8, I think.
Re:Wouldn't KVM... (Score:4, Informative)
This isn't the Linux Foundation going looking for a virtualisation project and picking Xen.
From what I've read, this is the open source Xen community asking Citrix if the open source project can shift to being run by an independent foundation, Citrix agreeing to that, and the Xen community picking the Linux Foundation as the best fit.
I'm sure that if Red Hat wanted to move KVM to the Linux Foundation, that could happen too. It would be like, say, the Apache Foundation managing 'competing' projects just fine.
The Xen community feels that this move will make it easier for other companies and developers to contribute to the open source Xen project, and hopes that it will improve collaboration with other projects, etc.
Citrix still has its commercial XenServer product, and will presumably still employ developers to work on the open source Xen. But the management of the open source project is now independent from Citrix.
Personally I think it's a good move - after all Xen is running an awful lot of hosting/cloud providers out there.
Re: (Score:3)
Well... I have 1,648 paying Xen customers and 1 paying KVM customer.
If it were my money I'd go for Xen. Like I did.
(For context: the average customer pays roughly $100 per month.)
Re: (Score:2)
Recently I got myself a nice little rackmount for experiments at home. My own little computer lab, so to say. Now, at work we use Xen: basically Debian Dom0 with lots and lots of Debian DomU. I think we have exactly one Windows server in a DomU. It's a simple situation, really.
Given that I read on several forums that KVM is "the preferred Linux virtualization technology", I thought: well, why not use that instead? Now, I admit, I was too lazy to read the manpages, but from the How-Tos I read, this is
I didn't even need to wait for "cloud computing" (Score:1)
harness the power of collaborative development to fuel innovation across industries and ecosystems
BINGO!
Re: (Score:3)
Damn you. I'm still missing "synergistic".
Xen's biggest obstacle right now (Score:5, Interesting)
...from my own anecdotal perspective, is that VMs are very often used as a way to isolate commercial software products into their own little box where they don't have to play nice with other applications on the box -- and which hypervisors are supported for these products depends entirely on the vendor. Major vendors with these products are only just now beginning to think beyond VMware, and when they do, they are thinking Hyper-V before Xen. Not many shops want to support more than one virtualization suite -- the only reason they do is that some vendors demand VMware for their crap, and the price difference between that and supporting a second suite is workable. Once the VMware premium is out of the picture, because vendors went to Hyper-V, there will be less of a compelling reason to maintain support for a second suite.
So closed source software vendors may dictate which suite wins between Hyper-V and Xen.
Re: (Score:2, Interesting)
Because no commercial software supports BSD?
Hell, for a laugh one time try to explain to a vendor you want to chroot their commercial product.
That kind of shit software is why we have all these workarounds.
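For what it's worth, the chroot route really isn't exotic. A rough sketch on a Debian-ish host; the jail path, the vendor's install directory, and the start script are made-up examples:

```
# build a minimal root filesystem for the jail
debootstrap stable /srv/vendorapp-jail
# give the jail the pseudo-filesystems most software expects
mount --bind /proc /srv/vendorapp-jail/proc
mount --bind /dev  /srv/vendorapp-jail/dev
# drop the vendor's product into the jail and run it from inside
cp -a /opt/vendorapp /srv/vendorapp-jail/opt/
chroot /srv/vendorapp-jail /opt/vendorapp/bin/start.sh
```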
Re: (Score:2)
People are still buying crap like that? They kinda get what they deserve.
Re: (Score:2)
All the time.
The people buying it are not the ones implementing it.
OpenVZ (Score:5, Insightful)
OpenVZ [openvz.org] is very much like jails for Linux. I introduced it at my job four years ago and we've been using it ever since. I can attest to the savings in hardware overhead and in sysadmin time, compared to the alternatives of either full-blown VMs or all-services-in-one-Linux-box.
Nowadays there is also LXC [sourceforge.net], which supposedly is the future for Linux jails, seeing as their patch-set got into the mainline kernel—something OpenVZ failed to achieve. But IMHO LXC is not as stable and reliable as OpenVZ, nor as well-isolated by default, which is an aspect that is too often neglected.
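For anyone curious what day-to-day OpenVZ administration looks like, a rough sketch with vzctl; the container ID, template name, hostname, and IP address are placeholders:

```
# create a container from a pre-downloaded OS template and give it an address
vzctl create 101 --ostemplate debian-7.0-x86_64 --config basic
vzctl set 101 --hostname web01 --ipadd 192.168.1.101 --save
vzctl start 101
# run a command inside, or drop into a shell in the container
vzctl exec 101 ps aux
vzctl enter 101
# list all containers on the host
vzlist -a
```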
Re: (Score:3)
Things may have changed in the past couple of years, but I distro-upgraded-before-I-looked from Ubuntu Hardy LTS to Lucid LTS and found that the OpenVZ components were removed and LXC components added which threw me for a few loops on my home containers [itkia.com]. At the time I found LXC to be lacking in tools and documentation, and OpenVZ wasn't being supported in a sane way. I had enough troubles with Lucid in containers that I put everything on a hard box and later moved all of it to Windows and Ubuntu-in-Hyper-V.
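For comparison, the LXC tooling of that era looked roughly like this; the container name and template are examples, and the exact options vary between LXC releases:

```
# create a container from a distro template and start it in the background
lxc-create -n web01 -t debian
lxc-start -n web01 -d
# get a shell inside it, and list containers on the host
lxc-attach -n web01
lxc-ls
```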
Re: (Score:2)
What happens if you have a bunch of OpenVZ 'virtual machines' running apache webservers and, on the host, you run 'killall -9 apache'?
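(The sting in that question is that the OpenVZ host sees every container's processes in its own process table, so a host-side killall is not scoped to anything. A hedged sketch of the scoped alternatives, assuming container ID 101 and Debian-style apache2 process names:)

```
# find out which container a host-visible PID actually belongs to
vzpid $(pidof apache2 | awk '{print $1}')
# kill apache only inside container 101, leaving the other containers alone
vzctl exec 101 killall apache2
```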
Re: (Score:2)
I agree that VMware will run you nontrivial money; but the 'overhead' argument leaves me more skeptical these days.
I don't deny that there is some, but now that hardware assistance for virtualization is effectively obligatory in modern hardware, it isn't huge. Compared to the convenience features that you don't get with OS-level segregation (trivial snapshot/rollback, and migration from host to host being the big ones), it's just hard to get worked up about.
Certainly, if I were looking at a much bigger, much
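(If you want to check where a given box stands on the hardware-assist point, a quick sketch; the output and exact strings vary by CPU, distro, and toolstack:)

```
# Intel VT-x shows up as the 'vmx' CPU flag, AMD-V as 'svm'
egrep -c '(vmx|svm)' /proc/cpuinfo
# on a Xen dom0, the hypervisor's boot log reports what it enabled
xl dmesg | egrep -i 'vmx|svm|hvm'
# on Debian/Ubuntu, the cpu-checker package offers a one-line verdict
kvm-ok
```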
Re: (Score:3, Insightful)
Agree 1000%. We have a fairly large VMware cluster. Okay, maybe not that large, but it consists of six 4U quad-socket servers, 8 cores per socket, with 512GB of RAM each. You can host a lot of stuff with that.
What has happened is that any time a user says they need a new source code repo, a new application, another build server for their project, or whatever, we just spin up another virtual machine for them. That's how our IT middle management has sold this solution to our executives (being able to prov
Re: (Score:2)
That situation right there is why block-level dedup on your VMware storage is awesome. Oh, you want another server? I'll just spin up this template that is exactly the same as the other ones; it will de-dupe, and the only data actually written is whatever differs from every other server already on the host.
Yes, it doesn't save the additional CPU and memory of having yet another OS there, but it helps.
Re: (Score:2, Insightful)
Can jails do live migration between hosts?
How about live storage migration? (awesome when it comes time to migrate to a new SAN)
Do they have a centralized management software that in the case of a hardware failure auto starts the jail on another machine?
Templates?
Snapshots?
Dynamic resource management (i.e. CPU is busy on this machine, migrate the more idle VMs to another host to increase performance for the busy app)?
Built in APIs for deploying new jails?
Shared storage devices across physical hosts? (could
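(For context, this is roughly what several of the features in that list look like on the KVM/libvirt side; 'guest1' and the destination host are placeholders, and the VMware and Xen toolchains have their own equivalents:)

```
# live-migrate a running guest to another host over ssh
virsh migrate --live guest1 qemu+ssh://otherhost/system
# snapshot a guest, and roll back to the snapshot later
virsh snapshot-create-as guest1 pre-upgrade
virsh snapshot-revert guest1 pre-upgrade
# list all guests known to this host
virsh list --all
```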
Re: (Score:3)
Perhaps, but I have found that I can very often use a chroot jail when some would use a virtual machine. It takes more knowledge to set up, but generally performs better and can interact with applications on the native platform.
Re: (Score:2)
I think you're correct if you're talking about Citrix's commercial XenServer product whose main market would be enterprise users.
But this is about the open source Xen hypervisor project. Its main deployment is with cloud and VPS hosting providers, e.g. Amazon, Rackspace, Linode, etc. And it is doing well in that world.
And now that the open source Xen is free from Citrix, ongoing development shouldn't be as dependent on Citrix's fortunes with XenServer.
Re: (Score:2)
So closed source software vendors may dictate which suite wins between Hyper-V and Xen.
This may be true in traditional enterprise IT, but when you've got a fully open stack (lower case) platform, the argument for tying down your hypervisor layer with licensing, when nothing else is so encumbered, is going to be a very tough sell. Though to be fair, XenServer gets you some of the benefits of Xen, with extra licensing, if that's your cup o' tea.
Never worked with Cloud computing? (Score:2)
Sorry, no. Closed-source software vendors will be looking at their product running on large-scale cloud providers like EC2, IBM's cloud, and maybe Microsoft Azure if it ever gets out of beta. This means that they will possibly be looking at supporting Hyper-V, but not before they are able to run on both Xen and KVM. EC2 will be a much more interesting target, and they happen to be running Xen. Something like this is way more important than supporting some silly company that wants to run their Linux boxes on Hyp
Re: (Score:2)
You may be right; the segment wanting to target e.g. EC2 may be larger and more of a deciding factor. The OP is just what I see from my perspective. The apps in question are not something you'd put in the cloud except possibly as emergency backup instances. There are of course some insane people who do put stuff like their LWAP controllers in the cloud, and companies more than happy to sell them on that idea :-/
As for security, VMs are far from secure when it comes to the network side. Implementation of dot1br is lagging
Why not KVM when it has all of the momentum? (Score:1)
Are they looking for a challenge in reviving a dead project? Even Red Hat gave up on Xen.
Re:Why not KVM when it has all of the momentum? (Score:4, Interesting)
For some workloads Xen outperforms KVM, and vice versa. It is better for all of us to have competing open source solutions than to be competing against closed source ones. Also worth noting: yes, Xen was pretty much dead in the water for some time, but since getting their act together and getting support into the mainline kernel, they have been doing very well.
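(If you want to see what your own kernel ships with, a quick sketch; config file locations and module names vary a bit by distro:)

```
# does the running kernel have Xen and KVM support built in?
grep -E 'CONFIG_XEN|CONFIG_KVM' /boot/config-$(uname -r)
# inside a guest, this reports which hypervisor you are running under (if the file exists)
cat /sys/hypervisor/type 2>/dev/null
# on a KVM host, the modules show up once loaded
lsmod | grep kvm
```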
Re: (Score:3)
No, RedHat has been co-opting projects that give it a unique competitive edge. They pretty much own the KVM project, and now they don't have to compete with Citrix on the Xen platform. RHEL dropped support for Xen in version 6, at which point the Linux kernel devs retorted by putting Xen support into the kernel. If Xen was such a dog, then why would the Linux kernel dev team work so hard to keep it?
I'm not downing KVM or Xen. Both work well for their intended purposes. But RHEL's decision probably had
Re: (Score:2)
No, RedHat has been co-opting projects that give it a unique competitive edge. They pretty much own the KVM project, and now they don't have to compete with Citrix on the Xen platform. RHEL dropped support for Xen in version 6, at which point the Linux kernel devs retorted by putting Xen support into the kernel.
Coincidentally, I have been comparing the two over the last week and have found the Xen project to have fewer barriers to installing and using than the RH KVM. Much of the subscription model of RH makes installing and using KVM quite painful. I'm sure it's great for what it does, but it seems the barriers to entry are also quite high.
I'm looking forward to applying SOA tools to Xen, as it seems to be quite a good fit.
Doesn't Amazon... (Score:1)
Coolest use of XEN that I know of- (Score:4, Interesting)
http://qubes-os.org/ [qubes-os.org]
It gives you hardware-enforced security for your desktop.