
Work Underway To Return Xen Support To Fedora 13

Julie188 writes "Details on this are admittedly sketchy, but both Red Hat and Xen.org have gone on record promising that some kind of support for the Xen hypervisor is forthcoming for Fedora users. As we know, on Monday, Fedora 13 was released, chock full of features to appeal to business users. One of the ballyhooed improvements to 13 is virtualization — meaning KVM and only KVM — for Red Hat. Xen was dropped from Fedora a few releases ago and it hasn't come back in 13, except that 13 still supports Xen guests. Meanwhile, 'work is underway in Xen.org to add platform support to Fedora 13 post-release,' promises Xen.org's Ian Pratt."
  • KVM catches Xen (Score:5, Informative)

    by Pecisk ( 688001 ) on Wednesday May 26, 2010 @02:38AM (#32345456)

    As more new servers are deployed for virtualization, Xen's superiority over KVM is slowly but surely disappearing. First of all, all new hardware has virtualization acceleration support in the CPU. For example, using KVM in combination with paravirtualized network and storage drivers (which Ubuntu packages and uses by default; a rough sketch of such an invocation is at the end of this comment), Ubuntu Server 10.04 LTS gets the same speed and performance as a guest as it does under Xen. Huge improvements to the libvirt stack and virt-manager have also played a role here - yes, I know Xen can also use libvirt, but still - as it keeps getting easier to deploy virtual machines, be it in a development or a server environment. I worked exclusively with Xen for some three years, and most of the problems were kernel patching issues and, frankly, HVM support (because you still have to emulate some devices with qemu, which leaks like crazy; I guess that's the reason KVM now uses paravirtualized devices for net/storage). I don't have time to compile code for production servers, so if KVM is in the kernel and is supported by the kernel team and the distribution, I will go for it. I was reserved at first, and I guess a lot of newcomers to virtualization will go for it too.

    In a nutshell, the Xen devs shot themselves in the foot here. Had they agreed to get into the main kernel tree and been more welcoming with patches, the competition here would be a lot more interesting.
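
    To make the "paravirtualized network and storage drivers" point concrete, here is a rough sketch of a qemu-kvm invocation with virtio disk and network devices - the image path, tap device and MAC address are made-up examples, not anything from a real setup:

    # boot a KVM guest with virtio (paravirtualized) block and network devices
    kvm -m 1024 -smp 2 \
        -drive file=/var/lib/libvirt/images/guest.img,if=virtio,cache=none \
        -net nic,model=virtio,macaddr=52:54:00:12:34:56 \
        -net tap,ifname=tap0 \
        -vnc :1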

  • Re:KVM catches Xen (Score:5, Informative)

    by hax0r_this ( 1073148 ) on Wednesday May 26, 2010 @03:22AM (#32345594)
    I was responsible for maintaining a Xen environment for about a year and a half and had much the same experience. We compile our own kernels, and in this regard Xen was a nightmare. Do I stick with a 2.6.18 kernel, which is the latest supported? If we do that, we have to make sure to get backported security fixes. Or do we use the forward-ported Xen patchsets, which weren't all that reliable and were a pain in the ass to apply?

    We finally switched to KVM and suddenly life got a lot easier.

    (Going slightly off topic here.) For a while we used libvirt and the associated tools, then we discovered Ganeti [google.com], a project at Google which has made cluster management a breeze. Libvirt has a "network" driver, but it really isn't designed to manage redundant virtualization clusters. Ganeti, on the other hand, is designed specifically for managing clusters, and takes care of all the dirty work like setting up and managing LVM and DRBD. You can build out a new virtual machine, complete with an operating system, in just one command, or even do it over the HTTP API. You can use Ganeti with KVM or Xen, but until/unless Xen is in mainline I won't be touching it.
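
    Just to show what "one command" looks like in practice, here is a rough sketch of a Ganeti instance creation. The node names, OS definition and sizes are hypothetical, and the exact flags differ a bit between Ganeti versions:

    # create a DRBD-mirrored instance across two nodes, installing an OS as it goes
    gnt-instance add -t drbd -n node1.example.com:node2.example.com \
        -o debootstrap -s 10G -B memory=2048,vcpus=2 \
        vm1.example.com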
  • Re:KVM catches Xen (Score:2, Informative)

    by Anonymous Coward on Wednesday May 26, 2010 @06:52AM (#32346356)
    I rolled out a small number of KVM virtual machines at a remote site (6000 miles remote), and looked at Ganeti to manage them. It looked good, but in practice it was designed for much larger clusters than we had, and in the end it just flat out didn't work. It also didn't help that KVM has apparently only been supported since Ganeti 2.0 was released and that the documentation is very Xen-centric.

    After a little more research I became a little disappointed by the current state of virtualisation tools. They're either incredibly simple GUIs over qemu for managing one or two QEMU disk images on your desktop machine, or huge complex beasts like Ganeti that are so specialised to the original authors' needs that they're almost impossible to fit in anywhere else. In the end I just wrote a few simple scripts to clone and manage KVM instances, each no more than a couple of tens of lines of Bash, and we called it done. Sure, it isn't a large-scale virtualisation platform (we use vSphere 4 for our core infrastructure), but it works.
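
    For what it's worth, a "couple of tens of lines of Bash" really is enough. A minimal sketch of the idea, assuming a qcow2 base image and virt-install; all names and paths are hypothetical:

    #!/bin/bash
    # clone-vm.sh <name> - thin-clone a base image and boot it as a KVM guest
    set -e
    NAME="$1"
    BASE=/var/lib/libvirt/images/base.qcow2
    IMG=/var/lib/libvirt/images/${NAME}.qcow2

    # copy-on-write clone backed by the base image
    qemu-img create -f qcow2 -b "$BASE" "$IMG"

    # register the guest with libvirt and start it
    virt-install --name "$NAME" --ram 1024 --vcpus 1 \
        --disk path="$IMG",format=qcow2,bus=virtio \
        --import --noautoconsole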
  • by Anonymous Coward on Wednesday May 26, 2010 @07:09AM (#32346436)

    I've been using virtualization on non-x86 hardware for over 10 years. The ideas are not new to me.

    I've used VMware, Xen, KVM, QEMU, VirtualBox, LMU, Jails, on Linux-based and BSD-based installs.

    I started testing KVM in our lab about 6 months ago to be ready for the releases that have all come out recently. I've not been impressed. Performance sucks compared to Xen, and that's putting it nicely. Xen is lighter than VMware and VirtualBox. I don't really care about the management tools, provided the CLI commands work. We have custom commands for Xen that can easily be migrated to KVM.

    We're not going to use any virtualization that isn't part of the main production distributions, so Xen appears to be on the way out for us. I hope that KVM performance becomes similar and that the default setup matures to provide reasonable defaults with reasonable performance. Sadly, if I had to deploy new production VM infrastructure today, it would be ESX.

    Someday KVM may catch Xen, but it hasn't yet, at least in the current packaged repository releases.

  • by Thundersnatch ( 671481 ) on Wednesday May 26, 2010 @09:09AM (#32347290) Journal

    Is ESX so much faster than VirtualBox?

    Yes, it is. At least it was in all of our internal testing, and every published benchmark I've ever seen.

    In my experience VirtualBox beats VMware in quite a few areas (though, sadly, not in networking). And the most recent version, 3.2, with fully asynchronous I/O, widened the gap further. It's almost to the point that having VirtualBox run a VM in a file on ext2 beats VMware running the same VM with its "partition filesystem" in normal setups.

    Are you comparing VirtualBox with VMware Server or VMware Desktop? VMware ESX/vSphere are completely different products. VirtualBox may in fact out-run the lower-end, OS-hosted VMware Server and VMware Desktop. But in general, it won't come close to anything from the VMware ESX/vSphere product line (which is a hypervisor that runs on bare metal).

  • Re:KVM catches Xen (Score:3, Informative)

    by pasikarkkainen ( 925729 ) on Wednesday May 26, 2010 @09:24AM (#32347542)
    Some comments: the Xen hypervisor (xen.gz) is not meant to be integrated into the Linux kernel. Xen is designed to be a separate piece of software - a secure, type-1 bare-metal hypervisor, not a module for Linux. The Xen dom0 ("service console") can be Linux, NetBSD or OpenSolaris; most people use Linux. When Linux is used as dom0 it obviously needs to be able to run as a Xen dom0 - and this is where some people have had pain. For a long time the official Xen dom0 kernel patches were only available for Linux 2.6.18, which was difficult for many people and caused some distros to drop Xen dom0 kernel support because they couldn't afford to port the patches to newer kernels themselves.

    Today the situation is different. Xen developers are actively rewriting the Xen dom0 patches on top of the (already existing) upstream pvops framework, which has been in the upstream Linux kernel since 2.6.24. Xen pvops dom0 patches are available today for the long-term maintained 2.6.32 kernel, and also for 2.6.31, 2.6.33 and 2.6.34. Novell has also forward-ported the old/traditional Xenlinux patches from 2.6.18 first to 2.6.27 and then to 2.6.31, 2.6.32 and 2.6.33. So there are many options today. For more information about the various Xen dom0 kernels see: http://wiki.xensource.com/xenwiki/XenDom0Kernels [xensource.com]

    Xen.org also offers XCP (Xen Cloud Platform), which is a full platform including an installation CD and multi-host/pool management. If you use XCP you don't need to install custom kernels or anything - you get everything in the XCP bundle. More information about XCP: http://www.xen.org/products/cloudxen.html [xen.org]
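
    To make the "separate piece of software" point concrete: with a pvops dom0 you boot the xen.gz hypervisor first and the dom0 Linux kernel is loaded as a module, roughly like this GRUB (legacy) entry - the file names, partition and dom0 memory size are just examples:

    title Xen / Linux 2.6.32 pvops dom0
        root (hd0,0)
        kernel /boot/xen.gz dom0_mem=1024M
        module /boot/vmlinuz-2.6.32-xen root=/dev/sda2 ro console=tty0
        module /boot/initrd-2.6.32-xen.img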
  • Re:KVM catches Xen (Score:3, Informative)

    by 0100010001010011 ( 652467 ) on Wednesday May 26, 2010 @09:37AM (#32347704)

    Not every OS shuns Xen as a dom0 the way Linux seems to. I run OpenSolaris, and this is all it took to get it running:

    # install the xVM (Xen) packages
    pfexec pkg install xvm-gui
    # enable the xVM service milestone
    pfexec svcadm enable milestone/xvm
    # reboot into the Xen-enabled environment
    pfexec reboot

    That's it. I wanted to use Linux. I read every single manual I could. There seemed to be 10 different ways to do the same thing and the documentation was never quite there. I could just never get it working the way I wanted to.

    With virsh and virt-manager, setting up another OS is a piece of cake. I'm running 64-bit Debian as a domU and Windows 7 under HVM.
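
    For anyone trying the same thing, a rough sketch of what those two guest installs look like with virt-install - the disk paths, sizes, install URL and ISO path are hypothetical:

    # paravirtualized Debian domU, installed from a network mirror
    virt-install --connect xen:/// --paravirt --name debian-pv \
        --ram 1024 --disk path=/var/lib/xen/images/debian-pv.img,size=8 \
        --location http://ftp.debian.org/debian/dists/stable/main/installer-amd64/

    # fully virtualized (HVM) Windows 7 guest, installed from an ISO
    virt-install --connect xen:/// --hvm --name win7 \
        --ram 2048 --disk path=/var/lib/xen/images/win7.img,size=20 \
        --cdrom /srv/iso/win7.iso --vnc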

  • by Thundersnatch ( 671481 ) on Wednesday May 26, 2010 @09:41AM (#32347740) Journal

    Actually ESX is a hypervisor that runs on an old Red Hat distro, last time I checked. Given the feature set of the newer VMware releases it would amaze me if half of the new features (specifically the "new" hardware support) weren't just the result of a Linux kernel upgrade.

    No, the VMware ESX hypervisor runs on bare metal. The management console for ESX is based on an old Red Hat release, but it just talks to the hypervisor via an API. In fact, the ESXi version doesn't even have the management console; you use the VMware client to manage the hypervisor.

    But I'd appreciate a few links with recent benchmarks if you have them.

    I'll see what I can dig up, but comparing VirtualBox to ESX isn't done very often because they're such different products. I recall seeing benchmarks comparing VirtualBox to VMware Desktop, and benchmarks comparing ESX to VMware Desktop, and doing the transitive analysis.

    Our internal tests threw out VirtualBox (and VMware Server) as options after very simple IOmeter benchmarks. They were both dog-slow compared with ESX.

  • by pasikarkkainen ( 925729 ) on Wednesday May 26, 2010 @09:46AM (#32347814)
    This has changed quite a bit lately. A lot of new documentation has been written on the wiki; for example, http://wiki.xensource.com/xenwiki/XenCommonProblems [xensource.com] has a lot of material and links to other new documentation pages. Have you heard of XCP (Xen Cloud Platform, http://www.xen.org/products/cloudxen.html [xen.org])? It's a full "Xen distribution" with an install CD, including everything needed for multi-host/pool management. No need to install custom kernels or anything. You can use OpenXenCenter (http://www.openxencenter.com/) to manage it if you need a GUI tool.
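
    To give a flavour of XCP: it is managed with the same xe CLI that XenServer uses, so getting a guest running and building a pool is along these lines (template name, VM name and addresses are hypothetical and depend on what your XCP version ships):

    # pick a shipped template and install a VM from it
    xe template-list
    xe vm-install template="Debian Lenny 5.0 (32-bit)" new-name-label=vm1
    xe vm-start vm=vm1

    # join another host to the pool for multi-host management
    xe pool-join master-address=10.0.0.1 master-username=root master-password=secret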
