Work Underway To Return Xen Support To Fedora 13

Julie188 writes "Details on this are admittedly sketchy, but both Red Hat and Xen.org have gone on record promising that some kind of support for the Xen hypervisor is forthcoming for Fedora users. As we know, on Monday, Fedora 13 was released, chock full of features to appeal to business users. One of the ballyhooed improvements to 13 is virtualization — meaning KVM and only KVM — for Red Hat. Xen was dropped from Fedora a few releases ago and it hasn't come back in 13, except that 13 still supports Xen guests. Meanwhile, 'work is underway in Xen.org to add platform support to Fedora 13 post-release,' promises Xen.org's Ian Pratt."
  • KVM catches Xen (Score:5, Informative)

    by Pecisk ( 688001 ) on Wednesday May 26, 2010 @02:38AM (#32345456)

    As more new servers are deployed for virtualization, Xen's superiority over KVM is slowly but surely disappearing. First of all, all new hardware has virtualization acceleration support in its CPUs. For example, using KVM in combination with paravirtualized network and storage drivers (which are packaged and enabled by default in Ubuntu), an Ubuntu Server 10.04 LTS guest has the same speed and performance as one running under Xen. Huge improvements to the libvirt stack and virt-manager have also played a role here (yes, I know Xen can also use libvirt), as it keeps getting easier to deploy virtual machines, be it in a development or server environment. I worked exclusively with Xen for some three years, and most of the problems were kernel patching issues and, in fact, HVM support (because you still have to emulate some devices with qemu, which leaks memory like crazy; I guess that's the reason KVM now uses paravirtualized devices for network and storage). I don't have time to compile code for production servers, so if KVM is in the kernel and supported by the kernel team and the distribution, I will go for it. And I guess a lot of newcomers to virtualization will too.
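
    To illustrate, here's a minimal sketch of what such a KVM invocation with virtio disk and network devices looks like (the image path and sizes here are hypothetical):

    # Boot a guest with paravirtualized (virtio) block and net devices;
    # qemu has to emulate far less hardware this way.
    qemu-kvm -m 1024 \
      -drive file=/var/lib/libvirt/images/guest.img,if=virtio \
      -net nic,model=virtio -net tap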

    In a nutshell, the Xen devs shot themselves in the foot here. Had they agreed to be included in the main kernel tree and been more welcoming with patches, the competition would be more interesting.

    • Re:KVM catches Xen (Score:5, Informative)

      by hax0r_this ( 1073148 ) on Wednesday May 26, 2010 @03:22AM (#32345594)
      I was responsible for maintaining a Xen environment for about a year and a half and had much the same experience. We compile our own kernels, and in this regard Xen was a nightmare. Do I stick with a 2.6.18 kernel, which is the latest supported? If we do that, we have to make sure to get backported security fixes. Or do we use forward-ported Xen patchsets, which weren't all that reliable and were a pain in the ass to apply?

      We finally switched to KVM and suddenly life got a lot easier.

      (Going slightly off topic here.) For a while we used libvirt and the associated tools, then we discovered Ganeti [google.com], a project at Google which has made cluster management a breeze. Libvirt has a "network" driver, but really isn't designed to manage redundant virtualization clusters. Ganeti, on the other hand, is designed specifically for managing clusters, and takes care of all the dirty work like setting up and managing LVM and DRBD. You can build out a new virtual machine, complete with an operating system, in just one command, or even do it over the HTTP API. You can use Ganeti with KVM or Xen, but until/unless Xen is in mainline I won't be touching it.
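
      For the curious, that one-command instance creation looks roughly like this (node and instance names are hypothetical, and it assumes the debootstrap OS definition is installed on the cluster):

      # Create a DRBD-mirrored VM with a freshly installed OS in one command;
      # Ganeti handles the LVM volumes and the DRBD pairing itself.
      gnt-instance add -t drbd -o debootstrap+default --disk 0:size=10G \
        -n node1.example.com:node2.example.com vm1.example.com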
      • Re: (Score:2, Informative)

        by Anonymous Coward
        I rolled out a small number of KVM virtual machines at a remote site (6000 miles remote), and looked at Ganeti to manage them. It looked good, but in practice it was designed for much larger clusters than we had, and in the end it just flat out didn't work. It also didn't help that KVM has apparently only been supported since Ganeti 2.0 was released and that the documentation is very Xen-centric.

        After a little more research I became a little disappointed by the current state of virtualisation tools. They'
      • Re: (Score:3, Informative)

        Some comments: the Xen hypervisor (xen.gz) is not meant to be integrated into the Linux kernel. Xen is designed to be a separate piece of software: a secure, type-1 bare-metal hypervisor, not a module for Linux. The Xen dom0 ("service console") can be Linux, NetBSD or OpenSolaris; most people use Linux. When Linux is used as dom0 it needs to be able to run as Xen dom0 (obviously), and this is where some people have had pain. For a long time the official Xen dom0 kernel patches were only available
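
        (To make the dom0 concept concrete: on a running Xen box the management domain shows up in the toolstack as just another, privileged, domain. The output looks roughly like this:)

        xm list
        # Name        ID   Mem VCPUs      State   Time(s)
        # Domain-0     0  1024     2     r-----     123.4
        # guest1       1   512     1     -b----      45.6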
      • Re: (Score:3, Informative)

        Not every OS shuns Xen as a dom0 like Linux seems to. I run OpenSolaris; this is what it took to get it running:

        pfexec pkg install xvm-gui
        pfexec svcadm enable milestone/xvm
        pfexec reboot

        That's it. I wanted to use Linux. I read every single manual I could. There seemed to be 10 different ways to do the same thing and the documentation was never quite there. I could just never get it working the way I wanted to.

        With virsh and virt-manager setting up another OS is cake. I'm running Debian64 as DomU and Windows7 un
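
        (As a sketch of that virsh workflow, with a hypothetical domain name; virsh drives the libvirt daemon the same way whether the backend is Xen or KVM:)

        virsh list --all        # show all defined guests
        virsh start debian64    # boot the DomU
        virsh console debian64  # attach to its text console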

        • by drsmithy ( 35869 )

          That's it. I wanted to use Linux. I read every single manual I could. There seemed to be 10 different ways to do the same thing and the documentation was never quite there.

          Well that's pretty much Linux <anything> in a nutshell. ;)

      • My hope is to get Xen in mainline for a quasi-microkernel. Though it would take some work - here is my initial concept - libfuse and <insert favorite fs here> as a domU guest, Linux as dom0, and exporting the fuse kernel module interface as a vdev. Think about it - native ZFS and NTFS performance, no licensing issues. Don't forget - Xen is based on the Nemesis microkernel. And that's just the beginning... Any takers? I'm not much of a programmer...
    • by thule ( 9041 ) on Wednesday May 26, 2010 @03:35AM (#32345656) Homepage
      Plus, KVM has xenner, which provides Xen-compatible devices to virtual machines. I also saw some patches going into KVM that provide Hyper-V hypercalls. Right now they are fairly basic, but it is a start.

      There is no doubt that KVM is the future. It is built into the kernel -- no dom0 patches required. Red Hat is heavily investing in it. Note the sponsored oVirt project, which integrates libvirt and FreeIPA to manage a network of virtual machine servers using Kerberos and LDAP as the security framework.
        • Last time I looked, KVM was orders of magnitude slower than everything else. Has that changed now? I want something that at least beats Xen.

        • Re: (Score:3, Interesting)

          by thule ( 9041 )
          I'm guessing the only reason KVM was slower was that it didn't have special virtual drivers. It does have block and network drivers now, but NO video drivers. Since KVM is more focused on server performance than graphics, I have never had an issue with graphics speed.
    • by Anonymous Coward

      I've been using virtualization on non-x86 hardware for over 10 years. The ideas are not new to me.

      I've used VMware, Xen, KVM, QEMU, VirtualBox, LMU, Jails, on Linux-based and BSD-based installs.

      I started testing KVM in our lab about 6 months ago to be ready for the releases that have all come out recently. I've not been impressed. Performance sucks when compared to Xen and that is putting it nicely. Xen is lighter than VMware and VirtualBox. I don't really care about the controlling tools provided the CLI c

      • Is ESX so much faster than VirtualBox? In my experience VirtualBox beats VMware in quite a few areas (though, sadly, not in networking). And the most recent version, 3.2, with fully asynchronous I/O, widened the gap further.

        It's almost to the point that having VirtualBox run a VM in a file on ext2 beats VMware running the same VM with its "partition filesystem" in normal setups.

        • Re: (Score:3, Informative)

          Is ESX so much faster than VirtualBox?

          Yes, it is. At least it was in all of our internal testing, and every published benchmark I've ever seen.

          In my experience VirtualBox beats VMware in quite a few areas (though, sadly, not in networking). And the most recent version, 3.2, with fully asynchronous I/O, widened the gap further. It's almost to the point that having VirtualBox run a VM in a file on ext2 beats VMware running the same VM with its "partition filesystem" in normal setups.

          Are you comparing VirtualBox with VMware Server or VMware Desktop? VMware ESX/vSphere are completely different products. VirtualBox may in fact out-run the lower-end, OS-hosted VMware Server and VMware Desktop. But in general, it won't come close to anything from the VMware ESX/vSphere product line (which is a hypervisor that runs on bare metal).

            • Actually, ESX is a hypervisor that runs on an old Red Hat distro, last time I checked. Given the feature set of the newer VMware releases, it would amaze me if half of the new features (specifically the "new" hardware support) weren't just the result of a Linux kernel upgrade.

              Have you found a lot of benchmarks online? I've never seen more than a few, and I would require relatively recent versions of the hypervisors (vSphere 4 vs. VirtualBox 3.1 or 3.2, preferably) to be compared to find such benchmarks relevant.

            But I'd appreciate a fe

            • by Thundersnatch ( 671481 ) on Wednesday May 26, 2010 @09:41AM (#32347740) Journal

              Actually, ESX is a hypervisor that runs on an old Red Hat distro, last time I checked. Given the feature set of the newer VMware releases, it would amaze me if half of the new features (specifically the "new" hardware support) weren't just the result of a Linux kernel upgrade.

              No, the VMware ESX hypervisor runs on bare metal. The management console for ESX is based on an old Red Hat, but it just talks to the hypervisor via an API. In fact, the ESXi version doesn't even have the management console; you use the VMware client to manage the hypervisor.

              But I'd appreciate a few links with recent benchmarks if you have them.

              I'll see what I can dig up, but comparing VirtualBox to ESX isn't frequently done because they're so different. I recall that I saw benchmarks comparing VirtualBox to VMware Desktop, and then benchmarks comparing ESX to VMware Desktop, and did the transitive analysis.

              Our internal tests threw out VirtualBox (and VMware Server) as options after very simple IOmeter benchmarks. They were both dog-slow compared with ESX.

                • Our internal tests threw out VirtualBox (and VMware Server) as options after very simple IOmeter benchmarks. They were both dog-slow compared with ESX.

                Under which versions was this evaluation made? vSphere 3 or 4? VirtualBox 2 or 3?

                  • I think it was VMware ESX 3.5 (U1, I think) and VirtualBox 2.0 on Windows x64. It was mid-2008, so it was whatever was current then.
              • ESXi version doesn't even have the management console

                At the server (or via an iLO or DRAC connection), hit Alt+F1 and type "unsupported" (don't worry if you see nothing on screen, just keep typing), log in as the root user with your password, and there is your console.

                - Eric

    • Re: (Score:3, Insightful)

      by shallot ( 172865 )

      In a nutshell, the Xen devs shot themselves in the foot here. Had they agreed to be included in the main kernel tree and been more welcoming with patches, the competition would be more interesting.

      Why, yes, they have agreed to include their patches in the main kernel tree, but not all of them have the consent of the upstream kernel developers. The response of the Xen developers has been a significant refactoring of their code. Try reading the readily available documentation on the progress of merging Xen kernel patches upstream [xensource.com].

  • by Alain Williams ( 2972 ) <addw@phcomp.co.uk> on Wednesday May 26, 2010 @03:18AM (#32345580) Homepage
    I liked the idea of Xen and tried to use it for some time, however:
    • The (free) management tools were bad. If you wanted to do anything other than something quite simple, it was hard or impossible
    • The documentation was hopeless: not complete; not enough examples; out of date

    An exciting technology that I hoped to use, but just not up to the mark.

    • by shallot ( 172865 )

      Is this the right place to add more anecdotal "insight"? :p

      We're using it at work in dozens if not hundreds of hardware and software combinations, and it's not exciting at all, it just works. We use only its free management tools and rely only on its freely available documentation.

      HTH, HAND.

    • Re: (Score:2, Informative)

      This has pretty much changed lately. A lot of new documentation has been written on the wiki; for example, http://wiki.xensource.com/xenwiki/XenCommonProblems [xensource.com] has a lot of stuff and links to other new documentation pages. Have you heard of XCP (Xen Cloud Platform, http://www.xen.org/products/cloudxen.html [xen.org])? It's a full "Xen distribution" featuring an install CD, including everything needed for multi-host/pool management. No need to install custom kernels or anything. You can use OpenXenCenter (http://www.open
  • by Chris Snook ( 872473 ) on Wednesday May 26, 2010 @03:28AM (#32345618)

    Fedora, not RHEL, is really where Xen belongs anyway. It's exactly the sort of mix of neat ideas, dirty hacks, and blatant wheel re-invention that could only have come from academia, and it was only ever made enterprise-grade by throwing heaps of money at it, and even then only for carefully tested configurations. Yes, it's pretty much single-handedly responsible for commoditizing virtualization, but the combination of the design and the lack of cooperation with the kernel community made it a nightmare to support. Xen is responsible for the existence of KVM because it showed such immense promise, and then delivered extreme frustration and pain.

    Since Xen decided long ago it was going to be the center of its own universe, it's really in a great position to do cool experimental things that the kernel community would be more cautious about and the enterprise market wouldn't touch with a 10' pole without seeing a strong proof of concept first. That kind of innovation is a stated goal of the Fedora project.

    The only technical advantage Xen enjoys right now is a lack of dependency on hardware virtualization features. Since it's impossible to buy a new machine that you can call a server with a straight face that lacks hardware virtualization, this is meaningless in the enterprise world, but Fedora (like other community distros) has a much broader scope, so there's still a real chance there for necessity to give birth to more invention, much like it did in the early days of Xen when x86 hardware virtualization was still a whisper in the halls at Intel and AMD.
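
    (For what it's worth, checking whether a given box has those hardware features is a one-liner on Linux; a count of 0 here means full virtualization is out and Xen-style paravirtualization is the only option:)

    # Count CPUs advertising Intel VT-x (vmx) or AMD-V (svm)
    egrep -c 'vmx|svm' /proc/cpuinfo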

    Of course, Xensource/Citrix has already driven away most of the community that would have done this kind of pre-product development, so I'm not holding my breath, but it would be nice to see something more to come of all that work (and years of my own life) beyond simply supporting existing users.

    • The only technical advantage Xen enjoys right now is a lack of dependency on hardware virtualization features. Since it's impossible to buy a new machine that you can call a server with a straight face that lacks hardware virtualization, this is meaningless in the enterprise world, but Fedora (like other community distros) has a much broader scope, so there's still a real chance there for necessity to give birth to more invention, much like it did in the early days of Xen when x86 hardware virtualization was still a whisper in the halls at Intel and AMD.

      I do agree with the post in general, but I wouldn't belittle the fact that Xen doesn't need hardware support. I have already been affected by the dropping of Xen from distributions. I work in academia, so I have to work with older, but completely functional, machines.

      We've tried to set up our own internal Eucalyptus cloud, but ran into trouble when it wanted to use KVM by default (can't recall which installation this was). Also, for other basic virtual machines we'd need HW support, which we don't have in

      • by Viol8 ( 599362 )

        "Also for other basic virtual machines, we'd need HW support"

        Why? Just use vmware or similar.

        • Why? Just use vmware or similar.

          Well, we'd still need some level of performance. Of course qemu/VMware would work, but the performance difference vs. Xen is pretty huge.

      • Among the new enterprise distributions, SUSE Linux Enterprise Server 11 (SLES 11) SP1 fully supports Xen, including the new Xen 4.0 hypervisor and a Linux 2.6.32-based dom0 kernel. openSUSE also has Xen included and supported. The upcoming Debian 6.0 ("Squeeze") will also have Xen (including dom0).
      • I don't mean to belittle the significance of Xen's paravirtualization approach, but rather to point out that it's much more suitable for environments like the ones that tend to use Fedora than the ones that tend to use RHEL. Xen tried to be a commercial product way before it was technically ready for it, simply because there was so much market demand for commodity virtualization. Maybe it's ready now, but I wouldn't know, because I gave up on it as soon as KVM reached feature parity, because it didn't req

    • Re: (Score:1, Interesting)

      by Anonymous Coward

      If you think hardware virtualization is fast, you need to revisit what hardware virtualization is and how KVM, Xen, and Hyper-V implement it.
      Please do not jump to conclusions without knowing the concepts and the implementations.
      Paravirtualization is faster and a much better approach. Do some benchmarks yourself and you will see: have some of your VMs run in HVM mode and run benchmarks, then deactivate HVM, use a paravirtualized kernel for the VMs, and run the benchmarks again.... you will get pretty conclusive res

      • For one generation of CPUs you were right: hardware virtualization was slower. Since then we've gotten hardware nested paging, which eliminates the TLB penalty, and virtio device drivers, which give us paravirtualized I/O devices without having to coordinate the entire memory space between guest and host.
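
        (As a rough check, on kernels recent enough to export these flags, nested paging shows up in /proc/cpuinfo too:)

        # Intel EPT / AMD NPT, as the kernel reports them
        egrep -o 'ept|npt' /proc/cpuinfo | sort -u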

        KVM decided to be stable first, THEN fast, and it succeeded. Xen decided to be fast first, THEN stable. The results are a clear lesson in the ills of premature optimization.

    • Re: (Score:1, Offtopic)

      by shallot ( 172865 )

      ... a nightmare to support ...

      ... extreme frustration and pain ...

      ... market wouldn't touch with a 10' pole ...

      You sound like you have an axe to grind.

      • Yes, I have an axe to grind. I worked in support for Red Hat back when RHEL 5 was released, before the switch to KVM. There were rumors going around engineering about switching to KVM as early as RHEL 5.1, because as unready as KVM was at the time, Xen was a disaster. We shed enough blood to get Xen usable until KVM had good paravirt I/O, so that got deferred for a few updates. Prior to leaving Red Hat Support, I'd get bug reports from customers, and track down patches in xen-unstable that were summarize

        • by shallot ( 172865 )

          product that just wasn't absolutely rock solid.

          I no longer work there, but last I heard those financial services customers were much more interested in KVM than they ever were in Xen.

          This doesn't really make much sense, as it goes against the conventional wisdom of software development - newly written or younger software by default scores worse on the stability front compared to old software, even if old software was known to be buggy - exactly because of the simple fact that someone somewhere already dealt with many of its quirks, and hopefully put it in order, whereas there is inherently less proof that anyone ever did that with novelty software. There are exceptions, obviously, but I

    • Of course, Xensource/Citrix has already driven away most of the community that would have done this kind of pre-product development, so I'm not holding my breath

      I'm curious (genuinely curious, not being snarky)...I've had a bad experience with Xen too, but I haven't kept up with what Xensource/Citrix have been doing. Why do you say they've driven away the community?

  • by Viol8 ( 599362 ) on Wednesday May 26, 2010 @06:56AM (#32346386) Homepage

    Seems to me they mostly get used to run multiple OSes that each run a single main app. Last time I looked, modern OSes are quite capable of running multiple apps at the same time, so unless you really need to run different OSes on the same machine (er, why?), what exactly is the point?

    • by betterunixthanunix ( 980855 ) on Wednesday May 26, 2010 @08:45AM (#32347074)
      Here is a scenario for you: you have some old server that is going to be decommissioned, but it is running some important software, and perhaps a different operating system from all of your other servers. On the one hand, you can go ahead and configure a new system...or with virtualization, you can simply take a snapshot of the hard disk image and run a VM on some other server (perhaps one that is underutilized).

      In general, the point of virtualization in a server room is flexibility. VMs are easy to move around, and you do not have to be tied to a single operating system on a single physical machine (I hear the most common use of KVM is to run Exchange on a system with a Linux hypervisor), which is particularly useful for sharing hardware resources when you have to deal with software that only runs on a specific OS. VMs also make it easy to checkpoint a system; KVM has "copy on write" disk images, for example, which track changes from a base image, which could be a COW image itself.
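
      A quick sketch of those copy-on-write images with qemu-img (file names are hypothetical):

      qemu-img create -f qcow2 base.qcow2 10G           # base image
      qemu-img create -f qcow2 -b base.qcow2 vm1.qcow2  # overlay tracking changes
      qemu-img create -f qcow2 -b vm1.qcow2 snap1.qcow2 # overlays can stack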

      IBM did a lot of research on virtualization use, and I believe they discovered uses for VMs within VMs up to four layers deep (I am not really sure what they were doing).
    • by Shimbo ( 100005 )

      Seems to me they mostly get used to run multiple OSes that each run a single main app. Last time I looked, modern OSes are quite capable of running multiple apps at the same time... what exactly is the point?

      It can make management of large numbers of machines easier. If you've got half a dozen applications on a machine, owned by different groups, it can be a nuisance to manage upgrade cycles. Some groups may buy their own little servers to get around the problem; with virtualization, you can let the development folks run their own server *and* host it in the data centre.

      Another useful thing is splitting off the hardware support issues from the application support ones. Need to roll out a critical BIOS update ac

    • It's harder to see when you're not at the enterprise level. Picture being tasked with providing 10,000 identically configured desktop PCs that are intended to run a limited subset of applications in a locked-down manner.

      You have two choices. You can use actual desktop computers -- power-hungry, subject to failure at that number, and generally expensive to maintain at that scale. Or you can use VMs - a fairly small number of physical servers. Your desktop component can be a simple thin client device -- cheap, power efficient, much lower likelihood of failure, and - when they do fail - much simpler to manage as it takes a few minutes to replace the physical device. Your user never knows the difference, as he still connects to the same VM. If someone screws up a VM, you just rebuild it - an automated process that takes a few seconds. Compared to a PC which can require someone to physically be present at the machine; and at the very least will take 10-20 minutes to copy down the hard drive image over a network.

      Now let's say you're expanding: another 2000 users. With desktop PCs you have to order PCs -- hopefully with no major changes in hardware since you last ordered, because then your client apps need to be re-certified. Then you have to physically image the hard drives with the standard image, then deploy and set up the hardware at each workstation.

      On the other hand, if you were using VMs you would add 20-40 servers to the cluster and bring the VMs online. The configuration is completely automated, with a human being required only to press "go". You still have to purchase the thin client devices, but even considering the new servers you bought, the cost is *far* less than buying 2000 PCs. Too, it doesn't matter if the specs change on the thin client devices, because their sole purpose is to connect to the VM. The ongoing cost remains much lower, and setup consists of plugging it in and flashing the ROM. (Though at that size, there's a fair chance you can work out a deal with the manufacturer to do it for you.)

      As far as personal/desktop use - good for testing apps on different OSs, and playing with new OSs. Also good from a security perspective (run your web browsers in a VM, and questionable software. Roll it back daily or whenever you think there's been a compromise.) Also good if you're running Linux but have a couple of needed apps that don't work on WINE.

      • Here's a business idea: single-purpose, hardware-specialized X11 servers getting SCSI RDMA over Ethernet. Run Windows HVM Xen guests on the server side, with FBSD/ZFS on dom0, with a ro snapshot of the system partition of a base system, and with each ~/ on a separate vdev. You still pay Windows licenses, but management is simplified, and hardware upgrades as well. Run that on an ATAoE SAN, and you're golden. The only non-commodity part is the thin client - probably a customized ARM SoC with a tweaked GPU. Oh, and to save on l
        • Sorry for replying to myself, but I realized that with virtualized hardware and Win7 registry virtualization, my approach isn't nearly as interesting as when applied to XP, so there [golod.com]. Hell, you can go crazy and offload TCP connections to the host with an appropriate driver and a virtio Ethernet dev. Why the hell run a firewall and shit when you can use a much more powerful TCP stack to begin with?
    • by drsmithy ( 35869 )

      Seems to me they mostly get used to run multiple OSes that each run a single main app. Last time I looked, modern OSes are quite capable of running multiple apps at the same time, so unless you really need to run different OSes on the same machine (er, why?), what exactly is the point?

      Isolating a single application to a single server is the ideal situation. It dramatically simplifies change management, troubleshooting and dealing with special requirements.

      All modern OSes are, indeed, quite capable of r

  • We recently purchased RHEL 5 Advanced Server after a few months of trying out Red Hat's virtualization options. One of the questions we had was whether Xen or KVM was the right tool to use going forward -- because we didn't want to adopt something and then have it be replaced by the new shiny almost immediately afterwards. We want to use a single virtualization platform for all our servers, which means a migration from ESXi. I had already set up Xen and was starting to benchmark it.

    We were emphatically told by sev
    • It seems that Red Hat is only going to support KVM in RHEL, regardless of what the Fedora community does. Really, that is not so uncommon -- Fedora has a lot of things RHEL does not have (look at the "spins").
    • by suso ( 153703 ) *

      I can tell you now that it really doesn't matter what you choose. It will eventually be replaced by something shinier and more popular/modern. Such is the way of computers, especially in the open source world. IMHO, KVM became popular because a bunch of people who weren't really using Xen decided that they liked it better and didn't give consideration to the real issues at hand. And now that they've started using KVM for real work, they are running into those issues and regretting the whole thing. I've ha

      • Do you have servers that are handling thousands/millions of transactions per day? Or are you just running a departmental webserver with just a few users?

        Since we're using RHEL AS, which starts at $1500 per year, it's not a departmental webserver.

        When we started testing KVM in the initial RHEL 5 release, KVM setup was needlessly difficult -- both Xen and KVM were installed, a Xen kernel was selected by default, and dom0 was set up and started automatically. You can imagine my confusion as a new KVM user.

        Now it's just the opposite, as of 5.5 it appears that KVM is the simpler choice. It also seems to be much faster -- although there was a tweak to our networ

    • It's mainly the Xen folks who are trying to get Xen supported in Fedora 13, rather than the other way around. If the Xen guys manage to get Xen merged into the kernel, Fedora probably won't have problems enabling it or offering alternative packages to use it. But KVM is still the focus.

    • RHEL5 will fully support Xen until 2014 (which is when RHEL5 goes EOL); Red Hat has stated this multiple times. Also, the upcoming RHEL6 runs as a Xen guest on a RHEL5 Xen host/dom0. So there's no need to switch away from Xen.
    • by b0bby ( 201198 )

      We want to use a single virtualization platform for all our servers, which means a migration from ESXi.

      I'm curious as to the advantages of moving away from ESXi - I'm in a much smaller environment than what yours sounds like, but I've been happy with ESXi for all the guests I've run on it (Windows & various Linux distros). Before spending real money on it, though, I'm looking at other options. To me the ESXi hypervisor idea seems great - next to nothing running on the box means less to go wrong etc. The VSphere stuff seems to give all the bells & whistles I would need, and the price isn't excessive f

      • The biggest reasons are that the primary systems we want to virtualize are RHEL5, so I want to stick with an RHEL5 host, and that ESXi only uses 4 cores, which is not enough for our terminal servers... our users are starved for CPU time while the blade itself is only partially utilized. In general ESXi has been great until we get a heavy load; then there are issues. Other reasons -- relayed to me by my VM sysadmin; I'm just the Linux guy -- include the need to re-convert to copy a VM and its reliance on a proprietary
        • by b0bby ( 201198 )

          Thanks for that - I hadn't thought of the 4-cores issue. I have been a little frustrated with the difficulty of moving images around, though. Maybe I'll dig out an old server and do some playing with alternatives; I have a buddy who loves Xen, but I haven't done anything with it or KVM.

    • by drsmithy ( 35869 )

      We want to use a single virtualization platform for all our servers, which means a migration from ESXi.

      You're crazy to be moving away from ESXi, then. Especially to RHEL, where the builtin management capabilities around virtualisation are pathetic, to say the least.

      We were emphatically told by several people at Red Hat (2 salespeople, our dedicated support engineer, and 2 other support staff) that Xen was the wrong direction and that only KVM would be supported in the future AND that existing support fo

  • Are there any competitive KVM solutions compared with Citrix's XenServer? The ease of management is great in XenServer. Although, ironically, it's IMO harder to use Linux guests than Windows guests, since they only support a limited set of distros (and versions). If I want to run Gentoo in a XenServer environment, that's a lot of work with swapping kernels and grub config, etc. That's just not feasible, so I'm forced to run CentOS, which I kinda dislike. Alternatives out there? How good are the standard KVM tools?
  • Fedora 13 contains the Xen hypervisor and tools, but it doesn't contain an rpm package for a Xen dom0-capable kernel. There are unofficial Fedora rpm packages for a Xen dom0-capable kernel, based on the upstream pvops dom0 kernels (Linux 2.6.32). For more information about Fedora Xen status and links to rpms, see http://fedoraproject.org/wiki/Features/XenPvopsDom0 [fedoraproject.org]. For more information about available Xen dom0 kernel options, see http://wiki.xensource.com/xenwiki/XenDom0Kernels [xensource.com].
  • I saw a live virtual machine migration across processor families and types [youtube.com]. Wish they would release that tech soon rather than work on getting Xen support back.

  • Does this version of glibc.x86_64 make my butt look big?

  • Maybe this will force the Xen people to write maintainable code so we don't have to keep using 2.6.18 and backport everything.

    • Xen dom0 patches are available for 2.6.27, 2.6.29, 2.6.31, 2.6.32, 2.6.33, etc. Most people prefer 2.6.32 atm, since that's the long-term maintained kernel from both kernel.org and xen.org. For more information: http://wiki.xensource.com/xenwiki/XenDom0Kernels [xensource.com]. Xen developers are also busy preparing (and rewriting) the Xen dom0 support for the mainline Linux Xen pvops framework, which has been in upstream Linux since 2.6.24.
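
      For those who haven't done it, applying one of those patchsets follows the usual kernel-build shape (the patch file name here is hypothetical; some dom0 trees ship as git branches instead, see the wiki link above):

      cd linux-2.6.32
      patch -p1 < xen-dom0-patches-2.6.32.patch  # hypothetical patchset name
      make oldconfig && make -j4 bzImage modules
      make modules_install install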
  • Use the myoung dom0 repo. I've been running a Fedora 12 Dom0 for a few months now.

    I so want to like KVM, but it's still fragile. I'll continue to help file and test KVM bugs so it's better than Xen at some point.

"If it ain't broke, don't fix it." - Bert Lantz

Working...