Work Underway To Return Xen Support To Fedora 13

Julie188 writes "Details on this are admittedly sketchy, but both Red Hat and Xen.org have gone on record promising that some kind of support for the Xen hypervisor is forthcoming for Fedora users. As we know, on Monday, Fedora 13 was released, chock full of features to appeal to business users. One of the ballyhooed improvements to 13 is virtualization — meaning KVM and only KVM — for Red Hat. Xen was dropped from Fedora a few releases ago and it hasn't come back in 13, except that 13 still supports Xen guests. Meanwhile, 'work is underway in Xen.org to add platform support to Fedora 13 post-release,' promises Xen.org's Ian Pratt."
  • by Alain Williams ( 2972 ) <addw@phcomp.co.uk> on Wednesday May 26, 2010 @03:18AM (#32345580) Homepage
    I liked the idea of Xen and tried to use it for some time; however:
    • The (free) management tools were bad: if you wanted to do anything beyond the quite simple, it was hard or impossible
    • The documentation was hopeless: incomplete, short on examples, and out of date

    An exciting technology that I hoped to use, but just not up to the mark.

  • by Chris Snook ( 872473 ) on Wednesday May 26, 2010 @03:28AM (#32345618)

    Fedora, not RHEL, is really where Xen belongs anyway. It's exactly the sort of mix of neat ideas, dirty hacks, and blatant wheel re-invention that could only have come from academia, and it was only ever made enterprise-grade by throwing heaps of money at it, and even then only for carefully tested configurations. Yes, it's pretty much single-handedly responsible for commoditizing virtualization, but the combination of the design and the lack of cooperation with the kernel community made it a nightmare to support. Xen is responsible for the existence of KVM because it showed such immense promise, and then delivered extreme frustration and pain.

    Since Xen decided long ago it was going to be the center of its own universe, it's really in a great position to do cool experimental things that the kernel community would be more cautious about and the enterprise market wouldn't touch with a 10' pole without seeing a strong proof of concept first. That kind of innovation is a stated goal of the Fedora project.

    The only technical advantage Xen enjoys right now is that it does not depend on hardware virtualization features. Since it's impossible to buy a new machine that lacks hardware virtualization and can still be called a server with a straight face, this is meaningless in the enterprise world; but Fedora (like other community distros) has a much broader scope, so there's still a real chance for necessity to give birth to more invention, much as it did in the early days of Xen, when x86 hardware virtualization was still a whisper in the halls at Intel and AMD.

    Of course, Xensource/Citrix has already driven away most of the community that would have done this kind of pre-product development, so I'm not holding my breath, but it would be nice to see something more to come of all that work (and years of my own life) beyond simply supporting existing users.

  • Re:KVM catches Xen (Score:3, Insightful)

    by shallot ( 172865 ) on Wednesday May 26, 2010 @07:13AM (#32346454)

    In a nutshell, the Xen devs shot themselves in the foot here. Had they agreed to be included in the main kernel tree and been more accommodating with their patches, this would be a more interesting competition.

    Why, yes, they have agreed to include their patches in the main kernel tree; it's just that not all of those patches have won the consent of the upstream kernel developers, and the Xen developers' response has been a significant refactoring of their code. Try reading the readily available documentation on the progress of merging Xen kernel patches upstream [xensource.com].

  • by betterunixthanunix ( 980855 ) on Wednesday May 26, 2010 @08:45AM (#32347074)
    Here is a scenario for you: you have some old server that is going to be decommissioned, but it is running some important software, and perhaps a different operating system from all of your other servers. You can either go ahead and configure a new system from scratch, or, with virtualization, simply take a snapshot of the hard disk image and run it as a VM on some other server (perhaps one that is underutilized), as in the sketch below.
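    A minimal sketch of that disk-snapshot step, written in Python around the stock qemu-img tool; the paths here are invented for illustration, and it assumes you have already copied the old server's disk off as a raw image:

        #!/usr/bin/env python3
        """Turn a raw disk image from a decommissioned server into a qcow2
        image that a KVM guest can boot. All paths are hypothetical."""
        import subprocess

        SOURCE = "/mnt/archive/old-server-sda.img"       # raw dd copy of the old disk
        TARGET = "/var/lib/libvirt/images/legacy.qcow2"  # disk for the new VM

        # qemu-img convert reads the raw image and writes a compact qcow2
        # copy; -O selects the output format.
        subprocess.run(["qemu-img", "convert", "-O", "qcow2", SOURCE, TARGET],
                       check=True)
        print("Wrote %s; attach it to a new guest definition." % TARGET)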

    In general, the point of virtualization in a server room is flexibility. VMs are easy to move around, and you are not tied to a single operating system on a single physical machine (I hear the most common use of KVM is running Exchange on a system with a Linux hypervisor), which is particularly useful for sharing hardware resources when you have to deal with software that only runs on a specific OS. VMs also make it easy to checkpoint a system; KVM has "copy on write" disk images, for example, which track changes against a base image, and that base image could itself be a COW image.
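    Those "copy on write" images are qcow2 backing chains; a small sketch of creating one, again assuming qemu-img and made-up paths:

        import subprocess

        BASE = "/var/lib/libvirt/images/base.qcow2"       # pristine golden image
        OVERLAY = "/var/lib/libvirt/images/guest1.qcow2"  # per-guest COW layer

        # The overlay stores only the blocks that diverge from BASE, so BASE
        # is never written to and many guests can share it. -F declares the
        # backing file's format explicitly rather than letting qemu probe it.
        subprocess.run(["qemu-img", "create", "-f", "qcow2",
                        "-b", BASE, "-F", "qcow2", OVERLAY],
                       check=True)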

    IBM did a lot of research on virtualization use, and I believe they found uses for running VMs within VMs up to four layers deep (I am not really sure what they were doing).
  • It's harder to see when you're not at the enterprise level. Picture being tasked with providing 10,000 identically configured desktop PCs that are intended to run a limited subset of applications in a locked-down manner.

    You have two choices. You can use actual desktop computers -- power-hungry, subject to failure at that number, and generally expensive to maintain at that scale. Or you can use VMs running on a fairly small number of physical servers. Your desktop component can be a simple thin client device -- cheap, power-efficient, much less likely to fail, and, when one does fail, much simpler to manage, since replacing the physical device takes a few minutes. Your user never knows the difference, as he still connects to the same VM. If someone screws up a VM, you just rebuild it, an automated process that takes a few seconds. Compare that to a PC, which can require someone to be physically present at the machine and will, at the very least, take 10-20 minutes to copy down the hard drive image over the network.

    Now let's say you're expanding -- another 2000 users. With desktop PCs you have to order PCs -- hopefully with no major changes in hardware since you last ordered, because then your client apps need to be re-certified. Then you have to physically image the hard drives with the standard image, then deploy and set up the hardware at each workstation.

    On the other hand, if you were using VMs, you would add 20-40 servers to the cluster and bring the VMs online. The configuration is completely automated, with a human being required only to press "go" (see the sketch below). You still have to purchase the thin client devices, but even counting the new servers, the cost is *far* less than buying 2000 PCs. It also doesn't matter if the specs change on the thin client devices, because their sole purpose is to connect to the VM. The ongoing cost remains much lower, and setup consists of plugging one in and flashing the ROM. (Though at that scale, there's a fair chance you can work out a deal with the manufacturer to do that for you.)
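    What that "press go" step might look like as a script: the sketch below drives qemu-img plus the stock virt-install tool, with the names, sizes, and counts all invented for illustration:

        import subprocess

        BASE = "/var/lib/libvirt/images/desktop-base.qcow2"  # shared golden image

        def provision(count):
            """Define and boot `count` identical locked-down desktop VMs."""
            for i in range(count):
                name = "desktop%04d" % i
                disk = "/var/lib/libvirt/images/%s.qcow2" % name
                # Thin copy-on-write disk on top of the shared base image.
                subprocess.run(["qemu-img", "create", "-f", "qcow2",
                                "-b", BASE, "-F", "qcow2", disk], check=True)
                # --import boots the existing disk instead of running an
                # installer; --noautoconsole keeps the loop non-interactive.
                subprocess.run(["virt-install", "--name", name,
                                "--memory", "1024", "--vcpus", "1",
                                "--disk", disk, "--import",
                                "--network", "network=default",
                                "--graphics", "vnc", "--noautoconsole"],
                               check=True)

        provision(2000)  # the expansion from the example above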

    As far as personal/desktop use goes: it's good for testing apps on different OSes and playing with new OSes. It's also good from a security perspective: run your web browsers, and any questionable software, in a VM, and roll it back daily or whenever you think there's been a compromise (a sketch follows). It's also handy if you're running Linux but need a couple of apps that don't work under Wine.
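    The roll-it-back idea is a two-command job with KVM/libvirt snapshots; a sketch assuming a guest that happens to be named browser-vm:

        import subprocess

        VM = "browser-vm"  # hypothetical guest dedicated to web browsing

        def checkpoint(name="clean"):
            # Record the guest's current state under a named snapshot.
            subprocess.run(["virsh", "snapshot-create-as", VM, name], check=True)

        def rollback(name="clean"):
            # Throw away everything since the snapshot; run this daily from
            # cron, or whenever you suspect a compromise.
            subprocess.run(["virsh", "snapshot-revert", VM, name], check=True)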
