
Red Hat Wants Xen In Linux Kernel

DIY News writes "Red Hat is aggressively pushing to get Xen virtualization technology included in the Linux kernel as quickly as possible. This move comes as Microsoft is pushing its own virtualization products and recently relaxed some of its licensing requirements around Windows Server 2003 to facilitate more pervasive adoption and use of those technologies."
This discussion has been archived. No new comments can be posted.


  • Xen into kernel (Score:5, Interesting)

    by b100dian ( 771163 ) on Tuesday November 01, 2005 @07:38AM (#13922482) Homepage Journal
    What exactly does "virtualization technology included in the Linux kernel" mean?
    That you can run virtual machines with that kernel? That that kernel can be hosted inside a virtual machine?

    Or that you can install parallel kernels and run part of the ELF binaries on the other machine?..
    • From a link in TFA:
      "Xen is a virtualization technology available for the Linux kernel that lets you enclose and test new upgrades as if running them in the existing environment but without the worries of disturbing the original system"
      • Re:Xen into kernel (Score:5, Informative)

        by secureboot ( 920488 ) on Tuesday November 01, 2005 @08:58AM (#13922864)
        Don't let that comment fool you though. Xen is much, much more. What if your organization had 4 distinct sites it wanted to host on one server? Start up 4 virtual machines, and back up their running state from time to time. If one goes down, just restart it from your clean backup IN SECONDS. Better yet, do it automatically. At the same time, Xen enforces separation from the host OS that the virtual machines are running on, so you don't have to worry about it being compromised (well, not in any way anyone has been able to demonstrate or even postulate yet).
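
        To make the "restart it from your clean backup IN SECONDS" part concrete, here is a rough Python sketch built around Xen's xm tools (xm save suspends a domain and writes its running state to a file; xm restore brings it back). The domain names and checkpoint paths are made up for illustration:

        import subprocess

        SITES = ["web1", "web2", "web3", "web4"]  # hypothetical guest domains, one per site

        def checkpoint(domain):
            # xm save suspends the domain and dumps its full running state to a file;
            # restoring right away resumes it, so the pause is brief.
            subprocess.check_call(["xm", "save", domain, "/var/xen/%s.chk" % domain])
            subprocess.check_call(["xm", "restore", "/var/xen/%s.chk" % domain])

        def recover(domain):
            # If a domain is wedged or compromised, kill it and roll back to the
            # last clean checkpoint -- seconds instead of a rebuild.
            subprocess.call(["xm", "destroy", domain])
            subprocess.check_call(["xm", "restore", "/var/xen/%s.chk" % domain])

        for site in SITES:
            checkpoint(site)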
    • It means (Score:3, Interesting)

      by Anonymous Coward
      in part making Linux aware of the virtual architecture provided by Xen, not by Intel or AMD. Some of it is for better performance. Some of it is to make it just work. The latter is more worrying, since there may be serious security issues: one of the major advantages of a VM should be better security than that which is provided by Linux to begin with.

      The other problem here is that there are other VMs out there, and they all have different requirements for kernel modifications, so talk about a mess.

      The major underl

      • Re:It means (Score:5, Informative)

        by Octorian ( 14086 ) on Tuesday November 01, 2005 @08:53AM (#13922838) Homepage
        I still remember reading that the whole x86 architecture didn't meet the requirements for virtualization, meaning that this recent trend is probably the result of VMware figuring out some "tricks to make it work", and then everyone else jumping on the bandwagon.

        In any case, if you really want to learn about the fundamental concepts behind virtualization, I strongly recommend reading the following paper: Formal Requirements for Virtualizable Third Generation Architectures [u-tokyo.ac.jp]

        Yes, it was published in 1974, but most of the concepts are still very applicable and make a lot of sense. (though the architecture examples are obviously dated)

        This is a very good paper which lays out all the ground rules. Sure, it may sound a bit academic in terminology and explanation, but it is still quite readable.
        • I was under the impression that both Intel and AMD were/are going to add some new CPU instructions and another processor run level that would support virtualisation at the hardware level (which needs to be the case to do it properly).
        • Re:It means (Score:5, Informative)

          by Chirs ( 87576 ) on Tuesday November 01, 2005 @10:34AM (#13923566)

          The full x86 architecture is not suitable for virtualization, because there are a few instructions which fail silently when run from user level.

          VMware uses various techniques to get around this, including full simulation and binary re-writing.

          Xen uses another approach, where they port to an instruction set that is basically x86 without the problematic instructions. This approach requires that the guest OSes be modified.

          This will all change with the new virtualization instructions being added by both AMD and Intel. Once that is in place, Xen will be able to run unmodified guest OSes (such as Windows, for instance). There will be a speed hit though, so modified guests will be preferred if speed is an issue.
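
          To make the Popek/Goldberg point above concrete, here is a toy Python model (purely illustrative, not real x86 semantics; the instruction lists are simplified examples). The classical rule is that an architecture is cleanly virtualizable only if every sensitive instruction traps when executed deprivileged; x86 instructions like smsw, sgdt, and popf are the famous offenders that execute silently at user level instead:

          # Toy trap-and-emulate model; not a real instruction decoder.
          PRIVILEGED = {"lgdt", "lidt", "ltr"}          # fault at user level, so a VMM can catch and emulate
          SENSITIVE_NO_TRAP = {"smsw", "sgdt", "popf"}  # read or affect machine state, but never trap

          def run_in_guest(insn, emulate):
              if insn in PRIVILEGED:
                  return emulate(insn)      # CPU faults; the hypervisor regains control
              if insn in SENSITIVE_NO_TRAP:
                  return "wrong result"     # runs against real machine state; the VMM is bypassed
              return "native"               # innocuous instructions execute at full speed

          print(run_in_guest("smsw", lambda i: "emulated(%s)" % i))  # -> wrong result

          VMware's binary rewriting replaces the offenders before they execute; Xen's paravirtual port simply compiles guests that never issue them.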
    • Re:Xen into kernel (Score:5, Informative)

      by caseih ( 160668 ) on Tuesday November 01, 2005 @08:51AM (#13922832)
      Since you've received few answers to your actual question: getting Xen into the kernel means the Xen patches required to run the Linux kernel in a Xen hypervisor (both as a guest and a host) will be part of the mainstream kernel and can be built trivially. RedHat ships 3 different kernels now with FC4. One is a normal kernel available in both smp and non-smp configurations. Then we have the XenU kernel, which is a kernel designed to boot in a guest xen session. The Xen0 kernel is the kernel that you'd actually boot on top of xen and use as your main OS.

      Once the Xen0 kernel is running on top of xen, you have basically a normal linux kernel running that does all the hardware support. Then you load up Xen guest machines running the Xen0 kernel and these run in their own virtual machines complete with their own disk images and Linux distro. So Xen doesn't really have anything to do with running ELF binaries on the other machine. If you ran FreeBSD in the guest, it would run those binaries inside of that OS and that libc. When Xen 3.0 comes out, if you have the new Intel or AMD chips that support on-chip virtualization, then Windows XP can even run as a guest underneath the Linux kernel-Xen0 host.
      • Re:Xen into kernel (Score:4, Informative)

        by MyHair ( 589485 ) on Tuesday November 01, 2005 @10:02AM (#13923320) Journal
        The Xen0 kernel is the kernel that you'd actually boot on top of xen and use as your main OS.

        Nitpicking the "main OS" wording: it's the host OS. I would think in a production server environment you'd keep this OS minimal and not do much in the Xen0 domain so you don't risk crashing or compromising the host environment. On the other hand, if it's a game box it would make sense to have the 3d video drivers in domain 0, and if it's a workstation it may or may not make sense to have the host OS run apps and have the user domains for testing purposes.

        Once the Xen0 kernel is running on top of xen, you have basically a normal linux kernel running that does all the hardware support. Then you load up Xen guest machines running the Xen0 kernel and these run in their own virtual machines complete with their own disk images and Linux distro. (my bolding)

        Typo: The guests are XenU (user) kernels which typically have no real device drivers and are therefore much smaller. Very well put, though. Note that you can use any block device as a disk image: a file, an LVM volume or even an actual hdd.
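
        For a feel of how a guest is described, here is a minimal domU config in the Xen 2.x style (xm reads these as Python; every name and path below is illustrative). The disk line shows both flavors mentioned above, a file-backed image and a phy: device such as an LVM volume:

        # /etc/xen/guest1 -- hypothetical Xen 2.x domU configuration
        kernel = "/boot/vmlinuz-2.6-xenU"   # the driverless XenU guest kernel
        memory = 128                        # MB given to this guest
        name   = "guest1"
        disk   = ['file:/var/xen/guest1.img,sda1,w']  # or 'phy:vg0/guest1,sda1,w'
        root   = "/dev/sda1 ro"

        Booting it from domain 0 is then just "xm create -c guest1".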
      • When Xen 3.0 comes out, if you have the new intel or amd chips that support on-chip virtualization, then Windows XP can even run as a guest underneath the linux kernel-Xen0 host.

        Does that new hardware support allow running Windows without having to modify it first? For the moment, it appears that no OS will work without modification and support for Windows might not be forthcoming:

        In addition to Linux, members of Xen's user community have contributed or are working on ports to other operating systems s

    • Re:Xen into kernel (Score:5, Interesting)

      by hey! ( 33014 ) on Tuesday November 01, 2005 @09:15AM (#13922979) Homepage Journal
      easier management of computing resources, unless I miss my mark.

      In rough terms:

      Admin Cost = N * (H + S)
      where N is the number of computers, H is the network and system hardware admin costs for a single machine, and S is the sys admin costs for the machine. Distributing:

      Cost = NH + NS

      This is a gross simplification, since we all know that complexity is not a linear function of network size, but it will do to be going on with. Now we take NH, and by virtualizing the machines it becomes simply H, so

      Admin Cost (multiple virtual machines) = H + NS

      Basically, I think it'll be common practice in the future to create virtual machines out of thin air by copying a config file or some directories on a machine with available bandwidth. If the cost of enough surplus hardware is less than (N-1)H, then wouldn't it be cheaper to virtualize?

      Of course the complexity is that costs aren't linearly related to N, or for that matter constant in the size and class of machine you are managing. Which is another way of saying YMMV. I think there's clear application in many kinds of situations, for example in software development where we're constantly worried about the various combinations of software our work will have to coexist with. It'd be very convenient to be able to pull a certain system configuration out of a library and have it up in a few minutes, then trash it after a few hours of use. But it may have potential in production environments too.
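
      To put toy numbers on the model above (all figures invented), a quick Python check:

      N, H, S = 10, 1000, 2000   # machines, per-machine hardware cost, per-machine sysadmin cost

      physical = N * (H + S)     # NH + NS = 30000
      virtual  = H + N * S       # one physical box hosting N guests = 21000

      print(physical - virtual)  # 9000 saved, which is exactly (N - 1) * H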
  • Xen... (Score:5, Funny)

    by Anonymous Coward on Tuesday November 01, 2005 @07:40AM (#13922495)
    Xen isn't all that hard, you just need some jump boots and a particle weapon of some sort.
  • Umm (Score:5, Informative)

    by interiot ( 50685 ) on Tuesday November 01, 2005 @07:40AM (#13922497) Homepage
    Well, Xen is free, and Intel/AMD hardware solutions are coming soon, which will allow Xen to run Windows unmodified. So, once everyone is upgraded to the new CPUs, virtualization will become a basic standard feature for everyone. MS can compete by giving their solution away for free, but either way, it doesn't get better than free for the consumer.
    • Re:Umm (Score:3, Insightful)

      by The OPTiCIAN ( 8190 )
      Well.. Microsoft *could* try paying me to run their solution.
    • Re:Umm (Score:4, Interesting)

      by LaughingCoder ( 914424 ) on Tuesday November 01, 2005 @08:11AM (#13922613)
      I am having trouble understanding the rules of engagement. If MS "gives away" IE for free, they get called to task. If they "give away" Media Player for free, they get in hot water. Now why exactly would giving away virtualization not result in the same harsh treatment? Is it because there aren't established for-profit companies in that space already? How about VMWare? If MS gave away virtualization, the "anti-trust" crowd would drag them into court faster than you can say "billable hours".
      • They got in trouble for bundling both I.E. and media player.

        They had to re-engineer Windows to make I.E. replacement simpler.

      • Re:Umm (Score:3, Informative)

        by top_down ( 137496 )
        Giving things away for free is bad when it is used to stifle competition.

        There are basically 2 ways in which you can do that: by dumping or by creating private standards. Dumping is selling stuff below cost price (and thus taking losses) until your competitors are out of business. Private standards can be used to make competing related products incompatible or generally inconvenience users of those products and thus try to set up a monopoly; this is at the core of Microsoft's business strategy.

        For many people
      • Re:Umm (Score:5, Informative)

        by 'nother poster ( 700681 ) on Tuesday November 01, 2005 @08:50AM (#13922826)
        Microsoft got in trouble because they integrated their browser and media player into the operating system. If they had simply put a browser and media player on your Windows install CD that you could install, or not, there wouldn't have been nearly the ruckus. What they did was make the install of those items required, and made it impossible for the average user to remove them without crippling the operating environment.

        Operating system virtualization, on the other hand, pretty much requires that it be hooked into the OS by its very nature.
      • Re:Umm (Score:4, Informative)

        by NutscrapeSucks ( 446616 ) on Tuesday November 01, 2005 @09:33AM (#13923094)
        There's a lot of misinformation on this thread. Microsoft actually wasn't convicted of "dumping" or "bundling", at least in the US. They were mainly busted for exploiting their contracts with OEMs.

        Furthermore, all of their anti-trust problems were in the desktop market. As long as virtualization was positioned as a server feature, and as long as MS didn't threaten any VMWare supporters, I don't think they would have any legal problems.
      • Re:Umm (Score:3, Informative)

        by div_2n ( 525075 )
        MS has a monopoly on the desktop. When they bundle software that does X while there are competitors offering similar software, MS has abused their monopoly if they actively work to exclude those competitors and force a choice on consumers.

        In other words, if MS decides to start shipping software to learn how to speak Italian with every copy of Windows XP, then they would have to allow makers of competing software the same courtesy to ship alongside their own.

        If they didn't command a monopoly of th
    • Yes, that is my main reaction. Also, mind you, virtualization integrated into the OS is completely unnecessary. Full virtualization (which is not OS-dependent) really offers the fastest and best support for general use.

      Xen has a number of interesting features that require paravirtualization techniques, which would need to be built into the OS.
  • Two birds, one stone (Score:5, Interesting)

    by bernywork ( 57298 ) * <bstapleton.gmail@com> on Tuesday November 01, 2005 @07:41AM (#13922501) Journal
    Not only do they get the ability to knock the shit out of Microsoft by taking away the base platform from them, they also get to try to take some market share from VMware.

    Imagine, if you would, the ability to use Xen for unlimited operating systems with no licensing cost for the base OS. Thinking about it, I would prefer to be in Microsoft's shoes as opposed to VMware's. The only difference is that Xen, compared to VMware, is a very immature platform, and no IT manager is going to take Xen over VMware just yet (unless cost is a BIG factor).

    I would have to say that this is still very cool, with all the new virtualization options coming out in the new cores shortly. If they can get to market before Microsoft, this is a great way to pick up some customers. Kudos to RedHat and IBM and Intel and everyone else for making this happen.
    • no IT manager is going to take Xen over VMware just yet (unless cost is a BIG factor)
      With VMware Player and vi you can get a free virtual machine; see http://b100dian.lx.ro/wordpress/index.php?p=90 [b100dian.lx.ro]
    • by dc2447 ( 920780 )
      That gives the impression that VMware is a great product; I'd debate that. I find it a PITA a lot of the time. Xen isn't the only kid on the block; QEMU works really well too.
    • Imagine (Score:2, Funny)

      by mrjb ( 547783 )
      Imagine, if you would, the ability to use Xen for unlimited operating systems
      Are you asking us to imagine what I think you're asking us to imagine?
    • Not only do they get the ability to knock the shit out of Microsoft by taking away the base platform from them, they also get to try to take some market share from VMware.

      Have you looked at the instructions to run something with Xen? I don't think this is a threat to VMware; in fact, it is probably good for them, because they can use it, it will make their product look better (better performance, better integration), and they can continue to sell an application to configure the virtualization system.
      • At the same time, how long do you think it will be before the tools come out to manage all of this?

        I truly don't think it will be that long. The hardware vendors (IBM, HP, etc.) will be willing to support this: if the Xen community comes out with extensions for the virtualization stuff in the new CPU cores (also coming out shortly), it will help them push more kit (sell more hardware).
    • by Anonymous Coward
      You must have never used vmware before for its best use.

      vmware in linux is the absolute #1 tool for reverse engineering. I can pipe the rs232 and usb as well as ethernet ports of the hosted OS to files or through sniffers on my linux machine and figure out quite quickly how a device is talking to its hardware or server on the net. Capturing an entire session into an input and an output file makes it trivial to reverse engineer something.

      This cannot be done with Xen or the utter crap that msft is
    • by j-pimp ( 177072 )

      Imagine, if you would, the ability to use Xen for unlimited operating systems with no licensing cost for the base OS. Thinking about it, I would prefer to be in Microsoft's shoes as opposed to VMware's. The only difference is that Xen, compared to VMware, is a very immature platform, and no IT manager is going to take Xen over VMware just yet (unless cost is a BIG factor).


      I've been using xen here in what I call "production development." It's serving several development servers. One of them is running a crappy spam as
  • by gringer ( 252588 ) on Tuesday November 01, 2005 @07:51AM (#13922543)
    "My goal is to get this done in the most collaborative way possible with anyone in the community who wants to participate," Stevens said, adding that Red Hat is committed to putting on this project enough of its staff who have the technical knowledge necessary to get the work done.

    Perhaps it's only me, but this doesn't sound aggressive; this sounds friendly and cooperative.
    • by bcmm ( 768152 ) on Tuesday November 01, 2005 @08:21AM (#13922668)
      I don't think it meant "aggressive towards the open source community". It's Microsoft they'll be competing with, and it seems that it's going to be Linux, rather than just Red Hat, against them.

      So, they're "aggressively" pushing Linux instead of Windows as a virtualisation host OS. Six staff members hired to work solely on integrating it into the mainstream kernel is fairly aggressive (toward MS), I would say, as it could lose them a major new market.
  • Forking? (Score:4, Insightful)

    by Lardmonster ( 302990 ) * on Tuesday November 01, 2005 @07:51AM (#13922544)
    Why don't they fork? Or just build and rpm their own kernel, like they did with GCC 2.96?
    • Re:Forking? (Score:3, Informative)

      by Jonny_eh ( 765306 )
      I'm pretty sure they do that; most distros tweak the kernel to suit them somewhat.

      I know that SUSE 9.3+ has built-in support for Xen, and since there are different kernel packages especially for Xen support, I assume that SUSE has accomplished what Red Hat is working towards. Although I could be completely wrong, since I didn't RTFA.
    • Re:Forking? (Score:3, Interesting)

      by m50d ( 797211 )
      Because that's bad for the community, something Red Hat cares about, even if the kernel developers don't seem to. They probably will ship their own Xen version for the moment, but the less difference between their kernel and the mainline one, the better it is for everyone.
  • The irony (Score:5, Interesting)

    by Alioth ( 221270 ) <no@spam> on Tuesday November 01, 2005 @07:53AM (#13922550) Journal
    The irony is that Microsoft provided some of the funding for Xen (probably for the early experimental Xenised versions of Windows XP). Yes - Microsoft does fund GPLd projects. Often in a company that big, the left hand doesn't know what the right hand is doing, so whilst Gates/Ballmer spout off about how evil open source is, another part of MS is funding it (or even releasing it on Sourceforge).
  • My Bias (Score:4, Funny)

    by Vodak ( 119225 ) on Tuesday November 01, 2005 @08:00AM (#13922572)
    The anti-RedHat part of me is saying, "Do not cave in to the demands of RedHat; they are becoming as bad as Microsoft, pushing Linux toward their own sinister goals." But then the sane part of me says, "If the technology is awesome it should be in the standard kernel."

    And then the crazy part of me says, "Heh, I can compile modules for the Xbox controller and other weird hardware into the kernel. Maybe useful technology should be in the kernel." =]

    But then again, I just might have too many voices in my head.
    • Re:My Bias (Score:4, Informative)

      by RotateLeftByte ( 797477 ) on Tuesday November 01, 2005 @08:24AM (#13922692)
      Remember that anything RedHat pushes into the Linux kernel will automatically become available for ALL OTHER LINUX DISTROS. So please forgive my ignorance, but where is the "badness" in that aim? AFAIK, nothing that RedHat has developed has been proprietary in any way. They do have a track record of buying things from other companies and releasing them as OSS. Again, please let me know of the badness in that aim.

      IMHO, virtualisation is going to become very important to all software developers over the next few years. If it is easy to fire up a Debian system on top of a SUSE one and have Mandriva & RedHat running as well, then you can test your app on all these platforms at the same time. Hurrah!
    • Re:My Bias (Score:3, Insightful)

      by Erwos ( 553607 )
      No, the sane part of you should be saying "Red Hat is nothing like Microsoft." So far, their own goals have been anything but sinister, and every other distro on the market has benefited from the time and money they've invested in gcc, the kernel, and any number of other projects.

      They've done nothing anti-community since dropping free Red Hat 9 support years ago. Get over it.

      -Erwos
      • Re:My Bias (Score:3, Insightful)

        by Vodak ( 119225 )
        I will freely admit that it is silly to dislike RedHat, and that for the most part the reason a lot of people dislike RedHat is simply because they are the biggest and best known of the Linux distros.

        Have they done anything sinister? Not yet. Will they? Who knows. But it's fun to complain about them =]
    • Re:My Bias (Score:2, Insightful)

      by Anonymous Coward
      "The anti Redhat Linux part of me is saying Do not cave into the demands of Redhat because they are becoming as bad as Microsoft with pushing Linux to their own sinister goals."

      That's the retarded part of you. Learn to ignore it.

  • by Library Spoff ( 582122 ) on Tuesday November 01, 2005 @08:01AM (#13922579) Journal
    *hmmm*
    Must remember to patent the idea of a trojan/virus that uses virtualisation to run a spam/DoS server in a Windows environment...

    Rubs hands with glee as he tries to sell the idea on IRC.

    • Been done... The DoS is called Java.

      (Ducking and running)

      Just kidding. Java is my friend.
    • I expect that any self-respecting adware/spyware/rootkit maker will hide the bulk of their work out of sight in a virtualized environment. Were I designing a zombie clone, it would hide out of sight on a virtualised machine -- a no-brainer, because it's harder to discover. And then people can have fresh, fully patched installs of Windows and still end up with their 14-month-old 4.2GHz Pentium crawling and needing replacement.
  • ...If RedHat wants Xen in the [Linux] kernel, they can put it in there themselves, or they could pay someone to do it... or they could fork the kernel. I fail to see what is preventing RedHat from putting Xen in there. So go right ahead, RedHat. Go!
    • RTFA

      "Part of the Red Hat emerging technology team's efforts will be to drive the Xen virtualization technologies as part of the Linux kernel rather than as part of a sidebar project, as is currently the case, Stevens said."

      John
  • by Zugot ( 17501 ) * <bryan@osCURIEesm.com minus physicist> on Tuesday November 01, 2005 @08:05AM (#13922589)
    Sun can do this now with Solaris 10. Virtualization is a cool technology, and everyone in this space seems to be heading there.
    • by Octorian ( 14086 ) on Tuesday November 01, 2005 @09:15AM (#13922975) Homepage
      Solaris "zones" technically aren't really virtualization, per se. Rather, they are virtual-machine-"like" process containers. Inside of a zone, it behaves very much like a virtual machine, but it really isn't.

      This concept likely provides many advantages for system resource management on a server, where you only care about a single operating system. It does not, however, let you run different OSs at the same time.
  • by ptaff ( 165113 ) on Tuesday November 01, 2005 @08:09AM (#13922604) Homepage

    While Xen appears as a neat package, why choose Xen instead of vservers [linux-vserver.org]?

    The hardware cost of running multiple copies of the same OS with vservers is smaller than with Xen - there is one and only one copy of glibc in memory, one and only one scheduler, and so on.

    • by Anonymous Coward on Tuesday November 01, 2005 @08:16AM (#13922633)
      "While Xen appears as a neat package, why choose Xen instead of vservers?

      The hardware cost of running multiple copies of the same OS with vservers is smaller than Xen - there is one and only one copy of glibc in memory, one and only scheduler, and so on."

      But part of the purpose of a virtual machine is that you can run a different operating system in each partition, including different schedulers and libc versions.
    • by Anonymous Coward on Tuesday November 01, 2005 @08:23AM (#13922686)
      With Xen, a kernel panic affects only that kernel; the other kernels keep on running. Under vservers, it takes down the machine.

      Under Xen you can reduce the parent kernel down to a bare minimum, reducing the chance of errors.

      E.g., you want to run an experimental iptables module on one of the virtual servers? No problem: if it crashes, all the other servers keep on trucking.

      Essentially Xen provides a better sandbox from a stability/security perspective.
    • by NitsujTPU ( 19263 ) on Tuesday November 01, 2005 @08:39AM (#13922757)
      Xen is 100% different. Also, Xen supports over 100 VMs per machine.

      Xen also does things to make that one copy of glibc a reality. Arguably, that one scheduler is one of the primary reasons you would prefer Xen.
    • We have an X terminal server here at work that uses vservers. Doing it over again, we would probably use Xen instead of vservers. The vservers have weird bugs once in a while, such as processes not being able to talk to each other for no apparent reason. The entire system will work fine for weeks, but once in a while there will be a problem that nothing short of a restart of the vserver will fix. If Xen works and you aren't short of resources, by all means use Xen.
    • While Xen appears as a neat package, why choose Xen instead of vservers?

      Perhaps because vservers lack some of the neat features of Xen, such as on-the-fly instance migration [cam.ac.uk] and full iptables support?

      Furthermore, vservers is, for the foreseeable future, a Linux-only project. So far, NetBSD [netbsd.org] and Solaris [newsforge.com] have been ported to Xen, and basic support [fsmware.com] for FreeBSD as a guest OS is available. Once Intel VT and AMD Pacifica are available, Xen will also support [com.com] Windows XP SP2.

      Given just these benefits (and Xen has ma
  • by general_boy ( 635045 ) on Tuesday November 01, 2005 @08:18AM (#13922651)
    Mandriva Linux 2006 includes xen0 and xenU-enabled kernels and the Xen supervisor utilities package. The Community version of Mandriva 2006 can be downloaded from many Linux mirror sites.

    I'm running such a box now with a total of three Linux domains (one host domain and two guests)... much easier than manually patching everything.
    • Mandriva Linux 2006 includes xen0 and xenU-enabled kernels and the Xen supervisor utilities package. The Community version of Mandriva 2006 can be downloaded from many Linux mirror sites.

      But unfortunately it's broken on 64-bit machines. (See bug #18432 at qa.mandriva.com; Mandriva says that they will upload the x86_64 glibc-xen packages when they have verified that Xen works with their x86_64 kernels, which they currently haven't.)

  • Virtualization technology is a very good thing. It allows you to use multiple operating systems at once, without fights for hardware control (which is why VMware doesn't do it like this). But if it's doable in hardware, it's doable in OS-level software. Why didn't anybody do it then?

    Put differently, how are AMD and Intel going to make it work? Since hardware doesn't like multiple masters (try a PS/2 mouse with a 4-5 byte protocol; it completely freaks out behind a KVM switch), it's going to go haywire if you ha
  • by DoofusOfDeath ( 636671 ) on Tuesday November 01, 2005 @08:23AM (#13922691)
    It basically lets you run multiple instances of the OS concurrently, where each instance thinks it's the only one running on the computer, right?

    But then what do you do when two or more OS instances want to monkey around with hardware that has state? For example, if one OS wants the screen resolution to be 640x480, and another OS wants the screen resolution to be 1024x768, you can't very well keep switching the screen between those two resolutions every time you change which OS is getting CPU time. Or another example is with printing: you can't very well interleave the print data streams from two OSes to the printer without hosing the print jobs.
    • Print jobs are not an issue; the printer is just busy and the OS in question queues the job.
      As for varying screen resolutions, it just made me think of the Amiga and pulling down a low-resolution screen to reveal a high-res one behind it.

      Very, very cool stuff, but inherently impractical. I imagine it will most likely be similar to KVM switching or simple desktop switching?
    • by peterpi ( 585134 ) on Tuesday November 01, 2005 @08:48AM (#13922816)
      That's not a problem with Linux; you take what X decides is good for you and thank your lucky stars if you don't have to edit your config file ;)
    • by digitalhermit ( 113459 ) on Tuesday November 01, 2005 @08:50AM (#13922825) Homepage
      Virtual machines also have virtual screens that are independent of each other. You can, for example, have an 800x600 window right next to a 1024x768 window. Depending on how you have it configured, toggling between full-screen sessions of the VM will either re-size the screen or play inside a portion of the existing screen. It's still virtual video, however, so there's no conflict.

      For printers you can either set up a print server or the printer gets attached to a particular OS instance.
    • by Waffle Iron ( 339739 ) on Tuesday November 01, 2005 @09:02AM (#13922888)
      But then what do you do when two or more OS instances want to monkey around with hardware that has state?

      The Schrodinger Corp. makes special PC cases that can handle those requirements.

    • IIRC, screen resolution issues are handled in Xen by only letting the host OS set it on the display. The various guest OSes are accessed via VNC within that. Check out the Xen demo CD at: http://www.cl.cam.ac.uk/Research/SRG/netos/xen/downloads.html [cam.ac.uk]. You get a Debian system with some friends:

      The Xen demo CD is a live ISO CD running Debian Linux that enables you to try Xen on your system without installing it to the hard disk. It enables you to start other guests running Linux 2.4 and 2.6, NetBSD and Fr

    • It basically lets you run multiple instances of the OS concurrently, where each instance thinks it's the only one running on the computer, right?

      Sort of.

      There are really two parts to virtualization: 1) processor virtualization and 2) hardware virtualization. Processor virtualization allows multiple virtual machines to think they have the whole processor. In this case, there are a lot of interesting tricks you can do (and new hardware support) so that the virtual machine thinks it has the actual underlying pr
    • Or another example is with printing: you can't very well interleave the print data streams from two OSes to the printer without hosing the print jobs.

      A common FreeBSD setup is to run a print server in the host OS and configure jails (FreeBSD's virtualization systems, which are completely unlike Xen) to speak to that server. In short, you treat the jail environments like standalone machines on a network. I'd suspect you'd do something similar under Linux.

  • by nietsch ( 112711 ) on Tuesday November 01, 2005 @08:44AM (#13922793) Homepage Journal
    These guys (Xen) have all these companies donating money to them, but have been beaten to kernel inclusion by UML. UML is basically a two-man project, developed by Jeff Dike and Paolo Giarrusso (aka Blaisorblade). Xen may be multi-platform and all, but thus far UML is easier to handle and does not require the host to run a patched kernel (you could use a patched kernel, but the newest development, Skas0, does not need it).
    • by zeromemory ( 742402 ) on Tuesday November 01, 2005 @09:09AM (#13922930) Homepage
      These guys (Xen) have all these companies donating money to them, but have been beaten to kernel inclusion by UML.

      Being the first to the party doesn't always mean you're going to be the best; see DevFS vs. udev [kerneltrap.org].

      Xen has much greater performance [cam.ac.uk] than UML and supports more operating systems. While UML is currently more mature and stable than Xen, it's only a matter of time before Xen surpasses UML as the preferred virtual server technology. Hell, even Linode, a strong proponent of UML technology and a virtual server hosting provider, is migrating to Xen [linode.com].

      FYI, I'm currently running a Xen-based system with 15 virtual server instances for a system administration course at UC Berkeley, on a server built with cheap off-the-shelf components (AMD Athlon 64 2800+, 1 GB RAM), and everything is quite snappy. It'd be difficult to even approach such usability with UML, and I'm using Xen 2.0.7. I can't wait to see what Xen 3.0 will bring.
  • MS leapfrogging (Score:3, Interesting)

    by MobyDisk ( 75490 ) on Tuesday November 01, 2005 @09:05AM (#13922901) Homepage
    I know everyone complains about how MS lacks innovation, but this is a good example of BUSINESS innovation. Virtualization isn't new. I've used it before, seen it before. But MS bought an existing product, then wrapped it up nice and pretty and easy, and presented it as a solution to a major problem. And it is getting widely adopted. My office uses virtual servers constantly to simulate production environments for development: it saves time, money, and effort.

    I never even considered virtualization of servers or development environments until I learned about MS Virtual PC and MS Virtual Server. Norton Ghost or dd dumps were all that I knew. So Microsoft is doing something right, and they will be perceived as the innovator and the winner here. They will be selling that you can virtualize servers to save time and money, and companies will buy it. They won't even know that this originated in the *nix world.

    I look forward to seeing what the next leap in this technology is. I suspect we are just beginning to see some novel uses for it.
    • by zeromemory ( 742402 ) on Tuesday November 01, 2005 @09:18AM (#13922995) Homepage
      Yeah, but having Xen in the kernel mainline gives the project much more credibility and exposure.

      A problem Xen has been facing is keeping up with all the changes occurring in 2.6. If Xen is merged into mainline, there's a much better chance that Xen will be able to support the features and bug/security fixes that get added to 2.6 with each release.

      For example, the current Xen stable (2.0.7) supports kernel 2.6.11.12. Every time a new security hole is discovered, system administrators using Xen have to manually backport a fix from the latest kernel. Having Xen in mainline should make this process much easier.
  • by FishandChips ( 695645 ) on Tuesday November 01, 2005 @09:07AM (#13922917) Journal
    The Slashdot summary is a bit misleading. What the article says is that Andrew Morton has been expecting a kernel submission for Xen for quite some time now but a) has yet to receive it, and b) needs to go through the usual process with other "stakeholders" before any incorporation. Later the article quotes the Xen folks themselves who point out that "feature creep" and the need to generally get things really solid and stable has made everything take a little longer.

    What the article actually seems to be saying - it uses the word "aggressive" a lot, as if this were some kind of virtue - is that Red Hat has a new senior honcho who'd like to make his mark. The issue of incorporating virtualization technologies into the Linux kernel is taken as a given by all parties. Which is hardly news. Chalk one up to the Red Hat marketing department for a nicely planted "news" story about their increased investment in the area (new hires, etc.), perhaps.
  • Of course (Score:3, Insightful)

    by RandomPrecision ( 911416 ) on Tuesday November 01, 2005 @09:28AM (#13923063)
    In desiring to put Xen in the kernel, they have already failed.
  • Both AMD's Pacifica and Intel's VT (Virtualization Technology, formerly called Vanderpool) are getting support in the upcoming Xen 3.0 release in December (hold your thumbs). So we will perhaps see a serious boost in performance; there was a recent discussion on the mailing list [xensource.com] about it. I'm waiting for Pacifica support, which seems to be a bit better (DMA virtualization), but that will probably not be in Xen 3.0.
  • Could this type of virtualization be the stopgap solution to security patches and updates breaking things? Say a new patch is released for something: clone your working environment into a virtualized one and apply the patch to that. Work under the patched environment, and if after X amount of time the patch shows no signs of breaking things, commit the patch to the base install.

    Seems like a good idea to me unless I am missing something.
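
    In sketch form, with file-backed Xen domains the clone step is just a file copy (the paths, domain names, and config file here are all invented for illustration):

    import shutil, subprocess

    # Clone the production disk image and boot it as a scratch domain.
    shutil.copyfile("/var/xen/prod.img", "/var/xen/prod-test.img")
    subprocess.check_call(["xm", "create", "/etc/xen/prod-test"])

    # ...apply the patch inside prod-test and let it soak for a while...
    # If nothing breaks, apply the same patch to the base install;
    # otherwise xm destroy prod-test, delete the copy, and nothing is lost.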
  • by gtrubetskoy ( 734033 ) * on Tuesday November 01, 2005 @10:18AM (#13923416)

    There are a few problems with Xen. First, it's i386 only. Second (and this is the biggest problem IMO) - Xen is venture-backed, and seems to be extremely eager to show their investors a return. Nothing wrong with that, but it's important to consider the motivation, and the consequences of a funding pullback. If XenSource does not turn out to be a great business, then will Xen still be developed and maintained? Why not wait a little bit? In the open source world quality over quantity matters, and time pressure should not influence development.

    Also, there is another project that I plug every chance I get - Linux VServer [linux-vserver.org]. Unlike Xen, this is a purely volunteer effort, and it is very innovative and attempts to solve a difficult issue. Unlike Xen, these guys actually do not want to be in the mainline for now, because they think it will slow down development. Because Linux VServer is taking a different approach to virtualization (better known as separation, which was pioneered by FreeBSD jails and is also now supported in Solaris), the end result is cross-platform, i.e. it runs on any architecture that Linux runs on.

    Now, in the past whenever I posted about Linux VServer, a lot of folks said that Xen allows you to run multiple operating systems and that that is why it is so useful. I think that in reality running multiple OSes isn't all that valuable - the only case where it may be very useful is software development, but that's a tiny fraction of Linux users. We've been using Linux VServer for hosting, and we are absolutely convinced that this is the right solution - using Xen, for example, would introduce all kinds of problems (starting with resource bloat).

    Yet unfortunately the OSS world has become PR-driven lately. Very few people are technically capable of looking at things on their merits; most just go after the things that have the most buzz, not realizing that the buzz is artificially generated.

    • by Anthony Liguori ( 820979 ) on Tuesday November 01, 2005 @12:08PM (#13924385) Homepage
      There are a few problems with Xen. First, it's i386 only.

      Not true. Today, Xen supports i386, x86_64, and ia64. Xen is currently being ported to PowerPC also.

      Second (and this is the biggest problem IMO) - Xen is venture-backed, and seems to be extremely eager to show their investors a return.

      XenSource is a company backed by VC. Xen is developed by a much larger community, though. There are a ton of press releases that XenSource puts out that have the typical marketing junk that most Open Source folks despise, but whatever: XenSource != Xen. Most of their people aren't even actively working on Xen anyway (they have a product for Xen management).

      If XenSource does not turn out to be a great business, then will Xen still be developed and maintained?

      Absolutely.

      Also, there is another project that I plug every chance I get - Linux VServer. Unlike Xen, this is a purely volunteer effort, and it is very innovative and attempts to solve a difficult issue. Unlike Xen, these guys actually do not want to be in the mainline for now, because they think it will slow down development.

      Yup. That's why VServer is not in the kernel--they don't want to be in the kernel. VServer is a cool project, and I would love to see it end up in the kernel. Xen is also a cool project and it would be great to see it in the kernel. The kernel guys *will not* accept crap. Large portions of the Xen Linux port are currently being rewritten to live up to kernel standards. I have a ton of faith in the kernel folks overseeing the process.
    • Why is it that you consider an all-volunteer effort inherently more robust? Key volunteers can have life changes (job change, health, etc) that cause their involvement to change. VC projects have the benefit of providing dedicated staff, professional project management, business development and marketing to keep momentum alive.
