Topics: Oracle, Virtualization, Linux

Linux 3.0 Will Have Full Xen Support (171 comments)

GPLHost-Thomas writes "The very last components needed to run Linux as a Xen dom0 have finally reached kernel.org. The Xen block backend was one major feature missing from the 2.6.39 dom0 support, and it's now included. Posts on the Xen blog, at Oracle, and at Citrix celebrate this achievement."
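For anyone who wants to check whether a particular kernel build already carries these pieces, here is a minimal sketch (mine, not from the submission) that looks for the relevant options in a kernel config file. It assumes the config is available as plain text at /boot/config-<release> (adjust for your distro, or use /proc/config.gz), and that CONFIG_XEN_DOM0, CONFIG_XEN_BLKDEV_BACKEND and CONFIG_XEN_NETDEV_BACKEND are the Kconfig symbols for dom0 mode and the block/network backends.

    # Sketch: check a kernel config for the Xen dom0 bits mentioned above.
    # Assumes the config is readable as plain text at /boot/config-<release>.
    import platform

    WANTED = (
        "CONFIG_XEN_DOM0",            # run Linux as the privileged management domain
        "CONFIG_XEN_BLKDEV_BACKEND",  # the block backend that just landed
        "CONFIG_XEN_NETDEV_BACKEND",  # network backend serving guest vifs
    )

    def xen_dom0_options(path=None):
        """Return {option: value} for the dom0-related options set in the config."""
        path = path or "/boot/config-%s" % platform.release()
        found = {}
        with open(path) as cfg:
            for line in cfg:
                line = line.strip()
                for opt in WANTED:
                    if line.startswith(opt + "="):
                        found[opt] = line.split("=", 1)[1]
        return found

    if __name__ == "__main__":
        opts = xen_dom0_options()
        for opt in WANTED:
            print("%-28s %s" % (opt, opts.get(opt, "not set")))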
This discussion has been archived. No new comments can be posted.
  • Finally I get to run a newer kernel on EC2! I have been looking forward to this for months.

    • by pasikarkkainen ( 925729 ) on Friday June 03, 2011 @03:53AM (#36328916)
      Actually, you have been able to run a newer kernel on EC2 for a long time! Xen domU (guest VM) support has been in the upstream Linux kernel since version 2.6.24. What the upcoming Linux kernel 3.0 adds is Xen dom0 support, i.e. the *host* support: Linux 3.0 can run on the Xen hypervisor (xen.gz) as the "management console", providing the various backends (virtual networks, virtual disks) that allow you to launch Xen VMs.
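      For the curious, a minimal sketch (my own, nothing official) of how a running Linux system can tell which of those roles it is in. It assumes sysfs exposes /sys/hypervisor/type (which reads "xen" under Xen) and that xenfs is mounted at /proc/xen, where only the privileged dom0 reports "control_d" in the capabilities file.

        # Sketch: am I dom0, a domU, or on bare metal?  Assumes /sys/hypervisor/type
        # exists under Xen and that xenfs is mounted at /proc/xen.
        def xen_role():
            try:
                with open("/sys/hypervisor/type") as f:
                    if f.read().strip() != "xen":
                        return "not under Xen"
            except IOError:
                return "bare metal (no hypervisor info exposed)"
            try:
                with open("/proc/xen/capabilities") as f:
                    caps = f.read()
            except IOError:
                caps = ""
            # only dom0 (the host/management domain) advertises "control_d"
            return "dom0" if "control_d" in caps else "domU (guest)"

        print(xen_role())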
  • by Suiggy ( 1544213 ) on Friday June 03, 2011 @03:55AM (#36328922)

    ... is 16 cores and 32 GB of RAM, and I can recompile the kernel on Linux, encode an H.264 video on OS X, serve files via Apache HTTPD from OpenBSD, and watch streaming porn videos on Windows, all simultaneously on the same machine!

    • Re:Now all I need... (Score:4, Informative)

      by glwtta ( 532858 ) on Friday June 03, 2011 @04:00AM (#36328950) Homepage
      16 cores and 32 GB of RAM

      That's, uh, not exactly all that out there, these days.
      • by lgftsa ( 617184 )

        Yeah, the new servers I just received to upgrade our VMware cluster have 128GB with only half the slots filled. Still 16 cores (2 x 8) per host; our limiting factor is having enough free RAM available for failover. The Linux guests share RAM nicely, but the Windows guests are pigs.

        • The Linux guests share RAM nicely,

          ...until all of your Linux boxes need RAM at the same time.
          At least with overprovisioning, you know you won't tip the scales if all your machines burst at the same time.

      • 16 cores and 32 GB of RAM

        That's, uh, not exactly all that out there, these days.

        You forgot the implied "...Beowulf cluster of..."

      • Re: (Score:2, Funny)

        by dltaylor ( 7510 )

        If you're gonna stream porn on the Windows guest, instead of something useful like the original StarCraft/Brood War, keep your clean guest image for reloads. You're better off streaming the porn on a Linux guest, since the embedded malware is much less likely to run.

        • Last time I tried it, StarCraft ran better under Wine than under 64-bit Vista or 7, so even if you want to do something useful under Windows, it probably shouldn't be that.
      • It depends on whether he means 16 good cores or 16 shitty cores ;)

        Yes, both Intel and AMD sell CPUs that let you put 16+ cores in one machine, BUT AFAICT in both cases the individual cores are substantially slower than what you can get in a 12-core (2x6) Xeon 56xx machine. The prices are also pretty crazy, AFAICT.

        • CPU loadout prices and performance metrics for various setups [cpubenchmark.net]

          The 12-core X56xx solutions aren't touching the 48-core solutions from AMD as of yet in parallel workloads. The Opteron 6168 solution is cheaper with more performance, and the Opteron 6174 route is more expensive but significantly faster overall than a pair of X5690s priced at $3300+.

          I am simply amazed that Intel has not taken its older designs for larger process sizes and simply packed on more cores during a process reduction in order to bre
          • The 12-core X56xx solutions aren't touching the 48-core solutions from AMD as of yet in parallel workloads

            Yeah, if you push the core count insanely high you can get to the point where (for some workloads) the number of cores makes up for the low performance of the individual cores, but AFAICT there is no 16-core system on the market that is faster overall than a 12-core 56xx-series system.

      • by dbIII ( 701233 )
        48 cores (4 sockets with 12-core AMDs) and 64GB here, and it was only around $10K. There are a lot of much bigger machines around - that's effectively just an overgrown gaming machine these days, which is why economies of scale brought the price down to something sane instead of Sun or IBM prices. I've seen people spend as much on two laptops.
        It's not for virtual machines. The stuff it runs works properly in parallel, but it runs faster on one machine with shared memory than it can on a cluster.
        • by h4rr4r ( 612664 )

          Just spent $20k on 12 cores and 128GB on two machines. VMware licensing restricted our core count.

      • 16 cores and 32 GB of RAM

        That's, uh, not exactly all that out there, these days.

        GP mentioned Windows. The Windows Server license that runs on 16 cores is really, really "out there" for home users. So we can assume that he is talking about a home OS, and for a home PC 16 cores really is "out there".

        • by glwtta ( 532858 )
          The Windows Server license that runs on 16 cores is really, really "out there" for home users.

          He just mentioned streaming porn in a domU; a single-core (or two-core) license would do just fine.

          It's more than a typical off-the-shelf PC, but it's not like it's some crazy spec; with "consumer" components you can put one together for $3-4k (well, ok, with 12, not 16 cores).
        • GP mentioned Windows. The Windows Server license that runs on 16 cores is really, really "out there" for home users. So we can assume that he is talking about a home OS, and for a home PC 16 cores really is "out there".

          Well... I was curious. The major cost in a multi-CPU setup is generally the motherboard. Enthusiast boards are typically in the $150-$250 range, dual-CPU boards are generally in the $400-$550 range (Tyan Thunder n3600T). The 2.8GHz Opteron 6-core CPUs are around $310 each, with slightly
    • by KZigurs ( 638781 )

      The last 5 machines assigned to me to have fun with had 16 cores and 96GB of RAM each... I have requested 3 more.

  • Meanwhile (Score:5, Interesting)

    by TheRaven64 ( 641858 ) on Friday June 03, 2011 @03:57AM (#36328932) Journal
    Xen dom0 support has been available in released versions of NetBSD and Solaris for something like 4 years, while the VMware lobby on the LKML was requiring the entire paravirtualisation subsystem to be rewritten before they'd accept patches, and Red Hat decided to push KVM as a Xen replacement, in spite of the two having very different capabilities.
    • by buchanmilne ( 258619 ) on Friday June 03, 2011 @05:45AM (#36329344) Homepage

      ... as most users don't use vanilla upstream kernels. And most distributors/distros have a supported release which provides Xen dom0 support (including Red Hat).

      • by cynyr ( 703126 )

        Hmm, since the new kernel dev model (2.5.x, basically) I've been running vanilla kernels, or at least distros that do not require huge custom patch sets... ah, the good old days of Red Hat 7.2 and SuSE 6...

    • Xen support got into NetBSD and Solaris more easily, I think, because influential individuals pushed it in there, whereas the Linux community had lots of quibbles over the patches and how they should be done correctly. The debate with VMware was a bit confusing and didn't help things get done quickly. RH and IBM and SuSE and others were behind Xen originally, but that has gone a bit quieter subsequently.

      Part of all of this, though, is due to the Xen team having different priorities to most of those other or

    • Re:Meanwhile (Score:5, Informative)

      by zefrer ( 729860 ) <zefrer&gmail,com> on Friday June 03, 2011 @06:04AM (#36329400) Homepage

      Just had to reply to this... Sun forked Xen 3.1 something like 4 years ago, yes. That same fork, Xen version 3.1, is what is still being used today in Solaris, and Sun had previously (pre-buyout) said they would not merge any newer versions of Xen.

      So while Solaris can claim Xen dom0 support, it is nowhere near the capabilities of current Xen 4.0, and with no plans to update, you're stuck on 3.1 with support now coming only from Oracle. Yeah, awesome.

    • Re:Meanwhile (Score:5, Informative)

      by diegocg ( 1680514 ) on Friday June 03, 2011 @08:06AM (#36329780)

      'VMware lobby', WTF? The real problems were things like this [lkml.org] and this [lkml.org]:

      The fact is (and this is a _fact_): Xen is a total mess from a development
      standpoint. I talked about this in private with Jeremy. Xen pollutes the
      architecture code in ways that NO OTHER subsystem does. And I have never
      EVER seen the Xen developers really acknowledge that and try to fix it.

      Thomas pointed to patches that add _explicitly_ Xen-related special cases
      that aren't even trying to make sense. See the local apic thing.

      So quite frankly, I wish some of the Xen people looked themselves in the
      mirror, and then asked themselves "would _I_ merge something ugly like
      that, if it was filling my subsystem with totally unrelated hacks for some
      other crap"?

      Seriously.

      If it was just the local APIC, fine. But it may be just the local APIC
      code this time around, next time it will be something else. It's been TLB,
      it's been entry_*.S, it's been all over. Some of them are performance
      issues.

      I dunno. I just do know that I pointed out the statistics for how
      mindlessly incestuous the Xen patches have historically been to Jeremy. He
      admitted it. I've not seen _anybody_ say that things will improve.

      Xen has been painful. If you give maintainers pain, don't expect them to
      love you or respect you.

      So I would really suggest that Xen people should look at _why_ they are
      giving maintainers so much pain.

                      Linus

      BTW, I have absolutely no doubt that NetBSD and Solaris merged Xen faster than anyone else.

      • This kind of post led to a full rewrite of the local APIC code in Xen, and of many other subsystems. What you see happening today is the result of the work done in response to the criticism above (and not only this criticism; there were others).
      • Unfortunately, when this e-mail was sent, Jeremy had been just about the only developer working on upstreaming the dom0 work for quite a while; and Jeremy was, unfortunately, still learning how to interact effectively with the kernel community. This can be largely blamed on a tactical error made by the people in charge of XenSource before Citrix acquired them. They were hoping to force Red Hat to work on upstreaming dom0, so they kept the Xen fork of Linux (linux-xen) at 2.6.18, and only hired one developer to w

  • by simoncpu was here ( 1601629 ) on Friday June 03, 2011 @04:02AM (#36328954)
    Dear FreeBSD,

    When will you ever have Xen dom0 support?

    Thanks,

    Charlie Root
    FreeBSD Fanboi
  • Just when Linus finally started convincing people that Linux 3.0 would be a "normal time-based release" with "no major changes", they whip this milestone feature out from under the rug.

    Xen out of the box? Linux 3.0.

    • by Nimey ( 114278 )

      That feature would have been in 2.6.40 had it been numbered that.

      • by cronius ( 813431 )

        I know, I just thought it was nice that there's now a milestone pegged to the 3.0 release as opposed to "just the normal fixes and new drivers" kinda thing. I understand that it's a complete coincidence.

  • My understanding of Xen was that it was a hypervisor, had a dom0 guest VM for administering the hypervisor, and dom0s for less privileged guest VMs.

    Is this about running Xen inside Xen, or am I way off target?

    • "Xen inside Xen" is in fact called "nested virtualization", and Xen has been capable of doing that for a long time. Even better, it's now possible to run HVM inside HVM, since a few patches recently reached the xen-devel list. The drawback? Well, there isn't one, because the nesting is really only an illusion (or, let's say, an administrative view), as Xen "sees" all the VMs as equal.

      But in fact, no, it's not about nested virtualization. It's about Linux from kernel.org not having to be patched to run as dom0.
      • Ah, got it. Thanks.

        (Also, for the record, I recognize I should have said "domUs" for less-privileged guest VMs)

        (Also, Slashdot's commenting system is driving me batty. *flies away*)

    • My understanding of Xen was that it was a hypervisor, had a dom0 guest VM for administering the hypervisor

      dom0 does run under Xen and does the administrative tasks. But dom0 has another purpose: it has the drivers for all of the hardware on the system. It doesn't make sense for Xen to try to have drivers for every bit of hardware that's out there -- Linux already does that very well, so there's no point in duplicating effort, especially since device drivers have *nothing* to do with virtualization. So the Xen hypervisor leaves the hardware to dom0, which in turn exposes backend drivers (virtual disks, virtual networks) to the other guests.
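      As a rough illustration of that split (my own sketch, not from the parent post): on a dom0 kernel with the backend drivers loaded, each guest disk and NIC typically shows up as a device on the xen-backend bus in sysfs, named like vbd-<domid>-<devid> for disks and vif-<domid>-<devid> for network interfaces. The sysfs path below is an assumption about where your kernel exposes that bus.

        # Sketch: list the backend devices dom0 is serving to its guests.
        # Assumes a dom0 kernel exposing /sys/bus/xen-backend/devices.
        import os

        BACKEND_BUS = "/sys/bus/xen-backend/devices"

        def backends_by_guest():
            guests = {}
            if not os.path.isdir(BACKEND_BUS):
                return guests  # not dom0, or backend drivers not loaded
            for name in sorted(os.listdir(BACKEND_BUS)):
                parts = name.split("-")
                if len(parts) < 3:
                    continue  # skip anything not shaped like <type>-<domid>-<devid>
                kind, domid, dev = parts[0], parts[1], "-".join(parts[2:])
                guests.setdefault(domid, []).append("%s/%s" % (kind, dev))
            return guests

        for domid, devs in backends_by_guest().items():
            print("guest domain %s: %s" % (domid, ", ".join(devs)))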
