Red Hat Releases Windows Virtualization Code

dan_johns writes "Only one month after Microsoft released Linux code to improve the performance of Linux guests on Windows, Red Hat has done the reverse. Red Hat has quietly released a set of drivers to improve the performance of Windows guests hosted on Linux's Kernel-based Virtual Machine (KVM) hypervisor. The netkvm driver is a network driver and viostor is a Storport driver to improve the performance of high-end storage. This release includes paravirtual block drivers for Windows. Linux and Windows — virtually coming together at last."
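These guest drivers only come into play when the host exposes virtio devices to the Windows VM. A hypothetical qemu-kvm invocation is sketched below; the image name, memory size, and user-mode networking are illustrative assumptions, not details from the article.

```shell
# Sketch of booting a Windows image on KVM with virtio disk and NIC,
# so the viostor (storage) and netkvm (network) guest drivers are used.
# Image path, RAM size, and networking mode are placeholders.
QEMU_CMD='qemu-system-x86_64 -enable-kvm -m 1024 \
  -drive file=windows.img,if=virtio \
  -net nic,model=virtio -net user'
echo "$QEMU_CMD"
```

Inside the guest, Windows would then see Red Hat VirtIO storage and network devices and prompt for the viostor/netkvm drivers.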
  • Lack of Caring (Score:4, Interesting)

    by tychoish ( 1013857 ) on Wednesday August 26, 2009 @07:29PM (#29210139) Homepage
    I suppose this is a good thing, and I'm a big fan of the virtualization, but really, why? Windows fails to compel.
  • by mlts ( 1038732 ) * on Wednesday August 26, 2009 @07:30PM (#29210157)

    I've always wondered how paravirtualizing some functions such as I/O or networking affects security.

    Say a VM gets compromised and is able to do what it wants with the block devices: how tough would it be to get out of the VM? If malicious code is able to access the host's block device driver, which runs in kernel mode, and start running code directly on the host's OS, it's game over.

  • A good thing. (Score:5, Interesting)

    by LoRdTAW ( 99712 ) on Wednesday August 26, 2009 @07:32PM (#29210179)

    Cooperation like this is a great gesture. MS releasing code to help Linux run better in their VMs is a good thing, and I am glad Red Hat returned the favor. With shops today running mixed environments, this helps them with transitioning or running apps side by side. Great for Linux development/testing on Windows, and now better Windows development/testing on Linux systems. Now if only Apple would allow OS X to run in a VM. Developers could have one system running the OS of their choice and do all their cross-platform development and testing on it. Great for small developers who might code on a laptop or prefer to have a single system for development.

  • by Sycraft-fu ( 314770 ) on Wednesday August 26, 2009 @07:40PM (#29210239)

    For better or worse, right or wrong, Apple is convinced they are a hardware company. They make their money on hardware in their mind, they just use their software to help sell their hardware. So they don't want you doing virtualization. They are not at all interested in your running their software on other people's hardware. For that matter, they aren't really interested in you running VMs all on their stuff. They'd much rather you have to buy 5 Xserves than buy 1 and do 5 VMs.

    Just life, and it isn't likely to change unless Apple starts losing money (and probably not even then).

  • by mlts ( 1038732 ) * on Wednesday August 26, 2009 @07:50PM (#29210321)

    The nice thing is that if you need to run VMs on OS X, you can move VMs from VMware ESXi to VMware Fusion on the Mac with little effort. Most of the time, they copy over directly. Worst case, you might need to copy the hard disk files and reinstall the VMware Tools.

    Though it would be nice for Apple to have VM functionality built into the OS, or easily available, thankfully there are programs that allow Macs to be VM hosts. VMware is a big one, but I have used Sun's VirtualBox as well, and even though it might not have all the features VMware has, it is still decent.

  • At parity once again (Score:3, Interesting)

    by stox ( 131684 ) on Wednesday August 26, 2009 @07:50PM (#29210323) Homepage

    No longer does Microsoft enjoy an advantage hosting mixed VMs. I am sure the boys in Redmond are not amused. Kudos to the folks at Red Hat.

  • by AltGrendel ( 175092 ) <ag-slashdot AT exit0 DOT us> on Wednesday August 26, 2009 @07:58PM (#29210405) Homepage
    That depends on whether you are using Xen or QEMU. There's a design flaw in Xen/SELinux that will allow a hacked guest to write to the physical drive without notifying SELinux. This was "fixed" when the QEMU/SELinux interaction was worked out. There's a blog post from one of the Red Hat SELinux guys that gives more detail, but I can't find the link just now.
  • by _KiTA_ ( 241027 ) on Wednesday August 26, 2009 @08:07PM (#29210485) Homepage

    I am sure the boys in Redmond are not amused.

    Microsoft and Red Hat agreed to support each other's operating systems in their virtual environments, so this action is to be expected.

    Yes, they expected it just like they expected people to extend Kerberos authentication and XML file types right back at them. Microsoft embraces and extends OTHERS; they don't GET embraced and extended.

    Windows Server being able to run Linux VMs easily means more people willing to move from Linux to Windows, because they can virtualize their Linux apps until they've ported them over -- and since they went to all that trouble to pay for Windows Server... might as well keep it.

    It doesn't really "work" for Microsoft the other way around, ya know.

  • (Score:2, Interesting)

    by viking567 ( 1403269 ) on Wednesday August 26, 2009 @08:11PM (#29210515)
    One step closer to []
  • Re:See! (Score:5, Interesting)

    by fuzzyfuzzyfungus ( 1223518 ) on Wednesday August 26, 2009 @09:14PM (#29211051) Journal
    A common filesystem (one nicer than FAT32 or ISO 9660, and more generally useful than UDF, at any rate) would be nice for external storage devices and for certain hobbyist dual-boot scenarios; but in my own experience, I just don't feel the need as keenly as I used to. I wouldn't be surprised if the reason one doesn't exist (to any really useful degree) is that others have similar experiences.

    With computers so cheap and getting cheaper, networking going from common to ubiquitous, little network storage widgets popping up even on home networks, and an increasing amount of stuff living on a remote server somewhere, I just don't find myself needing to access one OS's partition from the other very much. If I really do need to grab some file, NTFS-3G's inefficiency just isn't a big deal.

    The overwhelming majority of file transfers between OSes (or between the same OS on different machines) that I end up doing these days are via some network protocol (HTTP, SFTP, SMB, IMAP, etc.) that abstracts away the filesystem on the other end and is spoken just fine by most anything. With virtualization becoming an increasingly common, and for most purposes superior, alternative to dual booting, network transfers even work for two OSes on the same machine.

    It would be nice if there were a properly interoperable filesystem in common use (if only so we could shove a stake through exFAT's black heart before it takes off); but it just hasn't been a big deal for a while now, for me.
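The network-transfer pattern described in this comment can be sketched with nothing but a standard library. A minimal Python example, assuming only stdlib modules: serve the current directory over HTTP so any other OS (or VM) can fetch files with a browser, curl, or wget, with no shared filesystem involved.

```python
# Minimal sketch of sharing files over HTTP instead of a common
# on-disk filesystem. Serves the current directory on an ephemeral
# port, then fetches the listing back to show it works.
import http.server
import threading
import urllib.request

server = http.server.HTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler
)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# From the "other OS" this would be http://<host-ip>:<port>/
listing = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
server.shutdown()
print(len(listing) > 0)
```

The same idea is why the filesystem on the far end never matters: HTTP (like SFTP or SMB) only exposes names and bytes, not on-disk structure.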
  • Re:See! (Score:3, Interesting)

    by timeOday ( 582209 ) on Wednesday August 26, 2009 @09:44PM (#29211293)

    Come to think of it, I've only had it actually lock up when running VMWare from that ntfs partition. VMWare can be very disk intensive (snapshots, suspend+resume) and runs largely in kernel mode, so maybe it's choking on the delays?

    I'd be very curious what you get from this test. Here is my output from running the same command on both NTFS and ext3 filesystems:

    time dd if=/dev/zero of=test bs=1024 count=2000000

    On NTFS:
    2000000+0 records in
    2000000+0 records out
    2048000000 bytes (2.0 GB) copied, 146.024 s, 14.0 MB/s

    real 2m26.053s
    user 0m1.168s
    sys 0m15.221s

    On ext3:
    2000000+0 records in
    2000000+0 records out
    2048000000 bytes (2.0 GB) copied, 18.2012 s, 113 MB/s

    real 0m18.213s
    user 0m0.448s
    sys 0m9.605s

    As you can see, the ntfs-3g write speed is slower by a factor of 8! Moreover, mount.ntfs saturates a core under sustained writing. It's just not good enough to run an I/O-intensive application on.
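One caveat worth noting, not raised in the thread: without a sync option, dd's reported rate can largely measure the page cache rather than the filesystem. A scaled-down sketch of the same benchmark with conv=fdatasync, which forces the data to disk before dd reports its timing (file name and sizes are arbitrary):

```shell
# Write 64 MiB of zeros; conv=fdatasync makes dd flush data to disk
# before exiting, so the reported rate includes the real write cost
# instead of just the speed of dirtying the page cache.
dd if=/dev/zero of=ddtest.bin bs=1M count=64 conv=fdatasync
ls -l ddtest.bin
rm -f ddtest.bin
```

With 2 GB written, as in the test above, cache effects mostly wash out anyway, but for smaller runs the fdatasync variant gives far more honest numbers.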

  • by QuantumG ( 50515 ) * <> on Wednesday August 26, 2009 @09:45PM (#29211301) Homepage Journal

    When I worked at VMware we used to just call it "cheating". You'd often hear engineers referring to "the drivers we use to cheat" and communicating through the "backdoor port".

  • by itzdandy ( 183397 ) on Wednesday August 26, 2009 @10:46PM (#29211659) Homepage

    The problem for me with this is that Windows is a poor server OS. The only compelling reasons to run Windows servers are Active Directory and Exchange. IIS is not nearly as good as Apache, nginx, Comanche, or lighttpd (specifically: overhead, flexibility, security, and performance!)

    The cost for many organizations to engineer, deploy, and support Windows servers for Exchange and SharePoint is equal to or greater than the cost of outsourced/hosted. You can get hosted Exchange for under $12/user/month at Rackspace, which compares well enough against hiring an MCTS for Windows Server and Exchange, as that $55,000 salary would cover well over 350 hosted Exchange accounts without even a power bill.

    A Linux server may take some expertise to set up but needs far, far less daily upkeep. You can employ far fewer techs and hire in from the local tech shop for big deployments. I have an email server (Ubuntu 6.06) that has been running for over 3 years without any effort on my part. The only downtime it has ever had was when the power failed and it shut down after the UPS was drained. $1200 plus about 6 hours of configuration (say $85/h) and no maintenance is something I am sure no Windows server can or ever has matched.

    Back on point here: stop investing time and money in getting Windows to run faster virtualized, and put those dollars into alternatives to Windows software. It has happened before that an OSS alternative (Apache) became so dominant that the big vendors' products are now the alternatives rather than the standard (BIND, Apache, Sendmail and Postfix, Courier, etc.)

  • Re:See! (Score:2, Interesting)

    by izomiac ( 815208 ) on Wednesday August 26, 2009 @11:09PM (#29211829) Homepage
    I prefer Ext2FSD myself, but neither is ideal. They require a helper application that doesn't autostart (there's a non-working option for it), and they can be fickle about mounting (e.g. you click mount and it doesn't happen, or you open the drive and Windows asks to format it). I've had data loss with NTFS-3g (hopefully that bug's been squashed), and exFAT isn't supported in Linux.

    IMHO filesystem compatibility is a great example of how Linux devs leave boring but critical applications half done. E.g. they work, but you have to jump through hoops, and even then there are major bugs and little to no polish. Ideally, you could use any Windows or Linux filesystem in the other OS transparently with all features, to the point that the common user doesn't need to know what filesystems their partitions use.

    All that said, I use FAT32 or Ext2 for shared partitions for lack of a better alternative.
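For the shared-FAT32 setup this comment describes, the Linux side is usually a one-line fstab entry. A sketch, where the device, mount point, and uid/gid are assumptions for illustration, not values from the thread:

```
# /etc/fstab -- hypothetical shared FAT32 data partition.
# uid/gid make the files owned by the desktop user instead of root,
# since FAT32 itself stores no ownership information.
/dev/sda5  /mnt/shared  vfat  uid=1000,gid=1000,umask=022  0  0
```

The umask matters because FAT32 has no permission bits either; everything on the mount gets the same synthesized mode.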
  • Re:See! (Score:1, Interesting)

    by tinthing ( 1364073 ) on Thursday August 27, 2009 @07:00AM (#29214407)
    Whoa there, don't you believe it. MS wants Windows at the hardware level because that's where they exercise control of the platform. MS (with Intel) sets the rules of the game for hardware (bus, graphics, sound, etc.), and vendors write their drivers to those rules --- for Windows. No other OS is able to support newly released hardware because of this. The vendors almost universally don't write drivers for other OSs, and third-party support (e.g. the Linux kernel) always lags behind (when free) or is a delay, risk, and cost burden for box builders (e.g. Asus) that cancels out savings on OS licences. It's a major factor against preinstalled Linux --- naive users can't safely buy new hardware for those boxes. Look at the state (until very recently, at any rate) of cheap webcams on Linux.
