
 




An Overview of Virtualization 119

IndioMan writes to point us to an overview of virtualization — its history, an analysis of the techniques used over the years, and a survey of Linux virtualization projects. From the article: "Virtualization is the new big thing, if 'new' can include something over four decades old. It has been used historically in a number of contexts, but a primary focus now is in the virtualization of servers and operating systems. Much like Linux, virtualization provides many options for performance, portability, and flexibility."
This discussion has been archived. No new comments can be posted.

  • Apple (Score:5, Interesting)

    by 99BottlesOfBeerInMyF ( 813746 ) on Tuesday January 02, 2007 @04:12PM (#17435042)

    This article is an okay overview of many of ways virtualization is now being used. As an aside, has anyone else noticed Apple seems to be missing the boat this time? They're certainly benefitting from virtualization with several players in the market providing emulation solutions and tools now that they are on Intel, but Apple themselves seem to have done nothing and not even provided a strategy. Servers are moving to more virtual servers on one real machine, but OS X's license forbids it from fulfilling that role. Tools for using OS X as a thin client for accessing remote virtual machines are likewise weak. Apple hasn't even provided a virtual machine for their customers to emulate old macs so that users can run OS 9 apps on the new intel machines and they restrict redistribution of their ROM files to make 3rd parties unable to do this. No mention of adding VM technology to OS X has been heard, despite its inclusion in the Linux kernel among others.

    Does Apple have something against VM technology? Are they simply behind the times and failing to see the potential?

  • Re:Apple (Score:2, Interesting)

    by InsaneProcessor ( 869563 ) on Tuesday January 02, 2007 @04:19PM (#17435130)
    The answer here is simple. Virtualization is useless in a desktop multimedia user environment. Video cards, sound cards and the like are bus-mastering devices, and you cannot virtualize hardware access unless you own the whole environment. In simple terms, virtualization is only useful in the server arena and useless on the desktop.
  • by Anonymous Coward on Tuesday January 02, 2007 @04:19PM (#17435134)
    The article seems a bit outdated. Reading it, you'd think hardware virtualization (Intel VT-x / AMD's AMD-V, formerly Pacifica) isn't there yet. As a user who has been running both para-virtualized and hardware-virtualized guests under Xen for months, I disagree with how Xen is presented. Xen can also run unmodified guests (and is very good at it): I've tried several unmodified Linux versions (though para-virt is faster than hardware-virt, so I run para-virtualized Linux guests), and I'm currently running an unmodified Windows XP Pro 32-bit under Xen. I also installed W2K3 Server under Xen. Performance is incredible; there's no comparison with, for example, VMware running a fully-virtualized Windows, which is much slower than hardware-virt (perhaps VMware can use VT-x/AMD-V nowadays? I honestly don't know; I haven't checked). As a second note, para-virt under Xen is even faster than hardware-virt; it just feels native.

    The article seems a bit light on qemu too.

    ... # xm info | grep xen_caps
    xen_caps : xen-3.0-x86_32, hvm-3.0-x86_32

  • OSes Targeting VMs (Score:4, Interesting)

    by RAMMS+EIN ( 578166 ) on Tuesday January 02, 2007 @04:21PM (#17435152) Homepage Journal
    An idea I've been toying with lately: what if operating systems targeted virtual machines directly, especially VMs that expose a simplified interface rather than trying to emulate a real machine? Instead of duplicating drivers for every piece of hardware in every OS, drivers would only need to be developed for the virtualization environment, and operating systems would only have to support the interface exposed by the VM.
  • Re:QEMU (Score:3, Interesting)

    by dignome ( 788664 ) on Tuesday January 02, 2007 @04:22PM (#17435170)
    Kernel Virtual Machine - http://kvm.sf.net/ [sf.net] It requires a processor with Intel's VT or AMD's SVM technology (the CPU flags read vmx for VT-x or svm for AMD-V). The developers are looking for people to test optimizations that have just gone in. http://thread.gmane.org/gmane.comp.emulators.kvm.devel/657/focus=662 [gmane.org] It uses a slightly modified qemu.
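    The flag check the parent describes can be done straight from /proc/cpuinfo. A minimal sketch (assumes a Linux system; the echoed messages are my own wording):

```shell
# Look for the hardware-virtualization CPU flags the KVM module needs:
# 'vmx' is Intel VT-x, 'svm' is AMD-V. Assumes a Linux /proc filesystem.
if grep -q -E 'vmx|svm' /proc/cpuinfo; then
    echo "CPU supports hardware virtualization"
else
    echo "no vmx/svm flag found"
fi
```

    Note that a missing flag can also mean virtualization support is disabled in the BIOS, not just absent from the CPU.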
  • Re:Apple (Score:5, Interesting)

    by 99BottlesOfBeerInMyF ( 813746 ) on Tuesday January 02, 2007 @04:26PM (#17435214)

    Virtualization is useless in a desktop multimedia user environment. Video cards, sound cards and the like are bus-mastering devices and you cannot virtualize hardware access unless you own the whole environment. In simple terms, virtualization is only useful in the server arena and useless on the desktop.

    As someone with two VMs running on my OS X laptop right now, I'd have to disagree with you. As for sound and video cards: sound works just fine, and at least two companies I know of are working on giving hosted OSes full access to video card acceleration.

  • Xen on FreeBSD? (Score:2, Interesting)

    by Logic and Reason ( 952833 ) on Tuesday January 02, 2007 @05:09PM (#17435652)
    This may be a little off-topic, but I noticed that the article claims that Xen runs on FreeBSD. I was under the impression that Xen support on FreeBSD was still a work in progress, which the Wikipedia article [wikipedia.org] seems to confirm. Can anybody comment on this?
  • Big New Thing??? (Score:3, Interesting)

    by eno2001 ( 527078 ) on Tuesday January 02, 2007 @05:32PM (#17435904) Homepage Journal
    Uhm... I started using it on the PC platform in 1998/99 with VMware on RedHat 7. I was amazed when I saw I could boot a Windows 98 system simultaneously with my already-running Linux system on a lowly Pentium MMX 233 with 32 megs of RAM. Then I found out that what I thought was new back then was something the big iron world had enjoyed for decades, originating in the '60s; it was simply new to x86 circa 1998/99. Since then, I've moved on to Xen for Linux, which is rather amazing in terms of performance and flexibility if you paravirtualize the system. I've got three VMs running on an old Pentium II-era Celeron at 400 MHz with 384 megs of RAM. That system has enough horsepower to do the following for my network:

    Internal: DHCP, DNS, postfix SMTP server for internal clients, Squid proxy, OpenVPN, MySQL DB, and DBMail IMAP services that use MySQL as the backend. All in 128 megs of RAM, and they all perform smoothly and quickly.

    External: DNS, postfix SMTP server for spam filtering and relaying to the virtual internal SMTP server, OpenVPN server. All in 64 megs of RAM.

    I plan to add an Asterisk PBX to that same box for a third VPN so I can have private VoIP with my OpenVPN users (all friends and family as I'm talking about a system at home, not at work).

    I've, of course, also played with Virtual PC, Virtual Server, QEMU, and poked at OpenVZ. For me, a decent virtualization solution has to be able to run other OSes to count as good, which is why certain virtualization solutions don't do much for me. If I need access to Windows, I want to be able to do it without wasting good hardware on it. That's why User Mode Linux and Linux-VServer (more akin to chroot jails) do absolutely nothing for me other than when I'm building a Gentoo box. But this is not the big new thing. It's only that MS is making waves with it now... typical.
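    For anyone curious what a paravirtualized guest like the ones described above looks like, here is a hypothetical Xen 3.x domU config file. Every path, volume, and device name in it is an assumption for illustration, not taken from the parent's actual setup:

```
# Hypothetical /etc/xen/internal.cfg for a Xen 3.x paravirtualized domU.
# Kernel paths, the LVM volume, and the bridge name are all illustrative.
kernel  = "/boot/vmlinuz-2.6-xen"
ramdisk = "/boot/initrd-2.6-xen.img"
memory  = 128                               # megabytes, as in the internal VM above
name    = "internal"
vif     = [ 'bridge=xenbr0' ]
disk    = [ 'phy:/dev/vg0/internal,xvda,w' ]
root    = "/dev/xvda1 ro"
```

    The guest would be started with something like `xm create internal.cfg`; because the guest kernel here is Xen-aware (paravirtualized), no VT-x/AMD-V support is required for this case.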
  • Article missing MDF (Score:3, Interesting)

    by Anthony ( 4077 ) * on Tuesday January 02, 2007 @06:35PM (#17436586) Homepage Journal
    Amdahl was the first to offer physical machine partitioning, with MDF (Multiple Domain Facility), in the mid-to-late eighties. IBM finally came out with PR/SM (Processor Resource/Systems Manager) some years later. MDF provided complete isolation of resources between two or more partitions. There were no shared channels; intercommunication was done with a channel-to-channel connector. This provided secure, isolated development/test/QA systems at a reasonable price. It was all managed at the macrocode level, which had a Unix-like shell.
  • Re:*Another* Layer? (Score:5, Interesting)

    by QuantumRiff ( 120817 ) on Tuesday January 02, 2007 @07:31PM (#17437220)
    For us, the nicest thing about virtualization is the disaster recovery. If our building burns down, we can quite literally grab any PC we can find with a ton of RAM, load Virtual Server, and bring the virtual machines right back up. Much, much faster than configuring all the weird drivers, RAID cards, partitions, etc. on a normal non-virtualized system. On the same note, if one of my servers goes down, I can quickly load up the VM on another box, which means I can take all the time in the world to get the original server back up; I don't need the really expensive "4-hour" support plans, just the much cheaper "next-day" plans. I also keep a VM copy of our web server handy (the web server isn't on a VM yet because of the speed issues), so when I need to take down the real, faster web server, I change one DNS setting, and all my users notice is that the web is running a little slower...
  • Re:Virtually Here (Score:3, Interesting)

    by bl8n8r ( 649187 ) on Tuesday January 02, 2007 @08:25PM (#17437748)
    > The kick ass server being a Celeron 2ghz machine with 256 megs of ram.

    There is a note of cynicism in your statement, but yes, you will need adequate hardware and resources to take advantage of virtualisation. You should not run two identical instances of a server environment on your hypervisor and expect a performance increase (depending on utilization, of course). Also keep in mind that your host OS is going to need resources to run the show. This is where a stripped Linux install has the advantage. One problem is that people run their hypervisors on Windows, which really wasn't intended to be a multitasking server OS in the first place. They expect to see nice, fluid resource management; it just doesn't happen, they get aggravated, and they throw more hardware at it with very little improvement. Also keep in mind that VMware and Xen are two separate types of hypervisor. As I understand it, Xen is the operating system and hypervisor all rolled into one, whereas VMware is an additional layer on top of the host OS. YMMV with either.
  • Re:Bias??? (Score:2, Interesting)

    by evanspw ( 872471 ) on Tuesday January 02, 2007 @11:50PM (#17439512)
    No, I get some apps running faster in the VM than natively. I can only make the claim for number-crunching apps (EM solvers and the like) running in a 32-bit XP VM on a 64-bit Linux host (running VMware Workstation 5.5x), with no swapping going on. But yes, definitely faster in the VM than on the same hardware running XP natively. Maybe the VM presents fewer overheads to the XP OS, with its simplified virtual hardware. You would expect at least nearly comparable performance for pure number crunching (almost no disk, no graphics). Graphics-intensive apps run like shit in the VM.

    -pete
