Virtualization Linux

Linus Thinks Virtualization Is 'Evil'

Front page first-timer crdotson writes "Linus said in an interview that he thinks virtualization is 'evil' because he prefers to deal with the real hardware. Hardware virtualization allows for better barriers between systems by running multiple OSes on the same hardware, but OS-level virtualization allows similar barriers without a hypervisor between the kernel and the hardware. Should we expect more focus on OS-level virtualization such as Linux-VServer, OpenVZ, and LXC?"
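
The OS-level approach the submitter mentions keeps a single kernel and isolates "guests" with kernel-side mechanisms; LXC, for instance, builds on namespaces and cgroups, while OpenVZ and Linux-VServer achieve similar isolation with their own kernel patches. As a rough illustration only, and not any of those projects' actual code, the minimal C sketch below uses clone() with CLONE_NEWPID and CLONE_NEWUTS so that the child gets its own PID numbering and hostname while still sharing the host's kernel; it assumes a Linux host and root privileges.

    /* Minimal namespace sketch: the child becomes PID 1 of a new PID
       namespace and can rename "its" host without touching the real one. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static char child_stack[1024 * 1024];

    static int child_main(void *arg)
    {
        (void)arg;
        sethostname("container", 9);   /* affects only the new UTS namespace */
        printf("inside: pid=%ld (PID 1 of its own namespace)\n", (long)getpid());
        return 0;
    }

    int main(void)
    {
        pid_t pid = clone(child_main, child_stack + sizeof(child_stack),
                          CLONE_NEWPID | CLONE_NEWUTS | SIGCHLD, NULL);
        if (pid < 0) {
            perror("clone");
            return 1;
        }
        printf("outside: child seen as pid=%ld\n", (long)pid);
        waitpid(pid, NULL, 0);
        return 0;
    }

A real container manager layers mount, IPC, network, and user namespaces plus cgroup resource limits on top of primitives like these, but the key point stands: there is still only one kernel underneath, with no hypervisor between it and the hardware.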

Comments Filter:
  • by drolli ( 522659 ) on Friday August 19, 2011 @03:40PM (#37147080) Journal

    but it's cheap in human resources, since it is the ultimate reuse of code.

  • Good for beginners (Score:2, Interesting)

    by Anonymous Coward on Friday August 19, 2011 @03:41PM (#37147092)

    Virtualization is good for new junior programmers learning how to program firmware, since any low-level calls can't really destroy the real hardware; protection can be built right in.

    It's a crutch, but since we have a generation of programmers who can't do "the hard stuff" because "Java does it for them", it's certainly good to have around.

  • It's mostly true (Score:4, Interesting)

    by Mad Merlin ( 837387 ) on Friday August 19, 2011 @03:52PM (#37147220) Homepage

    Linus has never been diplomatic, but it's mostly true. A huge amount of virtualization done today involves the same host and guest OS, and in most of those cases, using something slimmer than full-blown virtualization would make a whole lot more sense, even if only for the improved performance. One of the problems is familiarity: container-type isolation isn't applicable to as many cases, so fewer people are familiar with it. Another problem is the perception that full virtualization is more secure (which is probably untrue).

    There is, however, a large swath of problems that aren't solved well by container-type isolation but that virtualization does solve well. If you need to simulate different physical systems (with separate IP addresses), that's much easier with full virtualization. Likewise, if you need very different guest and host OSes, that's not a strong point of container-type isolation. Also, if your guest OS is sensitive to hardware changes, virtualization makes a lot of sense. There's more, but you get the idea.
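
    For comparison, here is a rough sketch (an illustration only, assuming a Linux kernel with network-namespace support, the iproute2 "ip" tool installed, and root privileges) of the container-side primitive for the separate-IP-address case: unshare(CLONE_NEWNET) drops the process into a fresh network namespace with only a down loopback device, to which a container manager would then attach its own interfaces and addresses.

        /* After unshare(CLONE_NEWNET) the process no longer sees the host's
           network interfaces or IP addresses, only a fresh loopback device. */
        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            if (unshare(CLONE_NEWNET) != 0) {   /* requires CAP_SYS_ADMIN (root) */
                perror("unshare(CLONE_NEWNET)");
                return 1;
            }
            /* The child shell below inherits the new namespace and will list
               only "lo"; the host's addresses are invisible from here. */
            return system("ip addr") == 0 ? 0 : 1;
        }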

  • by Anonymous Coward on Friday August 19, 2011 @03:56PM (#37147298)

    For those of you who look at FreeBSD jails, Linux OpenVZ, etc. and say "but I want to migrate between servers!" there is an example of this being possible.

    http://www.7he.at/freebsd/vps/

    This guy did it with FreeBSD, but the real problem is that he needs funding to keep polishing it before it can ever be merged into a FreeBSD release. I wish more people knew about this, as we'd love to have it at work.

  • by BlueCoder ( 223005 ) on Friday August 19, 2011 @03:58PM (#37147316)

    The whole point of a modern OS is to virtualize the hardware so that software applications can play nicely with each other.

    The hypervisor is the new ring 0, and it's going to evolve into a microkernel with user-mode drivers. It's the new operating system, and that's what he should be working on if he likes hardware bits. The "operating systems" of old are evolving into plug-in operating environments. It's the future, the revolution; get over it.
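
    For a concrete sense of that "new ring 0": the Linux kernel already brokers hardware virtualization to user space through /dev/kvm. The snippet below is only a minimal sketch, nowhere near a working VMM; it just opens /dev/kvm, checks the API version, and creates an empty VM with no vCPUs. It assumes a Linux host with KVM enabled and permission to open /dev/kvm.

        /* Minimal KVM sketch: query the API version and create a bare VM. */
        #include <fcntl.h>
        #include <linux/kvm.h>
        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <unistd.h>

        int main(void)
        {
            int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
            if (kvm < 0) {
                perror("open /dev/kvm");
                return 1;
            }
            printf("KVM API version: %d\n", ioctl(kvm, KVM_GET_API_VERSION, 0));
            int vmfd = ioctl(kvm, KVM_CREATE_VM, 0UL);   /* empty VM, no vCPUs yet */
            if (vmfd < 0) {
                perror("KVM_CREATE_VM");
                close(kvm);
                return 1;
            }
            printf("created empty VM, fd=%d\n", vmfd);
            close(vmfd);
            close(kvm);
            return 0;
        }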

  • Linus Torvalds is... (Score:2, Interesting)

    by vranash ( 594439 ) on Friday August 19, 2011 @04:03PM (#37147384)
    ... the John Carmack of Open Source *nix kernels. Seriously, what has he personally done in the past 5 years other than fsck us with first BitKeeper and then Git, a decade-long string of incompatible 2.6.x releases, and finally, in order to 'me too' bad judgements by other open source companies, release a half-baked kernel as 3.0 that might as well have been called 2.7 or 2.8 for all the new features it provides? (That is to say... none?)
  • by garyebickford ( 222422 ) <gar37bic@noSPam.gmail.com> on Friday August 19, 2011 @07:05PM (#37149128)

    This reminds me of some discussion from back in (IIRC) the late 1970s, when the US Social Security dept. was upgrading. They finally had to rewrite their code for the new 3000 series (3090?). Supposedly, the code they were running was originally written in Autocoder (a kind of assembly language) for the IBM 702 or IBM 705. Then it was moved to a 1620, which ran an emulation of the 702. Then it was moved to an IBM 360, which simulated the 1620 running the emulation. Then it was moved to VM, which could run multiple instances of the 360 program simultaneously. Then, finally, they were going to have to rewrite the program, because there had been so many changes to it, nobody knew how to write Autocoder any more, and anyway the emulations took up too many cycles. It's apocryphal, but I'll bet it's not far off the truth.
