Virtual Containerization

AlexGr alerts us to a piece by Jeff Gould up on Interop News. Quoting: "It's becoming increasingly clear that the most important use of virtualization is not to consolidate hardware boxes but to protect applications from the vagaries of the operating environments they run on. It's all about 'containerization,' to employ a really ugly but useful word. Until fairly recently this was anything but the consensus view. On the contrary, the idea that virtualization is mostly about consolidation has been conventional wisdom ever since IDC started touting VMware's roaring success as one of the reasons behind last year's slowdown in server hardware sales."
  • Re:The great thing (Score:2, Insightful)

    by MalHavoc ( 590724 ) on Tuesday July 24, 2007 @08:14AM (#19967851)

    It's only lacking the feature of throwing the virtual computer out of the window.


    You sort of get this feature with Parallels - the ability to drag a virtual server into a trash bin is almost as satisfying, and far less expensive.
  • by tgd ( 2822 ) on Tuesday July 24, 2007 @08:22AM (#19967943)
    I'm sorry, that's an attempt to jump on the virtualization bandwagon. Use that word these days and people throw money at you.

    Application isolation is not virtualization; it's nothing more than shimming the application with band-aid APIs that fix deficiencies in the original APIs. Calling it virtualization is a marketing and VC-focused strategy; it has nothing to do with the technology.
  • by mdd4696 ( 1017728 ) on Tuesday July 24, 2007 @08:25AM (#19967985)
    Wouldn't a better word for "containerization" be "encapsulation"?
  • by Nursie ( 632944 ) on Tuesday July 24, 2007 @08:28AM (#19968017)
    ... that develops applications, mostly in C, I also find it extremely useful, especially when installing software. Some installers change the state of the system, and some problems only occur the first time round. There is nothing else like the ability to take your blank Windows VM, copy it, install stuff, screw around with it in every possible way, and then, when you're done, just delete the thing. VMs also let you install stuff you just don't want on your native box but need to develop against.

    And you still have that blank Windows install to clone again when you need it.

    VMs are a fantastic dev tool.
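    The "clone the blank image" step can be as simple as copying a file before you start. A minimal POSIX C sketch of that workflow, with hypothetical image names (real VM products such as VMware or Parallels keep more state than a single disk image):

        /* clone_vm.c - copy a pristine VM disk image before a test install,
         * so the original stays untouched. Image names are hypothetical. */
        #include <stdio.h>
        #include <stdlib.h>

        static int copy_file(const char *src, const char *dst)
        {
            FILE *in = fopen(src, "rb");
            FILE *out = fopen(dst, "wb");
            char buf[65536];
            size_t n;

            if (!in || !out) { perror("fopen"); return -1; }
            while ((n = fread(buf, 1, sizeof buf, in)) > 0)
                if (fwrite(buf, 1, n, out) != n) { perror("fwrite"); return -1; }
            fclose(in);
            fclose(out);
            return 0;
        }

        int main(void)
        {
            /* Keep blank.img pristine; do the installer abuse on scratch.img. */
            if (copy_file("blank.img", "scratch.img") != 0)
                return EXIT_FAILURE;
            puts("scratch.img ready -- install, break it, then just delete it");
            return EXIT_SUCCESS;
        }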
  • It's all about (Score:5, Insightful)

    by suv4x4 ( 956391 ) on Tuesday July 24, 2007 @08:30AM (#19968033)
    It's becoming increasingly clear that the most important use of virtualization is not to consolidate hardware boxes but to protect applications from the vagaries of the operating environments they run on. It's all about 'containerization,'

    Don't trust "it's all about" or "it turns out that, to the contrary" or "set to fully replace" statements, especially when there's a lack of evidence for what is claimed.

    Hosting services use virtualization to offer 10-20 virtual servers per physical machine, and I, like many people I know, use virtual machines to test configurations we can't afford separate physical machines for.

    So even though it's also about "containerization" (is "isolation" a bad word all of a sudden?), it's not ALL about it.
  • Buzzword alert! (Score:5, Insightful)

    by drspliff ( 652992 ) on Tuesday July 24, 2007 @08:33AM (#19968053)
    With virtualization technologies like Linux-VServer, Xen, VMware, etc., there are two main reasons why people use them:

      1) Consolidation
      2) "Containerization" or whatever their calling it today.

    The company I work for uses multiple virtual servers to keep applications separate and to make them easier to migrate from machine to machine, which is a common use for VMware (e.g. the appliance trend). You're trading performance and memory usage for security and robustness/redundancy.

    Across maybe 100-200 physical servers, the number of vservers we have hosting customer applications is astonishing (probably around 1,200 to 1,500, which is a bit of a nightmare to maintain). When an application starts to use more resources, its vserver is moved over to a machine with fewer vservers on it, and gradually to its own server, which in the long run saves money and downtime.

    The other major user is the hosting industry, where virtual servers allow customers far more personalization than the one-size-fits-all cPanel hosting companies. This is where consolidation has really increased, biting into the hardware market's potential sales, because thousands of customers are now leasing shared resources instead of actual hardware.

    Either way, the number of new (virtual) machines and IP addresses, all managed by different people, is becoming a management nightmare. Now everybody can afford a virtual dedicated server on the internet regardless of their technical skills, which often ends up a bad buy (memory and resource constraints, compared to shared hosting on a well-maintained server).
  • by jkrise ( 535370 ) on Tuesday July 24, 2007 @08:36AM (#19968079) Journal
    From the referenced article:

    why did Intel just invest $218.5 million in VMware? Does Craig Barrett have a death wish? Or maybe he knows something IDC doesn't? There has got to be a little head scratching going on over in Framingham just now.
    As I replied to an earlier thread on the Linux kernel being updated with 3 VMs, this sounds very fishy and intriguing. Virtualisation is simply a technique of emulating the hardware in software - memory, registers, interrupts, instruction sets, etc. If VMs will only emulate standard instructions and functions, then Intel processors will be useless as a platform for reliable DRM or Trustworthy Computing purposes, where the hardware manufacturer controls the chip - not the customer or software developer. If the virtualisation vendor is also secretive and opaque about its software, that is ideal for Intel, because they will now be able to re-implement the secretive features in the VM engines.

    The obvious explanation for Barrett's investment (which will net Intel a measly 2.5% of VMware's shares after the forthcoming IPO) is that Intel believes virtualization will cause people to buy more, not less, hardware.
    True virtualisation will have the opposite effect - people will buy less hardware. It is simply amazing that Windows 98, for instance, can deliver the same (and often better) end-user experience and functionality as Vista, but with only 5% of the CPU, RAM, and disk resources. So virtualisation would allow 20 Windows 98 instances on the hardware required for a single instance of Vista, without degrading the user experience.

    That can be a chilling thought to companies like Intel, Microsoft, or Oracle. Also, the carefully woven, convoluted DRM and TCPA architectures that consume gazillions of instructions and slow performance to a crawl... will simply be impossible if the virtualisation layer ignores these functions in the hardware. Which is why I felt it very strange for the Linux kernel team to get involved in porting these VMs in order to allow Vista to run as a guest OS. It shouldn't have been a priority item [slashdot.org] for the kernel team at all, IMO.
  • by inflex ( 123318 ) on Tuesday July 24, 2007 @08:38AM (#19968121) Homepage Journal
    I was nodding my head in agreement. Writing installers for your apps often takes longer than writing the apps themselves (or the installers end up larger!), so yes (also being a C developer myself): the ability to test the install, roll back, and try again... brilliant stuff.
  • Most important? (Score:1, Insightful)

    by Anonymous Coward on Tuesday July 24, 2007 @08:48AM (#19968211)
    the most important use of virtualization is not to consolidate hardware boxes but to protect applications from the vagaries of the operating environments they run on

    Most important means different things to different people.

    In the real world, running a reasonably reliable application requires a modern rackmount server with remote out-of-band management, redundant power supplies, and RAID. The most common failure modes for computers are hard disk and power supply failures, and this protects you from both. Remote management lets you control and reboot the machine from offsite.

    These kinds of servers are available off the shelf from any major vendor (Dell, HP, IBM, etc) and will run you $2000 or so. Given the speed of computers today, that server will run most apps really, really fast. In fact, many apps will rarely go above 10% utilization (you do monitor your servers with SNMP, right?).

    So, to get a reliable server with next-day onsite parts replacement, you have to buy far more server than you need. Many (most?) data centers are full of servers like this.

    For one software project I'm working on, the vendor recommends 5 servers: one for Oracle, two for Crystal Reports, and two application servers. The vendor recommends hardware costing $40,000. This is for a custom software app that will have 5 users. Yes, 5 users, and it's not a complex app that demands a lot of performance. Having talked to other customers, utilization rarely goes above 3%. Quite a waste, even though the total project cost is $200,000.

    Hardware consolidation with VMware can lead to very big savings in hardware, colocation, power, cooling, and admin costs.

    And if you get the VMotion software from VMware, you can move a virtual machine from one server to another while it is running, without skipping a beat. That is very, very useful. Need to take your real server down for maintenance? Move the virtual machines to another server. Need to do your end-of-month reconciliation? Move it from the slow backup server to the big fast number cruncher.
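    VMware's tools are proprietary, but the open-source libvirt library exposes the same live-migration idea; a rough C sketch under that assumption, with made-up host URIs and domain name (build with -lvirt):

        /* migrate.c - live-migrate a running guest between hosts via libvirt.
         * Analogous to VMotion; URIs and the domain name are hypothetical. */
        #include <stdio.h>
        #include <libvirt/libvirt.h>

        int main(void)
        {
            virConnectPtr src = virConnectOpen("qemu://slow-backup/system");
            virConnectPtr dst = virConnectOpen("qemu://cruncher/system");
            virDomainPtr dom, moved;

            if (!src || !dst) { fprintf(stderr, "connect failed\n"); return 1; }

            dom = virDomainLookupByName(src, "month-end-db");
            if (!dom) { fprintf(stderr, "no such domain\n"); return 1; }

            /* VIR_MIGRATE_LIVE keeps the guest running while its memory moves. */
            moved = virDomainMigrate(dom, dst, VIR_MIGRATE_LIVE, NULL, NULL, 0);
            if (!moved) { fprintf(stderr, "migration failed\n"); return 1; }

            virDomainFree(moved);
            virDomainFree(dom);
            virConnectClose(src);
            virConnectClose(dst);
            return 0;
        }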
  • by TheRaven64 ( 641858 ) on Tuesday July 24, 2007 @09:00AM (#19968323) Journal
    I think it's more evidence that operating systems suck. The whole point of a modern operating system is to allow you to run multiple programs at once, without them interfering with each other. This is why we have filesystems (with permissions) rather than letting each process write to the raw device. This is why we have pre-emptive multitasking rather than letting each process use as much CPU as it wants. This is why we have protected memory, instead of letting processes trample each others' address space.

    If you can't trust your OS to enforce the separation between processes, then you need to start re-evaluating your choice of OS.
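    The protected-memory point is easy to demonstrate; a minimal C sketch in which parent and child each write to what looks like the same variable, yet neither sees the other's write:

        /* isolation.c - each process gets its own address space, so a child's
         * write to "the same" variable never reaches the parent. */
        #include <stdio.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void)
        {
            int value = 42;
            pid_t pid = fork();

            if (pid == 0) {          /* child: gets its own copy of 'value' */
                value = -1;
                printf("child sees  %d\n", value);   /* prints -1 */
                return 0;
            }
            wait(NULL);              /* parent: copy untouched by the child */
            printf("parent sees %d\n", value);       /* still prints 42 */
            return 0;
        }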

  • by Thumper_SVX ( 239525 ) on Tuesday July 24, 2007 @09:06AM (#19968393) Homepage
    Well, yes and no.

    As I keep telling people when I work with virtualization, it does not necessarily lead to server consolidation in the logical sense (as in instances of servers); rather, it tends to lead to server propagation. This is probably to be expected; generally I/O will be lower for a virtual machine than for a physical machine, thus requiring the addition of another node for load balancing in certain circumstances. However, this is not always the case.

    Virtualization DOES lead to BOX consolidation, in that it helps reduce the physical server footprint in a datacenter.

    Let me give you my viewpoint on this: generally, virtualization is leveraged as a tool to consolidate old servers onto bigger physical boxes. These old servers (out of warranty, breaking/dying, and so on) tend to have lower I/O requirements anyway, so they often see a speed boost going to the new hardware... or at the very least performance remains consistent. However, where new applications are being put on virtual platforms, quite often the requirements of the application cause propagation of servers because of the I/O constraints. This is generally a good thing, as it encourages developers to write "enterprise ready" applications that can be load balanced instead of focusing on stand-alone boxes with huge I/O or CPU requirements. This is good for people like me, as it provides a layer of redundancy and scalability that otherwise wouldn't be there.

    However, the inevitable cost of this is management. While you reduce physical footprint, there are more server instances to manage, thus you need a larger staff to manage your server infrastructure... not to mention the specialized staff managing the virtual environment itself. This is not in itself a bad thing, and generally might lead to better management tools, too... but this is something that needs to be considered in any virtualization strategy.

    Generally in a Wintel shop these days, it is mostly newer applications that get implemented. This is particularly true since most older applications have been, or need to be, upgraded to support newer operating systems (2003 and the upcoming 2008). This means that the net effect of all I've mentioned is an increase in server instances even while the physical footprint decreases.

    "Containerization" (yuck!) is not new by the way. This is just someone's way of trying to "own" application isolation and sandboxing. People have done that for years, but I definitely see more of it now that throwing up a new virtual machine is seen as a much lower "cost" than throwing up a new physical box. The reality of this is that virtualization is VERY good for companies like Microsoft who sell based on the instances of servers. It doesn't matter if it's VMWare or some other solution; licensing becomes a cash cow rapidly in a virtualized environment.

    Where I work, we've seen about 15% net server propagation in the process of migrating systems so far. Generally, low-load stuff like web servers virtualizes very well, while I/O-intensive stuff like SQL does not. However, a load-balanced cluster pair of virtual machines running SQL on different hardware can outperform a single SQL instance running on the same host hardware... This means architecture changes are required and more software licenses are needed, but the side effect is a more redundant, reliable, and scalable infrastructure... and that is definitely a good thing.

    I am a big believer in virtualization; it harks back somewhat to the mainframe days, but that isn't a bad thing either. The hardware vendors are starting to pump out some truly kick-ass "iron" that can support the massive I/O that VMs need to be truly "enterprise ready". I am happy to say that I've been on the leading edge of this for several years, and I plan to stay on it.
  • by afidel ( 530433 ) on Tuesday July 24, 2007 @09:27AM (#19968605)
    That's funny, because ALL OSes suck (in fact, all hardware and software suck; some just suck less). Even on IBM's S/390 (now z/OS) mainframes there is compartmentalization in both hardware and software. If an OS that's been around for over 40 years, running the largest companies in the world, isn't always trusted to enforce separation of processes, I don't see how any other OS stands a chance.
  • by pla ( 258480 ) on Tuesday July 24, 2007 @09:55AM (#19968943) Journal
    If you can't trust your OS to enforce the separation between processes, then you need to start re-evaluating your choice of OS.

    And for the most part, modern OSs handle that well. They do allow for a certain degree of IPC, but mostly, two processes not strongly competing for the same resources can run simultaneously just fine.

    The problem arises in knowing what programs need what access... The OS can't make that call (without resorting to running 100% signed binaries, and even then I personally lump plenty of "legitimate" programs in the "useless or slightly malicious" category), and we obviously can't trust the applications to say what they need. Most programs, for example, will need to write at least in their own directory; many need access to your home dir; some create files in your temp directory; some need to write in random places around the machine; some need logging access; some even need to write directly to system directories. Some programs need network access, but the majority don't (even though they may want to use it - I don't care if Excel wants to phone home; I don't use any of its features that would require network access and would prefer to block them outright). How does the OS know which to consider legitimate and which to disallow?

    The concepts of chroot (and now registry) jails and outbound firewalling work well, as long as the user knows exactly what resources a given program will need access to; but even IT pros often don't know that ahead of time, and many well-behaved programs still snoop around in places you wouldn't expect. (A minimal chroot sketch appears at the end of this comment.)

    The problem mentioned by the GP, with the likes of Java and .NET, arises from them still running on the real machine - they may waste CPU cycles running on a virtual CPU with what amounts to chroot'ed memory, but all of their actions still occur on the real system. Deleting a file really deletes a file.

    "real" VMs basically avoid the entire issue by letting even a highly malicious program do whatever it wants to a fake machine. They can have full unlimited access, but any damage ends when you halt the VM. Repair of worst-case destruction requires nothing more than overwriting your machine image file with a clean version (you could argue the same for a real machine, but "copy clean.vm current.vm" takes a hell of a lot less time than installing Win2k3, MSSQL, IIS, Exchange, and whatever else you might have running on a random server, from scratch.



    Or, to take your argument one layer lower, I would tend to consider XP the untrusted app, and VMware the OS.
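    For reference, the classic chroot jail mentioned above takes only a few system calls; a minimal sketch (the jail path and uid are hypothetical, and the privilege drop matters because a root process can break out of a bare chroot):

        /* jail.c - confine a process to a directory subtree with chroot().
         * Must start as root; /var/jail is a hypothetical, pre-populated
         * tree that contains its own /bin/sh. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        int main(void)
        {
            if (chroot("/var/jail") != 0 || chdir("/") != 0) {
                perror("chroot");
                return EXIT_FAILURE;
            }
            /* Drop root for real; 65534 is conventionally "nobody". */
            if (setgid(65534) != 0 || setuid(65534) != 0) {
                perror("drop privileges");
                return EXIT_FAILURE;
            }
            /* From here on, "/" is /var/jail; nothing outside is reachable. */
            execl("/bin/sh", "sh", (char *)NULL);
            perror("execl");
            return EXIT_FAILURE;
        }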
  • by Sancho ( 17056 ) on Tuesday July 24, 2007 @10:45AM (#19969547) Homepage
    chroot jails tend to be restrictive. You can't access all your entries in /dev, or if you can, you've removed a lot of the protection afforded by the jail in the first place.

    Virtualization (or containerization... how awful!) generally allows this. Want to play with your hard drive driver? No problem.

    Of course, it fails when you actually /do/ want direct access to the hardware. Can't test that new Nvidia driver in a containerized OS.
  • Re:The great thing (Score:2, Insightful)

    by Mode_Locrian ( 1130249 ) on Tuesday July 24, 2007 @11:56AM (#19970699)
    The summary says that the nice thing about virtualization is that it can "...protect applications from the vagaries of the operating environments they run on." I would have thought that the really great thing about virtualization is that it can protect operating environments from the vagaries of applications which are run on them. This is especially handy when you just want to try out a new bit of software etc.

"When it comes to humility, I'm the greatest." -- Bullwinkle Moose

Working...