


Virtual Containerization 185
AlexGr alerts us to a piece by Jeff Gould up on Interop News. Quoting: "It's becoming increasingly clear that the most important use of virtualization is not to consolidate hardware boxes but to protect applications from the vagaries of the operating environments they run on. It's all about 'containerization,' to employ a really ugly but useful word. Until fairly recently this was anything but the consensus view. On the contrary, the idea that virtualization is mostly about consolidation has been conventional wisdom ever since IDC started touting VMware's roaring success as one of the reasons behind last year's slowdown in server hardware sales."
The great thing (Score:5, Funny)
The only thing it's lacking is a feature for throwing the virtual computer out of the window.
Re: (Score:2, Insightful)
You sort of get this feature with Parallels - the ability to drag a virtual server into a trash bin is almost as satisfying, and far less expensive.
Re:The great thing (Score:5, Funny)
Same as virtual girlfriends.
Re: (Score:2, Insightful)
Containerization (Score:5, Funny)
Re: (Score:3, Insightful)
Whatever happened to "Sandboxing?" (Score:5, Interesting)
As has been said before, we need a way to grant applications permission to use resources. We have that, to some degree, with firewalls and apps like ZoneAlarm/LittleSnitch, which ask you for permission before an application is allowed to "call home". But what about other resources? For example, restricting an application to a particular directory, or stopping it from installing a system-level event hook that acts as a keylogger?
Re: (Score:2)
Re: (Score:2)
Java and
Re: (Score:3, Insightful)
Virtualization (or containerization... how awful!) generally allows this. Want to play with your hard drive driver? No problem.
Of course, it fails when you actually
Re: (Score:2)
Re:Whatever happened to "Sandboxing?" (Score:5, Insightful)
If you can't trust your OS to enforce the separation between processes, then you need to start re-evaluating your choice of OS.
Re:Whatever happened to "Sandboxing?" (Score:4, Insightful)
Re: (Score:2)
Re: (Score:2)
Three Dead Trolls: Every OS sucks! [youtube.com]
Re:Whatever happened to "Sandboxing?" (Score:5, Insightful)
And for the most part, modern OSs handle that well. They do allow for a certain degree of IPC, but mostly, two processes not strongly competing for the same resources can run simultaneously just fine.
The problem arises in knowing what programs need what access... The OS can't make that call (without resorting to running 100% signed binaries, and even then, I personally lump plenty of "legitimate" programs in the "useless or slightly malicious" category), and we obviously can't trust the applications to say what they need. Most programs, for example, will need to at least write in their own directory; many need access to your home dir; some create files in your temp directory; some need to write in random places around the machine; some need logging access; some even need to write directly to system directories. Some programs need network access, but the majority don't (even though they may want to use it: I don't care if Excel wants to phone home, since I don't use any of its features that would require network access and would prefer to block them outright). How does the OS know which to consider legitimate and which to disallow?
The concepts of chroot (and now registry) jails and outbound firewalling work well, as long as the user knows exactly what resources a given program will need access to; but even IT pros often don't know that ahead of time, and many well-behaved programs still snoop around in places you wouldn't expect.
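The whitelist approach described above can be sketched as a simple default-deny policy table. This is only a toy illustration in Python: the program names, paths, and policy format are all invented, and a real OS would enforce this in the kernel rather than in userspace.

```python
# Toy sketch of per-program resource whitelisting (all names/paths invented).
# Default deny: unknown programs and unlisted resources get nothing.
POLICY = {
    "wordproc": {"fs": ["/home/user/docs"], "net": False},
    # e.g. an office app that wants to phone home but has no business doing so
    "spreadsheet": {"fs": ["/home/user/sheets"], "net": False},
}

def allowed(program, kind, target=None):
    """Return True only if the policy explicitly grants the resource."""
    rules = POLICY.get(program)
    if rules is None:
        return False                    # unknown program: deny everything
    if kind == "net":
        return rules["net"]
    if kind == "fs" and target is not None:
        return any(target.startswith(prefix) for prefix in rules["fs"])
    return False
```

The hard part, as the comment notes, is not the mechanism but filling in that table correctly ahead of time.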
The problem mentioned by the GP, with the likes of Java and
"real" VMs basically avoid the entire issue by letting even a highly malicious program do whatever it wants to a fake machine. They can have full unlimited access, but any damage ends when you halt the VM. Repair of worst-case destruction requires nothing more than overwriting your machine image file with a clean version (you could argue the same for a real machine, but "copy clean.vm current.vm" takes a hell of a lot less time than installing Win2k3, MSSQL, IIS, Exchange, and whatever else you might have running on a random server, from scratch).
Or, to take your argument one layer lower, I would tend to consider XP the untrusted app, and VMWare the OS.
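The "copy clean.vm current.vm" recovery step amounts to nothing more than overwriting one file with another. A minimal Python sketch, with stand-in files instead of multi-gigabyte disk images (all names are placeholders):

```python
import os
import shutil
import tempfile

def restore_image(clean_path, current_path):
    """Roll a VM back by overwriting its (possibly trashed) disk image
    with a known-good copy -- the 'copy clean.vm current.vm' idea."""
    shutil.copyfile(clean_path, current_path)

# tiny demonstration with throwaway files standing in for VM images
workdir = tempfile.mkdtemp()
clean = os.path.join(workdir, "clean.vm")
current = os.path.join(workdir, "current.vm")
open(clean, "w").write("pristine install")
open(current, "w").write("trashed by malware")
restore_image(clean, current)
recovered = open(current).read()   # back to the pristine contents
```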
Re: (Score:2)
It seems that
Re: (Score:2)
Re: (Score:2)
It seems like an OS that only allows the word processor access to files the user wants to edit would be extremely cumbersome to use, as the user would have to manually specify which app can access each file. With a home directory with thousands or tens of thousands of files, this would take an eternity. Am I missin
We need a new language/OS philosophy (Score:2)
I think basically the problem is that our languages still think
Re: (Score:2)
Containerization is a real word (Score:2)
Re: (Score:2)
Contain (Score:3, Informative)
Re: (Score:3)
--beatnik avatar.
Re: (Score:2)
Was this written by GWB, or is there a real semantic difference between 'containment' and 'containerization'?
Re: (Score:2)
VM's just allow so many opportunities (Score:5, Interesting)
It's proved so useful that I'm sincerely considering doing the same for my actual WWW server, so that if things ever go -bad- on the device I can either roll back or transparently transfer to another machine. The latter, thanks to the (mostly) hardware-agnostic nature of the VM setup, makes disaster recovery just that much simpler (sure, you still have to set up the host, but at least that's a simpler process than redoing every tiny little trinket again).
As another software developer... (Score:5, Insightful)
And you still have that blank windows install to clone again when you need it.
VMs are a fantastic dev tool.
Re:As another software developer... (Score:4, Insightful)
Re: (Score:2)
The easier it is to test these things, then the more likely you're going to end up with a quality product. If it takes me a half hour to install, test, uninstall, test, clean-up, etc., etc. then it's likely I'm not goi
Also for QA. (Score:4, Interesting)
Also... (Score:2)
As yet another software developer... (Score:2)
Re: (Score:2)
Apps screw up the system all the time by hooking calls, inserting themselves in networking chains, or leaving cruft behind in the registry. When you're building an uninstaller, you have to make sure it grabs all this junk and leaves the system in a reasonable state, and that's where a VM has its usefulness; you c
Containerization != Virtualization (Score:2, Insightful)
Application isolation is not virtualization; it's nothing more than shimming the application with band-aid APIs that fix deficiencies in the original APIs. Calling it virtualization is a marketing and VC-focused strategy; it has nothing to do with the technology.
Virtualization = Containerization (Score:2)
Re: (Score:2)
It also abstracts the instance from a physical hardware location, provided the hardware resource needs are uniform. And it permits throttling application resources or, conversely, changing application resource capacities in a nearly ad hoc way.
If you accept this premise, contains are an effect of virtualization and a mathematical relationship shows containers as
I'd say it's both (Score:5, Informative)
At my previous company, we invested in two almighty servers with absolute stacks of RAM in a failover cluster. They ran 4-5 virtual servers for critical tasks; each virtual machine was stored on a shared RAID5 array. If anything critical happened to a real server, the virtual servers would be switched to the next real server and everything was back up again in seconds. The system was fully automated too, and frankly, it saved having to buy several not-so-meaty boxes while not losing much redundancy and giving very quick scalability (want one more virtual server? Five-minute job. Want more performance? Upgrade the redundant box and switch over the virtual machines).
The system worked a treat, and frankly, the size & power of the fewer, bigger, more important servers gave me a constant hard-on.
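The failover logic described above can be sketched roughly as follows. This is only an illustration: the host names, the liveness check, and the placement policy are all invented, and a real cluster also has to handle fencing, storage locking, and split-brain situations.

```python
# Minimal sketch of "switch the VMs to the next real server" failover.
# vms maps each VM to its preferred host; alive() is whatever heartbeat
# mechanism the cluster uses (here just a caller-supplied predicate).
def place_vms(vms, hosts, alive):
    """Return {vm_name: host} with VMs on dead hosts moved to a live one."""
    placement = {}
    for vm, host in vms.items():
        if alive(host):
            placement[vm] = host                     # stay put
        else:
            # restart on the first surviving host (shared storage assumed)
            placement[vm] = next(h for h in hosts if alive(h))
    return placement
```

Because the images live on the shared RAID5 array, "moving" a VM is just starting it elsewhere, which is why recovery can take seconds.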
Re: (Score:2)
Re: (Score:3, Interesting)
I guess that's true (Score:2)
Really about rPath (Score:3, Informative)
PHP 6 (Score:4, Informative)
I suppose jailing applications is a well-known way of securing them, this really just improves on that, but with much more overhead. I wonder if anyone is thinking about providing "lightweight" virtualisation for applications instead of the whole OS?
Re: (Score:2)
Re: (Score:2)
Virtuozzo. OpenVZ. Solaris Containers. BSD Jails. Linux has something (at least one!) too, I forget the name.
In terms of Enterprise class features, Virtuozzo is the best of them, and comparatively cheap.
C//
It's all about (Score:5, Insightful)
Don't trust "it's all about" or "it turns out that to the contrary" or "set to fully replace" statements, especially when there's lack of evidence of what is claimed.
Hosting services use virtualization to offer 10-20 virtual servers per physical machine, and I and many people I know use virtual machines to test configurations we can't afford separate physical machines for.
So even though it's also about "containerization" (is "isolation" a bad word all of a sudden?), it's not ALL about it.
It's all about context (Score:2)
Re: (Score:2)
My ancestors used to consume equine for subsistence, so if someone says they can eat a horse, I'll expect them to. If not, I'll kill the damn liars and ceremonially drink wine from their skulls (something my ancestors also used to do a lot).
My ancestors also didn't know the concept of hyperbole, much like readers of tech news.
Re: (Score:2)
VMs are overkill for "containerization" (Score:3, Informative)
Re: (Score:2)
We've had UML and chroot for quite a while in Linux, but it's equally limited. With virtualization, I can run Windows on my Linux box, which is (to me) where the real use is.
Buzzword alert! (Score:5, Insightful)
1) Consolidation
2) "Containerization" or whatever they're calling it today.
The company that I work for is using multiple virtual servers to keep applications separate and to make it easier to migrate them from machine to machine, which is a common use for VMware (e.g. the appliance trend). So you're trading performance and memory usage for security and robustness/redundancy.
Across maybe 100-200 servers, the number of vservers we have is astonishing (probably around 1200 to 1500, which is a bit of a nightmare to maintain). These host customer applications; when an application starts to use more resources, its vserver is moved to a machine with fewer servers on it, and gradually to its own server, which in the long run saves money & downtime.
The other major industry using them is hosting, allowing customers a greater amount of personalization than the one-size-fits-all cpanel hosting companies. This is the real industry where consolidation has increased, biting into the hardware market's possible sales, because thousands of customers are now leasing shared resources instead of leasing actual hardware.
Either way, the number of new (virtual) machines and IP addresses, all managed by different people, is becoming a management nightmare. Now everybody can afford a virtual dedicated server on the internet regardless of their technical skills, which often ends up as a bad buy (memory and resource constraints compared to shared hosting on a well-maintained server).
Re: (Score:3, Informative)
1. Disaster Site Operations (specifically the use case where main operations are still on metal, but the disaster site is virtual; there are fewer physical boxes than operating systems, so this is a consolidation case, just not the usual one).
2. Increased Agility (as
Re: (Score:2)
Though perhaps rare, there are providers that are very keen on security on shared hosts. I do agree though that there are likely many companies for which this is not true. It is a shame, though, that the majority of bad apples spoils it for the few good ones ;-)
Re: (Score:2)
Now, how long do you expect it to take for them to realize their VPS has been compromised by spammers/hackers/scriptkiddies, etc.? Probably much longer than it would take the hosting company, because the hosting company is actively looking out for these things.
Virtualization can't protect from the OS (Score:2)
What do you run inside a virtual machine - an OS!!
What do you run the virtual machine on - an OS!!
So, any application now has to withstand two OSes, not just one. Isolation can be an important part of virtualization, but it's about isolating applications from each other, not from the OS.
Re: (Score:3, Informative)
Unless you're running Xen (unless, that is, you consider Xen an OS). But this brings us back to the question, "what is an OS?"
Xen is a kernel for managing virtualized guests; it sits at Ring 0, where a traditional OS normally resides. Xen requires that a single guest machine be set up to boot by default; that guest receives special privileges for the purposes of managing Xen. This special guest is called the "dom0", but it is for all other intents and purposes -- just
Very fishy and intriguing... (Score:5, Insightful)
That can be a chilling thought to companies like Intel, Microsoft or Oracle. Also, the carefully woven, convoluted DRM and TCPA architectures that consume gazillions of instructions and slow down performance to a crawl... will simply be impossible if the virtualisation layer simply ignores these functions in the hardware. Which is why I felt it very strange for the Linux kernel team to get involved in porting these VMs in order to allow Vista to run as a guest OS. It shouldn't have been a priority item [slashdot.org] for the kernel team at all, IMO.
Re: (Score:2)
True virtualisation will cause the opposite effect - people will buy less hardware.
But every desktop user is going to have a CPU in their machine, and the number of CPUs in the big server farms isn't going to change much, because they pile on capacity to suit the application. Odd sites like the one I work at will use VMware where they have a requirement for a calendar server running Linux 2.2 (I am not making this up) and don't want to waste a box on it. Fair enough, but that's not a big market to lose.
Re: (Score:2)
Perhaps, though for myself, this is untrue. I run a hosting provider. Back in the day, we simply needed a few large hosting machines and that was sufficient -- providers could pile accounts onto machines. Even medium-sized companies could get by with less than 10 shared-hosting servers.
However, that has changed with VPS... We can only fit a few customers onto each machine. The more customers we have, the more virtual mac
Re: (Score:2)
That is, of course, assuming you don't consider running Windows 98 to be a degraded user experience. Heck, why not run Windows 3.1? You could probably run 100 instances of that for the same hardware requirements as Vista.
Re: (Score:2)
Intel has over 60,000 computers in their data centers. Over 40,000 of those servers run VMWare.
Maybe they did it for the discount? Ha.
Joe.
Re: (Score:2)
It is simply amazing that Windows 98, for instance, can deliver the same (and often better) end-user experience and functionality that Vista does, but with only 5% of the CPU MHz, RAM and disk resources. And so virtualisation will allow 20 Windows 98 instances on the hardware required for a single instance of Vista without degrading the user experience.
--Win98 is not even supported by MS anymore -- go with Win2kpro instead.
Re: (Score:2)
Absolutely.
That's why I play World of Warcraft on my Windows 98 box with DirectX 9 and my Nvidia 8800 GTX video card.
w00t!
Obvious and redundant? (Score:2, Informative)
This is kind of obvious. I used to use more machines for security reasons; now I use fewer machines, but they are more powerful. When you do server consolidation, it implies that applications that used to run on different hardware for security and stability reasons will now be running on the same hardware within different VMs. So how can they say "protect applications from the vagaries of the operating environments" is opposed to "consolidating hardware boxes"?
"Consolidating hardware boxes" implies "protect applic
Re: (Score:2)
I think it's like when you have an application that is certified with a particular OS configuration and set of patches, etc., and another application that would require a conflicting setup. You can run all your applications in a known-good setup and not worry about updates to one application (and the OS dependencies it drags in) affecting another. You could freeze your package manager at a certain configuration for an application so random OS updates don't go breaking things. Those kinds of vagaries.
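The "frozen known-good configuration" idea can be sketched as recording the certified package versions and reporting any drift before it breaks an application. The package names and versions here are made up for illustration; a real setup would pin versions in the package manager itself.

```python
# Sketch of configuration-drift detection against a frozen baseline.
# Both dicts map package name -> version string (purely illustrative data).
def config_drift(certified, installed):
    """Return {package: (certified_version, installed_version)} for every
    package that no longer matches the certified baseline (or is missing)."""
    return {pkg: (want, installed.get(pkg))
            for pkg, want in certified.items()
            if installed.get(pkg) != want}
```

Running each application in its own VM means each one can keep its own frozen baseline, which is exactly the "vagary protection" the article is talking about.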
Re: (Score:2)
It's not consolidation if the company was running multiple applications on one server before - and was occasionally having problems when the OS mucked something up for one of those applications after a patch was put in to fix a different application.
There are *many* companies that cannot afford the hardware to run a separate physical server for each app.
Node Locking (Score:4, Interesting)
Re: (Score:2)
Who decides most? (Score:2)
Is there actually a metric somewhere of why companies are turning to virtualization? We are doing it for stability of applications to a very small degree, but also for ease of development and backup, and in large part to consolidate and use hardware more efficiently. What about you: why are you considering/using/investigating virtualization?
Makes sense to me (Score:4, Informative)
It's fantastically handy to be able to install and configure a service in the knowledge that no matter how screwed up the application (or, for that matter, how badly I screw it up), it's much harder for that application to mess up other services on the same host - or, for that matter, for existing services to mess up the application I've just set up.
Add to that - anyone who says "Unix never needs to be rebooted" has never dealt with the "quality" of code you often see today. The OS is fine, it's just that the application is quite capable of rendering the host so thoroughly wedged that it's not possible to get any app to respond, it's not possible to SSH in, it's not even possible to get a terminal on the console. But yeah, the OS itself is still running fine apparently, so there's no need to reboot it.
This way I can reboot virtual servers which run one or two services rather than physical servers which run a dozen or more services.
Granted, I could always run Solaris or AIX rather than Linux, but then I'll be replacing a set of known irritations with a new set of mostly unknown irritations, all with the added benefit that so much Unix software never actually gets tested on anything other than Linux these days that I could well find myself with just as many issues.
Application Deployment (Score:2)
Let Me Be the First to say "Duh!" (Score:5, Insightful)
As I keep telling people when I work with virtualization, it does not necessarily lead to server consolidation in the logical sense (as in instances of servers); rather, it tends to lead to server propagation. This is probably to be expected: generally, I/O will be lower for a virtual machine than for a physical machine, thus requiring the addition of another node for load balancing in certain circumstances. However, this is not always the case.
Virtualization DOES help lead to BOX consolidation; as in it helps reduce the physical server footprint in a datacenter.
Let me give you my viewpoint on this: generally, virtualization is leveraged as a tool to consolidate old servers onto bigger physical boxes. Generally, these old servers (out of warranty, breaking/dying and so on) have lower I/O requirements anyway, so they often see a speed boost going to the new hardware... or at the very least performance remains consistent. However, where new applications are being put on virtual platforms, quite often the requirements of the application cause propagation of servers because of the I/O constraints. This is generally a good thing, as it encourages developers to write "enterprise ready" applications that can be load balanced instead of focusing on stand-alone boxes with loads of I/O or CPU requirements. This is good for people like me, as it provides a layer of redundancy and scalability that otherwise wouldn't be there.
However, the inevitable cost of this is management. While you reduce physical footprint, there are more server instances to manage, thus you need a larger staff to manage your server infrastructure... not to mention the specialized staff managing the virtual environment itself. This is not in itself a bad thing, and generally might lead to better management tools, too... but this is something that needs to be considered in any virtualization strategy.
Generally, in a Wintel shop, more new applications get implemented in most companies these days. This is particularly true since most older applications have been, or need to be, upgraded to support newer operating systems (2003 and the upcoming 2008). This means that the net effect of all I've mentioned is an increase in server instances even while the footprint decreases.
"Containerization" (yuck!) is not new by the way. This is just someone's way of trying to "own" application isolation and sandboxing. People have done that for years, but I definitely see more of it now that throwing up a new virtual machine is seen as a much lower "cost" than throwing up a new physical box. The reality of this is that virtualization is VERY good for companies like Microsoft who sell based on the instances of servers. It doesn't matter if it's VMWare or some other solution; licensing becomes a cash cow rapidly in a virtualized environment.
Where I work we've seen about 15% net server propagation in the process of migrating systems so far. Generally, low-load stuff like web servers virtualizes very well, while I/O-intensive stuff like SQL does not. However, a load-balanced cluster pair of virtual machines on different hardware running SQL can outperform SQL running as a single instance on the same host hardware; this means that architecture changes are required, and more software licenses are needed, but the side effect is a more redundant, reliable and scalable infrastructure... and this is definitely a good thing.
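As a back-of-the-envelope model of the trend described above (server instances grow roughly 15% while physical boxes shrink), using purely illustrative numbers:

```python
# Toy model: instances grow by a propagation percentage while the physical
# footprint shrinks to however many hosts the VM density allows.
# All inputs are illustrative; real capacity planning is I/O-driven.
def after_migration(physical_before, vms_per_host, propagation_pct=15):
    """Return (physical_hosts_after, server_instances_after)."""
    instances = physical_before + physical_before * propagation_pct // 100
    hosts = -(-instances // vms_per_host)   # ceiling division
    return hosts, instances
```

So 100 physical servers at 10 VMs per host become 115 instances on only 12 boxes: more operating systems to manage and license, far less iron in the datacenter, which is the comment's point about licensing becoming a cash cow.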
I am a big believer in virtualization; it somewhat harks back to the mainframe days, but that isn't a bad thing either. The hardware vendors are starting to pump out some truly kick-ass "iron" that can support the massive I/O that VMs need to be truly "enterprise ready". I am happy to say that I've been on the leading edge of this for several years, and I plan to stay on it.
Re: (Score:2)
However, the inevitable cost of this is management. While you reduce physical footprint, there are more server instances to manage, thus you need a larger staff to manage your server infrastructure... not to mention the specialized staff managing the virtual environment itself. This is not in itself a bad thing, and generally might lead to better management tools, too... but this is something that needs to be considered in any virtualization strategy.
----
This is completely wrong - the increased scalabil
Re: (Score:2)
Re: (Score:2)
It's not just that VMware removes (or rather, greatly reduces) the hardware administration requirements, it's that it makes managing infrastructures _much_ more scalable. Change
Horrible word (Score:2)
Re: (Score:2)
Completely wrong (Score:2)
Nothing new here... -or- history repeats itself (Score:3, Informative)
Within a VM system, one will now find three types of systems running in the virtual machines.
It is these Service Virtual Machines that equate to the topic of the original post. An SVM usually provides one specific function, and while there may be interdependence between SVMs (for example, the TCPIP SVM that provides the TCP/IP stack and each of the individual TCP/IP services), they are pretty much isolated from each other. A failure in a single SVM, while disruptive, usually doesn't impact the whole system.
One of the first SVMs was the Remote Spooling Communication Subsystem (or RSCS). This service allowed two VM systems to be linked together via some sort of communication link; think UUCP.
The power of SVMs is in the synergy between the Hypervisor system and a lightweight platform for implementing services. The lightweight platform itself doesn't provide much in terms of services: there is no TCP/IP stack, no "log in" facility (it relies on the base virtual machine login console), and maybe not even any paging memory (letting the base VM system manage a huge address space). Instead, a lightweight platform provides a robust file system, memory management, and task/program management. In IBM's z/VM product, CMS is an example of a lightweight platform. The Group Control System (GCS) is another example (GCS was initially introduced to provide a platform to support VTAM, which was ported from MVS).
Part of the synergy between the Hypervisor and the SVMs is that the Hypervisor needs to provide a fast, low-overhead intra-virtual-machine communication path that is not built upon the TCP/IP stack. In other words, communication between two virtual machines should not require that each virtual machine contain its own TCP/IP stack with its own IP address. Think more along the lines of using the IPC or PIPE model between the SVMs.
Since the SVM itself is not a full suite of services, maintenance and administration are done via meta-administration; in other words, you maintain the SVM service from outside the SVM itself. There is no need to "log into" the SVM to make changes. Instead of the SVM providing a syslog facility, a common syslog facility is shared among all the SVMs. Instead of each SVM doing its own paging, simply define the virtual machine size to meet the storage requirements of the application and let the Hypervisor manage the real storage and paging.
Maybe a good analogy would be taking a Linux kernel and implementing a service by using the init= kernel parameter to invoke a simple setup (mounting the disks) and run just the code needed to perform the service. Communication with other services would be provided via hypervisor PIPEs between the different SVMs. So one would have a TCP/IP SVM that provides the TCP/IP network stack to the outside world; a web server SVM that provides just the HTTP protocol and a base set of applications, using a hypervisor PIPE to talk to the TCP/IP stack; and, within the web server SVM, hypervisor PIPEs to talk to the individual application SVMs.
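The hypervisor-PIPE idea can be modeled in miniature with an ordinary pipe: two "service virtual machines" exchange raw bytes over a channel that involves no TCP/IP stack and no IP address on either end. Here os.pipe() merely stands in for the hypervisor-provided path; the HTTP request is illustrative.

```python
import os

# Toy model: the "web server SVM" hands a request to the "TCP/IP SVM"
# over a raw byte channel (an ordinary pipe standing in for a
# hypervisor-provided intra-VM path -- no sockets, no IP addresses).
r, w = os.pipe()

os.write(w, b"GET / HTTP/1.0\r\n\r\n")   # web server SVM side writes
request = os.read(r, 64)                  # TCP/IP SVM side reads it back
```

The point of the analogy is that the channel is just bytes between trusted peers, so each SVM can stay tiny: no network stack, no login facility, just the one service it exists to provide.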
A better word (Score:2)
How about just "containment". That way, rampant verbification won't overrunerrize things.
best feature of virtualization (Score:2)
Containerization (Score:2)
Well, except for "compartmentalization", which I guess has been used alongside words like virtualization & partitioning in computer science for ages.
A decent OS... (Score:2)
Oh c'mon, is this 1983? (Score:2)
Welcome (Score:2)
Server CPUs have for all practical purposes always had VMs. Intel resisted adding the needed hardware support to its consumer chips for a very, very long time, to avoid exactly what we see happening now.
And yes, VMware rocks harder than a fox with socks.
VMs are great for OSS at small companies. (Score:2)
It really brings down the knowledge and time required f
Re: (Score:2)
Re: (Score:2)
Presumably with some sort of shared storage?
I'd be interested to know what's used. Is it a generic shared/cluster storage system or some special VMware-provided system?