Virtual Containerization
AlexGr alerts us to a piece by Jeff Gould up on Interop News. Quoting: "It's becoming increasingly clear that the most important use of virtualization is not to consolidate hardware boxes but to protect applications from the vagaries of the operating environments they run on. It's all about 'containerization,' to employ a really ugly but useful word. Until fairly recently this was anything but the consensus view. On the contrary, the idea that virtualization is mostly about consolidation has been conventional wisdom ever since IDC started touting VMware's roaring success as one of the reasons behind last year's slowdown in server hardware sales."
Re:The great thing (Score:2, Insightful)
You sort of get this feature with Parallels - the ability to drag a virtual server into a trash bin is almost as satisfying, and far less expensive.
Containerization != Virtualization (Score:2, Insightful)
Application isolation is not virtualization; it's nothing more than shimming the application with band-aid APIs that patch deficiencies in the original APIs. Calling it virtualization is a marketing and VC-focused strategy; it has nothing to do with the technology.
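To make the "shim" point concrete, here's a toy, purely hypothetical Python illustration: intercept an app's file writes and quietly redirect them into a private per-app directory, the way application-isolation products wrap leaky APIs. The sandbox path and behavior are made up for the example.

    import builtins, os

    SANDBOX = '/tmp/app_sandbox'          # hypothetical per-app write area
    _real_open = builtins.open

    def shimmed_open(path, mode='r', *args, **kwargs):
        # Redirect any write/append/create into the sandbox tree.
        if isinstance(path, str) and any(m in mode for m in 'wax'):
            path = os.path.join(SANDBOX, path.lstrip('/'))
            os.makedirs(os.path.dirname(path), exist_ok=True)
        return _real_open(path, mode, *args, **kwargs)

    builtins.open = shimmed_open          # install the shim
    open('/etc/pretend.conf', 'w').write('lands in the sandbox instead')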
As another software developer... (Score:5, Insightful)
And you still have that blank Windows install to clone again when you need it.
VMs are a fantastic dev tool.
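One cheap way to get that workflow (a sketch assuming QEMU's qcow2 format; the filenames are made up): keep the pristine install as a read-only backing file and give each dev VM a copy-on-write overlay, so "dragging it to the trash" is just deleting one small file.

    import subprocess

    # Overlay disk backed by the clean install; only changed blocks are stored.
    subprocess.run(['qemu-img', 'create', '-f', 'qcow2',
                    '-b', 'winxp-clean.qcow2', '-F', 'qcow2',
                    'dev-scratch.qcow2'], check=True)
    # Done experimenting? Delete dev-scratch.qcow2 and the base is untouched.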
It's all about (Score:5, Insightful)
Don't trust "it's all about" or "it turns out that, to the contrary" or "set to fully replace" statements, especially when there's no evidence for what's being claimed.
Hosting services use virtualization to offer 10-20 virtual servers per physical machine, and I and many people I know use virtual machines to test configurations we can't afford separate physical machines for.
So even though it's also about "containerization" (is "isolation" a bad word all of a sudden?), it's not ALL about it.
Buzzword alert! (Score:5, Insightful)
1) Consolidation
2) "Containerization" or whatever their calling it today.
The company I work for uses multiple virtual servers to keep applications separate and to make migrating them from machine to machine easier, which is a common use for VMware (e.g. the appliance trend). You're trading performance and memory usage for security and robustness/redundancy.
Across maybe 100-200 physical servers, the number of vservers we have hosting customer applications is astonishing (probably around 1200 to 1500, which is a bit of a nightmare to maintain). When an application starts to use more resources, its vserver is moved to a machine with fewer guests, and gradually to its own server, which in the long run saves money and downtime (something like the toy rebalancer sketched below).
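A minimal Python sketch of that rebalancing policy; the host and guest names are invented, and a real setup would weigh actual load rather than just counting guests:

    def pick_target(hosts):
        """hosts: dict mapping host name -> list of vserver names."""
        return min(hosts, key=lambda h: len(hosts[h]))

    def rebalance(hosts, vserver, current_host):
        # Move a hot vserver to the emptiest other machine.
        others = {h: v for h, v in hosts.items() if h != current_host}
        target = pick_target(others)
        hosts[current_host].remove(vserver)
        hosts[target].append(vserver)
        return target

    fleet = {'hw01': ['web1', 'web2', 'crm'], 'hw02': ['mail'], 'hw03': []}
    print(rebalance(fleet, 'crm', 'hw01'))   # -> hw03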
The other major industry using them is hosting, allowing customers far more personalization than the one-size-fits-all cPanel hosting companies. This is the industry where consolidation has really increased, biting into the hardware market's potential sales, because thousands of customers are now leasing shared resources instead of leasing actual hardware.
Either way, the number of new (virtual) machines and IP addresses, all managed by different people, is becoming a management nightmare. Now everybody can afford a virtual dedicated server on the internet regardless of their technical skills, which often ends up as a bad buy (memory and resource constraints compared to shared hosting on a well-maintained server).
Very fishy and intriguing... (Score:5, Insightful)
That can be a chilling thought to companies like Intel, Microsoft or Oracle. Also, the carefully woven, convoluted DRM and TCPA architectures that consume gazillions of instructions and slow performance to a crawl... will simply be impossible if the virtualization layer simply ignores those functions in the hardware. Which is why I felt it was very strange for the Linux kernel team to get involved in porting these VMs in order to allow Vista to run as a guest OS. It shouldn't have been a priority item [slashdot.org] for the kernel team at all, IMO.
Most important? (Score:1, Insightful)
Most important means different things to different people.
In the real world, running a reasonably reliable application requires a modern rackmount server with remote out-of-band management, redundant power supplies, and RAID. The most common failure modes for computers are hard disk and power supply failures, and this protects you from both. Remote management lets you control and reboot the machine from offsite.
These kinds of servers are available off the shelf from any major vendor (Dell, HP, IBM, etc) and will run you $2000 or so. Given the speed of computers today, that server will run most apps really, really fast. In fact, many apps will rarely go above 10% utilization (you do monitor your servers with SNMP, right?).
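For anyone who wants to start doing that, here's a minimal sketch of such a utilization check in Python with pysnmp, assuming a Net-SNMP agent exposing the UCD MIB; the host address and community string are placeholders:

    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    def load_average(host, community='public'):
        # Fetch the 1-minute load average (UCD-SNMP-MIB laLoad.1).
        err_ind, err_status, _, var_binds = next(getCmd(
            SnmpEngine(),
            CommunityData(community, mpModel=1),        # SNMPv2c
            UdpTransportTarget((host, 161)),
            ContextData(),
            ObjectType(ObjectIdentity('1.3.6.1.4.1.2021.10.1.3.1'))))
        if err_ind or err_status:
            raise RuntimeError(str(err_ind or err_status))
        return float(var_binds[0][1].prettyPrint())

    print(load_average('192.0.2.10'))   # if this idles near zero, consolidate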
So, to get a reliable server with next-day onsite parts replacement, you have to buy far more server than you need. Many (most?) data centers are full of servers like this.
For one software project I'm working on, the vendor recommends 5 servers: one for Oracle, two for Crystal Reports, and two application servers. The recommended hardware costs $40,000. This is for a custom software app that will have 5 users. Yes, 5 users, and it's not a complex app that demands a lot of performance. Having talked to other customers, utilization rarely goes above 3%. Quite a waste, even though the total project cost is $200,000.
Hardware consolidation with VMware can lead to very big savings in hardware, colocation, power, cooling, and admin costs.
And if you get the VMotion software from VMware, you can move a virtual machine from one server to another while it is running, without skipping a beat. That is very, very useful. Need to take your real server down for maintenance? Move the virtual machines to another server. Need to do your end-of-month reconciliation? Move it from the slow backup server to the big fast number cruncher.
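That kind of live migration can also be scripted against the vSphere API. A rough sketch using pyVmomi (a library that postdates this discussion; every name below is a placeholder, not a real inventory):

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host='vcenter.example.com', user='admin', pwd='secret',
                      sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    def find(vimtype, name):
        # Walk the inventory for the first object of this type with this name.
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vimtype], True)
        return next(o for o in view.view if o.name == name)

    vm = find(vim.VirtualMachine, 'month-end-batch')
    dest = find(vim.HostSystem, 'big-number-cruncher')
    # Live-migrate without powering the guest off.
    vm.MigrateVM_Task(host=dest,
                      priority=vim.VirtualMachine.MovePriority.highPriority)
    Disconnect(si)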
Re:Whatever happened to "Sandboxing?" (Score:5, Insightful)
If you can't trust your OS to enforce the separation between processes, then you need to start re-evaluating your choice of OS.
Let Me Be the First to say "Duh!" (Score:5, Insightful)
As I keep telling people when I work with virtualization, it does not necessarily lead to server consolidation in the logical sense (as in instances of servers); rather, it tends to lead to server propagation. This is probably expected; generally I/O will be lower for a virtual machine than for a physical machine, thus requiring the addition of another node for load balancing in certain circumstances. However, this is not always the case.
Virtualization DOES help with BOX consolidation, in that it reduces the physical server footprint in a datacenter.
Let me give you my viewpoint on this: generally virtualization is leveraged as a tool to consolidate old servers onto bigger physical boxes. These old servers (out of warranty, breaking/dying and so on) generally have lower I/O requirements anyway, so they often see a speed boost going to the new hardware... or at the very least performance stays consistent. However, where new applications are being put on virtual platforms, the requirements of the application quite often cause propagation of servers because of the I/O constraints. This is generally a good thing, as it encourages developers to write "enterprise ready" applications that can be load balanced instead of focusing on stand-alone boxes with loads of I/O or CPU requirements. This is good for people like me, as it provides a layer of redundancy and scalability that otherwise wouldn't be there.
However, the inevitable cost of this is management. While you reduce the physical footprint, there are more server instances to manage, so you need a larger staff to manage your server infrastructure... not to mention the specialized staff managing the virtual environment itself. This is not in itself a bad thing, and it might even lead to better management tools, but it is something that needs to be considered in any virtualization strategy.
Generally, in a Wintel shop, more new applications get implemented in most companies these days. This is particularly true since most older applications have been, or need to be, upgraded to support newer operating systems (2003 and the upcoming 2008). This means the net effect of all I've mentioned is an increase in server instances even while the physical footprint decreases.
"Containerization" (yuck!) is not new by the way. This is just someone's way of trying to "own" application isolation and sandboxing. People have done that for years, but I definitely see more of it now that throwing up a new virtual machine is seen as a much lower "cost" than throwing up a new physical box. The reality of this is that virtualization is VERY good for companies like Microsoft who sell based on the instances of servers. It doesn't matter if it's VMWare or some other solution; licensing becomes a cash cow rapidly in a virtualized environment.
Where I work we've seen about 15% net server propagation in the process of migrating systems so far. Generally, low-load stuff like web servers virtualizes very well, while I/O-intensive stuff like SQL does not. However, a load-balanced cluster pair of virtual machines running SQL on different hardware can outperform SQL running as a single instance on the same host hardware... this means that architecture changes are required, and more software licenses are needed, but the side effect is a more redundant, reliable and scalable infrastructure... and that is definitely a good thing.
I am a big believer in virtualization; it harks back to the mainframe days somewhat, but that isn't a bad thing either. The hardware vendors are starting to pump out some truly kick-ass "iron" that can support the massive I/O that VMs need to be truly "enterprise ready". I am happy to say that I've been on the leading edge of this for several years, and I plan to stay on it.
Re:Whatever happened to "Sandboxing?" (Score:5, Insightful)
And for the most part, modern OSs handle that well. They do allow for a certain degree of IPC, but mostly, two processes not strongly competing for the same resources can run simultaneously just fine.
The problem arises in knowing what programs need what access... The OS can't make that call (without resorting to running 100% signed binaries, and even then, I personally lump plenty of "legitimate" programs in the "useless or slightly malicious" category), and we obviously can't trust the applications to say what they need. Most programs, for example, will need to at least write in their own directory, many need access to your home dir, some create files in your temp directory, some need to write in random places around the machine, some need logging access, and some even need to write directly to system directories. Some programs need network access, but the majority don't (even though they may want to use it - I don't care if Excel wants to phone home; I don't use any of its features that would require network access and would prefer to outright block them). How does the OS know which to consider legitimate and which to disallow?
The concepts of chroot (and now registry) jails and outbound firewalling work well, as long as the user knows exactly what resources a given program will need access to; but even IT pros often don't know that ahead of time, and many well-behaved programs still snoop around in places you wouldn't expect.
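For reference, the chroot-jail half of that is only a few lines on a Unix box. A minimal sketch (must run as root, and assumes the jail directory was already populated with the binary and its libraries; all paths are examples):

    import os

    def run_jailed(jail_dir, argv):
        pid = os.fork()
        if pid == 0:                      # child: enter the jail, then exec
            os.chroot(jail_dir)           # '/' now points inside the jail
            os.chdir('/')
            os.setgid(65534)              # drop privileges to nobody
            os.setuid(65534)
            os.execv(argv[0], argv)       # path resolves relative to the jail
        _, status = os.waitpid(pid, 0)
        return os.waitstatus_to_exitcode(status)

    run_jailed('/var/jail/myapp', ['/bin/myapp'])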
The problem mentioned by the GP, with the likes of Java and
"real" VMs basically avoid the entire issue by letting even a highly malicious program do whatever it wants to a fake machine. They can have full unlimited access, but any damage ends when you halt the VM. Repair of worst-case destruction requires nothing more than overwriting your machine image file with a clean version (you could argue the same for a real machine, but "copy clean.vm current.vm" takes a hell of a lot less time than installing Win2k3, MSSQL, IIS, Exchange, and whatever else you might have running on a random server, from scratch.
Or, to take your argument one layer lower, I would tend to consider XP the untrusted app, and VMWare the OS.
Re:Whatever happened to "Sandboxing?" (Score:3, Insightful)
Virtualization (or containerization... how awful!) generally allows this. Want to play with your hard drive driver? No problem.
Of course, it fails when you actually