IBM Saves $250M Running Linux On Mainframes
coondoggie writes "Today IBM will announce it is consolidating nearly 4,000 small computer servers in six locations onto about 30 refrigerator-sized mainframes running Linux, saving $250 million in the process. The 4,000 replaced servers will be recycled by IBM Global Asset Recovery Services. The six data centers currently take up over 8 million square feet, or the size of nearly 140 football fields."
Proof of Linux's Environmentalist Friendliness (Score:5, Funny)
OT: Feeding the Troll (Score:3, Interesting)
Well, this has been the first /. flooding I've ever witnessed...
It is rather interesting that you should flood like you do, then bemoan cultural intolerance... I participate in a forum where several users (or "morons", as I dub them) demand and exercise their "right to flood", claiming that cleaning up their flood is denying them their right to free speech.
I just don't understand how you find the time to do things like that...
Re: (Score:2)
Of course they're saving money (Score:5, Funny)
(it's a joke)
Must be SCO jacked up the rates... (Score:4, Funny)
Re: (Score:2, Offtopic)
There is no "Maybe" about it. SCO does not own any of the System V code - they own Unixware and Opensewer, just like SGI owns Irix. None of the copyrights on the SysV code transferred to The Santa Cruz Operation, let alone SCO/Caldera. And SCO has not provided any documentation that shows what (code) copyrights were transferred with the specificity required…
Ric Romero says "virtualization saves space" (Score:2, Insightful)
People are consolidating lightly (and heavily!) used servers into VMs all over the place.
Re: (Score:2)
IBM's been doing this for-ever, dude. (Score:5, Interesting)
It was pension and payroll software and it was legally blessed.
It was such a frigging song and dance trying to get anything done that it was cheaper and faster for the company to emulate their butts off rather than trying to go through the management and the unions and the employees.
But I did learn about optimizing instruction fetches by scattering the compiled code around the circumference of a magnetic drum so that the drum would have rotated around beneath the read head in time for the next instruction.
Try and tell that to the young people of today, and they won't believe you, eh Obadiah?
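For the curious, here is a minimal sketch of that drum-placement trick, in Python with hypothetical timings (real drum machines like the IBM 650 used SOAP for exactly this kind of "optimum programming"):

    # Sketch: place each instruction so it rotates under the read head
    # just as the previous instruction finishes executing. The track
    # size and execution times here are made up for illustration.
    def place_instructions(exec_times, track_size=50):
        placements = []
        head_pos = 0  # drum position when the first fetch begins
        for t in exec_times:
            placements.append(head_pos)
            # One word-time to read the instruction; the drum keeps
            # rotating while it executes, so the ideal slot for the
            # next instruction is wherever the head is at completion.
            head_pos = (head_pos + 1 + t) % track_size
        return placements

    # Three instructions taking 3, 7, and 2 word-times to execute:
    print(place_instructions([3, 7, 2]))  # -> [0, 4, 12]

Scatter the code like that and the drum never makes a wasted revolution; pack it sequentially and you wait almost a full rotation per instruction.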
Re: (Score:3, Insightful)
Just curious because I recall reading that even the latest zSeries systems can natively run code dating all the way back to the original System/360 models.
-Z
Re:IBM's been doing this for-ever, dude. (Score:5, Informative)
Re:IBM's been doing this for-ever, dude. (Score:5, Funny)
Re: (Score:2)
But I did learn about optimizing instruction fetches by scattering the compiled code around the circumference of a magnetic drum so that the drum would have rotated around beneath the read head in time for the next instruction.
Given the description of a 360 emulating a 1401, I find this comment a bit difficult to follow. IIRC, none of the 370, 360, or 1401 was a drum-based computer, and the code run in each would not be optimized by consideration of location in memory.
I do remember stories back in the old days, of 360 emulating 1401 emulating 650 (an even older machine). The 650 was a drum machine, and relied greatly on SOAP (Symbolic Optimizing Assembly Program) to develop assembly language programs which were then allocated…
Re: (Score:2, Interesting)
Re: (Score:3, Interesting)
Re: (Score:2)
I bet the OS was virtual too, running inside the mainframe.
I bet, as an ex-IT manager, he could answer my questions, but I don't think they are allowed to answer, and the guy was really busy moving.
A pleasure to work with, as well.. (Score:5, Informative)
Re: (Score:2, Interesting)
Re: (Score:3, Informative)
OS/2 was declared dead 10,000 times (even by fans) while it was actually getting new graphics drivers purchased from SciTech Software.
If you are a PowerPC (G4/G5) user in desperate need of a working, non-beta Java 6, you simply install PPC Linux and then install IBM's supported, non-beta Java 6 with CPU acceleration. That is the system and OS for which Nvidia/ATI refuses to ship…
Your sig is broken. (Score:2)
2000 sq feet per small computer? (Score:5, Interesting)
Re:2000 sq feet per small computer? (Score:5, Funny)
(I'm so sorry)
Re: (Score:2)
Re: (Score:2)
Yeah, but tubes sound way better.
Six 1.2 million square foot data centers. (Score:2)
The article says that the data centers required for the 4000 "small computer servers" aggregate to about 8 million square feet. It takes IBM 2000 square feet to house a small computer?
It's less amazing when you think of it as six 300x400 yard warehouses run by clients. Those are big buildings, but "data centers" usually are large. There aren't enough details in the article to figure it all out, but four thousand computers down to 30 boxes is an impressive feat that will save lots of electricity.
Re: (Score:2)
Consider this... (Score:3, Interesting)
Yes, 4000 "small computer servers" times 2000 square feet equals 8 million square feet. But this is unlikely to be the arrangement. Consider instead a few buildings of data centers, each with 1 or more relatively small rooms. Within a room, there may be a few racks, all surrounded by walking space, and other peripherals like AC units. Then outside of those rooms, more walking space for hallways. When you factor in all the human space and simple space for ventilation, and then cubicles and monitoring for support…
Re: (Score:2)
Re: (Score:2)
My employer recently 'consolidated' too. (Score:5, Interesting)
Now we have ten VM servers running all the migrated services, PLUS a room with about fifty aging Dell PowerEdge servers, each running independently, requiring massive support, cooling, and electricity.
I never thought 'consolidation' would require so much more space, electricity, air conditioning, and upgrades to core switches and UPS units.
Re:My employer recently 'consolidated' too. (Score:5, Funny)
(Strange thing is, I make a good living replacing aging mainframes by linux clusters. mainframes are fine when you're doing transaction processing. But for cpu-bound stuff, you're better off with a room full of opterons).
Re: (Score:2)
Re:My employer recently 'consolidated' too. (Score:4, Insightful)
I wonder if IBM factored in the number of oddball projects that require Windows systems in their server count? Windows won't run in a zSeries VM, and there is plenty of software out there that is still Windows-only.
Re: (Score:2)
Not on servers. IBM uses Lotus Notes, so no need for Exchange servers. They probably use DB2 for any SQL systems, so no MSSQL servers.
Frankly I can't think of many reasons that IBM would have to keep many Windows servers around except for testing IBM software running on a Windows server.
Re:My employer recently 'consolidated' too. (Score:4, Funny)
Please don't say that word. I get chills and shivers.
It's been already three years since I had to use it, but even seeing the name of Lotus Notes (AAH, MY FINGERS!) makes me curl up and sob on the floor.
What a great product!
Re: (Score:2)
System z Mainframes (Score:2, Interesting)
Re:System z Mainframes (Score:5, Informative)
Part of that is because IBM will customize the machines to your heart's content. The sky and your budget are the only limits. They leave a good many of the loadout details up to you (xGB/TB of RAM, DASD storage size, # of CPUs per card, # of CPU cards, even the number of mainframes - they can be chained in parallel). You should look at the Z series hardware specs [ibm.com] for the general details and look up what details you don't know.
If you're looking for benchmarks or comparisons to x86/x86-64 or other commodity architectures, good luck - they are nearly impossible to find. This is due to the implementations being on entirely different scales. The best comparison you can find is MIPS per CPU. You can find some slightly stale numbers here [isham-research.co.uk] (BTW: an LPAR [wikipedia.org] is something that's been around on mainframes for several decades - one LPAR can run up to several hundred x86 VMs concurrently).
Re: (Score:2, Interesting)
zLinux is great if you're consolidating mostly idle, low priority resources. The "magic" that allows you to save money while simultaneously getting raped by IBM Global Services and paying too much for hardware is thin provisioning. You might assign 10 Linux VMs 1GB RAM each, and only have 4GB of actual memory available. Same thing with CPU. This is an efficient use of resources, if your applications don't all require memory at the same time.
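The arithmetic of thin provisioning is simple enough to sketch in Python (the sizes below are the parent's hypothetical example, not real capacity planning):

    # Ten guests are promised 1 GB each against 4 GB of real memory.
    guests, promised_gb, real_gb = 10, 1.0, 4.0

    overcommit = guests * promised_gb / real_gb
    print(f"overcommit ratio: {overcommit:.1f}x")  # 2.5x

    # The bet only pays off while the *simultaneous* working sets stay
    # under the real total; if every guest demands its promised memory
    # at once, the hypervisor has to page and performance collapses.
    peak_demand_gb = 6.0  # hypothetical spike
    print("paging!" if peak_demand_gb > real_gb else "fine")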
No, You Don't Need Different LPARs for RHEL 4 & (Score:4, Informative)
There are a lot of errors in your comments, unfortunately. Of course you can run Red Hat and SuSE concurrently in a single LPAR under z/VM, and multiple versions thereof. This has always been true, ever since Linux began running on mainframes many years ago. You might want to have more than one LPAR to run more than one version of (first level) z/VM, but you don't need many. Two or three for z/VM and Linux is typical and just fine. And it's not as if LPARs are in short supply on mainframes: up to 60 are available on a single machine (30 on the smaller model), so "spending" 1 to 3 is no big deal.
Re: Investing in new mainframes, come on, get real. It's so easy to find market data because companies like Gartner and IDC publish it, and IBM just announced its 8th straight quarter of mainframe hardware growth, something that hasn't happened since before Y2K. It's impossible to do that with "a few showboat customers."
And no, you simply cannot approach the level of virtualization these machines offer on any other system, at least for typical business computing, and still offer reliable service to users. In fact, in IBM's case many of the software licenses are presumably "free," and they still found big cost savings by taking 4,000 machines down to 30. For the rest of the world the mathematics in such situations are even more compelling.
Re: (Score:3, Informative)
When I started out, the "hot" PC - the best you could get - was a 4MHz Z80 running CP/M. I had one of those at home, and at work I worked on the operating system of a very old (even then) CDC mainframe, a CDC 6600. We had a Z80 emulator that ran on the 6600, and we could emulate a Z80 at about 20 times real time. Not bad - a virtual PC running on a mainframe in the late 1970s.
Us software people really need to get on the ball and think of something…
Re:System z Mainframe Specs (Score:2, Informative)
One is that the things you find on IBM's website are designed for CEOs and CIOs who don't really care about technical details -- only "solutions"
The second is that the specs themselves aren't well-defined. As an earlier poster pointed out, you don't buy one of these things off the shelf. You tell IBM what you want to do with it, and you work with them to construct not just a mainframe, but all of the storage and other add-ons that come with it…
Awesome consolidation (Score:2)
An obvious conclusion (Score:2, Interesting)
I think Slashdotters would be more interested in stories that focus on a company switching its desktops to Linux, though. Servers running Linux are pretty common. We want news from the desktop front; it would be more newsworthy, at least.
Re: (Score:2)
The point of this story isn't to point out that Linux servers are cheaper/better/faster, it's to point out that the Linux platform got some publicity. For those trying to get everyone switching to Linux for their desktops, publicity is their one major problem (as well as many smaller problems, but that's an argument for another story).
Remember when Microsoft said that Linux infringes on their patents but they weren't going to sue? They were never…
$250M?? (Score:4, Insightful)
Re:$250M?? (Score:5, Funny)
Re:$250M?? (Score:5, Informative)
Combine IT salary for 3-5 years, power over 3-5 years, etc. etc. and that number makes sense.
Re: (Score:2)
Re: (Score:2)
They're probably computing cost over the expected lifetime.
Combine IT salary for 3-5 years, power over 3-5 years, etc. etc. and that number makes sense.
This is the crux of IBM's virtualizations ads; you can get rid of admins and save big bucks. Also electricity and floor space, but you have to throw salaries in there to come up with the savings they suggest.
Why does it take less IT salary, as suggested above, for 4000 virtual Linux servers on a mainframe than with 4,000 physical ones…
Re:$250M?? (Score:4, Informative)
2003 Server, Dude! (Score:2)
Let's see... $250M / 4000 = $62,500 per server being consolidated? I mean, I know floor space, buildings, racks, power, AC etc. cost money... but that's still a *lot*. Anyone care to chime in on how close to normal that is?
Running Server 2003, each needs an "admin", plus licensing fees for software - you know it gets expensive.
Really, I think the cost was in 130 football fields of mostly unused server warehouse.
Re: (Score:2)
1920 square feet per server? WTF? (Score:2)
Google tells me that a football field is 360'x160' (counting the end zones), or 57,600 square feet.
If every server had 14 square feet you could put all 4,000 of them on one football field.
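The numbers do hang together if you run them (Python, using the with-end-zones field size):

    field_sqft = 360 * 160        # 57,600 sq ft per football field
    total_sqft = 8_000_000        # from the article summary
    servers = 4_000

    print(total_sqft / field_sqft)   # ~138.9, i.e. "nearly 140 fields"
    print(total_sqft / servers)      # 2,000 sq ft per server
    print(field_sqft / servers)      # 14.4 sq ft each fills one field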
what does this have to do with Linux? (Score:5, Insightful)
The story here is about consolidation, virtualization, etc.
Linux is a small part of the technology involved here. z/OS is the real story here.
Re: (Score:2)
I've always been fuzzy on where z/OS ends and z/VM begins, so you're probably right, but the point remains the same -- this isn't about Linux, it's about something that starts with a "z".
Well Duh. (Score:5, Insightful)
The reason why companies are in this pickle is because they thought more was better. They thought, "All we need to do is buy 4000 x86 servers and we can do tons of work." They didn't realize how HARD it is to get 4000 servers to operate in a cluster so you can take advantage of those individual systems as one body. So, they ended up with islands of computing power instead of a cluster. Naturally the mainframe consolidates these islands back into computing continents, and you end up running the mainframes at near capacity all the time.
Modern mainframes make this easy with dynamic CPU/RAM allocation, as well as dynamic storage. So you segment the mainframe into four or eight chunks. Chunk 1 is hot, chunks 3 and 5 are idle. Simply re-assign some of the CPUs from chunks 3 and 5 to 1 until the load goes down. You can take advantage of this in a big way if you segment your workload to match global demand. So chunk 1 might be data for the western USA, and chunk 7 might be EMEA. You can bounce resources between those segments much more easily. You can even script it (see the sketch below). HP has an offering that does this automagically; I'm sure IBM has something similar.
Now, my personal question is: why Linux? Some of the more advanced features like dynamic RAM, CPU, and IO allocation don't appear to be that solid to me. Perhaps IBM added these features to Linux or made them more robust? Maybe they run Linux inside an AIX virtualization container?
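Here's roughly what such a script's policy might look like - a toy sketch in Python with made-up chunk names and thresholds; real tooling would go through z/VM or the vendor's management interface:

    # Toy policy: take one CPU from each idle chunk, give it to a hot one.
    def rebalance(chunks, hot=0.8, idle=0.2):
        hot_chunks = [n for n, c in chunks.items() if c["load"] > hot]
        idle_chunks = [n for n, c in chunks.items()
                       if c["load"] < idle and c["cpus"] > 1]
        moves = []
        for h in hot_chunks:
            for i in idle_chunks:
                if chunks[i]["cpus"] > 1:     # always leave one CPU behind
                    chunks[i]["cpus"] -= 1
                    chunks[h]["cpus"] += 1
                    moves.append((i, h))
        return moves

    chunks = {
        "west_usa": {"cpus": 8, "load": 0.95},  # chunk 1: hot
        "emea":     {"cpus": 8, "load": 0.10},  # chunk 7: idle overnight
    }
    print(rebalance(chunks))  # -> [('emea', 'west_usa')]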
single points of failure (Score:2)
Re: (Score:2)
Re:single points of failure (Score:5, Informative)
These are machines that don't break, period. We're talking the types of machines that run the major banking systems of the world and the like. They simply do not go down. In this situation, if one of the 133 apps buggers up, it's only that VM that's shot. You just nuke it and restart it, the rest of the machine just keeps ticking along.
Re: (Score:3, Interesting)
It's those kinds of things that make disaster recovery necessary. If the apps were distributed across discrete servers, it's possible that not all of them would have been destroyed. Remember the end of…
Re:single points of failure (Score:4, Insightful)
Re: (Score:2)
Re: (Score:3, Informative)
Re: (Score:3, Informative)
From what we were told, IBM z/OS mainframes are the *most* reliable platform to host software services (but of course, they'd say that).
The following is from memory, as best as I can remember it, and may not be 100% accurate:
The 'z' in 'z/OS' stands for 'zero downtime'. System z mainframes are engineered for 99.999% availability, or less than 3 minutes of downtime…
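For reference, the standard arithmetic behind "five nines" (the parent's figures are from memory; this is just the math):

    # Downtime budget per year for a given availability level.
    minutes_per_year = 365.25 * 24 * 60
    for nines, avail in [(3, 0.999), (4, 0.9999), (5, 0.99999)]:
        downtime = minutes_per_year * (1 - avail)
        print(f"{nines} nines: {downtime:8.2f} min/year")
    # Five nines works out to about 5.26 minutes of downtime per year.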
A Little More on Availability (Score:2)
Yes, System z mainframes are engineered for 99.999+% availability. But it's important to define availability here very precisely. IBM defines this as business service: what the user gets. Therefore, planned downtime is just as bad as unplanned downtime. A lot of IT people get confused by this point, but it's very important. Try "Excuse me while I shut down credit card approvals for a couple of hours to upgrade the database" and you'll be escorted out of the building promptly.
Now, there's nothing in the original story…
Re:single points of failure (Score:4, Informative)
The Linux on these machines is running under z/VM, in multiple virtual machines. When one of them has a software fault, you reboot that one's VM and keep going; the other 132 Linux-running VMs run without noticing anything happened. (It is possible for z/VM to fault, sure. But it's an OS with 40 years of refinement in the "100% uptime" mainframe culture, and its task is just managing the virtual machines.) When something goes wrong with the hardware, the fault tolerance and self-healing features keep things running, and you fix the faulty element with a hot-swap. A properly set-up datacenter is going to minimize external risks, with backup power and such. Proper choice of datacenter location will minimize natural disaster risk.
So, yeah, the big risk is human failure, and these IBM-built, IBM-owned datacenters are presumably going to have extensively trained IBM-employed mainframe personnel, which minimizes that risk.
Now, if some cable company cuts the fiber optic lines . . .
A Small Request (Score:3, Funny)
Comment removed (Score:5, Interesting)
Re: (Score:2, Informative)
Re: (Score:2)
Is it really linux on the mainframe that was the problem, or rather virtualization and IBM pricing? In other words, if you ran one instance of linux hosting 1,000 server processes, might it not work well?
If so, linux might have advantages over a special-purpose mainframe OS. Presumably there are more apps for linux and mainframe…
Re: (Score:2)
Isn't this the entire crux? Extreme reliability costs a lot of money and resources. Yes, a single node is slower than a regular server, but you can also run many nodes in parallel, so you are only limited by software design.
To implement remotely similar reliability with regular hardware you are going to need a redundant SAN, redundant switching, redundant NICs, redundant CPUs, redundant memory, etc., and a lot of cabling. Running something like VMware ESX will allow you to bump VMs in realtime between hosts…
Amplification re: CPU Sparing (Score:5, Interesting)
Actually, on a System z9 EC (Enterprise Class), a single CPU chip failure is not a "Call Home" repair event. Only the second CPU chip failure would result in an automatic call, while your business keeps running of course. (There are a minimum of two spares in each machine.) The average time to first failure for a particular machine is somewhere in the many decades range.
OK, just for fun (because it never actually happens in the real world), what happens with a triple failure? If you happen to have a "fully configured" mainframe -- all processors turned on -- then.... your business still keeps running. Yes, the system might lose some processing capacity, but it keeps running. The higher priority stuff (from a business view) takes precedence automatically, and life goes on. This is all on a single machine still.
If you've got an S18, S28, S38, or S54 model, then, at your business's convenience, the faulty hardware can be replaced. (You might do this at night, for example.) The repair technician tells the mainframe to "evacuate" memory on a portion of the machine while the OS and applications keep chugging along, possibly with reduced capacity, often not. (Depends on what configuration you choose.) When the evacuation is complete, the technician can pull a processor/memory group (called a "book"), insert the new one, bring the new one online, and... everything still keeps running. Again, this is all on a single machine -- no clusters required for any of this.
It's just an endless cycle (Score:5, Interesting)
But I bet that a small farm of modern medium-sized servers running Linux on VMware would be even less expensive. Or Solaris/Niagara. Why would you want to run an open source operating system, whose major benefits are openness and affordability, on what is literally the most expensive and most proprietary computing platform in the world?
These server consolidation projects are just giant boondoggles spawned because the server sprawl finally got insane. It's an endless cycle:
A. Giant server consolidation project that takes 4000 servers down to 30 servers.
B. Department B complains that Department A's application keeps hanging and consuming all of the CPU. They demand their own hardware "for availability reasons".
C. Vendor C demands dedicated hardware for licensing/capacity planning/supportability reasons. Rather than constantly bicker with the vendor over supportability they get dedicated hardware.
D. Department D complains that the IT department is charging outrageous prices for time sharing on the mainframe. After all a dedicated server only costs $XXX.
E. Suddenly there are 4000 servers again.
F. IT department spends some insane amount of money on infrastructure to manage the 4000 servers.
G. IT department budget gets insanely large trying to manage that much stuff.
H. Some CIO gets the idea that all of this money managing servers is ridiculous and we should do a server consolidation project.
I. IT department spends an even larger amount of money on the latest super-high-availability gear and consulting services so that they can run 4000 commodity servers inside a few big servers. All because it will "cost less to maintain".
J. Go back to A.
4000 servers doing nothing (Score:2)
In my experience, consolidation using virtualization only works if the servers in question don't have anything to do and only run a zoo of defunct web sites, for example.
Failure: A cautionary tale (Score:4, Interesting)
After,
But wait...
When you do have one machine fail, it takes down 133 virtual servers at the same time. You've raised your risk enormously.
IBM will tell you all about fail-over just like they did our executives.
Half the country down for three days is the reality.
---
Still it is interesting to see a return to the centralized mainframe farm. Sure hope those multiply redundant communication lines don't go down.
Big Iron. Right concept, wrong platform. (Score:5, Insightful)
For example, for email we run Lotus Notes on a couple of BIG pSeries (AIX) servers. We could have run it on a farm (technical term) of windows boxes.
For webservers - which you could run on AIX, or Linux on zSeries - we have multiple (read: many) x86 servers running Linux+Apache. Why? They connect to a backend app server (pSeries) which connects to a backend zSeries DB2 database. (I'd prefer Oracle; however, running Oracle on zSeries requires it to be run in a Linux VM.)
We definitely subscribe to the school of using VMs whether they are zSeries, pSeries, or VMWare on x86. Even if the x86 server is running ONE application, we still put vmware underneath, as it allows for us to move the image to a newer hardware platform when it's time to upgrade. Even some of the larger x86 servers run vmware but in each partition there is a single instance of apache. Makes for managing storage that much easier (fewer zones, cabling etc).
Would I consider moving our apache on linux on x86 to apache on linux on zSeries? Not really. It's a waste of CPU cycles (MIPS). I'd rather use zSeries MIPS for something a bit more critical like keeping my database up and running than serving out webpages (static or dynamic).
IT isn't about religion; it's about finding the best tool considering your requirements. I have no problem telling IBM that product XYZ is trash. While my servers are IBM, you won't see IBM disk or IBM tape, and at least once a quarter some salesman from IBM's storage group is at my door. He buys me lunch, and every quarter he is sent packing. You won't see IBM BladeCenters either, as the thought of hundreds of additional servers to manage isn't appealing (but I'll gladly take 100s of VMs across larger x86/pSeries boxes).
I know many of you were expecting to hear me say 5000 linux servers, but there are options for my requirements that did not lead to big "google style" linux farms.
BTW: I have no problems kicking out IBM on x86 if HP/Dell/Sun have a better product, and knowing this and letting IBM know this gives me a great advantage over them, as they very well know I'm capable of bringing in something more suitable. (I *used* to have IBM storage).
Re: (Score:2)
Oracle Correction (Score:2)
Oracle Database is available for both Linux on z and z/OS....
But it'd be extremely unlikely you'd want to switch from DB2 for z/OS. Even Larry Ellison concedes that.
What was before? (Score:2)
What operating system were they previously using?
Re: (Score:2)
"Football fields"? (Score:3, Interesting)
I suppose when the US finally goes metric, they'll have to deal with units of area such as "millifields", "centifields" and "kilofields". In time, the measure will have to be formalised, e.g. "the distance a 100kg, 190cm man is able to kick a leather-encased rubber bladder...".
Or maybe the current generation of writers that thinks "140 football fields" is a meaningful substitute for "a really big chunk of space" will have died off by then.
Re: (Score:2)
Ok, maybe I can see that some folks would have a problem understanding volume (because our poor educational system means that we are barely able to manage two dimensions), but why must the…
I was that soldier (Score:4, Informative)
They also saved 20% on their car insurance... (Score:2, Funny)
Metric? (Score:3, Funny)
In metric, that would be around 104 soccer fields.
Re:No (Score:4, Informative)
Re: (Score:3, Funny)
I should hope so!
8,000,000 ft^2 / 4,000 servers = 2,000 ft^2 per server
My god, those are large servers! They must still be using vacuum tubes...or maybe a 65cm manufacturing process.
Re: (Score:2, Funny)
Re: (Score:2)
Right now they are unemployed, and this is why we need things like socialized unemployment help…
Re: (Score:2, Funny)
Re:Imagine....Need To Update... (Score:2)
Don't you mean a virtual Beowulf cluster?
You've got to get up to date. After all, this is 2007 -- the year of Linux! (or something)
Re: (Score:2, Insightful)
Re: (Score:3, Insightful)
Re: (Score:2)
Re: (Score:3, Interesting)
IBM is a hardware company - always has been - and they've been into open source software since before GNU existed.
http://en.wikipedia.org/wiki/SHARE_(computing) [wikipedia.org]
Sure, they are an evil corporation with too much money on retainer, but they realized long ago that software has an intrinsic value that crashes once the software is written.
For instance, the labor theory of value - the most influential of the intrinsic theories - holds that the value of an item comes from the amount of labor spent producing said item.
http://en.wikipedia.org/wiki/Intrinsic_theory_of_value [wikipedia.org]
Basically, once software is written, its value rapidly approaches zero, because the ability to replicate that work is…
OS/2 (Score:4, Insightful)
I think you're hesitant to accept IBM because of the whole 70's/80's "Big Blue" stuff, but after Microsoft pulled the rug out from under their feet, the company's strength was permanently compromised. The consumer market rejected them (hence the sale of the PC division to Lenovo), and until they committed to Linux, software was a major vulnerability for them. The openness of Linux enabled them to get back in the game - their customers didn't have to worry about the future of the platform, while their immense contributions to Linux enabled the OS to really threaten Microsoft. So yeah - as a Slashdotter, IBM are the good guys. They support Linux and they don't aggressively protect their many, many patents (they use their patents to protect themselves rather than trying to sue everyone they can for $$$). Personally, I think IBM is the most important tech company in the world.
Re: (Score:2)
IBM pushes Linux because as a hardware vendor / service provider they make the most possible money in a world with a commodity standard OS with no licensing cost. They put PR resources towards hyping Linux so that when they say "and this awesome system we've built you runs Linux" the PHBs say "awesome, that's the gold standard of operating systems" rather than "why are you using that hobbyist crap? You're IBM".
Re: (Score:2)
Re: (Score:2)