IBM Saves $250M Running Linux On Mainframes

coondoggie writes "Today IBM will announce it is consolidating nearly 4,000 small computer servers in six locations onto about 30 refrigerator-sized mainframes running Linux, saving $250 million in the process. The 4,000 replaced servers will be recycled by IBM Global Asset Recovery Services. The six data centers currently take up over 8 million square feet, or the size of nearly 140 football fields."
  • by Anonymous Coward on Tuesday July 31, 2007 @09:02PM (#20065329)
    The article says that the data centers required for the 4000 "small computer servers" aggregate to about 8 million square feet. It takes IBM 2000 square feet to house a small computer? Also, saving $250 million suggests that it costs them something over $60K per "small computer" even ignoring the price of the new mainframes. Amazing.
  • by MarcQuadra ( 129430 ) on Tuesday July 31, 2007 @09:02PM (#20065331)
    My employer recently 'consolidated' their server farm too. We used to have a room with fifty aging Dell PowerEdge servers, each running independently, requiring massive support, cooling, and electricity.

    Now we have ten VM servers running all the migrated services, PLUS a room with about fifty aging Dell PowerEdge servers, each running independently, requiring massive support, cooling, and electricity.

    I never thought 'consolidation' would require so much more space, electricity, air conditioning, and upgrades to core switches and UPS units.
  • System z Mainframes (Score:2, Interesting)

    by o2sd ( 1002888 ) <iankt68NO@SPAMgmail.com> on Tuesday July 31, 2007 @09:09PM (#20065375) Homepage Journal
    It's kinda hard to find technical specifications on these mainframes beyond marketing fluff. After some looking I found this brochure [ibm.com], which has some interesting information on the firmware and a few details of the I/O, but not much about the processing units, or why one of these would be able to replace 133 blade servers. It does mention up to 30 superscalar processors per box, but I'm not really sure what that means. (Maybe they go next to the inverting flux capacitor.)
  • by GFree ( 853379 ) on Tuesday July 31, 2007 @09:29PM (#20065533)
    Saving a lot of dough by using Linux on servers makes sense; heck, it's fairly obvious to anyone here that that's where it excels.

    I think Slashdotters would be more interested in stories that focus on a company switching its desktops to Linux though. Servers running Linux are pretty common. We want news about the desktop front; it would be more newsworthy at least.
  • by crovira ( 10242 ) on Tuesday July 31, 2007 @09:53PM (#20065737) Homepage
    I had to maintain some software that was running on an aging 370 mainframe. The 370 was emulating a 360, which was emulating a 1401.

    It was pension and payroll software and it was legally blessed.

    It was such a frigging song and dance trying to get anything done that it was cheaper and faster for the company to emulate their butts off rather than trying to go through the management and the unions and the employees.

    But I did learn about optimizing instruction fetches by scattering the compiled code around the circumference of a magnetic drum, so that the drum would have rotated just far enough to bring the next instruction under the read head by the time it was needed (a rough sketch follows at the end of this comment).

    Try and tell that to the young people of today, and they won't believe you, eh Obadiah?
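
    For the curious, here is a minimal Python sketch of that drum-layout trick. The drum geometry and instruction timings below are invented placeholders for illustration, not real 1401-era figures:

    # Toy sketch of "optimal" drum layout: place each instruction in the slot
    # that will be under the read head just as the previous one finishes.
    WORDS_PER_TRACK = 50      # storage slots around the drum (assumed)
    WORD_TIME_US = 96         # microseconds for one slot to pass the head (assumed)

    def next_slot(current_slot: int, exec_time_us: int) -> int:
        slots_passed = -(-exec_time_us // WORD_TIME_US)   # ceiling division
        return (current_slot + slots_passed) % WORDS_PER_TRACK

    # Lay out a short program: (mnemonic, execution time in microseconds).
    # Collisions (two instructions wanting the same slot) are ignored here.
    program = [("LOAD", 200), ("ADD", 150), ("STORE", 200), ("BRANCH", 100)]
    slot, layout = 0, {}
    for mnemonic, exec_us in program:
        layout[slot] = mnemonic
        slot = next_slot(slot, exec_us)
    print(layout)   # {0: 'LOAD', 3: 'ADD', 5: 'STORE', 8: 'BRANCH'}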
  • by jimmydevice ( 699057 ) on Tuesday July 31, 2007 @09:54PM (#20065745)
    I have seen applications that are well written, well understood, well maintained, and 30 years old. I suffered through the rewrite of a well known commercial revision control application we used to maintain our code. Originally in C, it was rewritten in Java with horrible results: our checkout times went from a few minutes to 10 minutes for a full checkout, and all our custom tools no longer worked, but the interface looked fabulous. Frankly, I'll take a text console based application over some bloated "modern age" crapware any day.
  • by cyphercell ( 843398 ) on Tuesday July 31, 2007 @10:21PM (#20065937) Homepage Journal
    For me it depends on the database behind that 30-year-old software. I've seen extremely flat databases with nothing but a text console wrapped around them: extremely poor integrity standards, and the data falls apart like a stack of Jenga blocks (the more people playing, the quicker it falls). Computers back then were (often) used to supplement paper systems, and sometimes it really shows.
  • by ogren ( 32428 ) on Tuesday July 31, 2007 @10:37PM (#20066057) Homepage
    I still haven't seen any conclusive evidence that Linux on mainframe is a good idea. I'm sure running 30 new mainframes is going to cost less than 4000 aging servers. Just about anything would be less expensive than 4000 aging servers.

    But I bet that a small farm of modern medium-sized servers running Linux on VMware would be even less expensive. Or Solaris/Niagara. Why would you want to run an open source operating system, whose major benefits are openness and affordability, on what is literally the most expensive and most proprietary computing platform in the world?

    These server consolidation projects are just giant boondoggles spawned because the server sprawl finally got insane. It's an endless cycle:

    A. Giant server consolidation project that takes 4000 servers down to 30 servers.
    B. Department B complains that Department A's application keeps hanging and consuming all of the CPU. They demand their own hardware "for availability reasons".
    C. Vendor C demands dedicated hardware for licensing/capacity planning/supportability reasons. Rather than constantly bicker with the vendor over supportability they get dedicated hardware.
    D. Department D complains that the IT department is charging outrageous prices for time sharing on the mainframe. After all a dedicated server only costs $XXX.
    E. Suddenly there are 4000 servers again.
    F. IT department spends some insane amount of money on infrastructure to manage the 4000 servers.
    G. IT department budget gets insanely large trying to manage that much stuff.
    H. Some CIO gets the idea that all of this money managing servers is ridiculous and we should do a server consolidation project.
    I. IT department spends an even larger amount of money on the latest super high availability gear and consulting services so that they can run 4000 commodity servers inside a few big servers. All because it will "cost less to maintain".
    J. Go back to A.

  • by duffbeer703 ( 177751 ) * on Tuesday July 31, 2007 @10:40PM (#20066087)
    The other part of it is that the zLinux miracle is mostly bullshit.

    zLinux is great if you're consolidating mostly idle, low-priority resources. The "magic" that allows you to save money, while simultaneously getting fleeced by IBM Global Services and paying too much for hardware, is thin provisioning. You might assign 10 Linux VMs 1GB RAM each and only have 4GB of actual memory available; same thing with CPU (a toy example at the end of this comment). This is an efficient use of resources if your applications don't all require memory at the same time... but if you're like me, your employer has lots of memory-hogging J2EE stuff. On the other hand, crypto and networking between VMs is blazing fast.

    Another problem is that in a big business that has mainframes, the mainframe folks are very conservative, use much stricter change management and other controls than most open-systems shops, and don't understand the workloads that Unix/Linux systems get. They get prickly when your Linux systems start asking for lots of resources. Everything takes about 3x longer.

    The other issue is that the VMs are dependent on the Linux installation on the LPAR, and you may not have many LPARs available. If you want to run Red Hat & Suse, or RHEL5 and RHEL4, you need an LPAR for each. Nobody (except for a few showboat customers) is investing in new mainframes, and you may only have a few LPARs free on an existing machine.

    So if you have a business model like providing lots of cheap (and mostly idle) virtual servers, and you already have a major mainframe investment, zLinux is a great solution. Otherwise, you're probably better off looking at the hardware virtualization options you can get from Sun, or even on whatever IBM calls RS/6000s these days, for 1/5 of the cost.

    Just a note: I'm not a mainframe guru, and my views are slanted based on my experience in working at a particular employer about a year and a half ago. So some of the issues may have changed, or the options available to me may have been limited due to some site-specific restriction that I am not aware of.
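
    To make the thin-provisioning point concrete, here is a toy Python sketch. The guest counts and sizes are just the example figures from above; nothing here reflects actual z/VM behavior:

    # Toy model of memory overcommit: 10 guests each promised 1 GB, backed by
    # only 4 GB of real memory. Fine while guests are mostly idle; painful when
    # they all want their full allocation at once.
    import random

    REAL_MEMORY_GB = 4
    GUESTS = 10
    PROMISED_GB = 1

    def total_demand(idle_fraction: float) -> float:
        """Memory actually touched right now, given how many guests sit idle."""
        demand = 0.0
        for _ in range(GUESTS):
            if random.random() < idle_fraction:
                demand += 0.1 * PROMISED_GB                       # idle guest
            else:
                demand += random.uniform(0.6, 1.0) * PROMISED_GB  # busy J2EE-style guest
        return demand

    for idle_fraction in (0.9, 0.5, 0.1):
        over = sum(total_demand(idle_fraction) > REAL_MEMORY_GB for _ in range(1000))
        print(f"{idle_fraction:.0%} idle -> over real memory in {over / 10:.1f}% of samples")

    With 90% of the guests idle, the box almost never runs out of real memory; with mostly busy, memory-hogging workloads it blows past its physical limit most of the time, which is the point above.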
  • by freeze128 ( 544774 ) on Tuesday July 31, 2007 @11:08PM (#20066277)
    While I agree that IBM's mainframe systems are rock-solid (and, as a colleague is fond of saying, self-healing), accidents *DO* happen. I'm sure the mainframe is happily running its code just fine only seconds before a hurricane rips the roof off of the data center and hurls the machine into the next county....

    It's those kinds of things that make disaster recovery necessary. If the apps were distributed across discrete servers, it's possible that not all of them would have been destroyed. Remember the end of Twister? The barn was wasted, but the house was left intact.
  • by Maxo-Texas ( 864189 ) on Wednesday August 01, 2007 @12:12AM (#20066769)
    Before: a 0.33% daily failure rate across 4,000 servers means roughly 13 failures a day. You had well-understood procedures for dealing with them.

    After: 30 far more reliable machines might see something like one failure per thousand days. This is a recipe for hell, because nobody stays practiced at handling failures.

    But wait...
    When you do have one machine fail, it takes down 133 virtual servers at the same time. You raised your risk enormously (rough numbers sketched at the end of this comment).

    IBM will tell you all about fail-over just like they did our executives.

    Half the country down for three days is the reality.

    ---

    Still, it is interesting to see a return to the centralized mainframe farm. Sure hope those multiply-redundant communication lines don't go down.
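
    Plugging this thread's numbers into a quick back-of-the-envelope script (the 0.33% daily rate and the 133 guests per box come from the comments above; the much lower mainframe rate is an assumed placeholder, not vendor data):

    # Expected failures per day, and how many services each failure takes out.
    servers_before, daily_rate_before = 4000, 0.0033   # 0.33%/day per server
    mainframes_after = 30
    fleet_rate_after = 1 / 1000                        # ~1 fleet failure per 1000 days (assumed)
    guests_per_mainframe = 133

    failures_per_day_before = servers_before * daily_rate_before   # ~13.2/day, blast radius 1
    print(f"Before: {failures_per_day_before:.1f} failures/day, 1 service each")
    print(f"After: one failure every {1 / fleet_rate_after:.0f} days, "
          f"{guests_per_mainframe} guests down at once")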
  • by etnu ( 957152 ) on Wednesday August 01, 2007 @12:19AM (#20066799) Homepage
    I recently worked with IBM to interop with Sametime (their IM network), and my opinion of their engineering practices would probably get me fired for disparaging a partner.
  • Consider this... (Score:3, Interesting)

    by Krozy ( 755542 ) on Wednesday August 01, 2007 @12:53AM (#20066955)

    Yes, 4000 "small computer servers" times 2000 square feet equals 8 million square feet. But this is unlikely the arrangement. Consider instead a few buildings of data centers, each with 1 or more relatively small rooms. Within a room, there may be a few racks, all surrounded by walking space, and other perhipherals like AC units. Then outside of those rooms, more walking space for hallways. When you factor in all the human space and simple space for ventilation, and then cubicles and monitoring for support personnel it could average around 2000 square feet (40x50).

    The same logic can be applied to costs: $250 million / 4000 machines = $62.5K per machine (worked through below). Some of that is actual hardware and software licenses; some of it is ongoing support from the full-time employees on staff who maintain the things.
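
    For anyone checking the arithmetic, the figures in the summary and in this comment line up roughly like this (the 360 by 160 foot football-field size, end zones included, is my assumption):

    # Sanity-checking the numbers quoted above.
    total_sq_ft = 8_000_000
    servers = 4_000
    savings = 250_000_000
    football_field_sq_ft = 360 * 160              # 57,600 sq ft (assumed)

    print(total_sq_ft / servers)                  # 2000.0 sq ft per server, overhead included
    print(savings / servers)                      # 62500.0 dollars saved per server
    print(total_sq_ft / football_field_sq_ft)     # ~138.9, i.e. "nearly 140 football fields"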

  • "Football fields"? (Score:3, Interesting)

    by darnok ( 650458 ) on Wednesday August 01, 2007 @03:02AM (#20067587)
    "The six data centers currently take up over 8 million square feet, or the size of nearly 140 football fields."

    I suppose when the US finally goes metric, they'll have to deal with units of area such as "millifields", "centifields" and "kilofields". In time, the measure will have to be formalised, e.g. "the distance a 100kg, 190cm man is able to kick a leather-encased rubber bladder...".

    Or maybe the current generation of writers that thinks "140 football fields" is a meaningful substitute for "a really big chunk of space" will have died off by then.
  • by cyphercell ( 843398 ) on Wednesday August 01, 2007 @03:02AM (#20067591) Homepage Journal

    IBM is a hardware company, always has been; they've been into open source software since before GNU existed.

    http://en.wikipedia.org/wiki/SHARE_(computing) [wikipedia.org]

    Sure, they are an evil corporation with too much money on retainer, but they realized long ago that software has an intrinsic value that crashes once the software is written.

    For instance, the labor theory of value - the most influential of the intrinsic theories - holds that the value of an item comes from the amount of labor spent producing said item.

    http://en.wikipedia.org/wiki/Intrinsic_theory_of_value [wikipedia.org]

    Basically, once software is written, its value rapidly approaches zero, because the ability to replicate that work is well within the skill level of the neophyte. IBM conceded the value of software long before Bill Gates came around floating the idea that the value of software could be upheld by government interference. That essentially created a new fiat currency and put Microsoft in the business of printing money; they hired lawyers to back it up and became an extremely predatory business entity.

    • Microsoft entered the business of brokering software copyrights
    • IBM began brokering in software patents (primarily) and copyrights
    • RMS decided to rewrite some copyrights and lay the legal groundwork for open Intellectual Property
    • Sun Microsystems and Apple Computer are two companies that managed to survive IBM and Microsoft, respectively
    • Every company that hasn't been purchased or destroyed by IBM or Microsoft has started moving towards FOSS; Apple and Sun are stragglers.
    • Linus is an apolitical hacker, generally happy with GPLv2
    • Red Hat found that the intrinsic value of software rests with the people who know how to use it, i.e. support
    • IBM is looking at this situation and realizes that their business model was built on OSS; if they remove Microsoft, they have a good shot at Dell, HP, and Gateway, then finally Apple and Sun.

    While IBM may have quite a bit to lose going the free software route, they have a lot more to gain. Once they own all the copyrights/patents they can do whatever they want, and that (currently) includes GPLvX or greater.

  • by cp.tar ( 871488 ) <cp.tar.bz2@gmail.com> on Wednesday August 01, 2007 @03:27AM (#20067711) Journal

    Well, this has been the first /. flooding I've ever witnessed...

    It is rather interesting that you should flood like you do, then bemoan cultural intolerance... I participate in a forum where several users (or "morons", as I dub them) demand and exercise their "right to flood", claiming that cleaning up their flood is denying them their right to free speech.

    I just don't understand how you find the time to do things like that...

  • by BBCWatcher ( 900486 ) on Wednesday August 01, 2007 @05:21AM (#20068191)

    Actually, on a System z9 EC (Enterprise Class), a single CPU chip failure is not a "Call Home" repair event. Only the second CPU chip failure would result in an automatic call, while your business keeps running of course. (There are a minimum of two spares in each machine.) The average time to first failure for a particular machine is somewhere in the many decades range.

    OK, just for fun (because it never actually happens in the real world), what happens with a triple failure? If you happen to have a "fully configured" mainframe -- all processors turned on -- then.... your business still keeps running. Yes, the system might lose some processing capacity, but it keeps running. The higher priority stuff (from a business view) takes precedence automatically, and life goes on. This is all on a single machine still.

    If you've got an S18, S28, S38, or S54 model, then, at your business's convenience, the faulty hardware can be replaced. (You might do this at night, for example.) The repair technician tells the mainframe to "evacuate" memory on a portion of the machine while the OS and applications keep chugging along, possibly with reduced capacity, often not. (Depends on what configuration you choose.) When the evacuation is complete, the technician can pull a processor/memory group (called a "book"), insert the new one, bring the new one online, and... everything still keeps running. Again, this is all on a single machine -- no clusters required for any of this.
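
    A toy way to picture the sparing behavior described above (the two spares match the "minimum of two spares" mentioned earlier in this comment; everything else, including the S54-to-54-CPU mapping, is an invented simplification rather than how the z9 firmware actually works):

    # Simplified model of processor sparing: single failures are absorbed
    # silently, later ones phone home, and work keeps running (possibly
    # degraded) as long as any CPU is left.
    class Machine:
        def __init__(self, active: int, spares: int = 2):
            self.active, self.spares, self.failed = active, spares, 0

        def cpu_failure(self) -> str:
            self.failed += 1
            if self.spares > 0:
                self.spares -= 1                  # spare transparently takes over
                call_home = self.failed >= 2      # first failure: no call, per the comment
                return f"capacity unchanged, call home: {call_home}"
            self.active -= 1                      # no spares left: degrade, keep running
            return f"running degraded on {self.active} CPUs"

    m = Machine(active=54)    # a fully configured S54, assuming 54 active CPUs
    for _ in range(3):
        print(m.cpu_failure())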

"The one charm of marriage is that it makes a life of deception a neccessity." - Oscar Wilde

Working...