
IBM Saves $250M Running Linux On Mainframes

coondoggie writes "Today IBM will announce it is consolidating nearly 4,000 small computer servers in six locations onto about 30 refrigerator-sized mainframes running Linux, saving $250 million in the process. The 4,000 replaced servers will be recycled by IBM Global Asset Recovery Services. The six data centers currently take up over 8 million square feet, or the size of nearly 140 football fields."
  • by flayzernax ( 1060680 ) on Tuesday July 31, 2007 @08:58PM (#20065269)
    This proves Linux has a smaller carbon footprint than other OSes!
  • by Trailer Trash ( 60756 ) on Tuesday July 31, 2007 @08:59PM (#20065289) Homepage
    Because they're using all that Microsoft IP without paying for it....

    (it's a joke)
  • by jkrise ( 535370 ) on Tuesday July 31, 2007 @08:59PM (#20065301) Journal
    for AIX on those mainframes! After all, AIX has more Unix IP than Linux, doesn't it?
  • Um, hello, while this may deserve the 'neat' tag, it's hardly newsworthy.

    People are consolidating lightly (and heavily!) used servers into VMs all over the place.
    • by Urusai ( 865560 )
      VMs are just an excuse to keep running the same shoddy software forever. As for arguments that they help protect against crashes and the like, well, that's because you have shoddy software. Arguments that they let you run multiple environments, well, same story. You could have been paid as a software developer to rewrite crapware for the modern age, but instead all of you just cream your jeans for VMs that steal the livelihood of software developers. Dunces, all!
      • by crovira ( 10242 ) on Tuesday July 31, 2007 @09:53PM (#20065737) Homepage
        I had to maintain some software that was running on an aging 370 mainframe. The 370 was emulating a 360, which was emulating a 1401.

        It was pension and payroll software and it was legally blessed.

        It was such a frigging song and dance trying to get anything done that it was cheaper and faster for the company to emulate their butts off rather than trying to go through the management and the unions and the employees.

        But I did learn about optimizing instruction fetches by scattering the compiled code around the circumference of a magnetic drum so that the drum would have rotated around beneath the read head in time for the next instruction.

        Try and tell that to the young people of today, and they won't believe you, eh Obadiah?
        • Re: (Score:3, Insightful)

          by ZorinLynx ( 31751 )
          Isn't the System/370 backwards compatible with the 360? Why would it need to "emulate" a 360 when it can just run the 360 code directly?

          Just curious because I recall reading that even the latest zSeries systems can natively run code dating all the way back to the original System/360 models.

          -Z
          • by tuomoks ( 246421 ) <tuomo@descolada.com> on Tuesday July 31, 2007 @11:25PM (#20066427) Homepage
            Hi, yes and no. The 370 runs 360 code but, as happens too often even today, people coded to bypass the OS: old devices, drums, paper / magnetic card readers, terminals, channels, etc. Even today's systems, VM especially, keep the idea of 80-column cards, punches, readers, and so on, and if used correctly they work wonders; trust me, the 360 architecture is one of the best even today. The problem is that not many people want to learn the basics any more, i.e. Principles of Operation (for any 3xx, a good book to read, required reading, IMHO). Look up which OS version's macro libraries the Linux 370 HAL was first compiled against on 360/370 and you will be amazed. "Emulation" in the 360 family mostly means address-space differences (24/31/32/64-bit) and some added machine code / functionality, handled by the OS or hardware. And of course trapping floating point was (is?) long one of them if you didn't have the FP hardware installed.
        • by CodeMunch ( 95290 ) on Tuesday July 31, 2007 @11:54PM (#20066669) Homepage

          But I did learn about optimizing instruction fetches by scattering the compiled code around the circumference of a magnetic drum so that the drum would have rotated around beneath the read head in time for the next instruction.
          Mel [pbm.com]?? Is that you?
        • But I did learn about optimizing instruction fetches by scattering the compiled code around the circumference of a magnetic drum so that the drum would have rotated around beneath the read head in time for the next instruction.

          Given the description of a 360 emulating a 1401, I find this comment a bit difficult to follow. IIRC, none of the 370, 360, or 1401 was a drum-based computer, and the code run in each would not be optimized by consideration of location in memory.

          I do remember stories back in the old days, of 360 emulating 1401 emulating 650 (an even older machine). The 650 was a drum machine, and relied greatly on SOAP (Symbolic Optimizing Assembly Program) to develop assembly language programs which were then alloca

      • Re: (Score:2, Interesting)

        by jimmydevice ( 699057 )
        I have seen applications that are well written, understood and maintained and are 30 years old. I suffered through the rewrite of a well known, commercial revision control application we used to maintain our code. Originally in C, it was rewritten in Java with horrible results. Our checkout times went from a few minutes to 10 minutes for a full checkout. All our custom tools no longer worked, but the interface looked fabulous. Frankly, I'll take a text console based application over some bloated "modern age
        • Re: (Score:3, Interesting)

          by cyphercell ( 843398 )
          Depends on the database behind that 30-year-old software for me. I've seen extremely flat databases with nothing but a text console wrapped around them. Extremely poor integrity standards, and the data falls apart like a stack of Jenga blocks (the more people playing, the quicker it falls). Computers back then were (often) used to supplement paper systems, and sometimes it really shows.
        • by Ilgaz ( 86384 ) *
          Same for the UI too, I think. There is a British bank whose branches here are relatively new. I was there yesterday and was amazed watching a single broker handle 5 customers' work at the same time, million-dollar stuff in 5-10 minutes. There was no "mouse" or any graphics involved; I think it was some mainframe terminal application.

          I bet the OS was virtual too, running inside a mainframe. :)

          I bet, as an ex IT manager, he could answer my questions, but I don't think they are allowed to answer, and the guy was really busy moving
  • by bigattichouse ( 527527 ) on Tuesday July 31, 2007 @09:01PM (#20065321) Homepage
    We (Bigattichouse's Vectorspace Database [bigattichouse.com]) went through their Linux certification (as well as Grid cert), and they were a pleasure to work with - providing expert advice and patience in every step of the process. Not exactly on topic, I guess, but I thought I'd share. They really seem to embrace the engineering and spirit of Linux.
    • Re: (Score:2, Interesting)

      by etnu ( 957152 )
      I recently worked with IBM to interop with sametime (their IM network), and my opinion of their engineering practices would probably get me fired for disparaging a partner.
    • Re: (Score:3, Informative)

      by Ilgaz ( 86384 ) *
      IBM always impresses with the level of their support and especially their attitude of never abandoning the people they sell stuff to.

      OS/2 was declared dead 10,000 times (even by fans) while it was still getting new graphics drivers, actually purchased from SciTech Software.

      If you are a PowerPC (G4/G5) user in desperate need of a non-beta, working Java 6, you simply install PPC Linux and the IBM-supported, non-beta Java 6 along with CPU acceleration. That is the system and OS which Nvidia/ATI refuses to ship
  • by Anonymous Coward on Tuesday July 31, 2007 @09:02PM (#20065329)
    The article says that the data centers required for the 4000 "small computer servers" aggregate to about 8 million square feet. It takes IBM 2000 square feet to house a small computer? Also, saving $250 million suggests that it costs them something over $60K per "small computer" even ignoring the price of the new mainframes. Amazing.
    • by RuBLed ( 995686 ) on Tuesday July 31, 2007 @09:44PM (#20065665)
      Vacuum tubes = costly = takes up large space = less green = it's about time

      (I'm so sorry)
    • The article says that the data centers required for the 4000 "small computer servers" aggregate to about 8 million square feet. It takes IBM 2000 square feet to house a small computer?

      It's less amazing when you think of it as six 300x400-yard warehouses run by clients. Those are big buildings, but "data centers" usually are large. There are not enough details in the article to figure it all out, but going from four thousand computers to 30 boxes is an impressive feat that will save a lot of electricity.

    • That must include the office space for the hundreds of MSFTs required to apply patches and restart the 4000 servers all the time.
    • Consider this... (Score:3, Interesting)

      by Krozy ( 755542 )

      Yes, 4000 "small computer servers" times 2000 square feet equals 8 million square feet. But this is unlikely to be the arrangement. Consider instead a few buildings of data centers, each with one or more relatively small rooms. Within a room, there may be a few racks, all surrounded by walking space, and other peripherals like AC units. Then outside of those rooms, more walking space for hallways. When you factor in all the human space and simple space for ventilation, and then cubicles and monitoring for sup

    • Each of the smaller facilities requires lots of overhead in terms of money and space for air conditioning, security guards, power generators, walls, fire control systems, UPS, etc. Once the systems were centralized, these expenses also became centralized. Thus, less space is required, costs go down, and everyone is happy.
  • by MarcQuadra ( 129430 ) on Tuesday July 31, 2007 @09:02PM (#20065331)
    My employer recently 'consolidated' their server farm too. We used to have a room with fifty aging Dell PowerEdge servers, each running independently, requiring massive support, cooling, and electricity.

    Now we have ten VM servers running all the migrated services, PLUS a room with about fifty aging Dell PowerEdge servers, each running independently, requiring massive support, cooling, and electricity.

    I never thought 'consolidation' would require so much more space, electricity, air conditioning, and upgrades to core switches and UPS units.
    • by Anonymous Coward on Tuesday July 31, 2007 @09:12PM (#20065387)
      You're supposed to turn off and sell off the servers you replace. (Score -1: Obvious).

      (Strange thing is, I make a good living replacing aging mainframes with Linux clusters. Mainframes are fine when you're doing transaction processing, but for CPU-bound stuff you're better off with a room full of Opterons).

    • Yeah... that sounds about right. I've ordered plenty of VMware host servers for "consolidation" as well, only to have them grabbed up by new projects that were higher priority and had more potential to make the boss look good to his managers. Sure... A few servers get shut off every year due to projects that leave, but the actual server count ends up expanding every year.

      I wonder if IBM factored in the number of oddball projects that require Windows systems in their server count? Windows won't run in a zSeries VM, and there is plenty of software out there that is still Windows only.
      • by LWATCDR ( 28044 )
        "I wonder if IBM factored in the number oddball projects that require Windows systems in their server count? Windows won't run in a zSeries VM, and there is plenty of software out there that is still Windows only."
        Not on servers. IBM uses Lotus Notes, so no need for Exchange servers. They probably use DB2 for any SQL systems, so no MSSQL servers.
        Frankly I can't think of many reasons that IBM would have to keep many Windows servers around except for testing IBM software running on a Windows server.
    • Now we have ten VM servers running all the migrated services, PLUS a room with about fifty aging Dell PowerEdge servers
      So what everybody is now wondering: "WTF are those fifty aging Dell PowerEdge servers doing besides idling?"
  • System z Mainframes (Score:2, Interesting)

    by o2sd ( 1002888 )
    It's kinda hard to find technical specifications on these mainframes beyond marketing fluff. After some looking I found this brochure [ibm.com], which has some interesting information on the firmware and a few details of the I/O, but not much about the processing units, and why one of these would be able to replace 133 blade servers. It does mention up to 30 superscalar processors per box, but I'm not really sure what that means. (Maybe they go next to the inverting flux capacitor).
    • by BrynM ( 217883 ) * on Tuesday July 31, 2007 @09:46PM (#20065679) Homepage Journal

      It's kinda hard to find technical specifications on these mainframes beyond marketing fluff.

      Part of that is because IBM will customize the machines to your heart's content. The sky and your budget are the only limits. They leave a good many of the loadout details up to the customer (xGB/TB of RAM, DASD storage size, # of CPUs per card, # of CPU cards, even the number of mainframes - they can be chained in parallel). You should look at the Z series hardware specs [ibm.com] for the general details and look up whatever you don't know.

      If you're looking for benchmarks or comparisons to x86/x86-64 or other commodity architectures, good luck - they are nearly impossible to find, because the implementations are on entirely different scales. The best comparison you can find is MIPS per CPU. You can find some slightly stale numbers here [isham-research.co.uk] (BTW: an LPAR [wikipedia.org] is something that's been around on mainframes for several decades - one LPAR can run up to several hundred x86-class VMs concurrently).

      • Re: (Score:2, Interesting)

        by duffbeer703 ( 177751 ) *
        The other part of it is that the zLinux miracle is mostly bullshit.

        zLinux is great if you're consolidating mostly idle, low priority resources. The "magic" that allows you to save money while simultaneously getting raped by IBM Global Services and paying too much for hardware is thin provisioning. You might assign 10 Linux VMs 1GB RAM each, and only have 4GB of actual memory available. Same thing with CPU. This is an efficient use of resources, if your applications don't all require memory at the same time.
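
        To make the thin-provisioning point concrete, here is a rough sketch of the overcommit arithmetic, using the illustrative numbers above (nothing here is IBM-published data):

# Back-of-envelope sketch of the memory overcommit ("thin provisioning")
# described above. The numbers are illustrative, not from the article.
def overcommit_ratio(vm_assignments_gb, physical_gb):
    """Ratio of memory promised to guests vs. memory actually installed."""
    return sum(vm_assignments_gb) / physical_gb

vms = [1.0] * 10   # ten Linux VMs assigned 1 GB each
physical = 4.0     # backed by only 4 GB of real memory

ratio = overcommit_ratio(vms, physical)
print(f"Promised {sum(vms):.0f} GB on {physical:.0f} GB installed: {ratio:.1f}x overcommit")
# This only pays off if the guests rarely touch their full allocation at the
# same time; otherwise the hypervisor starts paging and performance drops.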
        • by BBCWatcher ( 900486 ) on Wednesday August 01, 2007 @04:41AM (#20068021)

          There are a lot of errors in your comments, unfortunately. Of course you can run Red Hat and SuSE concurrently in a single LPAR under z/VM, and multiple versions thereof. This has always been true, ever since Linux began running on mainframes many years ago. You might want to have more than one LPAR to run more than one version of (first level) z/VM, but you don't need many. Two or three for z/VM and Linux is typical and just fine. And it's not as if LPARs are in short supply on mainframes: up to 60 are available on a single machine (30 on the smaller model), so "spending" 1 to 3 is no big deal.

          Re: Investing in new mainframes, come on, get real. It's so easy to find market data because companies like Gartner and IDC publish it, and IBM just announced its 8th straight quarter of mainframe hardware growth, something that hasn't happened since before Y2K. It's impossible to do that with "a few showboat customers."

          And no, you simply cannot approach the level of virtualization these machines offer on any other system, at least for typical business computing, and still offer reliable service to users. In fact, in IBM's case many of the software licenses are presumably "free," and they still found big cost savings by taking 4,000 machines down to 30. For the rest of the world the mathematics in such situations are even more compelling.

      • Re: (Score:3, Informative)

        by ChrisA90278 ( 905188 )
        "one LPAR can run up to several hundred x86 VMs concurrently)."

        When I started out, the "hot" PC, the best you could get, was a 4MHz Z80 running CP/M. I had one of those at home, and at work I worked on the operating system of a very old (even then) CDC mainframe, a CDC 6600. We had a Z80 emulator that ran on the 6600, and we could emulate a Z80 at about 20 times real time. Not bad: a virtual PC running on a mainframe in the late 1970s.

        Us software people really need to get off the ball and think of som
    • by Anonymous Coward
      There are a few reasons why the specs for mainframes are so hard to find.
      One is that the things you find on IBM's website are designed for CEOs and CIOs who don't really care about technical details -- only "solutions"
      The second is that the specs themselves aren't well-defined. As an earlier poster pointed out, you don't buy one of these things off the shelf. You tell IBM what you want to do with it, and you work with them to construct not just a mainframe, but all of the storage and other add-ons that come
  • There's nothing new under the Sun: And Sun's offerings in hardware and software are also very much aimed at consolidation. Bring it on.
  • by GFree ( 853379 )
    Saving a lot of dough by using Linux on servers makes sense; heck, it's fairly obvious to anyone here that that's where it excels.

    I think Slashdotters would be more interested in stories that focus on a company switching its desktops to Linux though. Servers running Linux are pretty common. We want news about the desktop front; it would be more newsworthy at least.
    • That would be good, but I do want at least one new story a week.

      The point of this story isn't to point out that Linux servers are cheaper/better/faster; it's to point out that the Linux platform got some publicity. For those trying to get everyone switching to Linux for their desktops, publicity is their one major problem (as well as many smaller problems, but that's an argument for another story).
      Remember when Microsoft said that Linux infringes on their patents but they weren't going to sue? They were ne
  • $250M?? (Score:4, Insightful)

    by evanbd ( 210358 ) on Tuesday July 31, 2007 @09:37PM (#20065617)
    Let's see... $250M / 4000 = $62,500 per server being consolidated? I mean, I know floor space, buildings, racks, power, AC, etc. cost money... but that's still a *lot*. Anyone care to chime in on how close to normal that is?
    • Re:$250M?? (Score:5, Funny)

      by brxndxn ( 461473 ) on Tuesday July 31, 2007 @09:44PM (#20065671)
      The old servers were Macs?
    • Re:$250M?? (Score:5, Informative)

      by Anonymous Coward on Tuesday July 31, 2007 @09:54PM (#20065749)
      They're probably computing cost over the expected lifetime.

      Combine IT salary for 3-5 years, power over 3-5 years, etc. etc. and that number makes sense.
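
      As a rough sanity check on that, here is the per-server arithmetic with purely hypothetical cost figures (the article only gives the $250M and 4,000-server totals; the 5-year window comes from another article cited further down the thread):

# Per-server lifetime-cost sketch. All breakdown figures below are assumptions.
servers = 4000
total_savings = 250_000_000               # from the article
per_server = total_savings / servers      # $62,500
per_server_per_year = per_server / 5      # $12,500/year over ~5 years

# A hypothetical breakdown that lands in that range:
admin_share    = 8000   # slice of a sysadmin's salary per box per year
power_cooling  = 2500   # electricity plus air conditioning
space_and_gear = 1500   # floor space, racks, UPS share
licenses       = 500
print(per_server_per_year,
      admin_share + power_cooling + space_and_gear + licenses)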
      • by caluml ( 551744 )
        You pay your servers a salary? You're doing it all wrong...

      • They're probably computing cost over the expected lifetime.

        Combine IT salary for 3-5 years, power over 3-5 years, etc. etc. and that number makes sense.


        This is the crux of IBM's virtualization ads: you can get rid of admins and save big bucks. Also electricity and floor space, but you have to throw salaries in there to come up with the savings they suggest.

        Why does it take less IT salary as suggested above for 4000 virtual Linux servers on a mainframe than with 4
    • Re:$250M?? (Score:4, Informative)

      by thedarknite ( 1031380 ) on Tuesday July 31, 2007 @10:50PM (#20066153) Homepage
      According to another article [com.com], IBM is saving the $250M over 5 years, predominantly from reduced running costs.
    • Let's see... $250M / 4000 = $62,500 per server being consolidated? I mean, I know floor space, buildings, racks, power, AC, etc. cost money... but that's still a *lot*. Anyone care to chime in on how close to normal that is?

      Running Windows Server 2003, each one needs an "admin" plus licensing fees for software; you know it gets expensive.

      Really, I think the cost was in 130 football fields of mostly unused server warehouse.

    • You are forgetting about the 4000 MCSEs that they can now lay off.
    • 140 football fields for 4000 servers? That's about 30 servers to a football field, or about 40'x40' per server.

      Google tells me that a football field (including end zones) is 360'x160', or 57,600 square feet.

      If every server had 14 square feet, you could put all 4,000 of them on one football field.
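
      Spelled out (field size per the standard 360'x160' figure, totals from the summary):

# Checking the numbers above.
field_sqft = 360 * 160              # 57,600 sq ft per football field
total_sqft = 8_000_000              # from the article summary
servers = 4000

print(total_sqft / field_sqft)      # ~138.9 fields -> "nearly 140"
print(total_sqft / servers)         # 2,000 sq ft per server
print(servers * 14)                 # 56,000 sq ft: all 4,000 servers at
                                    # 14 sq ft each fit on about one field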
  • by pedantic bore ( 740196 ) on Tuesday July 31, 2007 @09:37PM (#20065621)

    The story here is about consolidation, virtualization, etc.

    Linux is a small part of the technology involved here. z/OS is the real story here.

  • Well Duh. (Score:5, Insightful)

    by ryanisflyboy ( 202507 ) on Tuesday July 31, 2007 @09:50PM (#20065715) Homepage Journal
    If you take hundreds of cabs and consolidate them down to 40 (with the associated consolidated storage) you are going to save millions. That has little to do with Linux. It is the modern mainframe that makes this kind of thing possible, which is why more people are moving to them. They must have a lot of servers spinning idle to get this done.

    The reason why companies are in this pickle is because they thought more was better. They thought, "All we need to do is buy 4000 x86 servers and we can do tons of work." They didn't realize how HARD it is to get 4000 servers to operate in a cluster so you can take advantage of those individual systems as one body. So, they ended up with islands of computing power instead of a cluster. Naturally the mainframe consolidates these islands back into computing continents, and you end up running the mainframes at near capacity all the time. Modern mainframes make this easy with dynamic CPU/RAM allocation, as well as dynamic storage. So you segment the mainframe into four or eight chunks. Chunk 1 is hot, chunks 3 and 5 are idle. Simply re-assign some of the CPUs from chunks 3 and 5 to 1 until the load goes down. You can take advantage of this in a big way if you segment your workload to match global demand. So chunk 1 might be data for the western USA, and chunk 7 might be EMEA. You can bounce resources between those segments much more easily. You can even script it. HP has an offering that does this automagically; I'm sure IBM has something similar. (A toy sketch of this chunk-shuffling appears at the end of this comment.)

    Now, my personal question is: why Linux? Some of the more advanced features like dynamic RAM, CPU, and IO allocation don't appear to be that solid to me. Perhaps IBM added these features to Linux or made them more robust? Maybe they run Linux inside an AIX virtualization container?
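
    To make that chunk-shuffling idea concrete, here is a toy sketch (hypothetical names and thresholds; real machines do this in the hypervisor/firmware, not in a script like this):

# Toy rebalancer: move one CPU at a time from the most idle chunk to any hot chunk.
def rebalance(chunks, hot=0.85, idle=0.25):
    for name, c in chunks.items():
        if c["load"] > hot:
            donors = [d for d in chunks.values()
                      if d is not c and d["load"] < idle and d["cpus"] > 1]
            if donors:
                donor = min(donors, key=lambda d: d["load"])
                donor["cpus"] -= 1
                c["cpus"] += 1
                print(f"moved 1 CPU to {name}")

chunks = {
    "west_usa": {"cpus": 8, "load": 0.95},  # chunk 1: hot during US business hours
    "emea":     {"cpus": 8, "load": 0.10},  # chunk 7: idle overnight
}
rebalance(chunks)
print(chunks)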
  • Now when something goes wrong, 133 server apps go down all at once! I know, Linux is stable, but a machine hosting 133 apps just sounds like a recipe for a molly-guard type disaster.
    • Nah ... Linux is just the icing on the cake. There's plenty of real mainframe meat underneath.
    • by Strider- ( 39683 ) on Tuesday July 31, 2007 @10:17PM (#20065899)

      Now when something goes wrong, 133 server apps go down all at once! I know, Linux is stable, but a machine hosting 133 apps just sounds like a recipe for a molly-guard type disaster.


      These are machines that don't break, period. We're talking the types of machines that run the major banking systems of the world and the like. They simply do not go down. In this situation, if one of the 133 apps buggers up, it's only that VM that's shot. You just nuke it and restart it, the rest of the machine just keeps ticking along.
      • Re: (Score:3, Interesting)

        by freeze128 ( 544774 )
        While I agree that IBM's mainframe systems are rock-solid (and, as a colleague is fond of saying, self-healing), accidents *DO* happen. I'm sure the mainframe is happily running its code just fine only seconds before a hurricane rips the roof off of the data center and hurls the machine into the next county....

        It's those kinds of things that make disaster recovery necessary. If the apps were distributed across discrete servers, it's possible that not all of them would have been destroyed. Remember the end o
      • I know they don't break; that's why I said a molly-guard-type disaster. That means if someone turns the box off, it is like turning off 133 computers all at once. I'm not saying it happens much, but Big Red Buttons do get hit.
    • Re: (Score:3, Informative)

      by rascher ( 1069376 )
      Not quite. They are engineered (as they have been for decades) for stability and were designed to handle that kind of load. Its CPU/RAM/storage are redundant, so that if something in the system goes down, new resources are allocated. Additionally, shops will have multiple mainframes just for that kind of redundancy. It's kind of like saying your car is a "single point of failure" - sure, it is, but it was engineered for the purpose of being reliable.
    • Re: (Score:3, Informative)

      Last week, I attended a presentation at IBM's Australian Development Lab in West Perth, where a lot of the z/OS-related code is maintained and developed.

      From what we were told, IBM z/OS mainframes are the *most* reliable platform to host software services (but of course, they'd say that).

      The following is from memory, as best as I can remember it, and may not be 100% accurate:

      The 'z' in 'z/OS' stands for 'zero downtime'. z System mainframes are engineered for 99.999% availability, or less than 3 minutes of d
      • Yes, System z mainframes are engineered for 99.999+% availability. But it's important to define availability here very precisely. IBM defines this as business service: what the user gets. Therefore, planned downtime is just as bad as unplanned downtime. A lot of IT people get confused by this point, but it's very important. "Excuse me while I shut down credit card approvals for a couple hours to upgrade the database" and you'll be escorted out of the building promptly.

        Now, there's nothing in the original s

    • by SEE ( 7681 ) on Wednesday August 01, 2007 @01:54AM (#20067241) Homepage
      Yeah, if somebody hits the Big Red Switch, there's going to be a problem. But, if they don't, well, it's a mainframe.

      The Linux on these machines is running under z/VM, in multiple virtual machines. When one of them has a software fault, you reboot that one's VM and keep going; the other 132 Linux-running VMs run without noticing anything happened. (It is possible for z/VM to fault, sure. But it's an OS with 40 years of refinement in the "100% uptime" mainframe culture, and its task is just managing the virtual machines.) When something goes wrong with the hardware, the fault tolerance and self-healing features keep things running, and you fix the faulty element with a hot-swap. A properly set-up datacenter is going to minimize external risks, with backup power and such. Proper choice of datacenter location will minimize natural disaster risk.

      So, yeah, the big risk is human failure, and these IBM-built, IBM-owned datacenters are presumably going to have extensively trained IBM-employed mainframe personnel, which minimizes that risk.

      Now, if some cable company cuts the fiber optic lines . . .
  • by Nom du Keyboard ( 633989 ) on Tuesday July 31, 2007 @10:08PM (#20065843)
    Can I have the old ones?
  • Comment removed (Score:5, Interesting)

    by account_deleted ( 4530225 ) on Tuesday July 31, 2007 @10:23PM (#20065955)
    Comment removed based on user account deletion
    • Re: (Score:2, Informative)

      by 1lapracer ( 970110 )
      The blue team was in the other day, and I sat through the entire 2 hours of how much money running Linux on the Z would save us. It sounded great on paper. As I was leaving with the AIX guys, they could barely contain themselves, so I asked what was so funny. They gave me the summary of how, the last time we tried this several years ago, the results were similar to if not worse than the previous poster's. We would have spent many, many millions to run our pSeries and/or iSeries servers. I'm sure we w
    • Personally I consider Linux on the mainframe to be on par with running Linux on an iPhone. Sure you probably can, but does it actually do anything uniquely useful for the business?

      Is it really Linux on the mainframe that was the problem, or rather virtualization and IBM pricing? In other words, if you ran one instance of Linux hosting 1,000 server processes, might it not work well?

      If so, Linux might have advantages over a special-purpose mainframe OS. Presumably there are more apps for Linux and mainf

    • by phoebe ( 196531 )

      Isn't this the entire crux: extreme reliability costs a lot of money and resources? Yes, a single node is slower than a regular server, but you can also run many nodes in parallel, so you are only limited by software design.

      To implement remotely similar reliability with regular hardware you are going to need a redundant SAN, redundant switching, redundant NICs, redundant CPUs, redundant memory, etc, and a lot of cabling. Running something like VMWare ESX will allow you to bump VMs in realtime between host

  • by ogren ( 32428 ) on Tuesday July 31, 2007 @10:37PM (#20066057) Homepage
    I still haven't seen any conclusive evidence that Linux on mainframe is a good idea. I'm sure running 30 new mainframes is going to cost less than 4000 aging servers. Just about anything would be less expensive than 4000 aging servers.

    But I bet that a small farm of modern medium-sized servers running Linux on VMware would be even less expensive. Or Solaris/Niagara. Why would you want to run an open source operating system, whose major benefits are openness and affordability, on what is literally the most expensive and most proprietary computing platform in the world?

    These server consolidation projects are just giant boondoggles spawned because the server sprawl finally got insane. It's an endless cycle:

    A. Giant server consolidation project that takes 4000 servers down to 30 servers.
    B. Department B complains that Department A's application keeps hanging and consuming all of the CPU. They demand their own hardware "for availability reasons".
    C. Vendor C demands dedicated hardware for licensing/capacity planning/supportability reasons. Rather than constantly bicker with the vendor over supportability they get dedicated hardware.
    D. Department D complains that the IT department is charging outrageous prices for time sharing on the mainframe. After all a dedicated server only costs $XXX.
    E. Suddenly there are 4000 servers again.
    F. IT department spends some insane amount of money on infrastructure to manage the 4000 servers.
    G. IT department budget gets insanely large trying to manage that much stuff.
    H. Some CIO gets the idea that all of this money managing servers is ridiculous and we should do a server consolidation project.
    I. IT department spends an even larger amount of money on the latest super-high-availability gear and consulting services so that they can run 4000 commodity servers inside a few big servers. All because it will "cost less to maintain".
    J. Go back to A.

  • What bugs me is: why did IBM have 4000 servers that were evidently doing nothing?

    In my experience, consolidation using virtualization only works if the servers in question don't have anything to do and only run a zoo of defunct web sites, for example.
  • by Maxo-Texas ( 864189 ) on Wednesday August 01, 2007 @12:12AM (#20066769)
    Before, .33% failure rate = 13 failures a day. You had well understood procedures for dealing with failures.

    After, .33% failure rate = 1 failure per thousand days. This is a recipe for hell.

    But wait...
    When you do have one machine fail, it takes down 133 virtual servers at the same time. You raised your risk enormously. (A quick sketch of this arithmetic is at the end of this comment.)

    IBM will tell you all about fail-over just like they did our executives.

    Half the country down for three days is the reality.

    ---

    Still it is interesting to see a return to the centralized mainframe farm. Sure hope those multiply redundant communication lines don't go down.
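
    A quick sketch of the arithmetic behind those failure numbers (the 0.33% daily rate is the parent's; the mainframe rate is an assumption picked to show how "1 failure per thousand days" could come about):

# Expected failures per day = machines x per-machine daily failure probability.
before = 4000 * 0.0033      # ~13 failures/day across the old server farm
after  = 30 * 0.000033      # assumes each mainframe is ~100x more reliable:
                            # ~0.001/day, i.e. roughly one failure per 1,000 days
print(f"before: {before:.1f} failures/day")
print(f"after:  about 1 failure every {1/after:,.0f} days")
# The catch: each of those rare failures now takes ~133 virtual servers down
# with it, and nobody remembers the recovery drill any more.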
  • by HockeyPuck ( 141947 ) on Wednesday August 01, 2007 @12:17AM (#20066791)
    I work in a longtime "blue shop", so this is from that perspective, not always from a software/OS perspective. While I like the 'concept' of running Linux on zSeries, I think you have to take a look at the requirements and choose a platform that can run the same apps.

    For example, for email we run Lotus Notes on a couple of BIG pSeries (AIX) servers. We could have run it on a farm (technical term) of Windows boxes.

    For web servers, which you could run on AIX or on Linux on zSeries, we have multiple (read: many) x86 servers running Linux+Apache. Why? They connect to a backend app server (pSeries), which connects to a backend zSeries DB2 database (I'd prefer Oracle; however, running Oracle on zSeries requires it to be run in a Linux VM).

    We definitely subscribe to the school of using VMs, whether they are zSeries, pSeries, or VMware on x86. Even if the x86 server is running ONE application, we still put VMware underneath, as it allows us to move the image to a newer hardware platform when it's time to upgrade. Even some of the larger x86 servers run VMware, but in each partition there is a single instance of Apache. It makes managing storage that much easier (fewer zones, cabling, etc.).

    Would I consider moving our Apache on Linux on x86 to Apache on Linux on zSeries? Not really. It's a waste of CPU cycles (MIPS). I'd rather use zSeries MIPS for something a bit more critical, like keeping my database up and running, than for serving out webpages (static or dynamic).

    IT isn't about religion; it's about finding the best tool for your requirements. I have no problem telling IBM that product XYZ is trash. While my servers are IBM, you won't see IBM disk or IBM tape, and at least once a quarter some salesman from IBM's storage group is at my door. He buys me lunch, and every quarter he is sent packing. You won't see IBM BladeCenters either, as the thought of hundreds of additional servers to manage isn't appealing (but I'll gladly take 100s of VMs across larger x86/pSeries boxes).

    I know many of you were expecting to hear me say 5000 Linux servers, but there are options for my requirements that did not lead to big "Google-style" Linux farms.

    BTW: I have no problems kicking out IBM on x86 if HP/Dell/Sun have a better product, and knowing this and letting IBM know this gives me a great advantage over them, as they very well know I'm capable of bringing in something more suitable. (I *used* to have IBM storage).

  • The headline and article imply that they are switching to Linux (among other things).

    What operating system were they previously using?
    • by simong ( 32944 )
      Probably a mixture of AIX, Solaris and Linux with a sprinkling of HP-UX thrown in. IBM are nothing if not eclectic and indeed pragmatic about the services they provide.
  • "Football fields"? (Score:3, Interesting)

    by darnok ( 650458 ) on Wednesday August 01, 2007 @03:02AM (#20067587)
    "The six data centers currently take up over 8 million square feet, or the size of nearly 140 football fields."

    I suppose when the US finally goes metric, they'll have to deal with units of area such as "millifields", "centifields" and "kilofields". In time, the measure will have to be formalised, e.g. "the distance a 100kg, 190cm man is able to kick a leather-encased rubber bladder...".

    Or maybe the current generation of writers that thinks "140 football fields" is a meaningful substitute for "a really big chunk of space" will have died off by then.
    • I've always hated the media's tendency to measure things in football fields {area} (American football of course), Football Stadiums {numbers of people and/or volume} (again, American football), Olympic swimming pools, Empire State Buildings, Statues of Liberty*, feet of water covering a given US State, and trips to the moon.

      Ok, maybe I can see that some folks would have a problem understanding volume (because our poor educational system means that we are barely able to manage two dimensions), but why must t
  • I was that soldier (Score:4, Informative)

    by simong ( 32944 ) on Wednesday August 01, 2007 @04:42AM (#20068033) Homepage
    I was involved in a migration to the z/OS architecture three years ago. I am currently involved in a similar exercise for a British telecoms company whose name escapes me. In both cases the principle was perfectly sound, but the reality rapidly comes down to what can be migrated, when, and why. At IBM, application compatibility was a major consideration, and ultimately prevented key parts of the system from being migrated. At the current site, surprise surprise, the problems are the same, plus reluctance to do the work (upgrades, work required on the client's part, age of applications and Plain Old Politics). I wish IBM good luck, and perhaps because there is a better integration of operations and systems they might succeed, but I would be willing to bet that by the end of the process, they will have reached about 80% of their target.
  • ...by switching to Geico.
  • Metric? (Score:3, Funny)

    by vidnet ( 580068 ) on Wednesday August 01, 2007 @03:12PM (#20075783) Homepage
    The six data centers currently take up over 8 million square feet, or the size of nearly 140 football fields.

    In metric, that would be around 104 soccer fields.
