Linux From A CIO's Perspective

An anonymous reader writes "CIO.com has a story on Linux and OSS in the enterprise from the perspective of the CIO of Cendant Travel Distribution Services, Mickey Lutz. 'In the summer of 2003, Mickey Lutz did something that most CIOs, even today, would consider unthinkable: He moved a critical part of his IT infrastructure from the mainframe and Unix to Linux. For Lutz, the objections to Linux, regarding its technical robustness and lack of vendor support, had melted enough to justify the gamble.' His organization saved 90% in costs in so doing. Read on if you want to see how the top brass views OSS."
  • by AKAImBatman ( 238306 ) * <akaimbatman AT gmail DOT com> on Friday July 01, 2005 @02:51PM (#12963282) Homepage Journal
    This pretty much sums it up:

    Lutz's IT group rewrote a complex, real-time airline pricing application that serves hundreds of thousands of travel agents around the world and that also acts as the system of record for all of United Airlines' ticket reservations. When this application came up on Linux, it proved to be so demanding--it handles up to 700 pricing requests per second--that it completely redefined Cendant's expectations about what it would take to get Linux to work. "We have broken every piece of software we've ever thrown at this platform, including Linux itself," says Lutz.

    With Big Iron you're paying a LOT of money. But you're not paying it for nothing. Big Iron will give you a lot of guarantees for stability, reliability, and throughput that don't exist on other systems. The key to this CIO's success is that he was willing to accept the challenges of doing Big Iron work on Little Brass systems. As long as you work all the details out yourself, this *can* work. (As Google has so eloquently proven. [linuxtoday.com]) The issue is that you're working without a safety net. If things go really wrong, there's no backup army of specially trained techs to run in and fix things. (And trust me, if you're paying enough money you'll have your own personal army of techs.)

    The upshot to all of this is that if the gamble pays off, it pays off in a big way. All that money you were spending for a personal army, plus some other company's R&D, now goes into your own pockets. You don't get away scot-free (someone has to maintain the systems), but you see your rewards. And isn't that what business is about? Taking risks and making profits? If you've got the infrastructure to go for something like this, then go ahead and grab fate by the balls. No one ever got anywhere in life by playing it safe. ;-)

    The "black box" of open source has transformed into something any CIO can appreciate: reliable performance and consistent uptime. The penguin can fly now.
    • The issue is that you're working without a safety net. If things go really wrong, there's no backup army of specially trained techs to run in and fix things.

      Well, there is a backup Army, and it's you.

      Google can have a 4000-node Linux cluster because they have enough staff to maintain and optimize the system (Keep an eye on their job pages to get an idea).

      Google also has some highly specialized needs -- some machines only crunch data for the DB, other machines only serve webpages, etc. It's in their interest to optimize the kernel, OS, database and web applications as much as possible. Take a tweak which gains a 1% performance gain, multiply that against 4000 machines, and it's quite an advantage.

      There isn't a vendor in the world that can totally support their infrastructure, so Google does it themselves.
      • by Anonymous Coward on Friday July 01, 2005 @03:26PM (#12963755)
        Take a tweak which gains a 1% performance gain, multiply that against 4000 machines, and it's quite an advantage.

        Let's see . . . that's . . . [pencil scratching] . . . 1%! Amazing!

      • by AKAImBatman ( 238306 ) * <akaimbatman AT gmail DOT com> on Friday July 01, 2005 @03:53PM (#12964064) Homepage Journal
        Well, there is a backup Army, and it's you.

        No, you are the front-line army. The backup army was the reason you were paying the annual fees. Without those annual fees, there is no backup army. I.e., if you can't get it right, there's no one else to come in and fix it for you.

        Take a tweak which gains a 1% performance gain, multiply that against 4000 machines, and it's quite an advantage.

        That's something of a straw man argument. If 3 Sun machines and 10 LinTel boxes have the same FLOPS capacity, then a 1% increase in either one will add up to the same increase in computing power. The key difference is that there are only three Sun machines to update. (See the sketch below.)

        There isn't a vendor in the world that can totally support their infrastructure, so Google does it themselves.

        That doesn't mean that there couldn't be. Google made their choice to go with a large number of decentralized systems. It works for them and it works well. But they have to do everything internally *because* of that decision. Had they gone the eBay route, they would be able to get that backup army, but then they would pay for the privilege.
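
        To put rough numbers on the point above, here's a minimal Python sketch (the throughput figures are made-up placeholders, not Cendant's or Google's): a 1% per-box tweak buys the same relative gain whether it lands on 3 big boxes or 4,000 small ones sized to the same aggregate capacity, so what differs is only how many machines you have to touch to deploy it.

            # Hypothetical: 4,000 commodity boxes vs. 3 big SMP machines,
            # both sized to the same 400,000 req/s aggregate capacity.
            def gain_from_tweak(nodes, per_node_throughput, tweak=0.01):
                total = nodes * per_node_throughput
                return total, total * tweak  # aggregate before, absolute gain

            for nodes, per_node in [(4000, 100.0), (3, 400000.0 / 3)]:
                total, gained = gain_from_tweak(nodes, per_node)
                print(f"{nodes:>4} nodes: {total:,.0f} req/s total, "
                      f"+{gained:,.0f} req/s from a 1% tweak")

        Either way the tweak is worth +4,000 req/s here; the cluster just has 4,000 places to apply it.
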
        • No, you are the front-line army. The backup army was the reason you were paying the annual fees. Without those annual fees, there is no backup army.

          Well, sometimes even with the annual fees there is no backup army.

          In my experience with small to mid-sized businesses, external vendors can't support much beyond the core product. If you customize the site beyond a minimum level, the vendor can't support it well (But they will still take your money).

          In Google's case, they made the choice to optimize their p
        • It's not like you can't pay a huge amount of money and get an army of outside Linux consultants to come and do things for you if something goes wrong.

          You can get the same service for the same amount of money if you want to - with Linux you actually get a choice, though, instead of having to pay the huge amount of money even if the Big Iron support somehow proves nearly useless to you.

      • Google can have a 4000-node Linux cluster

        Sorry, but last time I counted, Google had 120,000 Linux machines...
    • "The issue is that you're working without a safety net. If things go really wrong, there's no backup army of specially trained techs to run in and fix things."

      They had serious problems, but this sounds like a sufficient safety net: "a 40- to 50-person cutover team of IBM, Red Hat and Cendant engineers brought the problems under control by throwing more servers into the mix."

    • Dude, I used to work for Cendant, and I can tell you that the "Army of IBM" people you got was mostly a bunch of idiots. They used to be good, but most don't know jack anymore.

      I won't get into the fact that they (IBM) flat out stole $150 Million from a Cendant affiliate (RCI) by telling them that they could build a distributed system to replace their mainframe. You could sit in the room with the IBMers and they would say "This thing is NEVER going to work, we can't replicate data to all these AS/400s around
  • unthinkable? (Score:4, Insightful)

    by delirium of disorder ( 701392 ) on Friday July 01, 2005 @02:51PM (#12963291) Homepage Journal
    Moving desktops to Linux might be considered revolutionary, but this isn't. The big iron market of servers and HPC machines has really been dominated by Linux for several years now.
    • It is in the Fortune 500. As a previous poster pointed out, big iron means very high reliability. Lutz made a very brave decision and made it work.
  • by bedroll ( 806612 ) on Friday July 01, 2005 @02:51PM (#12963300) Journal
    The only thing that makes this news is that a CIO actually recognized it.
  • by Anonymous Coward on Friday July 01, 2005 @02:52PM (#12963305)
    would consider unthinkable: He moved a critical part of his IT infrastructure from the mainframe and Unix to Linux.

    This is actually quite thinkable. Now if some CIO moved all his desktops to Linux, I would be impressed. Moving the back-office stuff from expensive licenses of Unix and mainframes to Linux is a no-brainer.
    • by Decaff ( 42676 ) on Friday July 01, 2005 @03:39PM (#12963901)
      Moving the back-office stuff from expensive licenses of Unix and mainframes to Linux is a no-brainer.

      No it isn't. There are many very high volume commercial and financial websites that use features of commercial Unixes, such as memory and resource partitioning, self-healing, fault management and very high scalability. Linux will certainly get all these at some point, but until then it is certainly not a 'no-brainer' to move. Even with smaller systems there are many applications that require certain Unix versions.
      • I'm not sure how to respond to the "no-brainer" label. My development team just migrated our company's production software development platform from a network of aging HP-UX machines (which served us quite well in their day, don't get me wrong) to a Linux network. The new development tools run 20 times faster (that's the actual figure, not hyperbole), our experience over the past two years is that the Linux network is much more reliable, and the server hardware is simply an increment onto their existing W

  • by Anonymous Coward on Friday July 01, 2005 @02:54PM (#12963332)
    There are two things I find really interesting here:
    1. Vendor support. OK, so if vendor support has gotten better, then which vendors does this CIO recommend? I don't see which ones he used in the article; maybe I just missed something.
    2. This quote
      "Open source is propelling us to adopt Java and a new way of programming," he says.
      Should be a bit of a cluebat for both Sun and the Open Source extremists. Java and Open Source can be extremely good for each other; it's just that both Sun and the Open Source community need to learn to cooperate on practical matters with those whose ideological goals differ... unfortunately, neither side seems much interested in realizing how much they could benefit from the other.
  • Clearly (from the get the facts site) it costs even less than Linux (just kidding) -- so I'm guessing Linux won not because of cost but because of technical superiority.

    Any other ideas?

    • The obvious answer is that it is an order of magnitude easier to port legacy, core business applications from Unix to Linux. This wasn't some startup with no existing infrastructure.

      When making a decision to change OS platforms, you must consider the cost in moving legacy applications over.

      • Except that the core business was NOT in Unix. They were trying to move from TPF to Unix and abandon it. So they still had to move from TPF to Linux, and that would be just as difficult as moving from Windows to Linux. In fact, even more so.
    • Uhhhh....why on Earth would he have chosen Windows? As another poster mentioned, porting legacy code from Unix to Linux is worlds easier than from Unix to Windows. But...this decision was about cost; it was never about technical superiority at all. It wasn't that the CIO really badly wanted to switch operating systems; if mainframe Unix had cost him $2.5 million, he very obviously would have stuck with their existing setup.
  • by Exter-C ( 310390 ) on Friday July 01, 2005 @02:59PM (#12963403) Homepage
    An interesting question that this article raises for me is what Intel architecture was being used (Itanium/x86). For example, could the costs have been reduced just by running Linux on, say, a large-scale IBM server similar to their other mainframes?

    It also goes to show that just because something is old does not mean it's slow...
  • by C0vardeAn0nim0 ( 232451 ) on Friday July 01, 2005 @03:03PM (#12963464) Journal
    some guy named "bill" called from redmond. he wants to explain to you why linux is more expensive...
  • by Alphabet Pal ( 895900 ) on Friday July 01, 2005 @03:05PM (#12963496)
    Lutz was in command of the alternative to those bright, shiny websites: an expensive, aging global distribution system (GDS) called Galileo.

    Actually, all of those bright, shiny websites (Expedia, Travelocity, and Orbitz) rely on a GDS (Sabre, Amadeus, Worldspan or Galileo) to provide their content.

    • Some rely on all 5 of the Big GDSs.

      The Company I worked for did the middleware integration. We basically created the entire backend for Travelocity. Oddly enough, Galileo was one of the first GDSs to move to some form of XML messaging. When I first started working at ??? in 2001, we were having to screen scrape the Windows travel agent terminals and translate everything. And this crap-ass system was written in VB 6.0.

      Eventually we rewrote it in VB.Net, but that's not much better for high volume system
  • Cost breakdown (Score:5, Interesting)

    by alvinrod ( 889928 ) on Friday July 01, 2005 @03:06PM (#12963498)
    Mainframe: $100 million

    Unix: $25 million

    Linux: $2.5 million

    These numbers were taken from a table in the article. Interestingly enough, the cost if something does break favors Linux as well. From the same table we get that the mainframe solution consists of 4 IBM mainframes, whereas the Linux and Unix solutions require around 144 servers for Linux and 100-120 servers for Unix. If the hardware goes to hell, it's so much easier to replace the single bad part than to fix a mainframe.

    Hopefully, more people will begin a transition to open source solutions when they realize it can be successful.

    • Except IBM mainframes tend to call home, and a tech is at your site before you typically know there is a problem... drastic failures not included.
      • Actually, I would say that drastic failures _ARE_ included. True 'Big Iron' is designed so that massive failures can occur, and processing can still continue.

        For example, I have heard of mainframes that continue running after a projectile has travelled through processing and memory sub-units.

        The CPUs in the machines sometimes compute the same data twice and compare the results. If they differ, it uses a separate CPU to perform the work.

        That same thinking goes through the I/O subsystem as well. Prope
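
        A toy Python sketch of that "compute twice and compare" idea (the function names are invented for illustration; real mainframes do this in hardware at the instruction level, not in application code):

            # Run the same work twice; if the results disagree, assume a
            # transient fault and redo the work on a spare unit, as the
            # parent comment describes mainframe CPUs doing.
            def redundant_compute(fn, args, spare_fn=None):
                first = fn(*args)
                second = fn(*args)  # the same computation, done again
                if first == second:
                    return first
                return (spare_fn or fn)(*args)  # hand off to a "spare CPU"

            # Any deterministic function will do as a stand-in workload:
            print(redundant_compute(sum, ([1, 2, 3],)))  # -> 6
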

    • Re:Cost breakdown (Score:4, Insightful)

      by AKAImBatman ( 238306 ) * <akaimbatman AT gmail DOT com> on Friday July 01, 2005 @03:14PM (#12963611) Homepage Journal
      If the hardware goes to hell, it's so much easier to replace the single bad part than to fix a mainframe.

      Not to detract from your point, but mainframes don't break as a single piece unless the machine blows up or is otherwise completely destroyed. Big Iron systems are designed with redundant *everything* including motherboards, CPU, memory, network cards, power supplies, and disk drives. If any one part fails, the system will route around it. The part can then be powered down and ejected from the machine. To bring it back up to full capacity, you simply plug in the replacement part and walk away.

      In that light, Linux system failures are actually going to be more difficult to repair. However, the cost of repairing a Linux system is far less (disposable box) despite the inherent difficulty. :-)
      • Lemme rephrase your sentence a bit...

        "Not to detract from your point, but clusters don't break as a single piece unless the cluster blows up or is otherwise completely destroyed. Large clustered systems are designed with redundant *everything* including of course motherboards, cpu, memory, network cards, power supplies, and disk drives as each system is independant of the others. If any one part fails the cluster will route around it. The system can then be power down and removed from the cluster. To b
      • Mainframes may be dual and even quad redundant, but your Linux cluster is 100-120 times redundant. With blades you could be talking about seconds of interruption to minimal portions of the entire application. If done right there should be no user-perceivable service interruption.
      • you can spread a cluster across multiple sites to gain safety from a single disaster like a fire or plane crash or some twat putting his JCB blade through the power cable (happened to me, was 2 days before the power came back up, he'd hit a 14,000 volt main cable, ruined his day permanently)... try doing that with a single mainframe...
    • I'm not a Big Iron guy, but it's my understanding that mainframe hardware does NOT "go to hell" unless someone hits it with a hammer. Repeatedly. And with forethought. They are highly redundant devices, hence the expense.

    • Please note that these are annual costs associated with each system.

      If the hardware goes to hell, it's so much easier to replace the single bad part than to fix a mainframe.

      The reason the mainframe costs so much per year is exactly this issue - first, if something breaks it likely cripples but does not disable the machine. Second, IBM fixes it within four hours or less.

      Of course, they are still using IBM hardware, and the reason Linux is so expensive ($2.5M/yr) is because they likely have the same type o
    • You forgot a line (Score:3, Informative)

      by jd ( 1658 )
      A CIO who takes a "chance" with Linux: Priceless

      Seriously, the biggest problem with mainframes is that switching them off is a big deal. These are not boxes you can easily - or safely - reboot if there is a problem. There usually isn't, because the hardware is usually of very high calibre and massively redundant, but scheduled maintenance of, say, an Amdahl or (when they existed) a Prime was not a trivial affair.

      "Routine" maintenance wasn't much better - DEC would charge the Earth (and Mars) to s

      • True, you don't usually do major brain-surgery on an IBM mainframe, as IBM isn't stupid enough to make severe enough changes to AIX to force a major overhaul on a regular basis, but (a) that limits how AIX can evolve (which will eventually kill it), and (b) major overhauls are a part of the computer business and do happen - you can't avoid them.

        Um, IBM mainframes don't run AIX. They run z/OS, Linux, z/VM, TPF, or VSE. IBM has been able to make huge changes to these OSs and still maintain compatibility.
    • Re:Cost breakdown (Score:3, Insightful)

      by twbecker ( 315312 )
      Well, you can turn that logic around too. When you have 144 boxen, you're much more likely to have a failure than when you have 4. But neither that nor your argument makes any sense. Mainframes as a whole do not go down. Period. A CPU (or 3) can get completely fried and the machine won't miss a beat. Really. And you'll probably have an IBM tech there to fix it before you even know it's happened, since the machine phoned the problem in as soon as it happened. Big iron is expensive no doubt, but if there

    • It should be obvious to anyone with a brain that a cluster is designed as a cluster for the exact same reason a mainframe is designed as a mainframe: availability and power.

      There is NO significant difference between a cluster and a mainframe except one: economy of scale in cost.

      And by that I mean that the components of a cluster cost FAR less than the components of a mainframe - which is why a cluster that offers as much or more power than a mainframe costs as little as TEN PERCENT of a mainframe.

      Studies
  • I switched my desktop and saved $90. Seriously though, do any slashdotters have experience switching their company's computers to, or even away from, Linux?
    • Of course there are people in this audience who have experience switching corporate computing platforms to Linux - I've been working with a number of companies who have moved, or are in the process of moving, services to Linux, not only from old-school RISC Unix systems, but also from high-maintenance Microsoft Windows platforms.

      I'm sure my experiences are just like those of a lot of other sys admins here - nothing surprising, just a quiet evolution that is working quite well.

      On the subject of linux to oth
    • do any slashdotters have experience switching their company's computers to, or even away from, Linux?

      As an IT consultant I switch my clients to Linux whenever it makes sense. Usually it's a case where they have a handful of Windows boxes that were poorly implemented by some other firm. I look at what they have and if it's easier to move file/print/DB/web/email etc to a single Linux box than it is to clean up the existing clusterf*ck then I do it. And they are always happy! Not to mention small shops are
  • Spread the word! (Score:4, Interesting)

    by bogaboga ( 793279 ) on Friday July 01, 2005 @03:11PM (#12963562)
    [...] Mickey Lutz did something that most CIOs, even today, would consider unthinkable: He moved a critical part of his IT infrastructure from the mainframe and Unix to Linux. For Lutz, the objections to Linux, regarding its technical robustness and lack of vendor support, had melted enough to justify the gamble.'[...] His organization saved 90% in costs in so doing.

    Now, let's get prepared to rebut any Microsoft officials whenever they talk about the common "Total Cost of Ownership" as far as Linux is concerned.

    • Sure! :-) Windows 2003 Server license, that's about $300, multiplied by 144: $43,200, right?

      And who says Windows needs TCO? Any guy in the office can admin a Windows box...
      • Compared to the millions spent on even the Linux solution it's a non-issue. Hell, the support cost per box is probably some large fraction of that cost per year, no matter who you go to for support. Hell, the cost of one employee to admin any solution will be some decent multiple of that per year (even if you were only paying the person $43k/year, they would cost at least twice that with taxes and crappy benefits.)
    • Now, let's get prepared to rebut any Microsoft officials whenever they talk about the common "Total Cost of Ownership" as far as Linux is concerned.

      You're right, the debate is over. There is nothing left to be said. This anecdote about a guy switching from Unix to Linux has finally solved the question: Which costs more? Windows or Linux? I heard that Bill has read this article and has already begun nailing some planks to the front door.

    • If you look, these figures were sponsored by Linus Torvalds. Looks like this Mr Lutz is in Linus's back pocket.

    • Now, let's get prepared to rebut any Microsoft officials whenever they talk about the common "Total Cost of Ownership" as far as Linux is concerned.

      we can't make any conclusions from this. they just transferred from mainframes and unix boxes to linux. it would be better if they had transferred to windows then transferred to linux so there could be a better comparison. and besides, there may be instances when microsoft software may have better tco than oss and vice-versa. i don't think there is a law (a

  • by AB3A ( 192265 ) on Friday July 01, 2005 @03:18PM (#12963652) Homepage Journal
    Like most critics, I'm not good at leading large companies. But I know good leadership when I see it. This guy Lutz has his head bolted on right.

    The first thing most CIOs usually throw at their workforce is not to re-invent anything. If a product exists off the shelf at a reasonable cost, there are lots of disadvantages for taking the risk of inventing another one and few advantages if you succeed.

    However, most of us workers have known that the "big iron" mainframe technologies of yesteryear are starting to "rust." It's getting difficult to find technical help who understand this stuff reasonably well. That brings me to the second point: Follow the technology market. The people will be there.

    I suspect that in the not too distant future, many big-iron mainframers are going to be asking themselves whether the many millions they're spending are a good ROI. Open source databases and distributed computing are starting to look awfully attractive.

    It's scary from a CIO's position because the old systems are working, even if they're not well understood any more. They're leaping from the systems they know toward a potential high-cost boondoggle. This guy apparently knew how to hire and retain good technical help, he knew how to organize that help, and he knew how to keep them focused on the goal.

    Most leaders aren't that good. All too many businesses operate by habit. Only the red tape holds them together. Those organizations won't be making this leap until a certain critical mass has been reached to convince them one by one to make the effort.

    We should be doing everything we can to encourage others like Lutz to push these efforts. This is how you really evangelize Linux. And when all this is over, the desktop will be an afterthought.
    • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Friday July 01, 2005 @05:21PM (#12965057)
      Like most critics, I'm not good at leading large companies. But I know good leadership when I see it. This guy Lutz has his head bolted on right.
      I'll have to disagree with that. He made a good choice in going to Linux from Unix, but he did so in such a fucked-up way that it was only Linux's technological goodness that saved him from being a poster boy for Microsoft's "Linux sucks" campaign.

      Here, from TFA:
      The decision not to focus more on testing came back to haunt them.
      The CIO decided not to TEST the system correctly?
      Frantic calls began coming in from some of the 44,000 travel agency locations in 116 countries that were unable to access Fares.
      Their customers cannot access their new Linux system!
      Lutz would not comment on the financial losses incurred by United or Galileo during the downtimes.
      They were LOSING money with their new Linux system.
      "In hindsight," says Lutz, "we shouldn't have tried to cut over to a new infrastructure at the same time we were deploying a new software application. It was too much at once."
      This guy made novice-level mistakes and it was only because Linux is so good that this became a huge success rather than a terrible failure.
      Rather than falling back to the old platform at the first signs of trouble and reworking the new one, the engineers always thought the answer was around the corner.
      You always have a back-out plan. Always.

      This guy took a huge risk ... screwed it up royally ... and was saved by IBM, Red Hat and Linux.

      And the Linux system STILL saves him $$$MILLIONS$$$ every year and OUTPERFORMS his old system.

      It's one thing when you're a genius CIO who plans and tests for every contingency and deploys a working Linux system.

      It's a completely different thing when you don't BUT YOU STILL SUCCEED BECAUSE OF LINUX.

      This story is important because it shows the average CIO that, even if you aren't a genius and you DO make mistakes, Linux can STILL save you barrels of money and make you LOOK like a genius.

      • Look, you have three ways to transition:
        • Pure cut - which they did
        • metered out - i.e., 10%, then 20%... (sketched below)
        • parallel systems

        This guy chose the first one - Linux had nothing to do with it. If he had gone to a new proprietary system, the SAME thing would have happened. Linux is only a bit player here.

        Sera
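
        For the "metered" option above, a minimal Python sketch of what the routing could look like (the names are hypothetical; in practice the split usually lives in the load balancer, not in application code):

            import random

            NEW_SYSTEM_SHARE = 0.10  # start at 10%, then 20%, ...

            def route(request, old_backend, new_backend):
                # Send a configurable slice of traffic to the new system;
                # everything else stays on the proven one, so a bad cutover
                # hurts only a fraction of users and rollback is one knob.
                if random.random() < NEW_SYSTEM_SHARE:
                    return new_backend(request)
                return old_backend(request)

        Dialing NEW_SYSTEM_SHARE back to 0.0 is the back-out plan the earlier post says you should always have.
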

  • A Bold Move (Score:3, Insightful)

    by endeavour31 ( 640795 ) on Friday July 01, 2005 @03:20PM (#12963686)
    An interesting read, but very interesting for what was left unsaid as well. There was a fair amount of pain associated with the switch - aggravated by rolling out a new application simultaneously. The slowdowns and the associated costs are glossed over, but I wonder how the business side feels about this change to only 25% of the entire infrastructure.

    The time window seems fairly broad as well. No one disputes that lots of cheaper Intel servers can do the same job as big iron. The question is how many does it take and what happens with the applications involved.

    Quite telling is the comment that they needed every bit of support possible. Although it is great that one CIO bit the bullet here... there is an ominous side to this story, which means that few others will follow suit.
    • There will prove to be right and wrong ways to make transitions like this. The catch is that we won't learn what those all are until a few more companies do things like this. Yeah, there are some downsides to United's experience with this, but on balance this is more good news than bad. A "few others" following suit would be all it takes to lay out all of the pitfalls and benefits. In a few more years, stories about this kind of switchover will be boring too.
  • aww (Score:2, Funny)

    by Pyrrus ( 97830 )
    I was hoping to see a "Windows has lower TCO than Linux" ad that slashdot runs for Microsoft when I clicked the article.
  • His organization saved 90% in costs in so doing.

    But did he get a raise? Say about half of what he saved them.

    • But did he get a raise? Say about half of what he saved them

      Most executives have bonus programs in place that encourage them to take steps like this. In some companies, the amount of the bonus is directly proportional to the amount of money saved or earned. The only problem with these programs is that they often encourage execs to take measures such as drastic layoffs even though those layoffs will hurt the business in the long term.
    • Did he get a raise??? Did he get a "cut" of the savings???

      Come on, this is one of probably 2-3 CORE job responsibilities of CIOs and anyone in IT. It is OUR JOB to do things efficiently, and to do them more efficiently in the future (so there is capital and resources to do other things).

      This guy's reward is that the company he saved money for is going to succeed, instead of die, and probably has $$ to give him, his staff, and other parts of the company raises now and in the years to come...instead of going b
  • by iabervon ( 1971 ) on Friday July 01, 2005 @03:34PM (#12963848) Homepage Journal
    "In hindsight," says Lutz, "we shouldn't have tried to cut over to a new infrastructure at the same time we were deploying a new software application. It was too much at once."

    They found that their Linux servers couldn't support the new application they had deployed at the same time. That doesn't mean it's less capable than the mainframes they replaced: they didn't even try running the higher-load application against the mainframes.

    They should have first ported their servers to Linux on the mainframes, then switched them to Linux on clusters, then sent out new software that they could force back to the old behavior, then supported the new software in general.

    That way, they'd have been able to isolate the problems more easily (the problems really turned out to be that the new application generated extreme peak loads, and had nothing to do with Linux per se, aside from the fact that they managed to improve Linux performance to deal with them) and keep things stable while they fixed the issues.
    • "They should have first ported their servers to Linux on the mainframes"

      Yeah... like a 1970s-era mainframe could run Linux! They could have LEASED enough Linux servers to do a full test run. Still, until you actually do the cutover, it's hard to really know what will break with a complex app.

      • A fifth of the capacity of their system before they switched to Linux was on hardware purchased in 2001 to handle the rush of bookings when airports reopened after September 11th. Most likely, routine upgrades in capacity and regular equipment replacement meant that the rest of their system was relatively recent as well; the savings on running a modern mainframe over running a 1970s era one (in terms of maintenance, power usage, and space occupied for the amount of computational power) would pay for buying
  • The article seems to be comparing a $100 million implementation on a thing called UNIX against a $2.5 million implementation using Linux on Intel. What's a UNIX? And why does it cost so much? A clearer definition of the hardware platforms being compared would be quite useful.

    Additionally, in my opinion, the guy should have been canned by Cendant the moment United Airlines was off the air for 45 minutes. If YOU were responsible for this serious a screwup, would you still have a job? Probably not.
  • Would've saved even more...

    It may not support all of the latest sound and video cards, but it sure makes a better server.

  • This article is badly focused. Their porting problems are not with the Linux platform. Their problems are related to lousy application architecture. (Or they would seem to be. Since I have not seen what they did I really can't say.)

    Yes, distributing software is HARD, but it's something that can be modeled ahead of time with surprising fidelity. That's the difference between engineering and hacking ... we do the up front analysis, and should have a pretty damn good idea that it will actually work when w

  • The moment that major CAD software operates reliably on Linux I'll start to pay very close attention. I said *major* software, not some homegrown thing that can draw only lines and circles.
    • If you want major then you should be looking at Pro/Engineer from PTC. I haven't used it but a student co-op I was supervising used it at the University of Utah and thought I would like it. It looks awesome, and expensive.
      http://www.ptc.com/go/wildfire/index.htm [ptc.com]

      The next logical choice would likely be Varicad. They have a demo version you can download and play with. I've used this and it's not bad, about on the same level as AutoCAD. When I get around to buying a CAD application for Linux this will likely
    • I personally hate them, but their software is powerful and considered major CAD software.

      The mainstream vendors, (Solidworks, Solid Edge, Inventor, et al.) are all married to the win32 API. For them, it will be a good long while just like Microsoft likes it.

      However the big three, Dassault, PTC, UGS, all run on UNIX today, with one PTC Linux port. The others all claim too many support issues. (copout, support one distro and let your users sort it out.)

      It's coming, but slowly.
    • Uh, that's very "desktopy" type shit there. It isn't going to happen anytime soon since the cost savings would not be so dramatic.

      The really high end (expensive) stuff - scientific work - across almost all industries, has already gone Linux. Autodesk and Bentley will take a while, but Landmark, Schlumberger et al. have gone Linux big time.
  • There are a lot of things in the article that make me think that these guys are throwing a bit of smoke and confusion - or just don't know what they're doing. Here's one example:

    According to Lutz, the number of possible combinations of flights and prices for all the airline carriers between two major cities has been estimated by researchers at MIT to be 10 to the 30th power.

    Sure, if you want *every single* combination. Yes, I could fly from Denver to Las Vegas via Miami, New York, Chicago, Seattle
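
    To see how fast "every single combination" blows up, a rough Python sketch (the airport and fare-class counts are made-up round numbers, not from the article or the MIT estimate):

        # Count route+fare combinations between two cities if every
        # itinerary with up to 3 intermediate stops and every fare class
        # on every leg is enumerated.
        AIRPORTS = 5000       # candidate intermediate airports (assumed)
        FARES_PER_LEG = 50    # fare classes per flight leg (assumed)

        for stops in range(4):
            legs = stops + 1
            routes = AIRPORTS ** stops            # choices of intermediates
            combos = routes * FARES_PER_LEG ** legs
            print(f"{stops} stops: ~{combos:.1e} combinations")

    Even with these toy numbers the count is astronomical by the second connection, which is the commenter's point: a real fare engine prunes the search space rather than enumerating it.
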

"If it ain't broke, don't fix it." - Bert Lantz

Working...