Linux From A CIO's Perspective 163
An anonymous reader writes "CIO.com has a story on Linux and OSS in the enterprise from the perspective of the CIO of Cendant Travel Distribution Services, Mickey Lutz. 'In the summer of 2003, Mickey Lutz did something that most CIOs, even today, would consider unthinkable: He moved a critical part of his IT infrastructure from the mainframe and Unix to Linux. For Lutz, the objections to Linux, regarding its technical robustness and lack of vendor support, had melted enough to justify the gamble.'
His organization saved 90% in costs by doing so. Read on if you want to see how the top brass views OSS."
Difficult, but big payoff (Score:5, Insightful)
Lutz's IT group rewrote a complex, real-time airline pricing application that serves hundreds of thousands of travel agents around the world and that also acts as the system of record for all of United Airlines' ticket reservations. When this application came up on Linux, it proved to be so demanding--it handles up to 700 pricing requests per second--that it completely redefined Cendant's expectations about what it would take to get Linux to work. "We have broken every piece of software we've ever thrown at this platform, including Linux itself," says Lutz.
With Big Iron you're paying a LOT of money. But you're not paying it for nothing. Big Iron will give you a lot of guarantees for stability, reliability, and throughput that don't exist on other systems. The key to this CIO's success is that he was willing to accept the challenges of doing Big Iron work on Little Brass systems. As long as you work all the details out yourself, this *can* work. (As Google has so eloquently proven. [linuxtoday.com]) The issue is that you're working without a safety net. If things go really wrong, there's no backup army of specially trained techs to run in and fix things. (And trust me, if you're paying enough money you'll have your own personal army of techs.)
The upshot to all of this is that if the gamble pays off, it pays off in a big way. All that money you were spending for a personal army, plus some other company's R&D, now goes into your own pockets. You don't get away scot-free (someone has to maintain the systems), but you see your rewards. And isn't that what business is about? Taking risks and making profits? If you've got the infrastructure to go for something like this, then go ahead and grab fate by the balls. No one ever got anywhere in life by playing it safe.
The "black box" of open source has transformed into something any CIO can appreciate: reliable performance and consistent uptime. The penguin can fly now.
Re:Difficult, but big payoff (Score:5, Interesting)
Well, there is a backup army, and it's you.
Google can have a 4000-node Linux cluster because they have enough staff to maintain and optimize the system (Keep an eye on their job pages to get an idea).
Google also has some highly specialized needs-- some machines only crunch data for the DB, other machines only serve webpages, etc. It's in their interest to optimize the kernel, OS, database & Web applications as much as possible. Take a tweak which gains a 1% performance gain, multiply that against 4000 machines, and it's quite an advantage.
There isn't a vendor in the world that can totally support their infrastructure, so Google does it themselves.
Re:Difficult, but big payoff (Score:5, Funny)
Let's see . . . that's . . . [pencil scratching] . . . 1%! Amazing!
Re:Difficult, but big payoff (Score:2)
Yes, 1% - but 1% of what? If you have thousands of machines, it can be quite a saving.
Re:Difficult, but big payoff (Score:2)
Re:Difficult, but big payoff (Score:3, Funny)
Re:Difficult, but big payoff (Score:4, Interesting)
No, you are the front lines army. The backup army was the reason you were paying the annual fees. Without those annual fees, there is no backup army. i.e. If you can't get it right, there's no one else to come in and fix it for you.
Take a tweak which gains a 1% performance gain, multiply that against 4000 machines, and it's quite an advantage.
That's something of a straw man argument. If 3 Sun machines and 10 LinTel boxes have the same FLOPS capacity, then a 1% increase in either one will add up to the same increase in computing power. The key difference is that there are only three Sun machines to update.
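To put rough numbers on that point (the capacities below are invented purely for illustration):

    # Same aggregate capacity, same absolute gain from a 1% tweak,
    # no matter how many boxes it's spread across (numbers are made up).
    sun = 3 * 100       # 3 Sun boxes at an assumed 100 GFLOPS each  -> 300 GFLOPS
    lintel = 10 * 30    # 10 LinTel boxes at an assumed 30 GFLOPS each -> 300 GFLOPS
    print(sun * 0.01, lintel * 0.01)   # both gain 3 GFLOPS from a 1% tweak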
There isn't a vendor in the world that can totally support their infrastructure, so Google does it themselves.
That doesn't mean that there couldn't be. Google made their choice to go with a large number of decentralized systems. It works for them and it works well. But they have to do everything internally *because* of that decision. Had they gone the eBay route, they would be able to get that backup army, but then they would pay for the privilege.
Re:Difficult, but big payoff (Score:2)
Well, sometimes even with the annual fees there is no backup army.
In my experience with small to mid-sized businesses, external vendors can't support much beyond the core product. If you customize the site beyond a minimum level, the vendor can't support it well (But they will still take your money).
In Google's case, they made the choice to optimize their p
Re:Difficult, but big payoff (Score:2)
You can get the same service for the same amount of money if you want to - with Linux you actually get a choice, though, instead of having to pay the huge amount of money even if the Big Iron support somehow proves nearly useless to you.
Re:Difficult, but big payoff (Score:2)
Google can have a 4000-node Linux cluster
Sorry, but last time I counted, Google had 120,000 Linux machines...
Here's the ARMY! (Score:2)
They had serious problems, but this sounds like a sufficient safety net: "a 40- to 50-person cutover team of IBM, Red Hat and Cendant engineers brought the problems under control by throwing more servers into the mix."
Re:Difficult, but big payoff (Score:3, Interesting)
I won't get into the fact that they (IBM) flat out stole $150 Million from a Cendant affiliate (RCI) by telling them that they could build a distributed system to replace their mainframe. You could sit in the room with the IBMers and they would say "This thing is NEVER going to work, we can't replicate data to all these AS400's around
unthinkable? (Score:4, Insightful)
Re:unthinkable? (Score:2)
Re:unthinkable? (Score:5, Insightful)
The particular solution the CIO in the article chose gambled with reliability because they used 144 separate servers in 12 clusters. Well-implemented clusters of x86 hardware can run seamlessly, but individual machines are likely to fail. Redundancy should mean that a couple of blown power supplies or corrupted disks a year is no big deal, but it's still a slight risk and a pain to fix. There are Linux solutions that run on larger machines. They could have replaced their four IBM mainframes with four Linux mainframes. IBM supports Linux on the zSeries mainframes (formerly called System/390, before that System/370, which was the successor to System/360; it's about as traditional a mainframe as you can still buy). The top high-end computers are Linux-based: the 3 most powerful computers in the world at this time, two IBM eServers and an SGI Altix, all run Linux. Linux offers the most flexible, powerful, and reliable solutions out there.
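As a rough sanity check on that 144-server risk (the per-server failure rate below is my own assumption, not a figure from the article):

    # Back-of-envelope: expected hardware failures per year across the cluster.
    servers = 144
    annual_failure_rate = 0.05             # assumed AFR per commodity x86 box
    print(servers * annual_failure_rate)   # ~7 part swaps a year, which redundancy should absorb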
Re:unthinkable? (Score:2)
"Additionally HP-UX, Linux, NetApp NetCache, Solaris and recent releases of FreeBSD cycle back to zero after 497 days, exactly as if the machine had been rebooted at that precise point. Thus it is not possible to see a HP-UX, Linux or Solaris system with an uptime measurement above 497 days"
Re:unthinkable? (Score:2)
You left out some important context:
Why do some Operating Systems never show uptimes above 497 days ?
The method that Netcraft uses to determine the uptime of a server is bounded by an upper limit of 497 days for some Operating Systems (see above). It is therefore not possible to see uptimes for these systems that go beyond this upper limit. Although we could in theory attempt to compute the true uptime for OS's with this upper limit by monitoring for restarts at the expected time, we prefer not to do
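For what it's worth, the 497-day ceiling falls out of a 32-bit tick counter incrementing 100 times a second, which is (as I understand it) what Netcraft's probe effectively reads on these systems:

    # A 32-bit counter ticking at 100 Hz wraps after:
    print(2 ** 32 / 100 / 86400)   # ~497.1 days, hence the apparent "reboot"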
Re:unthinkable? (Score:2)
This is definitely NOT TRUE anymore. Are you saying that the Chicago Mercantile Exchange was only after cutting costs and never needed a reliable system, just a cheap one? That's just one of the examples.
Linux beats Unix on cost (Score:4, Funny)
Re:Linux beats Unix on cost (Score:2)
Really?
Re:Linux beats Unix on cost (Score:1)
Re:Linux beats Unix on cost (Score:1)
Considered unthinkable (Score:3, Insightful)
This is actually quite thinkable. Now if some CIO moved all his desktops to Linux, I would be impressed. Moving the back-office stuff from expensive licenses of Unix and mainframes to Linux is a no-brainer.
Re:Considered unthinkable (Score:5, Insightful)
No it isn't. There are many very high volume commercial and financial websites that use features of commercial Unixes, such as memory and resource partitioning, self-healing, fault management and very high scalability. Linux will certainly get all these at some point, but until then it is certainly not a 'no-brainer' to move. Even with smaller systems there are many applications that require certain Unix versions.
Re:Considered unthinkable (Score:3, Interesting)
I'm not sure how to respond to the "no-brainer" label. My development team just migrated our company's production software development platform from a network of aging HP-UX machines (which served us quite well in their day, don't get me wrong) to a Linux network. The new development tools run 20 times faster (that's the actual figure, not hyperbole), our experience over the past two years is that the Linux network is much more reliable, and the server hardware is simply an increment onto their existing W
I'm reading through this and it's interesting. (Score:5, Interesting)
Re:I'm reading through this and it's interesting. (Score:5, Interesting)
Re:I'm reading through this and it's interesting. (Score:2, Informative)
Re:I'm reading through this and it's interesting. (Score:2, Informative)
Re:I'm reading through this and it's int eresting. (Score:2)
More specifics, please. (Score:2)
Re:I'm reading through this and it's interesting. (Score:3, Interesting)
Sun, (Java..) and OpenSource Communities learning to cooperate?
Tomcat is the standard for JSP development, and the reference implementation.
JBOSS is J2EE certified.
The JSTL implementation was released by the Apache group, not Sun
Sun used Struts-EL for JSP 2.0
Xerces was a joint Sun/IBM effort
There's Hibernate which is pretty much going to be EJB 3.0.
XDoclet has been accepted as the new metadata format.
Sun is doing a shitload with open source, they just aren't promoting it.
Re:I'm reading through this and it's interesting. (Score:2)
The story says that Red Hat and IBM were the vendors used, and that they did very well in the crisis. I'm guessing that he's recommending both.
In any event, the CIO said that he is definitely staying with Red Hat Linux, as it's extremely dependable.
Why didn't he choose Windows? (Score:2, Funny)
Any other ideas?
Re:Why didn't he choose Windows? (Score:2, Insightful)
When making a decision to change OS platforms, you must consider the cost in moving legacy applications over.
Re:Why didn't he choose Windows? (Score:2)
Re:Why didn't he choose Windows? (Score:2)
interesting questions raised (Score:3, Interesting)
It also goes to show that just because something is old does not mean it's slow.
hey Mickey... (Score:3, Funny)
Small point of correction (Score:3, Informative)
Actually, all of those bright, shiny websites (Expedia, Travelocity, and Orbitz) rely on a GDS (Sabre, Amadeus, Worldspan or Galileo) to provide their content.
Re:Small point of correction (Score:2)
The company I worked for did the middleware integration. We basically created the entire backend for Travelocity. Oddly enough, Galileo was one of the first GDSs to move to some form of XML messaging. When I first started working at ??? in 2001, we were having to screen-scrape the Windows travel agent terminals and translate everything. And this crap-ass system was written in VB 6.0.
Eventually we rewrote it in VB.Net, but that's not much better for a high-volume system
Re:Small point of correction (Score:2)
Cost breakdown (Score:5, Interesting)
Unix: $25 million
Linux: $2.5 million
These numbers were taken from a table in the article. Interestingly enough, the cost if something does break favors Linux as well. From the same table we learn that the mainframe solution consists of 4 IBM mainframes, whereas the Linux and Unix solutions require around 144 servers for Linux and 100-120 servers for Unix. If the hardware goes to hell, it's much easier to replace a single bad server than to fix a mainframe.
Hopefully, more people will begin a transition to open source solutions when they realize it can be successful.
Re:Cost breakdown (Score:2)
Re:Cost breakdown (Score:1)
For example, I have heard of Mainframes that continue running after a projectile has travelled through Processing and Memory sub-units.
The CPUs in the machines sometimes compute the same data twice and compare the results. If they differ, it uses a separate CPU to perform the work.
That same thinking goes through the I/O subsystem as well. Prope
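A toy software analogue of that duplicate-and-compare idea (real mainframes do this in hardware; the function names here are invented for illustration):

    # Run the same work twice; if the results disagree, redo it on a spare unit.
    def run_with_check(work, spare, *args):
        first = work(*args)
        second = work(*args)
        if first == second:
            return first
        return spare(*args)   # mismatch detected: fall back to the spare "CPU"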
Re:Cost breakdown (Score:4, Insightful)
Not to detract from your point, but mainframes don't break as a single piece unless the machine blows up or is otherwise completely destroyed. Big Iron systems are designed with redundant *everything* including motherboards, CPU, memory, network cards, power supplies, and disk drives. If any one part fails, the system will route around it. The part can then be powered down and ejected from the machine. To bring it back up to full capacity, you simply plug in the replacement part and walk away.
In that light, Linux system failures are actually going to be more difficult to repair. However, the cost of repairing a Linux system is far less (disposable box) despite the inherent difficulty.
Re:Cost breakdown (Score:2)
"Not to detract from your point, but clusters don't break as a single piece unless the cluster blows up or is otherwise completely destroyed. Large clustered systems are designed with redundant *everything* including of course motherboards, cpu, memory, network cards, power supplies, and disk drives as each system is independant of the others. If any one part fails the cluster will route around it. The system can then be power down and removed from the cluster. To b
Re:Cost breakdown (Score:2)
Re:Cost breakdown (Score:2)
Re:Cost breakdown (Score:2)
Re:Cost breakdown (Score:2)
Please note that these are annual costs associated with each system.
If the hardware goes to hell it's so much easier to replace the single bad part than a mainframe.
The reason the mainframe costs so much per year is exactly this issue - first, if something breaks it likely cripples but does not disable the machine. Second, IBM fixes it within four hours or less.
Of course, they are still using IBM hardware, and the reason Linux is so expensive ($2.5M/yr) is because they likely have the same type o
You forgot a line (Score:3, Informative)
Seriously, the biggest problem with mainframes is that switching them off is a big problem. These are not boxes you can easily - or safely - reboot if there is a problem. There usually isn't, because the hardware is usually of very high calibre and massively redundant, but scheduled maintenance of, say, an Amdahl or (when they existed) a Prime was not a trivial affair.
"Routine" maintenance wasn't much better - DEC would charge the Earth (and Mars) to s
Re:You forgot a line (Score:2)
Um, IBM mainframes don't run AIX. They run z/OS, Linux, z/VM, TPF, or VSE. IBM has been able to make huge changes to these OSs and still maintain compatibility.
Re:Cost breakdown (Score:3, Insightful)
Re:Cost breakdown (Score:2)
It should be obvious to anyone with a brain that a cluster is designed as a cluster for the exact same reason a mainframe is designed as a mainframe: availability and power.
There is NO significant difference between a cluster and a mainframe except one: economy of scale in cost.
And by that I mean that the components of a cluster cost FAR less than the components of a mainframe - which is why a cluster that offers as much or more power than a mainframe costs as little as TEN PERCENT of a mainframe.
Studies
Switching (Score:1)
Re:Switching (Score:2)
I'm sure my experiences are just like those of a lot of other sys admins here - nothing surprising, just a quiet evolution that is working quite well.
On the subject of linux to oth
Re:Switching (Score:2)
As an IT consultant I switch my clients to Linux whenever it makes sense. Usually it's a case where they have a handful of Windows boxes that were poorly implemented by some other firm. I look at what they have and if it's easier to move file/print/DB/web/email etc to a single Linux box than it is to clean up the existing clusterf*ck then I do it. And they are always happy! Not to mention small shops are
Re:Switching (Score:2)
I looked a little bit and I don't see that you need root on the host system to compile and run UML, but I didn't read too closely so I could be wrong.
Spread the word! (Score:4, Interesting)
Now, let's get prepared to rebut any Microsoft officials whenever they talk about the common "Total Cost of Ownership" as far as Linux is concerned.
Re:Spread the word! (Score:2)
And who says Windows needs a TCO argument? Any guy in the office can admin a Windows box...
Re:Spread the word! (Score:2)
Re:Spread the word! (Score:2)
You're right, the debate is over. There is nothing left to be said. This anecdote about a guy switching from Unix to Linux has finally solved the question: which costs more, Windows or Linux? I heard that Bill has read this article and has already begun nailing some planks to the front door.
Re:Spread the word! (Score:2)
Re:Spread the word! (Score:2)
Not that we can make any conclusions about this. They just transferred from mainframes and Unix boxes to Linux. It would be better if they had transferred to Windows, then transferred to Linux, so there could be a better comparison. And besides, there may be instances when Microsoft software has a better TCO than OSS and vice versa. I don't think there is a law (a
The risks and the rewards (Score:5, Insightful)
The first thing most CIOs usually throw at their workforce is not to re-invent anything. If a product exists off the shelf at a reasonable cost, there are lots of disadvantages for taking the risk of inventing another one and few advantages if you succeed.
However, most of us workers have known that the "big iron" mainframe technologies of yesteryear are starting to "rust." It's getting difficult to find technical help who understand this stuff reasonably well. That brings me to the second point: Follow the technology market. The people will be there.
I suspect that in the not-too-distant future, many big-iron mainframers are going to be asking themselves whether the many millions they're spending are a good ROI. Open source databases and distributed computing are starting to look awfully attractive.
It's scary from a CIO's position because the old systems are working, even if they're not well understood any more. They're leaping from the systems they know toward a potentially high-cost boondoggle. This guy apparently knew how to hire and retain good technical help, he knew how to organize that help, and he knew how to keep them focused on the goal.
Most leaders aren't that good. All too many businesses operate by habit. Only the red tape holds them together. Those organizations won't be making this leap until a certain critical mass has been reached to convince them one by one to make the effort.
We should be doing everything we can to encourage others like Lutz to push these efforts. This is how you really evangelize Linux. And when all this is over, the desktop will be an afterthought.
No, Linux saved that guy's a$$. (Score:4, Funny)
Here, from TFA: The CIO decided not to TEST the system correctly? Their customers cannot access their new Linux system! They were LOSING money with their new Linux system. This guy made novice-level mistakes and it was only because Linux is so good that this became a huge success rather than a terrible failure. You always have a back-out plan. Always.
This guy took a huge risk
And the Linux system STILL saves him $$$MILLIONS$$$ every year and OUTPERFORMS his old system.
It's one thing when you're a genius CIO who plans and test for every contingency and deploys a working Linux system.
It's a completely different thing when you don't BUT YOU STILL SUCCEED BECAUSE OF LINUX.
This story is important because it shows the average CIO that, even if you aren't a genius and you DO make mistakes, Linux can STILL save you barrels of money and make you LOOK like a genius.
Re:No, Linux saved that guy's a$$. (Score:3, Interesting)
Look, you have three ways to transition. -
This guy chose the first one - Linux had nothing to do with it. If he had gone to a new proprietary system - the SAME thing would have happened. Linux is only a bit player here.
Sera
A Bold Move (Score:3, Insightful)
The time window seems fairly broad as well. No one disputes that lots of cheaper Intel servers can do the same job as big iron. The question is how many it takes and what happens with the applications involved.
Quite telling is the comment that they needed every bit of support possible. Although it is great that one CIO bit the bullet here... there is an ominous side to this story, which means that few others will follow suit.
Re:A Bold Move (Score:2)
aww (Score:2, Funny)
His Reward? (Score:2)
But did he get a raise? Say about half of what he saved them.
Re:His Reward? (Score:2)
Most executives have bonus programs in place that encourage them to take steps like this. In some companies, the amount of the bonus is directly proportional to the amount of money saved or earned. The only problem with these programs is that they often encourage execs to take measures such as drastic layoffs even though those layoffs will hurt the business in the long term.
Re:His Reward? (Score:2)
Come on, this is one of probably 2-3 CORE job responsibilities of CIOs and anyone in IT. It is OUR JOB to do things efficiently, and to do them more efficiently in the future (so there is capital and resources to do other things).
This guy's reward is that the company he saved money is going to succeed, instead of die, and probably has $$ to give him, his staff, and other parts of the company raises now and in the years to come...instead of going b
The main mistake is changing everything together (Score:5, Interesting)
They found that their Linux servers couldn't support the new application they had deployed at the same time. That doesn't mean it's less capable than the mainframes they replaced: they didn't even try running the higher-load application against the mainframes.
They should have first ported their server software to Linux on the mainframes, then switched it to Linux on clusters, then sent out new client software that they could force back to the old behavior, and only then supported the new software in general.
That way, they'd have been able to isolate the problems more easily (the real issue turned out to be that the new application generated extreme peak loads and had nothing to do with Linux per se, aside from the fact that they had to improve Linux's performance to deal with it) and keep things stable while they fixed the issues.
Re:The main mistake is changing everything togethe (Score:2)
Yeah ... like a 1970s era mainframe could run Linux! They could have LEASED enough Linux servers to do a full test run. Still, until you actually do the cutover, it's hard to really know what will break with a complex app.
Re:The main mistake is changing everything togethe (Score:2)
Linux on Intel vs. what? (Score:1, Interesting)
Additionally, in my opinion, the guy should have been canned by Cendant the moment United Airlines was off the air for 45 minutes. If YOU were responsible for this serious a screwup would you still have a job? Probably not.
Should've picked FreeBSD (Score:2)
It may not support all of the latest sound and video cards, but it sure makes a better server.
Whatever ... (Score:2)
Yes, distributing software is HARD, but it's something that can be modeled ahead of time with surprising fidelity. That's the difference between engineering and hacking.
CAD (Score:2)
The moment that major CAD software operates reliably on Linux I'll start to pay very close attention. I said *major* software, not some homegrown thing that can draw only lines and circles.
Re:CAD (Score:2)
http://www.ptc.com/go/wildfire/index.htm [ptc.com]
The next logical choice would likely be VariCAD. They have a demo version you can download and play with. I've used this and it's not bad, about on the same level as AutoCAD. When I get around to buying a CAD application for Linux this will likely
PTC is ported to Linux. (Score:2)
The mainstream vendors, (Solidworks, Solid Edge, Inventor, et al.) are all married to the win32 API. For them, it will be a good long while just like Microsoft likes it.
However, the big three - Dassault, PTC, UGS - all run on UNIX today, with PTC offering the one Linux port. The others all claim too many support issues. (A cop-out: support one distro and let your users sort it out.)
It's coming, but slowly.
Re:CAD (Score:2)
The really high-end (expensive) stuff - scientific work - across almost all industries has already gone Linux. Autodesk and Bentley will take a while, but Landmark, Schlumberger, et al. have gone Linux big time.
What are these guys smoking? (Score:2)
According to Lutz, the number of possible combinations of flights and prices for all the airline carriers between two major cities has been estimated by researchers at MIT to be 10 to the 30th power.
Sure, if you want *every single* combination. Yes, I could fly from Denver to Las Vegas via Miami, New York, Chicago, Seattle
Re:Well (Score:5, Insightful)
Oh no (Score:1)
And I thought C# was bad enough. This naming scheme is getting out of hand
Re:ok, and? (Score:3, Insightful)
That's not entirely true. If you look at the TCO, Linux is only cheaper if you're willing to cut out the safety nets that are so expensive. i.e. If you get an annual maintenance contract with RedHat and Dell, then how much are you actually saving over a Sun machine with a contract for both?
Corporate purchasing decisions are never as simple as the upfront cost. The key is that if you're willing to t
Re:ok, and? (Score:2, Insightful)
Re:ok, and? (Score:2, Insightful)
I have no personal experience with that, but I suspect you're right, based on extrapolation upward, given that, at the low-end, recent Windows versions seem to require more hardware to do almost anything.
> In addition, the licensing issues to go along with Windows 2003 advanced
> server or whatever you need to get HPC is ridiculous.
That's irrelevant for this article. This CIO was dealing with systems at the high end of enterpr
Re:ok, and? (Score:2)
Re:Not good from my experience (Score:2)
Re:Not good from my experience (Score:3, Interesting)
However, the next step was to go to Windows Update and apply all critical & security patches. It did and wanted to reboot.
Then refused to reboot, even into Safe Mode. WinUpdate had hosed the system but good.
After searching around I found that one of the update
Re:Not good from my experience (Score:2)
No, it doesn't speak for everyone. My post was meant to highlight that exact fact to the parent.
I think the number of devices supported "out of the box" by major Linux distros and WinXP is about the same. I've had LOTS better luck with Linux. Of course, it is a moot point -- the first thi
Re:Not good from my experience (Score:2, Funny)
Here's the rebuttal:
Re:I don't get it (Score:4, Funny)
I tried switching the family over to JSF attack jets over the summer
vacation and the wails of terror, utter anxiety, and lack of any flight training whatsoever was enough to crash the jets straight into the ground.
So why all the troubles?
Afterall JSF pilots love to tell storie
Re:Not good from my experience (Score:2)
MOD THIS TROLL DOWN!
I tried switching the family over to Linux machines over the summer vacation and the objections from the other family members was more than enough to send all 7 machines right back to Windows ME, Windows 2000 and Windows XP.
No one with any intelligence at all would switch all of 7 machines over to a new, untested OS at once. This is a troll.
Re:Not good from my experience (Score:1)
I, for instance, only buy hardware supported by Linux, so I don't have your problems, problems that will happen until the vendors add support to Linux.
Linux is not magic and doesn't automatically support any new hardware that may appear, someone must sit down and with a lot of patie
Not good generally. This post originally in 2003 (Score:2)
via Stephanie Klugg Aug 15 2003, 6:33 pm [google.com]
Someone is obviously paid to do this.
Re:Not good from my experience (Score:2)
Given what he describes, it's a realistic scenario, in that he didn't do ANY research, etc.
If you walk into something totally unfamiliar/ blind, the expected happens. You step in something or worse.
Kudos for using Ghost, btw. I use a Knoppix CD and partimage. YMMV.
He also doesn't indicate how long ago this "event" occurred... Things have improved greatly in recent years.
Also... ATI SUCKS. There, I said it.
NVidia drivers, both the free/Free ones, WORK as a rule, and t
Re:Not good from my experience (Score:2)
First, you're feeding a troll.
Second, that's not exactly the most convincing bit of Linux advocacy I've ever heard...