Are There Large RDBMS Using Linux? 327

Jason Perlow of Linux Magazine writes: "With all of the recent computer press coverage of Amazon and Intel converting their web servers and other front-end application servers to Linux, many of these stories neglect to mention that the back-end systems these companies use still rely on commercial Unixes like Solaris, AIX, and HP-UX to host their RDBMSes (Oracle, DB2, Sybase, Informix) for their mission-critical transactional applications and data mining. Are there any companies out there actively using Linux to host a mission-critical RDBMS, or looking to replace Unix with Linux for this purpose?"
  • Shareholders... (Score:1, Insightful)

    by Tensor ( 102132 )
    I don't think that any large companies can use them. The use of free (as in beer) apps looks bad to shareholders.

    Plus, senior IT execs need reliable support and assurance that they've got the best software on the market for the job, just in case things go wrong. It's a liability thing.
    • These seem non-arguments.
      Since when has using financial resources intelligently 'look[ed] bad on shareholders'?
      Your second point could point out a strong market opportunity for consultants.
      Of course, that consultant market diminishes the cost savings of using open source applications.
      However, when a particular open source database is as ubiquitous as, say, TCP/IP, it strikes me that _savvy_ shareholders would view its use as a strength, as the company reduces the heroin-addiction-like lock-in of, say, SQL Server.
      • Re:Shareholders... (Score:2, Interesting)

        by Tensor ( 102132 )
        Yes, exactly. This is the case with consultants: big firms don't even add a door to a building without a consulting company coming in. And it is also a case of liability.

        Just as with the DB, if things run smoothly then everything is OK... but if for some reason it collapses (or the door gets jammed) you'd better have someone large behind you to take the blame. ("Well, we hired Accenture, and they're the best, and they said a door there was a great idea; hell, they studied it for 6 months!")
      • Of course, that consultant market diminishes the cost savings of using open source applications.

        Not at all. Since the use of a consultant comes first and the solution follows, the consultant fees were a given regardless of the recommended solution so the savings are still there. However, consultants implementing Linux solutions may find themselves at a disadvantage when their proposal comes in significantly lower and is viewed with skepticism since everyone else was in a certain (higher) range. One way to combat that is to propose high and then come in well under budget for the first job or two.
    • Re:Shareholders... (Score:2, Insightful)

      by Anonymous Coward
      Plus, senior IT execs need reliable support and assurance that they've got the best software on the market for the job, just in case things go wrong. It's a liability thing.

      You have never read any of the licenses, have you? Shame on you.
    • I don't think that any large companies can use them. The use of free (as in beer) apps looks bad to shareholders.

      Only if you've bought RHAT or LNUX

      Plus, senior IT execs need reliable support and assurance that they've got the best software on the market for the job, just in case things go wrong. It's a liability thing.

      It's not about things going wrong, and don't kid yourself that it is; after all, there are plenty of organizations offering support on Linux, and even IBM will do so (if you pay them enough). It's a matter of the right tool for the job, and at the high end Linux trails behind commercial Unix implementations like Solaris and AIX, which are tightly integrated with their hardware and have solid high-performance capability: Solaris's threading and logical domains, for example.
  • I administer 'Theoldnewsstand' (dot com), an archive of newspaper articles, some hundreds of years old, for genealogists searching for their families in those time periods. The system relies on MySQL and Linux, and we now have more than 10,000 article entries. I've found that I actually need this operating system to keep the performance this good, and boy, does it work well.
  • Prada uses Linux (Score:5, Informative)

    by Nadir ( 805 ) on Wednesday November 07, 2001 @07:45AM (#2532091) Homepage
    OK, maybe they are not huge, but Prada (the Italian fashion designer and sponsor of "Luna Rossa" at the last America's Cup) uses Oracle running on Red Hat, stored on a pair of EMC CLARiiONs, for their data warehouse.

    I don't know what size the database is, but the CLARiiONs each had 400GB worth of disks.
  • It'll change (Score:5, Informative)

    by darylb ( 10898 ) on Wednesday November 07, 2001 @07:48AM (#2532093)

    As distributions like SuSE continue pushing ahead with high-end features (like logical volume managers, which SuSE already has), usage of these products on Linux will undoubtedly increase. Part of the situation here is cost. When Oracle Enterprise Edition costs $40,000 per CPU, plus another $8,000 or so per year for support, who cares about spending a little more for high end Sun or IBM systems?

    Also, Oracle 8i, while supported on Linux, did not offer a couple of features found in Oracle 8i on other systems. In particular, full interMedia support for full-text searches of all sorts of documents (especially those produced by software made in Redmond) was not available in the 8i Linux version. The new 9i does support this feature under Linux.

    • Re:It'll change (Score:2, Informative)

      by John Hasler ( 414242 )
      LVM is not a SuSE exclusive. It comes with many other distributions (Debian, for example) and can be installed on any. Please save phrases such as "As distributions like SuSE continue pushing ahead with high-end features" for the suits.
  • by Daeslin ( 95666 ) on Wednesday November 07, 2001 @07:50AM (#2532102) Homepage
    Are there people stabbing themselves in their ears?

    I like Linux, but on the scalability front it's still got a ways to go. Moreover, since most Linux used by corporations (at least here) is Intel-based, you've got to deal with less mature hardware (backplanes, redundancy, etc.). Plus, the enterprise management tools required are only starting to appear for Linux.

    *climbs into his asbestos underwear to wait for the inevitable jihad*
    • by jellomizer ( 103300 ) on Wednesday November 07, 2001 @09:06AM (#2532353)
      Why are they using Sun's/HP's/IBM's own Unix rather than Linux for their mission-critical apps? It's really simple: the software is designed around the hardware and the hardware is designed around the software. I do a lot of work with Sun SPARC systems running Solaris, and I find that Solaris works really well with the SPARC architecture and vice versa. Linux, on the other hand, was designed for hardware that was itself designed to run DOS and Windows, and Linux has done a good job of running on that platform, and doing it better than Windows. But I still find that Solaris on SPARC/UltraSPARC systems runs very smoothly, and I have little to no trouble adding hardware or upgrading. And I find that Solaris is far more stable than Linux in special cases, such as its X server, which runs a lot smoother than XFree86. (I know XFree86 is not Linux, but every once in a while XFree86 will completely crash on me with no way of accessing Linux, including telnet; in a sense it locks up. I never had that problem with Solaris.) The main reason is that there are thousands of different video drivers to support. Still, Solaris and the other Unixes on their own platforms seem to take the brunt of the work very well. (Plus it helps that these systems generally use higher-quality parts.)
      • So, what you're really saying is: a special-purpose tool is better than a general-purpose tool, for specific purposes. I cannot disagree. If you want High-End Multiprocessor Mega Server Computing then yes... a High-End Multiprocessor Mega Server OS is probably going to do a better job.

        I find it interesting, however, that a general-purpose implementation of Unix such as Linux is even playing in the same ballpark. The stability, utility, affordability, ubiquity and availability of Linux make a strong argument for its use in many situations.
  • Some people seem to run Linux on S/390's. There's a bunch of case studies here [ibm.com] on IBM's website.
    • <FLAMEON>
      Case studies on ibm.com that favourably argue the use of Linux and DB2 on S/390? I'm going to buy one right away! If IBM sez it is good for me to buy their stuff then who am I to argue!

      ;-)
      </FLAMEOFF>
  • With the advent of NDAs being signed many times over by just about every professional at pretty much every company on the face of the planet, this answer may be more difficult to find than you think. I can tell you that my company uses Linux for our servers, but we only have around 75 or 100GB of financial data in our databases.
  • by mparaz ( 31980 ) on Wednesday November 07, 2001 @07:55AM (#2532123) Homepage
    On a related note, what are the largest installations of free software databases... especially the most popular, PostgreSQL and MySQL?

    Any war stories?

    How about building Redundant Arrays of Inexpensive Database Hosts?
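(A minimal sketch of the "redundant array of inexpensive database hosts" idea asked about above: one write master with reads fanned out round-robin across replicas. SQLite in-memory databases stand in for the hosts, and naive statement replay stands in for real replication; all names here are illustrative, not from any poster's setup.)

```python
import sqlite3
from itertools import cycle

# Stand-ins for one write master and two read replicas (in-memory SQLite
# here; in practice these would be connections to separate database hosts).
master = sqlite3.connect(":memory:")
replicas = [sqlite3.connect(":memory:") for _ in range(2)]
read_pool = cycle(replicas)

def execute_write(sql, params=()):
    """Apply a write on the master, then naively replay it on each replica
    (real replication would ship a log instead of replaying statements)."""
    master.execute(sql, params)
    for r in replicas:
        r.execute(sql, params)

def execute_read(sql, params=()):
    """Round-robin reads across the replica pool."""
    return next(read_pool).execute(sql, params).fetchall()

execute_write("CREATE TABLE entries (id INTEGER, body TEXT)")
execute_write("INSERT INTO entries VALUES (?, ?)", (1, "hello"))
print(execute_read("SELECT body FROM entries WHERE id = 1"))  # [('hello',)]
```

A real deployment would ship the master's log to the replicas asynchronously rather than replaying each statement inline, but the routing logic is the essence of the idea.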
    • I keep a journal at www.livejournal.com. They distribute copies of their clients and their server under the terms of the GPL. They use MySQL in what I consider to be a very large environment. I don't have exact numbers, but it is a very large site, and keeping track of all of those journal entries is obviously very trying. I should also mention that they are having their fair share of problems keeping their hardware up to date to handle their load. Check them out!
    • I run avidgamers.com, a community hosting service currently hosting around 7,000 communities. We have 1.2 million records getting an average of 20 queries per second, ranging from single-record lookups to large summarizing queries. (With a fairly large part leaning towards the latter, tallying the number of replies to each thread in the message boards.)

      Running MySQL 3.23.40 on a 1.4GHz Athlon with 1GB of RAM and an 18GB 15krpm SCSI drive, the system is doing ok, but it's starting to feel the load peaks. I'll be upgrading to RAID fairly soon, which should help things.
      All in all, I'm very happy with MySQL, but I'm strongly considering a move to Postgres, because the lack of row-level locking is starting to become a problem. Stability has been no problem... no crashes, no data corruption, nothing.

      I'm sure this is in no way one of the largest installations of free software databases, but I thought I'd post my experiences anyway.
      • We considered moving to Postgres for FastMail.FM [fastmail.fm] as well because of the row locking issue. But instead we moved to MySQL with the InnoDB backend (which also drives Slashdot). We've found it works extremely well, and actually doing the upgrade was just a case of running 'ALTER TABLE TableName TYPE=InnoDB' for each table. InnoDB comes with the standard 4.0 binary now too, so you don't have to separately get the -max binary or compile it in yourself. And InnoDB supports multiple files over separate disks (including putting the log on a separate disk of course) so you don't have to worry about converting to RAID.
        • I had looked into InnoDB earlier, but the row size restrictions made it problematic. Your comment prompted me to check the documentation again, and what do you know: They fixed that limitation starting with 3.23.41. Thanks for the suggestion :)

          I don't want to upgrade to 4.0 (which is still in alpha) just yet, but I believe I'll compile 3.23.44 with InnoDB support and give it a shot. Hopefully, the upgrade is as easy as you say. Any hints/tips/caveats or possible problems you've run into would be helpful.
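(The table-by-table conversion described above is easy to script. A hedged sketch: the table names below are placeholders; in practice you would fetch the real list via SHOW TABLES and feed the generated statements back into the mysql client. Note that `TYPE=InnoDB` is the 3.23/4.0-era syntax discussed in this thread; later MySQL versions spell it `ENGINE=InnoDB`.)

```python
# Generate one ALTER statement per table to convert it to InnoDB.
# The table list is a placeholder; real usage would fetch it with
# SHOW TABLES and run the output through the mysql client.
tables = ["journal_entries", "comments", "users"]
statements = [f"ALTER TABLE {t} TYPE=InnoDB;" for t in tables]
for stmt in statements:
    print(stmt)
```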
  • Give it time (Score:2, Informative)

    by rleyton ( 14248 )
    I think the question is right: most people are relying upon the big DBMS players, using big-iron boxes to host their systems' data, and that's unlikely to change this week, next week, or next year. Heck, we did when I was at beenz.com, even though the code was originally written with MySQL. [beenz.com]

    I think we're going to see things change gradually as acceptance grows. Don't rush things. People will move when they're ready and the trust is there. Red Hat's worth watching. And it doesn't have to be the big vendors, as so much less functionality is needed in the DBMS in these days of N-tier, app-server-based infrastructures.

    And how about designing FOR failure and using commodity boxes (running a free OS?) at the same time? Check out Clustra [clustra.com] for an RDBMS that runs on Linux and Solaris, runs over LOTS of small, cheap commodity boxes, and is, as a result, very reliable (yes, I do use it). OK, so it's not free in any sense, but it's good and solid, and used by some big players in the telecom industry.

  • by vinyl1 ( 121744 ) on Wednesday November 07, 2001 @08:06AM (#2532153)
    If you are running a very large Unix box, such as an E10000, the operating system is optimized for the hardware, and the release of Oracle you're running is optimized for the OS. Even so, they still don't work that well--there are many unexplained bugs and glitches, even with the latest stable releases of Solaris and Oracle. No one would want to introduce further instability with a new OS.

    Furthermore, there are no potential cost savings. Solaris essentially 'comes with' an E10000, and all your administrators are trained in Solaris.
    • Linux users are familiar with unexplained bugs. Developers like to muck around with "stable" code, but don't like to document their changes well.

      I also keep in mind that when I apply a kernel patch to Solaris, I don't have to worry about getting little surprises, like a VM subsystem completely redesigned and poorly tested because the Linux development team were stuck in a mailing list flamewar.
      • by Anonymous Coward
        because the Linux development team were stuck in a mailing list flamewar

        Or because a team member was actually getting laid that week...
      • And patches from Sun never have any bugs. Yeah. Sure.

        And the completely redesigned VM in Linux 2.4.10 meant that my cluster had a speedup of 1-2 orders of magnitude (heavy swap vs virtually none). And all of my systems running 2.4.10 (12 of them) have been up since the first boot. I'd rather have them fix things in the kernel than use a band-aid approach.

        >because the Linux development team were stuck in a mailing list flamewar.

        Are you in elementary school? You seem to have no grasp of corporate politics. First, I wouldn't call it a real flame war; I read the mailing list every day, so I'm familiar with what is going on. Second, one difference with open-source projects like Linux is that the discussions are public; proprietary projects can have real warfare going on, but the conflict might not be known to the public. I bet you the fight at Sun over going from SunOS to Solaris would make any lkml flamewar look quite tame.

        -asb
  • by nettdata ( 88196 ) on Wednesday November 07, 2001 @08:06AM (#2532155) Homepage
    When Oracle first started producing their appliance [oracle.com] products, they were based on Sun's microkernel.

    That has since changed. They are now using SuSE Linux for all of their appliances. They work fairly well for what they are designed to do, which is to provide an administratively simple appliance: you don't deal with the OS, only the Oracle admin interfaces.

    Looking at my client list, 4 out of 12 of them are running various Oracle instances in production on Linux, both SuSE (the only officially Oracle-supported Linux distro, if I'm not mistaken) and Red Hat. 9 of those 12 run Linux in development environments.

    While the Linux deployment has usually been in a development environment, I've seen the trend start to move into Production environments. I think this can be attributed to a number of factors; the maturity/stability of Linux, the cost (hardware and software), the feature set (journalling file systems without having to pay through the nose for Veritas), and the hardware availability.

    That and the fact that Oracle offers support for Suse. That is HUGE.

    While the bigger companies are still using Solaris and HP-UX for their Oracle needs due to the hardware involved (I have yet to see an E10K run Linux, never mind in production), most of the smaller companies I deal with are running Oracle on Linux in some part of their company.

    Also, a number of Oracle's newer integrated development tools (JDeveloper, Enterprise Manager, etc.) are being ported to be 100% Java so that they will (and do) run on Linux.

  • Momentum... (Score:5, Insightful)

    by larien ( 5608 ) on Wednesday November 07, 2001 @08:08AM (#2532158) Homepage Journal
    Certainly what I've found is that there's a momentum in whatever platform is currently in use. I've been trying to persuade someone to move from IBM to Sun for their Oracle DB since the new V880 [sun.com] is a damn good deal and would fit their needs. However, I detected a certain reluctance to move from an IBM solution as that's what their systems are now.

    Aside from this, most of the main databases (including almost all the mission-critical stuff) here are on HP systems. Despite HP's uncertain future (having ditched PA-RISC), I doubt they'll move from HP in the near future.

    Now take this reluctance to move between mainstream Unix vendors and apply it to Linux, the upstart on the block. Quite aside from the "free" nature of Linux and the perceived lack of accountability, there's a further issue. Even when sticking with mainstream vendors, there's a reluctance to mix vendors; i.e., there's a desire to use IBM software on an AIX box, simply to avoid the finger-pointing that can ensue. IBM have even run ad campaigns based on this. There's a certain comfort factor in knowing that you can go to one vendor and say "fix this" which you don't get with Linux on Intel. IBM, HP and Sun all make the hardware and the OS; you don't get that with Linux (with the potential exception of some IBM kit like the S/390).

    To get over this, there need to be vendors willing to support the software and hardware side of a linux solution. Hopefully IBM will pave the way with things like S/390 and the zSeries server.

    • Re:Momentum... (Score:2, Insightful)

      by wobblie ( 191824 )
      There's a certain comfort factor in knowing that you can go to one vendor and say "fix this" which you don't get with linux on Intel. IBM, HP and Sun all make the hardware and OS; you don't get that with linux (with the potential exception of some IBM kit like the S/390).

      Well, that's not true, it is just different. Why not go to the core developers and offer them some money to fix something or add a feature you would like? I think this system would be far better than complaining through 20 levels of incompetent tech support to finally get the message that it "will be fixed in the next release". I've never heard of anyone getting some software bug fixed by going to Oracle or Microsoft or whoever else and saying "fix this". Hell, I remember a recent article (sorry I can't find it now) where a CIO was relating all the massive problems he was having with Oracle (the company) fixing his software - and they were a multi-million dollar client.

      It would be nice if sites like SourceForge were set up so that the development group could accept donations or payments for bug fixes or add-ons. This was a great oversight.

      • OK I found the article - here [cio.com]. There are countless others like this.

        Note that UCITA and the DMCA make it even more difficult (actually almost impossible) to sue your software vendor.

        So WHY does everyone keep repeating this mantra that you can "at least sue your vendor" with proprietary software? YOU CAN'T. And how is a contract with a closed source vendor any more legitimate than a contract with an open source one?

        • Ok, I'll give you a counter-example. We've just installed an IBM server and there's a problem with a couple of bits in it (I won't give details as I don't know if I'll get into trouble for it...). As everything is IBM (hardware, OS and the software), we can get support on it to the extent that two groups in the US are working on ironing out the problem.

          Would we have had this if the software package was from Sun? Well, Sun might have blamed IBM, IBM might have blamed Sun and we'd be left with something which doesn't work. We've been lucky in that IBM want this to work to secure future business, and that is the carrot you can use to 'bribe' vendors to fix bugs.

          While open source allows you to track down the bugs and fix them yourself, it relies on you hiring programmers and/or smart admins. Many companies don't want to do that, particularly when you can get the people who wrote the code to fix it (whether you can get them to fix it or not is a different matter; managers' perception is that you can and that's what affects buying decisions).

          As for suing, it depends on the terms of the contract. A large enough business should be able to negotiate special terms with vendors to secure business (don't play ball with us, you don't get our money). If a company wants to be bullish enough, it can negotiate terms that do allow it to sue the vendor, even with UCITA and the DMCA. Unless I'm mistaken, those acts mean that vendors are allowed to put horrible restrictions on the sale of software, etc.; they don't say that individual purchasers can't negotiate a better deal.

          One final point. I'm not saying this to say "linux is doomed, it's never going to make it". I have great hopes for linux (in my last job, I made a lot of use of open source software to good effect), but there are still a few things to be ironed out before big companies are going to adopt it in a large scale. Half of what I'm doing here is playing devil's advocate because I like a good argument (NB: argument != flame-fest!).

          • I want to give an anecdote about client leverage that sort of relates. This is a third-hand story, but knowing the person who told it to me, I suspect it's true.

            A friend of mine was consulting many years ago with a large financial firm, helping them maintain their NetWare 3.x servers (as you can see, it was several years ago). They had a tape backup system in-house from one of the really large vendors that was not working.

            They went about a month without being able to get good, reliable tape backups on the servers, playing phone tag with the vendor trying to figure out the problem. It just wasn't working.

            Anyway, towards the end of the month, my friend griped to the CIO about the problems they were having and his frustration with dealing with the vendor. The CIO brought up the issue at the board meeting, and how it was a risk to the company.

            At this point the VP of trading piped up... "You know, we own several million shares of that company in our portfolio... let me see what I can do"

            The VP of trading calls up the president of the vendor company and tells him that if they don't fix the problem with the tape backup software, he's going to issue a warning about the company's product quality and dump every single share of their stock on the market.

            The next morning a team of developers were flown in and working on the problems. They had to recompile several modules, but they had the issues resolved within two days.

            I guess the point is, there are many ways you can leverage a vendor. It doesn't have to be a lawsuit.

            As larien said, usually you just threaten not to pay the contract, or not to renew, or you add stipulations as part of the negotiation. I've been involved in many an instance where that has played a huge part in getting better support.

            Once I had some issues with a GIS package we had purchased. I tried to work with support, and they ignored me. So when the $5k yearly maintenance agreement came up, I told my boss not to pay it because it didn't gain us anything. I also posted a note to a usenet group explaining my problem.

            Next day I got a phone call from the development manager.

            Financial incentives are the strongest leverage you can have with a software vendor. Like it's been pointed out... that doesn't work with Open Source in quite the same way.
          • Would we have had this if the software package was from Sun? Well, Sun might have blamed IBM, IBM might have blamed Sun and we'd be left with something which doesn't work. We've been lucky in that IBM want this to work to secure future business, and that is the carrot you can use to 'bribe' vendors to fix bugs.

            Yeah, this is basically why Sun and Oracle have a special support arrangement: Sun will support both its stuff and Oracle's (and Veritas too, if you're using that) with just one number to call for all of it. "One throat to choke," as Scott McNealy calls it.

            However, I guess DB2 on Sun hardware is too small a market to do the same thing... (they'll push you to migrate to Oracle instead, I guess)
    • Re:Momentum... (Score:4, Insightful)

      by The Man ( 684 ) on Wednesday November 07, 2001 @10:59AM (#2532822) Homepage
      More even than that, getting a RDBMS set up is a lot of hard work from sysadmins and DBAs. That work represents an investment, and there's no good reason to take a working system, with that investment, and throw it away. These systems get used as long as they possibly can be, and then a little longer. Migration happens as infrequently as possible. So when my Sun SC2000E isn't powerful enough any more, instead of switching architecture and OS, I'll just buy a new E6500, hook my disk units into it, and hope I don't have to tweak Oracle too much. Much lower risk that way...
  • We hosted roughly 2TB of mission-critical database on two quad-processor Linux servers. They were running Oracle as their DB. It worked great, and we had few problems.

    We were also an AIX shop, but decided to go with Linux for this application because of the overall price of hardware and supporting applications.
  • by redrobysoftware ( 534842 ) on Wednesday November 07, 2001 @08:21AM (#2532185)
    Our company, a custom e-solutions provider, uses Oracle 9i on Linux almost exclusively because of Oracle's reliability and the fact that we have the resources in-house to support it. There is a caveat to this, though.

    At $5,250 for just a two-year, single-processor Standard Edition license, 9i is not cheap, and most companies who already have an infrastructure built on it will not always realize a significant cost savings by moving to a Linux platform. 9i Enterprise Edition is a cool $45K per processor, so it is easy to see how the difference between $20K and $100K for an 8-way Intel versus an 8-way Sun machine may not always be the determining factor in a platform decision for a system with a 5+ year time horizon.
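(To make the arithmetic above explicit: at the quoted per-processor Enterprise Edition price, the license dominates the total cost, so the Intel-vs-Sun hardware delta ends up a modest fraction of the bill. A quick back-of-the-envelope check, using only the figures from the comment above:)

```python
# Back-of-the-envelope using the figures quoted in the comment above.
cpus = 8
license_per_cpu = 45_000                  # 9i Enterprise Edition, per CPU
license_total = cpus * license_per_cpu    # 8 * 45,000

intel_hw, sun_hw = 20_000, 100_000        # rough 8-way box prices quoted
hw_savings = sun_hw - intel_hw            # saved by choosing Intel/Linux

print(license_total)                                      # 360000
print(round(hw_savings / (license_total + intel_hw), 2))  # 0.21
```

So even an $80K hardware saving is only about a fifth of the cost of the Intel system once the license is included, which is the poster's point.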
  • not a company, but.. (Score:4, Interesting)

    by transient ( 232842 ) on Wednesday November 07, 2001 @08:21AM (#2532187)
    The City of Bloomington, IN [bloomington.in.us] will be doing this. All of our servers are Linux, with the exception of one NT machine for a small Progress database, and several HP-UX machines for Oracle. We'll be migrating them to Linux in 2002.
  • hmm (Score:5, Interesting)

    by YakumoFuji ( 117808 ) on Wednesday November 07, 2001 @08:22AM (#2532188) Homepage
    To answer your question, I don't know of any.

    I myself am in the data warehouse group of a large international company; our DWH runs off IBM AS/400s with DB2 plus Essbase/Hyperion.

    There are several factors why there will be no change in this.

    IBM offers complete integrated solutions (HW+SW) that you don't get with open-source solutions.

    The open-source RDBMSes can't compete with the likes of DB2 and Oracle in terms of scalability and features.

    3rd-party integration: (Essbase/Hyperion) database cube solutions don't exist for Linux/FreeBSD. (Man, 3D cube DBs are funky.)

    Stable cross-platform ODBC drivers (WinNT drivers for ASP, JDBC for Java + WebSphere, AS/400 and RS/6000 drivers).

    Support. (Who gives 24/7 support on Postgres, sends out tech support guys for consultations, and will come on site on a Sunday at 4am?)

    What open-source RDBMS provides true multi-language support (we have records in Cyrillic, Japanese, English, German, etc.)?

    High availability (I don't know the current state of HA functionality in the Linux kernel).

    Linux on the AS/400 is not seen as coming anywhere near the requirements at present, and its open-source database solutions are the same.

    (And I don't even think there are any cube database products in the open-source arena... ???)

    • Re:hmm (Score:4, Informative)

      by prizog ( 42097 ) <novalis-slashdotNO@SPAMnovalis.org> on Wednesday November 07, 2001 @11:44AM (#2533046) Homepage
      "support. (who gives 24/7 support on postgress, and send out tech support guys giving consultations, will come on site on a sunday at 4am?)"

      RedHat either already does or will soon.

      "what OpenSource rdbms provide true mutli language support (we have records in cryllic, japanese, american, german, etc)?"

      PostgreSQL [postgresql.org].

      "high availablity (i dont know the current state of HA functionality in the linux kernel)"

      Why not look it up? [redhat.com]
      • Re:hmm (Score:2, Insightful)

        by dijit ( 13990 )
        Oh, puh-leeze. Red Hat will not be sending out people who can answer the extremely complex questions surrounding databases sized for extremely large enterprises -- the Oracle market. They will send out someone who knows something about the operating system, with maybe some cursory RDBMS experience. Make no mistake, these WILL NOT be terribly seasoned individuals.

        // dijit
  • by _|()|\| ( 159991 ) on Wednesday November 07, 2001 @08:25AM (#2532194)
    Red Hat [successes.com], Oracle [oracle.com], IBM [ibm.com].

    In addition to the links above, most of the big database systems have active Linux ports. Any Oracle [oracle.com], Sybase [sybase.com], Informix or DB2 [ibm.com], InterSystems [intersys.com], Poet [fastobjects.com], or Versant [versant.com] customer is a potential Linux customer.

  • by vt0asta ( 16536 ) on Wednesday November 07, 2001 @08:26AM (#2532199)
    We have four Linux machines using Oracle 9i RAC for our database. The boxes are Penguin Computing 200x Relions, each with QLogic 2200 Fibre Channel cards and a dual-port Intel 10/100 NIC, which tie into our SAN'd-up CLARiiON 4500 disk processor/array. The three NICs (including the onboard one) give us a frontend/app network, a backup network, and an Oracle IPC interconnect.

    We have had success using Red Hat 7.1 (kernel upgraded to use LVM) and SuSE 7.2 (which comes with LVM) for the Linux distribution. Do not attempt RAC or OPS without an LVM of some sort. It can be done, but it shouldn't be.

    The biggest expense you will have is the disk array, and you should not skimp on this. Buy fast, reliable, maintained disk.

    The Linux solution beats Sun solutions on price hands down. You are talking $30,000 per box for the minimum Sun-allowed hardware for the Sun Cluster software with the Oracle Parallel DB runtime licenses (this has changed with v3, and so have the HW requirements). The Sun Cluster software requires an extensive review process by Sun, which basically ensures your company has two extras of everything and can be onsite to help Sun with their software and hardware within 4 hours. If your company doesn't have its shit together, Sun and the few vendors that even know what Sun Cluster is aren't going to bother talking to you about it.

    This Linux solution beats out a Windows NT solution in reliability, for the simple reason that NT's disk and volume management is clumsy. There is no easy way to create labeled raw devices on a Windows machine. The process as I remember it was creating unlabeled logical partitions for each disk space and then maintaining a file pointing to the value of the related registry key to map out the tablespaces. As soon as you added a partition, modified a partition, or even used another node to look at the partition table, you and the database were screwed (i.e. restore). This problem with managing shared disk may have been fixed in 2000.

    The weakest point in the entire Oracle 9i RAC is the cluster software layer. Whether you are using Sun's Cluster Software, the Oracle supplied cluster manager for Linux, or the hardware vendor supplied OSD layer for Windows. Be prepared to spend serious time in monitoring and getting it under control with appropriate patches.

    Once you have fought your way through all of this, you can reap the rewards that multiple nodes with shared data give you. The greatest benefit is the ability to partition your data and your application, which gives you more opportunities to scale. If your data does not partition by some logical means (date, timezone, city, planet, etc.) forget about it. Just get a big honking database machine (especially you SAP/Peoplesoft poor SOBs).
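    As a concrete illustration of the kind of logical partitioning described above, Oracle 8i/9i range partitioning by date looks roughly like this. Table, column, tablespace, and login names are invented for the sketch, not taken from the post.

    ```shell
    # Hypothetical sketch of date-based range partitioning in Oracle;
    # all names and credentials below are placeholders.
    sqlplus -s scott/tiger <<'EOF'
    CREATE TABLE trades (
      trade_id   NUMBER,
      trade_date DATE,
      sym        VARCHAR2(10),
      price      NUMBER
    )
    PARTITION BY RANGE (trade_date) (
      PARTITION p2001q3 VALUES LESS THAN (TO_DATE('2001-10-01','YYYY-MM-DD')),
      PARTITION p2001q4 VALUES LESS THAN (TO_DATE('2002-01-01','YYYY-MM-DD'))
    );
    EOF
    ```

    Queries that filter on the partition key touch only the relevant partitions, which is what makes the scale-out work; if no such natural key exists, you're back to the big honking machine.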
    • Sounds good! But you're probably top-ending your machines (buying them fully packed rather than leaving headroom for expansion). When you run out of logical partitioning options, adding nodes will be less effective, so for really big operations you still have to escape the Intel architecture and go with a Sun or HP or IBM or what-have-you. Especially us poor SAP SOBs.

      With the possible exception of IBM, you'll probably not see Linux running DB clusters on those platforms, as the companies have put so much time and money into optimizing their own cluster technology. The "big iron" is still proprietary space for now, and the big databases need big iron unless they're the digital equivalent of scrapyards (where the most common activity is rusting).

      That's not to say Linux isn't ideal for the space you're in -- as a better alternative to NT on commodity boxes. Many of today's "small-to-average" databases run in the 100-500GB range, and dwarf the "large databases" of not-so-long-ago. It's just that today's VLDBs are measured in terabytes, headed for petabytes, and expectations for response times are shrinking. Our ambition grows with our grasp.

    • The Linux solution beats out Sun solutions in price hands down. You are talking $30,000 per box for the minimal Sun allowed hardware requirement for the Sun Cluster software with the Oracle Parallel DB runtime licenses (this has changed with v3 and so have the hw requirements).

      But who says you need a Sun cluster anyway? Couldn't one just get a single Sun box? The Sun model seems to be that you get a big machine and scale it up by pouring CPU juice into it.

      Of course, the big-iron machines from Sun are fairly pricey, though -- I agree with your main point, that Sun is expensive.

  • Energis Squared [energis-squared.com] runs the technical side of Freeserve [freeserve.com] and other ISPs. Most of their core systems are Linux based, with some Solaris and *BSD boxes in there too.
    • the Freeserve portal uses Apache webservers on Sun E220Rs with Oracle running on Sun E4500s at the back. No Linux RDBMS that I'm aware of. I know, cos I built 'em. ;-)

      The mail system used to be on Linux (presumably still is), DNS on Solaris and a heap of NT boxes for customers' websites. Things might have changed since I last worked there, but I don't remember a large Linux database anywhere.
  • Large? (Score:5, Insightful)

    by hey! ( 33014 ) on Wednesday November 07, 2001 @08:35AM (#2532225) Homepage Journal
    I'm sure there are plenty large databases running on Linux and even MySQL. Solving the problem of large databases is relatively easy.

    The much more difficult problems are availability (i.e. 7x24, runs for years with no interruption) and throughput.

    When you combine these constraints to specify the problem of a large, highly available and highly active database that meets ACID test criteria, you have an enormously difficult problem. Until recently with the advent of Linux on mainframes Linux couldn't even dream of playing in this space simply because of the hardware it ran on. Sure, lots of people have Linux boxes that have uptimes for years, but some people have had to reboot because of a bad hard disk or other component. It doesn't happen very often, but it does happen. And the I/O bandwidth hasn't been there to support the kind of throughput needed at the high end.

    Linux on mainframes doesn't really change this at all in the short term, even if you have a proven DBMS like Oracle (forget MySQL or Postgres), because the system as a whole hasn't proven itself. Question: How much money does an airline lose if its reservation system is down for a few hours, even if it happens once every several years? How much money does a financial institution lose by being unable to execute transactions for even an hour? Answer: enough to buy plenty of proprietary software. People who run these kinds of applications are willing to pay the price for systems with a track record of success in this demanding area. They are often willing to sacrifice certain kinds of sophistication to ensure the safety of their company's critical operations.

    I think that once Linux is established on the kind of iron that is needed for these applications, it will take as much as a decade before people will trust it for these kinds of missions. Phrases like "mission critical" are bandied about so much that they have little meaning; Linux is ready to support many applications that are important to businesses today, but can't be entrusted with others yet.

    Nobody with a working application of the type I describe here is going to migrate to Linux. Nobody starting such an application from scratch will give more than a moment's consideration to Linux. The most likely entree into this space will be the evolution of an application from something that is reasonable to host on Linux on small to midrange computers. If the company doesn't have the resources or the time to migrate to something more reasonable, then Linux will begin to get its shot at proving itself.
  • by Kiro ( 220724 ) on Wednesday November 07, 2001 @08:42AM (#2532256)
    If you have to use Access, you can connect to it via PHP or Perl from Linux using ODBC Socket Server, located at http://odbc.sourceforge.net

    ODBC Socket Server is an open source database access toolkit that exposes Windows ODBC data sources with an XML-based TCP/IP interface.

    It has clients for PHP, Perl, C (on Windows, Mac, and Linux), and Java.
    • I'm sorry, but did you not read his question? He didn't ask which databases you could connect to from Linux; he asked which LARGE, MISSION-CRITICAL RDBMS servers ran on Linux. This is not a troll, rather a correction on the moderation this post should have gotten: Offtopic.
  • by Anonymous Coward on Wednesday November 07, 2001 @08:45AM (#2532268)
    At http://www.wohl.com/middleware5-01.htm they mention a couple of real world examples (where the Wimbledon example might be considered as a high capacity showcase for IBM technology)

    "At the Wimbledon Tennis Championships, Linux, dB2, and Netfinity servers make it possible to offer real-time information on scores to fans around the globe. Last year, over 914 million web hits occurred during the games, requesting scores and statistics."

    "ERP Central is a portal for ERP consultants. They offer ERP news, job postings, and other information, but their big 'traffic builder' is a free time and expense tracking program which users can access to maintain their schedule information and submit it back to their offices from the site. Linux hosted and built on top of Websphere and dB2, the application can scale to handle over 100,000 users and organizations whose consultants use the software estimate that it saves them 75% in time savings, an average value of $500,000 per organization per year."

    JK
  • by buckeyeguy ( 525140 ) on Wednesday November 07, 2001 @08:48AM (#2532281) Homepage Journal
    The reason that more orgs don't use Linux is at least partly a function of the corporate purchasing process... it boils down, roughly, to:

    We have a need for a new DB system

    What systems are available?

    Schedule meetings with the sales people from the various vendors, so that we can compare what's out there.

    Boink! That's where Linux bounces up against the wall of established companies... except for a smattering of VARs, nobody is there to "attend the meeting" to tout Linux's praises to the big boss... except for the internal sysadmin and/or program managers, who then have to plug the stuff as a better alternative to the established vendors. So, IMHO, for corporate usage, it's not about what the OS can do, it's all in the selling of it.

    Now if you'll pardon me, I have to go to a meeting where a big storage vendor will be showing us their wares. Really. ;)

  • by aviator ( 83555 ) on Wednesday November 07, 2001 @08:50AM (#2532286) Homepage
    One might consider a "large" database in terms of total disk used for the tables, indexes, and logs. Or it could be total concurrent users logged in to the database. Or it could be total simultaneous users - different than concurrent users since simultaneous users are those actually issuing a SQL statement.

    A high number of simultaneous users will require more processor/CPU capacity. A high number of concurrent users (with a low number of simultaneous users) might not require much processor capacity, but will likely require more memory capacity due to the number of concurrent connections (each connection having some amount of its own memory).
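    On Oracle, the distinction the parent draws can be measured directly from v$session, where STATUS is 'ACTIVE' only while a session is executing a call. A hedged sketch (the credentials are placeholders):

    ```shell
    # Counting concurrent vs. simultaneously active sessions on Oracle.
    # Login details are placeholders; v$session is Oracle's session view.
    sqlplus -s system/manager <<'EOF'
    -- all connected user sessions (concurrent users)
    SELECT COUNT(*) AS concurrent FROM v$session WHERE type = 'USER';
    -- user sessions executing a statement right now (simultaneous users)
    SELECT COUNT(*) AS simultaneous
    FROM v$session WHERE type = 'USER' AND status = 'ACTIVE';
    EOF
    ```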
  • by avdp ( 22065 ) on Wednesday November 07, 2001 @08:57AM (#2532315)
    At the company I work for (which will remain unnamed because I am not in a position to speak on its behalf - but it is an old and large american company with a single character stock symbol) we use Oracle 8i on Compaq Proliants running Red Hat Linux - not only that but it's RH6.2 with all of the limitations of that line of kernels.

    None of the databases are gigantic - 80Gb is the largest, but we haven't had any problems at all. If anything, most of these databases used to be on Tru64 (Digital Unix before that) and we had a lot of problems (although they were probably hardware related). Also - users have reported that performance is better (not that it was a real issue before) but we've never bothered/attempted to document that.

    I can't say that the main factor for the move was money (although it was a factor) - after all, if you can afford the Oracle licenses you probably should not be cheap with the hardware/OS but we've had a whole lot of RH Linux for other applications and it just made sense to consolidate.
  • by jlubenow ( 534859 ) on Wednesday November 07, 2001 @09:34AM (#2532459)
    At my company we are in the process of switching our Progress database, currently running on a SCO Unix box, to a new Compaq server that will be running Red Hat. This database is extremely mission-critical to our company (i.e., it pays the bills). Progress is one of the best platforms I have ever used and is extremely stable on Linux.
  • by tmark ( 230091 ) on Wednesday November 07, 2001 @09:35AM (#2532463)
    I'm *certain* there are companies out there using Linux to host "mission-critical" (whatever that means) RDBMSes. But this by itself would tell us nothing of Linux's suitability for this purpose. I happen to know lots of companies that use Linux for this purpose, but they also are companies that would not be able to afford the Sun boxes and Oracle licenses that they wish they could run. I also know several places running Linux for - what they would consider to be - "mission-critical" RDBMSes, but what they consider to be mission-critical is FAR different from what a big investment bank or hospital would consider to be mission critical.

    Instead of just asking a question that is almost guaranteed to let us pat ourselves on the back, we also need to ask for descriptions of the conditions under which people are using Linux for RDBMSes. That is, before the answer "we are using Linux" can be properly interpreted, we also need to know the answers to questions like: How many connections? How many users? What size of database? What kind of availability do you demand? What kind of information is being stored? How big is your staff? How big is your budget?

    After all, knowing that a company uses Linux to host Postgres/MySql tells us nothing if the company can't afford to buy a Sun box/Oracle license in the first place.
    • but what they consider to be mission-critical is FAR different than what a big investment bank or hospital would consider to be mission critical.

      Excellent point!

      we also need to know ... How many connections ? How many users ?

      Also: How much money in financial transactions does your company stand to lose if the server is offline for an hour? What other implications are there of such an outage? How much money would be lost, and what other implications are there, if an hour's worth of data is lost completely? How much is at risk, and what sensitive data is compromised, if that data is exposed to malicious hackers?

      This kind of question defines just how mission critical "mission critical" really is.

  • We [autosoft.com] run Linux [redhat.com] at one of our fabs [auo.com] here in Taiwan [taiwan.com] running a mission critical DB system called C-Tree [faircom.com]. This is 24/7 stuff for those of you who don't know how Fabs work.
  • Amazon uses... (Score:2, Informative)

    by Anonymous Coward


    Objectstore. An object oriented database (see www.objectdesign.com ) thats known for its speed.

    Who knows why we didn't say that.
  • PICK Database (Score:2, Interesting)

    by dbworker ( 534861 )
    The PICK database [rainingdata.com] (aka D3) is a little known database that's been around for about 20 years, and was ported to Linux about 4 or 5 years ago. This DB is fully implemented on Linux, and I've talked w/ people that have 1000+ users running. The DB itself has several million user-licenses in the field, and a lot of them are running Linux. The Linux implementation supports multi gigabyte DBs and the user count is limited mostly by the power of the machine. I think this counts.
  • DB2 on linux has great performance.

    Linux on RS/6000, AS/400 (iSeries), and System/390 (zSeries) is awesome.

    Now, buy that nice hardware (better than plain ole Intel boxes) and either run Linux on em with DB2, or AIX, os/400, or z/OS ...
  • We're moving our Oracle 8.1.7.2 instances running on Solaris 7 to Oracle 9i running on Linux. Our biggest problem so far is vendor-related, as our ERP vendor (Peoplesoft) climbed into bed with Microsoft some years ago and has basically ignored the Linux market for an apps port :(

    Anyway, we're shopping replacements for our 3500's and we've found that bang for the buck, Linux for Databases is the way to go. Most of these servers are one-task anyway, and Oracle runs like a champ so far. There are some issues with Glibc that require some manipulation of libraries to get around if you want to use any other dist. than SUSE tho, which sux. That said, we're testing with mandrake 8.1 and it runs fine (post patch).
  • by f00zbll ( 526151 ) on Wednesday November 07, 2001 @10:25AM (#2532664)
    Although Linux can run large RDBMSes like Oracle and Sybase, the issue is disk storage and hardware redundancy. Things like having Veritas hooked up to a couple of large systems to handle failover are crucial. Large in my mind is systems with more than 10 terabytes of data. Enterprise-level storage solutions are more plentiful for Solaris than for PCs. Things like getting a solid gigabit ethernet card, or bonding several together, have been tested on Solaris longer than on Linux. Who in their right mind wants to be responsible when the cheap storage device dies and failover doesn't kick in? I sure wouldn't.

    Linux can run an RDBMS just fine; it's all the other stuff that is lagging. Manufacturers of fibre storage and other high-end products tend to focus on Solaris more than Linux. A large RDBMS involves a lot of other important details that need constant management and attention. Building a PC box with redundant power supplies, fans, backup CPUs and motherboards gets you close to Solaris prices, so enterprise projects tend to choose Solaris or mainframes.

  • by Anonymous Coward on Wednesday November 07, 2001 @11:18AM (#2532921)
    I hate to throw cold water on people, but we're talking apples/oranges in a good portion of this topic.



    A large database (in this context) is an enterprise-sized system: multiple platforms serving many millions of records in short periods of time.



    I have customers fielding databases on multiple Enterprise 10000 servers...single tables of more than 35 million rows. This is actually a "medium" system in my mind.



    I love Linux, I hacked around the pre 1.0 kernels many years ago. BUT, it does not scale up too well. Even the little things in Linux make it hard to do a good (maintainable) job: shifting device names (pull one of your HDs and see what happens), inability to modify hardware subsystems (storage in particular) while running live, etc. Even EMC, NetApp and XIOtech hardware can't fix these issues.



    If the Linux crowd wants to be accepted by Big Business, they must learn the needs of Big Business.



    Running a few 4-proc Intel servers with Oracle or Sybase does not put you in the same league. Nor does storing 10,000 articles in MySQL.



    If you can imagine doing it yourself, if you can even imagine the amount of data to store, then you are almost surely below the thresholds I need to work in every day.

    • Why, oh why doesn't the moderator god give me moderator points when a really good post like this comes up?

      Too bad the guy posted as an AC...
    • I may not be in your league. Most guys in my league measure the size of their systems in disk drives first, then memory, then processors. Yes, I can imagine doing a big database on Linux. High availability is something else.



      I am the lead DBA for a company that processes 15-20 million US dollars worth of transactions per day. My backend database is Solaris/Oracle; it does 3000-4000 SQL statements per second, and my company would lose maybe $1000 in revenue for each minute it is down. The two largest tables in this database have in excess of 300 million rows, and are accessed by 100k customers per day. We have over 11 million customers.


      It's running on an E4500, which is saving us a lot of money by *not* buying E10000s. I like to think it's tuned well, but a big part of the reason it works (fast) is also that it is on an EMC with over 90 disk drives in it. It's all about I/O bandwidth and serviceability in my world, and on those points you are correct in saying Sun is a hands-down winner over Linux.


      Now, I work with a sysadmin who is a whiz at making lots of Linux boxes work reliably as a web frontend, and is also good at keeping our backend Solaris-based database up 24/7. Neither of us is anxious to put the backend on Linux, but we did put up a significantly large, high-performance, but *relatively* low-availability database on Linux.


      It's a 6x800MHz Intel box with 4G RAM and 16 disks on Mylex caching RAID-5 controllers. RAID-5 sucks in general, but the point of this system was to get a lot of bang for the buck, so as a big-league DBA I took the challenge of making data loads fast in spite of RAID-5, in order to get a crack at de-installing Windows from this box. If I spent some bucks on more disks, we could get a much faster system, but that was never the point of this system.


      The system is about 200G worth of partitioned tables (copies of the same 300M-row tables mentioned above) with partitioned rollup tables off to the side, for business analysis. The real trick is the partitioning. Because of the partitioning, this system is able to do many types of analyses that can't be done on our other analysis system, which happens to be Solaris with 60 disk drives.


      The linux box was a leftover from a failed Windows project, so in some sense it was free, but I believe it woulda cost about $80k new. Gig ethernet and the controller was about 10 or 15k of it.


      It's working well for DSS, since the 2 times it's crashed in the last few months didn't really hurt anything.


      I'm rambling on now, but I'll talk to the DBAs out there, who speak my language.
      If you're gonna do Linux oracle:
      - reiserfs sucked performance-wise on top of RAID-5. Don't know if I did something wrong, but I abandoned it in favor of ext2. I don't care if fsck takes a long time on this system, and ext2 creamed it for database I/O perf on RAID-5. I also couldn't get perf out of reiser on simple stripes without the added hurt of RAID-5, so go figure. fsck times are irrelevant if you use raw partitions, so that is the way to go in most cases.


      - Max out the memory (of course) on an Intel box. I think the most you can do is 4G on Intel platforms. This is sufficient for me, but I kept the SGA down to about 500M, so I could have 10-way parallel processes with 200M of sort area size.
      - Watch out for Linux caching. I've turned it off for my filesystems. It's easy to get into "writeback debt" by pushing a lot of dirty blocks out of the Oracle cache into the ext2fs cache. Add RAID-5 suckiness at random writeback, and you've got serious constipation problems on your hands.
      - I've used some raw partitions for this system; they seem to be worth it to avoid ext2fs caching hassles, but I haven't migrated completely yet. The "raw" command must be used to "bind" a name to a disk partition before it can be used by Oracle as a raw partition, so it makes for a few extra hassles, but no big deal.
      - I got a Mylex caching controller, which apparently has hot-swapping capability in the hardware, mitigating the absence of Veritas volume manager and hot-plug capabilities at the Linux level. It also makes RAID-5 tolerable. Haven't proven hot swapping by testing yet, tho.
      - Ext2fs has some RAID-5-aware stuff; this helped on the RAID-5 Mylex vols I have, based on cursory throughput tests, but I'm not sure I'm getting the block alignment proper at the Oracle level. (Don't know, after all the Oracle/ext2/controller layers, if Oracle's 16k blocks are aligned with the stripes on the Mylex. Sigh.)

      FWIW, back in the dot-com heyday, I also had clients doing modest (to them) high-availability databases on Oracle/Linux. Even then, on relatively small (in gigabytes) databases, the biggest tuning hassle was Linux's writeback caching getting in the way of Oracle, and the biggest scalability hassle was managing many, many disks. Raw partitions can get around the former; intelligent controllers (Mylex etc.) or intelligent disk arrays (Clariion, Sun T3, etc.) get around the latter.
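      For reference, the raw-partition binding described above looked roughly like this on a 2.4-era kernel. The device and partition names are examples only, not from the post.

      ```shell
      # Sketch of binding raw devices for Oracle; partition names are invented.
      raw /dev/raw/raw1 /dev/sdc1      # bind a raw node to a disk partition
      raw -qa                          # query all current bindings
      chown oracle:dba /dev/raw/raw1   # let the oracle account open it
      # Oracle then uses /dev/raw/raw1 as an unbuffered datafile, bypassing
      # the ext2 page cache (and its writeback debt) entirely.
      # Note: bindings do not survive a reboot; re-run them from an init script.
      ```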

    • You are just using the wrong database, then. There is a 50-CPU Linux cluster (not Beowulf, but the native clustering of the database) that was loaded with 2.5 billion stock transactions. It performed very well using KDB (taken from kx.com):
      on thursday jan 4, 2001 steve miano, ed bierly, keith mason and i
      loaded 2.5 billion trades and quotes on a 50cpu linux cluster.

      simple table scans on one billion trades, e.g.

      select distinct sym from trade
      select max price from trade

      take 1 second

      multi-dimensional aggregations, e.g.

      / 100 top traded stocks
      100 first desc select sum size*price by sym from trade

      / daily high and close
      select high:max price, close:last price by sym, date from trade

      take 10 to 20 seconds

      translating the data from TAQ to kdb took about 5 hours.
      (steve had loaded the 200 TAQ cd's onto several disk drives.)

      distributing the 100gigabytes over the 100Mbit ethernet took 3 hours.
      (this cluster should probably have Gbit ethernet)

      loading the database (k db taq.m -P 2080), starting 50 slaves,
      connecting, mapping shared indicative tables over nfs, building
      parallel partitions, etc. took .1 second.
  • by Anonymous Coward
    We ran extensive comparisons for a Data Warehousing project using Sun HW/Solaris/Oracle versus Penguin Computing/RedHat/Oracle, and while the Sun solution was slightly faster in our tests, it was only marginally faster, yet cost significantly more. No way could we justify the additional expense based on our results. And we haven't looked back. Our Oracle servers haven't failed us in nearly two years, and they just keep getting better. And today's options for Linux hardware are much better than 2 years ago. We even discovered a problem with a particular Sun server during our testing that Sun asked us to keep quiet about. We took that to mean they'd sue us if we discussed it. Didn't take long to realize that this was not a company we wanted to do any business with. Sun sucks.
    • See, it's not what it CAN do, it's what it COULD do. Linux/Oracle/4-way P4 Xeon may very well be on par with a four-way Sun E450. But what if you need to move up to an E10K? What if you need a fully fault-tolerant cluster? What if you want one that any competent admin could drop into? It's not there yet. Remember, for the high-end stuff, one doesn't buy hardware, database, OS and go. One buys a solution. "This solution WILL perform as follows, and WILL cost this much..." whereas the Linux setups are all very custom, and very hacked, and strictly for cost or strictly because it's Linux. Well, either way, you're stuck.
  • Weather.com (Score:2, Informative)

    by Anonymous Coward
    Weather.com is using Linux quite successfully to host its Oracle backend. They have replaced $250K worth of Sun machines with $50K worth of Intel-based systems doing the same work.
  • I work on Unix machines running Oracle and on OS/390 machines running DB2. Based on that work, and all my tinkering with Linux over the years, I think Linux is now able to handle mission-critical work on the right hardware. All the tools for big-biz mission-critical stuff became available on Linux recently.

    But, and this is a big but, it has to be set up by the right person. I have seen Unix and MVS systems set up badly and hosed in mission-critical situations. We lost a lot of money while the systems were down. The higher-ups would blame the people (as they should have), because the systems work just fine in other situations, so it must be the people.

    Based on perceptions, if it were Linux set up by the wrong guy and things went belly up, they would blame Linux because it's untested. It would end up the scapegoat instead of the lazy implementation group. That's what Linux has to overcome.

    I remember a quote I think was from the Red Baron, "It's not the crate, but the man in it that counts".
    • There are many qualified and competent people ready to go to work providing Linux solutions; as opportunities become available, there will be even more. Based on what I've seen, the average Linux user is better trained and more dedicated than many Windows uber users too.

      That said, it seems as if your argument bears more weight on the manufacturer of said crate than the man in it.
  • PostgreSQL 8GB (Score:2, Informative)

    by /dev/zero ( 116295 )

    We run a large auditing system (OLAP-oriented rather than OLTP-oriented) on PostgreSQL (v7.1.3) on Linux (RH 7.1), using Tomcat (v4.0.1) as the front-end. We're running it on a Dell PowerEdge 2400 (2x PIII-866) with their PERC RAID controller, with a RAID 1 and a RAID 0+1 volume.



    Our database is currently a bit over 8 GB, with many of the tables exceeding one million records. Queries typically join > 5 tables.



    We moved from an MS Access/SQL Server environment and are much happier with the functionality, performance, and stability we now have.


    Not to slam DB2, as I think it's a great product and have successfully used it for some really big projects, but for this application I found that PostgreSQL delivered ~4x the performance on many of our key queries. The lower cost and lower administrative overhead sealed the deal in favor of PostgreSQL.


    As always, though, your mileage may vary.


    Gordon.

  • I know we're all 'rah, rah Linux!' around here, but the question being asked is pretty unbalanced. I don't know firsthand of any large RDBMS Linux implementations, but that's not saying much.

    I do know there are a *lot* of large-scale BSD RDBMS systems out there.

    It seems a little skewed to put Linux against 'commercial OSes' when BSD isn't a commercial OS, and is arguably better suited to the tasks at hand than Linux.

    Use a hammer for a nail, and a screwdriver for screws.
  • by King_TJ ( 85913 ) on Wednesday November 07, 2001 @02:02PM (#2533785) Journal
    We're currently running Oracle 8i under Windows NT on a couple of DEC Alphaservers (4100's with quad processors).

    With MS's abysmal support for NT on the Alpha these days, we've considered moving the Oracle database to another OS. I don't think we want to trash the DEC Alphaservers yet though - since they're still respectable machines. Linux for Alpha is definitely an interesting option for us - but I'm wondering if anyone has had experiences with Oracle for Linux on the DEC Alpha? How does it compare, performance-wise, to running Oracle on the Alpha version of NT?

    Last time I checked, Oracle wasn't really giving a high level of support to Oracle for Linux unless you used it on Intel hardware?
  • One of the PostgreSQL developers was telling me about a database he once designed. The details are a bit hazy and second-hand, but I believe it was originally using Ingres, which was what piqued his interest in PgSQL.

    Anyways, the system basically handled a few gigs a day or so of data from GPS satellites and such. It basically crunched numbers and stored results in an effort to figure out how much the earth's tectonic plates were moving from day to day. I would imagine that this system handled many, many rows and transactions daily. I'm pretty sure they moved away from Ingres to PgSQL, which they're probably still using now.

    It's not exactly a commercial application, but it is an RDBMS that handles a lot of data, and apparently worked quite well.

    J
  • We use PostgreSQL on Linux here at TrustCommerce [trustcommerce.com]. "Mission critical" might be an overstatement (it's credit card processing, which is important but not exactly life-or-death).
  • ...I know, I know, Mandrake is a newbie distro, we had a sysadmin who was nuts for it though...but we've never had a hiccup from the database, and a good thing too...


    Cyclopatra

  • google (Score:2, Interesting)

    by sunkingXIV ( 188942 )
    What about Google?

    Google has huge databases (caching the web). It is run on tons of linux boxes. Their entire business depends on speed and accurate information.

    an article about Google [nwfusion.com]
