Linux Business Data Storage Software

Open Source Database Clusters? 350

grugruto asks: "A lot of open source solutions are available to scale web sites with clusters, but what about databases? I can't afford an Oracle RAC license, but can I have something more reliable and fault-tolerant than my single Postgres box? I have seen this recent article that looks promising for open source solutions. Does anyone have experience with clusters of MySQL, Postgres-R, C-JDBC or other solutions? How do they compare to commercial products?"
This discussion has been archived. No new comments can be posted.

Open Source Database Clusters?

Comments Filter:
  • by Anonymous Coward on Thursday September 11, 2003 @06:46PM (#6937845)
    Works every time.
  • by solarisguy ( 59280 ) on Thursday September 11, 2003 @06:48PM (#6937866) Homepage
    For what it's worth, the commercial solutions are hard to set up, unstable and terribly difficult to maintain, and this is after a small fortune has been invested in making them work. Not to knock the open source solution, but it's hard to believe that something that is infrequently used and difficult to understand will be truly production quality if you want to use it for money.
  • by venom600 ( 527627 ) on Thursday September 11, 2003 @06:49PM (#6937880) Homepage Journal
    We've been evaluating the Emic application cluster for MySQL and have had pretty good results. It's a new product (so YMMV), but it looks promising.
    Emic Networks [emicnetworks.com]
  • MySQL Replication (Score:5, Insightful)

    by infernalC ( 51228 ) <matthew DOT mellon AT google DOT com> on Thursday September 11, 2003 @06:52PM (#6937897) Homepage Journal
    MySQL has very nice replication functionality, and, in certain circumstances, you can even set up replication rings. It is somewhat flexible about the topology you choose to use, so pick the one best for your application. Load balance ala DNS and you're in business.
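The DNS-style load balancing suggested above can be sketched as a simple round-robin over replica addresses. This is a minimal illustration in Python, with hypothetical host names; real round-robin DNS happens in the resolver rather than in application code:

```python
from itertools import cycle

# Hypothetical replica hosts in a MySQL replication ring.
REPLICAS = ["db1.example.com", "db2.example.com", "db3.example.com"]

def make_balancer(hosts):
    """Return a callable that hands out hosts round-robin, the way
    round-robin DNS spreads connections across A records."""
    ring = cycle(hosts)
    return lambda: next(ring)

next_host = make_balancer(REPLICAS)
```

Each call to next_host() yields the next replica in the ring, wrapping around after the last one.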
    • Yes, as long as you don't give a damn about ACID compliance.
      • by strobert ( 79836 )
        sorry, maybe it is just me, but the whole "ARRGG IT AIN'T ACID" is a lot of hype to me. ACID boils down to transactions. plain and simple.

        And I have found in many applications it is easier to deal with transaction type data consistency at the app layer instead of the db one.

        knowing that a DB transaction is complete doesn't help you if, in order to move forward, you have to have db ops done on multiple servers and/or a change happen with an external vendor.

        And generally some bad code/process will at s
        • Re:MySQL Replication (Score:5, Informative)

          by Daniel Phillips ( 238627 ) on Friday September 12, 2003 @02:01AM (#6940284)
          sorry, maybe it is just me, but the whole "ARRGG IT AIN'T ACID" is a lot of hype to me. ACID boils down to transactions. plain and simple.

          Perhaps you need a deeper understanding [geocities.com].

          ACID tends to be a knee-jerk reaction, and most people really need to be asking themselves what it ACTUALLY buys them.

          It buys them a database that they can expect to still be there, sound and consistent, after the machine blows a fuse in the middle of 200 simultaneous updates. It buys them a database that doesn't accumulate rot over time because somebody deleted a customer at the same time somebody in another city entered an invoice. It buys them queries that give the right answer, because they only ever see the database in a consistent state, even while other queries running at the same time are only partially completed.

          Basically, it gives them a database capable of completely correct operation, not just mostly correct. Of course that may not matter to you, in that case I have a faulty pacemaker to sell you.
    • Rings? Are you implying *all* servers involved in the replication process could handle writes rather than a master that handles writes and a bunch of slaves that handle all the read access? If this is true, point me to some docs :) That would be too cool.
      • Re:MySQL Replication (Score:5, Informative)

        by Hamstaus ( 586402 ) on Thursday September 11, 2003 @08:25PM (#6938494) Homepage
        Rings? Are you implying *all* servers involved in the replication process could handle writes rather than a master that handles writes and a bunch of slaves that handle all the read access? If this is true, point me to some docs :) That would be too cool.

        Here you go [mysql.com].

        The part you are probably interested in is this:
        You should run your slaves with the --log-bin option and without --log-slave-updates. This way the slave will be ready to become a master as soon as you issue STOP SLAVE; RESET MASTER, and CHANGE MASTER TO on the other slaves.
        Note that if you decide to "ring" your server setups, then you are not necessarily helping distribute the load, you are simply creating redundant masters in the case that your primary machine becomes unavailable. Also, you'll have to write your own monitoring scripts. MySQL says they are working on some tools for this... I'm excited to see what they come up with.
        • Note that if you decide to "ring" your server setups, then you are not necessarily helping distribute the load, you are simply creating redundant masters in the case that your primary machine becomes unavailable.

          Not necessarily. The largest part of most database access is reads - searching, retrieval, etc. These often vastly outnumber writes, depending on the application. Reads do not have to be replicated, giving a big performance boost.
    • Re:MySQL Replication (Score:3, Informative)

      by techwolf ( 26278 )
      Bah, DNS isn't load balancing.

      LVS + MySQL works really well. We've got grouped clusters of databases that we can allocate more/less resources to as needed. Reporting cluster for the slower queries, faster cluster for the real-time queries and a few specific application clusters.

      Replication keeps them in sync but there isn't a good HA solution available for the master database yet. Perhaps in MySQL 5.0. In the meantime, use DRBD + heartbeat for near HA.
  • -1:Troll (Score:4, Insightful)

    by stratjakt ( 596332 ) on Thursday September 11, 2003 @06:52PM (#6937903) Journal
    Do anyone have experiences with clusters of MySQL , Postgres-R, C-JDBC or other solutions? How does it compare to commercial products?

    They don't compare to commercial products. I know it isn't what you want to hear, and there are hundreds of kids here to tell you different, but they just don't compare. Those kids' database experience doesn't extend past an address book.

    Even if you manage to get them to technically keep up, transaction-wise, with Oracle or SQL Server, the ACID enforcement isn't there, the syntaxes are kludgy. Gack.

    My company ships products with SQL Server or Oracle as the back end. I've tried to put together an OSS solution so I could impress the big boss with millions of bucks of saved license fees. They just aren't anywhere close IMO.

    Run a SQL Server farm on the back end if you can't afford an Oracle license. Don't be an OSS ideologue in the business world; you end up unemployed.
    • Re:-1:Troll (Score:5, Insightful)

      by venom600 ( 527627 ) on Thursday September 11, 2003 @06:59PM (#6937959) Homepage Journal

      ACID enforcement isn't there
      Actually ACID compliance is getting pretty darn good in databases like MySQL. Care to elaborate about what ACID compliance issues you have?

      Don't be an OSS ideologue in the business world, you end up unemployed.
      Actually, in our flailing economy 'OSS ideologues' as you call them are making a lot of headway. OSS now has a viable alternative to *just about* any commercial enterprise software out there.

      • Re:-1:Troll (Score:3, Interesting)

        by LadyLucky ( 546115 )
        Actually ACID compliance is getting pretty darn good in databases like MySQL. Care to elaborate about what ACID compliance issues you have?

        Bull pucky: from someone whose only deployment of MySQL into a live environment went completely pear-shaped, with MySQL crashing several times per day. The damned thing doesn't report ANYTHING to the error log, except "I'm starting up again, and oohhh look at all that corrupt data, I hope I can do something about that!". I would never touch the database again, not

    • ahem, have you checked any open source db lately?

      most of them have ACID enforcement, despite your claim to the contrary...

    • Re:-1:Troll (Score:3, Troll)

      by zulux ( 112259 )

      Oracle has some amazing features, but PostgreSQL kicks the crap out of MS SQL Server -

      MS SQL skips records when it queries - more info here [com.com]

      MS SQL crashes for no fucking reason.

      MS SQL requires x86 hardware - No Sparc, No POWER, No MIPS. Just crappy x86.

      There is no 64 bit version of MS SQL.

      PostgreSQL has a very robust multi-version concurrency control mechanism - something MS SQL could only dream of.

      And if you *REALLY* need to scale PostgreSQL - run it on a SUN/SGI/IBM.

      Not a bunch of fucking Intel t
      • Re:-1:Troll (Score:3, Informative)

        by afidel ( 530433 )
        There is no 64 bit version of MS SQL

        Bullshit, it's been out for months, see this [pcpro.co.uk] article. As for the rest of your argument, check out the TPC-C [tpc.org] results and say that MS SQL doesn't scale: it's the second-highest scorer and has 6 of the top 10 results. This is a real-world load-testing benchmark that many companies base purchasing decisions on. (OK, the MS solutions are a little unusual in that they are shared-nothing, but the other competitors are free to do likewise.)
      • There is no 64 bit version of MS SQL.

        Not true. [microsoft.com]

      • MS SQL requires x86 hardware - No Sparc, No POWER, No MIPS. Just crappy x86.
        Crappy x86... Intel Itanium 2 specifically is the leading TPC-C platform [tpc.org] (both 1st (HP-UX/Oracle) and 2nd place (with the dreaded MS-SQL 64-bit version), and Intel CPUs taking 8 of the top 10)

        There is no 64 bit version of MS SQL.
        Wrong again. See #2 above

        And if you *REALLY* need to scale PostgreSQL - run it on a SUN/SGI/IBM.
        Well, aside from the TPC-C Top 10, it's interesting to note that if you trust SGI, their whole next gene
        • Crappy x86 ...Intel Itanium 2 specifically is the leading TPC-C platform

          Itanium is not x86.

          AFAIK the HP Superdomes run multiple images. That's great for TPC-C performance, not so good for everything else.


          There is no 64 bit version of MS SQL.


          Oh... great... it looks like it made it out of Alpha rather quickly.

          Hope you like being a beta tester. Enjoy.

      • Just to add to the other replies: the first point was referring to a bug that was reproduced once in a version of SQL Server that is now over five years and four major revisions old.

        Way to prove your case there.

    • Re:-1:Troll (Score:5, Insightful)

      by Kunta Kinte ( 323399 ) on Thursday September 11, 2003 @07:30PM (#6938145) Journal
      Run a SQL Server farm on the back end if you can't afford an Oracle license. Don't be an OSS ideologue in the business world, you end up unemployed.

      And I would fire the IT guy who causes my company to spend $10,000 for SQL Server in a situation where the free MySQL or Postgres would do.

      Just focus on the right tool for the job. If the database is a simple one, is regularly backed up, and your company can stand a small period of downtime, why on earth would you buy Oracle or MS SQL Server?

      This is not to say that MySQL is unreliable. I have *never* seen MySQL crash, or lose any of my data. So it would be silly of me to go with Oracle, just because everyone else is doing it.

      The right tool for the job, people.

      • Re:-1:Troll (Score:5, Interesting)

        by Tmack ( 593755 ) on Thursday September 11, 2003 @07:54PM (#6938309) Homepage Journal
        I would have to second this. I use MySQL at work as the main database for the NOC and the service activations and circuit delivery groups. The database (running off an old Sun Netra box) handles the load of all the scripts (mostly perl) used by all those groups. This includes scripts that monitor circuit status (ala Netcool), test new circuits, keep track of customer installations, change requests, troubles, router configs, etc... The MySQL server has never caused data loss, and the only instances where it "crashed" were errant queries in alpha CGI script releases that caused basically an infinite loop around a search on the 20K+ circuit entries on a non-indexed field, which a simple restart of the mysqld fixed. Even when the beta version was released running on a Linux P4 box we never had issues, as opposed to the Oracle-backed system used for the main corp. database that regularly causes much frustration among co-workers (not to mention the internal conflict between 2 development teams (corp vs. us) trying to control the access and data of the corp database vs. the ease of development of new utilities to make customer installation and support easier).

        TM

        P.S. Can't wait until our Sun V280r shows up!

      • > The right tool for the job people

        Right, and a myopic application of the above advice would lead to a dozen different database products in a typical department. They'd all be the right tool for some job - unless you're hoping to reuse skills, reuse backup solutions (TRM for DB2, Veritas for Oracle, etc.), have any hope of reliable integration, etc.

        So, yeah - get the right tool for the job. But before you write that out you need to take a big step back and get a sense of what your strategic directio
    • Love your sig. Is that from Carl from ATHF? I could have sworn that's what he said when he had the Foreigner power belt on.
    • Re:-1:Troll (Score:3, Insightful)

      by Lumpy ( 12016 )
      They don't compare to commercial products. I know it isn't what you want to hear, and there are hundreds of kids here to tell you different, but they just don't compare. Those kids' database experience doesn't extend past an address book.


      yeah, nobody would ever run a high traffic website on OSS database.....

      Anyone know of a high-traffic website that uses Perl and OSS database servers in a cluster?

      Oh yeah.... this place [slashdot.org]
    • Re:-1:Troll (Score:5, Informative)

      by justin.warren ( 14556 ) <{daedalus} {at} {eigenmagic.com}> on Thursday September 11, 2003 @08:12PM (#6938401) Homepage
      An appropriate subject line.

      Some of us who compare OSS databases to commercial ones have experience that extends past address books. And no, I'll pass on the DSW if you don't mind.

      My main problem with PostgreSQL is the query optimiser. Oracle's query optimiser is definitely superior as Postgres occasionally comes up with some peculiar query plans. In a product I'm involved with, we hand tune our SQL from the ground up, so this is less of a problem for us. I find the two products to be pretty comparable in other aspects, though I haven't tried Postgres-R yet.

      I haven't played with MySQL since back when you couldn't do sub-SELECTS, so I have no idea how much it's progressed since then.

      At this stage, I'd suggest you stick with a commercial product for replication or clustering for high end work. Clustering and replication is still the bleeding edge for OSS, so use it with caution on non-critical tasks. Having said that, these are complex tasks you're talking about, and even the commercial products have their own peculiarities at times. High volume replication using Oracle materialized views over database links comes to mind.

      • Re:-1:Troll (Score:4, Insightful)

        by stanwirth ( 621074 ) on Thursday September 11, 2003 @10:56PM (#6939580)

        My main problem with PostgreSQL is the query optimiser. Oracle's query optimiser is definitely superior as Postgres occasionally comes up with some peculiar query plans.

        I had the same experience. You basically have to optimise large queries combined with joins and subselects on PostgreSQL yourself -- and often with Oracle as well, if it's for tables with > 1-10M records, ish. You might want to check out DB2. Awesome clustering -- IMHO more sophisticated and flexible than Oracle's. YMMV depending on the application, as always. Also, if it's a development environment, you can test DB2 and Oracle on linux boxen to your heart's content for the same price as PostgreSQL -- free.

        MySQL may be able to handle subselects, but it's still struggling with triggers and stored procedures.

    • by C10H14N2 ( 640033 )
      Great, we had to go there. "Kiddie" and then resort to recommending MS SQL, which has proven itself to be, shall we say, not the most "reliable" on the planet, and elaboration shouldn't be necessary in these parts. Oddly enough, although MS SQL was "licensed" (a-la CPM, sigh) from Sybase, people often simultaneously disparage Sybase and praise MS SQL, despite the fact that Sybase licenses are roughly half of MS SQL. So we're left, essentially, with Oracle, which the vast majority of businesses will find not t
  • Not personally, but (Score:5, Interesting)

    by revividus ( 643168 ) <phil@crissman.gmail@com> on Thursday September 11, 2003 @06:53PM (#6937907) Homepage
    I've been looking into MySQL for a bit, and I saw this article [dell.com] recently, which is directly concerning clustered database servers running MySQL.

    Maybe it will be of interest...

  • Open-source or not...

    I would say just get a bigger box for your PostgreSQL solution and do semi-realtime remote replication on the tables you don't want to lose.
  • Huh? (Score:5, Funny)

    by Wakko Warner ( 324 ) * on Thursday September 11, 2003 @06:54PM (#6937916) Homepage Journal
    You can "cluster" MySQL? Does it involve "rsync" and "cron"?

    - A.P.
  • by MarkusQ ( 450076 ) on Thursday September 11, 2003 @06:56PM (#6937929) Journal

    IMHO, the biggest problem is replication: keeping them all consistent in the face of asynchronous updates. It can also reduce/eliminate the advantages of clustering if you have a significant number of updates compared to the number of queries.

    I guess the best answer depends on how dynamic your data is. If it's static, there are all sorts of easy answers. If all the updates come from a central source, or on a predictable schedule, you're almost as well off. If updates come from the great unwashed but the data can be partitioned in some way (say, geographically) you can still do it. If updates come from all over but queries can be centralized, or if your database is tiny, or if latency isn't a problem, or if you have a machine that prints money, it can still be done.

    If you want to do everything for everyone everywhere, right now if not sooner, for under twenty bucks, you're screwed.

    So, what are your needs?

    -- MarkusQ

  • by zulux ( 112259 ) on Thursday September 11, 2003 @06:57PM (#6937936) Homepage Journal


    Check out the new replication at postgresql.org: it's master -> multiple slave replication.

    Then have your slave database query the master database - and if it no longer responds, it could promote itself to master.

    The replication is the easy bit - the slave promotion is the hard and gritty bit.
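The slave-promotion logic described above can be sketched as follows. The function and its failure threshold are hypothetical; a real setup would issue an actual probe query (e.g. SELECT 1) against the master and feed the results in:

```python
def decide_role(master_alive_checks, required_failures=3):
    """Decide whether a slave should promote itself to master.

    master_alive_checks: recent booleans from probing the master
    (True = master answered). Promote only after several consecutive
    failures, to avoid flapping on a single dropped connection.
    """
    recent = master_alive_checks[-required_failures:]
    if len(recent) == required_failures and not any(recent):
        return "master"   # master looks dead: promote
    return "slave"        # keep replicating
```

The "hard and gritty bit" the parent mentions is everything around this decision: making sure the old master stays down, and repointing the other slaves at the new one.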

    • by tcopeland ( 32225 ) * <tom@th[ ]sleecopeland.com ['oma' in gap]> on Thursday September 11, 2003 @07:27PM (#6938133) Homepage
      Another hard bit is that the Postgres replication doesn't support sequences - see the details in the aptly named "Things to Remember" section of the installation documentation [postgresql.org].

      So if your master fails, presumably you have to recreate the sequences starting at a number high enough to avoid conflicting numbers before switching over to a slave. Seems like this could be a problem.

      Nonetheless, Postgres is cruising away on RubyForge [rubyforge.org]; 300,000 records and counting...

      • I love PostgreSQL sequences - definitely a feature I would miss.

        PGSQL stores its sequences separately from the tables. When you need a sequence, you can have PGSQL create one at table creation time, or you can link your table to an already available sequence.

        The sequences are under your full control - you can reset them, roll them back or set them to any value you want.

        Clever things you can do with this:

        Set one server to have high sequence numbers and another to have lower ones - in the middle of
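The trick of giving each server its own number range can be sketched like this; the generator is a hypothetical stand-in for per-server sequence configuration, not any particular database's syntax:

```python
def make_sequence(offset, increment):
    """Generator of non-overlapping ids across servers: every server
    uses the same increment (the number of servers) but its own
    offset, so the id streams interleave without ever colliding."""
    value = offset
    while True:
        yield value
        value += increment

# Two-server setup: server A takes the odd ids, server B the even ones.
server_a = make_sequence(1, 2)
server_b = make_sequence(2, 2)
```

Because the two streams are disjoint, rows created independently on either server can later be merged without id conflicts.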
  • MySQL + BigIP (Score:2, Interesting)

    by zarthrag ( 650912 )
    What *I* would probably attempt would be to setup a replication ring, and use a bigIP to make them all look like the same server. Then you get your load balancing, and scalability. I have yet to try this, but I will in the (very) near future.

  • High Availability (Score:5, Insightful)

    by mcdrewski42 ( 623680 ) on Thursday September 11, 2003 @07:00PM (#6937972)

    HA is always a crapshoot/tradeoff between cost and risk. Throw enough $ at the problem and you'll approach 100% availability.

    I know that 'more robust' is a nice thing to want, but you really need to think about what you really need. If it takes 15 minutes to switch over to a backup copy (using some magic RAID disk mirroring, maybe?) and 15 minutes to restart the app and let it checkpoint its way up to a decent operational speed again, is that good enough?

    If it takes an hour, how about that?

    How much time/heartache or money does system downtime cost you, and how much are you willing to expend to reduce it by 5, 15, 30 minutes?

    So, there's really a continuum of availability, and you have to pick your point on it. At the low end, you have no backups and recreate everything from scratch. At the high end you use Vendor X's real clustering solution and 24x7 monitoring, and have zero downtime even in a disaster. Somewhere in the middle is you.

    Now I realise this is an overtly commercial view of things, but if need be, replace money with effort and season to taste.
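To make the cost/risk tradeoff concrete, the downtime budget implied by an availability target is simple arithmetic; this helper is illustrative only:

```python
def downtime_minutes_per_year(availability_pct):
    """Minutes of downtime per year permitted by an availability
    target, e.g. 99.9% ("three nines") allows about 526 minutes."""
    minutes_per_year = 365 * 24 * 60  # 525,600
    return minutes_per_year * (1 - availability_pct / 100)
```

Working back from a number like this tells you whether a 15-minute or one-hour manual failover already meets your target, before any clustering money is spent.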
    • Agreed. (Score:5, Interesting)

      by oneiros27 ( 46144 ) on Thursday September 11, 2003 @07:53PM (#6938294) Homepage
      Availability is one of the basic issues when sizing your system. [ie, can you have it down at night for a cold backup, or does it have to be available 24x7? Can you even get a maintenance window once a month?]

      As with sizing your UPS and/or generators, you need to determine what the cost to your business is for downtime.

      Now, yes, you might have some issues in SLAs that spell out how much it'll cost you if you have to refund customers' money [for service-based orgs] -- or how much profit you'd lose if your customers couldn't purchase items [for sales-based orgs]. But unfortunately, you have to also consider the recovery costs, the costs of damage to your reputation, etc.

      If it's not worth your purchasing an Oracle or other, more expensive database, there's good odds that it's not worth the headaches of maintaining a high availability cluster with automatic failover. Instead, you can mirror the data, and keep transaction logs that you can replay.

      You can have a spare system on standby, that you can keep updated on a regular basis (again, your cost of downtime, and the necessary time to recover the system will affect your choices), and when your main system should fail, you can push the most recent diffs to your standby, reconfigure the application servers to recognize the new server as the old one, and you're back in business.

      It requires a bit of planning, and making sure that the necessary manual steps are well documented [so that anyone can do it, should the server outage be caused by something serious enough to take out your administrator, too], but it's easier and cheaper to build and maintain than a true cluster.
  • Well.... (Score:3, Insightful)

    by agent dero ( 680753 ) on Thursday September 11, 2003 @07:00PM (#6937975) Homepage
    If you're working with enough data that would require a CLUSTER, then I would suggest a commercial product.

    But if you need that SPEED, but not a lot of data storage, I'd say a decent sized MySQL cluster would cover you, depending on what your needs are.

    If you are in the position to actually need a cluster to do that much work, you should be able to get something commercial and more large-scale oriented.
  • eRserver (Score:5, Interesting)

    by linuxwrangler ( 582055 ) on Thursday September 11, 2003 @07:01PM (#6937980)
    I have found PostgreSQL to be nearly bullet-proof. I routinely have connections up for months at a time (that's individual persistent connections - the server is up much longer and the connections usually get dropped when I upgrade the client software). Still, sh*t happens and replication has been a sore point for many databases both open and commercial.

    You should investigate eRserver [erserver.com]. It was originally a commercial replication product for Postgres but has been open-sourced. I haven't tried it yet but it's on my to-do list.

    • Re:eRserver (Score:5, Informative)

      by TheFuzzy ( 140473 ) on Thursday September 11, 2003 @07:04PM (#6938006)
      Well, the .ORG domain runs on PostgreSQL + eRServer, so that's one scalable solution ...
    • .org and .info are both using it.

      The press release [postgresql.org] of ER Server becoming open source is quite informative (karma?) as well.

      Marc of PostgreSQL Inc is an incredible resource on the postgresql mailing lists too; and PostgreSQL Inc has a really cool policy that allowed them to donate their code to the community that way:

      From their release: " "DATELINE FRIDAY, DECEMBER 15, 2000 Open Source vs. Proprietary: We advocate Open Source, BSD style :) We will consider and develop short term (up to 24 month

  • Shared storage? (Score:5, Informative)

    by crstophr ( 529410 ) on Thursday September 11, 2003 @07:01PM (#6937986) Homepage

    You can make a High Availability cluster out of most any software if you have some kind of shared storage.

    People have used firewire drives connected to two different computers to accomplish this cheaply. Oracle is giving away a cluster filesystem (so they can sell RAC on linux); there is OpenGFS as well for filesystem usage.

    Just write some basic monitoring scripts that will bring up your postgres database on the second server should the first one fail. Just make sure those scripts completely take down the old database on the first server in the case of a partial failure. Having two databases try to open the same data would be a really bad thing.

    Here are some links to articles that should help:

    Overview [oracle.com]

    Howto [oracle.com]

    Cluster Filesystem [oracle.com]

    These are mainly geared for Oracle/RAC, all you need is the firewire shared storage and cluster filesystem. You're on your own to write the monitoring and failover scripts. Hope this helps. --Chris
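The "completely take down the old database first" rule amounts to fencing, which can be sketched as follows. The FakeNode class and its stop/start hooks are hypothetical stand-ins for the shell scripts described above:

```python
class FakeNode:
    """Stand-in for a database server controlled by shell scripts."""
    def __init__(self, running=True):
        self.running = running
    def stop(self):          # e.g. kill the postmaster, unmount the disk
        self.running = False
    def is_stopped(self):
        return not self.running
    def start(self):         # e.g. mount the shared disk, start the db
        self.running = True

def failover(primary, standby):
    """Fence the old primary before the standby opens the shared data,
    so two servers never have the same files open at once."""
    primary.stop()
    if not primary.is_stopped():
        # Fencing failed: starting the standby now could corrupt data.
        raise RuntimeError("could not fence old primary; aborting failover")
    standby.start()

old_primary = FakeNode(running=True)
standby = FakeNode(running=False)
failover(old_primary, standby)
```

The key design point is the ordering: the standby is only started after the old primary is confirmed down, never in parallel with it.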

    • by rtnz ( 207422 ) on Thursday September 11, 2003 @07:06PM (#6938017) Homepage
      What does Slashdot [slashdot.org] do for this? I recall way back in the day there was some information about what the Slashdot tech looks like; anyone have info regarding their database setup?
    • People have used firewire drives connected to two different computers to accomplish this cheaply.

      I have done this with SCSI and it works great. Each computer needs a different device id (that's why they let you change it :-) and all 4 servers were tied to the same SCSI drive rack.

      The only bitch was to be sure the drive chassis was powered up first and then each computer HAD to be powered up at the same time.

      Hell, I saw networking done in linux via the SCSI bus that way...
    • Oracle is giving away a cluster filesystem (so they can sell RAC on linux) there is OpenGFS as well for filesystem usage.

      I was saying "Wow! Oracle has released a clustered filesystem!", until I discovered it only works with shared storage. Meaning it won't create a filesystem image across a cluster network, where data is distributed; rather, the cluster filesystem is stored in a centralized location, but can be accessed by multiple members of the cluster at the same time for both read and write.
  • by vinsci ( 537958 ) on Thursday September 11, 2003 @07:06PM (#6938011) Journal
    Two MySQL products I found interesting (neither of which is open source at this time): The rest of this comment is quoted verbatim from InnoDB News [innodb.com]

    MySQL/InnoDB-4.0.1 and Oracle 9i win the database server benchmark of PC Magazine and eWEEK. February 27, 2002 - In the benchmark eWEEK measured the performance of an e-commerce application on leading commercial databases IBM DB2, Oracle, MS SQL Server, Sybase ASE, and MySQL/InnoDB. The application server in the test was BEA WebLogic. The operating system was Windows 2000 Advanced Server running on a 4-way Hewlett-Packard Xeon server with 2 GB RAM and 24 Ultra3 SCSI hard drives.

    eWEEK writes: "Of the five databases we tested, only Oracle9i and MySQL were able to run our Nile application as originally written for 8 hours without problems."

    The whole story [eweek.com]. The throughput chart [eweek.com].

  • by gfody ( 514448 ) * on Thursday September 11, 2003 @07:09PM (#6938031)
    or does this term sound kind of like a made-up buzzword, like ".NET powered Java schemas!" or "SOAP servlet toaster oven with X-M-L!"
  • Replicated MySQL (Score:3, Informative)

    by Jack Auf ( 323064 ) on Thursday September 11, 2003 @07:12PM (#6938055) Homepage
    Using one server as a master and n servers as slaves. Just make sure to write everything to the master. Replication to the slaves generally takes about a second or maybe two depending on load.

    OK, not quite the same thing, but this works quite well for read-heavy applications, and is very reliable unless you get a slave out of sync.

    This was on v3.n.n - the good folks at MySQL have made many improvements to the replication facilities in the 4.n series I believe.
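The write-to-master, read-from-slaves discipline described above boils down to routing statements by their verb. A minimal sketch, with the "master"/"slave" labels standing in for real connections:

```python
def route(statement):
    """Pick a destination for a SQL statement: reads can go to any
    replicated slave, everything else must hit the master so that
    all writes flow through a single replication source."""
    verb = statement.lstrip().split(None, 1)[0].upper()
    if verb in ("SELECT", "SHOW"):
        return "slave"
    return "master"
```

Since slaves lag the master by a second or two, reads that must see a just-committed write (read-your-own-writes) would also have to be routed to the master, which this sketch does not handle.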
  • by u19925 ( 613350 ) on Thursday September 11, 2003 @07:12PM (#6938057)
    there are basically three types of clusters:

    1) shared nothing: in this, each computer is connected to the others only via a simple IP network. no disks are shared. each machine serves part of the data. these clusters don't work reliably when you have to do aggregations. e.g. if one of the machines fails and you try to do "avg()" and the data is spread across machines, the query will fail, since one of the machines is not available. most enterprise apps cannot work in this config without degradation. e.g. an IBM study showed that a 2-node cluster is slower and less reliable than a 1-node system when running SAP.

    IBM on windows and unix and MS uses this type of clustering (also called federated database approach or shared nothing approach).

    2) shared disk between two computers: in this case, there are multiple machines and multiple disks. each disk is connected to at least two computers. if one of the computers fails, the other takes over. no mainstream database uses this mode, but it is used by hp-nonstop. still, each machine serves up part of the data, and hence standard enterprise apps like SAP etc. cannot take advantage of clustering without a lot of modification.

    3) shared everything: in this, each disk is connected to all the machines in the cluster. any number of machines can fail and yet the system will keep running as long as at least one machine is up. this is used by Oracle. all the machines see all the data. standard apps like SAP etc. can be run in this kind of config with minor modification or no modification at all. this method is also used by IBM in their mainframe database (which outsells their windows and unix database by a huge margin). most enterprise apps are deployed in this type of cluster configuration.

    approach 1 is simpler from a hardware point of view. also, for database kernel writers, it is the easiest to implement. however, the user needs to break up the data judiciously and spread it across machines. also, adding or removing a node requires re-partitioning the data. mostly only custom apps which are fully aware of your partitioning will be able to take advantage.
    it is also easy to make it scale for a simple custom app, and so most TPC-C benchmarks are published in this configuration.

    approach 3 requires a special shared disk system. the database implementation is very complex. the kernel writers have to worry about two computers simultaneously accessing disks or overwriting each other's data etc. this is the thing that Oracle is pushing across all platforms and IBM is pushing for its mainframes.

    approach 2 is similar to approach 1 except that it adds redundancy and hence is more reliable.
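    The aggregation problem in case 1 is easy to see with a toy sketch (node names and row values below are invented for illustration): each node owns a horizontal partition of the rows, and avg() needs every partition to answer.

    ```python
    # Toy model of a shared-nothing cluster: each node holds a horizontal
    # partition of the rows; an aggregate like avg() needs all of them.
    partitions = {
        "node1": [10, 20, 30],   # rows living on node1
        "node2": [40, 50],       # rows living on node2
    }

    def cluster_avg(live_nodes):
        """Average across the whole table; aborts if any partition is unreachable."""
        values = []
        for node, rows in partitions.items():
            if node not in live_nodes:
                raise RuntimeError(f"{node} is down: partial data, avg() aborts")
            values.extend(rows)
        return sum(values) / len(values)

    print(cluster_avg({"node1", "node2"}))  # 30.0 - works while all nodes are up
    ```

    Kill node2 and the same query raises instead of returning a wrong average over half the data, which is exactly the degradation the parent describes.
    
    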
    • approach 3 requires a special shared disk system. the database implementation is very complex. the kernel writers have to worry about two computers simultaneously accessing disks or overwriting each other's data etc. this is the thing that Oracle is pushing across all platforms and IBM is pushing for its mainframes.

      I recently attended an Oracle Convention, and the one thing everyone (except Oracle) will admit about RAC is that it is very difficult to implement and very very expensive. Of the many vendors,

  • Das DB (Score:3, Informative)

    by Flip Chart ( 447050 ) * on Thursday September 11, 2003 @07:12PM (#6938058) Homepage Journal
    What about SAPDB? Isn't it a potential choice? I thought I read somewhere that MySQL and SAPDB were merging. Check it out: http://www.sapdb.org/
  • by allankim ( 558661 ) on Thursday September 11, 2003 @07:13PM (#6938059)
    I've been running a 3-4 node MySQL 3.23.x cluster on Slowlaris 9 since January. It has survived several catastrophic power outages and numerous other insults without a hiccup. Load is fairly light (about 3,000 updates daily and a similar number of queries on each server) so YMMV.
  • by Wheat ( 20250 ) on Thursday September 11, 2003 @07:22PM (#6938116) Homepage Journal
    ZEO [zope.org] will allow you to scale the ZODB [zope.org] (Zope Object Database) across multiple processors, machines, and networks. The ZODB is a Python object database, though, so porting your current database probably isn't an option. There are other limitations too - it's not always the fastest, and since it's an object database, concepts like foreign keys are not fully there - but it can give you high availability. As of the new Zope 2.7 beta, ZEO is quite easy to set up, and it is open source.
  • by pabos ( 706496 ) on Thursday September 11, 2003 @07:31PM (#6938153)

    Two options I haven't seen anyone mention yet are PostgreSQL eRServer 1.0+ (see PostgreSQL news item "PostgreSQL now has working, tested, scalable replication!" from August 28, 2003 [postgresql.org] or a lengthier press posting "PostgreSQL, Inc. Releases Open Source Replication Version" [postgresql.org]) and Backplane [backplane.com].

    eRServer has been in development for over two years, is used in production settings and is released under a BSD license (as with PostgreSQL). It uses a single master/multiple slave asynchronous replication scheme. There are cautions in the release that replication may be difficult to setup.

    Backplane seems to be particularly well-suited to clustering data quickly across a WAN. A quote may explain it better:

    The Backplane Open Source Database is a replicated, transactional, fault-tolerant relational database core. Currently supported on Linux and FreeBSD, Backplane is designed to run on a large number of small servers rather than a small number of large servers. With Backplane, it is possible to spread the database nodes widely, allowing database operations to work efficiently over WAN latencies while maintaining full transactional coherency across the entire replication group.
    Backplane's native quorum-based replication makes it easy to increase the database capacity (by simply adding a new node), re-synch a crashed server, or take down multiple nodes for maintenance (such as an entire co-location facility) - all without affecting the database availability.

    I haven't used either yet, but you may wish to give them a look.

  • by cubal ( 601223 ) <matt AT problemattic DOT net> on Thursday September 11, 2003 @07:36PM (#6938180) Homepage
    deviantart.com [deviantart.com], IIRC, runs about 3 mysql servers behind a load-balancing cache/server, so they have had to deal with a lot of the difficulties involved in that.
  • by Kunta Kinte ( 323399 ) on Thursday September 11, 2003 @07:37PM (#6938186) Journal
    MySQL Teams With Veritas, SGI on Clusters - http://www.eweek.com/article2/0,4149,1208538,00.asp [eweek.com] and http://www.mysql.com/press/release_2003_23.html [mysql.com]

    Supposedly should be out by now.

  • It's always hard to get this across to my clients, but you need first of all to answer the question "how much reliability/availability do you need?"

    Think of it this way: if what you need is no better than 99 percent, you can tolerate about 864 seconds of downtime a day. That's about 14 minutes, so you can practically handle it by doing a hand cutover.

    On the other hand, if you need availability of 99.999 percent ("five nines") you can only afford to have abou
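    For reference, the downtime budgets for a given availability target work out like this (plain arithmetic; a 365-day year is assumed):

    ```python
    # Seconds of permitted downtime for an availability target over a period.
    def downtime_budget(availability, period_seconds):
        """Seconds of allowed downtime within the given period."""
        return (1.0 - availability) * period_seconds

    DAY = 24 * 3600
    YEAR = 365 * DAY

    print(round(downtime_budget(0.99, DAY), 1))      # 864.0 seconds/day, ~14.4 min
    print(round(downtime_budget(0.99999, YEAR), 1))  # 315.4 seconds/YEAR, ~5.3 min
    ```

    Five nines leaves barely five minutes a year, which is why it rules out hand cutovers entirely.
    
    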
  • by koreth ( 409849 ) * on Thursday September 11, 2003 @08:02PM (#6938349)
    Why do you want clustering? Do you need to scale up transactions per second? If so, are these primarily reads or writes? The answer to that question can make a huge difference in your clustering and replication strategy.

    Clustering read-mostly data for performance reasons is relatively easy; for many applications, where a second or two of staleness on the replicated databases is acceptable, you can make do with a bunch of independent copies of the database, with all updates going to an authoritative database and getting replicated out from there asynchronously.

    If your data can be partitioned cleanly -- that is, if you have groups of tables that are never joined with tables in other groups -- then you can perhaps get some benefit from putting different data on different servers, with no replication required. Obviously that's only worthwhile if the query load is comparable between groups.
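    The partitioning idea above amounts to a tiny routing table in the application. A sketch, with hypothetical server names and table groups (the key constraint being that a query must never join across groups):

    ```python
    # Clean partitioning: each "group" of tables that are only ever joined with
    # each other lives wholly on one server. Names here are invented examples.
    TABLE_GROUPS = {
        "db1.example.com": {"users", "sessions"},
        "db2.example.com": {"products", "inventory"},
    }

    def server_for(tables):
        """Pick the single server holding every table the query touches."""
        for server, group in TABLE_GROUPS.items():
            if set(tables) <= group:
                return server
        raise ValueError("query joins across groups - partitioning won't help")

    print(server_for(["users", "sessions"]))  # db1.example.com
    ```

    The ValueError case is the whole catch: one cross-group join in the workload and this scheme falls apart.
    
    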

    If, on the other hand, you require ACID-compliant updates of all the replicants as a unit, you're entering difficult territory and you might have no choice but to go with a commercial solution depending on the specifics of your needs.

    At just about all of the places where I've done database programming where this has come up, we ended up buying a much beefier database server (lots of processors and memory, good I/O bandwidth, redundant networking and power supplies) with disk mirroring, rather than get into the headaches of replication. A big Sun or HP server is certainly more expensive than some mid-range Dell or no-name PC, but it may end up being cheaper than the engineering time you'd spend getting anything nearly as robust and high-performance on less expensive hardware.

    I've also found that very often when there's a database bottleneck that looks like it requires bigger hardware, the problem is the data model or the queries (unnecessary joins, no indexes where they're needed, poorly-thought-out normalization, etc.) or the physical layout of the data (indexes competing with data for access to the same disk, fragmentation in indexes/data, frequently-used tables spaced far apart on disk.)

    If I'm dealing with Oracle, sometimes the solution is as simple as adding an optimizer hint to make the query do its joins in a sensible way. Sometimes denormalization is helpful, though you want to be careful with that. Sometimes a small amount of data caching in the application can mean a tremendous decrease in database load. And so on.

    If you can tell us more about the specifics of your situation, there are lots of people here who can offer more specific advice.

  • by Anonymous Coward
    Don't know how DB2 ICE would do compared to Open Source solutions but take a look at the interesting results of the recent TPC-H benchmark performance testing [ibm.com] on Clustered and non-Clustered 100GB and 300GB configurations. It appears that the IBM DB2 Integrated Cluster Environment (DB2 ICE) for Linux is heads above the rest.
    • I run DB2 on Linux.
      It's been the largest pain in the ass I've ever had managing servers.
      MySQL spanks DB2, as does postgreSQL.
      Our DB2 on Linux crashed so much we spent months before we had a production ready system. We were replacing PostgreSQL and we had to rethink everything. It couldn't handle our insert load, and we were going from 4 dual 733 intel boxes to two large quad xeon boxes with 15,000 rpm disks.
      We spent $100,000 on DB2 license (that with the discounted half price DB2 EEE for linux). We
      • Ouch, sounds like you should have gotten an experienced dba to set it up for you. DB2's too complex to go with simple defaults, and clustering is definitely a high-skills endeavor.

        As far as insert loads go, we've seen 500 rows / second on five year old hardware without any problems. Although that's far short of what DB2 is capable of, it's fine for a sustained load. Beyond that, batch loads hit 15,000 rows per second easily on the same box.

        And as far as pricing goes, today you could get DB2 Express for
  • by jerryasher ( 151512 ) on Thursday September 11, 2003 @08:42PM (#6938662)
    PDF blows.

    I hate PDF links. On Windows the experience is great: let's come to a complete halt as I watch CPU load hit 100%, wait for a splash screen, and watch the damned thing decide to show me the text at 245% zoom.

    What a load of shit.

    What's wrong with HTML as a virus free, pleasant to experience, documentation format?

    Just say no to PDF.
  • by Mine_Field ( 706524 ) on Thursday September 11, 2003 @09:25PM (#6939007)
    Here is a description of a Cluster created on MySQL with Linux boxes - similar to Google. http://www.dwreview.com/Product_Reviews/Review_Data.html and http://www.dwreview.com/Data_mining/Intelligent_DataMining.html
  • by Anonymous Coward on Thursday September 11, 2003 @09:50PM (#6939147)
    I maintain a site that does a fair bit of traffic (Daily avgs: files served = 1.8 Million, bandwidth = 20 Gigs)

    We have 1 "master" MySQL server which gets all updates and inserts, etc. We have 2 "slave" servers which each take a significant portion of the select queries. All machines run the same 4.0.x version of MySQL. (Web access is PHP on Apache.) All machines are dual x86s packed with RAM.

    Setting up replication is pretty easy. And for the most part things are pretty nice. The load average drops a lot on each machine when we add a new slave. (Oh don't forget to enable query caching.)

    We have had some problems though. Because the site gets so much traffic, sometimes queries take a while to run and to propagate to the slave servers. This means if you update your data (via the master) and then do a select from one of the slaves, your change may not have shown up yet. For most web apps this might not seem like a big deal.

    But it leads to web users changing things and not seeing the results right away. So they figure the site is "broken" and repeat what they just did, only to have it happen twice. If you have your browser refresh the page first, usually the data has come through, but many people don't do this. The result is they don't feel their account has been credited, or something. These kinds of bugs are hard to track down too.

    I wrote a program to check repeatedly (sleeping from 1/4 to 1/25 of a second in between) and the slaves were almost always in perfect sync with the master (as per MySQL's binary log position indicator). That was really impressive; however, there are times under load when the slaves will be out of sync for 30 to 60+ seconds! (Measuring in the tens of thousands of bytes of offset difference in the binary log position.)

    The solution we've been using is that any time there is an update to the database and the immediate page seen next by the user relies on the changed data, we do the selects from the master server. This seems to work for now but I'm not sure how long we will be able to scale this way.
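    That "read your own writes from the master" trick can be centralized in the data-access layer so individual pages don't have to think about it. A sketch (host names and the 60-second grace window are invented; the window just needs to exceed your worst observed replication lag):

    ```python
    import time

    # After a user writes, pin that user's reads to the master for a grace
    # window longer than the worst-case slave lag; otherwise spread reads
    # across the slaves. All names/values here are illustrative.
    MASTER = "master"
    SLAVES = ["slave1", "slave2"]
    GRACE = 60.0                 # seconds; should exceed observed slave lag
    _last_write = {}             # user id -> timestamp of that user's last write

    def record_write(user):
        """Call whenever this user issues an UPDATE/INSERT via the master."""
        _last_write[user] = time.time()

    def pick_read_server(user, now=None):
        """Master if the user wrote recently, else a slave chosen by user."""
        now = time.time() if now is None else now
        if now - _last_write.get(user, float("-inf")) < GRACE:
            return MASTER                            # read-your-own-writes
        return SLAVES[hash(user) % len(SLAVES)]      # steady-state read load
    ```

    The application calls record_write() alongside every write and pick_read_server() before every select; users who haven't written recently never touch the master.
    
    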

    In summary, so long as the load on the machines stays around 1.0 or lower, everything runs pretty smooth. If the loads hit 3 to 5 or higher then people notice (or rather mention) that things seem odd. (By the way, those are linux load averages, which IIRC are calculated differently than under Solaris.)

    What I would like to see is a virtual server type system where one machine accepts all queries and hands them out to a set of replicating servers without requiring the application to know about it. This is nice for developing applications but the real reason is the master can then prevent the syncing issues discussed above.

    SF
  • by texasrocket ( 706553 ) on Thursday September 11, 2003 @11:24PM (#6939695)
    I have personally installed, set up and maintained a 5 node (3 slaves, 1 master/slave, 1 slave/master) cluster using Heartbeat and MySQL replication. It works great!! My guess is that 80% of MY MySQL usage is content and needs READ-ONLY access. So I have 3 slaves that are used in a Read-Only cluster. The master is one of 2 other machines and ALL WRITES go to it. In the event of the MASTER db going down, the remaining slave promotes itself and updates the other slaves to point to itself. Been working great for 8 months!!!
  • by codepunk ( 167897 ) on Friday September 12, 2003 @01:02AM (#6940101)
    I run two types of clusters, one of them is a RAC 9i on Linux. Nothing and I mean nothing has the functionality of RAC 9i. You can put a bullet through one of the nodes right in the middle of a query being returned and still get your records just like nothing ever happened. The other database I run is postgresql on redhat advanced server, with the database files sym linked into the san (this is high availability only). If I had to do it again I would not use postgresql because it scales for shit and I cannot under any circumstances keep it up in a 24/7 configuration. The database needs to have vacuum run on it once a day and I have to do that manually because half the time it fails. Running a vacuum on the database while clients are connected basically locks everyone up tight until it is finished.

    If you cannot spend any money and want a fast, scalable and highly available system, my advice is first sapdb and/or mysql, and advanced server on some sort of shared scsi.

    Now all of you big postgresql advocates flame away but it does not change the facts. I love the database but if you need heavy lifting it just does not cut the mustard.
    • Just wondering, but are you on an older flavor of Postgresql? Most of the issues you mention (i.e. vacuum slowing things down) have been fixed for quite some time.

      Also, if you haven't bothered to tune your postgresql.conf file on an older install, it will run for shit. I.e. the default settings are for a small workgroup type setup, not enterprise class stuff.

      Keep in mind, Afilias runs the .org and .info tlds on postgresql, so it can't be impossible to get 24/7 operation out of, or the .org domain would
  • by BiOFH ( 267622 ) on Friday September 12, 2003 @01:36AM (#6940207)
    http://www.livejournal.com/community/lj_maintenance/60984.html
