Ask Slashdot: Optimizing Apache/MySQL for a Production Environment 143
treilly asks: "In the coming weeks, the startup company I work for will be rolling out a couple of Linux boxes as production webservers running Apache and MySQL. Management was quick to realize the benefits of Linux, but I was recently asked: "Now that we're rolling out these servers, how do we optimize out-of-the-box Red Hat 6.0 machines as high-performance web and database servers in a hosting environment?"
Optimisation of Apache/db (Score:1)
Watch mysqld's nice level. (Score:1)
A favor please. (Score:1)
I ask that you document your development, specifically focusing on any novel solutions you found that increased performance. A faster CPU doesn't count (sheesh). Also, put this info together in a concise, readable format and provide it to the Apache site or the Linux Tuning site (I forget the URL at the moment). It's very important that work of this type be formally documented and accessible.
Performance tuning (Score:1)
Hardware Tuning:
- Use a caching RAID controller, fully populated with cache, configured for RAID level 0+1.
- Use IBM or Seagate 10,000 RPM SCSI drives with lots of cache.
- Consider multiple SCSI cards (or channels) to separate the OS + logs, indexes, and data files onto separate RAID arrays.
- Also strongly consider using separate web and database servers, so each can be fully optimized for its job.
- Obviously, use as much RAM as you can afford (preferably 100 or 133 MHz).
- Use multi-processor computers for the database and web servers.
- Connect the web and database servers using a back end network, separate from the internet connection.
OS Tuning:
I'm not terribly familiar with tuning the Linux OS, but I suspect that there are many resources already available.
In general you'll want to:
- Optimize the block size on your RAID arrays for maximum performance (trial and error using bonnie or the like).
- Optimize the amount of memory used for cache.
Application Tuning:
- Look at http://www.mysql.org/Manual_chapter/manual_Perfor for information on tuning MySQL.
- Look at http://www.apache.org/docs/misc/perf-tuning.html for information on tuning apache.
I would first suggest using PHP, but barring that, I would definitely use mod_perl. There are probably a lot of other sources for tuning Apache available on the Internet.
A suggestion: tune the number of server children to the number of available processors.
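For example, a minimal httpd.conf sketch for the process pool (the numbers here are only guesses to tune against your real traffic):

    # httpd.conf -- process-pool settings, adjust to your load and RAM
    StartServers        10    # children forked at startup
    MinSpareServers      5    # keep at least this many idle children around
    MaxSpareServers     20    # kill idle children beyond this count
    MaxClients         150    # hard cap on simultaneous children
    MaxRequestsPerChild 500   # recycle children to contain slow leaks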
SQL/other Tuning:
Understanding how to properly build tables and indexes is somewhat of an art, but you can really make or break the whole site with proper use of SQL and indexes. I'd either spend some time learning table/index design and coding SQL for performance, or consult someone who knows.
Hope this helps a little bit.
Jerry
jerry@bellnetworks.net
Check out the optimization tips page at apache.org (Score:4)
Also Dan Kegel wrote an interesting web page in response to the whole Mindcraft NT/IIS vs. Apache/Linux fiasco and on that page are several detailed measures to improve Apache's performance under Linux:
Dan Kegel's Mindcraft Redux page [kegel.com]
Apache Week 'zine [apacheweek.com]
Here's how I handle several million hits per day.. (Score:5)
Sincerely,
Rob Malda
Re:General purpose advice (Score:2)
Not only that, turn them off (AllowOverride None, IIRC). If you simply don't use them but have them enabled anyway, you pay the price WRT all the stat(2) calls the server does looking for them.
This is all IIRC, but I usually have a good memory. Then again, I did just wake up.
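For the record, the relevant httpd.conf fragment looks something like this (the DocumentRoot path is just a placeholder):

    <Directory /home/httpd/html>
        AllowOverride None      # never go looking for .htaccess -- saves a stat() per directory level
        Options FollowSymLinks  # also skips the per-request symlink checks
    </Directory>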
Re:MySQL vs PostgreSQL (Score:2)
There's obviously more to it than that, but I'm not aware of any specific comparisons...
fhttpd (Score:1)
My fhttpd [fhttpd.org], in combination with MySQL and PHP, can be considered too -- it allows some configuration options and optimizations that Apache doesn't provide -- you can limit the number of connections to the database, use separate userids for sets of scripts, etc. If you want even more performance, a program in C or C++ can be written as an fhttpd module, and the API [fhttpd.org] is much easier to use than Apache's.
Clarification of LIKE vs. == (Score:2)
RAM & RAID 1+0 is your friend. (Score:3)
As for optimization, definitely check your queries and always use keyed fields and exact-match (=) queries. Doing LIKE queries will kill your performance to the point of being unusable on decently large tables (>100k records). Definitely read the MySQL docs concerning RAM usage and the various switches to optimize its RAM usage. That is extremely important.
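For instance, something along these lines (the table and column names are made up for illustration):

    -- index the column you search on
    CREATE INDEX idx_username ON users (username);

    -- fast: exact match against the indexed column
    SELECT id FROM users WHERE username = 'treilly';

    -- slow: a leading-wildcard LIKE can't use the index and scans the whole table
    SELECT id FROM users WHERE username LIKE '%reilly';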
As for Apache, avoid .htaccess files at all costs and only compile in required modules. Also check the tuning FAQ mentioned above.
Idiot of the year award (Score:2)
BTW, programmers write programs, not text. So "HTML Programmer" is a misnomer in the first place -- that should be "HTML page creator".
-E
Re:Optimizations (Score:1)
Perfomance tuning and Availability (Score:1)
On the database side there's a feature of mysqld called --log-update. Call it using mysqld --log-update=/usr/mysql/update_logs/update. This will create a log of everything that changes in your DB, which can be replayed back through the mysql monitor. To go along with this, every time you call `mysqladmin flush-logs` a new update file will be created as update.# - where # increases for each call. At this point there are quite a few scripts written to insert this log file into another DB - most of them use Perl DBI.
To increase the performance of your setup there are several options noted in the mysql manual. But none of them will do a whole lot of good if the queries and tables you construct are poorly designed and indexed.
Depending on what scripting language you use there's probably a way to compile it into Apache, whether it be mod_perl, PyApache or PHP. I would plan on doing this. A good way to speed up your system after this is to run 2 httpd servers.
For the first server, compile plain Apache with mod_rewrite and proxy support; for the second, compile in your application support. If you put all your applications in one directory you can easily proxy to them with ProxyPass.
ex. ProxyPass /perl http://server:88/perl
This way things like images and HTML will be served by a webserver that only takes up 400-500K instead of one that could take up 10M-20M depending on how many scripts and libraries are in memory (of course some of that is shared). When your server gets hit hard you'll probably find you end up with maybe 5-10 times as many plain servers as application servers this way.
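A rough sketch of the front-end config, assuming the heavyweight application Apache listens on port 88 (hostname and paths are placeholders):

    # front-end httpd.conf: static files served locally, dynamic stuff proxied
    ProxyPass        /perl/  http://appserver:88/perl/
    ProxyPassReverse /perl/  http://appserver:88/perl/
    # everything else (images, plain HTML) never touches the big mod_perl server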
A more advanced thing with this setup is to utilize your backup server. It will take a little work, but you could have Apache proxy to a list of application servers that exist in a config file, and have this config file altered based on system availability. At this point, though, it may just be easier to get a LocalDirector, unless your organization is really strapped for cash.
You could start it out as a module (Score:1)
You could of course start it out as a module - forgetting the CGI version. If you're leaking memory during development, keep 'MaxRequestsPerChild' at a very low value - 10 or maybe 1 even. Then increase it to 100 or whatever when your leaks are under control.
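In httpd.conf that would look something like this (the numbers are just a starting point):

    # development: recycle each child quickly so leaks can't pile up
    MaxRequestsPerChild 10
    # production, once the leaks are under control:
    # MaxRequestsPerChild 1000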
Re:Software RAID? (Score:1)
Think about it... do you want to do 64k XOR's 32-bits at a time on a single strongarm 233 (at best) or half/full cache lines at a time using SIMD on an SMP?
The hardware solution. Those cache lines are filled with useless data that should never have been competing with my process data. Hell it should have been off of my local bus by now, instead of filling my cache. Yeah your SIMD can take care of entire cache lines but it's gotta shove it back out to the bus, not to mention compete with DMA and system interrupts in doing so.
I bet you think that WinModems and WinPrinters are better for the same reason, because your P3/500 has more power than the little microcontroller and/or DSP that are present on full hardware-based solutions.
A carefully designed hardware solution will beat out a software solution every time in terms of speed. If my processor (or any number of them) is busy looking after the drive array, that's one less thing that it could be doing for me.
Your comment on XORs is bullshit. A RAID controller could be designed with an integer DSP which could blow your P3/500 Xeon out of the water in terms of integer operations, especially something as mundane as XOR and parity checking.
Actually I think that's exactly what my DPT RC4040 cache/RAID module does. The host processor on the PM2044UW is a Motorola 68k, but that's only for bus mastering and host functions. All the RAID is done on custom silicon. I've got 64 megs of cache on the module, meaning it could very likely chew through more data faster than your cache subsystem could keep refilling a dozen times over, keeping in mind that you use your moderately-loaded system for something other than a RAID controller.
In a similar vein, when one of my drives finally opens up a black hole and all the data on it disappears into oblivion, the hardware controller works on keeping the system stable while rebuilding the array, where your processor would now be spending even more processor resources doing the same.
Remember: every time you do something in software it's using CPU time that could be used to actually run the computer, rather than run the peripherals. And the raw speed calculations are bullshit because the CPU has many, many other things to do than just fill its cache lines with SCSI data.
Re:Einstein of the year award (Score:1)
Tuning webservers (Score:3)
0) If you have LOTS of RAM, compile Apache, MySQL and optionally Squid with EGCS+PGCC at -O6. The extra speed helps.
1) Guesstimate the number of simultaneous connections I'm likely to have.
2) Guesstimate how much of the data is going to be dynamic, and how much static.
3) IF (static > dynamic) THEN install Squid and configure it as an accelerator on the same machine. Give most of the memory over to Squid, and configure a minimal number of httpd servers. You'll only need them for accesses of new data, or data that's expired from the cache. (A rough squid.conf sketch follows this list.)
4) IF (static < dynamic) THEN skip the accelerator and give the memory to the httpd and database processes instead.
5) If you've plenty of spare memory, after all of this, compile the kernel with EGCS+PGCC at -O6, but check its reliability. It's not really designed for such heavy optimisation, but if it works ok, the speed will come in handy.
NOTE: Ramping up the compiler optimiser flag to -O6 does improve performance, but it also costs memory. If you've the RAM to spare, it is sometimes worth it.
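For item 3, a rough squid.conf sketch for accelerator mode (Squid 2.x directives; the port and memory numbers are placeholders, and this assumes Apache has been moved to 8080 on the same box):

    # squid.conf -- Squid answers on the public port, Apache hides behind it
    http_port 80
    httpd_accel_host localhost     # forward cache misses to the local Apache...
    httpd_accel_port 8080          # ...which now listens on 8080
    httpd_accel_with_proxy off
    cache_mem 256 MB               # give Squid the bulk of the RAM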
Re:Here's how I handle several million hits per da (Score:1)
To all you bitter schmucks with nothing else to do (Score:2)
Trust me
So, as I said, to everyone who has so little better to do than scan Slashdot waiting for opportunities to flame others (under Anonymous Coward status), screw off.
Cordially yours,
David
Re:Performance tips for Apache... (Score:1)
Second benefit: if you really get low on memory, you're fucked with a 1 Gig RAM disk, whereas the disk-cache will quickly be thrown away and used for whatever memory hog you have running.
ram-disks are good for booting over-modularized kernels _only_.
Re:Software RAID? (Score:1)
It's also more flexible (think RAID of RAID of network block-devices).
It's also cheaper.
It has features that some hardware controllers don't even have (like background initialization).
What kind of idiot talks about software RAID without knowing jack about it ?
Re:Idiot of the year award (Score:1)
Production environment (Score:1)
When I set up a production environment (regardless of operating system chosen), the first step is always to have policy and change control.
Change Control
You must have change control or you will suffer downtime. Downtime represents a transaction rate of 0 trans/sec, which is clearly unacceptable.
Development and Acceptance Test
You must have a separate machine for development, and another machine for acceptance test. Of the two machines, only the A.T. machine must be identical to your production server. Otherwise A.T. simply cannot replicate the environment you're going to test, and thus any testing is at best misleading, or at worst, completely invalid.
You must create a set of repeatable build instructions that takes you from a fresh blank machine to a stable, reliable, working production system. And you should have a set of tests that thoroughly gives the resulting systems a complete workout, including sustained load, boundary conditions (such as empty rows), and attacks against the system whilst trying to continue to process transactions.
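For the sustained-load part, even something as simple as ApacheBench (the ab program that ships with Apache) gives you a repeatable baseline; the URL and numbers below are obviously placeholders:

    # 100,000 requests, 50 at a time, against the acceptance-test box
    ab -n 100000 -c 50 http://at-server/some/dynamic/page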
Finally, the best advice I can give you is don't skimp on reliability and availability. Buy RAID with hot rebuild. Buy a server with redundant PSU's, and not a handmade machine. Buy an additional NIC per machine, and put that on a different switch - dual path everything.
In terms of SQL and web based stuff, from a security standpoint, it's always advisable to have your SQL server behind a firewall (or at least on a separate private network).
In terms of speed, I've always found that having enough RAM to allow several outer joins to complete in RAM really helps. As someone else mentioned, it's a good idea to index columns you select on a regular basis.
Make sure you can dump the database online - stopping the dbms whilst a dump takes place is unacceptable; if it takes 30 minutes, that's reduced your availability from near 100% to 97.9%. That's bad.
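With MySQL, for example, mysqldump can do this while the daemon stays up (database name and path are placeholders; note that writes to a locked table will queue while it's being dumped):

    mysqldump --quick --lock-tables mydb > /backup/mydb-`date +%Y%m%d`.sql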
Good luck!
Re:RAM & RAID 1+0 is your friend. (Score:1)
Is this truly less efficient than a database query for username/password type access?
Re:All the money in the world (Score:1)
I have no experience in Cold Fusion so I'll decline comment on that.
I'd caution anyone trying Netscape's web servers on HP, though. In my experience it likes to run away a lot. I've been away from the environment with that configuration for a while and maybe they've upgraded the OS or servers, but we were seeing it hog 90% of our CPU several times a day.
Re:3 words... (Score:1)
I guess, if that's still true, it just shows what kind of Unix admins/programmers work for MS
Re:Software RAID? (Score:1)
Ya know, I truly find it hilarious that people believe hardware raid has some huge benefit...
I've been using software RAID, both over plain SCSI and over hardware RAID, in production servers for quite some time... hardware RAID, even fast controllers like the DAC1164P, are going to get smoked at something like RAID 5...
Think about it... do you want to do 64k XOR's 32-bits at a time on a single strongarm 233 (at best) or half/full cache lines at a time using SIMD on an SMP?
The best approach so far has been to allow hardware to handle RAID 0 for simple striping and disk management, and leave the XOR's and large chunks to your main processors (after all, this is all streaming, so prefetching helps a good bit) if you can afford the cycles.
Refer to linux-raid archives and my performance postings there with any questions
Re:Depending on if your site is read-only or not (Score:1)
Nothing could be further from correct.
If read-only, raid 1+0 allows striping reads across all physical drives, so you get the performance benefit of raid0 with the mirroring (and drive death survival) of raid1. If you don't care about data redundancy (you might want to care about making sure your site is available), a pure raid0 will still get data off drives faster than a single drive.
Of course, I'm a strong software raid advocate, with a switch to hardware when it's cheaper to offload those cycles to other chips rather than speed up (or increase the number of) main processor(s).
Sites that have a lot of writes, OTOH, have to balance data amount available vs. performance (etc) wrt raid1, 5, or 10.
Re:RAM & RAID 1+0 is your friend. (Score:1)
Re:RAID! (Score:2)
First, PLEASE don't point people to that horrible HOWTO... hopefully Linus will soon accept the real software RAID code; the current versions (and HOWTO) are available over at:
http://metalab.unc.edu/pub/Linux/kernel.org/pub/land
http://ostenfeld.dk/~jakob/Software-RAID.HOWTO/
Second, realize 0+1 (typically 1+0, or RAID 10) only gives you half of the total physical space as effective space... sometimes you can afford that, sometimes you can't... and you still generate the SCSI bus loads of the full drive set :)
In the very typical (especially in these situations) case of reading the databases, it's worth agreeing that 1+0 becomes 0+0 (since you can split reads across a raid1, assuming no failed drives)
Last, as a side note on the MySQL part, try to use isamchk (if the db server can have any down time) for pre-sorting your database instead of doing the sorting as part of your SQL.
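Something like this, run while mysqld is stopped (paths and the table name are placeholders; check the exact flags against your MySQL version, and isamchk must have the table files to itself):

    isamchk --sort-index /var/lib/mysql/mydb/mytable.ISM       # order the index tree
    isamchk --sort-records=1 /var/lib/mysql/mydb/mytable.ISM   # order the data rows by index #1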
All the money in the world (Score:1)
Such as Netscape/Cold Fusion/Oracle/Sun?
Besides not being able to call on the experience of all of you guys when the going gets tough, what are the other drawbacks besides the obvious (MONEY)?
MySQL is not a solution for me. It lacks many features that Sybase or Oracle provide (can you say TRANSACTIONS?). Netscape and Cold Fusion have better integration of security. Has a benchmark been done on PHP vs Cold Fusion? PHP seems to be able to handle Cold Fusion's role pretty well according to PHP's site.
Is the answer truly a mishmash of both? Pay for Netscape for the SSL and Oracle for the STUD (I still like Sybase better) of a database that it really is, but go freeware where you can?
Just looking for a couple of good opinions.
a couple ideas (Score:3)
Some other ideas are to split image serving onto its own Apache, not necessarily its own box. This Apache can be completely pared down to the absolute minimum modules, since all it will be doing is serving up static images. It also lets the cache be used efficiently, since mostly the common images will be stored, as opposed to common images contending with common text files for cache space when images and content are served from the same Apache.
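One way to run that second, pared-down instance on the same box (ports and paths here are placeholders):

    # /usr/local/apache-images/conf/httpd.conf -- the image-only instance
    Port 8081                        # or bind a second IP address instead
    DocumentRoot /home/httpd/images
    KeepAlive On                     # pages pull several images, so reuse the connection
    MaxClients 100

    # started alongside the main server with its own config file:
    #   /usr/local/apache/bin/httpd -f /usr/local/apache-images/conf/httpd.conf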
Also, what are you using in Apache to create dynamic pages and connect to the db? Use long-running processes where possible, which means pick mod_perl, PHP, FastCGI, servlets, etc. over plain CGI scripts. This will save you lots of cycles and also let you have persistent db connections. Always a very good thing.
Taking the splitting out of machines to the next level, you could also try splitting all of your dynamic content onto its own machine, mod_proxied through your front-end Apaches. This makes the front ends very small since they barely need any modules installed at all. It also gets some extra performance out of your dynamic-content Apaches. Of course you're running a lot of boxes now. :)
Read this [apache.org] if you're running mod_perl. And read this [mysql.org] to optimize your db.
Linux and Databases (Score:2)
Some of the things you might need to know: if you're going to do some serious databases, I recommend you spend more money on faster hard disks (SCSI preferable) and multiple disks (Oracle runs very nicely with the database spanned over 3-4 disks and the program running on another disk -- partitions won't do). Have a generous amount of RAM and swap. If you're making this a database box, don't use it for anything else. Even hosting a web server is not a good idea (as far as I'm concerned). Use WebDB if you like and host the database box separately with just the database running as the main application.
Make sure you have a stable kernel. Make sure you have a secure system. Use ipchains to block out anything but local traffic and remove telnet and other daemons. Security is something a lot of people forget when building large databases.
Make sure you make daily, if not hourly, backups (based on how sensitive your data is). RAID is a good way to keep your system running. Also, if your database is web based, you might need to have 2 or 3 boxes set up identically with database queries distributed over all of them.
With Oracle, read everything; they have a lot of tweaks listed in the PDF files and documents that come with the dist. Read all of them. Some tweaks are to the kernel, so pick a good stable kernel and stick to it. Forget about monthly kernel upgrades; I recommend upgrading yearly or every 6 months. Software wise, if you're doing Oracle 8i, make sure it's a glibc 2.1 system (RH6 or Debian potato -- we use potato, even though it's unstable, because it lets us tweak the system and gives us the most familiar interface).
On MySQL, it might help to read some of the online tweaks; also it might be a good idea to compile the server yourself instead of using the one that came with your dist, or compile it and copy it over what came with your distribution. Don't use mSQL unless there is no other way to do it.
And good luck.
--
Re:RAM & RAID 1+0 is your friend. (Score:2)
-Evan
Re:mulitprocessor hardware? (Score:1)
As for multiprocessor hardware, Linux works just fine for me. I'm writing this on a dual P3, and my other workstation is a dual PPro. I haven't tried it on boxes with > 2 processors though. For a web server, more processors are unlikely to get you any benefit, however. I'm pretty sure that Apache on a single processor will easily saturate your network bandwidth, no matter what it is. Now if you're doing really complex CGIs, like, for example, some kind of real-time stock calculations that require a lot of processing, then multiple processors might help. But if this is the case, I'd probably advocate hooking up several boxes in parallel (Mosix [huji.ac.il] is designed for this) and farming your CGIs out to idle processors on separate machines. Your database might also benefit from multiple processors, but (for a properly indexed DB) probably only in extreme cases (very, very large DBs), and if so, you should have it on a separate machine too. In general, spend the extra money on RAM instead of another processor. Your clients will thank you :-)
Re:MySQL, ?? (Score:2)
The caveat to this, of course, is that you must know how to set up your database right. I recently had an opportunity to play around with a fairly large db (upwards of 400,000 records) on mySQL. The records represent people, and some of the fields are birth month, birth date, last name and first name. I wanted to select last and first names for people who were born today. So, with no indexes, the query selected about 600 records, and took 11.8 seconds. Yes, that's right, 11.8 seconds. I was floored! Here's me thinking "mySQL's fast! It'll work great!" Well.
So then I went back through and indexed (birth month, birth date), checked that I had done it right with EXPLAIN, and ran the exact same query again. This time it took 0.8 seconds. A total time savings of 11 seconds. I learned an important lesson that day... Always index everything you're going to use as a key! With this in mind, mySQL is indeed damn fast, and low overhead.
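In case it helps anyone, the fix was basically this (table and column names are guesses based on my description above):

    ALTER TABLE people ADD INDEX idx_birthday (birth_month, birth_date);

    -- check that the optimizer actually uses the index before trusting the timing
    EXPLAIN SELECT last_name, first_name
              FROM people
             WHERE birth_month = 6 AND birth_date = 23;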
Now, the other thing I can't really speak to is reliability. mySQL doesn't really support referential integrity, and I guess it's up to you whether you need it or not. I've seen my share of M$-trained database folks who use CASCADE as a cheap crutch to paper over their bad code. Rather than write queries that do what they really want them to do, they just spend the extra overhead to have CASCADEs do it for them. I've also seen times where this was crucial to a db's function. Either way, it's something to consider. I've also never seen mySQL handle failure, or had to rebuild it after one. Whatever you use, your strategy should account for this possibility, in any case.
Logo is, Cobol is, JavaScript is - HTML is NOT (Score:1)
HTML is N O T a programming language (Score:1)
You can't put logic on it. Not without JavaScript, etc....
Re:Flamebait? (Score:2)
Does it have to be Apache? (Score:1)
MySQL, ?? (Score:1)
Re:Performance tips for Apache... (Score:2)
General purpose advice (Score:1)
Re:General purpose advice (Score:1)
Re:Major Performance Boost (Score:1)
Some (I use Roxen Challenger [roxen.com]) use a single-process approach.
They "compile" and then embed your scripts into the main process, and so you save time because you don't need to fire up the interpreter.
Also, because of the long-lived, single-process approach, you can share the DB connections among your scripts, and most of all cache
mSQL vs. mySQL (Score:1)
--Andrew Grossman
grossdog@dartmouth.edu
Re:Idiot of the year award (Score:1)
If not, you're probably writing it in text. And before you start some "it's not real programming unless it's compiled" rant, tell it to a perl hacker...
Major Performance Boost (Score:4)
Okay, this is how I generally do it. First of all, I suppose that you're using Perl, so these tips are for a Perl/Apache/MySql environment.
1) Use mod_perl so that your script doesn't need a whole Perl interpreter for each separate instance in memory. The performance boost is just incredible...
2) Use Apache::DBI. It will prevent your script from connecting to and disconnecting from your DB each time it's called, using a persistent database connection instead. Great for performance (a minimal sketch follows below).
There are some other tweaks that you can do. If you're interested, just let me know [mailto]...
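A minimal sketch of the two together (the DSN, username and password are placeholders):

    # httpd.conf -- load Apache::DBI *before* anything else that uses DBI
    PerlModule Apache::DBI

    # in the mod_perl script, connect exactly as usual; Apache::DBI quietly
    # hands back the cached connection for this httpd child
    my $dbh = DBI->connect('DBI:mysql:mydb:localhost', 'webuser', 'secret',
                           { RaiseError => 1 });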
Wintermute
Ready-made solutions (Score:3)
Oh wow!! Rob actually posted! (Score:1)
"Only one thing is for sure in the Universe: me"
--Corndog
Does Squid need 2 be on the same server as Apache? (Score:1)
User computer --> Squid server --> Apache server --> DB server?
Why Raid 5 and MySQL? (Score:1)
A separate server for the database? (Score:1)
Is it faster to put Apache and MySQL on separate Linux boxes, connected via 100Base-T? What sort of performance hit would we get if we put it all on one box? What about one box with double the RAM? Thanks in advance for your help.
Ryan
Re:3 words... (Score:1)
Re:Optimizations (Score:1)
Re:Optimizations (Score:1)
OTOH, I'm sure that the binaries from TcX are probably fully optimized and would be the best source if you don't want to, or are unable to, compile them yourself.
Re:What's that? (Score:1)
Through simple (and more complex) testing, we found what works best. Personally, I wanted both servers to be FreeBSD, but we found that Linux had a significant advantage when used as the SQL server (see some basic test results here [fxp.org]).
You don't need a PhD to figure out that you should use what works best. I trust FreeBSD implicitly with the web serving because of its stability and speed WRT web serving. OTOH, I trust Linux for the SQL server due to its stability and speed WRT the SQL server. As stupid as it may sound... use the best tool for the job. Both OSes have their strengths; people should be emphasizing what each *CAN* do instead of bickering over what the other can't.
Re:Optimizations (Score:1)
Optimizations (Score:4)
Our company uses Apache, MySQL, and PHP extensively (and exclusively). You can't beat the price/performance ($0.00 / excellent == great value). Through our research, we settled on the following combination:
Any questions/comments can be directed to me. Flames directed to
Re:Optimisation of Apache/db (Score:1)
Re:Optimisation of Apache/db (Score:1)
RAID! (Score:1)
Nice starting point if you are on a budget:
Software RAID mini-HOWTO [unc.edu]
Also take a look at:
Linux High-Availability HOWTO [metalab.unc.edu]
Re:RAID! (Score:1)
Re:You could start it out as a module (Score:1)
If you don't want to learn how to construct a module, but would rather stick to the CGI protocol, mod_perl can still help you through Apache::Registry. It keeps your cgi scripts precompiled and ready to go, and you can still take advantage of persistent database connections. The downsides? Increased memory consumption for each httpd process, and more attention must be paid to initializing variables that are no longer wiped clean between requests.
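The usual httpd.conf stanza for this looks something like the following (the paths are placeholders):

    Alias /perl/ /home/httpd/perl/
    <Location /perl>
        SetHandler  perl-script
        PerlHandler Apache::Registry
        Options     ExecCGI
        PerlSendHeader On    # scripts keep printing their own headers, as under plain CGI
    </Location>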
Testing, testing, testing (Score:1)
Also often overlooked: tuning the filesystem for caching and the like (file descriptors) and networking (maximum connections).
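On Linux 2.2 that mostly means the /proc limits and the per-process descriptor limit; the numbers below are only illustrative:

    # system-wide file descriptor limits (run as root)
    echo 16384 > /proc/sys/fs/file-max
    echo 49152 > /proc/sys/fs/inode-max   # conventionally kept at about 3x file-max
    # per-process limit, set in the shell that starts httpd/mysqld
    ulimit -n 4096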
Possibly most of all, when I've seen performance problems, it's been due to how the code was written.
Re:Flamebait? (Score:1)
--
A mind is a terrible thing to taste.
Re:Optimizations (Score:1)
The binaries (Intel) you can get from www.mysql.com are already compiled with pgcc -O6 and statically linked.
Performance tips for Apache... (Score:2)
Re:All the money in the world (Score:2)
MySQL vs PostgreSQL (Score:1)
Re:Flamebait? (Score:1)
Re:Tuning webservers (Score:1)
Well, here's what I know (for what it's worth) (Score:2)
Of course, neither of those sites is particularly busy and I'm more proud of the management utilities than the sites themselves, but that's par for this course.
The thing I did learn was that using Perl and CGI is quite clumsy for this sort of thing. I eventually switched to PHP3 because everything goes together much faster. I don't know what it does to the performance, but since both sites are being served from the world's slowest Web server hardware (the database server is a 486dx2-80, which also has the HNBA website on it, while the C Bookstore Web server is the 5x86-120 that I use for most of the four dozen or so domains that I host) and performance is not that big an issue, I'm not all that worried. It'd be nice if it got some hits, though.
Re:Einstein of the year award (Score:1)
You cynical bastards are quite amusing. It'll be interesting to see how cynical you are 20 years into your dead end careers. I'm sure the "HTML programmer" will be doing quite fine.
Re:3 words... (Score:1)
guess what site gets more traffic?
(www.mediametrix.com says microsoft dot com does)
Performance with Servlets (Score:1)
Apache JServ allows load balancing (basically doing a round robin over each of your servlet engines). I've found performance goes up about 30% for each PC you throw into the mix (I've only been able to test this up to 3 PCs).
FYI: I've found I needed servlet engines running on 2 PC's connected to 1 mySQL database to reach the performance of the perl app which stores its data as | delimited files.
While this may seem pretty poor, using a database means that the scalability (for size of data) is going to be a lot better than the file solution. The servlet solution also used XSL, which gives us a lot more flexibility over the HTML that we generate (basically each one of our users can have a completely different-looking site while running the same app as all the others).
I'll be posting some benchmarks at http://objexcel.com in a few days if anyone is interested.
Peter
Don't Be Cynical, it's very plausible. (Score:1)
That's rather cynical. As we port various Linux applications to QNX, it's in our favor to document what we've found to improve performance. The fact that this documentation also helps Linux is just part of the benefit of open source itself.
If a company has gone through its paces to approve using Linux it's only logical that the people looking for all that free support will also contribute to it.
-From Up North
Re:Tuning webservers (Score:1)
Interesting idea...
HTML as a programming language (Score:1)
This reminds me of the bright light who suggested, in response to a "rewrite the browser in Java" thread on the mozilla.general newsgroup, that it would be better to rewrite the browser in XML...
Re:Diff. boxes... (Score:1)
How did you implement this, may I ask? Particularly, how were the two RAID arrays mirrored, and how did the Web Servers/Database servers do I/O with them?
Cheers,
-NiS
Hotmail working on Solaris (Score:1)
I guess it sure was a pain for the people at Microsoft to choose Solaris as their "OS of choice". Hmm, it might be worth asking why they didn't stick to their great OS?
what about Eddie? (Score:1)
Balance them with Eddie. Check out the Eddieware project. Cool thing is, it runs on FreeBSD AND Linux and it's open source:
http://www.eddieware.org [eddieware.org]