Linux Software

The Speed Demon That Is Tux 2.0

gergi writes: "Running at the kernel level, Red Hat's Tux 2.0 blew away Apache and IIS in serving Web content, according to this benchmark! Given the never-ending security flaws found in other Web servers, has a major turning point in Web server design arrived?"
This discussion has been archived. No new comments can be posted.


  • by Anonymous Coward

    Where is everybody getting that Tux 2.0 is a static webserver?

    They're confusing Tux HTTP Server with khttpd.

    pin eight [pineight.com]
  • by Anonymous Coward
    They do mention X-15 in the article, about two-thirds of the way down this page: http://www.zdnet.com/eweek/stories/general/0,11011,2776396,00.html [zdnet.com]
  • That is one way of doing dynamic content in TUX. You can also write TUX modules, which can run either in kernelspace (insane) or userspace.
  • The US Army switched to Mac OS servers from Win NT after being hacked so many times. (Exploits in MS Win NT web servers are discovered quarterly, and as we all know, Linux and even Apache on BSD have had many hacks and exploits regularly.)

    Why is the Mac the most secure? Running Mac OS 9.1 and earlier, it has never had a published vulnerability while running WebStar (the most popular of the Mac web servers).

    WebStar is not magical; it's just that Mac OS is more secure than Unix for many reasons.

    1> There is no command line shell to allow redirection. No shell, no shell exploits or redirection of scripting.

    2> Everything is 'root' at all times, so programmers do not get lazy and fantasize about the existence of a more secure root to help protect them. The WebStar server, like most Mac programs, is written knowing that security is important and that the code is running as root. Truthfully, PowerPC apps run at user level, and Gary Davidian's birthdate needs to be passed in a register to gain true supervisor level, but no normal benefit is gained on a Mac from running in the microkernel space or debugger-nub space.

    3> Macintoshes do not suffer from stack exploits based on buffer overruns of C-style strings. The Mac uses Pascal-style strings, instead of slow null-terminated strings, in almost all aspects of the entire operating system and in most user code. ANSI C libraries are traditionally shunned. Pascal-style strings are not only faster, they prevent the vast majority of buffer overrun problems.

    4> Macintoshes do not EVER execute code from files that are simple data files, no matter how the file is named or how the file suffix is generated or set. Macintoshes use dual-fork files, and text files and data files traditionally cannot easily become executables; furthermore, a hacker's file would typically need its 4-byte FILE-TYPE set to a particular value to even begin to be blessed for execution. WebStar and other tools do not typically allow any hacker or rogue tool to set file types by accident or on purpose. On a Wintel system, a text file saved with a .exe extension can be executed!

    5> Source to Mac OS (pre-OS X) is not typically available outside Apple. This is not a valid argument for security (it is obscurity), but the apologists for the copious Linux Red Hat exploits use this as one reason for the many exploits covered on Bugtraq.

    6> Mac OS web servers running WebStar do not automatically allow errantly saved files to execute out of the CGI bin merely because they are stored there.

    7> The Mac OS has other good multi-homing, multi-domain tools that run on it for robust free email (SIMS), DNS (QuickDNS Pro), and FTP (Rumpus), and all have nice user interfaces to configure them. Though these commercial tools may not be technically as secure as WebStar itself, or the Mac OS, I prefer them over running any open source tools on FreeBSD, NetBSD, OpenBSD, Linux, etc. Free is only free if you value your tech support at zero dollars an hour sometimes. Plus, these other non-WebStar tools seem to have mostly unblemished histories, unlike BIND.

    8> People on the Mac tend to use scripting languages based on AppleScript rather than Perl for OS-level dynamic work, protecting against some minor Perl problems, or Unix scripting (no command line on a Mac, thankfully). I cannot attest to Java being swell, but the fact is many Mac people tend to do dynamic content in straight C. Happily, WebStar includes a rich variety of trusted dynamic content assist tools.

    There are many reasons that the WWW consortium members published that MacOS webservers are the most secure web servers.

    Even SourceForge, where Apache source is held, was hacked into last month.

    So all this laughable talk about shoving crap into kernel space is amusing to me (Tux).

    The Mac OS running Open Transport, based on open protocols, with its bilevel protocol stack declaration order, is amazing. It avoids lots of famous TCP/IP hacks, and it also allows end-to-end file transfer from RAM to RAM without copying a single data byte! (Only pointers to buffers are passed end-to-end in the most ideal situations.) This is similar to some of the work Tux is trying to achieve. There are also papers that discuss proper tuning of open transactions vs. queued transactions and how to get the most astounding hits per second from dynamic WebStar content. But if you want speed, run Apache on a Mac, because Apple demonstrated 18 months ago that the Mac ran Apache in a benchmark far faster than any other computer of similar cost.

  • by Anonymous Coward
    • I'm not so sure putting the web server (ie, more code) in the kernel is a good route to better security
    Then why say anything?

    Anyone that was sure about this would know that there are plenty of things that can be done to secure the system fully.

  • by Anonymous Coward on Tuesday June 19, 2001 @11:20AM (#139836)
    What I really want to know is how well it compares to X-15 [chromium.com] (free reg. required), which trounced the original TUX, *despite* running in user mode. X-15 is also open source, and from the guys who make the ChromeLinux web server that was mentioned in the Apache section a while back.

    Tux beats apache. Big deal. Apache is slow. Everyone knows that. I want to see Tux take on the current Linux web-server champ.
  • Yes, that would be rather insane.

    Tux passes on requests to Apache so PHP content will still work, but it won't be as fast as serving static files.

    ---
  • > Frankly, from a security perspective, having a public-facing daemon running in kernel space is utterly frightening.

    Maybe, but that's exactly what NFS does...

    ---
  • The verbiage also referred to an IIS number of 5137 - a little less than half of Tux. iPlanet has insignificant and declining market share and is thus fairly uninteresting. You would want to compare a new product with either the best-performing or most popular products on the market. The previously fastest product was Tux 1.0 (which has insignificant market share and was essentially experimental anyway), and Apache and IIS together hold something like 80% of the market, the bulk of which is Apache. So I don't really see any problems with the selection.

    They weren't trying to put together a comprehensive view of the market, only show that Tux 2.0 is dramatically faster than the competition.

  • LOL!

    Best /. post I've read in ages :-)
  • by jd ( 1658 ) <imipakNO@SPAMyahoo.com> on Tuesday June 19, 2001 @11:32AM (#139845) Homepage Journal
    X15 is still 2-3 times faster than Tux 2.0, and Cheetha (from MIT) is 2-3 ORDERS OF MAGNITUDE faster than either.

    In consequence, all that this benchmark proves is that proprietary software is pathetic, that over-burdened designs are poor, and that small, specialised tools are by far the most powerful.

    Congratulations! Most of the UNIX world came to the same conclusion, over 20 years ago. Hence all the small (but powerful) utilities, rather than one single humungous blob.

    What's needed is less arrogance, and more understanding that Small Is Beautiful (And Bloody Fast). A specialised server-side script server, that can do nothing else but handle server-side scripts, would be a good place to start. Essentially, it would be a "shell" for web clients, which could access a handler for PHP, a handler for Zope, another for CGI, etc.

    Nothing would be "built-in", and nothing would be loaded unless it was being used. Small apps are fast apps. Less to load, less to page, less to copy in the event of a fork().

    The problem with Apache is that there's too much of it. It's a brilliant piece of engineering, but that is precisely the problem. A multi-threaded, multi-purpose, multi-colour device is a marvellous device. It's the Swiss Army Knife of web servers. But, like a Swiss Army Knife, you generally don't use it for cutting steak, eating a hamburger, or mowing the lawn. You use specialist tools for specialist tasks.

    This is something that is too easily forgotten. The Osprey is a classic example of why such approaches are doomed. Whatever turf it's on, it's always going to be the turf of the other guy. And that means it's always going to be inferior. Yes, overall, it can beat the pants off anything. Aircraft (with the sole exception of the Hawker Siddeley Harrier) cannot fly vertically. Thus, in situations where vertical flight is necessary, the Osprey cannot be matched by (almost) any conventional aircraft.

    Likewise, the Apache web server can compete in more arenas than ANY other web server in existence. But the price for this flexibility is that it will NEVER seriously compete with any specialised web server, of any kind. It can't.

    Now, the question is: Is this a price you are willing to pay?

    If the answer is yes, then Tux 2.0's performance can go rot in /dev/hell. If the flexibility is of overriding priority, then the speed penalty is of no consequence.

    If the answer is no, then you need to get a specialist web server for each type of transaction you wish to perform, PLUS a transaction server which can fire up the necessary tool and hand over the transaction to process.

  • TUX does a good job of identifying inefficiencies in the network code, and of identifying at least one critical path that can be optimized. It doesn't actually do all that much that is particular to web serving; it's mainly a routine for efficiently getting a file from disk (or, most likely, cache) onto the network, with a bit of stuff that does HTTP. Doing the whole thing in the kernel is slightly useful for performance, but I think the main effect is that the whole thing gets written and optimized at once.

    Normal web servers are two problems working together: the kernel and the web server. Having a single program doing it means that it's easier to track what's going on through the whole execution path, so end-to-end debugging is easier.

    In any case, HTTP is significantly simpler than TCP and IP, which the kernel handles anyway. So most of your web server is really already in the kernel; it's just that, instead of responding to just pings and TCP control packets for you, it also responds to certain requests for you.
  • First, I disagree with teg; in standard use, I would hope that TUX would serve dynamic content--that was definitely the intent.

    TUX has four ways of handling dynamic content, and only one, kernel-space TUX modules, runs in kernel space--and we recommend that people not implement kernel-space TUX modules unless they have an overriding reason to do so. Normal TUX modules are run entirely in user space and have all the same security checks as any other user-space program.

  • by Michael K. Johnson ( 2602 ) on Wednesday June 20, 2001 @03:45AM (#139851) Homepage
    TUX can handle dynamic data on its own in several ways:
    • CGI programs, which TUX can call directly
    • TUX modules in user space (recommended, no significant performance penalty relative to kernel space)
    • TUX modules in kernel space (not recommended unless there is some particular reason for kernel integration, and performance is not an issue)
    • Pass-through to other servers (normally used only for modules that have not been ported to TUX; for example, mod_php; most of these modules have other speed constraints so there is not much reason to port them to the TUX framework)

    TUX can serve dynamic content VERY fast, even running CGIs. Because of the overhead of dynamic linking, we recommend that CGIs running under TUX be linked statically for maximum performance. (This is, of course, not an issue for TUX modules, since they are loaded at server start time, not at each invocation.)

    While TUX does have a fast path that can serve up static content without ever reaching user space, there is an option to send all requests through user space that only costs a few percent of performance even in the worst case. This can allow all sorts of interesting fine-grained control.

  • IIS (which pulled off 5,137 transactions per second) beat Apache (4,602 tps)

    Not only that, but IIS (with SWC) [spec.org] also beat Tux 2.0 [spec.org] on nearly identical hardware. The question is whether or not the difference in hard drives was enough to account for the difference in performance. My hunch is that it was, but that's just a hunch. Either way, it's obvious that MS have been shocked into action by the performance of Tux, and have come up with something comparable. I'd like to see a real comparison of Tux, IIS/SWC and X15 on identical hardware.

  • by Tet ( 2721 ) <slashdot@nOsPam.astradyne.co.uk> on Tuesday June 19, 2001 @01:51PM (#139853) Homepage Journal
    X15 is still 2-3 times faster than Tux 2.0

    Errr... no. Even Fabio (X15's author) only claims a 5% performance increase over Tux 2.0.

  • Yeah, I really dug that thread. As Ingo points out, X15 really shows how fast the 2.4 kernel can be in user or kernel space.

    Now if they can just fix the freakin' VM... :-)

    -l
  • Actually, IIS has most of the same API hooks as Apache via ISAPI Filters, which would allow you to write something very similar.

    A quick google search turned up this [zeus.co.uk] from the Zeus manual of all places, which is an example of how that would work.

    I've written my own URL-rewriting functionality for IIS before now with no problem, but at the moment, AFAIK, you do need to use some C to do it.
    --
    the telephone rings / problem between screen and chair / thoughts of homicide
  • You don't need to hack the server in order to hack the access controls on the data available to the server. Is it immensely easier? Usually. But not necessary.

    If I have the admin password for whatever app you're running, then I can see all the data, and never have hacked the webserver system.


    This space for rent. Call 1-800-STEAK4U

  • by TBone ( 5692 ) on Tuesday June 19, 2001 @11:39AM (#139858) Homepage

    Is running in the kernel necessarily safe? No, probably not. However, the Linux kernel is intrinsically safer than the NT kernel. Add the patched CAP functions to the kernel, and limit what the web server has access to. Besides which, if your data is that important to your company, you shouldn't be hosting anything on your web server BUT your web server pages. There shouldn't be any other logins or user accounts, the server should be in an extranet, and the only access allowed from it to other boxes should be to your backend database servers or such. Those connections should be limited to some dumbed-down user equivalent in the DB. So if the machine gets hacked, the hacker can see all the web pages that he would have been able to see through the web server anyway, and can connect to your database to see data that he would have been able to see anyway.

    Even in NT, this should be the case. Your web server being hacked, while problematic, should not be cause to call in the National Guard while S'Kiddies make havoc on your network. There's nothing wrong with Kernel-space applications when the box is set up correctly to account for the possibility of it being hacked.

    In addition, Apache itself is barely attackable by hackers anymore. Most hack attempts come through poorly configured applications on the back end that yield access to the server.


    This space for rent. Call 1-800-STEAK4U

  • by Jeffrey Baker ( 6191 ) on Tuesday June 19, 2001 @11:23AM (#139859)
    The main kernel improvements from Tux have been merged into the mainline kernel, so there really isn't anything interesting that Tux can do which can't be done in user space. I agree that running a web server in the kernel is a risk. Moving that to user space and running as a regular user should be the next step. There has been much yakking on linux-kernel about a user space web server that outperforms Tux.
  • X15, an experimental user-space server written to "compete" with Tux is faster:

    http://kt.zork.net/kernel-traffic/kt20010507_117.html#3 [zork.net]

    Frankly, from a security perspective, having a public-facing daemon running in kernel space is utterly frightening.

    Apache is meant to have configurability over performance, and does dynamic content. However, I'm sure we'll see better performance from *nix based Web servers over time.
    Paul
  • X15 is still 2-3 times faster than Tux 2.0, and Cheetha (from MIT) is 2-3 ORDERS OF MAGNITUDE faster than either.

    However sincere you may be, I don't think you should be spouting on subjects you are less than fully informed about. When X15 was first released [theaimsgroup.com] the author claimed it was slightly faster than TUX, which turned out to be true, even after Ingo Molnar worked together with Fabio Riccardi, X15's author, to resolve some small standards compliance issues. However exciting X15 may be as a piece of software engineering, it is not a replacement [theaimsgroup.com] for TUX unless its restrictive licence is changed. Looking into my magic mirror, I see half a dozen busy teams of geeks working feverishly on GPL'd/Apache-licenced high-performance user-space HTTP servers. It's clear the future of HTTP is in user space, not the kernel.

    As for "Cheetha", I don't know a thing about it, except that it is 2-3 orders of magnitude slower than you claimed.
    --

  • by um... Lucas ( 13147 ) on Tuesday June 19, 2001 @11:12AM (#139870) Journal
    Why? Apache's currently the market leader... 60% share, or something like that, so of course we're best served by them comparing it to the most widely used server...
  • X15. There is a blurb about it on the last page of the article. In fact most of the last page is touting that the speed improvements are not really because Tux is kernel space, but rather because the kernel as a whole has improved tremendously.
  • by Skeezix ( 14602 ) <jamin@pubcrawler.org> on Tuesday June 19, 2001 @11:58AM (#139875) Homepage
    For those who aren't aware what Tux is, it's a cute penguin--the linux mascot. The real news is that it can now swim faster than a military aircraft can fly.
  • The fast webserver is called X-15. Here is a release announcement [indiana.edu] from the author, Fabio Riccardi.
    For a discussion of this webserver on LK, see Kernel Traffic #119 [zork.net]
  • Tux handles dynamic content by simply passing it to a user-level server such as Apache, Roxen, Zeus or whatever else you wish to use.

    This means that were this a purely static-content benchmark, the results would have been much, much more in favour of Tux.

  • Funnily enough, having a kernel-space HTTP server might actually improve security.

    Follow me: a buffer overflow exploit works by uploading some malicious assembly code to the target system and overwriting some memory location to cause the execution flow to jump to the uploaded code.
    Doing so might trash the stack, but that's not a problem, since the malicious code might just not care. Suppose, for instance, we're cracking Apache: if the 'sploit mangles the stack, the process serving the request will die with a segfault or similar. The super-process will just fire up a new one, thinking that the infected process simply exited because it wanted to.

    Now let's suppose that the same happens to a live kernel. If the kernel crashes, there's no recovery. If the attacker wanted to change something on disk, the changes wouldn't even reach it! Flaws in such a server would be much harder to exploit, because the attacker would have to preserve system integrity while doing her own deeds.
    Sure, such a thing could cause availability problems (can you say "ping of death"?) to no end, but it wouldn't be a security problem. I don't know about you, but I'd take an availability problem over a security flaw any time of the day.
  • by crow ( 16139 ) on Tuesday June 19, 2001 @11:08AM (#139880) Homepage Journal
    So is Tux 2.0 really any more secure than other web servers, or is it just that since it doesn't amount to a noticeable percentage of servers, crackers haven't been trying to break it?

    Is there anything fundamental about the design of Tux that should make us feel secure?
  • by Timothy Dyck ( 16448 ) on Tuesday June 19, 2001 @11:47AM (#139881)
    Hi Spoing, the story doesn't position Tux as a competitor to Apache. In fact, we went out of our way to test the combination of Apache and Tux working together, as well as Tux and Apache (and IIS) on their own. We point out how well Tux and Apache work together and recommend that combination.

    You may have come to your conclusion only on the basis of the title of the Web article, which is different than the print version. I think the print title is better, which is Tux: Built for Speed.

    Also, I think that painting all stories from all Ziff-Davis publications with the same brush is too broad a generalization. The company produces content aimed at everything from home users and gamers to IT managers at companies that spend millions of dollars a year on technology (the latter is eWEEK's market). You're more likely to find stories you like by following the work of particular authors or publications than the activities of an entire publisher.

    Regards,
    Tim Dyck
    West Coast Technical Director
    eWEEK Labs
  • Why move this into the kernel?

    Context switches between kernel mode and user mode take time. Indeed, these context switches tend to be the single greatest overhead in I/O-bound, multithreaded apps. If memory serves, the P3 700MHz Xeon requires a few microseconds to do a context switch. These little buggers add up in a real hurry. For fun, turn on "View Kernel Times" in your Windows Task Manager and see how much time certain apps spend in kernel mode. Or for real fun, head into perfmon and turn on the context switch counters. Those take time.

    Moving things into the kernel isn't always a bad thing to do. When Microsoft moved the graphics engine into the kernel, it allowed Win2k to use DirectX for real, not a bad thing, and it really hasn't affected the stability of the OS. I think it's safe to say Win2k is the most stable Windows operating system yet.

    Network drivers, USB Drivers, indeed IDE RAID drivers all reside at the kernel level, you just have to be careful that you've built a stable base.

    I think the real advantage here would be defining what you want in the kernel and what you don't. Hell, I suspect Apache in the kernel would push pages nearly as fast as Tux, and I could say the same thing about IIS.

    John "EvilJohn" Carney
    Windows Team Lead - TowerJ
    http://www.towerj.com/

    // EvilJohn
    // Java Geek
  • >They moved the GDI in the kernel at NT4, not 2K

    Yes, the GDI was moved with NT4. But that's not the whole ball of wax, either.

    >Your suspicion is wrong. Context switching is not the only thing that tux optimizes. There is now a user-mode web server that seems to be at least on par with tux.

    I assume you mean this:

    http://kt.zork.net/kerneltraffic/kt20010521_119.html#1

    It really just goes to prove the point about optimization. Algorithmic optimizations are better than system-level ones, which is how I would categorize the move into kernel space. The question on kernel space is: does TUX gain from the move into kernel space?

    I think it's pretty silly to assert that moving into kernel space doesn't bring a speed increase for certain applications. Whether or not it's necessary is the question. I think the X15 project shows quite clearly the move isn't necessary, and a well-coded, well-designed application is better.

    2.4 is such a smooth kernel though, it really begins to make the whole debate moot. This is something not true of the windows world, where one still has to be a lot more careful about thrashing context switches.



    // EvilJohn
    // Java Geek
  • nope, it's rule by river horse. Sounds like the name of a movie - "The River Horse Rulers"
  • by evilpenguin ( 18720 ) on Tuesday June 19, 2001 @11:45AM (#139889)
    I would add that speed is a selection criterion for only a very small number of sites. Most corporate web servers I've worked on are served by anywhere from xDSL-class to T3 class speeds. Apache on a high-end Lintel box would keep that pipe full without being overloaded. (depending on the complexity of what's being served, of course! A lot of servlets that do 9-table joins in a Sybase database, all on the same box as the web server is quite different from serving only static HTML) So would IIS and Tux, since they both perform better. For that small number of sites (of the slashdot size, perhaps? ;-) that have pipes big enough to keep several server farms busy, ten percent less CPU demand can be big $$$ savings.

    "Optimization" sounds like it is a single thing, but my "optimal" might be very different from yours. It is, as always, a cost/benefit question.
  • by cpeterso ( 19082 ) on Tuesday June 19, 2001 @11:16AM (#139892) Homepage
    Why is running Tux inside the kernel so great? IIS has had a kernel module since IIS 1.0. And Microsoft got hell when it moved NT's graphics code into the kernel in NT 4.0.

    This reminds me of a joke my CS professor made about operating system research: the "endo-kernel". Microkernel researchers try to move OS features from the kernel to userspace processes for extra protection and modularity. Other researchers (such as UW's SPIN [washington.edu] OS and now Tux) move application "modules" from userspace into the kernel to boost performance. So now the "endo-kernel" OS will be upside-down: OS running in userspace processes for protection, but applications running in kernel space for performance! ;-)
  • Excellent question. If your only goal is speed, then Tux 2.0 is the way to go. If, however, your goal is stability, then this article does nothing whatsoever to help you with a solution. Maybe Tux 2.0 is ultra stable. Maybe not. Who knows?

    Thus the fallacy of benchmarks.

    Like the recent benchmark pitting Linux, WinY2K and FreeBSD against each other for serving speed. FreeBSD wasn't designed for speed. So what was the point? That Linux was better for a print server than FreeBSD? There is no way to know, because no one but an anal tech reporter cares how fast a print server serves up print jobs!
  • > Hypocracy: Tyrannical rule by Hippos.

    Au contraire, Hippos gave us good government and low taxes. It was, however, somewhat embarrassing when we had to explain to visiting dignitaries why we were ruled by a horse.

    [Sorry: pedantic note follows. This does not make your post any less funny.]

    Hippocracy: Rule by Hippos (or by horses).

    Hypocracy: "Sub-rule", "under-rule", or perhaps "rule from beneath", as in "The Low Dwarves enforced a hypocracy on the High Dwarves, and even tried to extend it to surface dwellers".


    --
  • As an administrator of several web servers (personal and educational), I must say that I've had to patch the IIS servers more times in the past year than the one-time upgrade of the Apache servers. I'm sure that's no surprise to many, but sometimes I consider the horde of flaws in IIS inexcusable.
    Anyway, back on topic, I can see Tux as really cool, but how many security nuts out there (including myself) would be willing to run something so integrated with the kernel. Ironically, I could see this being likened to the flaws that IIS has...being so close to the core of the OS.

    By the way, it's hard to post when you log in under www.slashdot.org, and the URL's keep bouncing over to slashdot.org....just in case one of you slashdot admins read this. (I know, bitch bitch, whine whine...)
  • Probably because it does little more than serve up static pages.

    Sure there are chances for buffer overflows, but the code base is smaller than IIS and all the .dlls it ships with.

    So less features == less things to go wrong.

    --
  • Comment removed based on user account deletion
  • At least there is the choice. Free Software.
  • Why are we pushing for faster and faster web server software?

    When the Mindcraft fiasco first erupted, the standard response was "What does it matter, it's already fast enough to saturate the network?" That hasn't changed.

    The speed of the web server itself doesn't even register on the radar. It's the dynamic parts of the sites and the network itself that are slow, folks. If you have to serve up pages 0.05 seconds faster or you fail, you're too close to the edge of the cliff already.
  • Serving static content takes a small amount of CPU, especially when not using IDE drives. Yes, 70% of the files are static, but they likely consume only about 5% of the total processing time.

    What I'm saying is that Tux isn't needed, or even a good thing, for 99.9% of the sites out there. It isn't needed unless you're running a site of almost all static pages on a server that is way too small on a fat network connection.
  • by SheldonYoung ( 25077 ) on Tuesday June 19, 2001 @12:54PM (#139904)
    Capacity for servers is generally bounded by dynamic processing workload and network bandwidth, not shuffling bytes from the drive to the network card. If it is, you're pushing way too hard for a live server.

    Take slash, for example. If Apache were 10 times faster, they would still need exactly as many servers as they have now.

    Only a site that serves ONLY static pages will benefit, but then for static pages even the slowest web servers can saturate all but the fastest network connections.
  • Well, there were comparisons to IIS, and IIS beat Apache by about 10%. That gives you some idea to the relevance. The Slashdot editor could have just as easily said that Tux nearly doubled IIS's performance, and that's just as impressive - especially given that ZDNet is usually very kind to IIS.
  • how did it handle dynamic content? Oh wait, they probably didn't think to test THAT out.

    Well, from the article:
    "...found that Tux was able to perform nearly three times faster than current Web server mainstay Apache (12,792 transactions per second vs. 4,602 tps) when running a mix of dynamic and static Web content."

    Oh wait, you probably didn't think to read THAT before you posted.
  • Tux is a static web server, whereas Apache and IIS are both full-blown dynamic web servers.

    Where is everybody getting that Tux 2.0 is a static webserver? Here's the quote direct from Redhat's page:
    TUX is a kernel-based, threaded, extremely high performance HTTP server. It is able to efficiently and safely serve both static and dynamic data.

    ZD's test was a mix of 60% static and 40% dynamic. So...???
  • Not that a lot of this wouldn't have happened anyway, but a lot of the impetus here is from the much-decried Mindcraft benchmarks.

    Now all we need is for Microsoft to pay for a benchmark of SQL Server performance against, say, Interbase, and we could all direct our energies into a great open source DBMS.

    No, MySQL does not count.

    [duck]

    --
  • Agreed, for the most part.

    I did a simple benchmark of _boa_ versus apache the other day. On the same box, delivering the regular static page that is Debian's splash-screen, I got twice as many hits per second and twice as much traffic per second using boa than I did with apache.

    I suspect the reason is that all this `pool of servers' stuff, and particularly the `how well am I doing? oops, must throttle back, kill these daemons' logic, means Apache spends all its time sorting its act out and not much time actually delivering results.

    I'm of the opinion that something other than apache, for static pages, with proxy-pass back to apache for php/cgi would be the best way to go. Gimme the raw speed except where I don't expect it.
    ~Tim
    --
    .|` Clouds cross the black moonlight,
  • If people are really worried about TUX which only handles static files by default and is fairly strict in the requests it will serve, what about NFS?

    NFS has been in the kernel for a long time, does just about everything that TUX does (network file distribution) and more, is far more complex, has had well-known security problems, has had well-known performance problems, and yet people are worried about hypothetical, *potential* problems in TUX!?!
  • Yup. It's pretty hard to write kernel code to give you a root shell on either system. The way most script kiddies would do it is wait for someone to publish an exploit and then tinker with it using a hex editor to make it do what they wanted.

    If you subvert the system call table on either system you can gain a root shell fairly quickly, even from a web server like Tux or IIS.
  • by throx ( 42621 ) on Tuesday June 19, 2001 @12:10PM (#139920) Homepage
    Exactly how is Linux kernel mode any safer than NT kernel mode? Once I'm in kernel mode I can do absolutely anything, regardless of the CAP functions - just hack up the kernel memory space and make all sorts of function calls around it. Code running in a processor's privileged mode is (by definition) trusted code.

    Personally, I'd suggest that NT kernel space may be marginally safer, given that the kernel can be paged in and out, which makes writing kernel buffer-overrun exploits slightly more error-prone (BSOD).

    It comes down to the fact that if someone can run arbitrary code in kernel mode, they can relatively easily (at a minimum) switch out of protected mode, format all your hard drives, erase any tapes that may be in the machine (via BIOS calls) and basically render your machine worthless. This is WITHOUT calling any kernel functions!!

    Of course, if you want to be more subtle, just rewrite the syscall interface table to some hooks that subvert the security of the system and let it appear to continue running normally while allowing you access to stuff you shouldn't (like passwords of users that logon to the site and other fun things).
  • by QuantumG ( 50515 ) <qg@biodome.org> on Tuesday June 19, 2001 @01:09PM (#139924) Homepage Journal
    Posted yesterday, yet another IIS sploit, I will ignore details and skip to the funniest section of the notice:

    Funny:
    Some people might wonder why this advisory does not contain the typical eEye
    humor like most of our other advisories. Basically, the reason is that this
    is our 4th remote SYSTEM level IIS vulnerability and well...we've run out of
    jokes.

  • I thought the reason the Osprey was doomed (computer failures aside) was that if one of the prop/main-rotor assemblies got blown up, it was basically screwed.
  • "For example, the Cheetah web server built on top of XOK performs eight times faster than NCSA or Harvest and three to four times faster than IIS running on Windows NT Enterprise Edition."

    - - - - -
  • It was compared to IIS as well, and with the waning percentages of other webservers, many of which are slow as dogs, there really wasn't any need. I think their points were abundantly clear.
  • by Lawrence_Bird ( 67278 ) on Tuesday June 19, 2001 @11:11AM (#139933) Homepage
    Nothing like a real world test
  • I understand that the Tux in-kernel webserver stuff is pretty minimal, but isn't running anything as notoriously security-risky as a webserver directly from kernel space a Really Bad Idea? I understand that Tux is fairly simple and open to peer review, but it seems like any tiny unchecked buffer or similar security hole could easily result in a remote root exploit, or at least a DoS attack (i.e. forcing a kernel panic).

    The whole idea is pretty neat, though, and if there really isn't any risk of security problems, this could be a huge boon to busy web sites everywhere...

    ----
  • by wass ( 72082 ) on Tuesday June 19, 2001 @11:33AM (#139935)
    Ugghh, not this complaint again. Okay, here comes my rant again.

    You're committing the age-old fallacy of assuming that all of us slashdotters are of one and the same ideology.

    Have you found any of the same posters that criticized IIS and ALSO praised TUX? If so, then you have a valid response. If not (which I'm assuming), then SHUT the HELL UP!

    I've said it before and I'll say it again. Slashdot readers are not of one common philosophy. We're a community of various people with various beliefs, who live in various countries, who use various software, etc. STOP assuming that we're all one and the same!

    If Joe Linux complains about IIS, fine. If Mary Linux praises TUX, that's fine. It's NOT hypocrisy when different people give their opinions on different subjects!
    __ __ ____ _ ______
    \ V .V / _` (_-<_-<
    .\_/\_/\__,_/__/__/

  • by rkent ( 73434 ) <rkent&post,harvard,edu> on Tuesday June 19, 2001 @11:17AM (#139937)
    Hypocracy: Tyrannical rule by Hippos.

    Hypocrisy: Duplicitous behavior.

    (Much respect [dictionary.com].)

    ---

  • by AugstWest ( 79042 ) on Tuesday June 19, 2001 @11:21AM (#139938)
    Given the never-ending security flaws found in other webservers, has a major turning point in web server design come about?

    Let's see... I can make Apache or IIS run as a specific user whose access I can (largely) control, or I can run my web server in kernel space....

    This is going to end "security flaws"?
  • So now the "endo-kernel" OS will be upside-down: the OS running in userspace processes for protection, but applications running in kernel space for performance!

    You'll be happy to know that this was indeed how Windows/386 (2.0) was developed. The user and system rings of the first protected-mode Intel processor were used in the opposite way from what was intended, to provide backwards compatibility with non-protected-mode applications and unaware hardware.

  • kernal adj, 1. Relating to the technical appetites: kernal desire, kernal panic. 2. Holy or unworldly: kernal hacker, kernal space.
    --
  • ...but I know I don't want IIS, Apache, or anything else integrated into the O/S any more than I like having IE a part of my desktop O/S.

    That's great! So don't put it in your kernel. That's one of the wonderful things about Linux and the like: you can decide what goes into your system. The complaint with Microsoft is that you don't have the option.

    --Ty

  • I was referring to the IE-on-the-desktop comment with respect to Microsoft; there isn't really a practical way to keep it from being there. I wasn't referring to a web server.

    --Ty

  • I expect /. got rid of the last few lines of the press release sent by gergi:

    "Gee, what kernel is that?"

    "Oh, it's the amazing, slices, dices, does it all more securely and faster Tux 2.0! Run out and get one today!"

    The folks at Redmond are hiring PR flacks; maybe they should talk to the Tux people. ;-)

  • I'll be interested in these fast webservers when they can serve some sort of dynamic content. Until then, they're merely benchmark toys.

    A big part of the SPECweb test is dynamic content, so Tux already deals with that. Of course, you have to handle it a specific way; the most common approach would be to have Tux forward any request for dynamic content to an Apache server on the same host, listening on a different port.

  • by teg ( 97890 )
    Remember that for standard use, Tux would only serve static content - this drastically reduces the possibilities for attacks against it.
  • by teg ( 97890 )

    I wasn't implying that using Tux dramatically reduces the possibilities for attack against your system - just that the number of possible ways to attack Tux itself isn't that large: it doesn't include its own kernel-space indexing module, to give a contemporary example.

  • I am not a developer (I'm slowly learning C++ and will be in pre-CS at James Madison U) nor a sysadmin, but I'm wondering: what are the stability issues here? It seems to me that unless Tux 2.0 is really quality work, it could give Microsoft grounds to make Linux look like a trashy webserver, because if Tux crashes, wouldn't that often take down the kernel too?
  • Nothing would be "built-in", and nothing would be loaded unless it was being used. Small apps are fast apps. Less to load, less to page, less to copy in the event of a fork().

    Linux uses copy-on-write when forking, so in fact only the task descriptor needs to be copied (and possibly the stack?). Everything else is copied lazily on first write, making fork a very fast operation. Linus is fond of pointing out that "Linux can fork a process faster than Windows can create a thread."
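
    The copy-on-write behavior is easy to observe from userspace. A minimal Python sketch (illustrative only, POSIX-only since it uses fork): the child's write forces a private page copy, so the parent's buffer is untouched.

    ```python
    import os

    def fork_cow_demo():
        # After fork(), parent and child share the same physical pages;
        # the kernel copies a page only when one side writes to it.
        data = bytearray(b"parent")
        pid = os.fork()
        if pid == 0:
            data[:] = b"child!"   # child's write triggers a private page copy
            os._exit(0)           # exit the child without cleanup handlers
        os.waitpid(pid, 0)        # reap the child
        return bytes(data)        # parent's view is unchanged

    print(fork_cow_demo())        # the parent still sees b"parent"
    ```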

  • Nitpick.

    I believe Zeus uses something similar to SGI's state threads, coupled with one heavyweight process per CPU. It's basically I/O multiplexing in multiple processes - a one to many process/connection relationship as opposed to a one-to-one relationship. The "threads" don't have any kernel entity associated with them, and aren't fully preemptive.

    Check out http://oss.sgi.com/projects/state-threads/docs/st. html [sgi.com] for more info.
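
    A rough sketch of that readiness-based model using Python's selectors module (a toy illustration, not state threads itself): one process asks the kernel which sockets are readable and services only those, so no kernel thread is tied to any single connection.

    ```python
    import selectors
    import socket

    def multiplex_once():
        # One process, one kernel thread, many possible connections:
        # register sockets with the selector, then service whichever
        # ones the kernel reports as ready.
        sel = selectors.DefaultSelector()
        a, b = socket.socketpair()
        for s in (a, b):
            s.setblocking(False)
        sel.register(b, selectors.EVENT_READ)
        a.sendall(b"ping")                    # make b readable
        got = b""
        for key, _ in sel.select(timeout=1):  # block until something is ready
            got = key.fileobj.recv(16)
        sel.close()
        a.close()
        b.close()
        return got
    ```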

  • by Tom7 ( 102298 ) on Tuesday June 19, 2001 @11:22AM (#139969) Homepage Journal
    I'm not so sure putting the web server (ie, more code) in the kernel is a good route to better security; it increases the trusted codebase. Perhaps the simplicity of the server makes up for this, but that's because it's simple, not because it's in the kernel.

    Kernel-space web servers are a good idea for people who need super high-volume serving of static content. Those people hopefully also have the resources to maintain something as potentially dangerous as that. It's good for benchmarks, too.

    But most people do not need this kind of speed; serving up static content, I'm sure a T1 would max out before a modest desktop machine would max out its CPU running apache. What people do need is safe software which needs little maintenance -- experience has shown that many users do not stay up-to-date on patches. Therefore, I say that moving to yet lower levels than userspace C (ie, the kernel) is a bad idea, in fact, we should be moving to higher level languages where more safety properties are guaranteed. Presumably, while this might lead to slower web servers (I don't think it would be more than a few percent), it would also lead to fewer bugs and lower maintenance costs. That's the real cost of software, after all!
  • Given the never-ending security flaws found in other webservers, has a major turning point in web server design come about?"

    I'm still trying to figure out who in his right mind welcomes putting web servers directly into kernel space as a more secure design than running them in user space. In a well-designed, secure OS, even if the web server is compromised, the entire system is not necessarily compromised; that is no longer the case when compromising the web server amounts to a kernel exploit.

    --
  • Remember that for standard use, Tux would only serve static content - this drastically reduces the possibilities for attacks against it.

    But doesn't reduce the possibilities for attacks against your system, which is a better goal. You still need Apache around for CGI access. Now you potentially have very dangerous vulnerabilities in Tux running as worse-than-root in addition to your normal Apache vulnerabilities. I don't see that as reducing the possibilities of attack.

    The article quoted Tux's creator as saying on linux-kernel that "there is nothing significant left in [Tux] that cannot be done from user space." I think this means that it's time for Tux to die off. It was a great tool for the kernel developers to increase speed in the kernel. Now it's done its job...hopefully soon someone will release a full-featured high-performance webserver that runs in userspace. (Not like Tux and X-15, one that actually handles dynamic content on its own.) Apache 2.0 might qualify, but I'm crossing my fingers.

  • IIS has always been a kernel-space server. Or, I should say, it has always had the option of running in kernel space. Similarly, NT 3.x had the graphical subsystem in userspace but moved it into kernel space in 4.x.
  • Buddy, when I posted that comment, there was, I believe, one comment that I could read at my +1 threshold. I was railing against the poster and the slashdot editor, not against slashdot at large.
  • by SuiteSisterMary ( 123932 ) <{slebrun} {at} {gmail.com}> on Tuesday June 19, 2001 @11:11AM (#139978) Journal
    Statement: Hey, everybody, IIS is blazing fast because it runs in kernel space!
    Response: That's stupid! It'll crash the server! It'll compromise security!
    Statement: Hey, everybody, TUX is blazing fast because it runs in kernel space!
    Response: Wooohooo! This is a major turning point in web servers! Yay Linux!
  • If it supports CGI, it supports PHP as a CGI script. Applications should support databases, not the web server itself. I don't know about mod_ssl, but if you read the article you might have seen that it says, "Tux's main weakness is that it doesn't support Secure Sockets Layer traffic, a feature planned for a future version."
  • With some caching techniques, or for less dynamic files, background processing (e.g. a perl script making a static file that is updated periodically), you could maximize the speed boost. And yes, running PHP as a CGI is going to be much slower on any web server most likely.
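
    A minimal sketch of that background-regeneration idea in Python (the file name and fetch function are made up for illustration): a periodic job renders the "dynamic" data to a static file that a fast static server can hand out, and the atomic rename ensures a concurrent reader never sees a half-written file.

    ```python
    import json
    import os

    def regenerate_snapshot(path, fetch):
        # Cron-style job: render dynamic data into a static file.
        # Write to a temp file first, then atomically swap it into place
        # so readers always see either the old or the new version.
        tmp = path + ".tmp"
        with open(tmp, "w") as f:
            json.dump(fetch(), f)
        os.replace(tmp, path)  # atomic on POSIX filesystems
    ```

    A web server then serves `path` as an ordinary static file; only this job pays the cost of generating the content.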
  • The SPECweb99 [spec.org] benchmark uses both static and dynamic content, and there are results from a variety of servers.
  • I think a comparison between TUX, Zeus, AOLServer, Apache, etc. might have given more useful results.
  • by Wesley Felter ( 138342 ) <wesley@felter.org> on Tuesday June 19, 2001 @11:44AM (#139988) Homepage
    X15 is still 2-3 times faster than Tux 2.0, and Cheetah (from MIT) is 2-3 ORDERS OF MAGNITUDE faster than either.

    Er, let me get this straight: TUX can saturate multiple GigE cards per CPU [spec.org], so Cheetah [mit.edu] can saturate 200-2000 GigE cards per CPU? Today's systems don't even have that much memory bandwidth.
  • For all the web sites I've managed in my career, static page delivery has never been the chokepoint. Not even once. Even a sluggish web server can easily saturate a typical network connection serving just static content.

    Besides, few web sites ever register that much traffic. Look at Apache's server status [apache.org] sometime. At the moment, it's pushing 1.2 MB/sec, and the CPU load isn't even close to the single digits.

    As nifty and cool as Tux seems, I have a fear it wasn't created to solve real problems, but to win benchmarks.

  • MIT claims that cheetah [mit.edu] is 3 to 4 times faster than IIS, which would be just about the same as Tux 2.0.
  • Hi Spoing, the story doesn't position Tux as a competitor to Apache. In fact, we went out of our way to test the combination of Apache and Tux working together, as well as Tux and Apache (and IIS) on their own. We point out how well Tux and Apache work together and recommend that combination.

    Then I stand partially corrected; there have been plenty of stories before on this combination, and I don't care for any more while reading Slashdot. Is this Ziff's fault? Nope. Is it Slashdot's? Nope. I'd still like a check box, though.

    Monotony is monotonous, and I didn't read this specific story since there didn't seem to be much of a point after reading the title. I've wasted time on fluff and untimely pieces before, Ziff-originated and not, and I honestly don't want to even know about them in the future. If I had to limit myself to blocking Ziff-originated stories only, even when occasionally losing out, the trade-off seems reasonable.

    This might not seem fair, but I personally have only so much time to look at any specific article -- there is too much good stuff available from other sources including mailing lists, newsgroups, and more focused zines (online and offline).

    Also, I think that painting all stories from all Ziff-Davis publications with the same brush is too broad a generalization. ... You're more likely to find stories you like by following the work of particular authors or publications than the activities of an entire publisher.

    Agreed on specific authors and some articles in specific publications. From a larger perspective, after many years of reading publications from Ziff-owned sources, I've personally found little value in them that I can't get elsewhere; the hit-to-miss ratio is just too low. Others might find otherwise, as is their choice.

    As such, from experience, I'd really really like to filter out stories from Ziff-owned sources when visiting Slashdot. [frustrated]

  • by Spoing ( 152917 ) on Tuesday June 19, 2001 @11:31AM (#139996) Homepage
    OK, that's too harsh. I don't mean it, and just want some attention. (Linux Today also runs a fair amount of Ziff stories in an overly earnest effort to have some balance, and I still read that site nearly every day.)

    I am honestly sick of Ziff, though. They are the Mickey-D's of the computer press, and while they have their place and offer up the occasional useful story, I would really like to carve out a Ziff-free, or at a minimum Ziff-limited, zone.

    1. Gripe on this story: Tux 2.0 is an adjunct to another more dedicated web server or for use in limited situations -- it isn't competition for Apache! The reasons for this have been covered many times before on /. and other places even before Tux 1 was officially released.

      Plea: Could the great and all-powerful Slash web meisters add a check box for blocking Ziff stories?

    Thanks for listening!

  • by Misch ( 158807 ) on Tuesday June 19, 2001 @11:11AM (#139997) Homepage
    "First, Tux puts Web server code into the kernel and reads Web pages directly from Linux's kernel-mode file system cache for speed"

    It might just be me, but I'm a *little* wary of the security implications of something running right in the kernel of the operating system. I smell security breach *somewhere* lurking in this product.

    But, it is always nice to see progress, and I'm glad that part of the credit is given to the kernel developers and their speed improvements in the 2.4 kernel over the 2.2 kernel.

  • When I read the article, this kind of jumped out at me.
    As mentioned, Tux's internal architecture is designed specifically for high performance, but that design is only one of five factors critical to its top-notch performance, according to Tux's primary author, Ingo Molnar, kernel development/systems engineer at Red Hat, in Berlin.
    This is a sentence, that, when read, on the occasion that I saw it on the screen, caused me to experience, figuratively speaking, the most explosive burst of laughter I have yet to experience, although it may be, for the purposes of English grammar, correct, and furthermore, caused me to wonder if, perhaps, although they may otherwise be of a competent nature, ZDNet's editors are, figuratively speaking, asleep on the job.

    --

  • by Pinball Wizard ( 161942 ) on Tuesday June 19, 2001 @11:44AM (#140001) Homepage Journal
    Close. Actually the real hypocrisy on Slashdot lies in its attitude toward ZDNet.

    Statement: Tux runs faster than IIS.
    Response: Woohoo! Linux RULEZ! Props to ZDNet for their insightful and informative article!

    Statement: IIS is the best overall web server.
    Response: Not these M$-suckups again! Haven't we learned yet not to equate ZDNet with real journalism?

  • I don't know what you're doing, but our entire company's site fits into the server's RAM with space to spare; the disc is only used for logging most of the time.

    With gigabyte RAM available to even small companies now the question of disc access is relatively unimportant.

    TW

  • by RedWizzard ( 192002 ) on Tuesday June 19, 2001 @03:04PM (#140009)
    Why is running Tux inside the kernel so great?
    Tux 1.0 ran in the kernel because the enhancements that made it fast (e.g. zero copy networking) needed to be in the kernel. But many of those enhancements were not specific to webserving. So they've been slowly making their way into the main kernel code. Now with Tux 2.0 very little time is spent in the Tux specific part of the kernel: only 2% of CPU time. In fact Tux doesn't really require kernel integration anymore, indeed X15 manages very similar performance running entirely in userspace. You could look at Tux as a "proof of concept" that resulted in several performance enhancements in Linux 2.4.
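
    Zero-copy file transmission is one of those enhancements that is now exposed to ordinary userspace programs. A small Python sketch (illustrative only, POSIX-only) using sendfile(2), which hands the file's pages to the socket inside the kernel instead of copying them through a userspace buffer:

    ```python
    import os
    import socket
    import tempfile

    def zero_copy_send(payload=b"hello, tux"):
        # Write a small file, then transmit it over a socket with
        # sendfile(2): no read()/write() round trip through userspace.
        with tempfile.NamedTemporaryFile(delete=False) as f:
            f.write(payload)
            path = f.name
        out, peer = socket.socketpair()
        with open(path, "rb") as src:
            os.sendfile(out.fileno(), src.fileno(), 0, len(payload))
        out.close()              # close the writer so the reader sees EOF
        data = peer.recv(64)
        peer.close()
        os.unlink(path)
        return data
    ```

    This is the same mechanism a userspace static server can lean on to approach in-kernel performance.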
  • Found in the Linux kernel. Play with it, bench it, enjoy it. 'nough said, eh?

    Microshaft still OWNZ JOO! [msnbc.com]
  • They laughed at my CLI-less Mac and then at my GUI-less BSD box. KDE shut them up.

    so did an amiga. :)

    --saint
    ----
  • by BoarderPhreak ( 234086 ) on Tuesday June 19, 2001 @11:35AM (#140027)
    Might I be the first to say: it's not hard to beat Apache in performance. Remember, their motto is more or less "correct first, fast second." Apache also isn't very efficient, hence the 2.x series. Even then, there are faster web servers out there, like Zeus.

    You can't lump Tux in with general Web servers, since it's rather limited as to what it can do - remember, it's only for STATIC pages. You still need an additional Web server for CGI programs or any sort of dynamic site or one that relies on modules like PHP.

    Used correctly, and in conjunction with Apache (or another server) Tux is an *extremely* welcome addition to the stable. Props to the people involved on this.

  • I think, however, that moving Tux to userspace would probably decrease its speed.

    This brings up a good question: the price of security. Given that IIS is faster but less secure than Apache (and costs more), where is the bottleneck?

    In most cases, the bottleneck is bandwidth. Hence developments which lead to cheap, high-speed bandwidth will probably matter more than speed improvements in web servers.

    I am generally more concerned about security and flexibility anyway, so for now, I will stick with Apache.

  • Almost all images are static. It's not just the HTML that has to be loaded to render a page.
  • by Whining Liberal ( 336415 ) on Tuesday June 19, 2001 @02:04PM (#140040)
    Arrggg!

    Why can't people read? Tux can handle dynamic content. If it couldn't, it wouldn't be able to run the SpecWeb benchmarks. The reason why Tux is fast is *not* because it supposedly cannot run dynamic content. It can. The reason that it is fast is because it runs in the kernel and is highly optimized.

    Those of you who think that Tux can only handle static pages are thinking of khttpd.

    Do your bloody research! At least read the story linked to.
  • And in other breaking news today, the latest web server, Foo, has been found to blow away Tux, Apache, and IIS in terms of speed.

    Although the initial install is slow -- as the entire Internet gets downloaded to your hard drive -- after that, nothing else even comes close in terms of performance.
