Linux Software

Kernel Benchmarks (136 comments)

kitplane01 writes: "Three students and a professor from Northern Michigan University spent the semester benchmarking a bunch of Linux kernels, from 2.0.1 to 2.4.0. Many functions improved in speed, but some did not. Total lines of code have tripled, and are on an exponential growth curve. Read their results here."
  • by Anonymous Coward
    Most of these graphs contain a curve labeled Expon or something (once again, great legend). Why exponential? Why not some polynomial or some other function? What is the error in the fit/correlation coefficient(s)? Just tell me something that gives me a reason to believe that this curve means something.

    It's a good thing Moore didn't have a pompous ass like you as an instructor, or he might have been too traumatized to make the observation that processing capacity doubles every 18 months.

  • by Anonymous Coward
    http://euclid.nmu.edu/~benchmark/total_growth.gif [nmu.edu]

    $ strings total_growth.gif | head -2
    GIF89a
    Software: Microsoft Office

  • by Anonymous Coward on Wednesday May 09, 2001 @06:08PM (#233530)
    http://euclid.nmu.edu/~benchmark/null_call.gif [nmu.edu]

    This shows why computer guys are not scientists. My first year phys chem prof would tear his own arm off and beat you to death with it if you gave him a graph that looked that ugly.

    The Excel defaults may be ugly, but you can change them.
  • by Anonymous Coward on Wednesday May 09, 2001 @04:47PM (#233531)
    One problem with benchmarking is the optimizations settings for GCC. GCC is very sensitive to the proper choice of optimizations. Several years ago I did an extensive test of GCC using the Byte benchmark suite. I experimented with the various optimizations settings. The most important were the settings of -malign-jumps -malign-loops and -malign-functions. These flags each take a numerical argument representing a power of 2 on which the object will be aligned.

    Thus "0" indicates byte alignment, "1" word (16 bit) alignment, "2" doubleword (32 bit), "3" quadword (64 bit), and "4" paragraph (128 bit). The other optimization of interest is the "-O" setting. Here arguments can take the value of 0, 1, 2, or higher. Personally, I found that -O2 was not necessarily the best setting, although it seems very common to find it set to that in Makefiles. I found using -O1 and tuning the alignment optimizations by hand provided better results.

    My findings by benchmarking all the combinations of settings were that for a Cyrix 5x86, the optimal alignment values were numerically lower than might be expected. For example, close-to-optimal settings as I recall were:

    gcc -O1 -m386 -malign-jumps=1 -malign-functions=1 -malign-loops=1
    It wouldn't be a bad starting point for any Intel processor. On modern processors, it is more important to achieve high cache hit rates, which is thwarted by certain wrong optimizations such as aggressive loop unrolling and excessive alignment. One particular setting to avoid is -m486. It should be avoided for most processors other than a 486, because the 486 alignment requirements are less than optimal (i.e. it tends to over-align) for both its predecessors and descendants. And if you don't need a debugging version of your code, -fomit-frame-pointer is almost always useful, as it frees up an extra general purpose register.
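
    A rough sketch of how such a sweep can be automated (bench.c here is just a stand-in for whatever CPU-bound benchmark you care about, and the -malign-* spellings are the GCC 2.x-era ones; newer compilers spell them -falign-*):

    #!/bin/sh
    # Build and time one binary per flag combination; bench.c is a
    # placeholder for your own benchmark program.
    for opt in -O1 -O2; do
        for a in 0 1 2 3 4; do
            gcc $opt -malign-jumps=$a -malign-loops=$a -malign-functions=$a \
                -o bench bench.c || exit 1
            echo "=== $opt, alignment 2^$a ==="
            time ./bench
        done
    done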
  • ...between 2.0 and 2.4, mmap() got 40 times faster, so there's still a little room for improvement, I'd say...

    I can DEFINITELY tell the difference between 2.2.x and 2.4.x -- 2.4 beats the hell out of 2.2.

    - A.P.

    --
    Forget Napster. Why not really break the law?

  • Except for the lines of code graph, I don't see how they justify fitting exponential curves to any of the other graphs. Since the resulting "exponential" curves that were fit are nearly straight lines, there's really no basis for doing anything other than a linear fit.

    They note that this was all run on the same hardware, but all that means is that the results are valid *for* that hardware. Some of the drastic changes in some areas might be due to, for example, the replacement of a generic driver with a specific driver optimized for one of the pieces of hardware they used. Obviously this change wouldn't carry over to all other systems.

    All in all not bad, though it would've been nice to see some more rigorous data analysis (the data analysis expected in a typical college freshman chem lab is more extensive than this).
  • What about a web server using signal-based I/O and a single-process model handling quite a few connections? That can easily have thousands of signals per second.
    --
    Mike Mangino
    Sr. Software Engineer, SubmitOrder.com
  • here's how they counted the lines in the kernel.

    We counted lines of code in all files that:
    ended in ".c" or ".h"
    were in one of the following directories:

    arch drivers fs include init ipc kernel lib mm net


    when it really should have just been something like:

    (root@mustard)-(/dev/tty0)-(/usr/src/linux)
    (Wed May 9)-(05:53pm) 19 # find . -name '*.[ch]' -exec egrep "< some terrible curse words >" {} \; -print | wc -l

    yeah, that would'a worked.
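
    For the count they actually describe (every .c and .h file under those ten directories), something along these lines gets the same number without the profanity; a sketch, assuming you run it from the top of the kernel tree:

    cd /usr/src/linux
    # count every line in every *.c and *.h file under the listed directories
    find arch drivers fs include init ipc kernel lib mm net \
        -name '*.[ch]' -print0 | xargs -0 cat | wc -l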
  • I'll be taking a little of my time to dig through this to see how many of the well-hyped performance hacks actually work as advertised.

    Too bad they do little detailed things like lines of code and Stat rather than how much RAM/CPU your dynamic web server needs to saturate a T1.

    Still educational for the non-kernel-hacker in any case.
  • Naw, just have programs use SIGUSR1 for dot and SIGUSR2 for dash, and you can have programs use morse code for interprocess communications...

    --

  • Why does the 'Linux Lover' use MSExcel for his plots??

    Why not? Just because you love Linux doesn't mean you don't use anything else. Heck, doesn't mean you don't love anything else. I'm in a polyamorous relationship with both Linux and NetBSD... :)

    --

  • I would be interested in seeing the pre-2.0 kernels stuck in there too ... (not interested enough to dust off my TOWER of old CDs and start compiling though :-)

    I heard from some people who were using 1.2.something in an embedded project that its context switch times were quite a bit better than the latest.

    Anyone out there know how the older kernels stack up?
  • Must be a snow cow from Michigan that modded me down...

    Why can't you admit that it's boring up there! Come on, you know it is! All they talk about is how many feet of snow will be left on the ground when June comes around.
  • Because I can.
  • How about this one? I'm logged in. I have the karma. I can do what I want. If I post at +2, there's just as many levels above me as there are below me, so set your damn threshold appropriately.
  • I knew it was boring way up in Northern Michigan, but until now, I never imagined just how boring it actually is. I guess in Manitoba they must be benchmarking DOS calls in various MS operating systems. I guess it beats watching caribou mate.
  • by PD ( 9577 )
    The performance that I care about is "does it work???" and the NE2000 cards give me no trouble at all. 3c509 cards are also sweet and trouble free.
  • Nothing beats when caribou mate.

    Except YOU maybe. heh heh.
  • by PD ( 9577 ) <slashdotlinux@pdrap.org> on Wednesday May 09, 2001 @05:05PM (#233546) Homepage Journal
    That's a lot of work just to print out a negative number on your screen...
    A lot of work went into making the UNIX scheduler automagically give programs that are currently interacting with the user a higher priority. Reducing the latency to an interactive program makes the system seem very snappy, and makes users happier. A slow-boat-to-China job doesn't need to be given high priority because it's gonna take forever anyway. Letting an interactive job run before it won't hold it up long, especially since most interactive jobs do only 1 or 2 timeslices of work before sleeping on the keyboard again.

    In a single-user environment, this can be done well with the focus-boosting MS uses. There is a problem, however, with MS's implementation. The UNIX priority system was designed to make interactive jobs responsive without starving CPU-intensive jobs. MS doesn't do this. Focus boosting is a good idea, but MS's priority scheme is hostile to low-priority jobs. UNIX doesn't have such a thing since a UNIX box is usually multi-user/remote-user, so ID'ing the right process to boost the priority of is more difficult.

    Interesting note, but in Win2K, if you set a CPU-intensive job to a high enough priority on a single-CPU system, it will use 100% of that processor's time, without letting ANYTHING else run. Talk about starving low-priority jobs. (A sketch of the UNIX behaviour follows below.)
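
    For the UNIX half, a low-tech way to watch "responsive without starving" on a Linux box (just a sketch; the busy loop stands in for any batch job):

    # Start a CPU hog at the lowest priority, leave it alone for a while,
    # then confirm it still accumulated CPU time even though interactive
    # processes stayed snappy the whole time.
    nice -n 19 sh -c 'while :; do :; done' &
    HOG=$!
    sleep 30
    ps -o pid,ni,time,%cpu -p $HOG
    kill $HOG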

  • ...which just goes to prove that optimization is (justifiably, as it happens) much -maligned.
  • Rule #4:
    One can have a graph of any shape that he wants by carefully choosing the axes.
  • It is not surprising though. Consider the number of drivers and new features that have been introduced, as well as the S/390 architecture. There are numerous new drivers, as well as framebuffer support and better SMP. I am sure that there is more. It would be interesting if the kernel developers had a debug feature that, if you built the kernel with it turned on, would tell you the execution time of each function (not sure if they do), similar to Perl's Benchmark module.

    I quote: "Hardware compatibility is a large part of the growth."

    I don't want a lot, I just want it all!
    Flame away, I have a hose!

  • I find it interesting that both Michael and Jamie McCarthy post stories on /. - you would think Michigan wouldn't be big enough for the both of them :)

    Caution: contents may be quarrelsome and meticulous!

    I don't dislike what they did, I dislike their presentation. They did a reasonably good job of data collection (not exceptional, but okay). FYI, I am 25, am a PhD candidate in chemistry at a Big Ten university, and have been teaching for 6 years.

  • What community college do you teach at, stupid arrogant cock-sucker?

    Very eloquent. I teach at the University of Minnesota, Twin Cities.

  • It's a good thing Moore didn't have a pompous ass like you as an instructor, or he might have been too traumatized to make the observation that processing capacity doubles every 18 months.

    If Moore's observation was correct (which most people seem to think has been shown by the history of the industry), there are ways to "prove" it, like trying different fitting functions and looking at their errors and/or correlation coefficients. In the case of Moore's law, one would find that an exponential is the best fitting function.

    For most of the graphs in the article, a linear or simple polynomial (e.g., quadratic) fit would appear to give better or comparable agreement with the presented data. It seems they chose exponential because it is more impressive to say "this is growing exponentially!" than to say "we fit the growth to a quadratic with coefficients blah-blah-blah." (A quick way to check is sketched below.)

    Lies, damn lies, and statistics.
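
    Checking that on their numbers is a one-screen job. A rough sketch, assuming the (x, y) pairs are in data.txt one per line: compute Pearson's r for y against x and for ln(y) against x (the log transform turns an exponential fit into a straight-line fit), and see which model the data actually favors:

    # r for the linear model (x, y) and the exponential model (x, ln y)
    awk '{ n++; x = $1; y = $2; ly = log(y)
           sx += x; sxx += x*x
           sy += y; syy += y*y; sxy += x*y
           sly += ly; slyly += ly*ly; sxly += x*ly }
         END {
           r_lin = (n*sxy  - sx*sy)  / sqrt((n*sxx - sx*sx) * (n*syy   - sy*sy))
           r_exp = (n*sxly - sx*sly) / sqrt((n*sxx - sx*sx) * (n*slyly - sly*sly))
           printf "linear fit:      r = %.4f\n", r_lin
           printf "exponential fit: r = %.4f\n", r_exp
         }' data.txt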

  • by rangek ( 16645 ) on Wednesday May 09, 2001 @04:25PM (#233555)

    Silly graphs are a pet peeve of mine. I hate it when my students give me graphs like these. Needless gridlines, unlabeled legends, connected dots, and poor statistical analysis.

    • I hate gridlines; they usually distract from the graph.
    • What the fuck is "Series 1"? For Christ's sake, take a minute and either delete the needless legend or at least overwrite the stupid defaults to make them meaningful.
    • Connecting the dots means something. If you plot linux 2.1.1 and linux 2.1.14 and draw a line or some other curve between these points, you are telling me that if I pick up linux 2.1.7 it will lie on that curve. That is not a correct interpretation of this data.
    • Most of these graphs contain a curve labeled Expon or something (once again, great legend). Why exponential? Why not some polynomial or some other function? What is the error in the fit/correlation coefficient(s)? Just tell me something that gives me a reason to believe that this curve means something.

    I also find it ironic that they used MS Excel (which they don't say they did, but it sure looks like it)...

  • If they tested stable kernels, they would probably get only one big step with each major release - without explaining where it actually came about.

    That's because stable kernels are mostly on the security-maintenance and driver-update path; they don't tinker with the scheduling, memory, signal, and disk I/O routines.

    Plotting development kernels is actually more relevant.
  • That study seems to show that the exponential nature, as well as the bulk of the code, comes mainly from drivers. Some subsystems, e.g. the FS code, actually seem to have decreasing LOC.

    In short, supported hardware grows exponentially.
    Notice that if the rate of adding hardware drivers grew linearly, the cumulative amount of driver code would be quadratic (you're summing a linearly growing series). Since the rate of adding hardware drivers is probably a little faster than linear, the curve comes out somewhere between quadratic and exponential.

    This is far from being a sign of bloat disease. This is actually quite healthy growth.
  • Based on these numbers and the test I just ran, Linux 2.4.0 kicks FreeBSD 4.0-STABLE's ass all over the place in every category.

    Sure, I only have a 400MHz K6-III vs. their 850 MHz Pentium III, but it's not like Linux does everything twice as fast; it's much worse than that.
  • by cartman ( 18204 ) on Wednesday May 09, 2001 @04:11PM (#233559)
    First, the university benchmarking team simply ran lmbench (a free, popular, old kernel benchmarking utility) on a variety of kernels. Claiming that:

    Three students and a professor from Northern Michigan University spent the semester benchmarking a bunch of Linux kernels

    ...somewhat exaggerates this accomplishment.

    Second, no data were presented on the main areas of the kernel that were improved. How is SMP performance in kernel space? Did the finer grained locks help? How is the performance from the threaded IP stack? Does it prevent IO blocking?

    THAT kind of information would have been interesting. They tested only things that the kernel has done forever.
  • If lmbench is a standard benchmark, I wonder what the same tests run across FreeBSD 2/3/4 and Windows NT 3.51/4/2000 would show.

    For those who are interested, here [bitmover.com] is the LMbench home page.
  • I guess I've answered my own question. Here are Larry McVoy's lmbench results for AIX, Linux, FreeBSD, IRIX, and SunOS [bitmover.com].

  • I'm especially interested in FreeBSD.

    thanks,
    chris
  • > Changing BIOS memory setting from CAS 2 to CAS 3 : 3.7% speedup.

    Oops. Make the obvious correction.

    --
  • by Black Parrot ( 19622 ) on Wednesday May 09, 2001 @09:01PM (#233564)
    > I definitely noticed a jump in performance between 2.2.16 and 2.4.0 so they must be missing something here.

    I use a "real world" benchmark (which of course might be completely irrelevant to you, however relevant it happens to be to me).

    Here are some recent observations regarding this specific benchmark, ranked in order of effect:
    • Changing BIOS memory setting from CAS 2 to CAS 3 : 3.7% speedup.
    • Changing to a different brand motherboard, and matching the original's BIOS settings as well as possible : 2.1% speedup.
    • Upgrading 2.4.3 to 2.4.4 : 1.1% speedup.
    • Running under kernel compiled as "Athlon" rather than "i686" : no substantial difference.
    Moreover, although I have not had time to test it, a well-informed friend tells me that using certain recent versions of gcc rather than certain older ones can give a whopping 30% slowdown, even using the same flags for compilation. (N.B. - He did not say "gcc is getting worse with time". He merely remarked re two specific versions, whose numbers escape me at the moment.)

    If performance tuning is your forte, then clearly you've got your work cut out for you.

    --
  • by the eric conspiracy ( 20178 ) on Wednesday May 09, 2001 @06:40PM (#233565)
    Over three years it's still positive.

  • by the eric conspiracy ( 20178 ) on Wednesday May 09, 2001 @04:49PM (#233566)
    Every evening I run a disk/memory intensive program that does a 3 year analysis of the US stock market. When moving from 2.2.x to 2.4.x I obtained a run time decrease from 270 to 190 seconds. This to me was a VERY impressive upgrade. The same code running on Win2000 takes 1300 seconds to run.

  • by the eric conspiracy ( 20178 ) on Wednesday May 09, 2001 @06:36PM (#233567)
    It's the same code running on the same box - a dual P2 400 with 0.5 GB of RAM. No ifdefs. Programs are invoked from the command line. Relatively small results datasets are saved to files. Because of the size of the input dataset and the crappy indexes, the main performance determinant is the efficiency of disk I/O and the buffering thereof.

    For this application the 2.4 kernel kicks butt up and down the street all day. YMMV.
  • by MrClean ( 23413 ) on Wednesday May 09, 2001 @08:09PM (#233568)
    Another, more extensive Linux evolution study is at:
    http://plg.uwaterloo.ca/~migod/papers/icsm00.pdf
  • These pathetic graphics are grounds for inclusion in Edward Tufte's "Chart Junk" chapters. For those who haven't read his books, I highly recommend all of them. http://www.cs.yale.edu/people/faculty/tufte.html
  • As pointless and misleading as the connected dots were in these graphs, turning it into a set of bar graphs would not improve matters. I think you were more correct when you proposed dots. OTOH the data is more or less meaningless, complicating the problem of how best to display it...
  • Well, along those lines of what I want on a system, for Windows, throw in VC++ (which I'm sure is huge), perl, python, VB (instead of tcl/tk, shell scripting, fortran, and all the other misc language support many of us have), MSSQL, and Photoshop or Paint Shop Pro.

    (What I'd REALLY want on a Windows system is an X server and Cygwin, but for the sake of argument, I'll leave that stuff out.)

    I'm guessing we'd be approaching some huge numbers on both sides, and all I can really speculate is that I think Windows would have more overlapping functionality in its apps, but I can't say as for lines of code.

    Anyway, lines of code is not directly a measure of bloat. In my mind, bloat is lines of code divided by (functionality times stability times performance), but I realize that not everyone shares my view on that.

    Yeah.

    -ben.c
  • Win2K may be 30 million lines of code but the Win2K *kernel* is tiny compared to that amount. The 30 million lines includes everything from the kernel, logging, user management, dialup tools, solitaire to the file manager. Don't compare apples to oranges.
  • Don't forget about the added S/390 arch files, too...
    --
  • Nice article.
    Can someone point me to some links on GCC optimization?
  • in any language it is impossible (except maybe on alt.sex.stories).
  • The other side [vidomi.com] of the story is on their site.
  • by CJ Hooknose ( 51258 ) on Wednesday May 09, 2001 @05:47PM (#233577) Homepage
    What I wish is that hardware manufacturers would just use one standard interface, then only one driver for each device would be necessary. Impossible you say? Look at current modems, old sound cards (all sound blaster compatible), NE2000 network cards (I won't buy any other kinds) ATAPI CD-Roms....

    Yeah, right. The problem with this approach is that it leads to unnecessarily narrow definitions of functionality, and can prevent hardware manufacturers from doing things cheaper. Not only that, but the examples you chose are kind of screwy. "Current modems" without a qualifier implies the N+1 varieties of WinModems out there, which all do things differently. Many old sound cards did things their own way and had a small DOS TSR that provided SB compatibility in software. The floppy, IDE, and ATAPI command sets, as well as the RS232 serial-port standards, are published and standardized, but these are properly communications protocols between devices, not the devices themselves. The PCI and ISA buses are, again, more like protocols to allow devices to communicate rather than devices themselves. I don't see too many non-PCI, non-ISA devices that plug into the insides of an x86.

    Non-x86 hardware platforms have it easier; one vendor like Apple/Sun/IBM says, "This is the list of hardware that works on our platform," and you use it. The multitude of hardware vendors for x86 boards and devices has led to a large amount of conflicting standards and weird, proprietary hardware. (If a vendor can save $0.10 per unit on a device by leaving out hardware functions which can be replicated by a kludged binary driver, they will. Think WinModems.) This approach has also made x86 hardware cheaper than the alternatives.

    Simply put, things will change and change quickly in hardware. Standards are a good idea, but they quickly become lowest-common-denominator, think "VGA".

  • I hope you aren't using 2.4.2. It was buggy and crashed a lot on my system (reiserfs may have been the problem).
    ------
  • Guns don't kill people. Bullets kill people.
    ------
  • P4 1.4G

    What is the 1.4G that you are referring to? 1.4GB HDD? 1.4Gbps Ethernet? 1.4GB RAM? 1.4GHz? $1400?

    Nothing's more confusing to the non-computer-"literate" people than having people like us talking ambiguously.
    ------

  • Obvious correction? Which correction is that? (What is CAS, anyway?)
    ------
  • by norton_I ( 64015 ) <hobbes@utrek.dhs.org> on Wednesday May 09, 2001 @08:22PM (#233582)
    2.4.0 has a dramatically improved mm system, most of the benefits of which don't show up on these tests, yet make a world of difference in real life.

  • Why the h*** do they list both RAM and a combination of base mem & extended mem on their 'resources' page? It would have mattered if they had tested MS-DOS 6.2, but not Linux!!

  • is this stuff documented anywhere? has anyone gone through and done a thorough analysis of when each gcc option is best used? doing so might be very beneficial for linux overall....

    ----
  • Exactly what I was thinking!

    I can't say I find these benchmarks very credible. Unfortunately, people will see these "benchmarks" from a college professor and instantly think this is some seriously authoritative info on the comparative performance of various Linux kernels. Bleh.

    If they are so authoritative on OS design and performance bottlenecks at such fine-grained levels of OS mechanics, perhaps they should put their 4.5 years into improving Linux to where they think it should be performance-wise.

    But alas, they wait and wait for the next kernel release, run some non-real-world benchmarks, and then try to ponder some conclusion from their numbers. Four and a half years and this is all they could come up with?

    Don't get me wrong, I think these types of profiling benchmarks have their place, but they should usually be used in pursuit of finding the culprit behind performance degradation found in real-world benchmarks, with a view to actually fixing these smallest yet most significant of bottlenecks.

  • they've got better things to do than write new driver code every time kernel 2.6.287-test1.patch58 comes out.

    Huh? First off, a good printer will interpret some common printing language like PostScript or PCLx to render differences between various printer hardware irrelevant, for anything beyond plain text. So really, for these printers, the version of the kernel or even what OS is running the spooler is never going to be an issue as long as the printing app speaks PS or PCL.

    In the case of crap printers that can't even print plain text without having the CPU tell it when to move the print head and when to splatter ink from which holes, the kernel version or OS can still be irrelevant, as this can be done well outside the kernel. A filter program that accepts PostScript and then converts it into signals that the printer can accept doesn't have to be reliant on a particular kernel version (see the sketch at the end of this comment).

    Ghostscript compiles on practically any Unix, MS-DOS, Win9x, Winnt, Win2k, OS/2, VMS,... kernel shmernel.

    Even if this were something kernel specific, the OEM could simply release a kernel driver for a version of Linux as source code, and then someone(s) would most likely build it into something much better, faster, and more stable for kernels up to current ones. Witness the history of the SBLive drivers! They started out from Creative quite closed, buggy, and featureless; Creative released the code under pressure, and now the SBLive is one of the best sound cards supported in the latest Linux kernels.
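
    By way of illustration, the whole "filter" can be as small as this (a sketch only; ljet4 and /dev/lp0 are placeholders for your printer's Ghostscript driver and device node):

    #!/bin/sh
    # Minimal print filter: PostScript arrives on stdin, Ghostscript
    # rasterizes it into the printer's native language, and the raw result
    # goes straight to the device -- no kernel involvement beyond the
    # parallel-port driver.
    exec gs -q -dSAFER -dBATCH -dNOPAUSE -sDEVICE=ljet4 -r300 \
            -sOutputFile=- - > /dev/lp0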

  • by lizrd ( 69275 ) <[su.pmub] [ta] [mada]> on Thursday May 10, 2001 @07:57AM (#233587) Homepage
    Don't compare apples to oranges.

    I've always wondered why people say that. I can make several valid comparisons between apples and oranges:

    • Oranges have a thicker skin than apples
    • Apples grow better in northern regions than oranges
    • Apples make a better pie than oranges
    • Orange juice is thicker than apple juice
    • Oranges have larger seeds than apples
    I could continue on like this for some time and I don't think that I would ever get around to mentioning either Linux or Win2k whilst comparing apples and oranges (Though, I might get around to mentioning OSX and British cell phone users if I were to keep at it long enough)

    ________________________
  • This is pretty useless. It compares different machines, and vastly out-of-date versions (of FreeBSD, at least). Tonight I'll run a test of current Linux and FreeBSD on the same hardware (both vmware virtual machines running on the same physical box) and post some results.
  • by Baki ( 72515 ) on Wednesday May 09, 2001 @10:11PM (#233589)
    Another thing making this benchmark useless is that it only tests Linux performance under no-load conditions (i.e. the benchmark is the only thing that runs); it doesn't tell anything about scalability and keeping up performance under heavy load.

    And that is exactly the point that Linux is often criticized for, compared to competitors (Solaris, FreeBSD): it may perform well under no- or light-load conditions, but it doesn't scale well. It would have been interesting to check whether this criticism is still valid for the 2.4 kernels.

  • by cananian ( 73735 ) on Wednesday May 09, 2001 @05:29PM (#233590) Homepage
    This was really a pretty sloppy writeup. The "performance note" from Linus was linked a page too early, there were no convenient navigation links, and far too little effort was spent identifying the sources of the performance improvements. In addition, "capabilities" are blamed for what was really the result of a debugging-printk excess, and in at least one place "kernel 2.1.92" was blamed (a convenient culprit) when, looking at the graph, it is obvious that kernel 2.1.*32* was the outlier.

    I'm not impressed.

  • Most of the growth is in the drivers...and that is a good thing.
  • I rented that video last week. Very racy.
  • check out the quote on http://euclid.nmu.edu/~benchmark/index.php?page=null_call [nmu.edu]:

    "As mentioned in our methodology section, this is due to a bug in the kernel code that lead to a feature freeze in subsequent kernels."

    if a bug in the kernel code can cause a feature freeze, someone better debug the developers! :)

    jon
  • It seems to me that any program firing off thousands of signals per second has a serious design flaw.

    Does your brain have a serious design flaw?

  • A few things.
    a) You seem to be overexpanding your data to make your point seem more important. IOW: 98 and 98SE have little in them that would significantly change driver development. The changes between these two are primarily "feature oriented". Same with the 2000. Furthermore, you are citing compatibility with unreleased software... don't count your chickens.
    b) WDM is not nearly as seamless as you claim it to be. Although you may be able to WRITE drivers that work on all those OS's, this doesn't imply the reverse correlation! In other words, it doesn't follow that all WDM drivers work on all those products, since such things simply aren't true. For Win2K I've had to get specific drivers for my G400 and my DXR3 even though Win98 WDM drivers existed.
    c) Unified drivers have been proposed several times; just do a search on the kernel mailing lists or Kernel Notes and you'll see that there are lots of reasons they get rejected. Some of them are good, some are performance related, some are religious.
  • The original statement is bullshit. The LOC of the kernel have increased almost exclusively to provide oodles of device drivers and support for more architectures, not because of bloat in the core parts of the kernel. All of it just increases the size of the full source download, not of the final compiled binary.
  • why the FUCK must everyone insist on political correctness in linux-related stories? the fact that microsoft exists and that people choose to use their products is NOT reason to just blindly post inflammatory criticisms of their methods. if i want to use some in-house graphing program that produces graphs identical to the ones displayed by MS Excel, should i avoid treading linux waters with my statistical analyses simply because i'm afraid of bullshit backlash? give me a break.
  • bet you're using Cygnus...be gone, troll!
  • In my mind, bloat is lines of code divided by (functionality times stability times performance), but I realize that not everyone shares my view on that.

    Interesting... I think I do!

    But there are still factors to consider. I think we at least need to multiply by the spaghetti ratio, but other factors, such as the usefulness index, design cleanliness coefficient, and ugly hack quotient, need to be taken into account. :-)

    Oh well.

  • by joto ( 134244 ) on Wednesday May 09, 2001 @08:34PM (#233600)
    So when will line count surpass Windows 2000?

    Depending on point of view, that has already happened long ago...

    To make the comparison meaningful, you have to get systems of somewhat equal capacity. The Linux kernel by itself is in no way comparable to Windows 2000.

    In addition we need various file utilities, an accelerated X11 server (with Mesa/OpenGL, the video extension, and antialiasing), one of Gnome/KDE (file manager, basic desktop utilities, a simple text editor, something akin to COM (which would be Bonobo or KParts)), a working web browser (Mozilla or Konqueror), some user-friendly utilities to replace the control panel, a user-friendly email client and newsreader, a simple web server, basic networking utilities (Samba with a user-friendly network neighborhood browser, telnet, ftp, ping, ...), a good media player (capable of playing at least wav, mp3, CDs, mpeg, avi, mov, and preferably asf and wmf), minicom, a ppp dialer, and probably quite a few other goodies I've forgotten to mention.

    If we put all this into a linux-distribution, I doubt we would do much better than W2k. But to make things even worse, that wouldn't make much of a linux-system. Most linux-users wouldn't be too happy without emacs, gcc and friends, perl, python, tcl/tk, and most of the common command-line utilities (sed, awk, find, etc...) (and probably also apache, MySQL or PostgreSQL, gimp, etc...).

    Line-count? Well, guess what... Linux has become bloatware... Even more than what's produced in Redmond!

  • I have 10/100Base-T cards in multiple systems (full-duplex of course) and they perform just as well as my SiS900, 3com509, Realtek, and others.
  • My point is, don't buy the crap that they make proprietary just to save a buck. 99% of the time the cheaper one has lost some functionality or stability (i.e. WinModems). While it hasn't made a huge impact, people aren't buying WinModems as much as their hardware-based counterparts. Why? Because they've been told what's wrong with just picking the cheapest one. Now if we could do that with other types of hardware....
  • I was speaking of odd hardware, not odd implementations of hardware. (i.e. data acquisition cards, video capture cards, MPEG boards, etc.)
  • i86 has long been touted as the standard because it isn't proprietary the way Apple is. The problem is, the devices coming with the CPU and motherboard are just as proprietary as Apple's systems.

    Besides... Apple only seems like it qualifies because there isn't much different hardware for it. It's not that all video cards use one driver, it's that there are only 2 video cards (exaggeration, I know). If Apple got popular, they would be in the same boat. At least if i86 set the precedent, other platforms could take over and not run into the same problems later.
  • You've just hit on the killer problem there. OS developers just take it for granted that they have to write drivers for every device out there. What I wish is that hardware manufacturers would just use one standard interface, then only one driver for each device would be necessary. Impossible you say? Look at current modems, old sound cards (all sound blaster compatible), NE2000 network cards (I won't buy any other kinds) ATAPI CD-Roms (all recent ones are) Floppy drives, and many more devices. If people would put their foot down and say 'I want compatibility' then driver problems under any device would be a distant memory, OSes would be far smaller, hardware would be truly interchangeable, and Windows wouldn't be the only option for those with exotic hardware.
  • by big.ears ( 136789 ) on Wednesday May 09, 2001 @05:26PM (#233606) Homepage
    The most important benchmark they showed was their charts--ugly products of Microsoft Excel. Even though a lot has changed in those 4.5 years, it's still easier to make your charts in Windows.

  • It's NOT all they spent the semester doing. I assume it was an independent study for the students; as such it might have been 2-4 hours of coursework, but not all they did the entire semester. If it were, that would indeed be ridiculous.
  • Yes it's so improved it won't even run Visual Age Java or Apache any more (at least on my machine).

  • That old open source saw

    If you're interested in some results that no one appears to have produced, go do them yourself. Don't criticise someone who has scratched their itch.


  • Good points. But numbers are numbers. And as long as they performed the benchmarks consistently across all kernels tested, these numbers should be useful. Besides, do you think a professor would put his best grad student on something like this?

  • According to this graph [nmu.edu], page fault latencies suck in kernel 2.2. Is this true? I'm running a 2.2.17 AC kernel, though, and if I'm just doing development and not causing swapping then it doesn't matter, right?
  • Isn't a large part of the growing Linux code base hardware support (drivers/alternate architectures)? The exponential increase in the number of lines of code in *.c/*.h files doesn't necessarily mean that Linux is bloatware; rather, I think it's a result of better support for the hardware out there.

    I'd worry more if vmlinuz and modules start to grow exponentially.

    ---

    ---

  • Yeah sure, let me know when Windows2000 becomes open source, then we'll be able to figure that out.
  • by Beowulfto ( 169354 ) on Wednesday May 09, 2001 @04:20PM (#233619)
    Total lines of code have tripled, and are on an exponential growth curve.

    So when will line count surpass Windows 2000?
    ----

  • If it's the same code, then it has nothing to do with his development skills. Most calculations of that nature are done using programs that read input from a text file, perform the calculation, and dump it to the screen or another file. That should be completely portable with no #ifdef __POSIX. Now what could be to blame is the libraries being linked against.
  • Uh, maybe it's just me, but does anyone else think it's funny they used MS Graph (and presumably Excel) to draw the result graphs? You'd think they'd use StarOffice.
  • I agree that the benchmark was not very useful, but it was still interesting. However, testing only the "basics" of the kernel enabled them to show a long-term trend over several kernel versions.

  • Yesterday I modded some of these Michael related posts WAY down. Why?

    1. Because they are often insulting, and I don't like to read lame insults on my slashdot.
    If you make an offtopic comment about a delicate subject, it really doesn't help if you start insulting.
    Just state your opinion calmly and have respect for other people. If you'd post like that I would mod it up. (But sadly i wasted all my points modding you down yesterday :-)

    2. You also always post so mysteriously. Why? I still don't really understand what all the fuss is about. And that's also really irritating. So would you please explain thoroughly what the problem is. Only if we all know what the problem is can we solve it.

    So please post something objective and insightful about this, so we can discuss and solve the whole thing. If you keep posting like this you will only get modded down > get frustrated > post more insults > ...

  • I've read that the Kernel Team has recommended the use of egcs 1.1.2 as an alternative to gcc 2.95.2 for compiling the 2.4.0 kernel. How much effect does that have on the performance of an OS?

    Is it worth the trouble?
  • Awww....

    Then they are saying that it will take twice as long for Linux to tell my apps that I have ordered them killed.... (-1) so maybe that extra 1.5 microseconds might prevent a -9 switch.

  • by rknop ( 240417 ) on Wednesday May 09, 2001 @04:36PM (#233632) Homepage

    One thing that I wonder about: that huge performance hit on the page fault latency shown in 2.2.6. Is it still there as of 2.2.19? Did the fix make its way back into the 2.2 series, or is it only fixed as of the later 2.3's and the 2.4 series? 2.2.6 is the only 2.2 in their study, so the study doesn't answer the question.

    -Rob

  • The results give a feeling that linux is converging in the Cauchy sense.

    i.e. There is not much fat left to trim...

    Therefore the next dramatic improvements if they are to come will not be from tweaking this part or that part of the kernel, but rather from implementing entirely new classes of functionality.

    i.e. Linux has arrived. It's settled down; time for it to start exploring as-yet-unimagined new things to do instead of new ways to do old things.

    The future will be, umm, fun.

    This post is not designed or intended for use in on-line control of aircraft, air traffic, aircraft navigation or aircraft communications; or in the design, construction, operation or maintenance of any nuclear facility.

  • So what? Are you suggesting their conclusions are somehow invalid because they don't use a Linux-based system to draw charts?

    Reread his post. He's not suggesting anything of the sort. He's suggesting that a) many people still find it easier to use Windows than Linux, and b) that's a more important benchmark than speed.
  • by Lethyos ( 408045 ) on Wednesday May 09, 2001 @05:38PM (#233648) Journal
    It would be nice to see updates to the data here as new versions of the kernel are released. For example, some users are not particularly concerned with newer versions of the kernel unless there are significant improvements. Consider this example: you're concerned mostly with performance aspects of the kernel. A new version is released that shows no improvement (or a decrease) in performance. No sense in upgrading immediately (of course, you may be one of those people who actually looks for and reports bugs); you can wait until you see a downward trend in the graphs before taking the time to upgrade. There are other potential uses for "live" data such as this. I think it'd be nice if these guys would keep maintaining it. :)
  • by Professor J Frink ( 412307 ) on Wednesday May 09, 2001 @04:22PM (#233649) Homepage
    Where are the results for IDE/SCSI transfer rates/latency?

    Where are the results for networking?

    I definitely noticed a jump in performance between 2.2.16 and 2.4.0 so they must be missing something here.

    They note the large increase in hardware support, but don't seem to realise that this new and improved support has given Linux much more performance than their benchmarks might show.

    Maybe the improvements in X etc. have helped, but no real performance difference between 2.1.38 and 2.4.0? Put any such machines through real-world work and you'll soon spot the difference...

  • I am a senior at NMU. Maybe you should try to communicate with the professors and other students. I have had no problems with it. In fact, I am currently starting a research project with Dr. Appleton this summer pertaining to Linux file systems. I say, if you don't like it here, leave.... now.
  • For those of you who were interested in the "exponential growth" issue, I did a much more detailed study on the growth of the Linux kernel that was published in the 2000 Intl Conference on Software Maintenance. I think it's very readable by non-academics. Comments welcome. -- MWG http://plg.uwaterloo.ca/~migod/papers/icsm00.pdf [uwaterloo.ca]
