Linux Business The Almighty Buck

Lack of Testing Threatening the Stability of Linux 325

sebFlyte writes "Andrew Morton, a Linux kernel maintainer, has said that he thinks the lack of 'credit or money or anything' given to the people who put in long hours testing Linux releases is going to cause serious problems further down the line. In his speech at Linux.Conf.Au he also waded into the ongoing BitKeeper debate, saying 'If you pick a good technology and the developers are insane, it's all going to come to tears.'"
  • Hmm.. (Score:5, Funny)

    by Vickor ( 867233 ) on Friday April 22, 2005 @09:40AM (#12312493)
    I thought good technology required insane developers...
    • Re:Hmm.. (Score:5, Insightful)

      by kfg ( 145172 ) on Friday April 22, 2005 @10:55AM (#12313297)
      I have this tendency to respond to serious posts with a joke, and jokes with a serious post. I tend to come at problems from angles of perception that other people do not see.

      This is what the best developers do, otherwise they would simply come up with the same mediocre to bad solutions that everyone else does, no?

      They do, however, have this really annoying tendency to see everything from the same "Man from Mars" perspective, not restricting themselves to viewing only code differently than most do. This can make them appear "insane" to the general populace.

      In the land of the blind the one eyed man is a paranoid schizophrenic.

      Insanity is perceiving things not as they really are. If the majority perceive things not as they really are, then the man who does see them as they are will appear insane when he acts upon his perceptions, those acts being unintelligible to the majority.

      And thus is born the image of the "quirky" genius. All will hail his new invention, but titter quietly about how he wears his socks, never for one minute stopping to take the obvious point of view that there just might be something of genius in the way he wears his socks, because he wears his socks differently than the majority do.

      And being the same is sanity, right?

      Never mind that we innately wipe out genius in one swell foop with that attitude. It would enforce a regression to the median, except that half the populace would have to progress to the median somehow, which, trust me, they just ain't gonna do. So instead of a regression to the median we get a regression to the "really dumb."

      Take the current fad for "Playskool" interfaces... please.

      Of course, some "geniuses" really are just insane and "luck into" some discovery through their insane perception of things.

      So how do you tell the difference? Well, takes one to know one, I'm afraid. It would be nice if the people who end up in charge of "mental health" didn't all seem to be dimwitted morons at best, and completely, utterly crackers at worst.

      They're coming to take me away, HO,HO! HEE, HEE! HA, HA!

      I think it's something about the way I wear my socks.

      KFG
      • Re:Hmm.. (Score:3, Insightful)

        by Cthefuture ( 665326 )
        I have this tendency to respond to serious posts with a joke, and jokes with a serious post. I tend to come at problems from angles of perception that other people do not see.

        Heh, I do the same thing. I often consider myself the ultimate Devil's Advocate.

        When others are stressed, I'm calm. When others are calm, I'm stressed. Bizarre, but that's the way I work. Unfortunately this causes problems for me even in geek circles, because they don't see what I see and often I can't explain why I know the thin…
  • Insanity (Score:2, Funny)

    by wirah ( 707347 )
    Isn't any developer insane?
  • by tquinlan ( 868483 ) <tom&thomasquinlan,com> on Friday April 22, 2005 @09:43AM (#12312513) Homepage
    ...does it seem like Linus might need a vacation?

    TFA states that he's starting to take as much pride in rejecting patches as he does accepting them, and with this whole BitKeeper thing, it seems to me like he might need a small break.

    Of course, I'm not one to really talk, as I don't do nearly as much as he does with Linux...

    Also, with regards to testing, those of us who use it daily are testing all the time. I know it's not structured QA, but still, it's a lot of testing.

    Also, maybe slowing down the kernel releases a bit might help. I know that I do an emerge world on my Gentoo boxes about once a week, and it seems like there's a new kernel release every week. If there's a need for more testing, perhaps a little less time releasing and more time testing is in order.

    • by SecurityGuy ( 217807 ) on Friday April 22, 2005 @10:01AM (#12312706)

      Also, with regards to testing, those of us who use it daily are testing all the time. I know it's not structured QA, but still, it's a lot of testing.


      Isn't that the essence of Microsoft's QA?

      Should we be doing what we rightly criticise them for?

      • Exactly.

        Say what you want about Microsoft and the stability/reliability/security of their software, but they have many full time (and paid) people devoted exclusively to testing and trying to break their software so that it can be fixed.
        • Also, with regards to testing, those of us who use it daily are testing all the time. I know it's not structured QA, but still, it's a lot of testing.

          Isn't that the essence of Microsoft's QA? Should we be doing what we rightly criticise them for?

          Say what you want about Microsoft and the stability/reliability/security of their software, but they have many full time (and paid) people devoted exclusively to testing and trying to break their software so that it can be fixed.


          He's saying that they DON'T.
        • Yes they do. Those people, however, tend to work in my office and don't actually get a paycheck from Microsoft to break things.

          On a serious note, Microsoft tends to release prerelease candidates as stable and then does a lot of its quality control through updates and service packs. I usually recommend waiting 4-6 months before using a service pack or a new version of whatever, so I can immediately download fixes and use the product in something like the way the marketing brochure claims. I'm not saying that doesn't happen in any ot…
      • So you say that because Microsoft does structured QA, Linux should not do it? Hm. You know, I recently heard MS doesn't like it if their developers jump out of the window, so... don't worry, it's just four floors down.
      • You are mostly correct... except for the fact that we have unstable, testing, and stable branches. And I can tell you with much certainty that bugs are less likely to slip into the stable branch with this testing than with a good million dollars' worth of structured QA. So, when Microsoft creates an unstable branch, we will be pretty much equal.
      • *sigh* (Score:5, Interesting)

        by bmajik ( 96670 ) <matt@mattevans.org> on Friday April 22, 2005 @10:34AM (#12313056) Homepage Journal
        That is NOT Microsoft's approach to testing.

        Where did you hear or get the impression that that was the MS "approach" to QA?

        I've written test suites for the following Microsoft Products
        - Visual Basic Compiler, 7.0
        - Microsoft Business Framework 1.0 (unreleased)

        None of them involved just using the compiler or the business framework over and over in day-to-day work to find bugs.

        We have a variety of test approaches, including a few that _might_ be construed as what you describe. There are a few ways that we get test coverage via product usage:

        - stress
        - bug bashes
        - app weeks

        Stress is funnier than it sounds. Did you know we're not allowed to ship Windows until the exact build of Windows under ship consideration has been running on hundreds (thousands, usually) of machines continuously with no problems while enlisted in a distributed "stress" client... where they're pounded and pounded with automated tests that do things like starve memory whilst performing other work, etc.? Same with ASP.NET and the CLR - they have to _survive_ for a pre-determined time period before the build can be considered shippable. We don't think there are any show-stopper bugs at this point - but we just want to be reasonably sure. Note that if we find a bug (even an unrelated one, like a typo in the documentation) and take a fix for it, the stress cycle resets because the bits have changed. Better safe than sorry. In the end game of a product release it can literally be the case that taking a bug fix means delaying ship for another week or more.

        - bug bashes
        This is probably most like what you're describing. Everyone on the team sits down for a couple of days and really just beats on a specific area of the product. Security bug bashes have become popular in the last couple of years (wonder why ;) These really don't happen that often during the product cycle, because ad-hoc testing doesn't catch that much stuff if you've got well-developed automation suites. However, it's still very worthwhile, because it is a good feedback mechanism to explain why your other testing missed something, and it's the best way to notice the odd "that's funny..." sort of issues that are not functionally incorrect but are still user-annoyance type issues.

        - app week

        For developer tool products (like Microsoft Business Framework) we like to do an app week with each milestone, where everyone on the team builds some sort of end-to-end application, using as much of the toolchain as possible. This sort of testing really makes the employees better (we're usually pretty compartmentalized in our areas of functionality ownership). It also lets unrelated parties take a look at pieces of the product they don't own (and so don't have preconceived notions about). Finally, it lets us simulate the end-to-end customer experience on our product stack. If we can build the sort of apps a customer might build with our tools, then the tools are probably all right. Where we run into problems, we know the tools need help.

        Bug bashes and app weeks happen perhaps 1-2 weeks per milestone (which is on the order of 2 months). It is a small part of our testing, time-, effort-, and results-wise. It's still important to do, but it is not the _focus_ of QA at Microsoft.

        • Re:*sigh* (Score:5, Informative)

          by Anonymous Coward on Friday April 22, 2005 @11:29AM (#12313611)
          I was a Microsoft developer for about 6 years, and this guy gets it exactly right.

          Most of the really first-class groups at Microsoft (Windows, SQL Server, Developer Tools, lots more) have INCREDIBLY exacting test requirements, and extremely competent and thorough and demanding test teams. The open source community has done well, but it is nowhere near the professionalism and thoroughness of commercial software development. And it's precisely because the testers get *paid* to do the same damn test on every single build -- something open source people won't do, because there's no glory in it.

          Slashdotters will no doubt respond, "Well, if it's so good, then what about all those security bugs!" Which is a fair criticism. Commercial software development (such as Microsoft's high testing standards, and similar at Sun, Apple, etc.) only works when the testing priorities you start with are the right ones. For a long time, Microsoft's priorities were 1) features, 2) usability, 3) more features, 4) stability, 5) security, 6) even more features.

          This has changed. Microsoft mid-level managers (dev managers, product unit managers, etc.) have internalized the idea that they are literally under attack, and that security must be a high priority from here on. I wish they had STARTED with that as a priority, but at least they get the message now.

          But, seriously, the parent poster is right on the money. Microsoft has AMAZING testers and test/developers. The hardware and software matrix that they run code under has to be seen to be appreciated.

          And, again. This is not intended as a slight at all to open source development or testing. It's just *very* different.
        • Just a counterpoint and question to an insightful post.

          Counterpoint: I find that Linux (Debian) with KDE has third-party applications that cause it to hang, just as some non-MS applications cause Win 3.1, 95, 98, 2000, and XP to hang. This never seems to happen on my Mac. Maybe I don't use it enough, maybe there aren't third-party apps developed by substandard programmers calling routines poorly or managing memory in odd ways that cause these problems, or maybe it really is a superior product provi…
          • by bmajik ( 96670 )
            I am not a tester in the Windows division, but based on what I understand to be accurate, Windows appcompat testing is a HUGE time/resource sink.

            A good person on this subject is Raymond Chen, a reasonably famous MS blogger. He's written a number of times about all the crap he personally did in Windows 95 to make certain apps run _at all_. Yeah, there was code in Win95 for the sole purpose of letting certain pre-Win95 games run properly - when those games had bugs in them that we couldn't have counted on t…
        • I think the point was in jest, and a jab at the fact that bugs are found long after all of the "testing" you mention. Still, things get by and are found much later. Couple that with Bill Gates offering to license beta software the other day, and these jabs aren't without some merit.
      • by popeyethesailor ( 325796 ) on Friday April 22, 2005 @11:07AM (#12313423)
        Scott Guthrie [scottgu.com] describes how they test ASP.NET in this blog posting [asp.net]

        Read that, you might learn a thing or two.

    • Linus has always felt that his main role was to reject patches. If you take just anything, people won't refine patches to the point where they maintain or improve the overall quality of the code. Andrew and Linus essentially do a good cop/bad cop routine on patches.

      Of course, he's essentially been on vacation from Linux work, developing git. I'd guess that writing his own thing has made him feel a lot better about the BitKeeper mess. He certainly seemed to be having fun coming up with brilliant solutions…
    • "Also, with regards to testing, those of us who use it daily are testing all the time. I know it's not structured QA, but still, it's a lot of testing."

      When having users stumble into bugs is your primary method of finding them, your QA has already failed.

      Because they do active development on the 2.6 branch, new bugs are introduced all the time. Even if they're only there for one version, there are always more bugs in the next version, which is a big disincentive to upgrade. And not minor stuff, big things…
  • by Dale549 ( 680107 ) on Friday April 22, 2005 @09:44AM (#12312529)
    where the testers (a.k.a. users) get to pay $$$ for the privilege of testing OS stability.
    • by ciroknight ( 601098 ) on Friday April 22, 2005 @09:47AM (#12312550)
      I'm really surprised no company has really used this as a business model.

      I think it'd be awesome to run a software debugging/testing firm, where basically you have a bunch of computers and a bunch of users come in and try their best to break the software. Cheap labor and a good variety of machines, and you could quite quickly clean up even some of the nastiest code.
      • That may work in userland, but it's not that simple in the kernel, where bugs can lurk for years and only appear due to unrelated code changes that affect code path execution and timing.

        You may be able to 'break it', but can you repeatedly 'break it'?

        Can you predictably reproduce the bug?

        • by ciroknight ( 601098 ) on Friday April 22, 2005 @09:58AM (#12312665)
          Virtual machines can help with this; running the kernel in a sandbox to get an actual snapshot of the kernel in action. But at the same time, the kernel's going to be running, and userland/kernel-land interaction will cause plenty of bugs to crop up and show themselves. But you are right; it's hard to poke at a kernel to see what's broken, especially when some code paths are very hard to follow and others are almost never used on certain systems.
      • Uh, no. (Score:5, Informative)

        by bmajik ( 96670 ) <matt@mattevans.org> on Friday April 22, 2005 @10:17AM (#12312873) Homepage Journal
        Software testing (usually) isn't monkeys pounding on keyboards until the box BSODs.

        It is difficult to test software without adequately understanding what it is supposed to do. Varying the underlying machine type is almost irrelevant for binary-distributed software unless you're testing an operating system kernel or looking for race conditions in software (which is really just a stab in the dark).

        How are you going to have 3rd party people debug software they know nothing about?

        Where users help find bugs is by using the software. It honestly takes a certain mentality to be an effective software breaker, and it's not very common. It takes something else entirely to be a software tester; you've got to be a good developer (because software testing is about automation these days, unless you're insane) but you've got to not get sucked into the developer's way of thinking.

        I assure you - letting normal users play with software doesn't clean it up. We can show that this is true in the following way:

        - more users use Microsoft software for more hours a day than any other software in the world
        - Slashdotters say Microsoft software is the buggiest software made

        Clearly, if users using software were sufficient to find all the bugs, MS stuff would be bug-free, based on its frequency of use alone. I know this isn't the case, because I'm a software tester at Microsoft.

        (The appropriate response is "well then, stop posting and get back to work; you're clearly not done yet!" :)

        W.r.t. Linux kernel testing: this is something that's always amazed me - Linux works surprisingly well for something with so little formal testing. On the other hand, when there are edge-case problems, my experience has been that nobody is much interested in fixing them. One example I had was at a consulting gig. The client was looking to move his web hosting business onto Linux boxes if he could get more sites per box than he could on Windows. He had a problem where his Linux server would start dying after a few days. I started to look into it, and the box would basically panic() in low-memory situations. I asked Alan Cox about it (via IRC) and the response was "buy more memory". Nice.

        Another sore point with me growing up was X server crashes. The X server was 99% reliable, but then you'd get some random crash and lose everything you were doing, and you knew there was no real way of getting it fixed or investigating it... you just had to hope it magically got better somehow... maybe when you switched hardware or something.

        Then there's the just plain lack of testing of some F/OSS projects in general. When I was in college I had NeXT, Sun, and SGI boxes in my dorm room (but no Linux :). I remember downloading the Gaim tarball (this was loooong ago) and seeing about getting it built on my SGI machine. IIRC, there were some makefile/#include problems getting it to even build, and once it was built there were some other issues with its runtime. Ultimately I submitted a patch to the Gaim folks that more or less "enabled" Gaim on IRIX. There is no way anybody had ever used Gaim on an SGI without making these fixes, so it seems reasonable to suggest the authors had never tried it before. This lack of a platform test matrix is pretty common amongst smaller F/OSS apps; even when they say "works on *nix" they mean "works on the distribution of Linux I run at home".

        Another baby patch I submitted was for the OpenBSD kernel... this time for the wdc driver. Back when UDMA 100 was newish, I bought 2 UDMA 100 disks a month or so apart... so they were different sizes and different vendors, but on the same bus. The UDMA rollback code in OpenBSD would drop the DMA level from 5 (UDMA100) to 2 (something much slower, I don't remember what) after a certain number of DMA errors. This obviously sucked, since you can run UDMA devices at different speeds on the same bus, and you can also fall back to UDMA66 and UDMA33, both of which are better than mode 2.
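
        For illustration only, the saner behavior might look something like this table-driven sketch (hypothetical C; the names are made up, and this is not the actual OpenBSD wdc code): step down one UDMA mode at a time, per device, instead of dropping straight to mode 2.

        /* Hypothetical sketch of per-device, stepwise DMA fallback --
         * not the actual OpenBSD wdc driver code. */
        enum dma_mode { MW_DMA_2, UDMA_33, UDMA_66, UDMA_100 };

        /* On repeated DMA errors, drop ONE step for THIS device only,
         * rather than dropping straight to multiword DMA mode 2. */
        static enum dma_mode downgrade(enum dma_mode cur)
        {
            switch (cur) {
            case UDMA_100: return UDMA_66;
            case UDMA_66:  return UDMA_33;
            default:       return MW_DMA_2; /* already at the floor */
            }
        }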
        • This lack of a platform test matrix is pretty common amongst smaller F/OSS apps, even when they say "works on *nix" they mean "works on the distribution of linux i run at home".

          Switch to Debian. Oh wait, all the crybabies whining about how long it takes to make sure every package builds and runs on every platform, when all it needs to do is "run on the box I have at home", are ruining it for us.
        • The issues you bring up point to a major issue I have with developers:

          By and large they do not understand how to apply the KISS principle to their design and implementation problem.

          In some respects it might be an art form - because understanding how to simplify something that is very complex is very difficult. Nevertheless, if you approach the original problem with that in mind, you can minimize the effort of maintaining an understanding of the state (possible paths/interactions) of your application in…
        • The problem is, most people who use Windows every day are actually using Windows every day. These users' goals are to get their jobs done. What I proposed would be a company that spends its time poking holes in the software: deleting files, running sloppy code, doing whatever it can to crash the code, and then spending time figuring out why the code crashed.

          Secondly, Microsoft wouldn't release their source code, so debugging would be pointless. But if you could generate a few thousand bug entries into…
      • Is *finding* the bugs really the limiting factor? It seems like bug databases have lots of entries--the hard part is to (1) isolate what exactly is causing the bug (which requires reproducible testing) and (2) to code a fix.

        The users, in the comfort of their own homes, already do a pretty good job of "trying to break the software" just by using it to get their work done!
      • They do exist [babelmedia.com] for the games industry, and most likely other areas too.
      • Not really. Hiring an infinite number of testers will only find the bugs. You still need real developers to fix them (which testers are not, and users, which you are suggesting, sure as hell are not).
      • I'm really surprised no company really has used this as a business model.

        You don't work with vertical-market software, do you?

        All of our critical sales apps are effectively alpha tests. By the time an app becomes stable and usable, they retire it and sell us their next abomination that does not work right, has 1/3rd the features the sales guy sold the CTO on, and has stability problems that make any admin cry (imagine a printer driver on your W2K box causing data corruption in an app... WTF is THAT!)

        loo…
  • by frankblack9999 ( 704688 ) on Friday April 22, 2005 @09:46AM (#12312546)
    just imagine what'll happen if Linux actually makes a dent in the non-geek desktop market, and widespread use by "appliance operators" ensues.
  • Loose women ??? (Score:5, Informative)

    by noisymime ( 816237 ) on Friday April 22, 2005 @09:46AM (#12312548) Homepage
    Speaking as someone who was at that talk: he specifically said NOT to give money to testers. His words were actually 'give them credit, fame and loose women'.

    This drew laughs from the audience.
    • I read TFA, and combined with what you have said, I guess ZDNet's pro-Microsoft FUD machine is at work again.

      But it is just a guess.
    • Everyone knows that the loose women are using Apple [digibarn.com]
    • by xixax ( 44677 ) on Friday April 22, 2005 @10:38AM (#12313098)
      From TFA:
      "A lack of commitment to testing by the Linux community may ultimately threaten the stability..."

      The content of the article is much better than the headlines and excerpts being quoted. I was there, and felt that what he was getting at was that we need to start thinking about updating QA procedures. The ratio of bugs to features is decreasing, but the rate of features is (maybe?) growing that much faster. The point of his talk was to outline a number of options for improving QA; there are issues, but the sky certainly isn't falling either. It was an excellent follow-on from Tridge's keynote the previous day on how to do quality system programming (overshadowed by his very brief coverage of the BK thing).

      Xix.
  • by Anonymous Coward on Friday April 22, 2005 @09:48AM (#12312563)
    Osnews had an article a while ago about some of the testing Sun do on Solaris - http://www.osnews.com/story.php?news_id=10178
  • by G4from128k ( 686170 ) on Friday April 22, 2005 @09:50AM (#12312581)
    Testing of Linux might be easier if it contained some automated features for sending crash reports back to a central database. Gathering some basic data on the stack trace, thread states, processes, etc. might help troubleshoot the OS in the context of the wide array of systems, configurations, and usage patterns. I know that both Microsoft and Apple have benefited strongly from this feature. Some tin-foil-hat wearers might object to their box phoning home; they can just disable the feature, but it might mean that the types of bugs they experience never get fixed.

    If developers are going to fix the bugs that occur in the real world, they need data from the real world.
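
    For userland at least, the reporting half of such a feature can be tiny. A minimal sketch, assuming glibc's backtrace() facilities (this is illustrative only: it is not an existing kernel facility, and the log path and "reporting daemon" are made up):

    /* Sketch: log a stack trace on SIGSEGV for later submission.
     * Hypothetical example; a kernel-side reporter would need far more
     * care, since userland APIs like backtrace() don't apply there. */
    #include <execinfo.h>
    #include <fcntl.h>
    #include <signal.h>
    #include <unistd.h>

    static void crash_handler(int sig)
    {
        void *frames[64];
        int n = backtrace(frames, 64);
        int fd = open("/var/tmp/crash-report.log",
                      O_WRONLY | O_CREAT | O_APPEND, 0600);
        if (fd >= 0) {
            backtrace_symbols_fd(frames, n, fd); /* async-signal-safe */
            close(fd);
        }
        _exit(128 + sig); /* die; a separate daemon could upload the log */
    }

    int main(void)
    {
        signal(SIGSEGV, crash_handler);
        /* ... normal program work ... */
        return 0;
    }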
    • Brief Answer: No. (Score:5, Informative)

      by ciroknight ( 601098 ) on Friday April 22, 2005 @09:54AM (#12312632)
      Long answer: kinda. You can use core dumps and system logs to interpret what's going on, but you can never really know for sure. Besides, the kinds of errors that are in the kernel are the kinds that really don't return error codes; they're the kind that crash the computer and make you reboot.

      Microsoft's method covers some of the higher-level software, and so does Apple's. If there's a bug in the kernel, it's very unlikely that their code will catch it. Or at least that's been my experience.

      If the problem is that Linux is so buggy, we just need to run it on a bunch more machines, and start randomly poking it as hard as we can until we break something. Once we've broken it, do it again to make sure it's not hardware, and then go to work fixing it. Good old brute force repairs.
      • True enough... and to add to what you said, KDE at the very least does include a crash-reporting feature, identical to the Windows/Apple versions. When anything in KDE crashes, I get a dialog pop-up asking me if I would like to report or debug, etc.

        -Jay
      • "It's easy to fix a 747- we just need to look through it until we find the broken part, then weld it back together or replace it."

        "It's easy to create peace in the middle east- we just need to bring the leaders of the factions together and get them to agree to stop fighting."
    • Actually, Microsoft's OSes *don't* have a built-in crash reporter for the OS; they report crashes of applications, which is totally different. A BSOD only hits the screen. When your kernel is trashed, you can't trust your disk subsystem, networking subsystem, etc. to actually do what you told them to.
  • by xxxJonBoyxxx ( 565205 ) on Friday April 22, 2005 @09:52AM (#12312607)
    I thought the point of Open Source was to allow more people to read through the code. You mean thousands of people aren't really doing that for fun? I'm shocked.

    More seriously... I think many of the people who DO eyeball the code are looking for security problems these days (where you do get recognition, etc.). For the record, I know I won't get any HR props for putting OS bugs that I've uncovered on my resume, but the security bugs I've found are always good conversation pieces.
    • Eyeballing the code is one thing, running it and poking at it is another, and a comprehensive automated test suite is yet another thing.
    • by Bostik ( 92589 ) on Friday April 22, 2005 @10:43AM (#12313153)

      Actually, I'd say that giving proper credit and public recognition for bug reports is good enough for most of the end-users. Case in point (interestingly enough, from LKML and by Andrew Morton himself):

      • User reports a nasty bug on LKML
      • Devs request details and the user provides them
      • User is provided with a patch to test; the first one does not work, but the second one does.
      • User reports back that the second patch fixes the problem and apologises for not being able to assist better.
      • Andrew Morton replies (and I'm quoting from memory so only the context will be correct): "You reported a problem, provided enough details to pinpoint it, tested patches to fix the problem and reported back that a certain patch indeed was the correct fix. What more could we possibly ask for?"

      Getting an answer like that should lift anyone's spirits. Not only has the bug been fixed, it was also recorded for posterity that a certain user discovered it and helped to the best of his ability in fixing it. And to top it all off, the reporter was given honest praise and a thank-you. That last part alone is usually enough for most users - to see that the developers actually care.

      As for resumes? If you have a verifiable record of reporting bugs and helping to test their fixes, you should be able to use that to your advantage in a CV, or at least in an interview. If nothing else, it shows that you can communicate with different kinds of people and have enough technical ability to follow through with requests for further details. You might even have gotten a better product for yourself to use.

  • by selectspec ( 74651 ) on Friday April 22, 2005 @09:54AM (#12312626)
    Morton also criticised the Bugzilla tool used for tracking problems, saying that it encouraged one-to-one communication, a process which didn't help educate the wider community about potential problems.
    "Bugzilla is fine for tracking bugs, but as it's currently set up, it's not very good for resolving bugs."

    Hmm... I'd be interested to understand what alternatives to a web-based system he has in mind. Any thoughts?

    "This process, where individuals communicate via a Web site, is very bad for the kernel overall."

    • We are running Continuus here for versioning, configuration management and problem reporting.

      This application suite is not complete. We have written a whole lot of software around it so that it can optimally be used in our processes.

      One of the things that we have and that probably can aid Bugzilla is an overview of subprojects and their outstanding and resolved bugs. But of course that means having a more formal way of working between Bugzilla and the kernel developers.

      Another part of the processes here…

  • lack of testing? (Score:5, Insightful)

    by Cat_Byte ( 621676 ) on Friday April 22, 2005 @09:54AM (#12312629) Journal
    I thought the whole Fedora project WAS mass testing of "cutting edge technology for Linux". Have I been wasting my time submitting bugs? Most of the ones I've submitted so far have been fixed.
  • QA isn't sexy (Score:5, Informative)

    by ChaoticCoyote ( 195677 ) on Friday April 22, 2005 @10:03AM (#12312725) Homepage

    Morton is correct.

    Even at commercial companies, QA isn't a "sexy" task. People would rather bang out code than write testing harnesses and run benchmarks.

    Also, free software is driven by programmers, who tend to hate QA. Like any artist or craftsman, a programmer hates having their work critiqued. They spent hundreds (or thousands) of hours on a program, only to have someone nit-pick the details and point out the flaws. But in art, "quality" is subjective -- in software, quality and reliability are tangible quantities that can be measured.

    My Acovea [coyotegulch.com] project demonstrated the problem. Users of GCC love Acovea; many developers of GCC, on the other hand, seem to treat it as an annoying distraction. Acovea identified more than a dozen errors (some quite serious) in the GCC 3.4/4.0 compilers -- and yes, I did report them to Bugzilla. Only a couple of GCC's maintainers have said "thanks."

    Not that the cool reception deters me. I have a new version of Acovea in the wings, and will be unleashing it on GCC 4.x Real Soon Now. ;)

    As a consultant, I've been paid to perform QA work on commercial software packages -- but only one company, and a big one at that, has ever contracted me to QA a free software project.

    Right now, free software is about many things, but quality is not job 1. And that needs to change.

    • I looked over your project and it's quite interesting. I like this quote:

      Acovea was designed with "spikes" in mind -- short programs that implement a limited algorithm that runs quickly (2-10 seconds per run). Using the default evolutionary settings, Acovea will perform 4,000 compiles and runs of the benchmark; even with a very small and fast benchmark, this can take several hours. If your program takes a full minute to compile, and another minute to run, Acovea will require 8,000 minutes, or more than 5…
    • Replace testers with scripts
      (I know, I know, humans make mistakes and discover bugs)

      but hey, just try building all my modules into a static kernel and it fails (cxx drivers)

      Basically, hardware manufacturers would love to send their bits somewhere to get a stamp on them. That's what they should do:

      get a logo

      e.g. "Drivers for Linux"

      If your hardware works and passes the automagic tests on Linux, then you get to put that logo on your product.

      Job done.

      regards

      John Jones
    • True... (Score:4, Informative)

      by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Friday April 22, 2005 @11:33AM (#12313650) Homepage Journal
      However, there are aids. The Linux Test Project doesn't do much real testing, from what I hear, other than some basic standards stuff, but it should be simple enough to bolt on some real heavy-duty code testing routines.


      Then, there's the mysterious Stanford Code Validator, used to great effect for a while. I feel certain that a few sweeps of that would uncover many of the more troublesome problems.


      For those without SCV (99.9999% of the planet), there are some Open Source code validators out there. It should be possible, at the very least, to use those to identify the more blatant problems.


      If you're not sure about using code validators, then it's simple enough to write programs that hammer some section of the kernel. For example, if you have some large number of threads mallocing, filling and freeing random-sized blocks of memory, can you demonstrate memory leaks? How well does the VMM handle fragmented memory? What is the average performance like, as a function of the number of threads?
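
      A minimal sketch of such a memory-hammering test (the thread count and block sizes are arbitrary; compile with -pthread):

      /* Stress test: many threads malloc, fill, and free random-sized
       * blocks, to shake out allocator/VMM problems under load. */
      #include <pthread.h>
      #include <stdlib.h>
      #include <string.h>

      #define NTHREADS   64
      #define ITERATIONS 100000

      static void *hammer(void *arg)
      {
          unsigned seed = (unsigned)(unsigned long)arg;
          for (int i = 0; i < ITERATIONS; i++) {
              size_t sz = 1 + rand_r(&seed) % (1 << 20); /* up to 1 MiB */
              char *p = malloc(sz);
              if (p) {
                  memset(p, 0xAA, sz); /* actually touch the pages */
                  free(p);
              }
          }
          return NULL;
      }

      int main(void)
      {
          pthread_t t[NTHREADS];
          for (long i = 0; i < NTHREADS; i++)
              pthread_create(&t[i], NULL, hammer, (void *)i);
          for (int i = 0; i < NTHREADS; i++)
              pthread_join(t[i], NULL);
          return 0;
      }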


      Likewise, you can write disk-hammering tools, ethernet tests, etc. For the network code, for example, what is the typical latency added by the various optional layers? Those interested in network QoS would undoubtedly find it valuable to know the penalties added by things like CBQ, WFQ, RED, etc. Those developing those parts of the code would likely find the numbers valuable, too.


      If you don't want to write code, but have a spare machine that isn't doing anything, then throw on a copy of Linux and run Linpack or the HPC Challenge software. (Both are listed on Freshmeat.) The tests will give kernel developers at least some useful information to work with.


      If you'd rather not spend the time, but want to do something, map a kernel. There's software for turning any source tree into a circular map, showing the connections within the program. If we had a good set of maps, showing graphically the differences between kernel versions (e.g. 2.6.1 through to 2.6.12-pre3) and between kernel variants (e.g. the standard tree, the -ac version and the -mm version), it would be possible to get a feel for where problems are likely. (Bugs are most likely in knotty code, overly-complex code, etc. Latency is most likely in over-simplified code.) You don't have to do anything beyond fetching the program, running it over the kernels, and posting the images produced somewhere.


      None of this is difficult. Those bits that are time-consuming are often mostly time-consuming for the computer - the individual usually doesn't need to put in that much effort. None of this will fix everything, but all of it will be usable in some way to narrow down where the problems really lie.

  • by Anonymous Coward on Friday April 22, 2005 @10:04AM (#12312727)
    I have successfully used Test Driven Development [agilealliance.org] in several of my projects and it is a uniquely satisfying experience. Writing test cases before writing the code, then completing each test case one after another in steady progression, gives a constant stream of small victories. It also means you can run all test cases at a later time and see that "yep, everything still works" or "doh! that change just broke 10 things I already had working."

    There are several other benefits to writing tests first as well. The experts in the link above explain it all better than I could, I'm sure.

    Many open source projects are taking this approach already and usually boast of the number of unit tests along with the lines of code included in the distribution. Anyone can type "build test", for example, and watch the program run and pass some thousand-odd tests.

    Is it time for the Kernel to embrace this methodology? I certainly think it is a genuine best practice. But is it applicable to OS development as well? I don't see any reason why it wouldn't be, but I am not a kernel developer myself.
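
    As a toy illustration of that rhythm (the clamp() function and its tests are invented for this example): the test is written first and fails, then the minimal implementation makes it pass.

    #include <assert.h>

    int clamp(int v, int lo, int hi); /* step 1: declare and test; watch it fail */

    static void test_clamp(void)
    {
        assert(clamp(5, 0, 10) == 5);   /* in range: unchanged */
        assert(clamp(-3, 0, 10) == 0);  /* below range: clamped to lo */
        assert(clamp(42, 0, 10) == 10); /* above range: clamped to hi */
    }

    /* step 2: the minimal implementation that makes the tests pass */
    int clamp(int v, int lo, int hi)
    {
        return v < lo ? lo : (v > hi ? hi : v);
    }

    int main(void)
    {
        test_clamp();
        return 0; /* silence means every assertion held */
    }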
    • I have a friend whom I deeply respect who has been programming for over twenty years. He read Kent Beck's book and exclaimed that it "changed his life." I'm now working on a project and integrating it with CppUnit. It's mega sexy cool. So, yeah, I think it's time for the Linux kernel to embrace TDD. However, it may be a bit of a challenge. I have a cave-man grasp of the Linux kernel. (Grog know kernel monolithic.) Thus devising effective unit tests could be a reel beech!

      If some young Turk wants to…
    • The problem is that some things are hard to test.

      Libraries, even big ones like glibc, are trivial to test. Games and GUI programs are hard to test, although their components can be if they're well designed. Now, kernels...

      The problem with the kernel is that it usually doesn't fail in a predictable way. Most of the problems are not because a syscall does the wrong thing when the parameters are strange, but something like this:

      Under a high load, on an SMP system, with heavy filesystem activity, there exists a c…
  • crashme, etc... (Score:4, Interesting)

    by pohl ( 872 ) on Friday April 22, 2005 @10:04AM (#12312731) Homepage
    I remember in the early days there was a program called 'crashme' that threw randomly-generated executables at the system, and it was credited with bolstering stability. Do tests like this still happen frequently, done by the unappreciated? Is there a good place online to read about these tests and their results for different point releases? Along similar lines, I recall someone throwing random input at the various GNU utilities, and it was discovered that they were more robust against this sort of abuse than the commercial Unix equivalents. Are there any other interesting tests that anybody knows about? Breaking stuff is fun.
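
    A latter-day version of the random-input experiment can be this small (illustrative only; this is not the original crashme, which executed random bytes as code rather than feeding them to utilities):

    /* Emit N random bytes on stdout, e.g.:  ./fuzz 4096 | some_utility */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(int argc, char **argv)
    {
        long n = (argc > 1) ? atol(argv[1]) : 1024;
        srand((unsigned)time(NULL));
        for (long i = 0; i < n; i++)
            putchar(rand() & 0xFF);
        return 0;
    }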
  • already happening? (Score:3, Interesting)

    by tverbeek ( 457094 ) on Friday April 22, 2005 @10:06AM (#12312743) Homepage
    It may not be a Linux issue per se (more of a distro issue, I think), and it's purely anecdotal, but I've been seeing some QA problems lately in the mainstream distro I use. They include a bug that requires me to hand-edit the X11 config file to get my mouse to work, having to manually rebuild the routing table after every boot, and a so-far baffling total freeze of the system after rand() hours, only when it's serving web pages. I've been using Linux to do this job for six years, and never had these kinds of problems before.
  • by DrXym ( 126579 ) on Friday April 22, 2005 @10:08AM (#12312768)
    When I heard that I nearly fell off my ostrich.
  • And started seeing credibility problems with the article. Is there a transcript of what he actually said anywhere?
  • Linux is constantly improving, but that means it is also constantly changing, and that makes it a constantly moving target.

    That applies to most distros as well as the kernel itself.

    It's hard to put a lot of effort into testing something when it's possible those tests will be invalidated a few months down the road...
  • Want to help? (Score:3, Interesting)

    by SenFo ( 761716 ) on Friday April 22, 2005 @10:17AM (#12312863) Homepage
    If anybody reading this is interested in participating in the test procedure, check out the Linux Test Project [sourceforge.net].
  • by Senor_Programmer ( 876714 ) on Friday April 22, 2005 @10:21AM (#12312913)
    ing," he said.

    Wait a minute here...

    I thought the whole scheme was structured thusly...

    I crank up the latest greatest kernel. I find a bug. I report it. My bug gets fixed. THAT'S MY REWARD! The friggin' bug gets squashed. What more could one ask for, with a clear conscience and a straight face?

    As for those guys who fix the stuff: well, sanity is a relative term, as we should all realize in light of the Japanese influence and the emergence of cargo cults in WWII New Guinea. AFAIK, most Linux users view the kernel developers as some mysterious force from which benefit is derived through the clever creation of effigies.

  • We do exist (Score:5, Interesting)

    by Necron69 ( 35644 ) <jscott.farrow@gm[ ].com ['ail' in gap]> on Friday April 22, 2005 @10:25AM (#12312956)
    With all due respect to Andrew, Linux QA people do exist. After 11 years of being a sysadmin, I'm now entering my fifth month of being paid to test Linux releases. I'm having fun, learning a lot, and generally enjoying life.

    BTW, we have not one, but two of my colleagues down under right now listening to Andrew in person. It should be interesting to get a first-hand account of what was said.

    - Necron69

  • That's odd (Score:2, Insightful)

    by radiophonic ( 767486 )
    I was under the impression that by using Linux, I was, in a sense, testing Linux.
  • by jfengel ( 409917 ) on Friday April 22, 2005 @11:03AM (#12313379) Homepage Journal
    This article is about kernel development. While I appreciate the development being done to make the kernel faster/better/cheaper (well, it doesn't get any cheaper), it's already a Pretty Damn Good kernel. It sounds to me like the most crucial thing would be to solidify it and test the bejeezus out of it, then largely freeze it, because that's not where the problems are.

    When people complain about MS Windows, they're not (usually) complaining about the kernel. They're talking about all of the stuff built on top of it: window manager, IE, networking, configuration. If the Linux kernel is receiving too little testing to be stable, what about the millions of lines of code that go into X Windows, Gnome, CUPS (as mentioned the other day), etc.?

    If MS didn't have to make kernel changes to better support security, I suspect they wouldn't be touching it at all. BSODs are still more common than they should be, but most users find them extremely rare, and the kernel is Fast Enough relative to the work that needs to be done. The improvements in Longhorn are largely about changes above the kernel, especially in its spiffy interface.

    While I'm grateful to Linus and all of the other developers for the kernel improvements, and while Open Source means never being told what to work on, kernel improvements other than stability are probably a terrible use of manpower. The kernel is a tiny fraction of the lines of code that go into a Linux distro. It is basic and needs to be rock-solid, but while performance improvements there benefit everybody, they don't benefit you at all if X, or KDE, or Konqueror, or any of the hundreds of other higher-level apps crash.
  • Personally, I liked the earlier practice of running stable even releases (2.4) and testing/developing on odd releases (2.5).

    I realize that they have abandoned that in 2.6, but I don't really understand why they did that.

  • In the Windows world you have a manufacturer from whom you can get detailed specs on hardware, etc.

    In the Linux world it's a rarity to get such information, which means that you can only test what you have. So yes, the devs can test a "brand X rev 1.02" video card up the wazoo and have it working perfectly, but it becomes something of a feedback-from-users loop when the rev 1.03 card breaks the compatibility of the 1.02 driver.

    Let's face it, there are way too many configurations out there to test for all com…
