Lack of Testing Threatening the Stability of Linux 325
sebFlyte writes "Andrew Morton, a Linux kernel maintainer, has said that he thinks that the lack of 'credit or money or anything' given to those people who put in long hours testing Linux releases is going to cause serious problems further down the line. In his speech at Linux.Conf.Au he also waded into the ongoing BitKeeper debate, saying 'If you pick a good technology and the developers are insane, it's all going to come to tears.'"
Hmm.. (Score:5, Funny)
Re:Hmm.. (Score:5, Insightful)
This is what the best developers do, otherwise they would simply come up with the same mediocre to bad solutions that everyone else does, no?
They do, however, have this really annoying tendency to see everything from the same "Man from Mars" perspective, not restricting themselves to viewing only code differently than most do. This can make them appear "insane" to the general populace.
In the land of the blind the one eyed man is a paranoid schizophrenic.
Insanity is perceiving things not as they really are. If the majority perceive things not as they really are, the man who perceives them as they are will give the appearance of being insane when he acts upon his perceptions, those acts being unintelligible to the majority.
And thus is born the image of the "quirky" genius. All will hail his new invention, but titter quietly about how he wears his socks, never for one minute stopping to take the obvious point of view that there just might be something of genius in the way he wears his socks, because he wears his socks differently than the majority do.
And being the same is sanity, right?
Never mind that we innately wipe out genius in one swell foop with that attitude. It would enforce a regression to the median, if it weren't for the fact that half the populace would have to progress to the median somehow, which, trust me, they just ain't gonna do. So instead of a regression to the median we get a regression to the "really dumb."
Take the current fad for "Playskool" interfaces. . .
Of course, some "geniuses" really are just insane and "luck into" some discovery through their insane perception of things.
So how do you tell the difference? Well, takes one to know one, I'm afraid. It would be nice if it didn't seem as if the people who end up in charge of "mental health" were all, themselves, dimwitted morons at best, and completely, utterly crackers at worst.
They're coming to take me away, HO,HO! HEE, HEE! HA, HA!
I think it's something about the way I wear my socks.
KFG
Re:Hmm.. (Score:3, Insightful)
Heh, I do the same thing. I often consider myself the ultimate Devil's Advocate.
When others are stressed, I'm calm. When others are calm, I'm stressed. Bizarre, but that's the way I work. Unfortunately this causes problems for me even in geek circles because they don't see what I see and often I can't explain why I know the thin
Insanity (Score:2, Funny)
Moderators (Score:3)
Re:Moderators (Score:4, Funny)
by ciroknight (601098)
I know it's early, but do we really have to mod everything flamebait, even if it's hilarious??? Come on..
Evidently.
Re:Moderators (Score:2, Informative)
Re:Moderators (Score:4, Funny)
Developers
Developers
Developers
Developers
Re:Moderators (Score:2)
Apologies in advance (Score:5, Funny)
"You don't have to be a developer to work here, but it helps".
I'm not insane! (Score:2)
They're also telling me that it's time to go home and clean the guns.
Re:I'm not insane! (Score:2)
We don't need to go home and clean the guns, they're already cleaned, oiled and ready to go!
Are you sure? (Score:2)
Re:I'm not insane! (Score:2)
Get thee to a nunnery!
Re:Insanity (Score:2, Funny)
Spelling is usually the first clue though.
Vacation for Linus...? (Score:5, Insightful)
TFA states that he's starting to take as much pride in rejecting patches as he does in accepting them, and with this whole BitKeeper thing, it seems to me like he might need a small break.
Of course, I'm not one to really talk, as I don't do nearly as much as he does with Linux...
Also, with regards to testing, those of us who use it daily are testing all the time. I know it's not structured QA, but still, it's a lot of testing.
Also, maybe slowing down the kernel releases a bit might help. I know that I do an emerge world on my Gentoo boxes about once a week, and it seems like there's a new kernel release every week. If there's a need for more testing, perhaps a little less time releasing and more time testing is in order.
Re:Vacation for Linus...? (Score:5, Insightful)
Isn't that the essence of Microsoft's QA?
Should we be doing what we rightly criticise them for?
Re:Vacation for Linus...? (Score:2)
Say what you want about Microsoft and the stability/reliability/security of their software, but they have many full time (and paid) people devoted exclusively to testing and trying to break their software so that it can be fixed.
Re:Vacation for Linus...? (Score:3, Insightful)
Isn't that the essence of Microsoft's QA? Should we be doing what we rightly criticise them for?
Say what you want about Microsoft and the stability/reliability/security of their software, but they have many full time (and paid) people devoted exclusively to testing and trying to break their software so that it can be fixed.
He's saying that they DON'T
Re:Vacation for Linus...? (Score:2)
On a serious note, Microsoft tends to release prerelease candidates as stable and then does a lot of their quality control through updates and service packs. I usually recommend waiting 4-6 months before using a service pack or new version of whatever, so I can immediately download fixes and use the product similarly to the marketing brochure's claims. I'm not saying that doesn't happen in any ot
Re:Vacation for Linus...? (Score:2)
Re:Vacation for Linus...? (Score:2)
*sigh* (Score:5, Interesting)
Where did you hear or get the impression that that was the MS "approach" to QA?
I've written test suites for the following Microsoft Products
- Visual Basic Compiler, 7.0
- Microsoft Business Framework 1.0 (unreleased)
None of them involved just using the compiler or the business framework over and over in day to day work to find bugs.
We have a variety of test approaches, including a few that _might_ be construed as what you describe - there are a few ways that we get test coverage via product usage:
- stress
- bug bashes
- app weeks
Stress is funnier than it sounds. Did you know we're not allowed to ship Windows until the exact build of Windows under ship consideration has been running on hundreds (thousands, usually) of machines continuously with no problems while enlisted in a distributed "stress" client... where they're pounded and pounded with automated tests that do things like starve memory whilst performing other work, etc? Same with ASP.NET and the CLR - they have to _survive_ for a pre-determined time period before the build can be considered shippable. We don't think there are any show-stopper bugs at this point - but we just want to be reasonably sure. Note that if we find a bug (even an unrelated one, like the documentation has a typo) and take a fix for it, the stress cycle resets because the bits have changed. Better safe than sorry. In the end game of a product release it can literally be the case that taking a bug fix means delaying ship for another week or more.
- bug bashes
this is probably most like what you're describing. Everyone on the team sits down for a couple of days and really just beats on a specific area of the product. Security Bug Bashes have become popular in the last couple of years (wonder why
- app week
For developer tool products (like Microsoft Business Framework) we like to do an app week with each milestone, where everyone on the team builds some sort of end-to-end application, using as much of the toolchain as possible. This sort of testing really makes the employees better (we're usually pretty compartmentalized in our areas of functionality ownership). It also lets unrelated parties take a look at pieces of the product they don't own (so don't have preconceived notions about). Finally, it lets us simulate the end-to-end customer experience on our product stack. If we can build the sort of apps a customer might build with our tools, then the tools are probably alright. Where we run into problems, we know the tools need help.
Bug bashes and app weeks happen perhaps 1-2 weeks per milestone (which is on the order of 2 months). It is a small part of our testing in terms of time, effort, and results. It's still important to do, but it is not the _focus_ of QA at Microsoft.
Re:*sigh* (Score:5, Informative)
Most of the really first-class groups at Microsoft (Windows, SQL Server, Developer Tools, lots more) have INCREDIBLY exacting test requirements, and extremely competent and thorough and demanding test teams. The open source community has done well, but it is nowhere near the professionalism and thoroughness of commercial software development. And it's precisely because the testers get *paid* to do the same damn test on every single build -- something open source people won't do, because there's no glory in it.
Slashdotters will no doubt respond, "Well, if it's so good, then what about all those security bugs!" Which is a fair criticism. Commercial software development (such as Microsoft's high testing standards, and similar at Sun, Apple, etc.) only works when the testing priorities you start with are the right ones. For a long time, Microsoft's priorities were 1) features, 2) usability, 3) more features, 4) stability, 5) security, 6) even more features.
This has changed. Microsoft mid-level managers (dev managers, product unit managers, etc.) have internalized the idea that they are literally under attack, and that security must be a high priority from here on. I wish they had STARTED with that as a priority, but at least they get the message now.
But, seriously, the parent poster is right on the money. Microsoft has AMAZING testers and test/developers. The hardware and software matrix that they run code under has to be seen to be appreciated.
And, again. This is not intended as a slight at all to open source development or testing. It's just *very* different.
Re:*sigh* (Score:2)
Counterpoint: I find that Linux (Debian) with KDE has 3rd party applications that cause it to hang, just like some non-MS created applications cause Win 3.1, 95, 98, 2000, XP to hang. This never seems to happen on my Mac. Maybe I don't use it enough, maybe there aren't 3rd party apps developed by substandard programmers calling routines poorly or managing memory in odd ways that cause these problems, or maybe it really is a superior product provi
Yes (Score:2)
A good person on this subject is Raymond Chen, a reasonably famous MS blogger. He's written a number of times on all the crap he personally did in Windows 95 to make certain apps run _at all_. Yeah, there was code in Win95 for the sole purpose of letting certain pre-Win95 games run properly - when those games had bugs in them that we couldn't have counted on t
Re:*sigh* (Score:2)
Re:*sigh* (Score:3, Insightful)
I thought the OP
Re:Vacation for Linus...? (Score:4, Interesting)
Read that, you might learn a thing or two.
Re:Vacation for Linus...? (Score:3, Interesting)
Of course, he's essentially been on vacation from Linux work, developing git. I'd guess that writing his own thing has made him feel a lot better about the BitKeeper mess. He certainly seemed to be having fun coming up with brilliant solutions
Re:Vacation for Linus...? (Score:3, Insightful)
When having users stumble into bugs is your primary method of finding them, your QA has already failed.
Because they do active development on the 2.6 branch, new bugs are introduced all the time. Even if they're only there for one version, there are always more bugs in the next version, which is a big disincentive for upgrades. And not minor stuff, big things
Re:Vacation for Linus...? (Score:3, Informative)
You won't hear Linus complaining if someone forks his kernel and attention shifts away; Linus will continue to integrate things he wants to integrate.
Re:Vacation for Linus...? (Score:4, Interesting)
The kernel did not get where it is with his current attitude.
As I said, the pressure can get to anyone and the kernel is now a mighty beast of a project to maintain. He just needs to get his head screwed on straight. Either that or risk turning into another Theo.
Re:Vacation for Linus...? (Score:5, Informative)
Oh, yes it did--go spend a few hours reading the lkml archives. He's always flamed people, and always been happy to drop patches that he thought weren't right for one reason or another. There's no sudden change here.... (But I wouldn't call it a "fuck off" attitude. Even when he flames someone he rarely seems to actually hold a grudge, or be unwilling to work with anyone.)
--Bruce Fields
Re:Vacation for Linus...? (Score:5, Interesting)
Well, that would really be a problem, but Theo's personality (which I think might have its own charms) doesn't necessarily get in the way of development. Just think of the huge contributions OpenBSD made. Common Address Redundancy Protocol (CARP), for instance. Or their excellent firewall, pf (now present in all BSDs). Not to mention OpenSSH. And beside these standalone or highly portable applications, they released a secure and stable OS. Not 'just' a kernel. They write their own libc. They maintain a lot of software in their base system. Apache 1.3.x can almost be considered a fork, with their security/stability related patchset. Which comes down to my main point: The problem is not lack of resources, monetary or otherwise
Currently there are ~100 developers paid full-time just to work on the kernel (at various organizations). There are none in FreeBSD. There are perhaps a dozen devs whose employers let them work on FreeBSD part-time, or there are various works that are sponsored by companies (pair network comes to mind) from time to time. But all in all, FreeBSD, which writes its own kernel, its own C library, and generally speaking maintains an OS (userland apps like their package management and ports system, for instance, burncd - the native CD burning app of FreeBSD, etc.), does that with 1/50 of the resources Linux & co. have just to develop the kernel.
This is not about Linux vs. FreeBSD, btw. I chose to use the latter, you chose the former, I really don't care, and I'm not willing to engage in yet another Linux vs. BSD flamefest. You can argue endlessly about why Linux is better, and I can do the same about FreeBSD, but I think we can agree on one point: either way, neither is that much better (let's cut down that figure to 10x - you can't possibly claim that Linux is 10x better or something). In other words, my point is that it is not about (monetary) resources. It is a problem of organization, imho. Less frequent releases, more API/ABI stability, a controlled release engineering process might be a solution. Perhaps a branch split like it was done during 2.5.x (current 2.6) development. Pronounce the current 2.6.x branch STABLE, meaning introducing a POLA (policy of least astonishment in FreeBSD) and forbidding API/ABI changes, then continue development in a new, 2.7 branch at the current pace.
I don't mean to imply that there is no release engineering in Linux kernel development whatsoever. But somehow FreeBSD's (and I assume the other BSDs' as well) release engineering seems to me a lot more transparent. Click the first few links at the top of this page [freebsd.org] to see what I mean by "controlled release engineering process."
Re:Vacation for Linus...? (Score:4, Interesting)
I would be shocked if the number is that small.
"There are none in FreeBSD. There are perhaps a dozen devs whose employers let them work on FreeBSD part-time, or there are various works that are sponsored by companies (pair network comes to mind) from time-to-time."
That's a shame, but ok...
"[FreeBSD is released] with 1/50 of the resource[s of] Linux [...]"
Right, and so the fact that FreeBSD works well is quite impressive. The fact that it doesn't work at all on certain high-end platforms, obsolete platforms, lots of embedded platforms, etc., is also not shocking, nor does it make it a poor platform.
FreeBSD does what it can with the resources it has, and that's a good thing. Let's not try to compare them to Linux. Linux is Linux and BSD is BSD. They are excellent tools for different jobs.
Re:Vacation for Linus...? (Score:3, Insightful)
This, of course, ignores the work that goes into special subsystems for popular platforms,
Re:Vacation for Linus...? (Score:3, Insightful)
The kernel team tests and releases in one pass, which is roughly akin to unit testing in a large project that has several sub-projects.
Then distributions pick up the changes (it's really not that clean a separation, but let's say it is for sake of simplicity), and incorporate it into their OSes. Each distribution has its own unique QA/release process, so let's look at Red Hat as an example. They take some internal things, some
Hi Linus! (Score:2, Funny)
In contrast to the MS method... (Score:5, Funny)
Re:In contrast to the MS method... (Score:5, Interesting)
I think it'd be awesome to run a software debugging/testing firm, where basically you have a bunch of computers and a bunch of users come in and try their best to break the software. Cheap labor and a good variety in machines, and you could quite quickly clean up even some of the nastiest code.
Re:In contrast to the MS method... (Score:3, Interesting)
You may be able to 'break it', but can you repeatedly 'break it'?
Can you predictably reproduce the bug?
Re:In contrast to the MS method... (Score:4, Interesting)
Uh, no. (Score:5, Informative)
It is difficult to test software without adequately understanding what it is supposed to do. Varying the underlying machine type is almost irrelevant for binary distributed software unless you're testing an operating system kernel or looking for race conditions in software (which is really just a stab in the dark).
How are you going to have 3rd party people debug software they know nothing about?
Where users help find bugs is by using the software. It honestly takes a certain mentality to be an effective software breaker, and it's not very common. It takes something else entirely to be a software tester; you've got to be a good developer (because software testing is about automation these days, unless you're insane) but you've got to not get sucked into the developer's way of thinking.
I assure you - letting normal users play with software doesn't clean it up. We can show that this is true in the following way:
- more users use Microsoft software for more hours a day than any other software in the world
- slashdotters say Microsoft software is the buggiest software made
clearly if users using software was sufficient to find all the bugs, MS stuff would be bug free, based on its frequency of use alone. I know this isn't the case, because I'm a software tester at Microsoft.
(The appropriate response is "well then, stop posting and get back to work; you're clearly not done yet!")
W.r.t. Linux kernel testing: this is something that's always amazed me - Linux works surprisingly well for something with so little formal testing. On the other hand, when there are edge case problems my experience has been that nobody is much interested in fixing them. One example I had was at a consulting gig. The client was looking to move his web hosting business onto Linux boxes if he could get more sites per box than he could on Windows. He had a problem where his Linux server would start dying after a few days. I started to look into it and the box would basically panic() in low memory situations. I asked Alan Cox about it (via irc) and the response was "buy more memory". Nice.
Another sore point with me growing up was X server crashes. The X server was 99% reliable, but then you'd get some random crash and lose everything you were doing, and you knew there was no real way of getting it fixed or investigating it... you just had to hope it magically got better somehow... maybe when you switched hardware or something.
Then there's the just plain lack of testing of some F/OSS projects in general. When I was in college I had NeXT, Sun, and SGI boxes in my dorm room (but no Linux
Another baby patch I submitted was for the OpenBSD kernel... this time for the wdc driver. Back when UDMA 100 was newish, I bought 2 UDMA 100 disks a month or so apart... so they were different sizes and different vendors, but on the same bus. The UDMA rollback code in OpenBSD would drop the DMA level from 5 (UDMA100) to 2 (something much slower, I don't remember what) after a certain number of DMA errors. This obviously sucked since you can run UDMA devices at different speeds on the same bus, and you can also fall back to UDMA66 and UDMA33, both of which are better than mode 2.
Re:Uh, no. (Score:2)
Switch to Debian. Oh wait, all the crybabies whining about how long it takes to make sure every package builds and runs on every platform, when all it needs to do is "run on the box I have at home", are ruining it for us.
Re:Uh, no. (Score:2)
By and large they do not understand how to apply the KISS principle to their design and implementation problem.
In some respects it might be an artform - because understanding how to simplify something that is very complex is very difficult. Nevertheless if you approach the original problem with that in mind you can minimize the effort in maintaining an understanding of the state (possible paths/interactions) of your application in
Re:Uh, no. (Score:2)
Secondly, Microsoft wouldn't release their source code, so debugging would be pointless. But if you could generate a few thousand bug entries into
Re:In contrast to the MS method... (Score:2)
The users, in the comfort of their own homes, already do a pretty good job of "trying to break the software" just by using it to get their work done!
Re:In contrast to the MS method... (Score:2)
Re:In contrast to the MS method... (Score:2)
Re:In contrast to the MS method... (Score:3, Insightful)
You don't work with vertical market software, do you?
All of our critical sales apps are considered alpha testing. By the time an app becomes stable and usable, they retire it and sell us their next abomination that does not work right, has 1/3rd the features the sales guy sold the CTO on, and has stability problems that make any Admin cry (imagine a printer driver on your W2K box causing data corruption in an app... WTF is THAT!)
loo
Re:In contrast to the MS method... (Score:2)
Re:In contrast to the MS method... (Score:2)
If you can do code coverage you can make much better tests.
If chaos is so close at hand now... (Score:3, Interesting)
widespread use by "appliance operators" (Score:2)
Then what do you mean by "appliance operators" (Score:2)
I'm (honestly) a bit confused by the juxtaposition of 'appliance operator' and 'desktop'.
Loose women ??? (Score:5, Informative)
This drew laughs from the audience.
Re:Loose women ??? (Score:2)
But it is just a guess.
Re:Loose women ??? (Score:3, Funny)
Somewhat alarmist headline (Score:4, Informative)
"A lack of commitment to testing by the Linux community may ultimately threaten the stability..."
The content of the article is much better than the headlines and excerpts being quoted. I was there and felt that what he was getting at was that we need to start thinking about updating QA procedures. The ratio of bugs to features is decreasing, but the rate of features is (maybe?) growing that much faster. The point of his talk was to outline a number of options for improving QA; there are issues, but the sky certainly isn't falling either. It was an excellent follow-on from Tridge's keynote the previous day on how to do quality system programming (overshadowed by his very brief coverage of the BK thing).
Xix.
Do we need something like what Sun do? (Score:3, Informative)
Does Linux have a built-in crash reporter? (Score:4, Interesting)
If developers are going to fix the bugs that occur in the real world, they need data from the real world.
Brief Answer: No. (Score:5, Informative)
Microsoft's method is for some of the higher up software, and so is Apple's. If there's a bug in the kernel it's very unlikely that their code will catch it. Or at least that's been my experience.
If the problem is that Linux is so buggy, we just need to run it on a bunch more machines, and start randomly poking it as hard as we can until we break something. Once we've broken it, do it again to make sure it's not hardware, and then go to work fixing it. Good old brute force repairs.
Re:Brief Answer: No. (Score:2)
-Jay
Re:Brief Answer: No. (Score:2)
"It's easy to create peace in the middle east- we just need to bring the leaders of the factions together and get them to agree to stop fighting."
Re:Brief Answer: No. (Score:2)
Re:Does Linux have a built-in crash reporter? (Score:2)
Open Source Means More Eyeballs? (Score:4, Insightful)
More seriously... I think many of the people who DO eyeball the code are looking for security problems these days (where you do get recognition, etc.). For the record, I know I won't get any HR props for putting OS bugs that I've uncovered on my resume, but the security bugs I've found are always good conversation pieces.
Re:Open Source Means More Eyeballs? (Score:2)
Re:Open Source Means More Eyeballs? (Score:5, Interesting)
Actually, I'd say that giving proper credit and public recognition for bug reports is good enough for most of the end-users. Case in point (interestingly enough, from LKML and by Andrew Morton himself):
Getting an answer like that should lift anyone's spirits. Not only has the bug been fixed, it was also recorded for posterity that a certain user discovered it and helped to his ability in fixing it. And to top it all off, the reporter was given an honest praise and a thank you. The last part alone is usually enough for most users, to see that the developers actually care.
As for resumes? If you have a verifiable record of reporting back bugs and helping to test their fixes, you should be able to use that for your advantage in CV or at least in an interview. If nothing more, it shows that you can communicate with different kinds of people and have enough technical ability to follow through with their requests for further details. You might have even gotten a better product for yourself to use.
Not happy with Bugzilla? (Score:3, Interesting)
"Bugzilla is fine for tracking bugs, but as it's currently set up, it's not very good for resolving bugs."
Hmm... I'd be interested to understand what alternatives to a web-based system he has in mind. Any thoughts?
"This process, where individuals communicate via a Web site, is very bad for the kernel overall."
Re:Not happy with Bugzilla? (Score:2)
Re:Not happy with Bugzilla? (Score:2)
We are running Continuus here for versioning, configuration management and problem reporting.
This application suite is not complete. We have written a whole lot of software around it so that it can optimally be used in our processes.
One of the things that we have and that probably can aid Bugzilla is an overview of subprojects and their outstanding and resolved bugs. But of course that means having a more formal way of working between Bugzilla and the kernel developers.
Another part of the processes here
lack of testing? (Score:5, Insightful)
QA isn't sexy (Score:5, Informative)
Morton is correct.
Even at commercial companies, QA isn't a "sexy" task. People would rather bang out code than write testing harnesses and run benchmarks.
Also, free software is driven by programmers, who tend to hate QA. Like any artist or craftsman, a programmer hates having their work critiqued. They spent hundreds (or thousands) of hours on a program, only to have someone nit-pick the details and point out the flaws. But in art, "quality" is subjective -- with software, quality and reliability are tangible quantities that can be measured.
My Acovea [coyotegulch.com] project demonstrated the problem. Users of GCC love Acovea; many developers of GCC, on the other hand, seem to treat it as an annoying distraction. Acovea identified more than a dozen errors (some quite serious) in the GCC 3.4/4.0 compilers -- and yes, I did report them to Bugzilla. Only a couple of GCC's maintainers have said "thanks."
Not that the cool reception deters me. I have a new version of Acovea in the wings, and will be unleashing it on GCC 4.x Real Soon Now. ;)
As a consultant, I've been paid to perform QA work on commercial software packages -- but only one company, and a big one at that, has ever contracted me to QA a free software project.
Right now, free software is about many things, but quality is not job 1. And that needs to change.
Re:QA isn't sexy (Score:2)
Acovea was designed with "spikes" in mind -- short programs that implement a limited algorithm that runs quickly (2-10 seconds per run). Using the default evolutionary settings, Acovea will perform 4,000 compiles and runs of the benchmark; even with a very small and fast benchmark, this can take several hours. If your program takes a full minute to compile, and another minute to run, Acovea will require 8,000 minutes, or more than 5 days.
It's not, that's why we should make it automagic (Score:2)
(I know, I know, humans make mistakes and discover bugs)
But hey, just try building all my modules into a static kernel and it fails (cxx drivers).
Basically, hardware manufacturers would love to send their bits to a place to get a stamp on them; that's what they should do:
get a logo
e.g. drivers for linux
if your hardware works and passes the automagic tests on linux then you get to put that logo on your product
job done
regards
John Jones
True... (Score:4, Informative)
Then, there's the mysterious Stanford Code Validator, used to great effect for a while. I feel certain that a few sweeps of that would uncover many of the more troublesome problems.
For those without SCV (99.9999% of the planet), there are some Open Source code validators out there. It should be possible, at the very least, to use those to identify the more blatant problems.
If you're not sure about using code validators, then it's simple enough to write programs that hammer some section of the kernel. For example, if you have some large number of threads mallocing, filling and freeing random-sized blocks of memory, can you demonstrate memory leaks? How well does the VMM handle fragmented memory? What is the average performance like, as a function of the number of threads?
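Something along these lines would do it. This is just a rough sketch of the malloc-hammering idea above; the thread count, iteration count, and block size cap are arbitrary illustrative values, not from any existing kernel test suite:

/* Minimal sketch of the malloc-hammering test described above.
 * NTHREADS, ITERATIONS, and MAX_BLOCK are arbitrary illustrative values.
 * Compile with: gcc -O2 -pthread malloc_stress.c -o malloc_stress
 */
#define _POSIX_C_SOURCE 200809L
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NTHREADS   64
#define ITERATIONS 100000
#define MAX_BLOCK  (256 * 1024)   /* up to 256 KiB per allocation */

static void *hammer(void *arg)
{
    unsigned int seed = (unsigned int)(size_t)arg;

    for (long i = 0; i < ITERATIONS; i++) {
        size_t len = (size_t)rand_r(&seed) % MAX_BLOCK + 1;
        char *block = malloc(len);
        if (!block)
            continue;              /* allocation pressure is the point */
        memset(block, 0xA5, len);  /* touch every byte so the pages are really committed */
        free(block);
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[NTHREADS];

    for (size_t i = 0; i < NTHREADS; i++)
        pthread_create(&threads[i], NULL, hammer, (void *)i);
    for (size_t i = 0; i < NTHREADS; i++)
        pthread_join(threads[i], NULL);

    puts("done; watch free/vmstat while this runs to look for leaks or VM stalls");
    return 0;
}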
Likewise, you can write disk-hammering tools, ethernet tests, etc. For the network code, for example, what is the typical latency added by the various optional layers? Those interested in network QoS would undoubtedly find it valuable to know the penalties added by things like CBQ, WFQ, RED, etc. Those developing those parts of the code would likely find the numbers valuable, too.
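A disk-hammering tool can be equally small. A rough sketch in the same spirit (the file name, pass count, and chunk size are made up for illustration; it just times write+fsync cycles and reports the worst case):

/* Sketch of a disk-hammering test: write, fsync, and time random-sized
 * chunks, reporting the worst-case latency. Not an existing benchmark.
 * Compile with: gcc -O2 disk_hammer.c -o disk_hammer
 */
#define _POSIX_C_SOURCE 200809L
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define PASSES    1000
#define MAX_CHUNK (1024 * 1024)   /* up to 1 MiB per write */

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    int fd = open("hammer.tmp", O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0) { perror("open"); return 1; }

    char *buf = malloc(MAX_CHUNK);
    memset(buf, 0x5A, MAX_CHUNK);
    double worst = 0.0;

    for (int i = 0; i < PASSES; i++) {
        size_t len = (size_t)rand() % MAX_CHUNK + 1;
        double t0 = now_sec();
        if (pwrite(fd, buf, len, 0) < 0 || fsync(fd) < 0) {
            perror("write/fsync");
            break;
        }
        double dt = now_sec() - t0;
        if (dt > worst)
            worst = dt;            /* track the ugliest stall we saw */
    }

    printf("worst write+fsync latency: %.3f ms\n", worst * 1000.0);
    close(fd);
    unlink("hammer.tmp");
    free(buf);
    return 0;
}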
If you don't want to write code, but have a spare machine that isn't doing anything, then throw on a copy of Linux and run Linpack or the HPC Challenge software. (Both are listed on Freshmeat.) The tests will give kernel developers at least some useful information to work with.
If you'd rather not spend the time, but want to do something, map a kernel. There's software for turning any source tree into a circular map, showing the connections within the program. If we had a good set of maps, showing graphically the differences between kernel versions (e.g. 2.6.1 through 2.6.12-pre3) and between kernel variants (e.g. the standard tree, the -ac version, and the -mm version), it would be possible to get a feel for where problems are likely. (Bugs are most likely in knotty code, overly-complex code, etc. Latency is most likely in over-simplified code.) You don't have to do anything, beyond fetching the program, running it over the kernels, and posting the images produced somewhere.
None of this is difficult. Those bits that are time-consuming are often mostly time-consuming for the computer - the individual usually doesn't need to put in that much effort. None of this will fix everything, but all of it will be usable in some way to narrow down where the problems really lie.
Test Driven Development for OS? (Score:5, Interesting)
There are several other benefits to writing tests first as well. The experts in the link above explain it all better than I could, I'm sure.
Many open source projects are taking this approach already and usually boast about the number of unit tests along with the lines of code included in the distribution. Anyone can type in "build test", for example, and it will show the program run and pass some thousand-odd tests.
Is it time for the Kernel to embrace this methodology? I certainly think it is a genuine best practice. But is it applicable to OS development as well? I don't see any reason why it wouldn't be, but I am not a kernel developer myself.
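For the curious, here's a toy illustration of the write-the-tests-first idea, using nothing but assert(); the ring buffer and its API are invented for this example, not taken from any kernel code:

/* Toy test-first example: the asserts in main() describe the behaviour
 * we want, and ring_put/ring_get exist only to make them pass.
 * Compile with: gcc -O2 ring_test.c -o ring_test && ./ring_test
 */
#include <assert.h>
#include <stdio.h>

#define RING_SIZE 8

struct ring {
    int buf[RING_SIZE];
    int head, tail, count;
};

static int ring_put(struct ring *r, int v)
{
    if (r->count == RING_SIZE)
        return -1;                       /* full */
    r->buf[r->head] = v;
    r->head = (r->head + 1) % RING_SIZE;
    r->count++;
    return 0;
}

static int ring_get(struct ring *r, int *v)
{
    if (r->count == 0)
        return -1;                       /* empty */
    *v = r->buf[r->tail];
    r->tail = (r->tail + 1) % RING_SIZE;
    r->count--;
    return 0;
}

int main(void)
{
    struct ring r = { .head = 0, .tail = 0, .count = 0 };
    int v;

    /* These checks are (notionally) written before the implementation. */
    assert(ring_get(&r, &v) == -1);            /* empty ring refuses reads  */
    assert(ring_put(&r, 42) == 0);
    assert(ring_get(&r, &v) == 0 && v == 42);  /* FIFO order preserved      */
    for (int i = 0; i < RING_SIZE; i++)
        assert(ring_put(&r, i) == 0);
    assert(ring_put(&r, 99) == -1);            /* full ring refuses writes  */

    puts("all tests passed");
    return 0;
}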
Re:Test Driven Development for OS! (Score:2)
If some young Turk wants to
Re:Test Driven Development for OS? (Score:2)
Libraries, even big ones like glibc are trivial to test. Games and GUI programs are hard to test, although the components can be if they're well designed. Now, kernels...
The problem with the kernel is that it usually doesn't fail in a predictable way. Most of the problems are not because a syscall does the wrong thing when the parameters are strange, but something like this:
Under a high load, on a SMP system, with heavy filesystem activity there exists a c
crashme, etc... (Score:4, Interesting)
already happening? (Score:3, Interesting)
We're insane? (Score:5, Funny)
Re:We're insane? (Score:2, Funny)
Hug the ostrich as you go. Show it you care.
That way you *can't* fall off!!
Re:We're insane? (Score:3, Funny)
Re:We're insane? (Score:2)
I got as far as the URL (Score:2)
It's also frustrating to test a moving target. (Score:3, Insightful)
That applies to most distros as well as the kernel itself.
It's hard to put a lot of effort into testing something when it's possible those tests will be invalidated a few months down the road...
Want to help? (Score:3, Interesting)
"They get no thanks or credit or money... or anyth (Score:3, Insightful)
Wait a minute here...
I thought the whole scheme was structured thusly...
I crank up the latest greatest kernel. I find a bug. I report it. My bug gets fixed. THAT's MY REWARD! The friggin bug gets squashed. What more could one ask for, with a clear conscience and a straight face.
As for those guys who fix the stuff. Well, sanity is a relative term, as we should all realize in light of the Japanese influence and emergence of cargo cults in WW-2 Niu Guinea. AFAIK, most Linux users view the kernel developers as some mysterious force from which benefit is derived through clever creation of effigies.
We do exist (Score:5, Interesting)
BTW, we have not one, but two of my colleagues down under right now listening to Andrew in person. It should be interesting to get a first-hand account of what was said.
- Necron69
That's odd (Score:2, Insightful)
Kernel testing vs. app testing (Score:3, Interesting)
When people complain about MS Windows, they're not (usually) complaining about the kernel. They're talking about all of the stuff built on top of it: window manager, IE, networking, configuration. If the Linux kernel is receiving too little testing to be stable, what about the millions of lines of code that go into X Windows, Gnome, CUPS (as mentioned the other day), etc.?
If MS didn't have to make kernel changes to better support security, I suspect they wouldn't be touching it at all. BSODs are still more common than they should be, but most users find them extremely rare, and the kernel is Fast Enough relative to the work that needs to be done. The improvements in Longhorn are largely about changes above the kernel, especially in its spiffy interface.
While I'm grateful to Linus and all of the other developers for the kernel improvements, and while Open Source means never being told what to work on, kernel improvements other than stability are probably a terrible use of manpower. The kernel is a tiny fraction of the lines of code that go into a Linux distro. They are basic, and need to be rock-solid, but while performance improvements there benefit everybody, they don't benefit you at all if X, or KDE, or Konqueror, or any of the hundreds of other higher-level apps crash.
I liked the Evens / Odds approach (Score:2)
Personally, I liked the earlier practice of running stable even releases (2.4) and testing/developing on odd releases (2.5).
I realize that they have abandoned that in 2.6, but I don't really understand why they did that.
What else can you do though? (Score:2)
In the Linux world it's a rarity to get such information, which means that you can only test what you have. So yes, the devs can test a "brand X rev 1.02" video card up the wazoo and have it working perfectly, but it becomes something of a feedback-from-users loop when the r1.03 card breaks the compatibility of the 1.02 driver.
Let's face it, there are way too many configurations out there to test for all com
Re: (Score:2, Insightful)
Re:Contrapositive (Score:5, Informative)
"If it doesn't come to tears, then you didn't pick a good technology or your developers are sane."
Re:Contrapositive (Score:2)
Re:Bugzilla (Score:4, Insightful)