Linux Software

Kernel 2.4.12 Released 377

Whoops. A nasty bug affecting symlinks made it into 2.4.11, and Linus has ditched that "sorry excuse for a kernel" in favor of the new and improved 2.4.12. :) See the (short) changelog or list of mirrors, as usual.
This discussion has been archived. No new comments can be posted.

  • Quality Assurance (Score:2, Insightful)

    by terrabit ( 50647 )
    Does Linus put about-to-be-released kernels through any tests in an attempt to avoid bugs like these? Does anyone else remember the brown paper bag bug at the beginning of the 2.2 series?
    • Re:Quality Assurance (Score:5, Interesting)

      by MartinG ( 52587 ) on Thursday October 11, 2001 @08:02AM (#2414717) Homepage Journal
      Does anyone else remember the brown paper bag bug at the begining of the 2.2 series?

      Yes, it was around 2.2.8 or 2.2.9 IIRC. What I do remember, though, is that after that happened, Linus decided, around the same time, that it was time to fork off the 2.3 series.
      Maybe now that this has happened he will start 2.5 and hand over 2.4.x to Alan, who IMO keeps kernel series stable better than Linus does.
      • Yes, it was around 2.2.8 or 2.2.9


        No, the Brown Paper Bag [tuxedo.org] release was 2.2.0. IIRC, you could cause an oops by examining pretty much any core dump file.
      • by Juju ( 1688 )
        Maybe now that this has happened he will start 2.5 and hand over 2.4.x to Alan, who IMO keeps kernel series stable better than Linus does.

        That's exactly what Linus said in his 2.4.12 announcement. But I guess you knew that already ;o)

      • Maybe now that this has happened he will start 2.5 and hand over 2.4.x to Alan, who IMO keeps kernel series stable better than Linus does.

        From the LKML, as reported elsewhere in this thread:

        On the other hand, the good news is that I'll open 2.5.x RSN, just because Alan is so much better at maintaining things ;)

        Linus

        Either you already knew that, or you are a real Cassandra. In the latter case, would you mind giving me some hints on the next horse race? :-)

  • watch out. (Score:5, Insightful)

    by MartinG ( 52587 ) on Thursday October 11, 2001 @07:59AM (#2414704) Homepage Journal
    2.4.12 has a new bug that crept in with the parport update that Tim Waugh did.

    Check lkml archives for a patch to fix it.
    • Re:watch out. (Score:5, Informative)

      by psavo ( 162634 ) <psavo@iki.fi> on Thursday October 11, 2001 @08:31AM (#2414771) Homepage
      Patch is here:

      --- linux/drivers/parport/ieee1284_ops.c.orig  Thu Oct 11 09:40:39 2001
      +++ linux/drivers/parport/ieee1284_ops.c       Thu Oct 11 09:40:42 2001
      @@ -362,7 +362,7 @@
               } else {
                       DPRINTK (KERN_DEBUG "%s: ECP direction: failed to reverse\n",
                                port->name);
      -                port->ieee1284.phase = IEEE1284_PH_DIR_UNKNOWN;
      +                port->ieee1284.phase = IEEE1284_PH_ECP_DIR_UNKNOWN;
               }

               return retval;
      @@ -394,7 +394,7 @@
                       DPRINTK (KERN_DEBUG
                                "%s: ECP direction: failed to switch forward\n",
                                port->name);
      -                port->ieee1284.phase = IEEE1284_PH_DIR_UNKNOWN;
      +                port->ieee1284.phase = IEEE1284_PH_ECP_DIR_UNKNOWN;
               }

  • by decaying ( 227107 )

    Users, doing the QA in Linux for over 10 years!

    • Exactly. This is the way it has always worked with OSS. Did you ever read the disclaimers at the end of each OSS licence?

      Since the beginning, OSS users have paid for their software by doing testing, documentation and, if they can, code.

      You know what? Many people are happy with that. Because they discovered that they still end up with a better product than most commercial stuff, at a lower price.
      The reason for this is that, given current software complexity, alpha-level tests, done in the laboratory, can only do so much [and often not even that, because smart managers cut test budgets to save their asses].
      That is why _every_ commercial company does beta-testing (i.e., asks users to test their product for them).

      In OSS, beta testing is simply done by putting out a new release (sometimes dubbed 'beta' or 'unstable', but you should always expect potential trouble).
      Anyone who doesn't like this is free to pay whatever company they want to do their own testing or QA on whatever software they choose to use.

    • That's not a new marketing scheme. SGI has been doing a variation on it for years. Their internal motto is "We have the world's largest QA department," according to two ex-IRIX developers I work with.
  • by Anonymous Coward
    Wow, this is bad, but it doesn't appear to be the first time something like this has happened. Testing is something that is sadly overlooked, it seems, for Open Source projects.

    Off the top of my head, Mozilla is the only large Open Source project I can think of that has a reliable testing process. I'm sure there are others, of course; it just seems that Linux is not one of them. Saying "release early, release often" and waiting for your users to find your bugs is nice in principle, but for large projects you need a more structured approach to testing, at least IMHO. Obviously there is some small unit and integration testing done when the code is written and the patches applied, but that won't catch things like this in general. What's needed is an overall testing plan.

    Maybe it's about time the Open Source world started paying more attention to testing?
    • I would think that this is the testing plan. Let people who are interested in working with the bleeding edge do just that. If you're running a production environment, use a stable kernel. If you're working on the bleeding edge, expect to find and report bugs. The bug was only out for one day, and it's already been fixed! You wouldn't see that with most commercial software now, would you?
      • by taliver ( 174409 )
        If you're running a production environment, use a stable kernel

        But unless an upgraded kernel has something that you need or desire, why would you upgrade at all? Once a version is running, just stick with it.


        I'm of the belief that one of the greatest assets to open source is the fact that all older versions are still available for the vast majority of projects. This means that if you think your system was better off with the 2.2.14 kernel, the 2.67 gcc and libc-5, you can still get those, and load them onto a system built fairly recently (OK, no USB... but that's kinda what I'm talking about here.)


        Try doing this with any Microsoft product. Go to a store and ask for Windows 95. It's getting hard to find 98. Try asking to buy Office 97. Or any older version that might work really well on a super-fast system.


        So, to get back on topic: if you have a kernel that works, why would you even think about upgrading unless you are testing, comparing, or a hobbyist?

        • This means that if you think your system was better off with the 2.2.14 kernel

          2.2.14 has a number of security problems. So no, it's not always prudent to pick some kernel at random and use it, just because it's old. However, it probably is wise to grab the latest 2.2.x kernel, and go with that.

          And yes, you can still find older versions of Windows, so don't pull that typical zealot FUD...It was only about a week ago that MS finally discontinued NT 4.0, which has been out for ages.
    • by MartinG ( 52587 ) on Thursday October 11, 2001 @08:11AM (#2414753) Homepage Journal
      Maybe it's about time the Open Source world started paying more attention to testing?

      I think you have to ask the question: when Linus releases a new kernel, who is it aimed at? IMO, these days it is the distributors, the kernel developers and enthusiastic individuals. It is no longer the case that most Linux users download and compile their own kernels. Because of this, the release of the kernels to the distributors actually forms part of the testing itself. I.e., don't consider a release to be stable just because it's in the so-called stable series. Consider it stable when you either (1) get it from a distributor who will have tested it and guaranteed its stability, or (2) have downloaded and tested it yourself before production use.

      Yes, it would of course have been better if this hadn't crept in, but it's not really a big deal. How many users do you see around you who have lost work because of this bug?
      • I think you have to ask the question that when Linus releases a new kernel, who is that aimed at?

        While thinking about who a kernel is aimed at, we've really got to consider our role as users of the latest kernels.

        On any project, regardless of size, the developers are often too close to the code to test it completely. They know too well the right thing to do. They are, in effect, well-programmed users for the system that they are building, and this makes them bad testers. This is why so many companies have quality assurance departments that never interact with developers except through bug reports.

        In this Free/Open community, where the entire user base is the `company', it suddenly becomes apparent that we, the users of bleeding-edge kernels, are this QA department. It is we who do the final testing; it is we who write the reviews for people who stick to old kernels until they get these reviews.

        Some may argue that there is already an unstable branch in which to do these things. Sure, but there are other testers for those. Often those testers aren't suitable to test stable kernels because they are now too close to the bugs that they first reported. I hope I'm making sense here.

        Anyway, this is one instance where this huge qa team found a bug real fast, got it back to the developers, and they fixed/undid it in equally quick time.

        Philip

    • by hey! ( 33014 ) on Thursday October 11, 2001 @08:24AM (#2414758) Homepage Journal
      Wow, this is bad, but it doesn't appear to be the first time something like this has happened. Testing is something that is sadly overlooked, it seems, for Open Source projects.

      Because releasing is part of the testing process. This is, unfortunately, true even for closed, commercial software -- there are always some bugs, and some releases are always dogs. With free software the releases come frequently and you can pick which release you want to put your chips on. It's like the difference between having a single, conservative (but still not guaranteed) retirement plan at work, and having a choice of a diverse set of funds.

      And I don't believe it is valid to compare a product like Mozilla with a product like the kernel. You simply can't test them the same way. Nobody (or at least very few people) grabs the source code off of CVS and changes their kernel every night; and if they did, it wouldn't be as useful, because many problems only become apparent after you run things for a while on a kernel.

      Not to minimize the good work that the Mozilla people are doing, but testing a kernel is simply much harder. This is why commercial operating systems are so slow in their releases. I actually think situations like this demonstrate a strategic advantage of open software. Nobody in their right mind runs the latest, or even any recent kernel, on any machine where reliable uptime is a consideration. I personally am using 2.2.19 on my production servers because it works for me. But lots of people do run every new kernel as it comes out on some machine, so when I switch to 2.4 there will be an excellent knowledge base that will tell me, for example, to avoid 2.4.9 for VM problems, or to apply specific patches for 2.4.10 if I need to.

      This is what it is all about, folks -- release early and often, and hang your dirty laundry up where everyone can see it. This is a great benefit, and the only people this affects in any negative way are fools.

  • 2.5 is coming (Score:4, Informative)

    by bram.be ( 302388 ) on Thursday October 11, 2001 @08:07AM (#2414736)
    Read it here [alaska.edu]
    • On Thu, 11 Oct 2001, Linus Torvalds wrote:
      > Not a good week.
      >
      > On the other hand, the good news is that I'll open 2.5.x RSN, just
      > because Alan is so much better at maintaining things ;)

      On Thu, 2001-10-11 at 07:54, Alan Cox wrote:
      > > And will Alan release 2.4.13 asap with Rik's VM? - (sorry, couldn't resist)
      >
      > I think 2.4.13 will be a Linus release
  • Is it just me ? (Score:2, Interesting)

    by CaptainZapp ( 182233 )
    While everything up to and including 2.4.9 appeared to be really hunky-dory, compilation of 2.4.10/.11 can only be described as a miserable failure to some degree.

    I'm aware that this is not the KernelCrisis hotline; however, since it doesn't appear to be offtopic, and it really bugs me, and there's a heap of wizards in here, I'd like to find out more.

    Usually, when a new kernel is out, I download the patch, apply it, and use the most recent config file, going through some, but not necessarily all, of the umpteen options; this usually worked just fine. It doesn't anymore since 2.4.10. From aborted compilations due to strange missing files, to USB race conditions, to kernels which, if they boot at all, sputter all sorts of gibberish, I've seen it all.

    Any suggestions where to look? Could it be that gcc 2.95.x is really too buggy? And if so, why only now?

    I don't really have the expertise to up/downgrade the compiler and the related libraries. So, I'd be really thankful for each and every hint.

    • Re:Is it just me ? (Score:3, Informative)

      by jnik ( 1733 )
      Usually, when a new kernel is out, I download the patch, apply it, and use the most recent config file, going through some, but not necessarily all, of the umpteen options; this usually worked just fine.

      Um, you should use "make oldconfig" when you're upgrading kernels and using the same config file. It'll prompt you for any new options.

    • Re:Is it just me ? (Score:2, Informative)

      by Stonehead ( 87327 )
      The Alan Cox series (latest is 2.4.10-ac11 [linuxtoday.com]) works fine here. I'm currently running 2.4.9-ac18 from a week ago. Here [kernelnewbies.org]'s how to get it. I use gcc 2.95.4 under Debian - as far as I know it was not yet recommended to compile the kernel with 3.0+, but it might work.
    • Re:Is it just me ? (Score:5, Insightful)

      by StikyPad ( 445176 ) on Thursday October 11, 2001 @11:01AM (#2415408) Homepage
      Usually, when a new kernel is out, I download the patch, apply it, and use the most recent config file, going through some, but not necessarily all, of the umpteen options; this usually worked just fine... I don't really have the expertise to up/downgrade the compiler and the related libraries.

      From the kernel Readme:

      SOFTWARE REQUIREMENTS

      Compiling and running the 2.4.xx kernels requires up-to-date versions of various software packages. Consult ./Documentation/Changes for the minimum version numbers required and how to get updates for these packages. Beware that using excessively old versions of these packages can cause indirect errors that are very difficult to track down, so don't assume that you can just update packages when obvious problems arise during build or operation.


      Many sources recommend that if you don't have a critical reason to upgrade your kernel, you shouldn't. I will second this recommendation, as the old adage "If it ain't broke, don't fix it" is especially true if you don't know how to fix it. Installing or uninstalling a program is far more mundane than upgrading a kernel. If you're not comfortable upgrading (or downgrading) gcc, and your kernel is performing well as is (or was working fine, as the case may be), you aren't a strong candidate for a kernel upgrade. Learn the basics and fundamentals of the OS before diving headfirst into something as critical as kernel patches. Distribution providers usually do extensive testing on the included kernel version to ensure stability and compatibility.

      If you're determined to go ahead with this, Linuxnewbie.org has a decent amount of information, linuxdoc.org and linuxnow.com have HOWTOs on virtually any subject (including the GCC HOWTO [linuxdoc.org], although I can't say with any degree of certainty that gcc is at fault here), and the website of your distribution is probably another good source of info. If you still have problems, and turn to the net for answers, make sure to state specifically each step you took thus far and try to detail the problems you encountered, providing logs and diagnostic output when possible. In doing so, you or someone else may find you skipped a crucial step. Kernel upgrades are not to be taken lightly and, as you have experienced, can quite possibly be more trouble than they're worth.
  • Staying with 2.4.9 (Score:3, Interesting)

    by mckeowbc ( 513776 ) on Thursday October 11, 2001 @08:29AM (#2414767) Homepage
    Personally I'm sticking with 2.4.9 until 2.4.12 hangs around for a while. I like to see open source developing things quickly, but having 2 kernel releases in 2 days is a little absurd. I think Linus should slow down a little and make sure to hammer out glaring errors like having something defined incorrectly in a file. We'll see if things improve once the 2.5.x series comes out though.
  • by Leimy ( 6717 ) on Thursday October 11, 2001 @08:38AM (#2414783)
    Don't you think it might be about time to have a Linux CVS and bug tracker? To my knowledge no such thing exists. Perhaps other people would have actually tried to compile some of this stuff and caught the bugs.

    That way each new release can be a "tag" and every new 2.x+1 that occurs can be a branch.

    FreeBSD (while not free of bugs) has had a great deal of success with CVS, IMHO. The parport bug is that the author submitted something that can't even compile, due to a malformed #define constant name. I think he forgot the ECP part of it. Someone else posted the patch here in the replies section.
    • Yes! Please! CVS!

      CVS NOW!

      The current 'system' results in lost patches, random accidental additions cough*JFFS*cough, little/no tracking of changes, difficulty getting/making patches among various versions, etc.

      Please, please, CVS, please!
  • Does it compile? (Score:3, Interesting)

    by martinde ( 137088 ) on Thursday October 11, 2001 @08:39AM (#2414788) Homepage
    2.4.11 wouldn't even compile for me with either the old or the new Adaptec 7xxx driver enabled. This is the 3rd or 4th 2.4.x kernel that would not compile "out of the box" for me. 2.4.9-ac18 compiled and seems OK on that particular box.

    On the bright side, 2.4.11 does seem to have decent VM. And the firewire support seems to be better than before with my Digital 8 camcorder.
  • by sydb ( 176695 ) <michael@NospAm.wd21.co.uk> on Thursday October 11, 2001 @08:45AM (#2414806)
    My definition of a stable kernel is one that has been handed over to the stable kernel maintainer, Alan Cox.

    The stable kernel has become ready for production usage once development has started somewhere else.

    May I recommend this attitude to people who complain about the instability of the 2.4 series. It's called pragmatism.
    • by G27 Radio ( 78394 ) on Thursday October 11, 2001 @02:30PM (#2416518)
      Anytime changes are made to a kernel (or any other code for that matter) there is always the potential for new errors to be introduced. If you want a truly stable kernel then you need to wait until it's been around long enough to be proven to be stable.

      The same goes for service packs for Windows. None of the Windows shops that I used to work for would ever install service packs until they had been available long enough to know the new errors they would introduce. In fact many of those companies had policies that declared you would be fired for installing any new service packs until IT had determined that they wouldn't break usability.

      If you install software on a production system that was just released yesterday, you're just asking for trouble. This applies to ALL software, not just kernels.

  • by Outlyer ( 1767 ) on Thursday October 11, 2001 @08:53AM (#2414841) Homepage
    I agree this is an annoying bug, but to paraphrase a conversation between the Comic Book Guy and Bart:

    Comic Guy: Worst kernel EVER

    Bart: Why do you get to complain? They've given you thousands of hours of entertainment for free.

    Comic Guy: As a loyal [user], they owe me.

    Admittedly, I'm probably off the actual text by a bit here, but the point remains: try not to be the Comic Book Guy when Linus makes one mistake.
    • Re:To paraphrase. (Score:5, Insightful)

      by tshak ( 173364 ) on Thursday October 11, 2001 @10:31AM (#2415299) Homepage
      Whether or not it's free has nothing to do with it. If you want to compete with Windows, you can't say, "We have more bugs, but hey, it's FREE!" The whole argument FOR open source is that it's better software AND it's free (as in purchase price), not that it's free and "almost as good" software. So yes, you must hold Linux to the same standards that you hold OS X or Windows to.
      • All software has bugs. The turnaround time on this one was pretty amazing, though. 2.4.11 was released Tuesday, and the fix was provided today.
      • Re:To paraphrase. (Score:3, Insightful)

        by el_nino ( 4271 )
        Linux isn't in competition with Windows. Linux isn't an operating system. A GNU/Linux system can compete with Windows, and it might even *gasp* not use the latest release of the Linux kernel. If you want an operating system, install a distro and use the distribution's current, tested kernel. Don't follow the Linux kernel development and install the latest and greatest kernel unless it's Linux, not a replacement for Windows, you're interested in. There's no reason for most users to always use the latest kernel.

        I do agree that the latest releases haven't been tested as much as they should have been before being released as stable, but the competing with Windows point is moot.
  • this is why.... (Score:4, Insightful)

    by xtermz ( 234073 ) on Thursday October 11, 2001 @09:52AM (#2415133) Homepage Journal
    ...im still running ol'trusty 1.1.59.....
    ...
    but i cant figure out why my box keeps getting owned...

    oh well
  • by Speare ( 84249 ) on Thursday October 11, 2001 @10:23AM (#2415266) Homepage Journal

    I'm a relative newcomer to the Open Source world, but what has struck me is how none of the high-profile projects seem to have their own test harness or test suites. Maybe I'm missing something. Please let me know what test suites major OSS software ships with. (The only one I could think of was autoconf, which isn't a quality-management test suite but a build manager, and the Perl build process does a few demonstrations of terminal features.)

    What I mean is something like "make test" integrated into the project. Running that generated test code would perform hundreds of sanity checks (or even thousands for complicated projects) on the code.

    Perhaps Red Hat and SuSE have this kind of code locked away as their "commercial advantage" (and I could see the arguments for keeping those closed), but it would seem to me that Linus and Alan Cox and crew would be more open about test-suite software for the kernel.

    Install a kernel, run a battery of tests. Find systemic breakers really quickly. It's not hard, it's just a matter of discipline to write the tests. As code is written, write the tests for the code. Any time a bug is found outside the normal test suite, write the test that should have found it. Automatable tests wherever possible.

    In the "eXtreme Programming" development paradigm, this is codified even more vigorously: write the test(s) BEFORE the code. In Eiffel, you program by contract; each method has a precondition and a postcondition to ensure that the state of the world is correct. Part of the official build process for releasing the software should be 100% compliance with the automated tests.

    • GCC has it (Score:2, Insightful)

      by adadun ( 267785 )
      I couldn't agree more. Testing is incredibly important for software projects, and automated tests make sure that certain tests are not forgotten.

      GCC has a test suite: http://gcc.gnu.org/testresults/ [gnu.org] and uses the test suite as a formal release criterion. The GCC team also uses those tests as benchmarks for the compiler.
    • by MarkusQ ( 450076 ) on Thursday October 11, 2001 @11:54AM (#2415658) Journal
      I'm a relative newcomer to the Open Source world, but what has struck me is how none of the big profile projects seem to have their own test harness or test suites. Maybe I'm missing something. Please let me know what test suites major OSS software ships with...What I mean is something like "make test" integrated into the project. Running that generated test code would perform hundreds of sanity checks (or even thousands for complicated projects) on the code.

      Install a kernel, run a battery of tests. Find systemic breakers really quickly. It's not hard, it's just a matter of discipline to write the tests. As code is written, write the tests for the code. Any time a bug is found outside the normal test suite, write the test that should have found it. Automatable tests wherever possible...Part of the official build process for releasing the software should be a 100% compliance with the automated tests.

      There is a comprehensive testing suite in place for Linux; in fact, we just saw it in action. It involves testing the kernel on thousands of boxes simultaneously, running tens of thousands of hours of tests, and getting feedback to the developers within a few hours.

      To paraphrase Pogo: "We've seen the test suite, and it is us."

      Now, this may seem odd or broken, but it has a few charming advantages. First, the costs are distributed amongst those that benefit most, with zero accounting overhead. Second, the response time is very fast. Third (and, IMHO, most importantly) test coverage is maintained by the same laws of statistics that make sure there is air for you each time you take a breath; if usage patterns change, the new usage is included in the tests automatically--even if no one is consciously aware that they are doing "something new" it still gets tested.

      -- MarkusQ

      • Now, this may seem odd or broken, but it has a few charming advantages.

        It also has several rude disadvantages. First, I am being treated as an unpaid QA employee. Second, when I find a problem and report it, I am flamed for not reporting it in the proper format or with the proper dump or trace. Third, my intelligence is insulted by labelling this software "stable" before any unit, system or integration testing has been done.

        Hacking on software is fun, exciting and rewarding. But there are a few of us who just want to *use* the software. The last thing I want to do when I come home is to trace another core and file a problem report.

        Why do Open Source developers get paid for writing software, when Open Source users don't get paid for testing it?
        • First, I am being treated as an unpaid QA employee
          I never got paid for any OSS work I did. Linus doesn't get any money for maintaining the Linux kernel.

          But there are a few of us who just want to *use* the software.
          Then don't upgrade a stable production system the same night a patch comes out. If you would've waited a day or two (like I'm doing), you would be fine.

          • by Bishop ( 4500 )

            Just to add to Leto2's comments: "don't upgrade a stable production system" is not limited to open source. A decent sysadmin will test any patch, commercial or otherwise, before rolling it out to production systems. Patching blindly is just asking for trouble.

            And to all the Linux bashers: this is nothing new. Most big software packages that I am aware of have had a bad patch or "fix."

    • I'm a relative newcomer to the Open Source world, but what has struck me is how none of the high-profile projects seem to have their own test harness or test suites. Maybe I'm missing something. Please let me know what test suites major OSS software ships with.

      The Gnu Compiler Suite has an extensive regression test. See for example "GCC Automated Testing System" [cygnus.com] or "GCC 2.95 Regression Test Strategy" [gnu.org]

      If you need to write a regression test for your own software check out DejaGnu [utah.edu].

      --Andre

    • Stanford already has a checker for Linux kernels; it has found hundreds of bugs, and the fixes get incorporated by Alan Cox and passed along.

      The checker lives here [stanford.edu]
    • There are three issues in the case of the Linux kernel: (1) It depends on hardware. Nobody has one of everything, so nobody can do a comprehensive test. Sometimes one driver will turn out to make one assumption and another driver will make a conflicting assumption; either will work alone, but not together. (2) There are a lot of situations where the specified behavior is underdetermined, and, in particular, cases where you can do a set of things the original programmers didn't expect you to do together. There won't be tests for these sorts of things. (3) There are a lot of corner cases that are hard to make happen. It's very difficult to put the kernel into certain combinations of states, so, on most trials, the situation won't even get tested. This is particularly the case with race conditions between processors and things that depend on peripheral timing.

      I agree that having a test suite would be good for catching a lot of simple bugs in places that are easy to test. Sometimes a new kernel will have something that doesn't work at all, and that should get caught. But there are relatively few of these compared to things which aren't really quite safe, the kind you have to look at carefully and think about for a while, understanding the code; and those are the more important bugs.
  • Nowadays we get a new kernel spat out every second day, and people instantly download it.
    Then they start to complain about the lack of testing.
    YOU are the ones who should test this, folks.
    If you don't like it, why change kernels?
    My main machine is still running 2.0.30.
    Why? Well, I don't need any of the new things.
    Just because it's new and shiny, it doesn't have to be better, folks.
    With 2 years of uptime, I would never even think of swapping kernels.
    And if I find a bug, I don't complain; instead, I fix it. That is the way of open source.

    Learn to code before complaining, people; it really isn't all that hard.

    My main question is: why download a new "release"
    if you aren't prepared for bugs, or to sort them out? No one, and I mean no one, who codes can make a totally bug-free product.

    / J.Thorsell Sysadm.

  • First of all, let's all remember that the amount of money we are paying for the Linux kernel is $0.00. Secondly, if you are bothered by the testing process, download some pre-releases and test them yourself. Thirdly, this kernel was not included in any Linux distribution that I know of, and we all know that Mandrake, Red Hat, and all the others do testing themselves and usually don't put the bleeding-edge kernel in their releases, so the amount of exposure is minimal. Should 2.4.11 have been released? Well, no, but it was, and it was fixed quickly, and you can always revert back one version if something is not working. So everyone take a deep breath and remember: IT'S FREE, and a lot of these guys are submitting these patches in their spare time. What have you done to help out?

    KidA
    • First of all, let's all remember that the amount of money we are paying for the linux kernel is $0.00.

      And the amount staked on it may be millions. If you make the claim that Linux is equal to or better than commercial OS's, then you need to be prepared to have it subjected to the same demanding standards that paying customers have. Since I don't pay directly for development, I don't expect that development to happen on my schedule. But when it makes claims of quality and reliability, I expect them to be backed up. You can't have it both ways.

      Fact is, I am often disappointed by commercial OS's too. I respect Linux enough to not cut it any more slack. I don't think Linus would have it any other way.
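    On "you can always revert back one version": the usual safety net is to keep the previous kernel image around and give it its own boot entry. A minimal lilo.conf sketch (paths, labels, and the root device are illustrative; adjust for your setup, and remember to rerun /sbin/lilo after editing):

    ```
    # /etc/lilo.conf (fragment) -- keep the old kernel as a fallback
    image=/boot/vmlinuz-2.4.12
        label=linux          # default entry
        root=/dev/hda1
        read-only
    image=/boot/vmlinuz-2.4.10
        label=old            # type "old" at the LILO prompt to boot the previous kernel
        root=/dev/hda1
        read-only
    ```

    If 2.4.12 misbehaves, you select "old" at the boot prompt and you're back on a known-good kernel.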
  • by SClitheroe ( 132403 ) on Thursday October 11, 2001 @10:31AM (#2415295) Homepage
    This is only the second kernel I will have built (just installed Slackware 8.0 for the first time, and built 2.4.11)...Can I reuse the configuration file created by "make menuconfig" with 2.4.12, or should I try and re-select all the options I had previously?
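    Yes, you can reuse it. A common recipe (the /usr/src paths are only illustrative; point them at wherever your source trees actually live):

    ```shell
    # Carry the old configuration over to the new tree
    cp /usr/src/linux-2.4.11/.config /usr/src/linux-2.4.12/.config
    cd /usr/src/linux-2.4.12
    make oldconfig    # prompts only for options that are new since 2.4.11
    make dep bzImage modules
    ```

    "make oldconfig" keeps every answer from your old .config and only asks about options it hasn't seen before, so you don't have to re-select everything by hand.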
  • by Lethyos ( 408045 ) on Thursday October 11, 2001 @10:56AM (#2415392) Journal
    I get the impression that Linus is "rushing" releases of 2.4.x in an attempt to get it mature. Perhaps then he can say "it's stable / it works" so that development on 2.6.x/3.0.x(?) can open up in full swing? He mentioned in the interview posted yesterday that he wouldn't really jump into that until 2.4.x was a little older.

    *shrug*

    It just seems that the patches on this tree are coming out faster than on any of the other branches before it, and many more "issues" are slipping through the cracks, along with controversy and laziness. I don't know about the rest of you, but I would prefer slower progress in favor of more careful, more tested releases. :)
  • I've heard of "release early, release often" but this is ridiculous. (-:

    Actually, I can't wait to see what 2.5 development will bring us. The fact that so much is changing even in the stable kernel likely means developers will really cut loose when 2.5 work begins. I think it's actually quite exciting. (As long as 2.4 gets all its holes patched up in the near future of course..)
  • So many folks get mad when Linux has bugs. Most of this is because they do not understand the model.

    Let us compare the Linux development of a 2.odd version to a project branch in your average software company. That project branch will be unstable and used only for feature testing.

    Then, you have the 2.evens. These are equivalent to a release branch or product mainline. You expect these to be *more* stable, but you still don't expect perfect code. When these are sent to your release engineers and Q/A, you get several small rounds of bug-fixing as a result of regression tests, feature verification, and so on.

    This last step can be mapped to what distribution vendors do. So, for example, we expect Red Hat to go with something like 2.4.12, but to add the parport patch into their SRPM. They will doubtless also discover several other problems, or may be dragging along patches from previous kernels that have yet to be merged.

    This is the art of release engineering. There's no such thing as a developer-released product. Never was. This is the value-add that distributions give us. They act as Q/A.
  • It is nice to see that no one bothers to actually test kernel versions before
    they get released into the "stable" tree.


    Perhaps, then, it can be said that they follow the "Slashdot" model of development: Post first and correct things (maybe) later.

  • I had been bullish on the Linus+AA kernels (2.4.10 ran pretty well with week-long uptimes) but 2.4.11 had the symlink thing and only got 8 hours up before 2.4.12, which just locked hard under load after only about 4 hours.

    I guess it's time to hit 2.4.10-ac11 and see what Alan+Rik can do for me. The 2.4 series so far has been a crap shoot... I wonder if they can save it before 2.4.15-20 or so?

    My vote (I don't know these people, so it's by no means binding): Linus starts 2.5 and leaves this 2.4 nonsense behind (the sooner we get a 2.6 the better), and Alan makes a big, ugly change to allow user selection at compile time of Rik's VM vs. AA's VM. Rik and AA then both get to keep going nuts trying to keep the 2.4 boxes up...

Anyone can make an omelet with eggs. The trick is to make one with none.