Is the Stable Linux Kernel Moving Too Fast?

darthcamaro writes "Yesterday the stable Linux 3.10 kernel was updated twice: an error was made, forcing a quick re-issue. 'What happened was that a patch that was reported to be broken during the RC [release candidate] review process went into the release, because I mistakenly didn't pull it out in time,' Greg Kroah-Hartman said. The whole incident, however, is now sparking debate on the Linux Kernel Mailing List about the speed of stable Linux kernel releases. Are they moving too fast?"
  • No (Score:5, Insightful)

    by Stumbles ( 602007 ) on Wednesday August 21, 2013 @04:47PM (#44636307)
    It's moving along just fine. People make mistakes; get over it, it's not the end of the world. Considering the current release speed and the amount of change made over the long term, the Linux kernel folks have as good a track record as most other software houses, or better.
    • Re:No (Score:4, Interesting)

      by CAIMLAS ( 41445 ) on Wednesday August 21, 2013 @05:32PM (#44636745)

      Really?

      The current process resulted in us having CFQ + ext3 as the default for a long time (some distros still do). That combination makes any sort of interactive performance worse than horrible. The only reason we're beyond it now is that ext3 is on its way out, with ext4 being preferred.

      IIRC, wasn't CFQ one of the first major infrastructural things put into 'stable' after this 'rapid release' approach was adopted?

      Also, udev.

      I'm sure there are other examples... and maybe I'm projecting a bit.

      • CFQ only comes into play when accessing new, uncached data from disk while the disk is under medium-to-high load (at low load the read/write queues are empty).
        I'm struggling to think of any interactive programs that fit that description. Web/email/documents/games etc. don't touch the disk in any significant way, and the cache handles most disk accesses for them anyway.

        • by tlhIngan ( 30335 )

          CFQ only comes into play when accessing new, uncached data from disk while the disk is under medium-to-high load (at low load the read/write queues are empty).
          I'm struggling to think of any interactive programs that fit that description. Web/email/documents/games etc. don't touch the disk in any significant way, and the cache handles most disk accesses for them anyway.

          Until you start swapping. And most users tend to run a ton of programs at once (or load up dozens of tabs in Firefox), so the kernel starts to swap

          • OK, that may be a scenario where CFQ doesn't perform adequately; however, I'd say it's a pretty poor example.
            You cannot reasonably expect a fluid desktop environment under moderate swapping. It is never going to happen.

            • by CAIMLAS ( 41445 )

              It didn't use to be a problem, that's the thing. Before all the modern schedulers came to be (so, back in the 2.4 days) it was entirely possible to stream a video over the network without stuttering while running a -j3 kernel build, updatedb, and a find on root, with memory exhaustion taking place (e.g. a browser was loaded up). It was a matter of pride that these things could be done on Linux, because Windows would fall over under similar loads. Nowadays, Windows handles these situations better than Linux does

              • Use the deadline scheduler, then, if you don't want a complicated one. It should be included in most prebuilt kernels.

                Funnily enough, deadline is recommended for server loads and CFQ for desktop use: the exact opposite of what you are suggesting.

                And I have no doubt my computers can do exactly what you describe just fine, without impacting desktop performance at all.
                Just the other day I was doing network video streaming (1080p over SSHFS), I had 4 virtual machines running, plus Thunderbird,
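
                For anyone who wants to try this comparison themselves, the active I/O scheduler is a per-device sysfs file. Here is a minimal Python sketch; "sda" is a placeholder device name (reading is unprivileged, writing needs root):

                    from pathlib import Path

                    def get_scheduler(dev="sda"):
                        # Lists the available schedulers; the active one is
                        # shown in brackets, e.g. "noop deadline [cfq]".
                        return Path("/sys/block/%s/queue/scheduler" % dev).read_text().strip()

                    def set_scheduler(dev, name):
                        # Writing a scheduler name activates it for that device.
                        Path("/sys/block/%s/queue/scheduler" % dev).write_text(name)

                    print(get_scheduler("sda"))
                    # set_scheduler("sda", "deadline")  # uncomment to switch; needs root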

                • by CAIMLAS ( 41445 )

                  Are you using an SSD? Was memory exhausted?

                  The scenarios I describe were/are ones of disk contention at or near memory exhaustion, when the system dips into swap.

                  You can experience this as soon as the system starts dumping RAM pages to swap, even today, assuming you haven't got an SSD.
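
                  Whether a box is actually dipping into swap is easy to check from /proc/meminfo; a minimal sketch, using nothing beyond standard Linux procfs:

                      def meminfo():
                          # Parse /proc/meminfo into a dict of kB values.
                          info = {}
                          with open("/proc/meminfo") as f:
                              for line in f:
                                  key, value = line.split(":", 1)
                                  info[key] = int(value.split()[0])
                          return info

                      m = meminfo()
                      used = m["SwapTotal"] - m["SwapFree"]
                      print("swap in use: %d kB of %d kB" % (used, m["SwapTotal"]))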

          • and yes, it also appears some webmasters code their websites assuming their users have machines with quad cores and 16GB of RAM.

            This. I had to throw more RAM into my wife's laptop because a significant portion of the interior design blogs she reads are coded by seriously inept people.

      • Re:No (Score:4, Interesting)

        by jhol13 ( 1087781 ) on Wednesday August 21, 2013 @08:24PM (#44638453)

        Linux (the kernel) is accumulating new security holes at least as fast as they are fixed.
        Proof: the Ubuntu kernel security-fix rate has been constant for years.

        (Actually, I have not counted the number of holes, only the number of kernel security patches; the two should correlate, though.)

        • by Error27 ( 100234 )

          I work in kernel security and I would say we have improved. You can't just tell people "don't make mistakes" and expect security to improve; the only way to improve is to improve the process.

          1) We've added a few exploit prevention techniques like hiding kernel pointers.
          2) Our fuzz testers have improved.
          3) Our static checkers have improved.

          But we're not perfect.

          For example, earlier this year we merged user namespaces [lwn.net]. Obviously this is tricky code which deals with security. People had been workin
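
          As an aside on point 1 in that list: the pointer-hiding knob is the kernel.kptr_restrict sysctl (0 = pointers printed via %pK are shown, 1 = hidden from unprivileged users, 2 = always hidden). A one-line check, for the curious:

              # Read the current kptr_restrict setting from procfs.
              with open("/proc/sys/kernel/kptr_restrict") as f:
                  print("kptr_restrict =", f.read().strip())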

          • by jhol13 ( 1087781 )

            Although you are more qualified than I am (I gave up following security actively over ten years ago), I beg to differ. No offence.

            It seems like you have invented a very complicated version of jail. Typical Linux attitude, both "NIH" and "it is a superset (i.e. it can do more), therefore it is better". In security, both have been proven to be bad ideas quite a few times.

            "Code has bugs. That's life."

            I do not want to talk so much about whether some code has bugs or not, but rather about attitude. That sente

            • by jhol13 ( 1087781 )

              One thing: I do not and have never claimed that some other OS is better (you won't get me into an OS flame war).
              I claim Linux could do hugely better.

      • by tibman ( 623933 )

        How does the kernel drive what your disks are formatted as? Your disks are formatted ext3 or ext4 long before you configure/compile the kernel.

        • by dAzED1 ( 33635 )
          Yeah, I'm a bit confused by his comment too. Is he suggesting that ext4 support is not available? Because, um... wtf does the kernel (otherwise) have to do with which formatting is the "default"?
      • The current process didn't result in that at all. Your distro chose that default and could have changed it any time it liked.

        Get involved with your distro if you care so much, and help them choose sane defaults.
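
        On that point: what the kernel controls is which filesystems it can mount, not what mkfs wrote to the disk. A small sketch listing what the running kernel supports:

            # /proc/filesystems lists one filesystem per line; "nodev" marks
            # virtual filesystems that need no backing block device.
            with open("/proc/filesystems") as f:
                supported = [line.split()[-1] for line in f if line.strip()]
            print("kernel-supported filesystems:", ", ".join(supported))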

    • by Shark ( 78448 )

      All in all, I think it's a nice problem to have. Compared to a stagnant kernel, it's great that safely including submitted code fast enough is the issue to look into. I doubt MS or any of the other big software companies have the problem of features and improvements being *produced* faster than QA can keep up; I suspect they have more of an issue with features being *requested* faster than the company can deliver.

    • by msobkow ( 48369 )

      Timely releases of the Linux kernel don't hurt anything anyway, because most packagers and distributors don't ship the updates until they've had further testing. In fact, most distros lock in at a particular kernel release and backport patches to *that* release rather than updating the whole kernel.

      So there are really three levels: dev, stable, and production.

    • by gweihir ( 88907 )

      Indeed. Also, anybody with some experience rolling their own kernels from kernel.org sources knows not to use any x.y.0 release when they want stability. Give a specific release some time to mature before using it, say 2-3 weeks. You may also have to read changelogs to find out what was patched. There are also nice, fairly current "longterm" kernels.

      That said, I have run some current releases on 24/7 machines and did not encounter any issues. I would say that the Linux kernel folks

  • Compared to what? (Score:5, Insightful)

    by bmo ( 77928 ) on Wednesday August 21, 2013 @04:49PM (#44636329)

    "Are they moving too fast?""

    Compared to what? Windows, iOS, OS X, what?

    >known bug that got by review
    >caught
    >fixed rapidly instead of waiting for the next release

    I don't see the problem.

    If this was a regular occurrence, yeah, it'd be a problem. But it's infrequent enough to be "news."

    Unlike Patch Tuesdays, which aren't.

    --
    BMO

    • Patch Tuesdays aren't news. Patch Tuesdays that break something are.

    • by jamesh ( 87723 ) on Wednesday August 21, 2013 @05:04PM (#44636455)

      "Are they moving too fast?""

      Compared to what, Windows, IOS, OSX, What?

      Compared to a speed where accidents like this are less likely to happen, if such a thing exists. It could be that OS release cycles are unsafe at any speed!

      • Re:Compared to what? (Score:4, Informative)

        by MikeBabcock ( 65886 ) <mtb-slashdot@mikebabcock.ca> on Wednesday August 21, 2013 @10:42PM (#44639371) Homepage Journal

        Which would be why you should use an actual distribution kernel and not the compile-it-yourself variety if you need stability and testing.

      • Perhaps the question should be "Am I adopting Linux kernels too fast?"

        Well, if you were hit by this bug in a significant non-testing way then I'd say yes. Unless you are testing, don't install anything on release day.

          Indeed. Arch Linux, the most bleeding-edge rolling-release distro out there, never released 3.10.8 into [core]. [core] is now at 3.10.7, and an update to 3.10.9 is flagged for within a couple of days, skipping .8. The only way you would have installed 3.10.8 is if you (a) roll your own kernel or (b) use the [testing] repo in Arch. Either indicates that you are not a regular user but a developer (in some sense).
    • What's good. (Score:4, Insightful)

      by dutchwhizzman ( 817898 ) on Wednesday August 21, 2013 @05:06PM (#44636479)
      Why do you have to compare it to other operating systems at all? Just look at what the right way to do it should be. Maybe learn from other operating systems, but don't simply match their speed. If things go wrong because you're moving too fast, either slow down or fix your methodology so you can handle the speed. If distributions don't adopt releases because they're a pain to keep supporting, slow down or make them easier to support. If things go too slowly and essential features that everybody needs are missed, speed up. It's not that hard to rely on your own merits rather than letting other operating systems determine how fast you should be going.
    • "Are they moving too fast?""

      Compared to what, Windows, IOS, OSX, What?

      A long time ago, I don't remember where it was, maybe the LKML, Linus Torvalds said there would never, ever be a version 3 of the Linux kernel. I thought that was a strange thing to say, even for him. I thought to myself, "things are really gonna get weird when they get to 2.99.99.99.99.99".

      So now, he's changed his mind and the version numbers are zipping along. Not as fast as the absurd version number inflation of Firefox and Chrome, but still a lot faster than it used to be.

      In general, I don't have any Ma

    • by bmo ( 77928 )

      >moderation "overrated"

      Not like karma counts for anything, but I've always thought the "overrated" mod was chickenshit.

      "I disagree with you, but I don't have the balls to reply"

      --
      BMO

    • by Osgeld ( 1900440 )

      The problem is that most of the time it takes its sweet-ass time:

      known bug gets by review
      caught
      wait half a decade for the known bug to be fixed or, more likely, forgotten so it still remains in the next version; goto 10

  • by icebike ( 68054 ) on Wednesday August 21, 2013 @04:54PM (#44636369)

    As indicated in the debate on the LKML, rc kernels get hardly any testing, although what testing they do get comes mostly from highly motivated and astute testers.

    Most distros release kernels at least one behind the developers' tree, with not a great deal of incentive to update right away (even if they make newer kernels available in a repository for those who want them). So much of the real-world testing of new kernels comes only after release, and even then a kernel doesn't hit Joe Sixpack's machine for several months.

    So at most, this was an embarrassing incident, and not a big deal. The amazing thing is that it was caught at all. Some of us remember kernels that got into production distros with serious breakage that should have been caught much earlier.

      So much of the real-world testing of new kernels comes only after release, and even then a kernel doesn't hit Joe Sixpack's machine for several months.

      One of these days I am going to meet Joe, and I am going to compliment him on his abs.

    • by CAIMLAS ( 41445 ) on Wednesday August 21, 2013 @05:43PM (#44636873)

      From where I'm sitting, as someone who used to routinely build and run rc releases, this is how things look.

      Five or ten years ago you had people such as myself who would build RC (or -ac, etc.) kernel trees to test things and see how they'd work. I know several people who regularly made LKML submissions, many of which contributed to fixes.

      Today, using the rc releases isn't as practical, because they diverge considerably from distribution patchsets. A lot goes into a distribution kernel that isn't present in the vanilla kernel.org kernels, it seems.

      More often than not, pulling everything together to build our own kernels isn't worth the extra effort: possibly due to the shortened cycle and possibly due to general code maturity, there's little benefit. Maybe our definition of 'benefit' has changed, too, but arguably the changes in the kernel today are nowhere near as drastic or significant as when (say) XFS and the additional I/O schedulers were being merged.

  • by fizzup ( 788545 ) on Wednesday August 21, 2013 @05:03PM (#44636443)

    I mistakenly didn't pull it out in time.

  • ... is that it would move even faster.
    Really, this is a non-problem. The 'system' worked.
    Thank God they're not slick sleazeballs like Ballmer;
    they acked the mistake and corrected it.
    Enjoy your new kernel. Relax. Be happy!

    • But my heart monitor automatically downloads and installs the latest kernels within minutes of them being posted to kernel.org.

  • by Anonymous Coward

    Time for an LTS option. Red Hat, Canonical, and Debian should backport security fixes, and maybe mature drivers, to an LTS kernel for 5 years or so.

    For that matter, go ahead and make an LTS gcc fork, backporting security fixes on the same schedule.

  • by robmv ( 855035 ) on Wednesday August 21, 2013 @05:15PM (#44636581)

    No, you want a frozen kernel. A stable kernel isn't one without bugs; it's one where there aren't massive changes and you get dot releases with fixes.

  • by stox ( 131684 ) on Wednesday August 21, 2013 @05:16PM (#44636599) Homepage

    There are plenty of older kernels being actively maintained. Stable does not equal recommended for all production needs.

  • by swillden ( 191260 )

    Keep in mind that the stable kernel releases are not expected to be production-ready. Linus just produces the input to the actual testing and validation processes, which are performed by the distributions. That assumption is built into the kernel dev team's processes. Not that they intentionally release crap, but they don't perform the sort of in-depth testing that is required before you push a foundational piece of software onto millions of machines.

  • by Khopesh ( 112447 ) on Wednesday August 21, 2013 @05:49PM (#44636911) Homepage Journal

    I haven't needed to bypass my Linux distro and install a vanilla kernel in over ten years. I can wait. If it hasn't been packaged by the major distributions yet, it also hasn't been widely deployed. There are some highly competent kernel package maintainers out there, and they're likely to catch a lot of things themselves.

    Then there's the package release and its early adopters (Debian, for example, releases the latest kernels in Experimental and/or Unstable, letting them cook for a while). I typically pick up new kernels when they're in Debian Testing or Unstable (if I really want a new feature). This minimizes the chance of running into problems.

    (Note: this works best for people using standard hardware. YMMV with embedded or obscure devices.)

  • USB will suck in the first 4.XX one.

  • by Tarlus ( 1000874 ) on Wednesday August 21, 2013 @06:23PM (#44637247)

    ...then it's moving too fast.

  • As someone who tested drivers with it:
    OK through about 3.2, then it started to decline.
    Faster decline around 3.4.
    3.7: who needs NFS anyway? It took until 3.9 to fix that.

  • I follow kernel development only cursorily, looking at the kernel mailing list once in a while. But I get the distinct feeling that patch volumes have been higher over the past few months than they were a few years ago. A version is simply something that groups a set of tested patches. Generally, you don't want the sets to get too big, so it seems natural that the pace of version releases keeps up.

    It would be nice to see a plot of the number of commits and the number of versions over time.
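
    That plot is easy to produce from a local clone of the mainline tree; here is a rough Python sketch (the repo path is a placeholder, and it assumes the v3.x tags are present):

        import subprocess

        REPO = "/path/to/linux"  # placeholder: a clone of the mainline kernel tree

        def commits_between(tag_a, tag_b):
            # Count commits reachable from tag_b but not from tag_a.
            out = subprocess.check_output(
                ["git", "-C", REPO, "rev-list", "--count", "%s..%s" % (tag_a, tag_b)])
            return int(out)

        tags = ["v3.%d" % n for n in range(0, 11)]  # v3.0 .. v3.10
        for a, b in zip(tags, tags[1:]):
            print(b, commits_between(a, b))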

  • Are we *gasp* agile? Or what!

  • I am still pissed that there hasn't been an LTS since 3.4, and that they are NOT being released on a regular cadence.

    3.0, 3.2 and 3.4, released in short succession, are all LTS, and since then none. I understand the devs can't maintain a zillion kernels, but could they at least space them out and/or release on a more regular cadence?

    I.e., drop 3.0 or 3.2 for 3.10/11?
    • by e5150 ( 938030 )
      3.10 is longterm, even though kernel.org doesn't say so yet. http://www.kroah.com/log/blog/2013/08/04/longterm-kernel-3-dot-10/
  • by Anonymous Coward

    Apologizing for mistakenly not pulling out in time...hilarious.
