
Linux 3.12 Released, Linus Proposes Bug Fix-Only 4.0

An anonymous reader writes "Linus Torvalds announced the Linux 3.12 kernel release with a large number of improvements throughout many subsystems, including new EXT4 file-system features, AMD Berlin APU support, a major CPUfreq governor improvement yielding impressive performance boosts for certain hardware/workloads, new drivers, and continued bug-fixing. Linus also took the opportunity to share possible plans for Linux 4.0. He's thinking of tagging Linux 4.0 following the Linux 3.19 release in about one year, and is also considering the idea of Linux 4.0 being a release cycle with nothing but bug-fixes. Does Linux really need an entire two-month release cycle with nothing but bug-fixing? It's still to be decided by the kernel developers."
  • by icebike ( 68054 ) on Sunday November 03, 2013 @09:36PM (#45321847)

    There have been so many fast and furious features added over the last couple of releases, not only to the kernel but also to the various and sundry major components (like systemd), that taking a breather isn't going to hurt anything. There is nothing huge waiting in the wings that everyone needs next week.

    Take the time to fix everything you can find.

    • by Anonymous Coward on Sunday November 03, 2013 @09:50PM (#45321925)

      I don't know how you can honestly say that there's "nothing huge waiting in the wings that everyone needs next week." You must not understand the current operating system market.

      THERE IS BALLS TO THE WALL COMPETITION RIGHT NOW!

      The moment the Linux community rests on its laurels, even if just to fix some "bugs" that don't even exist, the competition from Windows and OS X will intensify to an extent that we haven't seen in ages.

      Look, Windows 8.1 was just released, and it's a game-changer. It makes the Windows 8 stream a viable option for businesses and home users alike. Windows 8.0 was like Vista was; Windows 8.1 is like Windows 7. Windows 8.0 tried some things out, and some of those were mistakes. Windows 8.1 remedies these, and the result is a powerful, usable operating system.

      OS X 10.9 Mavericks was just released recently, too. It took what was perhaps the most popular and widely used Unix-like system and made it even more efficient and powerful.

      Then there's Linux. There are major changes underway as we speak. The Ubuntu and GNOME 3 communities, which were once among the largest and most appreciated, shat upon the faces of their users, causing them to seek refuge in other distributions and desktop environments. Now we have Wayland on the way, and it's going to bring so much disruption that there may in fact be a civil war of sorts within the Linux community. X is not going to die easily! And then there's LLVM and Clang, which are kicking the living shit out of GCC. In fact, this is a revolution that we haven't seen the likes of in years.

      With so much turmoil in the userland software, it's now up to the kernel to pick up the slack. We're going to need to see the kernel team at least double their efforts to make up for the stupidity of the GNOME crew, for example. We're going to need to see a kernel that offers greater power efficiency on modern systems. We need to see a kernel that'll offer even better process and thread scheduling. We'll need to see a kernel that can scale from the smallest cell phones to the largest clusters. We need to see the completion of Btrfs.

      Never forget that when it comes to operating systems, the BALLS ARE TO THE WALL! This is more true today than ever before. The competition is fierce, and prisoners will not be taken. When there is BALLS TO THE WALL competition, everybody involved needs to bring their best. This includes the Linux kernel developers. They need to be the best they've ever been. This is no ordinary situation; this is a BALLS TO THE WALL situation. And don't you ever forget that!

      • by 0123456 ( 636235 ) on Sunday November 03, 2013 @10:05PM (#45322005)

        You are Steve Ballmer, and I claim my five pounds.

      • by icebike ( 68054 )

        even if just to fix some "bugs" that don't even exist,

        Let's see, Linus thinks there are bugs needing to be fixed, and some random AC says they don't exist.
        Decisions, decisions.

      • Drive the parent AC troll into the ground.
    • And for extra credit, keep patching 4.0 with bug fixes after 4.1 is released.
  • by cold fjord ( 826450 ) on Sunday November 03, 2013 @09:39PM (#45321861)

    It could be very useful to have the code stabilize for a bit, put it through regression tests, do some auditing, maximize use of static code checkers, and fix the problems. I hope they seriously consider it.
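
    To be fair, some of the plumbing for a cycle like that is already wired into the tree; what a bug-fix release would change is how systematically it gets used. For reference, the standard static-checker invocations the kernel build already supports:

        make C=1          # run sparse on the files being recompiled
        make C=2          # run sparse on all source files, recompiled or not
        make coccicheck   # run the Coccinelle semantic-patch checks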

    • I wish KDE and Gone would do exactly the same thing, and ideally, at the same time. In general, everything's pretty stable, but there's always one little bug that everybody knows about that interferes with their workflow. Imagine if we got to a state where almost all of those were gone.

      • If Linus reaches a decision soon, it could be something that propagates through the Open Source / Free Software community.

      • Re: (Score:2, Insightful)

        by Artifakt ( 700173 )

        I'm thinking you meant to say "Gnome", not "Gone", but I have to admit, as a typo, it makes one hell of a Freudian Slip. I won't say I wish Gnome was Gone, but I do wish the Gnome team would restore some user option control, and even extend it. Gnome has pruned a lot with version 3 and the 3.X's, and I would even think joining in the big bug fix movement should be a lesser priority than for KDE or any of the others to join in. A massively less buggy version of a still heavily restricted Gnome might say that

  • Yes, it is needed. (Score:5, Insightful)

    by Frobnicator ( 565869 ) on Sunday November 03, 2013 @09:46PM (#45321903) Journal

    The kernel's bug database shows almost 2500 open bugs right now.

    All projects slowly accumulate those hard-to-fix bugs, or the "maybe later" bugs, or the "not interesting right now" bugs. Periodically every project needs to have that cruft cleaned up.

    Spending two months fixing those bugs might be a minor annoyance to some of the kernel maintainers but would be a godsend to people who have been waiting a very long time for low priority and low interest kernel bug fixes.

    • by The Snowman ( 116231 ) on Sunday November 03, 2013 @10:27PM (#45322121)

      The kernel's bug database shows almost 2500 open bugs right now.

      All projects slowly accumulate those hard-to-fix bugs, or the "maybe later" bugs, or the "not interesting right now" bugs. Periodically every project needs to have that cruft cleaned up.

      In my experience, many of those are esoteric bugs that affect one or two people in weird situations, perhaps with a custom kernel patch applied (e.g., a method works correctly unless you modify the calling code to pass an otherwise invalid parameter). I wonder what the breakdown is between bug types and severity.

      Spending two months fixing those bugs might be a minor annoyance to some of the kernel maintainers but would be a godsend to people who have been waiting a very long time for low priority and low interest kernel bug fixes.

      I agree, sometimes it is good to clean up even the low-priority bugs which impact a small number of total use cases but could be huge: imagine if there were some "minor" bugs which impact embedded devices such as cable routers. For my home file server the bug is nothing, but it could cause a security nightmare for someone who runs custom router software. Linux is too useful in too many places to ignore this many bug reports.

      • The other issue with "small number of users" bugs is that it's hard to determine how small they are. The bugs you see are just the ones someone could be bothered to report (or, in the kernel's case, the ones that eventually percolated up from users through distros as kernel issues).

        So they're certainly important.

      • by smash ( 1351 ) on Monday November 04, 2013 @02:03AM (#45323005) Homepage Journal
        Often, it's the "minor" bugs which are a warning of a more serious underlying problem that will bite you in the arse later in a more serious manner.
    • by jhol13 ( 1087781 )

      All projects slowly accumulate those hard-to-fix bugs, or the "maybe later" bugs, or the "not interesting right now" bugs.

      No, "all" projects certainly does not.
      Besides, "everybody does it" is a lame excuse.

  • by Loki_1929 ( 550940 ) on Sunday November 03, 2013 @09:48PM (#45321913) Journal

    Develop Linux like Intel develops CPUs: first you make a new shiny, then you do an entire release on improving that shiny. Rinse and repeat ad infinitum. Even better if you have two competing teams working on it. Whichever team comes up with the better product by launch time gets the nod.

    • by Anonymous Coward on Sunday November 03, 2013 @11:57PM (#45322551)

      Seeing as how Linus is new to this whole Linux kernel thing, I'm sure he appreciates the input of someone so knowledgeable in kernel development.

      • by smash ( 1351 )
        Linus is fallible like everyone else. Remember that. Otherwise, v1.0 would have been pretty much perfect for single core, and 2.2 would have been a perfect example of SMP done right.
    • by smash ( 1351 )
      Pretty much the way FreeBSD does it. You have -CURRENT (at the moment, v10), which is the bleeding edge; you have the current -STABLE, which is where most of the stability/bugfix stuff shakes out; and then you have the previous -STABLE release, which is for those who are extremely conservative.
    • Develop Linux like Intel develops CPUs: first you make a new shiny, then you do an entire release on improving that shiny. Rinse and repeat ad infinitum.

      So, how it used to work in the 2.2/2.4 days? And they rejected that?

      Even better if you have two competing teams working on it. Whichever team comes up with the better product by launch time gets the nod.

      Ah, internal competition: a fine strategy from the management manual, but a terrible, terrible idea in practice that fosters resentment and animosity and stops cooperation. What do you think the team that fails is going to do? Say "ah, never mind", or get frustrated, go off and do something else with their lives, and never contribute again?

  • by Anonymous Coward
    Will my mouse work with Linux 4.0?
  • Amazing what someone genuinely passionate about their work and free of corporate pressures can accomplish.

    When was the last time you heard about a major version of Windows that was purely for much-needed bug fixes instead of trying to force bullshit "features" like Metro?

  • There is a reason RHEL uses an older kernel after all. Personally, stability is the #1 feature that converted me from Windows. Best to keep it that way.
  • Every released Linux version already tried to be bug-free, so on its own that seems like nothing big enough to deserve a whole new version number for 4.0. But this "bug fix" effort probably goes beyond the normal scope. The kernel must not just work, but work in a hostile environment where governments with plenty of resources try to exploit any "more or less works" vulnerability to plant backdoors and snoop, and where hardware, firmware (the methods that #badBIOS [erratasec.com] could use to spread could be an example), internet protocols or encryption algorithms are not so
  • by Jody Bruchon ( 3404363 ) on Sunday November 03, 2013 @10:27PM (#45322115)
    One of the most frustrating things for me is that the frenzy over the past six or seven years has led to some serious annoyances with the kernel's behavior: 1. Linux kernels for i386/x86 can't boot in less than roughly 28MB of RAM. I have tried to make it happen, but the features added along the way don't allow it. Perhaps it's the change to ELF? I'm not sure. 2. Linux x86 can't have the perf subsystem removed. It's sort of pointless for a Turion 64 X2 or a Core i3, but for systems with weaker processors (netbooks, embedded, etc.) every single evicted cache line counts. 3. Some parts of the kernel seem to be dependent on other parts almost arbitrarily. I once embarked on a quest to see what it took to discard the entire cryptographic subsystem. Long story short: good luck. I was surprised at how many different hashing and crypto algorithms were required to make use of common hardware and filesystems and network protocols. Are all of these interdependencies really necessary? 4. The help text for lots of kernel configuration options is in SEVERE need of updating and clarification. Most of the network drivers still say roughly the exact same thing, and some of the help text sounds pretty silly at this point. 5. Speaking of help text, why doesn't the kernel show me what options are forcing the mandatory selection of a particular option? For some, it's simple, but try hitting the question mark on CRC32c and you get a disastrous and impossible-to-read list of things that force the selection of that option. The help screen should show an option dependency tree that explains how the option in question was forced. 6. ARM is still a disaster. I have a Motorola Triumph I don't use anymore, but that I wanted to build a custom system for. It uses a Snapdragon SoC and the only kernel I can use with it is a 2.6 series kernel from Motorola (or derivatives based on that code base) with lots of nasty deviations from the mainline kernel tree that will never make it into said mainline tree. I have a WonderMedia WM8650-based netbook that originally came with an Android 2.3 port and I can't build anything but the WonderMedia GPL compliance kernel release if I want to use most of the hardware in the netbook, even though general WM8650 support exists in mainline. Something needs to change to make it easier for vendors to bring their drivers and SoC specifics to mainline so that ARM devices aren't permanently stuck with the kernel version that they originally shipped with. I'm still using a VIA C7-M netbook which suffers heavily due to the tiny on-chip caches. I also have a Fujitsu P2110 with a Transmeta TM5800 CPU that makes my VIA look like an i7. I also own Phenom II servers, AMD A8 laptops, MIPS routers, a Raspberry Pi, and many Android devices I've collected over the years. What I've seen is that the mad rush to develop for every new thing and every new idea results in old hardware being tossed by the wayside and ignored, especially when that hardware isn't based on an x86 processor. Even then, I'm sure that this frenetic, rapid development process has resulted in a lot of unnecessary bloat and a pile of little unnoticed security holes. It may be time to step back and stop adding new features. I would like to see the existing mainline kernel become much more heavily optimized and cleaned up, and then see the inclusion of support for at least some of the embedded platforms that never managed to make it back into mainline.
I know that this is an unrealistically broad set of "wants," but I also know that these are the big nasty unspoken problems in the Linux world that there are no easy answers for.
    • Re: (Score:2, Funny)

      by Anonymous Coward

      I hope that they fix the kernel bug that obviously stripped all of the paragraphs out of your comment. It was a kernel bug that did it, right?

    • by Microlith ( 54737 ) on Monday November 04, 2013 @12:24AM (#45322637)

      I once embarked on a quest to see what it took to discard the entire cryptographic subsystem. Long story short: good luck. I was surprised at how many different hashing and crypto algorithms were required to make use of common hardware and filesystems and network protocols. Are all of these interdependencies really necessary?

      Rather than just asking if they are necessary, the better question to ask is: what are they using the cryptographic subsystem for? For example, BTRFS does checksumming and offers compression. EXT4 uses CRC32 as well. And that use isn't arbitrary: they use it to protect data integrity and, in the case of BTRFS, to maximize use of disk space. The TCP/IP stack offers encryption. These requirements aren't arbitrary; the subsystems pull the crypto code in to accomplish a specific goal and to avoid duplicating code.
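
      To make the checksumming concrete: the algorithm in question is CRC32c (the Castagnoli polynomial mentioned above in the menuconfig complaint), and a user-space sketch of it fits in a few lines of C. This is only an illustration; the in-kernel version goes through the crypto API so that table-driven and SSE4.2-accelerated implementations can be swapped in behind one interface, rather than using a naive bit-by-bit loop like this:

          #include <stdio.h>
          #include <stdint.h>
          #include <string.h>

          /* Bitwise CRC32c (Castagnoli), reflected polynomial 0x82F63B78. */
          static uint32_t crc32c(const uint8_t *buf, size_t len)
          {
              uint32_t crc = 0xFFFFFFFFu;
              for (size_t i = 0; i < len; i++) {
                  crc ^= buf[i];
                  for (int k = 0; k < 8; k++)
                      crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
              }
              return ~crc;
          }

          int main(void)
          {
              const char *msg = "123456789";
              /* Standard CRC-32C check value for "123456789" is 0xE3069283. */
              printf("crc32c(\"%s\") = 0x%08X\n", msg,
                     (unsigned)crc32c((const uint8_t *)msg, strlen(msg)));
              return 0;
          }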

      ARM is still a disaster.

      And it will continue to be so long as every ARM device is its own unique thing. There might be forward progress with AArch64.

      I have a Motorola Triumph I don't use anymore, but that I wanted to build a custom system for. It uses a Snapdragon SoC and the only kernel I can use with it is a 2.6 series kernel from Motorola (or derivatives based on that code base) with lots of nasty deviations from the mainline kernel tree that will never make it into said mainline tree.

      Probably lots of board-specific details (the board support package) that have no relevance in the kernel. x86(-64) and other architectures have the advantage that once processor support is added, support for every motherboard that CPU gets plugged into is virtually guaranteed. x86 would have the same problem as ARM if not for the use of things like ACPI, PCI, and the various hardware reporting formats supplied by legacy BIOS/UEFI.
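
      (This is the gap device trees are meant to close on ARM: instead of a hard-coded board file compiled into the kernel, the board is described by a data file the kernel parses at boot. A purely illustrative sketch, with a made-up vendor and board name:)

          /dts-v1/;
          / {
              model = "Hypothetical ARM board";
              compatible = "vendor,hypothetical-board";

              memory {
                  device_type = "memory";
                  reg = <0x80000000 0x10000000>;   /* 256 MB of RAM at 0x80000000 */
              };

              serial@101f0000 {
                  compatible = "arm,pl011", "arm,primecell";
                  reg = <0x101f0000 0x1000>;       /* a memory-mapped PL011 UART */
              };
          };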

      I have a WonderMedia WM8650-based netbook that originally came with an Android 2.3 port and I can't build anything but the WonderMedia GPL compliance kernel release if I want to use most of the hardware in the netbook, even though general WM8650 support exists in mainline.

      You'll have to blame WonderMedia. Barnes and Noble, Amazon, etc. all do the same thing: baseline GPL compliance release. Chip vendors will do the same thing, releasing only what is necessary and not bothering to integrate upstream. This is no small part of why vendors abandon Android devices so rapidly.

      Something needs to change to make it easier for vendors to bring their drivers and SoC specifics to mainline so that ARM devices aren't permanently stuck with the kernel version that they originally shipped with.

      Something does need to change, however that something is not in the kernel.

      I also have a Fujitsu P2110 with a Transmeta TM5800 CPU that makes my VIA look like an i7. I also own Phenom II servers, AMD A8 laptops, MIPS routers, a Raspberry Pi, and many Android devices I've collected over the years. What I've seen is that the mad rush to develop for every new thing and every new idea results in old hardware being tossed by the wayside and ignored, especially when that hardware isn't based on an x86 processor.

      And virtually all of that is still supported, with the ARM caveat noted above. Even the Transmeta CPU is still supported. What ends up happening is that the world moves on, and older hardware passes into history and receives less attention.


    • by XB-70 ( 812342 )
      4.0 should consist of the following: the ability to decipher the hardware that it is installed on, and then an automated optimization and re-compiling process for that hardware à la Gentoo, with a bloated fall-back option in case of failure. Realistically, how often have ANY of you ever changed a bus, processor, network card, drive controllers, or other hardware - especially on boards with much of that built in?
      • by smash ( 1351 )
        Just compile drivers/extra features as loadable modules, and get on with your life? The whole obsession with recompiling the kernel and stripping things (rather than just building as loadable modules) is (for 99% of users) just making work for yourself when you discover that "oh, crap, this software I'm trying to use needs the frumble-mumbo kernel feature".
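
        (Concretely, the difference is one letter in the .config; a hypothetical fragment, with btrfs standing in for the frumble-mumbo feature:)

            # Built into the kernel image, always resident:
            CONFIG_EXT4_FS=y
            # Built as a loadable module, loaded on demand via "modprobe btrfs":
            CONFIG_BTRFS_FS=m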
    • by smash ( 1351 )
      One of the most frustrating things for me is attempting to read a single paragraph of about 500 words.
    • by smash ( 1351 )
      Oh, and I was booting ELF kernels on my 486 in 4 megabytes of RAM back in 1995, so it wasn't the ELF change.
      • by epyT-R ( 613989 ) on Monday November 04, 2013 @04:21AM (#45323385)

        I've run into this before, and I've gotten modern (late 2.6) kernels running on systems with 8MB of RAM. I have not tried with 3.x, and it's difficult to get the kernel size under 3 or 4MB these days. Under "Processor type and features", try disabling the 'build a relocatable kernel' option and setting CONFIG_PHYSICAL_START (shown in menuconfig as "physical address where the kernel is loaded") to a value less than the default 0x1000000 (16MB). This is a worked-for-me solution, not a guarantee.
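
        For reference, the resulting .config fragment would look something like this; the 0x400000 (4MB) load address is just an example of a value below the 16MB default, not a tested recommendation:

            #
            # Processor type and features
            #
            # CONFIG_RELOCATABLE is not set
            CONFIG_PHYSICAL_START=0x400000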

    • You're complaining that it's not easy to compile your own kernel? I am simultaneously both kind of sympathetic, and not. What is the use case that the average-to-slightly-power-user needs to compile their own kernel for, anyway? (I am actually curious. Hardware support?) And if you're a legit power-user, shouldn't you already know more or less how to do it?

      On the other hand, documentation always sucks. ALWAYS. Which is NOT to say that we shouldn't try to make it better.

  • Linus's stated reason for not wanting numbers to go too high seems to be based on a gut feeling, or a personal dislike of high numbers.

    Two questions.

    1. What happens when there are major changes in the Linux kernel? How are they now represented in selection of version number?

    2. What happens when the major digit begins to resemble Firefox/Chrome's out-of-control version madness? How many years before Linux 19.4?

    It used to be that version numbers actually meant something and conveyed some useful hint of the scope or amount of change between versions.

    I'm not sure dumping this concept for the sake of political games and/or OCD pedantry is worth the opportunity cost to the user, compared with a structured, predictable scheme based on commonly agreed and understood guidelines.

    • by aardvarkjoe ( 156801 ) on Monday November 04, 2013 @12:40AM (#45322713)

      2. What happens when the major digit begins to resemble Firefox/Chrome's out-of-control version madness? How many years before Linux 19.4?

      3.0 was released on 21 Jul 2011. Given the expected timeframe for 4.0 (if he decides to go through with this proposal, of course), then that's roughly 3.25 years per major version. So the answer to your question would be sometime in 2061.

      It used to be that version numbers actually meant something and conveyed some useful hint of the scope or amount of change between versions.

      With this proposal, it does mean something. It means that a 4.0 release is the result of focused testing and bugfixing of the changes and features added in the 3.x series. If the model seems to work, then 5.0 would probably be the culmination of the work put into the 4.x series. Sure, the meaning is different from the one most projects use, but that doesn't make it worse.

      • 2. What happens when the major digit begins to resemble Firefox/Chrome's out-of-control version madness? How many years before Linux 19.4?

        3.0 was released on 21 Jul 2011. Given the expected timeframe for 4.0 (if he decides to go through with this proposal, of course), then that's roughly 3.25 years per major version. So the answer to your question would be sometime in 2061.

        I was going to post exactly the same thing; you would think that after 20 years we could go from 3 to 4 without someone whining about it.

  • by XB-70 ( 812342 ) on Monday November 04, 2013 @12:35AM (#45322691)
    There is such a great contrast between the slow, steady, improvement-laden release of Linux and the article that precedes this one, on Windows 8.1, which can't even get its mouse to work. You'd think that Microsoft is trying to push out Windows 95!

    Overall, it speaks to the simple fact that if the agenda is to improve things rather than to make money, the improvements are what make money in the long run.

    FYI: I run a bunch of different OSs: Apple, Linux (4 or 5 distros), Windows 8.x, 7.x, Vista, and 2003 (server).

    It's been a long, long time since I've had Linux crash or become unconfigurable - whether I upgrade from a previous version or do a clean install. Way to go, Linus!!

    • by smash ( 1351 )
      Haven't seen the mouse problem in 8.1 yet. Apparently it occurs in some games, but I've not seen it in the 80 I have installed yet. But I'm willing to bet that the number of games the problem DOES NOT occur in is equal to or greater than the entire Linux game library. And yeah, I run a bunch of stuff too: OS X, Win7, Win8.1, Server 2003, 2008r2, FreeBSD and Linux.
  • by Evil Pete ( 73279 ) on Monday November 04, 2013 @01:16AM (#45322861) Homepage

    Now, it's not that I bump up against many bugs, but this is a very smart move. So many times you see feature upon feature added, maybe crash a bit, blah blah. But sometimes you just have to stop, take a deep breath, and just fix what is there rather than pile on new stuff. A brave decision, but essential for the OS itself, which must be rock solid above all else.

  • I definitely hope 4.0 is a bug-fix-only kernel.

    It opens up the possibility of providing support for the kernel for sufficiently longer periods, and essentially, it could act as an LTS kernel for distributions. Linux is not that stable at this time, and the experience is still very much hit or miss on systems. Whilst things are certainly better than they used to be, there are still many cases where I come across systems which should work, but don't (i.e., they might stutter a lot, sometimes occasionally ker

  • I think for this to work he has to say something like "We won't move on and merge new features until X bugs have been fixed." In other words, if you want the merge window to reopen for features, fix some bugs. X has to be high enough that a good many developers have to work at it. Kinda like making sure you hit your target heart rate before getting off the treadmill.
