
Linux 3.12 Released, Linus Proposes Bug Fix-Only 4.0

An anonymous reader writes "Linus Torvalds announced the Linux 3.12 kernel release with a large number of improvements through many subsystems including new EXT4 file-system features, AMD Berlin APU support, a major CPUfreq governor improvement yielding impressive performance boosts for certain hardware/workloads, new drivers, and continued bug-fixing. Linus also took the opportunity to share possible plans for Linux 4.0. He's thinking of tagging Linux 4.0 following the Linux 3.19 release in about one year and is also considering the idea of Linux 4.0 being a release cycle with nothing but bug-fixes. Does Linux really need an entire two-month release cycle with nothing but bug-fixing? It's still to be decided by the kernel developers."


  • by cold fjord ( 826450 ) on Sunday November 03, 2013 @10:39PM (#45321861)

    It could be very useful to have the code stabilize for a bit, put it through regression tests, do some auditing, maximize use of static code checkers, and fix the problems. I hope they seriously consider it.
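
    A concrete (and entirely hypothetical) illustration of the kind of defect such static checkers surface: the snippet below is plain userspace C, not kernel code, but tools like gcc's -fanalyzer or clang's scan-build will flag the unchecked malloc(), the unbounded strcpy(), and the pointer that escapes after free() without the program ever being run.

        #include <stdlib.h>
        #include <string.h>

        /* Hypothetical example -- the sort of defect a static analyzer reports
         * from source alone: malloc() is never checked for NULL, strcpy() has
         * no length check, and the freed buffer is returned to the caller. */
        static char *make_label(const char *name)
        {
            char *buf = malloc(16);
            strcpy(buf, name);      /* possible NULL dereference + overflow */
            free(buf);
            return buf;             /* dangling pointer escapes */
        }

        int main(void)
        {
            (void)make_label("cpufreq");
            return 0;
        }

    The kernel tree already has hooks for this kind of tooling (sparse via "make C=1", Coccinelle via "make coccicheck"); a fix-only cycle would be a natural time to chase down what they report.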

  • by Loki_1929 ( 550940 ) on Sunday November 03, 2013 @10:48PM (#45321913) Journal

    Develop Linux like Intel develops CPUs: first you make a new shiny, then you do an entire release on improving that shiny. Rinse and repeat ad infinitum. Even better if you have two competing teams working on it. Whichever team comes up with the better product by launch time gets the nod.

  • by Jody Bruchon ( 3404363 ) on Sunday November 03, 2013 @11:27PM (#45322115)
    One of the most frustrating things for me is that the frenzy over the past six or seven years has led to some serious annoyances with the kernel's behavior:

    1. Linux kernels for i386/x86 can't boot in less than roughly 28MB of RAM. I have tried to make it happen, but the features added along the way don't allow it. Perhaps it's the change to ELF? I'm not sure.

    2. Linux x86 can't have the perf subsystem removed. It's sort of pointless for a Turion 64 X2 or a Core i3, but for systems with weaker processors (netbooks, embedded, etc.) every single evicted cache line counts.

    3. Some parts of the kernel seem to depend on other parts almost arbitrarily. I once embarked on a quest to see what it would take to discard the entire cryptographic subsystem. Long story short: good luck. I was surprised at how many different hashing and crypto algorithms were required to make use of common hardware, filesystems, and network protocols. Are all of these interdependencies really necessary?

    4. The help text for lots of kernel configuration options is in SEVERE need of updating and clarification. Most of the network drivers still say roughly the same thing, and some of the help text sounds pretty silly at this point.

    5. Speaking of help text, why doesn't the kernel show me which options are forcing the mandatory selection of a particular option? For some it's simple, but try hitting the question mark on CRC32c and you get a disastrous, impossible-to-read list of things that force the selection of that option. The help screen should show an option dependency tree that explains how the option in question was forced.

    6. ARM is still a disaster. I have a Motorola Triumph that I don't use anymore but wanted to build a custom system for. It uses a Snapdragon SoC, and the only kernel I can use with it is a 2.6 series kernel from Motorola (or derivatives based on that code base) with lots of nasty deviations from the mainline kernel tree that will never make it into said mainline tree. I have a WonderMedia WM8650-based netbook that originally came with an Android 2.3 port, and I can't build anything but the WonderMedia GPL compliance kernel release if I want to use most of the hardware in the netbook, even though general WM8650 support exists in mainline. Something needs to change to make it easier for vendors to bring their drivers and SoC specifics to mainline so that ARM devices aren't permanently stuck with the kernel version they originally shipped with.

    I'm still using a VIA C7-M netbook which suffers heavily due to its tiny on-chip caches. I also have a Fujitsu P2110 with a Transmeta TM5800 CPU that makes my VIA look like an i7. I also own Phenom II servers, AMD A8 laptops, MIPS routers, a Raspberry Pi, and many Android devices I've collected over the years. What I've seen is that the mad rush to develop for every new thing and every new idea results in old hardware being tossed by the wayside and ignored, especially when that hardware isn't based on an x86 processor. Even then, I'm sure this frenetic, rapid development process has resulted in a lot of unnecessary bloat and a pile of little unnoticed security holes. It may be time to step back and stop adding new features. I would like to see the existing mainline kernel become much more heavily optimized and cleaned up, and then see the inclusion of support for at least some of the embedded platforms that never managed to make it back into mainline.
I know that this is an unrealistically broad set of "wants," but I also know that these are the big nasty unspoken problems in the Linux world that there are no easy answers for.
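
    On the crypto-interdependency point above, a rough, hypothetical kernel-module sketch (the identifiers come from the in-tree crypto API, but the module itself is invented for illustration and is not taken from any real driver) of why a filesystem with metadata checksumming ends up dragging in the crypto layer: it simply asks the crypto API for a "crc32c" transform, which is why filesystems such as ext4 and btrfs end up selecting CRC32c support in their Kconfig entries.

        /* Hypothetical illustration, not from any in-tree driver: requesting a
         * crc32c transform through the kernel crypto API is what creates the
         * hard dependency on the crypto subsystem described above. Builds as
         * an out-of-tree module against kernel headers. */
        #include <linux/module.h>
        #include <linux/err.h>
        #include <crypto/hash.h>

        static struct crypto_shash *crc_tfm;

        static int __init crc_demo_init(void)
        {
            crc_tfm = crypto_alloc_shash("crc32c", 0, 0);
            if (IS_ERR(crc_tfm))
                return PTR_ERR(crc_tfm);   /* fails unless CRC32c support is built */
            pr_info("crc_demo: got crc32c transform\n");
            return 0;
        }

        static void __exit crc_demo_exit(void)
        {
            crypto_free_shash(crc_tfm);
        }

        module_init(crc_demo_init);
        module_exit(crc_demo_exit);
        MODULE_LICENSE("GPL");
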
  • by The Snowman ( 116231 ) on Sunday November 03, 2013 @11:27PM (#45322121)

    The kernel's bug database shows almost 2500 open bugs right now.

    All projects slowly accumulate those hard-to-fix bugs, or the "maybe later" bugs, or the "not interesting right now" bugs. Periodically every project needs to have that cruft cleaned up.

    In my experience, many of those are esoteric bugs that affect one or two people in weird situations, perhaps with a custom kernel patch applied (e.g. a method works correctly unless you modify the calling code to pass an otherwise invalid parameter). I wonder what the breakdown is between bug types and severity.

    Spending two months fixing those bugs might be a minor annoyance to some of the kernel maintainers but would be a godsend to people who have been waiting a very long time for low priority and low interest kernel bug fixes.

    I agree, sometimes it is good to clean up even the low-priority bugs which impact a small number of use cases but could be huge in the right circumstances: imagine if there were some "minor" bugs which impact embedded devices such as cable routers. For my home file server the bug is nothing, but it could cause a security nightmare for someone who runs custom router software. Linux is too useful in too many places to ignore this many bug reports.
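
    To make the "esoteric bug" idea concrete, here is a small, invented userspace C example (nothing like this is claimed to be in the kernel) of a bug that is unreachable from any stock caller and only bites someone carrying a custom patch:

        #include <stdio.h>

        /* Invented example: a lookup table that every in-tree caller indexes
         * correctly. The missing bounds check is a real bug, but only a
         * patched or out-of-tree caller can ever hit it, so the report sits
         * at low priority for years. */
        static const int freq_khz[4] = { 800000, 1200000, 1800000, 2400000 };

        static int lookup_freq(int idx)
        {
            return freq_khz[idx];          /* no bounds check */
        }

        int main(void)
        {
            printf("%d kHz\n", lookup_freq(2));   /* every stock caller: fine */
            /* a hypothetical custom patch calling lookup_freq(7) reads past
             * the table -- undefined behaviour, possibly an info leak */
            return 0;
        }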

  • Re:Sigh (Score:4, Interesting)

    by VortexCortex ( 1117377 ) <VortexCortex AT ... trograde DOT com> on Monday November 04, 2013 @01:28AM (#45322657)

    Well, when I was naive I was pissed off a lot too. When I had about 10 years of code under my belt, all major version numbers in my codebases indicated a complete re-write / major design overhaul and API breakage as far as the eye can see. That same reasoning was what Linus was going by when he said there'd never be a 3.x.x release -- v3 would mean he went insane and wrote the whole thing in a message-passing version of VB; I'm paraphrasing. [wikiquote.org]

    What's interesting is that I follow the Unix Way(tm): "Do one thing and do it well." So my "Applications" are actually just that: applications of multiple smaller modules, each with their own names / codenames and version numbers. The Editor application "Sledge 0.4.x" is a UI layer stack provided by Core v3.0.x, leveraging Sterling v1.6.x for rendering, Vaporworks v1.13.x for a scripting VM, CFG9000 v5.2.x for INI/.conf persistence, etc. Git submodules make building other programs that target disparate points in the independent module versions simple. E.g.: a server providing an HTTP interface to other game engines/servers via remote console utilizes Core, Vaporworks, and CFG9k. My code editor, audio assemblers, etc. use a different group of modules, but the same common codebase. So the major version of an application may not change even if I use a different subsystem or rewrite a module (e.g. to get my rendering engine using Wayland natively); major module version changes translate to minor application version changes.

    Each of the modules is like a library with its own test suite, but provides a small set of associated (terminal) tools. (E.g.: my "Core" library provides a platform abstraction layer and a virtual file/network system where local / remote / archived paths can be mounted and mapped to the installed system, allowing me to "cd", "cp", "mv" across the network and OS barriers; Vaporworks provides a scripting environment, but also a compiler / bytecode translator and debugger / profiler tools.) For these individual modules and their smaller tools, the "Major version change = Rewrite" method makes sense.

    However, with larger applications (say, a distributed, versioned 3D game development environment), or a browser, or a monolithic kernel, full / majority code rewrites aren't occurring. So after having created some sprawling and immense applications, I came around to the idea that it doesn't make sense to require the same level of change for a major version number in the application as in the module -- why even have a major version number if it never changes? The game dev studio always has the same interface: it must always interface at the human / machine level. E.g.: there are a few ways to create a multi-threaded event pump, but the API for them all will be the same. There are different ways to handle pointer input (esp. on Win32 vs X11 vs Wayland, to reduce input latency), but the pointer API is not going to change (it did have to change years ago to support multiple pointers / multi-touch, and that was a major version bump in Core.UI and in apps that use it). It's not like I scrapped pointers for eye tracking, context awareness and vocalizations or gestures... yet, but that was a substantial addition to the system.

    The Linux kernel is in the same boat. It's to the point now that it has got to provide largely the same interface to its users, i.e. programs; ergo: ABI stability. There's not going to be a full rewrite because that would be death -- it wouldn't be "Linux" anymore. Nothing that depends on it would be able to function, and all the dependent applications / modules / systems -- a huge chunk of the ecosystem -- would have to be rewritten, given the level of change that warrants a rewrite. Especially if we actually want to improve on operating systems -- say, eschew POSIX in favor of an agent-oriented operating environment with byte-code program modules linkable into machine code at install time, or runnable via VM if untrusted (sandboxing that actually works).
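
    The "same API, different implementation" argument above is the classic function-pointer backend pattern. A minimal, hypothetical C sketch (names invented, not from the commenter's codebase or from the kernel) of how a pointer-input API can stay stable while the backend underneath it changes:

        #include <stdio.h>

        /* Hypothetical sketch: callers only ever see pointer_backend; whether
         * events come from X11, Wayland, or Win32 is an implementation detail
         * that can change without a major version bump of the interface. */
        struct pointer_event { int x, y, buttons; };

        struct pointer_backend {
            const char *name;
            int (*poll)(struct pointer_event *out);   /* returns 1 if an event is available */
        };

        /* One possible implementation; an X11 or Win32 backend would fill in
         * the same table with its own poll function. */
        static int wayland_poll(struct pointer_event *out)
        {
            out->x = 10; out->y = 20; out->buttons = 0;   /* stub data */
            return 1;
        }

        static const struct pointer_backend wayland_backend = {
            .name = "wayland-stub",
            .poll = wayland_poll,
        };

        int main(void)
        {
            const struct pointer_backend *be = &wayland_backend;  /* chosen at init */
            struct pointer_event ev;
            if (be->poll(&ev))
                printf("%s: pointer at (%d,%d)\n", be->name, ev.x, ev.y);
            return 0;
        }

    Swapping in a different backend means filling in another pointer_backend table; callers, and therefore the major version of the interface, don't change.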

  • by aardvarkjoe ( 156801 ) on Monday November 04, 2013 @01:40AM (#45322713)

    2. What happens when the major digit begins to resemble Firefox / Chrome's out-of-control version madness? How many years before Linux 19.4?

    3.0 was released on 21 Jul 2011. Given the expected timeframe for 4.0 (if he decides to go through with this proposal, of course), that's roughly 3.25 years per major version. So the answer to your question would be sometime in the early 2060s.

    It used to be that version numbers actually meant something and conveyed some useful hint of the scope or amount of change between versions.

    With this proposal, it does mean something. It means that a 4.0 release is the result of focused testing and bug-fixing of the changes and features added in the 3.x series. If the model seems to work, then 5.0 would probably be the culmination of the work put into the 4.x series. Sure, the meaning is different from how most projects use version numbers, but that doesn't make it worse.

  • by smash ( 1351 ) on Monday November 04, 2013 @02:52AM (#45322961) Homepage Journal

    No. Mavericks has a huge number of improvements with the VM subsystem (compressed memory to avoid swap at all costs for better performance and power consumption), timer coalescing, etc. I am seeing a "no bullshit" battery life improvement of 15-20 percent on my 2011 MacBook Pro 15" - and improved performance.

    Mavericks is the biggest improvement in OS X performance since Snow Leopard.

  • by Clsid ( 564627 ) on Monday November 04, 2013 @04:37AM (#45323237)

    You must be insane. Running your own web server from a Mac is as easy as going to System Preferences and activating Web Sharing. While you are at it, I honestly beg you to try the Internet Sharing checkbox and then choose to broadcast the Wi-Fi connection you are using over Bluetooth to another device. Try to do the same in Linux and let me know if it is a walk in the park. Hell, forget about that: using Ubuntu, try to install a Wi-Fi card that does not have a driver precompiled by Ubuntu and, again, let me know how that goes.

    I think Ubuntu has done a fantastic job at making Linux easier, but don't kid yourself: the closed-source OS X is vastly superior, and just taking a look at their App Store should remind you of that. That being said, I will always use Linux for work stuff, especially servers, where it really excels, but even in that field Linux is getting a run for its money from OpenBSD -- that BSD you said never took off. Once you realize how awful iptables is compared to what OpenBSD gives you, I believe you will stop bullshitting yourself about the supposed virtues of a holy grail system.

  • really cool bugs, too (Score:4, Interesting)

    by SuperBanana ( 662181 ) on Monday November 04, 2013 @11:31AM (#45325759)

    Mavericks introduced some "really cool" bugs, like graphics redrawing issues.

    I now regularly have all sorts of static, black borders, and other artifacts around various screen elements. I'm not alone, if you google around.
