Linux 3.12 Released, Linus Proposes Bug Fix-Only 4.0
An anonymous reader writes "Linus Torvalds announced the Linux 3.12 kernel release with a large number of improvements across many subsystems, including new EXT4 file-system features, AMD Berlin APU support, a major CPUfreq governor improvement yielding impressive performance boosts for certain hardware/workloads, new drivers, and continued bug-fixing. Linus also took the opportunity to share possible plans for Linux 4.0. He's thinking of tagging Linux 4.0 following the Linux 3.19 release in about one year, and is also considering making Linux 4.0 a release cycle with nothing but bug fixes. Does Linux really need an entire two-month release cycle with nothing but bug-fixing? That's still to be decided by the kernel developers."
Bug fix only release could be useful (Score:5, Interesting)
It could be very useful to have the code stabilize for a bit, put it through regression tests, do some auditing, maximize use of static code checkers, and fix the problems. I hope they seriously consider it.
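For what it's worth, the kernel tree already ships entry points for exactly this kind of checking; a bugfix-only cycle would largely mean running them more aggressively and acting on the results. A rough sketch of the existing Makefile targets (assuming sparse and Coccinelle are installed):

```shell
# Run sparse (static analyzer) over files being rebuilt
make C=1

# Run sparse over all source files, whether or not they need rebuilding
make C=2

# Run the Coccinelle semantic-patch checks bundled with the kernel tree
make coccicheck

# Enable extra compiler warnings beyond the default set
make W=1
```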
Take some lessons from Intel (Score:4, Interesting)
Develop Linux like Intel develops CPUs: first you make a new shiny, then you do an entire release on improving that shiny. Rinse and repeat ad infinitum. Even better if you have two competing teams working on it. Whichever team comes up with the better product by launch time gets the nod.
There are quite a few things I'd like to see fixed (Score:5, Interesting)
Re:Yes, it is needed. (Score:5, Interesting)
In my experience, many of those are esoteric bugs that affect one or two people in weird situations, perhaps with a custom kernel patch applied (e.g. a method works correctly unless you modify the calling code to pass an otherwise invalid parameter). I wonder what the breakdown is between bug types and severity.
I agree; sometimes it is good to clean up even the low-priority bugs that impact a small number of use cases but could be huge: imagine if some "minor" bugs affected embedded devices such as cable routers. For my home file server the bug is nothing, but it could be a security nightmare for someone who runs custom router software. Linux is too useful in too many places to ignore this many bug reports.
Re:Sigh (Score:4, Interesting)
Well, when I was naive I was pissed off a lot too. When I had about 10 years of code under my belt, a major version number in my codebases indicated a complete rewrite / major design overhaul and API breakage as far as the eye can see. That same reasoning was what Linus was going by when he said there'd never be a 3.x.x release -- v3 would mean he'd gone insane and rewritten the whole thing in a message-passing version of VB; I'm paraphrasing. [wikiquote.org]
What's interesting is that I follow the Unix Way(tm): "Do one thing and do it well." So my "Applications" are actually just that: applications of multiple smaller modules, each with its own name / codename and version number. The Editor application "Sledge 0.4.x" is a UI layer stack provided by Core v3.0.x, leveraging Sterling v1.6.x for rendering, Vaporworks v1.13.x for a scripting VM, CFG9000 v5.2.x for INI/.conf persistence, etc. Git submodules make it simple to build other programs that target disparate points in the independent module versions. E.g.: a server providing an HTTP interface to other game engines/servers via remote console uses Core, Vaporworks, and CFG9k. My code editor, audio assemblers, etc. use a different group of modules, but the same common codebase. So the major version of an application may not change even if I use a different subsystem or rewrite a module (e.g. to get my rendering engine using Wayland natively); major module version changes translate to minor application version changes.
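The submodule workflow described above could look roughly like this (the repository URLs and tag names here are made up for illustration; the module names are the poster's):

```shell
# Each module lives in its own repository; an application pins
# the exact module versions it targets via submodules.
git submodule add https://example.com/repos/core.git modules/core
git submodule add https://example.com/repos/vaporworks.git modules/vaporworks

# Pin Core at the version this application builds against
(cd modules/core && git checkout v3.0.2)
git add modules/core
git commit -m "Pin Core at v3.0.2"

# A fresh clone reproduces those exact module versions
git clone --recurse-submodules https://example.com/repos/editor.git
```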
Each of the modules is like a library with its own test suite, but provides a small set of associated (terminal) tools (e.g. my "Core" library provides a platform abstraction layer and a virtual file/network system where local / remote / archived paths can be mounted and mapped to the installed system, allowing me to "cd", "cp", and "mv" across network and OS barriers; Vaporworks provides a scripting environment, but also a compiler / bytecode translator and debugger / profiler tools). For these individual modules and their smaller tools the "major version change = rewrite" method makes sense.
However, with larger applications (say, a distributed, versioned 3D game development environment), or a browser, or a monolithic kernel, full or majority code rewrites aren't occurring. So after having created some sprawling and immense applications I came around to the idea that it doesn't make sense to require the same level of change for a major version bump in the application as in a module -- why even have a major version number if it never changes? The game dev studio always has the same interface: it must always interface at the human / machine level. E.g.: there are a few ways to create a multi-threaded event pump, but the API for them all will be the same. There are different ways to handle pointer input (esp. on Win32 vs X11 vs Wayland, to reduce input latency), but the pointer API is not going to change (it did have to change years ago to support multiple pointers / multi-touch, and that was a major version bump in Core.UI and in the apps that use it). It's not like I've scrapped pointers for eye tracking, context awareness, and vocalizations or gestures... yet, but that was a substantial addition to the system.
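The "same API, interchangeable internals" point can be sketched with a toy event pump (all names here are hypothetical, a minimal Python illustration, not the poster's actual code): callers only ever see post() and a drain call, so the threading strategy behind the queue can change without breaking them.

```python
import queue
import threading

class EventPump:
    """Stable public API: on(), post(), run_once(). The threading
    strategy behind it can change without callers noticing."""

    def __init__(self):
        self._events = queue.Queue()   # thread-safe by construction
        self._handlers = {}

    def on(self, kind, handler):
        # Register a handler for one kind of event.
        self._handlers.setdefault(kind, []).append(handler)

    def post(self, kind, payload=None):
        # Safe to call from any thread; Queue does the locking.
        self._events.put((kind, payload))

    def run_once(self):
        # Drain all pending events; a real pump would block or poll.
        while True:
            try:
                kind, payload = self._events.get_nowait()
            except queue.Empty:
                return
            for handler in self._handlers.get(kind, []):
                handler(payload)

pump = EventPump()
seen = []
pump.on("pointer", seen.append)

# Producers may live on other threads; the public API is unchanged.
t = threading.Thread(target=pump.post, args=("pointer", (10, 20)))
t.start()
t.join()
pump.run_once()
```

Swapping the internals for a lock-protected deque, or a per-kind worker pool, leaves every caller of post() untouched, which is the whole argument for not bumping the major version.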
The Linux kernel is in the same boat. It's at the point now where it has to provide largely the same interface to its users, i.e. programs; ergo: ABI stability. There's not going to be a full rewrite, because that would be death -- it wouldn't be "Linux" anymore. Nothing that depends on it would be able to function, and all the dependent applications / modules / systems -- a huge chunk of the ecosystem -- would have to be rewritten, given the level of change that warrants a rewrite. Especially if we actually want to improve on operating systems -- say, eschew POSIX in favor of an agent-oriented operating environment with bytecode program modules linkable into machine code at install time, or runnable via VM if untrusted (sandboxing that actually works).
Re:What happens when the first number gets too high (Score:4, Interesting)
2. What happens when the major digit begins to resemble Firefox / Chrome's out-of-control version madness? How many years before Linux 19.4?
3.0 was released on 21 Jul 2011. Given the expected timeframe for 4.0 (if he decides to go through with this proposal, of course), that's roughly 3.25 years per major version. Extrapolating from 3.0 at that rate, version 19 wouldn't arrive until sometime around 2063.
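The extrapolation above is easy to check. This is a straight linear projection from the figures in the comment, which is of course an assumption; real release cadence is anything but linear.

```python
# 3.0 shipped mid-2011; 4.0 is expected roughly 3.25 years later.
release_3_0 = 2011.5
years_per_major = 3.25

def projected_year(major):
    # Linear extrapolation from the 3.0 release date.
    return release_3_0 + (major - 3) * years_per_major

# 4.0 lands in late 2014; 19.x lands around 2063.
print(projected_year(4))       # 2014.75
print(int(projected_year(19))) # 2063
```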
It used to be version numbers actually meant something and conveyed some useful hint of scope or amount of change between versions.
With this proposal, it does mean something. It means that a 4.0 release is the result of focused testing and bugfixing of the changes and features added in the 3.x series. If the model seems to work, then 5.0 would probably be the culmination of the work put into the 4.x series. Sure, the meaning is different than is used for most projects, but that doesn't make it worse.
Re:There is balls-to-the-wall competition right no (Score:5, Interesting)
No. Mavericks has a huge number of improvements with the VM subsystem (compressed memory to avoid swap at all costs for better performance and power consumption), timer coalescing, etc. I am seeing a "no bullshit" battery life improvement of 15-20 percent on my 2011 MacBook Pro 15" - and improved performance.
Mavericks is the biggest improvement in OS X performance since Snow Leopard.
Re:True... but not entirely (Score:4, Interesting)
You must be insane. Running your own web server from a Mac is as easy as going to System Preferences and activating Web Sharing. While you're at it, I honestly beg you to try the Internet Sharing checkbox and then choose to broadcast the Wi-Fi connection you are using over Bluetooth to another device. Try to do the same in Linux and let me know if it is a walk in the park. Hell, forget about that: using Ubuntu, try to install a Wi-Fi card that does not have its driver precompiled by Ubuntu and, again, let me know how that goes.
I think Ubuntu has done a fantastic job of making Linux easier, but don't kid yourself: the closed-source OS X is vastly superior, and just taking a look at its App Store should remind you of that. That being said, I will always use Linux for work, especially servers, where it really excels -- but even in that field Linux is getting a run for its money from OpenBSD. The BSD that you said never took off. Once you realize how awful iptables is compared to what OpenBSD gives you (pf), I believe you will stop bullshitting yourself about the supposed virtues of a holy-grail system.
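For flavor, here's the kind of contrast the poster is alluding to: an "allow inbound ssh" rule in each system. This is a minimal sketch, not a working ruleset -- real configurations need default policies, and "egress" is an OpenBSD interface group that may not match your setup.

```shell
# iptables (Linux) -- imperative chain append, explicit state match:
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT

# pf.conf (OpenBSD) -- one declarative line, stateful by default:
#   pass in on egress proto tcp to port 22
```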
really cool bugs, too (Score:4, Interesting)
Mavericks introduced some "really cool" bugs, like graphics redrawing issues.
I now regularly have all sorts of static, black borders, and other artifacts around various screen elements. I'm not alone, if you google around.