Linux Kernel 2.4 out by this Fall?
Skeezix writes
"Linus says that he aims to make kernel upgrades more
incremental instead of cramming tons of features into each
major upgrade. The net result? The next major kernel upgrade
might be out by Fall."
I still haven't worked out all my kinks with 2.2 yet; at this rate 2.4 will be out before I sort it out!
Calm down :-) (Score:1)
Daniel
Re:Still on 2.0.X (Score:1)
Linus says he's ready for another 8 1/2 years (Score:1)
In the article they quoted Linus as saying:
I feel like going for another eight-and-a-half years.
Though I'm sure this has been talked about before, what would happen if Linus did hang it up? Do any of the other core kernel developers have what it takes to fill his shoes, or is it likely that the kernel would fork in different directions under different leadership? Or maybe it would move to a more democratic development model like Debian has. Just something to think about :^)
Re:No marketing savvy (Score:1)
I actually owned two copies of Windows 1.1 (both given to me, so I don't know how much it cost), and darn but I wish I hadn't wiped those floppies and used them for other things. I'd love to install it again just as a conversation piece.
Re:Version 2.4 - Why not? (Score:1)
Re:Bah (Score:1)
Re:No marketing savvy (Score:1)
--
Re:Linux on small devices doesn't make sense (Score:1)
That's what I thought!
No, really, in all honesty, Linux running as a command-line-only OS on a small device would not be that bad an idea. To be effective it would definitely have to be streamlined, but still, a command-line-only OS has a lot of potential on a small amount of computing power. Think about it: using a command-line-only version of Linux still gives you access to newsgroups, email, text editing (for notes), security, and compatibility with a full-blown operating system for easy file transfer. Assuming there were a specialized Linux made to run on these little guys, I think it would be great.
The Palm Pilot example shows that you would need a graphical interface of some sort, but it would be very easy to do. Since Palm Pilot graphics are black and white anyway, it's no big deal and requires very little memory. I say go for it.
Eric
By the way, I honestly believe the Commodore 64 could do a decent job of running Linux, provided its graphical and sound capabilities are limited, and Ethernet is gonna have to be an engineering feat of its own. Still, you can operate modems just as fast with the C64 as you can with a PC, and if Linux can provide a TCP/IP stack for the Commy, then you will still have access to networking. And, another engineering feat, but possible, would be to adapt a PCI bus to the Commy and use video cards from the PC. Quite a novel little project really. Good for a few Sundays when there is nothing else to do...
my take on things... (Score:1)
Re:Until Merced is release, it is under NDA. (Score:1)
RTLinux merge... (Score:1)
6. What next and acknowledgements:
[...]
Linus Torvalds once said that the RTLinux core would become integrated with the standard kernel in 2.3, but the availability of pre-patched kernels makes this a less pressing issue.
Well, there's Alan Cox for starters (Score:1)
Chris Wareham
'Twas just a joke (Score:2)
I think the original "somebody already has" comment was actually directed at the AC who called the idea "absurd".
I added the comment because people sometimes assume that slashdot posters are trying to contribute serious, well-thought-out insights and ideas to the forum. That would be OTHER posters, not me.
Re:Alan Cox probably. (Score:1)
Think about it: how many of the programs in your distribution do you really use? The kernel isn't a place to play political games, and democracy encourages compromise and horse trading.
no problems here (Score:1)
Sanity. Good. (Score:1)
It would have been nice to have had a 2.4, 2.6, 2.8 somewhere in between.
Burnout (Score:1)
Re:Sanity. Good. (Score:1)
software companies who shall remain nameless
we believe in actually progressing through version numbers as opposed to incrementing them at random for marketing reasons
Linus (Score:1)
--
Get your fresh, hot kernels right here [kernel.org]!
Not Much Difference (Score:1)
1.2 -> 2.0 Major changes
2.0 -> 2.2 Major changes
2.2 -> 2.4 Minor changes
2.4 -> 2.6 Minor...
I wouldn't look for anything huge until 3.0. But I suppose that could be a month away.
About time! (Score:2)
Much better than MicroSoft: Promise early. Release maybe. Patch occasionally.
This is good news indeed.
--
At this rate (Score:1)
thejeff
This is a good thing (Score:1)
Alan Cox All the way (Score:1)
Who's going to maintain Linux kernel development if Linus hangs it up? Alan Cox, for 3 imaginative reasons:
1) We always hear about kernel 2.2.x-acX
2) He's a good-looking guy
3) Linux will be called Lincux
Here are my 2 cents...
I am not a geek
It's just my opinion
Sorry for my English
emman
Re:Still on 2.0.X (Score:2)
-- New devices (I do have USB, but nothing on it currently)
-- Ensuring that 2.0.X users don't get forced to upgrade. (What? You want to run Gnome 2.0? That requires gtk 1.4, which only supports glibc 2.2.3, which requires Linux 2.6.)
Transmeta Anyone? (Score:1)
Piecing together one groundless rumour with another:
Maybe Transmeta's first product is going to be something that'll make both the Palm Pilot and the Itsy look big, clunky, and outdated.
specifics (Score:1)
I have absolutely no idea about SMP support. Was primitive SMP available in 2.0.0?
---
MS Marketing and Version Numbers (Score:1)
Visual Interdev jumped from 1.0 to 6.0. What happened to the other numbers?
Still, when you get down to brass tacks, you still have proper version numbering, even from MS... "Oh, asp.dll 2.37.2.2 leaks memory - you need asp.dll version 2.39.17.4 to fix that!"
Ugh
Re:What's new in 2.4? (Score:1)
That is something I've been wondering about since the Mindcraft benchmark. Are there kernel-level threads?
If there aren't up to now, and if we add them and modify some parts of the kernel to use them efficiently, this could be a considerable boost in SMP performance.
Hope this will become true.
Imagine Linux kicking NT's ass on high-end hardware (something like 16+ CPUs). That is one thing I would want to see (the sooner the better).
just some of my dreams
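For what it's worth, Linux has exposed kernel-scheduled "threads" through the clone() syscall for a while (it's what LinuxThreads builds on); each clone()d task is a separate entity to the scheduler, so an SMP box can run them on different CPUs. The Mindcraft gap is more about how well the rest of the kernel (networking, locking) scales. A minimal sketch of my own, assuming glibc's clone() wrapper and a downward-growing (i386) stack:

/* Create one kernel-scheduled "thread" with clone() and wait for it. */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define STACK_SIZE (64 * 1024)

static int worker(void *arg)
{
    /* Each clone()d task gets its own pid and is scheduled by the kernel,
     * so on an SMP box several of these can run on different CPUs. */
    printf("worker %d running as pid %d\n", *(int *)arg, (int)getpid());
    return 0;
}

int main(void)
{
    int id = 1;
    char *stack = malloc(STACK_SIZE);
    if (stack == NULL)
        return 1;

    /* CLONE_VM/FS/FILES make this a thread rather than a full fork;
     * SIGCHLD lets the parent reap it with a plain waitpid().
     * The stack grows downward on i386, so pass its top. */
    pid_t pid = clone(worker, stack + STACK_SIZE,
                      CLONE_VM | CLONE_FS | CLONE_FILES | SIGCHLD, &id);
    if (pid == -1) {
        perror("clone");
        return 1;
    }
    waitpid(pid, NULL, 0);
    free(stack);
    return 0;
}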
Know the process. (Score:1)
Pure humor with no contribution to discussion (Score:1)
Christopher A. Bohn
yeah sure (Score:1)
I say, "yes sure", why not ?
version 2.2 was out just before summer 1998 and the 2.4 will be out before the end of the year ?
ok. He can say so, as usual, but we're talkin' about software, don't forget it.
seb.
--
Marketing or maturity? (Score:2)
Maturity: the kernel has settled down quite a lot at 2.2, and hopefully there won't be as many major upheavals as from 2.0 to 2.2; it should be more about adding drivers and tweaking, rather than a redesign.
Marketing: or more accurately, exposure. As the user market for Linux gets more and more widespread, the impact of a major change will cause greater ripples. Most users (and distributions) will only run "stable" releases, so it makes sense to take lots of incremental upgrades (little steps) to that population, rather than one big jump (and possible fall over).
As the market for Linux increases, so does the potential inertia to change. Take libc5 to glibc, for example. The Linux bandwagon hadn't really started rolling then, but if it had before the change, I think it would have been a lot more hassle to get people to migrate, because of all the apps that would have been released for libc5 systems.
As a result of this increasing inertia, I think companies will slowly drift from releasing bleeding-edge versions ("first with Linux v x.y!!") to releasing "burnt-in" versions. For the end users, that's good; for the developers, that's bad, as there are fewer people to feed back bugs. Hence, releasing stable versions of the kernel often will help alleviate this problem.
--
Re:Bah (Score:1)
Sun's a great company and Solaris is a nice OS but the marketing people need to take a break.
-Mike
--
Re:Not sure (Score:1)
Re:Pure humor with no contribution to discussion (Score:1)
Re:Still on 2.0.X (Score:1)
Re:Don't you get it? Most Linux users's ARE Avg Jo (Score:1)
Re:Transmeta Anyone? (Score:1)
OTOH, I don't think telephony is a "big thing"; it should be a fading business. The Neon (?) chip will IMHO be just a cheap one, which is way more efficient for special tasks (DSP, 3D acceleration) than current general-purpose CPUs. Hopefully able to run everything efficiently, but many doubt that :)
Another lesser-known Transmeta leak is here [deja.com].
Re:Marketing or maturity? (Score:1)
Read between the lines (Score:2)
Such parallel development is necessary considering the number of people working in parallel on kernel-related issues. In the old days, things were tried out in the development kernels; nowadays people like Andrea, Rick, and the isdn4linux and pcmcia groups make their patches, and then ask on the mailing lists for people to test them. Such things start as hacks, tryouts, or proofs of concept without hampering the development of any other part of the kernel. Once their patches, or sometimes just their ideas, have been tested by some people and proven to be good, then Linus may decide to put the patch, or sometimes just the basic concept behind it, in the kernel.
You say 'unless everybody starts coding a lot faster', for which you assume that the number of people stays the same... which is not the case. There are a lot more people working on the kernel, or kernel-related projects, right now than there were at the beginning of 2.1.x, and especially compared to the old 1.3.x days.
With the patch archives maturing (kernelnotes, the patches at rock-projects, Kernel Traffic, and others), and with the support of Alan's -ac patches, such parallel developments will still be available to many for testing and use, even before Linus decides to include them.
No more 2+ years of waiting for newly stabilized kernels; it's going to be great!
Definitely a good thing (Score:2)
It's definitely a good thing, for a number of reasons.
The first is that the new features will get "to market" more quickly. That means more people using them, which in turn means more bugs found, which in turn means they get working more quickly.
A second, and important, reason in terms of World Domination(tm) is that it takes away a lot of the naysayers' ammunition. For example, for a long time they said "Linux has poor SMP support", which of course was true for 2.0.x but not for 2.1.x. Now they are going to say the same thing about USB, but given the rate of USB development on 2.3 right now, a 2.4 in the fall should have fully functional USB.
Finally, it should make each small upgrade much less painful, since they are more incremental.
Re:Not sure (Score:1)
As far as bugfix / optimization, I would assume that, for example, 2.2.x + (some features) + (2.2.x bugfixes) + (2.3.x bugfixes) = 2.4.0, in the cycle envisioned. This means that the 2.4.0 codebase should not be too far away as far as, e.g., filesystem or network drivers are concerned, but maybe 2.4.0 has features X, Y, and Z that comprise the main diff between 2.2.x and 2.4.x. I think that it could, therefore, turn out to enhance the stability of the system as a whole by integrating useful features into the stable branch when they are ready, rather than basing the stable branch on certain features.
-Chris
AKA How To Beat MSFT at their own game (Score:1)
"Oh, but that was version 2.0. We're already at version 2.6, which is five times faster at multi-processor static page hurls and still won't crash like IIS does..."
If it's got a higher version number, it must be better - Bill Gates' law of increasing profits
Will in Seattle
What's needed... (Score:1)
Journalling File System
Swap File System for
Real Time Scheduling
Intelligent Kernel Thread Scheduling (where threads of a process are spread among all CPUs)
Kernel Asynchronous I/O (KAIO)
Large file support (>2GB)
I know SGI is going to open-source a journalling filesystem. That could be in a module, I suppose. The others are of various importance, but I suppose they should be considered (a quick sketch of the large-file point follows below).
Any thoughts?
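To illustrate the large-file item from the wish list above: a minimal userland sketch of my own (not an official interface from the article) of the glibc LFS transitional API, where defining _FILE_OFFSET_BITS=64 widens off_t so open()/lseek() can address offsets past the 2GB signed-32-bit limit, assuming the kernel and filesystem underneath cooperate, which is exactly what the wish-list item asks for:

#define _FILE_OFFSET_BITS 64   /* ask glibc for the 64-bit off_t interfaces */
#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int fd = open("bigfile.dat", O_RDWR | O_CREAT, 0644);
    if (fd == -1) { perror("open"); return 1; }

    /* Seek 3GB into the file -- impossible with a 32-bit signed off_t. */
    off_t where = (off_t)3 * 1024 * 1024 * 1024;
    if (lseek(fd, where, SEEK_SET) == (off_t)-1) {
        /* On a kernel/filesystem without large file support,
         * this is where things fall over. */
        perror("lseek past 2GB");
        close(fd);
        return 1;
    }
    if (write(fd, "x", 1) != 1)   /* the file becomes sparse but > 2GB long */
        perror("write");
    close(fd);
    return 0;
}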
Re:Bah (Score:1)
I do not work for Sun. I do not work for M$. This is hearsay.
-Chris
Re:Still on 2.0.X (Score:1)
Argh... no. Glibc compiles on AIX, IRIX, etc., etc. Things like that have nothing to do with the kernel. (A generalization for the sake of clarity.)
The things that DO matter as far as 'upgrading' a kernel are exactly what the previous person mentioned: support for hardware and features. If you want really GOOD NFS support, you're 'forced' to upgrade. Some programs might use features of a newer kernel (the new procfs comes to mind), but that is hardly a forced upgrade.
So, you make a choice. Live with your old kernel, or learn to use methods other than rpm to install software. No one's forcing you to do anything.
And, hell, if you don't like it, learn C.
--
blue
Re:Only distros and 'average joes' (Score:1)
Also, you assume that all kernel development is done by "kernel developers". But don't some packages run as kernel modules (for example, the lmsensors package) and yet get developed by people who are not working on the main Linux kernel? I would think that it might help if features such as these were merged into the main Linux kernel, especially features like those in lmsensors (hardware health monitoring), which would be so useful.
Also, Alan Cox seems to put out an awful lot of patches quickly (I guess that the -ac patches are actually put together by a group of people that Alan supervises or organizes). While Linus can't mandate that developers work faster, as these developers learn more about the kernel, and how it operates, they can work more efficiently and thus produce more code/features/whatever-measurement-unit-you-want.
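For what it's worth, the entry barrier for an out-of-tree driver is low; a 2.2-style module is just a pair of entry points. A bare-bones sketch of my own (hypothetical file and message names, not the lmsensors code):

/* Minimal 2.2-era out-of-tree module skeleton (illustrative only).
 * Build roughly the way the kernel does:
 *   gcc -O2 -DMODULE -D__KERNEL__ -I/usr/src/linux/include -c hello_mod.c
 * then load it with: insmod hello_mod.o */
#define MODULE
#include <linux/module.h>
#include <linux/kernel.h>

/* Called by insmod: a real driver would probe hardware here
 * (e.g. poke the SMBus the way lmsensors does) and register itself. */
int init_module(void)
{
    printk(KERN_INFO "hello_mod: loaded\n");
    return 0;   /* non-zero would make insmod fail */
}

/* Called by rmmod: undo whatever init_module() set up. */
void cleanup_module(void)
{
    printk(KERN_INFO "hello_mod: unloaded\n");
}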
Re:Only distros and 'average joes' (Score:1)
What I was saying is that Linus' decision will not change the development rate, just how often he decides to stop and create a stable release. Of course the development rate may change for plus or minus, but this announcement isn't going to be a direct cause (contrary to what the original post suggested).
So kernel development is probably going to continue much as it has: blisteringly fast. The kernel patches will still be produced at the speed of Free Software, but now distributions will be able to take advantage of them more often. That's all.
NT 3.1, not Windows 3.1 (Score:1)
Solaris "2.7" == SunOS 7? (Score:1)
Anyways, "Solaris Seven" sounds cooler than "Solaris Two Point Seven".
Is ext3 still planned for Linux 2.4? (Score:1)
Re:solaris (Score:1)
Re:No marketing savvy (Score:1)
Actually, I think he was referring to Windows NT, which started out at 3.1... of course, maybe they did that because it was based on the OS/2 codebase, and OS/2 already had a 2.x by then...
Re:MS Marketing and Version Numbers (Score:1)
Just a thought.
Re:Still on 2.0.X (Score:1)
The plan is to release security upgrades if needed.
For new devices, it's possible for third parties to provide drivers (however, USB may be another matter, requiring too much effort for all but the most senior kernel gurus to backport).
Userland programs don't care about the kernel, and it is never true that the source for something like gtk would need a particular C library (though someone's pre-installed binary might). Newer glibcs will run on older kernels, as well as non-Linux kernels; you can run Gnome on BSD or other Unix systems.
If you let yourself fall way behind, it may mean that you have to build some software from source, rather than just install from an RPM or the 21st century equivalent.
Re:Read between the lines (Score:2)
Right on - right now it seems that the biggest threat to "world domination" is new forms of hardware that aren't solidly supported under Linux. You already have all-USB computers like the iMac and some Sonys, and there's more coming. There's the real possibility that by next year, there could be a ton of computers on the market that won't work with Linux 2.2.
They could take all the work in progress on USB, ISDN, Firewire, dynamic reconfiguration, plug-and-play, ACPI, and so on and get it into shape and justifiably call it version 2.4, even without all of the other big changes planned (SMP, ext3, etc.)
--
Re:No marketing savvy (Score:1)
Heheh. It sounds as if you were implying there is a problem with being at version 15. They must realize that they are going to be at version 15? Why? What's the relevance of being at version 15? *grin*
Alejo.
one problem here (Score:1)
This happens at least with 2.2.7-2.2.10. It is reproducible with one person whose connection is slirp-like, through a Digital Unix box, using Netscape. I don't know if it always happens with passive FTP. I do know that it does not happen with regular FTP (from other hosts) or HTTP from that same Digital Unix host.
Most likely a small steering committee (Score:1)
Word (Score:2)
Actually, Microsoft Word is one of their few products where the version numbering is fairly correct. Versions 1.0 through 5.1 were on the Mac, separate versions 1.0 and 2.0 on Windows with a different feature set, and with version 6.0 they merged the codebases. The only real problem was that version 7 (95) was more like 6.1, in that the only new feature was red squigglies.
Better examples might be Windows NT (first version = 3.1), Exchange (first version = 4.0, jumped to 5.0 for no good reason), MS SQL (where'd version 5.0 go?), MS Access (MIA versions 3.0 - 6.0), and so on.
Thank god we still have companies like Apple that will produce a version number like 8.7.
--
How bout this for marketing: (Score:1)
So kernel 2.2.10 => 2002010 !!!
or 2,002,010 if you think it's more readable.
Big enough, MS?
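Funny thing is, the kernel headers already flatten the version into a single integer, just not a marketing-sized one: KERNEL_VERSION(a,b,c) in linux/version.h packs it as (a<<16)+(b<<8)+c. A quick throwaway comparison of my own:

#include <stdio.h>

/* Same definition as the KERNEL_VERSION macro in <linux/version.h>. */
#define KERNEL_VERSION(a, b, c) (((a) << 16) + ((b) << 8) + (c))

int main(void)
{
    printf("kernel's own encoding: 2.2.10 -> %d\n", KERNEL_VERSION(2, 2, 10));     /* 131594 */
    printf("marketing encoding:    2.2.10 -> %d\n", 2 * 1000000 + 2 * 1000 + 10);  /* 2002010 */
    return 0;
}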
Re:Is ext3 still planned for Linux 2.4? (Score:1)
Re:Not sure (Score:1)
Not at all. According to the usual procedure, the goodies currently under development in the 2.3.xx tree will make it into 2.4, not into 2.2.xx (with few exceptions). Then guess what? MORE goodies will start to appear, in 2.5.xx.
Soooooo... the bottom line is: the sooner we see 2.4, the better.
Glibc upgrade not that hard (Score:1)
Good Thing (Score:2)
The biggest point that I agree on with Linus is that upgrading to 2.2 was a big deal for some people when it should not have been. Just look at how many distros drastically changed their products.
The only drawback I see is that bleeding-edgers will have to compile much more often, and that the already light-speed trend of kernel suffix escalation will only get worse. But is that really bad?
-Clump
Intel is... (Score:1)
Look at it this way: Linux is gonna support Merced a lot sooner than MS is.
Re:Merced? (Score:1)
"Done deal" -- Torvalds (Score:1)
-russ
Not sure (Score:3)
VA Research (Score:1)
~luge
VA's working on it (Score:1)
This is good. It's not irrelevant. (Score:1)
It is a good thing, considering there are many people stuck on 2.0.X, unwilling to switch to 2.2.X because of the big effort it takes:
Someone posted:
I believe this is exactly why they are going to change version numbers more often. Why bother maintaining 2.0.X?!
I understand your point of view; the effort involved in switching is big, and 2.2.X is not as stable as 2.0.X (or so it seems). But from another point of view, it's like saying: I am running 2.0.36, will someone release 2.0.36.1? I don't want to update to 2.0.37...
I know, there are no big differences between 2.0.36 and 2.0.37 so updating is not as big a pain as updating from 2.0.X to 2.2.X.
And that's what Linus is trying to fix, I suppose. He doesn't want to have people stuck with 2.2.X when he releases 2.4.X, so he's going to release stable kernels more often, hoping people will update their systems and distributions to stable kernels more often (but with less hassle).
Alejo.
Version 2.4 - Why not? (Score:2)
First - consider the open source development approach - it works well when there are a large number of people involved in the development and testing, and all the little dot releases and patches and pre-releases are part of this process. In the traditional "closed shop / source" model, the same often happens - but only those in the development team see it. Having the "stable / public" stream and the "(b)leading edge / development" stream helps. As others have said many times - "release early, release often". That way, we all benefit from each other's contributions, and can build on others' work sooner.
What is required, though, is a way to make kernel upgrades easier and simpler for the "average joe" - and I include myself here. I still have 2.0.36 on my machine at home - partially because personal circumstances at home have prevented me from doing too much - but I also have a sneaking suspicion that if I don't take extra care, I may trash something. Sure, if I spent the time reading and experimenting, I would have greater confidence. I upgraded my Win 95 to Win 98 in under an hour - and no damage was done (assuming that you don't class running Windoze as irreparable damage in the first place). Kernel upgrades should be fairly simple and painless activities - with a suitable "backout" capability. Maybe that is already there - but I haven't discovered it yet. (Yes, I know - RTFM.)
There is a fine line between showing that a product is under active support - with new features being added quickly, improvements to security and performance coming all the time, and these enhancements being in response to what the users of the system want and need (and not just some market-droid's idea of how to sell more stuff) - and having a product appear too experimental, with new versions released constantly without proper testing and quality control. I think that a major revision (2.0 => 2.2 => 2.4, etc.) every 9 to 12 months is about right.
Ken
Re:Sanity. Good. (Score:1)
v2.0->2.2 might have been a big enough increment to warrant a new major number, but that would have seemed evil, so it only got the minor. In effect we have the opposite problem: Linux version numbers are misleading because they are too timid.
Anyway a fast release cycle is the real fix.
it made sense when he said it... (Score:4)
according to linus, he was very unhappy with the "pain" that people went through upgrading from 1.2 to 2.0 and then from 2.0 to 2.2. he wants to avoid this as much as possible (also stating that people shouldn't "have" to upgrade, but realizing the unmitigated joy in doing so). also, there was the time issue: something like 1.5 years from 1.2 to 2.0 and then like 2.5 years for the next jump.
in the future, the plan as linus stated it was to implement fewer "major" changes between code freezes in order to get the newer stuff out there quicker. that's all.
Re:Definitely a good thing (Score:1)
Re:Mindcraft dictating Linux kernel development (Score:1)
Re:VA Research (Score:1)
Chris DiBona
VA Linux Systems.
--
Grant Chair, Linux Int.
VP, SVLUG
Re:No marketing savvy (Score:1)
Wow, you missed the Sun maneuver... They were going to introduce SunOS 5.7 (aka Solaris 2.7), but decided that they needed a bigger number; they changed the name to Solaris 7.
Re:Burnout (Score:1)
RTFM = read the freakin' manual (censored). By far the best source of information for newbies learning Linux is to read the man pages and HOW-TOs.
Point your fav search engine to the ESR 'Jargon' file. It explains RTFM and many other hackerisms.
Tom "Blitz" Conder - Programmer, SleepWalker
http://www.arkhamlab.com/
Re:This is good. It's not irrelevant. (Score:1)
Re:No marketing savvy (Score:1)
Windows 3.0 was pretty bad and very flaky, but compared to naked DOS, not bad.
Personally, I find the idea of date-stamping a product not completely nuts. I mean, if you know that a box is running Win95 but Office 2000, then you can figure that the OS may need updating.
dave
Thanks for the clarification! (Score:1)
That sounded much better coming from your mouth... er... fingers, than from ZDNet. Basically: put a few more freezes in, with the goal of getting a smaller set of big things working. Better and sooner.
The way ZDNet put it, it started to sound like commercial development. Which would suck (and be very unlike Linus and the whole kernel crew!)
I greatly respect Linus' ability to manage these things. I'd fold in 10 minutes :)
Re:No marketing savvy (Score:1)
Only distros and 'average joes' (Score:1)
All this means is that the stable kernel cycle time will be shorter, meaning less features per cycle, but distros will be able to incorporate these new changes sooner. I think this is a good thing.
A user who wants the newest and hottest kernel can just use that, like always. The kernel patches will probably be released at the same rate as always, so the 'bleeding-edge' user will have the same number of compiles as before.
Re:Not sure (Score:1)
It probably is a good idea. The development kernels are supposed to contain fundamental changes that are likely to do very bad things (TM, pat pend.). You don't want to do too many of those all at once or you'll likely miss a lot of bugs.
Bug fixes and stability issues are moving very fast in 2.2.x. 2.2.10 is a lot better sorted out than 2.0.10 was. By tearing up less code in 2.3.x, 2.4.x will probably have fewer introduced bugs to fix.
Re:Not sure (Score:1)
Re:Not sure (Score:2)
This will probably result in _more_ stable kernels, since instead of having to bugfix some huge amount of new features to get things stable (as was needed for the 2.1 series), there will be a more modest list.
No marketing savvy (Score:3)
And Microsoft? Their development environments will all be 7.0 soon (clearly more up-to-date than any of those Linux products). Why, that's why MS renamed NT 5.0 to Windows 2000! Why buy NT 5.0 when I can have Red Hat 6.0 -- it's a bigger number, man!
For the sake of marketing, I propose a MASSIVE jump in kernel major numbers. Then all the distros can fall in line.
This fall, Linux 2.4 should be released as Linux 2001.0.0. Red Hat, SuSE, and everyone else can release with the same version and clear up all this confusion. Dev kernels can use the same system as now (2001.1.0, etc.)
/* Someone will take me seriously. Watch. It happens every time. Bunch of literalist geeks.... */
Re:At this rate (Score:1)
I am pretty sure that the Debian 2.1 CD, at $2.49, makes it easier to install afresh than to do it all by hand. It has glibc 2.0.7, I think.
Still on 2.0.X (Score:2)
Question: Has anyone stepped up and said they will take over maintenance of 2.0.X if Alan and Linus don't want to anymore?
Re:No marketing savvy (Score:1)
The crazy thing is that people will think this way. They did it with Word/WordPerfect, and it will happen again.
Re:Sanity. Good. (Score:1)
And some hardware companies!!!
Christopher A. Bohn
Re:No marketing savvy (Score:1)
let's release Gnome as version 1.0, so then everyone will know it's ready for the desktop.
Re:Not sure (Score:1)
Not if it's as bad as 2.2.0 was.
That's my point. More x.EvenNumber.0 kernels means more really bad buggy kernels. Would you really want to run 2.0.0 or 2.2.0 for anything serious? Not to mention that 2.2.1-2.2.8 weren't so hot either.
Re:No marketing savvy (Score:1)
Funny, AutoCAD R9 worked wonderfully for me for the year or so I used it.
Re:No marketing savvy (Score:1)
There was a Windows 2.0 too
Which was, I think, the last version of Windows to ask if you wanted to install to HD or to floppy. I installed Win2.0 once :-)
dylan_-
--
Re:We should also thank the gcc/egcs developers (Score:1)