Linux Kernel 2.4 out by this Fall?

Skeezix writes "Linus says that he aims to make kernel upgrades more incremental instead of cramming tons of features into each major upgrade. The net result? The next major kernel upgrade might be out by Fall." I still haven't worked out all my kinks with 2.2 yet; at this rate, 2.4 will be out before I sort it out!
  • I think he was talking about the other reply :-)

  • I have a few friends that are still on 1.2.13 and have no plans to upgrade. For them there is no reason to. All of the devices they use are supported, all of the features they want are already there, and 1.2.13 has a much smaller memory footprint.

  • Posted by kenmcneil:

    In the article they quoted Linus as saying:

    I feel like going for another eight-and-a-half years.

    I'm sure this has been talked about before, but what would happen if Linus did hang it up? Do any of the other core kernel developers have what it takes to fill his shoes, or is it likely that the kernel would fork in different directions under different leadership? Or maybe it would move to a more democratic development model like Debian has. Just something to think about :^)

  • Windows 1.1 was definitely available separately from Excel. I ran it on my Laser Turbo XT. Well, I didn't run it often. It didn't recognise my mouse, was a pain to use with just a keyboard, and I didn't have any applications for it, but I did have it installed. I ran Geos most of the time.

    I actually owned two copies of Windows 1.1 (both given to me, so I don't know how much it cost), and darn but I wish I hadn't wiped those floppies and used them for other things. I'd love to install it again just as a conversation piece.
  • Who cares about Linux 2.4 when FreeBSD exists?
  • This is bunk. I do work for Sun, but anyone who has 2.7 can run uname -a and see for themselves: SunOS 5.7. The decision to call it Solaris 7 was purely for marketing.

  • Was there a version 1.0?
  • You got something to say about the Commodore 64?

    That's what I thought!

    No, really, in all honesty, Linux running as a command-line-only OS on a small device would not be that bad of an idea. It would definitely have to be streamlined to be effective, but command-line-only OS's have a lot of potential in a small amount of computing power. Think about it: a command-line-only version of Linux still gives you access to news groups, email, text editing (for notes), security, and compatibility with a full-blown operating system for easy file transfer. Assuming there were a specialized Linux made to run on these little guys, I think it would be great.

    The Palm Pilot example shows that you would need a graphical interface of some sort, but it would be very easy to do. Since Palm Pilot graphics are black and white anyway, it's no big deal and requires very little memory. I say go for it.


    By the way, I honestly believe the Commodore 64 could do a decent job of running Linux, provided its graphical and sound capabilities are limited; Ethernet is gonna have to be an engineering feat of its own. Still, you can operate modems just as fast with the C64 as you can with a PC, and if Linux can provide a TCP/IP stack for the Commy, then you will still have access to networking. Another engineering feat, but possible, would be to adapt a PCI bus to the Commy and use video cards from the PC. Quite a novel little project really. Good for a few Sundays when there is nothing else to do . . .
  • It seems to me that this is a good move for one simple reason: fewer changes. What I mean by that is that you don't have to sit and debate over all the changes in the jump to the next major rev (e.g., ipfwadm vs. ipchains, USB vs. broken-down USB). There may very well be many features rolled into the quantums we've been taking in the past, but more frequent releases add a level of granularity to the upgrades that has been absent in the past, afforded only by patches. I think this actually reflects a much more meaningful use of the minor rev numbers, which seems to be a common point of contention.
  • Once it is released, I am sure things will work at quite a furious pace. Cygnus has signed the NDA and will make the compiler for Merced. Linus has always said Linux can be ported to any platform that gcc can run on, so don't sweat it.
  • From An RT-Linux Manifesto (in Proceedings of the 5th Annual Linux Expo), page 195:
    6. What next and acknowledgements:
    Linus Torvalds once said that the RTLinux core would become integrated with the standard kernel in 2.3, but the availability of pre-patched kernels makes this a less pressing issue.
  • (see subject line)

    Chris Wareham
  • It was pure silliness -- inspired by the "version-number-as-marketing-tool" conditions you just described.

    I think the original "somebody already has" comment was actually directed at the AC who called the idea "absurd".

    I added the comment because people sometimes assume that slashdot posters are trying to contribute serious, well-thought-out insights and ideas to the forum. That would be OTHER posters, not me. :)
  • Alan would probably take over. As for a democratic development model, that might be good for some things, but I don't think it's a good idea for the kernel. Things designed by committee bloat at the drop of a hat.

    Think about it, how many of the programs of your distribution do you really use? The kernel isn't a place to play political games and democracy encourages compromise and horse trading.
  • I have not had any real problems with 2.2.9. My machine has been doing okay with it. Any problems that I have had were there in 2.0, but I found out they were BIOS related... first?
  • by Anonymous Coward
    I think this is what more users expect. I've encountered some confusion from folks on 2.2: "if it's so much better, why isn't it 3.0?"

    It would have been nice to have had a 2.4, 2.6, 2.8 somewhere in between.
  • I hope this doesn't lead to burnout. Linus does happen to have a life (i.e., wife, kids, job).

  • And did you explain to them that, unlike some software companies who shall remain nameless, we believe in actually progressing through version numbers as opposed to incrementing them at random for marketing reasons ;-)
  • You talk as if he's a mere mortal...


    Get your fresh, hot kernels right here []!

  • I really wouldn't expect to see too much difference once the new cycle takes effect. Whether it's called 2.2.17 or 2.4.0, you are still going to get basically the same feature set. Quicker kernel release cycles mean that major version increments will not necessarily tie to major feature injections.

    1.2 -> 2.0 Major changes
    2.0 -> 2.2 Major changes
    2.2 -> 2.4 Minor changes
    2.4 -> 2.6 Minor...

    I wouldn't look for anything huge until 3.0. But I suppose that could be a month away ;-)

  • Open Source: Release early. Release often.

    Much better than MicroSoft: Promise early. Release maybe. Patch occasionally.

    This is good news indeed.
  • At this rate, 2.4 may be out before I switch to 2.2. I was working on it, but I munged my system badly, trying to upgrade to glibc by hand. I've installed Debian this time, and am still playing with configuration. Don't have much time to work on it.
  • This means that there will be fewer "experimental" changes in each major release, much to the pleasure of us joe users.
  • Posted by unpHEAR:

    Who's gonna maintain Linux kernel development if Linus hangs it up? Alan Cox, for 3 imaginative reasons.

    1-) We always hear about kernel 2.2.x-ac...
    2-) He is a good looking guy :)
    3-) Linux will be called Lincux

    Here is my 2 cents...
    I am not a Geek
    It was my opinion

    Sorry for my English
  • -- Security Updates

    -- New devices (I do have USB, but nothing on it currently)

    -- Ensuring that 2.0.X users don't get forced to upgrade. (What? You want to run Gnome 2.0? That requires gtk1.4 which only support glibc 2.2.3 which requires Linux 2.6)
  • Piecing together one groundless rumour with another:

    • Expect something from Transmeta this fall (Current hardcopy issue of the Linux Journal)
    • Expect a new kernel this Fall ( ZDNet [])
    • Transmeta's first product will have something to do with portable DSP and Telephony ( TBTF [])
    • Expect to see Linux in telephones (Linus in the same ZDNet article [])

    Maybe Transmeta's first product is going to be something that'll make both the Palm Pilot and the Itsy look big, clunky, and outdated.

  • Check out the history of the clone() system call. I might be wrong about the specific patchlevel, but I know it was added sometime in the 1.3 series (the number 1.3.19 keeps coming to mind now).

    I have absolutely no idea about SMP support. Was primitive SMP available in 2.0.0?

  • You have to love this kind of thinking. To the general public, even for technical products, Microsoft has given up all pretense of having the version numbers actually mean anything.

    Visual Interdev jumped from 1.0 to 6.0. What happened to the other numbers?

    Still, when you get down to brass tacks, you still have proper version numbering, even from MS... "Oh, asp.dll leaks memory - you need asp.dll version to fix that!"

    Ugh :)

  • That is something I was wondering since the Mindcraft benchmark. Are there kernel-level threads?

    If there aren't up to now, and if we add them and modify some parts of the kernel to use them efficiently, this could be a considerable boost in SMP performance.

    Hope this will become true.

    Imagine Linux kicking NT's ass on high-end hardware (something like 16+ CPUs). That is one thing I would want to see (the sooner the better ;) but this isn't the most important thing either).

    just some of my dreams ;)
  • First, as others have pointed out, 2.4 will be the stable release of the 2.3 development kernel. Second, as no one else has pointed out, part of the kernel release process includes a feature freeze. There is a point during 2.3 development when Linus will stop accepting new features and only accept bug fixes. Essentially, what Linus is saying is that the feature freeze will occur sooner, with fewer new features to debug, resulting in more stable but less featureful(sp?) kernel releases. This is probably a good thing.
  • Just think -- if Windows 2000 had a two-digit date, then it'd report itself as Windows 00, a pre-alpha version =)
    Christopher A. Bohn

  • I say, "yes, sure", why not?

    Version 2.2 was out just before summer 1998, and 2.4 will be out before the end of the year?
    OK. He can say so, as usual, but we're talkin' about software, don't forget it.


  • There are several good reasons I can think of for having more regular, but less dramatic, major version releases.

    Maturity: the kernel has settled down quite a lot at 2.2, and hopefully there won't be as many major upheavals as from 2.0 to 2.2; it should be more about adding drivers and tweaking, rather than a redesign.

    Marketing: or more accurately, exposure. As the user market for Linux gets more and more widespread, the impact of a major change will cause greater ripples. Most users (and distributions) will only run "stable" releases, so it makes sense to take lots of incremental upgrades (little steps) to that population, rather than one big jump (and possible fall over).

    As the market for Linux increases, so does the potential inertia to change. Take libc5 to glibc, for example. The Linux bandwagon hadn't really started rolling then, but if it had before the change, I think it would have been a lot more hassle to get people to migrate, because of all the apps that would have been released for libc5 systems.

    As a result of this increasing inertia, I think companies will slowly drift from releasing bleeding-edge versions, "first with Linux v x.y!!", to releasing "burnt-in" versions. For the end users, that's good; for the developers, that's bad, as there are fewer people to feed back bugs. Hence, releasing stable versions of the kernel often will help alleviate this problem.

  • by copito ( 1846 )
    Don't forget that LinuxOS 4 is also Linux 1 and running uname on Linux 7 reports "LinuxOS 5.7".

    Sun's a great company and Solaris is a nice OS but the marketing people need to take a break.
  • That's possible. I hope that's how it turns out. What I'm afraid of is that as soon as the jump is made to 2.4.x, the work on 2.2.x will pretty much stop, except for fixing major security bugs (just like Alan Cox says that no more work will be done on 2.0.x unless some really major bug crops up). If you release 2.4.0 after 15 or so 2.2.x kernels, that means 2.2.x is the latest "stable" kernel for a lot shorter period of time, so you could conceivably have less testing and optimizations. The big bugs will still be fixed, but things may not be as smooth or well-optimized as with the 2.0.x series's long run.
  • And that's pronounced "Windows [pause] Oh-Oh!" (just like it is for earlier versions)
  • And 1.2.13 is much slower on anything newer than a 486, too. Can't beat that.
  • I am an average Joe and it makes me so mean :-P
  • Yep, funny; it looks like Linus plans to have more time around the fall, that may mean the hard, secret work is over by then :) Also, while he says nothing "big" about the release, he's sure of it; that could also mean support for the Transmeta chip, whatever it does, will be big news, but he doesn't want to say just now.

    OTOH, I don't think telephony is a "big thing"; it should be a fading business. The Neon (?) chip will IMHO be just a cheap one, which is way more efficient for special tasks (DSP, 3D acceleration) than current general-purpose CPUs. Hopefully able to run everything efficiently, but many doubt that :)

    Another lesser-known Transmeta leak is here [].

  • For instance, Debian 2.1 spent six months in freeze before release.
  • If you read between the lines, you'll see that Linus wants more projects like PCMCIA, isdn4linux, e2compr, etc., that develop separately (in parallel) from the kernel, and that go from proof-of-concept to kernel feature without being included in the development kernels until they have become stable and popular patches, at which point it should not take much time to integrate them into Linus' release.

    Such parallel development is necessary considering the number of people working in parallel on kernel-related issues. In the old days, things were tried out in the development kernels; nowadays people like Andrea, Rick, and the isdn4linux and PCMCIA groups make their patches, and then ask on the mailing lists for people to test them. Such things start as hacks, tryouts, or proofs-of-concept without hampering the development of any other part of the kernel. Once their patches, or sometimes just their ideas, have been tested by some people and proven to be good, then Linus may decide to put the patch, or sometimes just the basic concept behind it, in the kernel.

    You say 'unless everybody starts coding a lot faster', for which you assume that the number of people stays the same... which is not the case. There are a lot more people working on the kernel, or kernel-related projects, right now than there were at the beginning of 2.1.x, and especially compared to the old 1.3.x days.

    With the patch archives maturing (kernelnotes, the patches at rock-projects, kernel traffic, and others), and with the support of Alan's -ac patches, such parallel developments will still be available to many for testing and using, even before Linus decides to include them.

    No more 2+ years of waiting for new stabilizing kernels, it's going to be great!

  • by Anonymous Coward

    It's definitely a good thing, for a number of reasons.

    The first is that the new features will get "to market" more quickly. That means more people using them, which in turn means more bugs found, which in turn means they get working more quickly.

    A second, and important, reason in terms of World Domination(tm) is that it takes away a lot of the naysayers' ammunition. For example, for a long time they said "Linux has poor SMP support", which of course was true for 2.0.x but not for 2.1.x. Now they are going to say the same thing about USB, but given the rate of USB development on 2.3 right now, a 2.4 in the fall should have fully-functional USB.

    Finally, it should make each small upgrade much less painful, since they are more incremental.
  • That is an interesting take on it. I guess I assumed the opposite, that is that we might have fewer major feature changes per minor version than we do now. I mean, 2.0.x to 2.2.x was a humongous leap, and it seemed to me that releasing a minor every 6-12 months would maybe force better feature discipline.

    As far as bugfix / optimization, I would assume that, for example, 2.2.x + (some features) + (2.2.x bugfixes) + (2.3.x bugfixes) = 2.4.0, in the cycle envisioned. This means that the 2.4.0 codebase should not be too far away as far as, e.g., filesystem or network drivers are concerned, but maybe 2.4.0 has features X, Y, and Z that comprise the main diff between 2.2.x and 2.4.x. I think that it could, therefore, turn out to enhance the stability of the system as a whole by integrating useful features into the stable branch when they are ready, rather than basing the stable branch on certain features.

  • Seriously, especially since they're pushing WinNT vs Linux on WinNT-optimal machines as "the benchmark", we can just say:

    "Oh, but that was version 2.0. We're already at version 2.6, which is five times faster at multi-processor static page hurls and still won't crash like IIS does..."

    If it's got a higher version number, it must be better - Bill Gates law of increasing profits

    Will in Seattle
  • Here's a few things that were noted in a comparison of linux vs Solaris. I assume these are valuable traits (I am not a guru):

    Journalling File System
    Swap File System for /tmp
    Real Time Scheduling
    Intelligent Kernel Thread Scheduling (where threads of a process are spread among all CPUs)
    Kernel Asynchronous I/O (KAIO)
    Large file support (2GB)

    I know SGI is going to opensource a journalling filesystem. That could be in a module I suppose. The others are of various importance, but I suppose they should be considered.

    Any thoughts?
  • Well, there seems to be some misunderstanding about Sun's choice of versioning, which is actually symbolic of some things that they have been changing in the core OS. As you may (or more obviously, may not) recall, SunOS4.x was BSD based. Sun decided to change to a more SysV based style, so they decided to start calling it Solaris and change the versioning scheme to 2.x (SunOS being 1.x, I guess), but lots of things still come up as SunOS5.x on a Solaris box. Solaris 2.7 actually completed their changeover, so instead of 2.7, which symbolizes the changes, they called it Solaris 7, symbolizing the completion of the change. Of course, it is still Solaris 2.7 and SunOS 5.7, but it makes more sense than M$ Winblows 9x or NT or 2000.

    I do not work for Sun. I do not work for M$. This is hearsay ;).

  • -- Ensuring that 2.0.X users don't get forced to upgrade. (What? You want to run Gnome 2.0? That requires gtk1.4 which only support glibc 2.2.3 which requires Linux 2.6)

    Argh.. no. Glibc compiles on AIX, IRIX, etc, etc, etc.. Things like that have nothing to do with the kernel. (- generalization for sake of clarity.)

    The things that DO matter as far as 'upgrading' a kernel are exactly what the previous person mentioned, support for hardware and features. If you want really GOOD NFS support, you're 'forced' to upgrade. Some programs might use, for instance, features of a newer kernel, (the new procfs comes to mind) but that is hardly a forced upgrade.

    So, you make a choice. Live with your old kernel, or learn to use methods other than rpm to install software. No one's forcing you to do anything.

    And, hell, if you don't like it, learn C.

  • As someone noted in a different thread to this discussion, you assume that the number of kernel developers remains static.

    Also, you assume that all kernel development is done by "kernel developers". But don't some packages run as kernel modules (for example, the lmsensors package) and yet are developed by people who are not improving the main Linux kernel? I would think that it might help if features such as these were merged into the main Linux kernel. Especially features like those of lmsensors (hardware health monitoring), which would be so useful.

    Also, Alan Cox seems to put out an awful lot of patches quickly (I guess that the -ac patches are actually put together by a group of people that Alan supervises or organizes). While Linus can't mandate that developers work faster, as these developers learn more about the kernel, and how it operates, they can work more efficiently and thus produce more code/features/whatever-measurement-unit-you-want.
  • Sure, I agree with all that. I'm not sure I see the point.

    What I was saying is that Linus' decision will not change the development rate, just how often he decides to stop and create a stable release. Of course the development rate may change for the plus or minus, but this announcement isn't going to be a direct cause (contrariwise to what the original post was about).

    So kernel development is probably going to continue similarly to how it has - blisteringly fast. The kernel patches will still be produced at the speed of Free Software, but now distributions will be able to take advantage of them more often. That's all.
  • He was talking about NT. The first version was NT 3.1, supposedly to be parallel to the regular Windows version numbers. That didn't last long!

  • I remember Solaris uname would return SunOS 5. So if Solaris 2.6 is "SunOS 6", then Solaris 2.7 must be "SunOS 7"??

    Anyways, "Solaris Seven" sounds cooler than "Solaris Two Point Seven".
  • I read that Linus wanted "at least a year" of testing with a brand new ext3 FS before it became the next official Linux FS. If Linux 2.4 is to be released in less than one year, how could "untested" ext3 still be part of that release? One of these things will have to surrender.
  • If you really look at Solaris 7 you will see that it is really Solaris 2.7, they didn't just decide to make such a leap to go from 2.6 to 7.
  • > I'm assuming you're speaking of Windows 3.1

    Actually, I think he was referring to Windows NT, which started out at 3.1... of course, maybe they did that because it was based on the OS/2 codebase, and OS/2 already had a 2.x by then... ;)
  • Another good one from MS is from Word 3.0 to Word 95. If you look at the headers there is a string that says Word 6. And one of the patches for Word 97 changed a string in the headers from Word 7 to Word 8. Funny how a patch can constitute a whole new release number in the header. :) or perhaps MS's release numbers in Word are for tracking when they released something to fix the "bug" of WordPerfect being able to convert the files over?
    Just a thought.
  • The plan is to release security upgrades if needed.

    For new devices, it's possible for third parties to provide drivers (however, USB may be another matter, requiring too much effort for all but the most senior kernel gurus to backport).

    Userland programs don't care about the kernel, and it is never true that the source for something like gtk would need a particular C library (though someone's pre-installed binary might). Newer glibc's will run on older kernels, as well as non-Linux kernels; you can run Gnome on BSD or Unix systems.

    If you let yourself fall way behind, it may mean that you have to build some software from source, rather than just install from an RPM or the 21st century equivalent.

  • If you read between the lines...

    Right on - right now it seems that the biggest threat to "world domination" is new forms of hardware that aren't solidly supported under Linux. You already have all-USB computers like the iMac and some Sonys, and there's more coming. There's a real possibility that by next year, there could be a ton of computers on the market that won't work with Linux 2.2.

    They could take all the work in progress on USB, ISDN, Firewire, dynamic reconfiguration, plug-and-play, ACPI, and so on and get it into shape and justifiably call it version 2.4, even without all of the other big changes planned (SMP, ext3, etc.)
  • Redhat seems to like the 5.0->5.1->5.2->etc->6.0 progression without going to minor numbers. That's their prerogative. However, they must realise that they are going to be on version 15 in 2 or so years. That's what I find silly about the whole notion.

    Heheh. Sounds as if you were implying there is a problem with being version 15. They must realize that they are going to be version 15? Why? What's so relevant about being version 15? *grin*

  • I have a modem/ppp connection, and sometimes when somebody is getting a file from my computer through ftp, my network comes to a total standstill (except for the ftp transfer in progress). I can't even ping other hosts anymore.

    This happens at least with 2.2.7-2.2.10. It is reproducible with one person whose connection is slirp-like through a Digital Unix box, using Netscape. I don't know if it always happens with passive ftp. I do know that it does not happen with regular ftp (from other hosts) or http from that same Digital Unix host.
  • A committee with Alan Cox, David Miller, and 3-5 other core kernel hackers. Just like most of the other big and successful open source projects (egcs, glibc, etc).

  • Actually Microsoft Word is one of their few products where the version numbering is fairly correct. Versions 1.0 through 5.1 were on the Mac, separate versions 1.0 and 2.0 on Windows with a different feature set, and with version 6.0 they merged the codebase. The only real problem was that version 7 (95) was more like 6.1, in that the only new feature was red squigglies.

    Better examples might be Windows NT (first version = 3.1), Exchange (first version = 4.0, jumped to 5.0 for no good reason), MS SQL (where'd version 5.0 go?), MS Access (MIA versions 3.0 - 6.0), and so on.

    Thank god we still have companies like Apple that will produce a version number like 8.7.
  • Just get rid of the '.'s and make the lower numbers three digit!

    So kernel 2.2.10 => 2002010 !!! :)

    or 2,002,010 if you think it's more readable.

    Big enough, MS?
  • I think the keyword here is "official". Even when ext3 becomes the accepted "standard" for Linux that ext2 is right now, there will probably still be some people who hold back and use ext2 for a while (much like 2.0.x kernels, and libc5). I can see a new kernel supporting it, but not having it on by default until its probation is up. Hell, some of the fs's don't even support writing yet, and they're in there.
  • All you're doing is taking what would've been called 2.2.25 and calling it 2.4.0

    Not at all. According to the usual procedure, the goodies currently under development in the 2.3.xx tree will make it into 2.4, not into 2.2.xx (with few exceptions). Then guess what? MORE goodies will start to appear, in 2.5.xx.

    Soooooo... the bottom line is: the sooner we see 2.4, the better.
  • I've been going for a while now on a (very) old Slackware distribution, and I didn't have much trouble upgrading to glibc. As long as you read the installation instructions, and trust that the person who wrote them knows what they're doing, you'll probably get through it fine. (Then again, maybe I just had a very similar system to the person who wrote the INSTALL file...) The only major trouble was when I went to a glibc XFree86, and the install erased my libc5 X libraries....
  • When it's all said and done, I believe this to be a good thing. With the recent talk of the Mindcraft benchmarks, Linux will be able to mature faster and get fixes and improvements in quicker.

    The biggest point I agree on with Linus is that upgrading to 2.2 was a big deal for some people when it should not have been. Just look at how many distros drastically changed their products.

    The only drawback I see is that bleeding-edgers will have to compile much more often, and that the already light-speed trend of kernel suffix escalation will only get worse. But is that really bad?
  • As part of their investment in Red Hat a while back, I remember one of the side-deals was that Intel was either going to work on or fund an effort to have Linux 64-bit on Merced as soon as it comes out, which shouldn't be much of a problem since we're already 64-bit on Alpha.
    Look at it this way: Linux is gonna support Merced a lot sooner than MS is.
  • I believe it has been demonstrated to run on one of the Merced simulators at HP. I'm sure the Merced port will be available and more stable before Monterey takes off. Let's just hope the IA64 chipsets will be cost-effective, or the whole platform will die.
  • Linus said that it was a "done deal". A friend of mine who works on support chipsets told me that it's already running on the Merced simulator. I think that's why our fearful leader can say that with such confidence.
  • by Trepidity ( 597 ) <delirium-slashdot.hackish@org> on Wednesday June 16, 1999 @10:13AM (#1847695)
    I'm not sure how good of an idea this is. Unless everybody starts coding a lot faster, the development is unlikely to speed up. All you're doing is taking what would've been called 2.2.25 and calling it 2.4.0 instead. However, this fiddling with the version numbers could lead to a decrease in overall quality, as moving from 2.2.x to 2.4.x will tend to tempt people to add more features, while keeping in the 2.2.x series leads to a bunch of bugfixes and optimizations, but few features added (i.e. 2.0.37 doesn't have that many more features than 2.0.0, but it is a lot more stable). More new features added and a shorter bugfix/optimization period leads to a more full-featured but less stable kernel. I suppose this is both a good and bad thing.
  • VA has a team of five working on the Merced port (mainly kernel, IIRC), with support from Intel. The EGCS team (through Cygnus) is working on the compiler, again with Intel support (since an NDA is required to get any access to emulators, and until recently, specs). It'll be done on time, don't worry.
  • This press release [] from March says that VA Research is working on the port and that "VA Research will deliver the optimized port, in synch with Merced-based system availability in mid 2000." Apparently, HP, SGI, Caldera, Debian, RedHat and SuSE are all working on it as well.
  • Some people have pointed out that whatever they call the versions is irrelevant. It is not, since odd version numbers such as 2.1.X or 2.3.X are experimental, while even version numbers such as 2.0.X or 2.2.X are stable code.

    It is a good thing, considering there are many people stuck on 2.0.X, unwilling to switch to 2.2.X because of the big effort it takes:

    Someone posted:
    Question: Has anyone stepped up and said they will take over maintenance of 2.0.X if Alan and Linus don't want to anymore?

    I believe this is exactly why they are going to change version numbers more often. Why bother maintaining 2.0.X?!

    I understand your point of view: the effort involved in switching is big, and 2.2.X is not as stable as 2.0.X (or so it seems). But from another point of view, it's like saying: I am running 2.0.36 and I don't want to update to 2.0.37; will someone keep releasing fixes for 2.0.36?

    I know, there are no big differences between 2.0.36 and 2.0.37 so updating is not as big a pain as updating from 2.0.X to 2.2.X.

    And that's what Linus is trying to fix, I suppose. He doesn't want to have people stuck with 2.2.X when he releases 2.4.X, so he's going to release stable kernels more often, hoping people will update their systems and distributions to stable kernels more often (but with less hassle).
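
    The even/odd convention described above is simple enough to express in code. Here's a minimal, hypothetical Python sketch (the helper name is mine, not part of any actual kernel tooling):

    ```python
    def is_stable(version: str) -> bool:
        """Under the historical Linux convention, an even minor
        version number (2.0.x, 2.2.x) marks a stable series and
        an odd one (2.1.x, 2.3.x) marks a development series."""
        minor = int(version.split(".")[1])
        return minor % 2 == 0

    # Examples matching the series discussed above:
    print(is_stable("2.2.10"))  # True  (stable series)
    print(is_stable("2.3.6"))   # False (development series)
    ```

    Note this only encodes the numbering convention; it says nothing about how stable a given release actually is.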

  • Well, I think all things considered, this is *a good thing*. A couple of comments, though.

    First - consider the open source development approach - it works well when there are a large number of people involved in the development and testing, and all the little dot releases and patches and pre releases are part of this process. In the traditional "closed shop / source" model, the same often happens - but only those in the development team sees it. Having the "stable / public" stream, and the "(b)leading edge / development" stream helps. As others have said many times - "release early, release often". That way, we all benefit from each others contributions, and can build on other's work sooner.

    What is required, though, is a way to make kernel upgrades easier and simpler for the "average joe" - and I include myself here. I still have 2.0.36 on my machine at home - partially because personal circumstances at home have prevented me from doing too much - but I also have a sneaking suspicion that if I don't take extra caution, I may trash something. Sure, if I spent the time reading and experimenting, I would have greater confidence. I upgraded my Win 95 to Win 98 in under an hour - and no damage was done (assuming that you don't class running Windoze as irreparable damage in the first place). Kernel upgrades should be fairly simple and painless activities - with a suitable "backout" capability. Maybe that is already there - but I haven't discovered it yet. (Yes, I know - RTFM.)

    There is a fine line between showing that a product is under active support - with new features added quickly, improvements to security and performance coming all the time, and these enhancements responding to what the users of the system actually want and need (not just some market-droid's idea of how to sell more stuff) - and having a product appear too experimental, with new versions released constantly without proper testing and quality control. I think that a major revision (2.0 => 2.2 => 2.4, etc.) every 9 to 12 months is about right.

  • I think the problem is that even though v2.0->2.2 might have been a big enough increment to warrant a new major number, that would have seemed evil, so it only got the minor. In effect we have the opposite problem: Linux version numbers are misleading because they are too timid.

    Anyway, a fast release cycle is the real fix.
  • by peterjm ( 1865 ) on Wednesday June 16, 1999 @02:04PM (#1847703)
    the article doesn't really go into this, so (without trying to show off the fact that I was there) I will.
    according to linus, he was very unhappy with the "pain" that people went through upgrading from 1.2-2.0 and then from 2.0-2.2. he wants to avoid this as much as possible (also stating that people shouldn't "have" to upgrade, but realizing the unmitigated joy in doing so). also, there was the time issue: something like 1.5 years from 1.2-2.0 and then like 2.5 years for the next jump.
    in the future, the plan as linus stated it was to implement fewer "major" changes between code-freezes in order to get the newer stuff out there quicker. that's all.
  • USB is the key here. As more iMac-alikes come out (i.e. expansion via USB only), Linux is in danger of becoming superfluous if there isn't good USB support in a stable kernel release.
  • A unified buffer cache is great for the whole range of linux systems.
  • Yep, we have the merced simulator here as well.

    Chris DiBona
    VA Linux Systems.
    Grant Chair, Linux Int.

  • Why buy NT 5.0 when I can have Red Hat 6.0 -- it's a bigger number, man!

    Wow, you missed the Sun maneuver... They were going to introduce SunOS 5.7 (aka Solaris 2.7), but decided that they needed a bigger number; they changed the name to Solaris 7.

  • Linus has been working on Linux for close to ten years. If he hasn't burned out yet... Actually it is fortunate that he still finds it interesting.

    RTFM = read the freakin' manual (censored). By far the best source of information for newbies learning Linux is to read the man pages and HOW-TOs.

    Point your fav search engine to the ESR 'Jargon' file. It explains RTFM and many other hackerisms.

    Tom "Blitz" Conder - Programmer, SleepWalker
  • I agree that it's a good thing that kernels won't be such huge steps, but there's another problem too. Those people sticking with 2.0.x right now are going to have to leap even higher, because they'll have to clear the 2.2.x step as well (not to mention those poor bastards running 1.2.x...). Not only will they have to do all the upgrades that 2.2.x needs, but 2.4.x will probably need its own stuff too. Then if 2.6.x comes out, they'll get farther and farther behind. This is one of the reasons I don't agree with staying on 2.0.x. Chances are you'll have to upgrade eventually because you'll get a new ethernet card or somesuch. Might as well take things one step at a time.
  • Windows 1.0 and 2.x were the runtime environments which came with early versions of Excel, IIRC.
    Windows 3.0 was pretty bad and very flaky, but compared to naked DOS, not bad.
    Personally, I find the idea of date-stamping a product not completely nuts. I mean, if you know that a box is running Win95 but Office 2000, then you can figure that the OS may need updating.

  • That sounded much better coming from your mouth... er... fingers, than ZDNet. Basically, put a few more freezes in, with the goal of getting a smaller set of big things working. Better and sooner.

    The way ZDNet put it, it started to sound like a commercial development. Which would suck (and be very unlike Linus and the whole kernel crew!)

    I greatly respect Linus' ability to manage these things. I'd fold in 10 minutes :)

  • There was a Windows 2.0 too.
  • I mean, Linux _development_ isn't going to go any faster. Linus can't mandate that all the developers start working harder.

    All this means is that the stable kernel cycle time will be shorter, meaning fewer features per cycle, but distros will be able to incorporate these new changes sooner. I think this is a good thing.

    A user who wants the newest and hottest kernel can just use that, like always. The kernel patches will probably be released at the same rate as always, so the 'bleeding-edge' user will have the same number of compiles as before.
  • It probably is a good idea. The development kernels are supposed to contain fundamental changes that are likely to do very bad things (TM, pat pend.). You don't want to do too many of those all at once or you'll likely miss a lot of bugs.

    Bug fixes and stability issues are moving very fast in 2.2.x. 2.2.10 is a lot better sorted out than 2.0.10 was. By tearing up less code in 2.3.x, 2.4.x will probably have fewer introduced bugs to fix.

  • Well, the kernel code has "forked", basically. From the 2.2.x code came 2.3.x. Updates to 2.2.x will not contain new features, only bugfixes etc. The new features are tested out in the 2.3.x series (where stability is not much of an issue -- until the jump to 2.4.x approaches).
  • The idea is to take the long cycle of 'development' kernels and chop it up into smaller bite-size increments; i.e., instead of having over 120 kernel releases in the 2.1 series, have far fewer in 2.3. Then, like always, the development kernels will be frozen and stabilized into the next stable kernel release, 2.4. 2.4 will _not_ be the same as '2.2.25'; it will be '2.3.40' or whatever -- the 2.2 series and the 2.3 series are divergent, remember.

    This will probably result in _more_ stable kernels, since instead of having to bugfix some huge amount of new features to get it stable (like was needed for 2.1 series), there will be a more modest list.

  • by DonkPunch ( 30957 ) on Wednesday June 16, 1999 @10:29AM (#1847724) Homepage Journal
    Is Linus nuts? It's 1999 and he's still only releasing version 2.x? Look around, man! The Red Hat distro is up to version 6.0. SuSE is up to 6.1 (clearly, it's more up-to-date).

    And Microsoft? Their development environments will all be 7.0 soon (clearly more up-to-date than any of those Linux products). Why, that's why MS renamed NT 5.0 to Windows 2000! Why buy NT 5.0 when I can have Red Hat 6.0 -- it's a bigger number, man!

    For the sake of marketing, I propose a MASSIVE jump in kernel major numbers. Then all the distros can fall in line.

    This fall, Linux 2.4 should be released as Linux 2001.0.0. Red Hat, SuSE, and everyone else can release with the same version and clear up all this confusion. Dev kernels can use the same system as now (2001.1.0, etc.)

    /* Someone will take me seriously. Watch. It happens every time. Bunch of literalist geeks.... */
  • I considered doing this too. But I balked, and bought the Cheapbytes Redhat 6.0 and let it do all that for me. It was really easy.

    I am pretty sure that the Debian 2.1 CD, at $2.49, is easier to install afresh than doing it all by hand. It has glibc 2.0.7, I think.
  • I'm still on 2.0.X and don't have any plans to upgrade. Things are running well for me and all of my devices are supported.

    Question: Has anyone stepped up and said they will take over maintenance of 2.0.X if Alan and Linus don't want to anymore?
  • Is Linus nuts? It's 1999 and he's still only releasing version 2.x? Look around, man! The Red Hat distro is up to version 6.0. SuSE is up to 6.1 (clearly, it's more up-to-date).

    The crazy thing is that people will think this way.. They did it with Word/Wordperfect, and it will happen again.
  • ...unlike some software companies...
    And some hardware companies !!!
    Christopher A. Bohn
  • Hey, that's a great idea. While we're at it,
    let's release Gnome as version 1.0, so then
    everyone will know it's ready for the desktop.
  • Soooooo... the bottom line is: the sooner we see 2.4, the better.

    Not if it's as bad as 2.2.0 was.

    That's my point. More x.EvenNumber.0 kernels means more really bad buggy kernels. Would you really want to run 2.0.0 or 2.2.0 for anything serious? Not to mention that 2.2.1-2.2.8 weren't so hot either.
  • (This disregards the big leap from 2.x to 10, of course. Everyone already knows that 3 through 9 are all unlucky.)

    Funny, AutoCAD R9 worked wonderfully for me for the year or so I used it.

  • There was a Windows 2.0 too

    Which was, I think, the last version of Windows to ask if you wanted to install to HD or to floppy. I installed Win2.0 once :-)



  • They don't. Linux still uses partially invalid asm code which used to work with gcc 2.7.2 but fails with a stricter compiler such as egcs 1.1.x. The egcs team has decided to _not_ support the broken code from Linux.
