Kernel Traffic #64 And The 2.4 Kernel TODO

sohp writes: "Kernel Traffic #64 highlights Alan Cox's summary writeup of the things remaining before 2.4. It's quite a long list -- I'm not holding my breath for 2.4." Kernel Traffic is a pretty cool overview if you can't handle the burden of actually subscribing to the list itself...
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    Wrong. KDE2 development is encountering serious obstacles with the slooow startup of any app. They have a handle on it, but it will take some time to apply the fixes to all the apps and "parts". Other than that, it's in great shape and looks (visually) exciting. I predict that within 6 weeks they will have a beta which includes only the base system and KOffice. The rest will follow incrementally through the summer and fall.

    GNOME will be first. GNOME always releases early and releases buggy. But I hope they release a new file manager to replace GMC as planned, soon. When the hell is gtk going to develop a text widget, as a standard part of its lib, that has a horizontal scroll bar and lets you turn line wrap on or off? I hate those ugly return symbols and forced line wrap -- yuck! (Qt came out with such a widget as a standard part of Qt 2.1, which sure beats its old widget, which didn't allow line wrap at all without hard line feeds.)

    Debian will always be last. Whatever they release as current will be at least a year behind the times. Hell, I would be using Debian now except that "stable" Debian uses an ancient version of X that doesn't support my video card! I curse them still, after preparing some dozen floppies and making the discovery.

    Mozilla will have an EXCELLENT beta for Linux within two months. I think they got the message. The problem will be that the official "Netscape"-branded versions included in most distros will have all the forced AOL crap, with no easy way to remove it.
  • by Anonymous Coward
    Some fool made a comment that Linux should sniff for a windoze registry, and use this to set up settings for a Linux install. This has some merit, I think, but MS should also allow multi-boot like lilo. I cracked up when w1900 went backwards. This intolerance to other OS's, including MS's own, ... anticompetitive. With perl and grep it should not be too hard, and easy to fix when MS breaks things over.
  • by Anonymous Coward
    Ever heard of the killall command? :)
  • by Anonymous Coward
    Sorry to beg still more details--is this fstab entry necessary or advisable for any 2.3 series kernel? I've been running one 2.3.4something with no such line. ((yikes!)) thx
  • 'Til Linus himself adds a journalled fs, it doesn't help at all. If it didn't make it into the canonical tree, it's a fair assumption that adding it causes stability and/or security problems, and in any event the combination hasn't received enough testing to trust.
  • by Anonymous Coward
    The text widget you ask for is already in the tree. It looks like it will do everything you could possibly want in a text widget. I don't know how speedy it is yet, though.
  • Injurious?

    Al is currently rewriting half of vfs because of the devfs integration, mind you. Which tends to make 2.3.99-pre* badly unstable (as in, 2-3 weeks ago half of the time you couldn't unmount some of the filesystems). This is going to push back 2.4 even more...

    OG.
  • My money is on PHP4, it's currently up to a Release Candidate. The Gimp team is in bug fix mode mostly too.

    Anyone know what the story is with Apache?

    --
    Simon
  • I said it once, got marked as a troll by some orthodox zealot debianite who didn't understand his own religion, and I'll say it again.

    Debian uses words we use every day but with different meanings.

    ex...

    1) Free: Doesn't violate our Guild Socialism
    2) Freedom: Enforcing our Guild Socialism on others
    3) Stable: So old that everyone is bored with developing on it, so it doesn't change much

    Now, I'm not bashing my favorite distribution. I'm just translating for others to understand. It actually makes sense that when new major releases come out, it's time to lock off the old stuff as stable and start working on the new distribution of new stuff.

    ^~~^~^^~~^~^~^~^^~^^~^~^~~^
  • Until you realize (too late) that on some Unix systems killall will ignore any arguments and kill all processes owned by the running user. Don't try this at work as root.
  • >And the give me one good reason why someone who buys a zip drive now
    >would want the parallel version when he can have the faster, more
    >reliable USB version.

    How about compatibilty with older hardware?
  • >a do nothing product? have you looked recently at how many new
    >printers, scanners, mice, joysticks, external storage, MP3 players,
    >etc etc etc...are all using USB?
    >until linux has solid USB support, we're missing out on a lot of cool
    >new devices.

    Really? I have an external zip drive that plugs into my parallel port. Give *ONE* good reason why someone who has such a drive or any other device should trash it and run out and buy the USB version of it?
  • actually it does...
    you can also try this
    ps auxgw | grep netscape | grep -v grep | awk '{print $2}' | xargs -n1 kill -9

    this little beaut is called netpoop
    =)

    -Peter
  • When enough bugs get fixed to consider it usable.
    There is *nothing* stopping you from using it right now. It's out there in fact. As 2.3.99pre666-1-5-6ac3 :)

    If you find a bug, you can even help get it out faster. There isn't really much hype, though. USB is a do-nothing product right now, and DRM is about the only thing I see as useful for myself.

    It just takes time to get things bugfree'er.

    Give it 3 months I think.

  • closer to two years between 2.0 and 2.2,

    and even then, 2.2 wasn't quite where it should have been until around 2.2.5
  • Um... try looking up killall. Be careful though... SysV systems like Solaris have killall's that behave quite differently. :P
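To make that portability trap concrete, a sketch of a kill-by-name that avoids killall entirely (assuming a BSD-style ps, as on Linux; the bracketed first letter keeps grep from matching its own command line):

```shell
# Linux's killall (from psmisc) kills processes BY NAME:
#     killall netscape
# but Solaris's /usr/sbin/killall kills (nearly) EVERY process it can --
# it's part of the shutdown sequence. Never assume the Linux behaviour
# on a SysV box, especially as root.
#
# A more portable kill-by-name using only ps/grep/awk:
ps ax | grep '[n]etscape' | awk '{print $1}' | xargs kill 2>/dev/null
```

The `2>/dev/null` just hides xargs' complaint when no matching process exists.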
  • Actually, it goes a little deeper than that. Linus often decides to change X so that it is better/cleaner/different, knowing full well that Y, Z, Q, A, B and an unknown number of other things will break and need to be fixed.

    This is the basic deal with the Linux kernel -- nobody is guaranteeing internal compatibility, especially at the expense of better code. On the other hand, a commercial vendor such as Microsoft would implement a new X, but at the same time keep the old X working. This is why Windows 98 can do mysterious things like run SCSI drivers from 1986.
    --
  • Netscape can never run multiple instances. 'killall' is just a giant awk/perl script, depending on how old it is. One of the downsides of 'killall' is that it only goes on the last part of the binary name. So if you've got netscape running under the name communicator-4.72.bin, with 'netscape' only in the path, then 'killall netscape' won't nuke it.

    All the 'Kill Netscape' progs on freshmeat.net look at the whole path, so they can grab anything that contains a certain string.

    But yes, the script isn't exactly robust. It actually was a one-liner, but I cleaned it up for readability.

    In reality, this would work as well:



    #!/bin/sh

    kill -9 `ps aux | grep "$1" | grep "\`whoami\`" | grep -v grep | head -1 | awk '{print $2}'`



    And this one will get anything owned by you, instead... plus it doesn't try to kill itself. Yay for Bourne Shell.
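The name-versus-full-path distinction described above is also what pkill's -f flag is for -- a sketch, assuming a procps with pkill installed (the flag comes from Solaris 7's pkill):

```shell
# killall matches only the process NAME (the last path component):
killall netscape 2>/dev/null      # misses communicator-4.72.bin
# pkill -f matches against the WHOLE command line instead:
pkill -f netscape 2>/dev/null     # catches communicator-4.72.bin too
```

Both commands simply exit nonzero (quietly, thanks to `2>/dev/null`) when nothing matches.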

  • This is one of those things that shows that Linus most definitely does have a clue.

    Except his solution to this problem is pretty clue--. He suggests "running something like 'tar' on shutdown and boot" to handle devfs permissions. When multiple people thought the idea was totally stupid, he replied that running a startup and shutdown script wouldn't be all that bad...

    Obviously, Linus never took catastrophic failure into consideration.

  • um, "hype"?

    are periodic progress reports "hype"?

    that is a truly weird statement you've made. I don't call frequent press releases and talking to the press about how Fun and Easy the next Windows will be, with very few specific details anywhere, anywhere NEAR equivalent to the steady stream of periodic, sober, low-key descriptions of features and implementation details about 2.4 that have poured out from the people who make the kernel. Hype, in my mind, is essentially attempting to rile up the future end user and make the end user want whatever is being hyped. Whereas practically everything I've seen written about Linux 2.4 isn't even intended to be READ by end users; it's just stuff by kernel developers, for kernel developers.

    if you're tired of hearing about the progress of Linux 2.4, then just don't read things written about it until it's released... sheesh.
  • These are important pieces of functionality that worked once. Why were checkins/merges allowed that broke these?

    The cache (among other stuff) was changed so it would be way faster and use about half the memory. Good, eh? All filesystems then needed an overhaul for this to work.

    Wasn't the code reviewed? Didn't anyone test for these?

    It is being reviewed right now. All software manufacturers do this: change something, and a lot of stuff is temporarily broken until fixed/adapted. A developer typically works on only a few files at a time. Of course, no release takes place during the broken period. The same happens with Linux, with the difference that you may indeed take a look at the code in its "broken state", because you too may do development if you want to help out. The code is not released, though; it is merely available. Release happens when they roll out 2.4.0.
  • that was much too long to be a "classic" me-too post; also, your post even contained a small tidbit that may be interesting to others. You should be more careful when labeling your posts "classic"...

    =P

  • try out killall program_name. If you don't like that, then write yourself a shell script or something to do what you want. Not to mention that this would not be a kernel thing.

  • well..if you hack the kernel with the best of them...sleep is optional anyway. :)

    'Sleep? Isn't that a totally inadequate substitute for caffeine?'
    - Sorry, I don't know where this one's from. Anyone else know?
  • Ooo... that IS good.

    Personally, I'd bet on either the kernel, KDE2 (it rocks, BTW - been using it for a month and a half, and it's getting better every day!), or Mozilla.
  • I don't know exactly when the change went in. But if it's there, you want it. If you need it and don't have it, you'll know, because X will be super slow (MIT-SHM obviously relies on SHM working).
  • As usual, all of those latest packages will release *before* Debian 2.2 (which is frozen) does, and only after that will Debian 2.2 release, to ensure that we, the Debian users, are always running previous versions of everything! :p
  • I'd bet on GNOME 1.2, a.k.a. "April GNOME".

    There's just a few days left of April, and even if it doesn't make it (and those chances are good) it's damn near... ;)

  • Don't forget to put an entry into fstab for shm (shared memory)
    The mount point for that filesystem should be /var/shm. I had a REALLY hard time finding that out when I upgraded; maybe I was looking in the wrong places.
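For reference, the sort of fstab line being described -- a sketch only, since the mount point varied between kernel revisions (/var/shm in this era, /dev/shm in later kernels), so check Documentation/Changes for your version:

```
# shared-memory filesystem, 2.3.99-era layout
none    /var/shm    shm    defaults    0 0
```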
    we, the Debian users, are always running previous versions of everything

    Not all Debian users are always running old versions! I don't see why everyone assumes we're all running Debian stable. No one I know uses it. I've been running "unstable" since I first got into Debian, and I've never had any problems with it. As long as you're cautious about when you update, you'll never be bitten by the big changes. The name is misleading; all it means is that things change, whereas nothing changes in stable except for security fixes.

    The only reason I can see to run the released version of Debian is if you have to pay by the minute for local phone calls. Then downloading all the update packages could get quite expensive. This problem wouldn't exist if people would put unstable on CD...

  • alias killnet='killall -9 netscape;rm ~/.netscape/lock'

    A handy alias I find. Just waiting for Mozilla to get a tiny bit better.
  • It's really sad to see so many items on the list that indicate regressions caused by earlier checkins/merges. For example:

    >msync fails on NFS
    >UMSDOS was broken by the fs changes
    >Restore O_SYNC functionality

    These are important pieces of functionality that worked once. Why were checkins/merges allowed that broke these? Wasn't the code reviewed? Didn't anyone test for these? Say all you want about "open source is better" and "debugging is parallel" but these sorts of things would never have slipped through the checkin/review process in any decent OS group (I should know, I've been in a few).

    The msync and O_SYNC bugs would have shown up using any number of public-domain standard tests, and the people who broke them should never have forwarded the code to _anyone_ else with bugs like these still present. That's basic "software hygiene", and failing to require even that much just gives the whole Linux community a black eye. Why are we doing MS's PR folks' job for them?
  • Amen brother.

    As a test engineer who has seen products released
    way before they were done I have only respect for
    developers who won't release until the shit is done.
  • Disabling Java will dramatically reduce the crashes.
  • well..if you hack the kernel with the best of them...sleep is optional anyway. :)
  • I've got my money on KDE2.0.

    : )

    BTW: My dream distro is Debian with *all* of the above... imagine it with the 2.4 kernel, KDE2 with Konqueror or GNOME2 with Mozilla and Evolution on top of XFree86 4.0 all placed on a nice journaling filesystem... oh, what a glorious day that shall be. I can then die in peace and happiness.
  • I realize we all are on the same side, hence I bother to point out the errors in Linus's logic.

    Each has its advantages, so keep both- don't throw the baby out with the bathwater.

  • I personally like the GPL (just so there is no confusion), but why is communism bad?


    ---
  • Say all you want about "open source is better" and "debugging is parallel" but these sorts of things would never have slipped through the checkin/review process in any decent OS group (I should know, I've been in a few).

    Well, I've never worked on an OS, so maybe I'm way out of line here. But this software is BETA, which is why it's in the UNSTABLE kernel numbering scheme -- i.e., it's 2.3.xx instead of 2.4.xx. So it's NOT expected to work perfectly.

    Now if they released it as 2.4, then that's a different story. I guess I'm confusing your notion of CVS check-in with my notion of release. But is it feasible to check each kernel check-in against the other 5000 things the kernel does, to see if any of them broke?

    And no, we're not doing MS's PR job for them, because these problems will be fixed (except the post 2.4 ones) before the 2.4 release.

  • He's repeating a very old idea.

    10 years ago there was a great debate over SMP versus distributed computing. Already then it became clear that SMP is a dead end: it cannot scale, because of cache coherency etc.

    IBM, for example, never wanted to make SMP machines. Later they gave in to customer demand and made some SMP AIX machines, but internally they still don't believe in it (they push more for massively parallel, i.e. distributed, systems such as the SP2 machines).
  • X does not require inetd to function. When you run X, it listens on a port for clients to connect to it. inetd is for those programs that do not set up their own listener.

    I could be wrong here, but I think X can be set up using UNIX sockets, which require no IP addresses.
  • Interesting: sounds like he's just reinvented a form of NUMA (or at least that's how I read it...) with all the up- and down-sides that entails.

    The (really interesting) problem with this is cache coherency and consistency: if you've got, say, 256 independent nodes all accessing the same dataset for CRUD operations (as against straight shared reads with non-interdependent writes), how do you ensure that the data is kept in sync without spending vast amounts of time waiting on locked data? The semantics of transaction processing mandate that your transactions be atomic, consistent, isolated and durable: it's difficult to do this (particularly the consistency and isolation requirements) when you've got lots of nodes, each with its own private cache, unless you enforce frequent cache flushes and reloads. Even doing this, you start seeing a lot of waiting: fundamentally your access to data is serialised, at which point a lot of the benefits of massive parallelisation disappear (this is better known as Amdahl's Law, which states that the potential speed-up offered by massive parallelisation is limited by the fraction of serial work in the problem) -- see here [sgi.com] for further discussion.

    This is less of a problem with highly read oriented systems (such as DSS) and jobs such as crypto-cracking, but with OLTP this is a fundamental bottleneck.
    --
    Cheers
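As a back-of-the-envelope illustration of the Amdahl's Law point above (the 95% figure is my own assumption, not from the discussion):

```shell
# speedup(n) = 1 / ((1 - p) + p/n), where p is the parallelisable fraction.
# Even a 95%-parallel workload gets nowhere near 256x on 256 nodes:
awk 'BEGIN { p = 0.95; n = 256; printf "%.1fx\n", 1 / ((1 - p) + p / n) }'
# prints 18.6x
```

The 5% of serialised work dominates long before you run out of nodes -- which is exactly the OLTP bottleneck described above.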

  • Why so much anger? Each has its advantages and disadvantages as do other *nix variants. Maybe someday you'll come to realize we are on the same side; not that this is some kind of jihad anyways.
  • USB has much greater bandwidth than your parallel port. Your zip drive would be faster over USB.

    CleverFox
  • That's sort of the effect from the user's point of view, but if I understand vmware, it slides underneath the two OS's and makes them run side-by-side with one appearing in a window in the other. With the user-mode port, it is really Linux inside Linux. If you run it and do a ps, you will see a whole bunch of "linux" processes, plus what they really are inside the virtual machine.

    That's simply not true. If you have win98 running in Linux, then VMware is a Linux process that emulates a virtual machine for win98. There are virtual machines that run below all OS's, but VMware doesn't work like that. For instance, VMware can crash or freeze solid without disturbing any other Linux app (or the kernel).

  • I had forgotten that Windows is large enough that 65K bugs is not that many bugs per line of source code.

    The endemic EEE culture could be really interesting in a post-breakup future with two behemoth software companies in Redmond (call them OS Products and Everything Else).

    Care to speculate what would happen if Windows 2003 came out with a new "innovation", such as a swallowed version of something that looked a lot like Word+Excel+Access+Outlook? Hey, if it can eat IE, it's got a pretty good-sized mouth!

  • It's great reading even if you're not a kernel hacker yourself, being something of an education as to what the issues are, the pros and the cons, and plenty of subjective opinion about what's "beautiful" and what's not.

    As far as I'm concerned, this is like reading about the Chronicles of the Mightiest, but sadly, I never expect to see a terse laudatory Alan Cox comment with my name on it, though I much admire those other names that do get put in the parentheses.

    Fixed a long-standing bug. (Little Ole Me)

    Cured subtle race on 8-way SMP (Me Again)

    Speeded up khttpd by 5x. (Ain't I Great)

    Implemented VMware like win32 interpreter. (You Know Who)

  • Read before you answer. The keyword was new devices.

    And then give me one good reason why someone who buys a zip drive now would want the parallel version when he can have the faster, more reliable USB version.

  • Do you really want him to stick to a release date, and then end up releasing an extremely buggy non-fully functional kernel?
  • I only know of Linuxcare because of Kernel Traffic. What an excellent resource (thanks, Zack Brown)! I was afraid that KT might go away after the latest Linuxcare blow-up, but, happily, it continues.

    I find myself reviewing the KT and diving into the referenced discussions in the archive when something grabs my attention. I am not a kernel hacker, and much of the discussion is beyond me. Of course, exposure over time helps, but the other reason I read it is to see the Open Source development process in action. Very cool!

  • N.B. No need to moderate this down. Address complaints directly to Linus.
    Why moderate when you can reply? Personally, I am quite happy with the way the kernel is progressing; in any case, considering every release is available, including every patch, it is hardly vaporware. It's not like Linus is some huge corporation spending vast sums on a piece of software that doesn't even exist yet.
  • http://freshmeat.net/search.php3?query=kill+netscape
  • That's not a kernel matter... and your plight is one that has been solved before. Use the shell command 'killall netscape'... That'll send a SIGTERM signal to netscape. If it still refuses to die: killall -9 netscape... sudden death.
  • I do believe that the cycle from 2.0 to 2.2 was nearly a year long, or thereabouts. The 2.1.x series was up to 2.1.132 (correct me if I'm wrong) before the 2.2.x pre's came around.

    Justin Buist
  • Obviously, you have never developed software on as huge a scale as the Linux kernel. You add one thing, it breaks another. That's just basic fact. Before you start insulting the (majority of) people who give their FREE TIME to develop YOUR KERNEL, realize there is a lot more to programming than your simplistic view allows.

  • I think it would be extremely cool if they made it so that when you wanted to kill, say, netscape, you could just type kill netscape. I find it becomes extremely annoying when netscape crashes every two seconds.
  • Anyone have any friggin' clue when this is coming out? The hype for 2.4 has reached the level Microsoft once garnered.
  • The hype for 2.4 has reached the level Microsoft once garnered

    Debatable. At least it will not be pushed onto us like the aforementioned company has been known to do.
  • I use 2.3-pre5. It's stable as hell and a lot faster than 2.2 for me, as it supports UDMA for my motherboard. Install it and start sending bug reports, or better, send patches. If you don't do any of these things, you SHOULD NOT complain...
  • The whole reason I object to seeing this kind of breakage is because I know this stuff can be done better.

    The whole reason I object to this objection is because you don't seem to understand the idea of "development release".

    This is not code intended to be running high-availability web servers. This is code that is intended to be downloaded by people who are willing to do testing on it themselves, to watch things break, and then to submit core dumps/debug output/code patches to the main kernel developers. If you put this kernel on your production box, then yes, something will most likely break.

    Is the fault that of the kernel developers? No, the fault is yours for being ignorant.

    One reason testing is done this way is because out in the wide world of kernel release land, people are going to use different setups. What happens if some odd interaction between my AMI Megaraid card and my Soundblaster 128 on my VIA motherboard with an Athlon processor brings my system down? What would have happened if none of the kernel developers had this exact setup that demonstrated this problem, and they hadn't been putting out development releases?

    Answer: They might not have found the problem in the IRQ spinlock or whatever would have caused that rhetorical situation.

    Development releases are a GOOD THING.


  • Then why did Linus himself say that 2.4 would come out in the first quarter of 2000?
    (only to change that later to summer 2000)
    (which looks like another promise that he might not be able to keep...)

    Vaporware, anyone?

    N.B. No need to moderate this down. Address complaints directly to Linus.
  • I once supported and tested a product which had very rigid release deadlines, even though at the time there were only 3 sites using the product, and all three were still not in production. The 'project manager' would often promise a new version by the end of the week to one customer, and the programming team would often work long hours just to get everything working as promised, often ending up with code that was tested only just enough to check that it was working. The project was an utter shambles. If we had shipped new versions when they had been ready, perhaps the test customers would still be using it today, instead of abandoning it as unready.
  • >But you can exclude nearly everything related to development, most network daemons, databases, most
    >libraries, most scripting languages, and most applications.

    Win2K has a complete scripting language, Windows Script Host, so it would be fair to include a Perl or Python in the mix. And, since WinNT/Win2K have a rudimentary POSIX system, it seems only sane to give Linux a rudimentary Win32 system and include Wine.

    We should also include a modern browser, probably either Communicator or Mozilla, either of which will raise our bug count severely.

    Win2K also includes some basic network functionality that NT was missing (under the strange moniker "Services for Unix"), like a telnetd and the like, so removing network daemons is not actually fair. There's Dynamic DNS, so we'll include BIND, which actually is less capable. Also encrypted filesystems, Kerberos (hey, it's not strictly correct, but it's in there), IPsec, telephony, OpenGL & DirectX...

    And, IIS, XML parsing, COM system, PPTP, Distributed File System, ActiveDirectory -- you'd need Apache, PHP, probably Mozilla for XML, some ORBish system, any of the horribly buggy PPTP implementations for Linux, Coda? AFS?, OpenLDAP....

    The list goes on and on. To say that Win2K is analogous to a stripped-down Linux distro is really not correct -- there's a LOT in there, way more than in the average Linux distro.

    None of which is a defense of Win2K; I just think it's important not to use the 63K alleged bug count as FUD ammo -- I could file 63,000 bugs against ANY Linux distro if our definition of 'bug' is the same one used for the Win2K count -- anything that displeases the tester, even as small as tiny aesthetic visual issues with a program.

    >Of course, we also don't have Microsoft's legions of full-time, paid developers and testers working
    >on the core components of a Linux distribution, either.

    That's kind of an excuse, and gets really pretty close to saying that expensive commercial software should have fewer bugs than Open Source since they have more money to throw at it.


    --
  • In all fairness, comparing 95 known bugs in the Linux kernel with 63K bugs in all of Windows 2000's kernel, GUI, services, userland programs, games, icons/graphics/media files, POSIX subsystem, scripting language, and so forth is comparing apples to oranges.

    It would be a more fair comparison to get hold of an entire distribution's count of todo's and bugfixes, including X, Wine, sendmail, python, GNOME, and on and on. I bet you could get right up there near 65K pretty easily if you aggregate all the TODO and KNOWN BUGS lists in all the SRPMS in Red Hat 6.2, for instance.


    --
  • Thanks... now to work out how to put some of the other nifty features of the 2.3.99'ish kernel to use...
  • Thanks for pointing all this out - I'm familiar with fstab, just not familiar with shm use in Linux... reading the Documentation/Changes file now, which also appears to answer my other questions related to how to find out more about the new features in the new kernel.

    For the longest time (been a Linux *user* since 1994, the days of Yggdrasil, pre-RedHat), I've never even bothered to really read the kernel docs, I'm ashamed to admit. Most of the time, for me anyway, it's been a matter of build the kernel, install the kernel, run the kernel (for a year or so), then upgrade a year later... :) And I'm happy to say, it's been totally worth it all along.

  • It would be a more fair comparison to get hold of an entire distribution's count of todo's and bugfixes, including X, Wine, sendmail, python, GNOME, and on and on. I bet you could get right up there near 65K pretty easily if you aggregate all the TODO and KNOWN BUGS lists in all the SRPMS in Red Hat 6.2, for instance.

    Obviously comparing the kernel to Windows 2000 is outrageous, but comparing Windows 2000 to a full Linux distribution is also unfair. The average distro ships with some 3,000 or so packages, the vast majority of which are applications that you won't find an equivalent for in Windows 2000.

    Windows 2000 is more comparable to the kernel, X, glibc, GNOME, samba, Apache (maybe-- does IIS come with the base OS now?), some common libraries and small utility applications. Probably a few other applications, too. But you can exclude nearly everything related to development, most network daemons, databases, most libraries, most scripting languages, and most applications.

    And then, you have to factor in the fact that most Linux distributions run on several hardware architectures, and many of the libraries and applications in those distributions run on various operating systems, too.

    But still, you do have a point: we've got our share of bugs, too. Of course, we also don't have Microsoft's legions of full-time, paid developers and testers working on the core components of a Linux distribution, either. Imagine what you could do if you had a billion and half dollar development budget (which is about what they spent on Windows 98 and IE4/5).
  • I should have been more specific about what would be injured...

    The result of the massive changes is likely to be a diminishment of the amount of time Al Viro has available for other things, such as sleeping...

  • There is a problem with the "when it's ready" philosophy, which is that the release schedule of a piece of cooperatively developed software is bistable. In one mode, releases are frequent, so the "penalty" of your favourite feature being held over to the next release is small, and so the pressure to get features into the current release is reduced, and it is easier to make frequent releases. In the other mode, the opposite happens. Everyone wants their feature in the current release, because they don't know when the next one will be, so it gets harder and harder to get releases stable and out.

    Steve
  • First of all, to preempt the question that I KNOW is going to come up time and again here: KERNEL 2.4 WILL COME OUT WHEN IT IS READY AND NOT A MINUTE BEFORE. In case anyone is wondering, Mozilla will also be released WHEN IT IS READY. As will KDE2. As will any other project you're waiting for.

    For those of you new to Open Source, the reason you can run linux for months without a crash is because the developers take great pride in it. We don't release our software as a final version until it is ready and fairly bug-free.

    If you truly want to know when it will be coming out, subscribe to the kernel-dev list and read the status. Or, if that's too much mail, at least check out kernel traffic. It makes for good reading.

    This particular list has been on Kernel Traffic for at least a month. Alan has been updating and revising it for a while now, with no huge changes. Why it's slashworthy now, I have no idea. Oh well, a minor rant. :)

    Erik
  • I don't think you understand the philosophy of linux kernel development.

    Every part of the kernel is to have a maintainer. When something changes, like the VFS layer, the maintainers of the components depending on that layer, like UMSDOS, must update their components to work with the changes.

    If a particular thing isn't working, because of a change, it's probably because 1) there is no maintainer for that part or 2) the maintainer is lazy/busy/no longer interested.

    As for other kinds of problems... that's the price of innovation. Linux kernel development, in the development trees, tries to improve. You don't innovate and manage to keep *everything* working *perfectly* the whole time.

    Keep that in mind. Maintainers do their own code. When something big changes, it gets announced on the mailing list. Someone who changes X isn't expected to go fix Y, Z, Q, A, and B that depend on it. The kernel is simply too big for that kind of process to be expected.
  • #!/bin/sh
    # killg: kill the first process whose ps entry matches $1.
    # The extra "grep -v grep" keeps the grep process itself out of the match.
    pid=`ps ax | grep "$1" | grep -v grep | head -1 | awk '{print $1}'`
    kill -9 "$pid"
    echo "Killed $1, pid: $pid"

    now you can run ./killg netscape

  • What I find strange about that is that Windows 95/98/etc. runs in a similar fashion. Apps are launched in a user-mode virtual machine that is either 8, 16, or 32 bit. This is part of the reason M$ software runs a little tighter than third-party stuff: they have full access to the VM control components, where no-kernel-access-granted people only had published documentation.
  • Interesting, but I think there is a slight flaw:

    Take your favorite Linux distro. Remove inetd completely, install X with KDE since it is relatively similar, look-and-feel-wise, to Win2k. Install KDE progs to match standard Win2k utilities. Add smb and a stable KDE GUI frontend. Remove all console utilities that have no equivalent in Win2k. Now install a Natalie Portman desktop theme on both computers. Compare the number of known bugs.


    I believe that X is a client/server application and thus relies on inetd to function. We at least need a loopback configured.

    Besides, my opinion is that KDE is full of bugs, and because it's still not fully operable you should halve the version numbers.
  • There are a significant number of changes to the kernel internals, especially with regard to the VFS layer (an internal layer that lets the kernel treat each filesystem uniformly). Making all these changes causes many things to break until they are all updated to conform to the new internal interface layer. The VFS is expected to evolve even more for 2.5/2.6.

    Why make all these changes, instead of minor incremental changes to the internals? So that the kernel can handle such things as advanced new filesystems (XFS, ReiserFS, etc.) with larger file sizes, advanced features, etc. Also, Linux has incorporated a logical volume manager (LVM), allowing people to link filesystems, grow filesystems, etc. Without updating the VFS abstraction layer, XFS and others would have to resort to kludges that would be difficult to maintain in the future.

    Basically, Linux's VFS layer is a VERY powerful concept, allowing one to plug new file systems into the kernel, and have the rest of the kernel automatically use it without tons of rewriting. However, as people get more ambitious with their filesystem dreams, the VFS must evolve to provide the proper abstraction of features.
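
    One quick way to see that pluggability from userspace is to list the filesystem types the running kernel has registered - each line is a driver plugged into the same VFS abstraction (this assumes a mounted /proc on Linux):

```shell
# Every filesystem the kernel currently knows how to mount has
# registered itself with the VFS and shows up here, one per line.
cat /proc/filesystems

# Mounting any of them goes through the same generic interface,
# e.g. (as root; device names are illustrative):
#   mount -t ext2     /dev/hda1 /mnt/disk
#   mount -t reiserfs /dev/hda2 /mnt/tree
```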

    Also, one more neat thing (unrelated to VFS). I'm running 2.3-pre6 (released today), and the SoundBlaster Live! OSS drivers have been included (emu10k1). That is great news! Considering I never expected my SBLive to work under Linux when I bought it, it is great to see that it is now in the mainstream releases.
  • >Obviously, you have never developed software on such a huge scale as the linux kernel.

    Actually, I have. I was in the OS groups at two of the early UNIX-SMP companies, and I've worked on a half-dozen other kernel-oriented projects where the complexity was comparable. The point is that it's precisely _because_ of the large scale of something like a kernel that a more disciplined approach is needed. Fast and loose works fine for small stuff. What we're seeing is precisely the sort of breakdown that always occurs when you apply the same approach to big stuff. It's avoidable, but only if the developers exhibit some maturity. As you have amply demonstrated, though, that is a quality often lacking in this community.

    >Before you start insulting the (majority) of people that give their FREE TIME to develop YOUR KERNEL, realize there is a lot more to programming than your simplistic view.

    Typical slashdot. Someone says something critical of Linux, and respondents assume that the poster is a non-programmer. Au contraire. The whole reason I object to seeing this kind of breakage is that I know this stuff can be done better. I've done it better myself. I've worked alongside hundreds of others who know about basic software hygiene, who developed many of the techniques and algorithms (and sometimes the code) that have since been recycled in Linux. Linux can be improved by a willingness to learn not just technical stuff but also organizational stuff from the experts, instead of having to learn every lesson and reinvent every wheel the hard way.

    An example: on my current project, I have often made changes that have caused breakage in msync in the Connectathon tests. This corresponds exactly to the msync breakage in Linux right now. The difference? I go out of my way to _find_ that breakage by running appropriate tests, and then I _fix_ it before anyone else - even my own team members - is affected by it. There's no reason at all why an open-source developer can't do exactly the same thing. The reason many don't has nothing to do with philosophy or organization structures. It has only to do with individual laziness and lack of self-discipline.
  • >Basically, Linux's VFS layer is a VERY powerful concept,

    ...which has been around longer than Linux itself. As a filesystem developer, I was stunned to find that Linux didn't have a VFS layer to speak of years ago. The value of such an abstraction, and the methods for implementing it, were old news even ten years ago when I started working on UNIX.
  • >The same is true of O_SYNC: it is not vital to many people

    Are you kidding? True, it's not vital to that many people, but to those who do need it there is no substitute. A filesystem that doesn't properly support O_SYNC (and yes, I'm aware that ext2fs never did, even before the current breakage) is simply not suitable for mission-critical apps. It's that simple.
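
    For anyone who wants to poke at synchronous writes without writing C, GNU dd exposes the same semantics through its oflag option (this assumes GNU coreutils; the flag spelling is dd's, not the open(2) constant):

```shell
# oflag=sync makes dd open its output with synchronized-write
# semantics, so each write returns only after the data has been
# pushed toward the device.
dd if=/dev/zero of=syncfile bs=4k count=1 oflag=sync 2>/dev/null
ls -l syncfile   # a single 4096-byte block, written synchronously
```
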
  • >The whole reason I object to this objection is because you don't seem to understand the idea of "development release".

    I understand the concept of "development release" just fine. I've been involved in more of them than you. I'll put this simply so even a moron like you can understand it.

    Even for a development release, there are certain kinds of breakage and certain levels of carelessness that are unacceptable.

    It's not a difficult concept, and what we're seeing in 2.3.x is that we're on the wrong side of that line. Some people just need to clean up their acts, and that's all there is to it.

  • >That's simply not true. If you have win98 running
    >in Linux, then VMWare is a Linux process that
    >emulates a virtual machine for win98.

    So, if you do a ps, you will see a process that has win98 running inside it? Cute.

    In that case, that is fairly similar in effect to the user-mode port. The basic design is entirely different, though.

    Jeff
  • by AME ( 49105 )
    About your sig: Does anyone out there really use Gnome or KDE? WM is my favorite; I'm tired of start menus (this also includes stylized "K"s and "little feet").

    I use Gnome. Really.

    With regard to the foot menu, it's not an essential part, and it's easy to remove (right click on the foot->Remove from panel).

    For that matter, the entire panel could be removed, or easily configured in almost any way that suits you. Or you could have several of them, each configured differently.

    In any case, WM does have a root menu, and how this is different from a foot menu is difficult to fathom. Except that there's an icon on the panel for it, but like I said, there doesn't have to be.

    --

  • What hype? I really don't expect 2.4 to be very different, as Linus said it would not be. (One of his emails posted on Slashdot a while ago said they'd pack less into each kernel so releases could come more often.)

    Is there anything in 2.4 you are looking forward to? A device driver you lack support for?

    • Yes? Maybe you could contribute too and speed up the process.
    • No? Shuddup.

    2.4 has some new stuff I'm looking forward to, mostly USB support. I really hoped ReiserFS would be included, but heck, kernel patches are fine.

    Oh, and btw - 2.1 went way into the hundreds of alpha/beta versions before it was released; 2.4 is young compared to that. 2.2 was just released last year, settle down.

  • Obviously, Linus never took catastrophic failure into consideration.

    Since devfs is a file system, you can add it to your backup rotation. You might choose to dump it to tape nightly or rsync it to a remote host hourly.

    If you want higher reliability, add a journal to the filesystem. If you want high availability mirror changes to the other servers.

    Linus made it so you can add as much catastrophic failure tolerance as you feel is necessary.

  • However, Linux extends this idea to things that are needed for serious work. Having procfs _replace_ sysctl is A Bad Idea.

    I understand his complaint about numbering sysctl branches and nodes, but it's not as big a deal as he makes it out to be; in practice those numbers hardly ever change drastically.

    The fact is, being able to get your data with one or two syscalls is fast. A lot faster than having to drag the VFS code into things. Case in point? Here's a look at top on a sourceforge box:

    PID USER PRI NI SIZE RSS SHARE STAT LIB %CPU %MEM TIME COMMAND
    7014 bugg 18 5 1156 1156 844 R N 0 6.8 0.2 0:02 top

    6.8% of the processor. (this box has 238 processes)

    What kind of processor is this?
    model name : Pentium III (Katmai)
    stepping : 3
    cpu MHz : 598.506870

    Unacceptable. Procfs is great for users who want to read things, but programs need to have a lower level interface for the sake of speed, and code simplification. It's a lot easier to use sysctl than it is to open up a file and read the data from it, hate to break it to you.
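
    For what it's worth, the two interfaces expose the same tree: a dotted sysctl name such as kernel.ostype maps directly onto a path under /proc/sys. A minimal sketch of that mapping (the proc_path helper is mine, not anything from the thread):

```shell
# Translate a sysctl-style dotted name into its /proc/sys path.
proc_path() {
    echo "/proc/sys/$(echo "$1" | tr . /)"
}

# The binary sysctl(2) call and the procfs file reach the same data:
#   sysctl kernel.ostype         - one or two syscalls
#   cat /proc/sys/kernel/ostype  - open + read + close through the VFS
proc_path kernel.ostype   # prints /proc/sys/kernel/ostype
```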

  • That would be a neat poll, but you'd have to add an extra one that you forgot.

    9) Hemos
  • Hm, that doesn't address multiple instances of a program.

    My Linux distro came with a nice killall command that does:

    # killall netscape
    ---

  • Well if you really want to be fair you have to do one of two things.

    1. Take Win2k and load it up with Fictional Daemon (telnet only, ftp disabled), apache, exchange server, and an ftp server that is known to be rootable. Throw in some other things to make up for things like finger that you can't add to Win2k, and use VNC as a sort of equivalent of xdm. Now Win2k is ready to take whatever comes into your machine's NIC, except maybe hot grits. Now that you have Windows set up like an out-of-box Linux install, count all known bugs and compare that with the bug count of your favorite distro.
    2. Take your favorite Linux distro. Remove inetd completely, install X with KDE since it is relatively similar, look-and-feel-wise, to Win2k. Install KDE progs to match standard Win2k utilities. Add smb and a stable KDE GUI frontend. Remove all console utilities that have no equivalent in Win2k. Now install a Natalie Portman desktop theme on both computers. Compare the number of known bugs.


    Comparing a bug count of the two kernels is not fair, because the Windows kernel is not really separate from the rest of the OS, while the Linux kernel could be placed on a system without any of the traditional Unix tools, with a custom init, and still work.
  • by torpor ( 458 ) <ibisum@@@gmail...com> on Wednesday April 26, 2000 @01:31PM (#1108373) Homepage Journal
    Can you provide more details about how to set up /etc/fstab, or point me (us) in the direction of a web page that has a reference for how to use these new features in the Kernel?

    I'd be really happy to get into using 2.3.99, but I don't really have a lot of time to wade through the sources looking for details on how to use the new features. I'm sorry if this is a stupid question, but maybe there's a page out there that contains a guideline for the new features and more importantly - how they might be used?

  • by Stormie ( 708 ) on Wednesday April 26, 2000 @12:35PM (#1108374) Homepage

    Yes, this is a classic "Me too!" comment. Kernel Traffic is wonderful, a godsend to someone like myself who really is interested in what's going on in kernel-land, but can't possibly read 100 messages a day (or whatever) on the mailing list. Anyone else in the same boat (and I'm sure heaps of Slashdotters are) would be mad not to check it out every week.

    And, if you're not aware, Kernel Cousins [linuxcare.com] is a collection of "cousins" to Kernel Traffic, for other mailing lists. Currently the Gimp, Wine, Samba and Debian HURD mailing lists are summarised weekly or thereabouts. So if you're interested in the bleeding edge of any of those projects, there's something for you too.

    Massive kudos to Zack Brown and the other traffickers for these summaries!!

  • There's no "hype" in talking about source code that you can already download.

    Incidentally, here is the portion of the Kernel Traffic discussion [linuxcare.com] where Linus discusses devfs:

    I want the numeric crap to GO AWAY. It's stupid, it's unmaintainable, and I do _not_ want to have the same old "device number" problems in new guises.

    A hierarchical name-space with true names is the obviously correct way to name kernel parameters. And doing that any other way than exporting it as a filesystem is stupid and wrong.

    Guys, remember what made UNIX successful, and Plan-9 seductive? The "everything is a file" notion is a powerful notion, and should NOT be dismissed because of some petty issues with people being too lazy to parse a full name.

    The same is true of ASCII contents. Binary files for configuration data are BAD. This is true for kernel interfaces the same way it is true of interfaces outside the kernel.

    I tell you, you don't want the mess of having things like the Windows registry - we want to have dot-files that are ASCII, and readable with a regular editor, that you can do grep's on, and that can be manipulated easily with perl.

    This is one of those things that shows that Linus most definitely does have a clue. Further devfs changes will likely have an impact on VFS code, and thus be "injurious" to Alexander Viro. And it looks like there may be some side-effects whereby /proc gets nearly "reimplemented." And I can see the glimmerings of the VFS changes providing the kernel support needed to make managing ACLs and kernel capabilities a whole lot better.

    It may take some time, and may not be complete until 2.5, but there is definitely some ongoing Good Stuff getting implemented in the Linux kernel.

  • by tilly ( 7530 ) on Wednesday April 26, 2000 @03:45PM (#1108376)
    I am talking, of course, about Larry McVoy's thoughts on scalability and SMP clusters. Here is a link [linuxmall.com] on the problems with SMP, and here are the slides [bitmover.com] without explanation.

    The theory goes like this. In an SMP system all of the CPUs have to be made to pay attention when any of the CPUs wants to do something where races would be bad. To do that you need good latency, which means that you need to fine-tune what is locked where and for how long. This introduces a lot of overhead.

    Instead what Larry wants is to have a machine with a lot of CPUs turn itself internally into a cluster of Linux machines that just happen to network Really Fast. There are good theoretical reasons why this should scale Really Well.

    One of the key items in this vision is the ability to run virtual machines within Linux. Guess what User Mode Linux is? :-) The other piece of the puzzle is making a cluster work like one machine, and Ron Minnich [lanl.gov] has been doing some work there.

    In 2 years, care for a 1000 CPU multi-threaded database server? With failover? :-) :-) :-)

    Cheers,
    Ben
  • by kijiki ( 16916 ) on Wednesday April 26, 2000 @01:52PM (#1108377) Homepage
    echo "none /var/shm shm defaults 0 0" >> /etc/fstab

    That should do ya. Make sure you use ">>" not ">", or you'll clobber your fstab (that's bad).
  • by cullman ( 29958 ) on Wednesday April 26, 2000 @06:27PM (#1108378)
    Reading the first couple of pages where all of Linus' quotes are in red text, I had some déjà vu. Reminds me a bit of the New Testament.
  • by Salamander ( 33735 ) <`jeff' `at' `pl.atyp.us'> on Thursday April 27, 2000 @04:28AM (#1108379) Homepage Journal
    >But this software is BETA, which is why it's in the UNSTABLE kernel numbering scheme.

    Unlike some of the other BS people have posted in response to my original comments, this may be a genuine philosophical difference. You see, I've been working for ten years in a world where "beta" means that the people responsible for the product have done everything they could to ensure that it's free of major defects, but they want to get some real-world experience before they "close the book" and call it done. A beta isn't expected to be perfect, but it shouldn't have any _known_ defects of a certain severity level and adequate testing should already have been done to verify that nothing's in there that will later seem "obvious". Linux 2.3.x clearly does not meet this standard. Maybe this standard is too stringent and inappropriate for Linux 2.3.x, what with the cooperative development model and all. In fact, I believe that is probably the case. However, I think the current state of Linux 2.3.x fails to meet _any_ reasonable quality standard, even one more appropriate to the situation.

    I still believe that there's a certain level of diligence that should be expected before beta, before alpha, before hand-off to QA, even before other team members get their hands on a new piece of code. Obvious standard tests should be run to check for regressions, for one thing, and the code should not leave that developer's hands while such obvious regressions exist. For example, I run Connectathon tests before I check in even to a development branch, and if I see a new failure I don't check in. That doesn't mean I'm special, either; it's basic stuff that should be required of every developer. What I'm seeing, and what I'm complaining about, is that even these very basic rules clearly are not being followed in Linux development.

    Nobody in their right mind expects any software to be perfect, least of all something clearly labelled "unstable", but I think it's perfectly reasonable to expect that things won't be broken _more than they have to be_. And that's not just a philosophical thing, either. Detecting and fixing bugs sooner rather than later is also more efficient. What if fixing O_SYNC requires more than superficial changes, requiring everyone else to change their code yet again after they'd already changed it once to go along with the new VFS stuff? Wouldn't it have been better if the person who broke O_SYNC had been required to fix it _before_ the whole suite of related VFS changes was sent out to affect everyone?
  • by bbarrett ( 33853 ) on Wednesday April 26, 2000 @02:01PM (#1108380) Homepage
    As a couple of people pointed out, you just need to add the line none /var/shm shm defaults 0 0 to /etc/fstab. Since you kind of asked, here's a description of the entry. The first field (none) is the physical file system; for an ext2 partition this might be something like /dev/hda1. The second field (/var/shm) is the mount point - don't forget to create the directory /var/shm. The third field (shm) is the file system type, which can be one of a bunch of things. The fourth field (defaults) lists any options that should be passed to mount when mounting the file system; look at the mount man page for more info on this. The last two numbers (0 0) give info to dump and fsck: the first tells dump whether to include this mount point (0 means it will never be dumped - dump of course being for backups), and the second is used by fsck to know that this filesystem does not need to be checked (a non-zero number gives the order of checking for filesystems that do need it).

    One other thing: read the Documentation/Changes file in the kernel source. It includes information such as the shm setup, including how to do it, along with a couple of other important notes. Well worth the read.
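
    Laid out field by field, the shm entry - plus an illustrative ext2 root entry for comparison (the device name is hypothetical) - would look like this in /etc/fstab:

```
# device    mount point  type  options   dump  fsck
/dev/hda1   /            ext2  defaults  1     1
none        /var/shm     shm   defaults  0     0
```
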
  • by jdike ( 35029 ) on Wednesday April 26, 2000 @02:32PM (#1108381)

    > It seems to me to be like a VMWare that only
    > does Linux (to put it simplistically).

    That's sort of the effect from the user's point of view, but if I understand vmware, it slides underneath the two OS's and makes them run side-by-side with one appearing in a window in the other. With the user-mode port, it is really Linux inside Linux. If you run it and do a ps, you will see a whole bunch of "linux" processes, plus what they really are inside the virtual machine.

    > trying out new
    > kernels/distributions/configurations without
    > needing to mess with your current setup.

    This part is fun. The kernel boots out of a file in your normal filesystem. I've got Red Hat, Debian, Slackware, and SuSE filesystems. This makes it a lot easier to play with new distros.
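
    The backing file such a setup boots from is just an ordinary file holding a filesystem image; creating one takes a couple of commands (the 64 MB size and the ubd0= boot argument are illustrative - check the project's docs for the exact syntax):

```shell
# Make a 64 MB sparse file to hold a root filesystem for the
# user-mode kernel: seek past the end instead of writing zeros.
dd if=/dev/zero of=root_fs bs=1M count=0 seek=64 2>/dev/null
ls -l root_fs

# Then, assuming mke2fs and a user-mode "linux" binary are at hand:
#   mke2fs -F root_fs        # put an ext2 filesystem in the file
#   ./linux ubd0=root_fs     # boot the user-mode kernel from it
```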

    > One also wonders then if Linux could be ported
    > to other call interfaces

    There's been talk of a windows port. According to one of the guys on my mailing lists, 95 is out, but 98 and NT look possible. The really important thing is the ability to intercept and annul system calls. If that's there, everything else can probably be made to work.

    Jeff
  • by Phallus ( 54388 ) on Wednesday April 26, 2000 @01:15PM (#1108382) Homepage
    This looks rather interesting: a "port of the Linux kernel to its own system call interface. It runs in a set of processes, resulting in a user-mode virtual machine." This is supposedly being folded into 2.4. It seems to me to be like a VMWare that only does Linux (to put it simplistically).

    The two most exciting things here for me are being able to look at the kernel in user-space while running, in ways that wouldn't be possible on a traditionally running Linux, and trying out new kernels/distributions/configurations without needing to mess with your current setup. For kernel developers in particular this could be very valuable.

    One also wonders then if Linux could be ported to other call interfaces - Linux under *BSD/*dows/etc for dual booters who need to do something quickly in Linux while still in their other OS.

    The web page is here. [sourceforge.net]

    tangent - art and creation are a higher purpose

  • by SurfsUp ( 11523 ) on Wednesday April 26, 2000 @12:45PM (#1108383)
    I'm not holding my breath for 2.4

    That's very wise ;-) If you've been running 2.3.99 you'll know there are still a few "issues" remaining, though these kinds of things hardly stand comparison with the bug list of a certain monopol^H^H^H^H^Hsoftware provider I could mention.

    On the whole though, when it is ready, and it's a lot closer than you'd think from the jobs list, it's going to be a real killer. Just a couple of things: Built-in pcmcia and USB; extensive support for video, including USB Webcams; vastly improved SMP support; a new virtual device filesystem that makes major/minor device numbers go away; bags more drivers for all kinds of things; many, many other goodies I didn't mention.

    I'll weigh in with a guesstimate of 3 months to 2.4.0, then another 2 months before you see 2.4.x start appearing in distributions. Very, very definitely worth waiting for - or just download it now and use it if you can't wait :-)

    Don't forget to put an entry into fstab for shm (shared memory) - this has now become part of vfs, and your distribution won't know about that.
    --
  • Good afternoon ladies and gentlemen, and welcome to the OSS racing track. Let's look at today's lineup:

    1) Linux 2.4
    2) Debian 2.2 (potato)
    3) Mozilla 1.0
    4) XFree86 4.0 (The real version)
    5) The multi-headed XFS/Reiserfs/ext3 beast
    6) KDE 2.0
    7) Evolution (The upcoming GNOME email app)
    8) (insert your favorite software-under-development here)

    Anyone willing to make a few bets as to which one we're going to see first? Hmmm, maybe this would make for a decent poll... (c:

    --Cycon

  • by Black Parrot ( 19622 ) on Wednesday April 26, 2000 @12:56PM (#1108385)
    Excluding the "Done" and "Probably Post 2.4" sublists, they have exactly 95 issues to address. And even that number includes quite a few "Fixed but not yet merged" items.

    If it was Microsoft, they would have shipped it 62,905 fixes ago.

    C'mon, guys. If you raise customer expectations you're gonna wreck the industry. It's waaaay too expensive to ship shit that actually works.

    --
