Linux Software

Linux 2.4.19 Released 367

Adrian Voinea writes "The latest stable Linux kernel (2.4.19) is out. The somewhat massive changelog has the details. The patch file is here and the full source is here. If possible use a mirror."
This discussion has been archived. No new comments can be posted.

  • If possible? (Score:5, Informative)

    by SpamJunkie ( 557825 ) on Friday August 02, 2002 @10:00PM (#4002871)
    What do you mean, "if possible use a mirror"? Use a mirror. The only time it isn't possible is when, say, the main server gets slashdotted and there ARE no mirrors.

    When will you ever learn?
    • by tunah ( 530328 ) <sam AT krayup DOT com> on Friday August 02, 2002 @10:28PM (#4002959) Homepage
      Oh yeah, great. Why don't I just go and have a clove of garlic for breakfast while i'm at it? You want a silver stake? 'Cause I've got one right here that you can have. Honestly, you are a load of insensitive bastards.

      Dracula

    • Didn't you just contradict yourself? You have admitted that there are times when it is not possible to use the main site (it is Slashdotted). In which case, the phrase "if possible use a mirror" makes perfect sense. By your own admission.
      • I think he meant when the main server is slashdotted before the mirrors do their mirror thing. In that case, they probably can't, and the only site with the content is the main one.
    • Also... (Score:3, Informative)

      by ZorinLynx ( 31751 )
      If you are at a University, use a mirror located at another University. Chances are the traffic will travel over Internet2 at ridiculous speeds, and not strain your University's (usually) clogged commodity Internet link.

      I got 1.42Mbytes/sec from U of Wisconsin to FIU, myself.
  • by electricmonk ( 169355 ) on Friday August 02, 2002 @10:01PM (#4002873) Homepage
    Can anybody here summarize any important changes that went on between 2.4.18 and 2.4.19? This changelog is just a ton of bug fixes between prereleases. Did they do anything interesting with it?
    • There should be much improved IDE support including much improved support for UDMA6/ATA133, especially on Promise cards. To me, this is the most important thing because I've been unable to use Linux on my main system due to spurious lockups when my large UDMA6 disks are mounted (even without DMA turned on...). -- Dave
    • by plaa ( 29967 ) <{if.iki} {ta} {nenaksin.opmas}> on Friday August 02, 2002 @10:25PM (#4002953) Homepage
      Can anybody here summarize any important changes that went on between 2.4.18 and 2.4.19? This changelog is just a ton of bug fixes between prereleases. Did they do anything interesting with it?

      This is exactly the thing I'd like to see someone make. A simple list of notable changes for the average kernel-compiling Linux user. I've been wanting such a list for several years now, but have never seen one.

      Something in the form of, "If you wish to use hardware X with option Y, you may wish to upgrade, as this version adds beta support for it. If you use option Z you should definitely upgrade, there are many bugfixes. ..."

      Is there any kind of ChangeLog summary available anywhere? And if not, why? I shouldn't think it would be such a big deal for someone with some knowledge of the kernel.
      • There is a summary at the very end of the Changelog, or are you like me and the summary wasn't very explanatory? Then, uhhh, I don't know what to say. Though the summary was understandable enough that I know that more explanation probably wouldn't help my understanding much...
      • What is so funny here is that people were bitching because there was not enough info, now they're bitching because it's too much. When will it end?
      • "A simple list of notable changes for the average kernel-compiling Linux user."

        I think the biggest change that was made to the kernel update was the upgrade in version number. I wouldn't trivialize this update, it did earn it quite a bit of visibility on Slashdot!

        *Hopes people have a sense of humor today.*
      • Ok, just my 2 cents.

        Orinoco driver updated from 0.09b to 0.11b
        If you are using a wireless network card (especially Lucent and similar cards), you could think about upgrading your kernel; there have been many improvements and bug fixes.

        More info: http://www.seattlewireless.net/index.cgi/OrinocoDriver
      • the kernel changelog on Freshmeat is always nice and tidy,
        but you have to go through each rc and pre version to get a full picture.

        e.g. 2.4.19-pre9 change log is....

        This release should be the last pre-patch before 2.4.19. It contains USB, emu10k1, and i2o fixes, a devfs fix, several gcc 3.1 compilation error fixes, support for I845G, USB Casio EM500, and Tieman Voyager USB Braille display drivers, and several documentation updates.
    • by rakarnik ( 180132 ) on Friday August 02, 2002 @10:28PM (#4002961) Homepage Journal

      Main important change would be the IDE updates from the -ac kernels which are in 2.4.19. These should support the new large disks and ATA133, AFAIK. Also, the Changelog is accurate: those were the patches from 2.4.18 to 2.4.19.

      -Rahul

    • aren't stable branch releases supposed to be just that - bug fixes? the "interesting" stuff goes into 2.5

      but yeah, the "user friendly digest summary" idea is brought up every time; guess that kernel-literate person who wants to take the time to do it every time has not materialized yet.

    • I hate to say it, but the Windows 2000 Service Pack 3 release notes [microsoft.com] are much more clear than the ChangeLog. They might not tell you in great detail what was fixed, but at least you can understand it.

      Say unlike this item in the ChangeLog:
      [PATCH] Important Bluetooth fixes

      Uhhh... yeah... okay... they're important, but why?

      • First, thank you Gogo for the link, I have been wanting to know what was in SP3 all week. Second, to be fair, Microsoft has writers and secretaries and corporate accounts to cater to. Alas, the SP3 update is a major update; NT service packs are almost as important as new OS's to enterprise customers. Whereas second-decimal-point kernel releases are frequent. Changelogs are written by kernel developers who can better spend time kernel hacking than making organized, concise changelogs. All things considered, Marcelo [theaimsgroup.com] has done a good job compiling this log; if you want more information on any change just do a search for it or the developer who contributed it in the kernel mailing list. [iu.edu]

        "hmm, sacrilicous" HS
    • by larry bagina ( 561269 ) on Saturday August 03, 2002 @01:47AM (#4003528) Journal
      yes!

      pipe.c: ++i; changed to i++;
      panic.c: printf("shit!\n"); changed to puts("shit!");

  • use this mirror (Score:3, Informative)

    by Anonymous Coward on Friday August 02, 2002 @10:03PM (#4002878)
    http://atlantis1.prolixium.com/~prox/proserv/linux/kernel/v2.4/linux-2.4.19.tar.gz [prolixium.com]. it's 100mbit. so give kernel.org a break...
  • Does dump work yet (Score:5, Interesting)

    by mosch ( 204 ) on Friday August 02, 2002 @10:07PM (#4002898) Homepage
    A fundamental question. Does this release of 2.4 make it so that dump is now reliable, or is dump deprecated as a method of backup forever?

    If it's the latter, can any of you linux gurus tell me what is the current "accepted" solution for making backups. Not archives or images, backups.

    For those of you who are going to say dump works fine on 2.4, please read this message [lwn.net] from Linus Torvalds. I keep hoping he'll change his mind though, at least until a viable alternative arises.

    • Read the message you referred to.

      Linus says: "use tar, do not use dump. it's not a good design anyway"

      I'd better listen to him.

      • 'cpio' is also a good choice, especially when capturing special files, like those in /dev. I trust this more than the system default tar on many systems (e.g. Solaris).

        Supposedly 'afio' is advertised as a better alternative to cpio. Its biggest advantage would be safer creation of compressed archives.

      • Wonder when Linus will get tired of "tar"? After all, it's about as standard as "dump" is in any UNIX distribution. Maybe some file buffer cache change will render tar unusable, and we'll be told to use, say... dd? Or maybe cat.
        • by macshit ( 157376 ) <(snogglethorpe) (at) (gmail.com)> on Friday August 02, 2002 @11:27PM (#4003137) Homepage
          Wonder when Linus will get tired of "tar"? After all, its about as standard as "dump" is in any UNIX distribution. Maybe some file buffer cache change will render tar unusable, and we'll be told to use say.. dd? Or maybe cat.

          Um, do you understand the difference between dump and tar?

          Tar (and cpio, etc.), works via the normal user filesystem interface, which is very stable and well-defined. Dump, on the other hand, looks at the underlying disk, and so is extremely sensitive to changes in the way the filesystem works. As a result, it's not very robust (though it can be speedy).

          Linus's advice is very good. Hopefully dump will just go away altogether; its time has gone.
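To make the distinction concrete, here's a minimal sketch of the filesystem-level approach the parent describes, using GNU tar on a throwaway directory (all paths here are temporary and hypothetical):

```shell
#!/bin/sh
set -e

# Throwaway tree standing in for a real filesystem.
work=$(mktemp -d)
mkdir -p "$work/src/etc"
echo "hello" > "$work/src/etc/motd"
chmod 640 "$work/src/etc/motd"

# Back up through the normal filesystem interface; no raw device access,
# so this keeps working no matter how the on-disk format changes.
tar -C "$work/src" -czf "$work/backup.tar.gz" .

# Restore into a separate tree; -p preserves permissions on extract.
mkdir "$work/restore"
tar -C "$work/restore" -xzpf "$work/backup.tar.gz"

cmp "$work/src/etc/motd" "$work/restore/etc/motd" && echo "restore OK"
rm -rf "$work"
```

The same round trip works on a live, mounted filesystem, which is exactly what dump can't promise.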
    • I use Mondo Archive [microwerks.net]. Works great for me.
      If it's the latter, can any of you linux gurus tell me what is the current "accepted" solution for making backups. Not archives or images, backups.
      Mondoarchive clearly doesn't do disk imaging. I'm not clear on the distinction you're making here between backups and archives. The issues mentioned in the abovementioned post from Linus [lwn.net] are:
      • Backing up without unmounting disks. Mondoarchive does fine with that.
      • Altering atimes and ctimes. I haven't checked this so I don't know what Mondoarchive does with them.
      Mondoarchive can do incremental backups. Internally it uses afio for all of its work.
    • tee hee... "dump" *giggle*

      (as a side note - shouldn't having karma of "fucking awesome" or whatever mine is, relieve you of the whole "you must wait two minutes between posts" thing? ok, maybe this wasn't the best post to attach this gripe to, but still...)

    • by Anonymous Coward on Friday August 02, 2002 @11:10PM (#4003074)
      Linus is unlikely to change his mind about dump being a stupid program, because the concept of dump is just plain broken.

      If you want to back up raw devices, use dd.

      If you want to back up filesystems, use tar, cpio, or similar programs. These tools will back up everything that the standard Unix APIs expose about files.

      Dump, however, tries to do more. Since there isn't an API to get what it wants to know, it has to read the raw device and duplicate the kernel's interpretation of the raw data. Since the kernel is being bypassed, there's no way to ensure that the data is coherent; sooner or later, something will get out of sync and bite you.

      You can make dump work right, if you add new hooks into the kernel, extending the API. So it may work fine on Solaris, for example. But I don't think that those hooks are part of the POSIX standard. Dump is always bound to a particular implementation, with zero portability.

      Figure out what you really need. Most can get by with file backups. If you truly need to restore to the same block numbers, then use dd.
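A toy illustration of the dd-for-raw-devices point above, using an ordinary file in place of a real block device (a real run would target something like /dev/hda1, unmounted):

```shell
#!/bin/sh
set -e

work=$(mktemp -d)

# Fake "raw device": a small file instead of /dev/hdaX.
dd if=/dev/zero of="$work/disk.img" bs=1024 count=64 2>/dev/null
echo "some filesystem bytes" | dd of="$work/disk.img" conv=notrunc 2>/dev/null

# Block-for-block image copy: preserves the exact block layout,
# which tar/cpio (working at the file level) do not.
dd if="$work/disk.img" of="$work/disk.backup" bs=1024 2>/dev/null

cmp "$work/disk.img" "$work/disk.backup" && echo "images identical"
rm -rf "$work"
```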
      • Linus is unlikely to change his mind about dump being a stupid program, because the concept of dump is just plain broken.

        What's broken about incremental backups? When you run a datacenter that needs 10TB of data backed up every night, you're going to have a hard time doing that with dd/cpio/tar. Incremental and/or differential backups are the only way to go. Now, you can debate the merits of dump using the raw device and bypassing the buffer/cache, but the idea of dump is not wrong.

        Btw, FreeBSD dropped raw devices two years ago, and its dump still works. Also, unified buffer/caches are not all they're cracked up to be.
        • by AJWM ( 19027 )
          Incremental and/or differential backups are the only way to go.
          $ man find

          FIND(1L)
          [...]
          -newer file
          File was modified more recently than file. -newer is affected by -follow only if -follow comes before -newer on the command line.
          Works great for providing the list of files to do an incremental backup of with tar or cpio.
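The find/tar combination above can be sketched like this, on a throwaway directory (temporary paths, GNU find and tar assumed):

```shell
#!/bin/sh
set -e

work=$(mktemp -d)
mkdir "$work/data"
echo "old" > "$work/data/old.txt"

# Stamp file recording when the last full backup ran.
touch "$work/last-backup"
sleep 1
echo "new" > "$work/data/new.txt"

# Archive only the files modified since the stamp; -T - reads the list from stdin.
find "$work/data" -type f -newer "$work/last-backup" \
    | tar -czf "$work/incremental.tar.gz" -T - 2>/dev/null

# Only new.txt made it into the archive.
tar -tzf "$work/incremental.tar.gz" | grep -c txt    # prints 1
rm -rf "$work"
```

For the next incremental, touch the stamp file again right after the archive completes.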
        • by Anonymous Coward
          Nothing's wrong with incremental backups. You can even do them through the file system layer, which is safe. But dump operates below that layer, which is dangerous.

          Dump/restore have great features, are useful, etc. That's not the problem. The problem is that bypassing the kernel's file system code is an unsafe kludge. It's like jumping directly into the middle of the code for some function, because you want to access some internal variable that's not exposed through the normal interface.

          The idea of using the raw device while it's mounted is what's wrong with dump. I have no objection to the rest of it.

          The file system calls expose the data that most people need. Some of the file system internals aren't exposed, making things like sparse files hard to deal with. The right way to do dump would be to add calls that tell the kernel to expose that data, so that the kernel can do the needed synchronization. Doing that in user code that gets the file system structures by reading blocks straight off of the disk is just plain broken. It won't work in Linux, where dump/restore are not managed as part of the kernel release process. *BSD and commercial Unixes always include both the kernel and dump in any release, making it possible for it to work.

          But the right way to get the data is to ask the kernel for it.
      • If you truly need to restore to the same block numbers, then use dd.

        But we note that dump is not a disk imaging tool. It's a backup program with support for things like incremental dumps, interactive restores, etc. It can't be replaced with dd, as should be obvious to anyone doing backups in the real world.

    • Dump? Use of dump was discouraged when I started using Unix back in the mid-1980s.

      For home/small systems, 'find' and 'tar' or 'cpio' are fine ('find' so you can do incrementals). For serious stuff, use one of the professional packages.

      Of course it also depends on how much you want to back up and what your budget is like. A 100 GB removable drive and 'dd' could be all you need...
    • Use LVM [sistina.com], filesystem snapshots, and tar (or cpio).
    • We have been using BRU backup software on our Linux servers for the last couple of years. Recommended. The price is right too. They have fully working demos for download. http://www.tolisgroup.com/

      The main reason that we chose a commercial package was, that backups, especially on DAT streamers, can be a nasty experience. After experiencing a couple of "write-only" backup incidents (on NT 4.0 using DAT 1 and 2 streamers), I wanted something that actually verified the backup, and had extremely good error-logging, and CLI/scripting facilities.
      Since BRU is probably the oldest commercial Linux and BSD backup package, it was the best choice at the time. There are several other solutions now.

      Some advantages of BRU:
      Good CRC-32 checking to ensure that what you _try_ to back up actually ends up on the tape in a non-corrupted state.
      Fast verify.

      Excellent error-logging.

      Back up of live filesystems, and special files like sparse files, pipes, special device links etc.

      Excellent CLI options, like regex selection of files or filesystems to back up.

      We still use v.16, but v.17 has Quick File Access (QFA). It also has a better GUI, but BRU's real power is as a CLI program.

      They also have a free (QPL-licensed) program called CRU, that enables booting from tape (if the streamer supports it, like HP's), and making a complete restore of the OS and data, including fdisk'ing, in one go.
      (You just press a button on the DAT streamer while the server boots.)

      • The name BRU rang a bell, I seem to recall the backup engine under HDBackup in AmigaOS 2.x/3.x was called BRU - Backup and Recovery Utility or something like that. Any relation?
        • It certainly looks like it, not only because of the name but also since the syntax/switches (http://www.amigarealm.com/) are basically the same as in BRU from Tolis Group. E.g. BRU -G for getting archive info, etc.

    • Well, despite the opinions of some, I also hope that dump gets the love it needs from kernel developers soon. There's more to some filesystems than tar or cpio take into account. ACLs spring to mind... Neither tar nor cpio back those up. Basically any file system which offers extended management features will need its own dump program. It's all covered in O'Reilly's "UNIX Backup and Recovery", which everyone who thinks that tar and cpio are always good enough should read.
    • Lonetar + rescue ranger [lone-tar.com]! It's a carryover from my SCO days. SCO server failure? If you've got lone-tar/airbag you're up w/out hassle. Freaking reliable.

      Backups for linux are fire and forget too. Fortunately, I've never had a catastrophic failure under linux, so I've not had a need to test RR outside of a lab environment. (Where it works fine)

  • by account_deleted ( 4530225 ) on Friday August 02, 2002 @10:09PM (#4002906)
    Comment removed based on user account deletion
  • Hrm. (Score:4, Funny)

    by Jonny 290 ( 260890 ) <{brojames} {at} {ductape.net}> on Friday August 02, 2002 @10:18PM (#4002936) Homepage
    Insert standard Darn-And-I-Just-Finished-Downloading-The-Last-One-Yesterday wisecrack...
    • finished downloading? geez, it's only about 25 megs, it's not like you wait for it...
      • Out here in the real world, some of us are stuck using 28.8K modems.

        Hmmmph.
        • um, yeah, having broadband (along with, what is it now, about half the US?) means I live in some sort of fantasy world.

          out of curiosity - I keep seeing these 28.8 references, I can understand not being able to get DSL or cable, but why not at least buy a more or less modern modem? 56.6 came out a looong time ago

          • Oh, I own a 56.6 modem, it's just that the line quality here is crappy enough that it always drops out after 10 minutes. So I have to use an older one (actually it's 33.6, not 28.8 -- fat lot of difference that makes).

            That's what a telecommunications monopoly will do for you (Telstra, in Australia) -- as does their anticompetitive restriction of local loop access and bandwidth pricing. I live in a city of over a million people, but apparently it would cost them too much to provide decent services...
        • by Eil ( 82413 )

          I was stuck with 28.8 dialup internet access for the longest time, so I know how badly it sucks to *deserve* broadband (or even a decent 56k) but still feel like I'm stuck in the early '90s technology-wise.

          Now that I have broadband, I will admit that I've been pretty much spoiled. In a few weeks however, I'm going to have to switch back to 28.8 due to a location, career, and (positive) marital status change. I'm hoping that reverting to a much slower means of internet connection won't be too much of a shock since I spent all those years at 28.8 prior.

          See, with a slow net connection, one truly realizes the value of kernel patches. Larger patches can still take a while to download, but not the nearly 24 hours that the entire Linux kernel source would normally take.

          But with broadband, where downloading an entire kernel can take less than 30 seconds, it's not even worth your time to download 3 to 4 (or more) version patches and then sit there trying to remember the exact command to patch the sources while hoping you didn't just screw yourself by patching the wrong tree* and/or applying the patches out of order, etc etc.

          * Am I the only one who believes that making the kernel source extract into ./linux rather than ./linux-$major.$minor.$pl is just a stupid idea? So far I've not seen one good bit of evidence for why this is superior other than "it's always been done that way."
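          For the record, the patch dance is mechanical once you've seen it. A toy run of the same steps, using two fake trees and hypothetical file names (GNU diff and patch assumed):

```shell
#!/bin/sh
set -e

work=$(mktemp -d)
cd "$work"

# Two fake "kernel trees": the old version and the new one.
mkdir -p linux-2.4.18 linux-2.4.19
echo "VERSION = 2.4.18" > linux-2.4.18/Makefile
echo "VERSION = 2.4.19" > linux-2.4.19/Makefile

# Generate the incremental patch (roughly what a patch-2.4.19.gz contains).
# diff exits 1 when the trees differ, hence the || true.
diff -urN linux-2.4.18 linux-2.4.19 > patch-2.4.19 || true

# Apply it to a copy of the old tree; -p1 strips the leading directory name.
cp -r linux-2.4.18 linux
cd linux
patch -p1 < ../patch-2.4.19

grep VERSION Makefile    # prints: VERSION = 2.4.19
cd /
rm -rf "$work"
```

          Against a real tree it's the same shape: cd into the old source dir and run `gzip -cd ../patch-2.4.19.gz | patch -p1` (with `--dry-run` first if you're worried you grabbed the wrong tree).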
          • by Eil ( 82413 )

            Hmm, I take back the asterisk note. Someone mentioned a few posts down that starting with 2.4.19, the kernel does in fact extract into a directory with the version number.
  • by randomErr ( 172078 ) <.ervin.kosch. .at. .gmail.com.> on Friday August 02, 2002 @10:26PM (#4002955) Journal
    Anyone else notice that in the last couple of days Microsoft's ad for Visual Studio .NET keeps coming up in the rotation whenever there is a Linux story?

    Wonder how much that cost them to buy those keywords? Could C. Taco be enjoying a quiet vacation on an island somewhere?
  • Directory name... (Score:5, Informative)

    by Atzanteol ( 99067 ) on Friday August 02, 2002 @10:43PM (#4002998) Homepage
    Anybody notice? Whenever you *used* to untar a new kernel tarball, it created a directory 'linux'. Now it creates 'linux-2.4.19'.

    'Bout time! I always hated creating a temporary directory to uncompress to...
    • It's good to know that the maintainers are doing this. Having the extraction directory be linux/ has caused me to blow away a kernel tree once or twice...

    • And here I always just used

      tar tfz linux-x.x.xx.tar.gz | head

      to see if it would uncompress to linux/ or linux-x.x.xx/ and if it shows the former, simply:

      mkdir linux-x.x.xx
      rm linux
      ln -s linux-x.x.xx linux
      tar xfz linux-x.x.xx.tar.gz

      Not too bad. If the result of the first command shows the tarball did extract to linux-x.x.xx (which I don't recall it ever doing), the process is pretty much the same, but in a different order:

      tar xfz linux-x.x.xx.tar.gz
      rm linux
      ln -s linux-x.x.xx linux

      and the only thing that's missing from the previous list of commands is the mkdir command. So, I really wouldn't consider it that much of a difference either way.
  • Labels... (Score:5, Funny)

    by anakog ( 448790 ) <anakog@yahoo.com> on Friday August 02, 2002 @10:48PM (#4003010) Journal

    (02/06/06 1.537.2.10)
    [PATCH] Re: mislabelled label patch

    No pun intended...

  • by Spooky Possum ( 80044 ) on Friday August 02, 2002 @11:58PM (#4003225)
    (02/07/17 1.642)
    [PATCH] PATCH: personality clashes

    If only they were all that easy to fix ...
  • by Joe Tie. ( 567096 ) on Saturday August 03, 2002 @12:37AM (#4003339)
    (02/07/30 1.659)
    PATCH More -ac merge

    Sweet, now my system will scream "FIRST BOOT!!!!" at me when I turn it on. :)
  • by Nomad128 ( 579708 ) on Saturday August 03, 2002 @02:24AM (#4003599)
    Hey all,

    Assuming someone else on this list was, like me, silly enough to buy a PowerVR Kyro-based graphics accelerator, here's a fix for a compile bug that I got w/ kernel 2.4.19 and gcc 3.1:

    drm/pvr_drm_vm.h, line 138, change to:

    physical = (unsigned long)page_address(pte_page(pte));

  • by Rufus211 ( 221883 ) <rufus-slashdotNO@SPAMhackish.org> on Saturday August 03, 2002 @02:47AM (#4003669) Homepage
    Well, I've seen a few instructions for Debian, but they're either wrong or not commented, so I'll try my own also.

    First, get the sources. I don't see them in the Debian tree yet, so get them from kernel.org yourself. Put it in /usr/src/linux or wherever your favorite place is.

    To compile (all in /usr/src/linux):
    # optional: tells debian to apply any debianized patches (eg. preempt, ReiserFS, XFS, whatever)
    # very important to do *before* config, or else you'll be configuring and building different things
    export PATCH_THE_KERNEL=yes
    make-kpkg --append-to-version "-me" -rev test.1 --initrd debian
    # configure the kernel as you choose
    cp /boot/config-2.4.18 .config
    make oldconfig # or x/menuconfig
    # build the kernel image
    make-kpkg --append-to-version "-me" -rev test.1 --initrd kernel_image
    # optional: build debianized modules (eg. nvidia, lirc, alsa)
    make-kpkg --append-to-version "-me" -rev test.1 --initrd modules_image
    # install the resulting .deb's
    cd ..
    dpkg -i *2.4.19-me*.deb

    Explanation of make-kpkg options:
    --append-to-version: optional, but a good idea. Makes the kernel version into 2.4.19-me and avoids any conflicts by installing to /lib/modules/2.4.19-me, /boot/vmlinux-2.4.19-me, etc.
    -rev: needed for the debs. Good as long as it has some number in it.
    --initrd: tells it to build the initial ram disk (/boot/initrd.img-2.4.19-me). Not sure if it's really needed, but all Debian kernels have one so I figure I might as well use it.

    I'm aware that not all of the options are needed on all of the commands, but I figure for safety and consistency's sake I'll just leave it as is.

    Hope this helps someone.
  • by David McBride ( 183571 ) <david+slashdot&dwm,me,uk> on Saturday August 03, 2002 @02:57AM (#4003695) Homepage
    Having a trojaned SSH build script was bad enough.

    You *really* don't want a compromised kernel. Use the signatures [kernel.org].
  • One thing I really wish kernel releases had was a way to rsync/cvs/bk whatever to the release kernel. That way only the files that have been changed get sent. kernel.org's rsync is set up to let you mirror the site, but not an individual kernel. I'm thinking of the kind of access provided to the kernel sources on the penguinppc.org project [slashdot.org]. That way, I can start with any bastardized kernel source and arrive at a pristine new source dir without using up the bandwidth to download the whole thing. Heck, I can even exclude the architectures I'm not using, saving even more bandwidth.

    Anyone know if/where to get this kind of access to the kernels?

    • That's not an altogether bad idea, but I think the kernel patches provide for most people's needs in this area.

      I myself just keep one recent "pure" linus tarball and whatever patches I might want to apply. A few months ago, I was testing out various versions of the 2.4 series and ended up downloading every kernel with a patchlevel divisible by 5. I then proceeded to download whichever version patches I needed to get the kernel version that I was looking for. Saved myself a ton of time doing this.

      I think it would be great if support for architectures other than x86 were provided as patches instead of the main tree, but I suspect doing this would be a huge pain initially (going through dozens of megs of code to figure out what's x86 and what's not) and only add additional overhead to development of the kernel.

      The KISS principle applies pretty strongly to OS kernel development, and even more strongly to the largest OS kernel in existence. (Yes, that'd be Linux, for the humour-impaired) :P
  • But it has some serious personality problems. Luckily, I'm not the only one who noticed:

    (02/07/17 1.642)
    [PATCH] PATCH: personality clashes

"Experience has proved that some people indeed know everything." -- Russell Baker

Working...