Linux Software

2.2 vs 2.4

bevo wrote to us with a shoot-out review of kernel 2.2 vs 2.4. It covers what's been updated, includes some benchmarks, and has a good page on how to compile your own kernel, including module recommendations and such.
This discussion has been archived. No new comments can be posted.

2.2 vs 2.4

  • Well, I for one certainly hope so. The fewer bug reports for me to handle, the better... I can't say that I've been drowned in reports, though.

    The reason a lot of people run v2.0.xx is that this kernel-series is well tested and most bugs documented, hence its behaviour is mostly predictable.

  • if you record right to mp3, the best encoder out there (so far, the frau CLI one) is SO FAR FROM REALTIME it's not even funny. It takes well over 10 minutes to encode a typical 4-minute song.

    I'd have to buffer to hell to be able to encode a la Unix filters.

    and like the other guys said, I want to edit the music after the initial capture; mp3 is not a directly editable format.

    --

  • If you're recording off the radio to begin with, you're not going to get cd quality anyway.
  • Depends what kind of editing you need to do. If you just want to split it up into smaller pieces without doing any real audio processing (I'm assuming you don't want to run it through a vocoder =) well that can be done with a few utils that work directly on the MP3 stream without recompressing.

    As far as the encoding is concerned, I strongly doubt the Fraunhofer codec would be considered the best. Fraunhofer may have produced the first popular MP3 encoders, but they certainly don't know much about audio fidelity, because their encoders all distort the upper end of the audio spectrum. I honestly find that our beloved LAME encoder works fine and cleanly. I still have my highs (the resulting MP3 data is actually too clean for my MPTrip, which tends to clip since it wasn't tuned for such strong high ends). Not only does it retain good sound quality but it is also lightning fast on my PC (P3-850), averaging 9x encoding speed. That's nine times faster than you need to record live.
  • nope, two different cards.
  • I strongly doubt the Fraunhofer codec would be considered the best

    I might not have qualified my statement enough, then; for 128k encoding, it's still surely the best. Show me any other encoder that produces as listenable a result at only 128k. The freeware ones are known to be better at 160 and above (more like 192k for me), but for the highest compression ratio, frau is still (sadly) the king.

    their encoders all distort the upper-end of the audio spectrum

    I've not been bothered by this, even though I've heard it's true. I'll easily trade some mushiness in the high end for ringing (yikes!) in the midrange. Midrange 'honk' (no, I'm not being racist [g]) is so objectionable, I just cannot listen to anything at 128k but frau output.

    yes, I'll give you that LAME is fast. But it doesn't optimize nearly as much as frau does. When I give frau the switch -qual 9, it takes FOREVER to process; but I then know it "huffman'd the hell" out of the source and found every bit of space it could save. By doing that, it has more bandwidth (bits) to allocate to the high and mid end.

    --

  • I shouldn't bother replying this late in the game. But I hate to see ignorant posts.

    A UDMA100 channel is capable of up to 100MB/sec. We've actually tested various UDMA 100 controllers on RAM-based IDE drives (where storage was to NVRAM, not a platter), and we've seen sustained transfer rates of up to 70-92 MB/sec (depending on the IDE controller). Yes, 92MB/sec. What does this mean? Well, when the data goes from the media->drive cache->cable, it's capable of going at the same speed as data coming off the drive. Your limiting speed is the media read/write speed.

    Sure, if you are using a 40MB/sec drive on a UDMA 33 channel, your limiting factor will be the IDE channel. But that's like using a UW drive on a SCSI 1 controller. Same issues.

    Ok, driver issues? Exactly what driver issues are you referring to? The only drivers you need for IDE are for the controller, and only if you want to use the high performance features of SOME manufacturers. (Same as with SCSI, or are you trying to tell me I don't need the Adaptec 2940 etc. SCSI driver to use a SCSI drive?)

    As for this sentence (your quote): because I said: "Just because you can get the data to the drive cache that quickly doesn't mean you can pump it over the channel that fast... ie the speed of the channel. 33/66/or 100 MB/sec (UDMA33/66/100)"

    "Just because you can get data to the drive cache... "
    Well if I get the data to the drive cache that quickly it's ALREADY traveled over the channel. Tus the speed the data arived in the cache == speed of the channel. hmm make sense? Or are you trying to convince me that data magically arives at the drive cache and THEN travels over the channel. The only way I could logically interpret that sentance was by by assuming that by channel you are refering to the read/write channel... which is the internal channel between the cache and the read/write channels ON the drive itself.

    So before you argue with someone who's written drive firmware as well as debugged drive hardware, please read up on disk drive design 101.

    On a further note, 33/66/100 are in fact SUSTAINED rates... not BURST. Oftentimes you hear people say "you'll never see 100MB/sec with current drives except in a burst mode". What they are referring to is that since the read/write channel of the drive (the read/write speed to the media) can't sustain 100 MB/sec (limited to about 40MB/sec right now), the only time you see 100MB/sec is during a short burst where data is being dumped to the drive cache.

    However... when the cache is full, the channel speed will back off to the sustained rate of the drive. So on a drive with 2MB onboard cache, the first 2 MB will BURST at 100MB/sec... any data following that (if we are constantly streaming) will then be transferred at the to-media speed of the drive, 40MB/sec in the case of the Seagate drive.

    This IS the same thing as putting the X15 on a UW (80MB/sec) SCSI controller. The drive will burst at 80 MB/sec until its cache is full, after which it will transfer at 41.5 MB/sec in the case of the X15. In other words, the same damn principle applies whether you are using a SCSI or an IDE drive. And by the way, a 2-3 year old SCSI drive will no way in hell sustain 33-35 MB/sec as you claim. Check your numbers. Like I've said earlier, the sustained transfer rate is a function of (media density)*(rpm). The current generation platter technology is now at 20GB/platter (soon to be 30); 2 years ago they were still at about 6GB. This is roughly a 2.5-3x increase in raw transfer rate. A 2-3 year old 7200 rpm drive is somewhere near 18-20 MB/sec at best.
    Oh, and BTW... right now IDE is leading in platter density, which also explains why a Seagate 7200 RPM IDE drive has nearly the same sustained rate as their 15000rpm SCSI drive. The SCSI drive uses lower density platters. I can give you a reason for that: SCSI drives are optimized for high seek usage such as databases or heavily loaded servers. To gain that kind of seek performance (about 1/2 the seek time of IDE drives) they have to use lower density platters to give themselves more tolerance for off-track errors. The tracks are wider and more forgiving if a head is slightly off the center of the track.

    I will stick by what I have said earlier. SCSI is a waste of your money unless you have a need for more than 4 drives in your PC or are running a heavily accessed DB or an overloaded (seek-heavy) server. So before you flame me for not checking MY facts... please ensure you have yours straight. And please get a clue about what you are talking about. What you have said is typical "I spent a crapload of money on SCSI hardware and need to defend my decision" FUD.
  • by Tower ( 37395 )
    In terms of the L2 cache stealing... say you only have 64/128kB of L2... that you could store in an internal RAM on a network card, though most don't have buffers quite that large. Even if they did, the card wouldn't have access via the PCI bus to the L2 RAMS directly... you'd have to wrangle it out with the device driver. Hmmm, they'd probably end up with a few copies of the task schedulers and network device drivers, though there could be some interesting stuff in there, if it knew when to look (that's the key). If it sampled from the cache only when it was going to xmit (the couple of bytes), then you'd end up with a lot of garbage... better to do spurious DMAs from known good pages in memory... hmm, that *would* make an interesting project.

    --
  • Thanks for eventually replying...

    >"We've actually tested various UDMA 100 controlers on a RAM IDE drives. (Where storage was to NVRAM not a platter) And we've seen sustained transfer rates of up to 70-92 MB/sec (depending on the IDE controler) Yes 92MB/sec."

    Yes, and it's that 70-85 range that bothers me (probably limited on the other side of the controller), though UDMA/100 is better than UDMA 66 and 33 in terms of actually reaching the theoretical throughput. With some of the Quantum solid state drives, we've achieved 75+MB/s on an U2W system (a few different controllers) (and 70+ on many others).

    >"Sure if you are using a 40MB/sec drive on a UDMA 33 channel you're limiting factor will be the IDE channel. But that's like using a UW drive on a SCSI 1 controler. Same issues."

    UW on SCSI1... that's more like using a 40MB/s drive on a PIO/1 channel... or 40MB/s 66/100 drive on a UDMA/33 channel is like using a 40MB/s U2W drive on a F/W or UW channel.

    >"Ok Driver isssues? Exactly what driver issues are you refering too? The only drivers you need for IDE are for the controller and only if u want to use the high performance features of SOME manufacturers. (Same as with SCSI or are you trying to tell me I don't need the Adaptec 2940 etc SCSI driver to use a scsi drive? )"

    I was referring to the specific device support, rather than the driver for the controller. SCSI has an (IMHO) nicer command set, which makes disparate types of devices easier to work with.

    >"Well if I get the data to the drive cache that quickly it's ALREADY traveled over the channel. Tus the speed the data arived in the cache == speed of the channel. hmm make sense? Or are you trying to convince me that data magically arives at the drive cache and THEN travels over the channel."

    I'm getting the data to the cache from the media, as I've tried to make clear with: "disk -> cache" (assuming that one enables the read cache on the drive), since bursting from the cache (as I was saying) gives the highest possible channel speed.

    >So before you argue with someone that's written drive firmware as well as debuged drive hardware please read up on disk drive design 101.

    Ok, I talked to myself and some others here at work (some guys who collectively have a couple decades of IDE and SCSI hardware and firmware experience), so you could take that advice yourself, and try to read and understand my comments before you reply... so far you have managed to take quite a bit out of context and not attempted to understand the basic points.

    >"And by the way an 2-3 year old SCSI drive will no way in hell sustain 33-35 MB/sec as you claim. Check your numbers. Like I've said earlier the sustained transfer rate is a component of (media density)*(rpm). The current generation plater technology is now at 20MB/platter (soon to be 30) 2 years ago they were still at about 6MB. This is roughlya 2.5-3x performance increase in raw transfer rate. A 2-3 year old 7200 rpm drive is somewhere near 18-20 MB/sec at best. "

    But I said (note the new emphasis):
    "Well, using my Adaptec 2940-UW, and ***a couple*** of 2-3 yr old (maybe a generation ago, if you like) UW drives (10krpm 9GB, 7200 RPM 18GB) I can do 33-35MB/s at about 4-6% util"

    Note that "a couple" of drives would indicate that more than one is actually used at once... two at 18-20MB/s (which is what they do, roughly) => ~36MB/s. Wow. I'm glad you read that line before irrationally flying off the handle again.

    >"I will stick by what I have said earlier. SCSI is a waste of your money unless you have a need for more then 4 drives in your PC or are running a heavily accessed DB or over loaded (Seek Heavy) server. So before you flame me for not checking MY facts.. please insure you have yours strait. And please get a clue to what you are talking about. What you have said is typical "I spent a crapload of money on SCSI hardware and need to defend my decision FUD".

    I agree that the new IDE drives are a far cry better than they used to be, and priced very well. Right now I have 8 hard drives, two removable media drives, two CD/DVD/CD-R drives, and one tape drive attached to one system (for personal use). Nice and easy with SCSI (maybe once I see some reasonable DDS-3 IDE drives we'll talk).

    >What you have said is typical "I spent a crapload of money on SCSI hardware and need to defend my decision FUD"

    Actually, I spent fairly little money (relative to the regular retail cost of the setup), and there's something about working with machines at work that (typically) have anywhere from 30-300 drives (not to mention the amazingly expensive controllers we happen to produce). Sounds like a pretty good reason to me. I'm justifying work results more than home use. My two other systems are IDE, with gobs of storage (cheap). Personally, I have tested the speed of the same system with the 2-3 yr old SCSI storage in it vs. the 6-mo old IDE storage... the older stuff comes out ahead on anything that bothers with any sort of seeks.

    BTW, the areal density on the biggest SCSI drives isn't any different than that of the big IDE drives right now.

    I'm not spouting FUD to justify myself. I mostly object to the fact that you can't read and digest my comments without misunderstanding them.
    --
  • Yeah, but what I meant was they took the 2.4 "version" of USB support and backported that to the 2.2.18 kernel. And you're right, though IMNSHO "completely hosed" is a slightly better description than "broken".
  • I haven't run it on any 486/66s, but I have run it on a P133/64MB. Compared to 2.2.18, 2.4.0 feels very good. Memory management is improved, as there is less swapping, and programs feel like they run faster.

    There's no reason not to at least try it out, (except that it'll take a couple hours to compile the kernel on your 486!), so go for it.

  • Well in that case let's digest FURTHER. Perhaps the problem is that yer comments aren't exactly that clear.

    First Off:

    I'm getting the data to the cache from the media, as I've tried to make clear with: "disk -> cache" (assuming that one enables the read cache on the drive), since bursting from the cache (as I was saying) gives the highest possible channel speed.


    Ok, so if on a UDMA100 controller I can get data from a solid state drive at 80-90MB/sec... that means my channel can sustain up to 90MB/sec.
    If the drive can sustain 40MB/sec, then drive->cache->controller->PC memory can sustain 40MB/sec.
    Therefor your original comment about:

    Peak media transfer rate != channel rate... Just because you can get the data to the drive cache that quickly doesn't mean you can pump it over the channel that fast.

    Is, like I have stated previously, incorrect. You are always bottlenecked by the slowest component, which is in fact the drive.

    Next Up:

    But I said (note the new emphasis):
    "Well, using my Adaptec 2940-UW, and ***a couple*** of 2-3 yr old (maybe a generation ago, if you like) UW drives (10krpm 9GB, 7200 RPM 18GB) I can do 33-35MB/s at about 4-6% util"


    Ok now. If we are to compare apples to apples, I can take a "couple" of 2-3 year old IDE drives and ALSO get 33-35 MB/sec (RAID 0, striped across 2+ IDE drives). Apples to apples.

    And Finally:

    The ATA specification is in fact a subset of SCSI. ATA was never intended to be used with tape drives, scanners, printers, etc. ATA is a simplified technology and command set for random access block devices only (disks, CD-ROMs, etc.). Heck, look at the supported commands and their hex values... they are the same as SCSI. And simple is not always worse. It is true that 5-6 year old IDE controller implementations were frankly crap... this is not the case today. As with all things in technology where there is harsh competition, the products released improve in quality tremendously from generation to generation.

    BTW, the areal density on the biggest SCSI drives isn't any different than that of the big IDE drives right now.

    Please prove me wrong by showing me a 7200RPM (or faster) SCSI drive that I can purchase today with a platter density of 20GB/platter or more. The one such IDE drive is the Western Digital Caviar WD400BB.
    The reason you can't is because there isn't one. SCSI drives sacrifice capacity for performance. The lower platter density is what allows SCSI drives that 5ms access time. The other reason is that IDE drive manufacturers are in a capacity war, while SCSI vendors are in an access-time war: 2 different requirements yielding 2 different results. IDE == high capacity / high access time. SCSI == low capacity / low access time. Each has its use.
  • 66MHz 486? Jesus, are you running the Hubble Telescope??

    As the wise old UNIX programmer from the Dilbert cartoons once said:

    Here's a quarter, kid. Go buy yourself a real computer.

    Well, to give you an example of how a 66MHz 486DX2 would be useful: dirt cheap NAT/Firewall box. Yes, the Linksys box is going for less than $100 in the two-port (one for the network, the other for the Internet) configuration, but I was able to get an old DX2-50 box for $30 from a friend.

    I have two NE2000 NICs sitting around, a CD-ROM, and an ISA IDE card that originally came with another CD-ROM... these are parts in my parts pile, so they're technically free (as in beer). Linux or FreeBSD or OpenBSD are all free OSes (both as in beer and as in libertas). So I'll have a NAT/Firewall box for $30. And one more computer will be saved from the landfill and made useful again.


    ----
    http://www.msgeek.org/ -- Because you can't keep a geek grrl down!

  • I used to think that, but I recently hacked a cheap machine for use at home. ATA-66 (VIA 82C686), 40GB EIDE drive (Maxtor 54098H8).

    Under FreeBSD 4-2:

    su-2.03# dd if=/dev/ad0s1 of=/dev/null bs=524288
    1302+1 records in
    1302+1 records out
    682665984 bytes transferred in 23.191338 secs (29436248 bytes/sec)
    su-2.03#

    That's almost 30MB/s sustained. I was shocked. I thought I would get a much lower number.

    SCSI is still superior when dealing with multiple devices, or when doing complex things, but EIDE is pretty good these days...

    Cheers,

    --fred

  • Has anybody tried compiling linux-2.4 with gcc-2.95? Is this still considered Bad Juju?
  • You are WRONG, sir. I've worked in the HD industry for 3 YEARS. And the fact of the matter is that an IDE drive can SUSTAIN over 40MB/sec at the outer diameter. THIS IS THE TO-MEDIA rate. No cache involved. Most IDE drives these days come with 2MB cache, so if you write a 2MB file you will see peak burst rate, ie. the speed of the channel: 33, 66, or 100 MB/sec (UDMA33/66/100).

    However, after the 1st 2MB the cache is full and you will see the true to-media data rate, which like I've said is upwards of 40MB/sec on the outer diameter and 26+MB/sec on the inner diameter. Remember, the OD moves FASTER than the ID relative to the head. This is why the OD transfer rate is much higher.

    The BS you are saying about SCSI is just that; BS and FUD. The only difference between SCSI and IDE is the protocol and controller. It's the same PHYSICAL hardware with a diff chipset. SCSI vendors would like you to believe that their SCSI drive is FASTER than everyone's IDE drive, but that is just NOT THE CASE. (It used to be the case but is no longer true.)

    SCSI is excellent for multiple devices on a chain. That's about it. (Well, that and if you want to pay for that 10K rpm drive, since there are no IDE drives at 10K right now.) Additionally, those 180MB/sec and 360MB/sec SCSI transfer rates are also misleading. For that kind of performance they require a 64-bit/66MHz PCI slot... because otherwise you are limited to a max of 132MB/sec by your PCI bus ANYWAY.

    So before you go off and tout how SCSI creams IDE, first check your facts and LOOK at www.storagereview.com and compare 7200 rpm SCSI and 7200 rpm IDE drives.

    The problem you see with IDE drive "slowness" is because your PC does not have UDMA mode turned on: "hdparm -d1 /dev/hda" (or compile it in as the default in the kernel). A current generation IDE drive running in UDMA100 mode streaming data at 35MB/sec only uses 10-15% CPU on a P2-400.

  • by tao ( 10867 )

    If I'm not all mistaken, this is due to a problem in XFree86, not in the Linux kernel. You should be able to solve this problem by updating to a later version of XFree86 (possibly a CVS version is needed).

  • I don't get it.. this guy sounds like Steve Ballmer on pot.

    In addition to this robust filesystem support, the 2 GB file size limit has also been done away with -- although I don't know exactly who has 2 GB files sitting around on their platters

    How clueless can this guy be? If someone went to such great lengths to defeat the 2GB limit, then I'm pretty sure it's because it's been a problem for a while. Uncompressed video comes to mind, where a reasonable clip bucket can contain well over 10GB of data. Databases also get pretty huge when you start collecting data from the web (search engines perhaps, or DoubleClick stats). Next.

    Also, at the speed processors are progressing at (we're already at 1.5 GHz), the 2 GHz mark is coming soon -- something Linux 2.4 adds support for.

    Ok, will someone tell me how the hell you add support for speed? Ok, so 8 years ago we had a problem with Turbo Pascal's timing loop overflowing, and they all learned to never write such stupid timing code ever again. What could possibly work on my P3-850 that won't work on a 2GHz CPU (besides CGA Pong)? Software doesn't need to know the speed of the supporting CPU; it just needs to do its thing as fast as it can, and wait if absolutely necessary. Timing is important in code (especially games), speed is not. Speed is merely a byproduct of time.

    Linux 2.4 adds support for Wireless LAN devices and also includes PPPoE within the kernel itself

    Funny, I thought anyone could include PPPoE in the kernel when configuring it for a recompile. Perhaps he could have mentioned that it was enhanced and tweaked, but saying it's included in the kernel is like saying your brand new car comes with a steering wheel. Du-uh!

    Well that's about all I could squeeze out of that one page of blab. Have a great kernel!

  • I may be wrong here, but isn't there more to upgrading from 2.2 to 2.4? I've not upgraded myself, but I've seen lots of people in IRC and newsgroups talking about having problems with loading modules.
  • by f5426 ( 144654 ) on Wednesday January 24, 2001 @05:16AM (#485237)
    > As far as using it as a server, 2.4 is FAST. Much faster than 2.2. Our SCSI RAID goes about 3x faster and NFS goes twice as fast (over gigabit ethernet).

    Rotfl. So does it mean that 2.2 sucked despite all the claims that it was a server-class OS?

    I used 2.0 and 2.2 for a loooong time. A lot of things sucked badly (overall, it sucked less than the NT4 boxes I had), but I generally got bashed when pointing out that suckage. Now it is politically correct to talk about shortcomings of 2.2? (As long as I don't point out any 2.4 problem...)

    Cheers,

    --fred
  • Of course, the same USB support is also available in the 2.2.18 kernel. Why they backwards-ported it I'll never know.
  • You actually did that? Userland and all?

    (Oh, you mean _the_kernel_. Oh. Ah.)
  • Obviously, you have not been paying attention to what IBM has been doing. IBM has a version of Linux for the S/390, RS/6000, and Netfinity servers. I expect to soon see Linux for their AS/400 systems. Companies that use these machines may have huge databases. I know from experience in a financial institution, where we had databases of 5GB+. You need to be more open-minded. Linux is gearing up to be a capable enterprise solution. It is a waste of time to compete with Windows 9x/ME; those are not good, stable OSes for the corporate world. The fact that IBM even considered, let alone developed, Linux for the S/390 shows that Linux is no longer just a science project. It is here and it is now. Now Linux should work on some better administrative tools. There are some good ones out, such as webmin, but more are needed.
  • I compiled the experimental hfs support in the 2.2 series to read hfs zip disks in my crappy pc.

    I was feeling pretty 1337 at that time :D

  • The problem is that the PC architecture is indecent. There is no simple clock, you have to fiddle with a lot of I/O registers to get the time (indirect I/O pointers). On most RISC you have a decent time counter in the chip that you can use (and the TSC in Pentium+).

    Btw, these are used for submillisecond delays most of the time; you don't want to have a timer interrupt at 10 or 100kHz, do you? Linux does have a regular timer interrupt at 100 Hz on most architectures for system management chores.

    So for the 386 and 486, the only reasonable solution is to use delay loops. For Pentium+ the TSC can be used, but the problem was that it was calibrated by counting CPU clocks over one second, which might overflow in the near future. Note that having a TSC which counts at the core clock frequency is overkill when your CPU runs at over 5 times the bus clock, which is the one that determines externally visible effects. In this case, PPC gets it right, since the CPU timebase is based on the bus clock divided by 4 for most models, which corresponds roughly to the shortest possible I/O access time or the duration of a data burst transfer to fill a cache line. This would allow up to 8 or 16 GHz bus clock, which we aren't going to see in a _long_ time, if ever.

    This said, this is one of the worst articles on 2.4 I've ever seen...

  • The original post was probably posted by somebody who comes from a culture where '.' is the separator for the thousands place instead of ','; i.e. 38.000 == 38,000. Either that or they typo'd; after all, the two keys are right next to each other.

    Based purely on personal experience, I can say with authority that Linux can handle > 38 TCP/IP connections simultaneously :-).


    --
    Fuck Censorship.
  • Realize that in several European locales (like ... I dunno, Germany), the meanings of '.' and ',' in numeric contexts are reversed from American usage.

    The world's a lot bigger than your little corner of it, my friend. He was saying 38000 connections.

  • Well, silly as you may think it is, there *was* indeed a problem with running Linux on a machine with a clock higher than 2GHz, something to do with the calibration of the delay loop or something like that, I seem to recall.
  • The article states that HFS (the Mac filesystem) is only available under PPC Linux. This is false.

    That is all.

    K.
    -
  • AFAIK, they needed to make some counters bigger so they don't overflow if the CPU is really fast.

    Regards, TOmmy
  • ...Is what I have been looking forward to in 2.4... fully functioning USB that is ;)

    ------
  • This was so fscking funny! The funniest thing, tho, is you have to have lived it to be able to write it. So get out of the closet, you fsckin 1337 Linux haX0r... :)
    --
    "No se rinde el gallo rojo, sólo cuando ya está muerto."
  • > On your lunch hour, download the next full version release of your operating system (service packs are for sissys). Configure it, compile it, install it and be back working with no problems by the end of the hour. That's what I did.

    The *full* version? This means everything from the kernel to the applications, including the C library. On what Linux distro did you do that?

    Btw, I do it once a week on my operating system (FreeBSD):

    cvsup -L 2 stable-supfile
    make buildworld
    make buildkernel KERNEL=SIDONIE
    make installkernel KERNEL=SIDONIE
    make installworld
    mergemaster

    Did you really do this on your platform, or are you just making noises ?

    Cheers,

    --fred
  • by chazR ( 41002 ) on Wednesday January 24, 2001 @05:23AM (#485251) Homepage
    IANAKD (Kernel Developer) (but I do read the mailing list).

    IIRC, there is a potential problem with very fast (>2GHz) processors confusing the delay loop calibration.

    Check out calibrate_delay in /usr/src/linux/init/main.c

    It uses an unsigned long to count ticks. So, on a 64-bit fast processor, there's probably no problem.

    On the other hand, I may be completely wrong.
  • The Duke of URL is known for its shoddy reporting. Read their review of Mandrake 7.2 for a hearty laugh.

    -- Eat your greens or I'll hit you!

  • by pyretic22 ( 247450 ) on Wednesday January 24, 2001 @05:23AM (#485253)
    It talks about a DALnet server running stable with 38.000 connections. I found this link on Daemon News, a BSD site, and like them I'm quite curious about how well Linux 2.4 networking works compared to the latest FreeBSD. The article is here [linux.com]
  • So does it means that 2.2 sucked despite all the claims that it was a server-class OS ?

    Does this mean that Linux is using Microsoft's standard marketing technique of "New version out - must diss current version to acquire beta testers" now? Gaaah!

  • by Anonymous Coward
    You have to compile a new module called ns558 for all gamepads. Give it a try and see if it works.
  • New kernel with a lot of new features, new gcc with a lot of new features (finally one with proper C++ support), new KDE with a better KDevelop included... We will get a boost of new software and support. So we just need a good packaging standard for all, and running a free software system will be possible for everyone. But getting rid of old (quasi-)standards is hard, as everybody knows. (The A20 gate; tube monitors still have analog input, although monitors with their own DAC are better adjustable; there are still C libraries using K&R syntax; there is still no web browser with full xml/xslt/MathML/svg support; BIOS updates still run under DOS; yet cheesy keyboards with 156 Windows/IE/Menu keys and a space key (no space bar) are the only ones available.)
  • Indeed, the FAQ for the HFS filesystem driver in the Linux 2.2 source tree states:

    8. Will it run on my (your processor type here)?
    The code is carefully written to be independent of your processor's word size and byte-order, so if your machine runs Linux it can run the HFS filesystem. However some younger ports don't yet have support for loadable modules.

    Note that HFS is tested most extensively on Intel platforms. So there could be subtle compilation problems on other platforms. If you encounter any that are not addressed by the documentation then please let me know.

    So, it would seem that actually it should work on x86 first.

    --Joe
    --
  • There is a fix for this problem at the Windows Update Site, read about the problem here. [microsoft.com]
  • 2.4.0 won't boot on 386's (or at least some 386's, I don't remember) due to a small bug that's fixed in the 2.4.1 pre-patches.

    IIRC, it also uses slightly more swap space to do exactly the same thing, so disk space may be an issue as well.


    ---
    The Hotmail address is my decoy account. I read it approximately once per year.
  • Are you kidding? 2.0.38 is already dead. 2.0.39 was released in early January. :-)
    ---
    The Hotmail addres is my decoy account. I read it approximately once per year.
  • Linus talked about this recently. Basically, it should work, but it isn't guaranteed to work and he would like to know about what breaks so it can be fixed. (There was a post on linux-kernel about this, but I can't find a link to it...)
    ---
    The Hotmail addres is my decoy account. I read it approximately once per year.
  • It's kind of weird to read your argument, where you basically say that IDE is faster than SCSI if you ignore fast SCSI hardware. A Seagate X15 will embarrass any IDE drive. Yeah, it's expensive. Yes, a fast SCSI bus is faster than a normal PCI bus. That alone tells us a lot about SCSI performance: SCSI can sustain better transfer rates over several meters of cable than 32-bit/33MHz PCI can over a few centimeters of PCB.

    Anyway, you are right that a modern IDE drive could swamp an ATA-33 bus (today's fastest IDE disk peaks at 37MB/sec in sequential transfers). But the fact is that most transfers are not sequential in nature, and for random access modern SCSI drives have 2 to 3 times the performance of modern IDE drives.

    Hell, even the aging Seagate Barracuda 9LP is faster in every StorageReview benchmark than the IBM Deskstar 75 GXP, which is the fastest IDE drive you can buy today. The Seagate Cheetah 18XL really creams the IBM (for only ~$225, mind you), while the Seagate Cheetah X15 (~$425) positively destroys any other drive available.

  • Concerning your camera... have you tried the usb-storage driver in 2.4.0? It was able to map my USB camera (HP 315) to a SCSI device.

    Summary:
    1. Make sure you have the usb-storage module compiled, along with SCSI support. I think it's still experimental.
    2. modprobe usb-storage
    3. Plug in the camera.
    4. Read the console output and find out which SCSI device it was mapped to.
    5. Mount that device.
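    For anyone wanting to try the steps above, the whole sequence boils down to a few commands (run as root; the device name /dev/sda1 and the vfat filesystem are assumptions, since the camera may map elsewhere):

```shell
# Load the experimental mass-storage driver (kernel needs SCSI disk support)
modprobe usb-storage
# Plug the camera in, then check which SCSI device it was assigned
dmesg | tail -20
# Assuming it showed up as /dev/sda1 with a FAT filesystem:
mkdir -p /mnt/camera
mount -t vfat /dev/sda1 /mnt/camera
```

    If nothing shows up in dmesg, the camera's chipset probably isn't one that usb-storage knows about.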
  • 2 GHz is 2000MHz is 2,000,000,000Hz. Sound close to 2^31, the limit of a signed 32-bit int? What happens when you store 2.3GHz into a signed int? Exactly :-)

    So they changed the type holding this value (including delay loops, calibrations, and whatever else) to allow Hz values over 2G.

  • Ok, will someone tell me how the hell you add support for speed ?

    I've heard that Microsoft Windoze ME is having problems with some faster processors because it is shutting itself down faster than the disk subsystem can go down. Now as to why this is being done asynchronously I canna tell you. Of course, this *is* Micro$oft we're talking about and flashiness is more important than stability.

    How the fuck can shutting down the fucking computer be so fucking difficult??!!??

    Fsck.

  • I meant the kernel; I don't consider the rest of the distribution to be the O/S.
  • You Linux people make more money

    do u speak english?

    is english your native language?

    i am not.

    i am a chinese from hongkong.

    english is not my mother tongue,but even a grade 6 student can figure out your grammer sucks...

    techie guy!@#$%^

  • If you don't trust the software, how can you trust the hardware? How do you know that 3COM, D-Link, Seagate, and Asus aren't all collecting information about you?

    It'd be really easy for a network card to stick in one or two extra bytes every hour, until it had a sample of your processor's L2 cache (merely 64k on some processors). You wouldn't even notice it with a sniffer, unless you were paranoid.

    Seriously, it'd be interesting to implement, just to see if anyone did notice it. And it'd be sort of fun to look over all those L2 cache dumps.

    Anyways, lots of hardware manufacturers in the past ignored everyone but Microsoft customers; so much so, in fact, that it was either common knowledge or the equivalent (a strong, persistent rumor -- which is the best you can do when everything in the industry is covered by NDAs) that those companies were in bed with Microsoft big time. Or Intel. Why else would companies (say, 3COM) suddenly discontinue popular products (say, the Big Picture PCI video capture board) and replace them with brain-dead USB models that have half the features... and call it an upgrade? It doesn't make too much sense to me, except to believe that either Intel or Microsoft convinced them to 'retire' the PCI product.

    So, of course it happens. And lots of corporations will happily ignore a significant minority (I'm sure that non-Win95/98/ME users make up over 10% of the desktop market; Win2k and NT must be somewhere around the size of the Mac or Linux desktop installations). Just look at the number of video card, sound cards, and USB peripherals that plain don't work outside of Win 98 or Win ME. Many more have seriously reduced functionality on Win NT4 or Win2k. Forget ever using them under a non-MS operating system.

    Your best bet is to just buy hardware from manufacturers that are Linux friendly, ignore technology that does not have an international standards organization behind it, and wait until the second revision of any product before expecting stable (ie, in the stable kernel) Linux drivers.

    Life sure is easy when you go with an all SCSI setup. Everything is a SCSI device. None of this weirdness with IDE, parallel port, or USB peripherals. Blech. They're all slower than SCSI, too. (Ultra DMA/66 and 100 are plain marketing bullshit. All you need is Ultra DMA/33 for 90% of the high performance EIDE hard drives.)

    I suppose I just wanted to spout off.

    Oh yeah, one more thing. There really aren't any I2O peripherals or motherboards out there, even though it's been years since Intel introduced it. There's the Asus P2B-D2, with integrated SVGA, SCSI, 10/100 NIC, and dual Pentium II support, but precious few other products. Mostly just triple channel Ultra 160 RAID adapters, which tend to cost as much as a used car.
  • make sure you upgrade modutils to 2.4.X (X is biggest available) before installing the 2.4 kernel.

    2.4 changes the layout of modules, and the new modutils handles the new layout.
  • There's an unfortunate focus on the mainstream in commerce these days.

    How exactly is this new? I'd say there is LESS focus on the mainstream today than at almost any time in the past. Can you imagine, say, a shoemaker in the 1500s marketing to Jews because it was "cool" to be countercultural and antiestablishment like that? I dunno about where you live, but in every record store I've ever gone to, the largest section is "Alternative".
  • Linux, just like BeOS, supports a mass of filesystems. This isn't just a great thing to have for enterprise users, but can also come in handy when you happen to be transferring files from a hard drive you just pulled off another computer

    It is of course also helpful for those users who are forced to dual-boot because they have that one damned application they can only run in Winblows.

  • by jaa ( 22623 ) on Wednesday January 24, 2001 @05:41AM (#485272)
    isn't that the true breakthru in 2.4? SMP scalability, especially the scalability of the new networking code? I'd really like to see SMP 2.4 vs. SMP 2.2 vs. SMP 2K.
  • How clueless can this guy be? If someone went to such great lengths to defeat the 2gb limit, then I'm pretty sure it's because it's been a problem for a while. Uncompressed video comes to mind, where a reasonable clip bucket can contain well over 10gb of data. Databases also get pretty huge when you start collecting data from the web (search engines perhaps, or DoubleClick stats). Next.

    Yeah, tell me about it.

    When I capture video at 640x480 I stick to MPEG1 at 3.5MBPS. Believe me, your hard drive fills up FAST.

    Flavio
  • How does one get 32000 connections? My understanding is that each connection is an open file descriptor. I haven't been able to get the kernel (2.2) to compile with > 1024 open file descriptors. Select breaks (even after recompiling what I thought was necessary) and no matter what I try I can't open more than 1024 connections. Sorry if this is a clueless question...
  • Seriously, that is one of the least informative articles I have ever read from a link on slashdot. The benchmarks were by far the least informative piece of the entire article.
  • Sure, if you compare apples to oranges, which is what you are doing by comparing a 15K RPM SCSI drive to a 7200 RPM IDE drive. Compare two 7200 RPM drives and you will see that the only thing a SCSI drive has is better seek performance. For 30% more cost, mind you.

    And sure, a 180 or 360 SCSI controller can transfer data at 180MB/sec and/or 360MB/sec. But what good does that do you as soon as the data has to go anywhere (network, CPU, or your video editing software)? It gets bogged down by the PCI bus.

    Additionally, remember that a SCSI drive (EVEN the X15) CAN'T transfer data to its media at even close to 360MB/sec. Sure, some day drives will hit 360MB/sec to the media, but by then the 32-bit/33MHz PCI bus will be long gone too.

    And IDE drives will certainly keep up. The truth of the matter is that a 7200 RPM 20GB/platter IDE drive and a 7200 RPM 20GB/platter SCSI drive have nearly identical sustained transfer rates (go to Storage Review and sort on transfer rates).

    I will give SCSI that it is faster when it comes to seeks (most server tasks), but again you pay that 30%+ penalty in cost.

    Additionally, the max transfer rate record for IDE is currently held by the Seagate Barracuda ATA III (7200 RPM) at 40.5MB/sec at the OD.

    The current record holder for SCSI is Fujitsu's MPF3xxxAH at 42.77MB/sec. The X15 (15K RPM) weighs in at only 41.47MB/sec.

    So IDE is definitely not second chair in peak transfer rates.

    The choice of drive should really depend on WHAT you are trying to do. Buying a SCSI drive for a desktop system is frankly a BIG waste of money. Hell, buying a SCSI drive for a small server is a waste of money (instead of spending the extra 30% on a SCSI drive you're better off getting more RAM for the server to use as cache).

    The only place I'd use SCSI drives is in servers that are under heavy load or under constant random access patterns (where your read cache is being trashed constantly), or that require a lot of drives.
  • Glad I'm not the only one who thought that article was a waste of the lumens required to read it.
  • I think its time we moved on to a 64 bit file systems, journeled, database like and pervasively multithreaded. Hang on, this is a Linux forum and I'm advocating something BeOS had from day 1.
  • Unless you're exporting those file systems to anything other than Lunix clients.

    Why? Is Linux 2.4's NFS incompatible with other NFS clients? Stoopid kernel developers.
  • by drift factor ( 220568 ) on Wednesday January 24, 2001 @05:44AM (#485284)
    If, like me, you've been looking for a free NFS solution, you'll definitely want to try 2.4. There's really no other option at the moment if you need NFSv3 support, as well as reliable file locking (lockd on *BSD doesn't work worth a damn). Client-side NFS never worked completely correctly until 2.2.18, and as a server... I had to reboot it more than a Windows box and deal with stale NFS handles multiple times due to "soft down" situations. I've been using 2.4.0-test11 for about a month and a half now, and it has been solid. Granted, when I benchmarked its performance against FreeBSD it got owned badly, but at least it can lock files correctly.

    Bottom line, if you need a free high availability NFS server, 2.4 is a godsend. If you have the cash, I still recommend sticking with Solaris, however.
  • An "Alternative" that for the most part is bloated with contrived fabricated pop crap. The "real" alternative has escaped to indie, and several other niches. Of course once those sounds become the big thing expect them to make up the largest section in the store.

    The word of the day is: co-option
  • See, now that is funny.
    The parent of that post is funny, too, but I don't know if it was meant to be. It certainly says something, though, about how many moderators don't actually read the story they are moderating.
  • That's a timing issue, not a speed issue. The difference lies in the fact that the shutdown process must sit and wait for the drives to shut down: "$foo=time(); while(time() < $foo+2);"

    A speed issue would be a delay loop overflow, but anyone still using counter-based delay loops should sit down with a game developer and learn about more robust throttling methods. Every decent computer architecture has a system clock that can be used to accurately measure time in tiny increments.

    On any x86 system you can use the system clock to trigger rhythmic interrupts that keep everything in sync. Good old Dos games used this trick to hit 60fps on the mark without having to wait for vertical blanking to occur. Why couldn't an OS scheduler use this technique to manage timeslices and other boring chores ? I don't know about other architectures but surely there is something analogous to this method.
  • Since you are mentioning dual head, I'm guessing Matrox G400. There is a known problem with the G400, XFree86 4.0.2 and the 2.4.0 kernel at the moment (my main reason for not upgrading). Looks like Matrox is working to fix this.

    pere
  • can't be, that's not supported in-kernel yet (there are finally some userspace tools like the old hfsutils for it, but that's all afaik).
  • Funny, I thought anyone could include PPPoE in the kernel when configuring it for a recompile. Perhaps he could have mentioned that it was enhanced and tweaked, but saying it's included in the kernel is like saying your brand new car comes with a steering wheel. Du-uh!

    Actually, this is new. Previously, to get PPPoE support in a stable kernel you had to install a kernel patch or use a client that runs in userland (e.g., rp-pppoe, which is excellent; thanks, David). Now it's included in the kernel.

    I agree with everything else, though. This guy needs a couple of good whacks with a cluestick.
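    For anyone hunting for it, the relevant options in a 2.4 .config look something like this when built as modules (the exact menu wording here is from memory and may differ):

```
# Network device support -> PPP (point-to-point protocol) support
CONFIG_PPP=m
CONFIG_PPPOE=m
```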
  • Dude, I've got a 20GB Maxtor, and HDTach reports (average) sustained rates in the mid 20's (over the entire drive.) That means for at least a quarter of the drive, it is saturating my UDMA/33 bus.
  • I also thought that that assertion was suspect, based on my own "real world usage" experiences. The motherboard drivers for my PCs (I have 3, all the same config) at work have an option to enable ATA-66 mode or not when you install them. We're developing an application that requires loading a large landscape database (these are PIII 700s with 512 MB RAM). When the option is not enabled, loading the landscape takes on average 35 to 40 seconds. When enabled, the load time drops to, on average, just under 20 seconds. Nothing else on the configuration was different, only that, and those values are the average of many load times.

  • Or... you could eat lunch.... just a thought ;)
  • " I've worked in the HD industry for 3 YEARS."
    That's nice, I've been working with the hardware and firmware for drive controllers for a while myself...

    Thank you for completely misreading my post...You said: "And the fact of the matter is that an IDE drive can SUSTAIN over 40MB/sec at the outer diameter. THIS IS TO THE MEDIA rate."

    because I said: "Just because you can get the data to the drive cache that quickly doesn't mean you can pump it over the channel that fast... ie the speed of the channel. 33/66/or 100 MB/sec (UDMA33/66/100)"

    Meaning: Just because disk -> cache = 40MB/s
    !=> cache -> controller via ide cable at 40MB/s
    So you are hot and bothered about agreeing with me... (except that a 33/66/100MB/s is the peak burst rate, not sustained - that was my point).

    You said: "Remeber the OD moves FASTER then the ID relative to the head. This is why the OD transfer rate is much higher."

    I don't remember disputing that, but thanks for sharing.

    You: "The BS you are saying about SCSI is just that; BS and FUD. The only differance between SCSI and IDE is the protocol and controller. It's the same PHYSICAL hardware with a diff chipset."
    Me: "IDE is a *slow* protocol, and fairly braindead"
    Wow, we were both talking about the protocol, and I never mentioned drive hardware - just interface hardware. Looks like you are upset about agreeing again. Obviously, the newer technology that was developed for server drives is now on the desktop. I never disputed that.

    You: "SCSI is excelent for multiple devices on a chain. That's about it."
    Well, let's see - try to attach an external cabinet with IDE... oops, sorry. The range of devices is better, the driver support isn't as complicated, and there's better error reporting, configuration, and on-the-fly debugging... but that's ok - that's beyond the desktop space anyway, and it doesn't seem as if you want to consider that. I'll get back to my utilization #s a little later.

    You: "since there is no IDE drives at 10K right now"
    Well, I've had my 10krpm UW SCSI drives for a couple of years now - some things don't catch up as quickly as areal density (mostly because people don't want to pay for them).

    You: " Additionally those 180MB/sec 360MB/sec SCSI transfer rates are also misleading. For that kind of performance they require a 64Bit/66Mhz pci slot.. because otherwise you are limited to a max of 132MB/sec by your PCI bus ANYWAYS."

    Another true statement that is irrelevant to my argument... I only mentioned U2W, which is 80MB/s, and has worked (for quite some time now) rather quickly. A lot of the equipment I work with has had 64b/66MHz slots for a long time, too... but again, for the desktop space, U2W is pretty reasonable.

    You: "The problem you see with IDE drive "slowness" is because your PC does not have UDMA mode turned on. "/hdparm -d1 /dev/hda" (Or compile it as default into the kernel) A current generation IDE drive running in UDMA100 mode streaming data at 35MB/sec only uses 10-15% CPU on a P2-400."

    Well, using my Adaptec 2940-UW, and a couple of 2-3 yr old (maybe a generation ago, if you like) UW drives (10krpm 9GB, 7200 RPM 18GB) I can do 33-35MB/s at about 4-6% util on a Celeron 300a... bummer. That's not as many %s as you, I guess you win that one ;-)

    You: "So before you go of and tout how SCSI creams IDE. 1st check your facts and LOOK at www.storagereview.com and compare 7200 rpm scsi and 7200 rpm IDE drives."

    So before you go off and FLAME someone using CAPITALIZED WORDS all OVER the place, MAYBE you should READ the other post and THINK and/or CHECK your FACTS...
    --
  • > The *full* version ? This means everything from the kernel to the applications, including the C library. On what linux distro did you do that ?

    If I'm not mistaken, Debian can do this. I believe it compiles source .debs if you tell it to "install" one. If not, then apt-get dist-upgrade would be a binary-only thing (and you still have to fetch whole packages at once anyway).

    Debian's about as close as it gets to ports, and it's smarter about downloading dependencies simultaneously.
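    For what it's worth, the manual source route on Debian looks roughly like this (using "hello" as a stand-in package name; it needs deb-src lines in sources.list):

```shell
# Fetch the package source plus whatever it needs to build
apt-get source hello
apt-get build-dep hello
# Build an installable binary .deb from it
cd hello-*/
dpkg-buildpackage -us -uc -b
```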
    --
  • > I mean't, the kernel, I don't consider the rest of the distribution to be the O/S.

    Apparently Linus doesn't even consider necessary parts of the kernel to be worthy of the distribution. New kernel, old modutils? Better hope your network driver on that headless box wasn't a module. Better hope init still works too.
    --
  • Anybody have info on how 2.4 behaves on old/slow machines...say, like a 66 mhz 486? I've been hearing reports that it's "faster" and "better", but there's usually a tradeoff involved.
  • by FreeUser ( 11483 ) on Wednesday January 24, 2001 @05:58AM (#485305)
    How clueless can this guy be ? If someone went to such great lengths to defeat the 2gb limit, then I'm pretty sure it's because it's been a problem for a while. Uncompressed video comes to mind [...]

    You don't even have to reach that far. Compressed video easily grows to larger than 2 gb for any non-trivial project. For example, I used dvgrab [schirmacher.de] to capture multiple small video clips[1] from my ieee1394 sony trv-900 camcorder [bealecorner.com] and media converter [insanely-great.com] (sony), then edited them together into a 25 minute home video. This is all using compressed DV format, which is small enough that captures work perfectly fine in realtime to ATA33 IDE drives (unlike traditional analog captures, which demand much faster drives because the quantity of data is so much greater).

    25 minutes of video, even in 4:1:1 or 4:2:0 compressed DV format, is way bigger than 2 GB.

    My solution was to upgrade to kernel 2.4.0 (which is easy to do with Mandrake 7.2, as long as you do not compile in devfs support) with the ieee1394 fixes [sourceforge.net]. I opted to use SGI's XFS [sgi.com] filesystem (which rocks) but to get around the 2GB limit upgrading to 2.4.0 was sufficient (ext2 and reiser both worked fine for test files of about 5.5 GB in size).

    [1] This is a limiting bug in dvgrab, which segfaults at around 900MB but works fine in "looping" mode with filesizes under 900MB.
  • by TheGratefulNet ( 143330 ) on Wednesday January 24, 2001 @06:00AM (#485307)
    If someone went to such great lengths to defeat the 2gb limit, then I'm pretty sure it's because it's been a problem for a while. Uncompressed video comes to mind, where a reasonable clip bucket can contain well over 10gb of data

    and live audio events, as well. if I wanted to record several hours of audio (like a 6hour Grateful Dead marathon on my local FM radio station), I'd need much more than 2 gigs of contiguous storage.

    until now, I've had to break my live music into 2gig or less segments. that sucked! now, with the limit removed, I just hit 'record' (well, I type record) then come back 6 hours later and hit ctl-c and voila - I have a large audio file that I can now cut at logical boundaries rather than physical ones.

    (of course getting an audio editor to be able to read more than 2gig at a time will be tricky to find; linux is way behind the curve in this aspect; and winblows has problems (much of the time) with even 1gig audio files).

    still, being able to store that much audio to disk is the first step. editing it in-place will soon follow.

    --


  • This is untrue. Today's IDE hard drives top out at 40+MB/sec (yes, 40 megaBYTES per second) at the outer diameter. This is a SUSTAINED transfer rate. At the inner diameter they hit 26+MB/sec. So an IDE drive can indeed saturate a UDMA33 bus.

    Take a look at the Barracuda ATA II and Maxtor DM+ 60 reviews at http://www.storagereview.com if you don't believe me.

  • by rw2 ( 17419 )
    To me the only problem with Linux 2.4 (and don't flame me here, I use it full time at work and at home) is the lack of driver support, specifically with regard to USB.

    I've seen glimmers of being able to use Windows drivers (using WINE parts I think) to contact USB devices.

    Can anyone comment on what to expect from either that or why I shouldn't worry that we'll get everything supported 'in house' despite proprietary device vendors?

    --

  • I'll try that, thanks.
  • If someone went to such great lengths to defeat the 2gb limit, then I'm pretty sure it's because it's been a problem for a while.

    Heck - my /home partition comes to well over 2GB compressed. Putting that into a tarfile before 2.4 would have been virtually impossible.
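    A quick way to see whether a kernel/filesystem combination clears the old limit is to seek a sparse file past the 2 GB mark (filename is arbitrary; the sparse file costs almost no disk space):

```shell
# Write a single 1 MB block at an offset of ~3 GB into the file
dd if=/dev/zero of=/tmp/bigfile bs=1M seek=3000 count=1
ls -l /tmp/bigfile
```

    On a 2.2 kernel without large-file patches the dd stops with "File too large"; on 2.4 you end up with a ~3 GB (mostly sparse) file.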

  • nah, the Create and Share Camera's chipset is NOT supported by the USB driver. I tried what you said and it didn't work.

    Did some more searching on the web and found out that it is certainly not supported. Rather disappointing, but you'll have that.
  • by mike_diack ( 254876 ) on Wednesday January 24, 2001 @04:42AM (#485322)
    I've tried 2.4.0 for a while now and am loving it, with one exception: I can't get joystick support working for my Gravis Xterminator on my SB PCI 128.

    I can get it fine under 2.2.17, 2.2.18 and 2.2.19-pre, but not 2.4.0 (xmame just ain't the same without it!)

    But otherwise it's lovely... It is discernibly faster in use than 2.2.18(ish), which itself is discernibly quicker than the early 2.2.x series.
  • Yeah, I'm having similar problems with 2.4.0 and XFree86 4.0.2. I went back to 2.2.16--which is playing havoc with my System.map, 'cause I'm too lazy to get a proper copy:-) Anytime I'd run Netscape, or a few other apps (xmms [xmms.org], gnucash [gnucash.org]), after a fairly short period of time the entire system would hang. 'Twas bloody annoying. I refuse to be reduced to only the console--this is my desktop machine here--and thus for the nonce I am back to 2.2.16.

    I've a feeling I might need to fiddle with the shm filesystem and it'll go away; I understand that it's needed to make shared memory (and hence X) run properly.

  • For those contemplating an upgrade to 2.4, and running a VIA IDE chipset, there are problems that can lead to severe filesystem corruption if UDMA is used.

    I've had similar problems with the patched version of 2.2.18 as supplied with Debian, and I don't want to go through the same again until I know that it is fixed.
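    Until the driver is sorted out, the usual stopgap (at the cost of throughput and much higher CPU load) is to force the disk out of DMA mode; /dev/hda is an assumption for the first IDE disk:

```shell
# Show the current DMA flag, then switch DMA off for this drive
hdparm -d /dev/hda
hdparm -d0 /dev/hda
```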

  • This article is almost completely useless. It tries hard to pretend to have hard technical data and all it has is fluff. I don't understand how anybody could've seriously written that piece and expected it to actually help anybody understand anything.

    While all of these new updates are fine and dandy, the inner-workings on Linux are the things that probably need the most updating. Yes, Linux is working its way up, but its way of doing many things are a rather abstract way at times or often very close to that of its older brother UNIX. One rather non-standard way Linux handled memory was an old UNIX way, which is very obviously proprietary. Linux is now in the future of memory and works in a more standards-compliant way of doing things -- which is what Linux is all about if you ask me. Although, Linux still remains compatible with the old UNIX-style way of managing memory, just as it does with the new controversial filesystem, DevFS.

    What did that paragraph actually tell you? Almost nothing. Some vague hand waving about memory management with no specifics at all. Irritating!

    I'm actually seriously interested in what kinds of changes were made to the VM subsystem. I know that's what held up the kernel for several months, and I want to know what was done.

  • by Ateocinico ( 32734 ) on Wednesday January 24, 2001 @06:06AM (#485331)
    I have the same problems with linux-2.4.0, and the reason can be found in the file linux-2.4.0/Documentation/Changes. It states that newer versions of various system utilities are needed. The minimal versions for some packages are:

    (copied from the Changes file)
    ============ begin ==========

    o Gnu C 2.91.66
    o Gnu make 3.77
    o binutils 2.9.1.0.25
    o util-linux 2.10o
    o modutils 2.4.0
    o e2fsprogs 1.19
    o pcmcia-cs 3.1.21
    o PPP 2.4.0
    o isdn4k-utils 3.1beta7

    ============ end ===========
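    The same Changes file suggests one-liners for checking what you already have; a few of the portable ones (the insmod/pppd checks need those tools installed, so they're left out here):

```shell
gcc --version | head -n 1
make --version | head -n 1
ld -v
```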
  • EDA dump files are BIG too. Try simulating a 1 million gate design and see how big the dump file gets for a reasonable run. We're talking several GBytes very quickly. D'uh!
  • by phaze3000 ( 204500 ) on Wednesday January 24, 2001 @04:42AM (#485340) Homepage
    A reasonably well written article with one small problem:
    it's pretty rare for a IDE drive to completely saturate all 33 MB of the IDE pipe

    Whilst it is true that no drive will supply a sustained 33MB/sec, this forgets the overhead inherent in the EIDE protocol itself, which can eat up to 30% of the bandwidth on the EIDE bus. Also remember this is per drive, too.

    --
  • by garcia ( 6573 ) on Wednesday January 24, 2001 @04:49AM (#485343)
    I am still rather disappointed w/the problems w/AGP and DRI support in the kernel. Can't have that stuff compiled in and keep Netscape open for more than 2 mins.

    I am really having a lot of problems with it crashing X as well (dual heads). It may not be a problem w/2.4.0 but it seems to only have started when I upgraded the kernel.

    USB is coming along but it is not yet at a point where it is useful enough for me to give it thumbs up. When my Intel Create and Share USB camera and my Cassiopeia link cradle are supported, I will jump up and down. Until then, it is just a novelty...

    Just my worthless .02
  • On your lunch hour, download the next full version release of your operating system (service packs are for sissies). Configure it, compile it, install it, and be back working with no problems by the end of the hour. That's what I did.
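    For a 2.4 kernel the lunch-hour routine is roughly the classic sequence below (x86 paths and the LILO boot loader are assumed; adjust for your setup):

```shell
cd /usr/src/linux
make menuconfig        # pick features: Y = built in, M = module
make dep               # still required in the 2.4 series
make bzImage modules
make modules_install
cp arch/i386/boot/bzImage /boot/vmlinuz-2.4.0
# add an entry for the new image to /etc/lilo.conf, then re-run:
lilo
```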
  • It appears to be limited to particular VIA chipsets. There's a full discussion on Linux Kernel. AFAIK the problem hasn't been resolved yet.
  • by Xunker ( 6905 ) on Wednesday January 24, 2001 @07:49AM (#485353) Homepage Journal
    If you're an idiot like me and know your way around Linux but still can't get the hang of that 'newfangled' kernel recompile thing, Linux Newbie is running a wonderful little Newbieized Help File on How to install the 2.4.0 kernel under Red Hat 7 [linuxnewbie.org]. Great for people who aren't yet up to the skill of 'kernel hacker'.
  • So how is your experience with XFS? Is it stable?

    I've only been using it for a couple of weeks, so I cannot state this with the kind of certainty a few month's experience would grant, but thus far it has been very solid and very fast.

    I have been really pleased with it. I actually prefer it over reiser, though reiser is also quite nice. I did have some issues with reiser under Mandrake 7.1 ... root disk usage growing without any corresponding files, which must have been a bug in the journalling. Nothing like that has happened with XFS, and XFS does support huge numbers of files, huge filesizes, etc. I like having the choice of both, and I do not want to diss either one, but right now my preference leans toward XFS.

    I'll be able to comment on the stability with more certainty in a month or two, after I've been using XFS (at work and at home) and beating up on it a while longer.
  • Peak media transfer rate != channel rate... Just because you can get the data to the drive cache that quickly doesn't mean you can pump it over the channel that fast. IDE is a *slow* protocol, and fairly braindead, especially the older versions (<=33MB/s). U2W can beat out ATA/100 any day, especially considering the fact that you can't chain nearly as many devices on an IDE chain... Heck, my setup on an old UW controller can beat out any ATA/66 system for throughput and CPU utilization... it's not a caching RAID controller - it just works better.
    --
  • by Royster ( 16042 ) on Wednesday January 24, 2001 @08:06AM (#485361) Homepage
    A speed issue would be a delay loop overflow, but anyone still using counter-based delay loops should sit down with a game developer and learn about more robust throttling methods. Every decent computer architecture has a system clock that can be used to accurately measure time in tiny increments.

    You've seen the BogoMIPS reported in a Linux kernel boot? That measure is reported when the kernel calibrates a certain timing loop which it uses for microsecond delays. There are some *very* good reasons why the kernel still uses timing loops not the least of which is that the gettimeofday() syscall is much too slow for the kernel to use for this purpose.

    Believe me, the kernel developers know what they are doing. If you doubt that, spend some time reading the archives [indiana.edu] of the kernel development list or the weekly Kernel Traffic [linuxcare.com] summary.
  • by Black Parrot ( 19622 ) on Wednesday January 24, 2001 @04:54AM (#485364)
    > Can anyone comment on what to expect from either that or why I shouldn't worry that we'll get everything supported 'in house' despite proprietary device vendors?

    I can't really answer your question, but I wanted to make the suggestion that if vendors of proprietary devices only support Windows, the solution is not to switch to Windows, but rather to use something else and forgo their devices, just to spite them.

    If we give up and just go with the flow, we are essentially rewarding the behavior that we want to change. That isn't the recipe for success.

    I recognize that not all vendors will ever provide open source drivers, but we could at least expect drivers for open-source OSes.

    In principle I don't object to closed-source drivers, but with the proliferation of spyware and other badbehaviorware, I'm getting to the point that I don't really want to run anything closed on my systems. However, it might be better to postpone that battle for now, and concentrate on getting drivers, open or closed, for alternative OSes.

    On the bright side, I notice that Linux 2.4 supports I2O, which if I understand correctly is a protocol for OS-independent drivers. If we could get vendors to ship I2O drivers with all their nifty toys, Linuxers (and others) would be in as good a shape as Windowsers as far as device support is concerned. And it should be in vendors' best interest to ship I2O drivers, because that would let them maintain one driver and sell everywhere, rather than limiting themselves to a single market or having to maintain multiple drivers for multiple OSes.

    Of course, with Windows running on 90% of the world's desktops, some vendors may not think that their slice of the remaining 10% is worth bothering with. There's an unfortunate focus on the mainstream in commerce these days.

    And of course, MS may be subsidizing some of them a la OEM agreements, with an explicit or tacit agreement that they will only support Windows. If that turns out to be the case, that's all the more reason to shop elsewhere.

    --
  • The article states that hfs (mac filesystem) is only available under PPC linux. This is false.

    Furthermore, an HFS driver is already in the 2.2 kernels, and was available for the 2.0 kernels as a patch. I use the HFS driver on my 2.2 kernel, on my Intel Pentium II system, to mount old Syquest 40MB removables I still have from my old Mac days. Works like a charm.
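    For anyone who wants to try the same thing, a minimal sketch (the device node here is a guess about a typical setup - check dmesg for where your drive actually lands, and note that Mac partition maps often put the HFS data on a higher partition number):

    ```shell
    # Load the HFS driver if it was built as a module
    modprobe hfs

    # Mount the Mac-formatted cartridge read-only first, just to be safe
    mkdir -p /mnt/mac
    mount -t hfs -o ro /dev/sda4 /mnt/mac
    ```

    Once you trust it, drop the `-o ro` and it behaves like any other mounted filesystem.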

    Perhaps the reviewer was referring to HFS+, the newer Mac filesystem?

    --Jim

E = MC ** 2 +- 3db
