Linux Software

Kernel Summit Wrapup 163

Jonathan Corbet at LWN has posted a terrific summary of the first day of the Ottawa Kernel Summit, and you should expect the second day soon. In it he relates the greatest hits of the first day's talks, including the AMD Hammer port, block I/O, modules, and more. For MP3s or Oggs of this event, check out the Kernel Summit MP3 Repository on SourceForge. The big news is the desire to feature-freeze 2.5 within 4 or 5 months: Halloween. I've posted a very small gallery of the group pictures from the summit on my site.
This discussion has been archived. No new comments can be posted.

  • Just because it was one of the first things in the article, might I mention that (having looked at other UN*Xes, where merely PIDs are stored there) I just *love* the Linux /proc directory.

    Is there a cooler way to tune your system than "echo 1 > /proc/sys/kernel/lowlatency" or whatever?
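    For anyone who hasn't poked at it: the /proc/sys tree reads and writes like ordinary files. (The lowlatency knob quoted above may or may not exist on a given kernel; the tunables below are standard ones.)

```shell
# Reading kernel tunables needs no special tools or privileges:
cat /proc/sys/kernel/ostype      # the kernel name, e.g. Linux
cat /proc/sys/kernel/osrelease   # the running kernel version

# Writing one is just as direct, but needs root, e.g.:
# echo 1 > /proc/sys/kernel/sysrq
```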
  • by rodgerd ( 402 ) on Tuesday June 25, 2002 @10:38PM (#3766572) Homepage
    Rusty started with the claim that the only purpose for modules was adding hardware that you didn't have when you booted your kernel.


    Sorry, but this shows a paucity of imagination ("Rusty's smoking crack again"). Modules are useful because I don't have to rebuild the kernel constantly. I love not needing to care if I have to swap ethernet cards - tune /etc/modules.conf, reboot. Not "reconfigure and recompile kernel, fiddle with lilo, reboot".

    I also love the fact that distros no longer resemble the bad old days, when there were a billion different boot images for installation depending on which combination of hardware I happened to have. Anyone want to guess the QA costs to Red Hat if modules went away?

    Rusty's wrong, wrong, wrong.
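    For reference, the modules.conf tweak being described really is a one-line change (a sketch; the interface and driver names are just examples of common 2.4-era cards):

```
# /etc/modules.conf -- bind the eth0 interface to a driver module.
alias eth0 3c59x        # e.g. a 3Com 3c905 card

# Swap in, say, an Intel EEPro/100 and only this line changes:
# alias eth0 eepro100
```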
    • Um, exactly... hardware you didn't have when you booted your kernel.
      • You've never had a pcmcia or usb ethernet card have you?
      • Unless you have hot-swap PCI, hot-swap SCSI, USB, Firewire, PCMCIA/Cardbus devices, CF, or any of the other devices that don't require reboots to change.
      • It's also for software you didn't have when you booted the kernel (i.e. testing). If you had to reboot your system every time you made a slight change to your module, it would increase development time. I also don't think that there's that much difference between a boot-time-loaded module system and a run-time-loaded one (some, but not that much).

        I think the better question is: what is the problem that Rusty sees with run-time-loadable kernel modules? I'm betting that they save most driver developers more than they cost.

    • If you swap ethernet cards, then it will exist when you boot the kernel, and you can configure it to be loaded with the kernel, at boot time. Being a module doesn't mean it has to be loadable at run time.
      • Dude. If I had to compile in support for all possible hardware devices, my kernel would make xp look light.

        Kernel modules are very cost-efficient things when you run a company. Rather than recompiling (or using a bloat-kernel), you can mostly run the same kernel on multiple computers from different vendors.
        • Dude. If I had to compile in support for all possible hardware devices, my kernel would make xp look light.

          You're missing the point entirely, as did the original poster. The modules don't have to be part of the kernel. You can have modules that are only loaded at boot time. This avoids the problems that come with loading and unloading modules at run time.

          I don't think anyone wants to get rid of modules entirely and put everything into the kernel. They want to eliminate run time loadable and unloadable modules.

          Hot swappable hardware is different, as it's designed to be hot swappable. The individual devices can be hot swapped, but the base hardware cannot. You would need to load the module for a USB controller at boot time. The USB devices that use it would be controlled through this driver, and wouldn't need to be part of the kernel.
    • I personally like the Windows 2000 way of:

      1) Insert ethernet card into PCI Slot.

      2) Flip power switch to "On."

      That's it --- it does the rest. If you're on DHCP you're set... just double click on your browser of choice and you're up!
      • I'd sacrifice ease of MS driver handling for modules for the following reason:

        "Your mouse has moved. Windows must be restarted for the changes to take effect. Reboot now? [OK]"

        I change my system settings more than I change my hardware.
        • by ipfwadm ( 12995 ) on Tuesday June 25, 2002 @11:43PM (#3766778) Homepage
          "Your mouse has moved. Windows must be restarted for the changes to take effect. Reboot now? [OK]"

          Watching karma fall through the floor for supporting Microsoft...

          I have Windows 2000 on one of my systems, and this rebooting-after-everything is not nearly as much of a problem as it once was. Yes, after installing critical updates, the system does need to be rebooted, and some software still requests reboots on installation (which I typically ignore). But gone are the days where changing an IP address or other network settings would require a reboot. That's one of the big things Microsoft tried to do w/ W2K, cut down the number of trivial things that required reboots.

          (Disclaimer: the system I'm typing this on is a Linux box that hasn't been rebooted in almost six months)
          • Cheers on the 6 months! =)

            I *was* being a bit general there, but that's how it is for me. Although I've worked on 2000 and XP, I haven't installed either on my home system -- I stopped at 98 and use it for games alone.

            I'm building Gentoo as we speak. =D

            MS certainly has its advantages. But look how far Linux has come in the last 2-3 years. It'll have no problem catching up in the next few, what with the growing popularity (and thus programmer base, hopefully).

            I'd venture to say that for normal use, a RH 7.x desktop with Ximian Gnome is easier to use than windows. But that's my opinion, and I'm not Joe Public.
          • I have Windows 2000 on one of my systems, and this rebooting-after-everything is not nearly as much of a problem as it once was.

            While changing configuration options rarely requires a reboot in 2000, I do find it much easier to install Windows things in WINE than in Windows proper. I installed .NET on a work PC and I swear it took more than a dozen reboots over a few hours. Later that day I installed all the plugins for CrossOver and it took less than 20 minutes, because the "reboots" were all instantaneous.

            Not that it would ever happen, but it would be a lifesaver if Microsoft just cleaned up the WINE code and switched to the Linux kernel for Windows YP. Though maybe that Lindows dispute went deeper than we thought. ;)

            PS: I used to try to ignore the reboot prompt whenever I could, but it's resulted in a few burns. I hate reinstalling Windows, so I follow it more religiously now.
          • But gone are the days where changing an IP address or other network settings would require a reboot.

            I agree Windows 2000 is a lot better in this regard compared to Windows NT or Windows 98, but you still have to reboot when you change the computer name!
            I have no idea why, because if you use DHCP to retrieve the computer name you don't have to reboot when the DHCP-server changes the name.

            • Because if you are part of a domain, changing your name results in you reauthenticating to the domain, which changes your security identifiers. It is also possible that you are not rejoining the same domain, but changing the domain to which your computer belongs at the same time (pretty common here: a computer gets retasked, so it is renamed and joins a new resource domain).
      • Funny, same thing happens with me running Red Hat.
      • It works that way in DeadRat (kudzu). But that's a function of the userland, not the kernel.
      • Depending on the distribution (like, say, Redhat), that may be all you need to do in Linux also. However, I think the underlying system could be a lot better.

        I have a mental list of about 5 things Linux needs to be really damn good. Better module support is one of them. Granted, I'm not a driver author, and I really haven't done much with compiling drivers since 2.2, so anyone please correct me if some of this stuff is already in place.

        All drivers should be available as an add-on module. The idea here is to be able to load any driver when you need it, and to never have to recompile your kernel.

        Binary compatibility between kernel versions. I mean, really, does it change _that_ much? I should be able to load a 2.4.0 driver on a 2.4.1 system. Why should VMWare need to have a bazillion drivers for various kernel versions? Maybe this is too late for older kernels, but how about considering backwards compatibility from this point on? Is it too much to ask that Linux 2.6 drivers should work in 2.8?

        A very good interface for interacting with modules (from outside the kernel). Currently there is insmod, which can pass additional parameters to a module via command-line arguments. But what about errors? Are those returned as an insmod exit code or in a standard text format? What about interacting with a module after it is already loaded? Is there a standard for probing whether or not hardware exists, other than simply seeing if the driver fails to load?
        The end result is that I should be able to get a diskette from a hardware vendor that contains a nice driver.o file that I can load. If it comes with a driver.c file, then that's great, but I want a .o that I can just use right away.

        Having a standard for probing and manipulating drivers is badly needed, so that someone can come along and make a decent Linux driver GUI. Yes, I know about Kudzu and Yast2, but something tells me they are doing a LOT of workarounds and/or "evil hacks" behind the scenes.

        Solve this and we are well on our way to making Linux the best it can be. I really hope it is being addressed in 2.5.

        • by rodgerd ( 402 ) on Tuesday June 25, 2002 @11:55PM (#3766827) Homepage
          Broken binary compatibility is considered by Linus to be a feature, not a bug. Essentially, the kernel developers are unwilling to be constrained in their manipulation of kernel internals by people who don't want to provide source.

          The arguments around this have been hashed out time and time again on the l-k mailing list.
          • I think they are shooting themselves in the foot by doing that. People should be happy that some companies are providing Linux drivers, let alone having to play by "their" rules. Take for example, the wireless ethernet adapter I have on my WinXP box (it's a Netgear, FYI). The XP Driver blows. Even though it "shipped" with the adapter, it's immature crap -- probably at pre-alpha stage. Drops connections like crazy, etc. So what did I do? Installed the Windows 2000 driver. Just a little tweak (disabling the Zero-config for 802.11 adapters service, to be exact) and it's up and running without a hitch.

            If it wasn't for binary driver compatibility with earlier 'kernel versions' (in this case, XP with 2000), I'd be SOL. If you happen to be constrained to a certain hardware component with a binary-only driver, you're stuck at a certain kernel version and can't do a damn thing about it. It's the end user who's ultimately affected. And that, well, plain sucks. Like I said, demanding source just may be crossing the line. Sure, you may not like it, but hey, this is a free country; people can license software under whatever license they want. To live your life with such a constrained philosophy like "I refuse to use non-free-as-in-foo software" is just retarded. I see Linux having the potential of losing people because of this.

            Now the only way this could be advantageous is that that driver/module actually has to be compiled against that certain kernel, proving that it works, at least in theory. But it can prove to be a bitch in a bind. I think this is similar to Microsoft's driver certification, they're just taking a totally different approach to a common problem.
            • Take for example, the wireless ethernet adapter I have on my WinXP box (it's a Netgear, FYI). The XP Driver blows. Even though it "shipped" with the adapter, it's immature crap -- probably at pre-alpha stage. Drops connections like crazy, etc. So what did I do? Installed the Windows 2000 driver.

              So did it ever occur to you that an open source driver doesn't have to "blow", but can be fixed?
            • by Ami Ganguli ( 921 ) on Wednesday June 26, 2002 @01:44AM (#3767285) Homepage

              The reality right now is that the vast majority of drivers do provide source, so everything works pretty well. Requiring backwards binary compatibility for the modules interface would hurt everybody (because it creates cruft and a maintenance headache) and benefit only a few short-sighted companies

              Remember that the general attitude towards binary-only modules is "we'll tolerate it, but if it breaks you keep both pieces". Nobody is demanding source, they just want to minimize the damage of closed-source code in the kernel. There's no reason everybody should suffer because of a few companies.

            • Linux is not windows.

              Linux is not windows.

              Got it? When you use kernel 2.4.x instead of 2.2.x, most of the drivers for the hardware you use are available. If you have hardware that worked in 2.2 that does not work in 2.4 due to a driver that was not updated (incredibly rare) then you can get the source and update it yourself. There is a reason for this. The reason is that you are getting the system for FREE and the source for FREE. If you are using Red Hat or SuSE or Mandrake or Debian and your device is not supported in the new release then complain to them. Don't complain to the kernel people. Their mission is to make a good kernel, not to make it easy for you.

              By the way, have you noticed that Windows 98 drivers don't work in W2K or XP? Have you noticed that many W2K CDRW drivers do not work in XP? Even Microsoft understands the virtue of changing binary compatibility.

              It turns out that binary compatibility is a really bad thing in the case of Linux. Here are some things that have come up at the kernel summit that would make guaranteed binary compatibility prohibitively onerous: async I/O, loadable security modules, and the coming SCSI changes. Got an old SCSI card? Maybe you are running 2.2 or 2.4 right now and will switch to 2.6 when it's released. Async I/O looks like it is going to become the normal way for Linux to handle I/O. Your 2.2 or 2.4 SCSI driver doesn't know about that. LSM also needs hooks in every part of the kernel to work efficiently; your 2.2 SCSI driver doesn't know about that (although it probably doesn't need to). The SCSI stuff that is being pushed into the block layer code is probably going to make a pretty big difference in how your SCSI driver interfaces with the mid- and high-level kernel SCSI stuff. Binary compatibility for this sort of thing would be a waste of time for the kernel developers. The source and the interface specs are out there. It is faster, and more efficient all around, to just fix older drivers than to make the clueless end user's life a tiny bit easier.

              The right way to make life easier is the way most modular drivers are able to be built nowadays. You can have a directory separate from the kernel source tree that contains your module driver code, and you can tell it to build against your current configuration of the kernel in the source tree directory. This way you don't have to rebuild the whole kernel. You can either download the newest module source, or hack it up yourself, and then build or rebuild it as many times as you need to if your module isn't supported in the official stock kernel sources. Binary-only modules are just dumb, and the vendor will have to be the one to make those modules compatible.
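              The out-of-tree build described above boils down to a couple of compiler flags on 2.4-era kernels (a sketch; the paths and file names are examples, and a real module needs a configured kernel tree sitting at /usr/src/linux):

```
# Makefile for a standalone 2.4-style module, kept outside the kernel tree
CFLAGS := -O2 -Wall -D__KERNEL__ -DMODULE -I/usr/src/linux/include

mydriver.o: mydriver.c
	$(CC) $(CFLAGS) -c -o $@ $<
```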

              • By the way, have you noticed that Windows 98 drivers don't work in W2K or XP? Have you noticed that many W2K CDRW drivers do not work in XP? Even Microsoft understands the virtue of changing binary compatibility.

                Nit-pick. WDM drivers targeted at Win98 should work with 2k and XP.

                'Course, that's only a "should", but I've got at least one device (a digital camera) that actually worked like this.
              • By the way, have you noticed that Windows 98 drivers don't work in W2K or XP? Have you noticed that many W2K CDRW drivers do not work in XP? Even Microsoft understands the virtue of changing binary compatibility.

                That's because the driver model was completely distinct, all the way back to the days of Win95 and WinNT 3.51. Win9x used the VXD virtual device driver format, whereas every NT version used a .sys file format to talk to the HAL. The driver models were entirely different, so I don't think anyone ever expected them to work. It's like taking a binary driver from FreeBSD and expecting it to work on Linux. It just ain't gonna happen.
        • by Anonymous Coward
          The end result is that I should be able to get a diskette from a hardware vendor that contains a nice driver.o file that I can load. If it comes with a driver.c file, then that's great, but I want a .o that I can just use right away.

          Well, y'see, there's this OS called Windows, made by this guy Gates... it might suit you better.
          • I'd have a better alternative. Why not make the vendor configure the drivers for your hardware, and have the kernel people extend menuconfig so that you can load a "drivers config set" without altering the other options you chose?

            Then the hardware vendor could provide a utility to autodetect what kernel version you are using, and ta-da. You could even make a "Wizard" (hehe) that your mama could use.

            That should take care of it. Of course, the unsolvable problem is not this one, but the cases where some companies don't want to provide the sources. That's not Linux's fault.
          • by infiniti99 ( 219973 ) <justin@affinix.com> on Wednesday June 26, 2002 @12:23AM (#3766978) Homepage
            Well, y'see, there's this OS called Windows, made by this guy Gates... it might suit you better.

            Somehow I knew I'd get a comment like that. I can't tell if you are a Windows user singing its praises, or some die-hard Linux user that wouldn't ever touch modprobe if your life depended on it. Either way, your comment completely misses the point.

            I find Linux to be a better OS in general, but it is not without flaws. What is the harm in fixing these flaws? I didn't say everyone needs to use a point-n-drool driver GUI. What I want is a better driver layer, so that stuff like that can exist if necessary. In the instance that you're a "die-hard Linux user", then continue to recompile your kernel or uncomment a modprobe line in your Slackware config whenever you want to install hardware. I just see a lot more potential with Linux driver configuration than that.

            We ooh and ahh about "apt-get mozilla" or "emerge mozilla". Single commands that do all the hard work for you. Wouldn't it be great to have programs that could do the same kind of things for drivers also? The best part about Linux is that it is so flexible. You are not confined to a GUI or anything. A powerful underlying driver layer means more sane configuration, more powerful driver scripts, and the possibility of making easy-to-use configuration tools. And best of all, you can continue recompiling your kernel just as you may have always done. Yay! Everyone is happy.

            My suggestion was to make Linux better, not to make it Windows. I hope you can see the difference.
      • you're assuming a lot with that process... unfortunately, all is not well and peachy in MS driver land... your process will quickly fail if:

        1) your NIC is not supported by MS (happens more than I like)
        2) you don't have all your ASPI, PnP, DMA, I/O, and IRQ settings correct... most defaults work OK, but if you have to manually configure anything on your system then you can't count on Windows working right all the time...
        3) you're NOT on DHCP as you said, OR you want to do anything more than simple web browsing... remember that Win2k is officially a step up from NT and is designed for workstation/server markets... these are the type of environments where custom settings are needed...
        4) and btw, you forgot step 0... first turn the power switch off ;)... sorry to nitpick but it's crucial... oh wait... what if you have hot-swap PCI and you don't want to reboot and potentially cause mission-critical data loss... well then maybe 2k won't work for you...

        in my opinion win2k works most of the time but is not the best... neither is linux... the closest thing i have found to perfect would be mandrake linux, because of the easy 'drake tools suite for auto-configuring your system...
    • If the kernel had a good module system (which I believe 2.6 is going to have), then modules are also good for:

      Installing an updated driver with fewer bugs and security holes.

      Installing an updated driver with more features (e.g. making NTFS read/write, not just read-only).

      You can get binaries of modules (this is called a compromise!).

      And best of all, if you plug in a USB device etc. and you don't have a driver, a daemon could go and find it on the net and download/install it for you!
    • He is absolutely NOT calling for module genocide. He is stating that the way modules load, and especially UNLOAD, is wildly inconsistent.

      He is asking for module loading to be a one-way proposition, insmod without rmmod. At least, and I'm sure Linus is thinking this too, until it bugs someone enough to implement one of the other two options Rusty gave.
  • by iabervon ( 1971 ) on Tuesday June 25, 2002 @10:43PM (#3766591) Homepage Journal
    Considering the number of things that are supposed to go in, including things that aren't really even started (beyond looking a bit at the issues) and things that are likely to be disruptive that haven't started to be merged, Halloween seems rather unrealistic. Of course, pushing off things that haven't been acceptably merged by then until 2.7 would probably make for the reduced-feature 2.6 that nobody wanted to commit to explicitly.

    Personally, I like the idea of a 2.6 as soon as the big things which are partially merged are finished, with everything else put off until 2.7, and everyone who got their stuff into 2.6 responsible for making sure it's stable under wide testing. There are a number of big improvements already in 2.5, and cutting over to a stable release with those features would be nice. And maybe Marcelo could be swindled into maintaining it because it's not _that_ different from 2.4...
  • Or we'd have to wait 3 days :-)

    I dont dislike debian btw.
  • you should expect the second day soon

    Like maybe in 24 hours or so.....
  • by boa13 ( 548222 ) on Tuesday June 25, 2002 @10:51PM (#3766620) Homepage Journal
    I just saw the link [lwn.net] on... Slashdot. Gee, these little side boxes are helpful sometimes. ;)
  • What's new in 2.5? (Score:3, Insightful)

    by RelliK ( 4466 ) on Tuesday June 25, 2002 @11:15PM (#3766692)
    What I especially want to see:

    * ACLs!
    * All journalling file systems merged (XFS, JFS, ext3, ReiserFS)
    * No more VM stability issues

    Anyone know if we can expect that?

    On a side note, what are the four FSs above best suited for? I know ReiserFS is really good at working with lots of small files and XFS is excellent at data streaming. Anyone care to add more details?
    • by Corby911 ( 250281 )
      Well, I know ext3 is great for ease of upgrade and backward compatibility with ext2. You can convert the root filesystem to ext3 without remounting read-only. Also, any ext3 filesystem can be mounted as ext2.

      IIRC, it's also more "middle of the road" than reiserfs. What I mean by this is that (performance-wise) it supposedly eliminates the extreme cases. So you get slightly worse handling of many small files, but eliminate some of the possible ultra-slow cases when dealing with large files.
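      The ext2-to-ext3 upgrade mentioned above really is that painless; with e2fsprogs installed it's one command per filesystem (needs root, and the device name here is just an example):

```
# Add a journal to an existing ext2 filesystem -- works even while mounted:
# tune2fs -j /dev/hda1
# Then mount it as ext3, or fall back to mounting as ext2 at any time:
# mount -t ext3 /dev/hda1 /mnt
```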
    • by groomed ( 202061 )
      My experiences with XFS have been good. I've seen XFS shut down filesystems upon encountering I/O errors, so at least it tries not to blithely destroy your data. There is a sense of maturity, and the bag of userland tools adds to this.

      XFS is quick when working with large files. I do a lot of audio work and the difference with ext2 is very noticeable. Since I haven't used ReiserFS I cannot make a comparison between XFS and it, but XFS is a big improvement over ext2, especially on large drives.

      Other XFS features (although I can't say how well they work since I haven't tested them) include ACLs and the possibility to reserve a partition for "realtime" use, i.e. to provide a guaranteed minimum data rate.

      The biggest XFS drawback as far as I can see is that it is not a standard part of the kernel. Whenever a new kernel appears you need to wait for a patch against that kernel version and apply it. This can lead to conflicts if you track multiple patches.

      For most uses I imagine any of the 4 mentioned filesystems would suffice. Having tried only XFS, I can say that it is a definite improvement over ext2 though.

    • Mmmmmm..... ACLs......

      There is a patch (or group of them) that give Linux ACL support. I don't remember the name of it, something like grsecurity. It was mentioned in the WOLKs interview a few stories before this one.

    • by Anonymous Coward
      > * All journalling file systems merged (XFS, JFS, ext3, ReiserFS)

      I'd have to say, "never".

      Each of these was designed to solve a different subset of the journalling issues, and ReiserFS is aiming for a superset as well.

      Why do away with three good ones just to have a single choice? Is reading up on the four of 'em and selecting the one best suited to your system's needs (..while leaving other users the ability to do the same...) such a HUGE chore?
      • by BJH ( 11355 )
        I think "All journalling file systems merged" was intended to mean "All four file systems integrated into the kernel", not "all four file systems merged into one".
      • There's a lot of places where the four of them do the same thing in different ways. One of the ideas was to have the common journalling code be shared between the different filesystems, and to have the various changes that need to be made to the general filesystem and disk code to support the operations needed for journalling be done in one way, rather than with different modifications for the different filesystems.

        Obviously, they're not all going to be exactly the same: they have different on-disk formats and different consistency constraints. But there's a lot of stuff which should work with any journalling filesystem with the same code that ought to be straightened out between the different filesystems.
  • http://images.dibona.com/pictures/showpic/index.shtml?Kernel02/convs/6x4_dscn4027.jpg [dibona.com]


    You see that whole back row? The uber-dorks have matured!

    • Sweet! Is that woman ove....

      Oh never mind, he just has long hair.
      That's alright, I've got my bitch [johnromero.com]
      (HEY JOHN! Frontpage left about a page and a half of space at the bottom of your site! You've sold your Testarossa, get Dreamweaver, man)
  • by asv108 ( 141455 )
    Can anyone pick out the virgin in this crowd [dibona.com].
  • Ok, but... (Score:2, Interesting)

    by SHEENmaster ( 581283 )
    When will the ability to run a .class file from a command line be part of the kernel? That is what I care about.
    • Re:Ok, but... (Score:1, Insightful)

      by Anonymous Coward
      a) most support for execution of arbitrary objects is handled by the shell. a *lot* of things execute "/bin/sh *command*", which forks, calls the shell, then execs the command based on what the shell determines should happen. this provides support for #!/path/to/interpreter style applications
      b) that's already supported. all you need to know is the "magic number" that every .class file will have to make it identifiable, and you can use it.

      go read the kernel config, under "General setup->Kernel support for misc binaries"
      there's already docco on how to use Java with that.

      ashridah
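      Concretely, the binfmt_misc registration for .class files looks something like this (reconstructed from memory of the kernel's java.txt documentation; the wrapper path is an example, and every .class file really does start with the magic bytes 0xCAFEBABE):

```
# Needs root, with binfmt_misc support compiled in or loaded:
# mount -t binfmt_misc none /proc/sys/fs/binfmt_misc
# echo ':Java:M::\xca\xfe\xba\xbe::/usr/local/bin/javawrapper:' > \
#     /proc/sys/fs/binfmt_misc/register
# After that, a chmod +x'd Foo.class runs directly from the command line.
```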
    • Re:Ok, but... (Score:3, Informative)

      by dvdeug ( 5033 )
      When will the ability to run a .class file from a command line be part of the kernel?

      Since at least version 2.2. See BINFMT_JAVA (obsolete) and BINFMT_MISC when compiling your kernel.
    • Of course the kernel should not do that, but it would help a LOT if there was a standard program (probably called "start" or "open") that did the same thing double-clicking an icon does. Obviously not kernel stuff, but right now all these solutions are tied to a GUI and should not be.

      Exactly *how* start works can be left up to different implementations, but what is needed is the ability to assume the program is there, can be called, and will do reasonably well, so programmers are not tempted to write their own implementations.

      Old shells would require the user to type "start <file>", but I would expect newer shells to just take "<file>" directly.

      I think this is a vital addition to Linux and should be done.

  • PC110 (Score:5, Informative)

    by BJH ( 11355 ) on Tuesday June 25, 2002 @11:55PM (#3766824)
    For those of you who might be wondering, the small PC that Alan Cox is shown as using in the photo is an IBM PC110 [pc110.ro.nu].
    It's a full x86 PC, not a PocketPC or PDA - and what's really amazing is it was put on the market in 1995.
    I own three of the things... in 2000, the last stocks were sold for ridiculously low prices (compared to the price when it was originally sold, anyway), and I happened to have some cash in my pocket. At least they're small enough to not annoy my wife ;)
    Anybody wanting to buy one should be able to find one on ebay fairly cheaply.
    • Anybody wanting to buy [an IBM PC110] should be able to find one on ebay fairly cheaply.

      That is, until you took pains to draw attention to the device on /.

      • by BJH ( 11355 )
        Like I said, I've already got three ;)
        It's not like the PC110 is a big secret, anyway - most people these days wouldn't want a PC with a maximum of 20MB RAM and a 33MHz 486SX for a CPU.
  • so then, someone care to annotate this image with the names?

    http://images.dibona.com/pictures/showpic/index.shtml?Kernel02/convs/12x10_dscn4027.jpg
  • I quote the second day of the summary (which is now also up):
    In the last part of the talk, Bdale whipped out his Debian T-shirt and raised the question: does Debian violate its Social Contract by shipping the Linux kernel? The issue, of course, is the inclusion of proprietary firmware in a number of device drivers. Kernel developers, as a whole, seem unconcerned about proprietary firmware...

    Ok, I'm a Debian user/admin, I use it on all my machines, but this is just plain retarded. The kernel of your OS distribution violates your policies? Change your policies then... or take the rod out of your ass.

    it's funny how whenever you start to vehemently say 'for the people,' under weird bureaucracy it can easily turn into 'we know what the people need better than they do' *sigh*
    • my own favorite quotes from the discussion: - audio [sourceforge.net] (approximately)

      (Linus speaking): moving this (binary drivers with which Stallman / deb take issue) into user space is a sign of mental disorder .... we are clear from a copyright standpoint ... linux has intentionally taken a non-rabid standpoint ... as I've shown with my use of bitkeeper I don't care about black and white people.

      [issues about firmware && binary modules]

      (Alan Cox?) The kernel developers do not have energy to sit down and determine a clear set of rules ... Debian has an endless supply of people who have nothing better to do than study legal issues....

      [Linus points out that actual GPL violating files get addressed in ca 24 hr timeframe]

      The conclusion was to send a message back to the Debian users to "put up or shut up"

      I'm sure RMS will have a press release out later this week.

      • Well, freedom is still Linux's number one selling point. Tru64 is better at number crunching, Solaris is still better for most server applications, and windoze is better on the desktop. If they don't realize this, I think they will become irrelevant.
        • The perl motto (there's more than one way to do it) applies to computing generally, and to the idea of what's the 'selling point'.

          As it happens I have a couple of posts today about what goes on in the OS market, so I'll just link Exchange on nt 3.1? & lessons for OSS [slashdot.org] and (long) Is Linux Dead [slashdot.org]. And yes, the market will hand irrelevancy to systems which don't adapt. If the GPL does not adapt it will follow the same path, and by my read it has:

          Beyond all that, free is not just GPL. The FSF used to distribute X11 from the X-Consortium at ~$150 / tape. Sometime later RMS decided the X11 license was 'bad'. Today (perhaps with Debian/Hurd as his ace in the hole?) RMS is trying to push Linux to a strict (activist?) interpretation of the GPL.

          When I look at the history I think Stallman, for all his principles, exhibits in his actions the pragmatism that is so often attributed to Linus. Linus made it clear long ago that he was not going to give the FSF/Deb the blank cheque that many GPL developers do by licensing under "GPL-current or whatever later version".

          I daresay Linus drew a line in the sand saying "2.0 and no later version of GPL", and I bet if he hadn't we'd be looking at GPL-V3 today.

          To be clear I'm not trying to knock either approach. I happen to have a bit more sympathy for Linus's views but that doesn't invalidate the strengths and accrued benefits of other approaches.

  • I hope so... (Score:1, Redundant)

    by bugg ( 65930 )
    Halloween. I've posted a very small gallery of the group pictures [dibona.com] from the summit on my site.

    For a second there, I was scared when I saw those pictures. Then I realized they were Halloween costumes.

    They are Halloween costumes, right?

  • WOW! As a long time computer nerd, professionally and personally, I have realized one thing. AFTER LOOKING AT THOSE GUYS IN THE PICTURE I AM ONE OF THE BEAUTIFUL PEOPLE! And WTF? There is only one girl there? Jeez... Even ROB married a cutie. Puto
  • % du -h KSMP3s/
    147M KSMP3s
    % du -h KSOggs/
    274M KSOggs
    Looks like a case of using "oggenc --bitrate n" instead of the better style [vorbis.com] "oggenc --quality n". I've found that quality 0 is fine for lectures and would probably save some filespace/bandwidth.
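    For what it's worth, a sketch of the two modes (assuming oggenc from vorbis-tools is installed; the filenames are just examples):

    ```shell
    # Managed-bitrate mode: aims at a fixed average bitrate, which
    # wastes bits on easy material like speech.
    oggenc --bitrate 128 -o talk-abr.ogg talk.wav

    # VBR quality mode: the encoder spends only the bits it needs.
    # Quality 0 (~64 kbps nominal) is usually plenty for a lecture.
    oggenc --quality 0 -o talk-vbr.ogg talk.wav

    # Compare the resulting file sizes.
    du -h talk-abr.ogg talk-vbr.ogg
    ```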
  • Rusty said about module unloading:

    The third approach is simpler: simply deprecate module removal. It would remain as an optional feature (it is, after all, most useful for debugging kernel code), but, by default, module loading would be forever. In the end, says Rusty, kernel modules do not take that much space and memory is cheap; module removal may be an optimization that we no longer need. There are some residual issues, ... , but as a whole this option is easy and makes a lot of code go away. There seemed to be a lot of sympathy for this approach in the room.


    Noooooo! Nooooooooo! Oh noooo!
    This is why I have to reboot WinDOS: because it doesn't uninitialize stuff, you have to reboot for a new config to become active. It can't uninitialize the current config and initialize the new one (remember, you had to reboot when you changed your IP!!!)

    It's easier, but.... whenever we update a kernel module we'll have to reboot.

    Please don't get rid of useful code.

    Thanks
    • Since as a general rule, you only get a new module when you get a new kernel (and you have to reboot anyway for a new kernel), this isn't that huge a problem.

        • This is simply wrong. What about 3rd-party kernel modules? VMware? NVidia? And not only proprietary either... what about an updated bttv or lm_sensors? etc. etc. I know of at least six 3rd-party modules I have on my system.
        • What about them? For a start, NVidia's decision to release their drivers only as binary certainly shouldn't affect the design of the kernel. But to carry on, how often do you actually update lm_sensors? I run it too, but since it reports the values for all sensors installed on my system, I can't really imagine a reason why I'd want to upgrade it.
    • I, for one, am incredibly worried about this. I often switch ISA NICs on the fly, unplugging them while the machine is hot. Unloading modules is very handy. Or how about those devices I plug in for a bit and then unplug? I like to remove the modules to clean up memory after using them (Zip drives and the like). When my soundcard crashes I have to rmmod and modprobe it back in to clear it out.

      Please don't take away my module unloading. Rusty, I don't give a fark how kernel 1337 joo are, to say "memory is cheap" and be a kernel hacker makes you sound like a complete, contradictory moron.
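      The workflow the parent describes looks roughly like this (a sketch; the module names are just examples, and these commands need root):

      ```shell
      # Sound driver wedged? Tear it down and reload it without rebooting.
      # (emu10k1 is an example module name.)
      rmmod emu10k1
      modprobe emu10k1

      # Swapped ethernet cards? Remove the old driver, point the alias
      # at the new one in /etc/modules.conf (e.g. "alias eth0 3c59x"),
      # and let modprobe resolve the alias.
      rmmod ne2k-pci
      modprobe eth0
      ```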
  • I see only males in the group photos... is the penguin sexist or what?
  • If any of you have tried ipchains or played around with NAT under Linux then you know what I am talking about. Routing sucks bigtime compared to other unixes. I would like to set up NAT for two-way address translation with more readable scripts, like ipf under *BSD, but Linux doesn't support it. In Windows 2000 you only click "enable Internet connection sharing". Anyway, I am not a hacker, so I am only requesting it; it would be nice to have. Also, the new feature being discussed for 2.5.x, having outdated modules remain loaded in memory, is a bad idea. This would make rebooting necessary when the system is full of garbage, like Windows NT. I also hope they fix the VM. Even the newer one is exhibiting some problems from what I heard, especially on non-Intel platforms like the Alpha.

      if any of you have tried ipchains or played around with NAT under Linux then you know what I am talking about. Routing sucks bigtime compared to other unixes. I would like to set up NAT for two-way address translation with more readable scripts, like ipf under *BSD, but Linux doesn't support it.

      It's called iptables. It's in 2.4. It's good, and it's very much comparable to *BSD-style routing. iptables seems to be a pretty good solution. (Hey, third time's a charm: ipfwadm, ipchains, iptables, yay!)

      As for the readable scripts: I tend to set up a script which just runs iptables how I want it, with comments stating what I'm doing. It's very readable and maintainable. If I want to see why a certain packet isn't going through, I open the script and look at what the comments say.
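      A minimal commented script along those lines (a sketch for 2.4-era iptables; eth0/eth1 and the 192.168.1.0/24 subnet are assumptions, and it needs root):

      ```shell
      #!/bin/sh
      # Simple NAT gateway: eth0 faces the Internet, eth1 the LAN.

      # Flush old rules so the script can be re-run cleanly.
      iptables -F
      iptables -t nat -F

      # Masquerade everything leaving eth0 (classic many-to-one NAT).
      iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

      # Let the LAN out, and let replies to established connections back in.
      iptables -A FORWARD -i eth1 -s 192.168.1.0/24 -j ACCEPT
      iptables -A FORWARD -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT

      # Turn on IP forwarding in the kernel.
      echo 1 > /proc/sys/net/ipv4/ip_forward
      ```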
