Linux Software

Should Aunt Tillie Build Her Own Kernels? 507

DeadBugs writes: "Linux Weekly News is reporting on a new Linux controversy: the inclusion of a kernel autoconfiguration program that would make it easy for almost anybody to build a custom kernel on their computer. Eric Raymond supports the idea, saying that it will bring Linux to a wider market. Those who oppose it mainly think that only an educated few should build their own kernels. I for one hope this gets included, if only to make standard installations and upgrades faster."
This discussion has been archived. No new comments can be posted.

  • Controversy??? (Score:5, Insightful)

    by CrazyBrett ( 233858 ) on Thursday January 17, 2002 @05:05PM (#2858001)
    Why the heck is this a controversy? It seems to me that anything that makes good technology accessible to more people is a good thing.

    I'd like to hear good arguments in the other camp, though.
    • Re:Controversy??? (Score:3, Insightful)

      by Tryfen ( 216209 )
      It doesn't make good technology accessible. Look at something like a Palm Pilot. It is, for the conventional user, impossible to modify the underlying OS or the UI. This is a GOOD thing. People should not mess with what they don't understand. Just because you have a body, it doesn't automatically follow that you know exactly what sort of drugs to take, what exercise regimen to follow, or how best to educate yourself.

      Fine - let people configure their system to some degree. But when it comes to meddling with things that change the fundamental operation of their machine, leave it to those who understand what they are doing.
      • Re:Controversy??? (Score:2, Insightful)

        by fishebulb ( 257214 )
        PalmOS is not designed to have the underlying OS/UI changed. That's the key: Linux IS designed for that. And what do you mean by LET? If you mean make it easy, that's great, but everyone should be allowed to configure everything. So why not make it as easy as possible?
      • Re:Controversy??? (Score:5, Informative)

        by SirSlud ( 67381 ) on Thursday January 17, 2002 @05:31PM (#2858260) Homepage
        Ridiculous. People obviously should not mess with things they don't understand. How many motor-morons have you heard of who have screwed up their car engines, just because there is access to the hood via an extremely easy to use, easy to recognize lever?! I mean, not EVERYONE pulls the lever and starts hammering on the crankshaft once their car breaks down. People know what not to touch because they've been allowed access to it ... the responsibility of understanding the tools they are using is placed in their hands, as it should be!

        When people suggest that technological means of prevention are superior to ingraining a respect for a particular technology, I get so upset! It's not my fault that MS has brainwashed you into thinking that the consumer should be unable to fuck something up! A sufficient warning and a learned respect for your belongings (in this case, the kernel) should be the top priorities. When people are told not to worry about what they're doing, that they can't screw stuff up, that's when you end up with people who do screw stuff up once they find a chink in the armour of the technological solution!

        Here's a sobering stat: more people fall off cliffs with fences than cliffs without fences. Why? Because when you leave people to their own devices, they have to think and respect the power of the tools they are using or the situations they are in. When you put blinders on them, you're only making sure that shit will get fucked up once you slip up and accidentally allow them access to the tools and technologies that you were so adamant about locking everybody out of.

        I understand that people still have access to custom kernels regardless of this autoconfig tool, but this is akin to providing an easily labeled handle for your hood, or an emergency exit, or whatever. Because it's so easy to get access to, people are forced to learn and know implicitly what the consequences of pulling it are! Compare this to the newb who finds the man pages on building custom kernels, or the HOWTO ... you won't get enough people facing the idea of respecting the OS and computer in large enough numbers to make the fragility of the kernel and hardware a widely known thing. Sure, there might be some sacrificial lambs when you open up things like that, but hey, I'm all about the greater good rather than the few who still need to learn the lesson of accountability and respect!
        • Re:Controversy??? (Score:5, Insightful)

          by blitzrage ( 185758 ) on Thursday January 17, 2002 @06:21PM (#2858676) Homepage
          People obviously should not mess with things they don't understand.

          My simple answer to that is: then how did you learn?
        • Except (Score:3, Insightful)

          by Tony-A ( 29931 )
          that people understand precisely by messing with things they don't understand. In fact, only for very unimportant things does anyone stand a chance of understanding them before messing with them.
      • People should not mess with what they don't understand.

        Oh goddamn I hate people with that opinion.

        Hell, why not? If Auntie is running her own Linux on her own computer, why shouldn't she be able to easily autocompile a new kernel? Since we are talking Linux here, even if the newly compiled kernel is the biggest pile of crap on earth, it wouldn't do much harm, as no distribution would overwrite the factory kernel anyway.

        So if something goes wrong, just boot with the factory settings and everything is just like before.

        Admit it, the *only* problem you have with this is that you would lose some elitist status.

      • Re:Controversy??? (Score:5, Insightful)

        by elefantstn ( 195873 ) on Thursday January 17, 2002 @07:18PM (#2859057)
        People should not mess with what they don't understand.


        I never would have learned anything if I thought like that.
    • Re:Controversy??? (Score:4, Insightful)

      by Nailer ( 69468 ) on Thursday January 17, 2002 @05:17PM (#2858134)
      Eric Raymond supports this idea, saying that it will bring Linux to a wider market. Those who oppose it mainly think that only an educated few should build their own kernels.

      My own personal opinion is that:

      * nobody should ever have to recompile their kernel (just update their distro in the worst case)

      * everyone should have the option of doing so easily if they want to.

    • by Otter ( 3800 ) on Thursday January 17, 2002 @05:25PM (#2858205) Journal
      I'd guess it's a "controversy" because Eric Raymond a) proposed it in a typically condescending, inflammatory way and then b) ran around publicizing the thread as a great controversy in the kernel world.

      Here is a different angle on the same issue, that makes for a better debate: Should the typical user be running a precompiled, distribution supplied kernel or a customized kernel that may offer performance advantages or may be wildly inappropriate and which creates immense tech support headaches?

      • by Anonymous Coward
        In response to your question, yes, I think we should do just that.

        Bill Gates
        Posting anonymously in order to preserve my karma.
      • Re:Controversy??? (Score:2, Interesting)

        by gmack ( 197796 )
        It's worse than that... he wanted autodetection of non-PnP ISA cards.

        Autoconfiguration is one thing... autodetection of hardware that was never designed to be autodetected is quite another.

        I for one would hope Aunt Tillie has a reasonably recent system. If she uses 10-year-old components she should expect it to be hard.
      • Should the typical user be running a precompiled, distribution supplied kernel or a customized kernel that may offer performance advantages or may be wildly inappropriate and which creates immense tech support headaches?

        But this misses the whole point. The point is that with a good autoconfigurator, there won't be an issue of a custom kernel that "may be wildly inappropriate". The autoconfigurator would detect the user's hardware (and possibly check to see which services/file systems they have running to know other kinds of support to compile in) and build a custom kernel that was actually appropriate and optimized for that user's specific hardware. If the system were well designed, there would be very little risk of ever winding up with a kernel that was wildly inappropriate. Any halfway decent design would also help to prevent a number of the most common newbie mistakes in kernel building, like forgetting to keep an appropriate, well tested kernel available in case the new one is a failure.
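
        To make that concrete, here is a minimal sketch of what the detection step might look like on a 2.4-era system, assuming lspci (or /proc/pci) and lsmod are available; the hardware-to-option mapping is purely illustrative, not a real driver database, and feeding the answers into the kernel's config machinery is left out:

        #!/bin/sh
        # Gather what the machine can report about itself.
        lspci > /tmp/pci-devices 2>/dev/null || cat /proc/pci > /tmp/pci-devices
        lsmod | awk 'NR > 1 { print $1 }' > /tmp/loaded-modules

        # Illustrative mapping from detected hardware to config answers;
        # a real autoconfigurator would consult a proper driver database.
        grep -qi "ethernet controller: realtek" /tmp/pci-devices   && echo CONFIG_8139TOO=y
        grep -qi "usb controller"               /tmp/pci-devices   && echo CONFIG_USB=y
        grep -q  "^ide-scsi"                    /tmp/loaded-modules && echo CONFIG_BLK_DEV_IDESCSI=y

        Everything past the detection step (merging these answers into a .config and keeping a known-good fallback kernel around) is where the design work described above would actually go.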

      • Re:Controversy??? (Score:3, Interesting)

        by rlowe69 ( 74867 )
        Should the typical user be running a precompiled, distribution supplied kernel or a customized kernel that may offer performance advantages or may be wildly inappropriate and which creates immense tech support headaches?

        Heh, I guess it depends on who you ask. Since Red Hat makes a lot of its money from support fees, I'm sure they won't mind walking through custom kernel configurations at a few dollars a minute, will they? ;)

        I suppose this is where open source and commercial processes differ: commercial joints see support calls as 'headaches', open source joints see them as 'a source of revenue'. Who are you going to get better support from, I wonder?
    • Re:Controversy??? (Score:3, Insightful)

      by einhverfr ( 238914 )
      I am here with you. I think that this is a good thing, but consider:

      Most of our aunts and uncles could not even consider installing hardware in their computers (with the exception of external devices) anyway, so with a few exceptions, I think this is not likely to initially make kernel building available to a wider audience. Can you imagine your 60-year-old uncle trying to install an internal NIC? That is more intimidating than the actual software, even though at present the software is much more difficult.

      Also, consider that there are few reasons for an average person to rebuild their kernel. I myself (as an advanced user) only do it for special purposes, and for this I require a high level of control over what gets compiled in and what gets omitted. I know that you will say that security patches are the real advantage of doing this, but for a firewalled, single user system (or one with only trusted family users as regular users), there is little need for patching the most common types of security holes which require physical access to the computer.

      (OK, so you are SLIGHTLY more vulnerable to malicious programs and viruses this way, but have you ever broken things by upgrading your kernel? I have, and then I had to find out where the problem occurred.)

      My question is: will dumbing it down mean less control, or can I still have the same level of control over how my kernel is built? If it means less control, then I cannot support it. Also, what if I am building a kernel for a different (slow) system which does not match my system, or I want to make a specialized boot image for a system recovery kit?

      Let's face it -- compiling the kernel sounds scary and all, but with make menuconfig and make xconfig, it is hardly rocket science. These should still be available in some form even if an automatic configuration utility is included...
  • by eaddict ( 148006 ) on Thursday January 17, 2002 @05:05PM (#2858003)
    I can see it now from the vendors: "Compiling your own kernel voids any software support." Can you imagine trying to keep up with all the changes as a software vendor? So maybe as a test system, yes. Supported? No.
  • by davejenkins ( 99111 ) <slashdot&davejenkins,com> on Thursday January 17, 2002 @05:06PM (#2858011) Homepage
    Am I reading correctly? Is this a debate over limiting vs. allowing certain behavior? What part of the Open Source philosophy got suspended while I was at lunch?

    Let some distribution try this. It may take off, it may fail-- that's what it's all about...
    • Is this a debate over limiting vs. allowing certain behavior? What part of the Open Source philosophy got suspended while I was at lunch?

      Actually, I totally agree. The question is whether people who support this will also see the hypocrisy of NOT supporting Microsoft adding more functionality to their mail programs, but with the unfortunate side effect of allowing people to execute programs that might contain viruses.

    • It could also be a discussion of whether it is worth the effort to develop such a tool... or whether it would make anything better...
      • Whether it is worth it to give users more freedom?

        I agree with the original post. Anything that allows people to more easily use their freedom with OSS is only a good thing. I cannot even believe there is an argument about it.

        The original article, with its reference to the "educational elite," is just crazy.
        • I mostly run Debian (when it comes to Linux) nowadays. Why? Because of the dselect convenience. I have thousands of pre-configured free packages to choose from.

          Did I compile a lot of stuff when I ran slackware? Yes.

          Do I know how to build debian packages? No.

          Would I be able to build a debian package if I found out I needed to? Probably.

          I don't feel I would have more freedom if there were a very simple program that created .deb files (maybe there is one). In fact, freedom for me is not worrying about how the .deb files were created in the first place.

          Why would an automatic kernel-compiling-wizard give more freedom to users than the opportunity to choose from a set of precompiled kernels?

          But of course, I cannot argue that more choice is less freedom. Hopefully the tool gets so good that everyone uses it.
    • by RedWizzard ( 192002 ) on Thursday January 17, 2002 @06:46PM (#2858866)
      Am I reading correctly? Is this a debate over limiting vs. allowing certain behavior? What part of the Open Source philosophy got suspended while I was at lunch?
      Unfortunately the story submitter felt the need to completely misrepresent the debate. The two camps' arguments in a nutshell (IMO):
      • Eric Raymond's view is that Aunt Tillie, who starts with a standard distribution, should be able to click an icon and have a new kernel downloaded, configured, compiled, and installed. He's talking about standard (Linus) kernels here, not the distribution's kernel package.
      • The other camp (which includes e.g. Alan Cox) doesn't see the need. Aunt Tillie would be better off sticking to the distro's kernel updates. If she wants to go beyond that then the resources are available for her to learn how to configure and compile a kernel using the existing tools.
      So no one is talking about limiting behaviour; that was just poor reporting. Personally I think there probably aren't many Aunt Tillies who would find a need for the sort of tool Eric is advocating (although others may find it useful).
      • Aunt Tillie (Score:3, Insightful)

        by AftanGustur ( 7715 )


        I'm with Cox on this one: I think Aunt Tillie would be better off with the distro's kernel (where she might have lm_sensors, nVidia, TV and radio drivers), but!

        I'll defend Aunt Tillie's *right* to choose!
        That's what freedom is all about: options!

  • It seems to me... (Score:2, Insightful)

    ...that this would just make it easier for a Linux newbie to break the OS. Then they can't fix it and are screwed. Then you lose a new Linux user because they don't want to feel stupid using their computer.
    • They'll still get the standard binaries, right?

      That means they'll have to go out of their way to tweak the kernel. It should be easy to throw up a disclaimer. Think of this: even Microsoft includes tools for editing the registry, with the standard boilerplate warning.

  • If people expect to make Linux a desktop OS, then this will probably not fly. The sheer number of total borkages compared to the gain is not worth it.

    If people expect to make Linux a server/embedded OS, then it *would* be nice if powerful things could be done without scaring off PHBs and NT admins.

    Though of course it could be argued that PHBs and NT administrators are just as likely to screw themselves as Joe User...
  • With the speed at which devices come out, even the RPM's aren't always going to be up to date. If it was easy to run a script that would:
    1. Get the source
    2. Probe for devices
    3. Configure, make, make install
    more people might consider using Linux because one of the major hassles is removed.
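
    As a rough sketch of those three steps, and assuming some hardware-probing tool (called autoprobe-config here, which does not exist) writes the .config, the whole thing could be about this short; the kernel version and mirror path are just examples:

    #!/bin/sh
    set -e

    # 1. Get the source.
    cd /usr/src
    wget http://www.kernel.org/pub/linux/kernel/v2.4/linux-2.4.17.tar.gz
    tar -xzf linux-2.4.17.tar.gz
    cd linux

    # 2. Probe for devices (hypothetical tool; this is the hard part).
    autoprobe-config > .config
    yes "" | make oldconfig

    # 3. Configure, make, make install.
    make dep && make bzImage && make modules
    make modules_install
    make install    # copies the image into place and reruns lilo on most setups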
  • by mystery_bowler ( 472698 ) on Thursday January 17, 2002 @05:09PM (#2858052) Homepage
    Most Linux users are already familiar with the caveats and repercussions of customizing their kernel. This kind of tool would just make it easier to get to.

    There aren't all that many "casual" Linux users. That market is dominated by Microsoft. And if you've deployed Linux in a work environment, chances are you won't allow a tool like this to be used, because you'll probably want to lock down the configurations (making your life as a sysadmin a lot easier).

    Assuming Linux continues to proliferate to the consumer market, I still wouldn't be worried about people tinkering with their kernel too much. Most people, especially at the "average Joe" level, don't understand the inner workings of their OS. Heck, most of them fear their OS and assume that they'll break something if they tinker with the OS's inner settings. I wouldn't conclude that simply because the tool is there that most people would be interested in using it.
    • Which, I have to say, is why I like the fact that Windows is the supported desktop at work; the last thing I need is some MIS guy telling me that I can't recompile my kernel or make the changes I want.
    • This is not true at all. People /migrating/ from a Windows environment lack the knowledge that you might be able to assume in a "hacker." I mean, they're burgeoning hackers, right?

      Besides, some people like to deploy Linux without wanting to /know/ exactly how to compile a kernel. After all, it may not be necessary. Someone could deploy Linux because they want to run their own webserver, or code assignments for school, or code their own little pet projects.

      I have a friend who just came into the Linux world. He couldn't figure out how to change the resolution in X... "The option wasn't anywhere in the menus!" Of course not, but he didn't know that, and how could he have? He just moved over! He's still learning.

      I agree with one of the first posts: "Why is this an issue at all? Increasing accessibility is /always/ a good thing."
    • Most people seem to answer posts like these by saying that the Linux community wants to take the "casual users" away from Microsoft.

      Although I have my doubts about that, I think a tool like this would be potentially useful (perhaps even more useful) for non-casual users.

      I have configured a kernel or two as a user and I never found the process too complicated with the tools already available, but it's still a step that can take from 10 minutes to half an hour, depending on how complicated the setup is, what decisions you have to make, and how many acronyms you have to check just in case they apply to your hardware this time.

      I'd like to take the time at some point to do that, but sometimes I'd like to get the hardware to work fast and just get on with my life, and the distribution kernel doesn't always work at those times.

      This is doubly true if you're installing Linux for someone else, and they happen not to have the most compatible hardware, or know very little about their manual-lacking components. Spending hours configuring kernels, telling them what you're doing and trying them out is not fun and probably, at that point, not even educational.
  • Those who oppose it mainly think that only an educated few should build their own kernels.

    This I just do not understand. Should that attitude have prevailed when it came to PCs and ISA cards in the pre-Plug-and-Play days, when you had to be an expert at getting interrupts set correctly or your system hung? (And yes, I realize problems still happen with PnP, but it's still a billion times better than the old days.) What an elitist attitude.

    *Make it easier*

    Should we get rid of the './configure && make' cycle because it's too easy for those of us who don't know the ins and outs of the compile cycle?

    (Man, am I in a snippy mood today or what!)
  • by hpa ( 7948 ) on Thursday January 17, 2002 @05:11PM (#2858066) Homepage
    I think this particular article is an egregious misrepresentation of the arguments. No one is arguing that an autoconfigurator isn't a nice thing to have; however:
    • Eric has requested several in-kernel facilities solely to support his autoconfigurator. Most of these requests have been at the very best ridiculous.
    • Aunt Tillie shouldn't have to build a new kernel. I can't emphasize this enough. We should be striving towards modular autoconfiguration at runtime, so you don't have to mess with your kernel because your hardware changed -- either at runtime or between boots.
    • The autoconfigurator is bound to do an imperfect job, simply because a PC doesn't give you enough information to tell exactly what is in it, at least not in the presence of ISA cards. There is no magic you can do to avoid this problem.
    • The kernel people are already drowning in bogus bug reports, to the point where it is very hard to avoid ignoring real bug reports. This, unfortunately, isn't likely to improve the situation.

    I really can't emphasize strongly enough that I believe that if Aunt Tillie has to build her own kernel, we have much bigger problems than Eric's autoconfigurator will solve.

    • by xant ( 99438 ) on Thursday January 17, 2002 @05:25PM (#2858208) Homepage
      Eric has requested several in-kernel facilities solely to support his autoconfigurator. Most of these requests have been at the very best ridiculous.

      I suppose I'd have no trouble believing this. I'd still like to know what the requests are and why they are ridiculous.

      Aunt Tillie shouldn't have to

      But... she does. We don't have runtime autoconfiguration that works in every case. If an autoconfigurator is easy to build, and won't impact the people working on runtime configuration, then why stop them from doing it? My computer should read my mind (or at the least, the pointer should move to the thing my eyes are looking at), but I'm not going to tell people to stop working on improving mouse support.

      The autoconfigurator is bound to do an imperfect job

      True enough, but this is true for runtime autoconfig as well.

      The kernel people are already drowning in bogus bug reports

      Kernel bugs are reported via email on the mailing list. This is described in marginal detail in /usr/src/linux/REPORTING-BUGS. Furthermore, it begins with the following dubious paragraph:

      What follows is a suggested procedure for reporting Linux bugs. You aren't obliged to use the bug reporting format, it is provided as a guide to the kind of information that can be useful to developers - no more.
      What this document highlights more than anything is that kernel developers are drowning in bug reports because linux kernel bugs are reported in an informal format on the mailing list. Get a proper bug tracking system and it will be much easier to keep track of real bugs. This should be done regardless of whether or not we make "kernels for the masses". I hadn't heard about the bug report problem until you brought it up, and it's frankly amazing that it hasn't been addressed in this manner already.
      • by rlowe69 ( 74867 ) <ryanlowe_AThotmailDOTcom> on Thursday January 17, 2002 @06:24PM (#2858696) Homepage
        I hadn't heard about the bug report problem until you brought it up, and it's frankly amazing that it hasn't been addressed in this manner already.

        Actually, the bug/patch reporting problem was mentioned in a very recent article [slashdot.org] about Linux VMs. Rik van Riel complained that Linus' (rather human-based) system was prone to missing patches, no doubt because the mailing list is filled with bogus bug reports, if indeed these are the same lists. Even if they aren't the same lists, Linus would probably have to monitor both anyway.

        The point is that we have clear evidence a better system is needed for bug reporting and patch submission to give the main developers some way of organising and prioritising things. Clearly a simple mailing list does not suffice when the number of people submitting gets very large. Any takers?
    • I couldn't agree more. To take it a step further, I don't understand why one would expect "Aunt Tillie" to even understand what a kernel is. Most people don't want to be bothered with this stuff -- they just want a computer that works.

      Let's say some device isn't working properly and fixing it happens to require a kernel rebuild. Aunt Tillie couldn't care less about the fact that a rebuild is required; she just wants a working machine. The auto-updater should take whatever steps are necessary to deliver what the user wants and expects.
    • I tend to agree with you, except for one thing.
      The Linux developer community has been promising the elimination of the need to recompile a custom kernel for quite a while now - and I don't think it'll ever really happen.

      I say this primarily because Linux keeps getting used for niche purposes (think dedicated hardware like firewalls or routers, MP3 players in cars, etc.). In these situations, people need an OS that can be trimmed down to the bare essentials.

      It's cleaner to compile a small kernel that has exactly what's needed in it. Otherwise, you have to have all the separate little module files stored someplace on the device - and the storage device may be severely restrictive as to how much and how many files can be put on it.

      Not only that, but compiling all the modules seems to be the slowest part of a recompile process. If I know my kernel is only going to work with specific hardware, in an appliance type setting, I'd rather skip the whole step of compiling modules.
    • I don't think it is smart to have Aunt Tillie care about kernel recompilation, even if almost everything is done by an autoconfigurator.
      But what about automatically recompiling the kernel while Aunt Tillie's screensaver is running? The kernel could collect usage and performance data while it runs and automatically make configuration changes that suit the way Aunt Tillie uses her machine.
      Modules are a nice thing, but there is a small performance loss when you use them. Why not ship distribution kernels with almost everything compiled as a module, and then let an autoconfigurator compile a new custom kernel every few weeks until the kernel gives Aunt Tillie near-optimal performance?
      ISA cards are a problem, but there are hardly any ISA cards left in Aunt Tillie's systems these days, and normal distribution kernels face the same problem: they also need to try to find all the ISA cards in your system, and normally that doesn't work too badly.
  • Why bother? (Score:3, Informative)

    by crow ( 16139 ) on Thursday January 17, 2002 @05:11PM (#2858071) Homepage Journal
    Most of kernel configuration is simply a matter of which drivers to include in the kernel instead of as modules. Distributions ship most of this stuff as modules, so all that a kernel autoconfigurator would do is notice which modules are loaded and build them as part of the native kernel. The advantages of this are minor -- slightly better memory utilization, no need for an initrd.

    On the other hand, there are some areas where an autoconfigurator would be handy, such as determining which chipset features/bugs to compile for. Hopefully this project will focus on the areas of configuration that are more complicated than (y/M/n).
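
    For the CPU part at least, the probing is not hard. A back-of-the-envelope sketch, assuming /proc/cpuinfo and the 2.4 x86 processor-family options, with a deliberately simplistic mapping:

    #!/bin/sh
    # Pick a processor-family option from what the CPU reports about itself.
    MODEL=$(grep "model name" /proc/cpuinfo | head -1)
    case "$MODEL" in
        *Athlon*|*Duron*)  echo CONFIG_MK7=y ;;
        *"Pentium III"*)   echo CONFIG_MPENTIUMIII=y ;;
        *)                 echo CONFIG_M686=y ;;   # conservative default
    esac

    The chipset-errata side is messier, since the needed workarounds aren't something the hardware will simply volunteer, but the same pattern applies.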
  • I remember setting up my first Linux installation on a laptop and the hell that ensued when trying to figure out how to put together a custom kernel that would support PCMCIA. Yes, I did learn a lot about the kernel, the Linux boot process, compilers and all sorts of other stuff. The problem is, the average computer user has no desire to do any of these things. This is why the average user won't use Linux. If the goal is really to get Linux on more desktops, we're going to have to see WAY more wizards and configuration tools.
    I think the beauty of Linux is that I can manually edit config files to my heart's content, or I can fire up Linuxconf and do the same thing.
    No one forces me to do either.
    Choices are good.
  • "Compiling a kernel" means collecting seeds for aunt Tillie.

    This seems like a bad idea if it's a desktop icon or an easy-to-access program. Let Aunt Tillie mess with the kernel and watch how fast the computer grinds to a halt; heck, we're lucky if Aunt Tillie knows that a computer mouse is not a rodent!
  • How much easier can it be?

    tar -xzf linux-2.4.17.tar.gz
    cd linux
    make xconfig
    make dep
    make clean
    make bzImage
    make modules
    make modules_install

    ...and make it boot...

    I mean, if you think that is hard, you probably won't be able to give any useful instructions to a kernel configuration program at all... Maybe you wouldn't even know you need a new kernel...

    What is nice with linux (compared to Windows) is that very few things happen "behind your back". The system does not change itself. I find this very comforting.

    And modern distributions tend to make it quite easy anyway... I installed Red Hat 7.2 from ISOs today for the first time in over a year. All hardware was autodetected and worked without any tweaking at all. (Then I felt like compiling my own kernel to play DVDs well, but that's another story.)
  • by Astral Traveller ( 540334 ) on Thursday January 17, 2002 @05:13PM (#2858088)
    The need to compile custom kernels is a wart inflicted by Linux's monolithic nature. Instead of encouraging the painstaking and error-prone task of compiling custom kernels, we should be working on moving more and more kernel functionality into modules, which are loadable and configurable at run-time. It will always be easier and faster to set up a tool that installs the proper modules with the correct parameters than it will be to tweak a monolithic kernel config, then spend hours compiling the whole 20MB tarball's worth of kernel source just to add support for a new feature.

    While 2.4's module support is excellent, and modularisation is becoming more and more prevalent throughout the Linux architecture, there are still several important features which need to be excised from the kernel core and made available as runtime modules. Trivial features such as APM support, SMP and Unix sockets shouldn't require a full recompile to activate. Why do we insist on prolonging the life of "make config" and its brethren when we could very well do without them altogether?

    • Hours recompiling? You must have a very slow machine. But seriously...

      The biggest problem with modules is that (by design) the binaries aren't necessarily compatible from kernel to kernel. They may not even be source compatible, as Linus and friends like to change broken architectures from time to time.

      Debugging a kernel loaded down with proprietary binary modules is time consuming, and often counterproductive -- if kernels were binary compatible, this might further encourage the writing of non-GPLed binary modules, and cause compatibility problems galore.
    • According to LWN [lwn.net], Alan Cox said that Linux 2.6 will no longer have "compiled in" drivers.


      From: Alan Cox
      To: babydr@baby-dragons.com (Mr. James W. Laferriere)
      Subject: Re: ISA hardware discovery -- the elegant solution
      Date: Mon, 14 Jan 2002 18:08:32 +0000 (GMT)
      Cc: alan@lxorguk.ukuu.org.uk (Alan Cox), esr@thyrsus.com,
      cate@debian.org (Giacomo Catenazzi),
      linux-kernel@vger.kernel.org (Linux Kernel List)

      > Hello All , And what mechanism is going to be used for an -all-
      > compiled in kernel ? Everyone and there brother is so enamoured
      > of Modules . Not everyone uses nor will use modules .

      For 2.5 if things go to plan there will be no such thing as a "compiled in"
      driver. They simply are not needed with initramfs holding what were once the
      "compiled in" modules.


      Alan
  • Although I use FreeBSD, building a custom kernel is good for Linux or any of the *nixes. You can get rid of device drivers that slow down the boot process, and you can tailor optimization for your specific uP. That will be especially true if we ever get a gcc with decent Athlon optimization. I'm also told that taking out the plain-Jane i386 support speeds things up considerably.
  • If you want zero-effort working systems, the distributions are the way to go at the moment. If there is anything that can be done to help that unfortunate situation, I am all for it.

    I know lots about unix in general and linux in particular. I've written kernel drivers. I've designed embedded CPU's and PCI plugin cards. I am generally regarded as being very technically minded :-)

    The flipside: I have also been mystified as to why one of my (admittedly more esoteric) kernels just gives up the ghost at inopportune moments. It's almost always my fault, and I almost always learn more from the mistake, but it's sometimes a non-timely learning experience ... I actually *like* learning this sort of thing, I just prefer to choose the time of my enlightenment :-)

    In short, why would you want to make it difficult? Use the time you save to solve other problems instead - ones that someone else hasn't kindly provided a solution for...

    Simon.
  • What's the problem if Joe Rube decides to build a new one? I mean, if he smegs up because he didn't ask Jane the Ubergeek to help him, all he has to know is how to boot the prior kernel, and no damage done for the most part. If he's using Mandrake, he doesn't even need to worry about how the LILO prompt works, as he'll be able to select the old kernel from a list at bootup. Force a timeout for LILO and keep the old kernel and you're ALMOST idiot-proof, IMHO.

    Devo Andare,

    Jeffrey.
  • by Stiletto ( 12066 ) on Thursday January 17, 2002 @05:14PM (#2858102)
    Ponder:

    Should Aunt Tillie Build Her Own Kernels?
    Should Aunt Tillie Install Her Own OS?
    Should Aunt Tillie Install Her Own Applications?
    Should Aunt Tillie Run Her Own Applications?
    Should Aunt Tillie Produce Her Own Documents?
    Should Aunt Tillie Think Her Own Thoughts?
  • Configuring a kernel from source simply ain't that hard. That's why so many pimply-faced youngsters do it. For Eric Raymond to characterize his "make config with training wheels" as something that is needed to get kernel updates to females (his examples are Aunt Tillie and Penelope Power User) is just sexist bullshit.
  • Disclaimer (Score:5, Insightful)

    by SirSlud ( 67381 ) on Thursday January 17, 2002 @05:14PM (#2858105) Homepage
    *****WARNING: USING THIS TOOL CAN SCREW UP YOUR COMPUTER, BIG TIME. IF YOU WANT TO USE IT, DON'T BE MAD AT ANYONE BUT YOURSELF IF YOU FUDGE THINGS UP*****

    If MS can include regedit, you can't tell me that we can't include autoconf... I mean, seriously. The only thing you could end up with is some fucked kernels (which should get along just fine with the fucked registries) and some users who will learn, be cautious, and end up with a better understanding of their computers.
  • Wait a minute... (Score:3, Interesting)

    by BaronM ( 122102 ) on Thursday January 17, 2002 @05:15PM (#2858116)
    ...doesn't everyone build a custom kernel? I've been using Linux for years, and I always assumed that the prebuilt, "Christmas Tree" kernels were just to make installation easier. People actually run with those things? Heh..
  • by Eloquence ( 144160 ) on Thursday January 17, 2002 @05:16PM (#2858125)
    I think that kernel compilation should obviously be as easy and accessible as possible. After all, one of the promises of Linux is to make hacking fun (cf. L. Torvalds: Just for Fun), and there's no reason to build artificial barriers into the OS. The more I play around with Linux, the more I find myself exploring other concepts of computing (networking, various script languages, filesystems etc.) about which I would not even think on Windows, because everything that does not work best over a graphical interface is just so user-unfriendly. (OTOH, there are many times when Linux has been frustrating, especially with regard to documentation -- I think using different tools, like Wikis, may make this part of Linux development more accessible to contributors.)

    But I don't think "Aunt Tillie" should accidentally come anywhere near a kernel: users should not care about kernels because they have to, but because they can. That means that most hardware configuration tasks should be accessible without touching the kernel, including installation of new drivers. So include lots of warning signs -- ideally, a normal user will never have to log into his box as "root" except to install new software with a graphical apt-get-like tool.

  • by Cyberllama ( 113628 ) on Thursday January 17, 2002 @05:16PM (#2858127)
    I mean honestly, what claim can Linux hold over Windows if not that the availability of the source code affords the user more freedom? This is, as far as I'm concerned, what Linux is all about. I am totally unable to understand any argument against making one of the most important benefits of Linux more accessible to a wider market.
  • Why not just compile everything as modules? That's what most modern distributions do. Compile everything as a module and only insert the ones for the devices that the user has. I know that on all my computers I compile several ethernet drivers as modules for cards I don't have (like ne2k-pci), so that if my ethernet card breaks or something, I'll have the module available anyway.
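
    For what it's worth, the runtime side of that is just a couple of lines of modutils configuration. A small illustration, assuming the ne2k-pci example above (the commented-out alias is only there to show the pattern):

    # /etc/modules.conf -- only the aliased driver is loaded for eth0,
    # even though plenty of other ethernet modules sit compiled on disk.
    alias eth0 ne2k-pci
    # Swap the card and you edit one line instead of rebuilding the kernel:
    # alias eth0 8139too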

  • Sure, we all want to build our massively customized kernels for server use, etc. etc.

    Take 2 boxes, one running 2.2, one running 2.4. Throw KDE on there, and I am willing to bet that the average Aunt Tillie can't tell which one is faster/better.

    The kernel as packaged by distros does a good enough job of running the typical desktop system that Aunt Tillie uses to browse the web. Leave the custom build stuff for the experts.
  • ...that I'm not sure what the argument is about. This is pretty much an anarchy; if someone wants to develop this, then more power to them. They're free to do what they want, and if it's good enough, then it should be accepted into distros. That's my official, thinking answer.

    My emotional reaction is Noooooooo! Not because I'm elitist and arrogant (I can always find another thing to be arrogant about: "You use the newbie tool to rebuild, loser"), but because I don't want to field a hundred questions each from a hundred people. I don't want my mother calling me and asking if she needs IrDA modules, or to be constantly answering questions at the bar from people who probably have no need to get into that stuff. It's bad enough now fielding questions about Windows... I gotta get this shirt [thinkgeek.com] from ThinkGeek.

  • by rice_burners_suck ( 243660 ) on Thursday January 17, 2002 @05:29PM (#2858236)

    It would be great if anyone could build a custom kernel.

    Imagine this... Let's say it's 5 years down the road, and the hot new computer is the 72 GHz Apple Pentium G7 with 64 gigs of on-chip RAM. Hard drives have been totally eliminated because new, memory-based permanent storage technology has been invented and proven over the past 2 years. An entire meg can be recorded in under 1 microsecond. The only remaining mechanical component of a computer is the standard Glass-RW drive (the 2-terabyte recordable successor to DVD), so the whole computer is now a small single board, and most of the electronic hardware is inside the main processor, an inch square in size. In fact, the plugs on this board take more room and cost more than the computation hardware.

    Now imagine that a buildworld takes 4 minutes to complete. Here's how installation of FreeBSD 9.8-RELEASE takes place. (Yeah, I know this is a Linux thread.) You pop the Glassdisk in the drive, choose a few options, and all your software is configured, optimized, checked for security vulnerabilities, compiled and installed within 2 or 3 minutes.

    In order for that to happen in 5 years, Granny needs the ability to custom configure her own kernel right now.

  • What about putting the autoconfigurator and compile step in the install script, so that the "default" kernel is optimized for your hardware? I think that has some very interesting possibilities.

  • Aunt Tillie bought her PC and it came with Windows XP preinstalled, a web browser preassigned, and an email program already set up for her preassigned ISP - so guess what...

    she doesn't give a crap and wouldn't know what the hell all of you geeks in here are talking about.

    A kernel to Aunt Tillie is a piece of corn... Jeez...

    Talk about being out of touch....
  • Here we have an actual interview with Aunt Tillie, and we got her opinion on this topic...

    Pro Coder: "So Aunt Tillie, how would you like to compile your own custom Linux Kernel?"

    Aunt Tillie: "What the hell are you talking about?"

    Pro Coder: "You know, compile a custom Linux Kernel, so you can have a very customized OS."

    Aunt Tillie: "Why would I want to do that?"

    The conclusion we draw from this interview: your average user doesn't have any idea what a Linux kernel is and doesn't need a custom kernel. At least, not yet.

  • by Brendan Byrd ( 105387 ) <SineSwiper-slash ... esonatorSoft.org> on Thursday January 17, 2002 @05:40PM (#2858340) Homepage Journal
    ...when elite hackers said that only elite hackers should have Linux, and that all of these "Red Hat" guys were polluting the user base. They are, of course, full of shit.

    The whole point of Linux is having a stable and friendlier version of UNIX that is GNU and doesn't have any ties to MS. We now have Average Joe User with his own copy of Linux/X, and he is using it just fine. Why should we limit ourselves because we need to do it the "old-fashioned way"? Let them (and us) have an easy-to-run auto-config script for building kernels. Are we going to delete our "make menuconfig" scripts and tell everybody to replace them with "vi Makefile", just for being elitists?

    Personally, I think these are the "10 miles in the snow, both ways" people, who still believe that the best way to configure PPP on Mandrake is rolling your own scripts. (Uhh..."netconf"...duh!)
  • "Compiling a kernel is hard and should only be done by the select few."

    That sounds like what programmers used to say when they wrote things in machine language.

    The goal is to make it easy enough for anyone with a brain to do it. Hell, they don't even need to know that they're recompiling a kernel.

    "Oh, you want to do that? OK, give me a sec and then I'll reboot and you'll be all done." *compiling*

    That's the goal. The user doesn't HAVE to know just 'what' they're doing.

    Do you really know exactly what that for statement you just wrote compiles to in machine code? Do you care? If you want OpenGL, do you have to know ANYTHING about the kernel?

    Sure, it HELPS to know these things, but for the end user it's not a must and should never be a must.
  • by LinuxParanoid ( 64467 ) on Thursday January 17, 2002 @05:47PM (#2858418) Homepage Journal
    Easier to use tools are great.

    I just hope we don't start designing things such that people say "oh, to do that, just reconfigure your kernel with the foobar option". Feature sets should generally not require a kernel recompile, IMHO. For a long time, this was a UNIX weakness.

    If we can avoid this (which is, after all, worse than the old "reboot NT to configure something"), I'm for it 100%. I'm not saying that you have to recompile the kernel much nowadays (I had to once, to get an unsupported Ethernet driver working), but if kernel recompiles get really easy, I'm nervous that people will start to rely on that way of doing things. Which would be bad.

    --LP
  • by Jack William Bell ( 84469 ) on Thursday January 17, 2002 @05:53PM (#2858458) Homepage Journal
    Aunt Tillie doesn't need this. But, as a computer consultant and VAR, I need the ability to easily make these kinds of changes based on what my customers need.

    Sure, I can do this myself the old-fashioned way. But this is the kind of thing I prefer to delegate to someone with a lower billing rate so I can focus on the things that really bring in the bucks. It is easier to train someone to use Eric's AutoConfigurator than it is to explain makefiles and such...

    Jack William Bell, who likes the KISS method in most things.
  • by Weasel Boy ( 13855 ) on Thursday January 17, 2002 @05:53PM (#2858469) Journal
    I have multiple engineering degrees and several years' experience building and using Linux and BSD, and *I* have trouble configuring, building, and installing the Linux kernel. Forget Aunt Tillie -- I want a good kernel autoconfig tool for myself!
  • by 3seas ( 184403 ) on Thursday January 17, 2002 @05:56PM (#2858494) Homepage Journal
    Programming is the act of creating automations of complexities that are made up of simpler things.

    Does the programmer re-write open() every time they need to open a file?

    There is not only nothing wrong with making it easier to build a custom kernel; in fact, there should be a growing interest in this sort of simplification, given that the GNU Hurd is about not only modularity but about servers/translators and creating them, even custom ones, as needed.

    This can be taken even further, in that autocoding tools can and should be built for GNU users.

    A hundred years from now, how do you suppose programming will be done (given that programming today is only about 50 years young)?

    As things are done today, it is not possible to write a program of the complexity we imagine a holodeck program would have (and we do have such virtual reality cubes today in university labs).

    It won't happen until the general programming field realizes the need to genuinely and honestly address and automate the field of programming itself. Certainly everything else can be automated, including human balance and movement (Segway).

    It's foolish to continue the illusion that programming is not itself automatable. And to begin making it happen, where better than at a higher level, like an autoconfiguration system that allows custom kernels to be built? (Or at least, it's one place to begin.)

    A recent research paper on autocoding [york.ac.uk] presents the current/recent mindset on autocoding. It's worth reading to see how young and admittedly immature the field is. Open systems and open source software such as the GNU efforts (Linux, the Hurd, etc.), with their open communities, have a far better ability to do what needs to be done than any private effort, which will be biased away from doing the things that need to be done.

    Soooo, anything that automates computers and their use is inherently a good thing, for it will allow us all to reach and achieve much more advanced systems and their benefits.
    • With respect to autocoding: you may be able to automate some of the implementation details of software creation (this may be what you call "coding"; if so, you're right), if you're willing to sacrifice some performance (which I'm betting you are).

      If that is what you mean by autocoding, then the field is not against it in any way. It seems almost everyone is very much for it, and willing to make a buck in the process (CASE tools, UML, RADs, garbage collection, run-time environments, design patterns, etc).

      But if you're talking about automating software DESIGN, then you're dealing with more fundamental problems. It's not a matter of immaturity that this is thought of as "really hard" (or impossible); it's the result of serious research, and you would be wise to check the literature.

      Modelling and solving arbitrary problems of arbitrary complexity (fully automated programming) is, as far as I know, equivalent to "strong AI".

      If you have a solution for THAT problem, publish it as a paper in any scientific journal and you'll easily get your Nobel prize.
  • My Crazy Idea (Score:4, Insightful)

    by graveyhead ( 210996 ) <fletch@fletchtr[ ]cs.net ['oni' in gap]> on Thursday January 17, 2002 @06:01PM (#2858543)

    I want a distribution that has a GUI installer similar to the ones RH and Mandrake have, but instead of invoking "rpm -i" for each package, it would build all the install packages from source drops. The "installer script" could be a large XML file that describes how to compile each package, what its dependencies are, and provides a mechanism for tweaking each package's configuration. Most of the packages out there can have their runtime configuration set via their 'configure' script (wow, that's a lot of "configures"), making it a fairly uniform approach. In addition, at the beginning of the install, it would be neat to see controls for your *exact* hardware configuration that get turned into CFLAGS like -march=i[my]86 and -O3, etc.

    The only drawback I can see is that it would increase install times by a *lot*. However, in the end you would end up with a *highly* optimized distribution.

    The idea came to me while building my own [byolinux.org].
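
    A back-of-the-envelope sketch of the build loop such an installer could run, assuming the XML manifest has already been flattened into a simple "name version configure-flags" list (the package names, versions and flags below are only examples); the tuned CFLAGS are exactly what the hardware-detection screen at the start of the install would fill in:

    #!/bin/sh
    set -e
    # Host-specific flags, chosen once from the detected hardware.
    export CFLAGS="-march=i686 -O3"
    export CXXFLAGS="$CFLAGS"

    # packages.list is the flattened manifest, in dependency order, e.g.:
    #   zlib 1.1.3 --shared
    #   ncurses 5.2 --with-shared
    while read name version flags; do
        tar -xzf "sources/$name-$version.tar.gz"
        ( cd "$name-$version" && ./configure --prefix=/usr $flags && make && make install )
    done < packages.list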

  • by Evanrude ( 21624 ) <david.fattyco@org> on Thursday January 17, 2002 @06:09PM (#2858601) Homepage Journal
    I think an easy-to-use tool with which Joe User can make his own kernel is a great step toward making Linux more usable for the masses, i.e. in a desktop environment. You could make a more compact kernel without knowing ALL the minute system details and all the random hardware that most people don't have or wouldn't recognize. Definitely a step in the right direction.
  • by eyeball ( 17206 ) on Thursday January 17, 2002 @06:41PM (#2858816) Journal
    I think if Aunt Tillie can create a swap partition during installation, pick a window manager, download, compile and install the latest Mozilla browser update (or maybe she prefers Opera), configure her firewall, and set up lpr for her printer, she can recompile her kernel. I just don't want to be around when she starts looking for "Freecell."
  • by RovingSlug ( 26517 ) on Thursday January 17, 2002 @06:49PM (#2858888)
    "Hello my name is RovingSlug, and I used to be a Kernel Version Whore."

    That was way back in the 2.1.x days. Then, I knew all the caveats of the minor revisions, and I knew which particular revisions were more stable than others. Now I'm nearly the opposite. I'm happy to leave my system running for months on end without checking the status of the kernels. I actually have to "cat /proc/version" to see what revision I have fired up.

    That attitude is only reinforced with the 2.4.x tree. Pondering a kernel upgrade is like pondering if I want to step into a minefield.

    Reading the comments in "2.4, The Kernel of Pain", I know there are still Version Whores out there. They know the obvious stuff, like "don't use 2.4.15". And I'm sure there's less obvious stuff, too. Aunt Tillie or whoever isn't going to keep up. If she steps onto the 2.4.15 mine (or its equivalent in the future), she could do damage to her system.

    To that end, we could use a short, digestible ranking/summary system for kernel revisions. (Or does one already exist?) Which kernels in the stable branch are really unstable? Which are the most robust? Many, Aunt Tillie and myself included, would find value in such a system, regardless of a Kernel Autoconfiguration program.

  • Elitist snobs (Score:3, Insightful)

    by CAIMLAS ( 41445 ) on Thursday January 17, 2002 @07:19PM (#2859064)
    I don't think that anyone genuinely interested in Linux/open source/what have you, and who doesn't have their head stuck up their own ass in conceit, would honestly say that an autoconfiguration tool for the kernel is bad. Let's look at this objectively.

    First, such a tool would only make Linux easier for people who are not familiar with computer workings, and make it a more viable option for those who don't want to mess with, or aren't knowledgeable about, the inner workings of the computer. I've run into many people (online) who don't have support for xy device with #.#.# kernel, don't want to install another distro, and need to compile a kernel.

    Second, (as far as I know) this would be something fairly easy to do, provided that the device to be used is already attached to the system - the kernel seems to have a decent detection system already. Just have, say, a 'kernel compilation disk' which would contain the kernel you want to compile, with all the possible modules compiled in, and which would probe your system. It'd have its own initscript with a step-by-step process walking you through the configuration (e.g., Is the kernel source tree untarred already? Is the kernel source tree in a location other than the standard location? etc.)

    Just some ponderings.
  • Kernel rebuild (Score:3, Interesting)

    by digitalhermit ( 113459 ) on Thursday January 17, 2002 @07:23PM (#2859075) Homepage
    This thread's funny.

    I put together a kernel rebuild guide a few years ago ( Kernel Rebuild Guide [digitalhermit.com] ). I'd guess that for perhaps 95% of Linux users, there's absolutely no need to rebuild a kernel. For those that do, it's usually to enable a feature or to tweak just an iota more performance from the system.

    Sure, anything that makes the system easier to use is good. It would be wonderful if guides such as mine were obviated. At the same time, should we really be wasting time on what's essentially a band-aid? By this I don't mean that Aunt Tillie shouldn't re-compile her kernel, only that if Aunt Tillie (a regular user) requires the feature then the distribution should already support it through other tools.

    The main problem I see is that no matter the frontend, a kernel recompile will invariably ask a lot of questions that Aunt Tillie may be unprepared to answer. And if she can answer them I strongly believe that she would have absolutely no problem with the current configuration tools such as xconfig/menuconfig.
  • Why not? (Score:3, Interesting)

    by b0r0din ( 304712 ) on Thursday January 17, 2002 @07:30PM (#2859095)
    This is perhaps the dumbest flame war I've yet seen in a discussion. This is one of the reasons Windows is leading Linux by FAR in the OS wars - if you even want to call it a war.

    The answer is very simple. Of course allow this autoconf. Autoconfiguration tools are great; people should be able to run a program that makes life easier.

    BUT

    If you're building a distro of Linux for end users like your fictional Aunt, don't include the feature. Just don't include it. In almost every case there isn't enough of a performance increase from a kernel optimization for you to notice. Truly, if Linux wants to make it into the mainstream, it is, for all intents and purposes, going to have to be dumbed down a bit. People who just want a simple environment to write their reports, file their taxes, surf the web, and email friends are not going to give a crap about optimizing their kernel. That is best left to hackers. Why not create a distro that speaks to the masses? So don't put it on your 'end-user' distros. That's why distros exist, isn't it?

    Now let's face it, the majority of people who use Linux are using it in server environments. If I'm a sysadmin and I want to set up this new distro of Linux quickly and easily without having to search through lines of what ends up looking like a bunch of code, I'd easily take autoconf. I don't see what the argument is about, really. What it comes down to is that there's a bunch of little Linux brats (no better than 5cr1p7 k1dd13z if you ask me) who are trying to protect their little clique of Windows-bashers and Linux advocates (who probably don't use Linux anyway), and who would rather dismiss the general public as idiots than work with something innovative and smart that makes life easier. These are the Syds of the world who insist that the world was better when people wrote their programs on punch cards.
  • by CDWert ( 450988 ) on Thursday January 17, 2002 @07:34PM (#2859120) Homepage
    Well, if all the Unter-Geeks start flooding the lists with "why doesn't my kernel work?" then it sucks...

    UNLESS that leads them to learning to do it themselves.

    I've often wondered why DISTROS didn't have an autocompile script for their kernels, so that at install time it builds one to suit your system.
  • by by2 ( 549446 ) on Thursday January 17, 2002 @07:57PM (#2859238)
    I have problems with Eric Raymond's scenarios. Forget about whether it's a good idea to make it easy for anyone to build a custom kernel; my question is, why should you need to recompile the kernel just to install a device driver? That's just stupid. Installable drivers, that's the way to go.
  • by pi_rules ( 123171 ) on Thursday January 17, 2002 @08:18PM (#2859343)
    Granted, I've never used the CML2 tools, but I followed along with the discussion on the linux-kernel list for quite some time. It seems as though every post here is way off the mark.

    First, ask yourself this: is 'make xconfig' a bad user interface? Nope, it ain't. What sucks about kernel configuration? The dependency resolution crap. Linus has a nifty little program in place that does a pretty good job of figuring dependencies out, too -- but it's ugly, and admittedly a kludgey solution. CML2 is more "elegant and flexible", which is a damn good thing -- but last I knew, the bugger took twice as long to do its job as the old system. Kernel developers do probably 99% or more of kernel builds, so why on God's good green earth would they want a system that's going to slow them down right now? They don't, and I can't blame them in the least.

    CML2 is nice, and it seems like a really good little system; nobody on the list is really opposed to it (that I saw), they just don't want something that's going to suck minutes out of their programming day. "Aunt Tillie" can't answer kernel configuration questions anyway, period. Heck, most Windows users don't know if they have 95/98/NT/2000/Me/XP some of the time, let alone whether their processor is Pentium III, Pentium 4, or K7 based... unless the sticker is still on it. Shoot, they don't know if their mouse is PS/2 or serial, or what USB is. Do they know if their USB host controller is UHCI or OHCI? Hell no.

    CML2 is about making kernel configuration easier in terms of expandability -- not usability. The current interface is very usable, just not very flexible. Because of its inflexibility and complexity, it sometimes leads to unbootable systems when the dependency stuff gets borked up in strange configuration situations. CML2 takes care of -that- and nothing else. It doesn't keep you from having to know your hardware inside and out. End of story.
