Linux Software

New Linux Kernel Configuration System

An anonymous reader writes "When Eric S. Raymond tried to replace the Linux kernel's configuration system with "something better", he got booed off the stage. Now Roman Zippel is bravely having his own go at it. Here's an interview with Roman and a look at his new configuration system, aimed at inclusion in the 2.5 development kernel. Also, find some screenshots of his new graphical configuration frontend."
  • Ironic... (Score:4, Funny)

    by cat5 ( 166434 ) <cat5@c[ ]ive.org ['atf' in gap]> on Sunday September 08, 2002 @10:42AM (#4215886)
    that the kerneltrap topic id is 404...
    • Re:Ironic... (Score:5, Informative)

      by _ganja_ ( 179968 ) on Sunday September 08, 2002 @11:12AM (#4216030) Homepage


      How is that ironic? I blame Alanis for this total misuse of the word... That's just a coincidence.

      While I'm at it, will the people that insist on using the word "literally" to mean metaphorically give it a rest: "That was so funny I literally shit myself" or "That last tackle literally ripped his head off".

      • While I'm at it, will the people that insist on using the word "literally" to mean metaphorically give it a rest: "That was so funny I literally shit myself"

        My friend, if someone says to you 'Oh man, that was so funny I literally shit myself...' I suggest that you work on putting down paper on your car seats before letting them sit in your car, rather than complaining about their particular euphemism for 'metaphorically'.
      • This is how it's ironic...

        The American Heritage Dictionary:

        i·ro·ny
        n. pl. i·ro·nies
        The use of words to express something different from and often opposite to their literal meaning.
        An expression or utterance marked by a deliberate contrast between apparent and intended meaning.
        A literary style employing such contrasts for humorous or rhetorical effect. See Synonyms at wit1.
        Incongruity between what might be expected and what actually occurs: "Hyde noted the irony of Ireland's copying the nation she most hated" (Richard Kain).
        An occurrence, result, or circumstance notable for such incongruity. See Usage Note at ironic.


        Well, one of the accepted definitions of an ironic situation is one whose circumstances seem to imply the opposite of what actually happens. Hence, `rain on a wedding day' isn't ironic - it's just bad luck. But visiting a site talking about the latest useless kernel addon and how it will rule the world, with a link proclaiming it to be a `404' of some kind (seeming to imply it won't rule the world), could indeed be ironic.

        I blame Alanis for this total misuse of the word...


        Me too, but I blame radio DJs who can't think of any better material for making this `it's not actually ironic' discussion a popular one, especially when someone uses the word correctly.

    • that the kerneltrap topic id is 404...

      The screenshot [resentment.org] itself is more ironic, I think..
  • poor guy (Score:3, Insightful)

    by GoatPigSheep ( 525460 ) on Sunday September 08, 2002 @10:44AM (#4215893) Homepage Journal
    When Eric S. Raymond tried to replace the Linux kernel's configuration system with "something better", he got booed off the stage.

    Yet another thing to add to my list of "and people wonder why Linux is not being readily accepted by everyone" items. I mean, come on, the guy just wanted to help make things better! Getting booed off the stage hurts!
    • by Bruce Perens ( 3872 ) <bruce@perens.com> on Sunday September 08, 2002 @11:07AM (#4216006) Homepage Journal
      The kernel developers are a pretty open-minded bunch. Eric's design was cool; he explained it to me a few years back, and it has seen use in projects other than the kernel. But I could see that it would be difficult for the kernel developers to accept:

      • It required Python to build the kernel.
      • It was complicated. It included an entire theorem prover. This was sort of cool in that it would not allow you to generate a non-working configuration, but really more than was required for the job.
      • Its language was arcane. The main idiom was the suppress-unless statement, which is sort of the logical negation of an if-then statement (a minimal sketch of that relationship follows after this comment).
      • And some folks questioned his motivation for getting this grandiose project into the kernel - was it just to help out, or was it primarily to establish additional hacker reputation for Eric? I'd be willing to give him the benefit of the doubt on this - he did the work.
      I think he had a chance of getting it in, but he would have had to refactor the entire thing, write it over in C, make the language cleaner, and I guess that didn't come about. But to his credit, he didn't just talk about it. He generated a working software product with functionality that did not previously exist in Open Source as far as I could tell. His project is worth studying, and I'd encourage works derived from his ideas. I'm sure there's a paper about it online.

      Bruce
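
      To make the "logical negation of if-then" point concrete, here is a minimal sketch in plain C. This is deliberately not CML2 syntax (the exact grammar isn't shown in this thread); it only illustrates the boolean relationship: an if-then rule offers an option when its guard holds, while a suppress-unless rule hides it whenever the guard fails, and both select the same set of visible options.

      /*
       * Conceptual sketch only -- not CML2 syntax.  Visibility of a config
       * symbol under the two phrasings of the same dependency rule.
       */
      #include <stdbool.h>
      #include <stdio.h>

      /* "if GUARD then SYMBOL may be offered" */
      static bool visible_if_then(bool guard)
      {
              return guard;
      }

      /* "suppress SYMBOL unless GUARD": hide it whenever the guard fails */
      static bool visible_suppress_unless(bool guard)
      {
              bool suppressed = !guard;
              return !suppressed;
      }

      int main(void)
      {
              for (int guard = 0; guard <= 1; guard++)
                      printf("guard=%d  if-then=%d  suppress-unless=%d\n",
                             guard, visible_if_then(guard),
                             visible_suppress_unless(guard));
              return 0;
      }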

      • [Eric's design] was complicated. It included an entire theorem prover. This was sort of cool in that it would not allow you to generate a non-working configuration, but really more than was required for the job.

        I grasp the significance of the other three points of contention you mention, but the one quoted above doesn't jump out and grab me as an issue in and of itself. On the one hand, it may be that the method was overly complex (evidenced in part by the Python requirement and an unfamiliar idiom). Disallowing an unworkable configuration doesn't seem unreasonable, though. Is there a down-side to building that safety into the configurator apart from any flaws [heightened complexity] in Eric's particular implementation?

      • by Anonymous Coward
        Python wasn't the problem; a few people didn't like it, but I don't even remember anyone trying to veto it for its complexity or anything either. There were two issues. It didn't do what Linus wanted it to do, namely decentralize a few things; Eric simply ignored the requests for that. And then he tried to slide it in somehow without peer review, by trying to get developers closer to Linus to endorse it and include it rather than doing it in the open.

        Eric was playing games, and technically superior solution or not, that was the end of it. Done deal.

      • by Ungrounded Lightning ( 62228 ) on Sunday September 08, 2002 @02:38PM (#4216805) Journal

        And some folks questioned his motivation for getting this grandiose project into the kernel - was it just to help out, or was it primarily to establish additional hacker reputation for Eric? I'd be willing to give him the benefit of the doubt on this - he did the work.

        What's wrong with being motivated by hacker reputation points? Isn't that what was supposed to replace money in the open source motivational system?

        ... was it just to help out, ...

        So an open source developer is evil unless he's motivated solely by altruism?

        (That humming sound you hear is the beat between the spin rates of Ayn Rand and Friedrich Hayek.)

        C'mon, Bruce. You know better than that.

        Regardless of how much we want to help out humanity and all that, SOME of us aren't the leisure class - with old money, idle time, and an indoctrination in the obligations of nobility to give us internal satisfaction when we do something "just to help out" the benighted masses of the common man. Some of us ARE those commoners, with a family, a mortgage, and (if we haven't been laid off in the latest recession) a paycheck that is all that stands between using a shopping cart for groceries and using it for a mobile home.

        If we're to contribute time and effort to the open-source codebase we need a way to keep that paycheck coming. Like "reputation points" to put on a resume, to find work the next time the current project is over or the current company goes belly-up.

        Maybe Eric doesn't need any more points. But let's not have a big name flaming him for maybe wanting some - and thus convince thousands of onlookers that working open source is a good way to get a BAD rep, so they'd be better off getting that MCSE instead.

        • Just remember that ESR was the one who wrote about the reputation and the like to begin with, in addition to the rules of social interaction in the OSS world. He didn't write them, he wrote about them. Whether he's correct on all these points is another matter. It may be that hackers write for reputation but it seems that the appearance of writing for reputation is bad.
        • Well, this time I'm just reporting the news, and don't endorse the feeling I'm reporting. So, I'm not going to take this one any farther.

          Bruce

    • by FreeUser ( 11483 ) on Sunday September 08, 2002 @11:09AM (#4216019)
      Yet another thing to add to my list of "and people wonder why Linux is not being readily accepted by everyone" items. I mean, come on, the guy just wanted to help make things better! Getting booed off the stage hurts!

      First, GNU/Linux will never be accepted "by everyone." Nor will FreeBSD, nor will BeOS, nor will Apple's OS X.

      Nor will Microsoft Windows, unless Palladium and DRM are legislated into law by the likes of "Disney" Hollings, and even then Apple is likely to be kept around as a token "competitor," paying hefty patent fees to Microsoft for the privilege of being allowed to manufacture "legal" hardware in the US. Unless, of course, you get off your butt and do something [randomfoo.net] about it, but I digress.

      The problem is a simple and obvious one, and the solution as elusive today as it was the first time humans came to live together (and likely predates our ability to speak): Politics is ugly and banal, and people are fallible. This includes the Linux kernel developers and Linus Torvalds himself.

      Example: The ggi project wanted to provide a kernel abstraction layer for video hardware in the same manner such abstractions are presented for everything else, from your ethernet adapter to your system's RAM and hard drive. Linus thought the idea sucked, then ended up doing a "poor man's" version of frame buffer support instead. How much better things would have been if the original vision of the GGI folks had been realized and supported we'll never know.

      Example: PCMCIA. It is still a mess. The more capable userspace version got sidelined in favor of a broken and less capable rewrite ... I can only ascribe that to politics and personal pull, which every group, no matter how altruistic and well meaning, falls prey to now and then.

      There are other examples, and perhaps Eric S. Raymond's effort is one (though I hesitate to make that assumption), but the purpose of this post is not to catalogue the mistakes Linus and others have made, or to air my own disagreements with them (but what the hell: when will we get XFS into the main kernel tree damn it! :-)), but rather to point out their humanity and fallibility, a trait they share with everyone reading this comment, the guy posting it, and probably with every sapient being, everywhere.

      Mistakes happen, everywhere, by everyone. The measure of a group or project's success isn't their perfection (as is so often implied in political discussions), it is by how much their mistaken decisions are outweighed by their correct decisions.

      And using that metric, the Kernel developers, including Linus Torvalds, have done very well indeed.
      • This will happen more and more. I really do expect that at some point someone will just say fuck it and branch their own version of Linux. I see this as possibly a good thing. If this branched version gets all the cool patches that Linus and co are turning down, and they work, it may prove to be a catalyst for change. Either that or the branched version will become better than the original. It happened with XEmacs (which IMHO is much better than Emacs), and I see no logical reason it couldn't happen here as well.
      • GGI tried to do too much and it abstracted too far.

        Userspace PCMCIA drivers? That's a new one. I can only imagine that you were referring to the external set of drivers that used to be the standard and were characterised as being so hard to install that Linus himself had trouble with them. I completely understand his reasons for wanting that mess replaced.

        ESR's configurator was massive overkill and it made life harder for developers. On top of that, what killed it in the end was not Linus but ESR's refusal to update the patches to handle changes Linus made to the core code.

        Not everything gets to be black and white.
      • are the marketing and PR departments to cover up or put spin on anything that could be even remotely considered a mistake.

        To parent poster: Do you honestly believe that worse things don't happen behind corporate walls!? Have you been living in a bubble!? The great thing about open source is, whatever happens, you will always have enough information to form your own opinion. As a corporate drone, I can safely assure you that you will NEVER get that level of detail from a corporation (even though the Internet has helped expose a lot). Personally, I think your shock is due to a lack of exposure to a REAL community (people argue all the time . . . that's how things get worked out), rather than anything to do with the kernel developers. Goes to show how corporate our society has become . . .
      • Example: The ggi project wanted to provide a kernel abstraction layer for video hardware in the same manner such abstractions are presented for everything else, from your ethernet adapter to your system's RAM and hard drive. Linus thought the idea sucked, then ended up doing a "poor man's" version of frame buffer support instead. How much better things would have been if the original vision of the GGI folks had been realized and supported we'll never know.

        I don't agree with your interpretation for why the GGI failed.

        The way I saw it, the GGI developers had very grand ideas but insufficient time/resources. In the end, the GGI lost out because FB and the DRI offered something tangible with a reduced complexity. Maybe you could argue that GGI offered more but that's just confirmation of the classic 80/20 rule.

    • Re:poor guy (Score:5, Informative)

      by ceswiedler ( 165311 ) <chris@swiedler.org> on Sunday September 08, 2002 @11:34AM (#4216146)
      Just to add to Bruce's points above: from what I heard, the biggest problem wasn't technical, but rather ESR's refusal to negotiate. The userbase of the kernel config system is the kernel developers; they had several tried-and-true ways of configuring kernels. Many of them were in fact quite happy with the existing system, and didn't see a need to upgrade at all; there was a general consensus that there were some shortcomings in the existing system, but those were very specific.

      ESR solved these problems very well with CML2. But he also added a dozen features and changed a hundred other minor things, simply because he felt it was better that way. ESR was solving problems which only he perceived. For example, he was very interested in making it easy for "Aunt Tillie" to configure a kernel. Unfortunately, Aunt Tillie doesn't have a say in whether something goes into the kernel. Linus was apparently OK with CML2, but most of the other kernel developers were very resistant. No one ever explicitly refused CML2, but it never went in either, and ESR eventually gave up.

      The impression I got was that ESR should have minimized the changes to the UI in his first version. If he had built something exactly like the old config, but with a new language and backend, most of the objections would have gone away. He then could have submitted the other changes; they may or may not have been accepted, but at least the underlying system would have been improved.
  • by oliverthered ( 187439 ) <oliverthered&hotmail,com> on Sunday September 08, 2002 @10:45AM (#4215899) Journal
    Well, the site is /.ed, so what I want to know is:

    Does it scan your hardware and create a default kernel configuration with all the drivers for your hardware pre-selected?

    It could even ask if you're running a desktop or server machine and turn on/off low latency, pre-emption and supermount for the desktop.

    I usually have to enable everything to get X piece of hardware working correctly and then disable stuff to find out what the correct drivers/modules were.
    • by Anonymous Coward on Sunday September 08, 2002 @11:46AM (#4216207)
      Does it scan your hardware and create a default kernel configuration with all the drivers for your hardware pre-selected?

      For the curious, you can use dmassage for OpenBSD to get a kernel config file with only your hardware enabled (hardware that was enabled at boot time).

  • by NetRanger ( 5584 ) on Sunday September 08, 2002 @10:46AM (#4215906) Homepage
    There is too much resistance to change in the Linux community. The problem is a simple one: in the minds of Elitists, easier is not better, it's "lamer", "suckier", or "for wussies". Thus, when someone comes up with the brilliant idea that the average person should be able to actually use the system, they're booed off. Yet these boo-ers are the same people who bash the mass market for using Microsoft or Apple's OS X. OS X is astoundingly good... a simple, intuitive, appealing interface on top of loads of raw power. That's what Linux needs.

    Right now, when you install pretty much anybody's distro, you start up with an interface that has tons and tons of menus, icons, widgets, and whatnot, already up and running. It's an overload, and instead of trying to learn it, newbies are balking at it.

    So why not have an easy-to-use kernel configuration system? Why not have an independent object model, where any distribution or window manager can use each other's dialog pages?

    The only answer we seem to get is: "because it's for wussies!"

    • I believe ESR got booed off for two reasons. One, the new config required Python. Two, he wanted to change everything at once in one huge patch, rather than bits and pieces which are easier to understand, back out and correct, and so on.
      • Can someone explain the objection to using Python for the configuration system? Surely the kernel developers don't believe that C is the be-all-and-end-all of languages. If it's quicker and more maintainable to write in a high-level language then why not just do the job - and use the time you saved compared to writing in C for other things? It's not as if Python is something wacky or proprietary or a resource hog.
        • I think the main resistance to using other languages in the kernel-building process is that you have to have them installed. Neither Perl nor Python produces a stand-alone runtime.

          One of the goals is to make the same code compile on any platform with only a makefile difference. If you require fairly heavy-weight runtimes you limit the platforms you can run things on.
          • Yes Python can do that. And that was one of the reasons Eric chose it. But it would of course be huge, slow, and basically contain a Python interpreter. Perl can also do that, but I think it actually needs to link with libperl, so it would be a little bit more silly...
      • First reason false. Second reason true. Third reason: it was complex, and nobody understood it...
      • I believe ESR got booed off for two reasons. One, the new config required Python.

        Yeah ... we all know the kernel config process doesn't at any time rely on any language other than C. For instance, we know that doing a "make xconfig" won't invoke Tcl/Tk, right? Oh, wait...

        Including a configuration option isn't a problem provided that it isn't the only configuration option. Right now there are at least three ways of configuring the kernel ("config", "menuconfig", and "xconfig"). As long as the simplest method remains as a fallback, it should be okay to include another config method, even if it depends on Python.

    • by Bruce Perens ( 3872 ) <bruce@perens.com> on Sunday September 08, 2002 @11:11AM (#4216026) Homepage Journal
      You might change your mind if you examine the project in question. See this comment [slashdot.org].

      Bruce

    • Yet these boo-ers are the same people who bash the mass market for using Microsoft or Apple's OS X.

      Actually, I'm pretty sure the boo-ers are different people, and there's the problem that Linux has - lots of different people trying to take it in different directions.

      Right now, kernel hackers tend to care more about the techie market, and the server market is where the money is, so that's the direction that things are really moving in.

    • No. I think everyone agrees that easier is better, but not everything is as easy as it wants to be. If a program can automatically configure a kernel for me that supports all the hardware and features I need, that's good, but I don't see much difference between a graphical menu and a text-based version of the same menu.
    • Why not have an independent object model, where any distribution or window manager can use each other's dialog pages?

      Why not stop war, hunger, poverty? It's a great idea, who wouldn't want it? Maybe it's just a little harder than writing a few words?
    • by Wdomburg ( 141264 ) on Sunday September 08, 2002 @11:54AM (#4216230)
      >Thus, when someone comes up with the brilliant
      >idea that the average person should be able to
      >actually use the system, they're booed off. Yet
      >these boo-ers are the same people who bash the
      >mass market for using Microsoft or Apple's OS X.

      Let's see.. When I started running Linux way back when I had to manually partition my hard drive, manually configure X (including plugging in video timings for my monitor), manually configure sound (including plugging in I/O addresses and IRQs), had to edit a config file in vi to add icons to my windowmanager (Afterstep Classic), had no real GUI filemanager, took 4 hours to figure out how to get my printer working properly, etc, etc.

      Now, I can stick in a CD, have it autopartition, detect all my hardware, configure X, and has a full desktop environment with a GUI filemanager, where I can simply drag an icon to the panel. I can hot plug USB and PCMCIA devices to my heart's content. I can add new hardware, and it will detect and configure it on boot. I can sit back and let my machine take care of keeping itself up to date with all the latest security patches.

      I must have missed an AWFUL lot of booing somewhere.

      >Right now, when you install pretty much anybody's
      >distro,

      Except for Lindows, Lycoris, Libranet, OEOne, Xandros...

      Come the release of Red Hat 8.0, you can probably add that to the list, given the focus they've put into creating a rational, consistent desktop in the betas.

      >you start up with an interface that has tons and tons
      >of menus, icons, widgets, and whatnot, already up and
      >running. It's an overload, and instead of trying to
      >learn it, newbies are balking at it.

      Taken a look at Gnome lately? From an end user perspective, all of the changes in Gnome 2.0 are aimed at usability, accessibility, simplification, and consistency. To paraphrase Havoc, they're removing the "crack rock" features, and "providing one good way of doing things instead of six broken ones".

      >So why not have an easy-to-use kernel
      >configuration system?

      No one has objected to the concept, only to the implementation. At different points there were issues with the rulesets in CML2 differing from CML1 in ways that the developers didn't agree with. The frontends used a different UI. It globally loaded rules for all architectures.

      It has long been Linus' policy not to accept patches which introduce more than one fundamental change. The proper course would have been to make CML2 a drop-in replacement for CML1, with no changes to the rulesets, and with front ends that completely emulated the old ones. Then, and only then, would discussions on rationalizing the rulesets and providing enhanced interfaces have been appropriate.

      Did it solve the problem it set out to solve - i.e. providing a more flexible syntax and a single parser? Sure, but it bundled too many other changes, and when you come down to it, it was replacing a system known to work with an unknown one.

      On a side note, the whole topic is moot. Does Windows provide you with an easy to use kernel configuration tool? Does MacOS? No, because the end user should NEVER have to configure a kernel.

      >Why not have an independent object model, where
      >any distribution or window manager can use each
      >other's dialog pages?

      Umm... what on earth is that supposed to mean?

      Matt
      • Come the release of Red Hat 8.0, you can probably add that to the list, given the focus they've put into creating a rational, consistent desktop in the betas.

        And getting screamed at by the KDE team for it...
        • >And getting screamed at by the KDE team for it...

          What do you expect? If a Red Hat employee farted during a KDE presentation at LinuxTag, I wouldn't be surprised if there was speculation on the mailing list the next day that it's part of the grand Red Hat conspiracy to sabotage KDE.

          Matt
        • That's because Red Hat removed identification information from the "About" boxes of KDE applications. You can't blame the KDE guys for getting pissed that they make this nice software, and the project doesn't get recognized for it. Sure, it's allowable by the GPL, but it isn't polite by any means. And there was no reason to do it. What did they think, that mentioning the KDE project could confuse users? ("But I thought I was running Red Hat!") Come on. Take a look at common Windows apps like Yahoo Messenger and AIM. They've got branding information all over them. Does it confuse users? I doubt it. People aren't that stupid.
          • >That's because Redhat removed identification
            >information from the "About" boxes of KDE
            >applications.

            No, they removed the "About KDE" boxes from KDE applications.

            >You can't blame the KDE guys for getting pissed
            >that they make this nice software, and the
            >project doesn't get recognized for it.

            The "About" box that IS left in still lists what version of KDE the program is built on, so yes, they do get recognized for it. They just don't get credited TWICE.

            >And there was no reason to do it.

            I personally think it's unnecessarily redundant, and potentially confusing for new users. Should they put in an About box for every single library they link against? I for one would rather not go to the Help menu and find About, About KDE, About QT, About libxml2, etc, etc.

            >Does it confuse users? I doubt it. People aren't
            >that stupid.

            You haven't done desktop support much, have you?
    • by slamb ( 119285 ) on Sunday September 08, 2002 @11:56AM (#4216240) Homepage
      There is too much resistance to change in the Linux community. The problem is a simple one: in the minds of Elitists, easier is not better, it's "lamer", "suckier", or "for wussies".

      First, as other posters have said, there were valid reasons ESR's system was rejected. They weren't because it was for wussies.

      Second, configuring a kernel will never be easy. You have to make a lot of decisions that require technical knowledge. Whether you do that in a text-based interface or a fancy graphical one doesn't matter very much. That doesn't mean a fancy graphical interface shouldn't be made, but it shouldn't be made for the reason of making it easier for mom to use Linux.

      The correct solution to make hardware configuration usable for the masses is not to make building a kernel easier but to make building a kernel unnecessary. The system has become more modular over time.[1] Hardware has become friendlier to autodetection. Distributions like RedHat come with a single kernel that will work for just about anyone. When you start up with new hardware, kudzu will recognize it, ask you about it, and load the appropriate driver.

      [1] and is still becoming more modular. 2.5 was supposed to completely remove the idea of compiled-in versions of stuff that is modular. I believe this got canned due to time constraints; look for it in 2.7 maybe.

    • The only answer we seem to get is: "because it's for wussies!"

      Do you have any evidence for this statement? All these discussions (including the discussion over whether or not to include Raymond's configuration system in the kernel) have taken place on public, searchable forums, so if your claim is true then you should be able to produce tons of evidence. (Here's a start. [google.com])

      My memory of the discussion on lkml was that people had a lot of different problems with CML2, which might or might not have been showstoppers. But I don't remember anyone saying "no, this would make kernel configuration too easy for novices, and we don't want that."

      If you accept the premise that "Open Source" development is a magic process for the effortless production of arbitrary amounts of excellent code, or if you assume that it is obvious how to create good user interfaces, then I suppose the only reasonable explanation for a lack of user-friendliness is a conspiracy on the part of developers. Absent those premises, you're free to adopt a more reasonable explanation: writing easy-to-use software is much, much harder than it appears, developers have limited resources, and we just haven't gotten a lot of things right yet.

      --Bruce F.

    • by isdnip ( 49656 ) on Sunday September 08, 2002 @03:22PM (#4217025)
      Good comment. Easy is perceived as bad. Yes, some easy things (Microsoft Bob) are really bad, and some difficult things are good, but Linux, like Unix in general (sorry to generalize but it's a cultural inheritance), suffers from an "old boys' club" mentality in which earned knowledge of arcana is viewed as an entry point. Sort of like morse code has been to ham radio -- there aren't rational reasons to require it any more (though some of us actually like to use it, sportingly), but removing the requirement brings out all sorts of anger from old timers who had to learn it.

      Where it currently bugs me the most: Gentoo looks like a swell distro. Installing from source ends dependency hell and optimizes performance; I can buy it. But the setup is dreadful, basically more Linux From Scratch than anything else. The topic of an installer came up on Gentoo Forums. The "consensus" of the Gentoo user base is that "Gentoo is a hard distribution, and so the installation should be hard too." What rot! Once installed, no distribution should be gratuitously hard to live with. And while Gentoo lacks some of the GUI tools of say Mandrake or Red Hat, it's basically a clean system that shouldn't be that hard to manage. But the install procedure basically consists of printing out a lengthy set of instructions and doing a lot of hand edits of files, step by step, and hoping your system is enough like the developers' to work right.

      Personally I don't find the current kernel config (make xconfig) to be that hard, just a little nerve-racking where some new options show up that I don't understand. Which is what Bruce set out to fix. We can quibble about implementation details but his heart's in the right place. Linux won't prosper so long as it lives with the old boys' mentality. If I want to join the Freemasons, I will.
  • by frankske ( 570605 ) <slashdot@@@frankbruno...be> on Sunday September 08, 2002 @10:48AM (#4215919) Homepage
    It's a shame that Linus doesn't want to change, because Roman's system is really great: faster, easier, and at the moment it still leaves the old system as default...
  • Why Change? (Score:4, Insightful)

    by PoiBoy ( 525770 ) <brian@poihol d i ngs.com> on Sunday September 08, 2002 @10:57AM (#4215959) Homepage
    Seriously, what's wrong with typing "make menuconfig" now? To me at least, an ncurses-based menu system is just as easy to use as a GUI (yuk).

    Moreover, it's not like complete newbies are going to be doing kernel compiles. For anyone with enough experience to recompile the kernel, an ncurses-based system is adequate IMHO.

    • Why not? (Score:3, Insightful)

      by brunes69 ( 86786 )
      It's not like they are saying "Let's ditch menuconfig and replace it with this!". For you and whoever else, there is still make menuconfig. But I for one would welcome a better GUI than make xconfig, which I find pretty hokey. Since when are more options bad? It's not like they are forcing you into switching.
  • by Kiwi ( 5214 ) on Sunday September 08, 2002 @11:08AM (#4216017) Homepage Journal
    Linux kernel developers oftentimes act like schoolyard bullies.

    This is a pretty strong statement, and needs to be qualified. The Linux kernel developers are a very talented group of programmers who have written some impressive code which is helping stop the Microsoft machine from controlling all of computing.

    These programmers are doing professional-quality work, oftentimes on a completely volunteer basis.

    We have lost a lot of good code which the kernel could have used because of some of the bullying that the kernel developers have engaged in. The loss of Eric's excellent CML2 is the most highlighted case, but we also lost a lot of improvements to the ugly IDE subsystem [lwn.net]. The IDE developer finally had enough [alaska.edu] of the schoolyard bullying games; and so Linux lost another developer.

    I wonder how long Linus Torvalds will allow his "inside circle" to continue to mock and belittle attempts to improve the kernel code. If these actions continue, the kernel code will languish and become more unstable. Is Linus even considering adding next generation pthreads [ibm.com] to the kernel? I really want to see the Linux kernel become a real competitor to Solaris and AIX in the enterprise, so I hope that Linus fires some of the more nasty bullies from kernel development (I don't care how good Viro's code is; he comes off as being one of the bigger flamers) so that new ideas are truly welcomed into the kernel again.

    - Sam

    • Linux kernel developers often times act like schoolyard bullies.

      ... The loss of Eric's excellent CML2 is the most highlighted case, but we also lost a lot of improvements to the ugly IDE subsystem [lwn.net]. The IDE developer finally had enough [alaska.edu] of the schoolyard bullying games; and so Linux lost another developer.

      The people you call "schoolyard bullies" are the people who have done the most to make the kernel the great piece of code that it is. Linus is an amazing manager of code and people. What Linus did in both cases (CML2/ESR and IDE/Dalecki) was actually good management. Marcin said he had enough because people kept complaining that the IDE code was unstable, but Linus supported him for as long as he produced good and promising code. When Marcin gave up, Linus went back to what was stable. Nobody else really knew how Marcin Dalecki's new code worked and it still had bugs, so Linus made the smartest move: go back to crufty but stable and known code that works.

      Linus (and Linux) cannot afford core maintainers who are not tough, who lack "thick hides" and "stiff backbones", even if that toughness sometimes makes them look like "bullies", because if a maintainer gives up on a project, that part of the kernel may "bit rot" and truly become unusable. Linus plays the game of "survival of the fittest" in both code and developers. If you can't take the heat, get out of the kitchen. When Marcin closed up shop, Linus did not ask "please, someone take this promising code of Marcin's that I put in and make it work right" or say "Marcin, please don't go!". Instead he said "fine" and went back to what worked. If someone wants to take over Marcin's work and make it work right, Linus might put it back in, but Linus is not looking for ideals or promising code; he is looking for results.

      The case of ESR is different but similar. ESR made a lot of changes (that not everybody was happy with) with a lot of promise, but he had had enough and gave up as well. Now there is someone else at the mantle of configuration (writing far less controversial code) and hopefully STRONG enough to take the heat. Linus knows that no one person or piece of code is vital to the whole. If something really needs to be fixed, he knows that someone will come along and eventually do it, but he has no patience for "should or could or want" in kernel development. The "schoolyard bullies" work hard to weed out the best code from contributors and send it on to Linus, and despite people like you "giving them a hard time", they have proven that they are tough enough to take it and keep on producing, while ESR and Marcin Dalecki proved they weren't. Thank goodness Linus has no tolerance for weak code or people.

    • by Mark Bainter ( 2222 ) on Sunday September 08, 2002 @02:26PM (#4216761)
      Are you people actually READING anything about these topics or just spouting off based on a few summaries you read somewhere? There are several messages like this here and I have to wonder how you have developed the leg strength to be able to jump to these kinds of conclusions.

      True, some of the developers can be caustic, but I don't think it's unreasonable for people to be adults about things. If you can't handle criticism you should go develop for a nursery school or something. The kernel developers are not babysitters. They don't have time to hold your hand, and encourage you, and give you applause and little paper prizes every time you do something right.

      There were many good and valid objections to ESR's system. Not all of them were, but many were. The largest objection (iirc) was to his attitude. Particularly when it looked like things weren't going to go his way.

      The IDE system issues were even more complex. Marcin was working hard to clean it up, but his changes often reached into other areas, and had consequences outside of the IDE area. Some of the changes broke functionality some people depended on. Obviously, these people were upset. More than that, his changes made running 2.5 on an IDE system risky. That, I think, was what got him the most negative feedback. Especially when he wrote it off as the cost of improvements.

      A broken ide subsystem made it difficult for many developers to work on and test the 2.5 kernel. Which, in turn, delayed improvements to other subsystems and completion of other projects.

      Was this Marcin's fault? Probably not. In general, they probably would've been better off setting up a "new" IDE tree beforehand, like they're doing now. Regardless, eventually he had enough. It's very sad that all of his hard work has now had to be removed from the kernel. It is a shame that it's going to go to waste. However, I don't think you can say the problem was just a bunch of schoolyard bullies knocking a newcomer. That was not the case at all.

      In other areas we have things like the new kbuild that was written by Keith Owens. This is a really nice system, but it still hasn't gone in. (Amazingly, it was left out of this post, though I'd count it as far more likely to get in than CML2.) The only real contention here is that Keith wants a flag-day change, and Linus wants it in pieces. It'll (hopefully) eventually all get in, as it is being chopped up and added in pieces, though how it'll ever make it completely this way without something akin to an FDC is beyond me. (Doesn't mean it's not possible, I just don't see how.)

      All of these things are real issues that are brought up. Nobody is rejecting patches or ideas because they come from an unknown. Nobody is rejecting ideas because they don't like the way the person looks/smells/types their sig or any other superficial reasons.

      The only objections I've seen have been related to the code/idea at hand. This is perfectly valid, and it's how we avoid crap getting into the kernel because of some namby pamby desire to avoid offending people.

      This is the big bad world. It's full of adults. Some ideas are bad. Some code is bad. Everyone makes mistakes, and not everyone is nice. There's no naptime, and there's no milk and cookies.

      Suck it up and cope.

    • The Linux kernel developers are a very talented group of programmers that have written some impressive code which is helping stop the Microsoft machine control all of computing.

      It's funny you should say that, because that's completely unrelated to what Linux was designed for in the first place...

      Linux was designed to replace the non-free Unix on the larger, expensive machines at the Universities. It was basically Free Unix for PCs. It just so happens that people have extended it in the meantime to be used to replace things like the Microsoft desktop, but Linux is definitely not competing for the Microsoft space, or the desktop... Microsoft, on the other hand, is competing for OUR space.

    • Is Linus even considering adding next generation pthreads to the kernel?

      Regardless of whether your points are valid or not, this specific example/question is wrong, since support for NGPT has been in the kernel since 2.5.8 (maintained by Linus), as you can read on the NGPT devel mailing list [ibm.com] (see the bits about 2.5.x).

      About 2.4.x: the official maintainer is not Linus, but Marcelo Tosatti, and he has the precise mandate to keep 2.4.x stable, so I doubt he will include the support for NGPT (but I could be wrong, of course).

    • "[Al Viro] comes off as being one of the bigger
      flamers"

      I do not think that word [geocities.com] means what you think it means.

      OTOH, if I'm wrong, someone would need to update the Linux Gay Conspiracy [fazigu.org] to reflect this finding.

  • by wfmcwalter ( 124904 ) on Sunday September 08, 2002 @11:14AM (#4216044) Homepage
    (well, the site is still /.ed, so I'm winging it here...)

    Hopefully the tool(s) in question are flexible enough that they can be adapted/generalised for other systems (i.e. other than the linux kernel).

    Configuring complex systems (openoffice, mozilla, KDE, Gnome, etc.) that have to run on numerous platforms and can contain optional elements that can depend on other optional elements (or be mutually exclusive with them) is always a challenge. It would be a big saving if one tool(set) could be applied to lots of these large projects. This is a lot like autoconf replacing weird project-specific configurators.

    Equally (and perhaps complicating the above goal), the tool in question really needs to be powerful enough in a particular instantiation that it's "complete" - i.e. that one doesn't need to manually fix stuff up (in the config output file) to get some of the more advanced options. Windriver's VxWorks, for example, has a cool little kernel-image configuration program - but this always seems to lack GUI access to the one component I need - so one gets used to the raw config file, and the power of the GUI tool is lost.

  • If you go to the main site, it just says it's "experiencing technical difficulties". The webmaster says that he picked "a wrong day to experiment". Wonder if it's because he's updating his web page or because of the server overload.
  • Go to the site, and ya get:

    Warning: Too many connections in /prod/www/virtual/kerneltrap.com/www/htdocs/includes/database.mysql.inc on line 7
    Too many connections

  • A great start. (Score:3, Insightful)

    by FreeLinux ( 555387 ) on Sunday September 08, 2002 @11:56AM (#4216239)
    After finally being able to get the page, I think that it is a great start and a tremendous improvement over Xconfig.

    That said, I think he still needs to go further. Most users don't have a clue what all the options are or mean. Even with the descriptions and recommendations they will quickly become overwhelmed.

    I feel that users should be presented with a very basic and lean initial configuration screen. One that lists generic features for them to enable and disable. For example a single check box for IDE and SCSI HD support or a single checkbox to enable HAM radio support with generic or "standard" options preselected for those devices. Then there should be an advanced button that brings them to the complete configuration options, such as Roman's example.

    This, combined with some form of modprobe hardware detection, would make kernel configuration a breeze, even for MCSEs. Also, the fact that this configurator reads the existing config, rather than starting with a blank slate every time, is great!!
  • by AxelTorvalds ( 544851 ) on Sunday September 08, 2002 @12:26PM (#4216341)
    ESR got himself booed off the stage by trying to undermine the process. His solution was technically superior in ways, but it didn't do what Linus and others wanted, and he was playing politics and games trying to get it into the kernel. When that was exposed, it was done. Technically superior or not, the games undermine everything; it's a very open process and they like peer review and things done in the open. Bottom line: not too many people aren't replaceable, and any work you do can probably be done by somebody else; they don't need politicians. There were a few times when Linus made it very clear what he wanted changed and ESR simply didn't fix it, it was as if he didn't even hear it; look at the threads in the kernel archive. I don't know what ESR's motivation was but he made it look a little corrupt.

    Further, people are working on the configuration language, but there are bigger problems to be solved; everyone knows it, and still the efforts don't fully address them. Like how do you know the configuration options used on the kernel you are running? There is no reason to change just for the sake of change, and compilation speed isn't a huge issue; my dual AMD compiles kernels so fast I don't care if I cut the speed in half. Plus, when you're hacking you usually work on a module or two and don't rebuild the whole thing.

    The process is good; they don't take crap. The VM system and the IDE system are other prime examples. Al Viro is kind of mean to people, but everyone else makes it pretty clear what needs to be done and why things aren't accepted; even Al has expectations that he makes clear. There are expectations for robustness; it's more important than performance. Hans Reiser has had issues with that: he can't explain the robustness or answer concerns, but he can point to benchmarks; clue: they don't give a shit if it's not robust.

    There have been a handful of people who just don't cut it. Believe me, they can be replaced. It sucks, and it'll be a dark day when Alan Cox or Dave Miller quit, if they ever do, but they also know the rules, they play by them, and they have their own forks if they don't agree. If Linus or someone else doesn't like your code, it doesn't get in; fork and show that they are wrong, or make it better. This isn't bullying or anything like that, and it's not that they are elitists; they have real expectations that aren't met sometimes. Are some people and some parts of the kernel more equal than others? Of course, we're all human.

    I take exception to the suggestion that the kernel team is throwing out great stuff for non-technical reasons. They aren't; they throw it out because it doesn't do what it is supposed to, because people are trying to get it in for non-technical reasons with non-technical means, or because it's not robust. It's not easy to write a VM or IDE system; there are a ton of expectations, it's a hard job, and there are working solutions already that you have to do better than.

    • In particular, a new configuration system has to be able to run on rules that the old configuration system can run on, so that both can be in the kernel until people are satisfied with the new one (ironic that Linus rejected CML2 for largely this reason, but then didn't follow this principle with IDE, where it would have been much easier). In order to replace something that everyone has to use, you have to replace it with something that just works for everybody, including the people writing rules, people who want to keep their old configs, people who want to maintain code for both the new kernels and old kernels, and so forth. The whole "Aunt Tillie" thing really is an issue, because important kernel developers don't want to spend more thought on configuring the kernel they're building than Aunt Tillie would, and they already have routine ways of doing that.

      "Technical superiority" is a subjective thing, and "does it configure the kernel" is not the only factor. At least as important is "is switching to it easier than continuing to use the existing tools".

      Incidentally, the problem with the old build system was not that it was too slow to build, but that, if you changed a few things and rebuilt, either you did a complete rebuild, which was too slow, or you just rebuilt the module you were working on, which didn't necessarily catch all of the changes. The new versions are moving toward giving you the ability to rebuild only those things which actually need to be rebuilt, without making you know what those things are. People frequently report problems where they've forgotten to do the right thing after they've changed something.

      Making the current kernel's configuration available isn't actually difficult at all; it's just that userspace generally has the information anyway (most people copy the config file somewhere parallel to the kernel image). The hard thing is configuring a new kernel version to match the effects of the config file for an old kernel version: sometimes the options change, sometimes there are new options required for old things, etc.
    • There were a few times when Linus made it very clear what he wanted changed and ESR simply didn't fix it, it was as if he didn't even hear it; look at the threads in the kernel archive. I don't know what ESR's motivation was but he made it look a little corrupt.

      I think I may have found the answer in the following excerpt from his World Domination [linuxjournal.com] guest editorial on Linux Journal:

      Of course, articles like this are part of that game. We hackers are a playful bunch; we'll hack anything, including language, if it looks like fun (thus our tropism for puns). Deep down, we like confusing people who are stuffier and less mentally agile than we are, especially when they're bosses. There's a little bit of the mad scientist in all hackers, ready to discombobulate the world and flip authority the finger--especially if we can do it with snazzy special effects.

      I can't help wondering whether, in this case, Linus and Jeff are "the bosses"; indeed, stuff like pretty pictures and theorem provers and various other kitchen sinks associated with CML2 qualify (amply) as those "snazzy special effects" of which he is so fond.

      Now, love him or hate him, Eric is not going anywhere, even after getting booed off a very important stage. And in light of his, um, staying power and in consideration of the CML2 affair, it should be of some comfort to his detractors that at least Eric the Rich Guy hasn't lost his hackitude and keeps producing worthwhile stuff. When Eric first threatened to quit politics [slashdot.org], I looked forward to the return of Eric the Hacker and the retirement of Eric the Politician [slashdot.org]; alas, half an Eric must, ipso facto, half not be [montypython.net], and I'll take a whole Eric over half an Eric any day, thank you very much.

    • Like how do you know the configuration options used on the kernel you are running?

      This one should be so simple to solve that I don't understand why it's an issue at all, namely: include a copy of the .config file in the kernel image and add a handler so that /proc/kconfig maps to it. On my system, the .config file is 36k in size -- small compared with the size of most running kernels these days.

      Obviously that should be another configuration option, but it would certainly solve most of the problem, if not all of it.
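
      A minimal sketch of the idea described above, assuming the 2.4/2.5-era procfs interface of the day. The kconfig_data and kconfig_len symbols are hypothetical; a build step would have to generate them from the .config file, and this is only an illustration of the approach, not an existing kernel interface.

      /*
       * Sketch: expose an embedded copy of .config as /proc/kconfig.
       * kconfig_data/kconfig_len are assumed to be generated at build time
       * from the .config used to configure this kernel.
       */
      #include <linux/init.h>
      #include <linux/proc_fs.h>
      #include <linux/string.h>

      extern const char kconfig_data[];   /* contents of .config, embedded at build time */
      extern const int  kconfig_len;

      static int kconfig_read_proc(char *page, char **start, off_t off,
                                   int count, int *eof, void *data)
      {
              int len = kconfig_len - off;

              if (len <= 0) {             /* reader is past the end of the data */
                      *eof = 1;
                      return 0;
              }
              if (len > count)
                      len = count;        /* hand back at most one chunk per call */
              else
                      *eof = 1;

              memcpy(page, kconfig_data + off, len);
              *start = page;              /* data for this offset starts at 'page' */
              return len;
      }

      static int __init kconfig_proc_init(void)
      {
              create_proc_read_entry("kconfig", 0444, NULL,
                                     kconfig_read_proc, NULL);
              return 0;
      }

      module_init(kconfig_proc_init);     /* for built-in code this would be wired into boot init */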

  • Kernel changes are modularizing everything that can be modularized (it's pretty much that way now), with everything loaded on demand (mostly done that way now too). So why compile your own kernel at all (besides the "because you can" thing)? One thing I would like to see is a standard way for non-GPL'd drivers to be added to the kernel without a recompile, or without having to half-way compile a proprietary core against a kernel interface that still needs compiling. It's kind of that way with some, but only if your binary is developed against the same kernel (the LTMODEM drivers used to be, and may still be, this way). Microsoft does not write all of their drivers. We should not have to either. Seems to me this would sure make a lot more hardware work under Linux! I know the GNU bigots won't like it much, but I just want to be able to have more stuff supported out of the box than I do now.
  • "Shit... don't inflate my stock more than it's worth." - Jeff Garzik

    Now if only more [people] were at least this humble...
  • by mrm677 ( 456727 ) on Sunday September 08, 2002 @01:36PM (#4216590)
    My question is: why are custom kernels needed anyway? Except for embedded applications, such as TiVo, why should the common user have to build a custom kernel to get certain hardware support? Is the Linux device driver model really flawed as many claim?

    Certainly it's nice for development, or for experimental patches such as the low-latency patches. However, it often seems necessary to build a kernel to get certain modules or hardware functionality.

    Any comments on the Linux device driver model?
    • Custom kernels are necessary because Linux is a monolithic kernel. That means that in order to use certain hardware or other features, the drivers have to reside in the kernel itself.

      Now, let's suppose that you just got the latest gee-whiz device and you want to use it on your Linux box. You hook your "flux capacitor" up to the firewire port and nothing happens. Why? Because either a firewire or a flux capacitor driver (or both) is required and the kernel doesn't have it installed. This means that you must rebuild the kernel with the appropriate driver in order for your new flux capacitor to work.

      Now, some may argue that the kernels should be pre-built with all the drivers and everything. Indeed, many distros do something like this for their stock kernels. But that still doesn't account for hardware that is yet to be invented. It also causes the kernel to grow into a giant that gives the term monolithic a whole new meaning. This large size means slow boot times and slower overall performance, in some cases. Surely, you don't want that?

      Indeed, many people want to trim the size of their kernel to an absolute minimum to improve the performance of their system, not to mention the security enhancement of removing unnecessary services. Do you really need HAM radio support? Most people don't, so why would most people want the HAM drivers loaded in their kernel? Do you need NTFS file system support, as I do? Probably not, especially with write access, so why include it? But at the same time, why prevent me from using it, as I need to?

      Even without the above reasons requiring the custom kernel, there is one more reason in favor of it. Part of the whole idea behind Linux is the ability to modify and customize it to your heart's content. That means if you want to modify your kernel you can. And this project will make such modifications easier than in the past. If you don't want to bother with customizing your kernel, then use the latest stock kernel from a major distribution, which will have mostly everything included. But, if it is slow or your flux capacitor isn't supported, you'll just have to wait and hope that the distro includes the support in its next release.
      • Mostly correct; but the kernel is not necessarily monolithic. It CAN be; but every distribution I've used comes with a modular kernel, with most options compiled as modules which are loaded and unloaded as needed. This is also how most people compile their kernels. It is very possible to compile them monolithically; but then you're guaranteeing that you'll need a recompile with any hardware changes.
      • You hook your "flux capacitor" up to the FireWire port and nothing happens. Why? Because either a FireWire driver or a flux capacitor driver (or both) is required, and the kernel doesn't have it. This means that you must rebuild the kernel with the appropriate driver in order for your new flux capacitor to work.

        No, it does not. It means, in this case, that you must rebuild a module. Rebuilding an entire kernel for this purpose is a waste of time and energy.

        But realistically, you'd already have the module available - to follow your own example, user A might not use NTFS support, but having the driver available as a module doesn't hurt his system - the kernel isn't any bigger because of it. It also allows user B to mount his Windows disk easily, with only a slight overhead from loading the driver as a module rather than having it compiled into the kernel. The solution is more modularity.
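
        To make the modularity point concrete: a loadable driver is just a small, self-contained chunk of code with init/exit hooks. Something like the following (an untested, do-nothing sketch - the "fluxcap" name is made up to match the example above) builds as its own object and gets pulled in at runtime instead of being baked into the kernel image:

        #include <linux/module.h>
        #include <linux/kernel.h>
        #include <linux/init.h>

        /* hypothetical "flux capacitor" driver; the only point is that it
           builds as a separate module and is loaded/unloaded at runtime */
        static int __init fluxcap_init(void)
        {
                printk(KERN_INFO "fluxcap: loaded\n");
                return 0;
        }

        static void __exit fluxcap_exit(void)
        {
                printk(KERN_INFO "fluxcap: unloaded\n");
        }

        module_init(fluxcap_init);
        module_exit(fluxcap_exit);
        MODULE_LICENSE("GPL");

        User B loads it when he needs his Windows disk; user A never pays for it.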
    • It's largely not needed anymore. Even from a performance standpoint, it's not really an issue, because Linux links drivers in as just another .o file. On top of that, there is the new Linux driver model in 2.5 (it'll be a while before everything uses it, though), and Linus has stated that he wants to remove compiled-in drivers and make everything dynamic.
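
      For the curious, hooking into that unified driver model looks roughly like this (a from-memory sketch only - the "fluxcap" names and the choice of bus are hypothetical, and the 2.5 interfaces are still moving targets):

      #include <linux/device.h>

      /* probe/remove get called by the driver core as matching devices
         appear and disappear, instead of the driver poking at hardware
         from its own init path */
      static int fluxcap_probe(struct device *dev)  { return 0; }
      static int fluxcap_remove(struct device *dev) { return 0; }

      static struct device_driver fluxcap_driver = {
              .name   = "fluxcap",
              .bus    = &platform_bus_type,  /* whichever bus the device sits on */
              .probe  = fluxcap_probe,
              .remove = fluxcap_remove,
      };

      /* in the module's init function: driver_register(&fluxcap_driver); */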
    • As an end user I have had no trouble with Linux device drivers. I suspect that most users with standard machines will have no trouble with device drivers. I use Debian. It compiles many drivers into the kernel directly. It includes the rest as modules. The list of hardware that a default Debian does not run on is pretty small.

      I do, however, use a custom kernel - not because I have to, but because I want to. In the case of Debian, I wish to use devfs, which is experimental. I also wish to remove unneeded drivers, modularize others, and compile the kernel specifically for the target CPU.

      If you must recompile your kernel, it is the fault of your distribution, not the Linux kernel.

      Is the Linux device driver model really flawed, as many claim?

      I take exception to your statement that "many claim." It makes you look like a troll. If you had written "some claim" I would accept that. Regardless, I disagree. From a technology perspective, Linux device drivers can do anything that "that other OS's" kernel can. The big difference is that Linux drivers don't usually come precompiled, and there is no good reason for this other than history.
      • I take exception to your statement that "many claim." It makes you look like a troll. If you had written "some claim" I would accept that. Regardless, I disagree. From a technology perspective, Linux device drivers can do anything that "that other OS's" kernel can. The big difference is that Linux drivers don't usually come precompiled, and there is no good reason for this other than history.

        I guess I hear this too much because I'm in the academic world (computer science). However, I do know that many Linux device drivers, such as the OpenAFS client file system module, rely on knowing the exact offsets of fields in task_struct. There is no run-time method of retrieving these offsets as far as I know, so our administrators are forced to recompile the OpenAFS module every time they apply a patch, or run a different kernel that has changed the layout of task_struct (a rough sketch of why is at the end of this comment). When maintaining systems that serve different purposes, I can see why this would be an administration headache. We use a distributed filesystem where most binaries, such as device drivers, are maintained on a central server instead of on each local box.

        Contrast this with Solaris, which is claimed by these same people to have a great device driver model: I can use the same device driver binary on Solaris 8.0 that I used on Solaris 2.6.

        Regards...I wasn't trying to be a troll.
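
        (The sketch promised above - the wrapper function is made up, but the field access is real. A module compiled against one kernel's headers bakes the struct offsets in at compile time:)

        #include <linux/sched.h>

        /* illustrative only: any module that touches task_struct fields
           compiles the field offsets into its own binary */
        pid_t whose_pid(struct task_struct *t)
        {
                /* this becomes "load at (t + offsetof(struct task_struct, pid))";
                   if a patch or config option changes the layout of the fields
                   before pid, an old binary silently reads the wrong bytes -
                   hence the recompile */
                return t->pid;
        }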
  • missing the point (Score:3, Interesting)

    by g4dget ( 579145 ) on Sunday September 08, 2002 @01:39PM (#4216607)
    I'm sorry, but tinkering around with another graphical configuration app isn't going to fix the fundamental problems with Linux kernel configuration. (In fact, if anything, I find a single window application with a tree widget worse than xconfig.)

    We shouldn't have to decide for hundreds of packages whether we want them or what options they should be pre-configured with in the first place. Almost everything should always be dynamically loadable and should always be dynamically loaded. Modules should be independent between minor kernel versions. There should be very few options, and those that are there should be configurable at runtime. The few remaining compile-time options shouldn't require some complicated interface. If we want single-file kernel distribution, we should be able to create a single file archive of the kernel and the required modules in a way that the bootstrap loader understands.

    While parts of the Linux kernel are great--the variety of kernels and file systems, for example--I think overall kernel architecture and configuration is by far the weakest part of the Linux operating system. It's not the GUI that inhibits Linux adoption by the masses--Linux GUIs are up to par with other platforms--it's the fact that a large number of people end up having to recompile the kernel to get things like audio, FireWire, power management, cameras, and USB working, even with the modularized kernels in some distributions.

    • Check out the second screenshot: two scrollbars from two different applications. One has the normal, ugly X scrollbar; the other looks nice, probably inheriting the selected theme.

      That, my friend, is a result of the true error in the whole picture: there is no consistency. People are doing what they think is right, but there is no overarching guideline to bring the whole system up to a level where everything is worked out.

      That is true for the GUI, and it's also true for kernel configuration. You are right that people shouldn't have to hassle over which packages to install and which options to compile into the kernel. On Windows I just run setup and the system configures itself. I never have to recompile any kernel, because 1) I don't have the source code (;)) and 2) I don't have to: WinXP will configure itself and will work no matter what hardware card I jam into the PCI slots. Install the driver (or better: XP has the driver already) and off you go. There is no need to compile a certain subsystem into the 'kernel'.
      • On Windows I just run setup and the system configures itself. I never have to recompile any kernel, because 1) I don't have the source code (;)) and 2) I don't have to: WinXP will configure itself and will work no matter what hardware card I jam into the PCI slots. Install the driver (or better: XP has the driver already) and off you go. There is no need to compile a certain subsystem into the 'kernel'.

        Yes, kernel modules work better on Windows and Mac OS X. That's the kind of kernel configurability Linux should aim for.

        That, my friend, is a result of the true error in the whole picture: there is no consistency. People are doing what they think is right, but there is no overarching guideline to bring the whole system up to a level where everything is worked out.

        Windows and Macintosh had to make dynamic kernel configuration work because recompilation just wasn't an option in their markets.

        Your analogy to GUIs misses the point. Hardware vendors on Windows or Macintosh don't make their drivers work because of some consistent guidelines; they make them work because otherwise they couldn't sell their products. Hardware vendors did this long before DOS/Windows even had a kernel or guidelines. Linux kernel configuration is, if anything, more consistent and standardized than Windows; it just happens to be more cumbersome for end users, too.

      • VisualStudio.NET bombs the Linux developer right back to the stone age.

        Indeed. As a Linux developer, I always feel like I'm back in the stone age when I use VisualStudio.

  • The primary problem I have with "graphical" anything is that it is a Windows-era notion of "every program written has to use a mouse and a window or it isn't user-friendly."

    Simply not true. When all I had was a VT100 command set, I wrote terminal interfaces that a four-year-old could use for fairly complicated pieces of software, no mouse required.

    Although it did use windows....of a sort.

    As for configuring and building a kernel, here is what a configuration system actually needs to provide:

    1) Interfaces to pull components in and out on demand.

    2) The interface should provide optional documentation and guidelines, as well as best practices for common kernel configs, given the applications the machine is going to run
    (e.g., will it be a router, a firewall, an app server, or a database server?).

    3) The configuration system should be easily scriptable with a minimal set of standard build tools
    (sh, make, the config scripts, etc.), so that less software is required to build the kernel.

    This implies inherently reduced security risk, a smaller kernel distribution, and fewer dependencies for Linux systems integrators.

    Eric S. Raymond's vision fails on all three counts as far as I can tell, both in how a kernel should be built and in what the logical assumptions for building a kernel are in the first place.

    Primarily, users shouldn't be building kernels anyway, which I think is the root problem here. Nor do I think there is something wrong with Linux just because a user can't do EVERYTHING with a mouse and windows.

    Let's be realistic here: users do not have the background to properly build a kernel, and a nice graphical front end for building a kernel just gets in a sophisticated developer's way. Nor does this fact detract from Linux one iota.

    That is what the argument here is about, and that is why many people who write kernel code don't use graphical tools ANYWAY, which I think breaks another assumption Mr. Raymond makes about a new config system.

    Dependency graphs are nice; parsers that separate the rules out into a language and generate such graphs are nice.

    But this is really OLD SCHOOL stuff. Any computer science or computing professional can buy a book on the theory and competently learn everything there is to know about REINVENTING the wheel.
    (ISBN: 0-13-1555045-4 Start reading at 7.3.2)

    And you too can write a configuration system similar to Eric's...

    But WHY WASTE YOUR TIME?

    The existing kernel configuration system is very scriptable, has supporting documentation available for each module and option, and works with a very minimal set of build tools on the command line.

    It is nice and simple, and it works very well without Python, X, windows, mice, a supported video driver, and a whole new set of tools that basically give us the same thing we have now, just a whole lot more complicated.

    I would like to see Eric address points 1-3 and tell us exactly why we, as kernel developers, need all this stuff. He hasn't done so.

    His website just shows pretty pictures of a kernel configuration system.

    Hack
  • xconfig is good enough for me. The problem is that dependencies between drivers aren't tracked (at least not well), so it is easy to turn on an option and break your kernel. Solve THAT problem and you win a cookie; fail to solve it and you're just painting pretty pictures.

    In addition, requiring the Qt or GTK libraries is ridiculous. I don't want to link to anything that large. If you can't get it done with Xaw (or something similarly small) I'm not interested in your stupid config tool. I'd rather use something fast and ncurses-based.

  • It looks surprisingly similar to the KDE Kernel Configurator.

    Control Center->System->Linux Kernel Configurator


  • The screenshots look more complicated to me - just a horrible GUI interface with a single giant tree list. How horrible.

    The sad thing is that you should have to recompile the kernel at all to add support for various bits of hardware. What is wrong with using drivers that are not compiled into the kernel and being able to add them at runtime?

    I can understand recompiling the kernel for certain reasons:

    1) Want to compile for your architecture to get the best performance

    2) Want to make use of a kernel patch, or non-standard kernel feature

    The monolithic design of the Linux kernel is primitive. It should be fully modular: a small kernel core with additional services for the various aspects of the kernel, and full runtime driver addition and removal. This will become even more necessary with systems that need 99.99% uptime using hot-swap PCI and the like.

    The kernel configuration should basically be an automated process - check how many processors you have, optimise for that processor type, etc. Compile all hardware support as drivers/modules. Install.
