What Does The Future Hold For Linux?

Nailer asks: "With kernel 2.4 in the final stages of bug hunting, and on track for a December release, I thought it might be pertinent to discuss the future of Linux. What now? ReiserFS will apparently be in 2.4.1, but there's very little information about the mid to long term available. Where do you think Linux [the OS, as well as the kernel] will head in the future? Personally, I'd really like to see POSIX ACLs as the default permission system [allowing the fine grained access control that many apps try and implement themselves]. What do you think?"

  • by Anonymous Coward
    I think one of the biggest threats to Linux is how quickly the needs change. The dotcom age has come and gone. The ASP age was full of promises... And now, time to change your mind: we see the XML-peer-to-peer model gaining speed (OK, don't remind me that p2p has been there from the beginning... I know this well. I mean there is now XML to make it rule seriously over the world).

    Linux is efficient as a server (maybe a little less so than FreeBSD, but that is just an opinion), has quite good security, and it can make use of these strong points to be a player in the new scheme.

    But huge pressure will soon appear, if it hasn't already, on the Windows side of the industry to push that OS higher than it has ever been.

    The Linux community would have to react *quickly*, but as far as I can see, nothing is happening. Just take a look at the HUGE wave of XML docs and software that are coming... they are nearly all Windows-related!

    Ideas, docs and work are badly needed!

    Anon
  • Nice use of 'ad hominem'.

    -Paul Komarek
  • Have a central repository of configuration data stored in a database, accessible through a special API. Call it a "registry".
  • If you have more than one mode line, X will let you change resolutions on the fly with a key combination.
  • That's all well and good, but that can be achieved *without* XML just as easily as it can with. I see no inherent advantage in using XML, except the gain in "buzzword compliance".

    Bingo! What all those XML-loving types don't seem to understand is that XML is just a glorified text file. You are right on the money in saying that a common config file format is what's needed, not "buzzword compliance" as you nicely put it. However, I don't think this can be achieved, not only because different projects would have to accept it but also because it may not be practical to do so. Is it possible to make fstab and named.conf use the same format? Maybe. Would that make them more convenient to configure? I doubt it.

    ___

  • If Linus built ACLs right into the kernel, I would be forced to put in serious effort and rip it out again.

    You mean recompile the kernel?
    ___

  • It's not realistic to expect a volunteer to write device drivers for hardware they don't have access to either.

    Seriously, and don't give me that "Well, Linux will never achieve world-domination with that attitude" crap. Hardware manufacturers generally write drivers for Windows; Microsoft doesn't even have to pay anyone to do it. Linux is a volunteer effort; if people can write a device driver for hardware they need, they do it.

    If you really want a driver for a piece of hardware, maybe you could ask one of the developers that wrote similar drivers and send him or her a sample of the hardware. There is simply no other way, unless you expect driver developers to buy and write drivers for every piece of hardware in the world out of the kindness of their hearts.

    "Free your mind and your ass will follow"

  • Linux will not die until no one is interested in playing with it anymore.

    I would hesitate to use the term 'extreme weakness' for the security model. It is simply the Unix security model which, while far from perfect, is at least reasonable with proper administration.

    "Free your mind and your ass will follow"

  • XML has some potentially useful 'meta' features. There are a lot of parsers, and other tools and standards that interoperate with it well.

    From the perspective of config files, one of the more important things is that you could define a set of schema fragments that can be used to assemble the config files for particular systems.

    It's certainly not necessary to use XML, but it does seem to add something over creating an entirely new and different syntax.
  • It would be pretty interesting to see where you got this list from, as I have seen completely different evaluations. Besides, I am a systems integrator/administrator/hacker and my experience differs tremendously from yours.

    You forget Solaris AND _AIX_ on the Web servers. If there is one thing AIX performed very well at, it was Web servers. Solaris is the big player for complex Internet tasks and big database systems.

    Linux is the big player on Web servers. Take a look at http://netcraft.com and search for their July report. Windows is beaten by Linux!!! (truly by a very narrow margin).

    BSD on the Web? Good for fixed, glued and inflexible implementations carrying high performance. With a special note on security if OpenBSD is called onto the show. For the rest, a headache.

    Small office servers? NT, Novell, Linux, BSD. I still haven't seen real and serious W2k implementations in such environments.

    Embedded, realtime? You are right where QNX is concerned, in terms of its quality, but not in terms of how widespread it is.

    Games? W2k???? Are you kidding? Maybe WinME (You? No thanks!), 98, 95, Whistleblower. But never W2k or NT. On W2k only a small number of action and quite new games perform well and stably.

    Besides, you forget several other Internet servers like FTP, DNS, mail, etc. There Linux also has a big piece of the pie.

    You forget some very specific application servers like DB servers and fax servers, where Linux has also gained some good points.

    And as far as workstations are concerned, please add Linux to your list. The fact that Linux is not ready for the dumb installer does not mean that Linux users don't exist. In this city alone there are a few thousand.

    As for W2k: very good for desktop office tasks and some professional graphics processing. Average for some games. Not too bad for very small office server tasks. On the rest: no comments...
  • I don't use DSound3D w/EAX and don't see the point of hunting for an SBLive. My archi-old GUS MAX is still enough for me. I need OpenGL, but not only on X. My tradition is to put everything in /usr. My colleagues put things in /opt or /usr/local. One puts them in /usr/src (...). The RedHat/Mandrake config file layout is a tractor for servers, but Slackware's is too dumb for a desktop station. RPMs are good but bloated. Tarballs are primitive but stick to the minimal demands. Drivers built on 2.4.0 should be recompiled for 2.4.17, as there are several features that will always grow in incompatibility (M$ featurefix DLLs are an example of this). I use glibc 2.1.9x/2.2 only. Not long ago, on one station, I used mostly the latest libc5. Ncurses can be 3, 4 or 5 as long as it fits the package I need to use, and I don't wanna run to the developer every five minutes asking him for upgrade features. Sometimes I need joe's-own-stupid-lib.so because I still have old stuff or some other demanding hacks (e.g. I need 3 libglides.so.* on my comp). On Perl/Python you may have some point. I use bash, but my Solaris friends highly prefer tcsh to it. And I need sash at init 1 for deep dives into the system.

    One note. People should stick to some baselines, but not to statements like "ncurses 5 and nothing else!" The rules should not be about specific apps or versions but about how to install a file without wrecking the whole system. Things like: if two libs have similar names and different, incompatible versions, then apps should link strictly to the versioned lib names and not to libXXX.so. And, when upgrading, warn that "libXXX.so.#.## is used by app" instead of unilaterally deleting it. If this happened, conflicts would become minimal.
  • 1) Agree.
    2) Partially agree. Some standards should exist, but no one should stick to them if they don't cover the features of specific devices. However, standards should be followed as far as possible.
    3) Hooks? What the Hell is this? Redmond's flamebait? I don't use KDE, Gnome, GTK, or anything else. I use all of them and none of them in preference. If anyone tells me to stick to some KDE or anything, then I prefer Bora-Bora to this.
    4) Configuration stuff should have some more organisation. But "NO THANKS!" to automatic loading if there is no choice. Such automation will only be useful for some users. Others, like me, would highly prefer to keep manual control over drivers. And more flexible manual control than we have now...
    5) Replace OSS? No. I prefer Alsa to OSS, but there are issues where OSS is much better than Alsa, especially where manual control of the drivers is concerned, or when you need minimal use of sound capabilities with good quality. Alsa is too bloated/unstable for some sound server implementations.

    6) OK. Fraction the kernel. REALLY! Fraction it. Divide it. Cut it in half, a third, a tenth. Slander it. The thread-oriented problem is serious, but it has its minuses, especially on some performance issues. And I believe that sticking to the "ONE UNIQUE" kernel is probably the most stupid thing ever established. The only problem is how well we can perform the fractioning. But I believe that people may find solutions much like those that appeared when we passed from a.out to the ELF-based kernels.
  • Don't confuse Linux and KDE/Gnome/WM... These things are not even Linux-rooted. And I hope none of the kernel developers will ever dream of integrating the kernel with such stuff.

    You may be right that they are still relatively slow. But that's a problem that is 70% due to X. Yes, I think that most *NIX developers should think about deeply reforming this system. Even 4.0.1 with a super-patched structure of DRI/XVideo/GL and other drivers is still slower than Windows.

    Distros more standard? No thanks... I prefer to go through a shelf and see 10 Linux distros instead of one Windows pack...
  • Another view:

    Windows: Pay for hardware and Windows. You come home and note that your preinstalled Windows is a year old. If you are a good citizen you go back and buy a fresh new Windows version. If you're mad at M$ you get a pirate copy. At the end of each month you reinstall Windows.

    Linux: You come to the shop and tell them what you need. You warn them that you will eat them alive if the HDD is not clean, virgin, nude and fresh new. You send all the Windows emblazonments to Hell and remind them of the Committee for Consumer Protection. You come home, grab a Linux distro. You take a month fine-tuning and recompiling the whole thing. For a year you forget about reinstalls...

    Solaris: You buy hardware the same way as for Linux but grab a copy of Solaris. You cry Hell about the lack of drivers, apps and fine tuning for a month or two. Then, as with Linux, you wait a year before reinstalling. In the end you reinstall Solaris the same way as before.

    BSD: You buy hardware the same way as for Linux and Solaris but install BSD. For a year you "make world" and wonder why BSD is not as flexible and available as Linux.
  • "rwx" permissions existed in Unix from its creation, so by definition they are Unix-like. That doesn't mean they are the best way (look at Plan 9's credential system for more recent ideas from Unix's creators), but they are what Thompson & Ritchie used in place of the more more capable (and complex) mechanisms of Multics.

    -Ed
  • First of all, the reason it's not in CVS is so that the code base can be controlled, so some evil hax0r from Microsoft can't go and make it mutate like their ads in Germany claim that Linux will do. Besides that, the decision is up to Linus. If he wants to move it into CVS, that's his choice, but I think it would be a bad idea.

    Two, Win2k is not all that stable. My Win2k laptop has just frozen from time to time, and you can still get BSODs just from using Win2k, though they are much more rare.

    I have several things I also dislike about Win2k:
    - It goes to some IP address starting with 169 when your DHCP server lease runs out. It also has this tendency to crash my ISC DHCPd, though I haven't bothered to sniff the conversation yet.
    - IPSec is a bitch to get working with free/swan
    - I have found that IE 5 and IE 5.5 on win2k have this tendency to grind the system to a halt within 3 hours of it being booted in some cases.

    I admit, I like the fact that it has IPSec, PPTP, and L2TP support built in. I believe that Win2k is designed for telecommuters.

    Oh, did I forget to mention that Win2k is a memory hog? Its kernel is 28 megs when running, and they call it a microkernel :) (The kernel on my exchange server at work is 26 megs when running.)

    -LW

    Yes, LW is an MCSE. Though he hates Microsoft products, they have their uses, like making a system for only one person at a time, and slowing work down so Network/Systems Administrators can sleep peacefully in their cubicles w/o any overhead fluorescent lighting.

    BTW, What kind of code can we now type into our messages? C? C++? Perl? Pascal? god forbid vbscript
  • Bingo! What all those XML-loving types don't seem to understand is that XML is just a glorified text file.

    This misses the point completely. By the same argument, C is just a glorified text file.
    --

  • Sorry, but this is not a good idea. The reason being that any call to any such driver requires two full context switches. One for the call and one for the return from the call. Since a context switch is monstrously expensive, compared to a simple method call (which is all it takes to do a driver call with the current architecture), it's a MAJOR speed bump.

    My point was that it's probably necessary to do it despite the speed hit. Vendors are just starting to notice Linux now; we'll be seeing an explosion of hardware support some time soon. Many of these drivers will be badly written, and stability will suffer a *lot* if a driver crash can take down the rest of the kernel.

    Now, regarding the speed hit. I think that it's possible, with clever programming, to mask most of the performance degradation. While individual calls to the driver will still have a high cost, you can minimize the number of calls that need to be made.

    For things like mouse and keyboard drivers, that have dozens to hundreds of events per second, speed is a non-issue. Context switches aren't *that* slow.

    Sound drivers are similarly not much of a problem. Modern sound cards have large on-board buffers, and only need new data a few dozen times per second - few driver calls needed. Similarly, if the block size is set to something reasonably large by *default* when writing to the driver, you won't lose much sending sound data _to_ it.

    The big performance-eaters will be network and video drivers, as both have to deal with large amounts of data. Again, though, the logical way around the problem is to queue substantial amounts of data before sending it to the driver. Make the default driver access points operate on long lists of primitives (for graphics drivers) or large blocks of data (for network drivers), and *remove* single-primitive functions. Add sample code or a user-space library to handle queueing, that the user can use if they don't feel like writing their own queueing code. Add a "flush" command that can purge the queue. Problem solved; the number of driver calls drops greatly, thus making context-switching overhead much less relevant.
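
    Here is a minimal user-space sketch of that batching idea, just to make the arithmetic concrete. None of these names (driver_submit, struct req, BATCH_SIZE) are a real kernel or driver interface; they are invented purely to show how queueing plus an explicit flush keeps the number of expensive boundary crossings small:

    #include <stdio.h>
    #include <stddef.h>

    #define BATCH_SIZE 64

    struct req { int op; long arg; };           /* one queued primitive            */

    static struct req queue[BATCH_SIZE];
    static size_t queued;
    static unsigned long driver_calls;          /* stands in for context switches  */

    /* Stand-in for the expensive crossing into the driver: one call per batch. */
    static void driver_submit(const struct req *batch, size_t n)
    {
        (void)batch; (void)n;
        driver_calls++;
    }

    static void flush(void)                     /* the explicit "flush" command    */
    {
        if (queued) {
            driver_submit(queue, queued);
            queued = 0;
        }
    }

    static void submit(int op, long arg)        /* cheap: rarely crosses the boundary */
    {
        queue[queued].op = op;
        queue[queued].arg = arg;
        if (++queued == BATCH_SIZE)
            flush();
    }

    int main(void)
    {
        long i;
        for (i = 0; i < 10000; i++)
            submit(1, i);                       /* 10000 primitives...             */
        flush();
        printf("%lu driver calls for 10000 requests\n", driver_calls);
        return 0;                               /* ...cost only ~157 driver calls  */
    }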
  • I've done driver development under Linux, and my main complaint is having to reboot after every test until I've stomped all of the pointer errors in the driver. Having driver modules live in their own processes would solve this problem, and would also add insurance against the Windows destabilization effect (half of the instability of Windows comes from buggy drivers scribbling over memory, and it's just a matter of time before the same thing happens to Linux).

    Yes, this is already present in other operating systems, and yes, this would slow down driver functions with the overhead from additional context switching, but that doesn't change the fact that it's still a Good Thing for Linux to have.

    You should even be able to do this without breaking most of the kernel programming interface. You'd just need to add extra functions for doing things like messing with PCI configuration space (which most drivers do, but few do extensively, so no great imposition here).
  • The big advantage to XML is that there's been lots of energy put into it, it's been pushed through the standards committees already, and there's lots of good, free tool support. The disadvantage is of course the verbosity and the supposed editing difficulty (although anyone who can make an HTML page shouldn't have any real trouble with XML in their text editor, not to mention that it can be validated, so you'll know when you screwed up before your program goes freaky).

    So it's not a clear Win-Win, obviously. But the problem is that nobody else has a standard structured format ready to go. You might find a lot of anti-XML sympathy among Unix admins, but then the anti-XML faction will have to spend the next 5 years arguing over what the best format actually is, and then you would have to go and build bug-free tool support for it.

    And, of course, the efforts to build a non-XML standard syntax will likely devolve into arguments about square versus angle versus curly brackets and line-ending characters, and will eventually fork and fork again and probably fail. And then we will be in the same situation as we are now -- several rolled-their-own open parsers and formats, except now each with the glossy sheen of bogostandard declarations.

    But let's be realistic here -- the argument isn't really about XML versus some non-XML alternative. The argument is about GUI'd admin tools and a newbie-friendly system versus the crusty ego and fat paychecks of unix wizardry. You need a little of the former if you are ever going to get Linux/Unix on the desktop, but you can never get rid of the latter. That means that both systems will need to be in place, and there will need to be near seamless conversion between both. That's lots of engineering and lots of mucky code. (I'm thinking of something like NeXT netinfo, which imports/exports to /etc.) Which means, you're right, it would take a lead distributor like RedHat or Debian to back the project, although having cross-Unix commercial support from someone like Sun wouldn't hurt either.

    --
  • Well, I could be completely wrong about this, but it seems like every Unix system I've ever seen is built up from the core assuming 8-bit (or even 7-bit) text streams. Kludging Unicode onto that would seem like a nightmare (or a complete rewrite of user space), and the config file issue is minor in the big picture.
    --

  • You won't have to edit XML by hand.


    Well, I was thinking about single user mode, and you will always have the VI-or-die faction to deal with.

    It would be [a win-win] if the only objection was the one you raised


    Tell that to the significant faction of Unix users that like things the way they are. I'm an XML proponent here, BTW.

    What's being transferred - in the Unix world, anyway - will remain out of the newbies' reach just as it always has.

    To some extent, yes -- I'm not expecting XML to solve anyone's sendmail configuration problems. However, the most compelling reason for doing this is to build a system that has a better/more friendly toolset for configuration changes. (See linuxconf, which imo is just dangerous.) XML for XML's sake alone is not going to fly.
    --
  • Linux needs a good forking. Seriously. Competition is good.

    While this may be true, it seems unlikely for such a fork to develop without animosity between the two sides, especially if both are targeting the general marketplace. (Niche markets will be more readily tolerated as forks, especially if they have conflicting needs.) XEmacs vs. GNU Emacs is a good example of this animosity. On the other hand, EGCS did resolve its differences with GCC. (But did they actually achieve their goals as stated?)

    That being said, I suspect there is a window of opportunity for someone to attempt to create a mainstream fork of the Linux kernel. Since Linus has eschewed most software engineering suggestions (CVS server, regression testing, etc.) and many people feel this is hampering Linux development, someone dedicated to maintaining a fork with a more engineered development process might actually have a chance at gaining mindshare. Unfortunately, this would almost certainly cause animosity and risk dividing the community. "United we stand; divided we fall." There's a delicate balance between encouraging competition and self-destructive infighting. GNOME and KDE may benefit from competition, but the entire Unix community was damaged by the Unix wars between vendors in the 80's...

    How do we strike the proper balance, and should we take the risk?
  • The kernel will start to move to more modularity. Yes, it supports modules, but I mean more generic hooks to the kernel, so that adding new modules will never require a recompile. The only real reason for one would be if the base kernel needed changes. More towards a microkernel-based architecture. More improved kernel subsystems.

    In user land, I'd like to see more KDE/GNOME/pick-your-WM integration. Yes, they are compatible, but I want more compatibility in things like themes, and theme creation tools. Applications need better performance as well. So many KDE and GNOME applications take forever to start up on my dual 233 w/128Meg of RAM. That is bad. Maybe this could be fixed by more compatibility in the libraries themselves; then the libraries that these programs use could be loaded into RAM at window manager startup. Or this could be an option for those with the memory. Thus the applications could start up really quickly.

    Next I'd like to see the distros become more standardized across the board, i.e. I should be able to install an RPM from one distro on another with no recompile and no problems, and no "this is not Mandrake or SUSE" error messages on Redhat.

    I don't want a lot, I just want it all!
    Flame away, I have a hose!

  • I'd like to see rpm be improved, but I don't think that apt-get is the way to go. apt-get is a kludge and cumbersome to use. It is not friendly and easy. If it were, everyone would be using it by now. Why do you think that so many distros are based on RPM? Because it is slightly better. Yes, it does lack functionality. I'd really like to see rpm gain the functionality of apt-get, but still remain the way it is.

    I don't want a lot, I just want it all!
    Flame away, I have a hose!

  • Uh, no confusion here. KDE and GNOME are as much a part of Linux (the OS) as the kernel is. You don't hear someone saying "I use Redhat BSD"; it's Redhat Linux or Redhat GNU/Linux. Yes, kde/gnome work on Solaris and BSD and other UNIXes, but let's face it, they follow the open source trend with Linux, and much of KDE's direction (the Qt license issues) has been driven by Redhat and Debian refusing to include it. Yes, it is true that you don't need Linux to use them, but did anyone see BSD complain about the KDE license? NO, you saw it from Linux users. I am not talking about kernel integration but integration between KDE and GNOME. Yes, they can still be separate, but it would be nice if they had common libraries. I guess it would be nice if Qt and GTK were part of the standard libraries, and the KDE libs as well as the GNOME libs were too. Thus a base install of X could offer these libraries ("include X lib extensions? Yes") and then they are installed. Maybe this belongs in the installer.

    By more compatibility between distros, I mean that RPMs from SUSE should be easily installed with no --force, no --nodeps and no error messages, provided that you do have the proper libraries. This is essentially up to rpm, which IS part of Linux, as I don't think any non-Linux OSes use rpm. Sure you can have 100 Linux distros on the shelf, but I should not need to compile 1 program 100 times, once for each of them.

    By slow I mean slow to start up. Too many libraries have to be loaded at startup by KDE and/or GNOME. Try starting Konqueror in GNOME/Sawmill. It takes a while to start up. It would be nice if the GTK and Qt libs could be loaded up by X or the window manager upon startup, or if this were an option available in X or the window managers. This means that starting X would take longer, but the applications would start up fast. Hey, it takes a while for my system to start, but that is because I start all my services outside of inetd, in the startup scripts.

    I don't want a lot, I just want it all!
    Flame away, I have a hose!

  • FINALLY!!!

    I have been complaining/whinging/asking about the lack of fscking ACL's for over a year and a half. They are (IMHO) one of the BIGGEST limitations of Linux. If anybody has attempted to use Linux for a large number of users, they soon find out that they need ACLs!! Yes, there are patches and such, and I have tried them, although they are very unreliable.
  • OK.. let's say every user has a public_html folder in their home dir.

    You want:

    1. files in their home dir, by default, to be readable only by the user.

    BUT

    2. any file in public_html to be readable and executable by world.

    You want these permissions to apply TRANSPARENTLY. So whether they put the files there from their shell, Samba, FTP, AppleTalk, NFS, WHATEVER - those files in public_html are available to the world.

    Go on.. try to do that without ACL's.
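
    For what it's worth, this is roughly what the ACL answer to that scenario looks like with the POSIX-draft ACL patches and their libacl API (compile with -lacl). It is only a sketch: error handling is minimal, the directory name comes from the example above, and files created inside still get the default ACL combined with whatever mode the creating program asks for:

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/acl.h>

    int main(void)
    {
        /* Access ACL on public_html itself: owner full, everyone may enter and list. */
        acl_t access_acl  = acl_from_text("u::rwx,g::r-x,o::r-x");
        /* Default ACL: inherited by anything later created inside public_html,
         * whether it arrives via shell, ftp, samba, nfs... */
        acl_t default_acl = acl_from_text("u::rwx,g::r-x,o::r-x");

        if (!access_acl || !default_acl) {
            perror("acl_from_text");
            return 1;
        }
        if (acl_set_file("public_html", ACL_TYPE_ACCESS,  access_acl) != 0 ||
            acl_set_file("public_html", ACL_TYPE_DEFAULT, default_acl) != 0) {
            perror("acl_set_file");
            return 1;
        }
        acl_free(access_acl);
        acl_free(default_acl);
        return 0;
    }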
  • so you believe that you can achieve this setup with a umask? What is it?
  • you _can't_ do this transparently without ACLs.
  • This command won't work. Look at the output of "ps -ae". Something like this is closer:

    ps -C "tcsh" --format=pid= | xargs kill -9

    or if you really want to do it with a grep and regex (bracketing the first letter keeps the grep from matching its own process):

    ps aux | egrep "[t]csh" | awk '{print $2}' | xargs kill -9


    ----
  • ^tcsh^clip
    ----
  • Gee, I have a single config tool: it's called vi (on a minimal system).

    Microsoft really f*cked up Windows for many of us when they moved away from *.ini to the registry. Personally, I think flat files for configuration are the pinnacle of configuration methods. XML might seem equally easy to edit, but the parsers would need to preserve formatting and comments. Why not just standardize on

    # this is a comment
    keyword=setting

    We're almost there already and there are plenty of libraries out there to read and write them.

    I guess the only real advantage of XML conf files would be that your GUI config utility wouldn't have to know about a new tool before it could give you the config options for it. But my philosophy is that if you have to use a GUI to config your system, maybe you shouldn't be config'ing it.
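
    In fairness to that position, a reader for the "keyword=setting" format above really is only a handful of lines; you barely need a library at all. A rough sketch (the file name and the printing are just for illustration):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[256];
        char *eq;
        FILE *f = fopen("example.conf", "r");    /* any "keyword=setting" file */

        if (!f) {
            perror("example.conf");
            return 1;
        }
        while (fgets(line, sizeof line, f)) {
            line[strcspn(line, "\r\n")] = '\0';  /* strip the newline          */
            if (line[0] == '#' || line[0] == '\0')
                continue;                        /* skip comments and blanks   */
            eq = strchr(line, '=');
            if (!eq)
                continue;                        /* ignore malformed lines     */
            *eq = '\0';
            printf("keyword '%s' -> setting '%s'\n", line, eq + 1);
        }
        fclose(f);
        return 0;
    }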
  • On kernel-traffic you can see a recent discussion about select and poll on Linux. They just don't scale well AT ALL.

    In last week's Kernel-Traffic summary [linuxcare.com], Linus actually says the opposite:

    what you're showing is that Linux actually is _closer_ to the perfect scaling (Linux is off by a factor of 5, while Solaris is off by a factor of 15 from the perfect scaling line, and scales down really badly).

    Now, that factor of 5 (or 3, for 2.4.0) is still bad. I'd love to see Linux scale perfectly (which in this case means that 10000 fd's should take exactly 100 times as long to poll() as 100 entries take). But I suspect that there are a few things going on, one of the main ones probably being that the kernel data working set for 100 entries fits in the cache or something like that.

    Either

    1. Solaris has solved the faster-than-light problem, and Sun engineers should get a Nobel prize in physics or something.
    2. Solaris "scales" by being optimized for 10000 entries, and not speeding up sufficiently for a small number of entries.

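    For anyone wondering where the linear factor comes from: every poll() call hands the kernel the whole pollfd array, and the kernel walks all of it whether one descriptor is ready or all of them are. A trivial sketch (it watches the same descriptor many times just to make the array big, and stays under the usual per-process fd limit):

    #include <poll.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define NFDS 1000    /* the cost of each poll() call grows with this number */

    int main(void)
    {
        struct pollfd *fds = calloc(NFDS, sizeof *fds);
        int i, ready;

        if (!fds)
            return 1;
        for (i = 0; i < NFDS; i++) {
            fds[i].fd = 0;              /* stand-in: watch stdin NFDS times */
            fds[i].events = POLLIN;
        }
        /* One call, but the kernel scans all NFDS entries to answer it. */
        ready = poll(fds, NFDS, 1000);
        printf("%d of %d entries ready\n", ready, NFDS);
        free(fds);
        return 0;
    }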

  • Linux needs a good forking. Seriously.

    Yeah, me too, I could definitely use a good forking. Perhaps I should spend less time with you guys and more with my girlfriend ;-)


    --Gfunk
  • Although it isn't point-and-drool, the GNU CFENGINE (http://www.iu.hioslo.no/cfengine/) allows easy management of 1000's of systems. It is scripting based, but the scripting language is very, very, very simple and easy to learn.

    I believe PIKT (http://pikt.uchicago.edu/pikt/) is another similar app..

    As for one-click management, even in the NT world I've never seen a decent system for managing large numbers of systems that is totally point and click. Even Microsoft Systems Management Server requires some scripting (as well as requiring you to set up SQL Server).

  • There is no "+1, Pun" moderation option so the moderator had to select another reason. Some moderators recognize how enthusiastic some people are about the grape.
  • Also, it would be nice to be able to adjust the resolution of X while actually in X

    It is called xvidtune
  • Does anyone have an online resource explaining what ACL's enable you to do? Maybe even comparing the Unix flavor(s) to the ACL's they have on Windows NT? Just some introductory text.
  • 1) Speed is already one of the most emphasized design goals of the Linux kernel. If you have never read the Linux kernel source, then you're probably not aware of the numerous gcc-specific tricks and tangled code that are used for speed optimization. Trust me. The kernel is as fast as it's going to get.

    2) It is standardized between every incremental revision, as much as any driver model can be standardized. The major changes only come between minor and major revisions, such as 2.2 -> 2.3/2.4.

    3) Whatever.

    4) Nonsense. The kernel uses the user-level helper tasks for efficiency's sake. If you haven't studied the kernel module loading system, then you shouldn't really be talking about it. Standardization of config files is also impossible in many cases due to the differences between said devices.

    5) Might be nice.

    6) Okay, now this just shows total ignorance of how Linux and software/system interaction in general works. The kernel's treatment of threads is that they are equal to processes that share the same memory mapping structures. The Linux scheduler handles thread-switching quite admirably, and you will be hard-pressed to find any desktop OS scheduler that performs as well as Linux's. The major misconception here is in assuming that "pervasive multi-threading" is all the kernel's responsibility. That is all the responsibility of the user-level library and application code. The kernel schedules and relegates threads. It's the user-level code that is responsible for taking advantage of the threading facilities available (see the clone() sketch after this comment).

    You seem to be bound up in the desktop world of "GUI == part of OS." In Linux, the GUI is a user-level application, as well it should be given what they have available. God forbid anyone attempt to weld the Unseelie monstrosity that is X11 into the kernel. That would introduce bloat and instability into the entire system. This is why there aren't kernel-level SAMBA implementations, BTW. The responsiveness of the desktop interface is not the Linux kernel's fault. The fact is that QNX and BeOS both have very well written GUI layers. However, to claim that they are in general more responsive is a little bit of marketing/evangelism propaganda. The responsiveness of the Linux kernel is very, very nice on a process/thread level.
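
    On the threads point (6), here is a small Linux-specific sketch of what "threads are processes that share the same memory mapping" means in practice. It calls clone() directly just to make the sharing visible; real programs would normally use pthread_create(), which glibc builds on top of clone():

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    static int shared_counter;                    /* visible to both tasks       */

    static int worker(void *arg)
    {
        (void)arg;
        shared_counter = 42;                      /* writes the parent's memory  */
        return 0;
    }

    int main(void)
    {
        const size_t stack_size = 64 * 1024;
        char *stack = malloc(stack_size);
        pid_t pid;

        if (!stack)
            return 1;
        /* CLONE_VM | CLONE_FS | CLONE_FILES: a new schedulable task that shares
         * the caller's address space, cwd and file table -- i.e. a "thread" as
         * far as the scheduler is concerned. */
        pid = clone(worker, stack + stack_size,
                    CLONE_VM | CLONE_FS | CLONE_FILES | SIGCHLD, NULL);
        if (pid == -1) {
            perror("clone");
            return 1;
        }
        waitpid(pid, NULL, 0);
        printf("shared_counter = %d\n", shared_counter);   /* prints 42 */
        free(stack);
        return 0;
    }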
  • Flat text files are good for simplistic yes/no answers and for storing strings of domain names and such - but it's just linear expansion and there's no notion of child/parent relationships and nesting.

    Not at all. Mathematically, XML and ini files are probably equivalent with respect to the kinds of graphs they can represent, if you allow values to reference keys in other sections.

    The main strength of XML is as a markup language, to combine content and metadata into a single data stream in an easily separable way. As a way of assigning bindings to parameters it is neither as simple as ini files nor as powerful as BNF grammars.

  • The Linux model of having a bunch of stuff out there competing does a better job of getting the winners into widespread use than a system where one version may (e.g.) have better drivers but a worse scheduler than another version, because they forked and can no longer just take each other's code when one of them is clearly better

    I agree, but with a caveat. The alternative you present is not what I'm suggesting, and it wouldn't be allowed to happen anyway. The idea here is just an extension of the odd-number/even-number idea we already have. There would always be exactly one official Linux, compared to which any forks have a somewhat inferior status. It's like after 2.4 we allow both 2.5red and 2.5blue to go ahead, with a promise that when the time comes to make 2.5 into 2.6 fair consideration will be given to basing 2.6 on 2.5blue instead of 2.5red.

    Looked at another way, it's like Linus (used here for convenience as a personification of Linux authority) saying, "I don't believe in this enough to devote my own time and effort to it, but I recognize that I'm not infallible. I'm willing to let the community decide which they prefer, instead of branding anyone who disagrees with me as a renegade splinter group unworthy of attention or support." Wouldn't that be nice? What I'm suggesting is that the dictator - however benevolent he may be - allow elections to be held once in a while, with real candidates given a real opportunity to present their own views to the electorate. The Linux "brand" has become too important to be the exclusive property of one person, even Linus. Linus does other things besides Linux, and Linux can be done by other people besides Linus.

  • Here's another point I forgot to mention. If, as in your example, A has better drivers and B has a better scheduler but code cannot be readily shared between the two, I'd say it might be because the code isn't modular enough. In an ideal world one could mix and match "best of breed" kernel components - even things like drivers and schedulers - in the same way that we already do for applications. There's nothing fundamentally different about kernels that precludes this, and if you'll look back at my original post you'll see that modularity was the very first item on my wish list. Maybe the idea of a fork will be more palatable when things have been made more modular...which IMO makes it all more important that modularity be improved.

  • Chances are, a large number of voters (I think they should be developers) would vote to keep good desktop performance; after all, that's what most of them will be programming for.

    That seems like a reasonable outcome. Some of us - myself included in this example - might not like it, but if that's what "the people" want...

    Next thing you know, someone is in charge for a year or two, making horrible technical decisions

    Whoa, wait a minute. I'm suggesting that the community be allowed to decide between work products, not between people. "Who's in charge" wouldn't be affected; it would still be the same not-quite-meritocracy it is now. Those same people would be making a promise to give other people's ideas a fair chance, not to give up their leadership positions.

    In any case, even if our leadership were elected, if they started making bad technical decisions the same remedy would apply that always applies in a democracy: vote them out. The scenario you describe, of people getting "into power" and remaining there despite inferior technical decision-making, just seems impossible to me.

    Issues that are very big (i.e. Big Iron support) should be resolved through compromise (if a compromise is technically feasible), as opposed to "this way" or "that way".

    Yes, if possible. However, as you seem aware, sometimes it's not. Sometimes there really is just "this way" and "that way" and they're fundamentally incompatible. Furthermore, sometimes it's not clear even to the most highly informed people which will turn out better. All I'm saying is: do the experiment. Right now, when such an impasse is reached, only one option is pursued - the one that Linus and/or other senior developers happen to favor, for reasons that may or may not be entirely supportable from a technical standpoint. I think we have enough developers now that, in at least a few of the hardest cases, we can try both and see what happens, but I don't think that can work if one group is "real Linux" and the other group is "a bunch of renegades who are trying to destroy real Linux".

    As I pointed out in an earlier post, one critical part of making this work is an agreement on both sides to share information about common problems and solutions. I would expect that, any time there's a fork, a "winner" would eventually be declared and the "loser" would then abandon the fork in favor of trying to apply lessons learned during the process to the new official Linux. Maybe that's too idealistic, maybe people are too stubborn and territorial and ego- or profit-driven to allow things to happen that way. *shrug* It's just an idea that I think deserves at least a moment's consideration, even if it's later rejected, and in general it doesn't seem to get considered at all.

  • I thought that a somewhat-concrete example might help clarify what I'm suggesting.

    Let's say that I, and a significant number of other folks, want to rework the Linux SCSI mid-layer using the CAM-based BSD code as a model. We'll call this group "red" in reference to 1917. The "white" group, which includes some individuals with more clout than any of the reds, also wants to rework the SCSI mid-layer but wants to reinvent the wheel and do it a whole new way unlike any other platform. In addition to the project-specific "core code" for each version, a certain number of tweaks would be necessary to common code to make the core code "fit". Here's what would happen now:

    1. The white group's changes, to both project-specific and common code, will be accepted as soon as they're ready and put into the latest official development kernel.
    2. The red group's changes will remain in an unofficial patch. The patch will not appear in the usual places people go to for Linux updates, so the reds will have to develop their own separate distribution network.
    3. Eventually, both projects will be near completion. The white code will already be in the official Linux kernel, and will be accepted by default. The red code will only get in if several conditions are met:
      1. The red code is not only better, but quantifiably and significantly better, than the white code.
      2. The red code must be a patch not to the kernel as it was when both projects started, but to the current kernel loaded with white-specific changes.
      3. The reds can somehow overcome the objection that their code cannot be as well tested as the white code which has had the advantages of the official-Linux distribution system for several months.
    4. Not surprisingly, given the much higher standard applied to the reds than to the whites, the red code never makes it into Linux. If red was in fact better, but not by enough to overcome these obstacles, what the Linux community gets is inferior code.

    Now, here's how I think things would work with an approved fork.

    1. Two separate development kernel branches are created, for red and white. Both are widely distributed on all the usual Linux sites.
    2. Both red and white are required to track the others' changes to common code. When incompatible changes occur, Linus steps in and decides which change takes precedence.
    3. Eventually, both red and white code are ready. Both have been tested equally. Both are fully self-contained and up-to-date kernel versions ready to go to the next stage.
    4. One version is accepted as the next official Linux version. The other is relegated to the status of an unapproved fork not subject to the rules that governed the approved fork before.
    5. The members of the "losing" team join the "winners" and try to push for applicable elements of their approach to be rolled into the official version. Eventually, the Linux community gets the best code, perhaps even better than either team's individual efforts.

    Doesn't that just seem a whole lot better?

  • You can simulate ACL's through users/groups/ownership/permissions just fine.

    Simply wrong. Read/write/execute gives eight combinations, and user/group/world only allows three to be expressed simultaneously, and expressiveness is further limited by the fact that most users can't add/delete/modify groups to suit their purposes. ACLs, by contrast, can be used to express all eight possibilities at once, and apply each to arbitrary sets of users.

    Anyone who ever studied even one minute of logic can see that the two systems are not equivalent.

  • But I think that *BSD is the fork that you were wishing for

    I knew someone would say that. I don't strongly disagree, but I do disagree. Anyone capable of doing meaningful work knows of the *BSD option. If they have foregone that option, there's probably a reason. Most often it's because they're more familiar with Linux. It could also be because they want to address a perceived deficiency in Linux, and working on *BSD doesn't do that. Their idea might not even apply to *BSD. In any case, I think they should have that option.

    In order for this to work, there has to be at least lukewarm support from the regular Linux cabal. For example, the following:

    • An agreement, from both sides, to share information on common problems and common solutions.
    • An agreement from the "in crowd" not to hamper the forkers' ability to contribute by ostracizing them - e.g. by refusing to answer questions that would be answered for anyone else, by leaving the forkers "out of the loop" when discussing changes that affect them, or by plain old badmouthing.
    • The right to use the name "Linux", so long as the versions remain clearly distinguishable.

    I don't think that's too much to ask. It's not like asking for the insiders to devote gobs of their own time to further a project they don't believe in. If the people involved can be mature enough to make and keep these kinds of promises, and let the community decide in a semi-democratic way which set of changes produced the best result[1], a fork could be a very good thing. On the other hand, if they decide to be territorial about it, if they decide to put their egos or their prospects for financial gain as full-time paid "Linux gurus" ahead of technical progress, then it could be really ugly. I could not recommend an "unsanctioned" fork; that would only hurt everyone. The only way a fork can be beneficial is if the cabal members can be persuaded of the benefits. Quite honestly, I don't think they're up to it, so a productive fork is an extremely remote possibility.

    [1] Of course, the community that really matters is the community of distribution providers, who get to decide which version to use as the basis for what they provide, and in many cases they'd be "voting" for what their own members/employees happened to be involved in. It's hardly in Red Hat's interest, for example, to say that someone else's kernel mods worked out better than the ones they paid their own highly-touted employees to produce. This is hardly what I'd call an example of democracy in action, but it's at least a little bit closer than the current autocracy.

  • One problem I see with this is that it means that both branches have to be ready at the same time in order to make the decision.

    I don't think that's necessarily true. At a certain point it's perfectly reasonable to say "too late, you lose, better luck next time" and yank the slow group's approved-fork status.

    The problem I see with your proposal is that it doesn't solve the basic problem of placing unreasonable - and non-technical - barriers before people who want to try something different. In my example two posts ago of how things work now, the red team not only has to do their own original work but faces the formidable extra obstacle of tracking changes made by the white team. The white team, by contrast, is given carte blanche (heh) to proceed without worrying about the red team's changes, or even whether their own changes will adversely affect the red team's efforts. This is not a small issue. With all of the interdependencies in the kernel (see my comments on modularity or lack thereof), this makes it extremely difficult for red to keep up unless they have many more developers than white - and if they did have such a majority they'd be the white team.

    This is actually very similar to a current political debate about winner-take-all vs. proportional representation. What we have now is very close to winner-take-all; those in the minority are effectively shut out of the process. Approved forks are like a loyal opposition; they give the dissenters at least some chance to get their point across. What's important is that we find some way to turn dissent and competition into productive forces. Are approved forks the best or only way to achieve that? Do they achieve it at all? Maybe; maybe not. What's certain is that continuing to let good ideas get "killed in committee" (this political analogy works better than I thought) is not healthy for Linux in the long term.

  • the one thing that i believe linux should really be looking at improving substantially is user friendliness. especially with installing software. that was the one thing that made me so angry when i first tried using linux

    The Linux kernel is never going to be "user-friendly" territory. You're probably talking about distribution issues, which are beyond the scope of the discussion here.
  • The most important thing we need is everything becoming more modular.

    Linux is already as modular as it needs to be for the moment. QNX and Be are fine OS's, but saying that their hardware installation procedures are without their own problems is just wishful thinking.

    OS modularization is more-or-less a religious issue. Advocating more modularity in every aspect of the Linux kernel is to slide down the slippery slope of micro-kernel advocacy, and that way madness lies.
  • Sorry, I"m not going to try to convince you. You can simulate ACL's through users/groups/ownership/permissions just fine. The only other thing I'd do is remove the root restriction on ports 1024. On many Linux machines, root is no smarter than the other user of the machine. Ports 1024 are no more secure on these machines.
    -russ
  • No, I'm not joking. Why does the lpr system have to run as root?? Because the lpr port is < 1024. That's the ONLY reason. /dev/lpr* can be owned by a user ''lpr''. Why does bind have to run as root?? Because its port is < 1024. Why does sendmail have to run as root? Because its port is < 1024 (yes, it has to deliver mail to users' mailboxes, but that could be done by a separate program which sendmail communicates with).

    In short, most of the root exploits have occurred NOT because of any need to be root, but simply because of the < 1024 restriction.
    -russ
  • No, you can't. If the machine serves those ports, they've already been bound by the program that serves them.

    And in any case, instead of requiring uid == 0, they could be limited to uid < 100. That still gives the sysadmin control over who opens the ports, but it keeps root the hell off network-accessible ports.
    -russ
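
    For contrast, the usual way today's daemons live with the < 1024 rule is to do the one privileged thing first and then drop root before talking to the network, along these lines (the port and the "nobody" uid/gid of 65534 are just example values):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct sockaddr_in addr;
        int s = socket(AF_INET, SOCK_STREAM, 0);

        if (s < 0) {
            perror("socket");
            return 1;
        }
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port = htons(515);               /* lpr's privileged port      */
        addr.sin_addr.s_addr = htonl(INADDR_ANY);

        if (bind(s, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("bind");                       /* EACCES unless we are root  */
            return 1;
        }
        listen(s, 16);

        /* The only part that needed root is done; give it up before
         * answering anything from the network. */
        if (setgid(65534) != 0 || setuid(65534) != 0) {   /* e.g. nobody */
            perror("drop privileges");
            return 1;
        }
        printf("listening on port 515 as uid %d\n", (int)getuid());
        close(s);
        return 0;
    }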
  • Does anyone want to do that? Are ACL's more or less easy to implement? Is their correct operation more or less easy to audit?

    Just because the feature list allows more flexibility, you also have to consider the difficulty of implementation. Just because you can split up security more finely, it's no help if one of the splits creates a security hole.
    -russ
  • "Well, let's just say, 'if your VCR is still blinking 12:00,you don't want Linux'".

    that's funny. i never set the clock on my vcr and linux is the only thing i use. i tried to set it once, but i couldn't find the etc directory so i had nowhere to put the ntp.conf file. : >(

    john
  • Not really, the discussion is about the kernel and the OS. Although the installer is definitely tied to the distro, it would be nice if Linux had a singular installation system.
  • You can simulate ACL's through users/groups/ownership/permissions just fine.

    No you can't. How can you have a file that is owned by user 'alex', readable by user 'bob', writable by user 'carol', and not readable by anybody else?

  • I don't understand what you need ACLs for...

    All they do is complexify and bloat a relatively simple permissions concept.

    Again, what actual situation do you want to deal with that you can't do with existing Linux permissions?

  • I think Linux needs to add Wizards and paperclips, if only for the sheer pleasure of doing stuff like this:

    ps -ae | grep "*clip*" | kill -9

    George
  • XML gains you I/O more structured than the stream, which is a Good Thing. However, that's not directly related to configuration files. What it /does/ do for configuration files is make it possible for there to be a single 'universal' configuration tool. The semantics of 'what does this mean' can be embedded the same way the kernel build system has its help section; that's not a problem. The configuration tool doesn't actually need to understand the semantics. Why XML? It's an open and well-supported standard. There's no need to invent something.

    Regarding a standard format for this -- is there a reason we can't clone Apple's? Didn't they already solve this problem for OS X?

    -_Quinn
  • I'm not sure either.

    It seems to me the bloat ACLs add to the file systems, all the utilities that work with files, AND of course the kernel needs to be considered versus functionality that not a lot of people will ultimately need (if ACLs work smoothly everyone will use them, but it will be functionality that the current model handles OK...). Are ACLs a fundamentally good idea?

    I think a good question to ask is "Is Linux/*BSD with ACLs still Unix?". My gut feeling is that it is not... In Unix, everything is supposed to be a file. Directories are just files that describe files. When ACLs are introduced, where are those lists going to be saved? In the directory (how)? In an entire shadow file system? As secret blocks with the file? Whatever mechanism is used, it will certainly break ALL of the major file interchange tools (dump, tar, cpio) since they don't support ACLs. Poof, the same problem we have with idiot Macintosh files (namely resource forks) appears in *nix -- UGH! Fsck will be broken too. I, being an old person (at least in this business), don't like the idea that my new tars won't be in the same format as my old tars. I *do* read tapes from the 1970s. And yes, there will be 'compatibility switches' so one can move back and forth, but are these kludges really a good idea?

    As a result, perhaps it is time for a fork in the road where the ACLers go their own way and the Unix people go another. Perhaps it is time now to re-assess where this is going. Is it time to build a new O/S that has B1 or B2 as its goal?

    I've worked on ACLed systems both in the pre-computers-are-cool days (IBM mainframes) and in the contemporary world (NT). I find the abstraction painful to deal with and the tools for dealing with it nearly impossible to use reliably, especially in the hands of junior system administrators. One of the neat things about Unix is that I can teach a new person about the filesystem (FFS for example) in an elegant way in about 3 hours, to a level I have no hope of reaching with messes like NTFS or HFS+ no matter how much time I spend.

    Finally, I think it is also time to ask when is an operating system 'done'? I'm not talking about the continual addition and subtraction of devices and services to keep the O/S vibrant & alive, but new core functionality like changing the security abstraction or the concepts of processes and files. ACLs will change the security abstraction of Unix and introduce a new entire class of functionality along with an entire new class of problems... I think they don't fit well in the normal Unix way of doing things and so I think that this is one feature *nix should probably let go by.

  • nonononono....

    The only place I can see XML being used, as far as a universal configuration file format, is only as a large central repository, with well-defined semantics...perhaps RDF or something.

    Otherwise, for the majority of small programs, XML is cracking a nut with a jackhammer. Before we even talk of XML, we should talk of some standard flat file. Take the java.lang.Properties format for instance. Straightforward key=value pairs. No magic, no whitespace weirdness. Very simple. Applicable for probably 95% of general applications. Only when you start needing to introduce *structure* do you really want XML. Which leads me to think the most applicable place for XML, as far as configuration files go, would be a central XML settings registry which *all* applications read through some standard API (but were not allowed to directly manipulate). XML is not a panacea. I think the vast majority of programs could do just fine with a *standardized* flat file format. The problem now is just that there are so *many* different flat file formats being used.
  • This article is about Linux (the popular OS kernel). You are talking about GNU/Linux, or some Linux based distribution.
    Your suggestions should be aimed at the XFree developers who have nothing to do with Linux development.
  • The difference between a database and an XML-centralized-parser system is pretty small. It feels like what you're arguing for is a registry. Personally, I think a registry is an excellent thing, Linus' opinions on the matter notwithstanding. But I recognize the fear that as we move away from simple ASCII, the files will become too complex to edit by hand, and people will be at the mercy of the tool interfaces. And some people just don't like that. Your job is to convince them this is not something to fear.
  • snol wrote:
    Also, it seems to me that Linuxes are typically less willing to try and figure things out their own durn selves than Windowses. In MS desktop OS's once you install your nic driver it goes and FINDS the darn DNS and gateway and all that shit, which yes I should know but why the hell should I have to type it in? If Windows can find all that stuff Linux should be able to.

    If you're going to complain, you'll have to do better than that.

    You seem to believe that Microsoft products can automatically determine what network they're running in, the DNS servers, the gateway, and so forth, but it isn't true. In order for a Windows client to autoconfigure, you need to have a DHCP (Dynamic Host Configuration Protocol) server on the network. Linux networking setups can use DHCP, too, and have been able to for a couple of years.

    snol also wrote:

    I'm afraid of the word "compile". I know, I know, you can't get away from it when you have software that has to be compatible between different distros. Probably it's all very easy; now go and convince your mother of that and I'll eat my words. Seriously, maybe APT is as good as everyone says it is; I'm just talking from the experience of four aborted installs.

    While there is no reason to be afraid of compiling software, there also isn't any reason to compile software, not with any of the current distributions. I don't usually compile any of the bits of my system, and compiling isn't part of the basic installation process of any Linux distribution or commercial application of which I am aware. (FreeBSD is a different matter, but FreeBSD takes a whole different approach to things and it's not all that hard to type "make install".)

    I can do nothing but sympathise with you on your failed installs, but maybe my sympathy is worth something. It usually takes me a couple of days to figure out the installation procedure for a new operating system whether it be Windows NT, FreeBSD, or yet another Linux distribution. While they all ask pretty much the same questions, the wording and context is different enough that the consequences of the choice may not be immediately apparent. I usually treat an install like a particularly easy-to-solve adventure game. If I don't get all the way through it the first time, I can figure out what choices to make different the second time. Or the third. Or...

    snol also wrote:

    I want things that I need to know about to jump out at me - I don't want to dig through unfamiliar directories via the command prompt. I want a folder called "Control Panels." Maybe I'm just choosing the wrong distros or something.

    Speaking as someone who has done his share of front-line Internet tech support, many people find the Windows "control panel" to be the height of confusing software. I'm glad you've learned to navigate it, but you did have to learn. It didn't "jump out at you" the first time. (Well, maybe it did, but that was unlikely. For most Windows Users, the Control Panel is definitely unfamiliar territory.)

    Under Unix-like systems, most everything configuration-wise is under /etc. If looking at that is "digging through unfamiliar directories via the command prompt" then you need to become as familiar with it as you do with your Control Panel. Either that, or familiarize yourself with the mechanisms that you're given for easily manipulating those files. I wouldn't know what they are because I don't want to climb that particular learning curve (not after having climbed the much steeper one to learn about the whole /etc tree) but they're there and many people like them.

    Of course, another option to take is the one you took: You can simply not run Linux until you've got a more compelling reason to make the effort. However, Linux and Windows are always going to be different, so those people who define "easy to use" as "works just like Windows" will forever complain about how hard Linux is to setup and use even though, for someone unfamiliar with either, the effort is actually comparable and has been for some time.

  • embedded: QNX, Beos
    >>>>>>>>>>
    You do realize that BeOS is a desktop OS and not an embedded one? BeIA is probably what you're referring to. In the embedded market, there are MANY other OSs besides QNX and BeOS. These two are probably best for stuff like handhelds or web pads, but once you get into OSs for automated machinery, these two don't cut it.

    workstation: w2k,
    >>>>>>>>>
    Huh, funny. NT4 kicks Win2K's ass all over the place on the workstation. If it fits right (i.e., it's got the software you want), then BeOS also makes a great workstation OS. Linux also makes a pretty decent workstation OS that's getting better every day.

    small office server: w2k, novell, osX
    >>>>>>>>>>
    osx-server is totally unproven. Win2K is easy to admin, but Linux, FreeBSD or Novell are probably best here.

    web server: *BSD, w2k
    >>>>>>>>>>>>>>>
    Agreed, but Linux can do this too.

    realtime: Inferno, QNX
    >>>>>>>>>>>
    Who actually uses Inferno? QNX is good, but there are a lot of other hard realtime OSs that are better depending on the use. In this category, the OS is so use-dependent that you can't classify any one OS as the best.

    games: w2k (+aka "whistler")
    >>>>>>>>
    Bullshit. Win95 all the way. Win98 is slower and more unstable, Millennium is significantly slower, and Win2K's OpenGL support is crappy. If your games run well in OpenGL, then NT4 kicks everything's ass as a game OS. When more games get ported to BeOS (and if the OpenGL is as good as the previews show it to be), then BeOS will also be competitive as a gaming OS.
  • Standardization != lack of choices. If you hadn't noticed, cars have become incredibly standardized these days, yet nobody complains that there aren't enough choices. Or take graphics cards. Everybody has standardized on D3D/OpenGL interfaces, and yet there are still tons of choices. Or soundcards: everyone uses DSound3D w/ EAX, yet I still can't decide whether to get one of the new-fangled cards or stick with a good old SBLive!. I really doubt anybody would care whether a distro uses .debs or .RPMs. Pick the best one and stick with it.

    Standardize the directory layout. Standardize the config file layout. Set a minimum level of software (that will mean kernels will have to become totally compatible at the driver level between 2.X.Y versions, meaning drivers from 2.4.0 should work flawlessly without recompilation on 2.4.17). Specify that any new distro adhering to the Linux 2.4 standard needs to have a 2.4-compatible kernel, a glibc 2.0.3-compatible C library, an ncurses 5.0-compatible ncurses library, etc. Nobody really gives a flying f*ck what version their ncurses library is, so why not standardize it? Specify a standard set of libraries so joe's-own-stupid-library.so.56 can be left out of distros. Standardize on Perl or Python (pick one dammit!) for config scripts so other stuff doesn't have to be installed. Standardize on bash (or tcsh or zsh or whatever) for shell scripts and stick with it.

    None of this prevents any user from using whatever they want; it just forces developers (developers don't have nearly as many rights as they think they do) to standardize on one set of programs, so users who don't use other things don't need to have them installed. Once you set up a baseline, the distro makers are free to do whatever they bloody well want. Slackware can continue to balance performance/modernness against stability, Debian can continue to use the oldest, stablest software possible (as long as it meets the baseline), and SuSE can continue to stuff 2.X-pre1-ac1 kernels into its distros. Most of the concessions the distro makers would have to make really wouldn't affect anybody that much. Unless of course you cannot stand the thought of KDE in /usr, in which case you could always use a non-standard distro.
  • 1) Speed it up. The kernel already does a great job with speed, but don't trade speed for gee-whiz features that 1% of the population uses. That's what third-party patches are for. Keep the standard kernel as general/lightweight as possible.

    2) Standardize the driver interface. Yeah, it takes actual thinking to come up with a good interface and stick with it, but it would only have to hold between major version changes (a nice trade-off compared to fixing a single driver interface forever), and you really shouldn't be changing the driver interface between 2.4.1 and 2.4.18 anyway.

    3) Put hooks in to only allow KDE to run. Although methinks that's not going to happen anytime soon ;)

    4) Standardize configuration. modules.conf, modules.rc and all of the config programs (iptables, ipchains, etc.) are too much. Get rid of modprobe, kerneld, etc., and have the kernel automatically load modules itself, maybe with some help from config files. BeOS can automatically load the correct drivers for its hardware and there is no reason Linux shouldn't be able to do the same. Finally, standardize the configuration of kernel modules. All kernel modules, from iptables to joystick drivers, should be configured the same way, either through a strict hierarchy of config files or through a standard program.

    5) Replace OSS with ALSA and tie it to the standard configuration model discussed above. I know they're working on that; any idea when it will be finished?

    6) Make it more thread- rather than process-oriented. Make it totally reentrant, and make it highly multi-threaded. As anyone who's used BeOS will tell you, "pervasive multi-threading" is far more than just a buzzword. QNX, AtheOS, and BeOS are all highly multi-threaded and are three of the most responsive desktop OSs available. That's no coincidence.
  • The sysadmins need to get their heads out of their asses and learn something new? And if they aren't, they should probably be using Windows. How many times have you heard the argument, "Linux is different, users should learn how to use it!" The same thing should apply for sysadmins.
  • The registry isn't the thing that's bloated, it's the apps that use the registry irresponsibly. The apps that leave dead wood lying around and write to the wrong keys. The problem persists whether or not that feature is in Linux.
  • 2) If the standard is designed correctly, only the occasional device would need to break it. I mean, there isn't much difference at the driver level (an ethernet driver is an ethernet driver), and there is no reason to change it between minor revisions. It doesn't happen much anyway, but I think making it an official policy would keep drivers from breaking. Take, for example, the fact that the NVIDIA kernel interface broke three or four times. That is really too much.

    3) I was kidding. I know, subtlety is hard.

    4) Automatic loading != no choice. It simply means that you don't have to fuss with it unless you want to. In BeOS, if you want to load a different driver, you just mv the old driver out of the driver directory hierarchy and substitute the new one. If you need to change the default parameters, you can edit the driver config file (each driver that has configurable parameters has a paired config file that uses a fairly standard parsing method across different drivers). The point is that you don't have to take control by hand unless you want to.

    5) Then fix ALSA. There should be only ONE low-level Linux sound API.

    6) Agreed. It's a wonder that Linux doesn't take better advantage of its OSS nature by specializing kernels. It would even be possible to keep the APIs the same, so different, compatible kernels could be tuned for different tasks. It would take more work, but isn't the huge number of programmers one of the advantages of OSS?
  • 1) Speed is already one of the most emphasized design goals of the Linux kernel. If you have never read the Linux kernel source, then you're probably not aware of the numerous gcc-specific tricks and tangled code used for speed optimizations. Trust me. The kernel is as fast as it's going to get.
    >>>>>>>>>>>>>>>>>
    I know the kernel does a good job with speed. However, it is NEVER fast enough. There is always some more tuning that can be done, somewhere.

    2) It is standardized between every incremental revision, as much as any driver model can be standardized. The major changes only come between minor and major revisions, such
    as 2.2 -> 2.3/2.4.
    >>>>>>>>>>>>
    It is not "officially" standardized. Stuff can, and does, break. The NVIDIA kernel drivers broke three times over the period of a few kernels. That's not good. If there were an official decree that all 2.4.X kernels will have a standard driver interface, then you wouldn't have to worry about breaking drivers until the next rev rolled around in two years.

    3) Whatever.
    >>>>>>
    Good god, subtlety is lost on you people. Didn't the smiley face give away the fact that I was kidding?

    4) Nonsense. The kernel uses the user-level helper tasks for efficiency's sake. If you haven't studied the kernel module loading system, then you shouldn't really be talking about it. Standardization of config files is also impossible in many cases due to the differences between said devices.
    >>>>>>>>>>>>>>>>>>>>>
    I really don't see how config files can't be standardized. The XML stuff floating around on this discussion sounds like a great idea. Also, using user-level processes to load drivers is a stupid idea. BeOS has by far the best driver loading scheme of any OS out there (OSs that don't have Quake ports are excluded). The system takes care of system stuff (loading drivers) and you only have to get involved when something goes wrong (rare). Installing new drivers amounts to copying an add-on (a dynamically loaded .so) into the proper directory. It doesn't get any damn easier.

    5) Might be nice.

    6) Okay, now this just shows total ignorance of how Linux and software/system interaction in general works. The kernel's treatment of threads is that they are equal to processes that share the same memory mapping structures. The Linux scheduler handles thread-switching quite admirably, and you will be hard-pressed to find any desktop OS scheduler that performs as well as Linux's. The major misconception here is in assuming that "pervasive multi-threading" is all the kernel's responsibility. That is all the responsibility of the user-level library and application code. The kernel schedules and relegates threads. It's the user-level code that's responsible for taking advantage of the threading facilities available.
    >>>>>>>>>>>>>>>>>
    It's not the kernel's responsibility, but unless the kernel itself is super thread-friendly (multi-threading the kernel itself helps in this respect), it is harder to make multi-threaded apps. Also, the kernel can "force" multi-threading on apps where it is useful.

    You seem to be bound up in the desktop world of "GUI == part of OS." In Linux, the GUI is a user-level application, as well it should be in the case of what they have available.
    >>>>>>>>>
    How do you derive that? The OS that I use 90% of the time has NETWORKING in userspace.

    God forbid anyone attempt to weld the Unseelie monstrosity that is X11 into the kernel. That would introduce bloat and instability into the entire system.
    >>>>>>>>>>
    Yeah, that's why it is a bad idea!

    This is why there
    aren't kernel-level SAMBA implementations, BTW. The responsiveness of the desktop interface is not the Linux kernel's fault. The fact is that QNX and BeOS both have very well written GUI layers.
    >>>>>>>>>>>>
    As I said, the kernel can force correct behavior. Also, I'm not only talking GUI here: BeOS has audio latencies that approach those of RTLinux, and the context-switch/event response times for QNX blow everything else out of the water. Trust me, there is room for the kernel to improve.

    However, to claim that they are in general more responsive is a little bit of marketing/evangelism propaganda. The responsiveness of the Linux kernel is very, very nice on a process/thread level.
    >>>>>>>>
    It is very, very nice. Just not as nice as some of the other OSs out there. Given the volume of talent devoted to Linux, there is no reason Linux's response shouldn't exceed that of those other OSs.
  • by mfterman ( 2719 ) on Sunday November 19, 2000 @06:03AM (#614783)
    More functionality moved into user space as separate modules, and less functionality in kernel space. A reduced need for recompiling the kernel. Yes, there will be some performance hits here, but now that we're starting to move into the GHz range we might be able to shed a few percentage points of performance in return for more modularity. The ideal world involves going to something like HURD, but I don't think that's going to happen. Still, a direction towards more modularity is good.

    Honestly, most of the suggestions that have been going on here have been in the area of layers on top of the kernel. Not that they don't need to be done, but they're the sort of things that Linus is not going to be messing with. I think that replacing X with something with a more well thought out API, or taking the standard GNU tools and replacing them with tools that use a set of XML configuration files are nifty things, but these are not strictly things that concern the kernel.
  • by Ektanoor ( 9949 ) on Sunday November 19, 2000 @07:43AM (#614784) Journal
    I think there will be a crisis and I hope for it.
    The present kernel architecture is mostly a result of 2.0's times, and I believe it is starting to show its age. Right now we have a good working base in 2.4. But continuing to polish this will be nonsense, no matter the problems that still exist. Some people may claim that now we don't need to recompile kernels as before. WRONG! You may not feel the urgent need because your machine is already running fast. However, there is still too much bloat in the traditional kernels that come from distros. Today the complexity of supported hardware makes a kernel twice as big as you really need. And I am talking only about vmlinuz. You run flat out at 180 km/h, but the speedometer shows that you could reach 250 km/h...

    In the meantime, the module architecture is getting old. Presently I feel that loading and unloading modules on the fly has become more frequent, and this system has its troubles and is quite inflexible in some respects, especially if modules sit in a long dependency chain like ALSA.

    I don't really know how close we should go toward a more HURD-like model. But we should start thinking that we will need something like it soon, especially when the PC starts to dissolve into smaller, bigger, and medium devices of different natures and purposes. If suddenly the market starts calling for the interaction of all this trash, then Linux will be a loser if it keeps the hybrid, awkward nature of its present architecture.

    I believe we also need to restructure the divisions between the various drivers/devices. Somewhat, that is being done (the IDE section split off from the block devices one), but I think it is not enough. The IP kernel section needs some clarification, as many options there are not needed in a desktop world.

    I think it is a Bad Idea (TM) to think about ACLs and similar things in Linux. Yes, the traditional model is getting old. But my experience on NT has shown that ACLs are a mess as implemented there. They picked up a few basic rules, mixed them into God Knows What, and shipped that as a final product for all cases. The result is always confusion. Frankly, I haven't seen anyone doing serious work with NT's ACLs, as the kernel of this structure has holes everywhere (starting from allowing full rights to some \WINNT stuff). Personally I prefer NDS to this. The rules are simple, but you have some freedom to combine them. But I don't think that even NDS would be good for Linux.

    Sincerely, on things like ACLs, NDS and the like, Linux should be neutral. Yes, there should be a protocol/specification on how to plug them into the kernel. But the kernel itself should stay fully away from these things. It should carry only a minimum of security features. ACLs are costly in performance, they are bloat in some cases, and there are doubts about their real effectiveness. Such things and other security issues should proceed in parallel with kernel development, to avoid compromising Linux with serious security issues that may arise from choosing a wrong path.

    Well anyway this is not all. I believe it is time for Microsoft to think on replacing kernel32.dll by ms-linuz64.o.
  • by the eric conspiracy ( 20178 ) on Sunday November 19, 2000 @04:37AM (#614785)
    ACL sounds nice until you try to do a security audit on an ACL based system.

  • by Salamander ( 33735 ) <jeff.pl@atyp@us> on Sunday November 19, 2000 @07:09AM (#614786) Homepage Journal

    Unfortunately, the previous poster picked an example that was too simple to highlight the weaknesses of the rwxrwxrwx permission system. Nonetheless, you still managed to get it wrong on two counts. In the first place, as other posters have since pointed out, the problem specification did not say that the file should be writable by bob or readable by carol, both of which your "solution" allows. Here's where you have a fundamental problem: you have three different kinds of permission you need to specify (read-only for bob, write-only for carol, and none for anyone else) and only two places (group and world) to specify them. It just doesn't work. The model simply isn't up to the task.

    Your second error is in assuming that alex is able to create/modify/delete groups on the fly to suit his needs which may differ for every single file he owns. This would not be the case on most systems - almost never on a well-run one. Even on a system that allowed any user to edit /etc/group, the rapid proliferation of groups that would result from this attempt to use a weak model to simulate a stronger one would quickly run into problems with scalability and manageability.

    In short, you just can't make rwxrwxrwx do ACLs' job. The current permissions system is an artifact of a very time- and environment-specific tradeoff between functionality and resources (space, processor time).
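    As a rough illustration of what ACLs buy in the bob/carol case above, here is a sketch using the POSIX 1003.1e draft ACL interface (libacl) that the Linux ACL patches implement; the file name and exact text form are assumptions, a filesystem with ACL support is required, and you would link with -lacl:

        #include <stdio.h>
        #include <sys/types.h>
        #include <sys/acl.h>

        /* Sketch: give bob read-only and carol write-only access to one
         * file, with no access for group or others. The owner keeps rw;
         * the mask entry caps what the named-user entries can grant. */
        int main(void)
        {
            acl_t acl = acl_from_text(
                "u::rw-,u:bob:r--,u:carol:-w-,g::---,m::rw-,o::---");

            if (acl == NULL) {
                perror("acl_from_text");
                return 1;
            }
            if (acl_set_file("report.txt", ACL_TYPE_ACCESS, acl) != 0)
                perror("acl_set_file");
            acl_free(acl);
            return 0;
        }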

  • by ttfkam ( 37064 ) on Sunday November 19, 2000 @08:23AM (#614787) Homepage Journal
    "Buzzword compliance" has certain advantages in this case: ubiquity.

    Right now there are multiple XML parsers in multiple programming languages. Their common standing? They all try to be compliant with W3C specs. For example on Java, you can use the ASF Xerces parser for a while and, assuming you used the W3C java interfaces (publicly available) in your program as you should, you could swap it out for Oracle's XML parser. As long as you maintain standards, you will have support.

    XML advantages: hierarchical, normalized, Unicode compliant, simple APIs (DOM and SAX), human readable as much as HTML (ie. basically self-documenting if done correctly). FYI: DOM creates a data structure/tree representation in memory and SAX is a low memory, event-based API.

    Now let's look at the alternatives:

    Sendmail's config file: Many people understand it. Everything but the kitchen sink (or maybe the kitchen sink is in fact in there and undocumented). Easy to understand? No. Human readable? About as much as a programming language. Universally loved? Only by the sendmail priesthood. Unicode compliant? Umm.. what?

    Apache's config file: Many people understand it. Basically a flat model of key/value pairs with a second level of depth patched on because of the module and multi-server support. Easy to understand? Yep (especially with comments). Human readable? Yep (especially with comments). Universally loved? Well... much more than sendmail's config file. Unicode compliant? Oops.

    Generic key/value pairs: Fast and easy to parse. Flat model, no hierarchy. Okay, granted, you could tweak the model to allow hierarchy. Universal key/value delimited file? Not even close. Do you use '=' as a delimiter? What about ':'? Are comments preceded by a '#' character or a ';'? Ready to use? Nope; everyone must write their own from scratch much of the time.

    People who knock XML have obviously never used it to solve any problems. Go to the W3C site (http://www.w3.org/) and check out all of the projects related to XML that are coming to fruition. All of the projects that follow rules and have publicly created and discussed implementations.

    Development tools. Libraries. Easy road to entry. XML has these in abundance. It is the simplest, most complete way to solve the babel on Linux known as /etc.

    The only thing preventing a Linux-universal config file format in XML is the unified schema/DTD. That's a non-trivial task, but it's a whole hell of a lot closer than any other technology. And, in fact, universal consensus on the file format is not strictly necessary for a first step. Just moving to XML alleviates the need for multiple types of parsers in the elusive universal configuration tool. Specifying a separate schema/DTD for each config file still affords a much easier job for the config tool authors.

    I do agree with you about inertia. It's hard to change entrenched methods of doing things. However, if people were to submit patches that allowed the possibility of XML config files, rather than unilaterally replacing the existing ones, then people who want XML could have it. Does this solve the babel problem? No, but once again, it's a viable first step.

    ./configure --configformat=XML
    make all

    You use a public parser such as the James Clark "expat" parser. You look up the internal data structure for a particular program. You connect the dots. How hard is that?
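    To make that concrete, a rough sketch with expat's C API (the file name is hypothetical, and a real tool would fill its own data structures instead of printing; link with -lexpat):

        #include <stdio.h>
        #include <expat.h>

        /* Sketch: walk an XML config file with expat and print element
         * names and attributes. "Connecting the dots" means mapping
         * these callbacks onto the program's internal settings. */
        static void start(void *data, const XML_Char *el, const XML_Char **attr)
        {
            int i;
            (void)data;
            printf("element: %s\n", el);
            for (i = 0; attr[i]; i += 2)
                printf("  %s = %s\n", attr[i], attr[i + 1]);
        }

        static void end(void *data, const XML_Char *el)
        {
            (void)data;
            (void)el;
        }

        int main(void)
        {
            char buf[4096];
            size_t len;
            FILE *fp = fopen("myapp.conf.xml", "r");
            XML_Parser p = XML_ParserCreate(NULL);

            if (!fp || !p)
                return 1;
            XML_SetElementHandler(p, start, end);
            do {
                len = fread(buf, 1, sizeof buf, fp);
                if (!XML_Parse(p, buf, (int)len, len < sizeof buf)) {
                    fprintf(stderr, "XML parse error\n");
                    break;
                }
            } while (len == sizeof buf);
            XML_ParserFree(p);
            fclose(fp);
            return 0;
        }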

    Linux/UNIX have always suffered from the babel. It's time for the /etc babelfish.

  • by 1010011010 ( 53039 ) on Sunday November 19, 2000 @08:14AM (#614788) Homepage
    Linux needs a good forking. Seriously. Competition is good.

    Woohoo! Advanced Linux Kernel Project! I'd like an integrated, supported kernel debugger, modularity, and the various "big system" vendor patches to be integrated. And a better VFS (not the current "virtual ext2 interface"). I'd also like to see more capability on the small end -- i.e., the ability to leave stuff out without triggering stupid dependency bugs. This would come along, in large part, with better modularity. Oh, and much better planning, communication and forethought. And development done with CVS, not patches from the current broken system just checked into CVS. It would be cool for the big players -- IBM, SGI, HP, perhaps others -- to get together and form and operate an Advanced Linux Kernel Project.

    The cabal -- [...] the Powers That Be sometimes suffer from severe Not Invented Here syndrome, and sometimes they use their bully pulpit to shout down perfectly good ideas that conflict with their own biases

    Also, "posixitis." Refer back to the discussion about supporting named streams ("NTFS streams") to see a severe case of "it's not POSIX, so it sucks." Even Linus was arguing that we need it for interoperability, and so what if POSIX doesn't say anything about it. Alan Cox was actually making the bizarre claim that the HFS way of representing streams in a POSIX-acceptable way was good. So, gee, we have a free OS designed by the government, but only partially implemented. Yay, POSIX.

    Linus and the others deserve our gratitude, and our respect, but not worship or unquestioning obedience.

    The Emperor Has No Clothes.

    Thanks for the good post!

    ________________________________________
  • by NetJunkie ( 56134 ) <jason.nash@nosPam.gmail.com> on Sunday November 19, 2000 @08:08AM (#614789)
  • The "problem" with going to a 169.x.x.x address is actually a new way to automatically set up a home network without needing DHCP. It's an RFC and is standard. So if you have 3 PCs at home, you can boot them all up and they'll talk over IP without using a DHCP server. They figure out among themselves which addresses are taken.
  • by Nailer ( 69468 ) on Sunday November 19, 2000 @11:11AM (#614790)
    And exactly because Slashdot is comprised primarily of Linux users, not Linux developers. I'd hope the users input has some influence on the direction of this OS.

    Users aren't armchair critics. They're the sole reason Linux is popular today. Thank God other developers don't share this attitude, or nobody would be using Linux at all. Users are the people testing your OS for you, and providing the feedback necessary to make it better. And yes, they even contribute, without writing a single speck of code, through running user groups, creating bug reports, advocacy, paying developers' salaries, giving up time and money to organize Linux events, and more.

    Slashdot Linux users are generally more at the power user level, but then again [at the current point in time] most Linux users are. I would see nothing wrong with asking a large body of Windows power users [eg, NT admins] where they think their OS should go.

    But I wouldn't expect a coherent answer, and I don't today. The benefit is the discussion and the issues it brings to the table. This little discussion might be the start of some wonderful projects, as developers may get inspired by the issues raised today and start a project. And when that happens, more people will use Linux, because it will be better. There will also be more people willing to put bread [and caviar] on your table, more software for you to use to develop, etc. Software evolution is a dance between developers and users, and that is what has spiralled Linux into its current greatness.

  • by dbarclay10 ( 70443 ) on Sunday November 19, 2000 @05:51AM (#614791)
    Somebody should hit you with a clue stick.

    Oddly enough, optimizing the Kernel for massive systems with a plethora of processors and RAM could hurt Linux if the big Unix companies see it as a threat.

    Did you know that there are kernel patches (available to the kernel folks, but I've never seen them around myself) that have exactly those optimizations? And guess who wrote those patches? Yeah, the Big Iron vendors. Jeeze. Even I'm not that cynical.

    Dave

    'Round the firewall,
    Out the modem,
    Through the router,
    Down the wire,
  • by Baki ( 72515 ) on Sunday November 19, 2000 @07:57AM (#614792)
    Indeed, I've always hated ACLs because of the chaos they create. They look nice in the beginning: fine-grained control, creating different permissions for every different file/directory.

    Then after a while you'll see that you have all sorts of permissions without logic, without a scheme behind them.

    The simple but very UNIX-like user/group/others scheme, in contrast, promotes a little thinking and planning in advance, and as a result you can do what you want by putting people in different groups and using those.

    Only the rare case where two groups need access to the same files (and you cannot do the same by putting the users in more than one group) doesn't fit the scheme.
  • by snol ( 175626 ) on Sunday November 19, 2000 @04:54AM (#614793)
    The reasons why this luser lisn't lusing Linux mostly have to do with the fact that I found it so durned hard to get the thing working --

    - At any given point my graphics card is unsupported or else requires a more advanced version of XFree86 or else there's something in XF86Config (capitalization? oh well) that I'm missing. In other words, no matter what I do I wind up at 320x200 4-bit, when it doesn't just crash. I know, I know, I oughta have bought a compatible (read: older than my leet Geforce2) graphics card...

    - I have no idea where to look for config files. Don't tell me; if I really wanted to know I'd buy a book. Also, it seems to me that Linuxes are typically less willing to try and figure things out their own durn selves than Windowses. In MS desktop OS's once you install your nic driver it goes and FINDS the darn DNS and gateway and all that shit, which yes I should know but why the hell should I have to type it in? If Windows can find all that stuff Linux should be able to.

    - I'm afraid of the word "compile". I know, I know, you can't get away from it when you have software that has to be compatible between different distros. Probably it's all very easy; now go and convince your mother of that and I'll eat my words. Seriously, maybe APT is as good as everyone says it is; I'm just talking from the experience of four aborted installs.

    - I want things that I need to know about to jump out at me - I don't want to dig through unfamiliar directories via the command prompt. I want a folder called "Control Panels." Maybe I'm just choosing the wrong distros or something.

    Maybe I'm offtopic cause none of these are real kernel hardcore shit but leet-haX0r or no, if you want more otherwise-capable computer users to go over to Linux, these are the things that have to be taken care of. I'm not unsympathetic to issues about hardware mfgrs making drivers difficult to write but that doesn't change the fact that if my card don't work, my card don't work.

    It ain't flamebait, it's food for thought, and if you MUST mod it down use "offtopic." Thanks.
  • by commandant ( 208059 ) on Sunday November 19, 2000 @06:03AM (#614794)

    First of all, it would seem that you have some desire to run Linux, since you've tried it and are now complaining about your bad experience. Obviously, though, you don't desire it enough to invest some time in learning the issues you point out. Just as well; people have no business running software they can't (or won't) figure out. The only legitimate software gripes, I believe, are gripes against lacking features, not gripes against a user's inability to figure things out. After all, many people here have figured out Linux, so it's not that Linux is impossible to use, it's that you aren't good enough with it.

    If NVidia would release chip specs to open source programmers, you would have Geforce2 support almost immediately. This isn't a problem with Linux, or XFree86, but with NVidia. They are scared that, by releasing specs, competitors will copy their chips. And yet, I've not heard of this happening to anybody else that releases chip specs.

    Most configuration files are located in /etc, /usr/etc, or /usr/local/etc. For the most part, you can specify this location at compile time. You might think the Windows registry is superior to this, but I disagree. The registry is an oversized, all-inclusive jumbled mess of things that often are not obvious. In /etc, however, the relevant config files typically are named after their programs, as in /etc/inetd.conf or /usr/local/etc/sshd_config. If you like, you can do what I did, and just softlink /usr/local/etc to /etc, so that just about every config file is in /etc (I don't have a /usr/etc directory). The only exceptions on my system are Samba (installed with a prefix of /usr/local/samba) and Enlightenment (/usr/local/enlightenment). But, as you can see, that was my choice.

    I'm no programmer. My most complex C programs are under 500 lines of code, implement no GUI, and generally don't do things that aren't related to iterative numerical analysis. In other words, I'm not going to produce the next great windowing system in this lifetime. However, I have compiled most of the high-level programs on my system. If you'd just read the damn documentation, it wouldn't be frightening. Don't come whining to other people without doing your research first.

    About DNS and gateways; that's called DHCP. Linux has DHCP support, so you don't need to enter that information manually. Check out dhcpcd or pump. Both are good DHCP clients. If you want a static IP, you need to enter that information, both in Windows and Linux.

    Why would you want things to jump out at you? That's the reason so many people wind up reinstalling Windows so many times. With system-critical configuration options at any idiot's control, Windows has got to be the most often screwed-up operating system. Linux does not exactly make configuration options hidden, but it does implement a dual-control: access and intelligence. It takes some understanding of your system to make sense of config files and modify your system, and there are access controls to prevent anyone but root from changing them.

    I do not belong in the spam.redirect.de domain.

  • by Bezanti ( 235957 ) on Sunday November 19, 2000 @05:46AM (#614795)
    The problem with most software is not lack of features. It is rather the opposite: you can't get rid of particular features that are absolutely unpleasant under particular, given circumstances.

    I'm sure ACLs may be useful for some people, in some situations.

    We know of the existence of the concept. We probably came across them in some other setting (e.g. NT) and not everybody is impressed.

    ACLs are not generally useful. If they were, everybody and their little sister would be clamouring for them, and they obviously aren't.

    Undoubtedly, there must be kernel patches available for you to have your ACLs, without forcing them on everybody.

    If Linus built ACLs right into the kernel, I would be forced to put in serious effort and rip them out again. If he does too much of this kind of thing, Linux will start forking.
  • by god, did I say that ( 253932 ) on Sunday November 19, 2000 @05:25PM (#614796)
    I'm only going to say this once - XML is *a* solution to the trivial problem of syntax;

    XML is *the* general solution to the nontrivial problem of *extensible* syntax. There are no other contenders waiting in the wings.

    it does not, however, assault the intractable problems of semantics.

    No shit. What configuration format does?

    Your fantasy of trees magically passed between programs with no knowledge of each other falls down when you realize that each program assigns different meanings to each file/tree/whatever.

    Fine. They should continue to do so. XML parsers don't interpret, they parse. The onus of interpretation lies upon the individual app writer. That's not going to change; we already knew computers weren't thinking machines.

    After 6 years of adminning systems, I feel more secure with each daemon checking their config files than passing it onto an independent parser daemon.

    Really? Do you also think compilers are the work of the devil? Do you hand-assemble source code into binary 1s and 0s?

    Each program gets to test once with the parser code inside the program; if you move parsing out then you move that work onto the sysadmin (or home user) - the world is filled with minor API changes that screw you over.

    Nonsense. The app defines the XML, the parser parses that XML, and the app interprets the results. People don't write a compiler to compile their source; why should they write a parser? Given a well-defined schema, I would much sooner trust an XML parser than I would a hand-rolled parser.

    I don't understand what API changes you are referring to. If XML changes, you rewrite your XML file, not your app, and you upgrade your system's parser without dropping the older version. There are several versions of ncurses on my machine, for example. This sounds like a red herring. Already XML is far more standard than the Unix API.

    I don't think you quite understand what XML is.

    I think it's because XML is not relevant for most of our [Linux's] problems.

    I'm sorry, XML is here to stay and for good reason. It will become manifest in everything from docs to configuration files. There's nothing to debate.

    --

  • by slothbait ( 2922 ) on Sunday November 19, 2000 @07:00AM (#614797)
    without a corresponding "standard" to describe the structure the file should have. If you want regularity, we need a well-accepted consensus on the different styles of options that can be used, and syntax for representing them.

    If you had standard meanings for different kinds of config options and a standard format, you could get your wish of a single tool (be it command-line, slang, or X) being used to parse and modify all compliant config files. I've considered implementing something like this, myself. The problem, of course, is less in implementing it than in getting all of the disparate projects to accept it.

    The only groups that might have the power to start this change are the distros. Red Hat seems reluctant to define standards, as they don't want to be seen as "pushy". Mandrake doesn't have the clout. To me, that leaves Debian and possibly Suse. If Debian came out with a standard, started using it for their tools, and developed a nice, simple interface, I think they could start persuading others to follow suit. It's a good idea, but it will take a great deal of persuasion.

    But back to my earlier point: simply moving to XML buys you nothing but complexity. What you really want is a common ordering and syntax for the different files such that they can be edited by a common tool. That's all well and good, but it can be achieved *without* XML just as easily as it can with. I see no inherent advantage in using XML, except the gain in "buzzword compliance".

    --Lenny
  • by JohnZed ( 20191 ) on Sunday November 19, 2000 @08:09AM (#614798)
    My ultra-wishlist would include:
    • Performance monitoring! This has already been mentioned in this discussion, but it's such a great feature that I had to mention it. Without patching the kernel, it's almost impossible to create a sysadmin tool like PerfMon or a developer's tool like SpeedShop or VTune. And SGI has suggested that they'd be willing to port SpeedShop if the infrastructure were in place. There are a lot of patches floating around that give the basic capabilities (essentially, recording and configuring hardware performance counters), but most are lacking overflow monitoring, which is huge.
    • Modern high-performance I/O: On kernel-traffic you can see a recent discussion about select and poll on Linux. They just don't scale well AT ALL. Similarly, zero-copy TCP is very well established as a massive performance gain for network servers, and there's no excuse not to use it for calls like sendfile(), which could so clearly benefit (a rough sendfile() sketch follows this list).
    • New scheduler: The CITY-Umich scheduler project is pretty interesting, as was the work done by IBM's Java group. Regardless of what solution is used, though, we have to stop making excuses and admit that the current scheduler is highly flawed for systems running hundreds or thousands of threads.
      --JRZ
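    A minimal sketch of the sendfile() point above, assuming Linux's sendfile(2) as shipped in 2.2+ kernels; the descriptors are assumed to already exist (a connected socket and a regular file), and retry handling is trimmed:

        #include <sys/types.h>
        #include <sys/stat.h>
        #include <sys/sendfile.h>

        /* Sketch: push an already-open file out an already-connected
         * socket without copying it through a user-space buffer. */
        int send_whole_file(int sock_fd, int file_fd)
        {
            struct stat st;
            off_t offset = 0;

            if (fstat(file_fd, &st) != 0)
                return -1;
            while (offset < st.st_size) {
                ssize_t n = sendfile(sock_fd, file_fd, &offset,
                                     st.st_size - offset);
                if (n <= 0)
                    return -1;   /* real code would retry on EINTR/EAGAIN */
            }
            return 0;
        }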
  • by Nailer ( 69468 ) on Sunday November 19, 2000 @10:38AM (#614799)
    The simple but very UNIX-like user/group/others scheme, in contrast, promotes a little thinking and planning in advance, and as a result you can do what you want by putting people in different groups and using those.

    I don't think rwx permissions are very Unixy at all - in that they're unfortunately popular on Unix, but don't fit in with Unix philosophy. Unix philosophy gives users only the permission they need to do their work on the system. Unix-style systems are by default locked down much more fully than other OSes for regular users.

    For root, however, it's a different story. Why is someone logging in as the system account, with permission to perform any action on the system, for the purpose of adding users?


    Run top [or, if you're a GUI person, KDE System Guard or a similar utility]. Check who's running all those daemons. The answer is root. Why [especially if rwxs is enough]? SSH and CUPS and automount don't need permission to create new entries in /dev, or to modify /etc/passwd. But if you can exploit them, you have permission to do just that. A cracker's heaven.

    If rwxs is enough, why do nasty tools like sudo exist, and why do so many apps require setuid root permission? rwxs is not Unix-like. It's a kludge that's popular on Unix systems. And it's definitely not Unix philosophy.
  • As long as there is Unix, there will be Linux and BSD.

    Why? Simple. Sun, IBM and HP all see Linux as a way to gain mind share. Let's look at two hypotheticals:

    A business builds a small web server and database server using NT or 2000. Their business grows and soon they realize they need more power. Microsoft promises to scale infinitely. A salesman from Sun shows up and offers a Solaris solution. Better stability, scales better, and Oracle is kicking MS-SQL's butt in exactly the kind of business they're running. But running Solaris would require retraining the staff, replacing all their software and migrating their data to a new system. Sun loses the sale.

    A business builds a small web server and database server using Linux. The business outgrows the memory model of Linux because the same memory optimizations that allow you to have a 4,000 hit per day web server running on a 386 cause some problems when you reach eight processors and 24 gigs of RAM. Solaris walks in and offers a solution that scales better, runs ports of all the software they're using and because they know how to run Linux, Solaris is a small skip and a jump away in IT skills.

    The more people who run Linux or BSD on their home systems, the more sales the big Unix vendors will be able to make, because there will be more people who know how to operate the systems. This is exactly why NT has gained so much ground. People know how to run Windows, so they figure a web server running NT can't be that big a leap.

    Oddly enough, optimizing the Kernel for massive systems with a plethora of processors and RAM could hurt Linux if the big Unix companies see it as a threat.

    www.matthewmiller.net [matthewmiller.net]

  • In UNIX, there are three entities that can be given rights to an object: the owner, a group and everybody. Thus, no more than one group can have control over an object. In other words, Finance and Marketing cannot both be given write access to a directory; you'd need a third group which had, as members, the first two groups. Access Control Lists start out by stating the default permissions (usually 'nobody can do anything') and then applying exceptions (Bill can read, Lucy can read/write, Doug can read/write/execute). ACLs also allow for more options; with UNIX, you're stuck with read/write/execute. With an ACL, you can define almost any operation and allow/deny it. Read, write, execute, list contents, create subdirectory, grant rights to others, revoke rights from others, copy files in, copy files out, etc., etc. ACLs are a requirement of any level of Trusted Computing above C2, I think. They're very powerful, but also much more difficult to maintain than UNIX-style permissions.
  • by Cmdr. Marille ( 189584 ) on Sunday November 19, 2000 @03:54AM (#614802)
    Well, I guess it would be easier to just talk about Linux, the kernel.
    As already mentioned, integrating some kind of journalling filesystem will maybe be the most important task.
    Be it XFS, JFS, ext3, ReiserFS, or Tux2, I think this will and must be one of the next additions to the kernel tree (probably ReiserFS in 2.4.1). There is no big and bright future for Linux if we don't get a filesystem with better data integrity. I think that in fact the diversity of journalling filesystems is a great thing because:

    It will create competition

    There will be FSs optimized for certain tasks (something that is already happening now; look at the ReiserFS Squid optimization).
    The existence of XFS and JFS for Linux already shows that both IBM and SGI are really willing to put their Unix experience into the future of Linux (at least that's what I hope).
    I think the involvement of companies like IBM and SGI will maybe be the biggest chance for Linux over the next few years.
    There is the Unix experience of two of the biggest players on the market going into the most active developer community in the world. We can really hope for a bright future.

  • by clinko ( 232501 ) on Sunday November 19, 2000 @03:34AM (#614803) Journal
    Dun Dun DUN!!!!

    2.4.1
    then!
    2.4.2 maybe 2.4.3!

    It's a joke, don't kill me :)
  • Linus' comment on this:

    I've actually read the BSD kevent stuff, and I think it's classic over-design. It's not easy to see what it's all about, and the whole <kq, ident, filter> tuple crap is just silly. Looks much too complicated.

    All due respect to Mr Torvalds, but I'm not sure he understands the intent behind the kqueue interface. It's not just intended to be a poll() on steroids, though it serves that function. A kevent does not need to be related to a file descriptor -- for example, signals and the status of other processes can also be monitored with the appropriate filter. Other filters will be added as time goes on. The tuples Linus objects to are actually the simplest way to provide this level of generality through a single interface.
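    For readers who haven't seen it, a hedged sketch of the FreeBSD interface being described: one kevent() call can wait on a descriptor, a signal, and another process at once (the descriptor and pid below are assumed to exist, and error handling is trimmed):

        #include <stdio.h>
        #include <signal.h>
        #include <sys/types.h>
        #include <sys/event.h>
        #include <sys/time.h>

        /* Sketch (FreeBSD): register three unrelated event sources on one
         * kqueue -- a readable socket, delivery of SIGHUP, and the exit of
         * a child process -- then block until any of them fires. */
        int wait_for_something(int sock_fd, pid_t child_pid)
        {
            struct kevent change[3], event;
            int kq = kqueue();

            if (kq < 0)
                return -1;
            EV_SET(&change[0], sock_fd,   EVFILT_READ,   EV_ADD, 0, 0, NULL);
            EV_SET(&change[1], SIGHUP,    EVFILT_SIGNAL, EV_ADD, 0, 0, NULL);
            EV_SET(&change[2], child_pid, EVFILT_PROC,   EV_ADD, NOTE_EXIT, 0, NULL);

            if (kevent(kq, change, 3, &event, 1, NULL) < 1)
                return -1;
            printf("filter %d fired for ident %lu\n",
                   (int)event.filter, (unsigned long)event.ident);
            return 0;
        }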

    -Ed
  • by Salamander ( 33735 ) <jeff.pl@atyp@us> on Sunday November 19, 2000 @07:46AM (#614805) Homepage Journal

    Here are some of the ways I'd like to see Linux evolve in the near future:

    • Better modularity. There's still way too much interdependency between what should be separate subsystems. Sometimes this is in the form of explicit references to each others' data or functions, but just as often it's a more subtle but still undocumented reliance on "X never does this so I won't even check for it, and God help the guy who comes after me if X ever changes".
    • A better VFS layer. Just having one isn't enough because, to be blunt, some folks did a pretty piss-poor job of specifying, implementing, or testing it. This fact is the primary reason 2.4 isn't out yet.
    • A mature persistent-module-data interface. We don't need anything as overburdened as the Windows registry or AIX ODM, but we do need something and recent steps in this direction are a very good sign.
    • Journaled, soft-update, and atomic-update filesystems. Not one, but several. Let them compete. This is an area where Linux can be the foundation for significant improvements in the state of the art.
    • Better testing. We need a serious test suite for each of the major kernel components, and one for the whole system as well. We should be beyond the point where the current near-total lack of unit or regression tests is acceptable.
    • Better performance management and monitoring tools. How many of you have used PerfMon on NT? The way it's implemented is kind of crufty, but the flexibility and functionality it provides makes Linux look really bad by comparison. It should be a standard part of Linux kernel or major-subsystem (e.g. database, webserver) development to define and export counters for a general tool like PerfMon to use. A lot of the bickering on linux-kernel about where the bottlenecks are would be neatly settled by a five-minute session with such a tool.
    • ACLs? I'm still not 100% convinced we need them, but they are more powerful than the current system and there seems to be a demand. In any case they're likely to become a checklist item for a lot of folks soon.

    Lastly, there's one more thing I think Linux needs, but explaining why takes more than should go into a single list item. Linux needs a good forking. Seriously. Competition is good. The cabal - yes, there is one in effect, even if its exact membership is debatable - generally has good ideas and provides incredible value in bringing order to chaos. On the other hand, the Powers That Be sometimes suffer from severe Not Invented Here syndrome, and sometimes they use their bully pulpit to shout down perfectly good ideas that conflict with their own biases (or even projects that would compete with what they want to work on). Several have recently seemed to start believing that they're omniscient, as though merely being a genius wasn't good enough. Linus and the others deserve our gratitude, and our respect, but not worship or unquestioning obedience. They need a wakeup call, in the form of someone defying their wishes and achieving superior results in the process.

    This will happen, sooner or later, one way or another. We have two choices:

    • Embrace the possibility as a generator of innovation and healthy competition, and as a way to keep everyone honest and humble.
    • Let it become a source of chaos and dissension, until someone else eats our lunch for us.

    That's it. There are no other choices. I know this will probably not "resonate" with the younger members of the audience, but I would compare the situation to a divorce. The most amicable divorces, where people still remain friends, occur when the people accept the reality and work together on making necessary change go smoothly. The messiest, most destructive divorces happen when people have stayed together long after their interests diverged and they've spent years learning how to hate each other. I don't want Linux to turn into War of the Roses.

  • by Carnage4Life ( 106069 ) on Sunday November 19, 2000 @05:26AM (#614806) Homepage Journal
    Nailer asks:
    With kernel 2.4 in the final stages of bug hunting, and on track for a December release, I thought it might be pertinent to discuss the future of Linux. What now?

    If you are truly interested in Linux kernel development and the future of the OS, why not just subscribe to the Linux Kernel mailing list [tux.org], browse the archive [tux.org] or read the digests on Kernel Traffic [linuxcare.com]?

    Slashdot is comprised primarily of Linux users, not Linux developers. Questions like this are better sent to mailing lists frequented by the people who make these decisions than to a bunch of armchair critics. This article is similar to asking a bunch of random Windows users where Windows development should go and expecting a coherent answer.

    Second Law of Blissful Ignorance
  • by __donald_ball__ ( 136977 ) on Sunday November 19, 2000 @08:47AM (#614807) Homepage

    But back to my earlier point: simply moving to XML buys you nothing but complexity. What you really want is a common ordering and syntax for the different files such that they can be edited by a common tool. That's all well and good, but it can be achieved *without* XML just as easily as it can with. I see no inherent advantage in using XML, except the gain in "buzzword compliance".

    You've evidently never read the XML specification, or tried to use XML anywhere. Here's the deal. XML takes ASCII to the next level. ASCII lets you encode strings (in latin characters) in files in standard fashion. XML lets you encode trees (in arbitrary character sets) in files in a standard fashion. Trees are the natural data structure for most configuration files.
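    As a small made-up illustration (the element names are invented, not any real tool's schema), even a modest daemon's settings are naturally a tree:

        <!-- hypothetical fragment; no real daemon reads this schema -->
        <daemon name="ftpd">
          <listen address="0.0.0.0" port="21"/>
          <limits>
            <max-connections>50</max-connections>
            <timeout unit="seconds">300</timeout>
          </limits>
        </daemon>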

    Even more useful stuff - you get validation with DTDs and Schemas. That means each program doesn't necessarily have to check its own configuration files for sanity; it can rely on the parser to catch the syntactic errors and much of the grammatical ones. Hell, each program doesn't have to write its own parser any more, making it simpler and reducing the possible number of bugs in the system (if all daemons shared a common configuration file parser).

    Finally, by using XSLT stylesheets, it's very easy to transform XML files between different formats - giving you a relatively simple upgrade procedure when a daemon changes its configuration file format, and giving you a relatively simple way to convert configuration files between daemons - from postfix to sendmail, say. Think I'm blowing smoke? I already have all of my system data centralized in one XML file from which I generate the various daemons' configuration files. I think this is probably going to be the way this unfolds - userland tools will arise that use XML files to generate the collection of oddities that is the /etc filesystem.

    I really don't understand the Linux community's perceived resistance to XML. I think it's because Microsoft was an active participant in the development process, and because many of the early implementations of the XML tools were written in Java. I really wish that attitude would change, because XML is an important standard for taking UNIX to the next level. One of the things that made UNIX great was the proliferation of small command line tools that cooperated by passing ASCII streams around. Imagine how much more powerful that paradigm could be if they passed trees around!

  • by Alomex ( 148003 ) on Sunday November 19, 2000 @04:30PM (#614808) Homepage
    I'm only going to say this once - XML is *a* solution to the trivial problem of syntax;

    Syntax is not a trivial problem between advanced applications. Designing a protocol that is extensible, be it for config files or interprocess communication, is a pain in the neck. We started using SGML for config files and interprocess communication in 1995. You have no idea how much work we saved by using standard parsers and a protocol that wouldn't break if we added a column to a message...

    it does not, however, assault the intractable problems of semantics.

    True enough. XML only gets you half-way there. Isn't that still better than nothing? Moreover, since the semantics is intractable, as you well state, the solution is to manually provide syntactic markings through, you guessed it... XML tagging.

    After 6 years of adminning systems, I feel more secure with each daemon checking their config files than passing it onto an independent parser daemon.

    Reality check. What is likelier to be buggy: a one-off parsing routine or a well established and universally tested parser such as SAX?

    We know the answer to that one since the whole open source movement is predicated on it. The publicly available routine will be less buggy.

    The world is moving en-masse to XML. Yes, it is overhyped (just as high-level languages, structured programming, OOP and Java were in their time). Yet all of those were clear steps forward in computing.

    Same goes with XML. Linux can be either ahead of the curve, or behind it, always destined to be a late copy of a thirty-year old operating system.

    Now that Linux is stable and of amazing quality it is time to start looking towards the future and make sure a good operating system becomes hands down the best.

  • by denshi ( 173594 ) <toddg@math.utexas.edu> on Sunday November 19, 2000 @10:06AM (#614809) Homepage Journal
    You've evidently never read the XML specification, or tried to use XML anywhere.
    So you wade right into it with an ad hominem attack - how nice.

    I'm only going to say this once - XML is *a* solution to the trivial problem of syntax; it does not, however, assault the intractable problems of semantics. Your fantasy of trees magically passed between programs with no knowledge of each other falls down when you realize that each program assigns different meanings to each file/tree/whatever.

    Other points:
    Programs don't have to check their own config files: After 6 years of adminning systems, I feel more secure with each daemon checking their config files than passing it onto an independent parser daemon. Each program gets to test once with the parser code inside the program; if you move parsing out then you move that work onto the sysadmin (or home user) - the world is filled with minor API changes that screw you over. Feel like running regression tests on every piece of software you install?
    Programs avoid bugs in their parsers: Parsing text is a solved problem. When was the last bug where Apache couldn't parse its own files? Compare that to XML, which is an evolving standard - even if you never have a bug in the XML parsers, who's to say that total backwards compatibility will be maintained?
    When the daemon changes its config file format: Riiiggghht.. I've seen this happen only a handful of times over the hundreds of software packages I've fucked with. Copying config between applications? Best idea in your post, but most of us will still handcheck it, because of Semantic Complexity. Does postfix's 'address' field mean IPv4, and sendmail's 'address' field allow IP or DNS hostname? When you move to IPv6, does postfix break silently? These are issues that sysadmins everywhere deal with on a daily basis - none of it will go away with a common syntax.

    I really don't understand the Linux community's perceived resistance to XML. I think it's because Microsoft was an active participant in the development process, and because many of the early implementations of the XML tools were written in Java.
    I think it's because XML is not relevant for most of our problems. I use XML for foreign database data exchange. (I also use Java, so I don't understand your reasoning.) Thus far, I don't have any better uses for it, and I'm not hurrying to find any.

    I do like your idea about changing streams to structured streams, though. Most apps these days are moving to shared memory in the absence of any such solution.

  • by jackb_guppy ( 204733 ) on Sunday November 19, 2000 @03:53AM (#614810)
  • Stop using flat files for .conf; use XML as the standard configuration format. This includes the make tools, LILO, and the rc.d scripts as well.

    With this grand unification:

    1) An XML parser API could be added so all modules would not have to create their own.

    2) A single config tool could be written so that options and help information can be entered and checked, lowering the entry skill level for general users.

    Another wish is to get /etc back. Move the conf data to a /.conf directory or something.
