Linux Software

The story of the Linux kernel

Todd Bradshaw wrote in with an excerpt from Linus' chapter in "Open Sources: Voices from the Open Source Revolution". Linus' number one rule for keeping the kernel healthy is to avoid new system interfaces.
  • by Anonymous Coward
    What I found most interesting was Linus' opinion of forking in the Plan9 process model. It seems that Bell Labs has stopped work on Plan9 and I think that's a big shame. I would like to hear more of his opinions on Plan9 in general, because I think Unix (Linux included) DOES carry a lot of baggage that is holding it back.

    In particular, I think distributed OS's hold a lot of promise in the new era of "everything's connected". This is an area which I would like to spend a lot of time on....
  • by Anonymous Coward
    Ports are hard to count.

    Linux people usually count ports by the CPU. There are at least 8 working Linux ports if you count that way, perhaps more when you include experimental ports. This is about the same as NetBSD has, perhaps a tiny bit more. (finally NetBSD can do sparc64 like Linux!)

    NetBSD people usually count ports by general hardware similarity. This makes the Atari and Amiga ports different, even though Linux once used a single kernel binary to handle both. Counting this way, Linux crushes NetBSD with an awesome 29 ports.

    We've been over this before, and there is an old Slashdot post that enumerates all 29 ports. You might find it by searching for a whole bunch of different architectures.
  • In most Linux distributions (and most other operating systems as well), the core internet tools -- from finger and telnet to DNS's named -- are BSD in both heritage and license. There are some other common BSD items (like ld.so and tcsh), but the internet tools are the big ones.

    Most of the system tools -- the compiler, the binutils, the shell, the library, emacs -- are GNU in heritage and GPL in license.

    Both BSD and GNU are critical to any Linux distribution. Tux should have GNU horns and a daemon tail.
  • That's been discussed to death many times in the past. I believe the consensus was it just isn't worth it: the bulk of the package is in the drivers, not the architectures. Hmmm, someone with access to the sources correct me if I'm wrong, but I think the code for all the archs put together is smaller than the core+net+fs (and I'm not including specific protocols or file systems).
  • Does he mean something like a system call that could directly copy from file descriptor to file descriptor?
    I believe so, or something like it. I'm thinking that simple requests could be recognised as normal file requests and just handed off to the filesystem, with the web server `root' somehow configured (/proc/.../web_root or somesuch, probably).
  • With the GUI off, you don't get multitasking (even the usual approximation thereof), networking, protected mode operation, or much of anything else that DOS didn't offer.

    Funny how much stuff MS has built in to their GUI... :)
  • Posted by wraith-q:

    Torvalds has accomplished much with his skills, and he has earned the respect and admiration of the Linux community. So what have you done with your life? Make it count, or shut up.
  • A number of them I hate with a passion; the Emacs editor is horrible, for example. While Linux is larger than Emacs, at least Linux has the excuse that it needs to be.

    I'm really sorry to hear Linus say this. First of all, calling Emacs an "editor" is a bit dishonest. Yes, it is an editor, but it is *so* much more. It is a file manager, an IDE, a gaming platform, etc.

    I was using Emacs long before I was using Linux, and will continue to use it for many years to come; it is my Swiss Army knife. Just the other day, I had horked myself experimenting with some arguments to tar. I ended up with a huge file with a name that started with "--". How did I delete it? That's right: DirEd! Thank you, Emacs. (A shell-only way is sketched at the end of this comment.)

    As for the size of Emacs, if you really only want an editor, there is MicroEmacs.

    Personally, I am the worst of pig-editor users: I use XEmacs. And I like it that way.

    If you wanna use vi, fine. But know that Emacs is a great, multi-faceted tool, not just an "editor."
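
    The shell-only footnote promised above, for people without dired handy. This is standard option-parsing behaviour, nothing Emacs- or tar-specific, and the filename here is made up:

    rm -- --stuff.tar     # "--" tells rm to stop treating arguments as options
    rm ./--stuff.tar      # or give it a path that doesn't begin with "-"

    Either works; dired just saves you from having to remember it.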

  • Is there any reason why one couldn't choose an appropriate scheduler at config time, based upon the target machine?
  • Oh my god what bullshit.

    If there is one thing I learned from that article, it's that Linus is not nearly as humble as I thought he was.

    He seems to want to make a point of pissing on everyone else ("microkernels suck and the people who made them are stupid, everything from GNU except GCC is lame, etc") for some reason. What an ego.
  • I agree wholeheartedly.

    I used to think Linus was humble and I respected that. It's a great thing to do something great and still be humble about it.

    It's quite the opposite to do a great thing (albeit with the help of thousands of others) and in the end use it as a platform to insult others and minimize their accomplishments.

    I have to say, RMS may have some radical ideas, and he may support them pretty strongly, but I can't remember the last time I heard of him calling someone else stupid or minimizing their accomplishments.
  • Unfortunately I'd have to agree with you. This wasn't an interview, where one can make mistakes in an instant, this is a written work where one should look back and think about the implications of such things. I must admit to being quite disappointed.
  • Here is a site which aims to list all the supported and experimental ports.

    http://www.ctv.es/USERS/xose/linux/linux_ports.html

    This list isn't organized per processor architecture though and it doesn't carry information about the current state of the port. For example the Sun3 port just recently got into a state where it can boot to a working system with userspace applications. The problem is that they can't share binaries with the rest of the m68k ports because of the different page size (8kB vs. 4 kB).

    I'd say it could be counted as a benefit for NetBSD that some of these m68k platforms (and some others) are better supported under it, but on the other hand the said platforms are nowadays exceedingly hard to find anywhere.
  • Yes, that's certainly true. This kernel archive splitting business has been beaten to death many times. It's even in the FAQ.

    Btw, here's a link which illustrates the growth of the kernel archive. (I thought I had to advertise this somewhere since I bothered to do it :) The kernel source is such an attractive data set, don't you think?-) In future I'd like to include more detailed breakdowns of the various parts and maybe do some analysis and predictions of the future growth.

    http://www.helsinki.fi/~amlaukka/kernel-size.html

    Don't mind the script. It's the result of a couple of hours' worth of spontaneous hacking. It's interesting to see that since late 1995 the growth has been pretty linear, amounting to about 200 kB of bzip-packed kernel source per month.
  • For consumer versions of Windows, this makes sense. For something intended as a server platform, this is a bad decision, in my view.

    Servers don't normally need good GUI performance; they're not trying to do complex rendering or animation, and most of the time nobody's anywhere near their consoles. What's important is that they're robust and reliable, have the necessary administrative capability, and have good I/O and sometimes computational performance. If the GUI preempts disk or network activities to update some flashy widget, the server's ability to do its job is compromised. If a bug in the display driver (and display drivers tend to be complex) locks up or crashes the system, likewise. If some alert box pops up and the system can't do anything until someone manually dismisses it, it's also not very good.

    This suggests that NT Workstation (where graphics performance actually may be important for some applications) and NT Server should have different design goals, but that doesn't seem to be the case.
  • I haven't heard RMS speak on the topic, but if he's disappointed, it's more likely (IMHO) to be because Linus credits gcc more than the GPL. I certainly don't think that RMS expects everyone to like emacs -- it's just an application, after all -- and certainly the utilities could have been written by someone else, but the free, well-ported compiler is the linchpin of the whole enterprise.

    And note that Linus said "...are for Linux insignificant IN COMPARISON." That's a long way from saying that they're truly insignificant.
  • As it stands now, the user space code can come awfully close to it:

    fd = open(translated_URI, O_RDONLY);
    addr = mmap(NULL, length, PROT_READ, MAP_PRIVATE, fd, 0);
    write(socket, addr, length);

    Obviously, there's all sorts of stuff involving retries, tail ends, and so forth that I've left out, but the upshot is that most of the data never actually makes it into user space. mmap simply does the mapping; the data doesn't fault in until someone actually requests it, and if the user program never touches it, it doesn't get copied into user space.

    However, there are things that could be done at the kernel level to strip away even this kind of overhead. The fd-to-fd copy (with retries done at the kernel level, rather than the user level) is one possibility. Another possibility is a system call to actually copy a file to a file descriptor, thus avoiding the user-space open(). This seems a bit of a stretch.

    However, there's another approach that might yield a big payoff, which is actually to embed some knowledge of the http protocol in a kernel module. This isn't as far-fetched as it sounds; NFS servers have been doing this for well over a decade, explicitly for performance reasons. While http is a stream-oriented protocol, it's actually quite a simple protocol indeed, and completely stateless. Even the keepalive option introduces no state beyond a persistent socket; each http transaction is logically independent of anything else.

    So the basic idea here is that the user-space http daemon registers mappings between URI's and filesystem locations, and the kernel-space http daemon intercepts these requests and processes them without involving the user-space daemon at all.

    This is very schematic, and doesn't address issues such as cookies, filesystem permissions, and such. But it's certainly a possible architecture for this kind of beast.
  • Well, I'd argue glibc is pretty key too.
  • Actually, the irony of that is that it's a result of a key restriction in the BSD license which does not exist in the GPL. With the BSD license you are required to credit the "Regents of the University of California" during startup of any derivative work you make. The GPL has no such restriction (hence a lot of GNU software goes uncredited).
  • by Thandor ( 1371 ) on Wednesday March 24, 1999 @05:58PM (#1963817) Homepage
    I think the thing that shone through in this article is that Linus is one of the world's great diplomats. Although he'd probably deny it, saying he's just pragmatic, I think it's true. He's just modest, a trait that goes hand in hand with being diplomatic.

    Take, for example, the way he manages to mention Windows NT several times in a less than complimentary way, without ever sounding like he was being condescending, or "Microsoft bashing".

    Or the way he manages to bring home his point against microkernel architecture. He made points that coming from most people would have been flamebait, but from him seem little more than quiet assertions of the truth, due to his modest and humble manner. Then again, Linux is probably proof that these assertions are in fact true.

    I'm not saying that the article was brilliant; in fact, I thought it served to highlight the difference between a great author and a great computer scientist. That is, the article, while doubtlessly interesting and informative, lacked an artist's touch (much like certain operating systems, in fact). However, it did yet again highlight what makes Linus such a good kernel maintainer - his people skills are first class. It's something that unfortunately can't be said for enough CS people, which is probably why Linus stands out so much.

    I think this is the reason why many people, myself included, have the greatest amount of respect for people like Linus. For in being a pragmatist, while the world hasn't benefited (or suffered!) from any great ideas of his own, his contribution in helping people come from often vastly different views to meet in the middle ground has more than made up for that.

    So while I respect RMS as the rightful "Saint of free software", as I respect various other "celebrities" of the free software/open source world, I think that what Linus has done is far greater. That is, in the way he has managed to bring people together (not just with code, but also with his words) rather than tear them apart over a relentless pursuit of an ideal, as some have done. Perhaps he realises, in his more balanced world view, that the end does not justify the means, nor, just as importantly, does the means justify the end.

    It's probably no coincidence that Linus is generally reluctant to offer his opinion on things, and when he does he tends to be brief and to the point. Perhaps that's a hint that I've said enough! :)
  • Editors are a religious thing. But if you want to learn to use emacs, try the O'Reilly book. It is very good.

    I personally love emacs, but will admit it is very hard to learn to use.
  • 2 keystrokes. One finger holds down control while two others hit x then c (and these keys are right next to each other on a QWERTY keyboard).
  • I thought Linux used pico.
  • He claims that Linux is the most widely ported PC-OS. NetBSD [netbsd.org] looks a lot more impressive though. As far as I can tell, there's only one Linux platform which is not supported by NetBSD, and that is the Palm. And there are at least 10 platforms which are supported by NetBSD but not by Linux, some major ones among them.

    Credit where credit is due.

    --

  • Linus has said on many occasions that he has no problems with Microsoft as a company.

    He objects to the quality of MS operating systems, but that's it. He's even complimented MS application software.
    --
  • My problem with emacs is that I've tried to learn it three times (using the built in tutorial), and each time I've given up, frustrated, and gone back to vi/elvis/vim/vile

    I don't *think* I'm stupid, but emacs makes me feel inadequate... while vi makes me feel powerful...
    --
  • Not really, since I never had any trouble learning vi.

    Of course I know this is purely personal, and I know that many people *do* have trouble learning vi. I just found it very logical and consistent straight away.
    --
  • It's obvious this guy has resigned himself to accepting blue-screens, and the inevitable three-fingered-salute which must follow, as the accepted norm in fixing a problem!

    He might not actually know there are OTHER operating systems out there (FreeBSD, Linux, Solaris, Tru64 Unix or whatever Compaq are calling it this week, etc.), that have a decent command-line interface and can actually handle long file names!

    NT isn't even a true multi-user operating system!
  • Linus can always say: "Hey, that interface is crap and it's not going into the kernel. You can put it in user space".

    So Linus may speak!
  • I think you're overlooking that fact that NT is a piece of bloated crap based on a flawed operating system design!

    Microsoft deserve nothing less than what Linus said. Designing an operating system that can only handle 8.3 file names is, indeed, STUPID!

    That's about as dumb as the drive-lettering scheme in Microsoft's "operating systems". Just put another hard drive into your NT box and watch all your drives swap letters, and then watch all your programs fail because they can't find C:\whatever...

    Challenge to Microsoft: write an operating system that is *HALF* as good as Linux/Unix/BSD!
  • On the other hand let's look at vi(*)


    Lessee
    [esc]:wq![enter]

    I count six :)


    Don't get me wrong, I *like* modal editors (anyone familiar with isredit on VM/MVS/OS-390?), it's just that I don't think "keystrokes to leave the editor" is a good argument for the virtues of one editor over the other.


  • I don't know for certain, but I suspect that emacs is fine for people who spend most of their computing time editing/compiling/debugging.

    That's OK, but I don't. I spend most of my system time looking at logfiles and process displays, searching text files, checking disk space, and editing config files. Hence, my requirements for a working environment are better matched by a shell than by an editor.

    Horses for courses folx :)

    Hi-ho silver!

  • Ah yes. Vi, the lowest common denominator of all Unices.

    Were it not for my knowledge of vi, I would not have been able to talk (over the phone) a newbie through editing /etc/fstab on a system that could not determine the terminal type of its own console.

    "Alright, now type 'jjwwwcwc0t0d4' etc."



  • Does he mean something like a system call that could directly copy from file descriptor to file descriptor?

    Isn't that what sendfile [linux.no] does?
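
    For the curious, here's a minimal sketch of how sendfile(2) (new in the 2.2 kernels) gets used for exactly this. Socket setup is elided, error handling is minimal, and the helper name is made up, so treat it as illustration rather than a reference:

    /* copy a whole file onto a connected socket; the data never
       passes through user space */
    #include <sys/sendfile.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>

    ssize_t send_static_page(int sock, const char *path)
    {
        struct stat st;
        off_t offset = 0;
        ssize_t n;
        int fd = open(path, O_RDONLY);

        if (fd < 0)
            return -1;
        if (fstat(fd, &st) < 0) {
            close(fd);
            return -1;
        }
        /* the kernel feeds pages straight from the page cache
           to the socket, advancing 'offset' as it goes */
        n = sendfile(sock, fd, &offset, st.st_size);
        close(fd);
        return n;
    }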

  • I think your comparison of Glide vs. DirectX (or Direct3D) is a little unfair. Think about the ioctl's. In comparison to them, Glide and DirectX both appear "thick", since they are higher-level APIs that talk to the hardware driver itself. I think that's what is really important in defining something as "thick" as opposed to "thin" - where the interface is implemented, rather than what it represents. DirectX may abstract the hardware while Glide better represents the hardware architecture, but they are both higher-level, programmer-oriented interfaces. You'd never see the Glide interface implemented in the kernel as syscalls, which is the point Linus is trying to make.

    Great article. =)
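
    To make "thin" concrete: the raw kernel-level interface being described is basically open() on a device node plus ioctl(). A quick sketch against the Linux framebuffer device (FBIOGET_VSCREENINFO is the real fbdev request; anything Glide- or DirectX-shaped would sit in libraries above this):

    /* ask the kernel-level driver a question directly --
       no API library in between */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fb.h>

    int main(void)
    {
        struct fb_var_screeninfo vinfo;
        int fd = open("/dev/fb0", O_RDONLY);

        if (fd < 0 || ioctl(fd, FBIOGET_VSCREENINFO, &vinfo) < 0)
            return 1;
        printf("%ux%u, %u bpp\n", vinfo.xres, vinfo.yres,
               vinfo.bits_per_pixel);
        close(fd);
        return 0;
    }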
  • Operating Systems--Design and Implementation by Andrew S. Tanenbaum

    Hardly used. Read only once, in fact. There are no markings in the book. Unlike most textbooks, you won't find highlighted sections with annotations marking all of the useful and helpful areas of the book.

  • bzzt. that's the long answer

    [esc]ZZ

    depending on whether you are editing text or not, it is either 2 or 3 keystrokes.....

  • I think that's because Windows doesn't like to use the CPU. Instead it spends all its time swapping out to disk. Due to the large cache-miss percentage, the CPU gets time to cool down every 10 cycles or so while waiting for the HD to catch up.

    Linux, OTOH, is so efficient that it keeps the CPU running all the time, causing it to overheat. Either that or it spends all its time in interrupt handling routines.

    Maybe I'm just making this all up :P

    self: *THWACK*


  • When the video-driver-in-the-kernal thing gets dredged up, I always wonder if it really matters.

    Servers generally run really generic SVGA or S3 drivers, which don't crash on NT 4, and from a normal user perspective, if your video driver crashes, you're pretty much hosed anyway. (Unless the normal users you know like a command prompt.)

    If it's a server, why not just not bother? That's the whole point: with NT, you have no choice.


    Linux doesn't have video drivers in the kernal pretty much only because Unix has never done it that way and Linus doesn't want it. But video on Unix has always been an afterthought, whereas on a client OS like Windows, it's practically the most important thing.

    Not strictly true. SGI, for example, puts minimal drivers in the kernel. Even newer versions of Linux do. It is not a good idea to give user space processes direct hardware access. The hardware drivers should therefore be in the kernel.
  • I am afraid Linus will hold back scaling Linux up. He even mentions that one will only be able to use a modified (non-standard) version of Linux if you want to run on 64 processors or more. It is true that in order to scale, you need to make things more complicated and possibly slow for single processor machines, but this should be done. I mean, how many 386's are there running Linux nowadays? How about in 5 years?

    I would bet money that there will be many 64-processor machines out there in 5-10 years. Moore's Law is going to give out eventually on the single processor and the only way to move will be in the parallel direction. Linux should be prepared for this.

    I think Linux will always run quite well on machines in the $1000 to $3000 price range. When such machines have 64 CPUs (or 64 hardware-level threads, maybe not as many physical CPUs) then the base Linux will run quite well on them. After all, that's what most Linux developers have, so that's what will get the most work. It will always run somewhat less well (but still decently) on the cheaper and more expensive machines. At least if the future follows past trends.

    However, I don't see any signs pointing to cheap large-scale multiprocessing. The prices on 2-CPU machines have dropped dramatically over the last few years, but the price of 16-CPU systems has been fairly stable compared to single-CPU systems. The Mediaprocessor (the only cheap general-purpose multithreaded CPU that was aiming to go commercial) has sunk without a trace. The only two commercial multithreaded CPUs I know of are the Tera, which is only being sold in really large machines (think bomb testing and weather simulations, not kick-ass raytracer), and, um, that thing I just read about in EDN which is targeting 8-bit and 16-bit systems.

    I would love a 64-way machine. I do raytracing (when I'm not coding, sleeping, or watching TV; actually I frequently raytrace while sleeping, watching TV...), so lots of compute will do me good, even if I only have a one-giant-lock kernel.

  • I have to say you're probably right. I'm as much of an emacs-phile as anyone: I compile, edit, debug, read email, news, etc., in emacs, write emacs packages, etc. But when I have to look at logfiles and check disk space, I normally use the shell. Well, if it's one-off stuff... for big disk cleaning, I do use dired with some scripts I wrote to find usage hotspots.

    Note that this is dependent on having a decent shell, namely bash. If I have to use anything else, I tend to do things more from Emacs, because it can compensate for faults in other shells, like not having both ! and C-r histories.
  • I thought it was interesting how he made some design decisions that were intended to make it easier for him to work with the other developers out there and it also ended up being a good decision on technical merits.

    I wonder if the free software community's dynamics help create better design decisions as opposed to a corporate environment. I'm not talking about the often-cited advantages like having lots of people pounding on the code and such; I'm talking about how the need to have many developers working in parallel affects technical decisions in ways that are beneficial.

    It is an interesting thought to ponder, and one I'm not qualified to answer from lack of experience.
  • Smart people can have stupid ideas. It is more politic to say "they aren't the best, or most efficient designs," but that's just politesse for "they are stupid ideas."
  • I believe so, or something like it. I'm thinking that simple requests could be recognised as normal file requests and just handed off to the filesystem, with the web server `root' somehow configured (/proc/.../web_root or somesuch, probably).

    But wouldn't the web server still have to process the URI... open() the requested page and then make the fd to fd copy system call?

    I guess this is what I was confused about. To have the kernel be able to handle HTTP requests would seem to go against Linus's philosophy... I suppose you could have a web server kernel module, though, but that just seems like something that should be running in user space.



  • Yup:

    # du -sk linux-2.2.1/*
    20 linux-2.2.1/COPYING
    54 linux-2.2.1/CREDITS
    2401 linux-2.2.1/Documentation
    19 linux-2.2.1/MAINTAINERS
    14 linux-2.2.1/Makefile
    15 linux-2.2.1/README
    3 linux-2.2.1/REPORTING-BUGS
    8 linux-2.2.1/Rules.make
    8655 linux-2.2.1/arch
    27993 linux-2.2.1/drivers
    3945 linux-2.2.1/fs
    6298 linux-2.2.1/include
    36 linux-2.2.1/init
    63 linux-2.2.1/ipc
    253 linux-2.2.1/kernel
    55 linux-2.2.1/lib
    258 linux-2.2.1/mm
    3959 linux-2.2.1/net
    355 linux-2.2.1/scripts
  • I have to say, that was a very well written article. I enjoyed reading it.

    I was interested in what Linus had to say about web serving, but I am a little curious what he means about the kernel handling requests for static pages. Does he mean something like a system call that could directly copy from file descriptor to file descriptor?

  • I will read Linus again and again.

    English is not even our favorite Finn's first language, but his article was a joy to read, and just the right length too.

    Other comments in response to the Gates book were about how BORING Gates is. Linus isn't boring at all.
  • Something that always amazes me about these self-congratulatory love fests is how little Linus, Eric Raymond, etc., etc., give any credit at all to the people who blazed the trails for them. I haven't read the book yet, but it seems devoid of any interviews of people like Ken Thompson, Dennis Ritchie, Brian Kernighan, Alfred Aho and many others.

    Sorry, boys and girls, but without those guys there would likely be NO 'Open Source Movement'. They were the first to do readily available, accessible research on practical, NON-PROPRIETARY OSes, languages and scripting tools that didn't require big iron, and for the sheer pleasure of it at that. Linux is a direct descendant of their work, and comes with a healthy supply of Unix-alike tools and utilities (without which Linux would just be a curiosity).

    C'mon, Open Source 'leaders'... does it really hurt that bad to give a little credit to the guys that got you here?

  • Can you recognize sarcasm? Apparently not.
    ********************************************
    Superstition is a word the ignorant use to describe their ignorance. -Sifu
  • if it's more than a line long, you probably aint doin' it right.

    paulzilla
  • Such vehemence against GNU

    Hmm. In one of those weekend magazine liftouts in the Sydney Morning Herald (no URL, I'm afraid, though it might've been a mirrored, ahh, syndicated article online elsewhere), they had an article on Linux. Obviously, they bunged up Linus' goofy face on the first page, but they also had a smaller picture of RMS (replete with borrowed laptop with "GNU/Linux inside").

    Anyway, the point is, Linus in the article was very forthright about his and Stallman's relative contributions to the system, giving RMS plenty of credit. I can understand why RMS is pissed tho', since he's an idealist, and idealists tend not to appreciate having their ideals diluted.

    As for the further development of Linux, if you don't like what Linus is doing, fork the bitch! :) If it means that the kernel for embedded systems has to take a different course to large-scale systems, so be it. I think the "movement" is big enough and ugly enough to handle it.


  • I don't see any irony in the fact that a non-commercial OS gets ported to obscure platforms (cough, Amiga, cough), whereas a commercial OS that has a revenue stream does not.

    Microsoft had MIPS and PowerPC ports. Haven't seen many non-SGI MIPS or non-Mac/non-AIX PowerPC boxes lately, have you? They could only push their thumb in Intel's eye for so long before they did the logical thing.
    --

  • So, when GNU formed in the early eighties, it was in order to fight Microsoft?

    --
  • Well, I think Linus has the valid concern of keeping Linux the same on all architectures.

    If you started to see home-made platform-specific kernels, I can imagine the situation where that might become standard enough that people would submit patches just for the platform-specific kernel, and not the Linus kernel. This effectively forks the code, especially if RedHat or someone picks up on the patches.

    --

  • When the video-driver-in-the-kernal thing gets dredged up, I always wonder if it really matters.

    Servers generally run really generic SVGA or S3 drivers, which don't crash on NT 4, and from a normal user perspective, if your video driver crashes, you're pretty much hosed anyway. (Unless the normal users you know like a command prompt.)

    Linux doesn't have video drivers in the kernal pretty much only because Unix has never done it that way and Linus doesn't want it. But video on Unix has always been an afterthought, whereas on a client OS like Windows, it's practically the most important thing.

    The thing Linux has, that Windows doesn't is compartmentalization. Sure you can run Linux on your 4 meg 386, just disable everything you don't need including X. Microsoft always designs their product around absurdly low standards for marketing reasons (486SX/25 with 16 MB is minimum spec for Windows NT), yet has to make everything kinda-sorta-run. That's why you don't see things like network-transparency with the Windows GUI - it would price them out of the unrealistically low-end market.


    --

  • That wouldn't be a "Macintosh G3" would it?

    --

  • Yeah but turning off the GUI in Windows doesn't give you Windows-with-no-GUI -- It gives you good ol' 640K MS-DOS!

    (Someone did hack a 32-bit no-GUI DOS using Win95 VxDs - it had networking and file system caching, but nothing really ran on it.)
    --

  • Correct. NT WS and NT Server are exactly the same operating system - except for the price, and the fact that WS doesn't include server services.
    --

  • Good Point

    - I think the video-in-the-kernel thing was one of the things that had WinFrame on NT 3.51 only for a long time.
    --
  • "Linus Torvalds explains what makes the Linux kernel great."

    I'd rather have someone else do it, personally, someone a little less partial - seems to me that Linus would be a little biased. Why not ask the author of Minix to explain it? >;) Microkernels - just a way to get more research dollars. Of no use in the real world.
    -lx
  • Oooh! Harsh! I love it. :)
    -lx
  • by Lx ( 12170 )
    i think the implication was that Linus didn't read it well enough. At least, I hope it was. :)
    -lx
  • It seems to me that quite a lot of my distribution owes its ancestry to BSD or others (messages about "Regents of the University of California..." or "Copyright Caldera..." during bootup from a whole slew of drivers/modules).
    These may be under the GPL, but they're NOT developed by the FSF.

    Just my £0.0125 ...
  • by cjs ( 12969 )

    When the video-driver-in-the-kernal thing gets dredged up, I always wonder if it really matters...
    It matters a lot, but not for the reasons anybody here has yet mentioned. The tremendous problem is not the video drivers, but the fact that you can't run an NT console over a serial line, requiring you to be physically present if you need access to the machine and (for any of a multitude of reasons) you can't reach it or do what you need via the network.

    This is a really big deal when you're maintaining, say, 300 servers that are 21 floors and two elevators away from you. It's an even bigger deal when you need to deal with a server from home.

    Of course, most PC hardware limits your capability in this area regardless of the OS you run. I've run lots of NetBSD/i386 systems with serial consoles, but that still doesn't give me access to the BIOS setup. With a Sun server, on the other hand, using a terminal server to talk to the serial port is just as good (better, in fact), than having a graphics console on the machine and being there.

    cjs

  • Linus's bald claim that Linux is the most ported operating system that runs on the PC really burns me, because I don't think that anyone who actually examined the issue would disagree that NetBSD has a strong claim to this. (I believe it's stronger than Linux, myself, but that's just my opinion.) I expect that this will be perceived by the free software community outside of Linux as yet another typical example of Linux hogging the spotlight, rather than sharing the fame. There are not an insignificant number of people out there who see little difference between Linux movement and Microsoft; in both cases the promoters tend to gloss over flaws and ignore other technology, instead giving the impression that they are the one and only option.

    The link given above, http://www.ctv.es/USERS/xose/linux/linux_ports.html [www.ctv.es], is a little optimistic in what it considers a `port'; the VAX port isn't anywhere near bringing you to a single-user shell prompt yet, for example. (This is typical of most `Linux ports' pages I've seen; they don't indicate which ones are real and which are currently vapour to some degree or other. Again, more Microsoft-style marketing.)

    If you're going to discuss this issue, it helps to make clear exactly how you're approaching it, as I've done at http://www.cynic.net/~cjs/computer/os-ports.html. (Note that this page is getting old and needs an update; I'll get to it as time permits.) Some of the questions you have to deal with are:

    • Are incomplete ports still considered ports? The sparc64 port of NetBSD is not in releasable condition right now; do we consider it ported anyway? How about Linux ports like the Sun3, that are still missing large amounts of functionality? Or the Linux VAX port, which can't even run a shell yet? (It somehow doesn't seem entirely fair to me to imply that the Linux VAX port, such as it is, is equivalent to the NetBSD VAX port, which is running production web servers on the Internet.)
    • Are you counting all machines using a given CPU as one port, or are different architectures built around the same CPU different ports? If FreeBSD, for example, were ported to an Amiga, does that mean it's just as ported to the 68K as NetBSD and Linux, which run on a much larger number of 68K machines?
    • What capabilities does a `port' have to have? Do you expect to be able to use all the same applications without modification on all the ports? I don't believe you can grab perl source off the net, drop it on to a PalmPilot running `Linux,' type `configure && make', and have a fully-functioning perl just as you'd have on the i386. Do you consider virtual memory and memory protection `optional' features of Linux, so that you can claim that code running on a machine without hardware support for this is still `Linux'?

    I have another comment on Linus's article, but I'll put it in another post.

    cjs

  • I believe the consensus was it just isn't worth it: the bulk of the package is in the drivers, not the architectures.
    Hell, just fix the drivers to be MI (machine-independent), and you'd probably get rid of half the kernel source right there. There's no reason to have, for example, eleven different drivers for the same chip (lance Ethernet). NetBSD has only one lance Ethernet driver, and supports more cards and other implementations of the chip (and supports it on more machines) than Linux does.

    Most of his comments on portability are quite ignorant. The Linux kernel is *not* very portable internally in many ways. Compare the device driver model Linux uses to NetBSD's bus_space and bus_dma structure for a look at the difference between portable and non-portable. (And if you're going to argue this point, please actually *read* the source code first, before spouting off. Though I'm a NetBSD developer, I read a fair amount of Linux kernel code before coming to this conclusion, so I'm not talking through my hat.)

    cjs
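
    For anyone who hasn't seen the NetBSD style cjs is describing, a rough sketch of what a machine-independent register read looks like under bus_space(9). The register offset and function name here are made up for illustration; only the tag/handle idiom is the real API:

    /* the opaque tag/handle pair hides whether the device is
       memory-mapped, I/O-port-mapped, or byte-swapped on this
       machine; the same driver source compiles everywhere */
    #include <machine/bus.h>

    #define LANCE_RDP 0x10   /* hypothetical register offset */

    u_int16_t
    lance_read_csr(bus_space_tag_t iot, bus_space_handle_t ioh)
    {
        return bus_space_read_2(iot, ioh, LANCE_RDP);
    }

    A machine-dependent driver would instead hard-code inw() or readw() and need a variant per bus.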

  • I'm not at all impressed with this article, and I must say it's reduced my opinion of Linus considerably. I've mentioned a few of the things that have bugged me in other posts, but I'll summarise here.

    1. The claim that Linux is `the most widely ported operating system available for PCs' is certainly arguable. It's unfair to ignore the lesser known systems (such as NetBSD) in an article with such wide distribution.

    2. He's insulting. There's no reason for calling the people he's discussing `dishonest' or `stupid.' That's immature.

    3. He's not correct that the OS research world had abandoned monolithic kernels for microkernels or felt that only microkernels offered good prospects of portability. Around the time Linus started his first i386 work, Berkeley and other folks were busy making 4BSD (which is monolithic) more portable, and moving it on to several other architectures. The period between 4.3BSD and 4.4BSD showed a dramatic portability and ports increase.

    4. Linux is far behind the curve in terms of internal structure for portability; NetBSD is unarguably significantly better in that regard. Take a look at device drivers, for example; Linux has a proliferation of machine-dependent drivers where NetBSD uses machine-independent drivers almost everywhere. Linux doesn't even have a structure to support MI device drivers! (See NetBSD's bus_space and bus_dma work for an example of what such a structure can look like.)

    In short: he insults others, denigrates the work of others that Linux was built on, and he frequently ignores the work of others. Either he's lacking in technical knowledge, or he's willfully ignoring other stuff out there that `competes' with Linux. This article is marketing, not information, and is only going to worsen the reputation Linux already has as a `Microsoft' among the non-Linux free software community.

    cjs

  • In the article, Linus mentioned that, after the Alpha port, it became clear that he did not want to manage more than one source tree. So, all the architectures have been merged into one tree.

    If someone with FTP space and bandwidth wants to take the kernel releases, and make up different .bz2's for each architecture, the world would be grateful. However, I think Linus should just continue to do what is easiest in that regard, which is to keep all the files together, to keep them most manageable.
  • Such vehemence against GNU... I bet Linus Torvalds is the only guy who could get away with saying such things about GNU, without being ripped on by 250 posts or so.

    He'd probably also get a -2 or so moderation if he posted that here...
  • I can see having 64 processor server machines in the future, but I think they'll be far from common. I think that the current trend towards one high speed processor will continue in desktop machines and low-end server machines into the future, mainly for cost reasons. Even on high-end server boxen, a 64 processor machine would be reaching, or above, the complexity/speed vs. price threshold.
  • The whole point of a server is stability and reliability. You want to set up a server to perform certain services (file service, print service, mail service -- or daemon, depending on what world you come from) so that it just does its job with very little intervention from the administrator. Ideally, all the administrator should be doing is continually customizing the services the server performs to meet the needs of those receiving service. However, things not being ideal, an administrator has to worry about hardware failure and hence must add tape backup, RAID, etc. to his server for fault tolerance, insurance against data loss, and to minimize down time.

    Now, concerning the issue of whether or not to put certain drivers in the kernel, you must look at your application. In a server environment, you want to maximize stability and reliability. Therefore, the ideal situation is to minimize the number of factors that can directly affect stability. Device drivers are included in that category. With the creation of new hardware comes new drivers. The question is, is the driver updated enough to run the risk of failure? Patches and fixes are great, but if such patches and fixes are needed, the possibility of failure is still present. When looking into an operating system's design and determining what drivers to include, the questions are: do the drivers present a danger? Are they patched frequently? Are they stable? But those are all relative to what we know about the driver and its history. What about rare and unknown bugs -- bugs that show up under unusual circumstances? Ever had a machine crash once out of the blue without knowing why? I have. Only God knows why in such situations. Maybe it never happens again. But it still happens. So what does one do? Do we accept that it will happen infrequently enough that we won't have to deal with it again? If you want stability, that is not acceptable. You cannot just leave such unknowns unanswered. NT and Linux both have such vulnerabilities, since they both can have drivers present in the kernel.

    Getting back to the server, what do we do? Well, a design which I have been very impressed with in terms of an operating system is QNX [qnx.com]. I do not know how many of you are familiar with it. It has a small message-passing kernel and runs everything else as protected and separate entities (a generic sketch of the pattern follows at the end of this comment). I found their microkernel model to be very impressive. I downloaded their demo and was very impressed with it also. For a server environment, it is a much better design than Linux or NT. I hope that someday Linux and NT will follow its example. Perhaps two versions of each operating system: one version with a QNX-like model for servers, and one for high-performance workstations that don't need that kind of reliability.

    Another noteworthy thing I wanted to look at deals with a difference between Linux and NT. GUI interfaces are nice and pretty and are sometimes easier to use than a console. But if you think about it, on NT the GUI is always running, wasting precious memory and CPU time. I mean, how often does an administrator tinker with his server? Not often, unless he is experiencing a high demand for changes from his users. Linux allows you to run the GUI if you want to, when you want to, and shut it down when you are done. In my opinion, that makes it a better design. I hope that Microsoft will go back to the Win 3.xx/DOS model, which is similar to the X-Windows/Linux model. It would certainly improve their server product. One of the engineers that designed VAX VMS worked on NT, as you probably know. I believe that at the heart of it, the microkernel is pretty good. It is all those nasty libraries on top, the integrated GUI, and the registry (yuck!!! had to mention that) that mess everything up.

    Hopefully my logic on this stuff is sound, but if someone finds fault, please reply and set it all straight. The point of discussions like this is not about who wins the argument, but that we find the best answer. One additional note about QNX: it is designed for embedded systems, and they currently do not seem interested in the server market. However, I have heard rumors about Cyrix and QNX getting together to build ultra-cheap boxes for web-browsing or something. If you access their search engine, it provides detailed explanations about the QNX microkernel. I hope you all will read up on QNX. Linux and NT could definitely learn some things from it.
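
    The promised sketch of the message-passing pattern: a driver as an ordinary protected process serving requests as messages, so a crash kills one process rather than the kernel. This uses POSIX message queues purely for illustration; QNX's actual primitives are its own Send/Receive/Reply calls, which are not shown here:

    /* generic message-passing service loop (illustration only,
       not the real QNX API) */
    #include <mqueue.h>
    #include <fcntl.h>

    int main(void)
    {
        struct mq_attr attr = { 0, 8, 128, 0 };  /* 8 msgs of 128 bytes */
        mqd_t q = mq_open("/diskdriver", O_CREAT | O_RDONLY, 0600, &attr);
        char req[128];

        if (q == (mqd_t)-1)
            return 1;
        for (;;) {
            /* block until some client process sends a request */
            if (mq_receive(q, req, sizeof req, NULL) < 0)
                break;
            /* ...perform the I/O, reply on the client's queue... */
        }
        return 0;
    }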
  • Does this sound diplomatic to you?

    I think that all the other projects from the GNU group are for Linux insignificant in comparison. GCC is the only one that I really care about. A number of them I hate with a passion; the Emacs editor is horrible, for example. While Linux is larger than Emacs, at least Linux has the excuse that it needs to be.
    Not only is this not diplomatic, it isn't even entirely rational. Once upon a time, emacs was an unusually large program; now it's only about average. And emacs has plenty of excuses to be larger than a simple text editor, because it does a lot more... sure, there are other ways of doing most of those things, but so what? You might as well complain about Perl being larger than it needs to be because you use Python.

    This makes me wonder if some of the anti-RMS sentiment that you see is really the result of a deeper ideological split than open vs free software: it all goes back to the vi/emacs wars.

    After LinuxWorld, I was actually a lot less impressed with Linus than I had been previously. It seemed to me like he'd gotten his fingers burned in the past by shooting his mouth off, and had concluded that he should never say anything. Look at his old style back in 1992, during the famous "Linux is Obsolete" argument with Tanenbaum: Linux is Obsolete [xach.com] (This is also reprinted in the back of the "Open Sources" book).

    If you want to see a real diplomat in action some time, check out Brian Behlendorf.

  • can be set by editing c:\msdos.sys and changing the 1 to 0 in the line:
    LoadGUI=1
    (may be spelled wrong, I'm going from memory)
    This also enables you to resume a command-line session by typing mode co80 at the "you may shut your computer off now" screen. Also, add the line Logo=0 to msdos.sys to get rid of that stupid splash screen. BTW, this all works in w95; I've not tried it on 98.
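
    For the record, the stock Win95 spelling (if memory of the MSDOS.SYS documentation serves -- verify against your own copy) is BootGUI, in the [Options] section:

    [Options]
    BootGUI=0    ; boot to the DOS 7 command line instead of the GUI
    Logo=0       ; suppress the splash screen

    Remember that msdos.sys ships read-only/hidden/system, so run "attrib -r -s -h c:\msdos.sys" before editing.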
  • multitasking, no. Old games, yes. Ability to actually _do_ something when windoze corrupts the registry (again), yes. I just wish NT had a similar front door. :) Hey in a year or two I won't need to use windows at all, only doors.
  • From: ast@cs.vu.nl (Andy Tanenbaum)

    Newsgroups: comp.os.minix
    Subject: Re: LINUX is obsolete
    Date: 30 Jan 92 13:44:34 GMT
    [much snippage]

    ... 5 years from now everyone will be running free GNU on their 200 MIPS, 64M SPARCstation-5.

    From: kevin@nuchat.sccsi.com (Kevin Brown)
    Newsgroups: comp.os.minix
    Subject: Re: LINUX is obsolete
    Organization: Where???
    Date: Fri, 31 Jan 1992 07:43:47 GMT
    [much more snippage]

    Maybe. But by then, the 386/486 will probably be where the PC is now: everyone will have one and they'll be dirt cheap. The timing will be about right. In which case Linux will fit right in, wouldn't you say?

  • by 7021 ( 15479 )
    Hear, hear.

    I have been using emacs for 5 years now, and although it may be a bit large if you just use it as an editor, many don't, as explained before.

    In fact I was on contract this summer and was asked to choose the IDE (Java was the implementation language) of my choice. The company had set aside 1G. After some heated discussions I broke down and agreed I would spend one week trying different IDEs on the market. I tried JBuilder, VisualAge, Cafe, and basically they all stunk.

    I chose emacs with JDE 'cause it was by far the most powerful. While it is not for everyone (including Linus, apparently), I consider it one of the most useful programming tools I have, and to me Linux without emacs is only half as useful.

    -7021

  • Just because those things aren't in the Kernel doesn't mean we can't use them. It's probably best that they stay out of there, anyway. If the GUI were in the kernel (*cough*Windows*cough*), there would be lots more problems with debugging and with possibly switching if Berlin ever turns out. Wrappers for the low-level interfaces keep things clean.
  • I know RMS has issues with Linux (or GNU/Linux, take your pick) in general (some of which I have to agree with), and in this article Linus seems to ignore the importance of all the utilities GNU provided, but the kicker is Linus's quote "I think that all the other projects from the GNU group are for Linux insignificant in comparison. GCC is the only one that I really care about. A number of them I hate with a passion; the Emacs editor is horrible, for example." OUCH!
  • Kernel modules are a new idea to me, and I was glad to read what he said. // I used to be an editor some time ago (Electronic Design). The content and style are great; I do agree that calling other ways "stupid" and such is a bit strong, but it *is* concise, and he doesn't seem to be personal when he says it, just to the point.

    HOWever... If this is the actual text that appears in the book, shame on them! It's not the best copy editing. Linus does fine with English (try some Finnish, if you doubt me), but a courteous square-bracketed word or two would help in a few spots.

    Also: There are some typos and/or detailed editing errors that shouldn't get into an O'Reilly book. I expect better of them. (If this is still a draft, fine.)

    (Fwiw, this msg. is no great piece of writing/editing, either.)
  • Would it be a good idea to split the different architectures that the Linux kernel supports into different packages? It would certainly reduce the size of the download for most people...

    I became a Linux convert the day that NT crashed five times on me.
  • I learned vi at age 8 on a Tandy Model 16 running MS-Xenix; this was roughly 15 years ago. I still use it today. People look at me funny, but vi is the BEST example of efficiency and elegance when it comes to editors. Oh, BTW, this reminds me of a really lame college computer class I took in high school where the instructor actually marked me incorrect for using the term text editor instead of word processor!? WTF. Anyway... just my little vi anecdote.
  • Though throwing money at one is equally pointless.

  • Tanenbaum started the microkernel thing. You can read it at a link above (or below, depending on your preferences). ast said Linus deserved an "F" for writing a mono-kernel in the 90's; he also started the usenet thread bashing Linux.

    In a reply though, Linus said he probably would get an "F", because he got into a verbal argument with his OS teacher over something completely unrelated. I guess Linus' diplomatic skills are about average.

  • Diplomats don't call things stupid or horrible or say that they hate them. Linus does that throughout the article. Not that he's wrong :)

    My diplomatic replacement for "stupid" etc. is "non-optimal". Unfortunately, my co-workers seem to have caught on. Bunch of cow-orkers...
  • I see your point, but one of the features I currently like about linux is the flexibility to run on 386's all the way up to high-end hardware. I currently use linux on a wide range of systems, all the way from 386's to alphas. Having a common system and reasonable performance on all of these systems really is one of the primary benefits of linux, at least to me.

    Just my 2 cents.
  • You know, the best things on the original Win95 CD are the Weezer and Edie Brickell videos. The rest of the CD is fairly worthless.

    Mike
    --

  • I have to agree with you. It seems that the guy was already pissed at Linus a while ago, and now he replies by saying something like this.

    I also think that Linux being free software, it should have a certain alliance with GNU (even if we don't always call it gnu/linux), because, after all, it's just the same community. The anti-microsoft community...

    Papi
  • I think everybody should hate a text editor that takes 124M tar.gzipped.

    Ok, it has nifty features like reading mail, but, if I want to read my mail, I use pine.

    Vi is the wave of the future (actually, it was, in the early days of UNIX, but anyway...)...


    Papi
  • Hmmm,

    I think you need some time away from your computer or something. He is everything you said (to a lesser extent), but you looked like you were asking him out or something....


    Papi
  • If you check out the source code, it is brilliantly designed as it is. Go take a peek at lxr.linux.no (the linux source, cross-referenced) and you will realize that the tree is fine as it is.

    Also, to save yourself some bandwidth, use kernel patches; they are easily installed, and fit in 600K.

    (To install a patch, if that is your problem, take the patch in bzip2 format and put it in /usr/src, then just type:

    bzcat patch-2.2.x.bz2 | patch -p0

    using your patch's actual filename.)

    You would get the same upgrade result that you would have gotten with a full download....


    Papi
  • I believe that he was talking about main memory management. As you know, to be able to perform multitasking, the kernel must share memory between processes.

    To do so, the hardware must do some work for us (like being able to translate virtual addresses to physical ones and vice versa). It does so by splitting the total amount of memory into pages (4K on i386, 8K on alpha).

    What he meant (in my opinion) by handling requests to static pages is that when a process wants to get memory for itself (when it is started, or by a malloc call or so), it asks the kernel for a certain number of pages. Then, the kernel gives it descriptors to these pages, and allocates them only when the process actually uses them (so if you malloc(10000000) and don't use it, you won't really waste memory). A demonstration is sketched below.

    Also, when he was talking about "sane architectures" and page handling, he meant (I think) that all basic architectures use such schemes. It's just that the number of levels of page tables changes (3 for alpha, 2 for intel).

    Maybe you already knew all that, or maybe I suck at explaining stuff, but if you want to know more about this, go get "The Linux Kernel" from the LDP. Or even better, if you understand French, go get "Programmation Linux 2.0: API système et fonctionnement du noyau" by Rémi Card (author of the ext2 file system), the only good French computer book I've ever read.
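
    Here is the demonstration, a plain C sketch; run it and watch the process's memory use in top(1) while it waits at each getchar():

    /* malloc() only reserves address space; physical pages are
       faulted in on first touch */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        size_t len = 100 * 1024 * 1024;   /* 100 MB */
        char *p = malloc(len);

        if (!p)
            return 1;
        getchar();           /* resident size is still tiny here */
        memset(p, 1, len);   /* touching the pages faults them in */
        getchar();           /* now resident size is ~100 MB */
        free(p);
        return 0;
    }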

    Papi
  • Good thing. Now, the only thing left for us to do is burn all windows CDs and kill (or at least make suffer very much) Bill Gates...

    Now, all the pieces fit....

    Papi
  • Maybe Linus doesn't have time to write about the kernel. But if you are really interested in design issues, you can pick up a copy of "The Linux Kernel" from the LDP.

    This book is written by a kernel hacker who helped make the port to alpha possible. It's not Linus, but he is a very knowledgeable individual, and he explains design issues very well. Plus, there are pointers to files in the source tree so you can read the code while following his book...


    Papi
  • Vi is said to be the only text editor for UNIX gurus. Probably because it is so old, and really good unix users have used it for like 15 years...

    Switch to nedit instead....

    Papi
  • After reading the future plans for linux, I saw a big problem. Linus wants Linux to be able to run on embedded systems -- great! But he also wants it to run on 16 processor super servers. From my understanding of kernels, it seems these are conflicting goals.

    So far, Linux has been able to work on Palms and on 4-processor machines. But if all goes as Linus plans, there will be a much bigger discrepancy in the scale of systems in the future. And this will just hurt the performance of Linux on either of these systems.

    For instance, does it make any sense to have red-black tree virtual memory areas on an embedded chip running 2 processes? Or should the scheduler be as simple as it is for a machine with 16 processors?

    I am afraid Linus will hold back scaling Linux up. He even mentions that one will only be able to use a modified (non-standard) version of Linux if you want to run on 64 processors or more. It is true that in order to scale, you need to make things more complicated and possibly slow for single processor machines, but this should be done. I mean, how many 386's are there running Linux nowadays? How about in 5 years?

    I would bet money that there will be many 64-processor machines out there in 5-10 years. Moore's Law is going to give out eventually on the single processor and the only way to move will be in the parallel direction. Linux should be prepared for this.

    -tbd
  • Most current keyboards have keys that traditional Unix editors simply ignore. I'm used to Home, End, Del, function keys, etc., and I much prefer pressing two keys to invoke a command rather than six.

    Flame me as much as you like but vi(m) as well as emacs, joe, etc are predominantly for masochists, gurus or not.
  • But when you think about it, it's not very surprising that the ease of porting depends most of all on how good your design is, even if you designed it with only one platform in mind.

    One of the troubles with M$ products is that marketing reasons can take over purely technical ones. One such decision (I read it somewhere here) was to move the video driver into kernel space to make the GUI run faster in NT 4.0. The other obvious example is the attempt to bury the web browser as deep as possible in the OS only to limit the market share of a rival product.

    I am aware this is off topic, but please post more examples if you know of any. The Windows user lives with the undying hope that the next version will finally get rid of the bugs and become stable. But it would be good Linux advocacy if we were able to show that they are consistent in sacrificing reasonable solutions for the sake of greater revenue.

    Linus talks about how Linux will inevitably be replaced by another OS once when the hardware evolves enough. But at the same time he tries to ensure that Linux is designed in such a way that it lasts as much as possible. M$ on the other hand don't need an OS which lasts more than 3 years because how else could they convince you to buy their next one if you are comfortable with the current. 'But please, try our new one. It's not only richer in features but we got rid of the bugs. Really! This time for sure!'
  • There are a few other texts worth reading from Open Sources too. I would especially like to point out Larry Wall's essay, which can be found online at http://kiev.wall.org/~larry/onion/onion.html [wall.org]. Also worth a read is Bruce Perens' article, which unfortunately doesn't seem to be reprinted online anywhere.
  • .. that Linux was written originally for the Intel platform, but now it's more widely ported than WindowsNT.

    Linus does a good job of being modest, but bashes Microsoft while he is doing it :)

    And we shouldn't forget to thank RMS and the FSF for gcc.
  • Great review! My rambling thoughts...

    Especially interesting from an engineering pov is the separation of interfaces from modularisation. Writing the OS in C and basing "portability" on the portability of the compiler is a cool idea.

    But, I suggest that you do need "interfaces", otherwise how can ppl use the devices? The "interface" in Linux is the /proc + /dev file systems. So, Linux is actually using "low-level" interfaces, ie interfaces which are "thin" as opposed to "thick" (example, glide vs DirectX).

    In effect, we have a trade-off against "portability" towards usage. This, I agree with totally.

    Cheers /.ers
