Fork the Linux Kernel?

Joe Barr writes "Fork the kernel? Are you crazy? A blog entry on InfoWorld.com urged the Linux community to fork the kernel into desktop and server versions because, according to the author, all Linus Torvalds cares about is big iron. Sorry, but that's both wrong and stupid."
  • by tekiegreg ( 674773 ) * <tekieg1-slashdot@yahoo.com> on Tuesday September 18, 2007 @10:05AM (#20652581) Homepage Journal
    If you want to fork the Linux Kernel, there's absolutely nothing stopping you from doing it yourself. Wanna tune a version just for desktop or server? By all means, just adhere to the GPL. Your attempt at forking might even get some support from the community, though I'd think Linus's blessing on such a fork would mean something...
    • by MightyMartian ( 840721 ) on Tuesday September 18, 2007 @10:14AM (#20652809) Journal
      Or, alternatively, you could just custom compile the fucking thing to take out the "big iron" if that's what you want. I frequently custom compile kernels, particularly when I'm putting Linux on older hardware.

      There's nothing quite like the grand proclamations of the idiots.
      • No you can not (Score:3, Interesting)

        by iamacat ( 583406 )
        Putting a bunch of #if 0's into complex, bloated code doesn't make it slim and efficient. Statements elsewhere still make assumptions about one of 1000 things happening rather than one in 10. Slow, scalable algorithms are used rather than lean but limited ones. make config is not going to turn your Linux into FreeDOS.
        • Re:No you can not (Score:4, Interesting)

          by Midnight Thunder ( 17205 ) on Tuesday September 18, 2007 @10:53AM (#20653567) Homepage Journal
          Putting a bunch of #if 0's into complex, bloated code doesn't make it slim and efficient. Statements elsewhere still make assumptions about one of 1000 things happening rather than one in 10. Slow, scalable algorithms are used rather than lean but limited ones. make config is not going to turn your Linux into FreeDOS.

          Another approach is to use an object-oriented model, so you just include the implementation you need for the specific interface or class. I believe Darwin (the kernel used by MacOS X) already uses such an approach for some things?
          • Re:No you can not (Score:4, Insightful)

            by 644bd346996 ( 1012333 ) on Tuesday September 18, 2007 @10:59AM (#20653715)
            The linux kernel already does this, with modules that can be loaded and unloaded at runtime. Whole subsystems (things like SCSI and DRI) can be loaded on demand. You can also enable or disable kernel preemption at compile time, and you can swap out I/O schedulers at run-time.

            However, the modular approach can have some overhead of its own (though not as much on linux as on darwin). If you really need a small kernel, you can actually disable loadable module support at compile-time, if you know exactly which drivers you need.
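            To make that concrete: a loadable module is just compiled code with registered entry and exit hooks, which is why whole subsystems can come and go at runtime. A minimal sketch in C (the module name and messages are illustrative, not from any real driver):

                #include <linux/init.h>
                #include <linux/module.h>

                MODULE_LICENSE("GPL");

                /* Runs when the module is loaded (insmod/modprobe). */
                static int __init example_init(void)
                {
                        printk(KERN_INFO "example: loaded\n");
                        return 0;
                }

                /* Runs when the module is unloaded (rmmod). */
                static void __exit example_exit(void)
                {
                        printk(KERN_INFO "example: unloaded\n");
                }

                module_init(example_init);
                module_exit(example_exit);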
        • Re:No you can not (Score:5, Insightful)

          by 644bd346996 ( 1012333 ) on Tuesday September 18, 2007 @10:53AM (#20653579)
          Have you got any examples where there is significant overhead that can't be removed with a make config but could be removed with a specific, less scalable algorithm that isn't available in the kernel source?

          I'm pretty sure your comment is mostly BS. The vanilla kernel source includes a lot of configuration options for embedded systems. Low on RAM? Turn off CONFIG_BASE_FULL to use several smaller, slower data structures. Don't have swap space? Turn off things like CONFIG_SHMEM. Using uClibc? For now, you might as well throw out CONFIG_FUTEX as well.
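          To illustrate how a single option like CONFIG_BASE_FULL changes data structures: disabling it sets CONFIG_BASE_SMALL to 1, and sizing constants branch on that at compile time, so nothing of the unused variant reaches the binary. A simplified sketch of the pattern (the constants mirror include/linux/threads.h, but treat them as approximate):

              #ifndef CONFIG_BASE_SMALL
              #define CONFIG_BASE_SMALL 0   /* normally set by Kconfig */
              #endif

              /* Compile-time sizing: no runtime checks survive into the binary. */
              #if CONFIG_BASE_SMALL
              #define PID_MAX_DEFAULT 0x1000  /* smaller tables, less RAM */
              #else
              #define PID_MAX_DEFAULT 0x8000  /* larger tables, more scalable */
              #endif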
          • Re:No you can not (Score:5, Informative)

            by recoiledsnake ( 879048 ) on Tuesday September 18, 2007 @11:17AM (#20654079)

            Your examples totally miss the point. The CPU scheduler is a *lot* more crucial to desktop performance than swap space, memory config etc. etc.

            Have you even been keeping up with the whole CPU scheduler in the kernel issue that the article mentions?

            The whole point is that the CPU scheduler is NOT modular and you cannot change its behavior much by changing kernel options. Con (along with someone else) made patches to make it modular, calling it plugsched, but it was nixed from getting into the kernel by Linus, who said something along the lines of "The scheduler is not something you see frequent changes in."

            Con wanted it so that desktop users could easily plug his desktop-centric scheduler into the kernel. For a lot more details, read here [apcmag.com].

            • Re:No you can not (Score:5, Informative)

              by 644bd346996 ( 1012333 ) on Tuesday September 18, 2007 @11:31AM (#20654391)
              You're missing the point. A pluggable scheduler is not necessary for desktop usage, because nobody (not even Con) has been able to come up with a scheduler that is significantly better than CFS for desktop usage, except by doing things that amount to hard-coded nice levels. All of the meaningful performance improvements have made it into the default scheduler.
              • Re:No you can not (Score:5, Insightful)

                by MarcoAtWork ( 28889 ) on Tuesday September 18, 2007 @12:12PM (#20655311)

                except by doing things that amount to hard-coded nice levels. All of the meaningful performance


                meaningful according to whom? And desktop users couldn't care less about 'hard coded nice levels' if it means their 3D games and/or X apps work better. Yes, I know this is anathema to the Linux developers, where only super-perfect code is supposed to go in; however, if this supposedly super-perfect code doesn't meet desktop users' needs as well as hacks do, well, I'd be all for giving desktop users as many hacks as they want/need (as long as this could be changed via either a pluggable architecture or a difference in make config).

                As much as Linus has done a great job of making Linux a great server-side OS, if he's not willing to make compromises to make the desktop faster (because it's either too 'hacky' or it will cause issues for big iron, which is what pays the devs' bills), maybe it IS time to fork under the stewardship of somebody with desktop users' needs more at heart. If companies like, say, NVIDIA or Adobe paid a kernel developer to make Linux better on the desktop, this is what would probably happen, IMHO.

                I don't think a fork would be the end of the world; fork it and let the best survive. If a year from now we have a 'server linux' kernel and a 'desktop linux' kernel, so be it; if instead the 'desktop linux' project flounders due to minimal speed improvements and so on, well, so be it as well. The vast majority of patches/changes would apply to both the same way, so I don't see this causing issues or slowing development; if anything, maybe people could spend less time flaming on LKML and more time writing code.
                • by shmlco ( 594907 )
                  I hear what you're saying, but the real question is whether or not the gain balances out the pain. Assuming, and that's a big assumption, that there's some improvement to be had, the question is: how much?

                  Let's assume that you fork the kernel, tweak it to meet "desktop users' needs", and find that your real-world improvements offer no significant advantage. So what if you get an extra FPS in Quake? Would that really be worth all of that effort?

                  Personally, I think all of the effort on eking out the last iota
                  • by MarcoAtWork ( 28889 ) on Tuesday September 18, 2007 @01:26PM (#20656807)

                    whether or not the gain balances out the pain


                    what's the pain, really? Business will continue as usual on LKML etc.; there will just be a separate tree, handled by somebody interested in this, which will accept 'desktop patches' and will also integrate most, if not all, of the mainline patches.

                    So what if you get an extra FPS in Quake


                    And why shouldn't desktop users get that extra FPS? Desktop users couldn't care less that getting an extra FPS in Quake will lower some Oracle benchmark by 50%. Also, what if, by really messing things up for databases or network loads and by hardcoding specific scheduler behaviour for the X binary, you could make xorg 50% more responsive? No way would this go into a mainstream kernel, but I bet a lot of users would run it quite happily if they could.

                    Personally I'd rather have a system with more internal checks and layers to ensure stability and to protect the kernel from hacks and attacks.


                    I am sure there are people that feel the same way you do, maybe you could consider a fork yourself? ;)
                    • Re: (Score:3, Informative)

                      by maraist ( 68387 ) *
                      and why shouldn't desktop users get that extra fps?
                      Ok, let's return to reality for a second here.

                      FPS games are SINGLE-THREADED! Therefore schedulers are meaningless. They do not perform disk I/O - they pre-load the entire level into memory, so thread contention with the swapper/flusher daemons is not an issue. They use direct-to-video-frame-buffer operations, so socket calls to X are meaningless. They make very few system calls (aside from the calls to video drivers).

                      If you consider that a scheduler will gi
                • Re: (Score:3, Informative)

                  by SL Baur ( 19540 )

                  As much as Linus has done a great job of making Linux a great server-side OS, if he's not willing to make compromises to make the desktop faster (because it's either too 'hacky' or it will cause issues for big iron, which is what pays the devs' bills)

                  It's just not true that Linus doesn't care about the desktop. See http://www.ussg.iu.edu/hypermail/linux/kernel/0708.2/1530.html [iu.edu] and http://www.ussg.iu.edu/hypermail/linux/kernel/0708.2/1893.html [iu.edu] .

                  Think of the (Linus') Children!

                  • Re: (Score:3, Interesting)

                    by MarcoAtWork ( 28889 )

                    It's just not true that Linus doesn't care about the desktop

                    I didn't say he didn't care, but if something comes up that will increase desktop performance by x% and kill server performance by y%, it won't go in, as far as I can see. Linus wants the kernel to get better as a whole, of course, but this is a lot harder than having a separate branch where the focus shifts from 'making things better' to 'making things better FOR THE DESKTOP even if it means a significant lowering of server/big iron performance'.

                    • Re: (Score:3, Interesting)

                      by MarcoAtWork ( 28889 )
                      I am not volunteering to be a maintainer: I don't have any interest in Linux on the desktop nowadays since, although I use it at work both as a server OS and as a development OS, at home I am running XPSP2 and have been for a couple of years, after many years of running Linux exclusively (since the 1.1.98 days, to give you an idea). I am talking from the perspective of somebody who has been semi on the sidelines of this, and hence I don't have any specific examples to contribute regarding my assertion, it was just
              • Re:No you can not (Score:5, Informative)

                by recoiledsnake ( 879048 ) on Tuesday September 18, 2007 @12:35PM (#20655809)
                That is a gross over-simplification of what happened and almost qualifies as revisionist history and brushing things under the carpet. Let me summarize my understanding of what happened, and someone please correct me if I am wrong.

                Con Kolivas had been shouting from the rooftops about slow desktop performance and was submitting feedback and bug reports. One of the kernel devs apparently said "I do not notice the issue on my quad-core machine with 4GB RAM". Rightly or wrongly, this led Con to believe that the kernel devs do not care about desktop performance and only give priority to issues that big corporates complain about.

                In true open source style, he took it upon himself to learn kernel programming and released a whole set of -CK patches and various versions of benchmarking tools and schedulers. On the other side, Ingo Molnar was the maintainer of the scheduler portion of the kernel and maintained that the O(1) scheduler (and the one before it?) was good enough and had no problems. Con conclusively started proving this wrong with his benchmarks. At this point, everyone assumed the -CK branch would be merged into the kernel at some point, and Linus says he had been considering it.

                At some point, Ingo started making his own scheduler, which later evolved into the Completely Fair Scheduler. A number of posts claim that it was kind of a rip-off of the ideas behind Con's scheduler, with which it was in a race to get included in the kernel. Then Linus decided to include CFS in the kernel instead of Con's scheduler. The reasons he gave were that Con thought SD was perfect, that he ignored and flamed the users on the -CK mailing list, and that he (Linus) was far more comfortable working with Ingo since he knew him well. He also admitted that he might have formed this opinion from a single incident on the mailing list, and that he didn't have the time to follow the -CK mailing list.

                Some people on Con's side on the LKML tried to explain this by saying that the single incident was in response to a troll who submitted faulty bug reports and ignored the reasons why they were rejected, and that Linus was playing favorites. Con couldn't take the non-inclusion of -CK and plugsched (which would have given users a clean way of using a custom scheduler) and quit kernel development totally.

                The latest twist in the story was reported on Slashdot here [slashdot.org]. The gist of it was that another hacker (Roman Zippel) was trying to work on CFS. He had asked questions about what some parts of the code did, made some patches that considerably simplified the code, and mathematically proved that his patches made things better. In response, Ingo came out with a big patch that ripped out the code that was questioned and included Roman Zippel's ideas (another rip-off?) with hardly any discussion and only a tangential acknowledgement of including his changes. Roman complained that talking in patches without explanation is detrimental to collaborative OSS development.

                • Re: (Score:3, Insightful)

                  I'm quite familiar with all the CFS vs. SD issues.

                  My point was that no forking or pluggable schedulers are necessary because all the important ideas, if not the actual code, from Con's SD (and more recently, Roman's RFS) have been incorporated into the mainline scheduler.

                  Forking would only be justified if the work done by the likes of Con and Roman wouldn't be accepted into the mainline kernel. Even though their code hasn't been merged, the kernel has undeniably benefitted from their design work (which is f
                  • Re: (Score:3, Insightful)

                    Using pluggable schedulers would only be justified if the single scheduler were forced to incorporate some fundamental tradeoffs that weren't acceptable to all users. That obviously hasn't happened: CFS and SD are both better than the previous O(1) scheduler all-around. Neither scheduler sacrifices desktop performance for server performance, or vice versa.

                    I think the point of a pluggable scheduler would be so that *future* enhancements can be tested, benchmarked, tried out and deployed without either blessings from kernel devs or messy patches that need to be kept current between releases of the mainline. Is there no chance of a better scheduler than CFS coming along at all? The argument makes sense only if the pluggable scheduler causes excessive computational or administrative overhead.

                    • Re: (Score:3, Informative)

                      by Wdomburg ( 141264 )
                      One of the features of CFS is that the scheduler policy is pluggable. As per Ingo:

                      One goal behind the CFS changes was to remove the
                      need for massive scheduler rewrites and to ease prototyping. Somehow
                      there are lots of people who really love to hack the scheduler,
                      those weirdos ;-)
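                      For context, "pluggable" here means the scheduler core dispatches through a per-policy table of function pointers, so a new policy supplies a table instead of rewriting the core. A simplified sketch of that pattern (fields abridged and signatures approximate relative to the kernel's struct sched_class):

                          struct rq;          /* per-CPU runqueue (opaque here) */
                          struct task_struct; /* a process/thread (opaque here) */

                          /* Each scheduling policy fills in one of these; the core
                             scheduler only ever calls through the hooks. */
                          struct sched_class {
                                  const struct sched_class *next;  /* lower-priority class */
                                  void (*enqueue_task)(struct rq *rq, struct task_struct *p, int wakeup);
                                  void (*dequeue_task)(struct rq *rq, struct task_struct *p, int sleep);
                                  struct task_struct *(*pick_next_task)(struct rq *rq);
                                  void (*task_tick)(struct rq *rq, struct task_struct *p);
                          };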
                • by Anonymous Coward

                  Very interesting! This "recoiledsnake" guy (parent poster), up to this point, was a thinly masked Microsoft apologist:

                  He was slamming OpenOffice [slashdot.org]

                  He was posting a Microsoft explanation for the Windows stealth-update scandal [slashdot.org]

                  He was flaming Apple users [slashdot.org]

                  He was downplaying an article about a boot sector virus on a Windows Vista laptop [slashdot.org]

                  And now, after a long history of Microsoft-centric and Microsoft-friendly comments, he is suddenly pretending to be an expert in Linux kernel matters, giving a deceptive a

                  • Re: (Score:3, Interesting)

                    Note: I am only replying to this AC post because it has been modded up, and because of it my +5 Informative GP has been reduced to +4 by a "- troll" moderation.

                    Very interesting! This "recoiledsnake" guy (parent poster), up to this point, was a thinly masked Microsoft apologist:

                    Those who don't follow the Slashdot groupthink == Microsoft apologist?

                    He was slamming OpenOffice

                    Pointing out OO's deficiencies is slamming it? Is it a perfect piece of software?

                    He was flaming Apple users

                    I was correcting a mistake in the parent's post.

                    He was downplaying an article about a boot sector virus on a Windows Vista laptop

                    Read again. I did nothing of that sort and was adding relevant information to the parent post.

                    And now, after a long history of Microsoft-centric and Microsoft-friendly comments, he is suddenly pretending to be an expert in Linux kernel matters...

                    You mean one cannot make Microsoft-cent

                • Re: (Score:3, Informative)

                  by Wdomburg ( 141264 )
                  Roman wrote a poorly documented monolithic patch. Ingo requested that he split it into more manageable pieces isolating the various changes. Roman didn't, so Ingo did, crediting him in the description and on all the segments based on Roman's ideas. How is that wrong?

                  (On a side note, it was hardly a large patch. The bulk of it was removing dead code.)
        • Re: (Score:3, Insightful)

          by Fizzl ( 209397 )
          Wish I had mod points.
          People should just read this comment before saying anything.

          (No, I will not resort to changing the subject to MOD PARENT UP, because that annoys me :))
        • Re: (Score:3, Insightful)

          by Lonewolf666 ( 259450 )
          Slow, scalable algorithms are used rather than lean but limited ones.

          If this is true, it is actually a good idea. Today's personal computers have a lot in common with high-end machines from 10 years ago.
          Multiple processors? Check.
          Gigabytes of RAM? Check.
          Harddisks with hundreds of Gigabytes? Check.

          And I guess the trend will continue, so what belongs in the big iron of today will be fine for tomorrow's personal computers.
        • by Lumpy ( 12016 ) on Tuesday September 18, 2007 @11:14AM (#20654017) Homepage
          Really? I'd better tell the guys down in R&D that the 1.2 meg Linux install we use on the embedded box devices does not exist and can't work.

          Thanks for letting us know before we shipped this thing; it would have been embarrassing when the Linux install exploded and became huge upon use.
        • Re: (Score:3, Informative)

          by Bogtha ( 906264 )

          Putting a bunch of #if 0's into complex, bloated code doesn't make it slim and efficient.

          #ifs are preprocessor directives. They are evaluated at compile-time and have absolutely no effect on efficiency when the kernel is running. Sorry, but you're going to have to look elsewhere for your "bloat".
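          A tiny complete example of why (compile it with and without the disabled block and the binaries are identical):

              #include <stdio.h>

              int main(void)
              {
              #if 0
                      /* The preprocessor deletes this before the compiler
                         runs: zero bytes in the binary, zero cycles at runtime. */
                      puts("big iron path");
              #endif
                      puts("desktop path");
                      return 0;
              }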

    • by leuk_he ( 194174 ) on Tuesday September 18, 2007 @10:14AM (#20652827) Homepage Journal
      Actually, a lot of forks do exist and are supported. There are all kinds of real-time, low-latency and security patches floating around that get a lot of attention. Most big vendors do not ship an exact copy of the version that Linus creates, but add some patches/modules that they think their actual users need.

      One day they may get merged into the mainline kernel, or maybe their features will be made obsolete by features that Linus accepts.
    In fact, the further beauty of the kernel is that you can compile it how you like. For instance, Red Hat Enterprise kernels, whilst based on the same source code as, say, the OpenMoko mobile phone's, are compiled with different options and different modules. No one runs the Linux kernel with "everything"; in some cases there are mutually exclusive options (like the choice of scheduler).
    • by Anonymous Coward
      In Linux, if the project is run badly, there are calls to:
      - Fork the project and do it "right"[TM]

      In Windows, if the project is run badly, there are calls to:
      - Knife the new version and fix the damn bugs in the old version

      In MacOS, if the project is run badly, there are calls to:
      - Spoon the new version. You just don't understand the superiority of the new "Mac way"[TM] because you're stuck in the "Windows Mindset"[TM]
      If you want to fork the Linux Kernel, there's absolutely nothing stopping you from doing it yourself.

      And in further news, water flows downhill.

      The question is not whether you can fork the kernel, the question is whether you should. On one side, you have hope that this would revive progress in desktop Linux. On the other, you have fear that this would create conflict and duplication of effort.

      My answer? It just doesn't matter. Yes, desktop Linux is being neglected. But it's not because LT has develop

      • Re: (Score:3, Informative)

        TFA cites Con Kolivas's retirement from kernel work as a sign that desktop Linux isn't healthy. But in fact the bad sign was that Con Kolivas was ever the leading hacker for desktop kernel features. Because nobody ever paid him for his work on the kernel. Indeed, he's not even a working programmer! He's a medical doctor who programs as a hobby.

        That pretty much sums up the status of desktop Linux: it still belongs to hobbyists at a time when server-side Linux is an important commercial product. Unless and until you can change that, it doesn't matter who controls Linux kernel development: the needs of Big Iron will prevail.

        I think you've hit the nail right on the head, and you state an important aspect of open source software that linux fanboys don't seem to grasp. There will never be a widespread, successful "desktop linux" until it becomes an economically viable necessity for someone or some group of people with cash and investors. Right now, what impetus is there in investing all of this effort into diverting from the canonical linux kernel? Microshaft still dominates the desktop market, they've got infection deals with

  • Meh (Score:5, Funny)

    by paullb ( 904941 ) on Tuesday September 18, 2007 @10:05AM (#20652595)
    I'd rather spoon it
  • sure (Score:2, Funny)

    Why not? It made Microsoft plenty of money...err
  • Why is it stupid? (Score:4, Insightful)

    by xtracto ( 837672 ) on Tuesday September 18, 2007 @10:07AM (#20652631) Journal
    I cannot see why it is a stupid idea. Forking the kernel into desktop and server versions would mean that each specific kernel is optimized for its task and that distribution makers have just a subset of the huge kernel to care about when creating their distributions.

    A server is a really different beast than a desktop, and having this "all-in-one" kernel means that the operating system gets bloated with a) desktop-specific features when running a server and b) server-specific features when running a desktop.

    I think that a controlled fork in the linux version control tree might be beneficial.
    • Re:Why is it stupid? (Score:5, Informative)

      by Gordonjcp ( 186804 ) on Tuesday September 18, 2007 @10:14AM (#20652821) Homepage
      A server is a really different beast than a desktop, and having this "all-in-one" kernel means that the operating system gets bloated with a) desktop-specific features when running a server and b) server-specific features when running a desktop.

      Perhaps the source code does, but there's nothing stopping you from leaving out all the server-specific stuff from your desktop kernel when you compile it. If you're producing a server-grade OS, leave off the desktoppy stuff. Simple.
      • by quanticle ( 843097 ) on Tuesday September 18, 2007 @10:29AM (#20653103) Homepage

        Perhaps the source code does, but there's nothing stopping you from leaving out all the server-specific stuff from your desktop kernel when you compile it.

        If I understand correctly, that's exactly what Ubuntu does with their "desktop" and "server" versions. The desktop versions have certain modules and patches that the server versions do not, and vice versa.

      • Re: (Score:3, Informative)

        by ForumTroll ( 900233 )

        Perhaps the source code does, but there's nothing stopping you from leaving out all the server-specific stuff from your desktop kernel when you compile it.
        This is NOT true and it keeps getting repeated here. Compiling the kernel does not allow you to change algorithms that are performance bottlenecks for desktop systems. Unless you're applying patches, merely recompiling the kernel offers very little in terms of optimizing it for the desktop.
        • by 644bd346996 ( 1012333 ) on Tuesday September 18, 2007 @11:10AM (#20653929)
          Which algorithms are bottlenecking your desktop? Is it the I/O scheduler? You can swap between one of four choices, at runtime.
          Is it the CPU scheduler? If so, you're a liar. Nobody has produced repeatable benchmarks that show a significant shortcoming in CFS for desktop and gaming use.
          Is the memory allocator really bad for your workload? Try using the new SLUB allocator instead of the older SLAB allocator.
          Is the system not as responsive as you want? Turn on forced preemption and set the tick speed to 1000Hz.
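          As a concrete example of that runtime swap, the block layer exposes the elevator choice through sysfs, and writing a name selects it on the fly. A minimal sketch in C (needs root; the device name sda and the availability of the cfq elevator are assumptions about your system):

              #include <stdio.h>

              int main(void)
              {
                      /* Equivalent to: echo cfq > /sys/block/sda/queue/scheduler */
                      FILE *f = fopen("/sys/block/sda/queue/scheduler", "w");
                      if (!f) {
                              perror("fopen");
                              return 1;
                      }
                      fputs("cfq", f);
                      return fclose(f) == 0 ? 0 : 1;
              }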
    • by Otter ( 3800 )
      I haven't used non-x86 Linux in a couple of years and don't know if this is still the case, but it used to be that other architectures had de-facto "controlled forks". PowerPC, for example, was officially supported in the Linus kernel but pretty much everyone used a separate fork in which code was developed and slowly copied over to the main source tree.
    • Forking the Kernel in desktop and server forks will mean that each specific kernel is optimized for such tasks and that the distribution makers have just a subset of the huge kernel to care about when creating their distributions.

      Actually, I have a better idea! Let's make it so, at runtime, folks can unload or load tasks - heck, let's call them modules - from the kernel. Even better yet, if only there were a way to control various tunings and constants in the kernel while the computer is turned on!

      Sa

    • by nomadic ( 141991 ) <nomadicworld@ g m a i l . com> on Tuesday September 18, 2007 @10:26AM (#20653029) Homepage
      I can not see why is it a stupid idea.

      Me neither, especially considering that's all the frothy-mouthed zealots tell you to do when you criticize the kernel developers.

      Linux user: I like Linux but I think the kernel should incorporate feature X.
      Linux zealot: If you don't like it, fork the kernel!
      Linux user: I think the kernel developers aren't open enough to contributions.
      Linux zealot: If you don't like it, fork the kernel!
      Linux user: I think the kernel is too focused on big iron.
      Linux zealot: If you don't like it, fork the kernel!
      Linux user: Ok, I guess I'll fork the kernel then.
      Linux zealot: OMG YOU CAN'T FORK THE KERNEL!!!
    • by WindBourne ( 631190 ) on Tuesday September 18, 2007 @10:31AM (#20653145) Journal
      Really, the distros should do the fork, and they actually do. While most ship generic compiled kernels, others have kernels compiled based on what is desired: server or desktop. That solves the issue.
    • by DaleGlass ( 1068434 ) on Tuesday September 18, 2007 @10:32AM (#20653151) Homepage
      Because the distinction between server and desktop is rather fuzzy these days. What could you leave out of the desktop OS?

      RAID? Doubtful with it being so affordable these days.
      ECC RAM? That can be had on many boards as well.
      Support for SCSI tape drives? Does my box suddenly turn into a server if I get a cheap drive on ebay?
      Ok, how about say, optimizing the desktop version for latency and the server version for throughput? Problem with that is that there exist server tasks that want low latency.
      Years ago you'd say "remove SMP support, nobody uses that". Not so these days.

      What could you leave out of the server?
      Support for sound cards? What if it's a server that records audio?
      Support for video cards? What if the server uses it for computation (rare, but possible)?

    • Re: (Score:3, Informative)

      by walt-sjc ( 145127 )
      It's already done. ALL THE TIME. Ubuntu, Redhat, Suse/Novell all maintain their own version of the base kernel. There is NO reason why some other person (or group) can't maintain his "desktop tuned" kernel. He would be wise to re-sync with the base kernel every now and then unless he wants to start maintaining all the drivers....

      The objection is that maintaining the patch is a PITA. IMHO it's a lot easier to just maintain a patch set than an entire kernel, but FORK AWAY!

      All this said and done, I ha
    • "They" don't agree with the new scheduler; face it this is where most of the divide is; so "they" want their own version but they know damn well that unless they have Linus's blessing its dead in the water. As such expect attempts to guilt him. As such see attempts to deflect attention from their real peeve by suggesting 'multiples' instead of just their way and his way.

      Arrogant from the standpoint that since they can't have their way and cannot get support for their own on their own they want Linus to do
    • Re: (Score:3, Informative)

      A server is a really different beast than a desktop

      No, it isn't, IMO. It's all software that does the same things: open, read, mmap, copy, etc. Different software names, sure, and different workloads, but there's not so much difference. The kernel doesn't care if the process reading a file is apache or firefox; it just tries to read it fast. It's been a long time since the desktop was "stupid" software. Desktop software needs performance just as much as a server does.

      This (stupid) idea of splitting the kernel see
    • by vdboor ( 827057 ) on Tuesday September 18, 2007 @11:25AM (#20654255) Homepage

      Call me stupid, but the Linux desktop already crawls.

      There used to be a time when I could download 5 shared files, burn a CD and watch a DivX movie at the same time. That was with Slackware 9.0 and Linux 2.4.20.

      Nowadays it takes my browser 2 seconds to open a *tab*, and another 2 seconds per website. This happens whenever there is continuous I/O activity in the background; after the I/O completes, everything goes back to normal. Bottom line: every serious I/O activity makes the desktop crawl.

      It's still the same machine (an AMD 1800, DMA enabled), but the interactivity my Linux system used to have is unmatched by recent kernels. The problem is that too many commercial developers care about server performance alone, or test desktop performance on their quad-core RAID-array configurations. Patches also get rejected when they affect server performance.

      I'm honestly not surprised people want a change here, or even start suggesting a fork.

    • Re: (Score:3, Interesting)

      by Burz ( 138833 )
      It is not stupid.
      1. For one, there's no attempt to provide a stable ABI for 3rd-party drivers, so users must contend with their video card not working after upgrading the kernel.
      2. Same goes for all kinds of drivers, like VMware, OMFS for my Rio Karma, and some Wifi modules. The only accommodation has been the new userspace driver interface for low-performance devices... far too little too late.
      3. The sound architecture is a failure: even with OSS fully deprecated, there are still various sound servers the user mus
  • Fork? (Score:5, Insightful)

    by EggyToast ( 858951 ) on Tuesday September 18, 2007 @10:07AM (#20652633) Homepage
    It's a blog post, so it's not like it's going to happen, but I don't see how forking the kernel would do anything other than lead to distribution craziness. Arguably that's Linux's biggest hurdle for new people -- deciding which distribution to get. And if people are checking out Linux for workload purposes, forcing them to decide whether to get a server distro or a home distro, and making that distinction at the kernel level? Buh?

    Generally, if it's good enough for enterprise, it's good enough for home use. And things that are useful for desktop Linux are often utilized at the enterprise level anyway. So yeah, it's just a blog post; I'm not sure anyone will take it seriously.
    • Re: (Score:3, Insightful)

      by bogaboga ( 793279 )

      so it's not like it's going to happen, but I don't see how forking the kernel would do anything other than lead to distribution craziness.


      We already have "distribution craziness", with each distro placing vital system files in different places...and sometimes applications requiring different versions of a particular file in order to function. Man, it's crazy already.

  • by trolltalk.com ( 1108067 ) on Tuesday September 18, 2007 @10:09AM (#20652681) Homepage Journal

    ... but only in the sense that it is customized for different purposes - mobile phones, desktops, servers, supercomputing clusters.

    Besides, most people's desktops are much more powerful than any server you could have bought years ago. With the cost of disks going down, there's no excuse for even home users to ignore the benefits of such "server" features as RAID.

    • by JoelKatz ( 46478 )
      Today's desktops were yesterday's servers. The technologies you find on the desktop today are the technologies you found only on servers a few years ago.

      To give you a simple example of why this is a retarded idea, consider SMP (support for multiple cores/CPUs). Five years ago, a rational desktop OS developer might have thought that SMP support should be dumped. It has significant overhead, and when would there ever be multiple-CPU desktops?

      Well, now.

      The same goes for all kinds of things that were once unheard of on the desktop.
  • Actually.... (Score:2, Interesting)

    Actually, it's been done before. Remember when we had a "stable" and an "unstable" pair? IMHO the idea of forking into desktop and server versions is a technical answer to a political problem with various developers' goals.
  • by xxxJonBoyxxx ( 565205 ) on Tuesday September 18, 2007 @10:09AM (#20652689)

    Despite all the warm, fuzzy talk of open source and community development, the fact remains that, at the kernel level at least, Linux is still controlled by a small group of elitist "prigs." Stick too close to the "approved" Linux path and you end up with a crappy desktop experience. Stray too far, and you risk having your customizations broken if/when the kernel team decides to take things in a new direction.


    Why is this even controversial? If you don't like the way things work, the beauty of open source is that you can fork the code at any point. So...quit whining ("prings"?) and good luck with your fork.

  • by athloi ( 1075845 ) on Tuesday September 18, 2007 @10:11AM (#20652721) Homepage Journal
    A different branch of distros for the desktop makes sense, but I'm not sure the kernel is what needs addressing.

    It makes sense for Linux to fork into two branches: a conservative one, aimed at maintaining what already works, and a wild-ass anarchist one, aimed at forging new and innovative technologies.

    I think what the original author was saying was that he/she would like the Linux community to fork into two branches, one thinking like desktop software (Windows XP is the best example) and another thinking like big iron, where Linux already has a presence but could learn a thing or two from *BSD.

    • Re: (Score:3, Insightful)

      It makes sense for Linux to fork into two branches: a conservative one, aimed at maintaining what already works, and a wild-ass anarchist one, aimed at forging new and innovative technologies.

      That's what they used to have with odd-number versioning. Problem is that cross-merging kept happening and the whole thing turned into a mess. Seems like what they do now (I'm not a kernel developer) is to do mini-forks to work on the new technologies and merge it back in when it works well enough. Sou

    • Re: (Score:2, Insightful)

      It makes sense for Linux to fork into two branches: a conservative one, aimed at maintaining what already works, and a wild-ass anarchist one, aimed at forging new and innovative technologies.

      I totally agree here. We need to bring back the odd-numbered branches for doing development work. I don't want to have to track down a specific sub-sub version if I want code tweaks, etc. The current system just means that the entrenched developers get to push their projects to the detriment of everyone else.

  • by Vanders ( 110092 ) on Tuesday September 18, 2007 @10:11AM (#20652739) Homepage
    There is no need to fork Linux into a "desktop" version. Projects like Syllable [syllable.org] already exist, and we re-use a fair amount of code from Linux, GNU and other OSS projects.
  • So? (Score:4, Insightful)

    by SatanicPuppy ( 611928 ) * <Satanicpuppy@nosPAm.gmail.com> on Tuesday September 18, 2007 @10:11AM (#20652745) Journal
    Shrug. Let 'em fork it. I doubt they'll be able to swing enough maintainers to seriously affect development on the main fork.

    One of the great strengths of open source is that it allows for competing code. If the new fork is better (I view this as unlikely) then I'll switch. I'm about what works.

    When the level of discourse falls to articles of faith and prejudice, it's not about what's best for the code anymore. It's about your personal ideology, y
  • by Kartoffel ( 30238 ) on Tuesday September 18, 2007 @10:12AM (#20652753)

    # Forking isn't necessary.

    options BIGIRON
    #options DESKTOP
    • by pohl ( 872 ) * on Tuesday September 18, 2007 @10:31AM (#20653149) Homepage

      At the moment I'm making this post, the parent post has been moderated "Interesting". I think "Insightful" or "Informative" would be more appropriate.

      What the parent poster is saying is that C pre-processor [wikipedia.org] flags already allow the same kernel source code to contain features for both server and desktop without resulting in any bloat or compromise in the resulting binary.

      Only those who don't understand C would fret about a "bloated" kernel in this context.

      Now given a binary kernel that contains both feature sets you would have a legitimate concern, because then there would certainly be a bevy of both bloat and compromises. But this is linux, after all. We have the source code -- so none of that matters.
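      To make the grandparent's "options BIGIRON" joke concrete: with a hypothetical BIGIRON flag (the name is the joke's, not a real kernel option; real ones are spelled CONFIG_*), one source file yields two different binaries, each free of the other's code:

          #include <stdio.h>

          #ifdef BIGIRON
          #define IO_QUEUE_DEPTH 1024   /* tuned for throughput */
          #else
          #define IO_QUEUE_DEPTH 32     /* tuned for latency */
          #endif

          int main(void)
          {
                  printf("queue depth: %d\n", IO_QUEUE_DEPTH);
                  return 0;
          }

      Build with "cc -DBIGIRON flavor.c" for the server flavor and plain "cc flavor.c" for the desktop one; the compiled output contains only the chosen constant.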

  • by craznar ( 710808 ) on Tuesday September 18, 2007 @10:12AM (#20652755) Homepage
    Linux has far too many varieties already; it makes mainstream hardware and software support almost impossible.

    And they want to fork the only consistent bit ?

    If they want to do a desktop version, it's time for the kernel developers to branch out into standardising Desktop libraries, desktops (KDE vs Gnome), devices, packages etc etc... so that we can have our 1000 versions of Linux and a single underlying version of Desktop Linux.

    Maybe then, Linux may make a dent in the world of Desktop Windows.
    • If they want to do a desktop version, it's time for the kernel developers to branch out into standardising Desktop libraries, desktops (KDE vs Gnome), devices, packages
      Yes, because desktop libraries, desktop environments, and packaging formats are something that the kernel developers need to worry about.

      If you want an OS that isn't patched together from many different sources, try a BSD.
  • memories... (Score:2, Insightful)

    by TheSHAD0W ( 258774 )
    I can sympathize with Mr. Barr; I remember when Linux natively ran on a 386SX with 16MB of RAM, and ran *well*. X? We don' need no steenkin' X! I think that, even if you stripped the current kernel to the bare bones, you'd have trouble running it in 16MB; it's been "spoiled" by too much cheap memory.
  • by somersault ( 912633 ) on Tuesday September 18, 2007 @10:12AM (#20652777) Homepage Journal
    The less segregation in the Linux world the better, at least until desktop Linux is better at coping with new versions of the current kernel line (e.g. nVidia graphics drivers needing recompiling when a new kernel version comes out, and that sort of thing). Having different forks of the kernel would eventually also lead to software that can only be run on one fork without modification, and that's not much use either. The less work involved in porting to different distros/platforms, the better IMO.
    Fact: the kernel is the core; everything else sits on top, no matter what: server, desktop, etc. Linux is doing well in servers, desktops and mobile devices because it has consistently provided a powerful and (read this, Microsoft bastards) functional operating system. I have friends with reasonably powerful laptops which choke on Windows bile, become soporific and lethargic, unresponsive and surly (like the dwarf). I run X windows with Fluxbox on some of our old servers fine. Splitting Linux is pointless and counterproductive.
  • by Creamsickle ( 792801 ) on Tuesday September 18, 2007 @10:14AM (#20652817)
    People who advocate this aren't necessarily stupid, just ignorant. The Linux kernel's flexibility is being taken to the limit, and people are forgetting the easiest way to improve performance for their particular rig: customize your kernel! You can add all the code in the universe, and then pick and choose the particular things you need or don't need.

    Say I run a 486/25 with 16 MB RAM as an IP masq router. The hard drive is an old IDE with 600 megs of space. I have two network cards, and that's about it. Do I need SCSI support? Do I need to support joysticks, X, Pentiums, AX.25, or anything else? No! I compile a kernel specifically to run the IP masq, and run it well.

    My P100 laptop, on the other hand, needs a bit more. I use it for packet, so I need AX.25. It uses PCMCIA, so PCMCIA support needs to go in. I use Seamonkey and the GIMP, so I need graphics. But my HD is not SCSI, so I yank out SCSI. My CPU is subject to the 0xf00f bug, so the workaround gets included. I brew a custom kernel, and boot time is a lot shorter.

    My big rig is an AMD X2. I need just about everything, as I have an Nvidia card for Quake4, a SCSI scanner, and a connection to my packet base station. I optimize compilation for the higher-end computers. I plan on getting a Mac Pro from Apple and putting SuSE on it. Again, by optimizing the options I optimize my system.

    Get the point? If you want a one-size-fits-all kernel, use Windows. If you want a kernel which can be adjusted for your particular and peculiar environment, use Linux and customize your kernel!
  • by MonGuSE ( 798397 ) on Tuesday September 18, 2007 @10:16AM (#20652851)
    The advantages gained by forking the kernel are minimal compared to the disadvantages. A lot of those advantages can be obtained by simply compiling a kernel based on your machine's hardware and computing needs. If someone forked it, they would then have to maintain two separate code bases, two separate patch bases, and a new naming scheme. Moreover, the main advantage stated, getting rid of bloat, comes down to compiled-in driver support, which means that in theory only a small subset of hardware would be supported; and most of the bloat he is speaking of comes from the GNU side of things and can easily NOT BE INSTALLED, or be uninstalled if necessary...
    Linux has no central repository, so the concept of forking Linux is meaningless. Linus' branch is considered "official" for historical and institutional reasons, not technical ones. Anyone can create their own branch and start incorporating patches, even pulling from others' branches. I believe this is exactly the reason why Linus switched to the new SCM system (Git).
  • I don't see the need (Score:5, Informative)

    by downix ( 84795 ) on Tuesday September 18, 2007 @10:16AM (#20652863) Homepage
    The only difference between a "server" build and a "desktop" build, kernel-wise, is in which components/modules you compile. Functionally, there is no difference. Same goes for Windows: the "desktop" and "server" kernels are fundamentally the same; it is only what you put on top of them that differentiates the two.

    Someone here does not understand the difference between a kernel and an OS.
  • Why not. (Score:3, Interesting)

    by LWATCDR ( 28044 ) on Tuesday September 18, 2007 @10:18AM (#20652889) Homepage Journal
    I think that right now the majority of development at the kernel level is server-based. That is only logical, after all, since the majority of paying Linux systems are servers. (By paying I mean paying their way.) The technical question is whether one scheduler can work well for both server loads and desktop loads. Is there an ideal scheduler that works everywhere? We know that isn't true when you are dealing with real-time systems, so is it true for the desktop?
    I don't think this is a dumb question; I just happen to think that currently there isn't a need to fork the kernel into server and desktop versions. I feel that most of the performance problems with Linux on the desktop are in X, not in the kernel, and that more work needs to be done on X to solve them.
  • by psbrogna ( 611644 ) on Tuesday September 18, 2007 @10:18AM (#20652891)
    I've generally found that "wrong and stupid" goes hand in hand with blogs. The easier it is to be heard, the lower the signal-to-noise ratio gets. It'd be nice if we could just taser them, but that's perhaps unconstitutional. :D

    Relevant quote: "Don't taser me! Ow! Ow! Ow!" - opportunistic journalist at Democratic National Convention
  • WTF? (Score:3, Informative)

    by hackstraw ( 262471 ) on Tuesday September 18, 2007 @10:20AM (#20652935)

    First, Linux is Linus' hobby that is kinda also a job. I've read somewhere that he said he is more proud of Linux being on a digital picture frame he bought for his wife than of having it on the top500 list.

    Second, AFAIK, the kernel is fine for both desktop and server stuff. There are compile options to optimize for each, and patches, etc. Linux on the desktop is difficult because it lacks a standard, good software installation system and GUI environment, among various other things. The kernel is fine for the desktop; there simply isn't software on top of said kernel to make it desktop friendly.

  • Forking the kernel would fork it indeed.

    .....


    [tumbleweed]

    ...


    I'll get my coat.

  • According to the article, this isn't about Linus, nor big iron. It's personal.

    Author Randall Kennedy depicts Con Kolivas, touted as the "champion of all things desktop centric," as "the victim of an ideological rift within the Linux community" who has given up on Linux because his scheduler patch has been rejected.

    I think that says it all. Of course, we'd have to wait until Randall replies to these accusations.
  • by n6kuy ( 172098 ) on Tuesday September 18, 2007 @10:25AM (#20653017)
    It's probably a serious concern!
  • by Minwee ( 522556 ) <dcr@neverwhen.org> on Tuesday September 18, 2007 @10:28AM (#20653085) Homepage

    What's this? Someone with a blog pulled a half-baked idea out of his butt, and then posted it where the entire Internet could see it? And some other people don't agree with him?

    That's amazing! An event of this magnitude only happens once in a billion femtoseconds! Why aren't we all paying more attention to this incredible discovery?

  • by Bogtha ( 906264 ) on Tuesday September 18, 2007 @10:36AM (#20653247)

    Okay, so somebody made a stupid blog post. Why submit it to Slashdot?

    • Easy (Score:3, Insightful)

      by NaCh0 ( 6124 )
      Slashdot and linux.com are owned by the same company. Joe Barr submitted the slashdot article and also wrote the rebuttal blog. He can look smart and double the ad revenue all in one story.
  • by MORB ( 793798 ) on Tuesday September 18, 2007 @11:47AM (#20654763)
    See subject.
    The linux development model is built on forking anyway.
    Trying to fork linux is like trying to burn fire.
