Linux Software

The Silent Kernel Platform War?

iJosh asks: "Recently I decided to be hip and cool and update to the latest Linux kernel (v2.4.1). Since making this decision I've downloaded and tried to compile the official source from Linus and crew on my PowerMac 7300, only to run into errors for the PowerMac PCI controller. I took this up with Paul Mackerras, maintainer of the PPC kernel, and his response was quite interesting, to say the least, and it got me thinking. He basically says that Linus is ignoring the patches from the people working on the PPC side of the kernel, and that they are keeping their own tree so people are not left stranded with kernels that will not work. My question really comes down to this: Is the Linux kernel forking away from PowerPC? Is this happening because of issues regarding OS X and the possibility of many users jumping ship from LinuxPPC upon its release? Or is this some kind of quiet platform war among the major kernel developers?"
  • Well, yes, people objected to SourceForge forwardings to the -dev list. Hence Dave Wolfe (the guy running all the lists) is creating a list for them. So yes, I do think you're a bit too cynical. Who have you been reporting these bugs to? The one you're referring to is rather well known (Paul knows about it, Cort knows about it, I think everyone knows about it). The problem is finding time to yank yet another hardware bug fix out of Darwin. What machine do you have, anyway? And dare I ask what "2.1.24" kernel it was? That tree took ages to finally die.
  • > It's pretty hard to keep up with nowadays. It is apparently getting worse.

    Actually, as of 2.4.2preN, the difference is much smaller than it used to be (when it was over 500kB). It's still missing critical things like IDE updates, but it's not as bad as before.

    What SMP box do you have? The dual G4s work rather well, and the Daystar boards are still being fixed up (they almost work rather stably). Finally, Linux (2.2.18 and 2.4.N, from the right tree of course) can read/write HFS rather well. I wouldn't use it on my main volume, but I've had a 32MB partition on a few machines, and diskcheck (the Apple util; I think that's the name, but I'm not sure) hasn't found any errors.
  • The only problem here is that Cort (the maintainer for PPC) has been sending in patches since the late 2.3 days. Hell, 2.3.49ish or so was actually up to date. There wasn't too much work being done on PPC-specific things (because the main kernel was too buggy and liked to eat filesystems). But as other people have said, if your machine isn't x86, you shouldn't be using the "main" tree anyway. It's not Linus' fault or the maintainers' fault. It just happens. It's also certainly not a "war".
  • Let's face it: most Linux development is done on 32-bit little-endian hardware, and quite a lot of people do not recheck their stuff for endian issues.

    And virtually all of those people are the hotshot intel developers. Almost never do I see a non-endian-clean patch from the MIPS or SPARC crowds. They're all from the i386/ia64 people. So if the kernel has an endianness problem you can bet good money where it came from. This is absolutely not the reason patches for (some) big-endian platforms get rejected.

  • And then they will have a bloated piece of shit, horribly out of date with the current kernel. It's not like there are a thousand people working on the Linux kernel at any one time...
  • aside from the people who wrote it (and therefore already have a more up-to-date tree than Linus) and are prototyping the silicon, who actually has an IA64 system?

    OSC [osc.edu], for one. NCSA [ncsa.edu], for another. Most of the larger vendors (IBM, HP, Dell, Compaq, SGI, etc.) have a few as well. Plus there's a software simulator from HP that lets you run IA64 programs (including the Linux kernel) on an IA32 box.

    They exist, though admittedly only as engineering samples right now. But people do have them.

  • if NetBSD has avoided a fork, Linux couldn't avoid a fork?

    NetBSD has already forked. Where do you think OpenBSD came from?

  • I started using NetBSD in 1996 after a surplus DECstation 5000/240 was given to me by a friend. Since then I've been won over by NetBSD's emphasis on portability and full support of platforms Linux is still struggling on (Alpha 3000/xxx, DECstation, and VAX).

    The NetBSD Goals [netbsd.org] page really lays out the reasons I like NetBSD.

  • > Um, no. People (like me) who own Macs but want a Unix system at home to fiddle with would naturally be attracted to LinuxPPC. But I also want an OS which actually supports my computer's hardware and has a usable UI and applications.

    Ummm....so download the kernel patches.

    Problem solved.

    Or, as someone else here said, try NetBSD, though I can't say I've ever tried it myself.
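For anyone wondering what "download the kernel patches" actually involves, here is a minimal sketch of the workflow. All file and directory names below are made-up stand-ins (a tiny demo tree takes the place of a real kernel source tree); with a real kernel you would untar the official tarball and use whatever diff the PPC tree provides.

```shell
cd "$(mktemp -d)"   # work in a scratch directory

# Stand-in for an unpacked kernel tree; with a real kernel you would
# `tar xzf linux-2.4.1.tar.gz` instead. All names here are toy examples.
mkdir -p linux-demo patched-demo
printf 'old PCI setup\n' > linux-demo/pci.c
printf 'fixed PCI setup\n' > patched-demo/pci.c

# A platform patch such as a port maintainer might distribute.
# (diff exits 1 when files differ, hence the || true.)
diff -urN linux-demo patched-demo > ppc-fixes.diff || true

# Dry-run first to confirm the patch applies cleanly, then apply for real.
cd linux-demo
patch -p1 --dry-run < ../ppc-fixes.diff && patch -p1 < ../ppc-fixes.diff
```

After this, `pci.c` contains the fixed version; `patch -R` backs the change out again if the result doesn't build.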

  • by Anonymous Coward
    51. Linus, this patch fixes an assumption in the Frangle Hypercube driver that made it crash on PPC.

    If only it were that easy... Unsaid in that line was: (oh, and by the way, Linus... it fixes the Frangle Hypercube driver, but in the process breaks the Froptle Metacube support... but neither you nor I have one to test on, so just apply it and pray).

    One simple patch can many times break other things in the process. Who does the QA testing? If I send him 50 lines of code for the VM system and say "this fixes bug X", is he just supposed to apply it?
  • by Anonymous Coward
    A lot of the ports are in their own trees: IA64, you name it. There are a couple of reasons: Linus only has so much time to incorporate patches, and the tarballs are already way too large; not everyone needs to download all the architectures, since most only want IA32.
  • You can't just compare OSes willy-nilly. You can compare them for a specific task, but you aren't doing that.

  • If your feelings are so brittle that what I wrote felt like an attack, then you should probably run, not walk, away from trying to get code integrated into the "official" kernel, as the normal course of discussions involves vastly stronger wording than I ever use...
  • So where is Windows for PPC?

    NT4 used to ship on x86, Alpha, MIPS and PPC. There was even a SPARC port at one point, although it was abandoned. Initial NT development was, IIRC, on the Intel i860. But with the low-end MIPS market shrinking, CHRP cancelled (due to an IBM/Motorola/Apple turf war) and x86 pretty much dominating the low-to-mid range workstation and server markets (by market share), that's where Microsoft are focussing their development efforts. It's not worth porting to SPARC because almost no-one buys SPARCs unless they also want Solaris, and similarly there's no market for PPCs that don't run AIX or MacOS.

    I'm sorry, but you can't slate MS here - they tried to ship an architecture neutral OS and no-one wanted it!

  • That makes no sense when it's a platform patch. If I port the kernel to chip 'XYZ', do I submit a thousand 10K patches, each labeled "Adds code to handle XYZ chip"?
  • If you think you can do better, go ahead and try. The kernel's GPL, so you are entitled to do what you like with it.

    Throwing stones can invalidate your greenhouse's warranty.

  • Nonono. He didn't say he maintained them, he said he could do them. Sounds like something more suitable on an adult site, to be honest.

    Besides, he said he didn't think he -could- do better. It's up to you how you interpret W2K in that light.

  • It is Linus's project, and he's free to do with his code as he pleases. If his needs don't match everyone else's, they are completely free to fork. This actually happens quite a bit, as none of the distributions use a pure Linus kernel.
  • I don't understand your question. There are several points here:

    1) the kernel _is_ the hardware abstraction
    2) some things (notably drivers and VM) are abstracted as well

    However, every tree needs tweaks/testing/changes

    Does that answer your question?
  • Actually, the kernel has forked several times over, and has been doing so for many years. The forks I can think of are:

    MontaVista Real-Time
    There's another real-time platform I'm forgetting
    Each distribution
    Each architecture

    Yes, each distribution basically has its own kernel tree, because the users of each distribution are different. Of course, this kind of forking has typically only helped Linux, and I don't see that changing.
  • Ummm... it's actually only by the good graces of people that any of the architectures stay in decent condition. They don't pop out of thin air. I've used LinuxPPC for over a year now, with no problems/reboots except to upgrade my machine. I run Ximian GNOME with no problems whatsoever. I've never used PPC on a server, however, so I can't comment there.
  • It's tough to say what's a fork and what's "just patches". Was it a fork when someone introduced USB support to the 2.2 series? What about removing the MMU component for uClinux? Real-time support? In each of these cases, are they patches, or actual forks? I consider them forks, because they are independently maintained (generally). Alan Cox basically has his own kernel tree. It can take months for a patch to move from Alan's tree to Linus's, if at all. Now, all these people tend to sync up every once in a while. This happens because (a) they can, (b) it would be stupid not to use someone else's technical advances, and (c) their common ancestry makes it easier to do so than porting from the BSDs or whatnot (that happens too, though). However, I don't think that a fork necessitates something becoming a completely different product. Think of the egcs fork of gcc, and things of that nature. With free software, eventually the best stuff will be used by all, and the sucky stuff will eventually be left in disuse or transformed into something better (note I said eventually).
  • ... and only by the good graces of folks like Jay Estabrook at DEC did it manage to stay in decent condition. I've become so frustrated with Linux/Alpha and its instabilities (including utterly broken IPv6 and major toolchain issues) that I moved to NetBSD/Alpha a little over a week ago. It's been wonderful.

    I'd suggest that unhappy PPC people do the same. You'll find the NetBSD community to be a lot more responsive to issues with portability, and are on top of bugs very quickly.
  • I believe the hangup is not the line count of the patches, but how many topics they cover. When one patch touches a lot of different areas, it's hard to tell exactly what it does, it's impossible to back out just part in case something fails, and there's more potential for conflict with other patches.

    Linus has made it pretty clear he wants one patch for one problem, not one patch for many problems.
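Concretely, "one patch for one problem" comes down to cutting a separate diff per topic rather than one tree-wide diff, so each piece can be merged or rejected on its own. A sketch using two toy trees (all directory names and file contents are made-up examples):

```shell
cd "$(mktemp -d)"   # scratch directory for the demo

# Two toy "kernel trees", vanilla and patched, each with two subsystems.
mkdir -p vanilla/arch vanilla/drivers patched/arch patched/drivers
printf 'setup v1\n' > vanilla/arch/setup.c
printf 'setup v2\n' > patched/arch/setup.c
printf 'ide v1\n'   > vanilla/drivers/ide.c
printf 'ide v2\n'   > patched/drivers/ide.c

# Not this: one giant, take-it-or-leave-it diff of everything.
#   diff -urN vanilla patched > everything.diff
# This: one self-contained patch per topic (diff exits 1 on differences).
diff -urN vanilla/arch    patched/arch    > arch-fix.diff    || true
diff -urN vanilla/drivers patched/drivers > drivers-fix.diff || true
```

Each small diff can then be applied, backed out, or discussed independently with `patch`, which is exactly the property a maintainer reviewing dozens of submissions needs.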

  • I have a UDB too, what distro are you using? Are you using an internal or external drive? Email me back.
  • "Linus shouldn't be "reviewing" anything."

    nonsense. it's his source tree, he can (and does) review anything that gets submitted to him. bitching and moaning about how "he can't do that!" is the least productive way to respond.

    when faced with a rejection from linus, you have two options: suck it down and do as the man says, or fork off your own tree.

    it's obvious which path the linuxppc folks took.
  • > but you can't slate MS here - they tried to ship an architecture neutral OS and no-one wanted it!

    I was responding to someone's claim that Open Source doesn't work as well as commercial software. They claimed that Microsoft would never fail to support some hardware because they'd lose money if they did. The whole story here, of course, is that Linux must be failing because Linus hasn't updated the 2.4 kernel for PPC. If "commercial" is better because it supports more hardware, you'd think it would at least support the hardware that the entire article was complaining about.

    I didn't realize that NT ever worked on PPC. No one seems to care. If it no longer works on PPC, of course, you're SOL because you can't do it yourself (the way the PPC folks did for Linux).

    > It's not worth porting to SPARC because almost no-one buys SPARCs unless they also want Solaris

    Yet Linux was ported there.

    > and similarly there's no market for PPCs that don't run AIX or MacOS.

    The whole article was about someone wanting Linux for PPC, so there must be *some* sort of market for alternative OSes. I guess just not Windows...
  • Ahh, so we should all keep to using whatever compiler happens to compile the kernel, no matter how old it is. After all, we wouldn't want to give the kernel developers any incentive to clean up their code so it compiles properly with a newer, better optimizing, more standards compliant compiler.

    I don't think Red Hat was in the least bit disingenuous in calling the compiler that could compile the kernel 'kgcc'.

    As for whether or not they should have released a snapshot of gcc... well, I question the wisdom of that too. I would point out that almost all (maybe all?) of the (very few) bugs that caused bad code to be generated were in the C++ front end, not in C, which is the language most things depend on anyway.

  • Open source and/or free software is far more important than the "New Jerusalem", in that it's real and provides tangible and important benefits in the here-and-now.

    If you don't recognize the benefits of having access to the source code for the software platform(s) on which you or your company depend, you either aren't a developer or CTO, or else you haven't really thought it through yet. Religion has nothing to do with it. The religious ones are just those who are following the smart ones, because they recognize that they're onto something important, something that liberates them from being at the mercy of large software companies with private agendas that don't put the interests of their customers first.

  • If you're not prepared to hack the source code, what's the point in having it?

    That depends. You might know people who can hack the source code, or the existence of a larger community of source code hackers may help you get what you want. That's one of the benefits of Linux, even notwithstanding some of its shortcomings: its popularity means that it now fits a staggering variety of applications, from diskette-based routers to massively parallel supercomputers, just to take one measure of its breadth. Sure, there are always exceptions, things that it doesn't do well, and any user has to evaluate their requirements and choose what's most appropriate for them.

    ["Evil" is] a simple expression of the fact that the core Free Software movement considers commercial software to be morally wrong. See Richard Stallman's essay on the subject.

    I'm not an expert on Stallman's position, but I don't see this in the essay you referenced. He doesn't actually use the word "evil" in the essay, and the strongest adjective I could find applied to proprietary software was that it is "harmful". His only use of the word "moral" is related to the moral obligation of users to pay developers for their efforts.

    Stallman's objection is to software for which the source code is not available - not necessarily "commercial" software but rather "proprietary" software. His position is based entirely on what I've been saying about proprietary software: "...doesn't come with source code, and therefore wouldn't be as useful to [users]".

    This pragmatic and fairly uncontroversial observation forms the basis for Stallman's other claims. Whether you agree with his conclusions or not, the underlying premise remains valid, and has nothing to do with good or evil: instead, it's eminently practical - all things being equal, open source is better than closed source, for the user. No religion necessary.

  • This was the same situation with the IrDA subsystem, as detailed in this Kernel Traffic [linuxcare.com] thread. Linus doesn't like large patches. If he gets 30 10K patches in separate emails rather than one 300K patch, he can decide to merge 25 of them and ask questions about the other five, and maybe later accept them or get them modified to fit his idea of the "right way" to do it. If Linus doesn't like a few lines of a 300K patch, he has no choice but to reject the whole thing.

    It has everything to do with the way that Linus works and nothing to do with the technical merit of the port.

    Remember, it is Linus' kernel.
  • Um, no. People (like me) who own Macs but want a Unix system at home to fiddle with would naturally be attracted to LinuxPPC. But I also want an OS which actually supports my computer's hardware and has a usable UI and applications. If Mac OS X wasn't coming out in 6 weeks, I'd have a Linux partition at home and dual-boot. But Mac OS X is going to be here Real Soon Now, so I don't bother with Linux.

    In fact Mac OS X could put a tiny dent in Linux x86, too. Due to the less than stellar quality of Linux for the Macs, I've considered spending a grand or so to buy an x86 box to run Linux. Now I've got no need to do that. I'd imagine that there are plenty of Mac-using people who fall in the same boat.


  • I just checked; this is a Java program to configure a Hardware Base Station. It doesn't let a NetBSD-running Mac work as a Software Base Station.


  • I actually did try to install Linux/m68K on my old SE/30 once, but I couldn't get the install to complete. I never tried NetBSD, though.

    Since I lack an Ethernet card for the SE/30, it's not exactly a network-friendly computer. Dang cards (required for Apple's custom SE/30 slot) still cost more money than I'd consider paying for one, too.


  • Well, that's just it. I don't want to apply kernel patches. I want an OS that works.

    LinuxPPC is not a tenth as usable as Mac OS, much less Mac OS X. Linux bigots aside, the GUI isn't as good, the applications aren't as good or as plentiful, and the hardware support (where's my Software Base Station support in LinuxPPC? For those who don't know, it lets a Mac with an AirPort card work as an 802.11b base station, complete with WEP, MAC restrictions, etc.) is poor.

    NetBSD and Darwin are kissin' cousins, so it doesn't seem worthwhile to use NetBSD if I want Unix. I might as well go with Mac OS X.


  • Slackware ships with a stock Linus kernel... It's one reason I switched from **cough**, because I never knew what I was getting.
  • > The next time I reinstall Linux I think I'll install Debian instead of RedHat. I've stuck with RH because of habit, but RH7 really convinced me to switch. kgcc, plus shipping a frickin SNAPSHOT of gcc - are they on crack? If you can't release something of good quality, don't do a release at all.

    Shipping something of quality was exactly what they tried to do. The problem was that the gcc maintainers had not released anything for over a year. gcc 2.95.2 was broken with regards to lots of stuff, most notably C++ and Alpha stuff, for which many fixes had existed for a long time in cvs.
    So what Red Hat did was to debug and polish a release snapshot with all these fixes, to ensure a quality compiler.
    And with the 2.96-69 update [redhat.com], it's probably the best gcc compiler you can get.

  • Fine, yeah, I agree.

    You have to admit though, Dell giving in and selling AMD machines would be a big story!

  • That's very interesting, since I never heard of Dell selling Athlons before. Do you mean Pentium 4?
  • I think the sysop needs to go get the fire extinguisher off the wall and set it next to the server. He'll need it with this article...

  • Hi Ben. Your knowledgeable comments on this topic are greatly appreciated (trini's too!). You guys have a lot of hardware to support from multiple vendors (some proprietary, some rare, some that you don't even own, etc...), and you do a helluva good job in keeping up with the frequent changes. We greatly appreciate the effort. I'm a PPC user myself. I have a 7500/100 with a G3 in it at my former employer's that is a mirror for LPPC, YD, etc... ("was" I should say; it has to come down soon. :( ). I use a beige G3 tower at work as my personal server. My cable firewall at home is a 7350. On my desk in front of me is a G4 500MP and two G4 400s (AGP). My LPPC 2k Q4 box arrived just today and it will be going on the MP tonight.

    Basically what all this means is that I use Linux on a PPC machine every day. It's an indispensable tool for me. I couldn't do my job without it. Keeping up with the latest and greatest kernel is something I have to do at times. For me, I really don't mind rsyncing Paul's latest source (I don't really like bk). It is annoying to have to explain to someone why they (or I) can't just use the official Linus kernel. Granted, it is a pain. It's a pain not having an "official" 2.4.1 tarball that works on PPC machines. I would love it if you guys could roll a final version of 2.2.18, tarball it, and leave it be. I understand that the bug-fixing process is never-ending, but there never seems to be an official release. Sure, I could snag the latest pmac-stable, but that's not a complete release. That's not the final 2.2.18. That's a little annoyance for me personally.

    Another annoyance for me is that I can't find a 3rd-party IDE controller that will work in my machines. The onboard Apple controller will work but none of the 3rd-party controllers seem to. 3rd-party SCSI driver support is also sketchy. Having that would be a big boon for me.

    I don't want this to sound like I'm nit-picking. I'm not a kernel hacker myself. I do a nominal amount of programming, that's it. You guys do a great job and don't really get much for it. These are just a couple of things I've noticed and wouldn't mind seeing changed. Keep up the great work!



  • In addition, Debian does not commit a Red Hat-ism and package such awful software renames as kgcc. Why not call it what it is, gcc-2.7.2? I mean, come on. Pull the wool over the lusers' eyes, don't ya. "Yeah, Red Hat has a special compiler for the kernel..." Whatever.

    > The next time I reinstall Linux I think I'll install Debian instead of RedHat. I've stuck with RH because of habit, but RH7 really convinced me to switch. kgcc, plus shipping a frickin SNAPSHOT of gcc - are they on crack? If you can't release something of good quality, don't do a release at all.

  • by dcs ( 42578 )
    Alternatively, he could adopt a more decentralized model where, with the help of version control and source repository management software, people who have shown a good track record in presenting patches would be allowed direct access to the source in development and be able to check in the changes needed themselves, as well as serving as proxy for other people to submit their changes.

    But only the BSD people adopt such a closed model, right?

  • It sounds suspiciously like how Hans Reiser was "victimized" by the entire linux-kernel list in some huge conspiracy. No one ever believes their code is rejected on technical merits :P
  • 1. Linus, this patch fixes the "glooble" function which makes a bad endian assumption.

    2. Linus, this patch addes the magic number for ppc to the magic numbers for X86, alpha, and m68k in the "toto" function.

    3. Linus, this patch fixes a typo in the momark function that prevents it from working on ppc.

    51. Linus, this patch fixes an assumption in the Frangle Hypercube driver that made it crash on PPC.

  • How much is this royalty and where's the URL to prove it?
    Oh my lord... is it really that difficult to detect sarcasm?


  • >Yeah, I'm going to spend 50% more on a laptop b/c it looks cool and runs a POS non-multithreaded OS? Sure..

    Uh, how is LinuxPPC a POS non-multithreading OS?

    >I bet it really screams at Q3 and compiling the kernel. I hope you still like your 350Mhz imac in 2 years, when everyone else has 2Ghz machines. At least you can say it looks cool!

    Performance is good compared to x86 machines of equivalent spec - outfit a P2-450 with a crappy Rage Pro or whatever the rev. B iMacs have and watch it crawl with Q3.

    I don't dispute the fact that your 1GHz machine is fast, sorry to make you feel like less of a man.

    I just think your notion of dropping support for every platform except x86 because it's the cheapest is laughable, stupid, and obviously the wrong thing to do.

  • What if you already have a Mac on your desk?

    You think it's cheaper to throw it away and go out and buy a new 1GHz Athlon than to run LinuxPPC on your existing machine??

    Try getting a laptop that looks anywhere near as cool as a Titanium PowerBook G4 from any x86 vendor.

    'Oh, forget it, those guys don't need another OS. If you buy a Mac, you don't deserve Linux. Linux should only be available for x86 because it's the cheapest' - is that your line of reasoning?

    Should we just deep-six the Alpha, PPC, MIPS, SHx, and 68k ports of Linux because Athlons are cheap right now?

    You better tell the NetBSD guys they've been wasting their time, and how bout you email the CEO of Lineo and all the other embedded Linux developers and break the news to them.

    And while you're at it, why don't you have Linus ditch support for Intel chips. Athlons are, after all, cheaper.

    My 350 MHz iMac makes a great Linux workstation. It doesn't take up too much room on my desk, is easily transportable without making two trips (one for the system unit, one for the monitor) every time I want to move it somewhere, and it runs extremely snappily.

    I have been very happy with it, and you sir, are a f*cking idiot.

  • Hopper! Dude. I know we've butted heads on this before. I do not disagree with your comment regarding bad programming on the part of the kernel developers with respect to compiler incompatibilities. However, I do not agree with Red Hat simply repackaging a compiler under a misleading name. Educate your users, don't give them crutches. If RH really felt that calling a compiler kgcc would make it easier for the user, then they should have created a "virtual" package that installs the compiler under its version name and creates a symbolic link to it:
    Note: Hypothetical package example
    bash$ rpm -Uhv gcc-2.72-01.i386.rpm
    bash$ rpm -Uhv kgcc-wrapper-0.1.noarch.rpm
    bash$ ls -la /usr/bin/kgcc
    ...[snip] kgcc -> /usr/bin/gcc-2.72
    bash$ ls /usr/share/doc/kgcc-wrapper
    No misleading, no misunderstanding. No BS. The nice thing about Debian packages is their ability to use virtual packages and dependencies to force package installation to perform certain tasks. If, for example, they found it necessary to package an older compiler for kernel compilation (which they didn't, BTW), they could include a "Depends:" line in the control file for the 'kernel-package' that looked something like this:
    Depends: gcc2.72 | gcc (<< 2.95)
    Then, using apt-get:
    bash# apt-get install kernel-package

    IMHO, there are much cleaner ways of doing it than Red Hat's hack. Regardless, you cannot defend Red Hat's bad decisions in package policy by placing the blame on the Kernel developers. They didn't write the spec file. They didn't mislead the users.


  • While true, inside arch/ppc Linus shouldn't be "reviewing" anything. He's not a PPC guru. A quick diffstat and a browse of the stuff outside arch/ppc would appear to be all that is required.

    Linus' rant was about the ISDN people sending him huge (and they are bloody huge) patches a few days after the declaration of "code freeze" (which remains to be clearly defined by Linus, as he himself then pushed the entire IA64 tree [a 4.9M gzipped patch] into the kernel). Linus is also insanely anti-CVS. (It worked perfectly for sparclinux.)
  • PPC Linux has always been about getting the right patches to get a bootable system. No one in the official kernel seems to care that the deviations between the 'Linus' kernel and that which will boot a PPC box have been getting larger all of the time. It's pretty hard to keep up with nowadays. It is apparently getting worse.

    My personal experience: I have an SMP PowerMac. It can run LinuxPPC or YellowDog in uniprocessor mode only. Any attempt to get the other processor working results in an unbootable kernel. There are no patches for it, there are no tools for debugging the problems, there is simply no way to get the system working correctly.

    Then you have to take into consideration how difficult it is to actually get a new kernel installed on the system. The kernel has to reside in the HFS partition, but the kernel cannot safely write to the HFS partition. Using a second box as an intermediate FTP server is the only way to change kernels. Maybe it's better now; I wouldn't know, it's too depressing to try to fix. Every time you try to do something you have to reinstall. It's like windoze.

    The whole edifice of running Linux on a Mac depends on just using the kernels that came with the computer - which, by the way, they don't tell you how to build or what options they used. The LinuxPPC guys are trying their hardest, but the system is still years behind the usability of Intel Linux. (Another problem: no ECC memory on the Macs either, so constant rebooting is necessary.)

    So, I'm back to MacOS. It would be nice to have a usable Linux system but it's looking like that will never happen. I wanted to use it for validating MSB code but it's too unusable. Maybe someday.
  • "The Linux kernel belongs to Linus."

    What? Silly me, I thought the kernel was GPL software. Imagine my disillusionment; I thought that "information wants to be free."

    I never said it wasn't. In fact, I was referring to *his particular tree*, since that is the base issue of this whole slashdot thread anyway. It ought to have been implied, but I guess I should make such things more clear next time.

    Another thing that I [failingly] implied was the existence of the GPL itself. What does one do if they have an uber-patch that increases kernel speed by 1400% but Linus won't accept it because he doesn't like the commenting style? You fork it. Or you just release your patch anyway, like the Reiser team did.

    Apparently I have this all wrong - the kernel belongs to Linus. If I want to use it, I have to ask him.

    You aren't going to make much of a point by emulating an extreme that is far detached from the issue.

    Linus simply maintains one of the Linux forks (currently about the only one, excepting patched versions). He maintains the fork that he wants to maintain, with the features that he likes. He controls this fork, but that does not mean that it belongs to him. It is GPLed software, after all.

    That's just what I said! But you can GPL it all day long and you'll *never* change the fact that he is the only one that has the final say in what goes in it.

    Sometimes I think that forking the Linux kernel might be one of the best things to happen to Linux. We are getting close to the need, since it will begin to be difficult to maintain a full kernel that will run on everything from a palmtop to a multiple-processor server.

    I agree completely. It will be necessary eventually.

    Competition encourages development, and if you add the GPL to the mix, the competition benefits everyone.

    The GPL was never in question. I am just fairly tired of people saying what Linus *ought* to do with the kernel he maintains. In particular relevance to this /. post, if the LinuxPPC people feel abandoned, they should be the ones working for support on their hardware, not simply sitting around and asking Linus to do it.
  • Since Linus really has the final say on what goes in or is left out, is it time to go to a more republic-like model? Most people will freak out at this thought, but I think it's way past due for Linux. uClinux has different ideas than SparcLinux and AlphaLinux, which are different from BettyLinux and BarneyLinux (me, obscure? never!). A core team or core group could hold the reins a bit more.
  • Ok, I'm going to make the charitable assumption that you are intelligent, but English isn't your native tongue. So, here's a free clue: the article you replied to was Sarcasm. Sarcasm is a type of humour and rhetorical device where you make a point by stating the point's antithesis in such a way as to show that the antithesis is ridiculous and beyond believability.
  • Lots of folks in this thread seem to be referring to Kernel 2.4.1 as a test kernel.

    This doesn't seem to jibe with my understanding of the kernel tree. I believe that any *.even-numbered revision is a production kernel; *.odd-numbered kernels are the development branches. For example, any 2.3 kernel was development for the 2.4 kernel. You can still have 2.4.1test1 or 2.4.2pre3, however.

    But this brings me to my question - where is the 2.5 kernel series? Do we have any goals stated for Kernel 2.6?
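    The even/odd rule described above can be checked mechanically. A tiny sketch (the version string is just a made-up example, not tied to any real running kernel):

```shell
# Even second component = stable series, odd = development series.
v="2.4.1"                          # hypothetical version string
minor=$(echo "$v" | cut -d. -f2)   # extract the middle component
if [ $((minor % 2)) -eq 0 ]; then
  echo "$v: stable series"
else
  echo "$v: development series"
fi
# prints "2.4.1: stable series"
```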

  • Project axed, but the OS is sort of still out there; check any NT Workstation 4.0 CD, there's a "Ppc" arch directory. Not that I know if it even functions, or even if I care. I'm not a winfreak; that's why I went with PPC in the first place.
  • So because of the competition, Linus decides not to make a competing product... That's marketing.

    Ext3fs and Tux2fs are still waiting to be merged. 2.4.1 was nothing more than 2.4.0 with Reiserfs merged in. You can't merge too much at the same time; that's the policy that earns you the label "stable".

    I don't know how all the kernel folks manage to exchange parts of these different trees each time and keep an overview of what they're doing, but I guess that's the way things go with the kernel. Until stuff can be merged, it has to wait in a custom kernel source tree. 2.4 is still very young, so there must be a lot of stuff waiting. Linus said he would not make many big changes for now, in order to maintain stability. People said the inclusion of Reiserfs in 2.4.1 was a miracle because of that.

    So I guess the real reason is that the PowerPC folks are just bad at sweet-talking Linus. They don't have to feel ashamed about that: as far as I can tell, it's a real skill in itself.

    It's... It's...
  • Hey BenH - if you know and can tell us, why did Apple yank the ppclinux.apple.com site, and your page along with it?

    just curious... and a big thanks for all the great work on linux-ppc!
  • Isn't OpenBSD a fork of NetBSD, created when Theo fell out with the other developers?
  • How much is this royalty and where's the URL to prove it?
  • You're a bit misinformed about the 603, actually...

    The original 603 was indeed a bit of a copout processor, compounded by the fact that Apple wedded it to a bus that was even more backwards than the original NuBus machines. The 603e that succeeded it, though, was actually (at least potentially) a better processor than the 604. (I use a PowerMac 6500 -- the 603e inside really is pretty slick, probably about 75-80% of the performance of a first-gen iMac.) If you don't need MP, the 603e is the way to go.

  • I don't see why this is a problem. Even the i386 branch is in this state. For example, I wanted to do exactly the same thing; upgrade a PC running a 2.2 kernel to 2.4.1. It turned out that the "official" 2.4.1 didn't correctly support some of the hardware I needed (Iomega Buz video capture), so I went with the Alan Cox 2.4.1-ac9 kernel (which incorporates the Buz patches and is working just fine for my configuration).

    I periodically follow the kernel groups, and it's clear to me that the idea is to keep the kernel as stable as possible, while supporting the most mainstream environments. I'm sure the support I need will eventually find its way into the mainstream kernel.

    If you're the type of person that needs the latest and greatest (probably most /. users...), then you already know how to get a kernel or apply a patch.

    So what's the problem?
  • Open source and/or free software is far more important than the "New Jerusalem"...

    Don't leap into a religious lecture unless it's a religious issue. It's not important whether or not I buy into the open source model. What's important is that people who choose to use Open Source products understand the consequences.

    Since we don't pay Linus to maintain the kernel for us, what obligation does he have to maintain it any way but his own? If it's important to anyone that the kernel have more PPC features than Linus is willing to include, they're free to start their own branch. The same goes for any other open source product.

    The whole Open Source thing relies on people with a "Lead, Follow, or Get Out of the Way" mentality. If you say, "I want X to happen, but I want somebody else to take responsibility for it," you're thinking in terms of commercial software, no matter how many RMS mantras you recite.


  • Unfortunately, Open Source is a religion to many people. Which brings me back to my original statement: people moving back and forth between PPC Linux and Mac OS is a good thing for both platforms. The sole basis for disagreement seems to be that Mac OS is a proprietary product, and therefore evil.


  • I would phrase this as "Mac OS doesn't come with source code, and therefore wouldn't be as useful to me".

    That hardly applies to developers who are moving from Linux to Mac OS because of features missing from Linux. If you're not prepared to hack the source code, what's the point in having it?

    "Evil" is nicely inflammatory shorthand for that.

    No it's not. It's a simple expression of the fact that the core Free Software movement considers commercial software to be morally wrong. See Richard Stallman's essay on the subject [gnu.org].


  • Forgive a stupid question from a non-PPC enthusiast, but what's wrong with people jumping ship? I admire both Linux and Linus, but it's just software, not the New Jerusalem [njchurch.org]. Anyway, the traffic will not be one-way, and having technology exchange between the Linux and Mac OS communities will be good for both platforms.

    Oh wait, there's the "Free Software" religion. Well, if you believe in it that strongly, do what Linus did -- go hack your own kernel. You don't even have to start from scratch.


  • Linus has said over and over again that he does not care where the kernel goes. He only works on what he thinks is cool. I'm thinking that the people who use the PPC stuff are doing a good job and Linus is simply doing what he wants to.
  • Ten years ago, when Linux was a very young project with only a few really devoted developers, the mailing-list structure of kernel development was efficient. Since it took over 3 years to get Linux to work on a TYPICAL PC configuration, it wouldn't have made sense to talk about splitting the work into several groups.

    However, now the situation is radically different. Linux runs on about a dozen major CPU platforms and supports endless types of devices on every imaginable kind of bus.

    In this situation, it makes sense to split the single bazaar into a few smaller ones, with a strict relationship declared between them. I think no single individual (and that includes Linus) can fully understand the workings of the whole kernel. While Linus could (and should) remain active in many of those groups, it is no longer possible to leave all the decision-making to him.

    A kernel split would be BAD. First of all, consumers want one set of sources (and preferably, binaries). Secondly, the improvements in some projects could be of use to others. For instance, while hardware I/O and filesystem issues are quite different, their workings have to be combined in order to achieve optimal performance.

    The bottom line: Kernel is growing up. More parties are willing to participate in its development. Linus has to ease his control, for the common good.

  • who is "they"? the following distributions all run on PPC: debian, suse, and yellowdoglinux.
  • Linus Torvalds owes nothing to the Linux community. He doesn't have to give up every waking hour to appease millions of people just because they want to use something he created. The bottom line is that if someone creates something and people start using it, the creator does not owe those people anything.

    It pisses me off when people refer to music groups and claim that the group owes their fans a new album. The group only needs to put out a new album if they want to continue their popularity. If they don't care, more power to 'em.

    Same goes here.

  • it'd be appreciated if that vulnerability hole was closed.

    we all want that hole closed, man! believe me...

  • 1.Apple doesn't open up its specs, so coding for linux/ppc largely consists of hacks and other substandard patches.

    Apple is partly open, partly closed. Some chipset details on their motherboard are closed, the OpenFirmware stuff is VERY well documented and a bit of a boon. Things ain't as bad as Be made it out to be.

    2.The platform isn't as popular, so maintaining the ppc tree at the expense of the x86 one would be ludicrous.

    This is true. Including PPC patches in the kernel shouldn't have to bring about this dichotomy, though.

    3.The trend has been rightfully away from ppc for the last couple years, since Be decided to abandon the platform.

    Uh... what? Be didn't decide to do this based on technical merit. It was a political move -- and a good one for Be, but it has absolutely nothing to do with the nature of the PPC processor.

    4.Several companies (LinuxPPC, YellowDog, etc.) exist to maintain linux/ppc. So why should Linus do their work for them?

    We're simply talking about why Linus won't fold the work they've already done into the Kernel, not why he won't do their work. There are several well articulated reasons for this. Yours are not among them, though.

    If you want an alternative to x86, then stick with Alpha. Now there's a real platform. The support still isn't as good as with x86, but what can you expect?

    The support for Alpha is likely no better than for LinuxPPC. In fact, if you want to apply the commodity hardware/number-of-units argument, I'll wager that LinuxPPC seats outnumber Linux Alpha seats. There is no sense in jumping from LinuxPPC to Alpha just because you're having some architecture troubles. Out of the frying pan, into the fire.

    I support Linus on this one. I think Paul Mackerras is treading awfully close to a kernel fork, and that's the last thing we need.

    As has been well-pointed out in other posts for this article, we already have kernel "forks" of the kind that Paul Mackerras is treading close to, and we have for years.

  • Many assume PPC = Mac, and granted it does, usually. Thus the teenaged platform-war instigators (no, I'm not discriminating against teens, I'm saying most rabid platform-war participants are the lesser brand of teenagers) decide to make some smart-assed comment such as "PPC is an inferior platform." Rant aside, that's not so bad, the Mac is a great platform in general, but PPC is a very fine processor. RISC architecture has outperformed the usual CISC for years now, and the 750 has some beautiful features on it that make open-source computing downright yummy. As for the kernel patches, yeah, the other guy was right -- if they suck, they're not going to use them. Are they good? I don't know, I've never done an evaluation but would love to see some input on them. If they are good, then is this some large conspiracy to start a platform war? Prolly not. Ryan
  • Why isn't it being supported as well as the x86 hardware, you ask? I can think of plenty of reasons off the top of my head:
    1. Apple doesn't open up its specs, so coding for linux/ppc largely consists of hacks and other substandard patches.
    2. The platform isn't as popular, so maintaining the ppc tree at the expense of the x86 one would be ludicrous.
    3. The trend has been rightfully away from ppc for the last couple years, since Be decided to abandon the platform.
    4. Several companies (LinuxPPC, YellowDog, etc.) exist to maintain linux/ppc. So why should Linus do their work for them?

    PPC was great four years ago, but it's no longer a viable alternative to x86. Part of that has to do with Apple's bungling, and part of that has to do with the realities of the market -- unless hardware has the wide support of a broad community, it can't maintain the same level of support within an open-source system. There just aren't enough eyeballs.

    If you want an alternative to x86, then stick with Alpha. Now there's a real platform. The support still isn't as good as with x86, but what can you expect? Linux is still a hobby OS for most people, and unless there's a strong economic motive driving the support, it'll lag. That motive exists for x86 and its large userbase. It'll be years (if ever) before it can exist for other platforms.

    I support Linus on this one. I think Paul Mackerras is treading awfully close to a kernel fork, and that's the last thing we need.
  • by Tom Rini ( 680 ) on Tuesday February 13, 2001 @10:26AM (#435119) Homepage
    As someone who actually works on the PPC kernel, I do admit that there are bugs, but the other side of the coin is that hardly anyone reports these things. The developers only have a finite number of machines, and can only test what they have.
    But anyways, framebuffers are working well (with an occasional problem on the weirder ATIs, or some of the undocumented Apple controllers.) Serial was broken once upon a time, but that's been fixed for ages (and even made it into Linus' tree in 2.4.2pre2). I assume you're referring to "standard" IDE cards, which work in 2.4. Patches do indeed get ignored, but again, there are more people trying to keep track of things now.

    As for the rewrites you mention, I know some of the recent ones have been so that new machines can be used and maintained sanely.
  • by Christopher B. Brown ( 1267 ) <cbbrowne@gmail.com> on Tuesday February 13, 2001 @12:55PM (#435120) Homepage
    That's probably a good thought; this is pretty much where Bitkeeper [bitmover.com] came from, as seen if you visit Why Bitkeeper? [bitmover.com]
    The current Linux development model has some problems and Linus needs tools to help solve those problems. Without a decent distributed source management system, all of the merging and tracking work falls on Linus' shoulders and that is getting to be way too much for any one person, even someone like Linus. The goal of the Bitkeeper effort is to provide tools that help the Linux kernel effort, and more specifically, help Linus.

    Unfortunately, it has sat in "ready Real Soon Now" status for a long time now. I'd hazard the guess that a bunch of developers are feeling rebellious about the fact that it is not free software. [bitmover.com]

    By the way, the "let's set up a CVS repository" idea has the conspicuous demerit at "send the patch to Linus time" that it is still going to take a lot of effort to make sure that the patches that get sent on to Linus are reasonably perspicuous. [bartleby.com] You're still left with the dilemma that:

    • If you send him each and every patch, that represents a huge number of patches to evaluate, and if they're tiny and keep changing all the time as developers experiment things, it is certainly not a perspicuous set of changes.
    • If you send him patches periodically, they'll bulk up, hopefully meaning that some of the little changes that go back and forth as people experiment before resolving to Regis' "Final Answer" will fold together.

      But this will tend to "bulk up" into something that involves a horde of changes, which again will not be terribly perspicuous.

    • If you wait longer between times that updates get released to Linus, the deltas will get bigger and bigger, and become just too big and unperspicuous to get applied to the "official" kernel.

    This is certainly spelled "dilemma," as all the alternatives are pretty poor...

  • by Thomas Charron ( 1485 ) <twaffle@nOsPaM.gmail.com> on Tuesday February 13, 2001 @09:20AM (#435121) Homepage
    Not in this particular case. It's being rejected due to the sheer size of the patch required. It's a fairly significant change, and, well, Linus ain't too hot on updates such as those. Alan Cox is the only reason why a lot of the stuff makes it into the kernel.
  • by johnnyb ( 4816 ) <jonathan@bartlettpublishing.com> on Tuesday February 13, 2001 @09:14AM (#435122) Homepage
    I think a lot of people are _greatly_ misinformed about Linux forking. The truth is, there are at least tens of forks of Linux. Every major distribution has its own kernel tree - no major distribution has ever shipped one of Linus's kernels. They all have at least one patch or another applied on them. Then there are projects like ucLinux, which are pretty major Linux forks. And then you have the real-time Linux forks, of which there are several. So, the LinuxPPC forking is really not a new thing.

    Linus is generally pretty slow about applying patches for other architectures. If you are not on an x86 box, you _need_ to not be running a Linus kernel - you almost have to run a forked version. It's not that Linus doesn't like the other architectures or that the other architectures are trying to be rebels. They just have different goals and emphases. Linus can't validate every patch that comes in for every architecture, so he generally just does the x86 stuff. Also, Linus doesn't like large patches, because he can't validate them. And the patches for other architectures tend to be large.

    Anyway, kernel forking is a regular part of Linux; it's been happening for years with no ill effects (all the good stuff from each fork is shared). In fact, it's rather positive.
  • by scrytch ( 9198 ) <chuck@myrealbox.com> on Tuesday February 13, 2001 @12:40PM (#435123)
    When Linus can start delegating authority over subsystems, when Linus can learn to use revision control, when any kind of coordination of changes affecting multiple subsystems does or even *can* go through anyone but Linus himself, then perhaps he won't need to personally review each and every patch that comes to him.

    As it is now, he doesn't even trust Alan Cox to maintain any part of the official tree -- he still has to send Linus patches. Forks do happen in a project, when such sweeping changes are needed, and they get merged in later versions. Linus tacitly admits this because he can find an incremental way to do it each time. In other cases, people decided to just stop trying to go through Linus, because they know what Linus doesn't: Linus doesn't know everything.
  • by deeny ( 10239 ) on Tuesday February 13, 2001 @09:37AM (#435124) Homepage
    Functionally, the PowerPC tree forked a long time ago. Way back when, before the Linux kernel had any USB support, for example, Mackerras' tree had an incorporation of Inaky Perez Gonzales' USB stuff so that iMacs could boot. The USB style changed about a year and a half ago to support the newer stuff Linus was doing (Linus having rejected Inaky's USB code). But the support's always been way ahead on PowerPC of what it was on x86 -- and of necessity.

    That makes the forking, what, 2 1/2 years old?

    Yeah, it re-integrates from time to time, but the official kernel tree hasn't been the place to get a *usable* PowerPC kernel in like forever.

    PS - don't get me started on support for weird PowerPC chipsets. Just don't.


  • by deeny ( 10239 ) on Tuesday February 13, 2001 @09:46AM (#435125) Homepage
    So where is Windows for PPC?

    Actually, at a MacWorld one year (1995?), IBM was showing Windows NT running on an IBM CHRP-based PowerPC system.

    The project was axed.


  • by Col. Klink (retired) ( 11632 ) on Tuesday February 13, 2001 @09:18AM (#435126)
    > if MS ignore a set of hardware, they lose money, so they won't do it

    So where is Windows for PPC?
  • by arivanov ( 12034 ) on Tuesday February 13, 2001 @09:22AM (#435127) Homepage

    The same complaint goes for m68k (which has maybe a handful of mainstream kernels that even compile) and quite often for (u)sparc. There are lots of conspiracy theories, but I think that the answer is very simple: endianness. Let's face it, most Linux development is done on 32-bit little-endian machines, and quite a lot of people do not recheck their stuff for endian issues.

    Hopefully now, with the involvement of IBM and other "big-endian guns", the issues will subside by themselves.

  • by Pemdas ( 33265 ) on Tuesday February 13, 2001 @09:07AM (#435128) Journal
    The mips tree has its own CVS repository which is the most current mips stuff (oss.sgi.com). That one is maintained primarily by Ralf Baechle.

    Similarly, the sparc tree's most up-to-date stuff can be found at the repository David S Miller maintains on vger.

    In addition to being the gatekeeper for the official tree, Linus is pulling double duty as the portmaster for the x86 port. Thus, the x86 stuff is always up-to-date in the official tree, and the other archs tend to have some lag time associated with them.

    This is nothing new. It's just symptomatic of the hierarchical Linux development style. {Free|Net|Open}BSD don't tend to suffer from this due to their use of a central CVS repository, with all portmasters having access to their relevant parts. Whether this is a better system is left for others to flamewar about, but it does prevent the kind of port drift the author is talking about.

  • by drudd ( 43032 ) on Wednesday February 14, 2001 @08:17AM (#435129)
    This is exactly why Linus wants small patches. With a small patch, it is possible to quickly look at it, see what it's doing, and evaluate the patch on several levels:

    clarity - is it clear what the patch fixes/adds (this tends to be a sign of well-written code)

    correctness - a 300K patch would take days to go through and make sure that you aren't breaking something else by applying the patch. Small gotchas will stand out more in smaller patches.

    If you apply a 300K patch, and something new breaks, what do you do now? Look through the code slowly and try to figure out what happened. With a small patch, there is a greater chance that you can back out exactly what change broke whatever function, making it easier to find where you broke it, if not why.
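    The apply/back-out workflow described above can be sketched with plain diff and patch; the file names and contents here are invented purely for illustration:

```shell
# Build a one-line file and a small unified diff against it.
dir=$(mktemp -d)
cd "$dir"
printf 'old line\n' > driver.c
printf 'new line\n' > driver.c.new
diff -u driver.c driver.c.new > small.patch || true  # diff exits 1 when files differ

patch driver.c small.patch     # apply: driver.c now reads "new line"
grep -q 'new line' driver.c

# Something broke? A small patch can be backed out exactly:
patch -R driver.c small.patch
grep -q 'old line' driver.c
```

With a 300K patch there is no such clean reverse step for just the offending change; you are reduced to reading the whole thing.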

  • by sabre ( 79070 ) on Tuesday February 13, 2001 @09:16AM (#435130) Homepage
    First of all, there is a lot more that goes into this than what you might first see... I recommend you read some of these links to get a better sense for the interactions that go on around the kernel:

    http://kt.linuxcare.com/kernel-traffic/kt20010108_101.epl#7 [linuxcare.com], http://kt.linuxcare.com/kernel-traffic/kt20001127_95.epl#6 [linuxcare.com]

    and especially: http://kt.linuxcare.com/kernel-traffic/kt20001002_87.epl#3 [linuxcare.com], http://kt.linuxcare.com/kernel-traffic/kt20001010_88.epl#7 [linuxcare.com].

    Have a good read. KernelTraffic Rocks.


    http://www.nondot.org/~sabre/os/ [nondot.org]

  • by Michael Woodhams ( 112247 ) on Tuesday February 13, 2001 @11:37AM (#435131) Journal
    Lots of people say something to the effect: Linus wants *small* patches, which do specific things, or implement one new feature.

    So, if your small patch is "Prevents the Frangle Hyperqueue Monitor from crashing on PPC", how does it get verified in the official kernel if you need 50 other similar small patches before your kernel will even get as far as trying to access the Frangle Hyperqueue on PPC?

  • by wesmo ( 181075 ) on Tuesday February 13, 2001 @09:25AM (#435132)

    The LinuxPPC kernel (that's all I can speak about aside from the x86 kernel; no experience with the others) and the main distribution tree have always diverged from one another, and then, seemingly magically, they get sync'ed.

    If I remember correctly, it wasn't until the 2.1.128-series kernel that it started building right out of the box (http://www.kernel.org) on a PPC box. Prior to that, PPC users had to rsync their kernel from a site in AU.

    From then on through the 2.2 kernel, this remained true. But, as new Mac hardware flooded into the pool and USB device support became a much higher demand, patches and changes to the kernel came at an accelerated rate, and the master kernel source (http://www.kernel.org) didn't provide the functionality PPC users wanted/needed.

    With the 2.4 kernel, it seems that almost all support within the master kernel tree has been halted, and, hence, secondary architecture-focused trees have popped up to fill the void.

    PPC users have gotten accustomed to the kernel.org kernel source working for them (as it does for most other architectures), and, with that comes a feeling of acceptance. The fact that it hasn't been working as of late seems like a step backwards (or, in this case, sideways), and is pretty disappointing..

    I suspect that, as one response stated already, things will get sync'ed again as soon as it bubbles up to the top of Linus' to-do list.
  • by tenzig_112 ( 213387 ) on Tuesday February 13, 2001 @09:19AM (#435133) Homepage
    I knew I shouldn't have bought the Mac with "Binary Plus."

    Zeros, Ones, and sometimes Twos.

    I could kick my own ass for thinking different [ridiculopathy.com].

  • by nomadic ( 141991 ) <.moc.liamg. .ta. .dlrowcidamon.> on Tuesday February 13, 2001 @01:52PM (#435134) Homepage
    Don't post on slashdot anymore pal. This site is for people with brains in their heads, not prejudism in their heart.

    You must be new...
  • by eXtro ( 258933 ) on Tuesday February 13, 2001 @09:03AM (#435135) Homepage
    It may be that, for the time being, fixing outstanding bugs is more important than keeping the different architectures current. If you look through the kernel logs you'll see that from time to time architecture-specific changes are rolled in. The number of x86 users dwarfs the number of PPC/SPARC/etc users, so the time it takes to verify and integrate the other architectures might be better spent elsewhere for now.

    I'd hope that architecture politics stay away from the linux kernel.

  • by Ranger Rick ( 197 ) <slashdot AT raccoonfink DOT com> on Tuesday February 13, 2001 @09:17AM (#435136) Homepage

    Yup, go read the linux-kernel mailing list archives; at least once every couple of months, someone tries to give Linus a 300K patch, and he rejects it. Linus wants *small* patches, which do specific things, or implement one new feature.

    Kernel Traffic [linuxcare.com] has summarized this numerous times, if you don't want to wade through the lkml. Essentially, the only reason NON-platform-specific stuff gets through faster is because it all goes to Alan Cox, who then stuffs them into his own tree (the -ac* patches). When he decides they're stable enough to pass on, he breaks them up into bite-sized pieces for Linus.

    It sounds like the PPC maintainer isn't willing to do this, and so they're falling by the wayside.

    1st Law Of Networking: Loose ends are bad, termination is good.
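    The "breaks them up into bite-sized pieces" step can be illustrated with nothing but GNU csplit (in practice a dedicated tool such as splitdiff from patchutils is the more natural choice); the patch contents below are invented for the sketch:

```shell
# A combined diff touching two files, split into per-file pieces.
dir=$(mktemp -d)
cd "$dir"
cat > combined.patch <<'EOF'
--- a/foo.c
+++ b/foo.c
@@ -1 +1 @@
-old foo
+new foo
--- a/bar.c
+++ b/bar.c
@@ -1 +1 @@
-old bar
+new bar
EOF

# Cut at every "--- " header: piece00 is the (empty) preamble,
# piece01 and piece02 each hold the diff for one file.
csplit -s -k -f piece combined.patch '/^--- /' '{*}'
grep -q 'new foo' piece01
grep -q 'new bar' piece02
```

Each resulting piece can then be reviewed, applied, or rejected on its own.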

  • by Christopher B. Brown ( 1267 ) <cbbrowne@gmail.com> on Tuesday February 13, 2001 @09:32AM (#435137) Homepage
    If the PPC people are offering Linus huge patches that change lots of non-PPC-specific code and interact scarily with the little patches coming in from other groups, then what's he to do?

    He can choose to:

    • Take the PPC patches as gospel, and throw away everyone else's contributions,
    • Go through a whole lot of work breaking the PPC patches down into bite-sized components, or
    • Tell the PPC developers to turn the PPC patches into bite-sized components themselves, and ignore the huge patches that he hasn't time to integrate in.
    In effect, he's going with "option 3" here, and that's not an outrageous outcome. The LinuxPPC folk may not like this outcome, but it's neither new, capricious, nor is it discriminatory. Alpha has suffered from much the same issue...
  • by BenH ( 4366 ) on Tuesday February 13, 2001 @12:12PM (#435138) Homepage
    Well, I won't even try to reply to most of the crap I've read here, sorry guys, but some of your comments are just so...

    There are several points about the PPC arch. First of all, it's huge and growing very quickly. Why? Well, in a single arch, it handles all PCI PowerMacs (with new hardware coming out of Apple every 6 months), CHRP boxes (including some RS6k IBM machines), PReP boxes, APUS, NuBus PowerMac support coming in, Gemini, and I'm forgetting some...

    Add to that the embedded hardware (8xx CPUs, 4xx CPUs coming in soon) with the zillion hardware variations.

    So it's _huge_, it's quickly moving forward (remember USB: we needed to get that working for keyboard and mouse on iMacs while most x86 boxes didn't even have Windoze drivers for their USB controller), and the necessary consequence is that patches are huge. Fortunately, most of them just touch arch/ppc, include/asm-ppc or PPC-specific drivers.

    But I'm not telling you all here ;) The truth is that we are in fact paid by Microsoft to write crappy code and flood Linus with huge and unmergeable patches in the clear intent to cause a Linux fork!

  • by artdodge ( 9053 ) on Tuesday February 13, 2001 @09:23AM (#435139) Homepage

    I hate to be an old.fart.kernel.hacker and rain on your parade, but there is no news here. Stuff like this has been going on since 1995, at least, with all of the non-ia32 ports. It's a pretty simple problem - Linux supports a lot of platforms, and platform developers don't usually synchronize well with Linus's attempts at keeping some sort of release schedule for the "core" kernel. Linus himself worked on the initial AXP port, and it wasn't long before it fell off the "core" radar and had a separate team with independent patches feeding it. It wasn't a fork, it was just a concession to practical limits on Linus's time and energy.

    IIRC, Linus' usual behavior with platforms he doesn't frequently use is to let the primary maintainers feed him big merges periodically... he basically lets them run their own "development" cycles (the "odd" cycles for the core kernel) and merge "stable points".

    Since we're now in 2.4.small# mode, Linus is going to be extremely anal retentive about what he accepts, at least until 2.5 launches. I don't know the nature of the stuff the PPC people may be trying to feed him right now, but odds are unless it's either a critical bugfix or an "independent merge" that's been long planned for (f.e. reiserfs), it'll be rejected. When we get into later 2.4.X's, the policy will probably become more liberal.

    The rule has always been (ever since the Alpha and M68k ports) that if you run on an alternative platform and follow the latest, greatest developments for that platform, you track your arch maintainer's kernels, not the "mainstream". If you want to follow the latest-greatest core stuff, either use ia32 or use a known-good arch bundle and cross-port any necessary changes from your arch maintainer's tree.

  • by Phexro ( 9814 ) on Tuesday February 13, 2001 @09:06AM (#435140)
    from what i have heard (from linux-kernel) linus doesn't like to get huge patches. he likes to get small patches that do one thing, since that's easier to review.

    the linppc folks have had a hard time accepting this. they want to send one huge patch to get the ppc architecture up-to-date.

    it's not a new problem. see this [linuxcare.com] bit on kernel traffic, which covers some of it. there was another thread, where linus flamed people for sending huge patches, but i can't find it atm.
  • by sith ( 15384 ) on Tuesday February 13, 2001 @08:59AM (#435141)
    I think we may be missing part of the story here... we did not get an explanation of how linus is justifying rejections and such. Anybody have more info?
  • by Alan Cox ( 27532 ) on Tuesday February 13, 2001 @10:18AM (#435142) Homepage
    The first priority for 2.4.x has to be to get it rock solid for the majority of users (and unfortunately for most definitions that means x86). Linus has also been avoiding making vast numbers of changes in one go while working on the really hard to debug and critical fixes to the core code. In the meantime I've been merging chunks of architecture code into the -ac tree ready to go as I get them.

    A good, solid, well-maintained, self-sufficient ppc tree is one of the reasons we can do that, of course.
  • by runswithd6s ( 65165 ) on Tuesday February 13, 2001 @10:31AM (#435143) Homepage
    Every major distribution has its own kernel tree - no major distribution has ever shipped one of Linus's kernels.

    Incorrect. Debian ships with the original Linus kernel tarball. There are some kernels that you can install with various patches applied, but everything is available as a *.deb or a *.[dsc,diff,orig.tar.gz].

    In addition, Debian does not commit a Red Hat-ism and package awful software renames like kgcc. Why not call it what it is, gcc-2.7.2? I mean, come on. Pull the wool over the lusers' eyes, why don't ya. "Yeah, Red Hat has a special compiler for the kernel..." Whatever.

    Another nice thing about Debian's kernel packaging is that the very tools the developers use are available to the average user.

    • bash$ apt-get install kernel-package
    • bash$ cd /usr/src/linux
    • bash$ make-kpkg --revision="myversion.1.1" --rootcmd fakeroot binary
    • bash$ cd ..; dpkg --install kernel-image-<version>_myversion.1.1_<arch>.deb


  • by OlympicSponsor ( 236309 ) on Tuesday February 13, 2001 @09:38AM (#435144)
    Anyone who's ever read the linux-kernel mailing list knows how vindictive and political Linus is. There's nothing he loves more than excluding platforms from HIS kernel (he's very protective of it, he only lets a couple of people submit changes, and even then only if they pay him a royalty).

    Also, this problem just came out of the blue. It certainly was never discussed (and re-discussed and re-discussed ad nauseam) over the ISDN patches.

    Get a grip people.
