
TurboLinux Releases "Potentially Dangerous" Clustering Software?

Posted by Hemos
from the new-directions-new-trials dept.
relaye writes "The performance clustering software and services announced today by Linux vendor TurboLinux Inc. and a cabal of partners including Unix vendor SCO Inc. takes the Linux market in an unusual and somewhat risky direction, analysts are saying." The article cites the risk of forking the kernel - not an incredibly probable outcome, but a thought-provoking scenario. The danger comes if Linus decides not to incorporate TurboLinux's changes into the kernel.


  • by JeffI (87909)
    Well, it sounds like they should have checked with him first, eh? Regardless, more good steps in making this powerful OS even more powerful with clustering. Good job.
  • Does anyone really see this as a threat? Why wouldn't Linus add this? I guess the media is just trying a dose of FUD on us.

  • Maybe these guys can explain to me how the inclusion of Pacific TurboLinux's unblessed kernel patches to support clustering is any different from the non-standard kernel that ships with Redhat.

    Now they must follow GPL licensing restrictions, but this doesn't legally prevent them from selling a tailored distribution which contains a mix of GPL patches and proprietary closed source driver modules... and it's not any more forked than the heavily patched kernel source that ships with Redhat Linux.
  • by PD (9577) <slashdotlinux@pdrap.org> on Wednesday October 27, 1999 @08:32AM (#1583230) Homepage Journal
    The analysts are getting too jumpy over nothing. TurboLinux has the right to make whatever changes they want to. That's the *purpose* of open source. If Linus was concerned about a code fork, then logically he would have chosen a different licence.

    We should all be pleased that Linux is so flexible technically and legally that anyone who has a problem can either use Linux to solve the problem, or change Linux to solve the problem.

    Using a feature of the operating system like the open source licence is no different than using any other feature of the operating system, like support for a TV Tuner card. The users will use any features of the operating system in the way that they want to, and nobody can tell them they can't.

    Turbo Linux isn't forking the code, they are using one of the most powerful features of the code.

    And that's my view.
  • I just fired off an e-mail to TurboLinux yesterday re: maintenance of their patches. I was wondering whether they would receive approval for their patches from Linus and, if not, whether they would continue to maintain them as patches against the kernel, not as complete kernel releases. That way, some, if not all, of their patches can be incorporated, and others can be downloaded and applied as needed by those who want them (such as the secure Linux patches at kerneli.org).

    - Michael T. Babcock <homepage [linuxsupportline.com]>
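The maintain-as-a-patch workflow described above can be sketched in a few lines of shell. This is a toy illustration; the file names (stock-1.0/, sched.c, clustering.patch) are invented, and a real vendor patch would of course be generated against the actual kernel source tree:

```shell
# Vendor keeps its change as an edited copy of one upstream file...
mkdir -p stock-1.0 vendor stock-1.1
printf 'core scheduler code\n' > stock-1.0/sched.c
cp stock-1.0/sched.c vendor/sched.c
printf 'clustering hook\n' >> vendor/sched.c

# ...and ships it as a unified diff, not a whole kernel release.
# (diff exits 1 when the files differ, so mask that for scripting.)
diff -u stock-1.0/sched.c vendor/sched.c > clustering.patch || true

# A user applies the patch to their own pristine copy of the source.
cp stock-1.0/sched.c stock-1.1/sched.c
patch stock-1.1/sched.c < clustering.patch
grep 'clustering hook' stock-1.1/sched.c
```

Patches that Linus does accept simply drop out of the vendor's patch set; the rest stay available for download by those who want them, just as with the kerneli.org patches.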
  • by Anonymous Coward
    So the danger comes if Linus decides to fork the kernel (by refusing to incorporate changes already adopted by various vendors). It isn't inevitable that Linus will always lead the kernel releases. Maybe recognizing that is part of the evolution of Linux.
  • Disclaimer: I'm not really familiar with the Turbo Linux Clustering Technology.
    That being said, I wonder how useful the changes to the Linux kernel will be if the other tools to manage/configure/use the clustering technology are not available to the masses. An analogy: a CD without a drive is just a shiny coaster.
  • by MAXOMENOS (9802) <maxomai AT gmail DOT com> on Wednesday October 27, 1999 @08:33AM (#1583235) Homepage
    Speaking frankly, I think the fears of code forking are unfounded. Linux is very good for high-performance clustering, but here at the Linux General Store, getting high-availability clustering has been a pain in the rear. TurboLinux's kernel patches to support high-availability clustering are an easy win for Linux, and a no-brainer for Linus. TurboLinux did the Linux community a great service by adding these patches (IMO).

  • Even if it doesn't make it into the main kernel, it's open source, it's supported by a vendor, so what's the problem? Every time a new "main branch" kernel comes out, the TurboLinux people can make their same changes to it that they did to previous versions. And if the code they're modifying to do it doesn't change much between kernel versions, it will be trivial for them. If somebody rips out and re-writes the stuff they're based on, then they have a problem - but anybody who cares about clustering in the open source community will be able to help them.
  • by Pascal Q. Porcupine (4467) on Wednesday October 27, 1999 @08:35AM (#1583237) Homepage
    Before everyone starts to scream bloody murder about how this will fragment the Linux community, keep in mind that it wouldn't be in TurboLinux's best interests to fork it in incompatible ways. They are only keeping the possibility open in case Linus doesn't accept the changes, and even then it'd be stupid of them not to keep submitting their changes to the main kernel source. Everyone can still win in this situation.

    Unfortunately, one of the parties that can win is the Microsoft PR department, which has been shouting FUD about the fragmentation of Linux for quite some time. So hopefully a kernel fork won't be necessary; even if a fork didn't cause the real problems of fragmentation, MS would still love the opportunity to call it fragmentation, whether it's a bad thing or not.

    Personally, I'm all for kernel forking. It's not like 8086 Linux or RTLinux are currently part of the main kernel distribution, nor should they be. They fill special needs rather than being something good for everyone. A clustering-optimized kernel would be similar, IMO. Clustered systems tend to be homogeneous and not have any exotic hardware to support (with the exception of gigabit network cards, which are generally supported just fine by the main kernel as it is). It's a special-need kernel, not something for general consumption. And as much as every article on /. has a comment saying "Man, I'd like a Beowulf of these babies," most of the people saying that will never have a Beowulf or a need for a clustered system. (I mean, come ON, what would you, personally, use all that computing power for?)
    ---
    "'Is not a quine' is not a quine" is a quine.

  • It might not be added if Linus didn't like the implementation (unlikely, given the backing, but it might happen sometime).

    But would that be a problem? I don't think so - it would just mean that Turbo customers wanting those modifications wouldn't be able to use the latest stock kernel. That's their choice - it doesn't cause anybody else a problem unless large numbers of closed-source application developers start producing apps that ONLY run on the modified kernel.

    Seems to me Redhat already does this with their nonstandard module-info thing... it might be easy to get around, but it does mean that the kernel releases don't plug in and go.


  • by Erich (151) on Wednesday October 27, 1999 @08:35AM (#1583239) Homepage Journal
    SCO CEO Doug Michels was deeply critical of Linux: "Companies like Red Hat ... take Linux technology with a lot less value added, and they package it up and say, `Hey, this is better than SCO.' Well, it isn't. And very few customers are buying that story."

    Hmm... I've used SCO before...

    I think that for most people SCO is inferior to Red Hat. Look at how much extra stuff Red Hat puts into their product, and how well it works with other stuff... Red Hat also does an amazing job of detecting hardware nowadays.

    Not to say that SCO doesn't have lots of interesting things in it... there are some very nifty security model aspects that SCO has, for instance. But for people who want a web server or an smtp/pop server or a workstation, for cheap, with lots of power, I think that Red Hat provides a better solution. And I think that many customers are realizing that.

    Not to mention a cooler name. :-)

  • by Effugas (2378) on Wednesday October 27, 1999 @08:35AM (#1583240) Homepage
    TurboLinux is making a lot of noise about the work they've done; meanwhile, aren't they just taking an existing (very impressive) kernel patch for virtual servers and claiming it as their own?

    There's an aspect of dirty PR pool going on here.

    Gotta love, incidentally, more Linux bashing by SCO. Their hatred is so tangible. Then again, at least they're honest.

    Overall, I hope Linus doesn't feel pressured to incorporate a technically inferior solution because somebody is attempting an ad hoc kernel power grab. We don't want people saying to Linus, "You're going to put this into the kernel because we've made it the standard." Embrace and Extend indeed.

    That being said, I've heard very good things about the patch TurboLinux has appropriated without due credit. I've also heard some insanely interesting things about MOSIX, the virtual server project started in Israel and made GPL around six or eight months ago. Mosix is immensely interesting mainly because of its ability for seamless and invisible process migration--all processes, not just those written via PVM/MPI, get automagically clustered.

    Very, very cool.

    Comments from people more knowledgeable than I am about the details glossed over here would be most appreciated.

    Yours Truly,

    Dan Kaminsky
    DoxPara Research
    http://www.doxpara.com

  • 1) Will media types and pseudo media types ever understand the differences between ``public domain'' and ``copylefted software''? This is, of course, unless I'm just kidding myself into thinking that the GNU General Public License is, well, a license.

    2) Doug Michels... is a gimp? If you think that is some sort of unfounded flame, you should really consider reading that interview. He's a complete and utter tool. Few people are that lost... hee hee hee... He must be living on a totally different fscking planet. =P

  • by Fnkmaster (89084) on Wednesday October 27, 1999 @08:37AM (#1583244)
    Well, I am not entirely sure of what sort of changes to the kernel the Turbo Linux folk have made, but unless they provide some actual functionality to non-Turbo Linux clustering users, they *shouldn't* be incorporated into the main kernel fork.

    This article clearly states that Turbo Linux plans to keep some chunk of their clustering technology proprietary (presumably all parts of it that operate in userland). If they don't plan on making their HA clustering work for the rest of us in any way, why should the kernel maintainers add support for their HA clustering, unless it somehow is part of an open standard?

    I have no big moral problem with Turbo Linux choosing to fork the kernel. It'll be their problem if they introduce compatibility issues. People simply won't use Turbo Linux. The right to fork is an integral part of the GPL. Let the market (i.e. user choice) decide. If the features are useful, people will want them, and they will make their way into the mainline kernel. If they aren't useful to us, they won't, and TurboLinux will just have to patch every kernel release (frankly I don't care if they do, as long as they are abiding by the terms of the GPL).

  • Does anybody know what they are changing?
    (another flavor of kernel support for cluster-wide resources/pids/etc., or something?)

    The article basically said nothing about what their system does to make it better/different than any existing clustering setup.

    dv
  • Since *BSD forked, Linux HAS to, right? Since 'free' entities can't share a common vision? Bah!

    At best, the code is good and Linus incorporates it as a kernel option.

    At worst, it's a patch for a very specialized function, examples of which already exist:

    Embedded Linux
    uLinux
    RT Linux
    e2compr (compression for the ext2fs filesystem)

    I don't see something as specialized as server clustering forcing an actual 'fork' of the Linux kernel, except as a vertical application (Like Embedded Linux).

    jf
  • by Anonymous Coward
    Well it sounds like they should have checked with him first now...eh?

    Yes, you're right. Linus has the final say in Linux anything. So they should have checked with him first. Of course, this points out even more than before how precarious the whole Linux project is. How close to forking it is, and how much more likely the fork will become with time (and with more commercial use of Linux).

  • Is there really any reason the changes would become part of the main kernel if the rest of it is closed? I think part of the problem is that if the changes slow down a non-clustered kernel, they won't make it in; and if they don't, should Linus incorporate them when the userspace clustering software won't be open, and so can't be used by the free software community as a whole?
  • But we already HAVE that kind of fragmentation in 8086 Linux and RTLinux and likely a dozen other special-interest kernel forks, and so far I haven't seen any collapse of the Linux world because of that...
    ---
    "'Is not a quine' is not a quine" is a quine.
  • I'm surprised this is even an issue. Linux isn't NetBSD, with tight oversight and a cathedral-like concentration on purity. This is Linux -- people are supposed to be able to contribute freely.

    This isn't to say that all submitted diffs should be merged immediately, but why give up one of Linux's great strengths -- the ease of contribution?

    --

  • How does RedHat ship with a non-standard kernel?

    Simply because they pre-configure the source files to build everything as modules?

    The RedHat kernel is NOT heavily patched, nor has it ever been..
  • I think this is an outstanding step for the market. Any new development that adds functionality to Linux is, in my opinion, a step in the right direction. I also tend to believe that if this all works out and clustering becomes an in-demand feature, Linus will incorporate it into his kernel. After all, everyone working together to make better software is what Linux is all about.
  • The question of whether Linus will accept these kernel patches is a matter of what is being changed. If they are architecturally sound and take the kernel in a direction Linus wants it to go, they will be incorporated; if they are just some glue for proprietary stuff that TurboLinux sells, then they don't have a chance in hell.

    The other question is -- could they, or would they, fork the kernel if TurboLinux doesn't get their way? The alternative is either to make do without their enhancements or to port their patches to each kernel version. The second option is not too far from what other vendors do in backporting security updates to the old, shipping version of their kernel (COL w/2.2.10 has patches from 2.2.12/13 in a 2.2.10 update RPM). There are also other distros that add beta or unsupported patches, like devfs (correct me if I am wrong on this point; I don't have this personally).

    What does the GPL allow? They don't own Linux; no one does. What would they be able to accomplish (assuming Linus doesn't accept their patches) without the support of the core developers?

    I guess that I have more questions than answers. GPL'd software has never been as popular as it is now, and some of these issues are being tested on a large scale for the first time. Or maybe not -- the GPL has been around for many years. Maybe this kind of thing has happened before and we can just look back and learn from experience. If anyone can point out an instance, I would appreciate it greatly.

    Enough rambling for one post.
  • by Otto (17870) on Wednesday October 27, 1999 @08:42AM (#1583254) Homepage Journal
    If they fork the kernel, then they have the responsibility of maintaining their new kernel and integrating new features and so forth. Fine. They have that right, as long as all the source is available. Good for them! Code forks make the linux world a better place, because they cause the best options to be produced. Plus, standard linux can steal their code (the good parts) and integrate it back into the normal kernel if they want. Good too!

    However, if they don't want all that responsibility, they can release kernel patches to be applied to the standard kernel to make it work with their system. Good too. Those may be eventually integrated into the standard kernel distribution, if they're worthy.

    Either way, who cares? The ONLY entity this could hurt is TurboLinux itself, for fear of being incompatible with the standard kernel. And that's not likely anyway..

    This article is FUD.

    ---
  • Look, this is all GPL'd. So long as they ship the source, who cares? If TurboLinux has to toe the Linus line, or is prevented from doing something by Alan Cox, we are back to the same position we have with Microsoft.

    The kernel is already forking, with the Red Hat patches and now Turbo Linux. We are living in a dream if we think that Linus is going to control all those vendors from doing their thing.

    And now, to keep the Moderators happy: "Linux is cool, /. is cool, I hate Gill Bates".

  • Forks of GPL software are different from forks of software with other licenses, because their source must always be disclosed, and the source can be incorporated into any of the other forks at any time. Thus, forks will tend to converge as good features from other forks are incorporated into them.

    We're also not talking about a "fork" so much as a patch to the main kernel thread. There's little chance that this patch would be allowed to diverge from the main kernel thread, as it's easier for TurboLinux to maintain it as an add-on - otherwise, they have to maintain an entire kernel rather than just a patch.

    A lot of the talk about the danger of forking the Linux kernel is FUD or ignorance of the licensing issues.

    Thanks

    Bruce

  • Well, it DID have the hardware RAID support prior to it being in the main kernel, and I believe it had some knfsd NFS compatibility patches, although I'm not too sure about that one. If one wanted to upgrade the kernel and was using features patched in, one did have to patch the kernel a bit. I haven't checked lately as to how patched it is (in 6.1).

    Tim Gaastra
  • With my little Beowulf-building experience, I have to say I agree with you; not only is a specialized "cluster node" kernel not a bad thing, it's probably a needed step: e.g. cluster nodes need not be too picky about security/authentication when talking to their 'master node', since that's all they talk to. We already have so many patches to deal with that it's unwieldy (TCP patches, memory patches, unified process space patches, etc...)

    As long as (a) the changes are made public (and the GPL so far has ensured that), and (b) the 'cluster' kernel follows (closely ;-) the tech advances of the main kernel (e.g. SMP, a sore point so far in clustering) we should be ok...

    Just my $.02

  • To put it politely, I thought that was the whole GNU idea - take the code, do what you want, release it.
  • by Bruce Perens (3872) <bruce@perens.com> on Wednesday October 27, 1999 @08:51AM (#1583264) Homepage Journal
    We had a major fork of the Linux C library, "libc5", vs. GNU LIBC. That fork was resolved once the Linux fork had a chance to mature. Its modifications were incorporated into the main thread.

    Folks, we've been here before. The forks converged. There's no reason that future forks of GPL software will not converge.

    Thanks

    Bruce

  • by Anonymous Coward on Wednesday October 27, 1999 @08:51AM (#1583265)
    and Giganet Inc. of Concord, Mass., for ``VI'' software that allows the cluster nodes to communicate with minimal overhead on the processors.

    that must be some new functionality I wasn't familiar with. Thanks, Computerworld!

    Oh, and Take _That_, emacs!!

    :)

  • by Kaz Kylheku (1484) on Wednesday October 27, 1999 @08:57AM (#1583269) Homepage
    They make it sound like someone is jumping out of an airplane on a motorcycle or something.

    So what if TurboLinux forks the kernel? They will either die out or have to keep a parallel development stream whereby they keep taking mainstream kernels and patch their changes onto them. No big deal. There are nice tools for this, like CVS update -j or GNU patch. Eventually, their stuff will mature and may be accepted into the mainstream.

    Forking happened before (anyone remember SLS?).

    I think that for any significant feature to be added by an independent software team, forking *has* to take place. In fact, Linux is continuously sprouting many short-lived forks. Any time a hacker unpacks a kernel and does anything to it, wham, you have a tiny fork. Then when it becomes part of the stream, the fork goes away. To create a significant feature, you may have to branch a much longer-lived fork. And to let a community of users test that feature, you *have* to release from that branch. Now crap: you are ostracized by the idiot industry journalists, who will accuse you of fragmenting the OS.

    Linus *can't* integrate Turbo's changes until those changes are thoroughly hammered on by Turbo users, so a fork is required. The only kinds of changes that Linus can accept casually are ones that do not impact the core codebase. For example, if someone develops a driver for a hitherto unsupported device, great. The driver can only screw up kernels that are built with it, or into which it is loaded. Just mark the driver as very experimental and that's it.
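The "parallel development stream" described here amounts to regenerating one diff against each new upstream release with tools like GNU diff and patch. A toy sketch, with invented file contents and version numbers standing in for real kernel trees:

```shell
# Old and new upstream releases (contents invented for illustration).
mkdir -p linux-2.2.12 linux-2.2.13
printf 'init\nscheduler\nnetwork\n' > linux-2.2.12/main.c
printf 'init\nscheduler\nnetwork\nnew upstream driver\n' > linux-2.2.13/main.c

# The fork's change, made against the old release...
sed 's/scheduler/scheduler with clustering/' linux-2.2.12/main.c > main.c.turbo
diff -u linux-2.2.12/main.c main.c.turbo > turbo.patch || true

# ...usually re-applies to the next release, because patch matches on
# surrounding context rather than absolute line numbers. When upstream
# rewrites that context, the fork maintainer has to fix the patch up by hand.
patch linux-2.2.13/main.c < turbo.patch
grep 'clustering' linux-2.2.13/main.c
```

This is why a fork whose target code is stable upstream is cheap to carry, and a fork against heavily churning code is expensive.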
  • There's not much question that there are significant demerits to code forks. Consider the plethora of mutually-incompatible patches to GCC that resulted from people maintaining forks for:
    • Pentium optimization
    • Trying to support C++
    • FORTRAN
    • Pascal
    • Ada
    • Special forms of optimizations (IBM Haifa stuff, for instance)

    The net result of those forks was that you could have a compiler that covered one purpose, but not necessarily more than one.

    I support some R/3 [hex.net] code where our local group has "forked" from the standard stuff SAP AG [sapag.com] provides; it is a bear of a job just to handle the parallel changes when we do minor "Legal Change Patch" upgrades. We've got a sizable project team in support of a major version number upgrade; the stuff that we have forked will represent a big chunk of the grief in that year-long project.

    I would consider a substantial fork of the Linux kernel to be a significantly Bad Thing. [tuxedo.org]

    Note that if it forks, the Turbo version may have a hard time supporting code written for the non-Turbo version. Major things that are likely forthcoming include:

    • New filesystem support, including LVMs, ext3, Reiserfs, SGI's XFS
    • New devices such as network cards, SCSI host adaptors, USB devices
    • Further support for 64 bit architectures, and support for 64 bit structures on 32 bit architectures ( e.g. - solving such issues as the 2038 Problem and the 2GB File Size Limit Problem and the 2GB Process Size Limit and such)
    Deployment of such facilities would be substantially hampered by a kernel fork.
  • by Skyshadow (508) on Wednesday October 27, 1999 @09:00AM (#1583272) Homepage
    This isn't the only place this is happening, y'know.

    I have a friend who works at SGI, and we were just talking the other day about how their development people have been frustrated lately by their inability to get certain scalability-oriented bits included in the kernel. So, essentially, SGI's Linux is headed for this same sort of fragmentation for the same sort of reason.

    I told him that if he killed Linux I'd slash his tires, but I don't think he took me seriously.

    We in the community have nothing to fear but fragmentation itself. The 10,000 faces of UNIX is what originally killed it as a server operating system -- that's why I refer to Linux as being the Second Coming of UNIX so often. The really key thing is that it runs on a common platform (Intel) and it's not the mess that the commercial UNIXes evolved into during the last decade.

    I don't know how to stop this from happening, only that it must be stopped.

    ----

  • much as how every article on /. has a comment saying "Man, I'd like a Beowulf of these babies," most of the people saying that never will have a Beowulf or a need for a clustered system. (I mean, come ON, what would you, personally, use all that computing power for?)

    PovRayQuake of course!

    That is, for the people who aren't simulating nuclear explosions of their neighbor's dog.
  • Do you have any insight into why the GNU Emacs / XEmacs split is staying split?

    I actually think that this subject is really interesting... it would be really good to have someone do some serious historical research into code forks.

    In particular, I suspect that BSD-licensed software is more susceptible to code forks than GPL software, because of the temptation to do proprietary closed-source forks. It'd take more knowledge than I have to pin down whether this is really the way it works.



  • by Blue Lang (13117) on Wednesday October 27, 1999 @09:03AM (#1583278) Homepage
    Maybe these guys can explain to me how the inclusion of Pacific TurboLinux's unblessed kernel patches to support clustering is any different from the non-standard kernel that ships with Redhat.

    Now they must follow GPL licensing restrictions, but this doesn't legally prevent them from selling a tailored distribution which contains a mix of GPL patches and proprietary closed source driver modules... and it's not any more forked than the heavily patched kernel source that ships with Redhat Linux.


    Please don't moderate total falsehoods like this up - this is flamebait. Alan Cox, the actual primary code architect of the Linux Kernel, is a Red Hat employee. While RH does often ship a 'tweener' kernel, or one that is in some state of AC's patches, there is nothing at all non-standard about it. They simply ship the newest build that they have on hand at the time of pressing. They occasionally even update the kernel image during single revisions.

    And, if I'm wrong, please reply with a list of drivers or patches that RH has included since, say, 4.0 or so, that weren't available as kernel.org + current AC patch.

    Secondly, IMHO, SCO's CEO needs a lot more fiber in his diet. You could randomly take away every other file in Red Hat's distro, ship it, and it would STILL have 'more value' than SCO.
  • by SoftwareJanitor (15983) on Wednesday October 27, 1999 @09:06AM (#1583279)
    Doug Michels shouldn't be expected to say anything else, but I don't see how he can expect anyone who has seriously used or evaluated both SCO's products (OpenServer and UnixWare) and Red Hat's product to believe him. Obviously he is speaking to PHBs who don't know enough to dismiss his argument outright.

    Certainly in price/performance, there can be little dispute that Red Hat beats SCO for commercial use in all but the most extreme circumstances. SCO's products are very expensive if you purchase all of the debundled pieces it takes to match what you get in a Red Hat box for under $100, let alone the per-user license fees. And even if you purchase all of SCO's commercial offerings, you still end up having to add a significant amount of open source to really make it comparable to Red Hat's offering, and that is all extra work.

    Michels's point about Red Hat not adding extra value is misleading. It doesn't matter whether Red Hat themselves add the value (as opposed to other Linux vendors such as SuSE or Caldera); what matters is the overall value of the package. There is no doubt in my mind that the overall package from Red Hat has, for most people, a much higher value than what you get from SCO, and at a small fraction of the price.

  • Redhat kernels (at least, the ones I tried...) are not identical to the standard ones, and so the standard kernel patches can't be applied. This is a nuisance, but only to Redhat users, who have to download a huge RPM instead of a few-hundred-kilobyte patch file: it doesn't hurt anybody else.

    The only other problem I've had is that Redhat initscripts require build-specific System.map and module-info files. The stock release doesn't create those, so you have to bodge around it. Maybe this is documented properly somewhere now - if so, I haven't found it yet. Again, a pain only to Redhat users.


    My point exactly... just compare a .depend made from a make config on a pristine kernel to one made with a Redhat-supplied kernel to see the differences. This is not a value judgement against Redhat for including non-standard kernel patches with their product; they have every right to do so. Just as Turbo Linux has every right to modify the kernel and include non-blessed patches with their product, as long as they don't break the terms of the GPL. This is a non-issue, as so many others have stated.
  • See top... my mistake.
  • As much as how every article on /. has a comment saying "Man, I'd like a Beowulf of these babies," most of the people saying that never will have a Beowulf or a need for a clustered system. (I mean, come ON, what would you, personally, use all that computing power for?)


    Oh, I don't know... say, a Beowulf and a CD-ROM jukebox that could take in 200 CDs and spit out CDs filled with MP3s of the CDs in under an hour.



    --
  • I would be concerned about the customization if it prevented me from compiling my own kernel and using that instead.

    I've not done a fresh install of RHL since 5.1, so "perhaps they've gotten tremendously more proprietary since," but I rather doubt that.

    The concern with TurboLinux customizations is if this makes TurboLinux kernels not interoperable with other kernels.

    This will only matter if people adopt TurboLinux in droves; if they do their thing, producing a bad, scary forked kernel, and nobody uses it, this won't matter. It's not like the "tree in the forest;" if nobody is there using TurboLinux, nobody cares about a disused fork.

  • i mean, seriously. who frickin' cares what Linus says? you have the code. don't like it? who frickin' cares, incorporate your own changes. i do, and i love it. Linus' needs are what drive kernel development, not overall needs and issues. the PCMCIA mess should teach you that, as should the lousy IP stack implementation. it's about time someone stood up to this BS development model and actually did something based on performance or whatnot in a big way. the current model of "Well, it's Linus' OS" is a surefire way to stagnate development.
  • That depends on how much functionality is in their userland utilities. If those utilities are easily implementable, then my earlier comment applies: *if* the improvements provide functionality to the rest of us, they should be added to the kernel. Clearly, if the userland utilities are a small part of the HA clustering technology and we could implement them ourselves, then we should add the TurboLinux kernel code to the primary fork. If, however, they are keeping all the meat to themselves and adding only a minimal amount of functionality to kernelspace, then there's no reason to.

    The point is, since I haven't seen the source nor heard from a more technically sophisticated source than this article, I don't know how much stuff they are putting in kernelspace. However, I have the utmost faith in the kernel maintainers (Linus, Alan, etc.) and the desires of the Linux user base as a whole to direct patch incorporation into the kernel in the most appropriate way. What I said still holds: if their patch adds value for us (or can be made to add value with a reasonable amount of effort), then by all means it should and will be put into the main kernel fork.

  • by Parity (12797) on Wednesday October 27, 1999 @09:27AM (#1583290)
    I know how to stop it from happening, but I don't have the power to -do- so.
    Just get Linus &co. to add all the 'inferior' patches to the kernel and put them in as non-standard build options...

    Build with SGI scalability extension (non-standard) [y/N]?
    Build with TurboLinux clustering extensions (non-standard) [y/N]?

    Maybe give them their own 'non-standard extensions' section with warnings that enabling these extensions may break things, these extensions are not as thoroughly tested as the 'main' portion of the kernel, etc, etc.

    It's not like there aren't unstable/experimental build options already.
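    To make the suggestion concrete, here is what such a section might look like in the 2.2-era kernel configuration language (a hypothetical sketch; the option names and menu are invented for illustration, not actual patches):

    ```
    # Hypothetical Config.in fragment; CONFIG_ names are invented.
    mainmenu_option next_comment
    comment 'Non-standard vendor extensions (EXPERIMENTAL)'
    if [ "$CONFIG_EXPERIMENTAL" = "y" ]; then
       bool 'SGI scalability extensions (EXPERIMENTAL)' CONFIG_SGI_SCALABILITY
       bool 'TurboLinux clustering extensions (EXPERIMENTAL)' CONFIG_TURBOCLUSTER
    fi
    endmenu
    ```

    Options guarded by CONFIG_EXPERIMENTAL already only appear when the user asks for "development and/or incomplete code/drivers", so the warning mechanism is there today.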

  • I would be concerned about the customization if it prevented me from compiling my own kernel and using that instead.

    And how are you prevented from compiling and booting a standard "blessed" Linux kernel on Pacific Linux? You may lose the clustering capabilities, but that's no different from compiling a non-RAID-enabled kernel on a system which depends on the RAID capabilities that were included as non-blessed patches in previous Redhat releases.
  • The "VI" system mentioned in the article is probably one of the changes. I have never used VI under Linux, or VI with the Giganet hardware, but I wrote the original VI prototypes for Windows NT. It's a communication system that gets lower latency than the kernel TCP/IP stack by exporting some hardware registers directly to the user applications, allowing them to send and receive network data without ever doing a kernel call. You need special hardware to do this without creating huge security holes, of course! You also need an extra kernel interface to allow the user program to pin/lock some amount of virtual memory, and a special user-level communications stack. This can't be used to talk to computers across the internet because it doesn't use IP protocols. But if you have a cluster application where message latency is critical, it can give you a big performance boost.

    PS - This was a much bigger benefit under Windows NT, where the system call overhead was much higher than it is under Linux. But it should still help out Linux.
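    A minimal user-space sketch of the idea described above (all names are invented; in real VI hardware the descriptor ring would be mapped NIC memory and the payload would be DMA'ed from pinned user buffers — here a plain ring buffer stands in for both):

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    #define RING_SLOTS 8
    #define MAX_MSG    64

    struct vi_desc {
        uint32_t len;            /* message length                       */
        uint8_t  data[MAX_MSG];  /* payload (real VI: DMA from pinned    */
    };                           /* user memory, not a copy)             */

    struct vi_ring {
        volatile uint32_t head;  /* producer index (user code writes)    */
        volatile uint32_t tail;  /* consumer index (the "NIC" writes)    */
        struct vi_desc slot[RING_SLOTS];
    };

    /* Post a send: plain memory stores, no system call on the data path. */
    static int vi_post_send(struct vi_ring *r, const void *buf, uint32_t len)
    {
        uint32_t h = r->head;
        if (h - r->tail == RING_SLOTS || len > MAX_MSG)
            return -1;           /* ring full, or message too big        */
        struct vi_desc *d = &r->slot[h % RING_SLOTS];
        memcpy(d->data, buf, len);
        d->len = len;
        r->head = h + 1;         /* the "doorbell": hardware polls head  */
        return 0;
    }

    /* Stand-in for the NIC pulling the next posted descriptor. */
    static int nic_poll(struct vi_ring *r, struct vi_desc *out)
    {
        if (r->tail == r->head)
            return -1;           /* nothing posted                       */
        *out = r->slot[r->tail % RING_SLOTS];
        r->tail++;
        return 0;
    }

    int main(void)
    {
        struct vi_ring ring = {0};
        assert(vi_post_send(&ring, "hello", 5) == 0);
        struct vi_desc d;
        assert(nic_poll(&ring, &d) == 0);
        assert(d.len == 5 && memcmp(d.data, "hello", 5) == 0);
        return 0;
    }
    ```

    The point is that the send path is just a memcpy and two index updates; the kernel only gets involved once, up front, to pin the memory and map the ring.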
  • MOSIX was designed to distribute multiple processes throughout several machines.
    It really isn't useful in a network server environment, but it's very useful for computation-intensive work (especially work that doesn't need to hit the disk that much). Actually, besides some difficult security concerns, MOSIX may even make network server software less efficient.


    For TurboLinux, from what very little I know about it, the opposite is true (it's designed only for internet server things).

    The stuff TurboLinux is doing doesn't seem earth-shattering to me, either. Useful, maybe, but many others have done or are doing similar things that might be better.


    Now, what would be great is to have for Linux what VMS had (to be more specific, it was OpenVMS, I think); that would have some exciting consequences.

  • ...and Giganet Inc. of Concord,Mass., for ``VI'' software that allows the cluster nodes to communicate with minimal overhead on the processors.

    Wow. VI has always been my choice for situations when I didn't want the overhead of EMACS, but I didn't know it did clustering! :) :) And who are these Giganet people? Is that like nvi or vim?

  • by jd (1658) <imipak&yahoo,com> on Wednesday October 27, 1999 @09:44AM (#1583299) Homepage Journal
    1. Any changes TurboLinux make to the kernel must be made available, in their entirety, to all other developers and distributions, under the GPL.
    2. If people like the mods and Linus chooses not to use them, then the distributors will simply package them up with the distributions anyway, so there's no fragmentation.
    3. If people like the mods and Linus chooses to use them, there's no fragmentation.
    4. If people don't like the mods, it doesn't matter, as nobody'll make use of them.
    5. This is not significantly different from any of the "non-standard" kernel patches that are provided, be it from Alan Cox (whose patches are worth two or three "official" ones) or anyone else. (PPS is unlikely to make it into the kernel. Nor are any of the ACL patches. The International patches and IPSEC can't, until there's worldwide agreement on crypto tech. The E1000 patches from Intel aren't being offered as part of the kernel. Nor were the Transputer patches.)
    6. The whole point of Open Source and the GPL is that you have evolution, and evolution requires evolutionary pressure. You only get that when the environment changes, or alternatives are competing with each other.
  • I agree. TurboLinux can take whatever business tack they want, as long as they stick to their licensing.

    But I don't like the paragraph..

    There is precedent for Torvalds quickly deciding to incorporate changes to the kernel produced by commercial developers, Iams said. Engineers at Siemens and Linux distributor SuSE Inc. provided a 4G-byte memory extension that Torvalds incorporated.

    This seems to be a backhanded swipe at Linus. They make it seem as if Linus should do it because he did it for someone else. Well, SGI has had a bunch of patches rejected ( http://oss.sgi.com/projects/linuxsgi/download/patches/ [sgi.com] ). So have a lot of others. Tough luck... But a precedent?

    Media pressure on Linux is dirty, ignorant, and non-productive when it dictates what someone should be doing. Computerworld sucks and blows at the same time.
  • Emacs and XEmacs are staying split mostly for two reasons.

    The first is that RMS won't put any sizable code into Emacs without legal papers assigning copyright to the FSF or placing the work in the public domain. (One line bug fixes are ok, though.) Given that RMS has been burned in the past, this is an understandable position. But it does mean that he can't simply lift code from other GPLed stuff (ie, XEmacs) without the author signing said papers. Since XEmacs doesn't do this, the specific author of a piece of code isn't always known, or may be difficult to contact.

    The second reason is due to a personality conflict between certain XEmacs developers and RMS. Since I'm not a party to any of the conflicts, I can't comment in detail, but it does make getting those legal papers a bit more difficult (read as "hell will freeze over first").
  • by docwhat (3582) on Wednesday October 27, 1999 @09:53AM (#1583305) Homepage
    Hello!

    I am the kernel maintainer for TurboLinux. I'd like to dispel a few myths here:

    • The kernel isn't "forking" from what Linus distributes any more than Debian, Redhat, SUSE, etc. do. We add extra patches for enhanced functionality, like RAID, IBM ServeRAID, etc.
    • The actual kernel patch that is used by TurboCluster is *in the kernel rpm*. You can grab the source rpm and look at it.
    • The TurboCluster was based upon the Virtual Server in the beginning. Since then, we have hired a company to re-write it from scratch. There is nothing left of VS in the Cluster code except some concepts (but none of their code). Did I mention it is GPL'ed in the source?
    • Did I mention that all the patches are available from the kernel source RPM?
    • At some point, the Cluster module will be submitted to Linus. However, we only know it works for 2.2.x. I *will* submit it for 2.3 and 2.5 (if it doesn't make 2.3), but I am in the process of re-writing the kernel RPM and am very busy. It needs to have all the CONFIG options and such added in, and checked to work in 2.3.x.
    • The TurboClusterD (the only non-GPL part of TurboCluster) will be OpenSource'd in the future. Our current plan (this is *not* an official commitment) is to release it as the next version comes out. The next version will be much better, of course.

    I hope this addresses some people's concerns. Don't worry, I am **very** pro-GPL and am responsible for sanity checking these choices.

    Ciao!

    (aka Christian Holtje docwhat@turbolinux.com [mailto])

  • by David Greene (463) on Wednesday October 27, 1999 @09:54AM (#1583306)
    GCC is not a good example of a code fork problem. If anything, it proves the value of the ability to fork.

    GCC became forked because the FSF sat on changes that were being submitted. For years. EGCS was an attempt to get working C++ code out to the general public (Cygnus had been releasing it as part of GNUPro for some time). EGCS literally saved the project I was working on and I'm sure it did the same for others.

    Now that EGCS and GCC are back together as one, some of the other forks are being rolled in (Haifa, FORTRAN and Ada for sure, though I don't know what's happening with PGCC).

    The act of forking caused the FSF to get off their collective duff and do something. That's a Good Thing [tuxedo.org].

    --

  • I think it's a non-issue. Open Source versions of the same facilities are already at least in part there; whatever is missing should be filled in soon enough.

    Bruce

  • While Linux is considered a Unix clone, keep in mind two big difference.

    1) Linux has always been open. The Unix vendors, on the other hand, released commercial, proprietary, closed OS's.
    2) Linux has a clearly defined "lead" developer. Unix vendors were led by nameless businessmen.

    Regardless of whether TurboLinux's changes are the greatest thing since sliced bread, if Linus doesn't think they deserve inclusion in the next kernel release, they will go off on their own and do a sort of slow death-dance. Linus, along with his horde of developers, has gained the respect of developers and business folks, and they are accepted as the true stewards of the Linux system. There is no one else around who can claim equal credibility and usurp momentum from Linus and gang.

    The Unix vendors ran into trouble when they started to incorporate proprietary code into their versions and closed development. Linux will never encounter this problem. Anything based off the Linux kernel base can be re-incorporated into the kernel.

    Linux is in no trouble from code forking at all.
  • by Bruce Perens (3872) <bruce@perens.com> on Wednesday October 27, 1999 @10:04AM (#1583313) Homepage Journal
    That's an FSF-specific issue. Linus doesn't insist on the same copyright sign-over. That, by the way, effectively locks Linus (and everyone else) into the GPL version 2, which most people believe is a good thing. Now that there are so many contributors, it's just not possible to get everyone to agree to change the license. No doubt some of those copyright holders have died, etc.

    Thanks

    Bruce

  • To be specific, though it is possible (if you're a corporation) to not only fork anything GPLed but also have big teams of programmers working on it full tilt without disclosing their information, when the product is released they _do_ have to release the information.
    It's possible to maintain such a fork in 'no cooperation' mode indefinitely, but at a very crippling cost: to keep it under total control you'd have to be changing things radically enough that no outside influences would be relevant. Otherwise things would converge. Particularly with regard to the Linux kernel, even a _hostile_ attempt to fork it and take over control is a losing game, requiring a really large amount of effort for a very unimpressive return. Yes, if you're a corporation you can devote more resources to a private development than individuals can, but then you have to release source (and not obfuscated, either), and this makes it difficult to use this mechanism for more than hit-and-run marketing games.
  • by docwhat (3582) on Wednesday October 27, 1999 @10:16AM (#1583321) Homepage
    Aaahhhh! No! I refuse to fork the kernel! ;-)

    We are overworked as is. I will not, as TurboLinux's Kernel Maintainer (Kernel Colonel?), fork the kernel off. Having Alan Cox and the wonderful crew on linux-kernel maintain the core stable kernel makes my life *much* easier.

    The Cluster Module is just a module! It can be compiled in later after the kernel is done. It cannot (yet, as far as I can see) be compiled into the kernel as a non-module.

    Feel free to grab the cluster module and see for yourself (You'll need to hold shift):
    cluster-kernel-4.0.5-19991009.tgz [turbolinux.com]

    Ciao!

  • by JordanH (75307) on Wednesday October 27, 1999 @10:18AM (#1583324) Homepage Journal
    The plethora of mutually-incompatible patches to GCC that resulted from people supporting forks for:
    • Pentium optimization
    • Trying to support C++
    • FORTRAN
    • Pascal
    • Ada
    • Special forms of optimizations (IBM Haifa stuff, for instance)

    The net result of the forks was that you could have a compiler that covers one purpose, but not necessarily more than one.

    All of the things you mention above are good things to support. They all have their market and perhaps none of them would have been available had we waited for complete consensus among all GCC developers to bless every change.

    Code forks are just healthy competition. Remember that? Competition?

    You fail to mention that a lot of these things were eventually folded back into the latest GCC versions.

    The EGCS split was eventually folded back into the mainline, and the result is a better GCC, I think. People were allowed to go their own way, proving their approach good and when the fork was unforked, it benefitted everyone.

    I do support of some R/3 code where our local group has "forked" from the standard stuff SAP AG provides; it is a bear of a job to just handle the parallel changes when we do minor "Legal Change Patch" upgrades. We've got a sizable project team in support of a major version number upgrade; the stuff that we have forked will represent a big chunk of the grief that results in that year long project.

    Oh, so you're having problems with parallel changes. Hmm... This is bad. I know. Don't make any local changes! Use the SAP out-of-the-box. Whew! That was easy, problem resolved, the badness of a code fork vanquished once and for all.

    What's this I hear? You need those changes? Those changes are there for a good reason? Oh, well, I guess nothing worthwhile doesn't have a price, eh?

    Sure, it's a bear to synchronize parallel updates, but that's no justification to never fork.

    The ability to fork is an important aspect of the software's essential freedom [fsf.org]. If we never fork, we're possibly missing out on important development direction that would be missed.

    Besides, there already are a number of Linux code forks out there. People are still developing in 2.0, 2.1, 2.2 and now 2.3 and 2.4 kernels. Each of these represent a fork. When someone improves a 2.2 kernel in some significant way, someone will probably try and integrate those changes into 2.3 and 2.4 kernels.

    What people are really concerned about here is that Linus will no longer have control over the forks.

    My guess is that Linus would welcome the contributions. Remember that anything these TurboLinux people might do would be available to be merged into a Linus blessed kernel in the future.

    Hey, if these are real improvements, I'm just glad they're putting them into a GPL OS rather than doing them (again and again) to some proprietary commercial OS.

    The forks that have occurred in the *BSD world haven't seemed to hurt them. *BSD is gaining support all the time, we read. The various *BSD projects have learned a lot from one another. The only forks in *BSD that one might argue don't contribute to the Open Source world are the ones by BSDI and other commercial interests. Even these have probably helped popularize *BSD operating systems.

  • by Anonymous Coward
    This is basically commercialisation of the Linux Virtual Server Project [freshmeat.net]... it's a load balancer - much like Cisco's LocalDirector [cisco.com]...

    Now if you want real clustering, help with the Linux High-Availability Howto [unc.edu] or go look at HP/UX's MC/ServiceGuard [hp.com] - or if you are forced to play with toys, MS makes NT Enterprise [microsoft.com]...

    GEEK! [thegeek.org]
  • The real issue is how much the commercial world can pull on Linus's reins. These capabilities should be in Linux, but only if it makes sense. If Linus evaluates them and they agree with his overall vision for the Linux kernel, then by all means they should be included. If he incorporates them because he fears a code fork, he sends the message that he can be manipulated by some large entity. I look forward to seeing how this turns out.
    --
  • In this context ``VI'' stands for Virtual Interface. It is a way to get low-latency communication between processes within a cluster. It can accomplish this by having less protocol overhead than routed IP protocols, and by avoiding user-to-kernel context switches and user-to-kernel buffer copies. In the ideal case, the data goes from a user buffer to the NIC by DMA with no kernel participation. Data is also DMA'ed directly from NIC to a user buffer. Of course, this requires a little bit of help from special hardware or firmware on the NIC.

    You can find general info at http://www.viarch.org [viarch.org].

    Info on a Linux version which can work without special support from the NIC is available from http://www.nersc.gov/research/ftg/via [nersc.gov].

    --

  • I mailed Rob the CID of both of my postings that got banged down in this thread. Nice people appear to have come along to bump them up, anyway.

    Thanks

    Bruce

  • Please don't moderate total falsehoods like this up - this is flamebait. Alan Cox, the actual primary code architect of the Linux Kernel, is a Red Hat employee. While RH does often ship a 'tweener' kernel, or one that is in some state of AC's patches, there is nothing at all non-standard about it.

    So, since Alan Cox works for Redhat it's OK for Redhat to ship modified kernel source, but not OK for Pacific HI-TEC?

    This is Free Software, as long as the patches comply with the licensing terms of the Linux kernel the distributers of TurboLinux have every right to ship a modified GPL kernel source, just as they have every right to ship a distribution which contains proprietary closed source drivers bundled as binary modules.

    You can't call the GPL'd patches included with either Redhat or TurboLinux inappropriate, because they comply with the GPL. And you can't call the proprietary kernel modules inappropriate (even though Redhat doesn't ship proprietary kernel modules with its distribution), because Linus has made quite clear that he accepts the legality of proprietary binary kernel modules.

    So, how is this different from Redhat, or any other distribution vendor? And how am I baiting flames with my statements?
  • by docwhat (3582) on Wednesday October 27, 1999 @10:32AM (#1583337) Homepage
    Hello!

    I am the kernel maintainer for TurboLinux. Your email hasn't arrived in my mailbox yet. I suspect that you sent it to others in my organization. Most of us are at ISPCon, so it hasn't filtered down to me yet.

    We have no intent of packaging and maintaining a separate Linux kernel tree. It would be too much work for no benefit.

    Our kernel RPMs include the base standard kernel tarball and additional patches. You can get all the additional patches out of the .src.rpm file. You can build a complete kernel from the .src.rpm file.

    I have not put up a web-page or submitted it to Linus et al as I have not had time. Our primary concern is getting a quality product to our customers.

    You may get the TurboLinux Cluster Kernel Patch here (You'll need to hold shift to download):
    cluster-kernel-4.0.5-19991009.tgz [turbolinux.com]

    Does this answer all your questions?

    Ciao!

  • Lest we confuse people, the sources have to be released when the binaries are distributed, not released.

    I concur with the rest of your posting.

    Thanks

    Bruce

  • Heh, funny you should mention that... at NMSU, they hacked up a semi-realtime/interactive version of POVray for the Beowulf. Thing is, although you have a whole bunch of CPU bandwidth, your communication latency is rather high, and so it'd be horrible for anything like Quake.
    ---
    "'Is not a quine' is not a quine" is a quine.
  • by jms (11418) on Wednesday October 27, 1999 @10:39AM (#1583341)
    If they break binary compatibility with the Linux world, then they are going to be cutting themselves off from all of the applications that people want that are only available in binary form (Netscape, for instance).

    If they break source-level compatibility, then they're going to have an operating system with no applications, or they are going to have to start modifying applications themselves, and they will NEVER keep up with the rest of the world.

    Either way, eventually, customers are going to become frustrated when new versions of Linux applications become available, but they can't use them because their hacked up Linux kernel won't support them.

    Here's my "trailblazing" analogy.

    Think of the evolution of Linux as trailblazing a new road.

    In the front lines, there are people off, hacking through the brush, trying different paths. Some paths are better than others. Some people wander off on obscure paths and are never heard from again. Others find good, safe, productive paths and bring back maps and suggest that the main road run that way.

    In the second line, group leaders such as Torvalds and Cox look at the trailblazers' work and decide where to lay the main road.

    In the third line, millions of users follow along, driving on the nicely paved road.

    They don't HAVE to drive on the big, paved road --
    There's always trails that lead off the main road, but those roads have more potholes, and usually aren't maintained very well, and they're lonely roads, and if you went that way you might run out of gas and become stranded.

    But there's nothing to stop someone from building a new, parallel road, and making it enticing enough that it renders the old road obsolete, much as the interstate highway system destroyed the commercial viability of old roads like Route 66.

    But considering that much of the attraction of Linux is in the culture, and the freedom from proprietary code forking, I don't see this happening in the near future.
  • I'm going to re-state that, lest I confuse people in attempting to prevent their confusion. Darn.

    The sources have to be distributed or made available when the binaries are distributed, not released. See the GPL for the exact language.

    Pardon the garble.

  • There you'd be killed simply by the lack of bandwidth in the CD jukebox. Even if you had one CPU dedicated to MP3ing each track, it would still take lots of time to rip the CDs. For example, assuming a 24x jukebox used to rip (at full speed) 200 45-minute CDs, that'll still take around 600 minutes, assuming an average throughput of 15x and no scratches or anything to worry about (remember, 24x CD-ROM drives only read that quickly at the outer edge of the disc). Of course, a Beowulf would still be useful to make the encoding quicker (it'd still take about 9000 minutes to encode 9000 minutes of music on a P2-450), but you're still talking about a lot of attendance as well; 200 CDs become about 20 MP3 CD-Rs, and those still take what, 30 minutes to burn? Hey, still 600 minutes - looks like you're talking about at least 10 hours no matter what. Damn.
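    For what it's worth, the back-of-the-envelope numbers check out; a quick sanity check (figures taken from the estimate above):

    ```c
    #include <assert.h>

    /* Sanity-check the jukebox arithmetic from the post above. */
    int main(void)
    {
        int cds = 200, min_per_cd = 45;
        int rip_speed = 15;                         /* effective average read speed */
        int rip_min = cds * min_per_cd / rip_speed; /* time to rip everything       */
        int encode_min = cds * min_per_cd;          /* realtime encode on a P2-450  */
        int burn_min = (cds / 10) * 30;             /* ~20 MP3 CD-Rs, 30 min each   */

        assert(rip_min == 600);                     /* ten hours just reading discs */
        assert(encode_min == 9000);
        assert(burn_min == 600);
        return 0;
    }
    ```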
    ---
    "'Is not a quine' is not a quine" is a quine.
  • by NovaX (37364) on Wednesday October 27, 1999 @10:50AM (#1583347)
    The dilemma is not whether there's a fork, because there are numerous forks already; it is whether there will be forks that are quite popular but not integrated into Linus's Linux kernel. If TurboLinux makes inroads with their product (their fork), but it's not then blessed by Linus, and the trend continues, boom... real forks. I doubt that will happen. Either Linus would cave because it was good technology, or few people would buy it. That's the joy of a dictatorship: things move far faster and (in these issues) are solved quicker.

    It's just the idea, which seems to be the point of this entire Slashdot article, of whether Linux will fork not just into distributions, but kernels. That's already happened, but most users are content pretending only BSD has forked, and that any BSD supporter must cover every BSD (i.e., the FreeBSD driver site was given the incentive to go to BSD, and then Slashdot posters asked 'Will they support Darwin, and not just Net/Open/Free?'). Windows, DOS, UNIX, BSD, and Linux have all forked. It's just a question of whether people want to be ignorant and use forking as an excuse for why their 'competition' (why must every other OS be called 'the enemy'?) is worse.
  • Marc, project leader of PGCC, is also on the steering committee for EGCS (now GCC). As far as I know, all of the stable changes from PGCC (which are all of the big improvements and some of the smaller ones) have been merged. Nowadays, PGCC is more of an experimental compiler for ideas Marc comes up with - which is why I got slightly upset when I found out Mandrake was using it. It offers no consistent speed improvement (a speed reduction in many cases), and is completely unstable.

    So PGCC has been merged except for experiments being carried on by Marc.
  • IANAL, but there really seem to be two levels that this conversation is moving on. On the one hand is the GPL giving equal footing to all men (and women). On the other is the almighty Linus sprinkling "holy penguin pee" to make patches "official". When Linus dies (as most of us tend to do at one time or another), are there going to be gourds and sandals left and right to follow? What if AC and Linus are on the same plane that crashes?

    (did you notice I squeezed 2 Monty Python references into 1 post?)

  • You picked a really bad example. GCC was way behind in C++ compared to egcs. If there hadn't been a fork, we may never had gotten a decent C++ compiler. Of course, you may not like C++. But it's a widely used, important language that many open source projects depend on. In the end, of course, gcc and egcs merged back together. I can't see anything but good stuff resulting from the fork.
  • by JordanH (75307) on Wednesday October 27, 1999 @10:59AM (#1583354) Homepage Journal
    We in the community have nothing to fear but fragmentation itself. The 10,000 faces of UNIX is what originally killed it as a server operating system ...

    Excuse me? UNIX dead as a server operating system? I wonder what it is that Sun is making so much money from?

    This is unnecessarily alarmist. The problem with the 10,000 faces of UNIX was that these versions were all in competition and could not be merged. The good thing with differing versions of Linux out there could be that someone will take the best of all of them and put them together into the best system.

    Remember too that various directions may not be entirely compatible with each other. The best server system may be fundamentally different from the best desktop system, and may actually require different teams of people working on each to produce the best result.

    There's also the danger that the Linux kernel will grow unboundedly trying to support every possible environment. I doubt one Linux kernel can serve both the super Enterprise Server environment and the palmtop environment, yet people are going in both directions with Linux right now.

  • I thought the kernel had? Of course not in any way people really notice like the numerous BSDs, but I remember one poster reply once to a message of mine giving a few examples. Wish I could remember any... long time ago.

    The real difference with BSD is that Berkeley released it (and under the BSDL) for anyone willing to play, and fork. They were through playing with BSD. So BSDI, Sun, 386BSD, etc. picked it up and began coding. The free BSDs can still fork just like Linux can; it's just a question of whether there's an extreme enough reason to do it. Only OpenBSD actually forked from the free BSDs, and when I read Theo's archive, core seemed stubborn and unwilling to resolve the problems. If Alan Cox were suddenly booted from the kernel team, with significant pieces of code (and a direction) he wanted to add being shoved away over and over again by Linus and the rest... I think Mr. Cox would do something. What, I'm not sure.

    Considering DOS, Windows, the BSDs, etc. all forked... Linux will sometime too.
  • by Alan Cox (27532) on Wednesday October 27, 1999 @11:02AM (#1583356) Homepage
    This really isn't a problem. Think about it carefully. SGI wrote 4Gig mem patches. They worked, but were clunky. SGI ships them; SGI customers are happy. Siemens + SuSE write non-clunky 4Gig patches. Everyone will use those, and Linus endorsed them. SGI will use them too, I'm sure.

    It hasn't broken anything. In fact, one thing Linux gets right that other vendors don't is we say "no" to crap code. If you don't do that, your codebase turns to crap. Linux does it right; *BSD does it right.
  • I'm not actually sure what they add. I'd need to dig over their patches. Wensong Zhang, however, has had this stuff working in Linux for a long time, and indeed for 2.2.x-ac I've gone that path, and would do so for an official 2.2 except that it's a new feature, so it's not eligible for 2.2.

    I know Wensong's stuff works. I know people doing production work with it, so for 2.2.x that's probably the final and absolute path. For 2.3.x it depends on what Linus thinks is better.
  • Um, it would be wrong to use "=" in this case as well, as he isn't doing assignment. He is asserting equality: a valid use of the equals sign, which in C is spelt "==".

    And it's not a keyword anyway; it's an operator. Please stop misapplying terminology; it makes you look very stupid.
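    For the record, the difference in C (a tiny, self-contained illustration):

    ```c
    #include <assert.h>

    int main(void)
    {
        int a = 5;           /* '=' is the assignment operator             */
        assert(a == 5);      /* '==' is the equality (comparison) operator */

        /* The classic bug: this assigns 0 to a, and the condition is the
         * assigned value, so the branch is never taken.  (The extra parens
         * silence the compiler's helpful "did you mean ==?" warning.)     */
        if ((a = 0))
            assert(0 && "unreachable");
        assert(a == 0);      /* a really was overwritten by the '=' above  */
        return 0;
    }
    ```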

  • First the article has typos,
    Carson City,Nev.-based
    SCO.He said the consulting arm ofSCO is
    Obviously the guy was writing this article in a hurry. Probably an intern who thinks he knows about all of this computer stuff which is just so hot these days. Do the folks at Computerworld think that online journalism is allowed to get away with this sort of disrespectful writing?

    Second, forking is the whole idea behind copyleftism. You allow people to make whatever changes to the OS they want, as long as they make their changes public. That way we can see if TurboLinux has done something stupid. If it is good, and not just first because they wanted to be the first high-availability clustering kernel, Linus will put the changes in the kernel. Linux neither benefits nor gets taken away from. Nothing has changed, and anyone who wishes to use TurboClustering is perfectly welcome to buy their distro.

    Journalists should do their homework. This is not the crisis that the author's framing of George Weiss' comments would lead us to infer. This would have been a much better article for a computer magazine if it had explained the internals of the technology and let us, as computer literate/savvy/scientists (pick one), decide what to do with the facts and draw our own inferences as to the implications of this new technology. This is a great technology to have available to the community, and perhaps the reason that Sun released their source: their clustering technology is no longer a secret.

    Does anyone else feel like articles about Linux and computers in general do not talk about anything interesting, just business (except for ACM, IEEE, Usenix, etc. publications)? We should be smart enough to draw the implications for distributions ourselves. Paraphrasing experts only makes confusion!
  • by Alan Cox (27532) on Wednesday October 27, 1999 @11:19AM (#1583363) Homepage
    Simple answer
    2.2: new feature, not going in
    2.2ac: Using Wensong Zhangs code because it is
    rock solid and production hardened. It needs no
    proprietary tools. Several vendors already ship this code. I also know people building big web setups using it.
    [www.linuxvirtualserver.org]
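    For readers who haven't seen it, the Virtual Server code is driven from userspace with the ipvsadm tool; a minimal round-robin web cluster looks roughly like this (the addresses are invented and the syntax is from memory, so check the linuxvirtualserver.org docs before trusting it):

    ```shell
    # Hypothetical LVS setup: one virtual HTTP service balanced round-robin
    # across two real servers via masquerading. All addresses are made up.
    ipvsadm -A -t 10.0.0.1:80 -s rr              # add the virtual service
    ipvsadm -a -t 10.0.0.1:80 -r 10.0.1.2:80 -m  # first real server
    ipvsadm -a -t 10.0.0.1:80 -r 10.0.1.3:80 -m  # second real server
    ipvsadm -L -n                                # list the current table
    ```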

    2.3.x is up to Linus, actually possibly to Rusty
    as all of this code area has totally changed to
    use netfilter.

    Alan
  • I don't know how RedHat compares to SCO (as I've never used the latter), but don't forget that RedHat only develops about 10% of the code that is in the package whereas SCO develops its own kernel and everything.
    Based on your marketing insight, I think I'll package my own OS. SprocketOS. I develop the entire thing, from kernel to userspace apps. It's all my doing.

    I add more value than RedHat.

    The fact that I'm not very versed in coding anything, and that the entire "OS" is actually examples of "Hello World" renamed hundreds of times should be overlooked.

    Now all I need is a few acidic remarks about a Linux vendor and I'll have a business model...

  • I also remember all the confusion and all the time and energy and bandwidth wasted sorting out the confusion and incompatibilities. It was a Good Thing (or at least a Better Thing) when things got resolved, but if you really do remember the situation at the time it was going on, then I'm astounded by your nonchalance just because that incident is for the most part behind us.

    When the hype dies down and people start to look at Linux with a critical eye, things like your example would be a serious black eye for any hopes of large-scale Linux acceptance. And with the commercial vultures, er vendors, entering the fray, it's more likely to happen in the future. How do you think it would bode for Linux's acceptance in the non-hobbyist community if two or three or four such forks were going on at the same time?

    Cheers,
    ZicoKnows@hotmail.com

  • by bendawg (72695) on Wednesday October 27, 1999 @12:28PM (#1583387)
    Oh no! What horror! I'd hate for new, potentially better technology to be available for the open source community to choose!
    I suppose that TurboLinux should just throw away their code so nobody's feelings get hurt.
    • So, since Alan Cox works for Redhat it's OK for Redhat to ship modified kernel source, but not OK for Pacific HI-TEC?

    No, since Alan Cox is one of the three core contributors to the linux kernel, since he regularly supplies updates, and since he is the person who puts together the kernel that Red Hat ships, it is ok for them to ship whatever the hell they want to - it IS the linux kernel. That would make a great piece of Red Hat Trivia - name all of AC's changes to the kernel shipped by Red Hat that Linus later nixed. I'm sure there are at least 1 or 2.

    You insinuated that they were shipping extensions, modifications, or additions to the kernel that are not part of the 'stock' linux kernel, and that is false. Their CONFIGURATION of said kernel is quite different from what Linus or Alan choose to post, ie, the default configuration, but I know you're much too smart to be confusing configuration with code - at least, I've had enough respect for your posts in the past to hope so.

    I'm insinuating nothing of the sort, I'm stating it outright. All you have to do is run a make config on the RH6.1 2.2.12-20 kernel supplied with the distribution and on a stock 2.2.12 that has been blessed by Linus, then diff the resulting .configs. The kernel supplied with the Redhat distribution carries many additional patches that don't ship with the Linus-blessed kernel. Last I heard, Alan Cox is not Linus Torvalds, and his ac-branch kernel series is not distributed as a "proper" Linux kernel. This is not a value judgment against Redhat or Alan Cox; it is simply the truth. When you claim that Alan Cox, among other well-known kernel developers, has more or better special rights over the kernel source than everyone else, you are undermining the very meaning of the GPL! This is nothing to dilly-dally about... Free Software has a special meaning; if what you say were legally true, the GPL would be meaningless as a Free Software document.
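    If you want to see the mechanics of that comparison without building two full kernel trees, here's a toy run; the file contents are invented stand-ins for real .config files, and the option name is just an illustration borrowed from the ipvs patch:

    ```shell
    # Toy stand-ins for a stock .config and a vendor .config.
    cat > /tmp/stock.config <<'EOF'
    CONFIG_EXPERIMENTAL=y
    # CONFIG_IP_MASQUERADE_VS is not set
    EOF
    cat > /tmp/vendor.config <<'EOF'
    CONFIG_EXPERIMENTAL=y
    CONFIG_IP_MASQUERADE_VS=y
    EOF
    # diff exits nonzero when the files differ, hence the trailing "|| true".
    diff -u /tmp/stock.config /tmp/vendor.config || true
    ```

    The lines marked "+" in the output are exactly the kind of vendor-enabled extras being argued about here.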

    • You can't call the GPL'd patches included with either Redhat or TurboLinux inappropriate, because they comply with the GPL. And you can't call the proprietary kernel modules inappropriate (even though Redhat doesn't ship proprietary kernel modules with its distribution), because Linus has made quite clear that he accepts the legality of proprietary binary kernel modules.

    _I_ am not calling anything anything, other than calling you on crack - show me these 'patches' that Red Hat ships. The TL patches are really that, patches that apply against a base stable or devel release of the kernel. This is an extension of the existing kernel. Red Hat supplies, to my knowledge, no such patches. They supply a kernel, a stock linux kernel, usually a branch of the stable release. There are no PVM extensions, there are no scalability extensions. I think you might be confusing the fact that they, by default, enable almost every single driver available to be built as a module, with them including extra code. They supply those modules because they are needed at install time to interface with the customer's hardware.

    Now who's baiting flames? Like I said, as long as it meets the guidelines of GPL licensing, it's perfectly legal! Free Software isn't about whether you like it that I can include my own GPL'd code in your distribution; it's about the FREEDOM to modify your code and mine as I see fit! Pacific Hi-Tec isn't even skirting the law here. Unlike Corel with their previous beta Corel Linux program, they are releasing a set of GPL'd patches and some proprietary kernel modules, all actions Linus has made perfectly clear in the past that he supports.

    See above for how it's different, and you're baiting flames by making completely false claims. A lie, to me, is always flame bait.

    I didn't lie in the first post, and I still don't see a single person who has pointed out even a factual error! I'm perfectly happy to be corrected with factual mistakes, but to call me a liar simply because I wrote a seemingly unpopular truth really stretches your point. And I note that since moderators have chosen to moderate this down to the cruft, nobody cares anyway. Still, Damn rude on your part.

    Cheers! :-)
  • TurboLinux is making a lot of noise regarding the work they've done, but aren't they just taking an existing (very impressive) kernel patch, the Virtual Server work, and claiming it as their own?

    Elsewhere in a reply to this article, here's what one of the TurboLinux people had to say:

    "The TurboCluster was based upon the Virtual Server in the beginning. Since then we have hired a company to re-write it from scratch. There is nothing left of VS in the Cluster code, except some concepts (but none of their code). Did I mention it is GPL'ed in the source."

    So, in a word, no.
  • _I_ am not calling anything anything, other than calling you on crack - show me these 'patches' that Red Hat ships.

    OK, I'd like to thank users "tap" and "mmclure" for pointing out the obvious: that installing the kernel-2.2.12-20.src.rpm will generate our list of patches for you:

    [root@marquez /tmp]# rpm -ivh kernel-2.2.12-20.src.rpm
    kernel ##################################################
    [root@marquez /tmp]# cd /usr/src/redhat/SOURCES/
    [root@marquez SOURCES]# ls -al
    total 17158
    drwxr-xr-x 2 root root 3072 Oct 27 17:46 .
    drwxr-xr-x 9 root root 1024 Sep 25 20:49 ..
    -rw-r--r-- 1 root root 642 Apr 15 1999 README.kernel-sources
    -rw-rw-r-- 1 root root 19474 Sep 21 19:04 aic7xxx-5.1.20.patch
    -rw-r--r-- 1 root root 229351 Nov 5 1998 ibcs-2.1-981105.tar.gz
    -rw-r--r-- 1 root root 2291 Jan 27 1999 ibcs-2.1-rh.patch
    -rw-rw-r-- 1 root root 728 Mar 25 1997 installkernel
    -rw-rw-r-- 1 root root 109385 Sep 8 09:11 ipvs-0.8.3-2.2.12.patch
    -rwxr-xr-x 1 root root 775 Feb 25 1999 kernel-2.2-BuildASM.sh
    -rw-r--r-- 1 root root 11238 Sep 23 15:14 kernel-2.2.12-alpha-BOOT.config
    -rw-r--r-- 1 root root 11205 Sep 23

    [snip for brevity]

    [root@marquez SOURCES]# ls -l | wc -l
    65
    [root@marquez SOURCES]#

    Am I still a liar? Do these patches live in never-never land? Does this whole thread really deserve to be moderated down by several points to a 1 simply because some moderators didn't agree with its position? Isn't the point of moderation to promote factually correct and valuable discourse?

    A public apology for calling me a liar would be nice, Blue.
  • Yes. Signing the copyright over to FSF means that FSF can be the complainant in a lawsuit regarding the code, and that they are the complainant for a portion of the code so large (the whole thing) that it cannot be "written out" of the product. In the case of the Linux kernel, any one of the hundreds of copyright holders could be a complainant, and working together several of them would make an _effective_ complainant. If I complained based on the line or two I've added to the kernel, that wouldn't be too effective.

    Thanks

    Bruce

  • So, in a word, no.

    Clue deposit accepted. Thank you, drive through.

    Yours Truly,

    Dan Kaminsky
    DoxPara Research
    http://www.doxpara.com
  • At the time development of the Linux port of LIBC was going on, the main thread maintainers were reluctant to merge in the Linux changes. One of the reasons might have been the lack of maturity of the Linux code but I do not know the entire story. But the GPL worked in this example. The main thread maintainers were circumvented when they would not help with an issue someone else felt was important. When that fork got the lion's share of distribution, much more than the original fork, the main thread maintainers saw the need to incorporate the Linux changes and did so. This was not done as smoothly as you might like, but I bet it was done more quickly than it would have been if there was some sort of dictatorial control on the C library with license restrictions to back it up. In that case, the Linux C library might simply never have been done, because the main thread maintainers didn't care about Linux when it started.

    I maintain that this was a demonstration of the GPL working the way it should. Nobody was allowed to stand in the way of the Linux development because of the terms of the GPL, and the final result did get merged back in.

    Thanks

    Bruce

  • I'm curious. Is everything needed to make a functional cluster GPL'd? If so - great. If not, then doesn't this violate GPL (i.e. the proprietary parts form an integrated software system with the patched kernel, despite not being statically linked)?
  • Please don't moderate total falsehoods like this up - this is flamebait. Alan Cox, the actual primary code architect of the Linux Kernel, is a Red Hat employee. While RH does often ship a 'tweener' kernel, or one that is in some state of AC's patches, there is nothing at all non-standard about it.

    I sense you've missed the point. The Linux 2.2.5-15 kernel that came with RedHat 6.0 is not identical to the stock Linux 2.2.5 kernel. Configuration issues aside, the 2.2.5-15 that shipped with RedHat 6.0 included a handful of other patches as well. This is what makes it a nonstandard kernel. Sure, the patches may be publicly available, and sure, they're probably included in an "-ac#" patch, but that doesn't make them part of the mainstream kernel series.

    If Pacific Hi-Tech places their clustering patches online for all to download and use, what's the difference? Since it sounds like they're going to try to get Linus to accept them, they've got to be made public anyway. What's the difference if distribution vendor X ships a kernel with H.J. Lu's latest knfsd patches or Pacific Hi-Tech's latest clustering patches? Both result in kernels that differ in more than configuration selection from the mainstream kernel.

    Just because Alan Cox works for RedHat doesn't mean that RedHat's patches are part of the mainstream kernel. (Same would be true if Transmeta got into the Linux Distrib business and shipped their own tweaked kernel -- despite the fact that Linus works there.) Alan knows and acknowledges that the "-ac" kernels are a sort of feature enhancement mini-fork. (His diary entry for October 21 [linux.org.uk] refers to 2.2.13-ac1 as a feature enhancement addon kit.) To give another concrete example: While the "large FDs" patch was not part of the mainstream kernel, Alan offered it as a separate patch and stated publicly that it's one that many vendors may apply to the kernels they ship, even though it wasn't part of the mainstream kernel. Those patched vendor kernels are non-standard kernels once patched.

    There's nothing wrong with shipping a modified kernel, particularly if the modifications are public and can be applied to any kernel. But, such a kernel can hardly be considered standard.
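    To make the "public patch against a stock tree" point concrete: a vendor patch is just a unified diff that anyone can apply. A toy run with invented file names:

    ```shell
    # Invented example: "stock" and "vendor" versions of one source file.
    mkdir -p /tmp/stock /tmp/vendor
    echo 'stock scheduler'        > /tmp/stock/sched.c
    echo 'vendor-tuned scheduler' > /tmp/vendor/sched.c
    # The vendor publishes this diff...
    diff -u /tmp/stock/sched.c /tmp/vendor/sched.c > /tmp/vendor.patch || true
    # ...and anyone can apply it to their own stock tree.
    patch /tmp/stock/sched.c < /tmp/vendor.patch
    ```

    After the patch is applied, the stock file matches the vendor's version; the result is a perfectly legal, but non-standard, tree.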

    --Joe
    --
  • by NovaX (37364)
    must have been a bit too tired when writing that one above... Past the little grammar things... I didn't mean to emphasize BSD in the list (that was the point, it's not the only OS to split). heh...

    BTW, since BSD and SysV were the two styles of UNIX, would you not say that if BSD split, so did System V? The code for both is still available from the archives. (Who holds SysV now? Last I remember, Novell let the UNIX trademark go, though I'm not sure what happened with the SysV code.) All UNIX OSes are BSD or SysV, and UNIX-like ones are BSD-ish or just... -like. It would seem pointless to make a big deal about BSD splitting if System V did too, since they were the two design styles of UNIX: not full-fledged OSes, just the building blocks.
  • The definition of 'distribution' (as Bruce found for us) being 'transfer from one legal entity to another'.
    And that's why a corporation can fork and have its programmers developing GPLed source under NDAs- but at the same time it means that as soon as the binaries get out to ANYBODY not legally part of the corporation, the source must follow.
    I think this suggests that open betas grant full rights to recipients under the GPL, and that it is possible that closed betas may not- the exact point of concern is whether a beta tester is legally part of the corporation, or not. They would have to be part of the corporation, legally, in order to be subject to any sort of NDA over GPLed stuff. This also makes internal testing totally controllable, always insisting that the recipients be part of the corporation and under NDAs. As soon as the binaries or source get into the hands of someone who isn't part of the corporation, the source must be forthcoming and the recipient has full rights under the GPL. Not a bad compromise really :) it'll be interesting to see who tries to grab momentary advantage by building up a head of steam behind secret development.
  • The problem is that new, really scalable (by which I mean on the order of thousands of processors, not eight) hardware is going to require more and more of this sort of thing. My friend's patches (which are really patches from SGI's development team in general) are coming with an eye on SGI's own ccNUMA architecture. You might consider it a risk to do so, but for SGI to make money from Linux, Linux has to run on its custom (and might I add, damn cool) machines.

    It's not difficult to foresee us getting to the point where apps work under one kernel rendition and not the other; SGI is probably just the tip of the iceberg. Wait for IBM or Sun (it could happen) or any other "big-ass server" maker to start eyeballing Linux for their own machines. It could go nuts: picture having ten variations of the Linux kernel, all running their own sets of applications. That's what forking is, and its very possibility should scare you. After all, is Linux still Linux if one version runs Lightwave and another can't, or is it just suddenly another fragmented UN*X?

    ----

  • Nope. The GPL does NOT state that source must be provided to anyone. It states that it must be available to anyone, which is QUITE different.

    Who is more likely to request the source? Developers or general users? Let's face it: I can't see many Windows 98 users migrating to Linux caring too much about some TurboLinux kernel patch source code. A developer, on the other hand, would probably eagerly snatch the patch from the site within the first 5 microseconds of it being announced on Freshmeat.

  • Was EGCS necessary? Surely. GCC development had gotten stuck, and it was necessary that something happen to resolve the blockage.

    The fork may have been necessary, and the eventual reintegration (or "reverse fork") that came from EGCS was also necessary.

    But the initial fork displays that there were problems with GCC development that could not be reconciled at the time. And that was not a good thing.

  • Hey, man, I was just refuting the setup proposed by someone else. :) You'd still want one system per CD in that case, though, and it'd still take over an hour per CD (takes 45 minutes to encode the average CD in pristine conditions on a P2-450, and then 20-30 minutes to do the burn).
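    A quick sanity check on those numbers, taking 25 minutes as the midpoint of the burn estimate:

    ```shell
    # 45 minutes to encode plus roughly 25 to burn: over an hour per CD.
    echo $((45 + 25))   # → 70
    ```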
    ---
    "'Is not a quine' is not a quine" is a quine.
