TurboLinux Releases "Potentially Dangerous" Clustering Software? 233
relaye writes "The performance clustering software and services announced today by Linux vendor TurboLinux Inc. and a cabal of partners including Unix vendor SCO Inc. takes the Linux market in an unusual and somewhat risky direction, analysts are saying." The article cites the risk of forking the kernel - not an incredibly probable risk, but a thought-provoking scenario. The danger comes if Linus decides not to incorporate TurboLinux's changes into the kernel.
Well... (Score:1)
A Problem, Really? (Score:2)
Does anyone really see this as a threat? Why wouldn't Linus add this? I guess the media is just trying a dose of FUD on us.
And this is different from Redhat how???? (Score:2)
Now they must follow GPL licensing restrictions, but this doesn't legally prevent them from selling a tailored distribution which contains a mix of GPL patches and proprietary closed source driver modules... and it's not any more forked than the heavily patched kernel source that ships with Redhat Linux.
Fork? Big deal (Score:5)
We should all be pleased that Linux is so flexible technically and legally that anyone who has a problem can either use Linux to solve the problem, or change Linux to solve the problem.
Using a feature of the operating system like the open source licence is no different than using any other feature of the operating system, like support for a TV Tuner card. The users will use any features of the operating system in the way that they want to, and nobody can tell them they can't.
Turbo Linux isn't forking the code, they are using one of the most powerful features of the code.
And that's my view.
Maintaining patches (Score:2)
- Michael T. Babcock <homepage [linuxsupportline.com]>
Linus forking the kernel? (Score:2)
Re:A Problem, Really? (Score:1)
That being said, I wonder how useful the changes to the Linux kernel will be if the other tools to manage/configure/use the clustering technology are not available to the masses. An Analogy: A CD without a drive is just a shiny coaster.
printf("%d\n", fork()) prints -1 (Score:3)
How is this dangerous? (Score:1)
Could be good *or* bad (Score:4)
Unfortunately, one of the parties that can win is the Microsoft PR department, which has been shouting FUD about the fragmentation of Linux for quite some time. So hopefully a kernel fork won't be necessary: even if a fork doesn't cause the problems of fragmentation, MS will still love the opportunity to call it fragmentation, whether it's a bad thing or not.
Personally, I'm all for kernel forking. It's not like 8086 Linux or RTLinux are currently part of the main kernel distribution, nor should they be. They fill special needs, rather than being something good for everyone. A clustering-optimized kernel would be similar, IMO. Clustered systems tend to be homogeneous and not have any exotic hardware to support (with the exception of gigabit network cards, which are generally supported just fine by the main kernel as it is). It's a special-need kernel, not something for general consumption. For all that every article on /. draws a comment saying "Man, I'd like a Beowulf of these babies," most of the people saying that will never have a Beowulf or a need for a clustered system. (I mean, come ON, what would you, personally, use all that computing power for?)
---
"'Is not a quine' is not a quine" is a quine.
Re:A Problem, Really? (Score:2)
But would that be a problem? I don't think so - it would just mean that Turbo customers wanting those modifications wouldn't be able to use the latest stock kernel. That's their choice - it doesn't cause anybody else a problem unless large numbers of closed-source application developers start producing apps that ONLY run on the modified kernel.
Seems to me Redhat already does this with their nonstandard module-info thing
RedHat/SCO (Score:4)
Hmm... I've used SCO before...
I think that for most people SCO is inferior to Red Hat. Look at how much extra stuff Red Hat puts into their product, and how well it works with other stuff... Red Hat also does an amazing job of detecting hardware nowadays.
Not to say that SCO doesn't have lots of interesting things in it... there are some very nifty security model aspects that SCO has, for instance. But for people who want a web server or an smtp/pop server or a workstation, for cheap, with lots of power, I think that Red Hat provides a better solution. And I think that many customers are realizing that.
Not to mention a cooler name. :-)
Excessive Credit? (Score:4)
There's an aspect of dirty PR pool going on here.
Gotta love, incidentally, more Linux bashing by SCO. Their hatred is so tangible. Then again, at least they're honest.
Overall, I hope Linus doesn't feel pressured to incorporate a technically inferior solution because somebody is attempting an ad hoc kernel power grab. We don't want people saying to Linus, "You're going to put this into the kernel because we've made it the standard." Embrace and Extend indeed.
That being said, I've heard very good things about the patch TurboLinux has appropriated without due credit. I've also heard some insanely interesting things about MOSIX, the virtual server project started in Israel and made GPL around six or eight months ago. Mosix is immensely interesting mainly because of its ability for seamless and invisible process migration--all processes, not just those written via PVM/MPI, get automagically clustered.
Very, very cool.
Comments from people more knowledgable than I about the details glossed over in this would be most appreciated.
Yours Truly,
Dan Kaminsky
DoxPara Research
http://www.doxpara.com
Short and sweet.. (Score:1)
1) Will media types and pseudo media types ever understand the differences between ``public domain'' and ``copylefted software''? This is, of course, unless I'm just kidding myself into thinking that the GNU General Public License is, well, a license.
2) Doug Michels.. is a gimp? If you think that is some sort of unfounded flame, you should really consider reading that interview. He's a complete and utter tool. Few people are that lost.. hee hee hee.. He must be living on a totally different fscking planet. =P
TurboLinux extensions (Score:3)
This article clearly states that Turbo Linux plans to keep some chunk of their clustering technology proprietary (presumably all parts of it that operate in userland). If they don't plan on making their HA clustering work for the rest of us in any way, why should the kernel maintainers add support for their HA clustering, unless it somehow is part of an open standard?
I have no big moral problem with Turbo Linux choosing to fork the kernel. It'll be their problem if they introduce compatibility issues. People simply won't use Turbo Linux. The right to fork is an integral part of the GPL. Let the market (i.e. user choice) decide. If the features are useful, people will want them, and they will make their way into the mainline kernel. If they aren't useful to us, they won't, and TurboLinux will just have to patch every kernel release (frankly I don't care if they do, as long as they are abiding by the terms of the GPL).
What are they changing? (Score:1)
(another flavor of kernel support for cluster-wide resources/pids/etc. or something?)
The article basically said nothing about what their system does to make it better/different than any existing clustering setup.
dv
Fork, schmork... (Score:2)
At best, the code is good and Linus incorporates it as a kernel option.
At worst, it's a patch for a very specialized function, examples of which already exist:
Embedded Linux
uLinux
RT Linux
e2compr (compression for the ext2fs filesystem)
I don't see something as specialized as server clustering forcing an actual 'fork' of the Linux kernel, except as a vertical application (Like Embedded Linux).
jf
Re:Well... (Score:1)
Yes, you're right. Linus has the final say in Linux anything. So they should have checked with him first. Of course, this points out even more than before how precarious the whole Linux project is. How close to forking it is, and how much more likely the fork will become with time (and with more commercial use of Linux).
Why incorporate changes??? (Score:1)
Re:This can't be good (Score:2)
The Spirit of Linux (Score:2)
I'm surprised this is even an issue. Linux isn't NetBSD, with tight oversight and cathedral-like concentration on purity. This is Linux -- people are supposed to be able to contribute freely.
This isn't to say that all submitted diffs should be merged immediately, but why give up one of Linux's great strengths -- the ease of contribution?
--
Re:And this is different from Redhat how???? (Score:1)
Simply because they pre-configure the source files to build everything as modules?
The RedHat kernel is NOT heavily patched, nor has it ever been..
good direction (Score:1)
What changes need to be made? (Score:2)
The other question is -- could they, or would they, fork the kernel if TurboLinux doesn't get their way? The alternative is to either make do without their enhancements or port their patches to each kernel version. The second option is not too far from what other vendors do in backporting security updates to the old, shipping version of their kernel (COL w/2.2.10 has patches from 2.2.12/13 in a 2.2.10 update RPM). There are also other distros that add beta or unsupported patches, like devfs (correct me if I am wrong on this point; I don't have this personally).
What does the GPL allow? They don't own Linux, no one does, what would they be able to accomplish (barring Linus from accepting their patches)without the support of the core developers.
I guess that I have more questions than answers. GPL'd software has never been as popular as it is now, and some of these issues are being tested on a large scale for the first time. Or maybe not - the GPL has been around for many years. Maybe this kind of thing has happened before and we can just look back and learn from experience. If anyone can point out an instance I would appreciate it greatly.
Enough rambling for one post.
I'm not sure I see a problem here (Score:3)
However, if they don't want all that responsibility, they can release kernel patches to be applied to the standard kernel to make it work with their system. Good too. Those may be eventually integrated into the standard kernel distribution, if they're worthy.
Either way, who cares? The ONLY entity this could hurt is TurboLinux itself, through the risk of becoming incompatible with the standard kernel. And that's not likely anyway..
This article is FUD.
---
This is GPL'ed - Linus and Alan Cox are not God (Score:2)
The kernel is already forking, with the Red Hat patches and now Turbo Linux. We are living in a dream if we think that Linus is going to control all those vendors from doing their thing.
And now, to keep the Moderators happy: "Linux is cool, /. is cool, I hate Gill Bates".
There is no danger in forking GPL software (Score:2)
We're also not talking about a "fork" so much as a patch to the main kernel tree. There's little chance that this patch would be allowed to diverge from the main kernel tree, as it's easier for TurboLinux to maintain it as an add-on - otherwise, they have to maintain an entire kernel rather than just a patch.
A lot of the talk about the danger of forking the Linux kernel is FUD or ignorance of the licensing issues.
Thanks
Bruce
Re:And this is different from Redhat how???? (Score:2)
Tim Gaastra
Re:Could be good *or* bad (Score:2)
As long as (a) the changes are made public (and the GPL so far has ensured that), and (b) the 'cluster' kernel follows (closely
Just my $.02
Re:Well... (Score:2)
Remember libc5 vs. GNU libc? (Score:5)
Folks, we've been here before. The forks converged. There's no reason that future forks of GPL software will not converge.
Thanks
Bruce
Startling News (Score:3)
that must be some new functionality I wasn't familiar with. Thanks, Computerworld!
Oh, and Take _That_, emacs!!
:)
What danger? Geez. (Score:4)
So what if TurboLinux forks the kernel? They will either die out or have to keep a parallel development stream whereby they keep taking mainstream kernels and patch their changes onto them. No big deal. There are nice tools for this, like CVS update -j or GNU patch. Eventually, their stuff will mature and may be accepted into the mainstream.
Forking happened before (anyone remember SLS?).
I think that for any significant feature to be added by an independent software team, forking *has* to take place. In fact, Linux is continuously sprouting many short-lived forks. Any time a hacker unpacks a kernel and does anything to it, wham, you have a tiny fork. Then when it becomes part of the stream, the fork goes away. To create a significant feature, you may have to branch a much longer-lived fork. And to let a community of users test that feature, you *have* to release from that branch. Now crap: you are ostracized by the idiot industry journalists who will accuse you of fragmenting the OS.
Linus *can't* integrate Turbo's changes until those changes are thoroughly hammered on by Turbo users, so a fork is required. The only kinds of changes that Linus can accept casually are ones that do not impact the core codebase. For example, if someone develops a driver for a hitherto unsupported device, great. The driver can only screw up kernels that are built with it, or into which it is loaded. Just mark the driver as very experimental and that's it.
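The parallel-stream workflow mentioned above (keep taking mainstream kernels and patch your changes onto them) can be sketched with plain GNU diff and patch. This is a toy illustration only - the file names and contents are invented, not anything from an actual kernel tree:

```shell
# Keep vendor changes as a single patch against a pristine mainline tree,
# then carry them forward when a new mainline release appears.
mkdir -p mainline vendor mainline-new

printf 'int nr_cpus = 1;\n' > mainline/cluster.c
printf 'int nr_cpus = 1;\nint nr_nodes = 8;\n' > vendor/cluster.c

# Capture the vendor's delta as one patch (diff exits 1 when trees differ).
diff -ruN mainline vendor > cluster.patch || true

# A new mainline release arrives with an unrelated addition...
printf 'int nr_cpus = 1;\n' > mainline-new/cluster.c
printf 'void migrate(void);\n' > mainline-new/sched.c

# ...and the vendor re-applies its patch on top of it.
patch -p1 -d mainline-new < cluster.patch
grep -q 'nr_nodes' mainline-new/cluster.c && echo 'patch carried forward'
```

As long as upstream doesn't touch the same lines, the patch applies cleanly; when it does, you get a reject file and a (usually small) manual merge - which is exactly the maintenance cost being discussed.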
Forking *is* bad; see GCC and other projects (Score:4)
The net result of the forks was that you could have a compiler that covered one purpose, but not necessarily more than one.
I do support of some R/3 [hex.net] code where our local group has "forked" from the standard stuff SAP AG [sapag.com] provides; it is a bear of a job to just handle the parallel changes when we do minor "Legal Change Patch" upgrades. We've got a sizable project team in support of a major version number upgrade; the stuff that we have forked will represent a big chunk of the grief that results in that year long project.
I would consider a substantial fork of the Linux kernel to be a significantly Bad Thing. [tuxedo.org]
Note that if it forks, the Turbo version may have a hard time supporting code written for the non-Turbo version. Major things that are likely forthcoming include:
fork() (Score:3)
I have a friend who works at SGI, and we were just talking the other day about how their development people have been frustrated lately about their inability to get certain scalability-oriented bits included in the kernel. So, essentially, SGI's Linux is headed for this same sort of fragmentation for the same sort of reason.
I told 'em that if he killed Linux I'd slash his tires, but I don't think he took me seriously.
We in the community have nothing to fear but fragmentation itself. The 10,000 faces of UNIX is what originally killed it as a server operating system -- that's why I refer to Linux as being the Second Coming of UNIX so often. The really key thing is that it runs on a common platform (Intel) and it's not the mess that the commercial UNIXes evolved into during the last decade.
I don't know how to stop this from happening, only that it must be stopped.
----
Re:Could be good *or* bad (Score:2)
PovRayQuake of course!
That is for the people who aren't simulating nuclear explosions of their neighbor's dog.
Re:Remember libc5 vs. GNU libc? (Score:2)
xemacs split is staying split?
I actually think that this subject is really interesting... it would be really good to have someone do some serious historical research into code forks.
In particular, I suspect that BSD-licensed software is more susceptible to code forks than GPL software, because of the temptation to do proprietary closed source forks. It'd take more knowledge than I have to pin down whether this is really the way it works.
Re:And this is different from Redhat how???? (Score:4)
clustering is any different from the non-standard kernel that ships with Redhat.
Now they must follow GPL licensing restrictions, but this doesn't legally prevent them from selling a tailored distribution
which contains a mix of GPL patches and proprietary closed source driver modules... and it's not any more forked than the
heavily patched kernel source that ships with Redhat Linux.
Please don't moderate total falsehoods like this up - this is flamebait. Alan Cox, the actual primary code architect of the Linux Kernel, is a Red Hat employee. While RH does often ship a 'tweener' kernel, or one that is in some state of AC's patches, there is nothing at all non-standard about it. They simply ship the newest build that they have on hand at the time of pressing. They occasionally even update the kernel image during single revisions.
And, if I'm wrong, please reply with a list of drivers or patches that RH has included since, say, 4.0 or so, that weren't available as kernel.org + current AC patch.
Secondly, IMHO, SCO's CEO needs a lot more fiber in his diet. You could randomly take away every other file in Red Hat's distro, ship it, and it would STILL have 'more value' than SCO.
Re:RedHat/SCO (Score:3)
Certainly in price/performance, there can be little dispute that Red Hat beats SCO for commercial use in all but the most extreme circumstances. SCO's products are very expensive if you purchase all of their debundled pieces that it takes to match what you get in a Red Hat box for under $100. Let alone user based license fees. And even if you purchase all of SCO's commercial offerings, you still end up having to add a significant amount of open source to really make it comparable to Red Hat's offering, and that is all extra work.
Michels' point about Red Hat not adding extra value is misleading. It doesn't matter whether Red Hat themselves add value (as opposed to other Linux vendors such as SuSE or Caldera), but what the overall value of the package is. There is no doubt in my mind that the overall package from Red Hat for most people has a much higher value than what you get from SCO, and at a small fraction of the price.
Re:And this is different from Redhat how???? (Score:2)
The only other problem I've had is that Redhat initscripts require build-specific System.map and module-info files. The stock release doesn't create those, so you have to bodge around it. Maybe this is documented properly somewhere now - if so, I haven't found it yet. Again, a pain only to Redhat users.
My point exactly... just compare a
s/.depend/.config (Score:2)
Re:Could be good *or* bad (Score:2)
For all that every article on /. draws a comment saying "Man, I'd like a Beowulf of these babies," most of the people saying that will never have a Beowulf or a need for a clustered system. (I mean, come ON, what would you, personally, use all that computing power for?)
Oh, I don't know... say, a Beowulf and a CD-ROM jukebox that could take in 200 CDs and spit out CDs filled with MP3s of the CDs in under an hour.
--
And this prevents using "standard" kernels how? (Score:2)
I've not done a fresh install of RHL since 5.1, so "perhaps they've gotten tremendously more proprietary since," but I rather doubt that.
The concern with TurboLinux customizations is if this makes TurboLinux kernels not interoperable with other kernels.
This will only matter if people adopt TurboLinux in droves; if they do their thing, producing a bad, scary forked kernel, and nobody uses it, this won't matter. It's not like the "tree in the forest;" if nobody is there using TurboLinux, nobody cares about a disused fork.
who frickin' cares? (Score:2)
Re:TurboLinux extensions (Score:2)
The point is, since I haven't seen the source nor heard from a more technically sophisticated source than this article, I don't know how much stuff they are using in kernelspace. However, I have the utmost faith in the kernel maintainers (Linus, Alan, etc.) and the desires of the Linux user base as a whole to direct patch incorporation into the kernel in the most appropriate way. What I said still holds: if their patch adds value for us (or can be made to add value with a reasonable amount of effort), then by all means it should and will be put into the main kernel fork.
Re:fork() (Score:3)
Just get Linus &co. to add all the 'inferior' patches to the kernel and put them in as non-standard build options...
Build with SGI scalability extension (non-standard) [y/N]?
Build with TurboLinux clustering extensions (non-standard) [y/N]?
Maybe give them their own 'non-standard extensions' section with warnings that enabling these extensions may break things, these extensions are not as thoroughly tested as the 'main' portion of the kernel, etc, etc.
It's not like there aren't unstable/experimental build options already.
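A sketch of what that could look like in 2.2-era Config.in (CML1) syntax, the same mechanism that already gates experimental options behind CONFIG_EXPERIMENTAL. The option names here are invented for illustration, not real kernel symbols:

```
mainmenu_option next_comment
comment 'Non-standard vendor extensions (EXPERIMENTAL)'
if [ "$CONFIG_EXPERIMENTAL" = "y" ]; then
   bool 'SGI scalability extensions (non-standard)'        CONFIG_SGI_SCALE
   bool 'TurboLinux clustering extensions (non-standard)'  CONFIG_TURBO_CLUSTER
fi
endmenu
```

Anything gated this way would only be offered when the user has already said yes to experimental code, which is precisely the warning label being suggested.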
Re:And this prevents using "standard" kernels how? (Score:2)
And how are you prevented from compiling and booting a standard "blessed" Linux kernel on Pacific Linux? You may lose the clustering capabilities, but that's no different from compiling a non-RAID-enabled kernel on a system which depends on the RAID capabilities that were included as non-blessed patches in previous Redhat releases.
I know about one change mentioned (Score:2)
PS - This was a much bigger benefit under Windows NT, where the system call overhead was much higher than it is under Linux. But it should still help out Linux.
Re:Excessive Credit? (Score:2)
It really isn't useful in a network server environment, but it's very useful for computation-intensive work (especially work that doesn't need to hit the disk that much). Actually, besides some difficult security concerns, MOSIX may even make network server software less efficient.
For TurboLinux, from what very little I know about it, the opposite is true (it's designed only for internet server things).
The stuff TurboLinux is doing doesn't seem earth-shattering to me, either. Useful, maybe, but many others have done or are doing similar things that might be better.
Now, what would be great is to have for Linux what VMS had (to be more specific, it was OpenVMS, I think) - it would have some exciting consequences.
They're using VI for *that*? (Score:2)
Wow. VI has always been my choice for situations when I didn't want the overhead of EMACS, but I didn't know it did clustering! :) :) And who are these Giganet people? Is that like nvi or vim?
Let's get this straight... (Score:4)
Re:Fork? Big deal (Score:2)
But I don't like the paragraph..
There is precedent for Torvalds quickly deciding to incorporate changes to the kernel produced by commercial developers, Iams said. Engineers at Siemens and Linux distributor SuSE Inc. provided a 4G-byte memory extension that Torvalds incorporated.
This seems to be a backhanded swipe at Linus. They make it seem as if Linus should do it because he did it for someone else. Well, SGI has had a bunch of patches rejected ( http://oss.sgi.com/projects/linuxsgi/download/patches/ [sgi.com] ). So have a lot of others. Tough luck... But a precedent?
Media pressure on Linux is dirty, ignorant, and non-productive when it tells people what they should be doing. Computerworld sucks and blows at the same time.
Re:Remember libc5 vs. GNU libc? (Score:2)
The first is that RMS won't put any sizable code into Emacs without legal papers assigning copyright to the FSF or placing the work in the public domain. (One-line bug fixes are OK, though.) Given that RMS has been burned in the past, this is an understandable position. But it does mean that he can't simply lift code from other GPL'ed stuff (i.e., XEmacs) without the author signing said papers. Since XEmacs doesn't do this, the specific author of a piece of code isn't always known, or may be difficult to contact.
The second reason is due to a personality conflict between certain XEmacs developers and RMS. Since I'm not a party to any of the conflicts, I can't comment in detail, but it does make getting those legal papers a bit more difficult (read as "hell will freeze over first").
TurboLinux's Kernel (Score:5)
I am the kernel maintainer for TurboLinux. I'd like to dispel a few myths here:
I hope this addresses some people's concerns. Don't worry, I am **very** pro-GPL and am responsible for sanity checking these choices.
Ciao!
(aka Christian Holtje docwhat@turbolinux.com [mailto])
Re:Forking *is* bad; see GCC and other projects (Score:3)
GCC became forked because the FSF sat on changes that were being submitted. For years. EGCS was an attempt to get working C++ code out to the general public (Cygnus had been releasing it as part of GNUPro for some time). EGCS literally saved the project I was working on and I'm sure it did the same for others.
Now that EGCS and GCC are back together as one, some of the other forks are being rolled in (Haifa, FORTRAN and Ada for sure, though I don't know what's happening with PGCC).
The act of forking caused the FSF to get off their collective duff and do something. That's a Good Thing [tuxedo.org].
--
Re:There is no danger in forking GPL software (Score:2)
Bruce
Re:Fork? Big deal (Score:2)
1) Linux has always been open. The Unix vendors, on the other hand, released commercial, proprietary, closed OS's.
2) Linux has a clearly defined "lead" developer. Unix vendors were led by nameless businessmen.
Regardless of whether TurboLinux's changes are the greatest thing since sliced bread, if Linus doesn't think they deserve inclusion in the next kernel release, they will go off on their own and sort of do a slow death-dance. Linus, along with his horde of developers, has gained the respect of developers and business folks and is accepted as the true steward of the Linux system. There is no one else around who can claim equal credibility and usurp momentum from Linus and gang.
The Unix vendors ran into trouble when they started to incorporate proprietary code into their versions and closed development. Linux will never encounter this problem. Anything based off the Linux kernel base can be re-incorporated into the kernel.
Linux is in no trouble from code forking at all.
Xemacs and Emacs (Score:3)
Thanks
Bruce
Re:There is no danger in forking GPL software (Score:2)
It's possible to maintain such a fork in 'no cooperation' mode indefinitely, but at a very crippling cost: to keep it under total control you'd have to be changing things radically enough that no outside influences would be relevant. Otherwise things would converge. Particularly with regard to the Linux kernel, even a _hostile_ attempt to fork it and take over control is a losing game, requiring a really large amount of effort for a very unimpressive return. Yes, if you're a corporation you can devote more resources to a private development than individuals can, but then you have to release source (and not obfuscated, either), and this makes it difficult to use this mechanism for more than hit-and-run marketing games.
Re:What changes need to be made? (Score:4)
We are overworked as is. I will not, as TurboLinux's Kernel Maintainer (Kernel Colonel?), fork the kernel off. Having Alan Cox and the wonderful crew on linux-kernel maintain the core stable kernel makes my life *much* easier.
The Cluster Module is just a module! It can be compiled in later after the kernel is done. It cannot (yet, as far as I can see) be compiled into the kernel as a non-module.
Feel free to grab the cluster module and see for yourself (You'll need to hold shift):
cluster-kernel-4.0.5-19991009.tgz [turbolinux.com]
Ciao!
Re:Forking *is* bad; see GCC and other projects (Score:3)
The net result of the forks was that you could have a compiler that covered one purpose, but not necessarily more than one.
All of the things you mention above are good things to support. They all have their market and perhaps none of them would have been available had we waited for complete consensus among all GCC developers to bless every change.
Code forks are just healthy competition. Remember that? Competition?
You fail to mention that a lot of these things were eventually folded back into the latest GCC versions.
The EGCS split was eventually folded back into the mainline, and the result is a better GCC, I think. People were allowed to go their own way, proving their approach good and when the fork was unforked, it benefitted everyone.
I do support of some R/3 code where our local group has "forked" from the standard stuff SAP AG provides; it is a bear of a job to just handle the parallel changes when we do minor "Legal Change Patch" upgrades. We've got a sizable project team in support of a major version number upgrade; the stuff that we have forked will represent a big chunk of the grief that results in that year long project.
Oh, so you're having problems with parallel changes. Hmm... This is bad. I know. Don't make any local changes! Use the SAP out-of-the-box. Whew! That was easy, problem resolved, the badness of a code fork vanquished once and for all.
What's this I hear? You need those changes? Those changes are there for a good reason? Oh, well, I guess anything worthwhile has its price, eh?
Sure, it's a bear to synchronize parallel updates, but that's no justification to never fork.
The ability to fork is an important aspect of the software's essential freedom [fsf.org]. If we never fork, we may miss out on important development directions.
Besides, there already are a number of Linux code forks out there. People are still developing in 2.0, 2.1, 2.2 and now 2.3 and 2.4 kernels. Each of these represents a fork. When someone improves a 2.2 kernel in some significant way, someone will probably try to integrate those changes into the 2.3 and 2.4 kernels.
What people are really concerned about here is that Linus will no longer have control over the forks.
My guess is that Linus would welcome the contributions. Remember that anything these TurboLinux people might do would be available to be merged into a Linus blessed kernel in the future.
Hey, if these are real improvements, I'm just glad they're putting them into a GPL OS rather than doing them (again and again) to some proprietary commercial OS.
The forks that have occurred in the *BSD world haven't seemed to hurt them. *BSD is gaining support all the time, we read. The various *BSD projects have learned a lot from one another. The only forks in *BSD that one might argue don't contribute to the Open Source world are the ones by BSDI and other commercial interests. Even these have probably helped popularize *BSD operating systems.
Not real clustering... (Score:2)
Now if you want real clustering, help with the Linux High-Availability Howto [unc.edu] or go look at HP/UX's MC/ServiceGuard [hp.com] - or if you are forced to play with toys, MS makes NT Enterprise [microsoft.com]...
GEEK! [thegeek.org]
The real issue is... (Score:2)
--
Re:They're using VI for *that*? (Score:2)
You can find general info at http://www.viarch.org [viarch.org].
Info on a Linux version which can work without special support from the NIC is available from http://www.nersc.gov/research/ftg/via [nersc.gov].
--
Re:Malicious use of moderation today (Score:2)
Thanks
Bruce
Re:And this is different from Redhat how???? (Score:3)
So, since Alan Cox works for Redhat it's OK for Redhat to ship modified kernel source, but not OK for Pacific HI-TEC?
This is Free Software, as long as the patches comply with the licensing terms of the Linux kernel the distributers of TurboLinux have every right to ship a modified GPL kernel source, just as they have every right to ship a distribution which contains proprietary closed source drivers bundled as binary modules.
You can't call the GPL'd patches included with either Redhat or TurboLinux inappropriate, because they comply with the GPL. And you can't call the proprietary kernel modules inappropriate (even though Redhat doesn't ship proprietary kernel modules with its distribution) because Linus has made quite clear that he accepts the legality of proprietary binary kernel modules.
So, how is this different from Redhat, or any other distribution vendor? And how am I baiting flames with my statements?
Re:Maintaining patches (Score:5)
I am the kernel maintainer for TurboLinux. Your email hasn't arrived in my mailbox yet. I suspect that you sent it to others in my organization. Most of us are at ISPCon, so it hasn't filtered to me yet.
We have no intent of packaging and maintaining a separate Linux kernel tree. It would be too much work for no benefit.
Our kernel RPMs include the base standard kernel tarball and additional patches. You can get all the additional patches out of the .src.rpm file. You can build a complete kernel from the .src.rpm file.
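For the curious, the packaging model I'm describing is just "pristine tarball plus patch set". Here is a tiny self-contained sketch of that model; every filename in it is invented for illustration, it is not our actual package:

```shell
# Toy illustration of the "pristine source plus patch set" model.
# All filenames here are invented for the example.
set -e
rm -rf demo && mkdir -p demo/linux
printf 'stock kernel source\n' > demo/linux/README   # stands in for the base tarball

# A vendor patch carried alongside the pristine source, as in a .src.rpm:
cat > demo/cluster.patch <<'EOF'
--- linux/README
+++ linux/README
@@ -1 +1,2 @@
 stock kernel source
+clustering changes applied on top
EOF

# Rebuilding the vendor tree is just: unpack pristine source, apply patches.
(cd demo && patch -p0 < cluster.patch)
cat demo/linux/README
```

The point is that the stock tree is never replaced; the vendor additions stay separable as plain patches anyone can read, drop, or submit upstream.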
I have not put up a web-page or submitted it to Linus et al as I have not had time. Our primary concern is getting a quality product to our customers.
You may get the TurboLinux Cluster Kernel Patch here (You'll need to hold shift to download):
cluster-kernel-4.0.5-19991009.tgz [turbolinux.com]
Does this answer all your questions?
Ciao!
Re:There is no danger in forking GPL software (Score:2)
I concur with the rest of your posting.
Thanks
Bruce
Re:Could be good *or* bad (Score:2)
---
"'Is not a quine' is not a quine" is a quine.
Hmm. (Score:3)
If they break source-compilable compatibility, then they're going to have an operating system with either no applications, or they are going to have to start modifying applications themselves, and they will NEVER keep up with the rest of the world.
Either way, eventually, customers are going to become frustrated when new versions of Linux applications become available, but they can't use them because their hacked up Linux kernel won't support them.
Here's my "trailblazing" analogy.
Think of the evolution of Linux as trailblazing a new road.
In the front lines, there are people off, hacking through the brush, trying different paths. Some paths are better than others. Some people wander off on obscure paths and are never heard from again. Others find good, safe, productive paths and bring back maps and suggest that the main road run that way.
In the second line, group leaders such as Torvalds and Cox look at the trailblazers' work and decide where to lay the main road.
In the third line, millions of users follow along, driving on the nicely paved road.
They don't HAVE to drive on the big, paved road --
There are always trails that lead off the main road, but those roads have more potholes, usually aren't maintained very well, and are lonely; if you went that way you might run out of gas and become stranded.
But there's nothing to stop someone from building a new, parallel road, and making it enticing enough that it renders the old road obsolete, much as the interstate highway system destroyed the commercial viability of old roads like Route 66.
But considering that much of the attraction of Linux is in the culture, and the freedom from proprietary code forking, I don't see this happening in the near future.
Re:There is no danger in forking GPL software (Score:2)
The sources have to be distributed or made available when the binaries are distributed, not released. See the GPL for the exact language.
Pardon the garble.
Re:Could be good *or* bad (Score:2)
---
"'Is not a quine' is not a quine" is a quine.
Re:What danger? Geez. (Score:3)
It's just that the idea, which seems to be the point of this entire Slashdot article, is whether Linux will fork not just into distributions, but into kernels. That's already happened, but most users are content pretending only BSD has forked, and that any BSD supporter must cover every BSD (e.g., the FreeBSD driver site was given the incentive to go to BSD, and then Slashdot posters asked 'Will they support Darwin, and not just Net/Open/Free?'). Windows, DOS, BSD, UNIX, and Linux have all forked. It's just a question of whether people want to stay ignorant and use forking as an excuse for why their 'competition' (why must every other OS be called 'the enemy'?) is worse.
Re:Forking *is* bad; see GCC and other projects (Score:2)
So PGCC has been merged except for experiments being carried on by Marc.
Re:What if Linus gets hit by a bus? (Score:2)
(did you notice I squeezed 2 Monty Python references into 1 post?)
Bad example (Score:2)
Re:fork() (Score:3)
Excuse me? UNIX dead as a server operating system? I wonder what it is that Sun is making so much money from?
This is unnecessarily alarmist. The problem with the 10,000 faces of UNIX was that those versions were all in competition and could not be merged. The good thing about differing versions of Linux could be that someone will take the best of all of them and put them together into the best system.
Remember too that various directions may not be entirely compatible with each other. The best server system may be fundamentally different from the best desktop system, and may actually require different teams of people working on each to produce the best result.
There's also the danger that the Linux kernel will grow unboundedly trying to support every possible environment. I doubt one Linux kernel can serve both the super Enterprise Server environment and the palmtop environment, yet people are going in both directions with Linux right now.
Re:well... (Score:2)
The real difference with BSD is that Berkeley released it (and under the BSDL) for anyone willing to play, and fork. They were through playing with BSD. So BSDI, Sun, i386BSD, etc. picked it up and began coding. The free BSDs can still fork just like Linux can; it's just a question of whether there's an extreme enough reason to do it. Only OpenBSD actually forked from the free BSDs, and when I read Theo's archive, core seemed stubborn and unwilling to resolve the problems. If Alan Cox were suddenly booted from the kernel team, with significant pieces of code (and a direction) he wanted to add, but shoved away over and over again by Linus and the rest... I think Mr. Cox would do something. What, I'm not sure.
Considering that DOS, Windows, the BSDs, etc. all forked, Linux will too, sometime.
SGI and stuff - not a problem (Score:3)
It hasn't broken anything. In fact, one thing Linux gets right that other vendors don't is we say "no" to crap code. If you don't do that, your codebase turns to crap. Linux does it right, *BSD does it right.
Adding features ? (Score:2)
I know Wensong's stuff works. I know people doing production work with it, so for 2.2.x that's probably the final and absolute path. For 2.3.x it depends on what Linus thinks is better.
Re:Of course it is! (Score:2)
And it's not a keyword anyway, it's an operator. Please stop misapplying terminology, it makes you look very stupid.
The article is hogwash (Score:2)
Carson City,Nev.-based
SCO.He said the consulting arm ofSCO is
Obviously the guy was writing this article in a hurry. Probably an intern who thinks he knows all about this computer stuff which is just so hot these days. Do the folks at Computerworld think that online journalism is allowed to get away with this sort of disrespectful writing?
Second, forking is the whole idea behind copyleft. You allow people to make whatever changes to the OS they want as long as they make their changes public. That way we can see if TurboLinux has done something stupid. If it is good, and is not merely the first high-availability clustering kernel because they wanted to be first, Linus will put the changes in the kernel. Linux neither benefits nor loses anything, and anyone who wishes to use TurboClustering is perfectly welcome to buy their distro. Journalists should do their homework; this is not the crisis the author would lead us to infer from George Weiss's comments. This would have been a much better article for a computer magazine if it had explained the internals of the technology and let us, as computer literates/savvy users/scientists (pick one), decide what to do with the facts and draw our own inferences about the implications of this new technology. This is a great technology to have available to the community, and perhaps the reason that Sun released their source: their clustering technology is no longer a secret. Does anyone else feel that articles about Linux and computers in general do not talk about anything interesting, just business (except for the ACM, IEEE, Usenix, etc. publications)? We should be smart enough to draw our own inferences about the implications for distributions. Paraphrasing experts only creates confusion!
Re:Clustering Technology (Score:5)
2.2: new feature, not going in.
2.2ac: using Wensong Zhang's code because it is rock solid and production hardened. It needs no proprietary tools. Several vendors already ship this code. I also know people building big web setups using it.
[www.linuxvirtualserver.org]
2.3.x is up to Linus, actually possibly to Rusty, as all of this code area has totally changed to use netfilter.
Alan
Re:RedHat/SCO (Score:2)
I add more value than RedHat.
The fact that I'm not very versed in coding anything, and that the entire "OS" is actually examples of "Hello World" renamed hundreds of times should be overlooked.
Now all I need is a few acidic remarks about a Linux vendor and I'll have a business model...
I remember. Do you? (Score:2)
I also remember all the confusion and all the time and energy and bandwidth wasted sorting out the confusion and incompatibilities. It was a Good Thing (or at least a Better Thing) when things got resolved, but if you really do remember the situation at the time it was going on, then I'm astounded by your nonchalance just because that incident is for the most part behind us.
When the hype dies down and people start to look at Linux with a critical eye, things like your example would be a serious black eye for any hopes of large-scale Linux acceptance. And with the commercial vultures, er vendors, entering the fray, it's more likely to happen in the future. How do you think it would bode for Linux's acceptance in the non-hobbyist community if two or three or four such forks were going on at the same time?
Cheers,
ZicoKnows@hotmail.com
oh no! (Score:3)
I suppose that TurboLinux should just throw away their code so nobody's feelings get hurt.
Re:And this is different from Redhat how???? (Score:3)
No, since Alan Cox is one of the three core contributors to the linux kernel, since he regularly supplies updates, and since he is the person who puts together the kernel that Red Hat ships, it is ok for them to ship whatever the hell they want to - it IS the linux kernel. That would make a great piece of Red Hat Trivia - name all of AC's changes to the kernel shipped by Red Hat that Linus later nixed. I'm sure there are at least 1 or 2.
You insinuated that they were shipping extensions, modifications, or additions to the kernel that are not part of the 'stock' linux kernel, and that is false. Their CONFIGURATION of said kernel is quite different from what Linus or Alan choose to post, ie, the default configuration, but I know you're much too smart to be confusing configuration with code - at least, I've had enough respect for your posts in the past to hope so.
I'm insinuating nothing of the sort, I'm stating it outright. All you have to do is run a make config on the RH 6.1 2.2.12-20 kernel which is supplied with the distribution and on a stock 2.2.12 which has been blessed by Linus, and diff the two for comparison.
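The comparison I mean can be sketched concretely. This toy version uses invented CONFIG_* names rather than the real 2.2.12 options, since the principle is simply "generate both configs and diff them":

```shell
# Toy version of comparing a vendor kernel config against a stock one.
# The CONFIG_* names below are invented stand-ins, not real 2.2.12 options.
printf 'CONFIG_NET=y\n' > stock.config
printf 'CONFIG_NET=y\nCONFIG_VENDOR_EXTRA=m\n' > vendor.config

# diff exits non-zero when the files differ, so guard it:
diff stock.config vendor.config || true
```

Anything that shows up only on the vendor side of the diff is a candidate for being an addition rather than a mere configuration choice.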
_I_ am not calling anything anything, other than calling you on crack - show me these 'patches' that Red Hat ships. The TL patches are really that, patches that apply against a base stable or devel release of the kernel. This is an extension of the existing kernel. Red Hat supplies, to my knowledge, no such patches. They supply a kernel, a stock linux kernel, usually a branch of the stable release. There are no PVM extensions, there are no scalability extensions. I think you might be confusing the fact that they, by default, enable almost every single driver available to be built as a module, with them including extra code. They supply those modules because they are needed at install time to interface with the customer's hardware.
Now who's baiting flames? Like I said, as long as it meets the guidelines of GPL licensing, it's perfectly legal! Free Software isn't about whether you like it that I can include my own GPL'd code in your distribution, it's about FREEDOM to modify your and my code as I see fit! Pacific Hi-Tec isn't even skirting the laws here, unlike Corel with their previous beta Corel Linux program, they are releasing a set of GPL'd patches and some proprietary kernel modules... all actions of which Linus has made perfectly clear in the past he supports.
See above for how it's different, and you're baiting flames by making completely false claims. A lie, to me, is always flame bait.
I didn't lie in the first post, and I still don't see a single person who has pointed out even one factual error! I'm perfectly happy to be corrected on factual mistakes, but to call me a liar simply because I wrote a seemingly unpopular truth really stretches your point. And I note that since moderators have chosen to moderate this down to the cruft, nobody cares anyway. Still, damn rude on your part.
Cheers!
Re:Excessive Credit? (Score:2)
Elsewhere in a reply to this article, here's what one of the TurboLinux people had to say:
"The TurboCluster was based upon the Virtual Server in the beginning. Since then we have hired a company to re-write it from scratch. There is nothing left of VS in the Cluster code, except some concepts (but none of their code). Did I mention it is GPL'ed in the source."
So, in a word, no.
Your patches, Blue. What's wrong with moderation? (Score:4)
OK, I'd like to thank users "tap" and "mmclure" for pointing out the obvious; that installing the kernel-2.2.12-20.src.rpm will generate our list of patches for you:
[snip for brevity]
Am I still a liar? Do these patches live in never-never land? Does this whole thread really deserve to be moderated down by several points to a 1 simply because some moderators didn't agree with its position? Isn't the point of moderation to promote factually correct and valuable discourse?
A public apology for calling me a liar would be nice, Blue.
Copyright owners and complaintants (Score:2)
Thanks
Bruce
Re:Excessive Credit? (Score:2)
Clue deposit accepted. Thank you, drive through.
Yours Truly,
Dan Kaminsky
DoxPara Research
http://www.doxpara.com
Re:I remember. Do you? (Score:2)
I maintain that this was a demonstration of the GPL working the way it should. Nobody was allowed to stand in the way of the Linux development because of the terms of the GPL, and the final result did get merged back in.
Thanks
Bruce
Re:Maintaining patches (Score:2)
Re:And this is different from Redhat how???? (Score:2)
I sense you've missed the point. The Linux 2.2.5-15 kernel that came with RedHat 6.0 is not identical to the stock Linux 2.2.5 kernel. Configuration issues aside, the 2.2.5-15 that shipped with RedHat 6.0 included a handful of other patches as well. This is what makes it a nonstandard kernel. Sure, the patches may be publicly available, and sure, they're probably included in an "-ac#" patch, but that doesn't make them part of the mainstream kernel series.
If Pacific Hi-Tech places their clustering patches online for all to download and use, what's the difference? Since it sounds like they're going to try to get Linus to accept them, they've got to be made public anyway. What's the difference if distribution vendor X ships a kernel with H.J. Lu's latest knfsd patches or Pacific Hi-Tech's latest clustering patches? Both result in kernels that differ in more than configuration selection from the mainstream kernel.
Just because Alan Cox works for RedHat doesn't mean that RedHat's patches are part of the mainstream kernel. (Same would be true if Transmeta got into the Linux Distrib business and shipped their own tweaked kernel -- despite the fact that Linus works there.) Alan knows and acknowledges that the "-ac" kernels are a sort of feature enhancement mini-fork. (His diary entry for October 21 [linux.org.uk] refers to 2.2.13-ac1 as a feature enhancement addon kit.) To give another concrete example: While the "large FDs" patch was not part of the mainstream kernel, Alan offered it as a separate patch and stated publicly that it's one that many vendors may apply to the kernels they ship, even though it wasn't part of the mainstream kernel. Those patched vendor kernels are non-standard kernels once patched.
There's nothing wrong with shipping a modified kernel, particularly if the modifications are public and can be applied to any kernel. But, such a kernel can hardly be considered standard.
--Joe--
lol.. (Score:2)
BTW, since BSD and SysV were the two styles of UNIX, wouldn't you say that if BSD split, so did System V? The code for both is still available from the archives. (Who holds SysV now? The last I remember was Novell letting the UNIX trademark go, though I'm not sure what happened with the SysV code.) All UNIX OSs are BSD or SysV, and UNIX-like ones are BSD-like or just... -like. It would seem pointless to make a big deal about BSD splitting if System V did too, since they were the two design styles of UNIX: not full-fledged OSs, just the building blocks.
Yes, good point (Score:2)
And that's why a corporation can fork and have its programmers developing GPLed source under NDAs, but at the same time it means that as soon as the binaries get out to ANYBODY not legally part of the corporation, the source must follow.
I think this suggests that open betas grant full rights to recipients under the GPL, and that closed betas may not; the exact point of concern is whether a beta tester is legally part of the corporation or not. They would have to be part of the corporation, legally, in order to be subject to any sort of NDA over GPLed stuff. This also makes internal testing totally controllable, by always insisting that the recipients be part of the corporation and under NDAs. As soon as the binaries or source get into the hands of someone who isn't part of the corporation, the source must be forthcoming and the recipient has full rights under the GPL. Not a bad compromise, really.
Really BIG architectures (Score:2)
It's not difficult to foresee us getting to the point where apps work under one kernel rendition and not the other; SGI is probably just the tip of the iceberg. Wait for IBM or Sun (it could happen) or any other "big-ass server" maker to start eyeballing Linux for their own machines. It could go nuts: picture having ten variations of the Linux kernel, all running their own sets of applications. That's what forking is, and its very possibility should scare you. After all, is Linux still Linux if one version runs Lightwave and another can't, or is it just suddenly another fragmented UN*X?
----
Re:Let's get this straight... (Score:2)
Who is more likely to request the source? Developers or general users? Let's face it - I can't see many Windows 98 users, migrating to Linux, caring too much about some TurboLinux kernel patch source code. A developer, on the other hand, would probably eagerly snatch the patch from the site within the first 5 microseconds of it being announced on Freshmeat.
The Fork Displayed Problems with GCC (Score:2)
The fork may have been necessary, and the eventual reintegration (or "reverse fork") that came from EGCS was also necessary.
But the initial fork displays that there were problems with GCC development that could not be reconciled at the time. And that was not a good thing.
Re:Beowulf CD-ripper (Score:2)
---
"'Is not a quine' is not a quine" is a quine.