Kernel Fork For Big Iron? 155
Boone^ writes: "ZDNet is running an article on the future of Linux when used on Big Iron. Just a bit ago we read about running Linux on a large-scale Alpha box, and SGI wants NUMA support in Linux so it can support their hardware configuration. The article talks about how memory algorithms used with 256GB machines would hamper performance on 386s with 8MB RAM. So far Linus et al have been rejecting kernel patches that provide solutions for Big Iron scaling problems. How soon before a Big Iron company forks the kernel?"
Just an excuse? (Score:1)
Forks and the maintainer (Score:4)
Just as importantly, forks are probably necessary when a significant part of the user/developer base disagrees with the direction of the project. This usually implies that the forked version and the original version are aiming at solving different problems within the same vein. If the original project wants to continue in the original direction and some people want to use the source to solve a slightly different problem, then they pretty much have to fork in order for the project to achieve its maximal result of being most useful to the most people.
This isn't a bad thing if it's done right. It's just that most of the big forks you hear of are at least partially the result of bitter, angry wars (OpenBSD anyone?). You don't hear that much about the ones which are completely amicable.
Re:ZDNet's tendencies to sensationalize at work? (Score:2)
Just because a system is running doesn't mean it is running to full capacity. With any OS the default kernel/device drivers will get the system running, but updates need to be applied to get optimal performance. A few hours spent after installation will save hours in the long run.
It's amazing how many don't do this, though! I've seen whole networks of machines running IDE in PIO mode on hardware that could run UDMA/33. I also supported an OS with statically configured communications buffers and cache sizes; again, most customers left these at their conservative defaults. And so on.
Basically, some administrators are lazy. They can't be bothered to tweak their systems or install critical updates. If I had my way, anyone who didn't bother to install current security updates would be fired for basic incompetence and never hired again!
The kernel *should* be forked (Score:1)
The more I think about it and the more Linux distributions I try, the more convinced I become that one size does not fit all.
Linux as a workstation is big and clunky. Linux as a datacenter server isn't scalable enough. Linux on handhelds and wireless devices isn't as efficient as operating systems built specifically for that purpose.
As it stands, Linux is great for web, file and print serving - which is what it's mostly used for.
X11 creates a massive overhead for desktop users, and the kernel doesn't scale to so-called Big Iron.
As an aside, I've tried both BeOS and the 1.44MB floppy version of QNX and am convinced that those scale downwards to handheld and wireless devices.
I'm not so sure of Linux in this regard, especially Linux with X11 and at least 4 different graphics toolkits (GTK, Qt, FLTK, FOX, etc.).
I think it's time that Linus et al accept patches which do fork the kernel for so-called Big Iron and also for handheld/wireless devices, creating three kernel streams.
It has to happen eventually, and at least if Linus takes the initiative, the community will contain and control the changes. It will still be Linux; the brand will be strengthened rather than weakened.
I think this is really necessary if Linux is to achieve the stated aim of "World Domination".
Re:Why not? (Score:1)
I agree, except that I think Linus and the kernel team should manage the different types of kernel. That way, it's still Linux rather than SGI's or Sun's or IBM's or whoever's flavour of Linux.
Re:speaking of code forks (Score:1)
The Solaris name has been used for SunOS 4.x releases as well. Solaris 1.1 has SunOS 4.1.3 as its core OS component.
Solaris is the name used to refer to everything that gets installed from the OS CDs; this is more than SunOS 5.x + Openwin + CDE + X11Rx.
Solaris is essentially a marketing name; SunOS is the value used in the utsname structure - i.e. what you get back from uname -s.
Re:You don't know what you're asking. (Score:1)
I saw the
How about embedded systems? (Score:2)
You forget that ancient machines are not the only places 386's and 486's appear. Embedded systems generally don't need a heap of processing power, so you can get things done cheaper (and cooler) with a 386- or 486-level chip.
To take your points one by one:
1. Earlier platforms generally had no CD-ROM. Most Linux distros . . . come on CD-ROMs.
1. You install at the factory onto ROM/flash/whatever. No need for a distribution's install CD.
2. Earlier machines usually had a 5 1/4" floppy disk . . .
2. See above.
3. Earlier machines had RAM limitations . . .
3. So what? Even without limiting oneself to embedded systems, there's no real need for huge amounts of RAM besides the RAM companies saying "BUY MORE RAM". I ran Linux on a 386 with 8MB at a summer job a few years back with little trouble, and that only during setup. (On the other hand, it would be nice to see a libc that wasn't as bloated as glibc...)
4. Some earlier machines had fscked BIOSes, aside from Y2K-unfriendly BIOSes.
4. Repeat after me: Linux does not use the BIOS. The BIOS is only used at boot time (and by DOS). And as far as embedded systems go, you can use a modern BIOS that works, or just write something simple that starts up Linux on your box. After all, embedded systems don't need to worry about being general.
5. Earlier machines had ISA, EISA, etc.
5. Modern embedded systems probably use PCI if they need anything at all.
6. Earlier network cards are not all supported . . .
6. Modern embedded systems can use supported hardware.
Re:Inevitable, but not so bad (Score:1)
Re:hmm.. (Score:1)
Linux is still driven by ppl working in their spare time on their home computers. As far as I know, most ppl still have
Now, I haven't done any kernel hacking myself, but if I were working on the kernel I'd feel kinda taken advantage of if the IBMs and SGIs of the world were to fork the kernel and focus all their efforts on scaling the system, without contributing to the areas that make a difference on affordable machines (i.e. sub-$100K).
Linux is the People's OS. Created by the people, for the people. Yes, it is free, but I don't see any reason to sacrifice the needs of the many to enable the few who can afford machines more expensive than my house.
In fact, if I see sacrifices being made in the kernel so that it runs effectively on big iron, I'd be all for a fork to keep things running on real hardware (in the sense that you would feel ok storing your pron collection on it).
I mean, if (thanks to SCO's free[beer] licensing of the old Unix(tm) sources) there can be an effort to get a usable 4.3BSD distro running for old VAXen, I'm sure you can find ppl willing to keep a fork of the Linux kernel that remains true to its roots and ideals...
Re:Why not detect memory size at runtime? (Score:1)
Why would they need different interfaces? You're still doing the same thing, just doing it in a different way.
It's called "modularity", and it's a Very Good Thing.
If introducing modularity requires rewriting the code that uses the module, that code is Broken to start with and should be fixed.
All the memory management stuff should be done in one place, and only exposed via the necessary interfaces (such as malloc()), and kernel internals should use those interfaces instead of talking directly to the mm code.
Assuming, of course, well designed interfaces to start with.
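To make that concrete, here's a tiny user-space sketch (every name in it is invented for illustration; this is not kernel code) of what "one place, one interface" plus runtime detection could look like:

#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>

/* One interface, two strategies: callers only ever see mm_ops,
   never the implementation hiding behind it. */
struct mm_ops {
    const char *name;
    void *(*alloc)(size_t bytes);
    void (*release)(void *p);
};

/* Strategy for small machines: no frills, just pass through. */
static void *small_alloc(size_t bytes) { return malloc(bytes); }
static void small_release(void *p) { free(p); }

/* Strategy for big machines: pretend to be fancier, e.g. round
   allocations up to 1MB chunks to cut down on bookkeeping. */
static void *big_alloc(size_t bytes)
{
    size_t chunk = 1UL << 20;
    return malloc((bytes + chunk - 1) & ~(chunk - 1));
}
static void big_release(void *p) { free(p); }

static const struct mm_ops small_mm = { "small", small_alloc, small_release };
static const struct mm_ops big_mm = { "big", big_alloc, big_release };

/* Chosen once at startup; the rest of the code just calls through mm. */
static const struct mm_ops *mm;

int main(void)
{
    unsigned long detected_mb = 64;   /* stand-in for real memory detection */
    void *p;

    mm = (detected_mb >= 4096) ? &big_mm : &small_mm;
    p = mm->alloc(1000);
    printf("using the %s allocator, got %p\n", mm->name, p);
    mm->release(p);
    return 0;
}

The point is that main() never knows or cares which strategy it got; only the one line that picks the ops table does.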
- Aidan
How about #ifdef CONFIG_BIG_IRON? :) (Score:2)
If the same code cannot handle both kinds of machines, then you eventually need both pieces of code in the same codebase, not a fork.
Forking is essential for experimentation. That's why we have tools like CVS which encourage forking for making stable releases and for experimenting with new features.
Re:The obvious solution: the kernel does have to fork (Score:1)
Re:this is kinda kewl (Score:1)
Re:What's wrong with ifdef's? (Score:1)
There is absolutely nothing wrong with a fork if whoever forks takes on the responsibility of forever merging the tip changes onto the fork.
On a personal note, it is nice to see you GPL boys now possibly having to code with real-world issues like forks and specials. I don't mean it as a slam, just an observation.
Bad usability for server admins and users alike. (Score:1)
Re:A brief history of computing. (Score:1)
I thought I avoided that confusion by saying "platforms (or architectures)."
To be clear, there will next be an abstraction of platforms (architectures), not to be confused with an abstraction of platforms (OSes).
What I meant is that the next level is something like Transmeta's code morphing software, which will "abstract out" (or "make transparent") the hardware architecture. This will make it irrelevant to OSes (be it Windows 2006 or Linux 4.2.x) whether the system is a sub-palm with a scant 256 megs of RAM or a supercomputer with 1/4 T of RAM (or whatever). This is the subtext, which you seem to have missed, and which makes my post relevant to the article, which is about Linux possibly having to fork to properly support the growing rift in supported architectures.
I'm not really sure why I am even replying, since your sig clearly demonstrates that you are going out of your way to misunderstand.
-Peter
Woohoo! (Score:2)
Maybe we can get changes to the VFS and VM system now!
Re:Let them fork (Score:3)
Re:Why not? (Score:1)
Re:Why not? (Score:1)
Yes ... after all, it's not as if people are looking to run Quake on big iron.
/me pauses to look at the Alpha thread
Never mind ...
=) [on a serious note, I agree ... so *what* if development forks, would it really impact the average user all that much?]
Isn't it ALREADY forked? (Score:1)
Surprised (Score:1)
Re:Why reject? (Score:1)
Re:Supporting 386s: Some Problems... (Score:2)
It's probably just me, but I've never seen a 5 1/4" floppy disk drive on anything except 286s and below. Hmm.. or maybe once.. yes.. I did see it on a 386 once. But only once.
Most 286s had 3.5" drives too.
So THAT is not a problem, and besides, it's untrue.
Re:Why not? (Score:3)
Why do we need one huge kernel anyway? Probably several kernels are needed. One for big-ass servers, one for tiny-ass routers, one for mainstream workstations, and one TBD. Having one all-encompassing kernel makes building the kernel a pain in the ass. I've been using Linux for four years, and I still have to build my kernels a couple of times before I get it right. So many freakin' options, I'm bound to get something wrong.
but that's just my opinion...
-earl
Re:Microsoft parallels? (Score:1)
Re:hmm.. (Score:1)
I agree that we do need faster CPUs, but not just because the OS demands it. Linux is the best example and proof: you can add features to the OS while still being able to install it on your 486.
I myself have two 486s running Slackware, and they do their job amazingly well, just as an Athlon 1 GHz would have done. But I don't want to be forced to buy an Athlon 1 GHz just because someone decided that I don't need my 486 PCs anymore.
Re:Why not have a kernel option... (Score:4)
One of the issues that people seem to fail to realize is that Linus is not necessarily rejecting the patches because of what they do, but because of how they are implemented. If patch code is submitted to Linus and the patch is going to make maintaining the system difficult (read: messy, unmaintainable code), Linus will reject it. Linus does not like large patches either. He likes bits and pieces and clean fixes. Hey, he started this whole thing; I think he has that right.
Another thing to think of is that ZDNet is a news network. Everyone has been saying that the kernel will fork and blah blah. There are already forks in the kernel but people just don't realize this.
Red Hat kernels: Have you ever tried to apply a patch to a stock Red Hat kernel? I know that since RH 5.2 they have shipped the Linux kernel with their own patches.
SuSE kernels: The last SuSE I installed (5.3) had both a stock Linux kernel and a custom SuSE kernel with custom SuSE patches.
Corel: never tried them, but they patched KDE and made it hard to compile other KDE software with their distro.
Point? There are already forks in the Linux community, yet it goes on. That is the whole thing about open source. There can be forks. If an idea is good, it gets into the mainstream kernel. But these 'forks' need to be tried first, and become tested and cleaned up in such a manner that they can exist with the rest of the Linux kernel.
If you think that everyone is running P200 or P500 or GHz machines, you are wrong. I am sure that there are lots of people out there running old 386s/486s with Linux as routers, firewalls, etc. After all, you do not need a superfast machine for a firewall if all you are going to firewall is 3 or 4 other machines.
I don't want a lot, I just want it all!
Flame away, I have a hose!
Abstraction is very useful! (Score:1)
First fork! (Score:1)
Re:speaking of code forks (Score:2)
Solaris is based on SysV 4.x. SunOS was based on BSD, but the current BSDs are not based on it; they are based on the same code it was based on.
5.25" Floppies live!!!.... (Score:1)
It's because:
a) I upgrade my systems rather than throw them out
b) I started with an old 486-33 system and this drive and the keyboard are the last remnant of it.
c) I write software for embedded systems and you'd be surprised how long some systems remain in operation.....
Re:Why not have a kernel option... (Score:2)
Re:wasting old pc's or electricity? (Score:1)
Linux Kernel (Score:1)
Linus is probably rejecting this because:
I do think, however, that Linux is getting so big that Linus will have to change the way patches are integrated and accepted; he's going to have to delegate more and become more concerned with Linux's overall direction, working with the big companies while also working in the interest of the community. I think the US government should even pay for this. Why not? They pay for NASA, and Linux is a lot more use to citizens than a space shuttle.
RTFHtml (Score:1)
There will be no kernel split - Linus has agreed to put it in.
Geesh...
Over to you then. (Score:1)
Itty-bitty palm-top, wear-on-your-wrist PDAs. One processor, 2MB RAM, no permanent store.
Linux runs at both extremes. Inefficiently, but it runs.
Now, you want to manage this scalability with the preprocessor. Well, that's nice. Off you go.
One day, you may get to work on a large software project. Clearly, you haven't so far.
I'm confused. (Score:2)
If they produce stable patches that compile cleanly in with everything else, especially after the new kernel revs are done in 2.3 and 2.4 is stable, I bet they WOULD make it into the mainstream.
They simply don't add everything just because it's just getting started. Lots of great features started out as separate kernel patches and eventually made it into the main tree.
And if they want to fork, what's the big deal? Who cares? They are more than free to do so, and produce their own. It's not like it would be any less open.. and heck, a third party can always glue them back together and ship his 'complete linux' or whatever...
Sheesh. hard up for topics today?
Re:hmm.. (Score:2)
If 486's weren't supported it probably wouldn't be that big a deal -- there's little lost in running a 2.0 kernel, and in the future that will probably remain true. (We should face it -- the kernel is really rather boring) But getting rid of 486 support wouldn't help much.
Why not? (Score:2)
Speak for yourself (Score:5)
MP3 file server for the geeks in IT? Throw in a big drive, but a 486 will do.
Hell, my company's web server is running on a low end PII, and I think it's a horrendous waste! It could be doing *so* much more.
Linux is a UNIX for cheap Intel hardware first. That's where its roots are, and I don't see why it should sacrifice its roots for big iron that can quite happily run a UNIX designed for big iron.
Neither does Linus, apparently.
not too bad (Score:3)
Cleaner kernel trees (Score:1)
There's no real reason there can't be different official memory managers for low memory and high memory situations, since there are clearly different issues. Of course, at this point, lots of people testing a single one is important.
Ifdefs imply accepted patch. (Score:5)
It depends on how pervasive the code changes have to be. If it involves #ifdeffing every single file, then it's going to be very difficult to maintain that, and it's going to be very unlikely that the maintainers of the project are going to allow that feature to remain part of the major distribution.
That problem is a double-edged sword. It also means that maintaining one big patch is a complete nightmare. Every version of the kernel that comes out has to be separately patched, with two important considerations:
Basically, it comes down to how pervasive the work has to be. If it's a really pervasive change which touches on almost everything, then the only option from a software engineering perspective is a fork. Anything else is being done from a feel-good PR perspective, because it just doesn't make any sense from a technical perspective to try to maintain a huge patch that covers everything.
Re:Not an issue (Score:1)
Hmm (Score:1)
Right now, for example, if you want Apache HTTPD threads at the kernel level, there is an option (in make menuconfig at least) to include this support; it isn't included by default.
Couldn't it be worked out like that? Include the patch in the kernel, have it disabled by default, but have an accessible method of readily adding support for it?
No, I am not a kernel hacker or (real) programmer; that is why I am asking.
Re:hmm.. (Score:2)
"Evil beware: I'm armed to the teeth and packing a hampster!"
But what is 'it'. (Score:2)
This is the problem, folks: Linux isn't an 'it'. It's a plural, it's an ideology, and a relatively loosely defined codebase.
We have compatibility between distributions right now by *fluke*, because no one has seen a need to change that. There is no 'rule' that says it has to stay this way.
If the community wants Linux to be on the desktop, then THAT IS WHERE IT WILL GO. Period. Regardless of who forks what. If we need a way to distinguish between our 'community' supported stuff that runs on 'true' Linux and the forks, we will do so. It's no big deal, really.
Re:speaking of code forks (Score:2)
SunOS V4.x was based on BSD.
SunOS V5.x was based on SysV 4.x
Solaris is the name for SunOS 5 + OpenWin
Fork this, you horny spoons (Score:1)
Imagine a beowulf cluster of tire irons. Now that would make a great stress reliever.
Re:Why reject? (Score:3)
A great many features start out as independent kernel patches.
When things stabilize down the road, I'm sure they will gladly put 'Big Iron' flags in the compile stuff.
The point is, Linus (et al) can't just stick everything everyone submits, big OR small, into the main kernel, especially if it's not even developed yet!
Also... the feature set for current kernels is already listed... and this isn't one of them.
You don't just add shit to a project partway through because someone wants you to.
I'm sure that by the time 2.5 kicks up, we'll see a 'big iron' flag in the main kernel options.
Re:Why reject? (Score:1)
i've not tested it, but now that i've got a laptop ;)
and windows people to impress, i just might give it a go
-
sig sig sputnik
Comment removed (Score:5)
Why not have a kernel option... (Score:2)
Re:What's wrong with ifdef's? (Score:2)
Ifdef's are one of the hacks in C that are nice if used in moderation, but you can see where this might go....
Say Linus puts one Big Iron patch in; then he won't have any reason for not putting the rest in, and when you do that, you get a nest of #ifdefs and #endifs (because these machines are fundamentally different from PCs, there would be a lot of changes -- the style of the kernel might have to be changed in order for the patches to be applied and keep it in a usable state).
What this means is that it becomes significantly harder for kernel hackers to read the code. That is a Bad Thing (tm). As I read in another post, Linus will put these things in, just not in the 2.4 kernel.
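Purely as a toy illustration of where this heads (CONFIG_BIGIRON and CONFIG_TINYBOX are made-up symbols for the sake of the example, not real kernel options; build with e.g. cc -DCONFIG_BIGIRON ifdef_mess.c):

#include <stdio.h>

static unsigned long cache_pages(void)
{
#if defined(CONFIG_BIGIRON)
    /* with 256GB of RAM, a huge cache costs nothing */
    return 1024 * 1024;
#elif defined(CONFIG_TINYBOX)
    /* an 8MB 386 cannot spare 15MB for cache */
    return 16;
#else
    /* middle-of-the-road default for ordinary PCs */
    return 4096;
#endif
}

int main(void)
{
    printf("page cache budget: %lu pages\n", cache_pages());
    return 0;
}

One of these is harmless. Now imagine the same three-way choice repeated, and nested, across dozens of files.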
Re:ZDNet's tendencies to sensationalize at work? (Score:2)
Is it regarding my comment about incompetent sysadmins who don't install critical security updates and should basically be dismissed for gross incompetence? If so, I stand by this. A lot of security breaches come through known open ports: broken CGI scripts, holes in wu-ftp that a truck could be driven through, default passwords, and so on.
I've seen the after-effects of incompetent staff, and had to clean up after them. Yuck!
Re:ZDNet's tendencies to sensationalize at work? (Score:2)
Re:Not an issue (Score:2)
This issue of scalability of Linux has been put to rest, IMHO.
So... the scalability of Linux has been put to rest because of an opinion on how it might work out?
In my opinion, the scalability of Linux can be put to rest if someone proves it by running it on 32 or 64 processors and getting the same kind of scalability as other OSes that run on such numbers of processors.
-- Abigail
First watches, and now forks? (Score:2)
Re:You don't know what you're asking. (Score:2)
That must have been a huge increase since 2.2.13 then.
$ find /usr/src/linux-2.2.13 -name '*.[ch]' | xargs grep '^# *if' | wc -l
22022
$
-- Abigail
Re:What's wrong with ifdef's? (Score:2)
This is probably true in the long-run, but expecting current Linux kernel maintainers to maintain code for machines they'll never see is unrealistic. These sort of changes are going to occur at first experimentally in-house at a large corporation. That will be a fork for at least a little while. Presumably they'll be GPL'd (they better be!) so the changes can always be brought in if people want them. And hopefully the unnamed corporation will want the good karma they'll achieve by later hiring someone to help with folding the resulting code back into the regular kernel.
With GPL'd code, I don't find a (possibly temporary) fork to do something extremely specialized all that threatening; if anything it sounds like a practical necessity at the moment.
Re:hmm.. (Score:2)
Sun Microsystems delivers a kernel that runs on everything from Sparc Classics to E10ks, without forking off "older machines".
Which is great, as you can develop stuff on low end machines and run it on large production servers, without the need to have costly development servers around.
-- Abigail
Re:Supporting 386s: Some Problems... (Score:2)
Re:hmm.. (Score:2)
Just to make things clear, IBM and SGI don't want to fork. It would be unfair to accuse IBM and SGI of taking advantage of you if they make the effort of writing non-trivial patches, offering those patches back to the community, but see those patches rejected.
IBM and SGI want Linux to succeed, on both the big iron and the simple workstations (programs for big iron machines have to be developed somewhere, and you don't think every developer for a big iron machine has 32 CPUs stacked under his/her desk, do you?), but to do the former, changes have to be made. They don't demand that anyone else make those changes - they made them themselves. But if the people in charge reject them, what can IBM and SGI do?
-- Abigail
Re:Inevitable, but not so bad (Score:2)
Re:A brief history of computing. (Score:2)
"<BR><BR>"Sig
Re:Inevitable, but not so bad (Score:3)
It sounds inevitable that a Big Iron fork will occur, and as Linus says above, this is not necessarily a bad thing. The problem comes when you have competing factions trying to do the same thing and causing confusion (as in the UNIX wars of the past). But when you have different solutions for different problems, yet everyone is moving forward together overall, it should be manageable. Indeed, it should be helpful, for it maximizes the solution for each platform.
The biggest potential problem of forking an OS is binary and API incompatibility. The reason most people use computers is to run specific applications. I want to be able to walk into my local CompUSA/log on to Egghead and get a copy of application X and run it on my computer. I don't really care what the OS is, as long as it runs application X.
If I've got Linux on my system, I'd like all applications that run on Linux to run on my system. The more forks that introduce binary or API incompatibilities, the less chance I have of being able to run the apps I want, and the more reason I have for removing Linux from my computer.
If Linux wants to be a mainstream desktop OS, it needs to make sure it doesn't fork too much. That was a big part of the reason desktop UNIX failed to take off in the late 80's/early 90's.
Re:hmm.. (Score:2)
What is bothering me about the current distributions is that they are forgetting about old hardware. I can't install Mandrake on a system with 8 megs of RAM, even though the system will run it. How screwed up is that - the installer needs more RAM than the OS!
If this Linux bloat continues, I'll just keep moving more of my boxes to the BSDs (Free and Net are my personal favs - gotta love the daemon!)
Just don't forget that Linux has prided itself on excelling on hardware that most people would call "old". As we go forward, we can't forget the past.
Re:Cleaner kernel trees (Score:2)
You download it once after installation. After that, just download the patches. 1 Mb should be pretty fast, even on a slow connection.
And if the distributions would just give us a kernel source that wasn't patched to heck, it would be even easier (although I think Redhat's
Architectural Crises (Score:2)
Joe Hacker writes a small application to solve a simple problem. Simple data structures are used. Unless Joe is a genius (and maybe not so productive) no thought is spent on interactions between components.
The thing proves useful. Other developers contribute patches. Initially the patches merely fix things and fill gaps, so the overall quality of the software rises. The software becomes a "polished" package.
After some time, there is a significant amount of contributed functionality that was outside of the original scope of the package. The underlying data structures don't quite fit it. As there is no coherent model for the interactions between components, people tend to just add things where it seems most immediately convenient. Quality suffers. The project is in a crisis of architecture.
The way to get out of the crisis is to take a step back, look at the new scope of the project, look at the way the current components ought to interact, including foreseeable extensions, and design a new architecture. This is not a case of throwing the code out and starting again, but a refactoring of how things hang together. There will be a period of instability, but it will be relatively short.
These architectural crises are normal and the only way to have a successful long-lived project. Some approaches that don't work are:
The ivory tower: Design the whole architecture in the beginning and never change it. X11 is the best example.
The clean slate: Throw out all or most of the project and rewrite the central framework from scratch. See Mozilla.
The ancient ship: Do nothing. Continue to add functionality in whatever way each developer sees fit. Eventually the software resembles a sci-fi starship that has been patched here, expanded there, re-plumbed somewhere else...
As for Linux, it appears to do this very well with the even/odd release pattern. Every odd kernel release affords the developers a planned architectural crisis, so they can accommodate a new set of functionality cleanly. I am confident that the developers will find whatever architectural tool is needed (#ifdefs, macros, templates, modules...) to maintain everything from embedded to high-end systems in one code base. It may take till 2.5, though...
Pavlos
Supporting 386s: Some Problems... (Score:4)
Yes, it is nice that it will still run on a 386, but there are other factors to consider:
1. Earlier platforms generally had no CD-ROM. Most Linux distros (except for fringe distros) come on CD-ROMs. Most people do not want to buy a CD-ROM for their 386, 486s. There are places that offer small "floppy-disk-sized" Linux distros, but they are obviously chopped. 1400K on a 500MB HDD.
2. Earlier machines usually had a 5 1/4" floppy disk, until the late 486s started really using 3.5" floppies. Most people are not going to spend money and time ripping out an old floppy.
3. Earlier machines had RAM limitations, aside from the fact that no one wants to really waste the money on putting more EDO memory into an obsolete machine.
4. Some earlier machines had fscked BIOSes, aside from Y2K-unfriendly BIOSes; Most people will not research whether the particular BIOS is okay to determine whether or not to spend money on the first three items.
5. Earlier machines had ISA, EISA, etc. Oh, what, you want to run GNU/Linux in something other than CGA?
6. Earlier network cards are not all supported to get around many of these limitations... I tried to get around not having a CD or a 3.5" floppy in an old 486 by using some sort of older ISA-based network card.
Obviously, there are many issues to consider before nodding one's head to allow Linus to try to preserve performance in ancient boxen for nostalgic purposes.
Lucas
--
Spindletop Blackbird, the GNU/Linux Cube.
Re:ZDNet's tendencies to sensationalize at work? (Score:3)
-----------
"You can't shake the Devil's hand and say you're only kidding."
Re:There is a point: One size rarely fits all. (Score:2)
-----------
"You can't shake the Devil's hand and say you're only kidding."
Why not detect memory size at runtime? (Score:2)
Inevitable, but not so bad (Score:3)
The process of non-standard kernel patches is just fine with Torvalds. "On the whole we've actually tried to come up with compromises that most people can live with," he said. "It's fairly clear that at least early on you will see kernel patches for specific uses -- that's actually been going on forever, and it's just a sign of the fact that it takes a while to get to a solution that works for all the different cases." He continued:
"That's how things work in Open Source. If my taste ended up being the limiting factor for somebody, the whole point of Open Source would be gone."
It sounds inevitable that a Big Iron fork will occur, and as Linus says above, this is not necessarily a bad thing. The problem comes when you have competing factions trying to do the same thing and causing confusion (as in the UNIX wars of the past). But when you have different solutions for different problems, yet everyone is moving forward together overall, it should be manageable. Indeed, it should be helpful, for it maximizes the solution for each platform.
Not an issue (Score:5)
Even in the cases where Linus has outright rejected BigIron patches, nothing stops a hardware vendor from patching the source after the fact - almost every major Linux distribution does this now for x86/ppc/sparc etc. (NFSv3 is a great example)
Re:speaking of code forks (Score:2)
Here at ASU, the Sun guys I know in IT refer to the SysV versions of SunOS as Solaris, and the previous BSD-based versions which came before as just SunOS. What the vernacular is where you are, I don't know.
Lee
Re:Why not? (Score:3)
The other reason that we are scared of the monster systems patch is the number of Linux kernels that come out. How often do we recheck the patch? Which kernels do we release the patch officially for? How do we decide? There are no really good answers to any of those questions which is why the big patch is to be avoided if at all possible.
Re:Let them fork (Score:2)
If they fork all it would cause is a temporary fork. It will be incorporated back into the kernel any how if it is any good. If it is not good people will not use it. If they want to fork let them.
No, you're missing the point. They would need to fork because the memory management techniques for "Big Iron" machines are fundamentally different from those for low-end home machines. You need to use different techniques on machines that are so different, so they won't get incorporated back into a single kernel. You will get (for some reasonably long timeframe) two different kernels as a result of this.
Now, whether that's a bad thing or not is a different question.
Re:Who cares? (Score:2)
I installed SP1NETWORK.EXE (service pack 1) like a good user, and now when I print, it takes over 2 minutes per black and white page of text, whereas before service pack 1 it was fast as usual. I was already running the latest printer drivers for my model of printer - I checked their website.
When I installed SP1 I chose to save automatically so I could uninstall it if I had to. When I went to uninstall it, I got the error message "Windows will uninstall the Service Pack 1 but will not uninstall the Service Pack 1." I wish I had a screenshot of it.
Now my only option is to save to disk and print somewhere else, or follow THE USUAL MICROSOFT SOLUTION - RRR = Reboot, Reformat, Reinstall.
And I can't believe I paid fucking money for this piece of shit.
Re:hmm.. (Score:2)
It's not a question of older but of smaller, and if you've ever compiled a kernel from scratch, you know how insanely flexible the choices are. Kernel Traffic, as others have mentioned is a must-read [linuxcare.com] if you want to understand the design decisions being made.
For systems with limited resources -- embedded systems, or those mini-distributions with under 16MB of storage (flash) and RAM -- the decisions made for the kernel in general are the same as for larger systems with a few gigs of RAM and multiple processors. Read a few comments on these in KT, and the reasons will become more obvious.
I agree with others who said that this is just Ziff-Davis making an issue out of nothing, and that nearly everything can be a patch or an ifdef -- no fork needed.
Re:Supporting 386s: Some Problems... (Score:2)
The issue isn't really the 386; that's just an exaggeration of the problem. The same patches that help these massive machines will hurt performance on most machines. The article mentions that allocating memory for caching is a problem for machines with little RAM: 15 megs of pure cache can really hurt on, say, a Celeron with 64 megs. Not a big problem for, say, a 31-CPU Alpha with 256GB of RAM. Compared to monsters like that, most desktop machines running Linux are about as powerful as that 386 or 486.
I can see Linux eventually dropping mainstream support for the 386, but right now there's no huge gain to be made by not supporting it. Killing performance on most x86 machines out there on the other hand is a bad thing.
treke
Inappropriate Ifdefs: BAD (Score:5)
I wouldn't be shocked if the stretching of boundaries that comes from:
The fundamental problem with a fork comes in the code that you'd ideally like to be able to share between the systems. Device drivers float particularly to mind.
After a 2-way fork, it becomes necessary to port device drivers to both forks, which adds further work.
And if a given driver is only ported to one fork, and not the other, can it correctly be said that
or do we need to be forever vague about that?
Re:The obvious solution: the kernel does have to fork (Score:3)
Read KT [linuxcare.com]. Read KT [linuxcare.com] often.
Re:What's wrong with ifdef's? (Score:3)
Of course, then you would have to ensure that both offer a similar interface so that either can be used transparently. This *could* be a maintenance nightmare. I think there are a lot of ways that this *could* be done, but it depends greatly on the details involved whether it will be practical or not. I find it hard to believe that Linus would have overlooked something as obvious as ifdefs or makefile tricks, so he's probably used his (undoubtedly god-like) judgement to decide that it would be a bad idea in the long run.
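As a sketch of what "a similar interface" could look like in practice (the names here are invented, not taken from any real patch), the trick would be to confine the #ifdef to a single spot and have everything else call through the shared interface:

#include <stdio.h>
#include <stddef.h>

/* The shared interface: callers use this and nothing else. */
size_t cache_budget_bytes(size_t total_ram_bytes);

#ifdef SMALL_MEMORY
/* Variant for small machines: never give more than 5% to cache. */
size_t cache_budget_bytes(size_t total_ram_bytes)
{
    return total_ram_bytes / 20;
}
#else
/* Variant for large machines: cache aggressively, keep a fixed reserve. */
size_t cache_budget_bytes(size_t total_ram_bytes)
{
    size_t reserve = 64UL * 1024 * 1024;   /* always leave 64MB free */
    return (total_ram_bytes > reserve) ? total_ram_bytes - reserve : 0;
}
#endif

int main(void)
{
    size_t ram = 8UL * 1024 * 1024;        /* pretend this is an 8MB box */
    printf("cache budget: %lu bytes\n", (unsigned long)cache_budget_bytes(ram));
    return 0;
}

Whether that stays clean once the differences stop being a single number is exactly the judgement call Linus has to make.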
What's wrong with ifdef's? (Score:5)
Forks are usually justified only if the original maintainer pollutes the source with hacks or changes the license.
Why reject? (Score:3)
There is a patch out there that stores the computer's state to disk before shutdown and then gives you an instant boot. My home machine is used that much, and my UPS needs repair. This patch would be useful for me, but I'd have to patch it in by hand, and then I'd be out of sync with the official Mandrake kernel. That means I'd have to patch in security updates by hand.
The problem is, this patch is as useless to Big Iron as support for 256GB of memory is to me (right now). But why can't both Big Blue and I have our way with conditional compiles? All it would take is a couple more menu selections in xconfig.
Do you have more than 2GB of memory?
Would you like instant-on?
hmm.. (Score:4)
I don't know, it depends on where the split of cost/benefit falls.. ZD doesn't say...
`Sides, having a Compaq/SGI/IBM 'approved' kernel patch doesn't hurt much..
It only makes sense (Score:3)
Specialized kernels are good, so long as the support behind all of these kernels remains great enough. I don't think I need to point out the possible pitfalls of forking the kernel and thus, effectively, forking the developers behind the kernel into two or more camps. But at some point, the Linux kernel that runs on a 386 should be different from the one that runs on the XYZ supercomputer, just because the latter can take full advantage of all the wonderful scalability that the XYZ supercomputer offers.
Anyway, as I said I'm not an expert but this just seems logical.
Kernel fork for big iron? Why not? (Score:3)
Re:There is a point: One size rarely fits all. (Score:2)
All the examples you've chosen are either processor-architecture or device-driver related. Most of the code that both of these classes use in the "core" of the OS is non-architecture-dependent, and is coded for best general use.
Forking the kernel for "big iron" may be required because utilizing that many resources effectively requires different algorithms at the very core of the OS - scheduling, virtual memory, caching, etc.
The advantage of forking the kernel is the simplicity of maintaining the code for either. However, the major disadvantage -- as seen with the BSDs -- is that features in either tree just end up getting implemented twice.
It's a tough question to answer, and both choices have major long term implications.
-Jeff
wasting old pc's or electricity? (Score:2)
But aren't you consuming a tremendous amount of power by running 3 or 4 machines when one new machine could do the whole thing?
Re:Why not detect memory size at runtime? (Score:2)
Re:Supporting 386s: Some Problems... (Score:2)
The issues you raise are packaging issues, important to people putting together distributions -- not kernel development or design issues. Even though that is the case, most distributions tend to load specific hardware support as a module.
If you roll your own kernel, you have ultimate control over what disk, BIOS, and bus types are supported. Very little in the Linux kernel is mandatory. That's why it runs on such wildly different systems.
Re:Supporting 386s: Some Problems... (Score:5)
1. Earlier platforms generally had no CD-ROM.
Install via NFS or on a pre-formatted hard disk with all the necessary files. Been there, done that.
2. Earlier machines usually had a 5 1/4" floppy disk, until the late 486s started really using 3.5" floppies.
You can boot from a 5.25" floppy disk as well as from a 3.5" one. Aside from booting for the installation, there is no need at all for a floppy drive.
3. Earlier machines had RAM limitations
Many old 386s/486s can use up to 16 or even 32 MB RAM. That's more than enough for a small (slow) home-sized server. Even 8 MB does the job.
4. Some earlier machines had fscked BIOSes, aside from Y2K-unfriendly BIOSes
Y2K is only an issue during boot-up; after that you can set the system's time to whatever you want. From what I've seen, Linux deals better with really old motherboards than with some brand-new ones.
5. Earlier machines had ISA, EISA, etc. Oh, what, you want to run GNU/Linux in something other than CGA?
There are very good SVGA cards for ISA, although running XFree with a "modern" window manager on such an old box is suicide. However, any kind of video card does the job for a "server" type of computer.
6. Earlier network cards are not all supported to get around many of these limitations
Granted, very old ISA cards might not work well, but many cards do. NE2000, old 3Com cards? No problem, work fine, and deliver good speeds too.
To make a long story short, killing support for old systems is a Bad Thing IMHO, and isn't necessary either; it would only make the kernel tarball smaller. I'm all for conditional compiles, and I actually wondered why some of the kernel patches out there (like the Openwall patch) haven't been put into the mainstream kernel as 'make config' options. If they can put in accelerator thingies for Apache, why not this?
A brief history of computing. (Score:3)
Then someone "abstracted" them with "BIOS"
Then there were lines of systems, and they were all different.
Then someone "abstracted" them with "C"
Then there were platforms, and they were all different.
Someone (Transmeta?) will come up with a way of abstracting platforms (or architectures) and make them "seem" the same.
This relates directly with performance increases. When you find yourself wondering what is going to make a 10GHz system better than a 1GHz system I think the answer is the level of abstraction.
Any number of quibbles can be made with the above statements, but I am illustrating a point, not being a historian.
-Peter
The obvious solution: the kernel does have to fork (Score:2)
If the patch is getting rejected by Linus, it's not because he favors 386s over 40K-BogoMIPS boxes. It's because it is bad design. Besides, if they want to make changes to the kernel that help some people (but hurt the majority), they need to design them in a way that they can be a compile-time feature, in the same way that 1GB or 2GB support is a compile-time option right now.
Of course, it's always easier said than done. Another solution would be to forever maintain a big-iron-patch.tgz. But the reason they want to fork the kernel is because it's probably too hard to maintain a patch like that.
Another solution would be to start another branch (Alpha, MIPS, Intel, and BIG-IRON), but it includes more than CPU stuff, so that would be an issue.
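For what it's worth, a compile-time feature really just boils down to one symbol chosen at build time selecting a constant, with the rest of the code following from it. A toy sketch (the symbol and the addresses are invented for illustration and aren't claimed to match the kernel's actual configuration):

#include <stdio.h>

/* Build with or without -DBIG_USER_SPACE to get two differently
   tuned binaries from the same source. */
#ifdef BIG_USER_SPACE
#define KERNEL_START 0xC0000000UL   /* user space gets the lower 3GB */
#else
#define KERNEL_START 0x80000000UL   /* a 2GB/2GB split instead */
#endif

int main(void)
{
    printf("kernel virtual mappings start at 0x%08lx\n", KERNEL_START);
    printf("user address space below that: %lu MB\n",
           KERNEL_START / (1024UL * 1024UL));
    return 0;
}

Everything downstream of that one #define is shared; only the constant differs between the two builds.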
There is a point: One size rarely fits all. (Score:5)
Re:Inevitable, but not so bad (Score:3)
1) Is the resulting code still Linux?
This is a BIG question, especially for IBM and SGI who want to say they're Linux supporters. If Linus doesn't grant use of the Linux name to their OS, they're back to naming the resulting kernel something other than Linux. Big PR problem.
2) Will the "Linus approved" patches make it into the follow up kernels released by IBM and SGI?
I'd be willing to bet both companies are willing to do the right thing and include them, but how big can this fork get?
Now, all that aside, distros have been doing small-scale forks for a while now. I think SuSE had a 1GB mem patch, and Red Hat frequently patches the kernels they distribute. Nothing bad for most users.