Linux Kernel to Fork?
Ninjy writes "Techworld has a story up about the possibility of the 2.7 kernel forking to accommodate large patch sets. Will this actually happen, and will the community back such a decision?"
From the article... (Score:5, Insightful)
> compiled specifically for it.
FUD FUD FUD. No, no, no. NO! Who writes this generic shit? There's no truth to the statement above; it implies a problem that doesn't exist.
About time.... (Score:5, Insightful)
Yes, of course it will. (Score:4, Insightful)
Even if this was a more hostile type of fork it wouldn't matter. Some amount of forking is healthy in open source.
New linux development process (Score:5, Insightful)
In fact, out of all the news articles out there about linux 2.7, it seems (not that this surprises me) that slashdot went out of its way to pick one laden with the most possible negative FUD and the least possible useful information about what really is news with 2.7. A much better writeup can be found at LWN [lwn.net]. In summary, the present situation is:
Idiot. (Score:5, Insightful)
i don't get it (Score:3, Insightful)
A couple of months ago there was a general upheaval over the fact that Torvalds et al. had decided not to fork a development tree off of 2.6.8, but rather to do feature development in the main kernel tree. The message of the article (brushing aside the compiling-applications-for-each-kernel FUD) seems to be that they have changed their minds and will fork off an unstable kernel branch anyway.
What am I missing?
Re:Uh-oh (Score:5, Insightful)
Secondly, linux (the kernel) already "forks" every time a new development version is opened, i.e. 2.1, 2.3, 2.5, etc. All this is saying is that 2.7 is about to open.
"Fork" is not a dirty word.
Re:I'd Like to Run Linux -- Just No Time (Score:3, Insightful)
erm.. when did you last try installing linux, and which distro did you use?
I have recently installed ubuntu and fedora 3 on hardware ranging from a fairly old PII 400 with matrox gfx and scsi to an amd64 3000 with radeon 9200 gfx and serial ata, to an ibm thinkpad r40e.
All of these installed with almost no effort and Just Worked. (apart from power management on the laptop which took about 30 mins of googling to find a solution)
I even had hardware-accelerated gfx on _all_ of the above machines, with no extra configuration and no drivers to download or install.
Really, if you want "easy to install and get running" give something like ubuntu or fedora a try. You might be pleasantly surprised.
Re:I'd Like to Run Linux -- Just No Time (Score:5, Insightful)
Dude - just stick with Winblows. You have no time to "know linux", as you put it, so just stick with what you know. You can post on Slashdot either way.
Please, developers, don't dumb Linux apps/distros down so much that it looks and feels like Windows.
Re:From the article... (Score:5, Insightful)
Perhaps he is not talking about applications such as "Emacs" or "vim"? (Or he just finished his crackpipe.)
It is Linus's fault. (Score:4, Insightful)
There needs to be a consistent driver API across each major version of the kernel.
A driver compiled for 2.6.1 should work, in its binary form, on 2.6.2, 2.6.3, and 2.6.99. If Linus wants to change the API, he should wait until 2.7/2.8 to do so.
The current situation is completely ridiculous. Anything which requires talking to the kernel (mainly drivers, but there are other things) needs either driver source code (watch your Windows people laugh at you when you tell them that) or half a dozen different modules compiled for the most popular Linux distributions. These days, that usually means you're going to get a RHEL version, and possibly nothing else. What happens when you're competent enough to maintain Fedora or Debian, but you don't have driver binaries? (Yeah I know, White Box or Scientific, but that's not the point.)
In fact, I recently had to ditch Linux for a project which required four different third-party add-ons, because I couldn't find a Linux distribution common to those supported by all four. We had to buy a Sun machine and use Solaris, because Sun has the common sense to keep a consistent driver API across each major version.
Yes, I've heard all the noise. Linus and others say that a stable driver API encourages IHVs to release binary-only drivers. So what? They're going to release binary-only drivers anyway. Others will simply avoid supporting Linux at all. LSB is going to make distributing userland software for Linux a lot easier, but until Linus grows up and stabilizes the driver API, anything which requires talking to the kernel is still stuck in the bad old days of the 1980s and 1990s. Come on, people: it's 2004, and it's not too much to expect to buy a piece of hardware that says "Drivers supplied for Linux 2.6" and be able to use those drivers.
Re:From the article... (Score:5, Insightful)
Re:Run, Chicken Little, Run! (Score:2, Insightful)
Re:From the article... (Score:3, Insightful)
Re:It is Linus's fault. (Score:4, Insightful)
That's deliberate...
> In fact, I recently had to ditch Linux for a project which required four different third-party add-ons, because I couldn't find a Linux distribution common to those supported by all four. We had to buy a Sun machine and use Solaris, because Sun has the common sense to keep a consistent driver API across each major version.
Overall, please either buy from open-source-friendly hardware vendors, or pay the price for a proprietary operating system. You have chosen the second option, so deal with it.
Re:My favorite quote from the article..... (Score:2, Insightful)
Now the binary parts of those modules mean that the kernel can't autorecompile them for you, but that's not the kernel's fault.
And in fact, the 2.4->2.6 kernel change did require a new version of modutils; also, you could get improvements in some applications if you recompiled them.
Re:From the article... (Score:1, Insightful)
Gah, I get irritated just thinking about it. I hate, hate, HATE this about Linux.
Re:Uh-oh (Score:4, Insightful)
Someone decided that this is "bad" (and it finally opened the market for DOS/Windows), which I still don't fully get. If the software/system is still usable to me, I keep on using it (I'm still running my trusty old Atari in the studio for everyday MIDI sequencing). If I need to get a more powerful machine, and/or the software will only be supported on that new machine -- how is this any different from today's Windows/Office situation?
With each new Windows the user interface changes (think of 3.11->95; XP anyone?), new data formats which are not backward compatible are introduced (.doc), and all they ensure is that you can load your old documents and please, please use the new formats as quickly as possible to make a lot of people buy the latest release...
If your Linux application breaks because it requires some stone-age library, then just install it. For instance, people are used to carrying a shitload of same-but-different-version DLLs on Windows systems and don't seem to object.
With wide acceptance of RPMs we also accepted the breaks-if-lib-version-of-the-day-is-not-present kind of behavior... (The next logical step would be including the required libraries in the RPMs, just as every Windows program comes with all its required DLLs.)
Wow, this article is pure uneducated guesswork... (Score:5, Insightful)
Well, this was fun to read. This article is about as educated about the subject as the average donkey.
Uhm, what gave MS the edge in the 80s was cheap i386 (well, actually 8088) hardware, and a relatively cheap OS (MS-DOS). Unix servers cost an arm and a leg in those days, and many companies/people wanted a pc as cheap as possible. Buying an i386 in those days meant running DOS, and the "marketplace" standardized around MS-DOS.
Utter bull. Upgrade kernels as much as you like; it won't break software, unless perhaps you change major/minor numbers. The same thing would happen on Windows if you started running win2k software on win'95. But this is a matter of features in the kernel, not compilation against the kernel.
And the big news is? This happens every couple of years, with stable versions having even minor version numbers and unstable versions having odd minor version numbers. This helps admins and users to effectively KNOW which versions are good for everyday use, and which versions are experimental and for developers.
Well, imagine a Beowulf cluster... How long have those patches existed? There are several ways to build a cluster, as long as you patch your kernel.
And why on earth would they want to do that? Linux is on the right track, so why bother with an entire rewrite of good functional code with good design.
It's also focused on multimedia (xmms, mplayer, xine), webservers (apache), mailservers (sendmail, qmail, postfix)... I'd sooner say that open source has focused on internet servers than on the stuff it needs to make an OS run plus word processors. This is like saying that an oven is primarily used for making toast, when it actually also bakes cake, pizza, and whatever else you toss inside.
I'm sorry, but this kind of article belongs in the trashbin. Either the journalist doesn't know what he's writing about, or he's being paid to know nothing about the subject. One of the things that keeps surprising me in business IT journalism is the lack of knowledge these people have about the subjects they write about.
Re:I'd Like to Run Linux -- Just No Time (Score:3, Insightful)
I think linux gets the blame, but you wouldn't expect microsoft to write drivers for your camera.
Case in point: I bought an HP scanner/copier/printer about a week ago, and it took about 2 hours of constant reboots, driver-conflict errors, and other problems to get it working correctly. The end result had me downloading almost 400MB worth of drivers from hp.com, uninstalling the printer, and reinstalling it with the new drivers. The drivers on the CD were bad. That's not an "everything works" scenario. And that's with Windows XP Home on an HP workstation connected to the printer over USB. A problem like that is NEVER a Windows problem; it's always a problem with the device. If I were using linux, it would be linux's problem, not the device's.
Re:Letter to Editor... (Score:3, Insightful)
Christ.
I'm not making fun of you. What you said was completely accurate, but when you're dealing with clueless people, you need to speak simply and plainly. "Holy penguin pee"? C'mon.
Quick example:
To Whomever:
Your most recent article regarding the upcoming linux fork may be confusing to your readers. The current version of Linux is 2.6. As new enhancements and bug fixes are developed and tested, they are added to this 2.6 kernel. This is similar to the way Microsoft puts out service packs on their current version of the Windows XP operating system.
When significant or cutting-edge features are added, the team in charge of maintaining the linux kernel needs to decide whether to "fork" the kernel to a new version. Again, this is similar to how Microsoft might decide to put a new feature into Longhorn instead of patching it into Windows XP in a service pack.
Forking simply means that a new release of Linux is being actively prepared.
FUD (Score:3, Insightful)
The fact that patches exist, large or small, is what keeps the main kernel working. So for special implementations, patched kernels exist and everyone is cool with that. I have yet to see a patch that isn't based on the main kernel, and I don't foresee a situation necessitating that one not be.
I think we should look into the motivation of this article that cites no specific information or sources. It's pure speculation.
Re:It is Linus's fault. (Score:3, Insightful)
Re:From the article... (Score:2, Insightful)
which can and does cause earlier programs to suddenly fail, because they depended upon a particular DLL's quirks. It's called "DLL Hell".
Sorry, I've never had this happen in my life. Ever. It's simply not an issue that comes up all that often. And I think the weight of evidence is on my side... people download stuff for Windows all the time.
Re:From the article... (Score:1, Insightful)
Re:I'd Like to Run Linux -- Just No Time (Score:3, Insightful)
Because some people are overzealous in their free-software speeches to the masses. Linux users have a bad rep because of a few bad elements.
Everyone should use what they want to use (at home at least). You like MacOS? Be my guest. Windows? Go right ahead. Linux? Hell yeah! People should be encouraged to try and use open source software, not forced. If people don't have the time to learn new things, let them use whatever they want.
Please, end-users, stop having this elitist feeling because you're running linux. If apps and distros want to dumb down their applications to increase the amount of users, let them. A good example is perhaps lprng versus cups. Cups is easy to setup and use, lprng is not that easy to setup and use. If normal users can setup their printservers using an easy tool, and power users can set it up with their favourite tool, who is going to complain? It's a matter of choice.
As soon as we make linux distributions easy enough for Joe Common to use, and decide that Random J. Hacker can't do things the way he wants to do them then we're in trouble. Then it's no longer a matter of choice, but a matter of locking in people to solutions that only work in 80% of all cases.
Re:From the article... (Score:5, Insightful)
One of my biggest gripes, was how when you try to install the latest version of Foo, it requires the latest version of Bar, which in turn requires newer versions of X and Y and so on.
If you are using a more recent distro, this is far less of a problem, but the moment you move back to something older that cannot be updated as far as required... you end up with problems.
Specifically, I was trying to get some things working under Red Hat 6.2, a 5 year old distro. Many called me dumb for even trying such a thing, which I find quite entertaining considering how many still use 6.2 in server back ends, not unlike how many still use NT4, because it works.
Speaking of NT4, I found it far easier to back port a Windows based app written for XP or 2k back to NT4, jumping back 5-9 years in terms of age, than it is to go from Fedora Core 2 to Red Hat 6.2, a jump of only 5.
This is why I so love Windows, consistent targets (within reason), where the # of system updates is finite and can be controlled.
As for this so called 'Dll hell' people like to bad mouth Windows for... I can't say I've ever had that issue myself... however I did find it worse than hell to try to figure out how to run 2 different versions of GLIBC on a system without recompiling every single application requiring one or the other... Windows has many simple solutions for a problem like that.
Re:From the article... (Score:3, Insightful)
Re:From the article... (Score:2, Insightful)
Re:From the article... (Score:2, Insightful)
some of us prefer to... LEARN.
Re:It is Linus's fault. (Score:1, Insightful)
You are assuming that Linus and co. should do everything to accommodate the business model that led to the wild success of Windows. I find that notion silly; it asks them to betray the things they believe in.
The way of thinking you trash in your post is the reason for Linux's success, and to me keeping it means not betraying your roots.
Remember that Linux ships more drivers than ANY OS out there.
Re:From the article... (Score:3, Insightful)
Re:From the article... (Score:5, Insightful)
You aren't going through bullshit. How is 'apt-get install foo' or 'yum install foo' or 'emerge foo' going through bullshit? It's one command! Do you want something easier? Must the OS read your mind and install the package for you?
These "200 other barely-related packages" are called dependencies. Package managers don't just start downloading other packages willy-nilly; they install the packages that your new package depends on. Some package managers can also download packages marked as suggested or recommended, but that is easily changed via a config option, menu choice, or dialog box.
Re:BS.. (Score:2, Insightful)
Re:From the article... (Score:4, Insightful)
But how exactly does that collide with the grandparent's point?
You have stated yourself that any installer is free to use any of the quirks you've described (in short: rely on the registry and hope it's not messed up yet again, overwrite DLLs that other programs may be using, or waste disk space and memory by dumping yet another copy of bozo.dll to be loaded at runtime).
So it's only a matter of time until you run into a piece of software that picks the route that breaks your system.
Re:It is Linus's fault. (Score:3, Insightful)
I think you may be missing the point of OSS. These things (breaks to backwards compatibility) aren't really as much of a problem on Linux as they would be on Windows because virtually all of the code in question is available in source form. You can always fix the problem by recompiling stuff. On Windows if an operating system API changes you have to wait for whoever made all of your software to fix and recompile it and then redownload/repurchase it. This is part of why Microsoft is unable to fix many longstanding problems with Windows and the Windows API: they are slaves to backwards compatibility. In fact the whole .NET thing seems to me to be an attempt to escape from this limitation and enjoy some of the benefits that open source now does.
Virtually all of the problems you describe are problems with binary packaging rather than with the core Linux software itself (with a very small number of exceptions such as the GCC 2.9.x -> GCC 3.x transition; and even that was fixable through recompilation). All I can say is get a better distro. Debian doesn't have so many problems with this, and Gentoo and other source-based distros certainly don't either. This is in fact why I stopped using Red Hat and switched to Gentoo in early 2002. You don't have many binary version compatibility problems on Gentoo because it doesn't use binary packages except where the software is not available in any other form. It thus manages to parallel the development of most open source software very well: it isn't a problem when developers break an API...you just use a single command to recompile everything that was broken.
Open source developers often don't worry about maintaining binary compatibility because it isn't a problem if you just recompile. Using binary packages just invites problems: whoever makes your distro has to stay on top of the constant changes in the API. So if you do use binary packages, at least do yourself a favor and use a well-tested distro like Debian.
Re:From the article... (Score:5, Insightful)
Going through this "bullshit" is actually easier than installing software in Windows. Assuming you use an apt-based distro, just type apt-get install foo. You don't even need to download the software; apt does it for you. The only interaction it requires is a confirmation if your package has dependencies. A minute or two later (depending on the size of the software and the speed of your connection), the magic happens: the software is installed! No chasing software on the Web, no downloading, almost no interaction (don't you find clicking Next, Next, Next stupid, after all?). It's the best thing since sliced bread, yet you fail to see it. Again, which distro do you use, so I can give you clear instructions on how to use your package manager properly?
Re:From the article... (Score:3, Insightful)
If you want to target Red Hat 6.2, target Red Hat 6.2. If you want to have it both ways and depend on something that's much newer (and thus has lots of dependencies that have to be updated) that's your choice. You can't have it both ways - you want to target an old OS but you want to use the newest libraries to save yourself some effort, but you also don't want to have to update those libraries. Somehow they're supposed to magically appear.
I suspect that your IDE and/or installer maker on Windows make your dependencies magically appear on NT4, by figuring out what you need and bundling it with your executable. That doesn't make the API consistent, it just means that the development tools you're using are convenient in that respect.
Also, you picked a distro that doesn't come with an automatic package downloader (at least, as far as I know it doesn't). It has a local package manager (RPM) that stops you from shooting yourself in the foot, but it doesn't go get stuff for you.
I think you need to learn the difference between using developer tools that insulate you from dependency horrors, and a package manager that insulates you from dependency horrors. It sounds like on Windows you had the first, and on RH 6.2 you had neither.
Re:From the article... (Score:3, Insightful)
Only a very few of the top kernel developers are actually paid to do what they do. The rest (the countless real folks with other things to do) submit patches, many of which actually end up in the kernel after a few bounces back and forth with a lead.
From the perspective of these folks, the kernel does exist for them to code.
I think what you are forgetting, is that nobody can lock the Linux kernel up into an ivory tower. It is a community effort. When it's really, really important to someone with resources (IBM, HP), that someone will assign a few developers to get it done.
I think the biggest thing your argument forgets is that, by the nature of open source development, implementation of something someone else has already done (often the case in Linux) must be done in a vacuum to avoid IP infringement. So, when it was time to do USB support, decisions had to be made. For most devices the USB stuff does work; the fact that it isn't done the same way as in Windows is important.
Look at it this way (Score:3, Insightful)
...break them late.

If all 2.6.x kernels supported a driver, you'd just accept that driver... until the 2.8.0 kernel comes out. Then what? The vendor doesn't care; they got their money. They either want to sell you new hardware, or they've gone out of business. So you'd then expect Linus to add some serious bloat to support a driver ABI translation layer letting you run ancient drivers on modern kernels.

Then what if you upgrade to a 64-bit processor? You want Linux to emulate the old stuff? That's what made Windows 95 so lovely.

The way Linus does things, you and these corporations can't ever forget that binary drivers are 2nd-class.
The Reporter Doesn't Seem to Understand OSS (Score:3, Insightful)
"Top contributors to the Linux kernel have been Red Hat and SuSE, he said. Also contributing have been IBM, SGI, HP, and Intel."
Usually, when talking about the Kernel, it's valid to at least note some individuals, such as, say, Linus.
Re:Ours is not to wonder why. (Score:4, Insightful)
Re:Uh-oh (Score:3, Insightful)
You see there's a flaw in your logic.
Not fixing a bug to allow some bad code that uses said bug to run is just plain ignorant.
Re:From the article... (Score:5, Insightful)
Most other operating systems do this and it's about time Linux provided some standards for drivers.
As much as we hate it, we do need to support binary only drivers.
It pisses me off that I can no longer use my webcam because the driver maintainer can't keep up with every variation of the kernel, and for legal reasons can't release the source code.
-Aaron
Re:Wow, this article is pure uneducated guesswork. (Score:3, Insightful)
I'm not sure he's ever actually followed kernel development before.
For all those wanting to know what's going on without reading the linux-kernel mailing list, just run over to Kernel Traffic [kerneltraffic.org] -- a summary of the week's happenings on the list.
Re:From the article... (Score:3, Insightful)
The people who are motivated to create the app (the author) should be the one releasing packages, not some third party. Think what it would be like if you couldn't install anything on Windows without Microsoft individually making a special installer package for it; you can still install but not without a major headache.
Re:From the article... (Score:2, Insightful)
If I write software, all I should release is the source code. Let the distributions package it for their architecture.
I understand the concept perfectly, thank you. I just think it's wrong. It's unrealistic to expect distros to package every conceivable piece of software a user might want, and a whole lot of wasted effort if each distro packages its own version. There's no reason that a binary that runs on one distro shouldn't run on a different distro or even another version of the same distro.
Re:From the article... (Score:5, Insightful)
If we want to maintain the quality and stability that the Linux kernel has, we need to resist binary drivers. I believe many of the stability issues remaining in Windows today are in fact driver issues.
Giving in to the hardware companies' (pointless) fear of losing so-called "intellectual property" by opening up their drivers would pass part of the control of the kernel from Linus & co. to countless programmers who may or may not have special interest in improving Linux specifically. The quality assurance that currently takes place for the free software drivers that get into the kernel is valuable.
Giving up on free/open source software at every turn where it is convenient would lead us to having an OS that is an assortment of non-free parts a bit like the current proprietary UNIXes. It might even lead to someone eventually getting into a position where they could charge for an essential part of the system thus rendering it non-free even in the beer sense.
For a kernel developer's take on this, read this post from Greg Kroah-Hartman's blog [kroah.com]:
Serious question: (Score:5, Insightful)
"It pisses me off that I can no longer use my webcam because the driver maintainer can't keep up with every variation of the kernel..."
Since this is a webcam I am making an assumption that this is more of a personal/desktop/workstation type role. With that in mind, is there any compelling reason that you must upgrade to the latest greatest kernel as opposed to sticking with a previous kernel that has worked along with your "webcam" driver that worked as well?
I assume a lot of users upgrade to the latest and greatest software; that in and of itself is not a bad thing, but it's not always the smart thing. I'm referring to an "if it ain't broke, don't fix it" line of thinking.
Can you or someone else tell me what part of this issue I seem to be completely missing?
Re:It is Linus's fault. (Score:3, Insightful)
Why? You tell a nice little story about running 4 different binary only drivers. You represent a very, very niche market.
Of anyone I've ever read or heard of, you are absolutely unique. I've run and installed Linux on hundreds if not thousands of computers. The only binary only driver I've ever had to install is the nVidia one. Even that is merely just because I wanted better performance. I could easily use the machine just fine without it.
I've installed SCSI cards, IDE cards, RAID controllers for SCSI drives. I've installed SAN drivers. I've installed up to Gigabit cards. I've installed scanners, mice and keyboards, printers, CD burners, soundcards, USB and Firewire drives. I've had desktop cameras. I've had digital cameras that used USB connectors. I've used magneto-optical drives. Heck, the parallel-port Zip drive worked better under Linux than it did under Windows. I've hooked up flash readers. I've connected a myriad of peripherals. What on earth are you connecting to a machine that you need 4 separate binary drivers? Are you sure they aren't replaceable with components that would have worked with open drivers?
I'd much rather have the developers have the ability to find and fix bugs than kludge together something that might or might not work.
For the most part, 99% of the driver API is in fact stable within a single minor kernel series (technically speaking, in 2.4 the 2 is the major release and the 4 is the minor release; the single exception I can think of is that 2.4.10 completely broke the VM subsystem from 2.4.9). Most of the rest is merely your vendor being too lazy to get off their butt and release the binary-only driver; it's not particularly hard. If what you are using doesn't support both SuSE and RedHat, put down the box and back away.
Finally, just like the Apache guys have been saying for years: "We have a lot more drivers to maintain than you do; if we change the API, we have a very good reason to." It's not like Linus goes and makes changes willy-nilly. Generally speaking, changes have been made to make the API easier to use, to refactor common parts into higher layers for code reuse, or to move to a more efficient/scalable model.
You can keep saying that it really needs a decent driver model. No it doesn't. It is what it is precisely because they refuse to have choke points on when innovation can happen. You might love it if it had them; personally, I like it not having choke points. I review the hardware I plan on hooking up to a machine, and ensure that all the peripherals do what I need them to do and are usable under the OS I run.
Next you'll complain because your x86 machine doesn't support Nubus, or Altivec instructions. Pick things that have OS drivers and your problems go away.
Lastly, you are complaining to the wrong guy. Linus isn't your man here; try complaining to the distributors. The distro people are the ones who make the actual final release you are using. They could just wait until the tail end of the kernel life cycle and release then, with a stable API. They could maintain a stable API for you. However, you are not a common user; if everyone clamoured for it, they'd get it. Heck, if that's really what you want, start using the 2.2 series kernel -- it has a world-class stable driver API. The problem is that Linux moves fast and you want to stay on the cutting edge. It's not as if Windows releases its development stuff as early or as often as Linux does. I'm fairly sure that by the time you get the Windows kernel, it's roughly where the 2.4 kernel was six months or a year ago. By the time you get Windows software, it's old and stale (in terms of true innovation and change, Microsoft is fairly serious about limiting architectural change, just like the 2.2 and 2.4 guys are right about now). Pick what you want.
Kirby
Re:From the article... (Score:3, Insightful)
Re:kernel modules are not applications (Score:3, Insightful)
Yes, you're right; my mistake.
strong bias toward requiring people and organizations to release their software in source form
And this is the childishness of Linux. My Linux system has an nvidia TNT2 card because that's what I had around when I put the system together. Now I have 2 choices of drivers: nvidia's official one, or the barely-works nv driver in X. If I felt like being a childish zealot, the nv driver would be a no-brainer; however, I like to use what works best, and that's the nvidia binary driver. Anywhere else this would be fine, but not with Linux, because Linus and others have decided that I shouldn't use the best choice for my card: I should either use an inferior solution (nv), or buy another card, also an inferior solution (spending money I didn't have on an open card that doesn't exist). They seem to go out of their way to break every binary driver they can with every release, without even considering that the open source alternatives range from almost alright to completely useless. Linux can be a little hobby or an actual, useful OS product, and at the moment the kernel devs have gone with acting like children and developing Linux like a little hobby.
Re:Serious question: (Score:3, Insightful)
-Leigh
Re:From the article... (Score:3, Insightful)
Re:Why upgrade... (Score:3, Insightful)
Under Fedora (on the other hand), the NTFS driver (fully open, and PART OF the kernel) is not a default-included module (Fedora is not alone in this distinction), so the module must be rebuilt (or you wait for a new RPM and download that). It's not the fault of 'Linux' per se, but the kernel developers could alleviate this problem with better version structuring within the drivers: let the driver itself determine whether the kernel is close enough.
On my RHEL 3.0/Oracle 9i server, you are certainly right - RedHat does a great job back-porting all 'patches' into the same build-number code base as the original release. This server was also purchased with RedHat in mind, and I had the freedom at the time to make sure that everything would be fully supported by the default 2.4 RedHat Enterprise kernel.
Finally, as a working manager, I'm happy when users can answer their own questions. On the other hand, I get a lot of technical respect from those who work with me, and the requisite questions that go with that. It's too bad you don't have managers deserving of respect where you work.
In IT it's part of my job to know what is available, and how it works. I take that part of my job seriously.
Re:From the article... (Score:4, Insightful)
A better implementation would allow binary drivers, without any of these issues.
Many of the issues out there may BE the binary drivers... but if Joe User goes out and buys a piece of hardware for his computer, plugs it in, and can just install a little driver program to make it work, then Joe User is happy.
When I, someone who has been using Unix since System III was common, who has been using computers for the last 25 years (I'm 28), who has done kernel hacking, worked on major products, and even earned quite a bit of income from my own projects back in the day, have to set aside an entire day just to get my USB webcam working in Linux (which I haven't done yet, because I'm too busy to spend an entire day fucking with my computer), I *know* it's not going to be as simple as "plug it in, install the driver, recompile".
To Greg's points:
re: fixing bugs. Yes, you fix the API bugs, you fix the drivers you have control over, you bump the API version, and now drivers that can't work with that change refuse to load. I highly doubt that my Diamond Stealth 64 Windows '95 video driver would work if I tried to load it into XP. It ain't gonna happen.
re: building better apis. Yes, you fix the API, bump the version number, and now drivers that can't work with the new version refuse to load.
re: CONFIG_SMP There must be an API to deal with "core kernel structures". What the fuck is a driver doing with "core kernel structures"?
re: GCC alignment issues- GCC is obviously not the best compiler to do this with. GCC is quite obviously not the best at anything, except being compatible across a bazillion operating systems. That's ALL GCC is good for.
re: drivers outside the kernel tree give nothing back; Maybe they have nothing useful to give, either? WHY is it important that we know how everything the hardware manufacturer does works, if they are competent enough to make it work? YES, I agree that open source drivers and being available with the kernel are the BETTER option, but that doesn't mean that some people aren't going to want a different option.
re: deleting functions; See also API versioning. I think I've repeated that a few times now.
Why do you hate Linux? Why do you not want it to succeed?
Or do you enjoy spending an entire day or longer just making some USB gadget work?
Re:From the article... (Score:3, Insightful)