Kernel Changes Draw Concern
Saeed al-Sahaf writes "Is the Linux kernel becoming fat and unstable? Computer Associates seems to think so. Sam Greenblatt, a senior vice president at Computer Associates, said the kernel is 'getting fatter. We are not interested in the game drivers and music drivers that are being added to the kernel. We are interested in a more stable kernel.' There continues to be a huge debate over what technology to fold into the Linux kernel, and Andrew Morton, the current maintainer of the Linux 2.6 kernel, expands on these subjects in this article at eWeek."
Two Sides (Score:3, Insightful)
Hypocritical (Score:5, Insightful)
WTF? (Score:3, Insightful)
Huh????
What about older hardware! (Score:5, Insightful)
The problem, I think, is that developers tend to be people who love computers. And people who love computers tend to have nice rigs, just as people who enjoy cars tend to spend a disproportionately large amount of their income on cars (ever see the parking lot at a LAN party--complete with people pulling multi-thousand dollar machines out of the hatch of a Hyundai?).
Perhaps Linux needs more developers from third world nations; the kid from a rural village with intermittent electricity getting his hands on an old, but useful, machine and learning that he, too, can tell it to do all sorts of things!
can't please everyone all of the time (Score:3, Insightful)
I myself would like better multimedia drivers, good solid and easy to install and configure drivers for my PVR-250 and pcHDTV tuner cards in my MythTV box. CA may not give a darn about those at all, but this is my primary Linux goal and getting my particular MythTV rig running is the only application I myself presently give a darn about in all of Linux land.
I myself do not give a darn about gaming support either right now. That may change in the future if I decide to expand on MythTV and turn the thing into a high-end game console as well. But for the moment I'm not interested, just as many gamers may not be particularly interested in TV tuner drivers.
Keeping stability and efficiency as primary goals is, agreed, a good idea. But I think high-quality (i.e. NOT alpha or beta) drivers for more hardware should also be important.
Re:Just my $0.02 (Score:1, Insightful)
Re:Just my $0.02 (Score:5, Insightful)
Which is exactly what Andrew Morton said. I think that the underlying issue is a human resources one. CA wants Linus and Andrew to spend all of their time working on "Enterprise" features and none of it on things like improving Linux's real-time performance and integrating drivers for non-server hardware. I think that they're being selfish and unreasonable, but that seems to be par for the course for CA.
What is it with CA? (Score:3, Insightful)
We must listen to CA ! (Score:5, Insightful)
CA have contributed so much to the Linux kernel, so they know what they're talking about. NOT.
What is CA's motive in saying this? They have no real experience in developing operating systems, nor are they producing data and a testing methodology to back up their opinion.
It seems to me they might be talking through their hat. [cambridge.org]
BAAA (Score:4, Insightful)
If anything the extra junk benefits them because the folks developing those drivers are likely to find bugs in the kernel proper.
Re:Just my $0.02 (Score:5, Insightful)
Re:Just my $0.02 (Score:4, Insightful)
If you don't want it, don't compile it in.
It gets better. If someone says "but I use a stock kernel," remind them that they don't have to load every module under the sun.
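To make that concrete, here's a minimal sketch of trimming modules on a stock modular kernel (pcspkr is just an example module name, not a recommendation):

```shell
# See which modules are actually resident right now
lsmod

# Unload one you never use (pcspkr is just an example)
sudo modprobe -r pcspkr

# And keep it from auto-loading on the next boot
echo "blacklist pcspkr" | sudo tee /etc/modprobe.d/blacklist-pcspkr.conf
```

Everything else just sits on disk as unloaded .ko files.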
This guy would be better off going to tell hardware manufacturers to quit making new hardware. Yeah right! Also, why does he not complain about bloat in the Windows kernel? IIRC, there is a much larger segment of hardware supported in Windows than in Linux. Methinks his statement should be modded -1 Flamebait.
Re:Just my $0.02 (Score:2, Insightful)
I even think that more drivers improve the structure and stability of the core kernel. More drivers prove that certain internal APIs work, they trigger bugs in the glue code etc. On a higher level, they may also show design/architecture problems in the kernel (e.g. many similar ioctls could hint that there is the need for a new kernel subsystem).
People may be right if they say that linux is not the cleanest way of implementing an OS kernel, but for a production (*and* even research - various new filesystems, mosix, xen etc.) kernel, it is IMHO pretty mature and non-bloated code.
Natural evolution of an OS (Score:5, Insightful)
The trick, for Linux, will be to do what Apple did in moving to OS X -- create a new, "from-scratch" OS (yes, I know Apple borrowed a lot from others) with some form of compatibility-creating layer or old-kernel box. Incrementalism only takes an OS so far before revolution is needed to build a new, better system from the ground up.
Re:What is it with CA? (Score:5, Insightful)
Re:"fatter" (Score:5, Insightful)
The kernel is fine; it's the setup that sucks.
Re:Inevitable event (Score:3, Insightful)
You know... (Score:2, Insightful)
sheesh. as many others have said already - if you don't want/need the driver - don't compile it.
Re:What is it with CA? (Score:5, Insightful)
Re:Compiled Kernel not necessarily getting fatter. (Score:2, Insightful)
Re:"fatter" (Score:4, Insightful)
Re:What about older hardware! (Score:5, Insightful)
My point with this is that it's not the kernel that's making GNU/Linux systems crawl on older hardware. It's the newer versions of GNOME and KDE. As long as you aren't running GNOME or KDE, older hardware works just fine. My servers chug along just fine, and my 233 MHz laptop with 64 MBs of RAM running Sawfish also suffices just fine to do virtually all my common tasks (except running any Mozilla product :-P ).
So, certainly, GNU/Linux may need more developers from third world nations, as you put it. Linux, however, does not.
Re:"fatter" (Score:4, Insightful)
So that loading the kernel on 100s of machines is as easy as distributing a single file rather than a distribution of files.
Personally? I never used modules when I could just compile it all in. It's easier to transport that way.
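The parent's choice shows up directly in the kernel .config; a rough sketch using a couple of real option names (the particular y/m choices here are just illustrative):

```
# Built into the single vmlinuz image -- one file to copy to every machine
CONFIG_EXT3_FS=y
CONFIG_E1000=y

# Built as loadable modules (.ko files under /lib/modules) instead
CONFIG_USB_STORAGE=m
CONFIG_SND=m
```

Flip everything you need to =y and the result is one self-contained kernel image with no modules to ship alongside it.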
Has Sam Greenblatt EVER compiled a linux kernel? (Score:3, Insightful)
Yeah, I personally find increased driver support a real problem
If he wants an OS for which you can't optimise the kernel in any way, try microsoft.com. I hear there are a couple there.
Re:Microkernels... (Score:3, Insightful)
of separate interfaces for every kind of object a single regular interface could be used at least as a starting point
There is; the C function interface. Abstract as much as useful, but no more. Again, whether or not this is a microkernel, the interfaces can be made, and have been made, to the extent that they were felt useful.
Re:Just my $0.02 (Score:5, Insightful)
Just my 5 bytes (Score:1, Insightful)
They're a for-profit company, after all.
If they have to spend extra time just to take out the bloat and re-QA - well, it's just gonna cost more to use Linux.
And it's not like you can "roll your own" - you change the kernel, you have to re-certify or else you're running an unsupported config, son.
That's the case at least as far as enterprise distros and ISVs are concerned.
It's all great but I just don't see how Gentoo, without h/w and s/w certifications, can replace enterprise distros and help solve this problem.
Re:"fatter" (Score:3, Insightful)
If you're going to run a typical "server" for a business then a 20-50 MB download isn't that much. Combine it with its source so you can build a different kernel for each server (if needed).
Yes, there are large sections of the kernel I've never touched (and I doubt I ever would), but I for one still want to see it in the source.
Re:Compiled Kernel not necessarily getting fatter. (Score:5, Insightful)
Have you ever installed a late version of Windows?
Watch the installer load device drivers for every known weird form of RAID before it even begins to ask you how you want to install the OS?
And then how long does it take to do "hardware detection" - versus Knoppix that does it all in the three minutes or so it takes to boot from CD?
Yes, Windows is bloated - bloated with (so-called) "features", not drivers. If Linux makes THAT mistake, we can complain. Having a bunch of drivers and support for oddball subsystems loaded into the kernel is not a serious problem, and until somebody DEMONSTRATES a stability problem, it's bullshit.
So far I've heard nobody say the 2.6 kernel is in FACT unstable because of x, y, z drivers or subsystems.
Problem with the Mac analogy (Score:2, Insightful)
Linux on the other hand has a sound design (no design is ideal).
Further, if you think Linux sucks because [choose your reason], there is most likely already an OS project running to address your issue.
In short, MacOS needed a restart worse than Windows 3.1; Linux does not.
Re:What is it with CA? (Score:3, Insightful)
This sucks because of something I cannot understand enough to clearly articulate or really know whether it sucks = bashing.
"Getting fatter" is an analogy in the first place, and since it is talking about the size of the download and not the executable, not particularly relevant. It isn't clear either whether "stable" is used in the context of "more code just keeps coming out" or the accepted operating definition of "the machine stays up". My bet is on the former, in which case it is questioning the model of frequent releases.
In my opinion, the article is saying that linux is bad now because it has a lot of hardware drivers, so the codebase is big. I disagree with that idea, and consider it poorly informed bashing.
Re:Thanks, CA (Score:5, Insightful)
Menuconfig is just the window to the maze that is the kernel ifdefs. You have no idea of the size or speed impacts of the options you choose if the help doesn't tell you. You have no idea of the component interactions.
Menuconfig is just a parking place for problems. The real problem is too many options, and not enough testing of the combinations. That is what CA is complaining about.
Re:What about older hardware! (Score:5, Insightful)
It's ridiculous to suggest that the kernel layout should be restricted to the level of a 486.
First of all, you can already do that if you know what you're doing. People in the Third World either know what they're doing or get their machines from people who do - just like in the rest of the world.
Secondly, there are tons of stripped down distros. Pick one.
This is merely asking for your cake and eating it, too - you want the latest kernel and everything it can support to run on the oldest hardware.
Try it with Windows 2003 Server.
Then go back and read the specs for Longhorn: a GB of RAM, a terabyte of hard disk, and a minimum 3GHz CPU.
The Linux kernel is intended to push the boundaries of OS technology - not run on every Third World machine in existence.
Yet, at that, as I pointed out, Linux is incredibly flexible in what it will run on compared to virtually every other OS in existence.
All of this is just utterly pointless criticism.
Re:Just my $0.02 (Score:4, Insightful)
1. Modprobe/insmod/rmmod.
2. The OpenVMS kernel is written in VAX assembler (http://research.compaq.com/wrl/DECarchives/DTJ/DTJ807/DTJ807SC.TXT [compaq.com]). It was not written in "languages like" Ada. Jesus christ.
Re:That line of thinking can be dangerous though (Score:5, Insightful)
As for other situations: if you are going to get a certain level of support for a product (new features, custom installations), that is going to cost you a certain number of dollars, whether it be licensing costs (you need to be a large enough customer to have that level of influence with a vendor), or hiring developer time to work on an OS project.
I would love to see some sort of feature wishlist where smaller companies could vote with their dollars on certain bugs or features. I've heard of bounty systems like this being tried, and I would love to hear more about why they haven't really worked yet.
You are right about the OS community being quick to jump on the "code it yourself" excuse. But that is the reality of dealing with volunteers. Some are motivated by competing with commercial products, and will work on features to make that happen. Others are totally unconcerned with what corporations think about their work. At the end of the day, many developers are scratching their own itch and shouldn't be expected to care about what other people want their software to do.
At the same time as some people are quick to jump on this excuse, others are quick to assume that the goal of OS should be to beat proprietary software. This is simply not many people's goal.
I for one, welcome our BIG FAT KERNEL! :) (Score:2, Insightful)
Sounds to me like the Open Source Community is REALLY HAPPY to have faster desktops, better gaming machines.
I think it's a case of sour grapes. They've got to spend money so complain, and by the way, make it sound legitimate by saying it's the Open Source Community.
I for one, welcome our BIG FAT KERNEL!
Wrong Issue (Score:2, Insightful)
Re:Natural evolution of an OS (Score:2, Insightful)
Last I knew, there were always new pieces of hardware coming onto the market, with people wanting to use the fancy new hardware. And let's not forget all the existing hardware that people are reverse engineering, writing drivers without the manufacturers' help.
Re:Just my $0.02 (Score:4, Insightful)
Re:I see a lot of clueless replies (Score:5, Insightful)
But isn't most of that code base specific drivers for specific hardware, maintained by individuals who wrote that code? Are you saying that instead of including possibly buggy drivers, it would be better to leave them out and give no support at all to people who happen to have that hardware??
Remember, any potential bugs in drivers won't affect anyone who doesn't have that hardware - these drivers are compiled in default kernel distributions as modules and never get loaded unless they're needed. All it means is that the kernel modules take up a bit of disk space, which is trivial compared to the sizes of current hard disks. They don't impede performance and they don't do any other harm. I really can't work out what all the fuss is about.
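A rough way to see that on a running system (paths assume a stock modular kernel install; the counts will vary by distro):

```shell
# Driver modules shipped on disk for the running kernel -- usually thousands
find /lib/modules/"$(uname -r)" -name '*.ko*' | wc -l

# Modules actually loaded -- typically a few dozen at most
lsmod | tail -n +2 | wc -l
```

The gap between the two numbers is all the "bloat" being complained about: inert files on disk.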
That makes no sense (Score:5, Insightful)
Re:Just my 5 bytes (Score:1, Insightful)
obviously bad example .. (Score:3, Insightful)
He might or might not have a point, but things like music and game drivers do not make a good example of kernel bloat. It's not like it hurts that those drivers exist in the kernel. Such drivers are usually shipped as loadable kernel modules. If you don't need them, they won't be loaded. They're only using up your disk space (which shouldn't be a concern these days).
Re:Probably true (Score:3, Insightful)
Once again, building a case on nonsense.
CA is not a "corporate user" - they are a software marketing outfit. They want to market their stuff on Linux, fine. They want to market their own distro, fine. They want to hire a kernel hacker to do that, fine. They want the Linux developers to do it for them, not so fine - particularly when they have no specifics for why.
Secondly, no ordinary corporate user needs a kernel hacker. If they did, they'd sure need it with Windows - assuming of course that was even possible with a totally closed source system.
Third, "TCO" - which in itself is based on air - has nothing to do with this or anything at all to do with the kernel (except to the degree that a kernel improves performance on high-end hardware and might allow purchasing either less expensive hardware or fewer pieces of same.) Anybody running a high-end server on Linux knows how to tweak Linux to get stability and adequate performance and isn't worried about an extra "music driver" that isn't even loaded.
Fourth, while price is indeed important to Linux's success (even if in fact it's not that big a factor in actual deployment and is more a perceived value), Linux has no significant competitors (other than Windows) in terms of pace of development, support, applications, etc., all of which are much more significant than whether the OS costs money or not. Even Solaris isn't in the ball park. None of the bigger iron UNIX variants are significant - not HP-UX, not AIX. The FreeBSD variants are also-rans.
Linux is the only game in town against Windows. Period.
The ONLY threat to Linux is if the desktops - and more importantly, the OS system services and their configuration - get SO bloated and insanely complex - like Windows Server 2003 - that nobody can figure out how to use it. And the desktops can always be replaced by something better.
It's manipulating and configuring the system that needs to be kept straightforward and task-oriented - not filled with thousands of menus, dialogs, Management Consoles, Control Panels, ad nauseum, like Windows - which has a fatal case of "featuritis" and absolutely NO usability engineering.
Try figuring out "effective" permissions, end-user lockdown, and Group Policy application in Windows 2003 Server. They have to give COURSES in this stuff, for Baron von Christ's sakes! I know, I'm taking one.
Re:Just my $0.02 (Score:4, Insightful)
So now instead of paying Microsoft to make your choices for you, you pay Red Hat or Novell to do it. You can even hire a consultant that will tailor the kernel to your specific needs if it's that big of an issue, and if it is, I doubt that Windows would suffice anyway.
Choice alone is a good thing, and when your choices are open it's even better. Find someone to do what you want well for as cheap as you can, or take one of the prepackaged solutions. It's not that big of a deal.
Re:Just my $0.02 (Score:5, Insightful)
ls -l /boot/vmlinuz-* (Score:3, Insightful)
-rw-r--r-- 1 root root 808295 Mar 24 2004
-rw-r--r-- 1 root root 1458226 Mar 28 15:19
I suppose it is a little bigger. I did compile scsi support into the second one for a usb keydrive though.
Re:I see a lot of clueless replies (Score:4, Insightful)
Code in the tree, even if it's perfectly disconnected from the rest, still has to be modified when an API changes. With 2.6 as the de facto development codebase, that's not something to ignore.
Re:I'm torn (Score:5, Insightful)
It's my belief that the kernel won't really stabilize until they branch off to 2.7. They're too focused on adding new features for the code to ever really shake out and get stable. They're shoveling new stuff in there way way way faster than it can really be debugged.
And they just wave their hands in the air and say that it's up to the distros to make this mess usable.
Until they get over this phase, in which they're pushing the hard work of debugging onto everyone else in the world, the kernel is not going to stabilize. And we will be held hostage by particular vendor kernels, instead of being able to track the 'one true Linux'. If we start with Redhat, we're stuck with Redhat. In the past, we were able to fall back on the One True Kernel if Redhat or Mandrake made a mistake. But that's not really an option anymore... tracking the One True Linux is now dangerous, because the kernel devs don't really care if it works right.
I can't find the precise quote right now, because I can't see my old comments on Slashdot... apparently I now have to pay for the privilege of seeing my OWN old comments
Until that mindset changes, Linux is just not trustworthy. It needs to be made right BY THE PEOPLE WHO WRITE IT. You can't hack reliability in as an afterthought, it has to be a major focus all the way along. This is exactly the sort of crap we always derided Microsoft for... ship it buggy and then fix it later. I hated this behavior in Microsoft. I hate it just as much in Linux. I switched to Linux because it was, first and foremost, reliable. It no longer offers me that, and I am starting to switch machines over to the BSDs now.
Waving one's hand and expecting 'the distributions' to do the grunt work of actually making the kernel stable is just wishful thinking... it's expecting other people to do the job that should be the very first one on their list. Reliability is THE MOST IMPORTANT FEATURE. It's not fun, it's not glamorous, but it's what got Linux so popular that these guys actually get paid to do it. If it doesn't return to relatively bulletproof status, then people are going to use other solutions instead, and there won't be as many Linux jobs available.
It's the reliability that creates the jobs. I wonder if they really grok this?
Re:Thanks, CA (Score:4, Insightful)
Is it? Because in the article Greenblatt snivels about "too many game drivers!" and then breaks down completely and starts complaining that Xen "doesn't do enough." I'm not sure which side of the fence he's on. I do know that if I don't have an ATI Radeon in my system I'm not going to be totally baffled by the vast array of ATI driver options. But I don't work for CA.
Re:Just my $0.02 (Score:1, Insightful)
Good news: you don't have to.
Just launch the following command (just the same approach you use with Solaris): modprobe [modulename].
The Big Bloat (Score:5, Insightful)
linux-2.6.11 is forty-four megabytes. Gzipped up. I don't want to waste my bandwidth downloading it to see what it is unzipped, but trust me, it's massive. Where does all this bloat come from? Drivers. Drivers are good, but the current kernel paradigm (and Linux isn't alone in this) is that every driver has to be included with the kernel. So we end up with huge packages and huger repositories where everything is required to reside.
Imagine the size of Linux when we finally get to the goal of having every past and current device with a dedicated driver in the source tree. You're talking possibly ten gigabytes uncompressed. Even if you're not using 99.9% of those drivers, they're still there. The day may come when you can actually build the kernel faster than you can make its dependencies.
Could you imagine a KDE or GNOME where every core, addon, auxiliary and experimental component was all part of one single tarball? Even if you only wanted GTK+ and GIMP, you still have to download and configure the entirety of the GNOME repository to get it. That's what it's like with the Linux kernel.
It's time non-core drivers got split off from the main Linux project. If you don't need to add anything into the kernel to get a driver to work, then put it in the driver subproject and don't bug the big guys with this penny-ante crap.
Re:That line of thinking can be dangerous though (Score:3, Insightful)
Split Drivers out of main line kernel development (Score:3, Insightful)
Re:That line of thinking can be dangerous though (Score:2, Insightful)
Because it may encourage people to just go to a commercial alternative.
What? If a stranger or relative requests as a favor to drop him at the airport, do I have to honor his request or provide a lengthy explanation why I choose not to comply? No explanation is needed. And so what if the stranger chooses a commercial alternative by hiring a taxi cab?
Unless of course you imply that kernel developers should be slaves, either to CA or to 'the common good', which for you means Linux market share. The kernel developers have their own personal goals, it is their own time, and they have little obligation to follow other people's interests.
Re:Just my $0.02 (Score:5, Insightful)
And this points to the real evolution in linux that has Microsoft sweating: what CA wants is a kernel that works better for businesses. Why? Because businesses have come to rely on linux.
Business (in general, I'm not talking about CA specifically but about all the businesses that now use linux in their operations or, even more, in their firmware) to linux: "Linus, we didn't pay you to write the kernel, we didn't give you much help in writing it, we've often appropriated it and ignored our legal responsibility under the GPL while at the same time keeping our own drivers closed-source and binary only. But, ah, now that we use -- for free -- what started out as your hobby project, we expect you to give up your hobbyist ways and toe the line, because it's now our bottom line."
This really isn't all that much different from the RIAA's "buggy-whip manufacturers'" outlook on file-sharing: "we've always made buggy-whips, and we loved it when Linus and the rest of the OS community were producing free leather for us to make buggy-whips, but now that you're producing those infernal auto-mobiles, well, you'd better stop before you threaten our profits."
The one thing I've never liked about the GPL was that it gave the same rights to a for-profit business as to a fellow hobbyist. I'm more than glad to share my code with a fellow, who like me, is coding for the love of it. I'm a bit less happy to share with someone who just sees my uncompensated work as a way for him to parasite off it.
Linus should tell CA that businesses have gotten far far more -- just in dollars, I'm not talking intangibles -- from Linus than they can ever repay, and that he's going to go on doing what makes Linus happy. After all, that attitude worked out pretty well for the parasites last time around.
As for the rest of us, maybe those of us who can and do code should ask ourselves why we're so happy to give our work away for free to businesses that do their level best, day in and day out, not to give away anything for free.
Is the GPL really our best answer?
Re:Just my $0.02 (Score:5, Insightful)
But how about sharing it with a multinational company that is able to throw massive resources into helping you to develop your program? If you shut out all companies you shut out the freeloaders, but you also shut out companies that would otherwise be helping your project. The Linux kernel isn't mostly the work of hobbyists, and it hasn't been for a long time. For many years Linus worked for Transmeta, who hired him in part because they wanted to use Linux with their chips, and now he works for OSDL, which is funded by big corporate Linux users. Alan Cox works for Red Hat. Marcelo Tosatti works for Conectiva (now Mandriva or whatever they're calling it). The list goes on and on.
And then there are the direct corporate code contributions. SGI has contributed XFS and a lot of work on NUMA. IBM has contributed a boatload of code including JFS, NUMA, and RCU, and they've tried to contribute more things that were eventually passed up because others came up with better solutions. Namesys developed ReiserFS. Many vendors have contributed drivers for their hardware. The Linux kernel wouldn't be nearly what it is today if those companies hadn't been contributing.
The key thing to understand is that freeloaders don't actually cost anything, except for the bandwidth they use for downloads, but contributors help to build the software. It's smart to let anyone use the software because then anyone can be a contributor. Help from the IBMs and Red Hats of the corporate world more than pays for all the freeloaders.
Re:Just my $0.02 (Score:4, Insightful)
And major F/OSS projects like linux aren't artificially hampered by the commercial OS vendors that want to sell a "desktop" version and a "server" version, or worse yet charge per-client licenses (WTF!). Linux is eminently tweakable, runs on everything from embedded ARM7 to supercomputer cluster IA64. Stable linux distributions like Slackware offer far more compatibility from desktop to server than RedHat's offerings (okay, FC4 is a "committee" project, not unlike the proverbial horse that became a camel).
Perhaps CA just needs to hire some F/OSS consultants -- they could get on the cluetrain just by lurking on forums like slashdot. So to CA, I say "Quit your mewing!".
Re:"fatter" (Score:3, Insightful)
Re:Just my $0.02 (Score:3, Insightful)
You make a persuasive argument, and I largely agree with you -- my problem isn't with the companies that have and do contribute to OS, or that hire OS coders to work on OS projects.
I agree, that's the sort of arrangement that helps everyone. But it's also an honest arrangement: the businesses know that they're getting a great deal -- a whole operating system that drives the sales of their products and services -- and the coders know they're getting a great deal -- good pay for what they'd do for free as a hobby.
And you're largely right that the freeloaders only cost download bandwidth.
My problem is when the freeloaders start telling Linus that he can't take what is still his hobby (and now lots of other people's hobby too) in the direction he wishes to take it.
My larger question is how to get the freeloading companies to act more like the honest-dealing companies.
Because the freeloaders hurt the hobby with their demands, and they also get -- to a certain degree -- a competitive advantage over the Transmeta and IBMs which are supporting them by hiring the coders. Of course, I say "to a certain degree" because Transmeta and Red Hat, by hiring the coders, do get some say in where the coding is going.
If other companies want that, then, to be fair, rather than bitch about linux's direction, they ought to hire a linux kernel coder.
I mean, I've never contributed to the kernel, but I also don't call up Linus (or haunt the newsgroups) with demands.
Again, I think we're largely in agreement and I want to emphasize that your points are good.
(I'm amazed that my (not open source) spell-checker has learned to spell Transmeta.)
Re:I'm torn (Score:4, Insightful)
I agreed with you up to this... This is just FUD.
Many commercial vendors are famous for leaving serious open bugs, and not fixing them for a LONG time.
Now, it's true that OSS/FS developers aren't compelled to fix the problems you are having, but that doesn't mean you're screwed. If you are having a problem, you can fix it yourself; you aren't stuck if the company decides they aren't interested in fixing it. With plenty of developers using it, small bugs like yours get unofficial patches pretty quickly.
As I said, I agreed with you up to that point. Linux does seem to be very poor at stability testing before releasing. I would suggest switching to one of the BSDs if you want a rock-solid system... I know comments like this get marked as trolls here on
Re:Just my 5 bytes (Score:4, Insightful)
Yes, and the US Pledge of Allegiance ends with the words "with liberty and justice for all". Just because you say something doesn't mean it's true.
Re:That line of thinking can be dangerous though (Score:3, Insightful)
Possibly one of the least desirable outcomes of using Unicenter is that the monitoring guys now distrust *all* the monitoring tools.
Re:I'm torn (Score:2, Insightful)
Why not PAY the developer to fix the bug, then?
Re:Compiled Kernel not necessarily getting fatter. (Score:3, Insightful)
[And yes, I *am* a developer]
Oh, and did you not see the bit where the OP talks about booting the system with the USB device attached? He didn't say anything about it not working after it's booted...
Re:I'm torn (Score:1, Insightful)
Re:Just my $0.02 (Score:3, Insightful)
If ATI, for instance, loses enough customers over its substandard and difficult to install drivers, they might reconsider opening the sources. Which would pave the way for a (hopefully) better driver that can be made into a kernel module and shipped as part of distributions.
Re:Just my $0.02 (Score:4, Insightful)
Where did you get the idea that hobbyists have customers? Corporations have customers and if they want changes in the kernel or another open source application then they can code it themselves. Hobbyists don't care if Linux "takes off" because they make no money off of it and don't care to. For most of us hobbyists Linux is good enough as it is and if we want something more then we'll code it for OURSELVES. It's nice when big corporations contribute code but we don't owe them a damn thing, they are using our free code after all.
Re:Thanks, CA (Score:3, Insightful)
Actually, they haven't. They understand that unless the community as a whole agrees to this kind of change, that all they'll accomplish is to create an anonymous fork.
Re:Modules that work with different kernel version (Score:1, Insightful)
Oh please (Score:2, Insightful)
Disclaimer: I am not a linux geek, but I am an engineer, so I understand technology and the reasons why geeks do what they do.
That being said, my initial reaction to this story was: "oh man, the fact this is even an issue means linux has a long way to go". Why do I say that? Because it's obvious if linux wants more desktop share, they need to be working on the features that most people are interested in. Namely, games, music, etc. The fact is, games sell machines. Multimedia features sell machines. Look at apple: people are buying macs just to use their iLife programs. Last I checked, a stable kernel was not high on their list of reasons why they made the purchase.
I'm not discounting clean, organized code. Stability and speed are important. But my general impression of the linux community (from the outside looking in) is that it's one big crab theory gone bad. As soon as one part of the community realizes the truth, that the only way to sell linux is to build into the system features that people actually will buy, the geeky half of the community steps in and whines that linux no longer has clean code and has become "feature bloated".
Look guys, I'd hate to put a lightbulb right up to the obvious, but consumers are not geeks. No matter how clean or efficient the code is made, the average person is simply NOT going to get excited unless the operating system has the FEATURES they want. Ultimately, it comes down to what the heck you can do with the operating system at the user level. If the user's experience is not "doing it" for them, then no amount of "clean code" is going to solve that problem.
And I know that comes as a complete downer to most geeks. We spend 10 hours a day tweaking our setups, getting everything just "perfect", and expect to be rewarded comparably. The sad thing is, most people don't care. They don't care what the code looks like, they don't care about how much time it took, and they don't care about our "brilliant" hacks. The important thing to them is what they can do with it.
So what is the solution? Easy: split up linux for the different markets. One market is for geeks, like the above gentleman who want a stable kernel and nothing else. The second market is for consumers that play games, listen to music, etc. Geeks get their geektoy, and consumers get what they want. But the community is not going to be able to make a version of linux that will appeal totally to both markets, since the markets are COMPLETELY different. (Again, geeks aren't consumers.)