Time for a Linux Bug-Fixing Cycle
AlanS2002 writes "As reported here on Slashdot last week, some people are concerned that the Linux kernel is slowly getting buggier under the new development cycle. Now, according to Linux.com (also owned by VA), Linus Torvalds has thrown in his two cents, saying that while there are some concerns, it is not as bad as the various reporting might have suggested. However, he says the 2.6 kernel could probably do with a breather to get people to calm down a bit."
I preferred the old odd/even split (Score:5, Insightful)
I suppose if you buy your linux off the shelf you can complain to your vendor, but for home users looking to do some DIY kernel building the new way is a bit worse. However, I suspect we're a dying breed...
Re:I preferred the old odd/even split (Score:2, Insightful)
Re:I preferred the old odd/even split (Score:4, Insightful)
There was a time when you could grab the next stable kernel, for example when there was an exploit and you really had to, and you'd know you'd only get *more* stability. Now it's exactly the opposite. If you have to upgrade, you're just screwed.
This started around the time they added reiserfs in the stable series although it was far from stable yet. It's not new in the 2.6 series, really. It's a wrong philosophy.
Compare this to FreeBSD release engineering with RELENG, STABLE and CURRENT. FreeLinux anyone?
Re:I preferred the old odd/even split (Score:2)
I know we're talking about the kernel, but isn't this sort of the same thing as Debian's Stable/Testing/Unstable versioning scheme?
Re:I preferred the old odd/even split (Score:2)
Re:I preferred the old odd/even split (Score:3, Funny)
Re:I preferred the old odd/even split (Score:5, Insightful)
Stable driver APIs anyone?
Oh wait ... stable driver APIs promote binary drivers ... EVIL EVIL EVIL
Re:I preferred the old odd/even split (Score:3, Informative)
And yes I would like to see both.
Re:I preferred the old odd/even split (Score:2)
Re:I preferred the old odd/even split (Score:2)
Re:I preferred the old odd/even split (Score:5, Informative)
No, the 2.6.x.y are patch releases of 2.6.x. The development releases are 2.6.x-preY. The release candidates are 2.6.x-rcY.
Makes sense to me at least.
--Joe
Re:I preferred the old odd/even split (Score:4, Interesting)
Re:I preferred the old odd/even split (Score:3, Funny)
Re:I preferred the old odd/even split (Score:3, Interesting)
I hear this a lot but it goes against all my experience. Usually the people I meet who compile their kernels and do other geeky things tend to get way more work done than the people who want everything dropped into their laps.
Am I hanging out with a different crowd than you? The people I meet who use computers while not understanding anything about them tend to be some of the least productive people in any business. It's always the savvy guy/girl who can use th
Re:I preferred the old odd/even split (Score:2)
I don't, the new model works much better for me (and my customers) because:
I'm
Re:I preferred the old odd/even split (Score:5, Informative)
So it increased the workload, didn't seem to offer massive stability benefits (although, maybe it did, in retrospect), it reduced the amount of testing the new features got, and limited the workloads on which they were tested.
Personally, I find the present -stable branch of non-bleeding-edge kernels to be as solid as 2.4 and 2.2 ever were. I do think we have a tendency to look back at that dev cycle with rose-tinted glasses. It's not as if 2.4 or 2.2 were reasonably bug-free before the twentieth release or so.
Re:I preferred the old odd/even split (Score:3, Interesting)
(Yes, I recall some times in the 2.4.x era when this wasn't true either.)
Re:I preferred the old odd/even split (Score:3, Interesting)
my ide/ata interface was broken 3 times in the 2.4.x series
i started to use linux quite late, on the 2.2 series
2.6
i miss -ac series, i miss the stability and i welcome my new freebsd overlord for now, after all it's a choice of a tool that let
Re:I preferred the old odd/even split (Score:2)
well, i think that was referring to 2.4 being the latest stable for a looong time.
overall i enjoy 2.4 for some simple production servers where a custom kernel is needed - its life cycle is quite long.
it seems to me that branching 2.7, but trying to
Re:I preferred the old odd/even split (Score:2)
Re:I preferred the old odd/even split (Score:5, Insightful)
With the old model, the Linux kernel would start an unstable release and people would start adding stuff without the care you'd put into merging something into a stable tree; it wasn't tested a lot, etc...
Now keep this up for one or two years. When you decide to release the unstable tree as the next stable version, you realize that your unstable tree is full of crap, and you need to waste months or years (Vista) trying to stabilize it. Even when you release the
The "new" development model fixed that. In the current Linux development model, people are allowed to put new features in the kernel even if they're invasive. But programmers are not allowed to put crap in the kernel: features need to be VERY WELL tested (in the -mm tree) and reviewed, with numbers to back your words if necessary, things documented, etc. Of course no code is free of bugs, so the released version will not be 100% stable as current 2.4 is, but it's QUITE stable.
Because the features are merged progressively, it's MUCH easier to find and fix bugs. Even if there are new features in every release, there are not a LOT of new features - it's much easier to find out what feature broke something between two releases. Compare that with a stable/unstable development model: people keep adding things for years, and when the user switches from 2.4.x to 2.6.0 his kernel doesn't boot. How do you find out who broke that with so many changes?
IMO, from a Q/A POV, the new development model makes more sense than a pure stable/unstable development model. It's about "progressive" vs "disruptive", and for projects with several million lines and so many contributors it may make sense. Of course, because new things get added there are always some bugs, which is what people are bitching about today. Maybe this could be fixed by leaving the current tree as "stable" and starting a new tree - but instead of an "unstable" 2.7 tree, a 2.8 "stable" tree. A pure unstable release doesn't work that well with huge projects like the Linux kernel. Remember the hell that FreeBSD 5.x was and how much it affected the FreeBSD project; remember Windows Vista. Maybe it works for some people, but I don't think it's the best development model for such projects. Solaris is also using this model to some extent - they release things into OpenSolaris, but what you see in OpenSolaris is not the "official stable release"; it only becomes "stable" after a while.
Re:I preferred the old odd/even split (Score:2)
>projects like the linux kernel is.
I think FreeBSD is nice proof that you are wrong. See below.
>Remember the hell that FreeBSD 5.x was and how much has affected to the
>FreeBSD project, remember windows Vista.
With FreeBSD 5.x, if you had a working system (be it 4.x), you could choose not to upgrade until the mess was sorted out. Such things are mostly impossible with Linux, because critical fixes are intermingled with new bugs
Re:I preferred the old odd/even split (Score:4, Insightful)
I'm not saying you couldn't choose a stable FreeBSD version - you can run a 2.4 kernel if you don't like 2.6, as well.
I was talking about development models. 5.X was a disaster, and this is something that even the core FreeBSD developers have accepted (they have changed their development model a bit to avoid another 5.x disaster, you know): too much time, too unstable, too much time to stabilize. 6.1 (which was released today, BTW) is great, sure. That doesn't mean the development model is the best
Re:I preferred the old odd/even split (Score:3, Interesting)
It's about time people realised that the distinction between Linux the kernel and GNU/Linux the operating system is a real and important one, not just RMS whinging (although I agree with his whinge anyway).
Re:I preferred the old odd/even split (Score:2)
Exactly. I run Ubuntu, and between the Debian patches and the Ubuntu patches, I've got a very stable kernel because they're constantly backporting the bugfixes from newer kernels and releasing them into the apt repositories.
Stability and polish are up to the distro maintainers -- on everything. Not just the desktop, not just the apps, but on the kernel, too.
Re:I preferred the old odd/even split (Score:2)
Standardize the Kernel API!! (Score:5, Interesting)
The problem is that the drivers have to remain in constant flux because the kernel API is always changing. Now, when there are a limited number of drivers, this means that you can move quickly on the kernel. As you add more and more drivers, you add more and more work to keep the drivers updated. Eventually, there is more work needed to update the drivers than modify the kernel, and the drivers become your sticking point.
This is where I believe Linux is stuck. Linus and the kernel team have to look at the various kernel APIs and standardize them with the next release.
Sorry guys, time to grow up. Linux *is* mainstream!
Re:Standardize the Kernel API!! (Score:2)
No - The kernel API (whilst not set in stone) is quite stable & doesn't change ofte
Re:Standardize the Kernel API!! (Score:2)
Because most companies in this position make hardware, not software. Until the open source hardware movement takes off (and there are good reasons why it never will), they make their money from their hardware, and doing anything that gets in the way of this is a bad thing.
It's not about not giving anything back (the company i used to work for released closed source drivers, and
Re:Standardize the Kernel API!! (Score:5, Insightful)
Re:Standardize the Kernel API!! (Score:3, Insightful)
tree, what are you, a developer, supposed to do? Releasing a binary driver for every different kernel version for every distribution is a nightmare, and trying to keep up with an ever changing kernel interface is also a rough job.

Simple, get your kernel driver into the main kernel tree (remember we are talking about GPL released drivers here, if your code doesn't fall under this category, good luck, you are on you
Re:Standardize the Kernel API!! (Score:2)
If you really respected the GPL so much, you'd have read it. Binary kernel modules are forbidden by a strict interpretation of the GPL; kernel developers have merely tolerated them. Notice the warning in dmesg when you insert nvidi
Re:Standardize the Kernel API!! (Score:2)
Re:Standardize the Kernel API!! (Score:3, Insightful)
Agreed. Open source is a choice, and not choosing to open-source a driver code package does not immediately or synonymously make a company evil. Most people want Linux to start playing in the same space as Windows (well, at least OS X) in terms of user numbers. This will never happen unless hardware vendors are allowed to create binary drivers for their products.
Look at the video card space - drivers can so
Re:Standardize the Kernel API!! (Score:2)
Your +20% card will be -20% card in a year anyway.
Re:Standardize the Kernel API!! (Score:2)
Re:Standardize the Kernel API!! (Score:2)
"Most people" meaning all the people who want Linux to just work without giving anything to it to make it better. The folks who want a free (as in beer) Windows. Don't get me wrong, I love the fact that my Mom uses Linux and she doesn't give anything back to it. And I honestly can't say I am a major OS contributor other than advocacy and local support for my community, but I also have no inter
Re:Standardize the Kernel API!! (Score:2)
For a start, there is mindshare - it might not get you many geek points, but the fact that the average person on the street is starting to hear of and use Linux helps everyone, especially those who develop.
Second, if she chooses hardware that is supported under Linux, then she is helping the case for more people supporting Linux hardware - the more money people make from selling compatible hardware, the more effort companies will put into drivers.
Third, with any luck it'
Re:prove it-ball is in your court (Score:2)
Actually, I find your response ludicrous. Opening up proprietary drivers that a company has spent millions of dollars of R&D developing will not suddenly be helped by Joe Hacker. When MIT graduates and PhDs in EE and physics are working to make your drivers perform that much faster, the OSS community will contribute very little at best.
But put this argument aside for a second and let's look at the economics. OSS enables others to
Re:Standardize the Kernel API!! (Score:5, Insightful)
People who code free software are under no obligation to do anything unless they feel like it. Sure, some of them might get paid by Company X to develop Driver Y or Application Z, but they do so on the shoulders of what's already been put in place by free software developers.
If Linus and the rest of the kernel developers decide at some point to provide an ABI that proprietary companies can use to build their drivers, all the while clinging to their dated business methodologies and obsession with "IP", then great, that's their choice. It might take a Herculean effort to get all those copyright holders to agree and do it but if they can then that's up to them.
Conversely, if they choose not to, they are under no obligation to provide anything. Nobody on the kernel team, IMHO, ever got together and said "we need to start coding and provide some free software so companies with no interest in participating in the process can take our free software and make some money selling hardware". They do it for themselves, their friends and family, their community. Whether or not ATI and NVIDIA want to be a member of that community entitles them to exactly nothing.
Re:Standardize the Kernel API!! (Score:3, Insightful)
First of all, I don't believe that Linus et al refuse to provide a stable kernel API merely so they can snub companies who only release binary drivers (although obviously it is perceived by many as a nice side effect).
Providing a stable kernel API would provide substantial benefits for open source drivers as well in terms of reduced maintenance. This would especially be true for hardware
Re:Standardize the Kernel API!! (Score:2)
I just don't understand where people come off dictating what should or shouldn't happen to free software based on the needs of ATI, NVIDIA or any other commercial entity that tries to reap the benefits of free software all the while snubbing the development model that makes it possible in the first place.
Now you could argue that ATI and NVIDIA don't really get much out
Re:Standardize the Kernel API!! (Score:2)
I've invested $30Mill on this thing. I release the source code. What do I get? Well some people might find bug fixes, but at $10 Mill per spin of the masks, I'm not going to get very rapid development cycles. The kid in the backroom that is good at re-compiling the kernel isn't going to be able to do the same thing for this chip as he can in the Kernel.
Meanwhile my competitor and all my partners now know every trick I have, and can take t
Unstable Drivers == $$$$$$$$ (Score:2)
Here's what you guys are overlooking. There is a stable kernel ABI. It's called "RedHat Enterprise". Look at high-end storage drivers and the like and they only come in binary and only "certified" for certain distributions. If you want stability, yo
Re:Standardize the Kernel API!! (Score:3, Interesting)
Point 2 - Linux is FSF free, share and share alike by license. BSD is not. You can't generalize them together on this issue. If you don't get the difference, you don't know what the hell you're talking about.
Point 3 - The operating system doesn't _have_ to do shit. If companies want their shit to run in Linux, they should submit GPL'd drivers or suffer their rightful hell for being miserly with their code in a project based on sharing.
Re:Standardize the Kernel API!! (Score:2, Insightful)
Look at the current APIs, augment or "bless them."
Don't access structures, use macros.
Bless tried and true interfaces, and make damn sure no one changes them without keeping backward compatibility.
Assign temporary status to "experimental" interfaces.
Maybe create a synthetic API layer an
Re:Standardize the Kernel API!! (Score:5, Insightful)
As a software developer whose experience goes back more than 40 years, to the Stanford Time-Sharing System on the DEC PDP-1, I can assure you that the only way to keep the kernel API from changing is to kill the project. Just as you wouldn't expect a driver written for Microsoft's MS-DOS to be effective on a modern NUMA machine, you shouldn't expect any driver interface standardized today to be effective 10 or 20 years from now. An attempt to freeze the driver API would hamstring the kernel developers, making the kernel less interesting to work on. Somebody would fork it, to lift the compatibility restriction, and the new kernel would work much better with modern computers, causing everyone to migrate to it.
The only way to keep Linux relevant is to let it evolve. Yes, that creates a burden on driver writers. Linux has a partial solution: keep your drivers in the kernel source tree, and test each kernel to be sure your driver still works. When it breaks, the cause should be obvious and easily fixed. If you are lucky, the person who changed the API will also update your driver, but you can't count on that, which is why you must test.
Re:Standardize the Kernel API!! (Score:2, Insightful)
You don't have to stop the API changing, you just have to stop it changing all of the time. Doing that also gives you the added benefit that third-party vendors don't keep pulling their hair out because the kernel API keeps changing, so they may be more inclined to actually release drivers in the first place.
Re:Standardize the Kernel API!! (Score:5, Insightful)
There needs to be a stable API for drivers PER MAJOR RELEASE so that the driver maintainers can keep stable, well tested and debugged drivers.
The API should be allowed to change with every major kernel revision, but any change should be made with a great deal of thought and, unless it's very difficult to do, the old API should be supported for backward compatibility.
Not only this, but I would argue that it would be good hygiene to separate the core kernel from the drivers. Doing this would make developers think hard about the boundaries between the two and not have one polluting the other. It would also make the developers think long and hard about whether changing the API is such a good idea just because it would be useful for the "ACME USB SLi Graphics card programming port widget" interface.
The kernel is the kernel; the drivers are merely plug-ins to virtualise the hardware. The two should be as separate and distinct in practice as they are logically.
Re:Standardize the Kernel API!! (Score:2)
Re:Standardize the Kernel API!! (Score:2)
Re:Standardize the Kernel API!! (Score:2)
I refer you to Solaris. Still very much alive, and backward compatible as far back as version 5.
Re:Standardize the Kernel API!! (Score:2)
There are many reasons to change the module interface format. Many of which include the ability to do things we take for granted today.
What the kernel really lacks is a good standard for coding practices, like say adding comments and indenting at least somewhat sensibly [yeah I know some of you "elites" can stand reading a complete lack of consistent indentation, but for the rest of us
Tom
Re:Standardize the Kernel API!! (Score:4, Informative)
The kernel includes a document detailing the coding style to use. It lives in Documentation/CodingStyle. You can read the current version from Linus' Git tree here [kernel.org]. If you spot anything in the kernel that doesn't follow CodingStyle, you should submit a patch to the kernel janitors to fix it up.
Re:Standardize the Kernel API!! (Score:2)
Apart from the 8-space tabs, it's pretty much what I do now (I use 4-space tabs).
What, pray tell, are your issues with it?
Re:Standardize the Kernel API!! (Score:2)
Well, this is just a consequence of what people are discussing. In the current 2.6 model you're allowed to merge new features (ie: things that break the kernel API). What you're proposing is to go back to the stable/unstable development model.
But I don't think the "changing kernel API" is the source of problems - if a driver doesn't work, it's because it doesn't have a good maintainer. Releases take aroun
Re:Standardize the Kernel API!! (Score:3, Interesting)
I would like to modify this slightly. I don't think a single DDI (device driver interface) will work, but several DDIs can be defined:
A low level SCSI DDI
A low level audio DDI
A low level network DDI
and maybe others. Factor the drivers, and extract common parts into the appropriate DDI.
Now, a vendor would write to that DDI, and the Linux team would have to promise that the defined DDI would have a lifespan of (?, but as long as possible). Any drivers needing a custom kernel interface would be plant
Re:Standardize the Kernel API!! (Score:2)
Defining, documenting, maintaining and verifying consistency of those DDIs would be a lot of work, taking time away from other tasks, and constraining the pace and direction of development.
Where's the benefit of all that effort? Enabling closed-source drivers that destabilize the kernel in difficult-to-debug ways? Creating a situation where more and more users are running kernels the kernel developers refuse to debug?
I don't see why it would make any sense at all to put all of that effort into creating
Re:Standardize the Kernel API!! (Score:2)
The purpose of building the DDI is to allow the drivers to be abstracted away from the kernel. This already occurs, but is not an official policy. If the drivers are so abstracted, kernel development can push forward at an accelerated rate.
The reason is simple -- instead of having to check hundreds of drivers against a change, the interface can be checked. As long as the drivers all conform to t
Re:Standardize the Kernel API!! (Score:3, Interesting)
Drivers which are not completely "class clean" need
Re:Standardize the Kernel API!! (Score:2)
I fully agree with this, and I see this as one of the biggest barriers to enterprise adoption.
Vendors such as Red Hat and Novell attempt to give the client this - in fact they both guaranteed no API / ABI changes within a version (e.g. RHEL 3). This was difficult but feasible under the old model, but from what I can see it is impossible with the new development model.
They either have to stick with one specific kernel version for the entire lifetime of the product, backporting the things they need from new k
Re:Standardize the Kernel API!! (Score:2)
If we could "freeze" what devices the kernel supported, then I would agree, 'stabilizing' the kernel API would be a great feature, for each of the 2.x series independently. But that is just not going to happen when the _hardware_ itself changes, i.e. new buses (PCI Express), etc.
In fact, there
Extermination cycle (Score:2)
Sounds like this approach will benefit everyone in the long run, instead of constantly playing catch-up later?
Dave Jones take on the story (Score:5, Informative)
(Davej is a long time kernel hacker and currently the Fedora kernel maintainer.)
Good ol' Pat... (Score:3, Interesting)
Not holding my breath or anything, but it might be nice.
Re:Good ol' Pat... (Score:2)
I've been running 2.6 since 2.6.10 [or so] without any significant problems or stability issues. x86_64 support was better initially with 2.6 than 2.4 as well.
Perhaps they should spend a few months fixing bugs but I wouldn't favour 2.4 over 2.6 any day.
Tom
Really now. (Score:2, Troll)
A few comments have flown already, but let's be sane here and examine microkernelism.
File system crashes. Microkernel is going to panic because there's no way in hell it can guarantee consistency with running processes now; the FS driver might log-replay or FSCK, but all open file handles become invalid (this can be reduced though...). A monolith is also going to panic; the driver may be in a kernel thread and get flushed and re-initialized, but same problem with file handles.
IDE driver crashes.
What to do? (Score:2)
However, the current development model does "seem" to have introduced more instability into the kernel than I remember. (My box at home seems to crash every 17 days like clockwork.) Even with the 2.6.x.y releases, they are only maintained for a couple
Re:question (Score:3, Insightful)
1. The underlying technology is non-trivial
2. The implementation often is dirty, quick and without consistent method.
In the case of #1 it's the case that the technology is not trivial. How many people understand paging?
In the case of #2, the code in many cases lacks comments, uses cryptic variables, and the documentation [even doxygen-style comments] is just not there.
Those two issues fight against anyone will
Re:question (Score:5, Insightful)
As the previous article pointed out, there's no lack of developers, just a lack of developer interest in fixing the bugs. Many of the larger contributors are paid by companies to ensure that specific features are put into (or at least developed for) the kernel. And let's face it; bug-fixing is not fun. Regardless of how hard-working the people are on average, bugfixing is generally the sort of thing that people shy away from unless the bugs directly affect them, especially when working voluntarily.
All large systems run the risk of bugs creeping in over time, and it can be easy to let their numbers get out of control as time goes on. The fact that the people in charge are pointing it out now is basically an example of good management: attempting to address a concern before it becomes more serious.
Re:question (Score:2)
Some people find bug-fixing fun. Well, not the fixing so much as the hunt. You need to stalk your bug through the dense undergrowth of your source tree until you find its lair. Then you excise it with surgical precision.
At its best, bug-stalking can be more fun than development.
Re:question (Score:2)
Yeah, but as the sibling post points out, this only applies to some bugs. With many bugs it's a long, infuriating process to find and fix them, and a lot of people just don't have the patience to do this.
Re:question (Score:2)
project is dedicated specifically to fixing the bugs that are largely considered unrewarding or uninteresting to fix.
It's a great way to learn about kernel hacking. It's educational to just lurk on the mailing list before you're willing to actually get your hands dirty.
Re:question (Score:5, Interesting)
I thought I knew C until I tried to fix a bug in the kernel.
It was a simple syntax bug. Somebody put xxx[...]->yyy instead of xxx->yyy[...] in one line, and the compiler was protesting about a type mismatch. One single line. But it took me 4 hours or so, and all I figured out was what the correct syntax for that piece of code would be, by analysing the types of the variables used. I have no idea if the fix really corrected the problem; it just made the line lexically correct and let the compiler go on. In the meantime I had to crawl through about 4 levels of header files for each of the variables/records used in the line to reach the primitive types from which the structures, pointers etc. were derived, and was generally totally dizzy. And I was doing it code-monkey style: I didn't really understand the workings of the kernel, or what the line I edited meant. I was purely checking that a pointer to float isn't directly assigned a value of float, just a pointer to it, etc.
The kernel is too difficult for us average coders. Only the elite can fix these bugs for us.
Re:question (Score:5, Interesting)
Back to the point what can spending some time and having a bug fixing cycle hurt? I don't see a downside...
Re:question (Score:2)
Re:question (Score:3, Insightful)
Re:question (Score:2)
The point is, some projects are straightforward. Jump right in, the water's fine. Others, not so much.
LXR be with you (Score:2)
You should consider LXR [linux.no] (Linux Cross Reference), it's a very useful tool. You can navigate through kernel source up to 2.6.11 here [linux.no].
Now, it's not perfect because function pointers still require hard work, and the tool can't understand macro-defined 'nightmare' functions like this one (in kernel/spinlock.c)
Re:question (Score:2)
Re:question (Score:3, Informative)
Re:question (Score:2)
While I'm sure some things would be fixed, do you really think that the problems with Windows are so easily fixed that a developer in their spare time would be able to do it? Remember, MS isn't Linux. There are thousands of full-time employees working on these products. It just can't be that simple, or it would have been fixed by n
How "more eyes == fewer bugs" works (Score:2)
It's obvious that better developers should generate fewer bugs, but I think you don't get the point.
Open source has fewer bugs regardless of overall developer quality, because it's the quality of the best developer who has access to the code that matters. It doesn't matter if 99.999% of the people who have access to the code are ignorant, or unwilling to debug it. I
Re:How "more eyes == fewer bugs" works (Score:2)
Whi
Re:How "more eyes == fewer bugs" works (Score:2)
Re:Linux is BUGGY so it IS about TIME ! (Score:3, Insightful)
If a desktop user sees a blue screen of death (device driver, bad hardware, what have you) it's nothing incredibly shocking; we've grown used to it over the years.
Linux has certainly crashed on me (mostly when trying out drivers that aren't exactly stable), and when it happens it is a much rarer (and stranger
Certai
blah blah "Windows stable" blah (Score:2)
I get so tired of hearing MS fanboys running around yelling "it is too stable! is too, is too, is too!"
Quoting myself here:
I have four boxes here in my office, a six-month-old, high-end Dell Windows box, my Powerbook, a Dell 2800 running VMware ESX Server, and a Dell 2800 running Ubuntu (crazy, I know, but the 2800 was what was available).
* My servers only reboot when I need to document startup behavior. Since I'm doing work that involves explaining how to build drivers for ESX, which includes info o
Re:Linux is BUGGY so it IS about TIME ! (Score:2)
>and a great deal of effort is made to make sure this doesn't happen. In
>Windows land all the DDK examples in both DOS Windows and NT, upon
>encountering a problem, they are encouraged to issue a stop. (similar to a
>kernel panic) rather than fail to operate.
Uhm, it's not as if the Linux kernel won't PANIC (or was it BUG, there are multiple, IIRC) if there's a problem. Your statement has no basis in reality, pure FUD.
Re:Linux is BUGGY so it IS about TIME ! (Score:2)
8. Remember to put breaks after each line or preview your message =P
Re:Typical monolithic kernel problem (Score:4, Insightful)
Splitting any software into external pieces is exactly the same as splitting the software into internal pieces. Microkernel is not the answer -- encapsulation is the answer.
Besides, converting the kernel will not get rid of the bugs; it will just make different ones. 2.5 million lines is a lot to rewrite, and any rewrite will lose all the bugfixes already in place [joelonsoftware.com].
Re:Typical monolithic kernel problem (Score:2)
But encapsulation means stable interfaces. Not acceptable for the Linux crowd (GPL binary drivers blablalba).
Of course you are right though. One main reason why so many things get broken is that all drivers need a rewrite on each interface change.
Re:Typical monolithic kernel problem (Score:2)
But encapsulation means stable interfaces. Not acceptable for the Linux crowd (GPL binary drivers blablalba).
I think your perception of the Linux developers' point of view is skewed. You say that they consider stable interfaces to be unacceptable and then imply that it's in order to deter the use of binary drivers.
That's not correct. If they really wanted to deter the use of binary drivers, they could simply file lawsuits against the developers of those binary drivers (and it would be easy to find fu
Re:Typical monolithic kernel problem (Score:2)
Encapsulation is not the answer; that will just add more code, and therefore it will have more bugs (encapsulation needs an interface etc. too).
I think a microkernel will indeed work better and be more reliable, because it allows a limited group of people to understand all the relevant code of one component.
What you see happening in Linux is exactly Tanenbaum's point for a microkernel. But then again the man is often misunderstood and misquoted.
Minix3 (Score:2)
It is free now. Well documented, and it now has performance as its primary goal.
Can you convert BSD code into GPL code? I know you cannot take GPL code and make it BSD, but I wondered about the other way around.
The current problem with Minix goes back to drivers.
I almost want to download it and give it a shot.
Re:Typical monolithic kernel problem (Score:4, Insightful)
You mean that a microkernel is magically going to implement the same functionality as Linux, with all the thousands of drivers and support for dozens of hardware platforms, in fewer than 2.5 million lines of code?
Sure, a "microkernel" itself doesn't take a lot of code. But BECAUSE it's a microkernel, drivers, filesystems, network stacks etc. need to be implemented as servers. Implementing servers with the same functionality Linux has today would take more than 2.5 million lines, for sure. And those servers can have bugs, you know. And hardware bugs exist - it's completely possible (too easy, in fact) to hang your machine by touching the wrong registers, whether you're using a microkernel or not.
Also, I don't understand why a microkernel would be magically more maintainable than a monolithic kernel. As far as I know, good software design doesn't depend on whether you pass messages or not. Sure, a server running in userspace can't take the whole system down, but that's completely unrelated to modularity and maintainability. Microkernels were in fact invented because people thought hardware complexity wouldn't allow monolithic kernels to keep up, ignoring the fact that it's perfectly possible to write a maintainable monolithic kernel with a modular design - which is what Linux, Solaris internals, etc. are today - just as it's completely possible to write an unmaintainable, non-modular microkernel. It all boils down to software design. And guess what: current general-purpose monolithic kernels (Linux, *BSD, Solaris, NT, Mac OS X - no, an operating system that implements drivers, filesystems and network stacks in kernel space is not a microkernel) have had a lot of time and resources ($$$) to become maintainable, modular, extensible, etc.
It's funny how whenever a monolithic kernel has a bug it means microkernels are better, as if the microkernel model magically makes coders bug-free, or as if it's impossible to write a microkernel server with a bad API that forces all driver developers to patch their drivers to fix a security bug. I'd love to hear what development model the Hurd/QNX/whatever guys would use to maintain six million lines of code, be it drivers for a monolithic kernel or drivers implemented as microkernel servers.
Re:Typical monolithic kernel problem (Score:3, Insightful)
Kernel: ext2/3, reiser3/4, jfs, xfs, minix, romfs, cdrom, fat, ntfs, proc, sysfs, adfs, ffs, hfs, BeFS, jffs (flash), cramfs, qnx4 fs, smb, cifs, andrew fs, plan 9
FUSE:
FunFS: network filesystem
KIO Fuse Gateway: mount anything kde can talk to as a filesystem
Bluetooth FS: bl
Re:Typical monolithic kernel problem (Score:3, Insightful)
Yes, I notice a difference: the filesystems in the kernel tree are general-purpose, performance-critical filesystems, while a fuseftp filesystem is quite the opposite.
Noticed how FUSE is a Linux thing that lets people write filesystems in userspace *despite being a monolithic kernel*, giving users all the advantages of a microkernel without any of the disadvantages?
Re:Typical monolithic kernel problem (Score:2, Interesting)
Eventually, with multi-core CPUs running stupid amounts of threads, the microkernel will make its comeback.
Hurd (Score:2)
Speaking of Hurd, I had actually forgotten about the whole thing. You don't see it mentioned as often as you used to, even in jokes. Bad sign?
Chill. (Score:2)