

Kernel 2.4.12 Released
Whoops. A nasty bug affecting symlinks made it into 2.4.11, and Linus has ditched that "sorry excuse for a kernel" in favor of the new and improved 2.4.12. :) See the (short) changelog or list of mirrors, as usual.
Quality Assurance (Score:2, Insightful)
Re:Quality Assurance (Score:5, Interesting)
Yes, it was around 2.2.8 or 2.2.9 IIRC. What I do remember is that after that happened, Linus decided it was time to fork off the 2.3 series.
Maybe now that this has happened he will start 2.5 and hand over 2.4.x to Alan, who IMO keeps a kernel series stable better than Linus does.
Re:Quality Assurance (Score:2)
No, the Brown Paper Bag [tuxedo.org] release was 2.2.0. IIRC, you could cause an oops by examining pretty much any coredump file.
yep, 2.5 has started... (Score:2, Informative)
That's exactly what Linus said in his 2.4.12 announcement. But I guess you knew that already.
Re:Quality Assurance (Score:2, Funny)
From the LKML, as reported elsewhere in this thread:
On the other hand, the good news is that I'll open 2.5.x RSN, just because Alan is so much better at maintaining things ;)
Linus
Either you already knew that, or you are a real Cassandra. In the latter case, would you mind giving me some hints on the next horse race? :-)
watch out. (Score:5, Insightful)
Check lkml archives for a patch to fix it.
Re:watch out. (Score:5, Informative)
--- linux/drivers/parport/ieee1284_ops.c.orig Thu Oct 11 09:40:39 2001
+++ linux/drivers/parport/ieee1284_ops.c Thu Oct 11 09:40:42 2001
@@ -362,7 +362,7 @@
} else {
DPRINTK (KERN_DEBUG "%s: ECP direction: failed to reverse\n",
port->name);
- port->ieee1284.phase = IEEE1284_PH_DIR_UNKNOWN;
+ port->ieee1284.phase = IEEE1284_PH_ECP_DIR_UNKNOWN;
}
return retval;
@@ -394,7 +394,7 @@
DPRINTK (KERN_DEBUG
"%s: ECP direction: failed to switch forward\n",
port->name);
- port->ieee1284.phase = IEEE1284_PH_DIR_UNKNOWN;
+ port->ieee1284.phase = IEEE1284_PH_ECP_DIR_UNKNOWN;
}
Re:watch out. (Score:4, Informative)
Re:watch out. (Score:4, Informative)
Actually, I'm not sure what people keep talking about with this bug. As far as I can tell, this error is caught by the compiler...
gcc -D__KERNEL__ -I/usr/src/kernels/linux/include -Wall -Wstrict-prototypes -Wno-trigraphs -O2 -fomit-frame-pointer -fno-strict-aliasing -fno-common -pipe -mpreferred-stack-boundary=2 -march=i686 -malign-functions=4 -DMODULE -DMODVERSIONS -include
ieee1284_ops.c: In function `ecp_forward_to_reverse':
ieee1284_ops.c:365: `IEEE1284_PH_DIR_UNKNOWN' undeclared (first use in this function)
ieee1284_ops.c:365: (Each undeclared identifier is reported only once
ieee1284_ops.c:365: for each function it appears in.)
ieee1284_ops.c: In function `ecp_reverse_to_forward':
ieee1284_ops.c:397: `IEEE1284_PH_DIR_UNKNOWN' undeclared (first use in this function)
make[2]: *** [ieee1284_ops.o] Error 1
make[2]: Leaving directory `/usr/src/kernels/linux/drivers/parport'
make[1]: *** [_modsubdir_parport] Error 2
make[1]: Leaving directory `/usr/src/kernels/linux/drivers'
make: *** [_mod_drivers] Error 2
So if you compiled it and it worked, you aren't using the module this was in. Or your compiler is broken. :)
New marketing scheme (Score:2, Insightful)
Users, doing the QA in Linux for over 10 years!
Re:New marketing scheme (Score:3, Interesting)
Since the beginning, OSS users have paid for their software by doing testing, documentation and, if they can, code.
You know what? Many people are happy with that, because they discovered that they still end up with a better product than most commercial stuff, at a lower price.
The reason for this is that, given current software complexity, alpha-level tests, done in the laboratory, can only do so much [and often not even that, because smart managers cut test budgets to save their ass].
That is why _every_ commercial company does beta-testing (i.e. asks users to test their product for them).
In OSS, beta testing is simply done by putting out a new release (sometimes dubbed 'beta' or 'unstable', but you should always expect potential trouble).
Whoever doesn't like this is free to pay whatever company they want to do their own testing or QA on whatever software they choose to use.
Re:New marketing scheme (Score:2)
Open Source Testing (Score:2, Insightful)
Off the top of my head, Mozilla is the only large Open Source project I can think of that has a reliable testing process. I'm sure there are others of course; it just seems that Linux is not one of them. Saying "Release early, release often" and waiting for your users to find your bugs is nice in theory, but for large projects you need a more structured approach to testing, at least IMHO. Obviously there is some small unit & integration testing done when the code is written and the patches applied, but that won't catch things like this in general. What's needed is an overall testing plan.
Maybe it's about time the Open Source world started paying more attention to testing?
Re:Open Source Testing (Score:3, Insightful)
Re:Open Source Testing (Score:3, Interesting)
But unless an upgraded kernel has something that you need or desire, why would you upgrade at all? Once a version is running, just stick with it.
I'm of the belief that one of the greatest assets to open source is the fact that all older versions are still available for the vast majority of projects. This means that if you think your system was better off with the 2.2.14 kernel, the 2.67 gcc and libc-5, you can still get those, and load them onto a system built fairly recently (OK, no USB... but that's kinda what I'm talking about here.)
Try doing this with any Microsoft product. Go to a store and ask for Windows 95. It's getting hard to find 98. Try asking to buy Office 97, or any older version that might work really well on a super fast system.
So, to get back on topic: if you have a kernel that works, why would you even think about upgrading unless you are testing, comparing, or a hobbyist?
Re:Open Source Testing (Score:2, Interesting)
2.2.14 has a number of security problems. So no, it's not always prudent to pick some kernel at random and use it, just because it's old. However, it probably is wise to grab the latest 2.2.x kernel, and go with that.
And yes, you can still find older versions of Windows, so don't pull that typical zealot FUD...It was only about a week ago that MS finally discontinued NT 4.0, which has been out for ages.
Re:Open Source Testing (Score:4, Insightful)
I think you have to ask: when Linus releases a new kernel, who is it aimed at? IMO, these days it is at the distributors, the kernel developers and enthusiastic individuals. It is no longer the case that most Linux users download and compile their own kernels. Because of this, the release of kernels to the distributors actually forms part of the testing itself. I.e., don't consider a release to be stable just because it's in the so-called stable series. Consider it stable when you either (1) get it from a distributor who will have tested it and will guarantee its stability, or (2) have downloaded and tested it yourself before production use.
Yes, it would of course have been better if this hadn't crept in, but it's not really a big deal. How many users do you see around you who have lost work because of this bug?
Re:Open Source Testing (Score:2, Interesting)
I think you have to ask the question that when Linus releases a new kernel, who is that aimed at?
While thinking about who a kernel is aimed at, we've really got to consider our role as users of the latest kernels.
On any project, regardless of size, the developers are often too close to the code to test it completely. They know too well the right thing to do. They are, in effect, well-programmed users of the system that they are building, and this makes them bad testers. This is why so many companies have quality assurance departments that never interact with developers except through bug reports.
In this Free/Open community, where the entire user base is the `company', it suddenly becomes apparent that we, the users of bleeding-edge kernels, are this QA department. It is we who do the final testing; it is we who write the reviews for people who stick to old kernels until they get those reviews.
Some may argue that there is already an unstable branch in which to do these things. Sure, but there are other testers for those. Often those testers aren't suitable to test stable kernels because they are now too close to the bugs that they first reported. I hope I'm making sense here.
Anyway, this is one instance where this huge qa team found a bug real fast, got it back to the developers, and they fixed/undid it in equally quick time.
Philip
Re:Open Source Testing (Score:5, Insightful)
Because releasing is part of the testing process. This is, unfortunately, true even for closed, commercial software -- there's always some bugs and some releases are always dogs. With free software the releases come frequently and you can pick which release you want to put your chips on. It's like the difference between having a single, conservative (but still not guaranteed) retirement plan at work, and having a choice of a diverse set of funds.
And I don't believe it is valid to compare a product like Mozilla with a product like the kernel. You simply can't test them the same way. Nobody (or at least very few people) grabs the source code off of CVS and changes their kernel every night; and if they did, it wouldn't be as useful, because many problems only become apparent after you run things on a kernel for a while.
Not to minimize the good work that the Mozilla people are doing, but testing a kernel is simply much harder. This is why commercial operating systems are so slow in their releases. I actually think situations like this demonstrate a strategic advantage of open software. Nobody in their right mind runs the latest, or even any recent, kernel on a machine where reliable uptime is a consideration. I personally am using 2.2.19 on my production servers because it works for me. But lots of people do run every new kernel as it comes out on some machine, so when I switch to 2.4 there will be an excellent knowledge base that will tell me, for example, to avoid 2.4.9 for VM problems, or to apply specific patches to 2.4.10 if I need to.
This is what it is all about folks -- release early and often, and hang your dirty laundry up where everyone can see it. This is a great benefit, and the only people who this affects in any negative way are fools.
You are laboring under obsolete semantics. (Score:3, Interesting)
Open source is a paradigm shift. The standard way of thinking about releases, which never was very practical, IMHO, is misleading.
There are two outmoded connotations of the word "release" which you need to free yourself from. First, that "release" is an indicator of some absolute level of quality: that the number of bugs is small and their severity is low. Second, that the "Current Release" is the best version for you.
I don't think the idea of "release quality" was ever realistic, except in some limited circumstances. Another way of putting this is that the true test of a product is when it is put into actual widespread use. What a release represents in most cases is a slowing of the velocity of change, at which point more testing benefit is gained by putting the product into use than by our contrived tests. This slowed velocity is sometimes a sign that the bugs are few and not very severe, but it is always a sign that the developer's ingenuity or energy is exhausted. There are always more bugs. You have to be re-energized and refocused by reports of needs or problems from people in the field.
Sometimes this comes quickly. I personally have "released" software which I almost immediately retracted because of a severe flaw I missed. In any case, more energy and ingenuity in testing is always better, but the marginal value of internal testing at some point is always outweighed by real world tests.
The second mistaken idea is that the "current release" is always best. This idea was never very credible -- it's more of a fiction which enriches the software developer. Ask a common user, and you will find they often favor older versions of the tools they use.
To bring this back to testing, one benefit of open source or free software for something like an operating system is that I can get a better tested product (in real world terms) by being selective about which release I choose.
And, the level of testing for a complex product like the kernel is not a single metric -- it depends on your exact configuration and requirements. Thus for some embedded developers, versions of the kernel in the 1.x range are the best tested alternatives to use. For most users today, late versions of the 2.2 tree are the best alternatives. For people with multiple processors, their best bet may be somewhere in the 2.4 tree, although it may not be ready for mainstream use.
Re:You are laboring under obsolete semantics. (Score:2)
True, but did you also have to throw out analysis and design on the front end, and validation and verification on the back end, along with the closed source model?
Free of the time and resource constraints of commercial development, Open Source is one of the few areas where a real and effective waterfall development life cycle can occur. But instead they use some weird mutant spiral life cycle with only a "prototype" stage.
Re:You are laboring under obsolete semantics. (Score:2)
Perhaps BSD is developed much more along the lines you suggest, but it may also be possible that this is the reason it hasn't gained the same popularity, as deserving as it may be on technical grounds.
Re:So basically... (Score:2)
If the service packs were released more often, and were more targeted so I could patch a single problem that affected me without introducing a host of new ones in different areas, yes, that would be OK by me. It's not really a perfect analogy though, because most of what is in a service pack is user-space stuff.
The Linux kernel is by far the most buggy POS "kernel" that I have ever come across in 20 years of coding!
What can I say? I'm running several different versions of the Linux kernel; I have never had any problems related to the kernel; I think I've seen one kernel panic in several machines running for several years. Maybe its POSIX compatibility isn't as good as some other OSes', but that doesn't seem to prevent lots of software, including lots of cross-platform software, from working well on it.
Also, to be fair, Linux is a relatively new kernel; the BSDs and their derivatives are much more mature and share a common history, and so would naturally be more compatible.
It is *your* responsibility to do what it takes to prevent such unforgivable disasters as 2.4.1 and 2.4.11
2.4.11 an unforgivable disaster? Aren't you being a bit theatrical here? Almost nobody was affected by the problems in 2.4.11; even if it had taken a month or more to catch instead of a day or so, it would only have affected a few kernel heads. Virtually everyone's still on 2.2, which is pretty much the standard production kernel that ships with most distros. If you roll your own kernel for production machines, well, you had better know what you're doing. Any recent kernel is still in testing. Most of them will never appear in any shrinkwrapped boxes, and when they do they'll probably be a year old or more and very, very well proven.
Every new kernel has to be seen as experimental until it has established a track record, "release" or not.
If the development methodology offends you, ignore it. Just install older kernels with a track record.
2.5 is coming (Score:4, Informative)
Linus is handing 2.4 over to Alan (Score:3, Interesting)
> Not a good week.
>
> On the other hand, the good news is that I'll open 2.5.x RSN, just
> because Alan is so much better at maintaining things
On Thu, 2001-10-11 at 07:54, Alan Cox wrote:
> > And will Alan release 2.4.13 asap with Rik's VM? - (sorry, couldn't resist)
>
> I think 2.4.13 will be a Linus release
Is it just me ? (Score:2, Interesting)
I'm aware that this is not the KernelCrisis hotline; however, since it doesn't appear to be offtopic, it really bugs me, and there's a heap of wizards in here, I'd like to find out more.
Usually, when a new kernel is out, I download the patch, apply it, and use the most recent config file, going through some, but not necessarily all, of the umpteen options, and this usually worked just fine. It doesn't anymore since 2.4.10. From aborted compilations due to strange missing files, to USB race conditions, to kernels which, if they boot at all, sputter all sorts of gibberish, I've seen it all.
Any suggestions where to look? Could it be that gcc 2.95.x is really too buggy, and if so, why only now?
I don't really have the expertise to up/downgrade the compiler and the related libraries. So, I'd be really thankful for each and every hint.
Re:Is it just me ? (Score:3, Informative)
Um, you should use "make oldconfig" when you're upgrading kernels and using the same config file. It'll prompt you for any new options.
Re:Is it just me ? (Score:2, Informative)
Re:Is it just me ? (Score:5, Insightful)
From the kernel Readme:
SOFTWARE REQUIREMENTS
Compiling and running the 2.4.xx kernels requires up-to-date versions of various software packages. Consult
Many sources recommend that if you don't have a critical reason to upgrade your kernel, you shouldn't. I will follow this recommendation, as the old adage "If it ain't broke, don't fix it" is especially true if you don't know how to fix it. Installing or uninstalling a program is far more mundane than upgrading a kernel. If you're not comfortable upgrading (or downgrading) gcc, and your kernel is performing well as is (or was working fine, as the case may be), you aren't a strong candidate for a kernel upgrade. Learn the basics and fundamentals of the OS before diving headfirst into something as critical as kernel patches. Distribution providers usually do extensive testing on the kernel version they include to ensure stability and compatibility.
If you're determined to go ahead with this, Linuxnewbie.org has a decent amount of information, linuxdoc.org and linuxnow.com have HOWTOs on virtually any subject (including the GCC HOWTO [linuxdoc.org], although I can't say with any degree of certainty that gcc is at fault here), and the website of your distribution is probably another good source of info. If you still have problems, and turn to the net for answers, make sure to state specifically each step you took thus far and try to detail the problems you encountered, providing logs and diagnostic output when possible. In doing so, you or someone else may find you skipped a crucial step. Kernel upgrades are not to be taken lightly and, as you have experienced, can quite possibly be more trouble than they're worth.
Staying with 2.4.9 (Score:3, Interesting)
Re:Staying with 2.4.9 (Score:4, Informative)
Linux CVS and bugtracker (Score:4, Insightful)
That way each new release can be a "tag" and every new 2.x+1 that occurs can be a branch.
FreeBSD (while not void of bugs) has had a great deal of success with CVS, IMHO. The parport bug is a case where the author submitted something that can't even compile, due to a malformed constant name; I think he forgot the ECP part of it. Someone else posted the patch here in the replies section.
Re:Linux CVS and bugtracker (Score:2)
CVS NOW!
The current 'system' results in lost patches, random accidental additions cough*JFFS*cough, little/no tracking of changes, difficulty getting/making patches among various versions, etc.
Please, please, CVS, please!
Does it compile? (Score:3, Interesting)
the old or new Adaptec 7xxx driver enabled. This is the 3rd or 4th 2.4.x kernel that would not compile "out of the box" for me. 2.4.9-ac18 compiled and seems OK on that particular box.
On the bright side, 2.4.11 does seem to have decent VM. And the firewire support seems to be better than before with my Digital 8 camcorder.
Definition of a stable kernel (Score:5, Funny)
The stable kernel has become ready for production usage once development has started somewhere else.
May I recommend this attitude to people who complain about the instability of the 2.4 series. It's called pragmatism.
Re:Definition of a stable kernel (Score:5, Informative)
The same goes for service packs for Windows. None of the Windows shops that I used to work for would ever install service packs until they had been available long enough to know the new errors they would introduce. In fact many of those companies had policies that declared you would be fired for installing any new service packs until IT had determined that they wouldn't break usability.
If you install software on a production system that was just released yesterday, you're just asking for trouble. This applies to ALL software, not just kernels.
Re:Definition of a stable kernel (Score:2)
In the context of the linux kernel, I view 'stable' as meaning "ready for UAT".
People who run kernels while developers are still modifying them are conducting that UAT.
Stable code is the end result.
I'm not talking about -ac kernels. I am talking about kernels in "maintenance" by the "maintainer" who has of late been Alan Cox.
Re:Definition of a stable kernel (Score:3, Interesting)
I consider 2.2.x the "stable" kernel, until 2.5 is out, and until then, I urge others to adopt the same attitude.
I don't think it is fair to complain about changes and breakages until the developers have somewhere else to make them.
Re:Definition of a stable kernel (Score:2)
Your argument seems to be "They have a change to make, there is no unstable tree, so naturally it must go there", but the better approach is to not make the change at all if there is no unstable playground to work in. Changes don't *have* to go in as soon as people dream them up; wait for an odd series.
Re:Definition of a stable kernel (Score:2)
Re:Definition of a stable kernel (Score:2)
I agree the naming of the "stable" kernel as "stable" is a dubious practice.
Someone suggested, in another story, a three-way kernel release split, akin to Debian's unstable, testing and stable releases. I like that, and I think things would be better if Linux moved to that naming.
What I am suggesting is that, in the interim, and indefinitely if things don't change, people should use the following mapping:
The 'old' stable kernel, 2.2, is the REAL stable kernel.
The 'new' stable kernel, 2.4, is the REAL testing kernel.
Once 2.5 is out, then 2.4 becomes the real stable kernel and 2.5 is unstable.
To paraphrase. (Score:5, Funny)
Comic Guy: Worst kernel EVER
Bart: Why do you get to complain? They've given you thousands of hours of entertainment for free.
Comic Guy: As a loyal [user], they owe me.
Admittedly, I'm probably off the actual text by a bit here, but the point remains. Try not to be the Comic Book Guy when Linus makes one mistake.
Re:To paraphrase. (Score:5, Insightful)
Re:To paraphrase. (Score:2)
Re:To paraphrase. (Score:3, Insightful)
I do agree that the latest releases haven't been tested as much as they should have been before being released as stable, but the competing with Windows point is moot.
this is why.... (Score:4, Insightful)
...
but i cant figure out why my box keeps getting owned...
oh well
OSS Test Harnesses? OSS Test Suites? (Score:5, Interesting)
I'm a relative newcomer to the Open Source world, but what has struck me is that none of the high-profile projects seem to have their own test harness or test suites. Maybe I'm missing something; please let me know what test suites major OSS software ships with. (The only one I could think of was autoconf, which isn't a quality-management test suite but a build manager, and the Perl build process does a few demonstrations of terminal features.)
What I mean is something like "make test" integrated into the project. Running that generated test code would perform hundreds of sanity checks (or even thousands for complicated projects) on the code.
Perhaps Red Hat and SuSE have this kind of code locked away in their "commercial advantage" (and I could see the arguments for keeping those closed) but it would seem to me that Linux and Alan Cox and crew would be more open about test suite software for the kernel.
Install a kernel, run a battery of tests. Find systemic breakers really quickly. It's not hard, it's just a matter of discipline to write the tests. As code is written, write the tests for the code. Any time a bug is found outside the normal test suite, write the test that should have found it. Automatable tests wherever possible.
In the "eXtreme Programming" development paradigm, this is codified even more vigorously: write the test(s) BEFORE the code. In Eiffel, you program by contract; each method has a pretest and a posttest to ensure that the state of the world is correct. Part of the official build process for releasing the software should be a 100% compliance with the automated tests.
GCC has it (Score:2, Insightful)
GCC has a test suite: http://gcc.gnu.org/testresults/ [gnu.org] and uses the test suite as a formal release criterion. The GCC team also uses those tests as benchmarks for the compiler.
We are the test suites (Score:5, Insightful)
Install a kernel, run a battery of tests. Find systemic breakers really quickly. It's not hard, it's just a matter of discipline to write the tests. As code is written, write the tests for the code. Any time a bug is found outside the normal test suite, write the test that should have found it. Automatable tests wherever possible...Part of the official build process for releasing the software should be a 100% compliance with the automated tests.
There is a comprehensive testing suite in place for Linux; in fact, we just saw it in action. It involves testing the kernel on thousands of boxes simultaneously, running tens of thousands of hours of tests, and getting feedback to the developers within a few hours.
To paraphrase Pogo: "We've seen the test suite, and it is us."
Now, this may seem odd or broken, but it has a few charming advantages. First, the costs are distributed amongst those that benefit most, with zero accounting overhead. Second, the response time is very fast. Third (and, IMHO, most importantly) test coverage is maintained by the same laws of statistics that make sure there is air for you each time you take a breath; if usage patterns change, the new usage is included in the tests automatically--even if no one is consciously aware that they are doing "something new" it still gets tested.
-- MarkusQ
Re:We are the test suites (Score:2)
It also has several rude disadvantages. First, I am being treated as an unpaid QA employee. Second, when I find a problem and report it, I am flamed for not reporting it in the proper format or with the proper dump or trace. Third, my intelligence is insulted by labelling this software "stable" before any unit, system or integration testing has been done.
Hacking on software is fun, exciting and rewarding. But there are a few of us who just want to *use* the software. The last thing I want to do when I come home is to trace another core and file a problem report.
Why do Open Source developers get paid for writing software, when Open Source users don't get paid for testing it?
Re:We are the test suites (Score:3, Insightful)
I never got paid for any OSS work I did. Linus doesn't get any money for maintaining the linux kernel.
But there are a few of us who just want to *use* the software.
Then don't upgrade a stable production system the same night a patch comes out. If you would've waited a day or two (like I'm doing), you would be fine.
Re:We are the test suites (Score:3, Informative)
Just to add to Leto2's comments: "don't upgrade a stable production system" is not limited to open source. A decent sysadmin will test any patch commercial or otherwise before rolling it out to production systems. Patching blindly is just asking for trouble.
And to all the Linux bashers: this is nothing new. Most big software packages that I am aware of have had a bad patch or "fix" at some point.
Re:OSS Test Harnesses? OSS Test Suites? (Score:3, Informative)
The GNU Compiler Collection has an extensive regression test suite. See, for example, "GCC Automated Testing System" [cygnus.com] or "GCC 2.95 Regression Test Strategy" [gnu.org].
If you need to write a regression test for your own software check out DejaGnu [utah.edu].
--Andre
Re:OSS Test Harnesses? OSS Test Suites? (Score:3, Informative)
The checker lives here [stanford.edu]
Re:OSS Test Harnesses? OSS Test Suites? (Score:2)
I agree that having a test suite would be good for catching a lot of simple bugs in places that are easy to test. Sometimes a new kernel will have something that doesn't work at all, and that should get caught. But there are relatively few of these compared to things which aren't really quite safe, which you only spot by looking at the code carefully and thinking about it for a while; these are the more important bugs.
Re:OSS Test Harnesses? OSS Test Suites? (Score:4, Insightful)
My pet complaint is documentation, which is sometimes barely there at all. Saying, as some do, "Well, what did you pay for it?", implying that its "free beer" status excuses this, doesn't cut it when we're also saying "Hey, ditch your proprietary commercial stuff and use this instead!" We should coin a new acronym: WITFM. Where Is The F#%@#*@ Manual?
We bash M$ when they turn out buggy products and declare that they don't have a quality software process. The same is true of open source. The problem isn't closed source and the solution isn't open source. Both sides simply need to use a stronger process if they're to produce quality software.
Re:OSS Test Harnesses? OSS Test Suites? (Score:2)
Yes, I do sound like a Linux zealot. That's my cross to bear. My message is no less poignant or accurate.
Kernel 2.bla.bla.coding.release.now.noway (Score:2, Insightful)
then they start to complain about lack of testing.
YOU are the ones who should test this, folks.
if you don't like it, why change kernels?
my main machine is still running 2.0.30
why? well, I don't need any of the new things.
just because it's new and shiny, it doesn't have to be better, folks.
with 2 years of uptime, I would never even think of swapping kernels.
and if I find a bug, I don't complain; instead I fix it. that is the way of open source.
learn to code before complaining, people, it really isn't all that hard.
my main question is: why download a new "release"
if you aren't prepared for bugs or to sort them out? No one, and I mean no one, who codes can make a totally bug-free product.
/ J.Thorsell Sysadm.
Umm...How much did you pay for the kernel? (Score:2, Insightful)
KidA
Re:Umm...How much did you pay for the kernel? (Score:2)
And the amount staked on it may be millions. If you make the claim that Linux is equal to or better than commercial OS's, then you need to be prepared to have it subjected to the same demanding standards that paying customers have. Since I don't pay directly for development, I don't expect that development to happen on my schedule. But when it makes claims of quality and reliability, I expect them to be backed up. You can't have it both ways.
Fact is, I am often disappointed by commercial OS's too. I respect Linux enough to not cut it any more slack. I don't think Linus would have it any other way.
Question about rebuilding 2.4.12 (Score:3, Interesting)
Re:Question about rebuilding 2.4.12 (Score:3, Informative)
Is 2.4.x Being Rushed? (Score:3, Interesting)
*shrug*
It just seems that the patches on this tree are coming out faster than on any of the branches before it, and many more "issues" are slipping through the cracks, including controversy and laziness. I don't know about the rest of you, but I would prefer slower progress in favor of more careful, better-tested releases.
Just one problem... (Score:2)
There's just one problem with it. If most people did that, bugs would take a lot longer to be found, let alone squashed. And the problem we've seen in 2.4.11 is not some obscure, minor thing that would not have hindered stable use of the kernel. (Before you argue that, consider how severe it is: this is a show-stopper bug. Bugs in the kernel that corrupt the file system, even if they are rare, are serious problems.) This should have been caught, and the fact that the bug exists indicates that whoever made the changes that created it failed to even test said changes. 2.4.x is a "stable" kernel tree, and there's no reason why anyone should have to wait a couple of patch levels before upgrading to the latest. There should be more thought and testing done on such a branch before releases are made. Period.
Impressive (Score:2)
Actually, I can't wait to see what 2.5 development will bring us. The fact that so much is changing even in the stable kernel likely means developers will really cut loose when 2.5 work begins. I think it's actually quite exciting. (As long as 2.4 gets all its holes patched up in the near future of course..)
Mapping Linux to proprietary software (Score:2)
Let us compare the Linux development of a 2.odd version to a project branch in your average software company. That project branch will be unstable and used only for feature testing.
Then, you have the 2.evens. These are equivalent to a release branch or product mainline. You expect these to be *more* stable, but you still don't expect perfect code. When these are sent to your release engineers and Q/A, you get several small rounds of bug-fixing as a result of regression tests, feature verification, and so on.
This last step can be mapped to what distribution vendors do. So, for example, we expect Red Hat to go with something like 2.4.12, but to add the parport patch into their SRPM. They will doubtless also discover several other problems, or may be dragging along patches from previous kernels that have yet to be merged.
This is the art of release engineering. There's no such thing as a developer-released product. Never was. This is the value-add that distributions give us. They act as Q/A.
Finally, the truth is out. (Score:2, Funny)
they get released into the "stable" tree.
Perhaps, then, it can be said that they follow the "Slashdot" model of development: Post first and correct things (maybe) later.
Yikes... 2.4.11 bugs, 2.4.12 freezes, time for -ac (Score:2)
I guess it's time to hit 2.4.10-ac11 and see what Alan+Rik can do for me. The 2.4 series so far has been a crap shoot... I wonder if they can save it before 2.4.15-20 or so?
My vote (I don't know these people so it's by no means binding): Linus starts 2.5 and leaves this 2.4 nonsense behind (the sooner we get a 2.6 the better), and Alan makes a big, ugly change to allow user selection at compile time of Rik's VM vs. AA's VM. Rik and AA then both get to keep going nuts trying to keep the 2.4 boxes up...
Re:Stable? (Score:5, Funny)
Re:Stable? (Score:2, Funny)
Re:Stable? (Score:3, Funny)
Re:Stable? (Score:2, Informative)
Please correct me if I am wrong.
Re:Stable? (Score:4, Funny)
Close, but it's actually the 2.whatever.whatever that's the "unstable tree". The stable tree starts at "5.00.2195".
Re:Stable? (Score:2)
Re:Stable? (Score:2)
Re:Stable? (Score:5, Informative)
..it's just a finer numbering scheme (Score:2)
Don't you know that the odd numbered kernels are experimental?
Re:..it's just a finer numbering scheme (Score:2)
how strange... I didn't realize that 2.4.0 was odd.
Re:Stable? (Score:3, Offtopic)
Personally, I call it Wintendo.
Re:Stable? (Score:2, Insightful)
In my opinion something like this should not happen, but it can. Improving quality is a reasonable goal for the Linux kernel, but really, we are not supposed to scream at the developers when it does happen. (I don't mean the parent's author has screamed... some others here have.)
Re:I doubt... (Score:2, Interesting)
Re:I doubt... (Score:3, Insightful)
And remember - while some vendors drag out beta testing with sub-par software ("the rest of the bugs will get flushed out in beta," they say), MOST of the time the Linux kernels are usable the day they come out.
But be realistic - anyone who downloads, builds, and installs a new Linux kernel within 30 days of its release is a de facto beta tester. No sysadmin in their right mind would install a new kernel on a production server until it's been run for a bit. So all of us who love to grab the kernel and put it on desktops, small non-mission-critical servers that can go down, etc., are flushing out any remaining bugs so that mission-critical server admins can sit back and see which kernels to move to (plus other factors like existing bug fixes, new device support, etc.)
So those of you faulting Linus for releasing this kernel with this bug - give it a rest. It was an obscure bug that only cropped up if the software did something it really shouldn't have (i.e., bad design). I can't imagine a commercial vendor would have caught this bug in testing either - they'd have found it in beta just like Linus did. After all, I bet 99.9% of you who are already running 2.4.11 thought it was great till you read /. this morning :)
I for one think the current system works well. Yes, Linus may put stuff in faster than Alan, and there are pros/cons to both - for all the folks saying Linus is putting too much in, others would say AC is waiting too long. But step back and think about how great we Linux users have it. Stable kernels with many fixes coming out monthly from Linus, with bigger, more feature-rich kernels available from -ac. How awesome is that?
Re:I doubt... (Score:2, Insightful)
I suppose your article should be moderated as "cynical" - or is that option unavailable? Currently, Linus' "stable" kernels look pretty unusable (including a basically untested VM, which is just a bad joke in a "stable" kernel, and near-daily releases of so-called stable kernels which introduce new bugs). On the other hand, current -ac kernels look like they work nicely, with a usable VM, without crashing the system or part of it. This is just a bad joke of a development model.
Frankly, I'm starting to consider *BSD the better option.
(I've been a Linux user since somewhere in the 1.1 kernel series, but this is really starting to frustrate me.)
Re:I doubt... (Score:2)
After all, I bet 99.9% of you who are already running 2.4.11 thought it was great till you read /. this morning :)
Offtopic, but I had some problems getting crypto kernel modules [sourceforge.net] going with 2.4.11, and all those USB timeouts were being a pain, so I was back to 2.4.10 before 2.4.12 even came out.
Re:I doubt... (Score:2)
Re:I doubt... (Score:2)
Neither do I. But I am sure that no matter what large vendor you speak of, they have released buggy products. One particular recent incident with an IBM ATM switch (MSS) microcode release (a "stable" release) comes to mind. The company where I work was advised to upgrade to this latest release; a week later we were downgrading because of the huge network performance problems it caused. IBM withdrew the microcode release.
I'm sure other
Re:I doubt... (Score:2, Interesting)
code without regression testing. (Userland applications are another story; and don't get me started on the GNU userland formal verification process.)
Look, it's a damn bug that says more about the Linux development model (one guy, lots of little chefs) than it does about OSS. This says little about OSS; it speaks volumes about Linus' flexibility in releasing "stable" code.
Now that more folks are using Linux for mission-critical stuff, it would be nice if more testing took place. This kind of bug was common and even OK in the Usenet days of Linux; right now it just generates bad press... and probably even some flame wars on
Speaking of which, switch to FreeBSD for stability.
Re:ridiculous! (Score:2)
Funny! On the flip side, look how many patches and service packs come out for Windows, but they don't make the Slashdot home page. Good to see we're covering the bad as well as the good.
Re:ridiculous! (Score:3)
What are you talking about ? Seems like Slashdot is constantly running stories about this or that bug in Outlook, or Powerpoint, or IE, or....
If WinXP had as big a showstopper flaw as this one in 2.4.11, you can bet the criticism here would be loud, ridiculing, and vociferous, even if MS released a patch for it in the same timeframe. As it is, there is really no criticism, just a big mutual love-in. You are deluded if you think Linux flaws are covered with the same scrutiny here that Windows' shortcomings are.
Re:ridiculous! (Score:2)
In any case, this pointed out something awkward that the SUSE installer was doing, and it will probably not do it in the future. I'm sure there are quite a few syscalls in XP that won't work right given a particular set of obscure options. On top of that, when has MS -ever- had a rapid turn-around on bug-fixes?
Re:ridiculous (that is you)! (Score:3, Insightful)
I really feel that 2.3 was turned into 2.4 because of marketing reasons, since it is obvious that 2.4 isn't ready yet.
The 2.2->2.4 transition was supposed to go faster than 2.0->2.2 did (which took ages). As the release of 2.4 was postponed and postponed, the pressure to give in and release it got too big.
Re:ridiculous! (Score:2)
--- linux/drivers/parport/ieee1284_ops.c.orig Thu Oct 11 09:40:39 2001
+++ linux/drivers/parport/ieee1284_ops.c Thu Oct 11 09:40:42 2001
@@ -362,7 +362,7 @@
} else {
DPRINTK (KERN_DEBUG "%s: ECP direction: failed to reverse\n",
port->name);
- port->ieee1284.phase = IEEE1284_PH_DIR_UNKNOWN;
[ETC]
Re:Error in 2.4.12 tar balls? (Score:4, Informative)
Philip
Re:Error in 2.4.12 tar balls? (Score:2)
With bzip2, you should always decompress and compare byte-for-byte against the original data before trusting the compressed data, especially for large files.
Yes, it's a hassle, but it's necessary. I've seen this byte-for-byte verification fail a number of times, usually megabytes into the original data. I find it alarming that a "lossless" algorithm (or a buggy implementation) allows this to happen, when people are known to be trusting data to this program. I have never seen gzip fail like this, nor have I heard of such failures. However, I've witnessed it repeatedly with bzip2, so I won't trust it blindly.
It would be good if a version of bzip2 would automatically feed the compressed data and a copy of the uncompressed data to an independent piece of decompression code (one which sees only the compressed data, not the data structures of the compressor) and have this byte-for-byte check happen on-the-fly during compression. Whether the bug lies in the algorithm itself or in the code of the compressor or decompressor isn't important; the important thing is that this "lossless" algorithm isn't 100% reliable, and 99.9999% reliable just isn't good enough.
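Until something like that is built in, the habit is easy enough to script yourself. A minimal sketch (the tarball name here is just a stand-in for whatever you compressed):

```shell
# Compress, then immediately round-trip the .bz2 through a separate
# decompression pass and compare byte-for-byte against the original.
FILE=linux-2.4.12.tar            # stand-in name; use your own file
bzip2 -kf "$FILE"                # -k keeps the original next to the .bz2
if bzip2 -dc "$FILE.bz2" | cmp -s - "$FILE"; then
    echo "round-trip OK"
else
    echo "MISMATCH: do not trust $FILE.bz2" >&2
fi
```

Note that `bzip2 -dc` is a fresh invocation of the decompressor, so it only ever sees the compressed stream - which is the whole point of the check.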
(And what's with the brilliant idea of requiring ".bz2" as the extension?!? Would it have been so difficult to put the version number in the header of the file like gzip does??)
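For what it's worth, both formats do start with identifying magic bytes, which is how file(1) recognizes them regardless of extension. A quick sketch (the sample filename is made up):

```shell
# Both formats carry magic bytes up front, extension or no:
#   gzip files start with 1f 8b; bzip2 files start with "BZh".
printf 'hello\n' > sample.txt           # throwaway sample file
gzip  -kf sample.txt
bzip2 -kf sample.txt
head -c 2 sample.txt.gz | od -An -tx1   # shows: 1f 8b
head -c 3 sample.txt.bz2                # shows: BZh
```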
Re:2.4.12 dosn't compile (Score:3, Informative)
It is the right constant. Tim Waugh has released a linux-booboo.patch [redhat.com] that corrects the constant in ieee1284_ops.c to IEEE1284_PH_ECP_DIR_UNKNOWN.
Linux 2.4.12 compiles nicely for me now that I've integrated that patch.
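If you've never applied a loose patch to a kernel tree by hand, the usual incantation looks roughly like this (the paths are made up; adjust them to wherever your tree and the downloaded patch actually sit):

```shell
# Dry-run first so a mismatched patch can't half-apply, then do it for real.
cd /usr/src/linux                            # hypothetical tree location
patch -p1 --dry-run < ../linux-booboo.patch  # rehearse: no files modified
patch -p1 < ../linux-booboo.patch            # apply the one-constant fix
```

The `-p1` strips the leading path component from the filenames inside the patch, which is the convention kernel patches follow.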
Re:2.4.11 to 2.4.12 (Score:2)
Maybe the next Kernel version should be called 2002 instead of 2.5
Re:Patching hole.Please help. (Score:3, Informative)
Re:ext3 (Score:2, Informative)
Re:ext3 (Score:2, Funny)
Re:Ouch (Score:2)