Kernel 2.6.12 Released
Mad Merlin writes "Linux kernel 2.6.12 has been released! KernelTrap has a brief summary of it. The changelog is only partial, however: 'The full ChangeLog ended up missing, because I only have the history from 2.6.12-rc2 in my git archives, but if you want to, you can puzzle it together by taking the 2.6.12 changelog and merging it with the -rc1 and -rc2 logs in the testing directory. The file that says ChangeLog-2.6.12 only contains the stuff from -rc2 onward.' As always, you can find the changelog and the source at kernel.org."
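The stitching Linus describes can be sketched as follows. This is a minimal illustration, assuming the partial logs have already been downloaded locally; the file names passed in are placeholders, not exact kernel.org paths.

```python
from pathlib import Path

def stitch_changelog(parts, out_path):
    """Concatenate partial changelogs, oldest first, into one full log.

    `parts` would be, e.g., the -rc1 log, the -rc2 log, and the file
    labelled ChangeLog-2.6.12 (which only covers -rc2 onward).
    """
    merged = "\n".join(Path(p).read_text() for p in parts)
    Path(out_path).write_text(merged)
    return merged
```

Note this is plain concatenation; since the -rc logs and the final log may overlap, deduplicating repeated entries would take more care than this sketch shows.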
One thing I'm a bit confused about... (Score:5, Interesting)
Re:Now, there's the right message (Score:2, Interesting)
I'm a card-carrying member of the FSF, a Linux user, a supporter, and I didn't mean to HURT anybody. I meant to make an observation, in the hope that it perhaps HELPS somebody.
Re:Linux+OpenSolaris (Score:5, Interesting)
Short answer: No, and no.
Longer answer: first, while there are a few places where Solaris still has an advantage, you can't just rip code out of one kernel and stick it in another. The structure of the code is quite different, so an implementation in one codebase won't transfer cleanly to the other.
And second, the CDDL, besides being horridly written, is clearly and intentionally not GPL-compatible, so even if you could technically transplant code like that, it wouldn't work legally.
Maybe? (Score:5, Interesting)
Poor Linus (Score:5, Interesting)
borked (Score:3, Interesting)
Re:You're Fired (Score:4, Interesting)
That being said, Linus *has* given a reason why there's no full changelog this *one* time (it's reproduced right above in this very Slashdot discussion, for example); if anyone has issues with that, I assume they're more than welcome to create a full one and post it. If no one does... well, then the itch probably wasn't worth scratching after all.
So there. If it really matters to you, then go and create a full changelog. If it's not worth your time and effort, why do you complain that Linus feels the same way?
It is not that simple (Score:1, Interesting)
This is the very problem with Linux, and they do not want to change it. Instead, they want manufacturers to open their code. Of course, ATi and nVidia will never do this, so we are stuck waiting for them to release new drivers.
nVidia has partially solved the problem by providing a wrapper layer that you can recompile for your specific kernel. It is a workaround, not a good solution.
Re:Making it stable... (Score:3, Interesting)
still needs heavy development
(ReiserFS, VM,
Speaking of VMs, I'm a little confused about the topic. Can anyone direct me to some good material that explains the differences between various VM systems? Specifically, I'm concerned about overcommitting memory and the OOM killer in Linux. Do any other OSes have an OOM killer? Why or why not? If an OS overcommits memory, how can it not have an OOM killer? Does setting "vm.overcommit_memory = 2" disable the OOM killer, or just make it less likely?
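For concreteness, here is a rough sketch of the three overcommit modes and the strict-mode commit limit. This is a simplification of the kernel's actual overcommit accounting (the real rules live in the kernel's documentation), and `overcommit_ratio` defaulting to 50 is the usual setting, not a guarantee on every system:

```python
# Sketch of Linux's vm.overcommit_memory modes (simplified).
OVERCOMMIT_MODES = {
    0: "heuristic (default): wildly unreasonable requests are refused, "
       "overcommit is otherwise allowed, so the OOM killer can still fire",
    1: "always overcommit: never refuse an allocation, rely on the OOM killer",
    2: "strict accounting: requests beyond CommitLimit fail with ENOMEM "
       "up front, which makes OOM kills unlikely rather than impossible",
}

def commit_limit_kb(ram_kb, swap_kb, overcommit_ratio=50):
    """CommitLimit used in mode 2: swap + ram * overcommit_ratio / 100."""
    return swap_kb + ram_kb * overcommit_ratio // 100
```

So under this reading, mode 2 doesn't switch the OOM killer off; it moves the failure to allocation time (malloc returns NULL) instead of fault time, which is why the killer rarely has anything left to do.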
Reiser4 (Score:2, Interesting)
Re:One thing I'm a bit confused about... (Score:4, Interesting)
The central problem was that each series progressed from unstable to stable to obsolescent to non-functioning. The solution is to always have a place for unstable features (the -mm series) and a place for stable features (the mainline series), with features moving into the stable series as they mature, rather than requiring a major upheaval and years of fussing. There also needs to be something corresponding to the period where mistakes in a release are fixed in a release that contains nothing else, not even extensively tested new features; this is only useful until there are no known regressions in the next feature release.
The reason the numbering changed is that, had it been maintained, the recent releases would now be 2.16.11, 2.18.0, and 2.19.0 (assuming the new system was adopted at 2.6.6, the old numbering would make what was 2.6.7 into 2.8.0, and so forth, replacing one point release with two minor versions, which would just be silly). Also, it would be confusing if 2.17.x included stuff that wasn't in 2.18.0 but was in 2.19.0 and 2.20.0; that is what happens to anything still in testing when a release is made and which only stabilizes afterward. Since there's always stuff in testing, no stable release could ever be made without the existence of features being confusingly non-contiguous.
In any case, 2.6.x.y is now about as stable as 2.4.x was during the period before distros started moving to 2.6.
Re:Maybe? (Score:1, Interesting)
If you need to get something done that requires 3D acceleration, you do not really have the option to bitch, simply because otherwise you will not get the task done. Thus there really is no choice on the user side.
I think it would be better for everyone if NVIDIA just provided COMPLETE specs for interfacing with their video cards, and some kernel maintainers would write an open source driver from those specs.
Regarding someone's comment above that giving out the specs would give a leg up to competitors: this is utterly false. This is not something like smart cards, where a competitor reverse engineered the technology and leaked it so people could pirate it, or a wireless chipset, where you have access to the VCO (voltage-controlled oscillator) and can change the frequency outside the FCC-accepted range. No, here we only need the interfacing. Java APIs are a good example: think OOP. You do not know the implementation, but you know how to interface with the device, and that is all that is needed.
I do not see why NVIDIA doesn't want to save their engineers' time, as well as increase their customer base, instead of hiding behind excuses such as giving competitors a chance to catch up. Even if that were the case, NVIDIA would already have shipped the card, while the competitor would only then start developing and testing chips, and only afterwards a complete product. A relevant example is when the USSR stole the 8088 [or was it 8080]. By the time they finished reverse engineering, building, and testing it for production, Intel had made 80386 chips.
Viewed from the perspective of an end user, I have things to do and deadlines to meet; of course, there's room for optimism.
Best regards,
Oleg M
Re:Maybe? (Score:3, Interesting)
If you want to play 3D games on Linux today, you need to use binary drivers. Another alternative is to use Windows for gaming, and Linux only for desktop applications. In that case, nvidia still has no incentive to release any specifications or open source drivers for Linux.
A third alternative is to forgo 3D games altogether, in which case ATI/nVidia lose perhaps 1% of their sales and the user is left unhappy. And ATI/nVidia still have no incentive to release open source drivers or specifications, so these people will be unhappy forever (or at least until some point far in the future when Linux has a large enough market share that a successful boycott can actually be established).
So, essentially, your position is that you should make significant sacrifices for no conceivable gain, whereas the people you're arguing against are suggesting that people reap the benefits of an almost completely open-source desktop in addition to being able to enjoy their games, by making a concession on a point that they have no hope of winning.
Does that sum things up pretty well?