Kernel 2.6.12 Released

Mad Merlin writes "Linux kernel 2.6.12 has been released! KernelTrap has a brief summary of it. The changelog is only partial, however: 'The full ChangeLog ended up missing, because I only have the history from 2.6.12-rc2 in my git archives, but if you want to, you can puzzle it together by taking the 2.6.12 changelog and merging it with the -rc1 and -rc2 logs in the testing directory. The file that says ChangeLog-2.6.12 only contains the stuff from -rc2 onward.' As always, you can find the changelog and the source at kernel.org."
  • by Sheetrock ( 152993 ) on Saturday June 18, 2005 @05:47PM (#12852928) Homepage Journal
    When and why did they stop the system of releasing stable versions on the even minor releases (2.4.x, 2.6.x, etc.) and unstable/development versions on the odd minor releases (2.5.x, 2.7.x, etc.)?
  • by NitsujTPU ( 19263 ) on Saturday June 18, 2005 @05:56PM (#12852969)
    Whoooaaa buddy.

    I'm a card-carrying member of the FSF, a Linux user, a supporter, and didn't mean to HURT anybody. I meant to make an observation, and hope that it perhaps HELPS somebody.
  • Re:Linux+OpenSolaris (Score:5, Interesting)

    by Arker ( 91948 ) on Saturday June 18, 2005 @06:05PM (#12853003) Homepage

    Does anyone think there will be anything beneficial to Linux to borrow from Solaris now that the source is out, and does their license even allow this?

    Short answer: No, and no.

    Longer answer: while there are a few places Solaris still has an advantage, you can't just rip code out of one and stick it into the other. The structure of the code is quite different, so an implementation in one codebase just won't transfer cleanly to the other.

    And second, the CDDL, besides being horridly written, is clearly and intentionally not GPL-compatible, so even if you could technically transplant code like that, it wouldn't work legally.

  • Maybe? (Score:5, Interesting)

    by Saeed al-Sahaf ( 665390 ) on Saturday June 18, 2005 @06:09PM (#12853012) Homepage
    Could this be part of the reason hardware manufacturers don't put a high priority on Linux drivers?
  • Poor Linus (Score:5, Interesting)

    by RickPartin ( 892479 ) on Saturday June 18, 2005 @06:12PM (#12853027) Homepage
    Sorry about being off-topic, but I've been thinking: since Linus is a normal guy and not some superhuman CEO, he must go through a "family tech support guy" hell that exists only in our darkest nightmares. I pity him.
  • borked (Score:3, Interesting)

    by Sparr0 ( 451780 ) <sparr0@gmail.com> on Saturday June 18, 2005 @06:14PM (#12853034) Homepage Journal
    Yet another kernel release without fixed/rolled-back Highpoint RAID drivers :( Kernel oopses and panics abound, yet they insist on keeping the broken code that was merged in around 2.6.9. Well, there's always hope for 2.6.13!
  • Re:You're Fired (Score:4, Interesting)

    by slavemowgli ( 585321 ) on Saturday June 18, 2005 @07:27PM (#12853360) Homepage
    There is no release manager. A new kernel gets published when Linus decides it's time; in a way, that makes him the release manager, but it's not really managing as in "creating schedules, specifications, requirements, deadlines and all that". And I at least would rather see him do actual work instead of meeting arbitrary requirements imposed on him by the more marketing-oriented types.

    That being said, Linus *has* given a reason why there's no full changelog this *one* time (it's reproduced right above in this very Slashdot discussion, for example); if anyone has issues with that, I assume they're more than welcome to create a full one and post it. If no one does... well, then the itch probably wasn't worth scratching after all.

    So there. If it really matters to you, then go and create a full changelog. If it's not worth your time and effort, why do you complain that Linus feels the same way?
  • by Anonymous Coward on Saturday June 18, 2005 @08:11PM (#12853564)
    Things aren't as simple as just "according to specs". The kernel currently does not keep a standardized API that binary modules can easily use. Therefore you need to recompile your driver for each new kernel, even though you do not actually have to change the source code.

    This is the very problem with Linux - and they do not want to change this. Instead they want manufacturers to open their code. Of course ATi and nVidia will never do this and we are stuck with having to wait for them to release new drivers.

    nVidia has partially solved the problem by providing a wrapper layer that you can recompile for your specific kernel. It is a workaround, not a good solution.
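The per-kernel recompile described in this thread is typically driven by a tiny Kbuild makefile in the module's source directory. A minimal sketch (the module name `mydriver` is a placeholder, not a real driver):

```makefile
# Out-of-tree module makefile: builds mydriver.ko against the headers of
# the currently running kernel. After a kernel upgrade, re-running `make`
# is exactly the recompile step the comment above refers to.
obj-m := mydriver.o

all:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(CURDIR) modules

clean:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(CURDIR) clean
```

The `-C .../build` invocation hands control to the kernel's own build system, which is why the module must be rebuilt whenever that tree changes, even if the driver source itself is untouched.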
  • by jadavis ( 473492 ) on Saturday June 18, 2005 @09:47PM (#12853972)
    I can understand that some parts of the kernel still need heavy development (ReiserFS, VM, ...)

    Speaking of VMs, I'm a little confused about the topic. Can anyone direct me to some good material that explains the differences between various VM systems? Specifically, I'm concerned about overcommitting memory and the OOM killer in Linux. Do any other OSes have an OOM killer? Why or why not? If an OS overcommits memory, how can it not have an OOM killer? Does setting "vm.overcommit_memory = 2" disable the OOM killer, or just make it less likely?
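On the overcommit question above: as far as I know, setting vm.overcommit_memory = 2 does not switch the OOM killer off outright; it makes the kernel refuse allocations up front (strict accounting against swap plus a ratio of RAM), so the killer is rarely triggered rather than disabled. A hypothetical helper summarizing the three modes (descriptions paraphrase the kernel's overcommit-accounting documentation):

```python
# Hypothetical helper mapping Linux vm.overcommit_memory modes to their
# documented behavior (see Documentation/vm/overcommit-accounting).
def overcommit_policy(mode: int) -> str:
    policies = {
        0: "heuristic: obvious overcommits are refused; OOM killer handles the rest",
        1: "always overcommit: allocations never fail up front; OOM killer does all enforcement",
        2: "strict: commits capped at swap + overcommit_ratio% of RAM; failures surface at allocation time",
    }
    return policies.get(mode, "unknown mode")

# On a Linux box, the current mode can be read from
# /proc/sys/vm/overcommit_memory.
```

With mode 2, a malloc() or mmap() that would exceed the commit limit fails immediately with ENOMEM, so processes see the failure themselves instead of being killed later.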
  • Reiser4 (Score:2, Interesting)

    by azdruid ( 893225 ) on Saturday June 18, 2005 @10:21PM (#12854110)
    Well, I'm a bit disappointed that native Reiser4 support wasn't included in this release. It's one of the features I'm greatly looking forward to...and I'm too noobish to compile a Reiser4 kernel module myself.
  • by iabervon ( 1971 ) on Sunday June 19, 2005 @12:46AM (#12854613) Homepage Journal
    The problem was that starting an unstable series was exactly the kind of fork that causes problems with development. There was a substantial period when the functionality needed to run a system with recent software and recent hardware was only in the unstable one of the official series, and was backported to the nominally stable series by each distro individually, to different degrees. Furthermore, there was constant pressure to add new features directly to the officially stable series, rather than to the unstable series (or backported from it).

    The central problem was that series progressed from unstable to stable to obsolescent to non-functioning. The solution is to always have a place for unstable features (the -mm series) and a place for stable features (the mainline series), and features move into the stable series as they become stable, rather than requiring a major upheaval and years of fussing. Then there needs to be something that corresponds to the period where mistakes in a release are fixed in a release that doesn't include anything else, even new features which are extensively tested; this is only useful until there are no known regressions in the next version with added features.

    The reason that the numbering changed is that, were the old numbering maintained, the recent releases would now be 2.16.11, 2.18.0, and 2.19.0 (assuming that the new system was adopted at 2.6.6, the old numbering would make what was 2.6.7 into 2.8.0, and so forth, replacing one point release with two minors, which would just be silly). Also, it would be confusing if 2.17.x included stuff that wasn't in 2.18.0, was in 2.19.0, and was in 2.20.0; this is what happens to anything that's still in testing when a release is made and then stabilizes. Since there's always stuff in testing, no stable release could ever be made without the existence of features being confusingly non-contiguous.

    In any case, 2.6.x.y is now about as stable as 2.4.x was during the period before distros started moving to 2.6.
  • Re:Maybe? (Score:1, Interesting)

    by Anonymous Coward on Sunday June 19, 2005 @01:24AM (#12854752)
    But in order to make use of the product you need to use the binary driver; using the open source driver is not an option, because it lacks the 3D accelerated graphics I need. If I did not need this feature, I would get some crappy old PCI video card.

    If you need to get something done that requires 3D acceleration, you do not really have the option to bitch, simply because then you will not get the task done. Thus there really is no choice on the user's side.

    I think it would be better for everyone if NVIDIA just provided COMPLETE interfacing specs for their video cards and some kernel maintainers wrote an open source driver from those specs.

    Regarding someone's comment above that giving out the specs would give a leg up to competitors: this is utterly false. This is not something like smart cards, where a competitor reverse engineered the technology and leaked it so people could pirate it, or a wireless chipset, where you have access to the VCO (voltage controlled oscillator) and can change the frequency outside the FCC-accepted range. No, here we only need interfacing.

    Java APIs are a good example; think OOP: you do not know the implementation, but you know how to interface with the device, and that is all that is needed. I do not see why NVIDIA doesn't want to save their engineers' time as well as increase their customer base, instead of hiding behind excuses such as "it gives competitors a chance to catch up." Even if that were the case, NVIDIA would already have released a card while the competitor was only starting to develop and test chips, let alone a complete product.

    A relevant example is when the USSR stole the 8088 [or was it 8080]. By the time they finished reverse engineering, building, and testing it to make it ready for production, Intel had made 80386 chips.

    When viewed from the perspective of an end user, I have things to do and I have deadlines; of course, there's room for optimism.

    Best regards,

    Oleg M
  • Re:Maybe? (Score:3, Interesting)

    by Mornelithe ( 83633 ) on Sunday June 19, 2005 @01:25AM (#12854757)
    The open source nvidia drivers are only good if you want a $600 video card to perform like a $10 video card. The ATI drivers are better than that on old cards, but not by much on the new ones.

    If you want to play 3D games on Linux today, you need to use binary drivers. Another alternative is to use Windows for gaming, and Linux only for desktop applications. In that case, nvidia still has no incentive to release any specifications or open source drivers for Linux.

    A third alternative is to forgo playing any 3D games at all, in which case ATI/nVidia lose perhaps 1% of their sales, and the user has to be unhappy. And ATI/nvidia still have no incentive to release open source drivers/specifications, so these people will be unhappy forever (or at least until way in the future when Linux has a large enough market share that a successful boycott can actually be established).

    So, essentially, your position is that you should make significant sacrifices for no conceivable gain, whereas the people you're arguing against are suggesting that people reap the benefits of an almost completely open-source desktop in addition to being able to enjoy their games, by making a concession on a point that they have no hope of winning.

    Does that sum things up pretty well?
