
Linux 2.6.27 Out

452 writes "Linux 2.6.27 has been released. It adds a new filesystem (UBIFS) for 'pure' flash-based storage, a now-lockless page cache, much-improved Direct I/O scalability and performance, delayed allocation support for ext4, multiqueue networking, data integrity support in the block layer, a function tracer, an mmio tracer, sysprof support, improved webcam support, support for the Intel wifi 5000 series and RTL8187B network cards, a new ath9k driver for the Atheros AR5008 and AR9001 chipsets, more new drivers, and many other improvements and fixes. The full list of changes can be found here."
This discussion has been archived. No new comments can be posted.

  • uname -a (Score:1, Interesting)

    by Anonymous Coward on Friday October 10, 2008 @12:02AM (#25324079)
    Linux grumpy 2.6.27-6-generic #1 SMP Tue Oct 7 04:15:04 UTC 2008 i686 GNU/Linux

    huh? has ubuntu been using early releases or something?
  • by gringer ( 252588 ) on Friday October 10, 2008 @12:04AM (#25324107)

    It's a shame this won't be in the upcoming Lenny release of Debian. The in-kernel support for heaps of webcams via gspca is a very nice user-visible element of this release. []

    Although, I guess they made the decision for 2.6.26 before they realised that a September release would be an impossible target.

  • by QuantumG ( 50515 ) * <> on Friday October 10, 2008 @12:06AM (#25324121) Homepage Journal

    In only 3 months, all of this code has been completed and reviewed by multiple developers. This happens *every* three months. The pace at which the Linux kernel is moving while still maintaining quality is incredible. It is clearly the case that the Linux kernel has hit a new kind of critical mass and is now a form of software development that has never been seen before. The sheer number of people involved changes what is possible. If you suggested that every single change to the codebase be reviewed by multiple developers in a traditional proprietary software development house you would be, rightly, laughed at. There simply aren't the resources.

  • AR5008 (Score:5, Interesting)

    by log0n ( 18224 ) on Friday October 10, 2008 @12:07AM (#25324123)

    Excellent! Macbook & Pro users can finally have wifi support.

  • Current Limiting? (Score:2, Interesting)

    by um_atrain ( 810963 ) on Friday October 10, 2008 @12:08AM (#25324131) Homepage

    Hmm, wonder if this new kernel will finally do something about power consumption in laptops...

    Also, the kexec-based hibernation sounds interesting, hopefully new distro releases will start playing around with these.

  • 'pure' flash devices (Score:5, Interesting)

    by Chris Pimlott ( 16212 ) on Friday October 10, 2008 @12:17AM (#25324177)

    Before you get all excited about running UBIFS on your USB drive, take note: UBI is not for consumer flash media []. These devices already incorporate hardware to hide their flash nature so they look like a plain old block device to your OS. UBI is for pure flash devices that directly expose the quirks and distinct characteristics of the underlying media.

    So what kind of flash hardware is this for? Embedded devices, apparently. But maybe as flash storage becomes more common, more devices will support raw access?
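
    (Raw flash, when you do have it, shows up in Linux as MTD devices — listed in /proc/mtd — rather than as block devices. A quick hedged sketch of reading that listing, assuming the usual four-column format; the partition names here are made up:)

```python
def parse_proc_mtd(text: str) -> dict:
    """Parse /proc/mtd-style output into {dev: (name, size_bytes, erasesize_bytes)}."""
    devices = {}
    for line in text.splitlines()[1:]:  # first line is the column header
        dev, size, erasesize, name = line.split(None, 3)
        devices[dev.rstrip(":")] = (name.strip('"'), int(size, 16), int(erasesize, 16))
    return devices

# Typical output on a board with raw NOR/NAND flash (partition names invented):
sample = (
    "dev:    size   erasesize  name\n"
    'mtd0: 00100000 00020000 "bootloader"\n'
    'mtd1: 07f00000 00020000 "rootfs"\n'
)
print(parse_proc_mtd(sample)["mtd1"])  # → ('rootfs', 133169152, 131072)
```

    (If /proc/mtd is empty or absent, your storage is behind a block-device translation layer and UBIFS is not for you.)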

  • by FooAtWFU ( 699187 ) on Friday October 10, 2008 @12:29AM (#25324223) Homepage

    If you suggested that every single change to the codebase be reviewed by multiple developers in a traditional proprietary software development house you would be, rightly, laughed at. There simply aren't the resources.

    Where I work, it's called "pair programming".

    (If two programmers is enough to count as "multiple". Also, bug fixes are supposed to get an additional diff check.)

    If you do it right, you not only save time by not writing bugs you don't have to fix later; you also avoid wasting all sorts of time (writing the feature wrong, going down paths that could lead to disaster, or spinning your wheels and banging your head when you can't figure out something stupid like feeding rrdtool deltas when it expects raw counters...), and you can bring new developers up to speed on a code base very quickly.
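
    (For the curious: rrdtool's COUNTER data sources want the raw, ever-increasing counter values and compute the rates themselves. If all you have are per-interval deltas, the obvious fix is to integrate them back into a running counter first — a hedged sketch; `deltas_to_counter` is a made-up helper, not an rrdtool API:)

```python
from itertools import accumulate

def deltas_to_counter(deltas, start=0):
    """Rebuild the monotonically increasing raw counter from per-interval deltas."""
    return list(accumulate(deltas, initial=start))

print(deltas_to_counter([5, 3, 7]))  # → [0, 5, 8, 15]
```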

  • by bendodge ( 998616 ) <bendodge@bsgpr[ ... m ['ogr' in gap]> on Friday October 10, 2008 @12:55AM (#25324363) Homepage Journal

    How about SD cards? They appear to be rather low on circuitry.

  • by schwaang ( 667808 ) on Friday October 10, 2008 @01:01AM (#25324385)

    It seems quite likely that OLPC will largely replace jffs2 with UBI [] for the internal nand on the XO. Good news. Maybe this will apply to the Asus Eee and other solid-state drives as well.

  • by Weaselmancer ( 533834 ) on Friday October 10, 2008 @01:03AM (#25324397)

    Yeah, embedded devices definitely. It'll be awfully nice to see simple flash chips soldered onto a board rather than someone bolting an SD or compact flash socket onto them just so you can have a boot device.

    A socket is fragile, more expensive, and adds another physical item that can break. And not only that: you can drop about 20-30 dollars' worth of non-essential hardware from your design and still be on target. If you do any embedded work you know how big 20 dollars of hardware savings is. This new driver is *huge*.

  • by QuantumG ( 50515 ) * <> on Friday October 10, 2008 @01:18AM (#25324467) Homepage Journal

    Do you have this hardware? Any chance you could narrow down the versions it works on and the versions it doesn't?

    This is a general problem with kernel development.. if you don't have the hardware, it's a bitch to test. Please do contribute your findings.

  • by pembo13 ( 770295 ) on Friday October 10, 2008 @01:22AM (#25324489) Homepage
    I was kinda expecting to see news about ath9k and AR5007 found in some HP notebooks, among others. Currently using a very flaky ath_pci module.
  • Re:ACPI (Score:5, Interesting)

    by Anonymous Coward on Friday October 10, 2008 @01:30AM (#25324539)

    > Any chance that this will fix some of the ACPI problems with Linux?

    Just to be clear, ACPI problems are motherboard problems, not Linux problems.

    If the ACPI function of your motherboard is correct and compliant with the ACPI specification, Linux will work just fine.

    Part of the motherboard ACPI problem is that Windows expects, and uses, some functions within ACPI that are not compliant with the ACPI specification ... you know the drill: embrace, extend, obscure, try to screw the opposition ...

    Fortunately with ACPI we have not quite yet got to the "extinguish" phase.

  • Re:Linux 2.6.27 Out (Score:5, Interesting)

    by frankm_slashdot ( 614772 ) on Friday October 10, 2008 @01:35AM (#25324569)

    sad part is i just pre-ordered the openbsd 4.4 cd set... hah. im not sure if i should be proud or ashamed.

    then again, i sometimes think im the last of the right-os-for-the-job heretics... openbsd on my firewall. solaris (with zfs) on my fileserver... mac os x on my main desktop... (i dabble in photoshop and video.. mostly failed fark contests. ha) and windows vista on my macbook pro (along side of os x of course)... because i do a lot of autocad/solidworks stuff on the side. my webserver runs gentoo..

    i guess you could call me a glutton for punishment.

  • by cryptoluddite ( 658517 ) on Friday October 10, 2008 @01:40AM (#25324603)

    Do you have this hardware? Any chance you could narrow down the versions it works on and the versions it doesn't?

    Same hardware as this guy: []

    System is at work... I would test except there are not any easy options for doing so there. Also, I realize that you can't be expected to fix hardware problems where you don't have the hardware... in fact I've personally seen code fail on one system and run perfectly on the exact same spec hardware sitting right next to it, with exactly the same software (ghosted).

    Mostly I'm just pointing out that there are longstanding problems in linux... the original fanboy post was way over the top.

  • by lysergic.acid ( 845423 ) on Friday October 10, 2008 @01:56AM (#25324687) Homepage

    that's a pretty interesting development technique. i'd never heard of it so i had to look it up [] on wikipedia.

    at first i'd assumed this was simply assigning a two person team for each development task, but turns out it's a much more involved methodology involving close cooperation and meticulous division of labor, with all duties being split between two separate roles of the driver and the observer/navigator.

    the driver is the person coding, and the observer/navigator is responsible for reviewing the driver's code and acting as a safety net by catching errors. the observer also seems to be responsible for looking ahead and thinking about general strategy and long-term planning. this frees up the driver to focus completely on the immediate task of implementation.

    apparently, two programmers using this technique are more than twice as productive as a single programmer. but i wonder if it wouldn't be incredibly boring being the observer and have to sit there watching someone else code. it might be good if the two programmers are about equally skilled and can learn from each other, but otherwise i think the observer might get bored and not pay attention to the code being written. and if he's also thinking about long-term strategy, he could easily be distracted and miss some of the bugs he's supposed to catch. perhaps simply having programmers partner up to get together and review each other's code, discuss problems/concerns, share insights/exchange thoughts, etc. every once in a while would accomplish the same thing without such a rigid structure.

  • by RzUpAnmsCwrds ( 262647 ) on Friday October 10, 2008 @03:54AM (#25325229)

    If you suggested that every single change to the codebase be reviewed by multiple developers in a traditional proprietary software development house you would be, rightly, laughed at.

    When I was at Microsoft, that's exactly how it worked. All code had to be reviewed and approved by the feature owner and the PM. There was also a team that reviewed any changes to the common libraries, in addition to the PM.

    In addition, to actually get code checked in, it had to pass FxCop (code standards verification tool), not break the build, and not break any of the build verification tests (BVTs).

    Mind you, I worked in the test team. Developers have to go through all of the same steps, and then their code also gets tested by the test team.

  • by QuantumG ( 50515 ) * <> on Friday October 10, 2008 @04:19AM (#25325317) Homepage Journal

    When I worked at VMware we had to get code reviews for every checkin. Code reviews are literally the only thing that has been shown to consistently improve quality. Of course, it's not just code reviews.. it's also attitude. If you're accepting of stuff being broken because it is "in development" then that's what you'll get. On the other hand, if you have a tight knit small team working on the same stuff then you can get similar quality by just maintaining pace and having lots of communication through the code.. but that doesn't scale.. this does.

  • by RiotingPacifist ( 1228016 ) on Friday October 10, 2008 @04:45AM (#25325405)

    Did you bother reading the bug report? It "seems linked to the HDA Intel chipset, although I do not have this problem in Fedora or PCLinuxOS."

    It's an Ubuntu problem, not a kernel problem; I would have guessed it was a pulseaudio/alsa problem rather than a kernel problem too.

  • Re:Thank you Linus. (Score:4, Interesting)

    by JohnFluxx ( 413620 ) on Friday October 10, 2008 @04:49AM (#25325423)

    I know you joke, but on average he merges four code bases (patches) per day. That is not trivial by any measure.

  • by mrpacmanjel ( 38218 ) on Friday October 10, 2008 @05:03AM (#25325491)

    First I would just like to say thank-you to everybody that develops the Linux kernel, without it I would have been stuck with the "other" OS that everybody loves to hate!

    Linux (through various distros) has been my OS of choice for about 9 years now; it has enriched my IT life and quite frankly made IT actually interesting again.

    But one thing has been bothering me!

    I recently upgraded my OS to Ubuntu 8.04 and then hit a problem: my wifi network connection became unusable (very weak signal and slooowwww internet access). I tried pretty much every fix but it still wasn't working right (slightly better wifi signal, but then it would randomly stop altogether). If I booted into my "production" partition (Ubuntu 7.10) everything was fine and the "balance of the force" was restored. I had a spare partition on the hard drive and installed Fedora 9 (? It may have been 10 - can't remember). This also exhibited "dodgy wifi behaviour". Of course, it was a kernel (2.6.22) driver problem and I need to find the time to download the latest drivers and compile them. Thankfully I can do this, but it is still irritating!
    I have gone back to Ubuntu 7.10 (kernel 2.6.14?) and it's been fine since.

    My wifi hardware is based on the rt2500 chipset series, which is quite common on laptops and until recently was reliable. As far as I remember the drivers were being rewritten for the kernel - which is fine, but if it breaks hardware (which until that time had been reliable) then people should have been made aware of it, or the kernel developers should work with the distros on an interim fix.

    At least include the compiled legacy drivers with the distro, rather than forcing people to download them from the internet and recompile.

  • by Bent Mind ( 853241 ) on Friday October 10, 2008 @05:05AM (#25325511)

    To each their own. However, I always preferred having the driver just be there when I need it. I always found it annoying, under Windows, to have to hunt down drivers. Especially when you have a hundred similar devices that have the same binary driver blob (same chipset) but require a hundred different INF files because every company that assembles a board insists on having a unique driver download. Then you can throw in driver signing that makes life even more difficult.

    Linux drivers are much easier to deal with.

  • by paulbd ( 118132 ) on Friday October 10, 2008 @05:39AM (#25325657) Homepage
    There are also longstanding issues with Intel HDA hardware ... this supposed "standard" isn't really a standard at all. It has huge amounts of slop for mobo makers to futz around with the pinouts, and indeed, there are at least as many variants on HDA h/w as seen by the kernel as there are major laptop models. The Windows drivers work because of collaboration with mobo makers, who provide the info about how they specifically wired up the pinouts. The Linux ones are a case of trial-and-error. There are thousands (or even millions) of Intel HDA users on Linux who do not have your problem, another whole set who do, and an even bigger set who have different problems with this godforsaken "standard" h/w.
  • by david.given ( 6740 ) <dg.cowlark@com> on Friday October 10, 2008 @06:40AM (#25325915) Homepage Journal

    How about SD cards? They appear to be rather low on circuitry.

    No, SD cards still have an on-board microcontroller. If you take the lid off, there are usually two chips in there: one's the flash itself, the other's the microcontroller.

    (SD cards are awesome if you're a homebrewer. They speak a high-level protocol over a very simple four-wire serial interface. It clocks down far enough that it's possible to hook one up to, say, a C64 or Spectrum by just connecting it to some spare I/O pins and wiggling them manually. You can then read and write 512 byte sectors by sending the appropriate command, and you don't have to worry about any of that horrible flash stuff.)
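
    (The command framing really is simple enough to do by hand. A hedged sketch — in Python rather than the 8-bit assembly a C64 or Spectrum would need, and only the framing, not the pin-wiggling: every SD command is a 6-byte frame of command index, 32-bit argument, and the CRC7 that SPI mode requires for at least the first commands.)

```python
def crc7(payload: bytes) -> int:
    """Bitwise CRC-7 (polynomial x^7 + x^3 + 1), as used in SD command frames."""
    crc = 0
    for byte in payload:
        for _ in range(8):
            crc = (crc << 1) & 0xFF
            if (byte ^ crc) & 0x80:  # top bits of data and crc differ
                crc ^= 0x09
            byte = (byte << 1) & 0xFF
    return crc & 0x7F

def sd_command(index: int, argument: int) -> bytes:
    """Build a 6-byte SD command frame: 0b01xxxxxx, 4 argument bytes, CRC7<<1 | stop bit."""
    frame = bytes([0x40 | index]) + argument.to_bytes(4, "big")
    return frame + bytes([(crc7(frame) << 1) | 0x01])

# CMD0 (GO_IDLE_STATE) is the canonical first command; its CRC byte must be 0x95.
print(sd_command(0, 0).hex())  # → '400000000095'
```

    (After CMD0 most cards let you disable CRC checking, which is why so many homebrew implementations only ever compute these two or three fixed CRC values.)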

  • by x2A ( 858210 ) on Friday October 10, 2008 @07:46AM (#25326163)

    That's where driverpacks [] and perhaps nlite [] projects come in handy.

  • Re:ACPI (Score:5, Interesting)

    by TheNetAvenger ( 624455 ) on Friday October 10, 2008 @08:33AM (#25326417)

    Part of the motherboard ACPI problem is that Windows expects, and uses, some functions within ACPI that are not compliant with the ACPI specification ... you know the drill: embrace, extend, obscure, try to screw the opposition

    Yet Windows works around more 'crap' ACPI implementations than it 'takes advantage of' non-compliant specifications.

    This is really a goofy argument, as there are very few mainboard ACPI implementations that are Windows-specific, let alone off-spec in order to be Windows-specific.

    Instead you find crap motherboards that still have exceptions for OS/2 RAM usage, non-Windows features like VGA palette crawling, cobbled Sx states, and horrid USB support for 'legacy' OS methods that Windows hasn't used in 10 years. (Yes, we know these are not all ACPI-specific.)

    I'm sure it is fun to blame windows for ACPI sucking and Linux's support of ACPI sucking.

    The bottom line is, ACPI tends to suck, and Linux doesn't have the development resources to make it work in all circumstances, even though it does a pretty good job. Apple has trouble with their hardware too, even though they have few models and moved to EFI; they still see some of the same inconsistent behavior (or messed-up combinations of hardware) that Linux and Windows users encounter.

    As for ACPI, MS tried to push the industry on ACPI and move past it back in the 90s, and it was hobbyists using non-Windows OSes like Linux that screamed and stopped EFI-type suggestions from taking hold. MS pushed for legacy-free BIOS concepts, and there is even some hardware out there that used a generic proprietary EFI-type, legacy-free BIOS system; go look at Toshiba laptops from 2002 that required OS-level drivers, as there was no traditional BIOS. They also didn't have legacy ISA or older device support, and could boot Windows XP in less than 10 seconds on some machines and return from a full hibernate in under 2 seconds, because there was no BIOS time delay.

    Just to blow your argument to the side, crap like this link would not exist if Windows had as much control over ACPI compliance as you suggest. []

    Specifications, and variations in the specification, are an area where 'logic' would dictate that the OSS model would be supreme; in reality, however, the complexity and diversity of the implementations favor larger production OSes like Windows, where exceptions have to be implemented and a large vendor like Microsoft can force motherboard companies to clean up their crappy implementations or work around them, as Windows often does.

    One of the biggest bitches users had with Vista and hibernation and standby was because of Vista adhering to the specifications and trying to force vendors to do the same, so that S1, S2, S3 etc. were consistent. Instead MS had to write a bunch of 'exception' code for motherboards, and even up until SP1 was still adding code to deal with crappy motherboard implementations to get hibernation and standby back in line so that hybrid sleep could work consistently.

    Microsoft doesn't have control over the hardware markets like people assume they do, and never really has. If they did, they would not have had to resort to proprietary hardware for the Xbox 360, as some of the hardware specifications in the console are things MS pushed for in the PC market years before. One example is unified shaders, which didn't reach PC users until Vista's DX10 required them, even though the benefits of a more agnostic GPU shader system were known years and years ago.

  • by Mr. Sketch ( 111112 ) <mister.sketch@gm[ ].com ['ail' in gap]> on Friday October 10, 2008 @08:38AM (#25326459)

    I think the best suggestion I heard was a date-based scheme like year.month, so we at least know how old our current kernel is. This release would be 8.10 (or 2008.10 if you want long dates). If we want to keep with the 2.x series with 3 parts, perhaps we could do millennium.year.month, so this would be 2.8.10. If another release came out this month, say on the 21st, we could add a day to the end of it for additional releases that month, like 2.8.10.21.
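
    (The schemes in that suggestion are easy to mock up — a hedged sketch, with `date_version` being a made-up helper name:)

```python
import datetime

def date_version(d: datetime.date, style: str = "year.month") -> str:
    """Format a release date per the date-based schemes suggested above."""
    if style == "year.month":      # e.g. 8.10
        return f"{d.year % 100}.{d.month}"
    if style == "long":            # e.g. 2008.10
        return f"{d.year}.{d.month}"
    # "millennium.year.month", keeping a 2.x-looking prefix, e.g. 2.8.10
    return f"{d.year // 1000}.{d.year % 100}.{d.month}"

release = datetime.date(2008, 10, 10)
print(date_version(release), date_version(release, "millennium"))  # → 8.10 2.8.10
```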

  • by dayton967 ( 647640 ) on Friday October 10, 2008 @08:49AM (#25326567) Journal
    Way back when, I made a comment that the kernel should be more modular; that was done. But I also stated that the development of the kernel and the drivers should be separated.

    This would save on the massive downloads required these days, and also allow faster development of both kernels and drivers.

    One requirement of this would be to build out driver stubs, so that communication between the kernel and the drivers would be standardized.

    Some of the benefits:
    - Faster development schedules.
    - Smaller downloads.
    - A method for hardware modules to communicate with the kernel, allowing commercial modules to be used and drivers to be developed without kernel-specific code.
    - No more kernel-specific modules. Some hardware doesn't even have up-to-date drivers because of changes to the kernel. (VMware has this problem with VMware Tools; considering the code hasn't changed that much, there are at least two #ifdefs for the 2.6.* kernels.)
    - Urgent updates of individual drivers, e.g. e1000.
    - Distributions would upgrade more frequently, instead of back-porting some fixes.
    - Reduced bandwidth requirements: no need to download a 50-60 MB tar.gz, or 17+ MB, for the kernel.
    - Ultimately, a change in one area of the kernel would no longer ripple into changes or bugs in many other modules.

    All of this would allow for greater development speed, improved security, and a reduction in bugs.

  • by Windowser ( 191974 ) on Friday October 10, 2008 @09:01AM (#25326655)

    There's nothing wrong with wanting things to 'work' sometimes. Some people have better things to do in the evening than trying to get things working. Especially when they spend their day fixing other people's problems.

    Exactly. I have no time to waste on something that I already got working, only for the OS to barf on itself so it doesn't work anymore. That's why I prefer Linux: it may sometimes be more trouble to make it work, but once it is set up, I never have to touch it again.

    I spend enough time fixing other people's Windows machines that when I get home, I just want to use my PC, not fight with it.

    Linux : because I have better things to do than fix a damn computer

  • Stability version? (Score:4, Interesting)

    by Spazmania ( 174582 ) on Friday October 10, 2008 @09:27AM (#25326901) Homepage

    When is the next stability-focused version (like 2.6.16) due out?

  • by ebuck ( 585470 ) on Friday October 10, 2008 @11:06AM (#25327931)

    Your humor is appreciated, but there's a large body of evidence that the best programmers can be more than 10x as effective as the rank-and-file, who in turn can be (more than) 10x as effective as the bad programmers.

    So a methodology that boosts output of two programmers 400% isn't really promising the impossible. Just consider that they are likely talking about average programmers, and the new environment keeps them engaged at an above-average attention level.

    Peer pressure can make people do incredible things (good and bad). Altering the environment to make it more likely that good code is produced isn't snake oil, provided the results do follow. I've never seen a methodology that denies certain techniques are beneficial; instead they seem to argue over different combinations and inclusions of techniques that were observed to work.

    Now, as you pointed out, the acronym spewing masses often don't know what they're saying. For that group, any methodology results in the same thing: changing the appearance of the current methodology without altering how the actual work is done. Months later, they have the perfect scapegoat: the methodology sucks.

  • by vidarlo ( 134906 ) <> on Friday October 10, 2008 @11:30AM (#25328235) Homepage

    This [] bug could've been a showstopper. It essentially ruined your Intel e1000e ethernet card by overwriting the firmware. They've not patched it, according to LWN:

    It is worth noting that, as of this writing, 2.6.27 does not contain a fix for the e1000e hardware corruption bug. What it does contain, though, is a series of patches which will prevent that bug from actually damaging the hardware. That makes the kernel safer to run, which is an important step in the right direction.

    What does that mean? Obviously, it should not ruin your ethernet card anymore, but will e1000e work very well with this kernel? Or what?

    Since this is a pretty high-profile bug, it's strange it isn't mentioned in the summary. The e1000e is a very popular gigabit ethernet chip from Intel, and actual hardware corruption is serious and (luckily) rare.

  • Re:ACPI (Score:3, Interesting)

    by spitzak ( 4019 ) on Friday October 10, 2008 @12:45PM (#25329143) Homepage

    Both of you are missing the point.

    The hardware manufacturers test their hardware with Windows. Whatever Windows does (whether it is correct or not), if the motherboard does not work, they will fix the motherboard. This means that whatever Windows uses gets fixed. But stuff that is not used by Windows (ie various ACPI apis or arguments to the api) is untested and thus it is a total crap shoot whether it works or not or matches the spec.

    Basically if Windows uses interfaces A and D of A,B,C,D of ACPI then A & D will work, and B & C will be a random api that varies depending on manufacturer.

    Linux has to reverse engineer "what parts did Windows really use". They actually have to figure out that B & C cannot be used because Windows did not call them.

    Windows may very well be obeying the spec and not using "undocumented" apis. But that does not mean that there is no problem making Linux work. There is a major reverse-engineering effort to find out exactly what subset is used and thus works.

    The most obvious proof is the problem Microsoft had with Vista that you mention. Microsoft programmers decided to start using B & C and discovered that, because previous versions of Windows were not using them, lots of hardware did not implement them correctly. I'll bet that despite access to their own source code, they had to do a pretty major "reverse engineering" effort just to get Vista to work, as the exact behavior is not that easy to figure out even with the source code.

  • by spitzak ( 4019 ) on Friday October 10, 2008 @01:00PM (#25329415) Homepage

    I agreed at one point, but I think Linux may have a better scheme possible now, and it would be nice for them to pursue it.

    This would be to make the "stable binary api" be only for user-space drivers. If you want a stable api then you have to write a user-space driver. In-kernel drivers would remain as they are, with a binary api that changes, and could even be changed to require GPL code.

    An awful lot of the problem drivers (web cams, sound, scanners, printers, cameras, even wireless) do not need the speed of being in the kernel. Things that really need speed (disks and wired networks) actually work pretty well in Linux today. (Graphics are a whole different story, but I think before Linux figures out that mess, graphics will end up being so integrated into the CPU that a "graphics driver" will make as much sense as a "floating point co-processor driver", i.e. it just won't exist.)

    This will require Linux to fix and promote the user space driver api. And there certainly is no guarantee that will happen. But I feel this is the correct approach today if a "stable binary api" is wanted.

    Things that provide a filesystem-like api would be the easiest (use FUSE) but this covers a lot of random devices. Things that require a lot of fast and synchronous communication (ie wireless) are harder. But this can be done in steps, so a bad API is not locked down prematurely.
