
Devs Discuss Android's Possible Readmission To Linux Kernel

MonsterTrimble writes "At the Linux Collaboration Summit, Google and Linux kernel developers are meeting to discuss the issues surrounding the Android fork and how it can be readmitted to the mainline kernel. From the article: 'James Bottomley, Linux SCSI subsystem maintainer and Novell distinguished engineer, said during the kernel panel that forks are prevalent in embedded systems where companies use the fork once, then "throw it away. Google is not the first to have done something like this by far, just the one that's made the most publicity. Hopefully the function of this collaboration summit is that there is some collaboration over the next two days and we might actually solve it."'"
  • Re:Cheaper costs (Score:4, Informative)

    by cynyr ( 703126 ) on Friday April 16, 2010 @07:10PM (#31878628)

    I second that.

    You have hardware-level access to the N[789]00 devices. I would like an iPad-sized N900; that would be a great device.

  • Re:Backwards? (Score:5, Informative)

    by Sponge Bath ( 413667 ) on Friday April 16, 2010 @07:15PM (#31878656)

    That said, if you're keeping your driver closed it's a problem you're bringing upon yourself.

    I should have been clearer. I'm talking about drivers in the main kernel source. I know the Linux kernel mantra: binary-only drivers are evil (I agree), and out-of-tree open-source drivers are slightly less evil. I think out-of-tree open-source drivers can be useful when inclusion in the main kernel is denied because the gatekeepers deem some critical functionality unnecessary and require it to be removed before consideration. But I'm not even talking about that.

    Last I checked, changes to the interfaces by someone puts the onus on them to fix all the calls to it in the kernel...

    That's the theory. Here is how it works in practice: a pet project or cosmetic change that touches a lot of code is implemented, and then dependencies are grepped. The dependencies are fixed up in a cut-and-paste way. Sometimes the more important drivers get some review to make sure nothing breaks; everything else just gets shipped if it compiles. Then, when that kernel is used in a distribution, sometimes years later, many drivers are suddenly broken and you have to backtrack to see which change took them out. If someone has a lot of time and the desire to support a "lesser" driver, they can spend all of their time playing catch-up, but that wears out volunteers quickly and annoys commercial vendors.

  • Re:Backwards? (Score:4, Informative)

    by dgatwood ( 11270 ) on Friday April 16, 2010 @11:53PM (#31880252) Homepage Journal

    Wrong, absolutely wrong. Greg K-H himself has explicitly said that he WANTS people with drivers for even highly obscure devices to merge them into the mainline kernel. It doesn't matter if your capacitive multi-touch screen is only used in one phone; the code is useful to have publicly available in the kernel as a reference. Furthermore, as more drivers for similar devices are merged into the kernel, commonalities between them can be found, and more generic drivers can be created.

    Based on what I've seen over the years (as a developer on a project that never made it back into the mainstream kernel), the problems with this approach are threefold:

    1. Nobody maintains most of them. Most of the 5% of drivers that everybody uses are already in the kernel tree. Of the remaining 95%, half of the drivers don't build at all, and most of the other half don't work. If they're barely maintained now, you can bet money that they won't be maintained at all when some kernel tree maintainer gets a hair up his/her backside and decides that a particular fix isn't elegant enough and won't take the changes....
    2. The tree is already too large. If every driver out there were in the tree, checking out an update to the tree would be horribly painful, the source packages that distributions include would become huge, etc. The bigger it gets, the fewer people are going to be willing to maintain their drivers inside that tree, so in the long run, encouraging people to put their drivers in the tree is just going to cause other drivers to move back out of the tree, eliminating any real benefit.
    3. Many such drivers are outside the tree because they require substantial changes to some subsystem in order to build them. Now one could argue that these changes should be made to those subsystems to make them more general, or one could argue that those drivers are so specialized that nothing else will use them, so there's no reason to bother. That's often not an easy question to answer, and tends to result in highly political shouting matches, with the end result being that the driver never goes in, which is usually why those drivers got published outside the kernel tree to begin with.

    There are ways to solve these problems, of course; IMHO, they basically amount to:

    • Design a kernel build infrastructure that can easily bring in driver sources from third-party sites (like a ports collection, but for kernel drivers). With proper categorization, this can provide all the same benefits as having the drivers in the main tree, but also allows for a richer tagging scheme instead of a simple filesystem hierarchy, which should actually make it significantly easier to spot patterns (for example, seeing that there are now eighty-seven different drivers for capacitive touchscreens, or whatever), all without bloating the tree that everybody has to download.

    • Subject all kernel API changes to a formal API review process in which no API change can go in unless the owners of all drivers in that area agree that the design is acceptable and will meet their needs. Set up a reasonable set of rules of engagement (e.g. A. don't shoot down the idea just because you don't need it, B. don't shoot down an idea without proposing an alternative). And so on.

    • Redesign the kernel interfaces in an object-oriented language. Such designs make it more likely that drivers can extend the interfaces without requiring major changes to the core code. The Linux kernel sort of halfway adopts this approach insofar as code reuse is concerned, but does so in ways that aren't particularly clean and neat.

      For example, if I were writing an ATA driver and needed to do almost everything the same way but change the behavior of one function in some other library... say down at the block device layer, I'd either have to make a change to the block device layer with some special-case detection code or I'd have to copy entire swaths of code at the ATA device layer and change it there.
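    The ATA example above can be sketched with a virtual method. This is only an illustration of the design pattern being proposed; the class and method names (BlockLayer, queue_depth, and so on) are hypothetical, not real kernel interfaces:

    ```cpp
    #include <cassert>

    // Hypothetical sketch of the pattern described above. In an OO design,
    // a driver that needs one piece of block-layer behavior changed derives
    // a class and overrides just that method.
    struct BlockLayer {
        virtual ~BlockLayer() = default;
        virtual int queue_depth() const { return 32; }        // default behavior
        virtual int transfer(int sectors) { return sectors; } // shared logic
    };

    // An ATA driver that only needs a different queue depth overrides one
    // method; everything else is inherited unchanged -- no copied swaths of
    // code, no special-case detection hooks in the core layer.
    struct QuirkyAtaBlockLayer : BlockLayer {
        int queue_depth() const override { return 1; }  // e.g. broken NCQ hardware
    };

    int main() {
        BlockLayer base;
        QuirkyAtaBlockLayer quirky;
        assert(base.queue_depth() == 32);
        assert(quirky.queue_depth() == 1);
        assert(quirky.transfer(8) == 8);  // inherited behavior still works
        return 0;
    }
    ```

    The core code only ever sees a BlockLayer reference, so the quirky driver's change stays local to the quirky driver.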

  • Re:Backwards? (Score:3, Informative)

    by Mad Merlin ( 837387 ) on Saturday April 17, 2010 @12:20AM (#31880352) Homepage

    The problem is that the volatility is so high that kernel drivers need 24/7 maintenance, or else they're dropped and then it becomes even harder to re-integrate them. Ask Microsoft about their paravirtualization drivers. They've submitted two or three versions to the kernel, and each time you had to use the specific version of the kernel that they compiled them on, or it didn't work. That's the problem. Linux. Isn't. Free. Microsoft is however eventually going to have to come to a sad realization: it may cost them a salaried employee and benefits just to maintain these drivers. That's ridiculous. If it's difficult for Microsoft to justify targeting Linux, how is a small business going to justify putting 1/10 of its development staff on it? 1/20?

    Bzzt! Wrong.

    Once code is properly merged to the Linux kernel, it is maintained by the kernel community at large, which need not include the original author. When a kernel developer changes an API, they are required to simultaneously update all in kernel drivers that use the API in question. The only drivers that require 24/7 maintenance are those that are out of tree (regardless of the reason they're out of tree).
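    The in-tree guarantee described above amounts to this: when a core signature changes, every in-tree caller changes in the same commit, so the tree always builds. A toy sketch with hypothetical names (register_widget is not a real kernel API):

    ```cpp
    #include <cassert>

    // Suppose a core API grows a parameter:
    //   old: int register_widget(int id);
    //   new: int register_widget(int id, unsigned flags);
    // The developer making the change updates the API and all in-tree
    // callers together, so no in-tree driver is ever left broken.
    int register_widget(int id, unsigned flags) {
        return flags ? -1 : id;  // stand-in for real registration logic
    }

    // In-tree driver, fixed up by the API author as part of the same change:
    int widget_driver_probe() {
        return register_widget(42, 0);  // caller updated to pass flags
    }

    int main() {
        assert(widget_driver_probe() == 42);
        return 0;
    }
    ```

    An out-of-tree driver still calling the old one-argument form would fail to compile against the new kernel, which is exactly the maintenance burden the parent describes.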

    Android was never properly merged to the Linux kernel. Google did a big code dump for Android and it was merged as a set of staging drivers with the caveat that it needed a lot of cleanup before being moved into non-staging. Unfortunately, that cleanup never came and Google basically let the code rot. Thus, a few releases later, Android was removed.

    Indeed, probably well over half of the code in the Linux kernel is now maintained by someone other than the original author (be it an individual or a corporation), particularly for non-core subsystems and drivers. As a hardware vendor or other similar party, if you want a) your widget to work out of the box on every Linux distro and b) to not worry about maintaining your driver, you should be getting your driver merged to the Linux kernel.

  • Re:Backwards? (Score:3, Informative)

    by MostAwesomeDude ( 980382 ) on Saturday April 17, 2010 @09:59AM (#31881330) Homepage

    You're entirely right. That's why they fund several thousand students worldwide to join open-source projects and contribute code to those projects every summer, even if the projects in question don't directly benefit Google.

  • Re:Backwards? (Score:3, Informative)

    by dgatwood ( 11270 ) on Saturday April 17, 2010 @02:17PM (#31882744) Homepage Journal

    The latest linux kernel weighs 60 MB;

    Odd, I'm looking at http://www.kernel.org/pub/linux/kernel/v2.6/ [kernel.org] and the latest kernel I see is 2.6.33, which comes in at a whopping 81 megabytes for the compressed tarball. Extracted, it takes almost 434 megabytes. That's over twelve minutes of DVD-quality video. That's two-thirds of a CD-R. That's ten times the size of the Mac OS X kernel. That's two months of bandwidth at the lowest tier of cell phone service.... You get the idea. It's freaking huge. The kernel sources were too big way back in 2.2. Now, they're just pure comedy.

    Also, remember that source control systems add a significant performance penalty that is also proportional to the number of files, not just the number of bytes. So although the giant compressed tarball may take only five minutes to download from kernel.org (which is an eternity), I'd expect a source checkout to take a good bit longer.

    But doesn't "just overriding one method in a class" mean changing a function pointer?

    The point wasn't that there's an underlying difference, but rather that the syntax of a class hierarchy tends to result in design patterns in which the things that need to be part of the class are part of the class and not part of some giant library of functions. The result is that instead of the semi-OO design pattern of using pointers for only the functions that you already know will need to be replaced, you have a true OO design pattern where any of them can be replaced without having to push for changes to thousands of lines of code all over the place that refer to that function.
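    The "semi-OO" pattern being described can be sketched in a few lines: only the operations someone anticipated replacing live behind function pointers in an ops struct, and overriding one of them means copying the struct and swapping a pointer. Hypothetical names again, not real kernel code:

    ```cpp
    #include <cassert>

    // A struct of function pointers -- the kernel's hand-rolled vtable.
    // Only read and write were designed to be replaceable; anything not
    // routed through a pointer can only change by editing the shared code.
    struct file_ops {
        int (*read)(int n);
        int (*write)(int n);
    };

    static int generic_read(int n)  { return n; }
    static int generic_write(int n) { return n; }

    static const file_ops generic_ops = { generic_read, generic_write };

    // A driver with different read behavior copies the ops and swaps one
    // pointer -- the function-pointer equivalent of overriding one method.
    static int throttled_read(int n) { return n > 4 ? 4 : n; }

    int main() {
        file_ops my_ops = generic_ops;  // copy the "base class"
        my_ops.read = throttled_read;   // "override" one operation
        assert(my_ops.read(8) == 4);
        assert(my_ops.write(8) == 8);   // untouched default still used
        return 0;
    }
    ```

    In a true OO design every operation is virtual by default, so this copy-and-swap step (and the up-front guess about which functions need pointers) disappears.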

    It's not that if you use a derived class for your driver, all the other drivers will magically use the derived class instead of the one they were designed and compiled for.

    I never suggested that they would. Why would they need to? There should be no accidental interaction between drivers. Any instances of variables shared between two unrelated drivers should be deliberate and rare. I should be able to make changes to my copy of the ATA core code without breaking your driver. That said, if you want the ability to do things like that, Objective-C categories would work.... :-D

    Also consider that C++ is less supported on embedded systems, and has a much more complex (and changing) ABI than C.

    All the more reason to use a limited subset of C++ (no exceptions, no templates, etc.) and to freeze the parts of the ABI that the kernel uses. Apple has managed to write their driver stack in C++ with AFAIK no binary compatibility breakage since they switched from GCC 2.95 to GCC 3 way back in 2003.... (Okay, so the CPU architecture change was something of a binary compatibility breakage, but you know what I mean.)
