Devs Discuss Android's Possible Readmission To Linux Kernel 151

MonsterTrimble writes "At the Linux Collaboration Summit, Google and Linux kernel developers are meeting to discuss the issues surrounding the Android fork and how it can be readmitted to the mainline kernel. From the article: 'James Bottomley, Linux SCSI subsystem maintainer and Novell distinguished engineer, said during the kernel panel that forks are prevalent in embedded systems where companies use the fork once, then "throw it away. Google is not the first to have done something like this by far, just the one that's made the most publicity. Hopefully the function of this collaboration summit is that there is some collaboration over the next two days and we might actually solve it."'"
  • Yawn (Score:5, Funny)

    by Anonymous Coward on Friday April 16, 2010 @05:39PM (#31878328)

    What does this have to do with the iPad? I come to slashdot for iPad stories, not stuff real nerds have never heard of.

    • Re:Yawn (Score:4, Funny)

      by ickleberry ( 864871 ) <web@pineapple.vg> on Friday April 16, 2010 @05:50PM (#31878434) Homepage
      Really? I come to slashdot to read about how Google is taking yet another piece of technology we have taken for granted for many years and turning it into an online, ad-based Cloud 2.0 service and tunneling it through HTTP with JSON and SOAP to their servers for a nice intense data-mining session for better targeted ads and predicting future crimes one might commit.
      • Re:Yawn (Score:5, Insightful)

        by girlintraining ( 1395911 ) on Friday April 16, 2010 @06:02PM (#31878552)

        Really? I come to slashdot to read about how Google is taking yet another piece of technology we have taken for granted for many years and turning it into an online, ad-based Cloud 2.0 service and tunneling it through HTTP with JSON and SOAP to their servers for a nice intense data-mining session for better targeted ads and predicting future crimes one might commit.

        Unbridled capitalism and apathetic, ignorant citizens are to blame for that. Your personal data can be aggregated and monetized, and for the foreseeable future there's very little legislation to prevent this and very little awareness of how pervasive such technology is. My whole generation is living with software into which governments and corporations have put back doors and through which they freely share data with each other, and those living in urban areas (the majority of the population) are rarely out of contact with some device or another wired into the global network, tracking their movements, purchases, communications, relationships, and every aspect of their lives. Remotely-enabled webcams, cell phones that can be turned on silently to broadcast everything they hear and see, and laptops and routers that can be readily converted into eavesdropping devices, just to name a few of the many things that are out there right now. And the only reason it's not all interconnected more seamlessly is that the technology is still rapidly evolving and hasn't reached a stable plateau where convergence is possible, although the internet has made a giant leap forward in enabling that future. The NSA spends billions each year trying to keep up with infrastructure changes and is only able to harness a fraction of that potential.

        But I mean, come on -- what do you expect from a world where we find it okay to set up metal fences with razor-tipped wire and cameras everywhere as "official protest zones", where we have passports, credit cards (and soon ID cards) that can be remotely scanned to identify you... put it all together. Where do you think this all ends?

    • What does this have to do with the iPad? I come to slashdot for iPad stories, not stuff real nerds have never heard of.

      Just be patient and give it time. This Lenax stuff may just catch on one day and we'd all look pretty foolish if we didn't pay attention to it now.

    • I think your definition of “real nerds” is way off.
      My dictionary says:
      iPad users — faggy hipsters who are easily influenced by viral marketing, and usually play with shiny colorful clickable UIs on locked-down appliances.
      real nerds — people who use text-based UIs and solder their own hardware, have great logic skills at the expense of social skills and are actually really using computers (=automating things).

  • On the front page of the Android website is an announcement for a conference that has already happened.

    Gee maybe they could update the front page?

    Android will be at the 2010 Game Developers Conference in San Francisco from March 9th to March 11th.

    • Re: (Score:2, Insightful)

      Outdated webpages are the hallmark of a dying product
    • Gee maybe they could update the front page?

      It's in the process of updating. Just keep in mind that "the cloud" isn't exactly 'real time'. It'll be showing the "back to school specials" well before Christmas.

  • Backwards? (Score:5, Insightful)

    by Sponge Bath ( 413667 ) on Friday April 16, 2010 @05:48PM (#31878406)

    Google must now balance any desire to respect the wishes of the Linux community for compatibility with the more diverse, competing - and not always logical - interests of those now adopting Android and its own plans.

    I did a double take on this statement.

    What I've seen on the kernel mailing list is more a conflict between commercial developers' desire for compatibility (across kernel versions) and the core kernel developers' more diverse (and not always logical) desire to push pet projects and make frequent cosmetic changes that create a hellish torrent of code churn. The lack of well-defined kernel driver interfaces means a lot of time spent chasing the latest changes instead of adding features or fixing bugs.

    • Re:Backwards? (Score:5, Insightful)

      by Microlith ( 54737 ) on Friday April 16, 2010 @05:56PM (#31878482)

      The only people I've seen clamoring for a static, unchanging driver interface are those writing proprietary drivers. Last I checked, whoever changes an interface takes on the onus of fixing all the callers of it in the kernel, which is why getting your driver into the tree is considered better than keeping it closed.

      That said, if you're keeping your driver closed it's a problem you're bringing upon yourself.

      • If hardware makers can't include third-party code or processes that they aren't permitted to sublicense as free software, then perhaps they won't write a driver at all. Instead of proprietary drivers, you'll have completely unsupported hardware.
        • by cynyr ( 703126 ) on Friday April 16, 2010 @06:07PM (#31878604)
          Yes, if it's enough of a market for them, they will make sure they get support from upstream; if enough companies ask for Linux support for subassembly Y, then maybe it will change. If you really feel you need to keep it closed, do as Nvidia does, or handle it yourself.
          • Where's the ROI? (Score:4, Insightful)

            by tepples ( 727027 ) <.tepples. .at. .gmail.com.> on Saturday April 17, 2010 @07:22AM (#31880716) Homepage Journal

            if it's enough of a market for them

            It isn't. Because GNU/Linux has roughly 1% of the desktop share, a lot of companies don't see the return on investment in getting support from upstream.

            • This is circular reasoning. And everybody knows it.

              The reason that the market share is so low is that companies didn't support it in the first place, because they did not look at the long-term profits, or were just too greedy. It's got nothing to do with market share. That is just a straw man.

              But hey, I have yet to see hardware that I couldn't use under Linux. So the whole driver problem isn't even there anymore.

              It’s big companies like Adobe, and game creators, with their short

              • But hey, I have yet to see hardware that I couldn't use under Linux.

                Then you haven't seen a Microtek ScanMaker 4850 flatbed scanner. It's still listed as unsupported in SANE, just like it was when I first checked back in the Mandrake 9.1 days.

        • Re: (Score:3, Insightful)

          by Microlith ( 54737 )

          What's your point, that we should encourage closed drivers by setting the APIs in stone for years on end? Allow the non-open to dictate the actions of the open?

          That's not -my- problem. It's theirs. They choose to stay closed, so when the APIs change no one else can fix it but them. They have no room to bitch about unstable APIs in an open kernel that is constantly changing, when they won't commit to being open themselves. Others do, and as a result don't have nearly the problems. It's a cost they must accep

          • Re: (Score:3, Interesting)

            by tepples ( 727027 )
            So I, an end user, am inside a Best Buy store, and I don't have a cell phone with a data plan to check what is in stock against the distro's HCL. How do I find peripherals that are definitely compatible with a free OS?
            • Re: (Score:2, Interesting)

              by Anonymous Coward

              1. Class compatibility.

              Prefer to buy the object which says it implements a device class standard. AHCI is a good example. Classes rule so much that in a lot of cases all the non-class compatible products just went away - HID and ATAPI are good examples of that. In a few cases products don't advertise their class compliance but there's a well known sign that you can learn before you start looking. For example, if a webcam has the symbol that means it's designed to work with Vista, that means it'll work with

            • Re: (Score:3, Insightful)

              Step one would be: don't shop at Best Buy, as you're probably paying too much.

              Step two would be: shop at home, online, where you can compare both prices and compatibility with your OS.

              I think these steps are valid whether or not you're a clueless end-user. Clueless end-users are more than capable of comparison shopping online (and if the end-user really wants to buy from Best Buy, they can look at Best Buy's website without leaving home).

              • Step two would be: shop at home, online

                How much do return shipping and restocking fees cost if the product A. turns out to be an incompatible revision after they switched from, say, Atheros to Broadcom within the same model number, or B. is a laptop computer that turns out to be incompatible with my hands and/or eyes?

                • Stores like Best Buy often charge restocking fees on open electronics. What's your point?

                  At any rate, it's not hard to google the model number and see if people have had trouble getting it to work with your distro, see whether the manufacturer has changed chipsets under the hood, and so on and so forth. Isn't that part of what I said earlier, in step two?

                  You can't complain that an alternative solution doesn't work if you ignore part of the instructions.

                  • by tepples ( 727027 )
                    So what recourse do I have if I see good results on Google, but it doesn't work once I have bought it?
                    • If you buy from a reputable vendor, you have exactly the same recourse you'd have buying from Best Buy - return it.

                      You're going to complain about the cost of shipping or something, I'm sure. That's true. But you pay less for the item in the first place, and most of the time you'll get the right thing (after all, you're talking about an edge case here), so you'll come out ahead even if once in a while you have to return something.

            • To respond to your signature: Valve had this idea, and people spurned it at the time. Of course, that was before they actually had a bunch of games in their lineup. (At least they did a survey more than three years ago.) The idea was to pay $10-15 and get all the games. That idea wasn't bad considering the prices that are paid for games... And you get the kind of support that Steam can offer, such as cloud-based services (configuration, saved games, etc.).

            • by h4rr4r ( 612664 )

              look for one with a tux sticker on it?

            • I have to agree with the other responders: don't buy at Best Buy! You're just getting ripped off. Go home, and shop on Newegg.com (or zipzoomfly.com, or many others). The prices are much lower, there are customer reviews so you can see what other people say about the product or whether there are common problems, and you probably won't have to pay sales tax which should make up for any shipping charges.

              • The prices are much lower

                Because they charge return shipping plus a 15% restocking fee if you're among the first to buy something after the manufacturer has made an incompatible revision to the hardware.

                and you probably won't have to pay sales tax which should make up for any shipping charges.

                Until you get audited and billed for back use tax plus penalties for non-payment of tax.

                • Because they charge return shipping plus a 15% restocking fee if you're among the first to buy something after the manufacturer has made an incompatible revision to the hardware.

                  Best Buy has a very poor return policy too, in case you didn't know. It's not like Wal-Mart, where you can return anything, even with a damaged open box.

                  Until you get audited and billed for back use tax plus penalties for non-payment of tax.

                  Which NEVER happens. [citation needed] It's not worth the state tax department's time to

                • Sorry for replying to your sig, but

                  "Give away software and sell support." But how do you sell support contracts for a computer game?

                  You don't. That model obviously doesn't work with games, it works for software used by businesses. For games, you just have to sell it outright. Or, you could give it away and sell access to a central server for multiplayer games (of course, you run the risk of someone reverse-engineering your protocol and making their own compatible multiplayer server).

                  One of the selling fe

          • Yes - as much as possible. I don't think APIs should never be fixed or improved, but at some point Linux ought to be 'done' enough that driver APIs don't need to be changed, and that backward compatibility isn't such a liability that it outweighs the advantages.

            C'mon folks. Linux is way past the experimental phase. It's the basis for many of the devices we use and love. If the APIs aren't solid enough to freeze (or maintain backward compatibility) at this point, then it ought to be a priority to make

          • What's your point, that we should encourage closed drivers by setting the APIs in stone for years on end?

            I think the point is that the driver ABI doesn't need to change every cycle just to discourage closed drivers.

            ..and here is an idea.. an operating system can support more than one driver model and ABI. Pick one, call it BIN_DRV_1. Declare it to be supported for at least N > 5 years, and then continue to fuck around with the SRC_DRV one. After 5 or more years, when there seems to be a significant advantage if BIN_DRV_1 had the same features as SRC_DRV, you define BIN_DRV_2 and then support that one for
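
            (A minimal C sketch of the BIN_DRV_1 idea above, purely illustrative: bin_drv_ops_v1, bin_drv_register_v1 and BIN_DRV_ABI_VERSION are hypothetical names, not anything that exists in the real kernel. The point is just that the frozen, versioned entry points stay fixed while an in-kernel shim maps them onto whatever the churning in-tree interfaces currently look like.)

                /* Hypothetical frozen binary-driver interface, version 1. */
                #define BIN_DRV_ABI_VERSION 1   /* bumped only when a BIN_DRV_2 is declared */

                struct bin_drv_ops_v1 {
                        unsigned int abi_version;       /* must equal BIN_DRV_ABI_VERSION */
                        int  (*probe)(void *device);    /* hardware found                 */
                        void (*remove)(void *device);   /* hardware removed / unload      */
                        long (*ioctl)(void *device, unsigned int cmd, unsigned long arg);
                };

                /* A shim inside the kernel would translate these frozen calls onto the
                 * current in-tree driver model, whatever it happens to be this release. */
                int bin_drv_register_v1(const struct bin_drv_ops_v1 *ops);
                void bin_drv_unregister_v1(const struct bin_drv_ops_v1 *ops);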

        • by grcumb ( 781340 )

          If hardware makers can't include third-party code or processes that they aren't permitted to sublicense as free software, then perhaps they won't write a driver at all. Instead of proprietary drivers, you'll have completely unsupported hardware.

          Releasing unsupported hardware because you don't like the alternative seems like a case of cutting off your nose to spite your face.

          Given the situation you describe, in which hardware makers sub-license proprietary code because it costs them less, it would seem to me that they should be promoting FOSS for all they're worth. No more upstream lock-in for the manufacturers, fewer overheads and almost certainly increased profits per unit sold because of reduced demand for royalties.

          I realise that it's extremely

          • Releasing unsupported hardware because you don't like the alternative seems like a case of cutting off your nose to spite your face.

            If the (supported) Mac OS X market is an order of magnitude bigger than the (unsupported) GNU/Linux market, and the (supported) Windows market is yet another order of magnitude bigger than that, then cutting off your nose to hide your lies [printfection.com] becomes profitable.

        • by h4rr4r ( 612664 )

          Fine by me. Better than encouraging closed drivers.

      • Re: (Score:3, Insightful)

        by Anpheus ( 908711 )

        Those proprietary drivers still have to be maintained against the rest of the kernel, and that costs time, and consequently money.

        Furthermore, many of these devices are protected by patents, and I'm sure you don't want code for a special model of capacitive multi-touch screen that only one phone uses to be added to the general Linux kernel. There's no point in it.

        So that's the problem. All these phones have highly specialized devices that may be protected by patents that in Europe have no weight, but in the

        • Re: (Score:2, Interesting)

          by 0123456 ( 636235 )

          The last thing Linux needs is a set-in-stone kernel interface: 'backwards compatibility' is what has ensured that Windows remains a steaming pile of kludges and security holes as no old components can be thrown away.

          I can only presume that you are actually Bill Gates and want to destroy Linux by forcing it to repeat Windows' mistakes.

          • Re:Backwards? (Score:5, Insightful)

            by Sponge Bath ( 413667 ) on Friday April 16, 2010 @06:48PM (#31878946)

            The last thing Linux needs is a set-in-stone kernel interface...

            I can agree with this, but then again I don't see anyone asking for that.

            How about something in between, say a well-defined interface that is stable for a reasonable period of time, with clear points of deprecation and then replacement with improved interfaces? The Windows driver interface is not set in stone with never-ending backwards compatibility; you can't use Win 9X drivers on XP. Yet a binary driver that works on Windows 2K has a reasonable chance of running on Vista.

            There needs to be a balance between improvements/changes and stability/maintainability.
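
            (To make that concrete, a minimal standalone C sketch; netdev_v1 and netdev_v2 are made-up structs, not real kernel ones. It shows why a precompiled binary driver breaks when an in-kernel struct changes: the driver still uses the field offsets it was built against.)

                #include <stdio.h>
                #include <stddef.h>

                /* Layout the binary driver was compiled against ("kernel N"). */
                struct netdev_v1 {
                        int mtu;
                        int flags;
                };

                /* Layout after a later kernel inserts a field ("kernel N+1"). */
                struct netdev_v2 {
                        int features;   /* new field shifts everything below it */
                        int mtu;
                        int flags;
                };

                int main(void)
                {
                        /* A driver built against v1 bakes in offsetof(netdev_v1, flags);
                         * run against a v2 object, that offset now lands on 'mtu'. */
                        printf("flags offset: v1=%zu v2=%zu\n",
                               offsetof(struct netdev_v1, flags),
                               offsetof(struct netdev_v2, flags));
                        return 0;
                }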

          • by Anpheus ( 908711 )

            Not set in stone, but less volatile than "every other release needs some minor fixup." That's all.

            For example, we're currently on 2.6.33.2. Why not standardize on an ABI for the minor version number? 2.6 versus 2.8 for example. (Or since they switched development pattern, will 2.7 be a legit release? I don't know.)

            The problem is that the volatility is so high that kernel drivers need 24/7 maintenance, or else they're dropped and then it becomes even harder to re-integrate them. Ask Microsoft about their par

            • Re: (Score:2, Troll)

              Why not standardize on an ABI for the minor version number? 2.6 versus 2.8 for example.

              There is no 2.8 on the horizon, the next number over to the right has become the de facto minor version number, and the module ABI is stable within each of those releases. Clearly, you are not involved in actual kernel development, but thanks for playing.

              • There is no 2.8 on the horizon, the next number over to the right has become the de facto minor version number, and the module ABI is stable within each of those releases.

                I don't know where you get that. I've seen, and continue to see, plenty of changes to kernel functions called by drivers between 2.6.x and 2.6.x+1. Maybe you mean the next number to the right of that, the so-called stable branches maintained by Greg Kroah-Hartman?

                Those are a step in the right direction, but the x changes every few months an

            • Re: (Score:3, Informative)

              by Mad Merlin ( 837387 )

              The problem is that the volatility is so high that kernel drivers need 24/7 maintenance, or else they're dropped and then it becomes even harder to re-integrate them. Ask Microsoft about their paravirtualization drivers. They've submitted two or three versions to the kernel, and each time you had to use the specific version of the kernel that they compiled them on, or it didn't work. That's the problem. Linux. Isn't. Free. Microsoft is however eventually going to have to come to a sad realization: it may co

              • by Anpheus ( 908711 )

                Apparently if HTC simply "frees their code" for their thousands of phones, maintainers will come out of the woodwork to keep them up to date for HTC. And HTC will never have to worry about it again...

                That's a good fairytale, do you have any more?

        • First, I'd like to say, what moron modded this "troll"? I personally don't agree with it, but that doesn't make it a troll.

          Furthermore, many of these devices are protected by patents, and I'm sure you don't want code for a special model of capacitive multi-touch screen that only one phone uses to be added to the general Linux kernel. There's no point in it.

          Wrong, absolutely wrong. Greg K-H himself has explicitly said that he WANTS people with drivers for even highly obscure devices to merge them into the

          • Re:Backwards? (Score:4, Informative)

            by dgatwood ( 11270 ) on Friday April 16, 2010 @10:53PM (#31880252) Homepage Journal

            Wrong, absolutely wrong. Greg K-H himself has explicitly said that he WANTS people with drivers for even highly obscure devices to merge them into the mainline kernel. It doesn't matter if your capacitive multi-touch screen is only used in one phone; the code is useful to have publicly available in the kernel as a reference. Furthermore, as more drivers for similar devices are merged into the kernel, commonalities between them can be found, and more generic drivers can be created.

            Based on what I've seen over the years (as a developer on a project that never made it back into the mainstream kernel), the problems with this approach are threefold:

            1. Nobody maintains most of them. Most of the 5% of drivers that everybody uses are already in the kernel tree. Of the remaining 95%, half of the drivers don't build at all, and most of the other half don't work. If they're barely maintained now, you can bet money that they won't be maintained at all when some kernel tree maintainer gets a hair up his/her backside and decides that a particular fix isn't elegant enough and won't take the changes....
            2. The tree is already too large. If every driver out there were in the tree, checking out an update to the tree would be horribly painful, the source packages that distributions include would become huge, etc. The bigger it gets, the fewer people are going to be willing to maintain their drivers inside that tree, so in the long run, encouraging people to put their drivers in the tree is just going to cause other drivers to move back out of the tree, eliminating any real benefit.
            3. Many such drivers are outside the tree because they require substantial changes to some subsystem in order to build them. Now one could argue that these changes should be made to those subsystems to make them more general, or one could argue that those drivers are so specialized that nothing else will use them, so there's no reason to bother. That's often not an easy question to answer, and tends to result in highly political shouting matches, with the end result being that the driver never goes in, which is usually why those drivers got published outside the kernel tree to begin with.

            There are ways to solve these problems, of course; IMHO, they basically amount to:

            • Design a kernel build infrastructure that can easily bring in driver sources from third-party sites (like a ports collection, but for kernel drivers). With proper categorization, this can provide all the same benefits as having the drivers in the main tree, but also allows for a richer tagging scheme instead of a simple filesystem hierarchy, which should actually make it significantly easier to spot patterns (for example, seeing that there are now eighty-seven different drivers for capacitive touchscreens, or whatever), all without bloating the tree that everybody has to download.

            • Subject all kernel API changes to a formal API review process in which no API change can go in unless the owners of all drivers in that area agree that the design is acceptable and will meet with their needs. Set up a reasonable set of rules of engagement (e.g. A. don't shoot down the idea just because you don't need it, B. don't shoot down an idea without proposing an alternative). And so on.

            • Redesign the kernel interfaces in an object-oriented language. Such designs make it more likely that drivers can extend the interfaces without requiring major changes to the core code. The Linux kernel sort of halfway adopts this approach insofar as code reuse is concerned, but does so in ways that aren't particularly clean and neat.

              For example, if I were writing an ATA driver and needed to do almost everything the same way but change the behavior of one function in some other library... say down at the block device layer, I'd either have to make a change to the block device layer with some special case detection code or I'd have to copy entire swaths of code at the ATA device layer and c
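
              (A minimal, self-contained C sketch of the ops-table override pattern being described; blk_ops, generic_blk_ops and the "ata" names are made up for illustration, not real kernel structs. The idea: copy the generic table and replace the one function pointer you need, instead of duplicating the whole layer.)

                  #include <stdio.h>

                  /* Hypothetical block-layer operations table. */
                  struct blk_ops {
                          int (*submit)(const char *what);
                          int (*flush)(void);
                  };

                  static int generic_submit(const char *what)
                  {
                          printf("generic submit: %s\n", what);
                          return 0;
                  }

                  static int generic_flush(void)
                  {
                          printf("generic flush\n");
                          return 0;
                  }

                  static const struct blk_ops generic_blk_ops = {
                          .submit = generic_submit,
                          .flush  = generic_flush,
                  };

                  /* An "ATA" driver that wants everything generic except its own flush. */
                  static int ata_flush(void)
                  {
                          printf("ata: issue FLUSH CACHE EXT\n");
                          return 0;
                  }

                  int main(void)
                  {
                          struct blk_ops ata_ops = generic_blk_ops;  /* start from the generic table   */
                          ata_ops.flush = ata_flush;                 /* override just the one function */

                          ata_ops.submit("one sector");
                          ata_ops.flush();
                          return 0;
                  }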

            • The tree is already too large. If every driver out there were in the tree, checking out an update to the tree would be horribly painful, the source packages that distributions include would become huge, etc. The bigger it gets, the fewer people are going to be willing to maintain their drivers inside that tree, so in the long run, encouraging people to put their drivers in the tree is just going to cause other drivers to move back out of the tree, eliminating any real benefit.

              Again, I don't see the problem

              • by dgatwood ( 11270 )

                If you're doing actual development work, what difference does another 100MB make?

                The average Linux 2.2 kernel patch was somewhere around 5kB compressed. The average 2.6 patch is somewhere around 150kB compressed. If you were doing development back in 2.2, when you pulled an update you got a handful of files and a couple of kilobytes in lots of little high-latency pieces. Half a minute later and you were patched. With 2.6, do the math.

                Now imagine in a couple of years when the drivers have bloated that up to 2GB

        • by sjames ( 1099 )

          The thing is though, I've seen that argument since the mid 2.0.x kernels. The ABI hasn't happened and the Linux kernel hasn't shriveled up and died.

          It's not like the interface for a driver changes every single release either. There are a number of out-of-tree drivers that compile and work fine for most of the 2.6.x series (perhaps all, I haven't tried them all). So it's not exactly a lot of work to keep current. There's no real call in an embedded device (or server for that matter) to slavishly track every

      • Re:Backwards? (Score:5, Informative)

        by Sponge Bath ( 413667 ) on Friday April 16, 2010 @06:15PM (#31878656)

        That said, if you're keeping your driver closed it's a problem you're bringing upon yourself.

        I should have been clearer. I'm talking about drivers in the main kernel source. I know the Linux kernel mantra: binary-only drivers are evil (I agree), and out-of-tree open source drivers are slightly less evil. I think out-of-tree open source drivers can be useful when inclusion in the main kernel is denied because some critical functionality is deemed unnecessary by the gatekeepers, who require it to be removed before consideration. But I'm not even talking about that.

        Last I checked, whoever changes an interface takes on the onus of fixing all the callers of it in the kernel...

        That's the theory. Here is how it works in practice: a pet project or cosmetic change that touches a lot of code is implemented, and then dependencies are grepped. The dependencies are fixed up in a cut-and-paste way. Sometimes more important drivers get some review to make sure nothing breaks. Everything else just gets shipped if it compiles. Then when that kernel is used in a distribution, sometimes years later, many drivers are suddenly broken and you have to backtrack to see which change took them out. If someone has a lot of time and desire to support a "lesser" driver then they can spend all of their time playing catch-up, but that wears out volunteers quickly and annoys commercial vendors.

        • by jhol13 ( 1087781 )

          All drivers are binary-only to those who are not willing to compile, fix, debug and test. ALL of them, even those in the kernel tree, as they are not tested either.

          Very, very few drivers get into the kernel tree within a reasonable time period; several years of driver hell is not, IMNSHO, acceptable.

      • by jhol13 ( 1087781 )

        I have had several years of driver hell, with two or three machines constantly dying because FOSS drivers stopped working in every kernel security update (which occurred about monthly for 8.04).

        Now the eee.ko driver which gives 900MHz on the Eee 701 does not even compile (on 10.04 beta). How do you explain that? It is FOSS, I did not "bring it upon myself", but ...

        All this because some idiot has religious hate against "proprietary".

      • Once I write something, Free or not Free, I prefer that it stays written, unless there are good reasons for the underlying system to change.

    • Re:Backwards? (Score:5, Insightful)

      by Daniel Phillips ( 238627 ) on Friday April 16, 2010 @05:57PM (#31878498)

      The truth is, Google doesn't really get open source even though its livelihood depends on it.

      • Re: (Score:3, Informative)

        You're entirely right. That's why they fund several thousand students worldwide to join open-source projects and contribute code to those projects every summer, even if the projects in question don't directly benefit Google.

    • by cynyr ( 703126 )
      In-kernel drivers shouldn't be an issue. The module ABI/API changes as needed, but this has already been hashed out. Open-source your driver and get it in the kernel and it will work across versions; want yours to live outside the kernel (Nvidia), you maintain it.
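
      (For reference, a minimal sketch of what an open-sourced module skeleton looks like; mydrv is a made-up name. The MODULE_LICENSE("GPL") tag is the part that matters for this argument: the kernel uses it to tell open modules from proprietary ones, and non-GPL modules are locked out of GPL-only exported symbols.)

          #include <linux/init.h>
          #include <linux/kernel.h>
          #include <linux/module.h>

          static int __init mydrv_init(void)
          {
                  printk(KERN_INFO "mydrv: loaded\n");
                  return 0;               /* returning nonzero here aborts the load */
          }

          static void __exit mydrv_exit(void)
          {
                  printk(KERN_INFO "mydrv: unloaded\n");
          }

          module_init(mydrv_init);
          module_exit(mydrv_exit);

          MODULE_LICENSE("GPL");          /* keeps GPL-only symbols available to this module */
          MODULE_DESCRIPTION("Minimal example module");
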
    • What I've seen on the kernel mailing list is more a conflict between commercial developers' desire for compatibility (across kernel versions) and the core kernel developers' more diverse (and not always logical) desire to push pet projects and make frequent cosmetic changes that create a hellish torrent of code churn. The lack of well-defined kernel driver interfaces means a lot of time spent chasing the latest changes instead of adding features or fixing bugs.

      If you've ever used Gentoo, you'll know that this is true for all of Linux, and it's the main thing to really, really hate about it. (I love Linux in general, but this is destroying most of that love.) Your distribution maintainers just usually shield you from it.

      Stallman had good intentions, but it seems he was never at a bazaar himself, since otherwise he would have known that every bazaar is a totally chaotic mess. ^^
      Interfaces, just like standards, are a good thing.
      Maybe we should do it like the Germans wo

  • Tricky (Score:5, Funny)

    by Monkeedude1212 ( 1560403 ) on Friday April 16, 2010 @05:49PM (#31878414) Journal

    "{
            '{
                    "{
                      }"
                    "{
                        }"
              }'
    }"

    I wasn't sure if 5 quotes at the end of the article was correct or not. I decided to employ brackets to handle the scenario. Taking out the wording, my findings are above. It IS correct.

    Is it bad that this was the most exciting part of the article to me?

  • Cheaper costs (Score:5, Insightful)

    by girlintraining ( 1395911 ) on Friday April 16, 2010 @05:50PM (#31878420)

    It's a real problem -- Android is easily the most hackable phone out there. And that's exactly the kind of thing cell phone manufacturers in this country don't want. It's bundled services that they make their fortunes on -- selling overpriced phones, contract cancellation fees, locking in devices, and more. Android threatens to separate the market into service providers and device providers, and up until now the service provider has dictated what the device providers could do.

    Imagine if you could just eject your SIM card from your phone, plug it into your computer, and browse the net, take phone calls, etc., then eject it like it's a memory card, slap it back into your phone, and go off to school, work, wherever. Or use Bluetooth so that as soon as you get home, it automagically resyncs all your e-mails, text messages, and more. There's so much the technology can do -- and the only reason it's not happening is that service providers want to charge for everything, rather than simply flat-rating everything on a per-minute, per-day, or per-megabyte basis.

    My Sidekick recently lost the ability to send files to my computer over bluetooth. Why? Because of an OTA update that disabled it. So now I can't just set my phone near my laptop and transfer my pictures out of it; I have to open the back up, eject the little card, plug it into my system, copy the files, and then do the reverse. Very cumbersome when before it was 'click icon, drag files'.

    It's complete and utter bullshit that cell phones are as powerful now as desktops were ten years ago, sitting in the palm of my hand, and yet they have less than a third of the capability. And not a one of them is really interoperable with any other except on the most primitive level. Hell, the dialup days of computing offered more functionality and standardization than the cell phone market does. Why should a 14.4k modem and an antiquated Pentium 133 have more communication functionality than today's devices? Hell... it even cost less.

    • Yes.

      I'm still waiting for some kind of phone etc. that isn't crippled in that way. I just don't have a mobile phone now. The operators are creaming way too much off the top and giving so little back.

      I have a 10 meg line for a tenner a month to my house and can do pretty much whatever I want with it. The fact that operators charge 5p or whatever to send a 160-byte SMS message, or that if I pay 25 a month for 24 months I can send 500 or 1000, just sucks. It needs to be £5 for a month and including internet ac

      • by tepples ( 727027 )

        I'm still waiting for some kind of phone etc. that isn't crippled in that way.

        Then buy one from the manufacturer instead of from a carrier.

        I have a 10 meg line for a tenner a month to my house and can do pretty much whatever I want with it.

        Spatial multiplexing of RF signals over a wired connection is easy: just pull another insulated cable through existing conduits. Doing so over the air is harder because there's no copper or fiber waveguide to keep your signals from mixing with other subscribers' signals.

        • by h4rr4r ( 612664 )

          The signals are not the issue; you talk to the cellular gear via AT&T. This is the same method that PCs using USB cellular modems use.

          • The signals are not the issue; you talk to the cellular gear via AT&T.

            With cable, DSL, or FTTH, the ISP just has to put another "refrigerator" on the corner to handle more signals. But with USB cellular modems, it costs a lot for AT&T to build more towers to handle more subscribers.

    • Re:Cheaper costs (Score:5, Insightful)

      by EvanED ( 569694 ) <evaned.gmail@com> on Friday April 16, 2010 @06:00PM (#31878522)

      It's a real problem -- Android is easily the most hackable phone out there.

      I'm not so sure... I think the Nokia N900 has got it beat.

      • Re:Cheaper costs (Score:5, Interesting)

        by girlintraining ( 1395911 ) on Friday April 16, 2010 @06:05PM (#31878586)

        I'm not so sure... I think the Nokia N900 has got it beat.

        Yeah, but who's heard of the Nokia N900, or even knows what that means, outside geek circles? On the other hand, billboards and TVs everywhere are blasting out "Droid does". For bringing a hackable system to the masses, Android has it beat.

        • Re:Cheaper costs (Score:5, Insightful)

          by drsmithy ( 35869 ) <drsmithy@gm[ ].com ['ail' in gap]> on Friday April 16, 2010 @07:46PM (#31879364)

          Yeah, but who's heard of the Nokia N900, or even knows what that means, outside geek circles? On the other hand, billboards and TVs everywhere are blasting out "Droid does". For bringing a hackable system to the masses, Android has it beat.

          But "the masses" aren't interested in hacking it, thus making said hackability essentially irrelevant to anyone who isn't in "geek circles" anyway.

          • Re:Cheaper costs (Score:4, Insightful)

            by girlintraining ( 1395911 ) on Friday April 16, 2010 @09:01PM (#31879778)

            But "the masses" aren't interested in hacking it, thus making said hackability essentially irrelevant to anyone who isn't in "geek circles" anyway.

            They said the same thing about the internet, twenty years ago. And yet look what the hackers of the world built out of the refuse of wires and chips that the corporations of the time said was useless and had no commercial value. Now they're fighting to tax it and control it, and some countries have declared it an inalienable human right to have it.

            Maybe it has no value to them, but that's because they don't know the value of it yet. It's our job to find it and tell them. You just haven't been around long enough to realize the purpose of your own learning yet. Your individuality, your knowledge and talents, are not for your own gratification. The purpose of the democratic process, which the internet comes closest to in form and function, is not to create a great country, or great works, but to create great people.

            Hacking is therefore the highest form of the democratic process; Not because of what we do, but for what we share.

            • Thanks, girlintraining, that was an awesome post.

              The purpose of the democratic process, which the internet comes closest to in form and function, should not be to create a great country, or great works, but to create great people.

              Brilliant. (I fixed it a little for you. It reads a little better to me that way).

            • by drsmithy ( 35869 )

              They said the same thing about the internet, twenty years ago. And yet look what the hackers of the world built out of the refuse of wires and chips that the corporations of the time said was useless and had no commercial value. Now they're fighting to tax it and control it, and some countries have declared it an inalienable human right to have it.

              You seem to have a much different recollection of the internet 20 years ago than I do.

              Maybe it has no value to them, but that's because they don't know the value of

        • Are you kidding? Maybe in the stone-age US. But here in Germany, I have yet to see a Droid in any shop or in any person's hands. Motorola, Apple and Google are niche companies in our phone market. You barely ever see someone owning such a phone. Nokia and Samsung rule the market.

          Also what do you mean “outside geek circles”? We were talking about hackable phones. “Outside of geek circles” is off-topic.

          The N900 is the only phone I’d call hackable at all. Android phones are

      • Re:Cheaper costs (Score:4, Informative)

        by cynyr ( 703126 ) on Friday April 16, 2010 @06:10PM (#31878628)

        I second that.

        You have hardware-level access to the N[789]00 devices. I would like an iPad-sized N900; that would be a great device.

      • Re: (Score:2, Offtopic)

        by bug1 ( 96678 )

        Don't they both have "binary only" components?

        Or do you mean crackable?

    • I'm confused by your second paragraph, because that's pretty much exactly how it does work. My SIM enables the device that contains it to make calls or use the data service. I can drop it into a phone or a computer, although mostly if I want to use a computer while mobile and be online I use the bluetooth DUN profile on my Phone, because it's less effort than removing the SIM and doesn't prevent me from receiving calls. I've never come across a firmware update for any phone I've owned removing functional
      • I've never come across a firmware update for any phone I've owned removing functionality either, but maybe that's because I don't buy phones from the service provider.

        You're European.

        • Or maybe he has a phone that doesn't allow OTA updates like Windows Mobile? Never had a problem with OTA updates since I've had WinMo smartphones for the last 8 years... And I can load pretty much any program I'd like, independent of what the carrier desires.
    • by upuv ( 1201447 )

      "My Sidekick recently lost the ability to send files to my computer over bluetooth. Why? "

      You bought a phone controlled by the operator.

      ----
      "It's complete and utter bullshit that cell phones are as powerful now as desktops were ten years ago "

      Actually, I dare say the phone is more powerful than the PC of 10 years ago. My phone can drive 720p straight to my TV. No way could a PC I could afford do that 10 years ago. My phone also communicates at very good broadband speed over 3 techs: Bluetooth, 802.11g,

    • by h4rr4r ( 612664 )

      Then why did you buy it?

      An Eris is $79 and can be rooted within 15 minutes of having it home. Heck, 2 minutes if you install the SDK and get the files you need before you go pick up the phone.

  • I am against this. (Score:3, Insightful)

    by FlyingGuy ( 989135 ) <flyingguy@gm a i l .com> on Friday April 16, 2010 @09:28PM (#31879916)

    And here is why.

    Google has proven to be benevolent, but I am not sure I want their hooks in my Linux kernel. Google exists to make money and do things in its own self-interest. The problem is that if their fork gets merged, they will become the maintainers of it. I believe that as long as it remains in their self-interest they will maintain the code, but as soon as it is no longer in their self-interest it will be abandoned, and where will that leave us should we all decide to start using that functionality?

    I think they should put the parts that are different out there, let us all examine them, and then let us decide whether we want their frankencode or not.

    • Because volunteers are consistently reliable maintainers.
      Because companies don't often contribute to the kernel.

      I get your concern, but don't think it's called for. Besides, doesn't code have to meet guidelines to get into mainline?

      • Well, as long as the code meets Linus's guidelines, yes.

        I do realize that Google provides time for employees to work on outside projects, but this is a bit more than that. These are some fundamental changes to the kernel.

