Google Linux

How Google Uses Linux 155

postfail writes 'LWN.net coverage of the 2009 Linux Kernel Summit includes a recap of a presentation by Google engineers on how they use Linux. According to the article, a team of 30 Google engineers rebases to the mainline kernel every 17 months, presently carrying 1208 patches against 2.6.26 that insert almost 300,000 lines of code; roughly 25% of those patches are backports of newer features.'
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Saturday November 07, 2009 @05:07PM (#30016768)

    Try iotop.

    http://guichaz.free.fr/iotop/ [guichaz.free.fr]

  • by Darkness404 ( 1287218 ) on Saturday November 07, 2009 @05:19PM (#30016842)
    It's kind of common sense that Google would see how much disk space or CPU time is used. I mean, what admin -doesn't- know that 2 gigabytes of space is used by xxxx@gmail.com? Even if all the data were super-encrypted you would still know how large the file is.
  • Tough (Score:3, Informative)

    by Anonymous Coward on Saturday November 07, 2009 @05:19PM (#30016850)

    Google does not distribute the binaries, so they are not obliged to publish the source.

  • Re:Is it worth it? (Score:1, Informative)

    by Anonymous Coward on Saturday November 07, 2009 @05:55PM (#30017056)

    Mike wonders why the kernel tries so hard, rather than just failing allocation requests when memory gets too tight.

    Wait, what? Has Google seriously never heard of vm.overcommit_memory [kernel.org]?
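
    For reference, a small C sketch of what that knob changes (the 1 GiB chunk size and 256 GiB cap are arbitrary picks of mine, not anything from TFA): with the default optimistic policy the loop below is typically granted far more untouched address space than the machine has RAM plus swap, while with vm.overcommit_memory=2 malloc() starts returning NULL near the commit limit.

    /* overcommit_demo.c - illustrative sketch only.
     * Repeatedly malloc() 1 GiB chunks without touching them and report how
     * much address space was granted.
     * Build: cc -o overcommit_demo overcommit_demo.c
     */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const size_t chunk = (size_t)1 << 30;      /* 1 GiB per request */
        const size_t cap   = (size_t)256 << 30;    /* stop at 256 GiB   */
        size_t granted = 0;

        /* the chunks are never written, so no physical pages are consumed */
        while (granted < cap && malloc(chunk) != NULL)
            granted += chunk;

        printf("malloc() granted %zu GiB of untouched address space\n",
               granted >> 30);
        return 0;
    }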

  • Re:A New Culture (Score:5, Informative)

    by MichaelSmith ( 789609 ) on Saturday November 07, 2009 @05:57PM (#30017070) Homepage Journal

    Funnily enough the roads were there before the cars.

  • Re:Tough (Score:1, Informative)

    by Anonymous Coward on Saturday November 07, 2009 @05:57PM (#30017076)

    I think TFA also points out how foolish it is to base all your work on an old kernel because it's supposed to be the well-known stable release used in the organization, and then pour lots of human resources into backporting features from newer kernels. This is what Red Hat and SUSE used to do years ago, and avoiding it is the main reason Linus set up the new development model. Google could learn from the distros; they could probably use all those human resources to follow kernel development more closely. Switching to git will probably help a lot.

  • by MBCook ( 132727 ) <foobarsoft@foobarsoft.com> on Saturday November 07, 2009 @06:22PM (#30017234) Homepage

    Yes, they do. Since they use older kernels and have... unique... needs, they aren't a huge contributor like Red Hat, but they do a lot.

    During 2.6.31, they were responsible for 6% [lwn.net] of the changes to the kernel.

  • by Anonymous Coward on Saturday November 07, 2009 @06:49PM (#30017376)

    During 2.6.31, they were responsible for 6% [lwn.net] of the changes to the kernel.

    That's 6% of non-author signoffs. It's not 6% of changes. I'm not saying they don't contribute, but the manner of their contribution isn't what you're suggesting.

  • by CyrusOmega ( 1261328 ) on Saturday November 07, 2009 @07:34PM (#30017658)
    A lot of companies will also funnel all of their commits through a single employee. I know the company I used to work for made one man look like a code factory to a certain open source project, but in fact it was a team of 20 or so devs behind him doing the real work.
  • by marcansoft ( 727665 ) <hector AT marcansoft DOT com> on Saturday November 07, 2009 @07:45PM (#30017726) Homepage

    Andrew has been doing a large amount of kernel work for some time now, since before his employment with Google. Note that the 6% figure is under non-author signoffs - people whose hands patches went through, rather than people who actually authored them. Heck, even I submitted a patch that went through Andrew once (and I've submitted like 5 patches to the kernel). Andrew does a lot of gatekeeping for the kernel, but he doesn't write that much code, and he certainly doesn't appear to be committing code written by Google's kernel team under his name as a committer.

    Google isn't even on the list of actual code-writing employers, which means they're under 0.9%. I watched a Google Tech Talk about the kernel once (I forget the exact name) where it was mentioned that Google was (minus Andrew) somewhere around 40th place among companies that contribute changes to Linux.

  • by farnsworth ( 558449 ) on Saturday November 07, 2009 @07:51PM (#30017764)

    Google is responsible for a tiny part of kernel development last I heard, unfortunately.

    I don't know that much about Google's private modifications, but the question of "what to give back" does not always have a clear default answer. I've modified lots of OSS in the past and not given it back, simply because my best guess was that I was the only person who would ever want feature x. There's no point in cluttering up mailing lists or documentation with something extremely esoteric. It's not because I'm lazy or selfish or greedy -- sometimes the right answer is to just keep things to yourself. (Of course, there are times when I've modified something hackishly and been too lazy or embarrassed to send it back upstream :)

    Perhaps Google answers this question in a different way than others would, but that doesn't necessarily conflict with "the spirit of OSS", whatever that might be.

  • by marcansoft ( 727665 ) <hector AT marcansoft DOT com> on Saturday November 07, 2009 @08:09PM (#30017896) Homepage

    By that I meant "developed for Google, useful to other people".

    We can divide Andrew's potential kernel work into 4 categories:

    1. Private changes for Google, not useful for other people.
    2. Public changes for Google, deemed useful to other people but originally developed to suit Google's needs.
    3. Public changes of general usefulness. Google might find them useful, but doesn't drive their development.
    4. Maintaining -mm and signing off and merging other people's stuff

    Points 1 and 2 can be considered a result of Andrew's employment at Google. Points 3 and 4 would happen even if he weren't employed at Google. From my understanding, the vast majority of Andrew's work is point 4 (that's why he's listed under non-author signoffs at 6%, along with Google). Both Andrew's and Google's commit-author contributions are below 0.9%.

    So what we can derive from the data in the article, assuming it's accurate, is:

    • Google's employees as a whole authored less than 0.9% of the changes that went into 2.6.31
    • Andrew authored less than 0.8% of the 2.6.31 changes
    • Andrew signed off on 6% of the 2.6.31 changes
    • Besides Andrew, 3 other changes were signed off by Google employees (that's like .03%)

    So no, Google doesn't contribute much to the kernel. Having Andrew on board gives them some presence and credibility in kernel-land, but they don't actually author much public kernel code. Hiring someone to keep doing what they were already doing doesn't make you a kernel contributor.

  • DTrace (Score:2, Informative)

    by Anonymous Coward on Saturday November 07, 2009 @08:26PM (#30018008)

    They monitor all disk and network traffic, record it, and use it for analyzing their operations later on. Hooks have been added to let them associate all disk I/O back to applications - including asynchronous writeback I/O.

    I. Want. This.

    DTrace code:

    #pragma D option quiet

    /* on every I/O start, sum bytes by device, process name, and PID */
    io:::start
    {
        @[args[1]->dev_statname, execname, pid] = sum(args[0]->b_bcount);
    }

    /* on exit (Ctrl-C), print the aggregated totals */
    END
    {
        printf("%10s %20s %10s %15s\n", "DEVICE", "APP", "PID", "BYTES");
        printa("%10s %20s %10d %15@d\n", @);
    }

    Output:

    # dtrace -s ./whoio.d
    ^C
        DEVICE                  APP        PID           BYTES
         cmdk0                   cp        790         1515520
           sd2                   cp        790         1527808

    More examples at:

    http://wikis.sun.com/display/DTrace/io+Provider
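
    On Linux, rough per-process equivalents of these numbers are exposed in /proc/<pid>/io when the kernel is built with task I/O accounting. A minimal C sketch (my own illustration, assuming CONFIG_TASK_IO_ACCOUNTING and permission to read the target process, not something from the wiki above) that prints the block-device read/write byte counts for one PID:

    /* procio.c - minimal sketch: read the I/O counters in /proc/<pid>/io.
     * Build: cc -o procio procio.c    Run: ./procio 790
     */
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        char path[64], line[128];
        unsigned long long rd = 0, wr = 0;
        FILE *f;

        if (argc != 2) {
            fprintf(stderr, "usage: %s <pid>\n", argv[0]);
            return 1;
        }
        snprintf(path, sizeof(path), "/proc/%s/io", argv[1]);
        if ((f = fopen(path, "r")) == NULL) {
            perror(path);
            return 1;
        }
        while (fgets(line, sizeof(line), f)) {
            /* read_bytes/write_bytes count I/O that actually hit storage */
            sscanf(line, "read_bytes: %llu", &rd);
            sscanf(line, "write_bytes: %llu", &wr);
        }
        fclose(f);
        printf("pid %s: %llu bytes read, %llu bytes written (block layer)\n",
               argv[1], rd, wr);
        return 0;
    }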

  • Re:The Win32 Way (Score:1, Informative)

    by Anonymous Coward on Saturday November 07, 2009 @09:32PM (#30018394)

    Unless you run Linux!

    " By default, Linux follows an optimistic memory allocation strategy. This
    means that when malloc() returns non-NULL there is no guarantee that the memory really is available. This is a really bad bug. In case it turns out that the system is out of memory, one or more processes will be killed by the infamous OOM killer. In case Linux is employed under circumstances where it would be less desirable to suddenly lose some randomly picked processes, and moreover the kernel version is sufficiently recent, one can switch off this overcommitting behavior using a command like:

            # echo 2 > /proc/sys/vm/overcommit_memory
    "

  • Re:A New Culture (Score:3, Informative)

    by trytoguess ( 875793 ) on Sunday November 08, 2009 @04:16AM (#30019918)
    The Amish don't avoid technology based on a point on a timeline. They believe in maintaining a certain lifestyle (strong family bonds, avoiding things that promote sloth, luxury, or vanity, etc.), and much tech is seen as disruptive to such things. What is and isn't OK is debated, tweaked, and constantly modified depending on which Amish group you're dealing with. This [wikipedia.org] is a good place for more info.

    Fair chance you were just joking, but I figure, why not go on an info dump?
  • Re:Tough (Score:3, Informative)

    by bheekling ( 976077 ) on Sunday November 08, 2009 @10:32AM (#30021542)
    If you think udev and devtmpfs conflict, you don't know what each of them is supposed to do.

    If you read [lwn.net] about [lwn.net] them, you'd know that devtmpfs just populates /dev as devices are discovered by the kernel during boot. Which means udev doesn't have to spend several seconds parsing /sys to populate /dev with information the kernel already had.

    Now, during init, udev's job is to parse udev rules, apply user configuration, and fix the permissions of nodes in /dev. Afterwards it also monitors device additions and generates events that apps can subscribe to (recent versions added a GObject interface too), and adds device nodes according to rules, if any.

    In essence, devtmpfs's job is to allow a bootable system without the need to maintain a static /dev or depend on udev for a recovery shell.

    devfs was bad, really bad because there was no naming system back then, and every driver did something different causing utter chaos (which led to different distros patching the kernel in different ways to change the node names). Now there's uniformity, and the kernel knows what to call the basic device nodes created by the drivers.
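
    For the "apps can monitor" part, here's a minimal libudev sketch (the block-device filter and the output format are my own choices, not from the posts above) that listens for the events udev emits after its rules have run:

    /* udev_watch.c - illustrative sketch: print udev events for block devices.
     * Build: cc -o udev_watch udev_watch.c -ludev
     */
    #include <stdio.h>
    #include <poll.h>
    #include <libudev.h>

    int main(void)
    {
        struct udev *udev = udev_new();
        struct udev_monitor *mon;
        struct udev_device *dev;
        struct pollfd pfd;

        if (!udev) {
            fprintf(stderr, "udev_new failed\n");
            return 1;
        }

        /* "udev" = events after rules have run; "kernel" would be raw uevents */
        mon = udev_monitor_new_from_netlink(udev, "udev");
        udev_monitor_filter_add_match_subsystem_devtype(mon, "block", NULL);
        udev_monitor_enable_receiving(mon);

        pfd.fd = udev_monitor_get_fd(mon);
        pfd.events = POLLIN;

        for (;;) {
            if (poll(&pfd, 1, -1) <= 0)    /* wait for the next event */
                continue;
            dev = udev_monitor_receive_device(mon);
            if (!dev)
                continue;
            printf("%s %s\n", udev_device_get_action(dev),
                   udev_device_get_devnode(dev) ?
                   udev_device_get_devnode(dev) : "(no device node)");
            udev_device_unref(dev);
        }
    }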
  • Re:Solaris (Score:3, Informative)

    by T-Ranger ( 10520 ) <jeffw@NoSPAm.chebucto.ns.ca> on Sunday November 08, 2009 @12:00PM (#30022330) Homepage
    And yet, tar is still broken. Well, maybe not today, but it sure as fuck was in 1994.

    Pick your poison.
