
New Tool to Track Kernel Testing Time

mu22le writes "Andrea Arcangeli has created a new tool, 'klive', to automatically track the amount of testing that each kernel gets before release. According to Kernel Traffic, 'there was some discussion [on making it a kernel config option] that public perception might put this in the "spyware" category,' but the ability to track a kernel's usage and reliability would still be valuable to both developers and users."
  • by ReformedExCon ( 897248 ) <reformed.excon@gmail.com> on Wednesday September 07, 2005 @01:12PM (#13501208)
    They seem to be taking system stats and system uptimes and presenting them in a hard-to-understand table. Is that tracking testing?

    If I turn on my computer and don't touch it for a year, it will have excellent uptime, but it doesn't really test very much. Same too, if I just start up Apache and let it do its thing.

    Testing is a very important part of any development cycle and testing metrics are very useful in determining the quality of the delivered product. However, I've never heard of "testing time" used as a metric. Maybe "coverage" or "bugs over time", but the amount of time itself is never really a concern.

    From what I've seen of the Linux kernel (just downloading the source from kernel.org and browsing through it), there doesn't seem to be much in the way of actual debug code thoughtfully and diligently placed throughout it. There is a little debug code scattered here and there, but for the most part it seems like the developers just expect it to work without error.

    Nothing wrong with that attitude, if reality backs it up. And luckily, with Linux, reality is right there to prove the developers correct.
    • by garcia ( 6573 ) on Wednesday September 07, 2005 @01:14PM (#13501228)
      If I turn on my computer and don't touch it for a year, it will have excellent uptime, but it doesn't really test very much. Same too, if I just start up Apache and let it do its thing.

      So? The theoretical number of users that will be doing that sort of operation should be outnumbered by those that use it for "normal, day to day operations".

      In the end it would even itself out.

      If they make the kernel option totally opt-in, which is the right way to go, most people won't use it and only power-users will enable the function, which will end up with the results you mentioned (or will it?)

      An interesting debate but at least they are willing to 100% respect the rights of their users.
      • by Anonymous Coward
        "In the end it would even itself out."

        That doesn't make it useful. So what if 400 hours is spent testing 10% of the kernel interfaces? What about the other 90%? There is no accounting of which features are tested and which are not.

        Time tested is about as useful as the number of votes for on-line polls.
    • You're right on all counts, but think of the anti-FUD capabilities here. There is a really cool pseudo-logical argument to make quite quickly if this tool comes into heavy use:

      1. Everyone who uses linux is a complete nerd. (Common knowledge, doesn't even HAVE to be true)
      2. Nerds know lots about computers and how to keep them working.
      3. Nerds have run linux for X hours, and all these hours have been pretty hardcore QA time.
      4. All this time has been documented.
      5. Microsoft won't tell you how much te
    • Besides "sending" uptime for statistical purposes, it also sends your kernel config options (AFAIK). Thus, if your average uptime is around 1 hour (and you're not using reboot or halt), it means some part of your kernel is screwed.

      Having a large number of submitting computers can help track down the cause (that is, which kernel config options in conjunction with which hardware & software).
    • Coverage, hmm. An interesting idea would be distributed code coverage testing.

      But it has two downsides:

      1) At least a line-based coverage system slows the system down considerably.
      2) The amount of data to send per computer would probably be quite large.

      Still, I think it's an interesting idea. Maybe it could be manually turned on for code that has received less coverage from other people, or for some non-speed-critical drivers, etc...
  • That would be the difference between this idea and spyware.

    I don't particularly care if someone is getting anonymous data about my usage of the linux kernel for most of my boxen. It'll help improve the performance with good accurate real world information. However I don't want some sensitive boxen that I am responsible for to output data to any other source for good or ill.

    So if I can choose, I'll be happy.
    • If a 3rd party releases a kernel with modifications that allow them to track you without your knowing, how nice for their revenue! Imagine Red Hat releasing a kernel for Fedora that gives feedback to companies on their users' computing habits... If all of this is done at the kernel level, it will be too hard to track what's going on.
      • Yes, a software company _could_ make a modification and deploy it in the kernel as you say. They could do that now for all you know.

        But the beauty of open source is that you can find out about things like that by looking at the code. Using their compile-time options, build the kernel yourself and you should get the same size they did. If it's different, something is wrong.

        And as soon as a company is discovered doing something like that (and in this community they would be discovered sooner than later) the
  • Tracking Kernel Usage
    http://kerneltrap.org/node/5606 [kerneltrap.org]
  • Hmmm... (Score:3, Funny)

    by Mondoz ( 672060 ) on Wednesday September 07, 2005 @01:18PM (#13501260)
    I wonder if this is similar to the tool used in my microwave to track Kernel popping time.
  • by 6031769 ( 829845 )
    Keeping this as an external script is definitely the way forward. As pointed out, having a kernel flag and especially having the possibility of it defaulting to YES is a step too far IMHO.

    This is definitely a very useful system, however, and I for one would very much like to see something similar for distributions (i.e. not just the kernel, but the whole damn caboodle).
    • Debian has "popularity contest", which is not a testing tool but reports to Debian which programs and packages you use (presumably using file atime, though I haven't looked at it... and I often mount my disks with noatime). This data is intended to be used to determine which packages belong on "disk 1" and which should be bumped to other disks.

  • by Biomechanical ( 829805 ) on Wednesday September 07, 2005 @01:23PM (#13501297) Homepage

    And I don't think it could be thought of as spyware.

    Spyware is supposed to be unknowingly reporting information about you, whether it was mistakenly installed by you or it crept in from somewhere else.

    An application, or kernel option you flick on like a switch, which you install, and that reports information you know about, to people you understand are going to use that information, can't be called spyware unless it also happened to report how much pr0n you have as well as the kernel's amount of usage.

    I think it would be a neat option to have in the kernel in general. Off by default, all us geeks who want to say "look! here! I'm running Linux!" could turn it on and it could report our uptimes and what kernels we're running.

    We could "stand up and be counted" to show our support for Linux and give the various distributions a rough idea of what we think about them.

    • Without a sufficient explanation to the user, and with the default set to "YES", it could indeed report more information about you than you wish to allow.
    • by garcia ( 6573 ) on Wednesday September 07, 2005 @01:33PM (#13501374)
      And I don't think it could be thought of as spyware.

      Spyware is supposed to be unknowingly reporting information about you, whether it was mistakenly installed by you or it crept in from somewhere else.


      The typical Linux user won't think it's spyware, no, but those working to move Linux towards a larger market want to be certain that newer users don't ever confuse the two.

      Unfortunately, this *could* be confused with Spyware -- especially after a cute little Microsoft funded "research" item gets posted to ZDnet or news.com.com.

      Linux Kernel Includes Spyware Reporting Your Usage Habits!

      And don't think for one second that any backpedaling by the kernel gurus could outsmart the Microsoft FUD team.
      • Exactly.

        And think about if things were backwards:

        New Tool to Track Vista Testing Time

        /. would have a cow if this were not an option and disabled by default. (Actually, /. would probably have a cow no matter what.)

        While a usage reporting tool might be a nice idea, the kernel folks better think long and hard before it's added and enabled by default.

    • That's well and good for everyone who compiles their own kernel.

      I'd wager that the majority of Linux desktop users get binary kernels from their distros. They're not savvy enough to tell the difference, and they've been burned once (MS Windows).

      If it's included as default in a distro, many desktop users won't know how to turn it on/off via /etc/sysctl.conf.

      And IMHO, it sets up the "slippery slope" argument. :(
    • There's no reason for an OS kernel to be reporting anything to any third-party, period.

      If the kernel people want to test, they should reinstitute their old development policy, where there was a "testing" kernel & a "production" kernel.
  • by sednet ( 6179 ) on Wednesday September 07, 2005 @01:23PM (#13501299) Homepage
    If you download and install it as of 10am PST today, it's going to try to install a cron job that begins:
    -*/10 * * * * ps x | grep...
    which vixie cron (and presumably others) rejects as invalid. I just changed it to run every 10 minutes like:
    */10 * * * * ps x | grep...
    hth
  • Hmmm.... (Score:1, Offtopic)

    I didn't know Phil Collins had a son.
  • I thought there already was an article about klive in slashdot, but I might be wrong too. Already installed it ^-^
    • How could that be possible? It isn't like the editors here just post anything people submit without checking if the article was already posted.
  • by fimbulvetr ( 598306 ) on Wednesday September 07, 2005 @01:34PM (#13501388)
    I think this is a fine idea - tracking and all - and I've been running klive since I saw it on kernel trap last week. However, I think some people are correct when they question how uptime counts as reliability. It doesn't, in the sense of testing load and the like, but it does in the sense that it takes a whole lot of kernel reliability/stability to boot in the first place, and it takes a bit more to just gain uptime.

    Personally, I would like to see it as an option in the kernel - but I'd like it to be off by default. I'd like the statistics to be available to everyone - *NOT* IP addresses, hostnames, etc., but rather version, compiler, memory and load.

    While I'm fine with just running some guy's software for now, it's gonna turn into a huge mess. What happens if there's a bug? How's he gonna get it distributed to everyone? What if they want to track something else?
    • I don't know if you had a look at the bash script, but all it does is download and run a python-twisted script from cron - which means all a user has to do is re-run the bash script with the --install option and it'll pull down the new version.
      • I use the klive.tac script and it contains stuff like:

        PUSH_INTERVAL = 60*10 # start with one push every 10 minutes
        PUSH_INTERVAL_MAX = 60*60*24 # max out the backoff at 1 push per day
        PUSH_INTERVAL_BACKOFF = 1.25

        SERVER = 'klive.cpushare.com'

        PORT = 4921
        ...
        Interesting, I see the bash script now, but I don't remember seeing it last week.
        Thanks for pointing that out.
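
        If those constants mean what their names and comments suggest - a simple multiplicative backoff, starting at one push every ten minutes and capping at one push per day - the schedule would grow roughly like this. That's only a guess from the variable names, not something verified against the klive source:

        PUSH_INTERVAL = 60 * 10           # start: one push every 10 minutes
        PUSH_INTERVAL_MAX = 60 * 60 * 24  # cap: one push per day
        PUSH_INTERVAL_BACKOFF = 1.25

        interval = PUSH_INTERVAL
        elapsed = 0
        for push in range(1, 16):
            elapsed += interval
            print("push %2d at ~%4.1f h (interval %5.0f s)" % (push, elapsed / 3600.0, interval))
            interval = min(interval * PUSH_INTERVAL_BACKOFF, PUSH_INTERVAL_MAX)

        With a 1.25 backoff it would take roughly two dozen pushes before the interval hits the one-per-day cap.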
    • Personally, I would like to see it as an option in the kernel

      I disagree. This is nothing other than a user-mode utility that greps /proc and sends the data back to the server via UDP. How does this belong in the kernel? How can you justify the kernel communicating with anything other than local programs/devices?

      Just curious,
      Enjoy.
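
      (For what it's worth, "greps /proc and sends it over UDP" is only a handful of lines of userland code. A rough sketch follows; the payload format and the destination hostname are made up, and the port is just the one shown in the klive.tac snippet above - this is not klive's actual wire protocol.)

        import platform
        import socket

        def first_line(path):
            # Read the first line of a /proc file; return "" if it can't be opened.
            try:
                with open(path) as f:
                    return f.readline().strip()
            except OSError:
                return ""

        uptime = first_line("/proc/uptime").split()
        payload = "kernel=%s uptime=%s load=%s" % (
            platform.release(),              # same information as `uname -r`
            uptime[0] if uptime else "0",    # seconds since boot
            first_line("/proc/loadavg"),
        )

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(payload.encode(), ("stats.example.org", 4921))  # placeholder host
        sock.close()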
      • You know what - maybe you're right. The entire idea of the kernel is to separate userland from kernel, and building something that greps /proc into the kernel is a bad idea.
        Maybe I should rethink my assertion - we spent a long time getting userland stuff out of the kernel - the last thing we wanna do is cram it back in.
        What could we do though? Are there any standards? Do I trust this guy's Python script? Do I trust his coding? Shouldn't this maybe be hosted on kernel.org? Maybe this is the start of a new race
        • What could we do though? Are there any standards? Do I trust this guy's Python script? Do I trust his coding? Shouldn't this maybe be hosted on kernel.org? Maybe this is the start of a new race to see who gets the better "tracker", faster.
          Awww fuck it, back to more beer.


          Now drinking beer here myself, cheers :)

          Andrea Arcangeli is a well-known kernel programmer and a better one than I am, so I kinda trust him. I downloaded his KLive files and viewed the source. I was annoyed to find a bunch of 'grep ' calls to /proc
  • ...it's called 'uptime'.
    • now you just need to allow uptime to run automatically when you boot and send that information over the network (together with uname's output) to some database that's publicly available...

      klive seems to be a nice thing. but, it feels like old news for some reason (perhaps it's because i read it some time ago on kerneltrap but an article on /. wasn't really submitted?).

      and the last factor, perhaps just flame-war-like, is that it's written in python! yikes! but, if it serves the purpose and it's done well.. w
      • now you just need to allow uptime to run automatically when you boot
        Running a program measuring the time since the last boot in minutes immediately after the boot seems pretty useless to me...
  • Well screw that. I was going to help by installing it but "python setup.py install" wants Zope installed. Fergit it!
  • A similar project has been going on at the Linux Counter [li.org] for years - not every 10 minutes, and not specifically for beta kernels, but still, it's fun to watch the report [li.org] once in a while.


    Tidbit: Linux 2.6 is now running on more than half the computers tracked.

  • Of course it's not spyware for a linux power-user. We tweak our kernels all the time: "Oh, damn it, my new bluetooth device needs the module bt_frobniz, guess I'll just make menuconfig, etc. to install it."

    However, linux is growing up. We have a number of distros out there that are supposedly targeting new or casual users, those that might never fiddle with their .config (even if they do upgrade their kernel, it's probably through an automated tool).

    • As an option in the kernel, it's not spyware.
    • As an
  • Twisted (Score:4, Insightful)

    by rongage ( 237813 ) on Wednesday September 07, 2005 @02:24PM (#13501871)

    You know, I don't know what universe these folk are living in, but this "python-twisted" package or whatever it is called is absolutely NOT included in every Linux distribution.

    Slackware - the oldest living Linux distribution - does NOT have this twisted thing in it.

    You would think that the developers would use a standard programming language - like C - for something like this...(gr&d)

    • Even worse, this is a prime example of over-engineering. All the client program does is use Python to exec 'grep' calls against certain /proc areas and record the information. It then uses the Twisted interface to send the information via UDP to the central repository.

      I hate to tell the author, but we have been doing this with a twenty-line shell script on all our Linux clients for years. Cron calls the shell script, the shell script calls grep/cut/wc to gather info from /proc, and the script calls wget and pushes the data down to
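
      (What the parent describes - cron runs a small script, the script reads a few /proc files and pushes the result to a central server - comes out to roughly the following. This is sketched in Python to match the other snippets in this thread rather than the poster's grep/cut/wc/wget shell version, and the collector URL is invented.)

        import platform
        import urllib.parse
        import urllib.request

        def slurp(path):
            # Read a /proc file whole; return "" if it can't be opened.
            try:
                with open(path) as f:
                    return f.read().strip()
            except OSError:
                return ""

        uptime = slurp("/proc/uptime").split()
        report = {
            "kernel": platform.release(),
            "uptime": uptime[0] if uptime else "0",
            "load": slurp("/proc/loadavg"),
        }

        # Roughly what `wget --post-data=...` does in the shell version.
        data = urllib.parse.urlencode(report).encode()
        urllib.request.urlopen("http://stats.example.org/submit", data=data)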
      • Over-engineering is what Open Source is known for.

        That, shoddy documentation, unfixed bugs (as long as they only impact unimportant people - see the X server locking up your system on a switch to a text VC, or the Adaptec SCSI driver blocking an APM/ACPI sleep attempt and going into a permanent 100% CPU spin loop, for example), abandoned projects, and re-inventing the wheel 1000 times.

        Look, Slashdot still hasn't fixed the "It's been x minutes since you last posted" bug. For months!

        • I can't quite see where you're coming from. What you stated hasn't been my experience. Under Linux/BSD the choice of software is still up to the user.

          BTW, the internet has made it easier to see people's mistakes (failed OSS projects). It also makes it easier to see the successes (GIMP, Linux, etc.).

          No disrespect intended,
          Enjoy.

          • Agreed, but our failures are starting to become an extreme liability in the court of public opinion.

            Linux would likely have 15% of the desktop market, or at the very least 50% of what Apple has now, if these issues were addressed.

            Linux has the geeks as customers.
            Now with every increase of market share, they need to pick up customers who are more and more demanding, and less and less forgiving.

            And even geeks get tired of fiddling with stuff just to get it to work. Also, someone interested in TCP/IP, or game writ
  • The uptime metric alone seems fairly useless past the point where it's a few weeks on a particular HW platform or with a particular device, because the uptime of an idle machine (or one that is just running cc or emacs) should be so long (months, depending on the quality of the local power infrastructure) that it's more likely to be rebooted to install a new OS revision than because the OS failed. Anything less is so far from production quality that it should never have gotten out of alpha.

    Coupled with meaningfu

  • I read the headline as "New Tool Track to Kernel Testing Time." I like Tool.
  • Honestly. Maybe you don't like what it's doing, but you know what it's doing, and you can change it. Or, if you aren't a programmer and can't afford one, you can simply disable the feature.
  • How can something that you have to compile and configure yourself be spyware?
