
Top Linux Developers Losing the Will To Code?

E5Rebel writes that Don Marti has a piece observing that "core Linux developers are finding themselves managing and checking, rather than coding, as the number of kernel contributors grows and the contributor network becomes more complex."
  • This is Bad? (Score:4, Insightful)

    by Rachel Lucid ( 964267 ) on Monday July 02, 2007 @11:24AM (#19717827) Homepage Journal
    They're probably getting older, too.

    Perhaps there's a reason that the higher you get up the management ladder, the less coding you do, after all...
  • by Anonymous Coward on Monday July 02, 2007 @11:25AM (#19717835)
    The talented get promoted into management because they care about what's happening and how it gets done, and they know what's going on. This doesn't equate to "I don't feel like coding," as the article suggests.

    "That's all I do, is read patches these days," he said during a discussion at the Linux Symposium in Ottawa last month.

    This doesn't read as "I don't want to code"; it reads as "I don't have time to code."
  • So? (Score:3, Insightful)

    by Actually, I do RTFA ( 1058596 ) on Monday July 02, 2007 @11:26AM (#19717851)

    This is what happens as projects get bigger. It's not that they lose the "will to code"; it's that they spend all their time as managers of other coders. There's more to developing a large codebase than writing the code, after all.

  • Git (Score:3, Insightful)

    by CarpetShark ( 865376 ) on Monday July 02, 2007 @11:26AM (#19717857)
    Isn't this what Linus said that Git was supposed to fix?

    I wonder whether the rest are using it... I wonder whether the rest are even delegating.
  • by postbigbang ( 761081 ) on Monday July 02, 2007 @11:28AM (#19717891)
    New projects open all the time. As the FOSS code base increases, it's easier to move code around. Once one takes on responsibility for a project, the ratio of new code to maintenance code is always going to change. And there are thousands of projects where someone gets bored, moves on, or whatever, and the project then becomes stuck in the mud. SourceForge is full of them. It doesn't mean there's anything wrong; it's the fits and starts of how coding works.

    Nothing to worry about. It's natural.
  • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Monday July 02, 2007 @11:37AM (#19718015)
    ... how has the amount of code they actually approve and that gets into the kernel changed?

    Once you become a guru coder, you may write less code yourself, but you may approve more code overall. That would be code written by other people: you check it and tell them where the bugs are, and they fix the bugs and re-submit the code.

    When the code is up to your standards (and the evidence is the flat rate of bugs) then the code is included in the kernel.

    There was a time (long ago) when Linus wrote ALL of the code himself. If you look at just that metric, Linus barely writes anything anymore (percentage-wise).
  • by flaming-opus ( 8186 ) on Monday July 02, 2007 @11:47AM (#19718133)
    This assumes that the kernel is a single common software project.

    It isn't. A few filesystem developers might have to make changes to elevator or allocator code, but most developers of XXXXfs don't really need to make changes outside of that directory. Developers writing a driver for the XXXX model SCSI controller don't really need to interact with the people mucking with ALSA, or GART, or whatever.

    The kernel might be contained in a single source repository, but it's really a few hundred, mostly-independent software projects.
  • Re:Book needed (Score:5, Insightful)

    by MROD ( 101561 ) on Monday July 02, 2007 @11:55AM (#19718265) Homepage
    The problem is that with almost every minor kernel version revision the driver interface changes, so any book that goes into print will already be almost worthless by the time it gets into the shops.

    This is why the current fluid kernel/driver interface specification is unsustainable and unmanageable in the long term (and why ultimately the kernel development process will bog down).

    The solution? Simple: separate the core kernel from the drivers and produce a specification for the interface which only changes with the major kernel version. Then the kernel developers can concentrate on the pure internals of the kernel, which no-one but them should need to know about, and the work which currently goes into recoding the hundreds of drivers each time there's a tweak to the driver interface could be redirected to more productive efforts... and the patch load should be lower as well.

    There is a side benefit to this as well: the energy barrier for 3rd parties to write drivers would be lower, and hence it would be far more likely that they'd actually write them, rather than management seeing the driver maintenance and support costs as too high to bother with because of the constant code churn.

    I know that there are many people who will vehemently disagree with this because of the dogma saying, "the kernel hackers know best about the kernel, so they should be the same people as those who write the drivers." There will also be those who believe the dogma of, "but the driver interface needs to change often so as to be Better(tm), so you can't set the interface in stone."
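
    A minimal sketch of the kind of versioned interface proposed above (all names here are hypothetical, not an actual kernel API):

        /* Hypothetical "stable driver interface": the operations table
         * is frozen for the lifetime of a major kernel version, so a
         * driver only needs reworking when the major version changes. */
        #define KDI_MAJOR_VERSION 2     /* bumped only at major releases */

        struct kdi_driver_ops {
                unsigned int version;   /* must equal KDI_MAJOR_VERSION */
                int  (*probe)(void *device);
                void (*remove)(void *device);
                int  (*suspend)(void *device);
                int  (*resume)(void *device);
        };

        /* The core would refuse drivers built against any other major
         * version, instead of silently breaking them. */
        int kdi_register_driver(const struct kdi_driver_ops *ops);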
  • by hikerhat ( 678157 ) on Monday July 02, 2007 @12:00PM (#19718335)
    This is how it always works. Once you have enough experience doing anything, from building houses to writing code, you start to spend more time shepherding the less experienced and less time implementing. It's the circle of life. I didn't rtfa.
  • by Fujisawa Sensei ( 207127 ) on Monday July 02, 2007 @12:26PM (#19718735) Journal

    Perhaps you need to get a little deeper into kernel development to find out why Eiffel is a bad choice:

    • Exceptions in kernel space suck. You can look through AROS ML archives for an example of this.
    • In kernel space you don't have access to the standard C library - malloc(), for instance.
    • Not only do you not have access to standard C functions; you also don't want your memory managed for you - that's the kernel's job.

    In summary, languages that do stuff for you behind the scenes suck for kernel development.
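
    A minimal sketch of what managing your own memory looks like in kernel-style C (the struct and helpers are made up for illustration; kmalloc()/kfree() are the kernel's real allocator calls):

        #include <linux/slab.h>         /* kmalloc(), kfree() */

        struct foo_state {
                int unit;
                char name[16];
        };

        static struct foo_state *foo_alloc(int unit)
        {
                /* GFP_KERNEL: the allocation may sleep, so this is only
                 * safe in process context -- the caller must know that. */
                struct foo_state *s = kmalloc(sizeof(*s), GFP_KERNEL);

                if (!s)
                        return NULL;    /* allocations fail; check them */
                s->unit = unit;
                return s;
        }

        static void foo_free(struct foo_state *s)
        {
                kfree(s);               /* nothing frees it behind your back */
        }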

  • Where's the beef? (Score:3, Insightful)

    by billsf ( 34378 ) <billsf@cuba.calyx.nl> on Monday July 02, 2007 @12:46PM (#19718977) Homepage Journal
    People simply tend to get more managerial as they get older. This extra proofreading, checking and review has resulted in a fantastic product. While BSD is my primary computer interest, I've maintained a Linux box since 2.6.16 to follow the most current developments. I'm running 2.6.22 now and have great respect for the way they use SMP to enhance reliability. Short of a hardware failure, it simply doesn't crash. The way I use a computer, getting an hour uptime out of XP would be rather remarkable.

    BillSF

  • by CoughDropAddict ( 40792 ) * on Monday July 02, 2007 @12:47PM (#19718993) Homepage
    That's exactly why it's totally ridiculous that it is contained in a single source repository!

    Need a newer version of the USB subsystem, or a fix to a driver? Have fun downloading a new version of Linux, which has an ungodly number of changes across the entire kernel.

    People tend to think of monolithic/micro kernel only in terms of run-time technical advantages/disadvantages. But equally important IMO is the impact on development processes. With a good microkernel architecture, it would be totally reasonable to think that you could upgrade an entire subsystem (like USB) without touching the rest of the OS. Each subsystem could be run as an independent project with its own releases, and there could be competition for who can make the best subsystem.
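
    A toy sketch of that idea (an invented message protocol, not any real microkernel's API): if this is all the USB subsystem exports, the server behind it can be versioned and released independently.

        /* The entire contract between the USB server and the rest of
         * the system is this small, fixed message format. */
        enum usb_msg_type { USB_MSG_ATTACH, USB_MSG_DETACH, USB_MSG_TRANSFER };

        struct usb_msg {
                enum usb_msg_type type;
                unsigned int device_id;
                void *payload;
                unsigned int len;
        };

        /* One entry point; everything else is internal to the server. */
        int usb_server_handle(const struct usb_msg *in, struct usb_msg *reply);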
  • Antithetical. (Score:2, Insightful)

    by wild_berry ( 448019 ) on Monday July 02, 2007 @12:59PM (#19719149) Journal
    The kernel developers have decided that black-box/proprietary drivers aren't welcome in their kernel and ask that companies submit their drivers as patches to the existing kernel. That's why 52% of the kernel tree is driver code. It also means that the drivers are free-as-in-GPLv2 and can't be withdrawn later on. If they become abandonware, they are freely available to be updated by a third party. I see these as advantages sufficient to support the present Kernel Development method.
  • Re:Git (Score:5, Insightful)

    by Dan Ost ( 415913 ) on Monday July 02, 2007 @01:10PM (#19719299)
    This is actually an example of good management (or, more correctly, management knowing its own limits).

    Bouncing the patch back to the original author is exactly the correct thing to do. There's no way that Linus can be as familiar with the patch code as the person who wrote it, so why would he think that he could do a better job integrating than the original author?
  • by Fractal Dice ( 696349 ) on Monday July 02, 2007 @01:15PM (#19719359) Journal
    They haven't stopped coding, they just code in a higher-level language - just as C can take care of all that dirty assembler for you, a human coder can take care of all that dirty C. You just sit back, watch the code flow past, filter it, and nudge it in the directions you need it to go. It's bleeding-edge technology; it's just that the system requirements are a little steep for most of us to assemble - give it a few decades and I'm sure we'll all be coding this way :)
  • Re:Antithetical. (Score:3, Insightful)

    by MROD ( 101561 ) on Monday July 02, 2007 @01:26PM (#19719483) Homepage
    I didn't actually mention black-box binary-only drivers or even those not released under the GPL; that's a totally separate issue. This is a maintainability issue, about the costs to companies of having to modify their code almost every time there's a minor kernel version change.

    If a company can assign programming resources to a driver project as a one-off, and not have to spend extra resources every few months just because of a change in the kernel interface, then it will look far more kindly on the idea of developing a driver, be it GPL'd or not. Remember, Linux is a small player for the hardware people, and hence only a minimal budget can be allocated to any driver project. Recurring costs (other than for bug fixes) are a major financial barrier.
  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Monday July 02, 2007 @01:40PM (#19719633)
    Comment removed based on user account deletion
  • by mckyj57 ( 116386 ) on Monday July 02, 2007 @02:14PM (#19720009)
    Programmer burnout is a well-known, if not well understood, phenomenon.

    As for age, I don't think it has much to do with burnout. I started a major open-source project after the age of 40, my first big programming project after a career change. (I am one of the few managers who then became a coder.)

    I am now pretty burned out. It isn't that I can't write code -- in fact, I am better than ever. I just don't *want* to write code any more.
  • by sick_soul ( 794596 ) on Monday July 02, 2007 @02:27PM (#19720167)
    > Actually writing code can easily be a generic position,
    > easily swappable

    This is what most companies get wrong.
    Coders are not easily swappable, no matter how much policy
    you try to set up: the differences in skill have a
    tremendous impact on the quality of the code, and if a
    more talented supervisor must always veto/fix the code of their
    underlings, it is a waste of time for everyone involved.

    Only good coders should work on the code base in any position.

    Those who also show management skills should get burdened by
    the additional task of actually managing people and
    distributing work, and they will slowly code less and less,
    while the good coders who do not show management skills
    should just code on.

  • Fundamental Flaws (Score:5, Insightful)

    by rAiNsT0rm ( 877553 ) on Monday July 02, 2007 @02:31PM (#19720243) Homepage
    I spend a lot of time online trying to get through to folks on this issue, but everyone just blows it off. I have been a Linux user/contributor for over 12 years now and have nothing but the best intentions in what I say. The biggest problem is that the only area to have any management and direction is the kernel. The rest is far too chaotic and self-serving to ever become a cohesive system.

    Some examples: OS X. In ten years or so, a fairly small team has taken BSD and turned it into what it is. In over 12 years with Linux, I still see many of the same issues and problems persist... why? Because Apple *focuses* its efforts and the entire project is properly managed and steered. Imagine what the huge amount of OSS talent could accomplish with the same focus and direction.

    Interoperability. Most applications are one-off programs made with no thought or care as to how they fit into the bigger picture. Unification, interoperability, and consistency are very important.

    Fleeting Nature. Projects worked on while in college, hosted on random servers, work/girlfriends/distractions. These all can bring even successful and popular projects down overnight.

    What needs to happen is to work under a single focus to create the most perfect distribution possible, with clearly defined goals and concepts. Democracy, choice, and chaos have their place, and they can still be utilized... just with some oversight and management before things go live. Once there is a very good foundation (such as OS X has now), folks can branch out and work on their own projects and offshoots. I'm not suggesting that all choice needs to be eradicated, just that instead of trying to build a million individual sandcastles on a foundation of Jell-O, we could be building a mansion on a sheet of bedrock.

    The talent is here, the passion is here, the momentum is here... the oversight and direction is not.
  • by zCyl ( 14362 ) on Monday July 02, 2007 @02:45PM (#19720423)

    When the code is up to your standards (and the evidence is the flat rate of bugs) then the code is included in the kernel.

    There was a time (long ago) when Linus wrote ALL of the code himself. If you look at just that metric, Linus barely writes anything anymore (percentage-wise).

    This of course implies that code is now checked more times and more carefully BEFORE inclusion, which is a win for everyone.
  • by rAiNsT0rm ( 877553 ) on Monday July 02, 2007 @04:18PM (#19721541) Homepage
    And you, sir, have summed up the exact problem here. You've glossed over the fact that nothing from NeXTstep just showed up magically working on BSD; it had to be coded from the ground up. The point is that it was done, done well, and in fairly short order.

    You honestly believe that 300 half-working yet-another-whatever apps, each just as buggy as the other 299, are better than 1 or 2 excellent apps with more eyes and talent focused on them? There is a point where a million and one false starts get exasperating for folks to follow. This week it is Compiz, then next you need Beryl, then next week scrap it all and grab Compiz-Fusion, and on and on. That project is actually doing well, too... think of all the ones that don't. Every week there is a new hotness, and the program that was #1 for the task last week is already not being developed anymore and another is. That doesn't help anyone's cause, and the chaos is impossible to hold still long enough to make it truly great.

    So with focus things would be worse, eh? Tell that to Canonical/Ubuntu. They are the closest thing to my theory so far in Linux, and they just got into a ton of Dells and all over the Internet. Why? Because it is a central, slower-moving target. I could code for Sally's-Distro-of-the-Week and have my work made obsolete when she gets a job, a life, or hit by a car, or I can put my effort into a distro with a long-term release, a huge support and user base, a stable and clear goal (for the most part), and some oversight. If you took all the talent from small projects like Puppy, DSL, and Joe-Sixpack's Distro and got them together on one goal, the outcome would be tremendous.

    The problem is ego and tunnel vision. It can be hard to wrangle a bunch of truly talented folks, and sometimes you don't get your way. So you just take your ball and go home to create yet another half-assed distro or app. This isn't about FUN; it is about being part of something greater, and the fun you sacrifice to be part of it, instead of going nowhere by yourself, is repaid tenfold when you get to the end and see what a huge impact you *helped* to create. But that satisfaction is delayed, and not as instant as the perceived fun you get from having a few hundred people download your app from your .edu webpage and give you some direct feedback... you could have a couple million download it and have your name actually attached to a major project.

    Again, I've heard a thousand people react the way you did... it is the knee-jerk reaction, but I guarantee it is the wrong one. Time will bear it out.
  • by pschmied ( 5648 ) on Monday July 02, 2007 @05:40PM (#19722489) Homepage
    Most software shops have dedicated architects. In many cases, those architects can be blowhards with little practical skill, but a good architect can actually do a lot of good for a project. The same can be said of a good project manager.

    The strength / weakness of the Open Source model is that it collapses these structures into a flatter skill space. On the bright side, coders get to scratch their itch and prove that their employers' architects are simply wankers who hold them back. On the down side, those same architects are marginalized from the Open Source community unless they work for a company that sponsors major Linux development (Red Hat, Novell, IBM, etc.). The "shut up and code" ethos of Open Source can create major problems in a project. A lot of times, prolific coders aren't your best coders, and they are often abysmal architects (not always, though!). Another downside of missing architects in Open Source is that most architects tend to hold positions of power in commercial IT shops. I suspect the poor integration of architects / project managers / etc. into the FOSS world is the reason commercial software is as prevalent as it is.

    Think of an olde tyme army that would have had archers, macemen, generals, horsemen, etc. A lot of them were effective because of the separation of tasks and the inherent benefits of combining them. FOSS development tends to lack a lot of those divisions. Consequently, we lose out on some of those modern software development tactics. On the upside we do have a lot of extremely talented foot soldiers.

    There's clearly some happy spot for efficient development. Microsoft isn't it. Neither is FOSS, I believe. But efficiency is not the point of either. Microsoft's goal is to produce stodgy software that doesn't change much. FOSS's "goal" is to spin out ideas rapidly. FOSS has been much more successful by that metric than Microsoft has.

    -Peter
  • by swillden ( 191260 ) * <shawn-ds@willden.org> on Monday July 02, 2007 @07:29PM (#19723437) Journal

    You've glossed over the fact that nothing from NeXTstep just showed up magically working on BSD, it had to be coded from the ground up new.

    I'm not sure whether you're talking about the original development effort or something more recent. Just to be clear, NeXTstep was always on BSD.

    In any case, your rant completely misses the point. You came closest (by virtue of being most wrong) when you said:

    This isn't about FUN it is about being part of something greater, and the fun you sacrifice to be part of it instead of going nowhere by yourself is reaped tenfold when you get to the end and see what a huge impact you *helped* to create.

    There are a lot of OSS developers, and they all have their own reasons, but FUN is a *huge* part of it for most of them. For example, I'm a professional software developer, have been for nearly 20 years, and I often find that the stuff I work on at work isn't all that interesting. So, in the evenings when I'm not busy with all of the things that come with having a life and too many hobbies, I write OSS code. I pick things that I think are interesting and fun and I contribute there.

    There are other motives, of course, and I think it's fantastic that Canonical, IBM, HP, Red Hat, Novell, Sun, MySQL, Trolltech and dozens of other companies are paying people to work on OSS full-time. I think the work that Canonical has done, by providing an intense focus on polishing rough edges, is excellent and very valuable. BUT... that in no way devalues the contributions of random hackers scratching their personal itches, because for all that the full-timers do, it's the random hackers who do 90% of the work, and it's due to the efforts of the random hackers that the guys at Canonical, etc., have a diamond they can polish. The polish is useful, and unlikely to come from random itch-scratchers, but it's only polish.

    There simply isn't enough money in the OSS business model to justify full-blown from-scratch development a la Apple and Microsoft. The key to making the whole thing work is all of that chaotic development that you decry. Without that, Canonical is nothing, Red Hat is nothing.

    And you simply *can't* direct, or manage, or focus the efforts of people who are scratching their own itches, entertaining themselves on their own time. The moment you begin to direct them, their work ceases to be enjoyable and becomes just another form of the same crap they get at the office. To achieve any sort of critical mass, OSS needs both the chaos and the people who grab the chaos and try to order and organize it. Even that is an oversimplification, because even the organization of the software benefits from the chaos of multiple approaches to organization.

    You'd like to force-fit the bazaar into the cathedral, but actually doing it would destroy it. The status quo is better: The bazaar produces all kinds of weird and wacky things and folks from nearby cathedrals cherry-pick the best, polish it up and integrate it into a cohesive whole (and also offer the polished pieces at the bazaar, plus a few of their own).

  • by bug1 ( 96678 ) on Tuesday July 03, 2007 @04:48AM (#19727937)
    Speaking from my own experience, I found that what motivates me to code has changed over time.

    First, it was the technical challenge;
    second, the challenge was to earn money from my skill (not very successfully);
    now, the challenge for me is design.

    I see free software as an art; code is just the medium in which a design is implemented.

    I don't care if my project has fancy features; I don't care if time spent on it can be justified from a commercial perspective. It's just about solving a problem in a way that people (developers) can one day look at and realise it's the way it should be.

    Art is really what separates commercial code from free software: if you spent an hour thinking about a more appropriate variable or function name on a commercial project, your boss wouldn't be impressed. But it's little things like this (and design) that lower the barrier to participation, enabling projects to get the "many eyes" that improve them.

    Free software programmers are like artistic painters.
    Commercial programmers are like signwriters.

    Deadlines can only lessen the quality of code.
