Video Interview With Linus On Linux 2.7

daria42 writes "ZDNet Australia has put up a video interview of Linux creator Linus Torvalds talking about the kernel development process, explaining why the unexpected resilience of kernel version 2.6 has delayed the move to 2.7." From the interview: "One of the original worries was that we would not be able to make big changes within the confines of the development model... I always said that if there is something so fundamental that everything will break then we will start at 2.7 at that point... We have been able to do fairly invasive things even while not actually destabilizing the kernel... Having stable and unstable in parallel: I think it used to be a great model, and I think we may see that the kernel has actually become more mature and stable and it just doesn't seem to be that great a model, for the kernel."
This discussion has been archived. No new comments can be posted.

  • by Wonko the Sane ( 25252 ) * on Tuesday January 16, 2007 @10:09PM (#17640888) Journal
    How difficult is it to release a video about Linux kernel development in a format that's easy for people running Linux to watch? At least use Flash 7... no need to blow their minds talking about Ogg/Theora.
    • Ummmm (Score:5, Informative)

      by aussersterne ( 212916 ) on Tuesday January 16, 2007 @10:13PM (#17640922) Homepage
      Visit the download page [adobe.com] from a Linux browser and you can download Flash 9 for Linux now. And P.S. the beta was out for months before this was...
      • Re: (Score:3, Informative)

        If it's not working on the Mac, download and install. There are separate versions for Intel and PPC.
      • I have Flash 9 and it still tells me I need 8 or above. I think there's something about Firefox or Linux that's preventing it from detecting my version correctly. Does anyone have a YouTube link?
      • Not only that, but Flash 9 has hit the Ubuntu backports already, probably along with many other distributions. The video played perfectly for me, and I didn't even know I had Flash 9 until I saw the above comment and checked my Flash version.
      • Re: (Score:3, Insightful)

        by zsau ( 266209 )
        Umm... it still doesn't work from PPC/Linux. And considering that's the platform Torvalds develops on, you'd think they could at least release the video in a format he could watch from his own computer. It's not hard to release a video in MPEG format, people!
        • Re: (Score:3, Funny)

          by xtracto ( 837672 )
          And considering that's the platform Torvalds develops on, you'd think they could at least release the video in a format he could watch from his own computer

          Duh! Everyone knows Mr. Torvalds develops on a PPC/Linux machine, but his home machine is still a Wintel one.

          *runs*
      • by Kjella ( 173770 )
        And P.S. the beta was out for months before this was...

        Yes, it was and I installed it because of how horribly outdated Flash 7 was, and it was pre-alpha quality. If you navigated to another page before the current one was done loading, your browser would freeze 50% of the time with no recourse but to kill it.
        • by Nutria ( 679911 )
          Yes, it was and I installed it because of how horribly outdated Flash 7 was, and it was pre-alpha quality.

          Adobe released multiple beta versions of F9. (That's the beauty of getting it through your package manager: no need to constantly recheck the Adobe web site...)

      • by caluml ( 551744 )
        From the page: Adobe Flash Player Download Center Linux (x86)
        Notice that? Not everyone running Linux uses x86. Now shut up.
      • The installer told me:
        "ERROR: Your architecture, 'x86_64', is not supported by the Adobe Flash Player installer."

        What happened to the 64-bit Flash? I seem to remember someone saying it exists.
      • by 3vi1 ( 544505 )
        As someone who's been beta-testing v9b2 on Linux, I think that's a horrible idea. The thing is so buggy it crashes the browser every other Shockwave you open, and sound is even harder to get working than in v7 (if you can believe that).

        If you're a beta-type of person who won't mind your browser crashing/hanging frequently, install it and please give Adobe feedback. However, it's definitely not ready for prime-time at this point.
    • Re: (Score:3, Interesting)

      Flash 9 was released for linux today [adobe.com]. Enjoy :)
      • Talk about timing. It does seem to be x86-only, however. Seems like it will be a while before I can ditch the 32-bit Firefox.
    • by kras ( 807696 )
      I just watched the video with the Flash 9 beta for Linux, no problem. You can find it here.
    • by sulfur ( 1008327 )
      (Un)fortunately, the Flash format is better than other popular video streaming formats like WMV or RealVideo. At least it works with the Flash Player 9 beta (though it crashes Firefox once in a while), so I guess it's better to choose the lesser of two evils here. And please don't tell me about Ogg; I know it's great, but I'm talking about popular video formats.
      • Comment removed based on user account deletion
        • Flash isn't a real video format
          VP6 and H.263 are as real as any other codec. An MPEG-4 codec would be nice, admittedly.
          and usually to compensate for the larger file sizes, they reduce the video quality really badly.
          This is not a limitation of the Flash video system, but a choice by webmasters. Why do you need high bitrates in an interview, anyway?
    • by h2g2bob ( 948006 )
      There is a reason for using Flash 8 for video: VP6 video support, which is much better than the H.263 video in Flash 6.

      Can't get it to work in WINE for some reason though. Perhaps it's my uni proxy.
  • well (Score:5, Funny)

    by User 956 ( 568564 ) on Tuesday January 16, 2007 @10:14PM (#17640936) Homepage
    Having stable and unstable in parallel: I think it used to be a great model

    It certainly works when dual-booting.
  • Video interviews (Score:1, Insightful)

    by Anonymous Coward
    I've never bothered to look at a video interview on the net (partly from often not being able to, partly from just not liking video on my desktop, partly because the moving images distract me from all the multitasking I somehow manage while reading), but if someone could post a transcript of what was said, I'd be sure to read it :)
    • Re:Video interviews (Score:5, Informative)

      by mollymoo ( 202721 ) on Tuesday January 16, 2007 @10:52PM (#17641342) Journal
      if someone could post a transcript of what was said, I'd be sure to read it
      There's really no more to it than what's in the /. summary (for a change). Unless you really want to see Linus trying to remember how long the 2.6 kernel has been out and whether they ever had a 4 month gap between releases, you're not missing much.
  • I can't watch the video on that site, but I'm really not certain what he's trying to say from the text I can read.

    Does he want to sacrifice stability for innovation in kernel 2.7, or does he think things are going fine the way they are right now, with a stable and an unstable kernel?
    • by canyon289 ( 848746 ) on Tuesday January 16, 2007 @10:21PM (#17641026)
      He's basically saying that no one is really developing a 2.7 kernel because 2.6 is extremely stable even with whatever experimentation they've done. There have been times when they've gone over the two-month release cycle because of the "big changes" they've made to the kernel. Unstable-alongside-stable used to be a good model, he says, but it isn't anymore: if there were a 2.7 kernel, they'd have to do all sorts of backporting to get fixes from 2.7 working on 2.6.
      • I thought the odd-numbered kernels were the "run with it" kernels, with the even-numbered kernels left static for bugfixing only.

        How about 2.9, then? Blue sky: how would you design an OS for all the cheap commodity hardware around?
        • Re:Why backport? (Score:4, Informative)

          by Zerathdune ( 912589 ) on Wednesday January 17, 2007 @12:22AM (#17642094) Journal

          That scheme ended when 2.6 came out. The new system consists of 3 or 4 numbers formatted as:

          a.b.c
          or
          a.b.c.d

          a changes only when there is a massive restructuring of the kernel.
          b changes when there are large, sweeping changes, but not of quite the same order as a. (Linus, in the interview, says they'll do a 2.7 when and if they need to make changes large enough to break everything.)
          c changes when new features and/or drivers are added.
          d changes for small bug fixes and security patches; the d number is omitted when the c number has just changed.
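
          To make the scheme concrete, here's a small sketch (Python; the helper names are invented for illustration, not from the interview) that classifies which component changed between two versions under the a.b.c.d convention above:

          ```python
          # Sketch: classify a kernel version bump under the a.b.c[.d]
          # scheme described above. Helper names are made up.

          def parse(version):
              """Split '2.6.16.37' into (2, 6, 16, 37); a missing d becomes 0."""
              parts = [int(p) for p in version.split(".")]
              return tuple(parts + [0] * (4 - len(parts)))

          LEVELS = [
              "a: massive restructuring of the kernel",
              "b: large, sweeping changes (a 2.7-style break)",
              "c: new features and/or drivers",
              "d: small bug fixes and security patches",
          ]

          def classify(old, new):
              o, n = parse(old), parse(new)
              for level, (x, y) in enumerate(zip(o, n)):
                  if x != y:
                      return LEVELS[level]
              return "no change"

          print(classify("2.6.16", "2.6.16.1"))   # d: small bug fixes ...
          print(classify("2.6.16.37", "2.6.17"))  # c: new features ...
          ```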

          • Re: (Score:3, Interesting)

            Very good.

            I'd add an (e) (Oracle has 5 :)

            Seriously, (d) would be for bug fixes/security patches in general; (e) would be for ones that are expected to almost certainly not break anything.

            e-level upgrades: should be nearly 100% safe
            d: should be safe; necessary fixes that could break things (e.g. a security-hole fix that certain programs could have issues with). NO API CHANGES or ADDITIONS!
            c: new features. Usually safe, but not for mission-critical servers. NO API drops/deprecations.
            b: Major upgrade. System tools may
    • by Mogster ( 459037 )
      There used to be a stable branch, e.g. 2.4, and an unstable one, e.g. 2.5. Minor changes made to unstable had to be backported to stable, and it was painful to do that as well as develop the unstable version.

      He is now saying that they

      have been able to do fairly invasive things even while not actually destabilizing the kernel

      i.e. the changes can be made to the current version without having to backport.

      Unless there is a very major change to be made which will break the current version, they will continue with just the one version.

      • Re: (Score:2, Insightful)

        In other words, instead of doing things the right way, they are going to start taking shortcuts until things get bad again, and then, chastened, they'll go back to doing what they never should have stopped doing to begin with. Laziness, in other words.
        • by gmack ( 197796 )
          Actually, getting chastened is why the new dev cycle was invented. A good way through each dev cycle, people would start demanding new features and drivers from the dev kernel. The stable maintainer would then try to backport the new drivers without any of the supporting infrastructure, resulting in TWO unstable kernels. The new system of merging small changes and then debugging has actually made both stable and unstable more reliable.

          For the first time in years I can actually use a stable kernel on production hardw
  • by Xenographic ( 557057 ) on Tuesday January 16, 2007 @10:18PM (#17640994) Journal
    Does it contain anything inflammatory about the GPL v3? If not, I'm not interested. :]
  • Translation (Score:4, Interesting)

    by ClamIAm ( 926466 ) on Tuesday January 16, 2007 @10:25PM (#17641056)
    In my opinion, the real reason for no 2.7 is:

    If we open up an unstable branch, I have less testers. --Linus Torvalds

    I'm not saying the 2.6 series is unstable or anything, either. However as I watch Linux's development from the sidelines, I get the impression that most policy decisions Linus makes are designed to make his life easier. See also: Bitkeeper.
    • Re:Translation (Score:4, Insightful)

      by springbox ( 853816 ) on Tuesday January 16, 2007 @11:00PM (#17641406)
      It's harder to get a kernel that works nicely if a lot of people end up flocking to another version. This would leave 2.6 in a bad position because fewer people would be finding and reporting bugs, critical or not. One person can only do so much. Linux is very much a community project that needs participants to work well.
    • I've been on projects where that's a temptation, but it's a shame Linus has fallen for it.

      Yes, you have fewer testers on one or other of the branches (mostly people stick to unstable), but if you merge them then you essentially force everyone onto the unstable branch and lose a lot of credibility with those who like to stick to stable releases.

      In my own case I haven't upgraded a Linux kernel since this policy change, because my servers are too important to risk bringing in unstable code.
      • by Dan Ost ( 415913 )
        I believe the reasoning went something like this: people who want to stick with stable releases will use the kernel prepared by their distro. Anyone willing to build the kernel on their own is knowledgeable enough to deal with potentially unstable upgrades.

        Seriously, how often do you update the kernel on a production machine?
      • by iabervon ( 1971 )
        The problem with your approach is that most of the bugs that last for more than a development cycle and are in 2.6.19.y at this point are very old bugs, from 2.6.0 if not before. If you're using 2.6.8, for example, you may get file corruption for Berkeley DB. The incorrect behavior was introduced in the 2.5 series, made to trigger frequently with a recent (correct) patch, and then fixed. When this was publicized, people started mentioning that they'd been getting corruption that fits the symptoms for years,
    • by hey! ( 33014 )

      I'm not saying the 2.6 series is unstable or anything, either. However as I watch Linux's development from the sidelines, I get the impression that most policy decisions Linus makes are designed to make his life easier. See also: Bitkeeper.

      What is seldom appreciated about Larry Wall's famous "laziness, impatience and hubris" is how all three qualities are needed to keep each other in check. Laziness -- or at least an aversion to unnecessary work -- is critical to moderating unrealistic goals for functionalit

  • by poopie ( 35416 ) on Tuesday January 16, 2007 @10:27PM (#17641082) Journal
    The resiliency of the 2.6 kernel is most certainly due to corporate involvement in the development of and support for Linux. Companies can't design, build, test, and support product for a moving target.

    If anyone wanted to seriously break the Linux kernel ABI, I don't think corporate interests or major distros would support it or follow.

    OSes or platforms seem to change rapidly up until the point they reach a critical mass - at which point, the next ABI change is cause for general revolt. After that, $ENTITY learns their lesson and vows to never significantly break backwards compatibility again.
    • Re: (Score:2, Informative)

      by Anonymous Coward
      WTF are you talking about? The kernel ABI gets "broken" practically every version. It's a bit wacky if you ask me. Make a good design and stick with it guys.
    • by bfields ( 66644 ) on Tuesday January 16, 2007 @10:43PM (#17641248) Homepage
      If anyone wanted to seriously break the Linux kernel ABI, I don't think corporate interests or major distros would support it or follow.

      The ABI rules haven't changed at all: the user-kernel ABI (system-call interface) is supposed to be backwards compatible indefinitely; the internal ABI (e.g. for drivers) changes without warning whenever it's convenient.

      What's changed is the release cycle--we no longer have this odd-numbered fork where the kernel's half-broken for years at a time.... Which is a good thing.
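
      The first half of that contract is easy to see from userspace. A minimal sketch, assuming x86-64 Linux and glibc (the raw syscall number is architecture-specific, so treat 39 as illustrative):

      ```python
      # Sketch: the user-kernel ABI (system calls) is the stable half of
      # the contract, so a syscall invoked by number keeps working across
      # kernel releases. Assumes Linux/x86-64, where getpid is number 39.
      import ctypes
      import os

      libc = ctypes.CDLL(None, use_errno=True)  # glibc exposes syscall(2)
      SYS_getpid = 39                           # x86-64 only; varies by arch

      print("pid via raw syscall: ", libc.syscall(SYS_getpid))
      print("pid via os.getpid():", os.getpid())  # same ABI underneath
      ```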

      • by BobNET ( 119675 )
        What's changed is the release cycle--we no longer have this odd-numbered fork where the kernel's half-broken for years at a time.

        Yeah, now there's an even-numbered fork where the kernel's half-broken for years at a time...

  • Resilience? (Score:5, Insightful)

    by SuperBanana ( 662181 ) on Tuesday January 16, 2007 @11:17PM (#17641522)

    explaining why the unexpected resilience of kernel version 2.6 has delayed the move to 2.7.

    Uh...resilience?

    2.6 releases have "shipped" numerous times with some serious bugs, probably because Linus and company have let lots of people slip major new features into the 2.6 kernel when it's supposed to be stable. 2.6 kernels regularly go SEVERAL "point" releases deep within each point release:

    • 2.6.19.2
    • 2.6.18.6
    • 2.6.17.14 (!)
    • 2.6.16.37 (thirty-seven releases, from 3/20/06 to 12/28/06 -- one release a week, on average)
    • 2.6.15.7

    Go and look at the timestamps on 'em on ftp.kernel.org. Some of the sub-versions are just a few days apart. How the hell are end-users supposed to know when the kernel is ACTUALLY usable if there are THIRTY-SEVEN bug-fix releases?
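
    (For what it's worth, the "once a week" arithmetic above does check out; a quick sketch using the dates quoted in the list:)

    ```python
    # Sketch: back-of-the-envelope check of the 2.6.16.y cadence cited above.
    from datetime import date

    releases = 37
    span = (date(2006, 12, 28) - date(2006, 3, 20)).days  # 283 days
    print(span / releases)  # ~7.6 days per release -- roughly weekly
    ```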

    One of the more amazing bugs was one in md that would hose RAID partitions, and I assure you it was not the only serious filesystem bug. I lost a ReiserFS partition thanks to a half-baked 2.6 release.

    • Re:Resilience? (Score:5, Informative)

      by Anonymous Coward on Tuesday January 16, 2007 @11:36PM (#17641696)
      The point releases (or .y releases, as they are sometimes called) are a new feature of the 2.6.x release cycle, intended to get fixes into the hands of users faster. These are always small changes, usually only a handful of changed lines in the diff.

      The 2.6.16 kernel is a special case. One of the core kernel devs decided to try an experiment: maintaining a kernel release for an extended period of time. He continues to provide small fixes at a very regular rate without porting in the newer features of the more current kernel releases. This has only happened for 2.6.16, and there are no plans that I know of to offer extended maintenance on any other kernel release.
    • Re:Resilience? (Score:5, Insightful)

      by notamisfit ( 995619 ) on Tuesday January 16, 2007 @11:38PM (#17641720)
      In the interests of fairness, about 20 or so of those 37 bugfix releases were done after 2.6.17 was released as stable (2.6.16 is still being maintained as a "super-stable" kernel). Bugfix releases seem to be pretty much a non-issue, considering that most people are going to be using the kernel provided with their distribution, as opposed to a vanilla one.
    • by bfields ( 66644 )

      Go and look at the timestamps on 'em on ftp.kernel.org. Some of the sub-versions are just a few days apart. How the hell are end-users supposed to know when the kernel is ACTUALLY useable, if there are THIRTY SEVEN bug-fix releases?

      They release those pretty frequently; each often consists of very few patches. But the kernel does have tons of bugs--it's a complicated piece of software.

      That doesn't mean any one user is likely to hit any of them--most are in drivers, most triggered only by particular work

      • I remember 2.4 was a disaster, actually... the whole VM system got turfed and replaced somewhere around 2.4.8 because it was broken by design, and something drastic had to be done.

        This is just from memory, someone with a better memory could probably be more specific. But anyhow, the 2.6 releases have been pretty smooth.
    • Re:Resilience? (Score:4, Insightful)

      by mcrbids ( 148650 ) on Wednesday January 17, 2007 @12:52AM (#17642306) Journal
      Go and look at the timestamps on 'em on ftp.kernel.org. Some of the sub-versions are just a few days apart. How the hell are end-users supposed to know when the kernel is ACTUALLY useable, if there are THIRTY SEVEN bug-fix releases?

      The people who go to kernel.org to choose a kernel to download and compile hardly qualify as what most people would call "users".

      What Linus is calling "unexpected stability" is probably due to the distros intermediating between the kernel devs and the actual users. To put it another way, what's really happened is that the "stable" kernel is now being maintained by the likes of Red Hat and Debian, while the "unstable" kernel is what you find at kernel.org.

      We'll see how this plays out, but for the real world this leaves Linus doing what he does best -- developing and overseeing cool work -- while the more rank-and-file organizations led by the distros intermediate for the end users.

      I've been using CentOS and Fedora Core; it's been at least 5 or 6 years since I felt the need to go to kernel.org!
    • Indeed. At this point, I have basically decided never to install a 2.6.x.y kernel until y >= 2. Losing data ain't worth it.
      • by Dan Ost ( 415913 )
        My approach is to not upgrade until the 'y' in 2.6.x.y hasn't been incremented for a week or two.

        Never had any problems (but I don't have any strange needs either...that probably has more to do with the stability I've experienced with 2.6).
    • You obviously haven't actually read how the 2.6 kernels are maintained. The kernels on kernel.org mirrors are not for end users, but for developers and distribution maintainers. If you want to work with one yourself on a development machine, that's great. Otherwise, you should be using the packaged kernels from your distributor.

      That's not me talking, that's a paraphrase of what was said when 2.6 was continued in this way.
    • by cortana ( 588495 )
      I think you should have stayed with your distribution-supplied kernels...
  • bad website (Score:4, Insightful)

    by towsonu2003 ( 928663 ) on Tuesday January 16, 2007 @11:57PM (#17641894)
    can anyone post this "fragmented" and inaccessible interview video to YouTube or Google Video as one or two big file(s)?
  • We have been able to do fairly invasive things even while not actually destabilizing the kernel...

    Oh gods, my sides hurt sooo much.
  • I am getting a ridiculously anti-Linux, pro-Windows video ad with this story on /.

    Irony.
  • Some 2.7 ideas (Score:5, Insightful)

    by Anonymous Coward on Wednesday January 17, 2007 @10:44AM (#17646972)
    I'm a Linux user, my company's products depend on it, and I've contributed to it. I see a couple of painful transitions coming, though. I haven't seen kernel quality go down, and I'm not sure how people can say "2.6 isn't stable" without specific issues they can point to. Overall, the code keeps getting better. I say this as a warning, not as a doom-and-gloom kind of message.

    VFS probably needs to be addressed. Reiser4 exposed some of the issues, but there are others. To my knowledge, ext2/3 are the only filesystems that actually code strictly against VFS and the other layers; XFS, JFS, ReiserFS, etc. are all hacked into it. If you follow the kernel list, you'll see that nobody uses JFS and that XFS generates regular crash reports. In my own use (over 5 years) it has memory leaks, it routinely has trouble with new kernels, and there have been regular performance regressions. Now, I don't care that much about the filesystem itself, but it seems fundamentally broken to me that a non-experimental filesystem has such routine problems. Either the API it uses is broken, the filesystem is broken, or both; I'm becoming more inclined to think it's VFS.

    This creates a circular sort of problem: if ext2/3/4 are the only filesystems that are really supported, VFS is not nearly as important as it is treated. Either that, or the process for getting something included and marked non-experimental needs to include some kind of support aspect, and maybe be rethought. So far as I can tell, IBM isn't doing much more with JFS and nobody uses it, so let's move to remove it (a shame, too, because it's a quite clean and elegant FS, much better than ReiserFS or XFS in code and design quality). There isn't a clean process for removing stuff from the kernel. ReiserFS is a prime example: ReiserFS 3 isn't supported and has known bugs and design flaws which are not being addressed, so it's time to move to remove it. This area is further complicated because SELinux depends on filesystem support, LVM behaves differently with different filesystems, and different filesystems have different and variable tool support. The system filesystems need some work too: what's debugfs? configfs? How is sysfs different from configfs or procfs?
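
    As an aside on that last question: the kernel itself reports which registered filesystems are virtual (no backing block device, like sysfs/procfs/configfs) versus disk-backed. A small sketch, assuming a Linux host with /proc mounted:

    ```python
    # Sketch: split the filesystems registered with the kernel into
    # virtual ("nodev") ones such as sysfs/procfs/configfs and
    # block-device-backed ones. Linux-only; reads /proc/filesystems,
    # whose lines look like "nodev\tsysfs" or "\text3".
    virtual, disk = [], []
    with open("/proc/filesystems") as f:
        for line in f:
            tag, name = line.rstrip("\n").split("\t")
            (virtual if tag == "nodev" else disk).append(name)

    print("virtual:    ", ", ".join(virtual))
    print("disk-backed:", ", ".join(disk))
    ```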

    Filesystems are just an easy-to-see portion of this problem; there are other APIs with the same issues. We retooled the build system a few years back, and it's much better, but there are still major flaws. There are drivers which cannot work unless loaded as a module, and yet they can be linked in. There are a huge number that depend on other subsystems, and you can easily misconfigure them. (SATA depends on parts of SCSI, so I can statically link some SATA modules in and dynamically link parts of the SCSI system, and the build system won't complain. Worse, if I break it just so, I can actually get it to build cleanly and freak out at runtime.) I'm not advocating making it more difficult to hack on the kernel or to add new modules to the build, but it's fucked if it doesn't catch that stuff. Worse, the driver is fucked if it can't be statically linked, and if that's an acceptable limitation then it should be an option (the Fusion series of RAID/SCSI/SAS drivers is one that suffers from this problem).

    At the same time the build system is holy: good luck changing it without pissing off half the free world, and I don't even want to think about what would have to happen if taking it to the next level required a change to the .config format. Part of the beauty of Linux in this regard is that it is remarkably simple to build and get involved with; there really aren't any tricks to building it. This is something else where there needs to be a support component. There are good companies with well-supported drivers, and there are orphans. I'd rather have modules marked as supported or unsupported than as GPL-clean or tainted; I'd like to see that.
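
    The SATA/SCSI example above is essentially a constraint the configuration step could verify. A toy sketch of that kind of check (the option names are invented, not real Kconfig symbols):

    ```python
    # Sketch: the kind of built-in-vs-module consistency check described
    # above. Option names are made up; "y" = built in, "m" = module.
    config = {
        "SATA_CORE": "y",   # statically linked into the kernel image
        "SCSI_CORE": "m",   # ...but the subsystem it needs is a module
    }
    depends = {"SATA_CORE": ["SCSI_CORE"]}

    def check(config, depends):
        """Flag built-in options whose dependencies are not built in.

        A built-in driver initializes at boot, before any loadable
        module is available, so its dependencies must be built in too."""
        for opt, deps in depends.items():
            if config.get(opt) != "y":
                continue
            for dep in deps:
                if config.get(dep) != "y":
                    print(f"ERROR: {opt}=y requires {dep}=y "
                          f"(got {dep}={config.get(dep)})")

    check(config, depends)  # flags SATA_CORE -> SCSI_CORE
    ```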
