Linux Software

Linus And Alan Settle On A New VM System

stylewagon writes: "ZDNet are reporting that Linus Torvalds and Alan Cox have finally agreed on which Virtual Memory manager to include in future kernel releases. Both have agreed to use the newer VM, written by Andrea Arcangeli, from kernel version 2.4.10 onwards. Read more in the article."
This discussion has been archived. No new comments can be posted.

  • sensation seekers (Score:5, Informative)

    by selmer ( 37218 ) on Wednesday November 07, 2001 @08:53AM (#2532299) Homepage
    There never really was a big dispute about the VM subsystem; read Alan's diary [linux.org.uk] for his account of what happened.

    Quoting the November 2nd entry: The great VM dispute really isn't. It went something along the lines of "Putting a new vm in 2.4.10 is crazy", "Probably it was but it's done so let's make it work" and at 2.4.14pre8 "See it works" "Yep".

    • Read about the big news on 5/6 Nov. Alan is installing windows. Alan's diary [linux.org.uk]
      • I didn't see the lowercase w. I wouldn't be surprised if Microsoft's marketing picked up on this and used it. :D

        Seriously though. This 'news item' is roughly akin to "Final Nissan VP agrees to stop calling it 'Datsun'".
        • I didn't see the lowercase w. I wouldn't be surprised if Microsoft's marketing picked up on this and used it.

          The comment you replied to has a lowercase w. Alan's diary has an uppercase w. And that's correct. Indeed, English spelling rules dictate that the first word of a sentence should be capitalized...

      • by Gleef ( 86 ) on Wednesday November 07, 2001 @10:39AM (#2532727) Homepage
        And people say it's easy to install windows. Two days and there's still more work to do.
        • I don't know why this got modded funny. I can't get a Windows installation back to full usability for at least three days, and that's assuming that I've done the Great Internet Driver Hunt beforehand.

          Sure Windows and drivers only take a few hours, but then you have to tweak it up so it's not walking with 4 toes on each foot and make sure you don't take off a leg in the process.

          Then you start installing applications:

          void install_program(void) {
              run_installer();
              reboot();
              while (!last_update() && reboot_count < 0x10) {
                  install_update();
                  reboot();
              }
          }

          int main(void) {
              while (!last_program() && reboot_count < 1e10) {
                  install_program();
                  reboot();
              }
          }

          That's why it takes forever.

          Yes, one could argue that doing your own Linux "from scratch" would take longer (especially if you want to compile XFree), but if you have good file management, you don't have to rebuild Linux every 3 months to keep it running smoothly.
    • What?! Alan is installing _Windows_?!! ;)
  • good news (Score:2, Insightful)

    by Psychopax ( 525557 )
    Though I do not think Linux would have been doomed just because there were two different versions of the kernel out there, it probably would have made things more difficult for Linux.
    So that's good news. J.
    • ...and not before time. Perhaps Linus can make a firm decision by, say, 2.5.10 on what goes into 2.6 and what gets kept aside for 2.7? That way we might see 2.6 before the end of 2002. We can always hope.

      It's important for a lot of this new stuff to get thoroughly used so that alternatives, replacements, options and enhancements can be devised at warp speed. It's also important to have an odd kernel series active so that imaginative new stuff has a home and doesn't stagnate.
    • Having a single unified kernel source is, IMHO, not a worthy goal. The biggest advantage to an open-source kernel is that you can go in and tinker with it; having multiple folks pursue multiple tacks to VM is not in itself bad.

      There are other "branches" off the kernel tree for real-time kernels, etc. Getting rid of these would not be "good news".

      • I did not mean to get rid of kernel trees other than the "main Linus tree" (as far as I know, some architectures aren't integrated into this tree either). But I mean it is not very good to have more than one "standard" kernel. It does not make a good appearance.
        Of course, perhaps after a while these two kernels would differ more and more from each other, so that one of them becomes another kernel for "special purposes" (e.g. the real-time kernel you mentioned above).
        Another kernel tree for an area where it is clearly visible that other things are needed is not bad. But two kernels where the average user doesn't know which one to choose are not good, in my opinion.
        Nice day,
        j.
  • NUMA?! (Score:5, Informative)

    by ssimpson ( 133662 ) <slashdot.samsimpson@com> on Wednesday November 07, 2001 @08:55AM (#2532307) Homepage

    It's previously been argued that Andrea's VM doesn't work with NUMA architectures, hence work should continue on Rik's 2.4.x design

    Not a problem now, but it's one of the major aims of 2.5, according to Linus. Anyone know how they are going to square this circle?

    • Re:NUMA?! (Score:4, Insightful)

      by Prop ( 4645 ) on Wednesday November 07, 2001 @09:04AM (#2532348) Homepage
      I have to ask ... shouldn't a NUMA-efficient VM be left as a patch, or done through a kernel fork? I mean, how many people have access to NUMA machines, let alone own one?

      The VM in the mainstream kernel should be optimized for what Linux runs on 99% of the time: single CPU, with a "standard" memory bus.

      With that being said, I couldn't believe that Linus made such a major change in a stable kernel. I'm glad it works, and that Alan Cox has agreed to go with it, but it wasn't an example of software engineering at its finest...
      • Re:NUMA?! (Score:2, Insightful)

        by Anonymous Coward

        but it wasn't an example of software engineering at its finest...

        In the strictest sense... you are correct. However, engineering of any sort is a real world activity, not some dry academic subject (Wirth is full of shit on many topics, for example). Knowing when it's time to give up on a bad job, chuck it out and give something else a chance is a valuable thing (but not something to do lightly, or often).

        • And I'd like to add that the fact that the VM worked fine from the very start, and Alan's choice to switch to it, shows that Linus is capable of making these decisions. Perhaps he shouldn't have been second-guessed so much.
          • by led ( 3096 )
            Actually I had some problems with it on a few of my web servers, when the load was high and memory low....
            2.0.x, 2.2.x, 2.4.x kernels are supposed to be safe... the 2.4.x broke this... I don't think I will trust installing a new kernel like I used to...
            • by Anonymous Coward

              If you install the most recent kernel release (stable series or not) on a production box, you are a fool. No two ways about it, and it's never been any different.

              I only install kernels from Red Hat (they've been thoroughly stress tested) on important boxes. I take risks with non-essential machines.

        • Knowing when it's time to give up on a bad job, chuck it out and give something else a chance is a valuable thing (but not something to do lightly, or often).

          Well, I don't disagree with you on that. Rules are meant to be broken.

          But in some ways, Linus got lucky - AA's VM worked much better than RvR's, so it makes it look like it was the right decision. But the real question is - if RvR's VM was so broken, how did it get in there in the first place?

          I think this is a situation where two wrongs made a right: it was a bad move introducing RvR's VM when it was not ready for prime time (or at least, not tested enough to know if it was), and then switching VMs in a "stable" tree.

          • by Anonymous Coward

            But in some ways, Linus got lucky - AA's VM worked much better than RvR's,

            And it was accepted as a replacement because it worked better!

            But the real question is - if RvR's VM was so broken, how did it get in there in the first place ?

            Simple... you try something new, it doesn't work - but you don't know it isn't working until you try. The old VM "worked", in that it functioned as a VM, but certain problems came to light, and fixing them proved to be more complicated than just choosing a new, simpler, VM.

            It's not as simple as saying "Linus used a broken VM."

          • Re:NUMA?! (Score:2, Interesting)

            by psamuels ( 64397 )
            But the real question is - if RvR's VM was so broken, how did it get in there in the first place ?

            Something had to go in there. Let me explain.

            Back in 2.3.7, Linus merged a huge change that moved most file I/O from a buffer cache (caching of device blocks) to a page cache (caching based on virtual memory mappings). The VM was severely affected by this, and it never quite recovered.

            So the Riel VM was not something wholly new, although it had some bits Rik put in late in the 2.3 cycle. But to answer your question, the "new" part of the 2.4 Riel VM was only accepted because the 2.3.7 VM was even worse.

            The real question is, why did Linus stop merging Rik's VM patches back in early 2.4? At least according to Rik, Linus's VM between 2.4.5 and 2.4.9 stayed the same even though Rik was still tweaking it and submitting patches.
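The buffer-cache to page-cache change described in the comment above can be pictured with a toy model (plain Python, purely illustrative — all names are invented, not kernel interfaces): the old scheme indexes cached data by device block, the new one by file mapping and page offset.

```python
# Toy contrast between the two caching schemes described above.
# Illustrative only -- invented names, not real kernel structures.

BLOCK_SIZE = 512
PAGE_SIZE = 4096

class BufferCache:
    """Old scheme: cache raw device blocks, keyed by (device, block_nr)."""
    def __init__(self):
        self.blocks = {}

    def read(self, device, block_nr, read_block):
        key = (device, block_nr)
        if key not in self.blocks:            # miss: fetch from the device
            self.blocks[key] = read_block(device, block_nr)
        return self.blocks[key]

class PageCache:
    """New scheme: cache file pages, keyed by (file, page_index).
    Cached data now lives in ordinary VM pages, so the same pages can
    back both read()/write() and mmap()."""
    def __init__(self):
        self.pages = {}

    def read(self, file, offset, read_page):
        key = (file, offset // PAGE_SIZE)
        if key not in self.pages:             # miss: fetch a whole page
            self.pages[key] = read_page(file, key[1])
        return self.pages[key][offset % PAGE_SIZE]
```

Because the page cache hands the VM ordinary pages rather than device blocks, page replacement and file caching end up competing in one pool — a plausible reading of why a big change there left the VM "never quite recovered".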

      • Re:NUMA?! (Score:2, Interesting)

        by Anonymous Coward

        I mean, how many people have access to NUMA machines, let alone own one?

        All multi-processor AMD Hammer machines are (or will be) NUMA, so we may see a lot of mainstream users getting NUMA machines in the next few years.

        That having been said, there are more Linux users than just mainstream users... there are a lot of universities playing with NUMA machines, and some businesses too. Linux is working hard at being all things to everyone, but if it drops niche users whenever it's convenient, its credibility will suffer in those niches. You may say "devil may care", but this is more or less what happened with gcc -- a lot of fringe groups got abandoned, and one of the reasons egcs was a success was because it picked up these fringe groups again (GNAT, et al) and made 'em all one big happy family. If nothing else, the current maintainers need to avoid ostracizing people if they want to avoid having the mountain moved out from under them, like Cygnus did when they took over gcc maintenance from RMS. (OTOH, if Linus + friends don't care, then this is moot.)
    • Re:NUMA?! (Score:4, Informative)

      by Alan Cox ( 27532 ) on Wednesday November 07, 2001 @09:05AM (#2532349) Homepage
      Getting NUMA to work well with Andrea/Marcelo's VM might be more interesting. On the other hand, Martin and the other IBM folks don't seem unduly perturbed on-list, so I'm not that worried.
    • Re:NUMA?! (Score:2, Insightful)

      by Ed Avis ( 5917 )
      I'm currently wondering about NUMA - or something close to it. I'm running Linux on a couple of machines where the memory is of differing speeds: a fast eight megabytes and then the rest of the RAM is a lot slower. Can existing Linux kernels handle that sensibly?
      • I'm running Linux on a couple of machines where the memory is of differing speeds: a fast eight megabytes and then the rest of the RAM is a lot slower. Can existing Linux kernels handle that sensibly?

        One way might be to configure the slow RAM as a ramdisk and use it as swap.

        • One way might be to configure the slow RAM as a ramdisk and use it as swap.
          I considered that, but it's a lot of overhead pagefaulting and going through the VM system and ramdisk driver every time a page is needed from outside the eight fast megabytes. It works fine to run processes using the memory above 8M, it's just slower. So I'd like Linux to somehow 'prefer' using the lower pages of memory and maybe rearrange things occasionally so that the more frequently used pages are kept in the fast RAM.
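The "prefer the fast pages and rearrange occasionally" behaviour being asked for above can be sketched as a toy two-tier allocator (plain Python, conceptual only — nothing below corresponds to a real kernel interface): allocate from the fast zone while it has room, spill to the slow zone otherwise, and promote hot pages back by demoting a colder fast page.

```python
# Toy two-tier page allocator: fast zone preferred, slow zone as
# overflow, with promotion of frequently-touched pages.
# Illustrative only; invented names, not a kernel interface.

class TieredAllocator:
    def __init__(self, fast_pages, slow_pages):
        self.capacity = {"fast": fast_pages, "slow": slow_pages}
        self.zone = {}      # page id -> "fast" or "slow"
        self.hits = {}      # page id -> access count

    def _count(self, zone):
        return sum(1 for z in self.zone.values() if z == zone)

    def allocate(self, page):
        zone = "fast" if self._count("fast") < self.capacity["fast"] else "slow"
        self.zone[page] = zone
        self.hits[page] = 0

    def touch(self, page):
        self.hits[page] += 1
        if self.zone[page] == "slow":
            # Promote if this page is now hotter than the coldest fast page.
            fast = [p for p, z in self.zone.items() if z == "fast"]
            coldest = min(fast, key=lambda p: self.hits[p])
            if self.hits[page] > self.hits[coldest]:
                self.zone[coldest], self.zone[page] = "slow", "fast"
```

A real kernel would do this with page migration driven by reference bits rather than explicit counters, but the policy question — when is a page hot enough to be worth moving — is the same one the poster is raising.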
    • we can always get dirk pitt to solve this right? or maybe I've got the wrong acronym. :)
    • Re:NUMA?! (Score:2, Informative)

      by tkrotchko ( 124118 )
      If you're wondering what NUMA stands for (as I did), you can get a good two-sentence definition at the Webopedia [internet.com].
  • by ssimpson ( 133662 ) <slashdot.samsimpson@com> on Wednesday November 07, 2001 @08:57AM (#2532318) Homepage
    At eWeek. [eweek.com]
  • by Juju ( 1688 ) on Wednesday November 07, 2001 @08:58AM (#2532320)
    It claims that SuSE is a US company (funny how I could have sworn it was German ;o)
    It also says that Alan Cox will take over 2.4 once 2.5 is opened, which is wrong...

    The whole war story is totally ridiculous. There was never any talk of a fork. All there was were discussions of whether a new VM should be brought into 2.4 instead of 2.5, and some talk about the validity of benchmarks showing the improvements with the new VM.

    I guess there is not much going on in the news for them to feel like writing about this...
    Besides, this is nothing new, since Alan Cox said that his last ac patch would probably be the last one with the old VM.
  • by Oshuma.Shiroki ( 232199 ) on Wednesday November 07, 2001 @08:58AM (#2532322) Homepage Journal
    Linus Torvalds and Alan Cox have finally agreed...

    In related news, pigs are now flying all over town, and hell just froze over.
    </sarcasm>
    • Why, oh why did I have moderator points *last* week?!? Look at me, I just spilled orange juice over my keyboard. I guess there is no such thing as a free laugh.
  • from the article- The accord also ends speculation that a fragmented Linux community would be doomed in the face of Windows.


    Where does this ludicrous speculation come from? This sort of reporting of unsubstantiated claims is quite funny on the surface, but the more general audience reading this article will think MUCH less of the stability of the Linux kernel after reading crap like this. Sure, there are/were two different VM systems, and that has caused lots of posting here on /. and probably much discussion on the kernel mailing list. How in the hell does that indicate that the Linux community will be doomed in the face of Windows? ARRRGGGGG!

  • by sphealey ( 2855 ) on Wednesday November 07, 2001 @09:01AM (#2532333)
    Was there really any dispute between ac and Linus, or was it just a technical competition to see which system could be pushed the farthest?

    I thought the eWeek article took an unnecessarily confrontational tone.

    sPh
    • Yeah, plus the gross inaccuracy saying Alan was going to maintain the 2.4 kernel... didn't they *just* announce that it would be Marcelo? Oh, the joys of clueless Linux reporting...
    • Was there really any dispute between ac and Linus, or was it just a technical competition to see which system could be pushed the farthest?

      It was primarily a stability issue. Ripping out a tested (but poorly performing under certain loads) VM for a brand spanking new one in the middle of a stable series was a big move. AC maintained the old VM for those who didn't want to participate in the inevitable tuning and bug-fixing that the new VM required.

      It's not like it was unprecedented, either. Andrea Arcangeli's (what a cool name!) VM rewrite eventually solved the 2.2 VM problems. But those had been developed as a separate set of patches before they were incorporated into the stable 2.2 series.
  • I was talking to a friend yesterday about this very issue. I'm glad to see it resolved. To the outside world, Linux already looks fragmented compared to Windows (many different distributions of Linux compared to one Microsoft). This may not be a correct assumption, but having two different kernels did not help the situation at all.

    Now, finally, we can say again: "There is one Linux kernel; there are many different distributions. The kernel is the same, and the different distributions differ only in programs and scripts."

    Linux isn't ready for the desktop? Ohhhh crap....do I have to erase it then?
    • That's not actually correct.

      Right now we have 3 kernel trees being separately maintained: 2.0.38, 2.2.20, 2.4.14 are all different trees.

      Forking happens all of the time and it's in general a good thing if done right. People were making much too big a deal out of this.
  • Minor point (Score:5, Informative)

    by Anders ( 395 ) on Wednesday November 07, 2001 @09:04AM (#2532347)

    From the article: Torvalds is close to handing over the stable 2.4 kernel to Cox.

    ...and I thought that Marcelo Tosatti was going to maintain 2.4 [slashdot.org].

  • It seems these folks decided that using the whois database for the company's location was a good idea.

    It turns out that the registration for suse.com does point to an office in CA. But if this moron had done some real research, he'd have found out that they are in fact in Germany. Of course, all of us knew this.

    Then again, most reporters are morons...
    • SuSE, Inc. (SuSE's United States corporation) is in fact located in Oakland. Unless it's one of the other SuSE companies in another country that's paying the man, they got it right.

      Make your words less bitter... that way when you eat them later it won't suck so much.

  • by motherhead ( 344331 ) on Wednesday November 07, 2001 @09:09AM (#2532364)
    Linus and Alan had their wrists taped together and sort of danced/knife-fought while Eddie Van Halen played guitar in the background...

    oh wait, that was a Michael Jackson video... sorry... though still, this might be as accurate as anything else...
  • Good news. (Score:2, Interesting)

    by rmadmin ( 532701 )
    Regardless of what the article almost implied (that Cox and Linus were in dispute), this is good news for the kernel. From the sounds of it, this new VM will make quite a difference from a performance aspect. I could hardly care less what people are fighting about. As long as new features get implemented, or the system is revamped to improve performance/stability, I'll be happy. And that's the point here... A new VM is going to be implemented, and it's supposed to kick butt. So enjoy it and quit squabbling about whether or not Cox and Linus are fighting!

  • by Duncan Cragg ( 209425 ) on Wednesday November 07, 2001 @09:13AM (#2532378)
    1. Everyone knew the Rik VM was poor
    2. Linus was stressed about it and took a brave decision to go with Andrea's VM
    3. It was VERY late to be doing this, but necessary.
    4. Linus' decision was correct as it turns out.
    5. Alan's decision was also correct in that you shouldn't be doing this kind of dramatic about-face in a 'stable' kernel.
    6. Alan's going with Andrea was also correct.
    7. I've been waiting, along with many others, for this whole mess to be sorted before 2.5 was started and I upgraded kernels.
    8. Passing 2.4 over to Alan means we can upgrade in confidence. This should be the test of stability for 2.6: upgrade when Linus passes it on to Alan.
    • by hey! ( 33014 ) on Wednesday November 07, 2001 @11:57AM (#2533100) Homepage Journal
      1. Everyone knew the Rik VM was poor


      Even that may be a bit strong. I'd say more that it was late than poor. I'm running a recent ac kernel on one of my production servers (long story, but suffice to say that I needed a bunch of things and this turned out to be the simplest way). I was initially concerned about the Rik VM, but so far it has worked fine. Of course this is anecdotal, but I almost certainly would have had serious problems if I had installed one of the earlier 2.4 kernels, which shows progress.

      Rik is right that you make things right before you make them fast; on the other hand the VM really needs to be right AND fast to be truly ready for prime time (and early releases in the 2.4.x series weren't all that "right" to begin with). The Rik VM is supposedly more advanced and featureful, but it may have simply been a bridge too far, at least given the time and resources he had to prove his ideas.

      I feel sorry for Rik, but them's the breaks. He may have been right, but you don't get forever to prove your ideas. At some point the clock runs out on you, even if you are really close to pulling it off.

      • but it may have simply been a bridge too far

        I think so. I always got the impression that Rik is an extremely intelligent programmer with not enough time on his hands to do the enormous job of VM writer for all of Linux.

        Which was important, because it seemed, too, like he was one of a handful of people in the world who really understood how his VM system worked and, more importantly, was the ONLY one in the world who understood what needed to be done to get it to work right.

  • by rnd() ( 118781 ) on Wednesday November 07, 2001 @09:13AM (#2532379) Homepage
    Posting dramatic stories about the heated debate among supergeeks in the Kernel newsgroups is a brilliant propaganda tool.


    What better way to ensure the longevity of Linux than to recruit new Kernel Hackers with tales of heated debate and intrigue (I've noticed two in the last week).


    What's next, the KernelCam? Tune in to watch Linus, Alan, and the rest hacking away. View live feed of USENET postings! Only $3.99 per minute.

  • by gaj ( 1933 ) on Wednesday November 07, 2001 @09:13AM (#2532381) Homepage Journal
    This just further illustrates the strength of an open development process. There was a problem, and that fact was discussed openly and pointedly. That scares many people. I don't get why. It's code, not a person. It doesn't look like Rik is taking it very hard, at least as far as his posting on lkml shows.

    I like to think of Linux development as sort of a modified IETF style: rough consensus and running code, with a sprinkle of holy penguin pee when Linus thinks it's ready to ship. Linus saw a problem, had a solution presented to him, and just went for it. Alan thought it was a bit insane to switch horses in midstream. I would normally agree with Alan; better to try to get the horse you're on to do the job than try to jump to another one. Worry about getting a newer, better horse once you're safely on the other bank.

    Given the time frame for 2.5/2.6, though, and given the seriousness of the VM issues, I can see why Linus decided to take the risk. Apparently so does Alan. I'm kind of anal about release numbers, so I'd probably have started a 2.5 branch to test the new VM in, and refused any other changes, then released 2.6 with the new VM. That fundamental a change should probably get a point increase in version number.

    Regardless, the short version is that this is much ado about nothing. The rest of the industry just isn't used to seeing this sort of thing happen in plain view. It normally happens behind the scenes, with a carefully scripted spin put on it by marketing. Maybe if they see the process work enough times, people will become comfortable with it. I doubt it.

    • Holy Penguin Pee (Score:2, Insightful)

      by Bonker ( 243350 )
      I like to think of Linux development as sort of a modified IETF style: rough consensus and running code, with a sprinkle of holy penguin pee when Linus thinks it's ready to ship.


      This is perhaps the most beautiful description of the process I have ever heard.

      I agree with you. People are used to dealing with companies like MS, Apple, and Oracle, who are built from the ground up to never admit deficiency or the need for change, even though that is a crucial aspect of any kind of upgrade cycle.

      When a group of firebrands comes around that can freely admit deficiency, it does cause some waves.
      • MS, Apple, and Oracle, who are built from the ground up to never admit deficiency or the need for change

        Actually, they do admit the need for change, but only after the new version is a fait accompli. Then it's all about why the old version needs to be upgraded. (:


        One might think that vaporware hype would talk about the need for change, but for some odd reason they always seem to stress glitzy new features rather than glitzy new bugfixes. Come to think of it, for some companies this is the case even after the new version ships. (Remember BillG - getting bugs fixed is the "stupidest reason I can think of" to upgrade software.)

    • Oh, it is SOOO tempting to patch what you know, rather than start over with something new and relatively untested, especially when deadlines loom.

      Of course, this might not fix the problem: either the patch doesn't fix enough, or the design is flawed to start with (I have not delved into either Linux VM to competently present an opinion here, just speculation on what might be wrong). But there is something to be said for "the devil you know". At least the problems with the old VM were fairly well known. Moving to a new VM could potentially introduce new ones. Not something you want to do close to release.

      Now, those plagued by problems with the older VM might, in exasperation, think anything would be better. Enter Linus with Andrea's alternative: "looks good, ship it!" [my paraphrasing]. Those of us who'd tremble at the thought of new, untested code, could, well, stick to an older kernel, even if it meant giving up some new features.

      But wait! It's Alan to the rescue!! Picking up all the relatively easy fixes and enhancements, and giving me a choice. Leave the contentious parts till later..

      It strikes me that the minor, temporary -ac fork served both camps of users until the issues were resolved.

      Some might argue for a more disciplined approach, and not make major changes so close to release. (But, if it isn't "ready", why not just postpone release? Is Linus feeling pressure to release prematurely? Or just trying to release "often enough".) But that stands in the way of progress -- I've seen managers crippled by fear of changing anything. Sometimes you gotta take a chance, even if some things break while (many) others get fixed.

      While smaller, digestible kernel changes might be more palatable, they're already available in interim development releases. Sometimes a change is sufficiently profound that it touches everything (yes, that's an argument to refactor, but hindsight is 20/20, so I won't argue that point too much) and takes a while.

      I am not one to say which approach is globally better -- I can only comment on what works better for me. I can say, though, that when a community is really split over a course of action, and any single choice will not satisfy a large number of people, a fork, even if temporary, is probably the least painful route in the short term, even as the long term consequences are undesirable if it goes on too long.

      After all, all progress is a mini-fork from "stable" to "untested".

  • Details please (Score:4, Interesting)

    by lythari ( 118242 ) on Wednesday November 07, 2001 @09:14AM (#2532384)
    How does this new VM manager compare to the old one? I assume it's better, but exactly how does it improve on the old one?

    And how does it compare to the VM managers in other 'nixes out there, especially FreeBSD?
    • Re:Details please (Score:1, Flamebait)

      by Tsk ( 2863 )
      Yes, please compare with what's supposed to be the best *BSD implementation, NetBSD's.
    • Re:Details please (Score:3, Informative)

      by Anonymous Coward
      They both had stability problems at 2.4.10. Both are fairly stable at this point.

      Rik's VM is considered "smarter." It works harder to balance things out and keeps track of all kinds of info.

      Andrea's is not very well documented.

      Andrea's is simpler.

      Andrea's uses "class zones" which are not good for NUMA. We want to have NUMA in 2.5.

      Andrea's is faster.

      Linus is considered the leader and Alan Cox is considered the second in command. If it's a really tough call to make, then probably going with Linus is the less controversial thing to do.

      When Linus switched VMs, we had gone through 10 point releases of the kernel and the VM still wasn't as stable as it should have been. If Andrea's VM had taken 10 releases to become stable, that would have sucked. A lot of people pointed this out to Linus in a blunt way. People thought it might take nearly that long, but it only took 4.

      So it worked out in the end.
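For anyone wondering what "class zones" means in the comparison above: roughly, an allocation in Andrea's design falls back through an ordered list of zones (e.g. HIGHMEM → NORMAL → DMA), and memory pressure is judged on that list as a whole rather than per zone. A conceptual sketch of that idea (plain Python, invented names — a simplification, not the real kernel data structures):

```python
# Rough sketch of classzone-style allocation: an allocation class is an
# ordered fallback list of zones, with one aggregate low-memory
# watermark for the whole class. Conceptual only.

class Zone:
    def __init__(self, name, free_pages):
        self.name = name
        self.free_pages = free_pages

class ClassZone:
    def __init__(self, zones, watermark):
        self.zones = zones          # fallback order, e.g. HIGHMEM -> NORMAL
        self.watermark = watermark  # aggregate low-memory threshold

    def total_free(self):
        return sum(z.free_pages for z in self.zones)

    def allocate(self):
        if self.total_free() <= self.watermark:
            return None             # would trigger reclaim in a real VM
        for zone in self.zones:     # fall back through the list
            if zone.free_pages > 0:
                zone.free_pages -= 1
                return zone.name
        return None
```

With a single memory hierarchy this is simple; under NUMA each node wants its own zones, fallback order, and pressure accounting, which is roughly why classzones were seen as a poor fit for the 2.5 NUMA goals mentioned earlier in the thread.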
  • by Kefaa ( 76147 ) on Wednesday November 07, 2001 @09:19AM (#2532399)
    We seem to take things too personally here. Alan and Linus had a disagreement about when and why. Much like people I work with on a daily basis have differences of opinion on approach. In the end we do not start working for other companies, we reach an agreement.

    ZDNet is not the ACM; they are trying to sell magazines (or at least sponsors). A little conflict spices up the story. Should they put a more reasonable context around these things? Sure. However, if they did, "Linus and Alan agree on future" is hardly newsworthy.

    The more people hear about Linux the better. (positive spin coming...)

    In this context, people can believe we know how to operate as open source and an effective business model. The need to evaluate, compare, and when necessary compromise can be accomplished in this model for the benefit of everyone. People who appreciate that are the people we want making business decisions for Linux.
  • by darkov ( 261309 ) on Wednesday November 07, 2001 @09:19AM (#2532401)
    I think it's unfair to characterise robust debate as fracturing or a lack of unity in the Linux community. Isn't it normal for people to disagree on things? It may look like disunity to your average Joe, but the fact is that corporations very carefully control what the media knows and what discussions go on behind closed doors. I'm sure everyday people in companies all over the world not only argue till they are blue in the face, but also undermine each other's authority, turn coworkers against their opponent, and engage in other nasty political bullshit.

    Open software has an open process. That is a strength. Suggesting that disagreement in the Linux community means that it is less co-operative or cohesive than Microsoft or anyone else is utter crap. Open debate and having your own opinions are healthy signs, much better than some coerced worker toeing the company line, independent of what is technically best.
    • Yeah! (Score:5, Funny)

      by Greyfox ( 87712 ) on Wednesday November 07, 2001 @09:50AM (#2532527) Homepage Journal
      In a company, you can't just come out and call your manager or a member of your programming team an idiot. That tends to get you fired, usually. Even if they really are an idiot (Generally, or about one decision in particular.) In the open model, you call someone an idiot and the community decides who the idiot is.

      I haven't been following this thing closely but my impression from this forum was it went something like this:

      Alan: You're an idiot!
      Linus: You're an idiot!
      Alan: Fine! I'll take my kernel and go home!
      Linus: Fine! I'll get someone else to do the kernel!
      Later...
      Alan: Linus, you bitch! Have you been seeing another kernel developer on the side?
      Linus: Yes, but I was a fool! I love you! I've always loved you!
      Alan: And I love you!
      Linus: You're not an idiot!
      Alan: Oh, Linus! (Kiss kiss kiss)
      Linus: Oh Alan! (Kiss kiss kiss)
      Linus: Oh! Alan!

      Hmm. Maybe I've been reading too many romance novels lately...

      • In a company, you can't just come out and call your manager or a member of your programming team an idiot.
        Heh. You haven't been a member of the work force for very long, have you?
      • Re:Yeah! (Score:1, Offtopic)

        Hmm. Maybe I've been reading too many romance novels lately...

        What? You read romance "novels"?

        I'll be reporting you to CommanderTaco for proper reeducation if this is confirmed. We will have to confiscate, and burn of course, every romance "novel" you own and re-issue you a library of proper SciFi... do not despair, comrade - THERE IS HOPE FOR YOU YET!

  • It's so funny how everything that goes on in the Linux community that gets reported on, even by "tech-savvy" news organizations, has to do with the doom and gloom of Linux. It seems that almost weekly Linux is narrowly "saved" from total destruction. I think the media wants people to believe that Linux is so unstable that the smallest disagreement will splinter the whole community and send it into complete chaos.

    ..."The accord also ends speculation that a fragmented Linux community would be doomed in the face of Windows."...

    It seems that news about Linux is not interesting enough unless it is a struggle against Microsoft or involves some doomsday issue that could cause it to "fall".

    Just an observation... I'd like to see one news article that doesn't mention the "unknown future" of Linux.
    • Don't you get it? Kernel development is like an old Flash Gordon serial. Team Linux has to race to save the universe every week from evil. They literally do save us from doomsday all the time.
  • by Bazman ( 4849 ) on Wednesday November 07, 2001 @09:23AM (#2532415) Journal
    "Cox, who also works for Red Hat, in Durham, N.C., and lives in Swansea, Wales."
    • The post is correct. The new VM was added in 2.4.10 so that VM will be in the kernel from 2.4.10 onward. It does read a bit odd given 2.4.14 is out :)
  • by Manes ( 17325 )
    Anyone care to give a short explanation on what this system is?

    I've always been interested in kernel coding, but some of the concepts sound pretty black-magic to me :)
    • virtual memory (Score:5, Informative)

      by Lethyos ( 408045 ) on Wednesday November 07, 2001 @10:43AM (#2532741) Journal
      In brief and ONLY the basics... Modern operating systems handle memory addressing in a virtual sense so that "fences" can be placed around memory owned by the OS and by different applications. These fences serve to protect memory from being overwritten by other rouge programs. This works by making each program think that its start of memory IS the actual start of physical memory. For example, program "foobar" may be located at physical address 0xFF44 and have 0xFF bytes allocated to it. Instead of addressing its memory in base:offset format as 0xFF44:0xFF, it thinks that 0xFF44 is actually address 0x00 and the top of memory is 0xFF. That way, it can't write to physical address 0xFF43 or anything else lower. This range of memory can also be broken into fragments and scattered through physical memory, so that foobar isn't trapped if other programs have been allocated since it started.

      Bear in mind, this is only the basic gist of what virtual memory is all about. This particular subsystem also handles memory paging (which is part of swapping out to disk), amongst other tasks.

      Before you really get determined to start hacking the kernel tomorrow, I suggest you start with something a little more meager. You need to get some experience in computer architecture fundamentals, then really basic OS design. Read a few books. Learn Motorola or IA32 assembly language. Learn to write some old DOS programs (a number of DOS emulators with free, open source DOS distros are available, so do some searching) where you have to allocate every byte and word by hand, and not just say "Foo *f; f=new Foo();". Next, start to learn C and figure out what malloc() is all about. Then try coding a kernel module. This is obviously not an extensive road map, but computers and their operating systems are sophisticated. You can't really (unless you're someone like Cox or Torvalds) just dive right into systems programming and know what you're doing. It may take years of experience before you start to tinker with code in the kernel and actually write something that works.
      • by hawk ( 1151 )
        >These fences serve to protect memory from being overwritten by other
        >rouge programs.


        Yeah. I just *hate* it when my programs end up with makeup on them . . .


        :)


        hawk

    • Your computer's processor maps the memory addresses used by the kernel and by each process to physical addresses in the machine. The VM handles the software side of this, as well as paging less regularly accessed memory out to the swap partition on disk. The endless debates and tinkering stem from how difficult it is to create a VM that performs brilliantly under all situations. For a good treatment of Unix internals, see "The Design and Implementation of the 4.4BSD Operating System" or "The Design of the UNIX Operating System" by Bach.
  • Settlement? (Score:2, Insightful)

    by utdpenguin ( 413984 )
    Maybe I am completely erroneous here, but I don't think so. :)

    But is this really something that can be defined as "settling"? As I understand it (correct me if I'm wrong), Linus put the new VM into his kernel. It's been there ever since, and it's not going away. Rather, Cox is giving in to Linus, as he should, since Linus is in charge. This isn't settlement, it's the natural course of development. A change is proposed, Linus OKs it and implements it, and everyone else follows suit sooner or later.

    I understand the potential horror of a kernel split, but does anyone really believe that was going to happen? I'm betting Cox would rather use a far inferior VM than allow a total split, simply because of the magnitude of such an action.

  • That is good news. If there's one thing I don't need, it is the choice of two trees.

    Now if only I could get Mandrake to work on my i2o controller; only RedHat 7.2 seems to work for me, but it is having problems booting on the 300GB RAID after install. fscking disk geometry, it always gives me problems. :-)
  • by X86Daddy ( 446356 )
    What does Voice Mail have to do with Linux??!!
  • While I don't pretend to know a lot about how to code kernels or VMs, if the new VM is better and faster than the old, I say kudos to Linus and Alan. This will bring Linux one step closer to being ready for primetime on the desktop. Little improvements such as this (although some may not see it as being as little as others do) will further solidify the Linux reputation as a fast, stable platform for desktop computing.
  • by MeerCat ( 5914 ) on Wednesday November 07, 2001 @09:42AM (#2532497) Homepage
    ... honestly not meant to be a troll, but does anyone else find it strange that slashdot is reporting a ZDnet story about news re:the Linux kernel development ??

    Have I missed something here? I used to work in fraud investigation, and there we had a dual scale for trusting information:

    - how trustworthy is this source?
    - how trustworthy is this source with regards to this type of information?

    (e.g. The Queen as a news source is considered trustworthy, but if The Queen told me the local 7-11 was going to be robbed at 11:30 tonight then I'd doubt the information).

    Maybe that Jesse bloke really does know what he's talking about...

    T

  • Shock as Alan Cox installs [new double glassed] Windows.

    Phew that was close. NOT!!!
  • I think it's funny that so much attention is paid to your origin (i.e. location and language) in advertising engines, and the fact that it's an article about Linux is ignored. I got a random Windows advert here [zdnet.com]. Check it out.
  • I had been using 2.4.[789] for the past month or two on my stinkpad, and noticed some horrible swappage, especially after the system had been up for several days, with terrible interactive performance. After upgrading to 2.4.13, the problems all seem to have mysteriously vanished -- so I'm glad Linus decided to take the risk with the new VM. Hopefully we can approach something like a stable kernel. Sheesh.
  • by Anonymous Coward
    Can somebody explain the differences between Andrea Arcangeli's and FreeBSD's VM designs?

    I've heard many times that the *BSD design is a lot better than the Linux one; is this true?

    Thanks for any comments.
    • This would be interesting for a high level discussion. One of the reasons I switched from Linux to BSD for servers in the mid 90's was because BSD's VM system was miles ahead of Linux. I know both systems have changed a lot since then. I don't know if Linux has caught up and surpassed FreeBSD or if FreeBSD has maintained its lead.
  • by rmadmin ( 532701 ) <{rmalek} {at} {homecode.org}> on Wednesday November 07, 2001 @10:41AM (#2532733) Homepage
    I ran across an archive at OSDN.com that has a video from the 2.5 Kernel Summit (March 30th and 31st, 2001). On the list of videos is 'Future VM Work', presented by Rik van Riel. It's a 1 hour 4 minute video clip, but after listening to it for 5 minutes I knew it was WAY too technical for me. =P Anyways, if you want to see what he said about improving the VM, head over to:

    http://www.osdn.com/conferences/kernel [osdn.com]

    They have Real format in both 56K and 128K streams, MPEG, and MP3 of his speech. Looks interesting if you've got the bandwidth and the time.

  • by TurboRoot ( 249163 ) on Wednesday November 07, 2001 @11:03AM (#2532844)
    We run on a lot of small systems, we're talking 8-16 megs of RAM here and Pentium 75-ish processors. We tried to use Linux once, but when Linux ran out of memory with the old VM, it sucked HARD. I mean, I had processes that were ACTIVE being swapped out of memory completely!

    Why do we use such small systems? Because we want them to perform under extreme load when placed on larger systems. It's smart, really: it's easier to benchmark a few functions on a Pentium 75 than on a 2 GHz Pentium. If your application doesn't run peppy with one user on a Pentium 75, it sure as hell won't support 1000 users on a 2 GHz Pentium.

    That's why we have used FreeBSD all this time; in FreeBSD the VM manager is perfect, and isn't even slated for an upgrade in the near future, due to the fact that it works like it should. If you are using telnet on a FreeBSD machine and _one_ application uses a ton of resources, that one application will run slow, but your telnet session will continue on fine. Try putting 12 megs of RAM in your machine, then compiling PostgreSQL while using a telnet session. You won't even notice the compile on FreeBSD, but you will with Linux.

    Funny enough, this also ties into the earlier article about why Linux isn't used for a lot of large-scale databases. Databases consume HUGE amounts of RAM, and the OS under them has to be peppy about it. Linux in the past has been tuned for desktop/single-user performance, not what those databases need. They need TONS of resources, and quick, _CONSISTENT_ access to them.

    That said, I am very happy to see them getting a better VM. Because my biggest problem with FreeBSD is its crappy Java support: the most recent stable JDK it supports is 1.2.2, and that's in Linux emulation mode!

    So if things work out, and Linux supports Java well and doesn't crap out when it runs out of resources, we will definitely switch to Linux, and life will be good!
    • While we're on the topic of the FreeBSD VM, let's see what Matt Dillon, the FreeBSD VM hacker, has to say about the Linux VM:

      "I think Linux is going through a somewhat painful transition as it moves away from a Wild-West/Darwinist development methodology into something a bit more thoughtful. I will admit to wanting to take a clue-bat to some of the people arguing against Rik's VM work who simply do not understand the difference between optimizing a few nanoseconds out of a routine that is rarely called verses spending a few extra cpu cycles to choose the best pages to recycle in order to avoid disk I/O that would cost tens of millions of cpu cycles later on." (Read the rest of the interview here [osnews.com].)

      Kakemann
  • by Virtex ( 2914 )
    Has anybody else noticed that, since 2.4.10, the reported memory usage appears to be wrong? I noticed in the change logs that this was supposedly fixed in 2.4.13-pre1, but I still see the problem. Running "free" shows that I'm using up 245MB of RAM on the "-/+ buffers/cache" line (I'd paste it here, except Slashdot is rejecting the post due to "lameness filter encountered. *sigh*). Now I know I'm not using 245 MB of RAM (after subtracting out the buffers and cache), and I can prove it by running a program which allocates about 350MB of ram then frees it. When I do that, my memory usage, including swap, drops to about 70-80 MB. Is anybody else seeing this?
  • by TopherC ( 412335 ) on Wednesday November 07, 2001 @12:07PM (#2533166)
    "I took the older VM plus fixes Rik felt would solve all the problems. Linus took Andrea's work. Right now, as of 2.4.14pre, Andrea's VM seems to be beating the pants off Rik's VM. All the current numbers suggest it's the better path."

    Hey, I thought slashdot cited [slashdot.org] a comparison [nks.net] of the (fixed) Rik VM and the AA VM, and came to the conclusion that they performed about the same! They were both MUCH better than the 2.2 (old Rik) VM. What's Alan's evidence that the AA VM is "beating the pants off Rik's VM"? If they really do perform about the same, I would have to side with Alan's original decision to just patch the old VM.

  • A few days ago I thought there was an article about Alan NOT being the maintainer anymore, so he could focus more on "Red Hat customer issues" and other things?

    Has this changed?
  • reported this yesterday. Where have you guys been?
  • by TheLostOne ( 445114 ) on Wednesday November 07, 2001 @03:49PM (#2534322)
    Really.. why is everybody getting their panties all in a bunch here? Two kernels? My big toe! Making Linux look fragmented?! WHAT?!

    No I'm not a moron.. heh.. well maybe.. but I have two points..

    1- There are not '2 kernels'.. this is crap.. there is ONE Linux kernel (currently at 2.4.14.. which is development anyway.. but that's for another post ;) and a couple of different development kernels that most 'normal users' (i.e. not kernel hackers) will never touch. I think we can agree that those adventurous users who delve into pre and ac patches at least know enough to know they are called pre/ac for a reason. And yes.. I know there were 2 kinds of kernels.. at a sort of esoteric level, but when we say 'the Linux kernel' in the context of a normal running system, doesn't that rather imply the stable branch of it? How many distros ship with ac by default... not in 'super expert mode'? Too many distros to say none.. but I've never seen one..

    2- How is this ever possibly going to give the impression that Linux is too fragmented compared to anything?! I figure (and I know this is a grievous oversimplification) that there are basically two kinds of users.. those who know and those who don't. Those who know (IBM, Compaq.. those companies we want so badly) will know enough about the story to realize this is a disagreement about timing, if anything, and no big deal... Those who don't haven't heard about this anyway.

    So other than the kernel developers needing to run both.. what's the problem?
