Linux 2.6 Multithreading Advances

chromatic writes "Jerry Cooperstein has just written an excellent article explaining the competing threading implementations in the upcoming 2.6 kernel for the O'Reilly Network."
  • While it's great that Linux has excellent multithreading support, it's a shame, however, that many programmers do not take advantage of multi-threading in their programs.

    The worst example of this was the Quake I source code, which was used for many games, including Half-Life. The code was not multi-threaded, and the network code sat idle while everything else drew -- adding about 20ms of lag, unless you cut the frame rate down to about 15 or so.

    The problem wasn't fixed in Half-Life -- the most popular multiplayer game of all time -- until sometime in 2000. We can only imagine how many other programs are not taking full advantage of multithreading.
    • Actually, Half-Life was based on Quake II, but for all I know they may both derive heavily from the original Quake.
      • In terms of software archeology, there is an important intermediate ancestor.

        Quake's original networking was meant for LANs only -- the fact that it was even barely playable over the internet surprised the authors.

        id Software soon released QuakeWorld free to Quake owners. It used the same interface and most of the same graphics resources as Quake, so it's arguably not a different program. But it came as a separate executable, with many Quake features removed (like monsters). And most importantly, the networking code was entirely rewritten.

        It is that code that Quake II and its successors derived from.
      • by Ndr_Amigo ( 533266 ) on Saturday November 09, 2002 @04:12AM (#4631462)
        I really don't understand where people get that ridiculous idea.

        Half-Life was mostly based off Quake1. The network protocol and prediction code was taken from QuakeWorld. Some small Quake2 functionality was merged later on.

        The initial release of Half-Life was approximately 65% Quake1, 15% QuakeWorld, 5% Quake2 and 15% original (not including the bastardisation of the code into MFC/C++).

        And yes, people from Valve have confirmed the base was Quake1, not (as some people continue to claim, and I really wish I knew where the rumor started) Quake2.

        Also, the percentages are based off some reverse engineering work I did a while ago when I was playing with making a compatible Linux clone of Half-Life.

        (FYI, I took the Quake1 engine... added Half-Life map loading and rendering within about three hours... Half-Life model support took about four days, and adding a mini-WINE DLL loader for the gameplay code took about a week. I gave up on the project when it came down to having to keep it up-to-date with Valve's patches.)

        - Ender
        Founder, http://www.quakesrc.org/
        Project Leader, http://www.scummvm.org/
        • Send the code for that Half-Life project to the creators of the Tenebrae engine, and you will be highly revered.

          Half-Life (or better yet, Counter-Strike) with Doom III graphics would just own.

          http://tenebrae.sourceforge.net [sourceforge.net]
          • I know most of the Tenebrae developers, and trust me, none of them have ANY time to do this.

            Besides which, Tenebrae's code is just a proof of concept, and not remotely suitable for a real workable port.

            That, and I'm too busy with ScummVM to help anyone implement this stuff, it's very disjointed and hacky. And I'm already revered, thanks :o)

            (If anyone does want to play with this type of thing themselves, check the Forums section of my QuakeSrc site. Both the Half-Life map rendering code, and an older version of my Half-Life MDL renderer, are floating around in several engines. I'm sure someone on the forums would be happy to help you.

            You're on your own on the network protocol and merging the WINE stuff, though. I did this port some years ago, and I don't have much of the source left.)

            - Ender
            Founder, http://www.quakesrc.org/
            Project Leader, http://www.scummvm.org/
          • Counter-Strike could be amazing with high-end graphics, but perhaps there is a reason that Doom III isn't going to have really advanced multiplayer? Carmack said there would be some multiplayer, but not that much. Could it be that Doom III is too cumbersome to play over the net? Maybe too cumbersome to play over a LAN? (*ahem*Blood2*ahem*)

            I really think that a Counter-Strike realism factor could rock, but what about the new Nvidia graphics language everyone keeps talking about? Maybe it won't be hard for us to make leaps toward a fast net CS game with super cool graphics. Maybe the new code would enable the CS team to split off from the Half-Life engine and do their own thing?

            If you look at the stats on Gamespy [gamespy.com], Counterstrike takes the biggest share of online use. Take that and think about Valve and how much money they made on the backs of that project!

            *shudder*

            To clear up my point, I think that a glossy Doom3 version of CS might be immersive, but it might also lack the networking capability that the current version has. CS today doesn't take up many resources on higher-end systems, or even lower-end ones. Today it's pretty cheap to get CS enabled.

            Maybe that plays a factor.
    • by Minna Kirai ( 624281 ) on Saturday November 09, 2002 @03:53AM (#4631420)
      Many coders are disinclined to use threads, because they don't necessarily improve code speed.

      Whether or not multithreading will accelerate any particular program has to be determined case-by-case. And for most software, the deciding factor should be whether threads will simplify development and correctness (theoretically they can, but lots of developers don't understand threads and use them wrong).

      My company has some realtime networked game for which threading was an impediment. Both the rate/duration of screen refreshes and network transmissions were low enough that they didn't usually interfere with each other in the same thread. But using thread-safe versions of standard library functions was degrading every other part of the program with constant locking/unlocking.

      So nonthreaded was faster. (Maybe cleverer people could've made special thread-unsafe alternative functions to use in contexts where we know inter-thread race conditions won't occur. But munging around with 2 standard libraries in one program is riskier than we'd like to deal with)
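
      For what it's worth, glibc already ships exactly the kind of "thread-unsafe alternative functions" wished for here, at least for stdio: the *_unlocked variants. A minimal sketch of my own (not from the parent's codebase), assuming one thread owns the FILE* for the whole loop:

```c
#include <stdio.h>

/* Count newlines without paying for a stdio lock per byte:
 * flockfile() takes the stream lock once, and getc_unlocked()
 * then skips the per-call locking that plain getc() does in a
 * thread-safe libc. */
long count_newlines(FILE *fp)
{
    long n = 0;
    int c;

    flockfile(fp);
    while ((c = getc_unlocked(fp)) != EOF)
        if (c == '\n')
            n++;
    funlockfile(fp);
    return n;
}
```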

      • by awol ( 98751 ) on Saturday November 09, 2002 @05:33AM (#4631553) Journal

        Many coders are disinclined to use threads, because they don't necessarily improve code speed.



        Further, there are a number of cases where writing a single-threaded application has definite benefits -- for example, applications where deadlocks or race conditions would be an inherent problem in a multithreaded implementation, whilst a single thread has none of these problems.


        • by Salamander ( 33735 ) <`jeff' `at' `pl.atyp.us'> on Saturday November 09, 2002 @10:31AM (#4632094) Homepage Journal
          applications where deadlocks or race conditions would be an integral problem in a multithreaded implementation whilst a single thread has none of these problems.

          That's a common myth. In fact, there are some kinds of deadlock that do go away, but there are also some kinds that merely change their shape. For example, the need to lock a data structure to guarantee consistent updates goes away, and so do deadlocks related to locking multiple data structures. OTOH, resource-contention deadlocks don't go away. You might still have two "tasks" contending for resources A and B, except that in the non-threaded model the tasks might be chained event handlers for some sort of state machine instead of threads. If task1 tries to get A then B, and task2 tries to get B then A, then task1's "B_READY" and task2's "A_READY" events will never fire and you're still deadlocked. Sure, you can solve it by requiring that resources be taken in order, but you can do that with threads too; the problem's solvable, but isn't solved by some kind of single-threading magic.
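
          To make the A-then-B / B-then-A scenario concrete, here is a hedged pthreads sketch of my own (the task and mutex names are illustrative, not from any real program); the fix is the same lock-ordering discipline whether the "tasks" are threads or chained event handlers:

```c
#include <pthread.h>

pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

/* task1 takes A then B; task2 takes B then A -- classic deadlock. */
void *task1(void *arg) {
    pthread_mutex_lock(&A);
    pthread_mutex_lock(&B);   /* may block forever if task2 holds B */
    /* ... work on both resources ... */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

void *task2_deadlocky(void *arg) {
    pthread_mutex_lock(&B);
    pthread_mutex_lock(&A);   /* opposite order: can deadlock with task1 */
    pthread_mutex_unlock(&A);
    pthread_mutex_unlock(&B);
    return NULL;
}

/* The fix: impose one global lock order (always A before B).
 * The same rule works for event-driven "tasks" that hold resources
 * across events. */
void *task2_fixed(void *arg) {
    pthread_mutex_lock(&A);
    pthread_mutex_lock(&B);
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}
```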

          I've written several articles on this topic for my website in the past. In case anyone's interested...

          • Your comment makes even more sense if you take it to its logical extreme. You could implement a virtual machine as a single thread on a real machine. This virtual machine can have multi-threading that can experience deadlock, even though it is a single process on the real machine.
      • There are definitely cases where using multiple threads on a single-processor system can degrade performance (switching, locking, etc.). Now that dual- and quad-processor systems have become common, and with hyperthreading right around the corner, multithreading will become a better-performing and therefore more frequently used approach. I, for one, am thrilled to see these improvements arrive.
        • There are definitely cases where using multiple threads on a single-processor system can degrade performance (switching, locking, etc.).

          This is only a factor with a poor multithreaded design. By contrast, single-threaded programs always fail to take advantage of multiple processors, no matter how well they're designed otherwise.

          • This is only a factor with a poor multithreaded design.
            Wrong. Every context switch burns hundreds, if not tens of thousands, of clock cycles. As the parent comment said, changing from single threading to multithreading on a uniprocessor system necessarily reduces performance. How bad it is depends on the program design, but there is always some slowdown.
            By contrast, single-threaded programs always fail to take advantage of multiple processors, no matter how well they're designed otherwise.
            Wrong again. It's trivial to run multiple copies of a single-threaded program on the different CPUs, and let them interact over IPC. One benefit is that this approach scales trivially to large numbers of networked processors. (How badly the latency hurts depends on the network and the workload, of course.) Another benefit is that catastrophic failure of one process does not necessarily corrupt the state of another process. (While one thread crashing is almost certain to bring down an entire multi-threaded program.)
            • Every context switch burns hundreds, if not tens of thousands, of clock cycles.

              A well-designed multi-threaded implementation will organize its thread usage in such a way that under light load and/or on a single processor it will not have significantly more context switches than a single-threaded equivalent. Under such conditions it will exhibit the same performance characteristics as that single-threaded version, and yet it will also be able to take advantage of inherent parallelism and multiple processors when they exist. Been there.

              Bad multithreaded implementations schedule so many computationally active threads that TSE switches are inevitable. Bad multithreaded implementations force two context switches per request as work is handed off between "listener" and "worker" threads. Bad multithreaded implementations do lots of stupid things, but not all multithreaded implementations are bad. The main overhead involved in running a well-designed multithreaded program on a uniprocessor is not context switches but locking, and that will be buried in the noise. Done that.

              A handful of extra context switches per second and a fraction of a percent of extra locking overhead are a small price to pay for multiprocessor scalability.

              It's trivial to run multiple copies of a single-threaded program on the different CPUs, and let them interact over IPC.

              Trivial, but stupid. You really will context-switch yourself to death that way, as every occasion where you need to coordinate between processes generates at least one IPC and every IPC generates at least one context switch (usually two or more)...and those are complete process/address-space switches, not relatively lightweight thread switches. That's how to build a really slow application.

              this approach scales trivially to large numbers of networked processors.

              No, it doesn't. There's simply no comparison between the speed of using the same memory - often the same cache on the same processor, if you do things right - and shipping stuff across the network...any network, and I was working on 1.6Gb/s full-duplex 2us app-to-app round-trip interconnects five years ago. Writing software to run efficiently on even the best-provisioned loosely-coupled system is even more difficult than writing a good multithreaded program. That's why people only bother for the most regularly decomposable problems with very high compute-to-communicate ratios.

              catastrophic failure of one process does not necessarily corrupt the state of another process. (While one thread crashing is almost certain to bring down an entire multi-threaded program.)

              Using separate processes instead of threads on a single machine might allow your other processes to stay alive if one dies, but your application will almost certainly be just as dead. The causal dependencies don't go away just because you're using processes instead of threads. In many ways having the entire application go down is better, because at least then it can be restarted. When I used to work in high availability, a hung node was considered much worse than a crash, and the same applies to indefinite waits when part of a complex application craps out.

              • Let me preface this by saying that everything depends on the workload. Pretty much every approach is optimal for somebody's real-world workload.
                You really will context-switch yourself to death that way, as every occasion where you need to coordinate between processes generates at least one IPC and every IPC generates at least one context switch (usually two or more)...and those are complete process/address-space switches, not relatively lightweight thread switches.
                On a reasonable OS on a single-CPU machine, it isn't significantly worse than multi-threading. (Especially if the multi-threading system is using high-performance kernel threads.) On a multi-CPU machine, which I was referring to, the difference relative to multithreading is minuscule: a couple of system calls.
                Writing software to run efficiently on even the best-provisioned loosely-coupled system is even more difficult than writing a good multithreaded program.
                I must admit that I'm looking toward the future. I'm interested in program architectures that will saturate the machines of ten or twenty years in the future. A one-CPU future-generation system will have worse latency problems than a current-generation networked system. The future of high-performance computing is message passing.

                Look at it this way: the best current transistors can switch at a rate of 200 GHz. That's a clock period of 5 picoseconds. These transistors will be in mass production within 20 years. (Possibly within a few years, but let's be pessimistic.) An electrical signal can travel at most about 1.5 millimeters in that period of time. That means your arithmetic-logic unit cannot even fetch an operand from the L1 cache in a single clock cycle. These processors will use asynchronous message-passing logic at the very core, and farther out the system will be entirely based on messages. HyperTransport is the writing on the wall.

                Also consider current-generation transactional systems. The request-producer doesn't have to block waiting for the first response: it can keep queueing requests. If the scheduler is good, this amounts to batching up a bunch of requests, processing them in one fell swoop, then sending the responses in one fell swoop. Of course whether this actually happens depends on the workload and the OS.

                Incidentally, I'm typing this on a Unix machine that runs the graphics subsystem in a separate process. Every screen update requires at least one context switch. Does it suck? Not at all. The X11 protocol allows requests to be batched up and handled all at once. Whether the application draws a single character or renders a 3-D scene doesn't have much influence on the context switch overhead. Again, the appropriate solution depends on the workload, and the proper messaging abstraction can make separate processes quite practical. And if your compute jobs cannot possibly fit in a single machine, you have no choice but to use multiple processes.

                Using separate processes instead of threads on a single machine might allow your other processes to stay alive if one dies, but your application will almost certainly be just as dead.
                Not at all. What about Monte Carlo simulations? Losing the occasional random process is irrelevant. What about artificial intelligences running on really big machines? Getting erroneous answers from subsystems, or not getting timely answers at all, will be a fact of a-life. Being able to terminate processes at will will be critical to building reliable AIs. What about graphics systems? Losing a frame or a part thereof is nothing compared to a full crash. What about speech recognition systems? A momentary interruption for one user is nothing compared to a total disruption of all users.

                Even in the present day, there are plenty of practical workloads that can withstand a subsystem dying, but would rather not see the whole system die hard. If the system is built on a foundation of multithreading, the only failure mode is a total crash.

                • I must admit that I'm looking toward the future.

                  No, you're looking toward the past. In the future, multi-CPU machines will become more common, not less, and learning to use them efficiently will also become more important. Within the box, multithreading will perform better than alternatives, even if there's message passing going on between boxes.

                  These processors will use asynchronous message-passing logic at the very core, and farther out the system will be entirely based on messages. HyperTransport is the writing on the wall.

                  Yes, I'm somewhat familiar with transitions between message passing and shared memory. Remember that fast interconnect I mentioned, from five years ago? It was at Dolphin [dolphinics.com]. On the wire, it was message passing. Above that, it presented a shared-memory interface. Above that, I personally implemented DLPI/NDIS message-passing drivers because that's what some customers wanted.

                  The fact is that whatever's happening down below, at the programming-model level it's still more efficient to have multiple tasks coordinate by running in the same address space than by having them spend all their time packing and unpacking messages. The lower-level message-passing works precisely because it's very task-specific and carefully optimized to do particular things, but that all falls down when the messages have to be manipulated by the very same processors you were hoping to use for your real work.

                  The future of high-performance computing is message passing

                  ...between nodes that are internally multi-processor.

                  The request-producer doesn't have to block waiting for the first response: it can keep queueing requests.

                  Yes, yes, using parallelism to mask latency. Yawn. Irrelevant.

                  Every screen update requires at least one context switch. Does it suck? Not at all.

                  If context switches aren't all that bad, why were you bitching about context switching in multithreaded applications? Hm. The fact is, a context switch is less expensive than a context switch plus packing/unpacking plus address manipulation plus (often) a data copy. Your proposal is to use multiple processes instead of threads, even within one box. When are you going to start explaining how that will perform better, or even as well -- when it won't "suck", to use your own charming phrase?

                  What about Monte Carlo simulations? Losing the occassional random process is irrelevant. What about artificial intelligences running on really big machines?

                  Please try to pay attention. I already referred to regularly decomposable applications with high compute-to-communicate ratios, and that's exactly what you're talking about. Yes, what you say is true for some applications, but does it work in general? No. As I said, I've worked in high availability. I've seen database app after database app, all based on message passing between nodes, lock up because one node froze but didn't crash. Everyone's familiar with applications hanging when the file server goes out, and that's not shared memory either. Message passing doesn't make causal dependencies go away.

                  If the system is built on a foundation of multithreading, the only failure mode is a total crash.

                  Simply untrue. I've seen (and written) plenty of multithreaded applications that could survive an abnormal thread exit better than most IPC-based apps could survive an abnormal process exit.

                  • In the future, multi-CPU machines will become more common, not less, and learning to use them efficiently will also become more important. Within the box, multithreading will perform better than alternatives, even if there's message passing going on between boxes.
                    That is identically the point I was trying to make. However, as I pointed out, core clock speeds are getting faster and faster. A 200 GHz clock speed will be practical within perhaps 5-10 years. That's an instruction cycle time of 5 picoseconds.

                    As I also pointed out, the minimum time to send a round-trip signal from one CPU to another is determined by the speed of light. Suppose you have two processors that are 6 centimeters apart. The round-trip time for light is 400 picoseconds; electrical signals travel at roughly half that speed, so call it 800 picoseconds. Therefore the act of merely acquiring an inter-thread lock will waste about 160 clock cycles waiting for the atomic lock instruction to complete, assuming there is no contention. After that the thread will perform memory reads on shared data. Each read from a cold cache line will cost a similar number of wait states while the data is snooped from the hot cache. Complex activity can easily touch 50 cache lines, which is on the order of 8,000 clock cycles.

                    And that's the best-case scenario, where the programmer has flawlessly laid out the variables in memory to minimize cache transfers. In the real world it is appallingly easy to cause cache ping-pong, where two processes try to use the same cache line, and it keeps "bouncing" back and forth between multiple CPUs.
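
                    As an aside, here is a small C sketch of that cache ping-pong effect and the usual cure (my own illustration; the 64-byte line size is an assumption that varies by CPU): two per-CPU counters sharing a line will bounce it between processors, while padding each counter to its own line avoids that.

```c
#define CACHE_LINE 64   /* assumed line size; check the actual CPU */

/* Bad: both counters live in the same cache line, so updates from two
 * CPUs make the line "bounce" between their caches. */
struct bad_counters {
    long cpu0_count;
    long cpu1_count;
};

/* Better: one counter per cache line, so each CPU owns its own line. */
struct padded_counter {
    long count;
    char pad[CACHE_LINE - sizeof(long)];
} __attribute__((aligned(CACHE_LINE)));

struct padded_counter per_cpu_counters[2];   /* e.g. one per CPU */
```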

                    Oh, and that's assuming a zero-overhead protocol for coherency. Realistically you should expect a few hundred picoseconds or so of additional round-trip latency.

                    Finally, these numbers are for one node that contains a few CPUs. Going between nodes will be vastly worse. A four nanosecond cable (i.e., a one foot cable) between nodes means about 1000 wait states to acquire a lock. A two microsecond RTT (e.g., Myrinet) means 400,000 wait states.

                    The fact is that whatever's happening down below, at the programming-model level it's still more efficient to have multiple tasks coordinate by running in the same address space than by having them spend all their time packing and unpacking messages.
                    A shared memory space implies a coherency mechanism. A coherency mechanism implies round-trip messages. ("I need this, can I have it?" ...wait... "Yes, you can have it, here it is." ...wait...) Dependence on round-trip latency implies that the program is I/O bound.
                    The lower-level message-passing works precisely because it's very task-specific and carefully optimized to do particular things, but that all falls down when the messages have to be manipulated by the very same processors you were hoping to use for your real work.
                    Obviously the communication/networking hardware will have to be built directly into the CPU core. That's why I said HyperTransport is the writing on the wall. These CPUs will also have special instructions for message passing. Incoming messages will be stored in priority FIFOs.
                    Yes, yes, using parallelism to mask latency. Yawn. Irrelevant.
                    The I/O will be (at least) a thousand times slower than the core. Code that isn't batching up data 1,000 clock cycles in advance is pissing away CPU capability. Code that uses round-trip synchronization is benchmarking the I/O latency.

                    The thing is, I/O latency isn't improvable. You can only put chips so close together, and the speed of light is sort of non-negotiable. CPU speed, however, is improvable. So the ratio of I/O latency to clock period is going to keep increasing. Code that doesn't batch up data will not run any faster on the machines of tomorrow.

                    If context switches aren't all that bad, why were you bitching about context switching in multithreaded applications?
                    I wasn't bitching about it. I was correcting the misstatement that its influence on performance was zero.
                    I already referred to regularly decomposable applications with high compute-to-communicate ratios, and that's exactly what you're talking about. Yes, what you say is true for some applications, but does it work in general?
                    I was pointing out that all applications that want maximum performance will be designed that way. They will do whatever it takes to deserialize the algorithms. If several CPUs need to know the results of a simple calculation, they will each calculate it themselves, because calculating it in one place and distributing the results would take a thousand times longer.

                    Take a look at what the Linux kernel is doing sometime. Everything is moving to per-processor implementations. Each processor gets its own memory allocator, thread/process queue, interrupt handlers, and so forth. Inter-CPU locks and shared memory are avoided like the plague. They know that the I/O-to-core clock ratio is bad, and it's going to get much, much worse.

                    • core clock speeds are getting faster and faster...Suppose you have two processors that are 6 centimeters apart...[lots of other irrelevant drivel deleted]

                      Gee, maybe all that's why people are going to things like multiple cores on one die, hyperthreading, etc. Programming-wise, these present the same interface as multiple physical CPUs, but they also ameliorate many of the problems you mention...speaking of which, everything you're presenting as a "killer" for multithreading is even worse for your multi-process model.

                      A two microsecond RTT (e.g., Myrinet)

                      As a former Dolphin employee, I have to point out that Myrinet was never that fast.

                      Dependence on round-trip latency implies that the program is I/O bound.

                      You remember that little thing about using parallelism to mask latency? Most serious programs outside of the scientific community are I/O bound anyway and the whole point of multithreading is to increase parallelism.

                      Obviously the communication/networking hardware will have to be built directly into the CPU core.

                      Getting rather far afield here, aren't you?

                      I wasn't bitching about [context switching]. I was correcting the misstatement that its influence on performance was zero

                      You're forgetting that an operation's effect on performance is a product of its cost and frequency. Go read H&P; it'll spell this out for you much better than I can.

                      I was pointing out that all applications that want maximum performance will be designed that way.

                      What about those that can't? That's a lot of applications, including important ones like databases which must preserve operation ordering across however many threads/processes/nodes are in use. You can't just point to some exceptional cases that are amenable to a particular approach, and then wave your hands about the others. Well, you can - you just did - but it doesn't convince anyone.

                      Take a look at what the Linux kernel is doing sometime.

                      Since you're practically an AC I can't be sure, but odds are pretty good that I know more about what the Linux kernel is doing and excellent that I know more about what kernels in general are doing. Your appeal to (anonymous) authority won't get you anywhere.

                      The real point that we started with is your claim that any multithreaded application will "suck". That statement only has meaning relative to other approaches that accomplish the same goals. Are you ever going to get around to backing that up, or will you just keep going around and around the issue in ever-widening circles hoping I'll get nauseous and quit?

                    • Gee, maybe all that's why people are going to things like multiple cores on one die, hyperthreading, etc. Programming-wise, these present the same interface as multiple physical CPUs, but they also ameliorate many of the problems you mention...
                      Hyperthreading doesn't gain a lot. (Where "a lot" == capable of improving performance by a factor of 10.) Multi-core dies run out of steam at 2-4 cores/die. (Maybe it's 8-16. The point is it'll never get to 256.) And multi-core dies will still have substantial latency problems. Multi-chip modules are better than pins-and-traces, but not by a lot.
                      ...everything you're presenting as a "killer" for multithreading is even worse for your multi-process model.
                      I was describing a multi-process system that ran in batch mode. To the extent that you can fit your problem into that framework, it makes comm latency much less important.
                      Getting rather far afield here, aren't you?
                      Huh? Describe the problem -> describe a solution.
                      Most serious programs outside of the scientific community are I/O bound anyway and the whole point of multithreading is to increase parallelism.
                      By "I/O bound" I mean "I/O bound at the ping rate". Which means "terribly slow".
                      You're forgetting that an operation's effect on performance is a product of its cost and frequency.
                      To repeat, I did not say it was important or unimportant, I said it was not identically zero.
                      What about those that can't?
                      They'll see no benefits. However very few problem spaces are truly sequential, especially if you are willing to trade latency for throughput.
                      That's a lot of applications, including important ones like databases which must preserve operation ordering across however many threads/processes/nodes are in use.
                      Indeed. Pipelining the algorithms will take considerable cleverness.

                      I wonder if databases will be that hard, though. Good ones already provide on-line replication and failover, multi-version concurrency, and transactions that automatically roll-back if a collision is detected.

                      You can't just point to some exceptional cases that are amenable to a particular approach, and then wave your hands about the others. Well, you can - you just did - but it doesn't convince anyone.
                      Did I say that all programs will be easily adaptable to that approach? No. I said that those programs that do not adapt will tend to have poor performance.
                      Since you're practically an AC I can't be sure, but odds are pretty good that I know more about what the Linux kernel is doing and excellent that I know more about what kernels in general are doing.
                      Ah, I state an inarguable and easily-verified fact, you talk about how much more you know. Smooth move, Ex-Lax.
                      Your appeal to (anonymous) authority won't get you anywhere.
                      Look, Mr. Smarty Pants, I don't have the time to write a frickin tutorial on things that are common knowledge, where the reader can find enlightenment at their nearest search engine nearly as fast as I can write an essay on the topic.

                      But since you can't be arsed to do it, here's a link to search Google for "Linux per-cpu" [google.com]. See? Tens of thousands of hits. If you add "allocator" to the search criteria, this [lwn.net] is the first hit. It assumes background knowledge, but should make sense.

                      The real point that we started with is your claim that any multithreaded application will "suck". That statement only has meaning relative to other approaches that accomplish the same goals.
                      Ladies and gentlemen, we have just lost cabin pressure.

                      The statement was "If you have a few million tasks to do, pretty much any threading system is going to suck."

                      The context makes it clear that "tasks" means "things that make the threads talk to each other". And it will, indeed, truly and royally suck.

                    • The statement was "If you have a few million tasks to do, pretty much any threading system is going to suck."


                      The context makes it clear that "tasks" means "things that make the threads talk to each other". And it will, indeed, truly and royally suck.

                      ...and anything else - most especially the approach you espouse - will truly and royally suck more. Life just sucks sometimes; too bad, you lose.

    • by Silh ( 70926 ) on Saturday November 09, 2002 @03:55AM (#4631426)
      While Quake 1 was developed on NeXT, the target platform at that time would have been DOS, so multithreading would be a bit of a problem...

      As to further licensees of the engine, revamping the engine to use multithreading was probably not a very high priority in making a game.

      On the other hand, it's a different matter for someone writing an engine from scratch.
    • While it's great that Linux has excellent multithreading support, it's a shame, however, that many programmers do not take advantage of multi-threading in their programs.

      The problem wasn't fixed in Half-Life...

      Heh... I just wanted to point out that they probably need to port it to Linux before they can take advantage of the elite Linux multithreading.

    • Yeah yeah yeah... When life isn't perfect, blame Abrash...

      Troll! ;)

      ---
      (And yes, Mike Abrash did WinQuake, not Carmack)

    • by 0x0d0a ( 568518 ) on Saturday November 09, 2002 @11:21AM (#4632281) Journal
      While it's great that Linux has excellent multithreading support, it's a shame, however, that many programmers do not take advantage of multi-threading in their programs.

      Multi-threading is an easy way to cut down response latency in programs and produce a responsive UI. Unfortunately, it also has many drawbacks -- it can actually be slower (due to having to maintain a bunch of locks... you're usually only better off with threads if you have very few), and it's one of the very best ways to introduce very hard-to-debug bugs.

      I do think that a lot of GTK programmers, at least, block the UI when they'd be better off creating a single thread to handle UI issues and hand this data off to the core program. Also, when doing I/O that doesn't affect the rest of the program heavily, it can be more straightforward to use threads -- if you have multiple TCP connections running, it can be worthwhile to use a thread for each.

      There are a not insignificant number of libraries that are non-reentrant, and have issues with threads. Xlib, gtk+1 (dunno about 2), etc.

      Threading is just a paradigm. Just about anything you can manage to pull off with threading you can pull off without threading. The question is just which is cleaner in your case -- worrying about the interactions of multiple threads, or having more complex event handlers in a single-threaded program.

      The other problem is that UNIX has a good fork() multi-process model, so a lot of times when a Windows programmer would have to use threads, a UNIX programmer can get away with fork().

      So you only really want to use threads when:
      * you have a number of tasks, each of which operates mostly independently;
      * when these tasks *do* need to affect each other, they do so with *large* amounts of data (so the traditional process model doesn't get as good performance);
      * you have more CPU-bound "tasks" than CPUs, so you benefit from thread context switches being cheaper than the process switches of the fork() model;
      * you are using reentrant libraries in everything that the threads must use.

      (A minimal thread-per-connection sketch follows below.)
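
      A hedged sketch of the thread-per-TCP-connection case mentioned above, assuming a pthreads environment; error handling is trimmed and the echo loop is just a placeholder for real per-connection work:

```c
#include <pthread.h>
#include <sys/socket.h>
#include <unistd.h>

/* One detached thread per accepted socket. */
static void *handle_client(void *arg)
{
    int fd = (int)(long)arg;
    char buf[512];
    ssize_t n;

    while ((n = read(fd, buf, sizeof buf)) > 0)
        write(fd, buf, n);        /* echo back; stand-in for real work */
    close(fd);
    return NULL;
}

void serve(int listen_fd)
{
    for (;;) {
        int fd = accept(listen_fd, NULL, NULL);
        if (fd < 0)
            continue;

        pthread_t tid;
        pthread_attr_t attr;
        pthread_attr_init(&attr);
        pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
        if (pthread_create(&tid, &attr, handle_client, (void *)(long)fd) != 0)
            close(fd);            /* couldn't spawn a thread */
        pthread_attr_destroy(&attr);
    }
}
```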
    • While it's great that Linux has excellent multithreading support

      Actually, I believe the point is that Linux will have excellent thread support. In other words, when 2.6/3.0 is stable enough for production use (2008? :) )

    • by Kynde ( 324134 )
      While it's great that Linux has excellent multithreading support, it's a shame, however, that many programmers do not take advantage of multi-threading in their programs.

      What a load of crap. There are plenty of threaded applications for Linux. The problem is that all these inexperienced threads-every-fucking-where programmers that Java spawned fail to understand that threading is NOT the solution for everything.
      Besides, in Unix-style coding few things are as common as forking about, which in many cases is what people also do with Java all the time. Real single-memory-space cloned processes (i.e. threads) have fewer uses than people actually think.

      The worst example of this was the Quake I source code, which was used for many games, including Half-Life. The code was not multi-threaded, and the network code sat idle while everything else drew -- adding about 20ms of lag, unless you cut the frame rate down to about 15 or so.

      If you'd EVER actually used threads in Linux, you'd know that if there are busy threads you would still get to run at most once every 20ms, and quite likely far less often.

      It's easy to try out, even. Write code that performs usleep(1) or sched_yield() every now and then and checks how long that takes. Especially try the case where you put a few totally separate processes in the background doing while(1); loops. There's your 20ms and way more...
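
      A rough sketch of that experiment (my own; it uses gettimeofday since this is a 2.4-era discussion): start a few while(1); loops in other shells, then run this and look at how long each yield really takes under load.

```c
#include <stdio.h>
#include <sched.h>
#include <sys/time.h>

int main(void)
{
    struct timeval t0, t1;
    int i;

    for (i = 0; i < 10; i++) {
        gettimeofday(&t0, NULL);
        sched_yield();                 /* or usleep(1) */
        gettimeofday(&t1, NULL);

        long us = (t1.tv_sec - t0.tv_sec) * 1000000L
                + (t1.tv_usec - t0.tv_usec);
        printf("yield took %ld us\n", us);   /* with busy loops running,
                                                expect whole timeslices */
    }
    return 0;
}
```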

      When Quake 1 was written, 20 ms of lag was considered NOTHING. At that time online gaming was limited mainly to XPilot and MUDs. Quake started the boom, and naturally the demands changed, too. Thus id wrote the QuakeWorld client, which was quite different.

      Besides, a brutal fact is that a single-threaded process can always be made faster than _ANY_ multithreaded approach, although it's often quite difficult. Moreover, threading is never chosen as an approach due to performance, but rather because it simplifies the structure in some cases.

      Given the amount of optimization already present in Quake 1, I feel quite safe saying that the lack of threads in DOS had jack to do with Quake 1 being single-threaded.
      • Besides, a brutal fact is that a single-threaded process can always be made faster than _ANY_ multithreaded approach, although it's often quite difficult. Moreover, threading is never chosen as an approach due to performance, but rather because it simplifies the structure in some cases.


        Apparently you never use multi-CPU computers?


        In any case, for some tasks, raw speed isn't as important as low latency. By using multiple threads with a good scheduler and a well-thought-out priority system, you can end up with a very responsive program, something which would be much harder to do with a single thread. See BeOS's GUI for a good example.

      • Um, if you're getting a minimum of 20ms of latency, then your kernel is borked. You do realize that if the priority of the network thread is higher than that of the compute thread, then it'll preempt the compute thread? So it'll get to run whenever it's ready, not just at time-slice boundaries. Granted, it's 10ms on current Linux kernels, but nothing's stopping you from jacking HZ to 1000.
    • GUI programs particularly suck for this (though it's getting better). The lack of threading in programs such as Galeon and Konqueror is blatant. While the rendering engine is doing something complex, the rest of the program stops responding to events. Compare this to the behavior of a highly threaded program like Pan, where you can send it any number of requests, and the UI will still respond to the user.
  • ...so we don't end up with a lesser version because they don't like the other implementation.

    I thought that was what OSS was about, getting the best of all worlds? ;)

    --
    Regards,
    Helmers
  • by Second_Derivative ( 257815 ) on Saturday November 09, 2002 @04:04AM (#4631448)
    From what I understand NGPT is mainly a user-space thing. Why not go with the 1:1 one in the kernel (NPTL or whatever), and just have a libpthread.so (NPTL runtime) and a libpthread-mn.so (NGPT)? From a programmer's standpoint, when I say pthread_create() I want to know exactly what that does: with NPTL I know what happens. With NGPT I don't. Also, the old rule of "Don't pay for what you don't use" applies. If I'm going to have just, say, four threads, those four threads are going to run better as four kernel threads as opposed to 2 LWPs dynamically mapped between 4 thread contexts.
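
    For what it's worth, POSIX does give the programmer a knob for part of this: contention scope. On an M:N library, PTHREAD_SCOPE_PROCESS threads are multiplexed in user space, while PTHREAD_SCOPE_SYSTEM asks for a kernel-scheduled entity; a 1:1 library only ever offers the latter. A hedged sketch (the call may simply return ENOTSUP, depending on the library):

```c
#include <pthread.h>

/* Ask for a kernel-scheduled thread regardless of the library's
 * default scope. On a pure 1:1 library this is effectively a no-op. */
int create_kernel_scheduled(pthread_t *tid, void *(*fn)(void *), void *arg)
{
    pthread_attr_t attr;
    int rc;

    pthread_attr_init(&attr);
    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);  /* may be ENOTSUP */

    rc = pthread_create(tid, &attr, fn, arg);
    pthread_attr_destroy(&attr);
    return rc;
}
```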

    But, again, I might want to write a server of some sort which handles hundreds of thousands of connections at once, but 99% are idle at any given time and the other 1% require some nontrivial processing sometimes and/or a long stream of data to be sent without prejudicing the other 99%. Now, for ANY 1:1 threading system, I can't just create x * 10^5 threads because the overhead would be colossal. But equally so, implementing this with poll() is going to be horrid, and if the amount of processing done on a connection is nontrivial and/or DoS'able, there's going to be tons of hairy context management code in there, until lo and behold you end up with a 1:N or M:N scheduling implementation yourself. NGPT could be very useful as a portable userspace library here, as these people have implemented an efficient M:N scheduler under GPL, something that hasn't existed before and could be very useful. I think these libraries might be much more complementary than the article makes out.
    • If you have hundreds of thousands of connections you should be using aio, which is the new scalable replacement for lots of polls...
      • AIO is a way of beginning a large series of IO operations and leaving the kernel to complete them while you get on with something else (or that's the best definition I can find so far). That still doesn't solve the problem of how to efficiently serve a small number of active connections without ignoring the inactive connections for any extended period of time.
        • I think it does (but I need to write some programs using it). You just say "I want to start reads async on all these connections; tell me what has finished" (those will be the active ones). People on the list seem to be using it for these types of apps (that's why they want aio network I/O).
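
          A hedged sketch of that "start reads on everything, tell me what finished" pattern using the portable POSIX aio calls; the Linux-native io_submit() interface being discussed on the list differs in detail but has the same shape. NCONN and the buffers are illustrative, and error handling is omitted (link with -lrt):

```c
#include <aio.h>
#include <stdio.h>
#include <string.h>

#define NCONN 4   /* stand-in for "hundreds of thousands" */

static struct aiocb cbs[NCONN];
static char bufs[NCONN][512];

/* Queue an async read on every connection; none of these calls block. */
void start_reads(int fds[NCONN])
{
    for (int i = 0; i < NCONN; i++) {
        memset(&cbs[i], 0, sizeof cbs[i]);
        cbs[i].aio_fildes = fds[i];
        cbs[i].aio_buf    = bufs[i];
        cbs[i].aio_nbytes = sizeof bufs[i];
        aio_read(&cbs[i]);
    }
}

/* Sleep until at least one read finishes, then report the active ones. */
void reap_some(void)
{
    const struct aiocb *list[NCONN];
    for (int i = 0; i < NCONN; i++)
        list[i] = &cbs[i];

    aio_suspend(list, NCONN, NULL);

    for (int i = 0; i < NCONN; i++) {
        if (aio_error(&cbs[i]) == 0) {          /* completed */
            ssize_t n = aio_return(&cbs[i]);
            printf("fd %d: %zd bytes ready\n", cbs[i].aio_fildes, n);
        }
    }
}
```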
    • by rweir ( 96112 ) on Saturday November 09, 2002 @07:21AM (#4631663) Homepage Journal
      Now, for ANY 1:1 threading system, I can't just create x * 10^5 threads because the overhead would be colossal.

      Actually, it's kind of famous for that [slashdot.org].
    • Now, for ANY 1:1 threading system, I can't just create x * 10^5 threads because the overhead would be colossal

      If you read the article, it shows benchmarks done by the NPTL folks which show a 2x improvement in thread start/stop timings over NGPT (which itself is a 2x improvement over POLT (plain old Linux threads)).

      Read more about NPTL here [redhat.com] (PDF file).

    • Why is the context management code going to be so hairy that you end up with something as complex as a 1:N or M:N scheduler?

  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Saturday November 09, 2002 @04:05AM (#4631451) Homepage Journal
    And is there any chance of getting both maintained and in the kernel (As options) if they are?

    I can easily imagine that one of them might be more efficient for gigantic numbers of threads that don't individually do much, or maybe one might be more efficient for very large numbers of processors, but I don't know jack about the issues involved, so I'm just talking out my ass. (Hello! I'd like to "ass" you a few questions!)

    So, someone who knows... Are these threading systems good for different things? And would it really be that hard to make them both come with the kernel?

    • by sql*kitten ( 1359 ) on Saturday November 09, 2002 @04:20AM (#4631475)
      So, someone who knows... Are these threading systems good for different things? And would it really be that hard to make them both come with the kernel?

      They both implement the POSIX threading API (a good thing IMHO). NPTL is more radical; the IBM team made a conscious decision to keep the impact of their changes to the minimum. For that reason, I expect that NGPT will be accepted; it has a shorter path to deployment in production systems, even though NPTL is a more "correct" solution (i.e. it uses purely kernel threads). But it changes userspace, libc and the kernel - it will be much harder to verify.

      Are these threading systems good for different things? And would it really be that hard to make them both come with the kernel?

      Developers shouldn't care, or more accurately it doesn't matter for them. Both implement POSIX threads, so it simply depends what is installed on the system on which their code ends up running - the same application code will work the same on both, altho' each will have its "quirks". Sysadmins will prefer the NGPT because it is easier to deploy and test. Linux purists will prefer NPTL because a) it's the "right" way to do it, and b) it was written by Red Hat.

      They could both come with the kernel source and you could choose one when you compiled it. I don't see how they could coexist on a single system.
      • This is totally wrong. Read the white paper. "Yes and No" below the parent post gets it right.

        It seems to me that the NPTL will smoke the NGPT. The author of the article is just being diplomatic. Keep in mind that Ulrich is/was a key developer on both. Usually when a good engineer changes his approach to solving a problem, it's because he has found a better solution. :)

    • Yes and No (Score:5, Informative)

      by krmt ( 91422 ) <.moc.oohay. .ta. .erehmrfereht.> on Saturday November 09, 2002 @04:33AM (#4631491) Homepage
      I don't understand this all that well myself, but I did just read the whitepaper linked to in the article written by Ingo Molnar and Ulrich Drepper. From the looks of things, NGPT's M:N model will cause a lot more problems because of the difficulty of getting the two schedulers (userspace and kernelspace) to dance well together.

      By sticking with the 1:1 solution that's currently used in the kernel and the NPTL model, there's really only the kernel scheduler to worry about, making things run a lot more smoothly generally. I'd imagine latency being a big issue with M:N (I'm pretty sure that it was mentioned in the whitepaper). I haven't read the other side of the issue, but I think that pretty graph in the O'Reilly article says it all performance-wise.

      There are other issues though, like getting full POSIX compliance with signal handling. The 1:1 model apparently makes signal handling much more difficult (I don't know anything about the POSIX signaling model, but there's a paper about it on Drepper's homepage [redhat.com] that could probably shed some light on the subject if you were so inclined). There are other issues in the current thread model that have to be dealt with in a new 1:1 model (and are), such as a messy /proc directory when a process has tons of threads.

      From the whitepaper, it seems that the development of the O(1) scheduler was meant to facilitate the new thread model they've developed, which I hadn't thought about before even though it makes sense. There's still some issues to work through, but both models look promising. If the signal handling issues can be resolved it looks like from the article that NPTL's model will win on sheer performance.

      As for making them both come with the kernel, that's really really difficult, since this stuff touches on some major pieces of the kernel like signal handling. The same way you're only going to get one scheduler and VM subsystem, you're only going to get one threading model. You're able to patch your own tree to your heart's content, but as per a default install, there can be only one.
      • It would seem to me that NGPT could be modified easily to run on an NPTL kernel. In any case, I don't see why the O(1) scheduler and O(1) kernel thread creation code wouldn't be worked into the 2.6 kernels. As far as which Linus likes better, my guess is NPTL, as it drastically improves kernel performance. Linus has shown a willingness to make drastic changes even in a production kernel if he feels the performance gains are substantial. I would guess that (most if not all of) the NPTL kernel mods will make it into Linus's tree. In the end people will go with Linus's decision, so it really comes down to who convinces Linus.

        I'd personally like to see the NPTL kernel mods with the NGPT libraries. This would seem to provide the most forward-looking approach, as it offers lots of scalability and flexibility.

        I personally can't wait for the 2.6 kernel, whichever model wins. Java apps are nice, but they tend to use way too many threads. I really don't know why a select() wasn't present from the beginning in a language designed to be used in a networking environment. Oh well.

  • LWN (Score:5, Informative)

    by KidSock ( 150684 ) on Saturday November 09, 2002 @04:46AM (#4631507)
    has a nice article about the state of threading on Linux. See the Sept. 27th Weekly Edition [lwn.net].
  • Golly!!! That was an informative article.

    I was aware of the debate about the linux threading issue, but the kernel mailing list was too noisy to pick out this kind of detail.

    Someone should start a site that covers long term issues, rather than the week by week stuff I've found on the web... or maybe someone has, and I'm just too out of the loop....

  • But as a Windows programmer, I have to ask: do you know how hopelessly amateurish this makes you all sound?

    I guess since Windows sucks so badly at shitloads of processes, when programming on 4 or more CPUs you really quickly have to learn how to write multithreaded code that works correctly. You poor Unix guys are struggling through something we all went through years ago -- learning how to think correctly in something more sophisticated than a single thread of control.

    With CPU-bound tasks (spinning in a loop calculating pi) it's easy to saturate all the resources using any model. How come y'all are switching to a thread-based model now? Was the other way running out of steam?

    Honestly curious...
    • by Fnord ( 1756 ) <joe@sadusk.com> on Saturday November 09, 2002 @06:48AM (#4631618) Homepage
      Mostly because the way Unix VMs are designed is much more efficient for multi-process programs than Windows is. Windows started doing threads long before SMP was all that common. They did it because multi-process was slow as hell. But for 90% of tasks it worked just fine in Linux. And it's not like Linux is just now moving to a thread model. It's just making the existing one (which worked well until you scale to many, many threads) a bit better. And by better I don't mean similar to Windows performance, I mean similar to Solaris (which has threading from the gods).
    • IBM's System 360 had multithreading in 1964.

      Multics had multithreading in the early 1970s.

      Windows was still launched from DOS in 1992.

      Please go back to your "innovating" with Windows.

      -Kevin

    • Seriously, what do you use threads for? Unix traditionally uses separate processes to handle most things Windows folks use threads for. Since Linux has things like copy-on-write and shared memory, it's not that much less efficient. Plus you get the advantage of complete address separation, aka they can't crash each other.
      • The "complete advantage" of running 'would-be threads' in separate process spaces also has the "complete disadvantage" of an inability to share data without using some heavyweight mechanism. Not to mention a heavyweight context switch.

        Yes, threads can be misused and abused, but for some problems they're the right tool. All the developers in this article are trying to do is develop a hammer that won't break the first time they hit a nail with it (IMHO, Linux's inability to deal with massive numbers of threads without severe performance degradation makes the threading implementation approach uselessness. I'm glad someone is fixing it).
    • How come y'all are switching to a thread-based model now? Was the other way running out of steam?

      Correctly programming threads is hard, so they should only be used when necessary. Many of the things that can be done with threads can be done more safely with fork() and/or select(). Since Windows lacks the former and has a broken version of the latter, Windows programmers tend to use threads when Unix programmers would use an alternative.
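
      For completeness, a minimal sketch of the select()-based alternative (my own illustration): one process, no threads, multiplexing a set of descriptors in a single event loop.

```c
#include <sys/select.h>
#include <unistd.h>

/* Wait for any of the given descriptors to become readable, then
 * service each ready one in turn. */
void event_loop(int fds[], int nfds_in_use)
{
    for (;;) {
        fd_set readable;
        int maxfd = -1;

        FD_ZERO(&readable);
        for (int i = 0; i < nfds_in_use; i++) {
            FD_SET(fds[i], &readable);
            if (fds[i] > maxfd)
                maxfd = fds[i];
        }

        if (select(maxfd + 1, &readable, NULL, NULL, NULL) < 0)
            continue;                      /* e.g. interrupted by a signal */

        for (int i = 0; i < nfds_in_use; i++)
            if (FD_ISSET(fds[i], &readable)) {
                char buf[512];
                ssize_t n = read(fds[i], buf, sizeof buf);
                (void)n;                   /* handle the data here */
            }
    }
}
```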

    • You poor Unix guys are struggling through something we all went through years ago -- learning how to think more sophisticated than a single thread of control correctly.

      What the heck does altering the structure of a thread *library* have to do with application-level thread programming? What are you talking about?
  • by parabyte ( 61793 ) on Saturday November 09, 2002 @05:33AM (#4631551) Homepage
    Among the issues with threads being half a process, half a thread (the getpid() bug, signal handling, etc.) that are mentioned in the article, I found issues in two other areas:

    scheduler does not immediately respond to priority changes

    thread-specific storage access is slow

    There is a well-known effect in multi-threaded programming called priority inversion, which can cause deadlocks when a low-priority thread has acquired a resource that a high-priority thread is waiting for, but a medium-priority thread keeps the low-priority thread from being executed, so the medium-priority thread effectively gets more cycles than the high-priority thread.

    One way to overcome this problem is to use priority-ceiling locks, where the priority of a thread is boosted to a ceiling value when it acquires a lock. Unfortunately, I found that changing the priority of a thread for a short interval does not have any effect at all with the current 2.4.x standard pthreads implementation.
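
    For reference, POSIX actually specifies the priority-ceiling idea directly as a mutex protocol, so it doesn't have to be emulated by changing thread priorities by hand -- assuming the thread library implements the optional PTHREAD_PRIO_PROTECT feature, which the 2.4-era LinuxThreads may well not. A hedged sketch:

```c
#include <pthread.h>

pthread_mutex_t ceiling_lock;

/* Create a mutex whose holder is boosted to ceiling_prio for as long as
 * it holds the lock (PTHREAD_PRIO_PROTECT is an optional POSIX feature;
 * the calls may fail with ENOTSUP on libraries that lack it). */
int init_ceiling_mutex(int ceiling_prio)
{
    pthread_mutexattr_t attr;
    int rc;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
    pthread_mutexattr_setprioceiling(&attr, ceiling_prio);

    rc = pthread_mutex_init(&ceiling_lock, &attr);
    pthread_mutexattr_destroy(&attr);
    return rc;
}
```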

    The second problem I encountered is that accessing thread-specific storage with pthread_getspecific() takes 100-200 processor cycles on a 1 GHz PIII, which makes this common approach to avoiding hotspots almost as slow as locking.

    Does anyone know if any of these issues are addressed by the new implementations?

    p.

    • In the current version priorities only work with SCHED_RR and SCHED_FIFO (both require superuser privileges); SCHED_OTHER (the default policy) doesn't support changing priorities.

      Regarding thread-specific data access: if your LinuxThreads library uses floating stacks (for ix86 this means it has been built with --enable-kernel=2.4 and for i686), it will already be faster.

      For other TLS enhancements take a look at http://people.redhat.com/drepper/tls.pdf [redhat.com].
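
      For what it's worth, the gcc/glibc __thread storage class described in that paper is the fast replacement for the pthread_key_t route. A minimal comparison of the two (the __thread form needs a sufficiently new gcc/binutils/glibc with the TLS support; illustrative only):

      #include <pthread.h>
      #include <stdlib.h>

      /* Traditional POSIX TSD: every access goes through the library. */
      static pthread_key_t counter_key;
      static pthread_once_t counter_once = PTHREAD_ONCE_INIT;

      static void make_key(void) { pthread_key_create(&counter_key, free); }

      static long *counter_tsd(void)
      {
          pthread_once(&counter_once, make_key);

          long *p = pthread_getspecific(counter_key);   /* the 100-200 cycle hit */
          if (p == NULL) {
              p = calloc(1, sizeof *p);
              pthread_setspecific(counter_key, p);
          }
          return p;
      }

      /* ELF TLS: the compiler and dynamic linker resolve the per-thread
       * address directly, so in the common case an access costs little
       * more than an ordinary global variable.                          */
      static __thread long counter_tls;

      void bump(void)
      {
          ++*counter_tsd();   /* slow path */
          ++counter_tls;      /* fast path */
      }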

      • Thank you for the hints on thread-local storage; I am glad to see this has been addressed.

        Regarding changing of priorities, I think that with SCHED_OTHER the priority is being automatically modified by the scheduler to distribute cycles in a more fair fashion.

        I tried both SCHED_RR and SCHED_FIFO, and changing priorities basically works, but the changes did not seem to take immediate effect, as is required to implement priority ceiling locks.

        For example, when boosting the priority of a thread to the ceiling priority, and the thread is the only one with that priority, I expect it to run without being preempted by anyone until the priority is lowered or the thread blocks. Conversely, when lowering the priority, I expect a higher-priority thread to be executed immediately. I would also expect the unblocking order of threads to be correctly adjusted when their priority was changed while suspended.

        However, it seems that priority changes do not much affect the actual timeslice or the unblocking order, but I did not have the means to find out what exactly happens; using a debugger is outright impossible with fine-grained multi-threaded programs.

        Is it possible that some system thread needs to run in between to do some housekeeping? Do you have any hints about the scheduler's inner workings?

        Thank you

        p.

  • by iamacat ( 583406 ) on Saturday November 09, 2002 @06:00AM (#4631580)
    I don't see how someone can say that "kernel thread scheduling" is slower than "user thread scheduling". Whatever algorithms the pthreads library is using could also be used by the kernel process scheduler, offering the same benefits for daemons that fork() a lot of processes.

    Indeed, most of the time threads are not used to take advantage of multiple processors. Instead they are used in place of multiple processes with some shared memory that handle multiple requests at once. If they could be rewritten to really be multiple processes with some shared memory, the resulting application would be simpler and possibly more stable/secure, because only some portions would need to worry about concurrent access.

    Conceptually, there is no reason why kernel code shouldn't use virtual memory, start system-use processes/threads, load shared libraries and so on, or why "user" code shouldn't handle IRQs, call internal kernel functions or run in CPU supervisor mode. Some tasks demand a certain programming model. For example, one would hope that a disk IRQ handler doesn't use virtual memory. But there is no need to place artificial restrictions to the point that multi-level schedulers and duplicated code are needed to run a nice Java web server.
    • by zak ( 19849 )
      Switching between user and kernel mode takes time. If all your primitive operations are implemented in user mode, synchronisation (for instance) takes several cycles in the best case (resource is free, lock it), and a bit over a hundred in the worst (resource is busy, context switch). When you also add the user/kernel mode transition (which may be a couple dozen cycles on some RISCs but takes more than a hundred on some x86 architectures), you can see how performance may degrade.
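
      Roughly what that means in code (written with C11 atomics for brevity; the real thing in NPTL uses futexes so the contended path can sleep in the kernel instead of yielding):

      #include <stdatomic.h>
      #include <sched.h>

      typedef struct { atomic_int locked; } fastlock_t;   /* = { 0 } means free */

      static void fastlock_acquire(fastlock_t *l)
      {
          int expected = 0;
          /* Fast path: one atomic compare-and-swap, no kernel involvement at all. */
          while (!atomic_compare_exchange_weak(&l->locked, &expected, 1)) {
              expected = 0;
              sched_yield();   /* slow path: give the CPU away (a futex would block) */
          }
      }

      static void fastlock_release(fastlock_t *l)
      {
          atomic_store(&l->locked, 0);
      }
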
      • I guess I agree that we shouldn't do a context switch just for executing a single xchg instruction. But if the resource is busy, a user-level scheduler cannot make a good decision. For one thing, it can only switch to threads in the same process, whereas the kernel can make a global decision, such as switching to the process holding the resource we are waiting for. Also, the user scheduler doesn't have execution statistics (working set, % of CPU slice used, I/O behaviour, etc.), even for its own threads. It can only do round-robin scheduling rather than optimizing potential throughput based on each thread's history.
    • Two words: context switches.

      Whenever execution switches between user mode and kernel mode, a context switch is required. Context switches are expensive.

      Incidentally, this is one of the advantages of the microkernel approach: by severely limiting the code that must run in kernel space, you can minimize context switches between kernel and user mode and save a lot of time.
      • Typically, the microkernel approach INCREASES the number of context switches. However, a microkernel also normally has very fast context switches.

        The context switches are increased because a single operation (say, an I/O read) requires switching into the kernel from the user process, and then out into a device driver. A non-microkernel would have the device driver in the kernel. This is just an example - it may be that the switch is to the file system manager instead, or some other helper process. The point is that the nature of a microkernel is to have lots of helper processes that perform what are normally macro-kernel functions.

        Context switches typically are expensive because they involve more than just a switch into kernel mode. They are likely to involve some effort to see if there is other work to do (such as preempting this thread). They may involve some privilege checks, and some statistics gathering.

        A microkernel just does less of this stuff.

        BTW... the first elegant running micro-kernel I ran into was the original Tandem operating system. The kernel was primarily a messaging system and scheduler (I think scheduling *policy* may have been handled by a task, btw). I/O, file system activity, etc was handled by privileged tasks. It was very elegant, and conveniently fit into their "Non-Stop (TM)" operation.
  • by truth_revealed ( 593493 ) on Saturday November 09, 2002 @08:12AM (#4631741)
    Debugging multithreaded programs in Linux is a complete bitch. As the article mentioned, the core dump only has the stack of the thread that caused the fault. Yes, I know any competent multithreaded programmer uses log files extensively in debugging such code, but any additional tool helps. Either of these LinuxThreads replacements would be a major improvement. I just hope the major distros roll in either package in their next release.
    I bet the 1:1 package would have finer-grained context switching, though. M:N models tend to switch thread contexts only during I/O or blocking system calls. With finer-grained thread switching you tend to expose more bugs in multithreaded code, which is a very good thing. But I suppose even in an M:N model you could always set M=N to achieve similar results.
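
    Along those lines, a trivial helper that at least tags every log line with the thread that wrote it (pthread_self() returns an opaque id, good only for telling threads apart, not for matching against ps output):

    #include <pthread.h>
    #include <stdarg.h>
    #include <stdio.h>
    #include <time.h>

    /* Serialise whole lines so output from different threads can't interleave. */
    static pthread_mutex_t log_lock = PTHREAD_MUTEX_INITIALIZER;

    void tlog(const char *fmt, ...)
    {
        va_list ap;

        pthread_mutex_lock(&log_lock);
        fprintf(stderr, "[%ld %lu] ", (long)time(NULL),
                (unsigned long)pthread_self());
        va_start(ap, fmt);
        vfprintf(stderr, fmt, ap);
        va_end(ap);
        fputc('\n', stderr);
        pthread_mutex_unlock(&log_lock);
    }
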
  • by Anonymous Coward
    The article makes the statement
    Virtually all compliance problems can be traced to the decision to use lightweight processes... New processes are created by clone(), with a shared-everything approach. While the new process is lighter due to the sharing

    This is not true. Every process in Linux is a full-weight process. The fact that some of those processes may map to the same memory space does not make them any lighter as far as the other parts of the kernel are concerned. What Linux does have is a lightweight clone() system call.

    Solaris is an excellent example of the differences between processes, lightweight processes (lwp's or kernel threads), and user threads.
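
    To make the distinction concrete, this is roughly the trick (Linux-specific, a sketch only): clone() with a pile of CLONE_* flags creates a new schedulable entity that shares the caller's memory, files and signal handlers, yet is still a full process as far as the rest of the kernel is concerned:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>

    #define STACK_SIZE (256 * 1024)

    static int worker(void *arg)
    {
        printf("hello from the clone()d task: %s\n", (char *)arg);
        return 0;
    }

    int main(void)
    {
        char *stack = malloc(STACK_SIZE);

        /* Share address space, filesystem info, file descriptors and signal
         * handlers with the parent: the "shared-everything" process.       */
        int flags = CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD;

        /* The stack grows down on most architectures, so pass its top. */
        int pid = clone(worker, stack + STACK_SIZE, flags, "shared world");
        if (pid < 0) { perror("clone"); return 1; }

        waitpid(pid, NULL, 0);
        return 0;
    }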

  • Linux will prevail (Score:3, Insightful)

    by mithras the prophet ( 579978 ) on Saturday November 09, 2002 @09:36AM (#4631941) Homepage Journal

    I am not well-versed in the world of Linux (I have my own allegiances [mammals.org]), but am being drawn to it more and more. Reading the article, it felt very clear to me that Linux will prevail (with a nod to William Faulkner's Nobel speech [nobel.se]).

    Consider a few quotes from the article:

    The LinuxThreads implementation of the POSIX threads standard (pthreads), originally written by Xavier Leroy
    A group at IBM and Intel, led by Bill Abt at IBM, released the first version of the New Generation POSIX Threads (NGPT) library in May 2001
    On March 26-27, 2002, Compaq hosted a meeting to discuss the future replacement for the LinuxThreads library. In attendance were members of the NGPT team, some employees of (then distinct) Compaq and Hewlett-Packard, and representatives of the glibc team
    On September 19, 2002, Ulrich Drepper and Ingo Molnar (also of Red Hat) released an alternative to NGPT called the Native POSIX Thread Library (NPTL)

    Perhaps others have already pointed this out [tuxedo.org], but I am newly impressed with the universal nature of Linux. The power of an operating system that *everyone* is interested in improving, and has the opportunity to improve, is awesome. Yes, Microsoft has tremendous resources, and very earnest, good-willed, brilliant people. But to improve Microsoft's kernels, you have to work for Microsoft. That means switching the kid's schools, moving to Redmond, etc. etc. On the other hand, everyone from IBM to HP to some kid in, say, Finland, can add a good idea to Linux. When the kernel's threads implementation is a topic for conversation at conferences, with multiple independent teams coming up with their best ideas, Linux is sure to win in the long run.

    I'm struck by the parallels to my own field of scientific research: yes, the large multinational companies have made tremendous contributions in materials science, semiconductors, and biotech. They work on the "closed-source", or perhaps "BSD", model of development. But it is the "GPL"-like process of peer-reviewed, openly shared, and collaborative academic science that has truly prevailed.

  • It's not going to be 3.0?? I thought that was the decision, since so many changes and additions/features are being put into this kernel.
    • by WNight ( 23683 )
      There aren't really any incompatibilities with older code, so you don't need to go to a new kernel version like you would if you broke anything.

      In one of the discussions with Linus on this issue he said there was a planned change that broke something, but it wouldn't be in for this version. Because that would warrant a major version change of its own, he didn't want to go from 2.5 to 3.0 and then from 3.3 or so to 4.0; he'd rather go from 2.9 (or so) to 3.0 and avoid the version inflation.

      I agree. There's no stigma in having a product numbered 1.x or 2.x; it simply means you got it right early on, without needing to break old applications too often.
    • It probably will be 3.0. It's a pretty minor detail, really. The tentative plan is 2.6, but it will most likely be a 3.0 release. There are huge differences between it and 2.0, and substantial ones between it and 2.4.

      FWIW, it's looking to be a hell of a kernel.

  • by dbrower ( 114953 ) on Saturday November 09, 2002 @01:35PM (#4632898) Journal
    For a long time, Sun used M:N threading, and many people thought this was a good idea. They have recently changed their minds, and been moving towards 1:1.

    The change in thinking for this is argued in this Sun Whitepaper [sun.com], and this FAQ [sun.com].

    If one believes the Sun guys have a clue, you can take this as a vote in favor of 1:1.

    IMO, anyone who runs more than about 4*NCPUS threads in a program is an idiot; the benchmarks on 10^5 threads are absurd and irrelevant.

    Once you run a reasonable number of threads, you can be quickly driven to internal queueing of work from thread to thread; and by the time you have done that, you may already have reached a point of state abstraction that lets you run event-driven in a very small number of threads, approaching NCPUS as the lower useful limit. Putting all your state in per-thread storage or on the thread stack is a sign of weak state abstraction.

    -dB
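
    A concrete picture of that "internal queueing": a tiny producer/consumer queue over a mutex and condition variable, so a small fixed pool of workers drains whatever the front-end threads hand off (unbounded and simplified; a real one needs a size limit and shutdown logic):

    #include <pthread.h>
    #include <stdlib.h>

    struct work {
        struct work *next;
        void (*fn)(void *);
        void *arg;
    };

    static struct work *head, *tail;
    static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  q_cond = PTHREAD_COND_INITIALIZER;

    void work_push(void (*fn)(void *), void *arg)
    {
        struct work *w = malloc(sizeof *w);
        w->next = NULL; w->fn = fn; w->arg = arg;

        pthread_mutex_lock(&q_lock);
        if (tail) tail->next = w; else head = w;
        tail = w;
        pthread_cond_signal(&q_cond);              /* wake one sleeping worker */
        pthread_mutex_unlock(&q_lock);
    }

    void *worker_main(void *unused)
    {
        (void)unused;
        for (;;) {
            pthread_mutex_lock(&q_lock);
            while (head == NULL)
                pthread_cond_wait(&q_cond, &q_lock);   /* sleep, don't poll */
            struct work *w = head;
            head = w->next;
            if (head == NULL) tail = NULL;
            pthread_mutex_unlock(&q_lock);

            w->fn(w->arg);                         /* run the job outside the lock */
            free(w);
        }
        return NULL;
    }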

    • IMO, anyone who runs more than about 4*NCPUS threads in a program is an idiot; the benchmarks on 10^5 threads are absurd and irrelevant.
      >>>>>>>>>
      Typical *NIX developer. Threads are useful for two things:

      1) Keeping CPUs busy. This is where the whole NCPU business comes from.
      2) Keeping the program responsive. *NIX developers, with their fear of user-interactive applications, seem to ignore this point. If an external event (be it a mouse click or a network connection) needs the attention of the program, the program should respond *immediately* to that request. Now, you can achieve this either by breaking up your compute thread into pieces, checking for pending requests after a specific amount of time, or you can just let the OS handle it. The OS is going to be interrupting your program every 1-10 ms anyway (timer interrupt), and with a good scheduler it's trivial for it to check whether another thread has become ready to run. The second model is far cleaner than the first: a thread becomes a single process that does a single specific task. No internal queueing of work is necessary, and threads split up according to logical abstractions (different tasks that need to be done) instead of physical ones (different CPUs that need to be kept busy).
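
      The second model in code, more or less (the event handler and the long computation here are made-up stand-ins for whatever the application really does):

      #include <pthread.h>
      #include <stdio.h>
      #include <unistd.h>

      static int poll_and_handle_one_event(void)    /* stand-in: pretend events arrive */
      {
          static int remaining = 5;
          sleep(1);
          printf("handled an event\n");
          return --remaining;                       /* "quit" after a few events */
      }

      static void long_computation(void)            /* stand-in: lots of CPU work */
      {
          for (volatile long i = 0; i < 1000000000L; i++)
              ;
      }

      static void *compute_thread(void *arg)
      {
          (void)arg;
          long_computation();
          return NULL;
      }

      int main(void)
      {
          pthread_t worker;
          pthread_create(&worker, NULL, compute_thread, NULL);

          /* The event thread never runs the big job itself, so it is always
           * ready the moment the scheduler wakes it for a click or a packet. */
          while (poll_and_handle_one_event())
              ;

          pthread_join(worker, NULL);
          return 0;
      }
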
      • I'm perfectly happy devoting a whole thread to UI events to get responsiveness. I shouldn't need 100 of them behind the scenes doing the real work if I only have 1 or 4 cpus.

        If your application design calls for 100 concurrently operating threads, there is something broken about the decomposition.

        -dB

    • anyone who runs more than about 4*NCPUS threads in a program is an idiot

      I'd look at it a different way, and count likely-to-be-active threads. If that number's much greater than NCPUS you're probably hurting yourself, but threads that are certain or almost certain to be sleeping (e.g. periodic tasks that are between invocations) don't really count. I discuss this issue more in an article on my website [pl.atyp.us] about avoiding context switches (part of a larger article about server design) if you're interested.

      • The linked article [pl.atyp.us] is pretty reasonable. I'd quibble about the active-thread-count thing, though; I think at the limits it's bad to abstract into something that will establish scads of threads even if they aren't all active at the same time. For instance, say you did do a thread-per-connection, but made it the 'reading' thread only, so it could sit in read() or recv(). Whenever it got a message, it could queue it to a 'worker' thread that would consume CPU and send a response. For a zillion connections, this is a zillion stacks, which doesn't scale well.

        Probably better to I/O-multiplex some fraction of the connections into a smaller number of reader threads and queue; or, using the article's suggestion, have N generic threads which are either doing demux reading or CPU-intensive work. In either of these cases, you are breaking the binding of connection/client state to a thread, which is the important state abstraction to make. Having broken it cleanly, any number of possible implementations are easy to try. Without it, you are kind of stuck.

        -dB
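
        The "clean break" is essentially just this: per-connection state lives in a structure of its own rather than on some thread's stack, so any thread (a reader, a worker, or one of N generic ones) can pick the connection up where the last one left off. A sketch, with invented field names:

        #include <stddef.h>

        /* All state for one client lives here, not on a thread stack, so the
         * connection is not welded to any particular thread.                 */
        struct conn {
            int     fd;
            enum { READ_HEADER, READ_BODY, PROCESS, WRITE_REPLY } phase;
            char    buf[8192];
            size_t  used;            /* bytes accumulated so far    */
            size_t  body_len;        /* parsed from the header      */
            struct conn *next;       /* lets it sit on a work queue */
        };

        /* Any thread may call this when fd is readable or work is pending;
         * it advances the state machine one step and returns.              */
        void conn_step(struct conn *c);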

  • Nice, I look forward to the improvements. One big problem is that I cannot find a way to determine that two processes are actually threads of the same process. It is possible to guess, of course, but is there a way to tell conclusively while we wait for these improvements?
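
    One heuristic, for what it's worth: on kernels that grow these thread-group changes, /proc/<pid>/status carries a Tgid: line, and threads of one program share a thread group id. On a plain 2.4 LinuxThreads setup the field may be absent or unhelpful, so treat this strictly as a guess:

    #include <stdio.h>
    #include <stdlib.h>

    /* Returns the Tgid from /proc/<pid>/status, or -1 if it can't be read. */
    static long tgid_of(long pid)
    {
        char path[64], line[256];
        long tgid = -1;

        snprintf(path, sizeof path, "/proc/%ld/status", pid);
        FILE *f = fopen(path, "r");
        if (!f)
            return -1;
        while (fgets(line, sizeof line, f))
            if (sscanf(line, "Tgid: %ld", &tgid) == 1)
                break;
        fclose(f);
        return tgid;
    }

    int main(int argc, char **argv)
    {
        if (argc != 3)
            return 2;
        long a = tgid_of(atol(argv[1]));
        long b = tgid_of(atol(argv[2]));
        printf("%s\n", (a != -1 && a == b) ? "same thread group"
                                           : "not provably related");
        return 0;
    }
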
  • by magellan ( 33560 ) on Saturday November 09, 2002 @04:58PM (#4633860)
    It seems the Linux kernel developers are going through the same thing Sun did from Sun OS 4 to 5, HP-UX from 9 to 10, and AIX from 3 to 4.

    While it certainly is useful that IBM is using its experience with a MxN threading model to improve Linux, this is not some new, ground-breaking concept. The other UNIX operating systems made this move anywhere from 5-10 years ago.

    My guess is the Redhat 1x1 model will prevail, as the only platform that MxN apparently benefits is the IA-32 platform, because of its 8192 threads per process limitation. I gather Itanium and Opteron do not have this problem.

    However, both threading models could continue to be included, as Sun offered two threading models in Solaris 8 for SPARC.

    We should also realize that all of these scalability improvements, while opening Linux up for bigger server hardware, will probably adversely affect performance on a single CPU workstation.

    Maybe we need two distinct but compatible threading models: one focused on the workstation environment, where one is not typically running a bizillion threads, and one for the server environment, where the Javas and Oracles of the world can spawn threads to their hearts' content.
    • My guess is the Redhat 1x1 model will prevail, as the only platform that MxN apparently benefits is the IA-32 platform, because of its 8192 threads per process limitation.
      The whitepaper mentioned in the story says that the new system removes the 8192 thread limit on IA-32. (Using yet another trick with virtual memory.)
      We should also realize that all of these scalability improvements, while opening Linux up for bigger server hardware, will probably adversely affect performance on a single CPU workstation.
      The same whitepaper also says that several huge bottlenecks have been removed on UP (uni-processor) systems. E.g., the time delay for finding an unused process ID is proportional to the number of existing processes squared in current kernels. (O(n^2) in big-O [wikipedia.org] notation.) The new kernel hardly cares how many processes there are. It takes about the same amount of time to find a free process ID if there are 10 existing processes, or 100,000. (O(1) in big-O notation.)
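
      To make the big-O point concrete, this is the flavour of trick involved (not the kernel's actual code): keep a bitmap of IDs in use plus a "last allocated" hint, so finding a free ID never requires walking the list of existing processes:

      #define MAX_ID 32768
      #define BITS_PER_WORD (8 * sizeof(unsigned long))

      static unsigned long used[MAX_ID / BITS_PER_WORD];
      static int last_id = -1;

      static int test_and_set(int id)
      {
          unsigned long mask = 1UL << (id % BITS_PER_WORD);
          if (used[id / BITS_PER_WORD] & mask)
              return 0;                   /* already taken */
          used[id / BITS_PER_WORD] |= mask;
          return 1;
      }

      /* In the common case the ID right after the hint is free, so this
       * returns almost immediately; the cost no longer grows with the
       * number of live processes.                                       */
      int alloc_id(void)
      {
          for (int n = 1; n <= MAX_ID; n++) {
              int id = (last_id + n) % MAX_ID;
              if (test_and_set(id))
                  return last_id = id;
          }
          return -1;                      /* every ID in use */
      }

      void free_id(int id)
      {
          used[id / BITS_PER_WORD] &= ~(1UL << (id % BITS_PER_WORD));
      }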
