Linux Software

Hyper-Threading Speeds Linux

developerWorks writes "The Intel Xeon processor introduces a new technology called Hyper-Threading (HT) that makes a single processor behave like two logical processors. The technology allows the processor to execute multiple threads simultaneously, which can yield significant performance improvement. But exactly how much improvement can you expect to see? This article gives the results of an investigation into the effects of Hyper-Threading (HT) on the Linux SMP kernel. It compares the performance of a Linux SMP kernel that was aware of Hyper-Threading to one that was not." Ah, the joys of high performance.

Hyper-Threading Speeds Linux

  • by deathcow ( 455995 ) on Tuesday January 14, 2003 @03:56PM (#5082607)

    Xeon folks aren't having the only fun. The 3 GHz Pentium 4 is also hyperthreaded, for that crunchy flavor and great taste.
    • by deathcow ( 455995 ) on Tuesday January 14, 2003 @04:02PM (#5082649)
      Here [intel.com] is the associated press release from Intel about HT in the 3 GHz P4. I have seen screenshots of the Windows Task Manager showing (2) CPU performance graphs.
    • Does anyone know if AMD will be doing something similar or if their current processors do something like this? I know that many High Performance Clusters use SMP machines and multi-threaded code and could take advantage of HT. Many clusters are made with AMD processors due to the fact that they are so much less expensive than Intel.
      • Yes, they do something almost exactly like it. Simply buy two processors and a multi-processor motherboard. That defeats the purpose of this technology, of course, but it nearly accomplishes the same thing.

        Other than that, well, I'm--still--waiting for Hammer. AMD is dropping a long way behind Intel. Price is all they've got, and AMD isn't even competing all that well on price-performance at the moment. My guess is that Intel hyperthreaded systems will probably be better price-performance-wise than AMD before long--if they aren't already.
  • by esconsult1 ( 203878 ) on Tuesday January 14, 2003 @03:57PM (#5082611) Homepage Journal
    We've used Xeons on our DB server for a few months now. The performance has been outstanding. You also see 4 processors when you run top.

    At first we thought this was an error, and got in touch with Dell's tech support. But the geeks there said this is normal behavior.
    • > We've used XEON's on our DB server for a few months now. The performance
      > has been outstanding. You also see 4 processors when you run top.

      > At first we thought this was an error, and got in touch with Dell's tech support.
      > But the geeks there said this is normal behavior.

      Of course it's normal behavior. Windows is (well, basically) counting the number of threads that the system can simultaneously execute (that's probably not an entirely accurate depiction), not the number of physical processors. But this does not mean that you're getting the performance of four processors. You still only have the execution resources of two processors at your system's disposal. The best that simultaneous multithreading can do is make more efficient use of the existing execution units. This can result in very nice performance boosts, no performance boosts at all, and (in some rarer circumstances) performance penalties. But it is nowhere near the same as having that actual number of processors.

      Probably, a good rule of thumb would be "if the code already stresses the execution units, then you won't see a boost, but if the code causes frequent thread stalls, then you'll probably see a nice jump". (A rough CPUID sketch for counting these logical processors is below.)

      *EDIT* Crap, I didn't notice that you said top. I made the assumption about Windows. Sorry about that. My post stands more or less the same, though. :)
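      For the curious, here is a rough C sketch (mine, not from the article or the poster) of asking the CPU about this directly with CPUID: the HTT feature flag is bit 28 of EDX in leaf 1, and bits 23:16 of EBX give the number of logical processor IDs reserved per physical package (2 on the HT-capable P4s/Xeons of this era). It assumes GCC on x86 and its <cpuid.h> helper; the file name is just for illustration.

      /* ht_detect.c -- rough sketch: query CPUID for Hyper-Threading info.
       * Assumes GCC on x86: gcc ht_detect.c -o ht_detect
       */
      #include <stdio.h>
      #include <cpuid.h>

      int main(void)
      {
          unsigned int eax, ebx, ecx, edx;

          if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
              fprintf(stderr, "CPUID leaf 1 not supported\n");
              return 1;
          }

          int htt     = (edx >> 28) & 1;      /* HTT feature flag */
          int logical = (ebx >> 16) & 0xff;   /* logical processor IDs per package */

          printf("HTT flag: %s\n", htt ? "yes" : "no");
          if (htt)
              printf("logical processors per physical package: %d\n", logical);
          return 0;
      }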
    • by DivideX0 ( 177286 ) on Tuesday January 14, 2003 @04:56PM (#5083136)
      But do you really want to see additional processors? Wouldn't SCO want to charge you more for them?

      Earlier SCO Story [slashdot.org]

      • Funny maybe, but the answer is probably yes. I know that if you have 2 multithreading processors you have to have a version of Windows that supports 4 processors - and that's a big additional chunk of change.
    • Yup, I've heard task manager in win2k also shows 4 meters on a 2 physical cpu box.

      ostiguy
        • Nope, Win2k only shows 2, at least for Pro. It also binds to the first physical CPU and its hyperthreaded child. For this reason you have to turn off hyperthreading if you are going to install Win2k Pro on a two-physical-CPU workstation. I should know: I have a system reimaging right now because it came from the factory with hyperthreading enabled, and so only 1 physical CPU was being used.
        • by Richard_at_work ( 517087 ) on Tuesday January 14, 2003 @06:21PM (#5083675)
          Windows 2000 Pro only shows two, as that's all it can handle. It's part of the Windows 2000 limitations:
          • Windows 2000 Pro - 2 CPUs
          • Windows 2000 Server - 4 CPUs
          • Windows 2000 Advanced Server - 8 CPUs

          We put Win2k Server on a dual Xeon with HT, and it showed 4 CPUs (this was when we realised we had HT-capable Xeons! Sure enough, after checking, we were right).
    • Yeah. I dunno about other systems, but on my supermicro p4dp6 the POST messages even say there are 4. Using my own ad hoc benchmarks with dnetc, it appears like there are 2 fast processors and the hyper-threaded ones crunch at around 20%-30% of those.
  • by cbcbcb ( 567490 ) on Tuesday January 14, 2003 @03:58PM (#5082623)
    >It compares the performance of a Linux SMP kernel that

    >was aware of Hyper-Threading to one that was not."

    But if you aren't going to use hyper-threading, you would use a UP (non-SMP) kernel, which would gain you considerable performance. The benefits are not so clear-cut: many of the benchmarks show limited benefit from hyper-threading and would run faster on a uniprocessor kernel.

    • Well, unless you have a computer that has multiple physical CPUs.
      • by JCholewa ( 34629 )
        > Well, unless you have a computer that has multiple physical CPUs.

        You have a point, but he does as well. SMT ("hyper-threading") should work automatically for multiprocessor systems. So if you have a dual-processor, SMT-capable board in a system that's unaware of the SMT functionality, you should still get a boost from SMT -- unless Hyper-Threading is a really, really bizarre implementation of SMT. Reviewers should really compare against an SMP system that is incapable of doing SMT, because an SMT-capable system will use SMT automatically (or it should), even if you don't tell it to. Alternatively, you could approximate the same results by forcing the system to only use a number of threads equal to the number of physical processors. Not all programs can do this, though (compiling is the only thing that immediately comes to mind).

        Granted, I've been out of the loop a bit, so I might be making some really off the wall (and inaccurate) assumptions about Intel's SMT implementation.

        • by afidel ( 530433 ) on Tuesday January 14, 2003 @05:19PM (#5083291)
          Nope, you make incorrect assumptions: the hyperthreaded portion of the CPU shows up to software as a separate CPU. For this reason a Win2k Pro machine has to have hyperthreading disabled on a dual-Xeon machine, or else it will just use the first physical CPU and its child hyperthread. This is why artificial SMP limitations suck. Also, Win2k Server standard edition will only allow 4 CPUs, so it can only utilize two physical CPUs and their hyperthreads. Windows Server 2003 ups the number of CPUs allowed for standard edition to 8 to account for this.
        • Reviewers should really compare against an SMP system that is incapable of doing SMT, because it'll do it automatically (or it should), even if you don't tell it to.

          By default, hyperthreading will be used. Every board I've seen that supports it has a BIOS option to disable the virtual processor(s) by setting a bit in one of the MSRs.

  • by Anonymous Coward on Tuesday January 14, 2003 @04:00PM (#5082633)
    All operating on a single chip!
  • by Jace of Fuse! ( 72042 ) on Tuesday January 14, 2003 @04:00PM (#5082635) Homepage
    Does SMP support automatically allow benefits from Hyperthreading, or does that require special support all its own?
    • >> Does SMP support automatically allow benefits from Hyperthreading

      Yes

      HT essentially partitions the CPU's pipeline into two pipelines executing concurrently: that is, two CPUs on the same die.
    • by norton_I ( 64015 ) <hobbes@utrek.dhs.org> on Tuesday January 14, 2003 @04:14PM (#5082787)
      SMP already can gain benefit from hyperthreading. However, an OS really needs special support to A) get the most out of hyperthreading and B) avoid worst-case scenarios, especially when you have both multiple physical CPUs and multiple logical CPUs per physical CPU.

      For instance, if you have two processes running, you want to put them on different physical CPUs, and if you have a choice, grouping threads with the same memory image on a single processor improves cache usage.

      Without this, hyperthreading may end up hurting performance instead of helping it. (A rough sketch of pinning a process to a chosen logical CPU by hand is below.)
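      As a rough illustration (not from the article), here is how a process can pin itself to one logical CPU on Linux with sched_setaffinity(2), which is a crude, by-hand version of the placement an HT-aware scheduler does for you. Which CPU numbers are siblings on one physical package is machine- and kernel-dependent, so treat the numbering here as an assumption.

      /* pin_cpu.c -- rough sketch: pin the calling process to one logical CPU.
       * Assumes Linux with sched_setaffinity() (2.5-era and later kernels).
       * Usage: ./pin_cpu <cpu-number>
       */
      #define _GNU_SOURCE
      #include <sched.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <unistd.h>

      int main(int argc, char **argv)
      {
          if (argc != 2) {
              fprintf(stderr, "usage: %s <cpu>\n", argv[0]);
              return 1;
          }

          int cpu = atoi(argv[1]);   /* which numbers map to which physical
                                        package is machine-dependent */
          cpu_set_t mask;
          CPU_ZERO(&mask);
          CPU_SET(cpu, &mask);

          if (sched_setaffinity(0, sizeof(mask), &mask) < 0) {
              perror("sched_setaffinity");
              return 1;
          }

          printf("pid %d now restricted to logical CPU %d\n", getpid(), cpu);
          /* ... run the real workload here ... */
          return 0;
      }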
    • Yes, as said in the other posts. BUT, you want to schedule the same process on the same CPU in order not to thrash the cache. I.e., you can make a huge improvement by making the scheduler aware of physical processors *and* logical processors.
    • Well, the *article* says that you get more performance if you patch the kernel to be optimized for HT. =)
      It also says that you get a performance boost even by using the standard SMP kernel.
  • good stuff (Score:5, Insightful)

    by The Evil Couch ( 621105 ) on Tuesday January 14, 2003 @04:02PM (#5082651) Homepage
    The results on Linux kernel 2.4.19 show Hyper-Threading technology could improve multithreaded applications by 30%. Current work on Linux kernel 2.5.32 may provide performance speed-up as much as 51%.

    While it may not be very useful for a single-user box (it actually looks like it would be a detriment), integrating it into client-server situations would give us some nice boosts in performance. Web servers ought to see some real gains from this.
  • The article clearly shows that syscalls and basically OS-dependent stuff rarely improve in performance; in fact, they get slower in most spots.

    Of course multi-threaded applications are going to improve. What's your point?

    For those who didn't RTFA:

    (latencies; lower is better, so a negative change means HT is slower)

    Test                            no HT       HT   change
    Simple syscall                   1.10     1.10       0%
    Simple read                      1.49     1.49       0%
    Simple write                     1.40     1.40       0%
    Simple stat                      5.12     5.14       0%
    Simple fstat                     1.50     1.50       0%
    Simple open/close                7.38     7.38       0%
    Select on 10 fd's                5.41     5.41       0%
    Select on 10 tcp fd's            5.69     5.70       0%
    Signal handler installation      1.56     1.55       0%
    Signal handler overhead          4.29     4.27       0%
    Pipe latency                    11.16    11.31      -1%
    Process fork+exit              190.75   198.84      -4%
    Process fork+execve            581.55   617.11      -6%
    Process fork+/bin/sh -c       3051.28  3118.08      -2%

    Is it just me, or does the Linux kernel itself not perform much better under SMP HT?

    • It's just you (Score:5, Insightful)

      by Royster ( 16042 ) on Tuesday January 14, 2003 @04:40PM (#5083013) Homepage
      What you've conveniently snipped out of your trollish post is all of the application benchmarks showing improvements. If you're not going to run any application code, you might as well shut the machine off and save the marginal stress on the environment.

      Most of us have our computers do work, and those applications, running on an OS which has *barely* slowed, will be able to do more work in the same amount of time under the HT-aware OS than under one which does not utilize the second, virtual processor.
    • Hah! The first poster on the OSNews thread about this story wasn't impressed either. Apparently, a lot of people don't have the attention span to read more than the first few tables in an article!
  • Only Threads ? (Score:2, Insightful)

    by makapuf ( 412290 )
    I know there might be many places where this has been discussed before, but could someone please tell me if HT is only for threads, or can it be used for processes too?
    And I know they are essentially the same syscall under Linux, and threads might be faster because of synchronization issues w.r.t. memory access, IIRC ...
  • 51% speed-up! (Score:5, Interesting)

    by core plexus ( 599119 ) on Tuesday January 14, 2003 @04:05PM (#5082688) Homepage
    An excellent, detailed article. For those in a hurry:

    "Conclusion
    Intel Xeon Hyper-Threading is definitely having a positive impact on Linux kernel and multithreaded applications. The speed-up from Hyper-Threading could be as high as 30% in stock kernel 2.4.19, to 51% in kernel 2.5.32 due to drastic changes in the scheduler run queue's support and Hyper-Threading awareness."

    My questions: What's the downside? Is AMD doing anything similar?

    Fight with computer brings SWAT team [xnewswire.com]

    • Re:51% speed-up! (Score:5, Informative)

      by PCM2 ( 4486 ) on Tuesday January 14, 2003 @04:16PM (#5082813) Homepage
      The downside is that for code that isn't SMP/HT-aware, performance can actually degrade. Tom's Hardware ran tests [tomshardware.com] of hyperthreading on the 3.06GHz P-4, and in almost every case, it performed better with hyperthreading disabled.
      • The downside is that for code that isn't SMP/HT-aware, performance can actually degrade.

        How many modern programs use no kernel threads / multiple processes at all? Not many I'm guessing.
          Although threading is popular for server-based apps, for normal desktop apps threads should be used lightly or not at all. Take an MP3 encoder, for example. Sure, it will launch a thread to update the status bar, but the real CPU hog is the encoding itself, which is done in a single thread. According to Tom Pabst, in this scenario the MP3 encoding will perform slower than on a non-HT proc.

          Also, consider another big performance hog: games. Although a game server may take advantage of HT, I don't think (and this is purely speculation based on _minimal_ 3D engine programming experience) it would be a good idea for games to use threads. Threads carry overhead, and they can also make your codebase difficult to manage.
      • Those tests are very end user (luser) specific. Yes it's true that the majority of people running games on Windows won't benefit a bit from HT... neither would those people benefit from ordinary SMP. That's why Intel left HT disabled in all P4 models until recently. Server workloads are very different, where every decent app makes heavy use of threads, and therefore benefits much more from HT (and SMP). The IBM tests are pertinent to servers and some power users.
    • What's the downside?

      Well, if your apps aren't multi-threaded then they can't make use of it. If you don't run enough CPU-intensive processes on the box, it won't buy you anything and may actually hurt you.

      If you look at the benchmarks not all the numbers are in the positive realm... although if you exclude the sync read/write numbers then it's generally a rather small difference.

      Is AMD doing anything similar?

      Not to my knowledge. They're betting the farm on Opteron/Athlon64.
  • by PaschalNee ( 451912 ) <[pnee] [at] [toombeola.com]> on Tuesday January 14, 2003 @04:06PM (#5082695) Homepage
    The pretty detailed (for me anyway) article [slashdot.org] on Ars Technica concludes [arstechnica.com] that performance on a Hyper-Threaded CPU will be very much dependent on the application mix. While research like this is useful, it will probably always be a try-and-see scenario.
    • Simply put, you'll need two or more processes consuming all available CPU power before you'll see some real benefit from HT. If you're severely I/O-bound, running a high-end FC SAN solution on an old P2 server will outperform a 5 GHz machine with a mediocre disk.

      So - yes, not all people and applications will benefit from this. But no - it is not try and see.
  • HT hurt perf (Score:5, Interesting)

    by steelerguy ( 172075 ) on Tuesday January 14, 2003 @04:07PM (#5082701) Homepage
    Tested HT running a couple of large jobs on a 2-CPU box, with each process using over a GB of RAM. Performance went down.

    Also, HT can play havoc with an openMosix cluster, since processes can start being migrated around to CPUs that do not really exist and appear to have no load, while the physical CPU may in reality be 100% loaded.

    It is not all peaches and cream.
    • Re:HT hurt perf (Score:3, Informative)

      by Zathrus ( 232140 )
      processes can start being migrated around to CPU's that do not really exist and appear to have no load, yet the physical CPU may be 100% loaded in reality

      The article indicates that they're fixing this in the 2.5 branch. Lots of additional patches to the scheduler to let it comprehend the difference between physical and logical processors and do the Right Thing with them.

      Oh, and if you're running a 2 CPU box with only a couple (as in two) large jobs then no, you won't see a performance gain. You already have 1 CPU/process and HT would just be additional overhead.
    • The "imbalance" problem between physical/virtual CPUs was mentioned in the article, and it was said that as of 2.5.32 it has been addressed.
      I guess it depends what apps you're running; from the article it looks like (web|file|db) servers (and a kernel that runs a smarter scheduler than 2.4.17 had) might be able to squeeze out a little (~30%) performance gain.
      • Keep in mind, a 30% gain (for the 2.4 series) in a 2GHz machine would equate to a machine that performed server-oriented functions at an effective 2.6GHz.

        When they benchmarked 2.5.32, they showed a 51% increase, which would boost your effective server performance to 3GHz.

        Granted, the way I understand it, the actual coordination of core components for the two threads is hard-wired or in firmware. That means Intel can still improve HT, to get a better performance boost. To further that line, consider if Intel were to add additional core sections of their CPUs, to be allocated dynamically by the firmware. That means you're increasing your per-clock performance without the major overhead of developing a whole new CPU core.

        I can't see Microsoft standing for it. Intel could put all the pieces for two CPUs on the same die, and call it HT. You might have all the functionality of a dual-CPU setup, with less latency, and still have it show up as a single HT-enabled processor.

        With the way Microsoft's handling SMP machines (with CPU licenses), in addition to their statement that they are developing a 64-bit version of Windows based on the Hammer architecture, I think AMD's future looks pretty bright.
  • by NixterAg ( 198468 ) on Tuesday January 14, 2003 @04:08PM (#5082710)
    Like most development shops, we do a great deal of development for multiprocessor machines so we write a lot of multithreaded code. Multithreaded code creates a whole host of new debugging pitfalls that don't show up if the developer is debugging on a single processor workstation. As John Robbins says in his terrific Debugging Applications [microsoft.com] book, if you are developing a multithreaded application, you better be certain you are doing your debugging in a multiprocessor environment.

    From a development standpoint, will a hyperthreaded chip provide an adequate environment for duplicating the behavior of a multi-processor PC, well enough that shops can buy cheaper, one-CPU machines for development and still be confident in their results? I'm guessing nothing will replace the real thing, but I'd be interested in any commentary.
  • Humph! (Score:4, Funny)

    by airrage ( 514164 ) on Tuesday January 14, 2003 @04:08PM (#5082721) Homepage Journal
    Well if I must say something, it's this: that's really going to put a fancy how-do-you-do in the knickers of all those pay-per-processor software types. I mean Oracle, for heaven's sake, is going to have to go absolutely bonkers trying to figure out how to screw the light-bulb into that buffalo (if you pardon my French). I mean, what's a megalomaniac to do? I mean, I've got expenses! I've got tricarbonfiberalloy yacht hulls to pay for! Can't have people going around trying to process code in a processor without us getting some slice of that monkey, I'll tell you right here and now, sir! No sir! Maybe you're not patriotic enough. Trying to cut corners, eh gov'nor? Now I'm gonna have to go and rewrite all the contracts stating explicitly that "processor" is defined as a virtual space for processing. Yes, that ought to do it. But I'll still have to have the lawyers check it, just to make sure there aren't any loopies. Drats those lawyers! Taking all my money too!
  • by LookSharp ( 3864 ) on Tuesday January 14, 2003 @04:12PM (#5082758)
    If you overclock the Xeons (And newer P4 CPUs) too high...

    "Prepare to go to HyperThread."

    "Go to HyperThread!"

    *WHOOSH*

    "My God, they've gone plaid!"

    (Just to keep on topic, there is a very informative shootout between HT/non-HT Intel and AMD SMP processor setups here. [gamepc.com])

    Just couldn't resist the Spaceballs reference, tho!
  • Executive summary... (Score:3, Informative)

    by guido1 ( 108876 ) on Tuesday January 14, 2003 @04:13PM (#5082771)
    Hyperthreading support vs. not:

    Standard API calls (w/ hyperthreading): latency increases (a bad thing (tm)) by 1-6%.

    Standard workload (w/ hyperthreading): throughput increases by an average of 5-10%. Disk writes decreased throughput by 30%.

    Client network perf: "chat room" test, throughput increase of 22-28%.

    Server network perf: file serving, increase of 9-31%.

    Kernel 2.5.24 roughly doubles the above benefits.

    Looks like no real downfalls... (How often are you running a single thread? Me neither.)
  • In other news... (Score:3, Insightful)

    by dirvish ( 574948 ) <dirvish@ f o undnews.com> on Tuesday January 14, 2003 @04:15PM (#5082792) Homepage Journal
    Hyper-Threading Speeds Windows
  • ...I don't understand how this helps. I'm typing this on a dual 1.4 GHz system -- even if a process is multi-threaded, it's still not as fast as a 2800 MHz processor. In addition, many programs can't take advantage of SMP, rendering dual processors 'useless' (for any single process; Linux distributes processes across processors).

    So if 2*1400 != 2800, shouldn't taking, say, the 3 GHz P4 and 'emulating' SMP actually slow things down slightly? I don't understand how it can help, and am actually surprised that it doesn't *hurt* speed-wise.
    • Re:But... (Score:2, Insightful)

      by stratjakt ( 596332 )
      It doesn't 'emulate' SMP; it actually performs two operations at the same time by splitting the instruction pipeline in half (well, not exactly in half -- it varies as to how much pipeline each 'cpu' gets). It's not as good as SMP for various reasons, mostly boiling down to the two threads sharing the rest of the chip.

      It does 'hurt' sometimes, but it's usually negligible, and you have to pretty much go out of your way to design code that would run slower -- such code can 'hurt' traditional SMP systems as well.

      I'm sure there will be plenty of cooked benchmarks for fanboys to rant about in the future, just like there are between 3DNow! and MMX/SSE/2..

      It is a cool development, and *can* be shut off if it's only hindering your system (i.e. you're running Windows 98 or a Linux kernel with no HT support -- and thus wasting pipeline on a 'CPU' that isn't used).
    • Look at it this way. The CPU has a bunch of execution units on it. The P4, specifically, has two arithmetic units, two FPUs, and some other stuff. Since threads usually don't use all these units optimally, some are wasted. A second simultaneous thread might be able to use the otherwise unused units, and thus the overall performance of the two threads combined increases.
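      As a toy illustration of that idea (mine, not the poster's): two POSIX threads, one hammering the integer units and one hammering the FPU, run side by side; on an HT processor the hope is that each overlaps into units the other leaves idle. It is only a sketch for timing with time(1), not a meaningful benchmark, and the iteration count is arbitrary.

      /* ht_toy.c -- toy: one integer-heavy and one FP-heavy thread in parallel.
       * Build: gcc -O2 -pthread ht_toy.c -o ht_toy
       */
      #include <pthread.h>
      #include <stdio.h>

      #define ITERS 200000000UL   /* arbitrary amount of work */

      static void *int_work(void *arg)
      {
          unsigned long i, x = 1;
          for (i = 0; i < ITERS; i++)
              x = x * 2654435761UL + i;       /* integer multiply/add */
          *(unsigned long *)arg = x;
          return NULL;
      }

      static void *fp_work(void *arg)
      {
          unsigned long i;
          double y = 1.0;
          for (i = 0; i < ITERS; i++)
              y = y * 1.0000001 + 0.5;        /* FP multiply/add */
          *(double *)arg = y;
          return NULL;
      }

      int main(void)
      {
          pthread_t t1, t2;
          unsigned long ires;
          double fres;

          pthread_create(&t1, NULL, int_work, &ires);
          pthread_create(&t2, NULL, fp_work, &fres);
          pthread_join(t1, NULL);
          pthread_join(t2, NULL);

          /* print the results so the loops can't be optimized away */
          printf("int: %lu  fp: %g\n", ires, fres);
          return 0;
      }

      Timing each thread alone and then both together gives a feel for how much overlap the hardware actually finds.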
  • * 128-byte lock alignment
    * Spin-wait loop optimization
    * Non-execution based delay loops
    * Detection of Hyper-Threading enabled processor and starting the logical processor as if machine was SMP
    * Serialization in MTRR and Microcode Update driver as they affect shared state
    * Optimization to scheduler when system is idle to prioritize scheduling on a physical processor before scheduling on logical processor
    * Offset user stack to avoid 64K aliasing

    Is that all?! I hoped it'd do the post-integer-supercooled-re-automation-longterm-buzzword-cipher-realignment too. That's something new that you guys haven't heard of yet ;) (The spin-wait item, at least, is concrete; a sketch follows below.)
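    For reference, the spin-wait optimization amounts to putting the PAUSE instruction ("rep; nop") inside busy-wait loops so a spinning logical CPU gives up shared resources to its sibling, which is roughly what the kernel's x86 cpu_relax() does. A stand-alone sketch, assuming GCC on x86 and C11 atomics for brevity (not the kernel's actual code):

    /* spinwait.c -- rough sketch of a Hyper-Threading-friendly spin-wait loop. */
    #include <stdatomic.h>

    static atomic_int lock_flag = 0;   /* 0 = free, 1 = held */

    static void cpu_relax(void)
    {
        __asm__ __volatile__("pause" ::: "memory");   /* x86 only */
    }

    void spin_lock(void)
    {
        /* Try to grab the lock; while someone else holds it, spin politely. */
        while (atomic_exchange_explicit(&lock_flag, 1, memory_order_acquire)) {
            while (atomic_load_explicit(&lock_flag, memory_order_relaxed))
                cpu_relax();           /* be nice to the sibling logical CPU */
        }
    }

    void spin_unlock(void)
    {
        atomic_store_explicit(&lock_flag, 0, memory_order_release);
    }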
  • by OS24Ever ( 245667 ) <trekkie@nomorestars.com> on Tuesday January 14, 2003 @04:28PM (#5082909) Homepage Journal
    Faster clock speed processors speed up Linux.
  • by Zathrus ( 232140 ) on Tuesday January 14, 2003 @04:35PM (#5082965) Homepage
    As if there wasn't enough already...

    processor : 0
    bogomips : 3191.60
    processor : 1
    bogomips : 3198.15

    According to that the logical processor is actually faster than the physical one! Just think of what you could wind up with if you instantiated a logical CPU on the logical CPU!
  • Anyone know of any details around SMP versions of HT CPUs? It's not a very Google-friendly set of search terms.

    I expect that there would be a performance difference if the scheduler knew which were real CPUs and which were halves of an HT pair.

    Even flags to fork concerning which processor to fork to, i.e. --this_cpu_but_different_HT_CPU.
    Because you might want the freedom to attempt to reduce the in-CPU cache misses and the like.

    Likewise, the implementation of process groups - setpgid() [die.net] - warrants investigation. (A rough sketch of telling physical from logical CPUs apart via /proc/cpuinfo is below.)
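    By way of illustration (my sketch, not the poster's): on kernels new enough to report it, /proc/cpuinfo carries a "physical id" field per logical processor, so a program (or a quick grep) can tell which logical CPUs are halves of the same physical package. The field name and layout are kernel-version-dependent, so treat them as an assumption.

    /* cpu_topology.c -- rough sketch: group logical CPUs by "physical id"
     * as reported in /proc/cpuinfo on an HT-aware x86 Linux kernel.
     */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/cpuinfo", "r");
        if (!f) {
            perror("/proc/cpuinfo");
            return 1;
        }

        char line[256];
        int logical = -1, physical = -1;

        while (fgets(line, sizeof(line), f)) {
            if (sscanf(line, "processor : %d", &logical) == 1)
                physical = -1;                     /* new logical-CPU stanza */
            else if (sscanf(line, "physical id : %d", &physical) == 1)
                printf("logical CPU %d is in physical package %d\n",
                       logical, physical);
        }

        fclose(f);
        return 0;
    }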

  • Technical Summary (Score:5, Insightful)

    by 0x69 ( 580798 ) on Tuesday January 14, 2003 @04:48PM (#5083083) Journal
    If you're running code that's efficient on a P4 (few mis-predicted branches, low cache miss rate, good parallelism, etc.) then HT is pretty much useless.

    If you're running code that's inefficient on a P4 (which pays for its high GHz with long pipelines, large latencies, a slow decode stage, and several other drawbacks), then HT can usually paper over a fair percentage of these problems. But remember that HT requires OS support, may require application support, and "your mileage will vary".
    • But what about two unrelated apps running at the same time? Not everyone runs just one heavy program.
    • Re:Technical Summary (Score:3, Interesting)

      by cartman ( 18204 )
      What you said was false.

      Take the example of database & OLTP applications. Database transactions are heavily dependent on repeated access to RAM. Virtually no database is small enough to fit into cache, and there is often little regularity in which data is accessed. Memory latency will REQUIRE a non-SMT processor to wait IDLY on each access to main memory, which takes >100 processor cycles on a modern CPU. This has NOTHING to do with the P4 architecture or long pipelines.

      "But remember that HT requires OS support, may require application support..."

      HT does not require OS support as long as the OS is capable of recognizing more than 1 CPU. Any threaded app can benefit from HT.
  • by ponos ( 122721 ) on Tuesday January 14, 2003 @04:54PM (#5083115)

    In Europe a P4 3.0 with HT costs ~745 euro (+tax). An Asus A7M for dual Athlon costs ~260 euro (+tax), and two Athlon XP 2200+ cost ~340 euro (+tax). Alternatively you can get two Athlon MP 2000+ for roughly the same money (if you don't trust the XPs).

    Now, please explain to me why someone with real SMP needs in mind (and NOT games) would consider the P4 with HT.

    P.

    P.S. I understand that the prices in the US are different, but still, it is VERY expensive.
  • This is fine, I guess, if you're going to run a processor as slow (!) as this. The point being that a hyperthreaded system will place greater demands on RAM bandwidth.

    With a slow processor they may be using 80% of the available bandwidth instead of 60% with HT switched off. Up the processor speed to, say, 3 GHz, where HT is enabled in vanilla P4s, and we can expect to see the memory bandwidth being toasted continuously. Under those conditions I doubt we would see a speed-up at all, and quite possibly the reduced cache efficiency would slow things down.

    Executive Summary: Can we do this again with a non-Xeon P4 3GHz?

    Dave
  • If the results are similar to running SMP with two processors (and they look roughly similar), isn't a system with 2 Athlon-MPs still cheaper for a given performance level?
  • Don't companies like (guessing) Oracle charge by how many processors you use with their software? I know for Solaris (even Intel) you are licensed by how many CPUs you can use. (Just like Windows, I guess: 1, 2, 4, 8+ CPUs.)

    Also, since XP Home is only single-processor capable, where does that leave the home users who buy 3.x GHz computers? Surely it wouldn't be long before someone figures out how to swap a multiprocessor HAL into XP Home...
  • There are too many posts here asking how HT compares with SMP. Correct me if I'm wrong, but isn't HT quite a lot different:

    To simplify greatly, if the CPU has separate units for integer and floating-point math (for example), Hyperthreading means you can use these units in parallel. Therefore, HT will not speed up pure integer or pure FP math, like SMP would. It will only speed things up if you run different kinds of processes simultaneously.

    Also, many people have noted that HT sometimes slows things down a bit. I don't find this very surprising because the OS needs more work to organize things for HT, but it may not have more CPU resources than a non-HT version.

    Personally, I think HT is a good idea because it's using the existing hardware more efficiently in a true hacker spirit. However, it's nowhere near proper SMP.

    • I don't believe that's correct.

      As I understand it, HT can indeed speed up pure integer code (or, more generally, code that's competing for a single CPU resource). HT will allow another thread to execute if the current one is waiting on anything from pipeline results to memory access. I believe that the modern CPU/memory speed disparity was one of the driving forces behind it -- if one thread gets a cache miss, then another may be able to continue executing rather than having to sit idle waiting for main memory.

  • by cartman ( 18204 ) on Tuesday January 14, 2003 @07:45PM (#5084322)
    One of the major impediments to increasing CPU performance has been increasing memory latency. Memory latency has grown worse as CPUs have gotten faster. Accessing RAM will now cause a >150 cycle latency, during which the processor sits IDLE.

    Cache only partly mitigates this problem. Some applications, such as databases and OLTP, are heavily dependent on repeatedly accessing non-cached RAM. There is no way to cache all the relevant data, since virtually all databases are larger than can fit in any present cache, no matter how large, and there is sometimes no way to predict which data will be accessed. ALL of these applications have CPUs that spend much of their time being IDLE, waiting for memory to be returned.

    SMT (hyperthreading) allows the processor to perform useful work during these otherwise idle periods, by allowing the CPU to switch to a thread that is not blocked on memory access. The "idle bubbles" in the execution pipeline can therefore be "filled in" by useful work that advances the state of relevant programs.

    SMT can cause a degradation in performance because it can lead to "cache thrashing." In an SMT-naive kernel, two unrelated threads could be scheduled on the same physical CPU. These unrelated threads will likely share very little code or data. The two threads will therefore "compete" for the single shared cache, with each thread's data being repeatedly displaced by the other's.

    This difficulty can be substantially mitigated by making the kernel aware of "virtual processors," and by implementing scheduling algorithms to minimize the impact. The performance of hyperthreading will likely improve as kernels are better able to exploit it. (A toy pointer-chasing loop showing this kind of memory stall is sketched below.)
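    Here is that stall in miniature (my sketch, not cartman's): every load depends on the previous one and the array is far too big for cache, so the core spends most of each iteration waiting on main memory -- exactly the idle bubbles SMT tries to fill with a second thread. The array size is an arbitrary assumption; pick something bigger than your cache but smaller than your RAM.

    /* chase.c -- toy pointer-chasing loop dominated by memory latency.
     * Build: gcc -O2 chase.c -o chase
     */
    #include <stdio.h>
    #include <stdlib.h>

    #define N (16UL * 1024 * 1024)   /* 16M entries * 8 bytes = 128 MB */

    int main(void)
    {
        size_t i, j, tmp, p = 0;
        size_t *next = malloc(N * sizeof(size_t));
        if (!next) return 1;

        /* Build one big random cycle (Sattolo's shuffle) so the chase visits
         * every entry and hardware prefetching can't guess the next address. */
        for (i = 0; i < N; i++) next[i] = i;
        for (i = N - 1; i > 0; i--) {
            j = (size_t)random() % i;
            tmp = next[i]; next[i] = next[j]; next[j] = tmp;
        }

        /* Chase pointers: each load must come back from RAM before the next
         * one can start, so the core mostly sits idle. */
        for (i = 0; i < N; i++)
            p = next[p];

        printf("%lu\n", (unsigned long)p);   /* keep the loop from being optimized away */
        free(next);
        return 0;
    }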
  • It really bugs me when I see benchmark numbers relied upon when they have not been shown to be statistically significant.
    Whenever you run a benchmark, you MUST run it multiple times and do the proper statistical calculation of the standard deviation.
    It is NOT VALID to do one run, and it is NOT VALID to average a bunch of runs without knowing what the deviation is.
    Sometimes a benchmark's time will vary by more than 100%. Sometimes the reasons are valid, sometimes they are because of an error in the benchmark.
    Without this sort of validation, the numbers presented should not be trusted. (A minimal sketch of the mean/standard-deviation calculation is below.)
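    A minimal sketch of that calculation (mine, not from the article): feed it one timing per line and it reports the mean and the sample standard deviation instead of a single number.

    /* stats.c -- rough sketch: mean and sample standard deviation of runs.
     * Build: gcc stats.c -o stats -lm     Usage: ./bench_runs | ./stats
     */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double x, sum = 0.0, sumsq = 0.0;
        long n = 0;

        while (scanf("%lf", &x) == 1) {   /* one timing per line on stdin */
            sum   += x;
            sumsq += x * x;
            n++;
        }

        if (n < 2) {
            fprintf(stderr, "need at least two runs\n");
            return 1;
        }

        double mean = sum / n;
        double var  = (sumsq - n * mean * mean) / (n - 1);   /* sample variance */

        printf("runs %ld  mean %g  stddev %g\n", n, mean, sqrt(var > 0 ? var : 0));
        return 0;
    }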
  • by cartman ( 18204 ) on Tuesday January 14, 2003 @08:11PM (#5084482)
    SMT (hyperthreading) will become increasingly important when processors are able to execute more than 2 threads simultaneously.

    This development is inevitable. Previously, each new processor generation was faster than the prior one at a given clock rate, because each new processor core had more execution units and was therefore able to perform more work in parallel. This trend abruptly ended recently, for one reason: there is no more instruction-level parallelism (ILP) to exploit. It is impossible for a processor to look at a thread of execution and find more than a few instructions to execute in parallel.

    The only parallelism left to exploit is THREAD-LEVEL parallelism (TLP). Therefore the only way to continually increase performance is to increase the number of threads that a CPU can execute in parallel. This requires two modifications to CPU cores: first, increase the number of thread contexts per CPU, and second, increase the number of pipelines to which those threads can be dispatched.

    With the P4, it would be pointless to have more than 2 thread contexts, because there aren't enough CPU resources lying idle to execute more than 2 threads. But future CPUs could make use of more than 2 thread contexts by having enough CPU resources to execute all of them. Future CPUs could have 20 execution units or more, which would be enough to execute several threads. Remember that the number of transistors per CPU continues to increase exponentially.

    It's easy to foresee a time when processors have 20 execution units (10 integer, 10 FP) and 4 thread contexts, offering more than triple the performance of a non-SMT CPU. In the future, non-SMT CPUs will make as little sense as a non-superscalar CPU would today.
