Linux Software

Is Profiling Useless in Today's World? 229

rngadam writes "gprof doesn't work on Linux multithreaded programs without a workaround that doesn't work all that well. It seems that if you want to use profiling, you have to look for alternatives or agree with Red Hat's Ulrich Drepper that 'gprof is useless in today's world'... Is profiling useless? How do you profile your programs? Is the lack of good profiling tools under Linux leading us into a world of bloated applications and killing Linux adoption among embedded developers? Or will the adoption of a LinuxThreads replacement solve our problems?"
  • by stirfry714 ( 410701 ) on Friday July 05, 2002 @02:29PM (#3828761)
    Why can't my code be judged by the content of its characters, and not by the color of its extension?

    Down with profiling! :)
  • by Anonymous Coward on Friday July 05, 2002 @02:30PM (#3828776)
Maybe gprof, as an implementation, might not be useful. But profiling, especially under Java, can make a world of difference to an application.

    Saying "profiling isn't useful" is similar to saying "having information isn't useful".

    That's just dumb.
    • by anonymous_wombat ( 532191 ) on Friday July 05, 2002 @03:02PM (#3829034)
      In single threaded programs, just one type of profiling needs to be done, the kind that standard profiling tools measure. In multi-threaded programs, the relative execution times of the various threads may be more important. The first thing to do is to figure out which threads are using most of the resources. After this is done, and any optimizations made, the old-style profiling and optimizing of slow methods is just as important as ever. If your program is spending 80% of its time sorting, then optimize your sorting code.

      Of course, for many applications, multi-threading achieves the vast majority of the speed increase, and profiling will only be of marginal utility. The profiler is just one tool of many, and is not a silver bullet.

  • Ulrich Drepper (Score:2, Insightful)

    by quigonn ( 80360 )
    Ulrich Drepper is a fool: he made glibc crappy and messed up most of the things he has been involved with. He should simply shut up and let other people do the work and the thinking.

    Yeah, mod me down, but I have insight into the things Ulrich does, and he mostly does sh*t. Just my 2 cents (USD or EUR, you decide).
    • I found that Ulrich is pretty easy to work with... if you have a clue. glibc is NO easy task (99.99% of all programs depend on this library... now make it feature-rich and compatible).

      I tracked down a bug in __fsetlocking and he was most helpful in fixing glibc.

      Pan
      • You don't understand. glibc itself is flawed, badly. The source distribution contains Linux header files (yes, from the kernel; Linus said this is evil), and contains code just to copy some kernel data structures into glibc's own data structures, because glibc has some super-fancy features that everybody needs (not really, actually). In other words: glibc is bloat. Ulrich Drepper doesn't care about this, and packs every feature he can think of into glibc. Other people have proved that C libraries can be done much smaller: uClibc, for example, or, my personal favorite, diet libc. The interesting thing about this is that programs that are _statically_ linked against diet libc are usually smaller than programs that are dynamically linked against glibc.
  • OProfile (Score:5, Informative)

    by mmontour ( 2208 ) <mail@mmontour.net> on Friday July 05, 2002 @02:32PM (#3828794)
    Take a look at OProfile [sourceforge.net]. It's quite a nice tool, although it's not a direct replacement for gprof. From their 'About' page:

    OProfile is a system-wide profiler for Linux x86 systems, capable of profiling all running code at low overhead. OProfile is released under the GNU GPL.

    It consists of a kernel module and a daemon for collecting sample data, and several post-profiling tools for turning data into information.

    OProfile leverages the hardware performance counters of the CPU to enable profiling of a wide variety of interesting statistics, which can also be used for basic time-spent profiling. All code is profiled: hardware and software interrupt handlers, kernel modules, the kernel, shared libraries, and applications (the only exception being the oprofile interrupt handler itself).
    • OProfile + Prospect (Score:4, Informative)

      by irix ( 22687 ) on Friday July 05, 2002 @04:24PM (#3829437) Journal
      And for getting even more useful information out, try Prospect [sourceforge.net]. It works with OProfile - there was a talk about it at this year's Ottawa Linux Symposium, which you can find in the conference proceedings [lwn.net] (gzipped PDF).
    • Take a look at FunctionCheck [univ-lyon1.fr]

      Five bucks says that this server is slashdotted within the hour, so you may have more success with the less descriptive SourceForge project page [sourceforge.net], which indicates that the project is not dead, despite what the homepage says.

      I discovered this program when I was optimizing some code I wrote to multiply sparse matrices. By the time I had gotten it 100x faster than the initial code, gprof had lost all semblance of granularity and was giving me obviously bogus results. The problem is that such things as cache performance (i.e. optimizing for cache hits) were now heavily affecting the profile and gprof could not figure such things out. FunctionCheck [univ-lyon1.fr] works much better than gprof and actually generates accurate profile information under high-stress situations.

      From the homepage (all grammatical errors theirs):

      "I created FunctionCheck [univ-lyon1.fr] because the well known profiler gprof have some limitations:

      • it is not possible to change the profile data file name
      • multi-threads / multi-processes is not supported
      • time spend in non-profiled functions is discarded
      • you can't control the way profile is made
      • memory profile is not managed
      For all these limitations, and by the fact that I discovered a new gcc feature called -finstrument-functions, I decided to write my own profiler.

      My approach is simple: I add (small) treatments at each enter and exit of all the functions of the profiled program. It allows me to compute many information:

      • the current call-stack
      • the time at each action, to compute elapsed times in functions
      • process PID / thread ID, to manage multi-threads / multi-processes
      • number of calls to functions
      • ...
      With these information, I can generate profile data files (for each thread / process), which describes all the statistics (at function level) for the program execution."

      Try it out and please contribute some source code.
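
      In the spirit of contributing source code, here is a minimal sketch of the gcc mechanism FunctionCheck builds on; these are my own illustrative hooks, not FunctionCheck's actual code. With -finstrument-functions, gcc calls a pair of hooks at every function entry and exit:

        /* trace.c - illustrative sketch of the gcc feature FunctionCheck
         * builds on. Compile the code under test with:
         *   gcc -finstrument-functions app.c trace.c -o app
         * gcc then calls these hooks at every function entry and exit.
         * The hooks themselves must not be instrumented. */
        #include <stdio.h>

        void __cyg_profile_func_enter(void *fn, void *call_site)
            __attribute__((no_instrument_function));
        void __cyg_profile_func_exit(void *fn, void *call_site)
            __attribute__((no_instrument_function));

        void __cyg_profile_func_enter(void *fn, void *call_site)
        {
            /* A real profiler would record a timestamp and push fn onto a
               per-thread call stack; printing keeps the sketch short (and
               is far too slow for real use). */
            fprintf(stderr, "enter %p from %p\n", fn, call_site);
        }

        void __cyg_profile_func_exit(void *fn, void *call_site)
        {
            fprintf(stderr, "exit  %p from %p\n", fn, call_site);
        }

      The recorded addresses can be mapped back to function names afterwards with addr2line or the symbol table.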

    • See http://www.BitWagon.com/tsprof/tsprof.html [bitwagon.com] for info on a process profiler that uses hardware performance counters (with no recompile and no relink) and gives both interactive and text output in tree and flat modes.
  • 'pstack' on Solaris (Score:2, Informative)

    by wavq ( 216458 )
    While it doesn't give the exact time spent in a given function, running 'pstack' against a process ID under Solaris will give the execution stack trace of any threads present.

    If you find that 80% of your threads are in slow_function(someParam), then ya better get to work fixing it. This also has the added advantage of not slowing down your program with profiling code and other hooks.

    Obviously this isn't great for fine-grained profiling, or for applications with few threads, but I've found it helpful on my larger projects.
    • Yes, the company I work for used this technique to build up gprof style call trace information on a huge embedded, persistent, realtime, multitasking, concurrent system we built (yes it is/was horrible ;-).

      Anyway, we ran the equivalent of pstack at frequent intervals (like once per millisecond) and collected the addresses of all functions in the call tree present each time we polled the system. Got a humongous file. Then we postprocessed the file to record which functions called which other functions, and how often, and looked up the addresses in the symbol table to give usable names.

      It turns out that polling the system like that usually gives all the important information you could want: it tends to show not the most-called functions but the heaviest users of the processor, because they are much more likely to be running when the pstack happens; statistically, the number of times a function appears is proportional to the total time it runs for. And the technique is minimally invasive and doesn't require recompilation of the code under test.

      Then we printed the summary out in a huge printout, each function sorted by the percentage of ticks spent in it, and then spent a week or two staring at it. It showed some amazing things, like certain functions taking an order of magnitude longer than originally designed, that kind of thing.

      It is really quite a useful technique.

    • On Solaris, I find pstack useful for debugging, but it really isn't useful for profiling. For that, I would use either truss or the profiling tools that come with Sun Workshop 6 (if you are using Sun cc/CC).
  • by PissingInTheWind ( 573929 ) on Friday July 05, 2002 @02:37PM (#3828834)
    Maybe the problem with today's profilers is that the compiler implementors spend too much time making a compiler that tries to optimize everything by itself, which might not even produce the best code in every case.

    What would be more useful is if the compiler implementors spent as much time on the profiler as on the compiler: you would then be able to easily see the faulty parts in your software and be able to determine what needs to be optimized.

    Good profilers would mean efficient code. Don't think profilers are useless just because most implementations of them suck.

    • by maxwell demon ( 590494 ) on Friday July 05, 2002 @03:07PM (#3829066) Journal
      While improved profilers would surely be useful, don't think optimizing compilers are useless.
      • Hand-optimized code tends to be less clear and less readable. Also, it makes it easy for new bugs to creep in.
      • Hand-optimized code would be machine-specific. While it would work on other machines, it would be dog slow there. So you'd basically be back to per-architecture versions of your program.
      • Some optimizations cannot be done by the programmer, because they occur at levels below the language. For example, the POWER architecture has a multiply+add instruction, but most common programming languages don't have a multiply+add operator. So how would you hand-optimize the use of that instruction?
      • Hand-optimization at the level the compiler works at could even hinder hand-optimization in the area where it is most effective and the compiler cannot do it at all: algorithmic optimization. To do that efficiently, you need highly structured code so you can exchange algorithms easily. However, micro-optimizations of the sort the compiler performs tend to destroy such structure.
      However, with compilers getting more sophisticated at optimization, profilers get even more important: while you may be able to add some "profiling instructions" for your own use, profile-driven optimization in the compiler cannot use such a hand-rolled replacement.
    • What would be more useful is if the compiler implementors spent as much time on the profiler as on the compiler: you would then be able to easily see the faulty parts in your software and be able to determine what needs to be optimized.

      Better yet, if an architecture has a static branch predictor that encodes "mostly taken" or "mostly not taken", the compiler could emit profile code that measures how fast a particular variant runs and then take that into account for the next optimization pass.
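
      To make that concrete, here is a hedged sketch of encoding branch expectations by hand with gcc's __builtin_expect, the sort of hint a profile-driven pass could emit automatically from measured frequencies. The likely/unlikely macro names are a common convention, not part of gcc:

        #include <stdio.h>

        /* Illustrative only: __builtin_expect records the expected branch
           direction so the compiler can lay out the hot path first. */
        #define likely(x)   __builtin_expect(!!(x), 1)
        #define unlikely(x) __builtin_expect(!!(x), 0)

        int sum_positive(const int *buf, int n)
        {
            int i, sum = 0;
            for (i = 0; i < n; i++) {
                if (unlikely(buf[i] < 0))  /* rare error path */
                    return -1;
                sum += buf[i];
            }
            return sum;
        }

        int main(void)
        {
            int data[4] = { 1, 2, 3, 4 };
            printf("%d\n", sum_positive(data, 4));
            return 0;
        }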

  • I can't get any useful profiling information out of Microsoft Visual C++. When I compile in profiling mode, my program runs at less than 1% of normal speed, producing completely useless data. Am I doing something wrong? Should I be using 3rd party tools?
    • VTune and Quantify (Score:4, Informative)

      by Codex The Sloth ( 93427 ) on Friday July 05, 2002 @02:53PM (#3828950)
      If you want tree profiling (i.e. information about function and child performance), then Rational Quantify is a reasonable alternative to the crap profiler that comes with MSDev.

      If you want a flat profiler or need to analyze the cost of specific low level operations then you MUST get Intel VTune.
  • by Wesley Everest ( 446824 ) on Friday July 05, 2002 @02:40PM (#3828861)
    I work as a game developer, and we have to make sure that everything that is done for each frame takes less than 33ms. So we're always profiling our code to cram more functionality into a limited amount of time.

    But even if you aren't doing something that is speed-intensive like games, you always have tradeoffs when you choose your data structures and algorithms. Generally you first code up the easiest algorithm that you think will use an acceptable amount of memory and CPU time. Then, later, if something is too slow, you have to identify where the problem is. It could be that you chose an O(N^2) algorithm, not realizing that N might be 1,000 instead of the max of 100 you were counting on, forcing you to switch to an O(N log N) algorithm that is more complex.

    Now, if it is a small application, you might have enough familiarity with the code to be able to guess where the problem is -- then you fix it and see if it is still slow. If that works, then you're set and profiling isn't necessary. But if the fix doesn't speed it up enough, then you're stuck. You have to profile it somehow.

    You might try simple tricks like changing the code to loop on a suspected bit of code 100 times and see how much longer it takes. Or maybe throw in some printf's that spit out the current time at different points. Or maybe create your own profiling code that you manually call in functions you want to time. Or, you might use an actual profiler without modifications to the code. But lacking a profiler doesn't mean you can't or won't profile your code.

    And even with CPU speed doubling every couple of years or so, that doesn't mean speed is no longer an issue. You can easily choose the wrong algorithm and have something take 1000s of times longer to run than the proper algorithm.
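
    A hedged sketch of the roll-your-own timing trick described above, using gettimeofday() to print wall-clock time around a suspect block; all the names here are illustrative:

      #include <stdio.h>
      #include <sys/time.h>

      /* Poor man's profiling: time a suspect block by hand.
         suspect_code() stands in for whatever you want to measure. */
      static double now_seconds(void)
      {
          struct timeval tv;
          gettimeofday(&tv, NULL);
          return tv.tv_sec + tv.tv_usec / 1e6;
      }

      static void suspect_code(void)
      {
          volatile double x = 0;
          int i;
          for (i = 0; i < 1000000; i++)
              x += i * 0.5;   /* busy work standing in for real code */
      }

      int main(void)
      {
          int run;
          double t0 = now_seconds();
          for (run = 0; run < 100; run++)   /* loop 100x to amplify the cost */
              suspect_code();
          printf("100 runs took %.3f s\n", now_seconds() - t0);
          return 0;
      }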

  • I used gprof (Score:3, Informative)

    by Zo0ok ( 209803 ) on Friday July 05, 2002 @02:40PM (#3828863) Homepage
    I used gprof quite a lot during my Master's Thesis work this spring. gprof tells you which functions consume the most CPU time, and those functions can then be optimised. Usually very small parts of the code consume most of the CPU time.

    That program was parallelised at the network level: all clients were single-threaded. If someone has multithreaded for performance (to utilise more than one CPU), I suppose gprof will still work well on a single-CPU machine with just one thread.

    For programs that consume lots of CPU time on well-defined computations, it should not be hard to profile a single-threaded version (a single-threaded version is needed for debugging anyway).

    More complex applications (for example, a web browser) are, I imagine, more dependent on multi-threading, and should pose a larger problem.

    gprof is probably not dead: if you need it, you can adapt your program...
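
    For anyone who hasn't tried it, the classic gprof workflow looks something like this (a toy example; the file and function names are made up):

      /* hot.c - a toy program for the classic gprof workflow:
       *   gcc -pg -O2 hot.c -o hot
       *   ./hot                # writes gmon.out in the current directory
       *   gprof hot gmon.out   # flat profile plus call graph
       */
      #include <stdio.h>

      static double burn(int n)       /* deliberately hot */
      {
          double x = 0;
          int i;
          for (i = 0; i < n; i++)
              x += (double)i / (i + 1);
          return x;
      }

      static double cheap(void)       /* should barely register */
      {
          return 42.0;
      }

      int main(void)
      {
          double total = 0;
          int i;
          for (i = 0; i < 2000; i++)
              total += burn(100000) + cheap();
          printf("%f\n", total);
          return 0;
      }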
  • by sane? ( 179855 ) on Friday July 05, 2002 @02:45PM (#3828891)
    The problem is not that certain tools have issues, but rather that today's programmers have no interest in creating efficient code.

    Those of us who started programming in 1k and sub-megahertz machines can really feel the time taken by badly coded applications. We know that forgetting what is happening on the silicon can kill how well our code runs.

    However, those who started coding after ~1987 don't really have a gut feeling for it. To them, the latest processor will make up for their bad coding. To a certain extent they are right. Today's advances STILL keep up with Moore's Law, and still make up for their lack of skill. However, when one looks at what is actually accomplished with all that power, one tends to question why we are paying so much for so little.

    Can you actually say that MS WordXP is much better than the non-WYSIWYG wordprocessor of yesteryear (itself a blast from the past)?

    We don't need profilers; we need coders who have that tacit knowledge of what really counts, of where they should put real effort.

    Unfortunately that doesn't come in a software box.

    • These days instruction-level efficiency simply isn't important outside of a few niche areas (embedded systems, games, multimedia, certain kinds of low-level systems work). To imply that knowing what's happening on the silicon is "what really counts" is nonsense. Using appropriate data structures and algorithms counts, and making correct software counts even more, but worrying about how many cycles one instruction takes versus another is a serious misdirection of effort on modern machines!

      It's folks like you who are the reason people still write their SSH daemons in C, and why we live in a mixed up world where we have neither stability NOR speed!
      • Using appropriate data structures and algorithms counts, and making correct software counts even more, but worrying about how many cycles one instruction takes versus another is a serious misdirection of effort on modern machines!

        But he didn't say that... He said that programmers should know where to invest the effort, and take an interest in creating efficient code. That means, first and foremost, exactly what you just said: you have to be smart about your DS&As, aware of what you're writing and not pointlessly lazy when coding. It doesn't mean, and wasn't claimed to mean, that you have to micro-optimise everything at the assembler level.

    • by Malc ( 1751 )
      You talk such twaddle. Why waste your time trying to write efficient code from the start? It's much better to write easily understandable, easily maintained, quickly written, minimally buggy code, unless you have a real need, such as with an embedded system. If the code doesn't perform well enough, then come back and optimise it later. It's a matter of where you want your efficiencies: memory and CPU utilisation, or the development process. The latter is cheaper for a business, and so long as the product works on hardware that the users have, the former is a waste of time and money.

      I used WordPerfect 5.0 (or whatever it was) on a dual 360K 5.25" floppy disk drive machine. Plain blue text screen only. I have to say, I *much* prefer Word XP. If given the choice, I would not go back to those crappy DOS days.

      By all means, be sentimental and reminisce about the old days. But things have changed - accept it.
      • Why not learn to write efficient code in the first place? Then you'll need neither a debugger nor a profiler.

        Not to be a troll, but I see a lot of programmers with this kind of attitude - "let the compiler catch mistakes", or "code it fast and use a debugger", etc. What invariably happens is that programmers who learn to code this way spend their careers writing code which is neither efficient nor easy to maintain. Worse, they waste a lot of time in the debugger that they could have avoided had they thoroughly planned their code.

        I used to just blitz through the code, without really planning what I was going to do. While this worked well for small projects, when I got into the professional world, my debugging time went up by an order of magnitude. I've found that I actually get code done faster if I think it through and plan it out before I start writing. I've learned that if I want to fly through coding and debugging, I have to take some time first and plan what I'm going to do. Otherwise, if I scramble off to code writing without planning, I end up using the debugger quite a bit. But then again, YMMV.

      • No, he's right (Score:4, Insightful)

        by Anonymous Brave Guy ( 457657 ) on Friday July 05, 2002 @08:08PM (#3830492)
        Why waste your time trying to write efficient code from the start? It's much better to write easily understandable, easily maintained, quickly written, minimally buggy code.

        Why are these mutually exclusive? There's efficient and there's optimised, and one is a much easier subset of the other.

        He's not claiming that everyone should hand-optimise from the word go. He's saying programmers should have a basic knowledge of their craft. It doesn't take much extra effort to use an efficient sorting algorithm or store data in a fast look-up structure, rather than writing a naff, hand-crafted shuffle sort and using arrays for everything whether they're appropriate or not. And yet, through ignorance or plain laziness, most programmers in most languages take the latter approach. (If you've never seen any of the source code for big name applications/OSes, trust me, it's scary.)

        Similarly, it is just careless to pass large structures by value unnecessarily in a language that has reference semantics. You have to know the basics of what is efficient use of your tools of choice if you want to write good code, and the old Moore's Law excuse is just a cover for laziness and failure to do the requisite amount of homework.

        Note that, very importantly, none of these things requires more than a small effort. They certainly don't compromise maintainability, bug count or any other relevant metrics, and a competent programmer (if you can find one) will take these things in his stride, and still be faster than the others.

        I used WordPerfect 5.0 (or whatever it was) on a dual 360K 5.25" floppy disk drive machine. Plain blue text screen only. I have to say, I *much* prefer Word XP.

        Interesting... We have just acquired a new P4/2.2GHz with 512MB RAM and running WinXP as a development machine at work. You know what? It's way, way slower than the 1.4GHz P4 running 2000 we already had. And that in turn is way slower than the 1GHz P3 running NT4. This is not subjective, it is based on obvious, objective measures. For example, my new machine (the fastest of the above) sometimes takes 3-4 minutes to act on an OK'd dialog in Control Panel. The NT4 box reacts instantly when you configure the equivalent options. Something is wrong at this point, and I'm betting it's a combination of code bloat and feature creep.

      • quickly written, minimally buggy code

        Thanks for making my point for me.

        Quality is designed IN; taking bugs OUT is an admission that you really didn't pay enough attention at the beginning. Sure, you get the odd typo, but the real bugs are the ones in the logic of what you're writing, and you often don't catch all of those.

        If you are thinking about what is actually happening, rather than just pasting in a bit of gash code, you are much more likely to create something with quality engineered IN. Trust me, it's the only way it's going to get there.

        As for the 'speed it up after the event' crowd: did you ever think that if you used the right approaches, the right concepts, from the start, you wouldn't have to spend the time tweaking some supposedly critical element at the end? It should be second nature IF you really understand what you are doing. Sure, there are always the games, device drivers etc., but I'm talking about the day-to-day code that gets executed every day by millions of people around the world. It generally takes no more effort to use the right technique than the wrong one - if you only knew the difference.

        Have a little pride in your work, man! You might find that your 'good enough, let's stuff it out the door' mentality is why you don't go forward, and your company goes to the wall as a result of a buggy product.

    • Can you actually say that MS WordXP is much better than the non-WYSIWYG wordprocessor of yesteryear

      Hell, yes. WYSIWYG is very useful.

      However, if you ask if WordXP is much better than Word for Windows 2.0, well, that's a much harder question to answer in the affirmative.

      For me, anyway.

      Tim

      • I think this is getting to the heart of what I was saying.

        I seem to spend time every day helping out someone who is trying to fight Word into doing what THEY want, rather than what it wants to do. This is not you or me, the people who can just pick it up and use it; it's the non-expert, the majority. They simply find today's wordprocessors no great advance over those text-mode wordprocessors of yesteryear.

        The reason? Back then they KNEW what it was doing; they could SEE the control codes, and delete them if they were wrong. Sure, it couldn't tell you your grammar was wrong, but it never really fought against you either.

        If you look back to Word 2.0 and compare it against today, you can see certain elements that you can think of as advances. But you can't really see much, and it's certainly not an order of magnitude better. And we do have a whole load of attendant junk. Basically, we're going backward again.

        If we are going to go in the direction of a 'smart' wordprocessor, then I want a truly smart one. Something that means I do less work and produce a much better result. I don't want something with a level of complexity that means I'm forever fighting it while doing the actual job - the one of transferring knowledge from my head to someone else's with the minimum of time and effort.

        • If we are going to go in the direction of a 'smart' wordprocessor, then I want a truly smart one. Something that means I do less work and produce a much better result.

          But Word is smart; it has IntelliSense Technology(TM)(R)(C). That's how it knows that when I type "6 July 2002" at the top of my letter, I really mean "6 July 2002-07-06". Come on, it's obvious... ;-)

    • I'll always choose a program that exists and works, with a good user interface, over one that is never released because the author(s) thought it could be faster.

      Listen to your profiler. Everything else lies.
    • Can you actually say that MS WordXP is much better than the non-WYSIWYG wordprocessor of yesteryear (itself a blast from the past)?

      Considering my experience dates back to Wordstar, I can answer yes to that question. Of course, if you can show me a better way to do tables in Wordstar 1.0 for CP/M besides using | and - characters, it'd be greatly appreciated.
    • <Monty Python>

      1k? Luxury! I had 512 bytes for both program and data, and when we came home, Dad would beat us around the head and neck with a broken bottle, if we were LUCKY!

      </Monty Python>
    • The problem is not that certain tools have issues, but rather that today's programmers have no interest in creating efficient code. [...] Today's advances STILL keep up with Moore's Law, and still make up for their lack of skill. However, when one looks at what is actually accomplished with all that power, one tends to question why we are paying so much for so little. [...] We don't need profilers; we need coders who have that tacit knowledge of what really counts, of where they should put real effort.

      I couldn't agree more. Sadly, the fact that almost everyone replying to your post thinks it is advocating premature optimisation at the level of assembly-level tweaks makes your point all too well.

    • Can you actually say that MS WordXP is much better than the non-WYSIWYG wordprocessor of yesteryear (itself a blast from the past)?

      Yes.

      I've had firsthand experience with two non-WYSIWYG word processors, WordStar 2000 and WordPerfect 5.1. The first one was clunky, klutzy, and there was no way to tell what the darn thing was going to look like without printing it out. The second suffered from the "external blind manual" syndrome, to the point where it was necessary to memorize commands just in case the "secret cheat sheet" for the F-keys was missing.

      'course, I'd love it if the @!$@!#%ing thing actually worked faster... but at least it gets em-dashes right.
  • Not useless (Score:5, Insightful)

    by pthisis ( 27352 ) on Friday July 05, 2002 @02:52PM (#3828939) Homepage Journal
    Profiling in general certainly isn't useless. I'll usually write new code primarily in a high-level, high-productivity language (e.g. Python), and if it's too slow I'll profile it and rewrite applicable parts in C. Some projects require a lower level (C) approach from the start, though those are pretty rare. Without profiling you'll spend a lot of time optimizing code that isn't a bottleneck.

    Remember the words of Knuth: "Premature optimization is the root of all evil." Without profiling, you don't know what optimization is really needed and what isn't.

    That said...
    BEGIN RANT
    I've used gprof successfully with plenty of recent code. It works perfectly fine on non-threaded code, which _should_ be the majority (99%+) of code out there. Yes, that includes big network servers (the last one I wrote recently passed the 6 billion requests served mark without blinking). Threads are a really nasty programming rathole that should be applied in a limited way; they take much of the time and effort spent developing protected-memory OSes and toss it out the window. They also tend to encourage highly synchronized execution instead of decoupled execution, which often makes things both more bug-prone (locking issues are _tough_ to get right when they become more than one level deep) and slower to implement than a well-designed multiprocess solution with an appropriate I/O paradigm. Just because two popular platforms (Windows and Java) make good non-threaded programming difficult doesn't mean you should cave in.
    END RANT
    • Just because two popular platforms (Windows and Java) make good non-threaded programming difficult doesn't mean you should cave in.

      WTF? How does Java make it hard to write non-threaded programs? If anything Java makes it easy to START writing threaded programs. When all the details start hitting you, you realize that it's trickier than it looks.

      -jon

      • WTF? How does Java make it hard to write non-threaded programs?

        No fork(). No multiplexed I/O. Try writing a good scalable network server in a single thread without the moral equivalent of select().

        Java 1.4 recognized that, and added I/O multiplexing. Still no good multiprocess (but not multithreaded) framework, though, and I/O multiplexing only solves a limited subset of cases.

        Sumner
      • fork()-ing in Java would be an incredibly expensive operation. Mostly because you have to start an entire new VM. I'm not sure why you would prefer a new process instead of a simple thread. After all, separate processes have a hard time sharing data in an effective manner, making a global object cache overly difficult to implement.
    • Okay, so let's say threads are evil.

      But processes as provided by current operating systems are too expensive to use. If I have a network server (e.g. a httpd) that has to create a process for each network request, it will never scale. In theory all that has to happen is inetd (or equivalent) fork/execs and does the necessary plumbing so that the ends of the socket are STDIN and STDOUT. Then the process just reads and writes as necessary to fulfil the request. In practice, this just doesn't work.

      That's why you can't use CGI for high-volume transactions. So let's make the server a single multithreaded daemon process instead [aolserver.com], where each request is handled by a thread. Now you can handle each request much faster, but you lose the protected address space the OS gives you with a process.

      Obviously, the OS needs to change and give us something (maybe a hybrid between processes and threads) that more closely meets application needs. I don't see anybody making suggestions as to ways to move forward. Anybody know of research in this area?

      • by pthisis ( 27352 ) on Friday July 05, 2002 @03:42PM (#3829262) Homepage Journal
        Okay, so let's say threads are evil.

        Okay.

        But processes as provided by current operating systems are too expensive to use.

        No, they aren't. Have you measured fork() speeds under Linux vs. pthread_create() speeds? Sure, Windows and Solaris blow at process creation (and Windows doesn't have a reasonable fork() alternative - it conflates fork() and exec() into CreateProcess*()), but that doesn't make all OSes brain-dead.

        If I have a network server (e.g. a httpd) that has to create a process for each network request, it will never scale.

        Right. And if you create a new thread for each network request, you'll never scale either - give it a try some time. Good servers that use a thread/process for every connection do so with pre-fork()'d/pre-pthread_create()'d/whatever pools. Apache, for instance, uses multiple processes (but no multithreading, except in some builds of 2.x) but pre-forks a pool of them. This is really basic stuff; even an introductory threading book will talk about pooling and other server designs.

        Really scalable/fast implementations don't even do that. They use just one process (or one per CPU) and multiplex the I/O with something like select, poll, queued realtime signals (Linux), I/O completion ports (NT), /dev/poll (Solaris), /dev/epoll, signal-per-fd, kqueues (FreeBSD), etc. (select and poll don't scale well to 10s of thousands of connections when most are idle, but some of the others are highly scalable). See e.g. Dan Kegel's c10k page [kegel.com] for specifics.

        Obviously, the OS needs to change and give us something (maybe a hybrid between processes and threads) that more closely meets application needs

        http://www-124.ibm.com/pthreads/ proposes an M:N threading model and offers an implementation, but it still has the shared-memory problems of threads. Multiprocessing may not be sexy, but it's really a lot cleaner for most problems and can be more efficient in a lot of domains.

        Sumner
        • fork() alternative--it conflates fork() and exec() into CreateProcess*())
          >>>>
          Umm, fork() is the one that's braindead. Who the hell dreamed up a system where creating a new process copies the entire state of an existing one, only to have it wiped out when the new process does an exec()? fork() requires all sorts of nasty stuff (like copy-on-write in the VM) that is ditched if the OS follows a process/thread model. Windows might be braindead, but CreateProcess() makes a hell of a lot more sense than fork().
          • Umm, fork() is the one that's braindead. Who the hell dreamed up a system where creating a new process copies the entire state of an existing one, only to have it wiped out when the new process does an exec()? fork() requires all sorts of nasty stuff (like copy-on-write in the VM) that is ditched if the OS follows a process/thread model.

            Uh, COW isn't ditched in a process/thread model. Shared libraries would suck without it. Demand paging of executables wouldn't work without it. It's a fundamentally good thing used by Unix, Mac OS X, Windows, and almost all other modern OSes that support protected memory. Definitely not "nasty stuff", and by itself it eliminates 99% of the fork() overhead vs. threads.

            You really want to be able to create a new process with the same state as the existing one, and fork/exec allows that. There's system() if you want an entirely new executable (which might call fork()/exec() or might call spawn(), vfork()/exec(), or whatever...). I don't feel like arguing over whether a spawn()/CreateProcess*()-style syscall is good, but not having a fork()-style syscall is simply braindead. There are things you can do with fork()/exec() that you can't do with spawn() or CreateProcess*(); the reverse isn't true.

            Sumner
          • fork() gives you a chance to do all sorts of useful things in the child process before you exec() and start the new code running that you have no control over. You can redirect files, set resource limits, acquire or drop capabilities, open or close pipes, etc etc. Since you can't do these things directly inside the child process in Windows, Windows gives you a way to do some of them by passing parameters to CreateProcess, others using special purpose APIs such as DuplicateHandle and the Job APIs, and others you can't do at all without a world of pain. This is one reason why the Win32 APIs are so complex and bloated.

            So at the expense of a little complexity in the VM system for copy-on-write, you get a much simpler, cleaner interface for the programmer building stuff on top of the OS. (Not to mention the fact that you can fork() without exec'ing, and in many situations that turns out to be a much more convenient and safe approach to concurrency than using threads.) Sounds like a good tradeoff to me. We're not talking about much complexity either; students routinely do this stuff in OS course project assignments.

            I think understanding why fork()/exec() is better than CreateProcess() is an excellent lesson on how to design a good interface.
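
            To make that concrete, here is a minimal, hedged sketch of the fork()/exec() idiom: the child redirects its own stdout before exec() replaces its image. The output file name and command are illustrative:

              #include <fcntl.h>
              #include <stdio.h>
              #include <sys/wait.h>
              #include <unistd.h>

              /* The child configures itself (here: stdout redirection)
                 between fork() and exec(); the parent just waits. */
              int main(void)
              {
                  pid_t pid = fork();
                  if (pid < 0) {
                      perror("fork");
                      return 1;
                  }
                  if (pid == 0) {              /* child */
                      int fd = open("out.txt",
                                    O_WRONLY | O_CREAT | O_TRUNC, 0644);
                      if (fd < 0) { perror("open"); _exit(127); }
                      dup2(fd, STDOUT_FILENO); /* redirect before exec */
                      close(fd);
                      execlp("ls", "ls", "-l", (char *)NULL);
                      perror("execlp");        /* reached only on failure */
                      _exit(127);
                  }
                  waitpid(pid, NULL, 0);       /* parent waits for child */
                  return 0;
              }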
      • by Splork ( 13498 ) on Friday July 05, 2002 @03:46PM (#3829284) Homepage
        Minimize the use of threads whenever possible. Write your code in an event-driven fashion, as your friendly AC suggested. The poll() system call [superior to select(), though select() works well within its fixed-size file-descriptor array limits] makes this possible.

        The basic mentality for switching from threads to event programming is this: any time you're using a thread solely so that it can sit around and block on high-latency events (network or disk I/O) for most of its lifetime, it should not be a thread.

        It's acceptable to have worker threads/processes that you hand computational tasks to, and that trigger an event in your event loop when they hand a result back, but don't use threads of execution to manage your state. You'll pull your hair out and still have a nonfunctional program.
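
        A minimal, hedged sketch of that event-driven style: a single-threaded loop blocking in poll() instead of parking threads on I/O. It watches only stdin to stay short; a real server would keep one pollfd per connection:

          #include <poll.h>
          #include <stdio.h>
          #include <unistd.h>

          /* A single-threaded event loop: block in poll(), then
             dispatch on revents. The only "connection" here is stdin. */
          int main(void)
          {
              struct pollfd fds[1];
              fds[0].fd = STDIN_FILENO;
              fds[0].events = POLLIN;

              for (;;) {
                  int n = poll(fds, 1, 5000);   /* wait up to 5 s */
                  if (n < 0) { perror("poll"); return 1; }
                  if (n == 0) { fprintf(stderr, "tick (no events)\n"); continue; }
                  if (fds[0].revents & POLLIN) {
                      char buf[256];
                      ssize_t len = read(fds[0].fd, buf, sizeof buf);
                      if (len <= 0) break;      /* EOF or error */
                      fprintf(stderr, "event: read %ld bytes\n", (long)len);
                  }
              }
              return 0;
          }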
    • It works perfectly fine in non-threaded code, which _should_ be the majority (99%+) of code out there.

      When I'm running a graphical program, the UI must not lock up, no matter what processing is going on in the background. I don't care how you solve that problem, but a simple use of threads is one of the simplest methods.
      • When I'm running a graphical program, the UI must not lock up, no matter what processing is going on in the background. I don't care how you solve that problem, but a simple use of threads is one of the simplest methods.

        Not really. It seems simple until you get into the details. Yes, for some things multi-threaded is the way to go. But a multi-process solution is usually easier to implement and more stable, and a straight asynchronous single-threaded state-machine solution is often the best (in terms of ease of implementation and performance). Remember, the difference between threads and processes is that processes have protected memory while threads share all memory. The number of cases where you really don't want most of your memory protected is very small, especially when you remember that processes can easily get shared memory segments for select pieces of memory. Most people choose threads because they think threads are better/faster/smaller than processes (which is true on some broken OSes but not meaningfully true on Linux) rather than based on whether or not they want most memory shared.

        Sumner
        • (which is true on some broken OSes but not meaningfully true on Linux)
          >>>>>>
          Not so. Have you ever compared the time between a thread-switch and a process-switch? The only difference on Linux is changing the MMU context. Yet, on x86, changing the MMU context is the slowest operation you can perform.
          • Not so. Have you ever compared the time between a thread-switch and a process-switch?

            Yep. And for most applications, it's not meaningful. If you spend all your time context switching, you're definitely not efficiently designed, whether you use threads or processes - you can definitely measure the overhead in that case, but when you go to a situation where you're synchronizing on anything (mutexes, sockets, whatever), the difference essentially disappears. And even in the measurable situation, the difference isn't huge - about 2 usecs on my home machine, on a total overhead of 4 usecs vs. 6 usecs (threads vs. procs). Sure, it's 33% SLOWER!!!! Horror!!! In the real world, it generally doesn't matter, and it's small enough that if context-switch overhead is hurting your multiprocess app, then switching to multithreading won't really help.

            Both are so fast that if you thought about your design at all they won't even be a blip on the radar, unlike on some OSes where switching processes can take 100s of usecs vs. 10 usecs for a thread switch.

            There are exceptions, which is why I didn't say that threads are always bad. But the performance argument here is almost always specious, brought up by people who learned about threaded programming on other platforms where it is a huge win and used to defend a poor design choice (look, I can measure the difference in contrived situation X even though it has no effect on system performance).

            Sumner
  • It isn't just gprof that's broken by pthreads, other Linux tools fall victim as well. Core dump? Almost useless with pthreads running. Gdb? Getting better, but still a little wonky. Certain aspects of signal handling don't work as expected with pthreads.
  • ACE has the answer (Score:3, Informative)

    by Ricdude ( 4163 ) on Friday July 05, 2002 @02:59PM (#3829000) Homepage
    There is a simple profiling capability in the ACE [wustl.edu] toolkit, the ACE_Profile_Timer [uci.edu]. Easy to wrap in a class with basic Start, Stop, and Elapsed methods. If you can guess what function or two the bulk of your program's time is being spent in, this can help pinpoint the worst offenders within that section of code. If not, create several timers, and time each function in your main loop, and print the information after the loop is finished. Drill down into subfunctions as needed. See where the milliseconds tick away. You might be surprised.

    And remember, in the immortal words of Michael Abrash, "Assume Nothing. Measure the improvements. If you don't measure, you're just guessing."

  • There are very few applications that don't reach out across a network for information, and the bottleneck is usually that network communication. Check out Performant [performant.com] for tools that work at the network level.

    There's also a continuing trend of software developers spending users' computing power to make their own jobs easier. Java, J2EE, C#, .NET, C++, C - all can theoretically produce software that is just as speedy as assembly, but it rarely is. People still write assembly where performance really counts (games, realtime, etc.)

    Some people think that the wasted processing power is a crime. Me, I think it's just economics. It's much cheaper to pay for processing power than it is to pay for developers to squeeze every last bit of performance out of an app.

    However, there are some applications where profiling is absolutely required. Database engines, games, simulations - anything that is CPU-bound has the potential of benefiting from profiling.
  • Quantify! (Score:3, Insightful)

    by ptomblin ( 1378 ) <ptomblin@xcski.com> on Friday July 05, 2002 @03:15PM (#3829115) Homepage Journal
    I've solved some important real-world problems using Quantify and Purify, especially when dealing with a huge system with a lot of developers' fingers in the pie. One of the programs was handling 100,000+ transactions a day, and Quantify helped shave off enough so we didn't have to force all of our customers to upgrade their hardware.

    Faced with a similar problem in Linux, I'd probably port the program to Solaris, Quantify it there, and hope the results are similar under Linux.
  • For Java we have a really nice choice of profilers. There are basically three great products available, all of them have proved to be absolutely useful. There is JProbe [sitraka.com], OptimizeIt [borland.com] and JProfiler [ej-technologies.com] (the 2.0 beta of JProfiler looks cool [incors.com]). I don't know what the problems on Linux are, but when programming Java, profiling is quite an enjoyable task.
  • Mind you, I have my own threads package - you need one if you want 1,000,000+ really small threads running together, with totally minimal stack space (4 bytes, not the 1 MB that pthreads gives you). The only hard part was making gprof use SIGALTSTACK (which was broken in the kernel when I started).
    Of course this worked because, from gprof's point of view, I was running in one kernel thread. Apart from that: oprofile rocks :-)
  • First, given that programmers seem to want the profiler to automatically correct mistakes in timing, coding, and design, one would expect the profiler to work even less well in multithreaded programming. For instance, if the threads do not lock and unlock resources in a consistent order, and the code therefore takes a very long time to access those resources, the profiler will tell you that the section of code is taking a long time, but not why. Such a design will be very hard to fix, and that is not really in the domain of the profiler.

    On the other hand, profilers are very good at indicating, if the code is well designed, that out of 10K lines of code, these three functions of 10 lines each eat up 80% of the time. A sufficiently clever programmer will focus on those areas for analysis. If the code is not good, the profiler will be unlikely to reduce the problem domain. If the programmer is not good, the information will not be so useful.

    With regard to the multithreading issue, I find most problems occur in two cases. First, as in debugging, the programmer does not begin with sufficiently simple conditions. Often one cannot debug the whole application at once; likewise, profiling an entire application in multithreaded mode may not be the proper approach. Second, the functions to be profiled may not be properly designed to allow a useful profile. Multithreaded applications are often best when they are made up of simple, small-purpose functions. These are easier to debug, and easier to profile.

  • by mfos.org ( 471768 )
    Just because there is a technical issue with multithreading doesn't mean that tracking where your program spends its memory and time is a dead concept.
  • by p3d0 ( 42270 ) on Friday July 05, 2002 @03:29PM (#3829197)
    The approach of instrumenting code is practically useless for high-performance code anyway, depending on how it is written, because the instrumentation disrupts the profile. However, it's not hard to build a sample-based profiler into a program (on Linux, anyway) and use that to get a statistical measure of where time is being spent.

    This can be done in about 40 lines of code. All you need is to set up an alarm timer and then install a signal handler for it that spits out the current program counter to a file. After the run is finished, filter the PC values through addr2line and voila. If you want to get really fancy, make it walk the stack via the ebp register (on x86) and you can build yourself a call stack.
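
    Here is a hedged sketch of such a sampler along those lines; it is Linux/x86-specific (the program counter is fished out of the ucontext), and the 100 Hz rate is arbitrary:

      /* sampler.c - sketch of a built-in statistical profiler (Linux/x86).
       * SIGPROF fires on an interval timer; the handler extracts the
       * interrupted program counter from the ucontext and writes it out.
       * Post-process with: sort | uniq -c | sort -rn, then addr2line. */
      #define _GNU_SOURCE
      #include <signal.h>
      #include <stdio.h>
      #include <string.h>
      #include <sys/time.h>
      #include <ucontext.h>
      #include <unistd.h>

      static void on_prof(int sig, siginfo_t *si, void *ctx)
      {
          ucontext_t *uc = (ucontext_t *)ctx;
          unsigned long pc = uc->uc_mcontext.gregs[REG_EIP]; /* REG_RIP on x86-64 */
          char buf[32];
          /* snprintf is not formally async-signal-safe; fine for a sketch */
          int len = snprintf(buf, sizeof buf, "%#lx\n", pc);
          write(STDERR_FILENO, buf, len);  /* write() is async-signal-safe */
          (void)sig; (void)si;
      }

      int main(void)
      {
          struct sigaction sa;
          struct itimerval it;
          volatile double x = 0;
          long i;

          memset(&sa, 0, sizeof sa);
          sa.sa_sigaction = on_prof;
          sa.sa_flags = SA_SIGINFO | SA_RESTART;
          sigaction(SIGPROF, &sa, NULL);

          it.it_interval.tv_sec = 0;
          it.it_interval.tv_usec = 10000;   /* ~100 samples/second */
          it.it_value = it.it_interval;
          setitimer(ITIMER_PROF, &it, NULL);

          for (i = 0; i < 200000000L; i++)  /* busy work to sample */
              x += i;
          return 0;
      }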

  • Profiling should be performed at the unit-test level, and not on full-blown applications.

    For the most part, this approach avoids hassles with threading and processes, and has worked effectively for me on multiprocessor clusters.

  • by Dan Kegel ( 49814 ) on Friday July 05, 2002 @03:37PM (#3829241)
    See www.kegel.com/gprof.html [kegel.com] for a patch that fixes another gprof problem: it chokes after 65534 symbols. This makes it hard to profile large C++ programs.

    I think gprof is still useful. Ulrich is just being cranky. The workaround for multithreaded support works pretty well...

  • First, the idea was to write in ASM to squeeze every drop of performance from the hardware.
    Then, the idea was to write in a high-level language, but always be careful about performance.
    Then, the idea was to develop apps quickly, then profile to optimize the important parts.
    Now, screw optimization, let the user buy more hardware!
    I think this attitude sucks. Even my 1.5 GHz Athlon XP is slower running KDE 3.x (or any version of GNOME, for that matter) than my old 300 MHz PII was running Win98. And it doesn't do a hell of a lot of stuff that my old machine couldn't. I switched to Linux and took the performance hit because I hated Microsoft. I keep upgrading KDE (and my hardware) because the latest apps only work on the latest version. I don't expect more complex software to get faster, but I'd expect that as I upgrade my hardware, software should stay relatively the same speed. Yet it seems as if software is getting slower more quickly than system bottlenecks (specifically RAM and hard-drive speed) can keep up. That means that the end-user experience is deteriorating, even as users pump more money into their hardware to get usable performance.
    • Even my 1.5 GHz Athlon XP is slower running KDE 3.x (or any version of GNOME, for that matter) than my old 300 MHz PII was running Win98.

      Thanks for that information. I'm about to upgrade my trusty PII/350 running Win98 to a nice, new top-of-the-range custom-built beastie. Well, it's been four years, and it was my birthday last week. :-)

      I'd been considering installing Linux as an alternative to MS stuff, since I now object enough to the nature of Microsoft's attitudes to make the effort to switch. In the light of your information, I think I'll just install Win2K instead.

      • Nah, just use a lighter WM like Window Maker, or Fluxbox, or.....
        You can use whatever you like, not just the latest KDE or whatever.
        (Oh, and don't bother to upgrade your hardware; I am writing this on a 266 MHz Celeron with 64 megs of RAM, and it is quite fast with KDE, Mozilla and Netscape running.)
  • Funny how *NIX grognards always complain about multi-threading, but don't find signals (and their nasty interrupt-driven nature) to be the least bit unsettling!
  • what's the problem? (Score:4, Interesting)

    by g4dget ( 579145 ) on Friday July 05, 2002 @04:56PM (#3829607)
    You say that there is a problem with profiling multithreaded code with gprof. But the issue you point to seems to apply to both single and multithreaded code: Linux gprof doesn't seem to count time spent in system code.

    Now, compute-intensive code tends not to spend a lot of time in system calls, so it isn't clear that it matters whether a profiler counts time spent in system calls. I kind of prefer it if it doesn't, because then the profile isn't cluttered up with I/O delays (which are usually unavoidable).

    If you want to find out where your code is spending time in system calls, you can use "strace -c".

    There are also gcov-like tools that can be used for profiling via code insertion (as opposed to statistical profiling like gprof), although I'm not sure whether PC hardware has the necessary timer support.

    Overall, the answer is: yes, profiling still matters for programs that push the limits of the machine. But fewer programs do. I think most people would be a lot better off not programming in C or C++ at all and not worrying about performance. Too much worry about "efficiency" often results in code that is not only buggy but also quite inefficient: tricks that are fine for optimizing a few inner loops wreak havoc with performance when applied throughout a program. Too much tuning of low-level stuff also causes people to miss opportunities for better data structures and program logic. This is actually an endemic problem in the industry that affects almost all big C/C++ software systems. Desktop software, major servers, and even major parts of the kernel should simply not be written in C/C++ anymore.

    The thing with profiling and optimization is to know when to stop, and few people know that. So, maybe the best thing to say is: "no, profiling doesn't matter anymore". That will keep most people out of trouble, and the few that still need to profile will figure it out themselves.

  • Don't use threads.

    The problem you are complaining about is that profilers can't handle threaded programs. Don't write threaded programs, and the problem is solved.

    Frankly, I've always considered threading useful for only a few situations:

    o When you have an SMP system and you need to scale your application to multiple CPUs, so that you can throw hardware at the problem instead of solving it the right way

    o When you have programmers who can't write finite state automata, because they don't understand computer science, and should really be asking "Would you like fries with that?" somewhere, instead of cranking out code

    o When your OS doesn't support async I/O, and you need to interleave your I/O in order to achieve better virtual concurrency

    Other than those situations, threads don't make a lot of sense: you have all this extra context-switching overhead, and you have all sorts of other problems - like an inability to reasonably profile the code with a statistical profiler.

    OK... Whew! Boy do I feel better! 8-).

    Statistically examining the PC, unless it's done on a per thread basis, is just a waste of time in threaded programs.

    If you want to solve the profiling problem for threaded programs, then you need to go to non-statistical profiling. This requires compiler support: the compiler needs to call a profile_enter and a profile_exit for each function, with the thread ID as one of the arguments. This lets you create an arc list per thread ID and deal with the profiling separately, as if you had written the threads as separate programs. It also catches inter-thread stalls. A sketch of such hooks follows.
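
    A hedged sketch of what per-thread enter hooks might look like using gcc's existing -finstrument-functions support (rather than a new compiler feature); the record format is illustrative:

      #include <pthread.h>
      #include <stdio.h>

      /* Sketch: tag each record with the thread ID so a post-processor
         can build one arc list per thread. pthread_self() is opaque but
         stable per thread, which is all the post-processor needs. */
      void __cyg_profile_func_enter(void *fn, void *call_site)
          __attribute__((no_instrument_function));

      void __cyg_profile_func_enter(void *fn, void *call_site)
      {
          fprintf(stderr, "%lu enter %p from %p\n",
                  (unsigned long)pthread_self(), fn, call_site);
      }
      /* ...plus a matching __cyg_profile_func_exit, omitted for brevity. */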

    -- Terry
    • you need to scale your application to multiple CPUs, so that you can throw hardware at the problem instead of solving it the right way

      "Solving it the right way"? If you know how to solve the travelling salesman problem, or chess, or simulate the world's weather without throwing hardware at the problem, you really ought to publish it for the good of mankind.

      threads don't make a lot of sense

      Some problems are conceptually parallel; it is almost always easiest to write a procedure in a way that mirrors the way it's conceptualized.

      you have all this extra context switching overhead,

      So your multitasking system does 1001 context switches a millisecond rather than 1000. Woo hoo.
      • "Some problems are conceptually parellel; it almost always easist to write a procedure in a way that mirrors the way it's conceptualized."

        In that case... fork and use IPC. It's not substantially more expensive, and you won't have to ensure your parallel code is thread-safe.
        • In that case... fork and use IPC. It's not substantially more expensive, and you won't have to ensure your parallel code is thread-safe.

          But then you're forced to serialize and deserialize all the data you need to share.
    • o When you have programmers who can't write finite state automata, because they don't understand computer science, and should really be asking "Would you like fries with that?" somewhere, instead of cranking out code

      I have a formal background in CS, I'm well aware of how to use FSAs, and I'm a professional software developer, and yet I disagree with this argument. One thing I've learned is that if a tool is available and its purpose matches your need, it's generally a better solution to use that tool than to reinvent the wheel.

      I've worked on several multithreaded systems, some small-scale, some enormous. While it would theoretically have been possible to rewrite the multithreaded code as an FSA, it would surely have led to a maintenance handicap and an increased bug count, in exchange for - possibly - a tiny increase in performance, and even that is not guaranteed by any means. Why spend hours writing a multithreading system of my own when there's a tried and tested one already there for me to use?

  • I wouldn't say that gprof is useless... threading, however, comes very close to it.

    Threading is useful in the instance where you have an application that needs to scale with SMP and which you cannot, for whatever reason, fork. But the accompanying pain of being forced to pay extremely close attention and mutex-lock the code all over makes it not worth it for most situations.

    Use fork. Use other IPC methods if necessary. But don't thread, or you'll spend an order of magnitude more time debugging.
