Graphics AMD Linux Games

AMD Catalyst Linux Driver Performs Wildly Different Based On Program's Name 114

An anonymous reader writes: In past years the AMD Catalyst Linux driver has yielded better performance when the executable is named "doom3.x86" or "compiz" (among other choices), but these days this application-profile concept is made more absurd as more games come to Linux while AMD fails to maintain its Linux application profile database. The latest example: ~40% better performance from renaming Counter-Strike: Global Offensive on Linux. If the "csgo_linux" binary is renamed to "hl2_linux" (the Half-Life 2 binary) within Steam, frame rates suddenly increase across the board. This is with the latest Catalyst 15.7 Linux driver, even though CS:GO has been on Linux for nearly one year. Should driver developers re-evaluate their optimization practices for Linux?
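A minimal sketch of what an executable-name profile lookup might look like on Linux. The binary names come from the story; the table, the function names, and the lookup itself are only a guess at how such a mechanism could work, not AMD's actual driver code.

    /* Hypothetical application-profile lookup keyed on the executable name. */
    #include <libgen.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    struct app_profile {
        const char *exe_name;   /* basename of the running executable */
        const char *profile_id; /* invented internal profile identifier */
    };

    static const struct app_profile profiles[] = {
        { "hl2_linux", "source_engine_fast_path" },
        { "doom3.x86", "idtech4_fast_path" },
    };

    static const char *match_profile(void) {
        static char path[4096];
        ssize_t n = readlink("/proc/self/exe", path, sizeof(path) - 1);
        if (n < 0)
            return "generic_defaults";
        path[n] = '\0';
        const char *exe = basename(path);
        for (size_t i = 0; i < sizeof(profiles) / sizeof(profiles[0]); i++)
            if (strcmp(exe, profiles[i].exe_name) == 0)
                return profiles[i].profile_id;
        /* csgo_linux falls through to here, which is the whole complaint:
         * rename it to hl2_linux and a tuned profile kicks in instead. */
        return "generic_defaults";
    }

    int main(void) {
        printf("active profile: %s\n", match_profile());
        return 0;
    }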


  • by Z00L00K ( 682162 ) on Monday July 20, 2015 @09:02AM (#50144395) Homepage Journal

    Speed increases may be sacrificing some reliability or cutting some corners. In an FPS game it may be worth reducing the number of bits in the graphics to increase the frame rate in fast-moving images, but if you work on photo editing you want precision rather than speed.

    Maybe looking at the name of the executable was an easy way around that.

    • by i.r.id10t ( 595143 ) on Monday July 20, 2015 @09:05AM (#50144415)

      But then wouldn't it be better to let the end user choose whether something is optimized in a certain direction?

      I.e., if this can be done based on the file name and a lookup against a list of known profiles, would it then be possible to have an environment variable? If it's unset or set to "none" (or whatever), do no in-driver corner cutting (i.e., as if the game weren't renamed); if it's set to "FPS", do the speed thing at the expense of precision; and if it's set to "photo" or whatever, go for precision at the cost of speed.
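      A rough sketch of the environment-variable override suggested here, assuming a hypothetical variable named DRIVER_PROFILE_HINT; neither the variable nor the enum exists in any real driver, the point is only that the lookup would be trivial.

          /* Hypothetical env-var override: unset/"none" = no corner cutting,
           * "FPS" = speed over precision, "photo" = precision over speed. */
          #include <stdlib.h>
          #include <string.h>

          enum profile_hint { HINT_NONE, HINT_FPS, HINT_PHOTO };

          static enum profile_hint read_profile_hint(void) {
              const char *v = getenv("DRIVER_PROFILE_HINT");  /* invented name */
              if (v == NULL || strcmp(v, "none") == 0)
                  return HINT_NONE;
              if (strcmp(v, "FPS") == 0)
                  return HINT_FPS;
              if (strcmp(v, "photo") == 0)
                  return HINT_PHOTO;
              return HINT_NONE;
          }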

      • by Z00L00K ( 682162 ) on Monday July 20, 2015 @09:09AM (#50144431) Homepage Journal

        Don't underestimate the power of human laziness.

      • by Anonymous Coward

        But then wouldn't it be better to let the end user choose whether something is optimized in a certain direction?

        What? Letting the customer have control over a product they bought!
        Nope, that isn't the direction we are moving towards.

      • You have professionals doing the tweaking now. Publishing a documented interface for you to do your own tweaking seems like a waste of money to both AMD and the game developer.
        • by jiriw ( 444695 ) on Monday July 20, 2015 @12:26PM (#50145787) Homepage

          Why? We're talking about Linux drivers here. Why not let the open source community write optimization profiles? Yes, AMD would have to make some sort of interface and some documentation for it.
          But the beauty is that the community would then be selecting the best options for applications and doing its own testing, which for AMD seems to be a very time- and resource-intensive (paid testers) process, and which is why the Linux drivers are lagging behind in the first place. Of course, only software that community members actually want to put time into will eventually have good profiles. Like every other open source project, it will only improve if there are community members willing to put time and effort into it. And maybe, if you open up this process, game developers wanting their games to perform solidly on AMD cards will write their own profiles.
          Today nobody can optimize graphics performance beyond the usual resolution/AA/shader toggles on a per-application basis, except for a handful of inside developers. And we know of some 'AAA' games coded so badly they are virtually unplayable without serious driver 'tweaks'. Why not give the open source community the possibility to optimize the games they think are worth it?

          As with all open source software, it only gets better. Unless no-one cares. But then... who cares?

          • by Anonymous Coward

            As with all open source software, it only gets better.

            GNOME 3, KDE4, systemd, pulseaudio and a bunch of other software beg to differ.

      • by vakuona ( 788200 )

        But then wouldn't it be better to let the end user choose whether something is optimized in a certain direction?

        No, asking end users to optimize their own software is the silliest thing I have ever heard. And people wonder why Apple is such a success and ${insert_random_OSS_company/software} isn't.

        • It depends on how you measure success; if it's by market share then that's just not accurate.

        • by khellendros1984 ( 792761 ) on Monday July 20, 2015 @01:46PM (#50146583) Journal

          No, asking end users to optimize their own software is the silliest thing I have ever heard. And people wonder why Apple is such a success and ${insert_random_OSS_company/software} isn't.

          The best of both worlds is somewhere in between. The vendor should provide optimizations for whatever they can, given time and budget, but they should also provide a mechanism for the user to tweak settings for non-vendor-optimized software. It's my perception that this is what both Nvidia and AMD do, through their respective control panels.

          The vendor that doesn't try to optimize anything on their own isn't going to look good enough in benchmarks to make any sales, but the vendor that doesn't allow the end-user to experiment with their own settings won't do well in the PC gaming market, where a lot of users like being able to experiment with settings to find the ones that they're happy with.

      • by Kjella ( 173770 ) on Monday July 20, 2015 @11:18AM (#50145231) Homepage

        Most of the time the information would be roughly as useful as a C compiler telling you which loops it will unroll. Game-specific optimization basically means "take the whole rendering pipeline, hand-optimize shaders as if writing assembly, reorder, parallelize, cut excess calculations, use conditionally simplified algorithms and whatnot to achieve essentially the same output". It's not surprising that most of these tricks will work on games built with the same engine, but it doesn't extend to the general case. So it wouldn't really be very useful; instead of "photo" or "fps" the profiles would basically be one per game.

        I remember that at some point the AMD open source developers said they didn't have the manpower to optimize for different workloads, so they were going with a simple structure using only one algorithm. They guesstimated that they could typically get 70% of peak performance, simply because past a certain point making some things run better would make other things run worse. At the time they were busier making it work at all, but it might have been based on experience from Catalyst. Remember there's a pretty big gap between DirectX/OpenGL and the actual hardware, at least before DX12/Vulkan/Mantle.

      • Re: (Score:2, Insightful)

        by Anonymous Coward

        No.

        The things being optimized are things that you as the user will never see. Things like writing to the depth/stencil buffers despite never using the value, not disabling unused sections of the OpenGL pipeline, and in shader land, better optimized compilation of shader programs.

        The reason these optimizations are done by amd/nvidia is because people use these games to benchmark their products. "You're getting a crappy framerate because EA wrote a crappy renderer" is not an excuse you, as a customer, are g
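        The state changes described above are ordinary OpenGL calls an engine could issue itself; a per-application profile simply makes them on the game's behalf. A sketch using standard GL functions (the pass itself is invented):

            #include <GL/gl.h>

            static void skip_unused_work(void) {
                /* If nothing ever reads the stencil buffer, don't pay for stencil tests. */
                glDisable(GL_STENCIL_TEST);
                /* If the depth values written in this pass are never used again,
                 * skip the depth writes entirely. */
                glDepthMask(GL_FALSE);
                /* Disable pipeline stages whose output is discarded anyway. */
                glDisable(GL_BLEND);
                glDisable(GL_DITHER);
            }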

    • by jrumney ( 197329 ) on Monday July 20, 2015 @09:06AM (#50144417)
      So they have a whitelist to deal with this problem rather than an API call. The cynic in me wonders how much AMD charges to have your game listed on that whitelist.
      • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Monday July 20, 2015 @09:35AM (#50144525) Homepage Journal

        So they have a whitelist to deal with this problem rather than an API call. The cynic in me wonders how much AMD charges to have your game listed on that whitelist.

        It won't work precisely like that. Instead, it'll be the influence of green privilege. Getting whitelisted will be easy for anyone who already spends a lot of money with AMD, and hard for everyone else... or anyone who spends a lot of money with nVidia.

        • by Megol ( 3135005 )

          As usual you are talking out of your ass. There is a lot of evidence of the reverse (Nvidia technologies and optimization support that artificially decrease AMD performance) but not that using AMD tech does the same to Nvidia. I'm not talking about simply missing optimizations here, BTW.

          And we aren't talking about a whitelist, we are talking about drivers adjusting themselves using a coarse grained mechanism. The difference is obvious.

          • How much are companies willing to pay to get into that hot, hot linux desktop gaming market?

            I say this jokingly as a Linux user who realizes the Linux market isn't exactly setting Wall Street's pants on fire.

            • How much are companies willing to pay to get into that hot, hot linux desktop gaming market?

              I say this jokingly as a Linux user who realizes the Linux market isn't exactly setting Wall Street's pants on fire.

              Linux is actually setting Wall Street's pants on fire:
              http://www.forbes.com/sites/qu... [forbes.com]

              There's not much Microsoft in there.

          • by Anonymous Coward

            Nvidia GameWorks most certainly breaks games on AMD hardware on purpose, and developers make money for using it, AND Nvidia sends people to implement it, so of course a dev will use it.

            Nvidia claims they do not pay companies to use GameWorks, and that's true, they don't... but they DO pay to insert that little Nvidia banner in the game's intro, and sometimes in-game ads, like Project CARS and its million Nvidia trackside banners.

            So they look like they are giving GameWorks away for free and devs just use it cause its

            • AMD spends more time fixing Nvidia sabotage than anything else. If Nvidia actually fought fair, they likely wouldn't stand a damn chance. Hell, Nvidia became #1 by cheating benchmarks... how quickly everyone forgets.

              AMD sabotages themselves more than anything with their drivers. It's not new.

      • I doubt that there is an actual price list. At least on the Windows side, releasing driver optimizations (sometimes including overt cheating if the popular benchmarking programs of the day are detected, though that seems to be rarer in recent years) to support popular programs and games is something that both Nvidia and ATI do reasonably routinely in order to improve their products' perceived competitiveness. You don't get a profile unless you have moved enough units to make it worth the effort, or there is
      • Comment removed based on user account deletion
    • In particular, the number of corners that you can cut varies a lot depending on how the game is using the GPU. Dropping all geometry calculations back to 16-bit floating point is completely fine in some cases but would cause horrible artefacts in others. Newer nVidia GPUs will correctly handle subnormal floating point values, but in most games the slowdown when you encounter one is noticeable, while the visual effect of just rounding them to zero is not.
      • The Old New Thing blog has discussed how Windows treats popular old programs differently based simply on the known bugs in those programs. Maybe this could be something similar?
    • by Rockoon ( 1252108 ) on Monday July 20, 2015 @09:38AM (#50144543)

      Speed increases may be sacrificing some reliability or cutting some corners.

      Probably mostly corner cutting.

      Don't need the 16x anisotropic filtering if the texture isn't viewed from an extreme angle, or if the texture hasn't much high-frequency detail...
      Don't need the 16x antialiasing filter on edges of a polygon that connect seamlessly to another polygon, etc.

      These two alone could be huge.
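      Anisotropy, for instance, is requested per texture through a standard GL extension call, so a profile that quietly lowers the level on textures it knows are never viewed edge-on is invisible to the application. A sketch using the EXT_texture_filter_anisotropic extension; the 4.0 fallback value is just an example:

          #include <GL/gl.h>

          #ifndef GL_TEXTURE_MAX_ANISOTROPY_EXT
          #define GL_TEXTURE_MAX_ANISOTROPY_EXT 0x84FE
          #endif

          static void set_anisotropy(GLuint texture, int viewed_at_steep_angles) {
              glBindTexture(GL_TEXTURE_2D, texture);
              /* Full 16x only where it is actually visible; much cheaper otherwise. */
              GLfloat level = viewed_at_steep_angles ? 16.0f : 4.0f;
              glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, level);
          }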

      • Right, and what if hl2 uses small, low-quality textures, and through some driver tunables you could force the renderer to do the same bad thing for another app? Crap rendering, but OMG FPS! Be careful what you measure.

        It would be nice if all the tunables were at least available as environment variables or in /sys so tuners could be happy regardless.

        • It would be nice if all the tunables

          You seem to be under the impression that the things I mentioned are, or could be, "tunables". They are based on specific conditions of a specific rendering scenario that cannot be discerned at runtime without a performance hit, and they are determined by extensive profiling and statistics gathering, not by your imaginary "tunable parameters".

          It seems to me that you are quite far afield of your expertise to not know that determining if an edge of a polygon needs to be anti-aliased is something "tunable" - as if the end u

    • Speed increases may be sacrificing some reliability or cutting some corners.

      that word doesn't mean what you think it means.

    • by AmiMoJo ( 196126 )

      They don't usually reduce graphical quality unless it's absolutely necessary to get a reasonable frame rate. Most of the optimizations involve hand-optimized shaders, or even just hints to the shader compiler built into the driver. Shaders are compiled to a high-level bytecode, a bit like Java, and that is then compiled into GPU-native code, so there are opportunities for optimization.

      There can also be removal of checks that are confirmed not needed for a particular game, tweaks to GPU memory management code, etc.
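      One plausible shape for the hand-optimized-shader mechanism described here: hash the incoming GLSL source and, when it matches a known game shader, compile a hand-tuned replacement instead. The hash function and table below are purely illustrative; no real driver internals are known or shown.

          #include <stddef.h>
          #include <string.h>

          struct shader_override {
              unsigned long source_hash;   /* hash of the game's original GLSL */
              const char   *replacement;   /* hand-tuned GLSL to compile instead */
          };

          /* Table would be populated per application profile. */
          static const struct shader_override overrides[] = {
              /* { 0x1badc0deUL, optimized_water_shader_src }, */
              { 0, NULL }
          };

          static unsigned long hash_source(const char *src) {
              unsigned long h = 5381;                 /* djb2 */
              for (; *src; src++)
                  h = h * 33 + (unsigned char)*src;
              return h;
          }

          static const char *maybe_replace(const char *game_glsl) {
              unsigned long h = hash_source(game_glsl);
              for (size_t i = 0; overrides[i].replacement != NULL; i++)
                  if (overrides[i].source_hash == h)
                      return overrides[i].replacement;
              return game_glsl;   /* unknown shader: compile what the game sent */
          }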

    • The driver has a set of generic operations that it performs when allocating memory and moving that memory around. This can have a dramatic impact on performance, yet the developer has no control over it. Developers often work with the GPU manufacturer's driver department to optimize the driver for their specific use case, which is done on a per-application basis. This is part of the reason that drivers these days are so large.

      This is one thing that is improving as part of the DirectX

  • What if you change it to HL3?
  • Confirmed (Score:5, Informative)

    by Anonymous Coward on Monday July 20, 2015 @09:19AM (#50144469)

    Just tested on the game I develop, Warsow, with the latest version of our Qfusion engine.

    FPS jumped from ~250 to ~300 without a cap in the same location. WTF, AMD?

    • However... (Score:4, Insightful)

      by Junta ( 36770 ) on Monday July 20, 2015 @10:18AM (#50144787)

      Are there any rendering mistakes or quality differences? Are there any issues with stability? Frame rate is not the only metric; it's just the only metric anyone can simply publish.

      • Re:However... (Score:4, Interesting)

        by Anonymous Coward on Monday July 20, 2015 @11:27AM (#50145305)

        I didn't notice any visible difference, or rendering errors.

        Then again, our renderer is pretty ancient technology by many standards, although in some aspects on par with the Doom 3 / Source engines. All of these are originally based on the same-ish code (Quake 1), and in many respects the OpenGL usage is very similar in all three, so I'm not surprised that optimizations carry over. I would expect Xonotic to get some boost too :)

  • by jones_supa ( 887896 ) on Monday July 20, 2015 @09:19AM (#50144471)

    Should driver developers re-evaluate their optimization practices for Linux?

    Not necessarily. For example, replacing game shaders with optimized platform-specific ones can offer a great performance increase with no tradeoffs. The GPU makers know their chip architecture inside out, but game developers usually target a higher-level abstraction such as a shader language. Unless you develop for fixed hardware such as consoles, of course.

    There are really two ways you can react to these kinds of optimizations: "Hey, you're cheating!" or "Cool, thanks for the help!". I personally am fine with them, but I would like to clearly know when specific optimizations are in use, and be able to turn them off when needed. Maybe after application startup the driver could render a popup in the frame buffer, such as "AMD Catalyst(R) optimizations in use", which would fade out after a few seconds.

    • > I personally am fine with them, but I would like to clearly know when specific optimizations are in use, and be able to turn them off when needed.

      That's the entire point. AMD is changing things without your knowledge and not publicly letting you use them for other programs. Imagine writing software and discovering that the filename of the game you're working on changes the performance AND introduces graphics glitches. Somehow the debug copy goes twice as fast. That's sketchy as hell for someone trying to
    • This is a standard practice in the industry and has been for many years. Hand-optimized shaders are usually done in the development phase; the changes that end up in the driver are usually about optimizing memory allocation and management on a per-application basis, rather than a generic solution that makes (often incorrect) assumptions about resource usage, something the application developer knows about but the driver developer does not. This is all changing in the next generation of graphics APIs
  • The Steam store shows 504 pages of Windows games @ 25 per page = ~12,600 titles. OS X shows 173 pages @ 25 per page = ~4,325 titles. Linux + SteamOS shows 99 pages @ 25 per page = ~2,475 titles, but according to steamdb.info, which has actual numbers for this category (though not the other two), there are 1,140 titles that work, with another 499 hinting at support.

    This says nothing of sales numbers. Linux has gotten a big boost for gaming from Valve, but it's still a distant third, and that's only in the PC gaming world; it doesn't account for consoles at all.

    I doubt AMD has the resources to dedicate to shit like this when they're consistently not the leader in anything. My speculation is that the only reason they still exist is to keep Intel/Nvidia out of monopoly court.

    • by Kartu ( 1490911 )

      They are better than most people think. Consider how they got to power both the Sony and Microsoft consoles.
      Or how the 290X beat the Titan while costing far less.

      Sadly, that "oh, but they suck" attitude does hurt a lot. Try to find a notebook with an IPS screen and AMD's Carrizo APU... =[

      • by waspleg ( 316038 )

        I don't think you understood my point. My point is that they're better off spending money making their Windows drivers/profiles better, because that's where they have the most customers, and since they don't have nearly as much money, which you can see for yourself [ycharts.com], they're better off spending it where they get the biggest return. As you pointed out, that likely includes the consoles.

  • "The driver has to apply imperfect heuristics to guess what the game is doing, and the game in turn has to do peculiar things in order to trigger the right heuristics. Again, for the big games somebody sits down and matches the two manually."

    Source: http://www.gamedev.net/topic/666419-what-are-your-opinions-on-dx12vulkanmantle/#entry5215019

    • by Megol ( 3135005 )

      So what do your post title and your quote have in common?
      The quote is correct: the standard mechanism for optimizing the extremely complex graphics driver is heuristic, but there is a coarse-grained mechanism that allows bypassing that, and it is triggered by the executable name in most cases.
      IFF a game that is not individually optimized in that manner has rendering patterns similar to a game that is, renaming can help.

  • by cirby ( 2599 )

    Maybe the system checks program names and then tells the program it's actually running faster, instead of, you know, actually running faster? Do the programs themselves time the rate, or do they just rely on driver calls?

    "This is a really fast driver release!" "How can you tell?" "It says so."

    Or maybe they're doing a "faster without drawing more" trick.

    "Yeah, it's Half-Life 2. Just put in an occasional doubled frame. Nobody can tell the difference, right? They'll just think it's a headcrab effect."

    • Generally the application knows which frames it has drawn successfully and can calculate its own frame rate based on that. But I guess you could cheat external recording software. In the driver, just output 30 fps content with doubled frames and you get fake 60 fps.
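      Why the application's own counter is hard to fool: it counts its own present calls against a monotonic clock, so driver-side frame doubling adds nothing to it; only an external tool counting displayed frames would be tricked. A minimal sketch, with present_frame() standing in for whatever swap call the engine actually makes:

          #include <stdio.h>
          #include <time.h>

          static double now_seconds(void) {
              struct timespec ts;
              clock_gettime(CLOCK_MONOTONIC, &ts);
              return ts.tv_sec + ts.tv_nsec / 1e9;
          }

          void count_own_fps(void (*present_frame)(void)) {
              double window_start = now_seconds();
              int frames = 0;
              for (;;) {
                  present_frame();   /* one frame the app actually rendered */
                  frames++;
                  double t = now_seconds();
                  if (t - window_start >= 1.0) {
                      printf("app-measured FPS: %d\n", frames);
                      frames = 0;
                      window_start = t;
                  }
              }
          }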
      • In the driver, just output 30 fps content with doubled frames and you get fake 60 fps.

        Someone would definitely notice something so blatant as that. You could instead do half-assed frames every other frame, doing a crappy skew instead of a proper render or something. That was the whole idea behind Microsoft Talisman... which failed not for technical reasons, but due to mismanagement. Mostly, the highly questionable decision to involve Cirrus Logic.

  • Some idiot may be doing something funky with the name of the executable, such as taking up a String with a 250-character max length vs. a byte.
  • by Theovon ( 109752 ) on Monday July 20, 2015 @09:57AM (#50144669)

    AMD and Nvidia are constantly dealing with bugs and peculiarities in specific games and apps. I've seen examples where some unexpected or unusual drawing configuration made an Nvidia GPU make a total mess of the screen. The solution, to achieve correctness, was to do something relatively slow. This kind of thing can be caused by hardware bugs. And it can be caused by hardware LIMITATIONS. For instance, say the hardware only has 8 bits of fractional precision and 16 bits of integer precision. It is possible for an app to try to draw something that runs into the limits of those precisions, making two triangles not abut in the way that they should. This is commonly caused by a triangle with a vertex WAY off the screen, so the software has to clip it, but clipping it requires subpixel precision that the hardware can't provide.

    Now, sure, some of these could be cases of "we could fix it properly, but it's just easier to select a slow rendering algorithm to get it right." And yes, if some company paid more, maybe they could get the proper solution sooner. But keep in mind that they're running into release cycle issues here. The driver is DONE, except for this list of 3 apps that don't work right. Do we spend an extra 3 months finding clever solutions? Or do we release right now something that benefits all other applications? The latter is more sensible. Those corner cases can be fixed in the next few releases.

    In general, these problems are caused by applications doing something WEIRD. Not necessarily wrong, but definitely something unexpected that no other app does. And all the corner case apps do different weird things. Tracking it all down and making them ALL work both correctly and fast is HARD.

  • HL2 and CS:GO are meant for different DX versions.

    HL2 was originally DX 8.1 and CS:GO is DX9.

    Switch to HL2.exe = DX 8.1 mode, and you get a nuts, insane framerate increase.

    Rendering capabilities are pre-set by application name in the drivers.

    There's your answer.

  • by tepples ( 727027 ) <tepplesNO@SPAMgmail.com> on Monday July 20, 2015 @10:45AM (#50144975) Homepage Journal

    This isn't even the first time AMD has done this. Back in the Quake III Arena days, renaming Quake3 to Quack3 would change its performance on a Radeon. Slashdot covered the Quack3 case [slashdot.org]

    • by Anonymous Coward

      I remember this! And my first thought was "not again".

      What I also remember clearly is the response from ATI. Not long after the Quack3 fiasco, they asserted that the next version of the drivers would instead make general optimizations based on shaders (like an optimizing compiler), instead of hacks based on process image names.

      As we now can determine, that was a lie.

    • This isn't even the first time AMD has done this. Back in the Quake III Arena days, renaming Quake3 to Quack3 would change its performance on a Radeon. Slashdot covered the Quack3 case [slashdot.org]

      Of course not, it is standard industry practice at this point. NVidia does the same.

  • You're right! They should waste their money hiring someone to test every single game extensively and raise the price of graphics cards to cover it.
  • Stepping beyond the frame rate difference, why do we need more than 60 fps for a single view and 120 fps for stereoscopic?

    Back to AMD: do they provide any other method of hinting at the sort of optimisation an application needs, if it is a question of games vs. non-games, for example?

    • by ChoGGi ( 522069 )

      On my system, if I'm playing at 1920x1080 then it's fine and good for 60 fps, but if I enable Eyefinity and play at 5760x1080 then that 60 fps isn't as likely to be around,
      so faster is always better.

  • by Lisandro ( 799651 ) on Monday July 20, 2015 @01:13PM (#50146205)

    Both AMD [techreport.com] and nVidia [geek.com] have been doing this for years with their Windows drivers. Why? Because apps like 3DMark and games like CS and Quake are used by reviewers to benchmark video cards.

  • it's ridiculous that drivers themselves aren't optimized in general and instead seem to be optimized per game... that's something that should stop.
  • Shows in general that hardware vendors still do not bother much to provide decent drivers, or any at all, for Linux. It all falls on the backs of clever developers who craft drivers that the manufacturer could have done better. As long as desktop computing, and especially gaming, is still mainly a Windows affair, not much will change. Chicken-and-egg problem... As far as laziness and bad decisions in software development go, they are plentiful and all over the place. Such as referencing records in lookup

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...