AMD Overhauls Open-Source Linux Driver
An anonymous reader writes "AMD's open-source developer has posted an incredible set of 165 patches against the Linux kernel that bring several major features to their Linux graphics driver. Namely, the open-source Radeon Linux driver now supports dynamic power management on hardware going back to the Radeon HD 2000 (R600) generation. The inability to re-clock the GPU frequencies and voltages dynamically based upon load has been a major limiting factor for open-source AMD users, leaving laptops running warm and battery life diminished. The patches also provide basic support for the AMD Radeon HD 8000 'Sea Islands' graphics processors in the open-source Linux driver."
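For those who want to poke at the new dynamic power management once these patches land, here is a minimal sketch of reading and switching the radeon driver's power state through sysfs. The power_dpm_state file and its battery/balanced/performance values follow the interface described for the DPM patches; treat the card0 path as an assumption about your particular system.

/* Sketch: query and set the radeon DPM state via sysfs.
 * Assumes the card is card0 and a DPM-enabled kernel (these patches
 * target 3.11, booted with radeon.dpm=1); adjust the path for your
 * system. Writes need root. */
#include <stdio.h>

#define DPM_STATE "/sys/class/drm/card0/device/power_dpm_state"

int main(void)
{
    char state[64] = {0};

    FILE *f = fopen(DPM_STATE, "r");
    if (!f) { perror("open " DPM_STATE); return 1; }
    if (fgets(state, sizeof(state), f))
        printf("current DPM state: %s", state);
    fclose(f);

    /* Valid values are "battery", "balanced", and "performance". */
    f = fopen(DPM_STATE, "w");
    if (!f) { perror("reopen for write (need root?)"); return 1; }
    fputs("battery", f);   /* favour low clocks/voltages on battery */
    fclose(f);
    return 0;
}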
Yay AMD (Score:5, Insightful)
This is a great step in the right direction. Hopefully it's not the last step.
Re:Yay AMD (Score:5, Insightful)
This is a great step in the right direction. Hopefully it's not the last step.
AMD's penurious financials do make me nervous; but their strategic change in favor of *gasp* actually working to integrate support for their products into the kernel development process proper seems to be sincere and ongoing. It's slower moving than one would like; but since they began their course change, they've kept it up.
Re:Yay AMD (Score:4, Interesting)
How could you be nervous about AMD? They're in every single next generation console system.
And what do you think profit margins are for console components?
Re: (Score:1)
profitable for those MAKING the devices (AMD), not so much for those SELLING the devices (Microsoft/Sony)
Re: (Score:2)
profitable for those MAKING the devices (AMD), not so much for those SELLING the devices (Microsoft/Sony)
And, as I said, what do you think those profit margins are?
They sure as heck won't be anywhere near the margins from selling high-end GPUs or CPUs in the PC market.
Re: (Score:2)
Well, but what do you think their expenses are? They already developed the Jaguar cores as well as the GPU (7xxx line), so they wouldn't need to spend much on R&D.
Also note that there was basically no competition: nVidia doesn't have a CPU side, and Intel sucks on the GPU side (actually, even on the CPU side, Jaguar cores do very well from a performance/watt perspective), hence Sony/Microsoft couldn't press AMD on prices too much.
PS3/Xbox 360 sold about 150 million consoles total over the last 7-8 years, so that's 15-20 million additional consoles per year…
Re: (Score:2)
How could you be nervous about AMD? They're in every single next generation console system.
Maybe peek at their financials?
http://finance.yahoo.com/q/ks?s=AMD+Key+Statistics [yahoo.com]
AMD Financials Summary (Score:5, Informative)
$2B in debt, $1B cash, lost $600M last year, sales dropped 30% last year. They have no assets (spun off their manufacturing facilities). If the next gen consoles do not sell well because of casual / tablet gaming and potential Apple TV games, AMD will be bankrupt in one year and shuttering in two. Spending money on open source drivers is a long term investment - it's not going to get them an additional $600M in revenue next year (>2M additional graphics cards or >5M system wins) when PC sales are on the decline.
Re: (Score:1)
You realize that the primary reason they're updating their *nix drivers is probably BECAUSE of tablets and the like? Most tablets and phone devices these days are not running Windows.
Re: (Score:1)
and they just got onto the ARM bandwagon too, so you know they'll be pulling their GPU tech into their ARM platform. So again, there are more immediate returns coming from getting their GPU tech updated on Linux.
Re: (Score:1)
What product did Intel have to compete with AMD's Jaguar + 7xxx APU? Let alone against the CPU alone:
"In its cost and power band, Jaguar is presently without competition. Intel’s current 32nm Saltwell Atom core is outdated, and nothing from ARM is quick enough. It’s no wonder that both Microsoft and Sony elected to use Jaguar as the base for their next-generation console SoCs, there simply isn’t a better option today. As Intel transitions to its 22nm Silvermont architecture however Jaguar will fi
Re: (Score:2)
Not only that, but the HUMA design of both the XB1 and the PS4 means that any games for those platforms ported across to PC should be easily able to take advantage of HUMA on AMD's APUs. This is something Intel and NVidia can't replicate because they don't have the combined powerful CPU/GPU die to do it with.
By taking these contracts, AMD has basically guaranteed that all next-gen PC ports will take advantage of the latest graphics feature which neither of their competitors can use. A cheap AMD APU will be…
Re:AMD Financials Summary (Score:4, Interesting)
$2B in debt, $1B cash, lost $600M last year, sales dropped 30% last year. They have no assets (spun off their manufacturing facilities). If the next gen consoles do not sell well because of casual / tablet gaming and potential Apple TV games, AMD will be bankrupt in one year and shuttering in two. Spending money on open source drivers is a long term investment - it's not going to get them an additional $600M in revenue next year (>2M additional graphics cards or >5M system wins) when PC sales are on the decline.
Right, and within a year the number of GPGPUs sold via their custom APUs inside consoles will outnumber those sales 6:1 to 10:1. They are in the new Wii, PS4 and Xbox. They're expanding their small-to-mid-tier server footprint [beginning to own that space], and with more and more laptops using AMD APUs they will begin to own that space as well. Their partnership with ARM will make them an attractive provider for future Smart TVs and other embedded products not even yet projected out. AMD is going to be in the black very shortly.
Good guys AMD (Score:5, Insightful)
I'm excited about getting the upcoming Kaveri. APUs are the way to go unless you have needs that call for huge CPU or GPU power, and I think AMD is definitely leading the innovation here. It's a nice bonus if I will be able to run Linux with good graphics acceleration as well.
Re:Good guys AMD (Score:5, Interesting)
Personally, I'm excited about HUMA and what it will mean for scientific computing. The second half of this year will be exciting!
HUMA (Score:4, Informative)
Hybrid Unified Memory Access.
Basically, both your CPUs and GPUs have access to the same memory space without needing to 'swap' via apertures or anything else. It's currently intended for the GPU in APU packages, but I believe they've stated one of the next gen GPU platforms (HD9xxx?) is going to support it as well.
Re: (Score:2, Informative)
Minor correction: it's Heterogeneous Uniform Memory Access, in the AMD space, at least.
Also worth mentioning is that it'll be using very fast GDDR5, which means a huge increase in memory bandwidth for the CPU part of the APU, as well.
There may be some soldering of that RAM onto motherboards in the first generation of HUMA parts, but 8GB and >1TFLOPS would do wonders for scientific computing even if it was all soldered into a mass-produced media-centre PC.
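To make the "same memory space" point above concrete, here is a deliberately simplified sketch. The gpu_kernel stub and the two models are illustrative only (stubbed on the CPU), not any real AMD API; the contrast is between staging copies through a separate buffer, as on a discrete card, and handing the GPU the very pointer the CPU uses.

/* Conceptual sketch only: "gpu_kernel" is a CPU stub standing in for
 * GPU work, not a real AMD API. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stub "kernel": double each element. */
static void gpu_kernel(float *buf, size_t n)
{
    for (size_t i = 0; i < n; i++)
        buf[i] *= 2.0f;
}

/* Discrete-GPU style: stage copies through a separate "device" buffer. */
static void classic_model(float *data, size_t n)
{
    float *dev = malloc(n * sizeof *dev);  /* "device" memory          */
    memcpy(dev, data, n * sizeof *dev);    /* copy in through aperture */
    gpu_kernel(dev, n);
    memcpy(data, dev, n * sizeof *dev);    /* copy results back out    */
    free(dev);
}

/* hUMA style: the GPU works on the very pointer the CPU uses. */
static void huma_model(float *data, size_t n)
{
    gpu_kernel(data, n);                   /* no staging copies        */
}

int main(void)
{
    float a[4] = {1, 2, 3, 4}, b[4] = {1, 2, 3, 4};
    classic_model(a, 4);
    huma_model(b, 4);
    printf("%g %g\n", a[3], b[3]);         /* both print 8             */
    return 0;
}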
Re: (Score:1)
HUMA [wikipedia.org]
Still not Stallman-approved. (Score:5, Informative)
Per http://stallman.org/to-4chan.html [stallman.org]:
"Regarding graphics accelerators for PCs, ATI mostly cooperates with the free software movement, while nVidia is totally hostile. ATI has released free drivers.
However, the ATI drivers use nonfree microcode blobs, whereas most of nVidia's products (excepting the most recent ones) work ok with Nouveau, which is entirely free and has no blobs.
Thus, paradoxically, if you want to be free you need to get a not-very-recent nVidia accelerator.
I wish ATI would free this microcode, or put it in ROM, so that we could endorse its products and stop preferring the products of a company that is no friend of ours."
This sort of thing gets discussed quite a bit on 4chan's technolo/g/y board. Also, installing Gentoo.
Re:Still not Stallman-approved. (Score:5, Insightful)
Blobs are definitely not ideal; but I've never really understood the distinction between people who put them in ROM and people who require them to be loaded at initialization time (as long as they aren't assholes about redistribution: if Distro X is legally unable to distribute firmware.bin and I have to go to your site, download the Windows driver, and then chop it open to get firmware.bin, just to get an unaltered copy of your firmware to run with your device, I'm going to be pissed).
Both approaches involve exactly the same binary firmware blob, one just stores it on comparatively expensive, board-space-consuming, flash ROM and one stores it on system mass storage.
Firmware that is open is better than either; but closed firmware that is handled behind the curtain on the card seems no better than closed firmware that is supplied to the card during startup (again, assuming proper redistribution terms and proper driver support for that aspect of initializing the device).
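For concreteness, this is roughly what "supplied to the card during startup" looks like from the driver's side. The request_firmware()/release_firmware() calls are the kernel's real firmware-loading API; the blob name and the mycard_upload() step are hypothetical placeholders for a real driver's specifics.

/* Fragment of a hypothetical driver showing the in-kernel
 * firmware-loading pattern. request_firmware()/release_firmware()
 * are real kernel APIs; "mycard_fw.bin" and mycard_upload() are
 * placeholders. */
#include <linux/firmware.h>
#include <linux/device.h>

/* Placeholder for the device-specific upload step. */
static int mycard_upload(struct device *dev, const u8 *data, size_t size)
{
    dev_info(dev, "would push %zu bytes of microcode to the card here\n",
             size);
    return 0;
}

static int mycard_load_firmware(struct device *dev)
{
    const struct firmware *fw;
    int err;

    /* Looks the blob up under /lib/firmware on the host. */
    err = request_firmware(&fw, "mycard_fw.bin", dev);
    if (err) {
        dev_err(dev, "firmware not found: %d\n", err);
        return err;
    }

    /* Push fw->data (fw->size bytes) into the card's instruction
     * RAM, then let the device run it. */
    err = mycard_upload(dev, fw->data, fw->size);

    release_firmware(fw);
    return err;
}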
Re: (Score:3)
There are a few types of overhead involved in firmware distribution, and making that part of your system software pushes that work toward open source communities in a way they resent. If you look at things like Debian's policy [debian.org], none of these blobs fit their guidelines. That means those firmware blobs go into their non-free repository. That wart is annoying enough that people regularly try to eliminate it altogether [machard.org]. All of that means some of the overhead manufacturers are saving by not having flash on the card ends up pushed onto the open-source community…
Re: (Score:3)
The entire point of firmware being upgradable is that it is... well... upgradable. Not only that, but different versions of firmware may be required for different versions of software. This way it is much easier to ensure compatibility, because the driver has the firmware baked into it.
Re: (Score:2)
Much easier for whom, though? It's certainly easier for manufacturers to move all their firmware issues so the OS has to deal with them. But the cost of doing that work is being pushed toward kernel developers and packagers. Whether the end result is better or worse is complicated, but that's not why people like Stallman complain. What you can't argue with is that it's frustrating for a Linux kernel developer to spend time chasing down a bug that's actually inside the firmware blob, or in the part of the…
Re:Still not Stallman-approved. (Score:5, Informative)
The entire point of firmware being upgradable is that it is... well... upgradable. Not only that, but different versions of firmware may be required for different versions of software. This way it is much easier to ensure compatibility, because the driver has the firmware baked into it.
If it were firmware, I would be in agreement.
The objection to binary blobs that are simply loaded into the device as firmware is sort of short-sighted, in that it punishes vendors that actually plan in a method of upgrading their products with new firmware.

But by and large, that isn't the issue here. Far too many of these blobs are loaded into main memory and run as a process under the operating system, free to do just about anything.

If blobs were ONLY firmware, they could run ONLY on the device, and could be loaded once at installation time. Very few fall into this category. (Some WiFi chips do load this way upon every boot.) Far too many remain running in main memory.
Re: (Score:2)
If blobs were ONLY firmware, they could run ONLY on the device, and could be loaded once at installation time. Very few fall into this category. (Some WiFi chips do load this way upon every boot.)
Even when a firmware blob runs only on the device, I would expect it to be loaded every time the device is reset, particularly for a WiFi chip. If you want the blob to be persistent you must add local flash to the WiFi subsystem, which increases the BOM cost, and at the very low prices in WiFi that is just not acceptable anymore; the chipmaker would sell nothing. So there's no such local storage (except for a minimal bootloader maybe, and that could be in ROM in the chip), and the chip will load its executable code from the host…
Re: (Score:2)
Perhaps. I haven't run the AMD proprietary drivers for a while.
When I did, I seem to recall a large binary always running.
Running the community Radeon drivers now, and I have the radeon driver (900k+) sitting in memory all the time.
Clearly a significant part of the card's work is done in main memory.
Re: (Score:3)
Perhaps. I haven't run the AMD proprietary drivers for a while. When I did, I seem to recall a large binary always running.
The blobs we're talking about here are NOT the AMD proprietary drivers, we're talking about the firmware blobs that the community drivers have to send to the cards at initialization.
Re: (Score:3)
Storing the binary firmware on the user's PC makes it way easier to update.
Re: (Score:1)
That's because you're a poo poo head.
Re:Still not Stallman-approved. (Score:4, Informative)
Yes, this is my opinion exactly. I recently blogged about this exact issue, and why I think the FSF, RMS, Trisquel, etc. all treat it differently - and I don't think it's a good enough reason.
https://systemsaviour.com/2013/06/16/why-i-will-not-back-fsfs-guidelines-for-free-software-distributions/ [systemsaviour.com]
I'll point out for Slashdot readers that, in the case of the radeon driver, it loads microcode into the card - not a huge firmware blob. The FSF just refers to microcode as firmware, so does not distinguish between them. The microcode is between 2K and 31K, depending on the model of the device. If running Debian GNU/Linux with the firmware-linux-nonfree package installed, these microcode files should be located under /lib/firmware/radeon.
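If you're running Debian with firmware-linux-nonfree installed, you can eyeball those blob sizes yourself. A small sketch, assuming the /lib/firmware/radeon layout mentioned above:

/* Sketch: list the radeon microcode files and their sizes, to see the
 * 2K-31K blobs discussed above. Assumes /lib/firmware/radeon exists
 * (Debian's firmware-linux-nonfree layout). */
#include <stdio.h>
#include <dirent.h>
#include <sys/stat.h>

int main(void)
{
    const char *dir = "/lib/firmware/radeon";
    DIR *d = opendir(dir);
    if (!d) { perror(dir); return 1; }

    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        char path[512];
        struct stat st;
        snprintf(path, sizeof(path), "%s/%s", dir, e->d_name);
        if (stat(path, &st) == 0 && S_ISREG(st.st_mode))
            printf("%8lld bytes  %s\n", (long long)st.st_size, e->d_name);
    }
    closedir(d);
    return 0;
}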
Re:Still not Stallman-approved. (Score:5, Insightful)
I don't understand why simply putting the closed source firmware on the card suddenly makes it ok for free software. Same code, just different home.
Re:Still not Stallman-approved. (Score:5, Informative)
I don't understand why simply putting the closed source firmware on the card suddenly makes it ok for free software.
Licensing and distribution.
Anything that's in hardware has already dealt with the issues of licensing and distribution.
Closed source software represents an entirely different beast for free software distribution.
Re: (Score:3)
I don't understand why simply putting the closed source firmware on the card suddenly makes it ok for free software. Same code, just different home.
If that was what was being discussed, it wouldn't be an issue.

If your closed-source firmware actually ran on the card, that would be fine. Load it once on boot, it can only run on the video card's GPUs, and interaction between it and the OS is somewhat more controllable.

But take, for example, the Radeon driver (the so-called open-source one). It takes almost a meg of main memory. The closed-source one takes even more memory. It's running all the time that your system is up.

Clearly it's not just firmware we are talking about here.
Re: (Score:3)
But take, for example, the Radeon driver (the so-called open-source one). It takes almost a meg of main memory. The closed-source one takes even more memory. It's running all the time that your system is up.

Clearly it's not just firmware we are talking about here.
The main memory aspect is taken up by the open source driver code. The firmware blob goes straight to the hardware.
Re:Still not Stallman-approved. (Score:5, Informative)
I don't understand why simply putting the closed source firmware on the card suddenly makes it ok for free software. Same code, just different home.
Back in the days of the Open Graphics Project [wikipedia.org] (since defunct, although Timothy N. Miller [binghamton.edu] is still working in this area and the mailing list [duskglow.com] is still active for those interested in the subject), we had several discussions about the borders between Free software, open firmware, and open hardware.
As I understood the FSF's position at that time, the point is that if the firmware is stored on the host, it can be changed, and frequently is (i.e. firmware updates). Typically, the manufacturer has some sort of assembler/compiler tool to convert firmware written in a slightly higher level language to a binary that is loaded into the hardware, which then contains some simplistic CPU to run it (that's how OGD1 worked anyway). So, the firmware is really just specialised software, and for the whole thing to be Free, you should have access to the complete corresponding source code, plus the tools to compile it, or at least a description of the bitstream format so you can create those. This last part is then an instance of the general rule that for hardware to be Free software-friendly, all its programming interfaces should be completely documented.
If the code is put into ROM, it cannot be changed without physically changing the hardware (e.g. desoldering the chip and putting in another one). At that point, the FSF considers it immutable, and therefore not having the firmware source code doesn't restrict the user's freedom to change the firmware, since they don't have any anyway. The consequences are a bit funny in practice, as you noted, but it is (as always with the FSF) a very consistent position.
We (of the OGP-related Open Hardware Foundation, now also defunct; the whole thing was just a bit too ambitious and too far ahead of its time) argued that since hardware can be changed (i.e. you can desolder and replace that ROM), keeping the design a secret restricts the user's freedom just as much. So, we should have open hardware, which would be completely documented (not just programming interfaces, but the whole design) and can therefore be changed/extended/repaired/parts-reused by the user. The FSF wasn't hostile to that idea, but considered it beyond their scope. Of course, any open hardware would automatically also be Free software-friendly.
I tend to agree that in practice, especially if there are no firmware updates forthcoming but it's just a cost-savings measure, loading the code from the host rather than from a ROM is a marginal issue. Strictly speaking though, I do think that the FSF have a point.
Circuits vs firmware (Score:2)
In practice, that translates into a question of whether the firmware resides on a flash memory or ROM device. If it's on flash, it's alterable and updatable, but typically, the flash would contain firmware that is independent of the system software that resides on the main computer, and just has code that would cause the device to respond to the commands it is given. So whether it's on flash or ROM would make no difference in that sense.
In the real world, when is flash used, and when is ROM used? When…
Re: (Score:2, Insightful)
Per http://stallman.org/to-4chan.html [stallman.org]:
"Regarding graphics accelerators for PCs, ATI mostly cooperates with the free software movement, while nVidia is totally hostile. ATI has released free drivers.
However, the ATI drivers use nonfree microcode blobs, whereas most of nVidia's products (excepting the most recent ones) work ok with Nouveau, which is entirely free and has no blobs.
Thus, paradoxically, if you want to be free you need to get a not-very-recent nVidia accelerator.
I wish ATI would free this microcode, or put it in ROM, so that we could endorse its products and stop preferring the products of a company that is no friend of ours."
This sort of thing gets discussed quite a bit on 4chan's technolo/g/y board. Also, installing Gentoo.
I won't comment on his liberated firmware blob comment, or the stupidity of his suggestion of putting it in ROM (and calling it a circuit), but why does RMS insist on calling the company ATI, when it's been acquired, merged & digested by AMD?
Re:AMD needs to do this 1000% more (Score:4, Insightful)
Nvidia is only worse in some sort of GPL zealot fantasy land. Out in the real world, it's not so bad actually. They provide the support. They just don't provide it in the precise manner that a noisy minority wants.
AMD can start by displacing 6 year old ION kit.
Re:AMD needs to do this 1000% more (Score:4, Insightful)
Then again, AMD's Linux drivers actually work, while nVidia's do not.
Then again, you have that exactly backwards.
Re: (Score:2)
Like I said... when I can replace my ION boxes with the AMD counterpart then you can say that AMD has caught up in the driver support department. Until then, Nvidia detractors are just spewing a lot of hot air.
Although it looks like Intel will beat them to that.
Re: (Score:1)
That's interesting. We use NVIDIA drivers here at work because they work very well, whereas Nouveau tends to be incapable of rendering what we want at the speeds we want.
I guess some random can just say anything and assume it has more merit than someone else's experiences. Given I'm also a random, we're at an impasse here. :)
Re: (Score:2)
I guess some random can just say anything and assume it has more merit than someone else's experiences. Given I'm also a random, we're at an impasse here. :)
In this case, they're an AC and you're not so you get to pull rank ;)
Re: (Score:2)
Then again, AMD's Linux drivers actually work, while nVidia's do not.
Wow, really? Which AMD card and driver provide accelerated 3D and accelerated video playback under Linux with performance comparable to Windows?
Thank God. (Score:4, Interesting)
My laptop ran ridiculously hot on the open-source drivers until I got the closed-source drivers to install properly. Let's hope the fix means default installs of Ubuntu won't melt your igloo.
Re: (Score:2)
I spent hours trying to track down the cause of my laptop being a fusion reactor on my lap. I can't WAIT to see if this works.
Re: (Score:2)
Glad you figured it out. Before I had a chance to, my igloo melted and shorted out my laptop.
Not a bug, a feature (Score:2)
The extra heat is a feature. What do you expect when you buy a Canadian graphics card company?
Enough to switch from Intel for my next laptop? (Score:1)
Re: (Score:2)
In 4 to 5 months I'll be replacing my laptop and I would LOVE to have better graphics performance, but the stability and power saving of my Intel's integrated & merely adequate GPUs will probably be the deciding factor. Of course by then AMD is going to have to compete with Haswell too...
The thing is, on new machines, your chipset will probably still be able to run proprietary drivers.

The open-source drivers for AMD generally apply only to the older chipsets.
Most likely pulled from your butt. (Score:5, Interesting)
NVidia tried that and made the mistake of saying whose IP was the roadblock: Sun's. Sun Microsystems said "There is nothing that they have of ours that we would refuse to have open sourced". NVidia's response was to clam up and let the fanbois repeat the claim forevermore.
Re:Why not open source it period? (Score:4, Interesting)
Maybe there's a boat load of trade secrets in the closed source drivers, but I'd imagine that this is a perfect area for patents to be used against competitors.
You have that backwards. If their drivers are inadvertently violating a patent owned by Joe's Patent Trolls, Inc, then making the drivers open source makes that violation much easier to spot.
Patents are a huge disincentive to releasing open source drivers. Another issue the company I worked for had was hardware bugs, because having to put bizarre workarounds in closed source drivers was no big deal, but a bit embarrassing in open source.
Re: (Score:3)
It would seem to me that most hardware vendors would benefit from open sourcing their main drivers and documenting them lightly so that they could offload maintenance costs for smaller OSes to "the community" while relying on patent law to protect novel inventions.
I'd rather have a manufacturer-supported, in-house, full-feature, high-performance driver than something that is left in the hands of unpaid "community members", with a driver which supports the hardware properly 10 years after the device has been on the market.
Kernel Version? (Score:1)
Re: (Score:2, Informative)
I know it's difficult to click, but it says so in the first sentence of the first linked document. Come on!
--- 8< ---
These are the radeon patches for 3.11. Some of these patches
are huge, so it might be easier to review things here:
http://cgit.freedesktop.org/~agd5f/linux/log/?h=drm-next-3.11-wip
I'll send a formal pull request in the next day or two.
Highlights of this series:
- DPM support (Dynamic Power Management) for r6xx-SI
- Support for CIK (Sea Islands): modesetting, 3D, compute, UVD
- ASPM support for R6xx-SI
Side effect of console design wins? (Score:5, Insightful)
I can't help but wonder if this is related to AMD's recent console design wins, especially PS4. Up until now, there hasn't really been a strong business case for putting a lot of effort into Unix-based video drivers. But since PS4 runs on FreeBSD and uses OpenGL as its API layer, a lot of the effort that AMD put into the drivers there can probably be ported over to the Linux drivers without much trouble. The PS4 and Xbone GPUs both use AMD's standard Graphics Core Next (GCN) architecture.
Re: (Score:1)
That's a story about the dev kit not the PS4.
Re:Side effect of console design wins? (Score:5, Informative)
You don't really know what a development kit is, do you?
A devkit is not an SDK. It's the same hardware and software as the retail product, but with additions/modifications that enable debugging (adding debugging ports, using libraries with debug symbols, etc). They also get the ability to run "unlicensed" software, since you can't go to Sony/Microsoft/Nintendo every time you compile in order to have it certified. And, finally, early devkits may not have the final case/board, since launch titles need to start development well before the case or even motherboard are finished (famously, the early Xbox 360 devkits used Power Mac G5 cases and motherboards).
So if the devkit is running a FreeBSD kernel, the final product will be running a slightly different version of the same kernel.
Re: (Score:2)
Why? Licensing.
Re: (Score:1)
The PS3 supposedly ran a modified FreeBSD as well. So between experience and licensing, it does make some sense.
Re: (Score:1)
News flash...dev kits run about the same stuff as the final release, minus some features that only developers would want.
The idea behind a dev kit is that it is close(ish) to the final release form so that when you actually *do* development on the dev kit, it is not so different from what you end up releasing on.
Re: (Score:2)
There's no reason why the media bar etc. blob wouldn't be FreeBSD-based, though.

The apps can't just run "bare to the metal" the way they did on the original Xbox anyway, when there's always the OS there to jump into.
Sony the company that put BSD/Linux in every home (Score:2)
and made it the year of Linux, say it ain't so...
Re: Side effect of console design wins? (Score:2)
There is no proof of any kind that the PS4 is running FreeBSD or any *nix for that matter. SSYI
Re: (Score:3)
No, the vast majority of games use LibGCM or go bare metal. Next to no one used PSGL on either the PS2 or PS3.
Speed based on heat is a feature? (Score:2)
The inability to re-clock the GPU frequencies and voltages dynamically based upon load has been a major limiting factor for open-source AMD users, leaving laptops running warm and battery life diminished.
A compute rate that varies with temperature would seem to be a bug, rather than a feature. I don't want a GPU that does that. I need repeatable Gazebo simulations.
Re: (Score:3)
I'm not sure if that was sarcasm. If it was, ignore the following.
Then turn off dynamic power/thermal management (e.g. Turbo Boost on Intel processors, I'm sure it has fancy marketing names on various GPUs/etc). You'll get consistent performance, at the expense of maximum possible speed.
Such systems typically have a nominal guaranteed rate, which is all you get if you turn off this feature, keeping the hardware within the acceptable maximum continuous-load power/thermal envelope, assuming that your power/…
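Assuming the sysfs interface from the new radeon DPM patches, pinning the clocks for repeatable runs looks something like this. The power_dpm_force_performance_level file and its auto/low/high values follow the documented DPM interface, though the exact card path depends on your system.

/* Sketch: pin the GPU to fixed low clocks so benchmark/simulation
 * runs are repeatable. Assumes card0 and a DPM-enabled kernel;
 * run as root. */
#include <stdio.h>

#define FORCE_LEVEL \
    "/sys/class/drm/card0/device/power_dpm_force_performance_level"

int main(void)
{
    FILE *f = fopen(FORCE_LEVEL, "w");
    if (!f) { perror("open (need root / DPM-enabled kernel?)"); return 1; }
    /* "auto" lets DPM re-clock; "low" and "high" hold fixed clocks. */
    fputs("low", f);
    fclose(f);
    return 0;
}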
Re: (Score:2)
It is a feature. Too hot -> fans running at full tilt -> more power consumption. Mostly for laptops, and you can turn this off in Catalyst. I wonder why you didn't know this. NVidia's GPUs and even most CPUs do this.
Re: (Score:2)
A compute rate that varies with temperature would seem to be a bug, rather than a feature.
Only in an environment where a stable temperature is a given. Unless you think crashing is a good thing, chips can only get so warm.
Re: (Score:1)
Read again: The inability to re-clock the GPU frequencies and voltages dynamically based upon load has been... The re-clocking (or lack thereof) affects temperature; it is not caused by it.
(Though the other responses are technically accurate, I think they miss the main point of the complaint.)
Re: (Score:1)
I think they're talking about the opposite (a temperature that depends on load), which your CPU has probably been doing for a long, long time.
But you've lost this one anyway; modern Intel processors have Turbo Boost [intel.com], meaning the performance does indeed depend on temperature. I was scared, too, from a worst-case provisioning perspective in an…
Will the older cards/chipsets ever be supported? (Score:1)
It's great that AMD is improving support for a lot of their hardware on Linux. In fact, as a result, my next graphics card purchase will be from AMD.
One thing I'd like to see them address is older hardware... I have an older ATI chipset (RS600) on my laptop. It's pretty much unusable due to some severe bugs. It would be awesome if they opened up enough specifications on older cards to allow those people who still have them to fix buggy drivers, since I don't expect them to go back and add in support on hardware…
Thank You AMD (Score:1)
About time? (Score:3)
So, the announcement 6 years ago that they were fully supporting open source drivers and documentation [archive.org] is finally coming to fruition?
Good, but still a long way to go (Score:4)
It is possible to use the open source driver to have one monitor on the HD3300 IGP and two monitors on the HD7850 without acceleration on any of the monitors. Try the same setup with acceleration and their driver segfaults.

I basically bought an HD7850 because AMD claims there's an open source driver for it. When you try it, you'll find that there is no such thing as a functioning open source driver for this card and cards in this family.

The new kernel patches are probably a step in the right direction, but they won't help with the latest cards. AMD developers have in their wisdom decided to provide no 2D driver at all; instead they rely on 3D for 2D using Mesa and "glamor". glamor is a buggy joke at this point and it will probably take years before that changes. I wish I could point to AMD and say "they've got great open source support" but the truth is that it's crappy at best. Intel is the only alternative if you want working free software drivers as of now.
Re: (Score:3, Insightful)
Does Intel make an HD7850 counterpart? Your comment lacks a lot of perspective.

Full prick mode: Also, I wouldn't say that they claim there's an open source driver for it. It is not like they market it that way with a big sticker on the box. There are a lot of missing features yet for the whole Southern Islands series [x.org] and there are a lot of bugs [freedesktop.org]. This is /., you should know these things.

Nice mode again: It is one thing that your card is a year old, but you should have bought something older or done some more research…
Re: (Score:2, Insightful)
Drivers can also be compiled separately as modules that can be loaded into the kernel (that is, a driver doesn't need to be included in the kernel; it's just a matter of convenience).
Example: The nvidia kernel module can't be distributed with the kernel, so it's not included in the kernel's code at all. When installing the driver, there's a shim that's compiled against your specific kernel that provides an interface between the binary-blob driver that NVidia provides and the kernel.
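As a concrete illustration of "compiled separately as modules": a minimal out-of-tree module. Nothing in it is hardware-specific; it builds against your kernel headers and loads with insmod, independent of the kernel image.

/* Minimal loadable kernel module: drivers need not be built into the
 * kernel image; this compiles separately and loads with insmod. */
#include <linux/module.h>
#include <linux/init.h>

static int __init hello_init(void)
{
    pr_info("hello: module loaded\n");
    return 0;
}

static void __exit hello_exit(void)
{
    pr_info("hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Trivial example of a separately compiled driver module");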
Re: (Score:2)
Why would you have to make kernel changes just to support a graphic driver? No wonder Linux is still just a child's toy.
This is a perfectly valid question.
I bet that if it was in the form "Why would you have to make kernel changes just to support a graphic driver? No wonder Windows is still just a child's toy." (assuming that graphics drivers were done like that in Windows), the comment would be +5 Insightful instead of being modded down by the Slashdot Linux-lover-bots.
So please, tell me why does the Linux kernel need manufacturer-specific modules to support graphics cards? Shouldn't the kernel just include the basic things like the ability to talk to a PCI Express device, and then graphics drivers would be implemented at a higher level?
Re: (Score:2)
That's basically what happens, but today's graphics cards require large amounts of address space and low latency IO, and the kernel module bypasses the kernel userspace stuff that requires excess copying along the way. Also, things like power management support are handled via the kernel, and if KMS is used, the kernel supports a fully accelerated, native resolution system console.
However, the bulk of the 3D driver is in userspace already as Mesa, regardless of video hardware. Nvidia ships a modified binary…
Re: (Score:2)
So please, tell me why does the Linux kernel need manufacturer-specific modules to support graphics cards? Shouldn't the kernel just include the basic things like the ability to talk to a PCI Express device, and then graphics drivers would be implemented at a higher level?
Because Linux is a monolithic kernel instead of a microkernel.
Re: (Score:1)
Why do you need specific wavelengths of light to see? Shouldn't your eyeballs support anything, and your brain sort it out?
Why do you need road surfaces of a particular firmness to drive? Shouldn't your tires just drive on any sort of atoms, regardless of…