Intel Supports OpenGL ES 3.0 On Linux Before Windows
An anonymous reader writes "The Khronos Group has published the first products that are officially conformant to OpenGL ES 3.0. On that list are Intel's Ivy Bridge processors with integrated graphics, which support OpenGL ES 3.0 through the open-source Mesa driver on Linux. This is the best timing yet for Intel's open-source team in supporting a new OpenGL standard: the standard is just six months old, whereas it took years for them to support OpenGL ES 2.0. There's also no conformant OpenGL ES 3.0 Intel driver for Windows yet. Intel also had a faster turn-around time than NVIDIA and AMD; the only other hardware on the list is from Qualcomm and PowerVR. OpenGL ES 3.0 works with Intel Ivy Bridge when using the Linux 3.6 kernel and the soon-to-be-released Mesa 9.1."
Back in August, Phoronix ran a rundown of what OpenGL ES 3.0 brings.
ES is the key word. (Score:5, Insightful)
OpenGL ES is a cut-down version of OpenGL aimed at mobile and embedded devices. Windows has never supported any version of it, and probably won't anytime soon.
So to see Linux get it "first" is completely unsurprising.
It's like saying Linux supported the EXT3 filesystem before Windows. So?
Re:ES is the key word. (Score:5, Funny)
Re: (Score:1)
It uses Direct3D. Also, the story was referring to desktop Windows.
Re: (Score:2)
Re: (Score:2, Informative)
Re: (Score:2)
Windows Embedded, just like Linux Embedded, permits you to select how much of the operating system you want to include.
Before Windows Embedded, there was Windows CE, the less said about which the better.
Your lack of Linux understanding is astounding (Score:2)
On modern embedded systems the Linux kernel is the same. In the old days, or in cases where Linux is used on processors without VMM support, the kernel might be substantially different. Today most embedded systems use an x86 or ARM architecture. They don't use a "stripped down" kernel. They use the same kernel and configure it at build time to include the features they want. The same is true on the desktop. The only difference that you may be mistaken
Re: (Score:2)
Re: (Score:2)
Again, you are simply wrong. Linux doesn't run a "stripped down" kernel. Just as with a desktop kernel, modules are either compiled into the kernel at build time or loaded as needed at runtime. There is no difference. In an embedded system you know which modules are needed and which aren't, so you build them into the kernel. There is no wasted storage space, since there is no need to have modules lying around on disk (including FLASH) jus
Re: (Score:1)
Except that ES 2.0 (2.X? 2.0 was directly followed by 3.0) supports very few features, even compared to various desktop GL 2.x implementations. Want to render to a depth buffer (for shadow mapping)? That's not supported by the core spec, you need an extension. Want multiple render targets in your FBO? Same thing. Even the GLSL is limited - officially, the fragment shader only supports loops with a compile-time fixed iteration count (some implementations relax this slightly, though). Not to mention that the
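For reference, a minimal sketch of how ES 2.0 code typically probes for one of those extensions at runtime before relying on it (the helper name has_extension is made up; GL_OES_depth_texture is the depth-texture extension referred to above):
    /* Sketch: runtime check for an ES 2.0 extension, assuming a current GL
       context. Note strstr can match prefixes of longer extension names;
       a robust check would tokenize the string. */
    #include <GLES2/gl2.h>
    #include <string.h>

    int has_extension(const char *name)
    {
        const char *ext = (const char *)glGetString(GL_EXTENSIONS);
        return ext != NULL && strstr(ext, name) != NULL;
    }

    /* Usage: if (has_extension("GL_OES_depth_texture")) { set up shadow maps } */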
Re: (Score:3, Informative)
The only things that OpenGL provides that ES doesn't are the big, ugly, slow things which are useful for certain kinds of graphic design and industrial apps, but are completely useless for high-performance games. You're really not missing much, and in general if you're using things which are not provided by OpenGL ES to write apps where the real-time user experience counts, you are doing it wrong.
Re:Why? (Score:4, Informative)
I went from OpenGL 1.x over to OpenGL ES, so I don't know most of what modern OpenGL can do. But one glaring weakness is that OpenGL ES doesn't support drawing quads, only triangles. Yeah, the GPU processes a quad as two triangles internally, but if the API supports quads, there's less vertex data to generate and pass to the GPU. You can somewhat make up for it by using glDrawElements, which indexes into the vertex list, but in a lot of cases (especially for 2D scenes), it's still less efficient than if you had quads.
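A minimal sketch of what that looks like in ES 2.0, drawing one quad as two indexed triangles (a current context and a bound shader program are assumed; a_position is an illustrative attribute location obtained from glGetAttribLocation):
    #include <GLES2/gl2.h>

    /* Four corners of the quad, two floats (x, y) per vertex. */
    static const GLfloat quad_verts[] = {
        -1.0f, -1.0f,   /* 0: bottom-left  */
         1.0f, -1.0f,   /* 1: bottom-right */
        -1.0f,  1.0f,   /* 2: top-left     */
         1.0f,  1.0f,   /* 3: top-right    */
    };

    /* Six indices describe the two triangles; two vertices are shared. */
    static const GLushort quad_indices[] = { 0, 1, 2, 2, 1, 3 };

    void draw_quad(GLint a_position)
    {
        glVertexAttribPointer((GLuint)a_position, 2, GL_FLOAT, GL_FALSE, 0, quad_verts);
        glEnableVertexAttribArray((GLuint)a_position);
        glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, quad_indices);
    }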
Re: (Score:2)
Re: (Score:3)
Re: (Score:1)
He explained it in the comment: "there's less vertex data to generate and pass to the GPU"
This is false. If you're drawing a quad, you pass 4 vertices. If you draw 2 triangles forming a quad, you also pass 4 vertices (using a triangle strip and index buffer). The index buffer is not updated every frame, just once.
Re: (Score:2)
no you draw 6 vertices. the triangles are independently rendered.
Re:Why? (Score:4, Informative)
In my experience that will show an artifact, like an odd line between triangles where there shouldn't be one. It's been a while since I've worked in straight GL, but you should reuse the vert to prevent that. Even if the verts are in the exact same position, it won't matter.
Re: (Score:2)
no you draw 6 vertices.
You don't draw vertices at all, and if you're using triangle strips you only need 4 vertices to create a quad out of 2 triangles.
Re: (Score:2, Informative)
Depends on the rendering method.
In one comment you manage to demonstrate that you've never worked on an SGI machine before.
Re: (Score:1)
What SGI machine supports modern OpenGL with vertex shaders but no vertex caching? If you don't need vertex shaders, then you don't need to do things the newer way with later versions of OpenGL anyway. The drivers will just translate everything you do.
And SGI machines have split quads up into triangles for a long time now. It is faster than trying to check for coplanarity, and better than dealing with the artifacts that some pure quad rendering methods produce if the points are not coplanar (or even some dep
Re: (Score:2)
The index buffer is not updated every frame, just once.
That's not always true. Sometimes it's more efficient to just stick the vertex data directly into the vertex buffer as your frame is generated, then do your draw order sorting at the end by rewriting the index buffer.
Re: (Score:3)
Triangle strips require everything in one draw call to be connected. If you want to draw N quads, you have to make N draw calls, passing 4 vertices each time. There's a significant amount of overhead involved in a draw call, so this is slow. With quad support, to draw N quads you can just make a big array of 4N vertices and process it all in one draw call.
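As a counterpoint, here is a rough sketch of the usual workaround: a static index buffer built once lets N independent quads (4 vertices each) be drawn in a single glDrawElements call with GL_TRIANGLES, so no per-quad draw calls are needed (build_quad_indices is an illustrative helper name; GLushort indices cap this at 16384 quads):
    #include <GLES2/gl2.h>
    #include <stdlib.h>

    /* Build 6 indices per quad: two triangles covering vertices 4q..4q+3. */
    GLushort *build_quad_indices(size_t quad_count)
    {
        GLushort *idx = malloc(quad_count * 6 * sizeof *idx);
        if (idx == NULL)
            return NULL;
        for (size_t q = 0; q < quad_count; q++) {
            GLushort base = (GLushort)(q * 4);
            idx[q * 6 + 0] = base + 0;
            idx[q * 6 + 1] = base + 1;
            idx[q * 6 + 2] = base + 2;
            idx[q * 6 + 3] = base + 2;
            idx[q * 6 + 4] = base + 1;
            idx[q * 6 + 5] = base + 3;
        }
        return idx;  /* reuse every frame; only the vertex data changes */
    }

    /* With vertex attributes already set up:
       glDrawElements(GL_TRIANGLES, quad_count * 6, GL_UNSIGNED_SHORT, idx); */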
Re: (Score:1)
Re:Why? (Score:5, Insightful)
On the other hand, if you're not the one writing the apps, it can be infuriating to use a system that supports only OpenGL ES. Last time I tried to use Ubuntu on a system with only OpenGL ES support, I discovered that OpenGL ES basically meant "no graphics acceleration", because nothing in the repository supported it; everything wanted OpenGL.
That's probably changed since then (it was a few years ago), but it was pretty frustrating at the time, especially since the GPU itself was rated for full OpenGL, it was only that PowerVR charged extra for that driver and TI didn't want to license it.
Re: (Score:2)
Re: (Score:3)
The only things that OpenGL provides that ES doesn't are the big, ugly, slow things which are useful for certain kinds of graphic design and industrial apps, but are completely useless for high-performance games.
Like geometry shaders?
Re: (Score:2)
You can do things like augmented reality on a smartphone. Use the built-in camera to take a live video stream of a particular location, the MEMS gyroscope and tilt sensors to determine the orientation of the system and GPS to determine the latitude and longitude. Combine this information together and render 3D information on top of this view. Maybe it's a terrain map, geological layers, the direction to the nearest public bar, train station, police station or A&E.
http://www.youtube.com/watch?v=gWrDaYP5w [youtube.com]
Not too surprising... (Score:4, Interesting)
Re: (Score:2)
Re: (Score:2)
Well, it depends on what the average consumer needs from their PC. If it is not gaming (for which a consumer would buy a discrete card anyway), most consumers need only some graphics for web surfing and the like. With the built-in graphics of Ivy Bridge, there is enough GPU power for the average consumer. Why would this average consumer need Direct3D for YouTube?
To say that they 'need' it would be a gross overstatement; but if they are doing their casual youtubing on a relatively recent wintel, they'll be using it anyway [msdn.com]...
Re: (Score:1)
Why would this average consumer need Direct3D for YouTube?
Most consumers need DXVA - i.e. DirectX Video Acceleration. On a netbook, DXVA would mean that you could play YouTube videos with low CPU usage. That's particularly important on a netbook.
On my 1015PX - which is the second netbook I've bought so Intel have had two chances to get it right - it still doesn't work.
http://www.notebookcheck.net/Intel-Graphics-Media-Accelerator-3150.23264.0.html [notebookcheck.net]
According to Intel, the GMA 3150 can help the CPU decode MPEG2 videos. The DXVAChecker shows hooks for MPEG2 (VLD, MoComp, A, and C) up to 1920x1080. Therefore, the performance of the N450 and N470 with GMA 3150 is currently not sufficient to watch H.264 encoded HD videos with a higher resolution than 720p. HD flash videos (e.g. from youtube) are also not running fluently on the Atom CPUs.
It supports MPEG2. I don't think I've ever played an MPEG2 video on this machine and it could probably decode them fine
Re: (Score:1)
You use OpenGL ES on desktop Windows for what exactly?
Re: (Score:3)
Re: (Score:1)
You use a wrapper, not ES directly.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
The Android SDK VM is slow enough at the best of times and the OpenGL software emulation is abysmally slow. It's real
Re: (Score:1)
Re: (Score:1)
I was asking for what they specifically used it for. Not what someone could theoretically do. Also, ES is only supported through translation wrappers or emulators on Windows anyway.
Re: (Score:1)
Re: (Score:1)
Or one could just use the EGL or WGL wrappers for AMD or NVIDIA GPUs respectively. Wouldn't make this submission's title any less stupid.
Very very poor article (Score:5, Informative)
On Windows, the GPU is driven by either DirectX or OpenGL. Native OpenGL ES drivers for Windows are ONLY needed for cross-platform development where applications destined for mobile devices are built and tested on Windows first.
Now, this being so, the usual way to offer ES on the desktop is via EMULATION LAYERS that take ES calls and pass them on to the full blown OpenGL driver. So long as full OpenGL is a superset of ES (which is mostly the case), this method works fine.
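For illustration only (not any particular emulation library): the idea is that the layer forwards ES entry points which old desktop GL lacks to their nearest desktop equivalents. The es_ prefix below is made up; a real shim would export the actual ES names.
    #include <GL/gl.h>

    /* ES 2.0 uses float variants of these calls; classic desktop GL only
       has the double versions, so the wrapper just converts and forwards. */
    void es_glClearDepthf(GLfloat depth)
    {
        glClearDepth((GLclampd)depth);
    }

    void es_glDepthRangef(GLfloat nearVal, GLfloat farVal)
    {
        glDepthRange((GLclampd)nearVal, (GLclampd)farVal);
    }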
The situation is different on Linux. Why? Because traditionally, Linux has had terrible graphics drivers from AMD, Nvidia AND Intel. Full blown OpenGL contains tons of utterly useless garbage, and supporting all of it is more work than it is worth. OpenGL ES is a chance to start over. OpenGL ES 2.0 is already good enough for ports of most AAA games (with a few rendering options turned off). OpenGL ES 3.0 will be an almost perfect alternative to DirectX and full blown OpenGL.
OpenGL ES 2.0/3.0 is getting first class driver support on Linux class systems because of Android and iOS. OpenGL ES 3.0 will be the future standard GPU API for the vast majority of computers manufactured. However, on Windows, there is no reason to expect DirectX and full blown OpenGL to be displaced. As I've said, OpenGL ES apps can easily be ported to systems with decent OpenGL drivers.
Intel is focusing on ES because, frankly, its drivers and GPU hardware have been terrible. It is their ONLY chance to start over and attempt to gain traction in the marketplace. On the Windows desktop, Intel is about to be wiped out by the new class of AMD Fusion (CPU and GPU) parts that will power the new consoles. AMD is light-years ahead of Intel with integrated graphics, GPU driver support on Windows, and high-speed memory buses with uniform memory addressing for fused CPU+GPU devices.
Inside Intel, senior management have convinced themselves (falsely) that they can compete with ARM in low-power mobile devices. This is despite the fact that 'Ivy Bridge' (their first FinFET device) was a disaster as an ultra-low-power architecture, and their coming design, Haswell, needs a die 5-10 times the size of its ARM equivalent. The Intel tax alone ensures that Intel could never win in this market. Worse again is the fact that Intel needs massive margins per CPU simply to keep the company going.
PS: Intel's products are so stinky, Apple is about to drop Intel altogether, and Microsoft's new tablet, Surface Pro 2, is going to use an AMD Fusion part.
ABSOLUTELY FALSE. (Score:1)
OpenGL, unlike DirectX pre-11, doesn't lock your structures to a thread. Any thread in the same process can access OpenGL data, so OpenGL does support multithreading.
DX forbade multithreading until 11. OpenGL never did.
Re: (Score:1)
lets the GPU drivers to read concurrently
There is currently no such thing.
and about removes context switching
Again, there is no such thing. The command buffer of the GPU is a shared resource that can only serve one logical core at a time.
Ohh, plenty of research says we need to thread this stuff.
What should we be "threading"? Obviously not context access, which is a gimmick. It would make sense for software rendering, but not for shared access to a single resource.
Please show us this research.
Re: (Score:2)
Your wait is over! (and has been for a long time)
https://www.opengl.org/wiki/OpenGL_and_multithreading [opengl.org]
Re: (Score:3)
Sorry, but you didn't get very far into that:
wglShareLists causes a few things to be shared such as textures, display lists, VBO/IBO, shaders. You should call wglShareLists as soon as possible, meaning before you create any resources. You can then make GL rendering context 1 current in thread1 and make GL rendering context 2 current in thread2 at the same time. If you upload a texture from thread2, then it will be available in thread1.
Are you wanting to perform draw calls at the same time? That should nev
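A rough sketch of the shared-context pattern the wiki is describing, on Windows/WGL (error handling omitted; hdc is assumed to be a device context with a pixel format already set, and sharing must be established before either context creates resources):
    #include <windows.h>
    #include <GL/gl.h>

    HGLRC make_shared_contexts(HDC hdc, HGLRC *worker_out)
    {
        HGLRC render_ctx = wglCreateContext(hdc);   /* current in the render thread */
        HGLRC worker_ctx = wglCreateContext(hdc);   /* current in an upload thread  */
        wglShareLists(render_ctx, worker_ctx);      /* share textures, VBOs, shaders */
        *worker_out = worker_ctx;
        return render_ctx;
    }

    /* In the upload thread:
           wglMakeCurrent(hdc, worker_ctx);
           glBindTexture(GL_TEXTURE_2D, tex);
           glTexImage2D(...);
       The texture then becomes visible to the render thread's context too. */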
Re: (Score:1)
Since there is only one interface to the GPU (and thus only one command pipeline being used to the best of its ability for each context), multi-threaded access to OpenGL contexts only serves to slow rendering down.
Re:Very very poor article (Score:5, Insightful)
Since you don't use Linux, or don't know how to configure it properly, you should refrain from speaking as though you do. NVIDIA and Intel have great Linux drivers. I cannot speak for AMD, since I haven't used them in years, but you seem to confuse the open-source NVIDIA driver (nouveau) with the proprietary drivers, which work great and allow full use of the GPU through CUDA. Intel's open-source driver is also quite good.
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
I wouldn't use Nvidia anyway, but if I did, I sure wouldn't use the binary pile of shit.
Quality goes kinda like this:
fglrx < Nvidia < Nouveau < r300g/r600g < i965
Re: (Score:2)
I wouldn't use Nvidia anyway, but if I did, I sure wouldn't use the binary pile of shit.
Unless you wanted 3D acceleration that worked. Or XVBA that works. Or, basically, anything else that works beyond the framebuffer and basic OpenGL.
Re: (Score:2)
My Intel 3D acceleration has recently been working great at playing some Steam games, actually. And so did my old Intel chip (well, Wine and ID, not Steam at that time), and so did my r300.
>XVBA
Eh, that would be nice, but even my old system played 1080p h.264 mkv files just fine, so I am not worried.
>framebuffer
the framebuffer is not a function of the Mesa driver...
Re: (Score:2)
Re: (Score:2)
ABI breakage is all you need to know about the blobs.
And I never said I hadn't used it in the past, because I did, though it didn't take me long to go buy a Radeon to replace it (this was pre-Nouveau).
Re: (Score:2)
Having a stable ABI would BE the problem.
Re: (Score:2)
Microsoft going AMD Fusion would be surprising, since the AMD Fusion chips that can compete with Haswell from a power-usage standpoint make the Atom look high-performance by comparison.
Re: (Score:2)
Re: (Score:2)
The PS4's use of an APU is an unsubstantiated rumour at this point, the Wii U definitely doesn't use an AMD CPU at all (it's a PowerPC, for Pete's sake), and the Xbox 720 is rumoured to have a GPU that's more than double the size of the biggest one they're shipping today (as far as I can tell), indicating a more traditional CPU/GPU architecture...
Re: (Score:2)
Sorry, I should clarify, the GPU is more than double the size of the GPU in any APU.
Re: (Score:2)
Re: (Score:2)
I find it highly questionable that console manufacturers, who are normally incredibly sensitive to die size and unnecessary complexity, would ship consoles with both an iGPU and a dGPU, requiring questionable multi-GPU techniques (which tend to add latency and microstutter), when they could simply ship a pure CPU with a more powerful GPU. Considering Intel's edge in manufacturing and IPC, there seems to be little advantage to using an AMD APU if you're going to rely mainly on an external GPU anyhow.
Re: (Score:2)
Re: (Score:1)
On Windows, the GPU is driven by either DirectX or OpenGL. Native OpenGL ES drivers for Windows are ONLY needed for cross-platform development where applications destined for mobile devices are built and tested on Windows first.
That's a really good reason to have native OpenGL ES drivers, IMHO. But why would you create an app on Windows and release it on any platform except Windows?
Dream on (Score:2)
On the Windows desktop, Intel is about to be wiped out by the new class of AMD fusion (CPU and GPU) parts that will power the new consoles. AMD is light-years ahead of Intel with integrated graphics, GPU driver support on Windows, and high speed memory buses with uniform memory addressing for fused CPU+GPU devices.
Yet despite their allegedly superior technology, last year was an unmitigated disaster: a steady decline from Q1 to Q4, losing over 30% in revenue, and where they had a gross margin of 46% in Q4 2011, in Q4 2012 it was down to 15%. They're cutting R&D: in 2011 they spent 1453 million, in 2012 1354 million, and if they spend the whole of 2013 at Q4 2012 levels, then 1252 million. Yes, hopefully the PS4/Xbox 720 will give AMD a much-needed cash infusion, but their technology is not at all selling, so maybe t
Re: (Score:2)
Sales of personal computers are not doing well due to economic conditions and Intel has a tight grip on Dell etc, as well as closing the price gap that made AMD look like vastly better value for money in earlier years. It doesn't mean they are doomed.
Excuse me? Where did you get that from? There's a 32-core Opteron coming out that
Re: (Score:2)
While Intel may have a tough time battling ARM on the low power front, AMD is totally lost.
Intel has already proven they can't battle ARM on the low power front. Even when they made ARM processors themselves they were higher-power than everyone else's. Literally. But your point about AMD power consumption is well-taken. Not since Geode have they managed to be impressive in that department.
And???? (Score:1)
Re: (Score:2)
No, this is about OpenGL ES, which is basically irrelevant on desktop Windows.
Re: (Score:2)
And it's basically irrelevant on x86/x64 Linux too. AFAICT, when software supports ES it generally has a compile-time switch between regular OpenGL and OpenGL ES. Pretty much all GPUs seen in x86 systems support regular OpenGL, while only a subset of them support ES, so the sane thing for a distro to do is to use regular OpenGL for the x86/x64 builds of their software and only build for ES on architectures where ES-only GPUs are common.
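Something like the following is the kind of compile-time switch being described (USE_GLES2 is a made-up flag; real projects pick their own macros and usually pair the ES path with EGL):
    #ifdef USE_GLES2
    #  include <GLES2/gl2.h>
    #  include <EGL/egl.h>
    #else
    #  include <GL/gl.h>
    #endif

    /* Calls that exist in both APIs compile unchanged either way. */
    void clear_screen(void)
    {
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
    }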
Re: (Score:1)
Except that OpenGL ES has zero relevance on desktop Windows. No one uses it. Kinda blows your whole theory out of the water.