
Experimental Virtual Graphics Port Support For Linux

Posted by Unknown Lamer
from the it's-like-we-live-in-the-future dept.
With his first accepted submission, billakay writes "A recently open-sourced experimental Linux infrastructure created by Bell Labs researchers allows 3D rendering to be performed on a GPU and displayed on other devices, including DisplayLink dongles. The system accomplishes this by essentially creating 'Virtual CRTCs', or virtual display output controllers, and allowing arbitrary devices to appear as extra ports on a graphics card." The code and instructions are at GitHub. This may also be the beginning of good news for people with MUX-less dual-GPU laptops that are currently unsupported.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Re:Video Streams? (Score:1, Informative)

    by Anonymous Coward on Tuesday November 08, 2011 @12:40AM (#37982310)

    Let's hope it doesn't "work" like PulseAudio.

  • by AHuxley (892839) on Tuesday November 08, 2011 @12:47AM (#37982350) Homepage Journal
    From the read me at https://github.com/ihadzic/vcrtcm-doc/blob/master/HOWTO.txt [github.com] :
    "In a nutshell, a GPU driver can create (almost) arbitrary number of virtual CRTCs and register them with the Direct Rendering Manager (DRM) module. These virtual CRTCs can then be attached to devices (real hardware or software modules emulating devices) that are external to the GPU. These external devices become display units for the frame buffer associated with the attached virtual CRTC. It is also possible to attach external devices to real (physical) CRTC and allow the pixels to be displayed on both the video connector of the GPU and the external device."
  • Re:Video Streams? (Score:4, Informative)

    by Gaygirlie (1657131) <gaygirlieNO@SPAMhotmail.com> on Tuesday November 08, 2011 @01:08AM (#37982432) Homepage

    Does anyone know if this would provide a performance boost over something like VNC for similar tasks?

    The slowest part of VNC and similar tools is the actual transmission of image data over the network, and this is obviously not a new, fancy image compression algorithm or anything like that. So no: it might require a teeny tiny amount less CPU time on the VNC server, but on the client end it'll have absolutely no effect.

  • Aureal3D (Score:4, Informative)

    by Chirs (87576) on Tuesday November 08, 2011 @01:18AM (#37982474)

    That's basically what the old Aureal technology did a decade ago--took the 3D scene data and passed it to the audio card for processing. It was awesome--Half-Life with four speakers was eerily realistic.

  • PA works (Score:2, Informative)

    by Anonymous Coward on Tuesday November 08, 2011 @03:59AM (#37983000)

    PA works just fine as long as whoever sets it up more or less knows what he's doing. Ubuntu and most user-friendly distros had packagers who didn't, hence the massive problems. Of course, there are other real problems, like Skype breakage, which mostly comes from Skype using ALSA in an arguably incorrect way; when combined with PA, it shows why directly accessing guesstimated hw: devices is a bad idea. But the things people almost always complain about were caused by inept Ubuntu packaging, not real problems with PA.

  • In the Kernel please (Score:5, Informative)

    by sgt scrub (869860) <saintiumNO@SPAMyahoo.com> on Tuesday November 08, 2011 @08:25AM (#37984046)

    David Airlie's HotPlug video work is really cool. I'm not surprised something bigger is coming out of it. What I really like are Ilija's thoughts on putting it in the kernel so that support extends beyond X. Below is from the DRI-Devel thread. http://lists.freedesktop.org/archives/dri-devel/2011-November/015985.html [freedesktop.org]

    On Thu, 3 Nov 2011, David Airlie wrote:

    >
    > Well, the current plan I had for this was to do it in userspace; I don't think the kernel
    > has any business doing it. I think for the simple USB case it's fine, but it will fall over
    > when you get to the non-trivial cases where some sort of acceleration is required to move
    > pixels around. But in saying that, it's good you've done something, and I'll try and spend
    > some time reviewing it.
    >

    The reason I opted for doing this in kernel is that I wanted to confine
    all the changes to a relatively small set of modules. At first this was a
    pragmatic approach, because I live out of the mainstream development tree
    and I didn't want to turn my life into an eternal
    merging/conflict-resolution activity.

    However, a more fundamental reason for it is that I didn't want to be tied
    to X. I deal with some userland applications (that unfortunately I can't
    provide much detail of .... yet) that live directly on the top of libdrm.

    So I set myself a goal of "full application transparency". Whatever is
    thrown at me, I wanted to be able to handle without having to touch any
    piece of application or library that the application relies on.

    I think I have achieved this goal and really everything I tried just
    worked out of the box (with an exception of two bug fixes to ATI DDX
    and Xorg, that are bugs with or without my work).

    -- Ilija
