Debian Sid Moves to X.Org
debiansid writes "Yes, Debian sid finally has X.Org. The changelogs suggest that some work has been taken from the Ubuntu packages of X.Org. Here is an article that gives details on how to migrate to X.Org on sid. This article, by the way, has been posted from an X.Org-based X Window System, and it really IS much faster than XFree86."
Oh really? (Score:3, Interesting)
Last I checked, the only difference between the two was the license and a couple of new drivers. Certainly nothing to explain "much faster" performance. Perhaps you could explain to us in a little more detail how yours is "much faster"? Does it have anything to do with the fact that you are using it on a newer and more powerful machine?
*blink* *blink* (Score:3, Interesting)
Yesterday, I was having headaches updating something because Debian was again in motion and not all libjack packages had been recompiled to 0.100 yet. Among other things, libsdl1.2-dev was somehow suffering from this. I wanted to upgrade that package, but it depended on something called libglu1-xorg-dev. At which point I got worried...
apt-cache search shows "xserver-xorg".
My first reaction was along the lines of "Well, as they might say, the End is Nigh" and the second thought was "wonder if anyone has a migration guide?"
Thanks for answering the second bit; I was already wondering why Slashdot hadn't noted this. I mean, I guess I'm getting old if I find out the cool stuff before it gets posted =)
Comparisons? (Score:3, Interesting)
nvidia drivers? (Score:2, Interesting)
So... has anyone yet tried how well this works with the NVIDIA drivers (specifically, using Debian's own nvidia packages - nvidia-glx and nvidia-kernel-source through make-kpkg)?
Anyone tried yet? How's things?
Applications can be broken for all I care, but I need my OpenGL =)
Utter crap (Score:3, Interesting)
As for the special effects: wrong, wrong, wrong. OS X shows how these effects can be useful. Also, the transition to a GL desktop will most likely be implemented in a new version of X.Org that merges in Keith Packard's work. A GL desktop actually helps the CPU by taking the task of drawing stuff away from it and having the GPU do it, which is the logical thing to do.
But I know, anything other than crude blitting is l4m3; hard-core spartan X11 is l33t. Yeah....
Under the hood ... (Score:5, Interesting)
With all the good work on transparency and nice effects, I'm still missing one big under-the-hood change: use something like DRM/DRI for all 2D graphics too! (Similar to DirectFB, Windows, Mac OS X, etc.)
Currently there are hundreds of context switches between the X server and your applications just to draw things, and this really slows things down. :-( Windows doesn't have that (since w2k anyway), and it improved Windows' graphics performance quite a bit. Mac OS X has Quartz Extreme 2D now, and it improved their performance too.
I think these under-the-hood optimizations should be done before more fancy effects are added that only make the whole thing slower!
Re:Under the hood ... (Score:2, Interesting)
Re:gentoo leads (Score:3, Interesting)
Very funny. Actually, X.Org recompiles aren't too bad (and yes, Gentoo does use it by default, as does Mandrake Linux or whatever it's called these days). The real killer is stuff like KDE - multi-day compile times, anyone?
Is X.org in some way tied into nvidia lockups? (Score:4, Interesting)
However, a lot of people seem to have this dreaded X lockup with the NVIDIA binary drivers, and just about all of them were using X.Org. It can be either a complete freeze, or the pointer still moves but nothing is responsive. Usually you can still kill X, but not always. This also happened to my brother, who was frustrated with Mandrake and packages, so I recommended Ubuntu to him. I went over to his house, installed it, and everything seemed fine. Then he had a lockup an hour in. Then another. The weird thing is, it doesn't usually happen while playing, say, an OpenGL game, but usually on the desktop just by moving the pointer quickly.
He never had these issues with Mandrake 10. I installed various versions of the NVIDIA binary, including the one he used to use with Mandrake, but all the same. I looked at the specs of Mandrake 10 (kernel 2.6.3, XFree86). I'm not sure if it's a kernel issue, XFree86, or some other thing (ACPI or APM?).
The logs give an error (I believe an NVRM Xid error), but nothing that would lead one to a solution.
Please don't reply to this saying this isn't a tech support forum. I've searched many forums trying to help my brother. At nvnews.net there are a couple of threads that go on for about 20 pages, with many users having this problem and no solution in sight. I just thought I'd take a stab at it here.
Re:Under the hood ... (Score:3, Interesting)
I don't know if this is effective or possible, though
Re:No, its probably because in reality (Score:3, Interesting)
Yes, but is ATI's support of suspend-to-disk still broken? 'Cause I've got this laptop here, you see...
Schwab
Re:Under the hood ... (Score:5, Interesting)
Yes, it's strictly faster. The question is, how much faster is it, and where is the bottleneck? A context switch is on the order of a few thousand cycles, while a system call is on the order of a few hundred cycles. Meanwhile, uploading a 100KB DMA buffer over the PCI-Express bus is on the order of 50,000 cycles. Even if you reduce the context-switch overhead to zero, you're still only improving performance by around 6%.
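A quick back-of-the-envelope sketch of that estimate; all cycle counts below are the rough ballpark figures assumed in the comment, not measurements:

```python
# Rough sketch of the estimate above; cycle counts are the comment's
# assumed ballpark figures, not measurements.
CTX_SWITCH = 3_000   # "a few thousand cycles" per context switch
DMA_UPLOAD = 50_000  # ~100KB DMA buffer over PCI-Express

via_server = CTX_SWITCH + DMA_UPLOAD  # hand the buffer to the X server
direct     = DMA_UPLOAD               # context-switch cost driven to zero

saving = 1 - direct / via_server
print(f"best-case improvement: {saving:.1%}")  # in the ballpark of ~6%
```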
Why do you think DRI/DRM is a lot faster than 'indirect' opengl?!
Because indirect OpenGL is not hardware accelerated!
If you don't believe me, check the numerous articles about why w2k has this in the kernel (as opposed to winnt3.5/winnt4)
Actually, NT4 put graphics in the kernel. Of course, Windows NT 3.x also had a much more micro-kernel design, so it's hard to say where the performance improvement came from.
why Mac OS X now has Quartz Extreme 2D
Mac OS X has Quartz Extreme 2D because it allows hardware acceleration of 2D, not because it reduces client/server overhead.
why dri/drm is so much faster.
Because it's accelerated...
I don't think you really understand what you're talking about. Having done some programming on the subject myself, I can say that the bottlenecks in X aren't really where people think they are. The top bottlenecks in X GUIs like KDE and GNOME are:
1) Synchronization. Konqueror on my 2GHz P4 can relayout and redraw Slashdot at like 20fps. Should be enough for smooth resizing, right? Wrong! There is no synchronization between window manager and client in X. The window manager will redraw the window frame a hundred times per second and not let the contents catch up. Once you fix that synchronization problem (e.g. via the SYNC counter spec), it becomes very smooth. The truth of graphics is that no matter how fast it is, it will never look "smooth" without synchronization, and X doesn't support that synchronization by default.
2) Text layout. Pango (what GNOME uses for text layout) is glacially slow. That's why resizing gedit windows is slow, not because X can't draw things fast enough.
3) Text compositing. Only a few drivers properly accelerate XRender. Other drivers have a software compositing fallback, which is glacially slow because it requires the CPU to read/write to video memory over the AGP/PCI-E bus.
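The synchronization problem in (1) can be illustrated with a toy model. This is a simulation of the frame/contents race, not real Xlib code, and the redraw rates are assumptions taken from the comment:

```python
def stale_frames(wm_rate=100, client_fps=20, synced=False):
    """Count window-frame redraws that show stale client contents over 1s.

    Toy model of the WM/client race described above: each tick is one
    potential frame redraw by the window manager; the client only finishes
    a content redraw (bumping a SYNC-style counter) every
    wm_rate // client_fps ticks.
    """
    stale = 0
    content_ready = False
    for tick in range(wm_rate):
        if tick % (wm_rate // client_fps) == 0:
            content_ready = True      # client bumped its counter
        if synced and not content_ready:
            continue                  # WM waits for the counter instead
        if not content_ready:
            stale += 1                # frame redrawn with old contents
        content_ready = False
    return stale

print(stale_frames(synced=False))  # → 80: most frames show stale contents
print(stale_frames(synced=True))   # → 0: frame and contents always match
```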
Re:Under the hood ... (Score:4, Interesting)
PS: DirectFB is a very dated architecture, as is DirectX. Modern graphics cards don't like apps touching video memory or banging registers directly; it's tough to synchronize direct access with the GPU. They are built for the OpenGL ICD model, which depends on a protected kernel driver uploading command packets to the hardware. Indeed, "direct access" is going away in DX10, which will debut with Avalon.
Re:Under the hood ... (Score:2, Interesting)
Yes, it's strictly faster. The question is, how much faster is it, and where is the bottleneck? A context switch is on the order of a few thousand cycles, while a system call is on the order of a few hundred cycles. Meanwhile, uploading a 100KB DMA buffer over the PCI-Express bus is on the order of 50,000 cycles. Even if you reduce the context-switch overhead to zero, you're still only improving performance by around 6%.
I think what you're forgetting is that during these 50,000 cycles your app doesn't have to wait. Last time I checked, DMA is done mostly independently of the CPU, so your app can keep doing things in the meantime (at least during the bulk of the 50,000 cycles). Also, it's not one context switch but two: one to X and one back from X. Now you see a different picture: a couple of hundred cycles vs. several thousand cycles (ignoring the overhead of setting up a DMA transfer...). That's what I was trying to explain to you in the last post.
And your part about synchronization... please don't tell me you want to add another context switch to X just to synchronize every now and then?! (OK, with rendering via X you're not adding one, but you are in the case of apps using direct rendering.) This is the kind of thing that shows why X needs to do many more things 'directly' via system calls. This synchronization can be implemented far more efficiently with kernel primitives than with messages being sent back and forth.
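For what it's worth, the comparison being made here (two context switches via the X server vs. one system call with the DMA overlapped) can be sketched with the thread's own ballpark figures; these are assumptions, not measurements:

```python
# CPU cycles the application itself burns per request, using the rough
# figures from this thread (assumptions, not measurements).
CTX_SWITCH = 3_000  # "several thousand cycles" per context switch
SYSCALL    = 300    # "a couple of hundred cycles" per system call

# Via the X server: switch to X and back again, i.e. two context switches;
# the 50,000-cycle DMA itself runs independently of the CPU either way.
cpu_cost_via_x = 2 * CTX_SWITCH

# Direct rendering: one system call to kick off the DMA (setup overhead
# ignored, as in the comment); the app keeps working during the transfer.
cpu_cost_direct = SYSCALL

print(cpu_cost_via_x, cpu_cost_direct)  # → 6000 300
```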