
Debian Sid Moves to X.Org

debiansid writes "Yes, Debian sid finally has X.Org. The Changelogs suggest that some work has been taken from the Ubuntu packages of X.Org. Here is an article that gives details on how to migrate to X.Org on sid. This article, by the way, has been posted from an X.Org based X-Window System, and it really IS much faster than XFree86."
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • Oh really? (Score:3, Interesting)

    by Anonymous Coward on Sunday July 17, 2005 @11:24AM (#13086735)
    "This article, by the way, has been posted from an X.Org based X-Window System, and it really IS much faster than XFree86."

    Last I checked, the only difference between the two was the license and a couple of new drivers. Certainly nothing to explain a "much faster" performance. Perhaps you could explain to us in a little more detail how yours is "much faster"? Does it have anything to do with the fact that you are using it on a newer and more powerful machine?
  • *blink* *blink* (Score:3, Interesting)

    by WWWWolf ( 2428 ) <wwwwolf@iki.fi> on Sunday July 17, 2005 @11:33AM (#13086776) Homepage

    Yesterday, I was having headaches updating something because Debian was again in motion and not all libjack packages had been recompiled to 0.100 yet. Among other things, libsdl1.2-dev was somehow suffering from this. I wanted to upgrade that package, but it depended on something called libglu1-xorg-dev. At which point I got worried...

    apt-get search shows "xserver-xorg".

    My first reaction was along the lines of "Well, as they might say, the End is Nigh" and the second thought was "wonder if anyone has a migration guide?"

    Thanks for answering the second bit; I was already wondering why Slashdot hadn't noted this. I mean, I guess I'm getting old if I find out the cool stuff before it gets posted =)

  • Comparisons? (Score:3, Interesting)

    by MindNumbingOblivion ( 668443 ) on Sunday July 17, 2005 @11:39AM (#13086803)
    I realize I'm about to open a potential can of worms, but I really must know. I'm not that experienced with X, other than using GNOME or KDE. What are the pros and cons between XFree86 and X.Org? I think most of the boxen I've used were XFree86 based, and I am uncertain whether I have ever used one based on X.Org.
  • nvidia drivers? (Score:2, Interesting)

    by WWWWolf ( 2428 ) <wwwwolf@iki.fi> on Sunday July 17, 2005 @11:51AM (#13086849) Homepage

    So... anyone yet tried how well this works with the NVIDIA drivers (specifically, using Debian's own nvidia packages - nvidia-glx and nvidia-kernel-source through make-kpkg)?

    Anyone tried yet? How's things?

    Applications can be broken for all I care, but I need my OpenGL =)

  • Utter crap (Score:3, Interesting)

    by ardor ( 673957 ) on Sunday July 17, 2005 @11:51AM (#13086852)
    I have been using X.org for almost a year, and it works rock solid. It is MUCH faster than Xfree, and Debian still starts fairly quickly (X.org didn't lengthen the startup time at all).

    As for the special effects: wrong, wrong, wrong. OSX shows how these effects can be useful. Also, the transition to a GL desktop will most likely be implemented in a new version of X.org which merges with Packard's work. A GL desktop actually helps the CPU by taking away the task of drawing stuff from it and having the GPU do it, which is the logical thing to do.

    But I know, anything else than crude blitting is l4m3, hard core spartan X11 is l33t. Yeah....
  • Under the hood ... (Score:5, Interesting)

    by Ezdaloth ( 675945 ) on Sunday July 17, 2005 @11:54AM (#13086866) Homepage

    With all the good work on transparency and nice effects, I'm still missing one big under-the-hood change: use something like DRM/DRI for all 2D graphics too! (similar to DirectFB, Windows, Mac OS X, etc.)

    Currently there are hundreds of context switches between the X server and your applications just to draw things, and that really slows things down. :-( Windows doesn't have that (since Win2k anyway), and it improved Windows' graphics performance quite a bit. Mac OS X has Quartz 2D Extreme now, and it improved their performance too.

    I think before more fancy effects that only make the whole thing slower are added, these under-the-hood optimizations should be done!
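
The round-trip overhead described above can be made concrete with a rough, Unix-only Python sketch. The socket pair merely stands in for the X client/server channel, so this illustrates context-switch cost in general, not X itself:

```python
import os
import socket
import time

def roundtrip_cost(n=10_000):
    """Time n one-byte round trips over a local socket pair; each
    round trip forces a context switch in each direction, much like
    a synchronous request to a display server."""
    parent, child = socket.socketpair()
    pid = os.fork()
    if pid == 0:            # child: echo every byte back
        parent.close()
        for _ in range(n):
            child.sendall(child.recv(1))
        child.close()
        os._exit(0)
    child.close()
    start = time.perf_counter()
    for _ in range(n):
        parent.sendall(b"x")
        parent.recv(1)
    elapsed = time.perf_counter() - start
    parent.close()
    os.waitpid(pid, 0)
    return elapsed / n      # seconds per round trip

def direct_cost(n=1_000_000):
    """Time a trivial in-process memory write for comparison."""
    buf = bytearray(1)
    start = time.perf_counter()
    for _ in range(n):
        buf[0] = 1
    return (time.perf_counter() - start) / n

if __name__ == "__main__":
    print(f"socket round trip: {roundtrip_cost() * 1e6:.2f} us each")
    print(f"direct write:      {direct_cost() * 1e9:.2f} ns each")
```

On a typical machine the socket round trip comes out several orders of magnitude more expensive than the direct write, which is the gap the comment is pointing at.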

  • by Anonymous Coward on Sunday July 17, 2005 @12:03PM (#13086910)
    It's already in X.Org CVS, the new EXA acceleration. It's working in the sis(4) driver and soon to be in radeon(4). Let's just hope EXA for radeon(4) gets into X.Org 6.9/7.0. It's mentioned here: http://xorg.freedesktop.org/wiki/ChangesSince68 [freedesktop.org]
  • Re:gentoo leads (Score:3, Interesting)

    by makomk ( 752139 ) on Sunday July 17, 2005 @12:22PM (#13087013) Journal
    Sure Gentoo switched a while ago, but everyone is still compiling it so no one is actually using it yet.

    Very funny. Actually, X.Org recompiles aren't too bad (and yes, Gentoo does use it by default, as does Mandrake Linux or whatever it's called these days). The real killer is stuff like KDE - multi-day compile times, anyone?
  • by Sark666 ( 756464 ) on Sunday July 17, 2005 @02:01PM (#13087430)
    I have Debian sid installed, still with XFree86, without issue. I've always just installed the nvidia binaries from their site with no problems. I also wanted to check out Ubuntu, installed it, and still have no issues with the nvidia binaries - not a single crash/lockup.

    However, a lot of people seem to have this dreaded X lockup with nvidia binaries, and just about all of them were using Xorg. This can either be a complete freeze, or the pointer still moving but nothing is responsive. Usually you can still kill X but not always. This has also happened to my brother who was frustrated with mandrake and packages, so I recommended ubuntu to him. I went over to his house installed it and everything seemed fine. Then he had a lock up an hour in. Then another. The weird thing is, it doesn't usually happen during playing say an opengl game, but usually on the desktop by just moving the pointer quickly.

    He never had these issues with Mandrake 10. I installed various versions of the nvidia binary, including the one he used to use with Mandrake, but all the same. I looked at the specs of MDK 10 (kernel 2.6.3, XFree86). I'm not sure if it's a kernel issue, XFree86, or something else (ACPI or APM?).

    The logs give an error (I believe an NVRM Xid error) but nothing that would lead one to a solution.

    Please don't reply to this saying this isn't a tech support forum. I've searched many forums trying to help my brother. At nvnews.net there are a couple of threads that go on for about 20 pages with many users having this problem and no solution in sight. I just thought I'd take a stab at asking /. here. Maybe one of you has dealt with the problem and actually solved it.
  • by jrockway ( 229604 ) <jon-nospam@jrock.us> on Sunday July 17, 2005 @06:03PM (#13088771) Homepage Journal
    Why not have processes append to the buffer (which is a shared memory region) and have the video card poll this buffer every n milliseconds (say, 60 times a second if the refresh rate is 60Hz). Then there would be no context switch, just writes to memory (which is relatively speedy).

    I don't know if this is effective or possible, though :)
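
A minimal single-process sketch of that idea, assuming a hypothetical layout of one 64-bit write index followed by a command region (a real version would need atomic index updates, a ring-buffer wraparound, and an actual poll loop, all omitted here):

```python
from multiprocessing import shared_memory
import struct

# Hypothetical layout: an 8-byte little-endian write index, then a
# command region. Clients append draw commands with plain memory
# writes (no syscall per command); a poller standing in for the
# card drains everything new on each tick.

HEADER = 8
shm = shared_memory.SharedMemory(create=True, size=HEADER + 4096)
shm.buf[:HEADER] = struct.pack("<Q", 0)

def append_command(cmd: bytes) -> None:
    """Client side: copy the command and bump the write index.
    Pure memory writes - no context switch into a display server."""
    (widx,) = struct.unpack("<Q", shm.buf[:HEADER])
    shm.buf[HEADER + widx : HEADER + widx + len(cmd)] = cmd
    struct.pack_into("<Q", shm.buf, 0, widx + len(cmd))

def poll_commands(ridx: int) -> tuple:
    """'Card' side: return everything appended since ridx, plus the
    new read position."""
    (widx,) = struct.unpack("<Q", shm.buf[:HEADER])
    data = bytes(shm.buf[HEADER + ridx : HEADER + widx])
    return data, widx

append_command(b"RECT")
append_command(b"LINE")
new, ridx = poll_commands(0)
print(new)   # b'RECTLINE'
shm.close()
shm.unlink()
```

The catch the thread goes on to discuss is that polling only hides the switch on the submit side; any reply or synchronization still needs a kernel primitive.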
  • by ewhac ( 5844 ) on Sunday July 17, 2005 @07:12PM (#13089136) Homepage Journal

    They're using "fglrx" drivers from ATI instead of the default 2d "ati" drivers :)

    Yes, but is ATI's support of suspend-to-disk still broken? 'Cause I've got this laptop here, you see...

    Schwab

  • by be-fan ( 61476 ) on Sunday July 17, 2005 @08:27PM (#13089590)
    Of course you're right. I know that's what's really going on. But that doesn't make my argument less true: running things "directly" via system calls is a lot faster than going via X.

    Yes, it's strictly faster. The question is, how much faster is it, and where is the bottleneck? A context switch is on the order of a few thousand cycles, while a system call is on the order of a few hundred cycles. Meanwhile, uploading a 100KB DMA buffer over the PCI-Express bus is on the order of 50,000 cycles. Even if you reduce the context-switch overhead to zero, you're still only improving performance by around 6%.
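
The arithmetic behind that ~6% figure can be checked directly, taking "a few thousand cycles" as roughly 3,000 (an assumption; all of the quoted figures are ballpark estimates, not measurements):

```python
# Ballpark cycle counts from the paragraph above.
context_switch = 3_000    # "a few thousand cycles"
syscall        = 300      # "a few hundred cycles"
dma_upload     = 50_000   # 100 KB over the PCI-Express bus

cost_with_switch = context_switch + dma_upload
cost_without     = dma_upload

saving = 1 - cost_without / cost_with_switch
print(f"{saving:.1%}")    # prints 5.7% - roughly the 6% quoted
```

The point of the exercise: as long as the DMA upload dominates the per-operation cost, eliminating the context switch entirely only shaves off a few percent.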

    Why do you think DRI/DRM is a lot faster than 'indirect' opengl?!

    Because indirect OpenGL is not hardware accelerated!

    If you don't believe me, check the numerous articles about why W2K has this in the kernel (as opposed to WinNT 3.5/NT4)

    Actually, NT4 put graphics in the kernel. Of course, Windows NT 3.x also had a much more micro-kernel design, so it's hard to say where the performance improvement came from.

    why Mac OS X now has Quartz 2D Extreme

    MacOS X has Quartz Extreme 2D because it allows hardware acceleration of 2D, not because it reduces client/server overhead.

    why dri/drm is so much faster.

    Because it's accelerated...

    I don't think you really understand what you're talking about. Having done some programming on the subject myself, I can say that the bottlenecks in X aren't really where people think they are. The top bottlenecks in X GUIs like KDE and GNOME are:

    1) Synchronization. Konqueror on my 2GHz P4 can relayout and redraw Slashdot at like 20fps. Should be enough for smooth resizing, right? Wrong! There is no synchronization between window manager and client in X. The window manager would redraw the window frame a hundred times per second and not let the contents catch up. Once you fix that synchronization problem (eg: via the SYNC counter spec), it becomes very smooth. The truth of graphics is that no matter how fast it is, it will never look "smooth" without synchronization, and X doesn't have support for that by default.

    2) Text layout. Pango (what GNOME uses for text layout) is glacially slow. That's why resizing gedit windows is slow, not because X can't draw things fast enough.

    3) Text compositing. Only a few drivers properly accelerate XRender. Other drivers have a software compositing fallback, which is glacially slow because it requires the CPU to read/write to video memory over the AGP/PCI-E bus.
  • by be-fan ( 61476 ) on Sunday July 17, 2005 @08:37PM (#13089643)
    You said that X should use something like the DRI/DRM for all graphics ops, like Win2k, and DirectFB and Quartz 2D Extreme do. DRI/DRM is the OpenGL (3D) driver API. It doesn't just mean "kernel-mode graphics driver". Win2K and DirectFB don't use 3D driver APIs to do 2D operations. Quartz 2D Extreme does, and Microsoft's Avalon will.

    PS: DirectFB is a very dated architecture, as is DirectX. Modern graphics cards don't like apps to directly touch video memory or directly bang registers; it's tough to synchronize direct access with the GPU. They are built for the OpenGL ICD model, which depends on a protected kernel driver uploading command packets to the hardware. Indeed, "direct access" is going away in DX10, which will debut with Avalon.
  • by Ezdaloth ( 675945 ) on Monday July 18, 2005 @06:27AM (#13092067) Homepage

    Yes, it's strictly faster. The question is, how much faster is it, and where is the bottleneck? A context switch is on the order of a few thousand cycles, while a system call is on the order of a few hundred cycles. Meanwhile, uploading a 100KB DMA buffer over the PCI-Express bus is on the order of 50,000 cycles. Even if you reduce the context-switch overhead to zero, you're still only improving performance by around 6%.

    I think what you're forgetting is that during those 50,000 cycles your app doesn't have to wait. Last time I checked, DMA was done mostly independent of the CPU, so your app can keep doing things in the meantime (at least during the bulk of the 50,000 cycles). Also, it's not one context switch but two: one into X and one back from X. Now you see a different picture: a couple of hundred cycles vs. several thousand cycles (ignoring the overhead of setting up a DMA transfer...). That's what I was trying to explain to you in the last post.
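
Under that reading, the CPU-visible cost per drawing op compares roughly like this, reusing the ballpark cycle counts quoted from the parent post (the 1,000-cycle DMA-setup figure is my own assumption, not from the thread):

```python
# CPU-visible cost per op under each model. The DMA transfer itself
# runs mostly without the CPU, so only its setup is charged here.
context_switch = 3_000   # "a few thousand cycles"
syscall        = 300     # "a few hundred cycles"
dma_setup      = 1_000   # assumed cost to program the transfer

via_x  = 2 * context_switch    # into the X server and back
direct = syscall + dma_setup   # one kernel entry, then async DMA

print(via_x, direct)   # prints: 6000 1300
```

Whether that gap matters in practice then comes back to the earlier question of where the real bottleneck sits.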

    And your part about synchronization... please don't tell me you want to add another context switch to X just to synchronize every now and then?! (OK, with rendering via X you're not adding one... but you are in the case of apps using direct rendering.) This is exactly why X needs to do many more things 'direct' via system calls. This synchronization can be implemented far more efficiently with kernel primitives than with messages being sent back and forth.
