Debian Sid Moves to X.Org
debiansid writes "Yes, Debian sid finally has X.Org. The changelogs suggest that some work has been taken from the Ubuntu packages of X.Org. Here is an
article that gives details on how to migrate to X.Org on sid. This article, by the way, has been posted from an X.Org-based X Window System, and it really IS much faster than XFree86."
Oh really? (Score:3, Interesting)
Last I checked, the only difference between the two was the license and a couple of new drivers. Certainly nothing to explain a "much faster" performance. Perhaps you could explain to us in a little more detail how yours is "much faster"? Does it have anything to do with the fact that you are using it on a newer and more powerful machine?
Re:Oh really? (Score:4, Informative)
Re:Oh really? (Score:2)
I think it would be very helpful for attracting new developers if people could e
No, it's probably because in reality (Score:4, Informative)
But what do I know, it only quadrupled my framerate in OpenGL apps. So all it comes down to is probably much newer or more complete video drivers.
Re:No, it's probably because in reality (Score:3, Interesting)
Yes, but is ATI's support of suspend-to-disk still broken? 'Cause I've got this laptop here, you see...
Schwab
Re:No, it's probably because in reality (Score:2)
Re:Oh really? (Score:1)
Re:Oh really? (Score:5, Informative)
Not true. Look here [x.org]
I have both the Radeon (at home) and the Intel i810 drivers in use with the new Xorg in Sid, and performance in 2D is a little faster.
Using transparency with the damage extension is a whole other story....
My thanks to all who worked hard on getting Xorg into debian.
Re:Oh really? (Score:5, Informative)
* ATI Radeon driver updates:
o Merged Framebuffer support (dualhead with DRI)
o DynamicClocks option (reduced power usage)
o Render acceleration (r100, r200 chips only)
o Support for new ATI chips (R420/M18, R423, RV370/M22, RV380/M24, RS300)
o DRI support for IGP chips
o Xv gamma correction
o Updated 3D drivers
o Many other small fixes
* Chips driver update
o Improved BE support
* MGA driver updates
o Support for DDC and DPMS on second head on G400
o Updated 3D driver
* Neomagic driver updates
o Support for Xv on pre-nm2160 chips
o Pseudocolor overlay mode (=PseudoColor emulation)
o Improved support for lowres double scan modes
* i810 driver updates
o Dualhead support (i830+)
o i915 support
o New 3D driver (i830+)
o i810 driver is now supported for AMD64
* S3 driver updates
o Support for additional IBM RAMDACS
* Savage driver updates
o Pseudocolor overlay mode
* SiS driver updates include
o output device hotplugging
o lots of fixes for 661, 741, 760
o extended interface for SiSCtrl
o extended LCD handling (allow more modes)
o HDTV support (480p, 480i, 720p, 1080i; 315/330 series)
o Added video blitter Xv adapter (315/330 series)
o extended RENDER acceleration
o SiS driver now supported on AMD64
* New Voodoo driver (Alan Cox)
o Provides native (glide-less) acceleration and mode setup for voodoo/voodoo2 boards
Re:Oh really? (Score:1)
changelogs (Score:5, Funny)
Re:changelogs (Score:2)
I would imagine that is the case, and I don't think anyone is pretending that it isn't.
While I think it is nice that Ubuntu is contributing to Linux, I am curious why it had to be the case; wouldn't that make Debian distribution development the slowest of the major distributions?
Re:changelogs (Score:1, Informative)
One complication... (Score:5, Informative)
One complication to the upgrade not really covered here (I wrote that article) is the simultaneous C++ ABI transition Debian Unstable is going through.
This means that upgrading might cause you to lose a lot of packages like gdm, etc.
So if you try the upgrade and apt-get or aptitude demands that you remove lots of packages, the reason is the C++ ABI change; if you simply wait a few days or weeks it should resolve itself.
At the time the article was posted things were less bad.
Re:One complication... (Score:3, Insightful)
It would be really nice if Debian started another release process right after the transition to X.org and the C++ ABI are finished.
I really like Debian, and I'd prefer not to wait a couple years for the next release. :-)
Re:One complication... (Score:2)
Well, if I wanted multiple years between re-installs, I'd probably use CentOS.
I think yearly releases would be reasonable for Debian. I was under the impression that most Debian developers wanted releases to occur at a faster rate than they currently are.
And if there's real need for a longer release cycle, you can always skip a release, and talk to people about maintaining the previous release for longer (assuming it did go to a yearly release cycle).
A release cycle of 6 months or less is hard
Re:One complication... (Score:2)
Actually the only problems I'm having is losing packages due to the GL library package rename and conflict. Why couldn't they contact the blender, audacity, vlc or csound maintainers for instance? Yes I know it's sid, that's not an excuse for bad communication. This, more than anything is going to get Debian a bad reputation.
Re:One complication... (Score:1)
Re:One complication... (Score:3)
Re:One complication... (Score:2, Troll)
(Oh and please, don't make any sharp remarks about the quality of C++ as a language that I have already swallowed
Re:One complication... (Score:3, Informative)
GCC's ABIs are usually changed to fix flaws and increase efficiency. They do not change that often for the older architectures where common practice and optimal sequences are well documented.
Re:One complication... (Score:5, Insightful)
The GCC people are the ones changing the ABI, and they're the ones losing credibility.
Re:One complication... (Score:2, Insightful)
Re:One complication... (Score:2)
ABI != API
The ABI doesn't need to change to track API changes.
Re:One complication... (Score:2)
Because that's the way C++ is designed: it has lots and lots of dependencies on data structure and vtable layouts. Why? Because it may save a few cycles in a few places that don't usually matter. The price you pay is that upgrades are a PITA.
Objective-C is a little better in this regard (but unfortunately has lots of other problems).
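The layout dependence described above can be made concrete with a toy sketch (Python's ctypes is used here purely to model C struct layouts; the struct and field names are invented for illustration): code compiled against one layout hard-codes member offsets, so inserting a field silently shifts everything behind it.

```python
import ctypes

class WidgetV1(ctypes.Structure):
    # Hypothetical "old ABI": callers compiled against this layout
    # bake in the offset of "state" (4 bytes into the struct).
    _fields_ = [("kind", ctypes.c_int32),
                ("state", ctypes.c_int32)]

class WidgetV2(ctypes.Structure):
    # "New ABI": one field was inserted, so "state" moved.
    _fields_ = [("kind", ctypes.c_int32),
                ("flags", ctypes.c_int32),
                ("state", ctypes.c_int32)]

# A caller still using the V1 offset would now read "flags" instead
# of "state" -- which is why an ABI bump forces recompiling everything
# linked against the library.
print(WidgetV1.state.offset)  # 4
print(WidgetV2.state.offset)  # 8
```

The same thing happens with vtables: add a virtual function in the middle of a class and every already-compiled caller indexes the wrong slot until it is rebuilt.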
*blink* *blink* (Score:3, Interesting)
Yesterday, I was having headaches updating something because Debian was again in motion and not all libjack packages had been recompiled to 0.100 yet. Among other things, libsdl1.2-dev was somehow suffering from this. I wanted to upgrade that package, but it depended on something called libglu1-xorg-dev. At which point I got worried...
apt-cache search shows "xserver-xorg".
My first reaction was along the lines of "Well, as they might say, the End is Nigh" and the second thought was "wonder if anyone has a migration guide?"
Thanks for answering the second bit, I was already wondering why Slashdot hasn't noted this. I mean, I'm guessing I'm getting old if I find out the cool stuff before it gets posted =)
Re:*blink* *blink* (Score:2)
Re:*blink* *blink* (Score:2)
Re:*blink* *blink* (Score:2)
I don't know what crap the distro has thrown in, but if the package manager isn't forcing things to break it goes like this:
Re:*blink* *blink* (Score:2)
Re:*blink* *blink* (Score:3, Informative)
It is stable if you don't do crazy stuff like "apt-get dist-upgrade in a cronjob". =) The idea is to only upgrade when things don't look too broken.
Right now, if I say "apt-get install jackd", it says tons of packages are going to get nuked. Should I go ahead? No, the old versions of the proggies work. Will I go ahead? As soon as things get resolved.
Debian Unsta
How is support for radeon? (Score:1)
Re:How is support for radeon? (Score:2)
heh, sid (Score:3, Funny)
Sid Phillips: Huh?
Woody: This system ain't big enough for the two of us!
Sid Phillips: What?
Woody: Somebody's poisoned the XFree86!
Sid Phillips: It's busted.
Woody: Who are you calling busted, Buster?
Sid Phillips: Huh?
(Toy Story)
Comparisons? (Score:3, Interesting)
Re:Comparisons? (Score:3, Informative)
The big pro is that, since the licensing change, almost all Linux vendors have moved to X.org.
That means there's more momentum behind it, and a lot of work will be happening on the new codebase - in a more open way.
Technically, right now, there are some changes between the two but nothing major unless you're using one of the cards for which the driver has been updated.
Re:Comparisons? (Score:2)
Re:Comparisons? (Score:2)
XFree86 changed their license last year, and this is the reason several *BSD and Linux distributions changed to X.Org. Xorg is based upon the last release under the old, unencumbered free license (just before XFree86 4.4), and developed from there.
Most users won't see much difference, yet, but XFree86 alienated many (most?) of their developers.
OpenBSD care more about free licenses than most, and they were less than pleased with the XFree86 license change; enough to inclu
Re:Comparisons? (Score:3, Insightful)
Etch or Sid? (Score:1)
Re:Etch or Sid? (Score:2)
*mumble* (Score:5, Funny)
Re:*mumble* (Score:2)
I moved my g/f to Kubuntu after she lost KDE on her Debian unstable machine with an upgrade the other day. I'll probably stick to Gentoo myself but I must say so far it does look like the perfect combo of being fairly stable but up to date.
Debian Sid Moves to X.Org (Score:1, Funny)
USE KOMPRESSOR GRAMMATIK!
X.Org vs. XFree86? (Score:1)
nvidia drivers? (Score:2, Interesting)
So... anyone yet tried how well this works with the NVIDIA drivers (specifically, using Debian's own nvidia packages - nvidia-glx and nvidia-kernel-source through make-kpkg)?
Anyone tried yet? How's things?
Applications can be broken for all I care, but I need my OpenGL =)
Re:nvidia drivers? (Score:1)
Re:nvidia drivers? (Score:1)
Under the hood ... (Score:5, Interesting)
With all the good work on transparency and nice effects, I'm still missing one big under-the-hood change: use something like DRM/DRI for all 2D graphics too (similar to DirectFB, Windows, Mac OS X, etc.)
Currently there are hundreds of context switches between the X server and your applications just to draw things, and this really slows things down. :-( Windows doesn't have that (since W2k anyway), and it increased Windows' graphics performance quite a bit. Mac OS X has Quartz 2D Extreme now, and it increased their performance.
I think before more fancy effects that only make the whole thing slower are added, these under-the-hood optimizations should be done!
Re:Under the hood ... (Score:2, Interesting)
Re:Under the hood ... (Score:4, Informative)
DRM/DRI work similarly to X (and similarly to how OpenGL works on Windows). When you draw a primitive, it puts some things in a command buffer. When the buffer is full, an ioctl() on the graphics card DMAs the command buffer to the GPU, which then draws all the lines, triangles, etc.
Doing a system call for each line would be painfully slow, since system calls cost you a lot of cycles. Even if you had the graphics card mapped directly into every process, so they could bang registers directly (which would be dangerous, but this is hypothetical), you'd still want to batch the primitives into a buffer, because small writes over the AGP bus are very slow. Now, once you're batching calls anyway, doing a kernel call to upload the buffer versus doing a context switch to upload the buffer doesn't make a whole lot of difference. Even if the latter is several times slower, you're not executing buffer flushes all that often anyway.
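The batching argument can be modelled with a toy command buffer (plain Python, nothing DRI-specific; the class name, buffer capacity, and primitive format are all invented): draws accumulate in a buffer, and a simulated kernel submission happens only when the buffer fills or is explicitly flushed.

```python
class CommandBuffer:
    """Toy model of a DRI-style command buffer: draw calls are
    batched, and a (simulated) ioctl/DMA submission happens only
    when the buffer fills."""

    def __init__(self, capacity=256):
        self.capacity = capacity
        self.buffer = []
        self.flushes = 0  # stand-in for kernel entries / DMA uploads

    def draw(self, primitive):
        self.buffer.append(primitive)
        if len(self.buffer) >= self.capacity:
            self.flush()

    def flush(self):
        if self.buffer:
            self.flushes += 1  # one kernel call uploads the whole batch
            self.buffer = []

cb = CommandBuffer(capacity=256)
for i in range(10_000):
    cb.draw(("line", i))
cb.flush()  # push whatever is left
print(cb.flushes)  # 40
```

The point is the ratio: ten thousand draw calls collapse into 40 submissions, so whether each submission costs a system call or a context switch stops mattering much.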
Re:Under the hood ... (Score:3, Interesting)
I don't know if this is effective or possible, though
Re:Under the hood ... (Score:3, Informative)
Re:Under the hood ... (Score:2)
The other end can interpret the data as you said and then free/reuse the memory. Such a polled operation may be very good on modern systems as it would get rid of tearing simply without using double buffering.
Re:Under the hood ... (Score:5, Interesting)
Yes, it's strictly faster. The question is, how much faster is it, and where is the bottleneck? A context switch is on the order of a few thousand cycles, while a system call is on the order of a few hundred cycles. Meanwhile, uploading a 100KB DMA buffer over the PCI-Express bus is on the order of 50,000 cycles. Even if you reduce the context-switch overhead to zero, you're still only improving performance by around 6%.
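Plugging in the rough figures above (these are order-of-magnitude estimates from the comment, not measurements) gives the "around 6%" number:

```python
CTX_SWITCH = 3_000    # cycles: "a few thousand" per context switch
DMA_UPLOAD = 50_000   # cycles: 100KB buffer over the PCI-Express bus

total = CTX_SWITCH + DMA_UPLOAD
# Best case: context-switch overhead drops all the way to zero,
# but the DMA upload still has to happen.
saving = CTX_SWITCH / total
print(f"{saving:.1%}")  # 5.7%
```

The bus transfer dominates, so even a free context switch buys only a few percent.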
Why do you think DRI/DRM is a lot faster than 'indirect' opengl?!
Because indirect OpenGL is not hardware accelerated!
If you don't believe me, check the numerous articles about why W2k has this in the kernel (as opposed to WinNT 3.5/WinNT 4)
Actually, NT4 put graphics in the kernel. Of course, Windows NT 3.x also had a much more micro-kernel design, so it's hard to say where the performance improvement came from.
why Mac OS X now has Quartz 2D Extreme
Mac OS X has Quartz 2D Extreme because it allows hardware acceleration of 2D, not because it reduces client/server overhead.
why dri/drm is so much faster.
Because it's accelerated...
I don't think you really understand what you're talking about. Having done some programming on the subject myself, I can say that the bottlenecks in X aren't really where people think they are. The top bottlenecks in X GUIs like KDE and GNOME are:
1) Synchronization. Konqueror on my 2GHz P4 can relayout and redraw Slashdot at like 20fps. Should be enough for smooth resizing, right? Wrong! There is no synchronization between window manager and client in X. The window manager would redraw the window frame a hundred times per second, and not let the contents catch up. Once you fix that synchronization problem (e.g. via the SYNC counter spec), it becomes very smooth. The truth of graphics is that no matter how fast it is, it will never look "smooth" without synchronization. X doesn't have support, by default, for that synchronization.
2) Text layout. Pango (what GNOME uses for text layout) is glacially slow. That's why resizing gedit windows is slow, not because X can't draw things fast enough.
3) Text compositing. Only a few drivers properly accelerate XRender. Other drivers have a software compositing fallback, which is glacially slow because it requires the CPU to read/write to video memory over the AGP/PCI-E bus.
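Point 1 can be sketched with a small simulation (this is a pure toy model, not the actual XSync protocol; the tick counts and variable names are invented): without synchronization the window manager redraws every tick, mostly showing stale content, while a SYNC-counter-style scheme redraws only when the client has actually produced a new frame.

```python
TICKS = 100
CLIENT_PERIOD = 5  # client finishes a new frame every 5 ticks

# Unsynchronized: the WM redraws every tick, whether or not the
# client has caught up -- most redraws just repeat stale content.
unsynced_redraws = TICKS

# SYNC-counter style: the client bumps a counter when a frame is
# done, and the WM redraws only when the counter has advanced.
counter, last_seen, synced_redraws = 0, 0, 0
for tick in range(1, TICKS + 1):
    if tick % CLIENT_PERIOD == 0:
        counter += 1             # client: "frame ready"
    if counter != last_seen:     # WM: new content? then redraw
        synced_redraws += 1
        last_seen = counter

print(unsynced_redraws, synced_redraws)  # 100 20
```

Every synchronized redraw shows a genuinely new frame, which is why the result looks smooth even though far fewer redraws happen.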
Re:Under the hood ... (Score:2, Interesting)
Yes, it's strictly faster. The question is, how much faster is it, and where is the bottleneck? A context switch is on the order of a few thousand cycles, while a system call is on the order of a few hundred cycles. Meanwhile, uploading a 100KB DMA buffer over the PCI-Express bus is on the order of 50,000 cycles. Even if you reduce the context-switch overhead to zero, you're still only improving performance by around 6%.
I think what you're forgetting is, during these 50,000 cycles your app doesn't have
Re:Under the hood ... (Score:2)
It doesn't make a difference to your graphics performance. Having those 50,000 cycles free only helps you if you're CPU-bound. When drawing complex graphics, you're rarely CPU bound. In any case, even if you reduce the setup and context switch overhead to 0 cycles (and you do an upload say 120 tim
Re:Under the hood ... (Score:2)
This statement has nothing to do with what I said. I didn't say that the bottleneck in drawing your browser window is the graphics card, I said the bottleneck in the overall X -> kernel -> graphics card communication process is the AGP/PCI-E bus. The reason your browser window doesn't dra
Re:Under the hood ... (Score:2)
But we're not talking about 10 different 1%'s, we're talking about this particular 1%. The cost of changing it is high, and the gain is only 1%. If the other 9 1%'s don't involve moving large amounts of code into the kernel, well, you can have those, but leave this particular 1% alone.
Sorry, but that isn't true. X
Re:Under the hood ... (Score:2)
http://slashdot.org/article.pl?sid=05/06/28/1245258&tid=104&tid=130 [slashdot.org]
This aims to do exactly what you are talking about and some more (as I understand it).
Osho
Re:Under the hood ... (Score:2)
Re:Under the hood ... (Score:4, Interesting)
PS: DirectFB is a very dated architecture, as is DirectX. Modern graphics cards don't like apps to directly touch video memory or directly bang registers. It's tough to synchronize direct access with the GPU. They are built for the OpenGL ICD model, which depends on a protected kernel driver uploading command packets to the hardware. Indeed, "direct access" is going away in DX10, which will debut with Avalon.
mirror? (Score:1)
Full Article Text (Score:2)
Posted by Steve [slashdot.org] in the Debian [slashdot.org] section on Wed 13 Jul 2005 at 17:10
Debian has now made the transition to the X.org implementation of the X11 window system. If you're running sid/etch you should be able to upgrade now.
The transition had previously been on hold until Sarge was released - as it was judged too major a change to add to the release at the last minute.
Now Sarge is out, Debian development continues, and one of the most anticipated changes is upon us.
Congratulations Debian community (Score:2)
Broken Packages (Score:2)
Re:Broken Packages (Score:4, Informative)
Re:Broken Packages (Score:2)
This is probably secondary to the gcc 4.0 migration and X.org however.
ash
KDE 3.4 users (Alioth packages) BEWARE! (Score:3, Informative)
If you use those packages you should hold off on this upgrade for a while, as it will cause many of the core KDE packages to uninstall, breaking KDE completely.
Re:KDE 3.4 users (Alioth packages) BEWARE! (Score:2)
Is X.org in some way tied into nvidia lockups? (Score:4, Interesting)
However, a lot of people seem to have this dreaded X lockup with nvidia binaries, and just about all of them were using Xorg. This can be either a complete freeze, or the pointer still moving but nothing responsive. Usually you can still kill X, but not always. This has also happened to my brother, who was frustrated with Mandrake and packages, so I recommended Ubuntu to him. I went over to his house, installed it, and everything seemed fine. Then he had a lockup an hour in. Then another. The weird thing is, it doesn't usually happen during playing, say, an OpenGL game, but usually on the desktop by just moving the pointer quickly.
He never had these issues with Mandrake 10. I installed various versions of the nvidia binary, including the one he used to use with Mandrake, but all the same. I looked at the specs of MDK 10 (kernel 2.6.3, XFree86). I'm not sure if it's a kernel issue, XFree, or some other thing (ACPI or APM?)
The logs give an error (I believe an NVRM Xid error) but nothing that would lead one to a solution.
Please don't reply to this saying this isn't a tech support forum. I've searched many forums trying to help my brother. At nvnews.net there are a couple of threads that go on for about 20 pages with many users having this problem with no solution in sight. I just thought I'd take a stab at the
Re: (Score:2, Informative)
Re:Is X.org in some way tied into nvidia lockups? (Score:2)
I've stopped at that point, wanting to investigate more before trying more drastic things, like going to the 2.6.3 kernel as that's what worked for him in MDK. Also considered going to XFree86
Re:Is X.org in some way tied into nvidia lockups? (Score:2)
For fun, strace XFree86 while the system is frozen (you'll have to ssh in).
Install X.org, remove 1/2 your system (Score:2)
Yes, I too upgraded to X.org, specifically to get the DynamicClocks feature working on my Thinkpad T42p to increase battery life and reduce heat on the GPU.
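For reference, the DynamicClocks option goes in the Device section of /etc/X11/xorg.conf, along these lines (a sketch only; the Identifier string is arbitrary, and any other options your card needs are omitted):

```
Section "Device"
    Identifier "Radeon Mobility"
    Driver     "radeon"
    Option     "DynamicClocks" "on"
EndSection
```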
Unfortunately, installing X.org, and JUST X.org, required the removal of 547 packages from my system (yes, exactly 547 packages, which you can see here [gnu-designs.com], sorted alphabetically).
This included all of GNOME, themes, widgets, applets, all of KDE and related packages, pose (the Palm OS Emulator) and its foundation lib FLTK, abiword, OpenOffice.org, and
Re:Install X.org, remove 1/2 your system (Score:5, Insightful)
Re:Install X.org, remove 1/2 your system (Score:2)
Re:Install X.org, remove 1/2 your system (Score:2)
Re:Install X.org, remove 1/2 your system (Score:2)
"I run Debian Unstable. But I'll still moan like a fucking drain when it breaks..."
Re:Install X.org, remove 1/2 your system (Score:2)
+++
My last.fm page [www.last.fm]
+++
Husi is where's it at [hulver.com]
Problems with trident/white screen (Score:2)
Thanks
XInput broke? (Score:2)
$ xinput test gstylus
X Error of failed request: BadDevice, invalid or uninitialized input device
Major opcode of failed request: 150 (XInputExtension)
Minor opcode of failed request: 3 (X_OpenDevice)
Serial number of failed request: 11
Current serial number in output stream:
Re:XInput broke? (Score:2)
First of all, the wacom module builds by default with gcc-4.0 but needs gcc-3.3, or else it gets rejected by the kernel; manually tweaking the symlink
Secondly, udev in sid requires kernel 2.6.12, but Debian only ships 2.6.11, so it won't start and some device files will be missing; the workaround was to 'mknod' t
But, KDE has a conflict (Score:2)
So, to install KDE I must uninstall xorg, or thereabouts. At least that's the impression I'm getting from Aptitude. Ah well, I'll give it some time to sort out.
1863 just telegraphed. (Score:1, Offtopic)
Utter crap (Score:3, Interesting)
As for the special effects: wrong, wrong, wrong. OSX shows how these effects can be useful. Also, the transition to a GL desktop will most likely be implemented in a new version of X.org which merges with Packard's work. A GL desktop actually helps the CPU by taking away the task of drawing stuff from it and having the GPU d
Re:Utter crap (Score:2)
But on OS X they are implemented nice. The way things zoom off the screen to show you your desktop or the slight transparency of sheets asking you questions are both nice. And the genie effect when you minimize a window or restore it is nice too and directs your eye to where it is going. Apple has done a great job.
Now that's not to say it can't
Re:Utter crap (Score:2)
Re:Utter crap (Score:2)
Re:Utter crap (Score:2)
Re:Utter crap (Score:2)
You can get the "nice smooth motion, without seeing stuff being redrawn" on Linux as well. Just install xserver-xorg, enable the Composite and Damage extensions, and run xcompmgr in your login script. Voila, smooth motion, no redraws.
Some of the distros have already packaged this up, so these "difficult" instructions are jus
Re:Utter crap (Score:2, Informative)
I dunno about that, OpenGL is rather CPU-bound. Much of the OpenGL pipeline is still done by the processor, and the primitives for OpenGL are often optimized for fully 3D rendering, which is more than a little unoptimized for 2D blitting. glTranslatef?
A 2D-specific blitter is a simple matter of setting up a few registers and letting it happen. A 3D operat
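The "few registers" idea can be mocked up like this (a pure software simulation; the register names and framebuffer layout are invented, not any real chip's): the CPU fills in source, destination, and size "registers", then a single "go" kicks off the whole rectangle copy with no per-pixel CPU work.

```python
class BlitEngine:
    """Toy 2D blitter: a software stand-in for a few MMIO registers."""

    def __init__(self, width, height):
        self.fb = [[0] * width for _ in range(height)]  # framebuffer
        self.src_x = self.src_y = 0   # "source address" registers
        self.dst_x = self.dst_y = 0   # "destination address" registers
        self.w = self.h = 0           # "extent" registers

    def go(self):
        # One "register write" launches the whole rectangle copy --
        # no per-pixel (let alone per-vertex) work on the CPU side.
        for row in range(self.h):
            for col in range(self.w):
                self.fb[self.dst_y + row][self.dst_x + col] = \
                    self.fb[self.src_y + row][self.src_x + col]

eng = BlitEngine(8, 8)
eng.fb[0][0] = 7                     # a pixel worth copying
eng.src_x, eng.src_y = 0, 0
eng.dst_x, eng.dst_y = 4, 4
eng.w, eng.h = 2, 2
eng.go()
print(eng.fb[4][4])  # 7
```

A 3D pipeline, by contrast, drags every operation through transform, clipping, and rasterization stages even when all you wanted was a straight rectangle copy.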
Re:YES!!! X.Org! (Score:2)
>28MB RAM
>1.7GB HD
>1MB SVGA
With a system like that, I'm not surprised things are a bit slow. Join this decade, dude. Even if you're poor, a low end Pentium II system you can probably get for $50 will trounce that and you'll be much happier. }:)
-Z
Re:gentoo leads (Score:3, Interesting)
Very funny. Actually, X.Org recompiles aren't too bad (and yes, Gentoo does use it by default, as does Mandrake Linux or whatever it's called these days). The real killer is stuff like KDE - multi-day compile times, anyone?
Re:gentoo leads (Score:2)
Re:gentoo leads (Score:2)
The real killer is stuff like KDE - multi-day compile times, anyone?
Same here under FreeBSD. Qt/KDE recompiles are a huge time sink of a regular portupgrade -a run; esp. on mini-ITX. Same for firefox/mozilla/thunderbird updates! Worse is only a complete GNOME upgrade (yikes!).
But seriously, gcc sucks big time (speed-wise) when compiling C++ code like Qt or KDE or Mozilla. Absolutely rock-bottom performance. One ought to force GCC developers to use only slower (500 MHz or less) CPUs for a while, so that th
Re:gentoo leads (Score:2, Insightful)
The new hand-written recursive descent parser added in 3.4 [gnu.org] improved performance a fair bit (making 3.4 the fastest g++ version ever as of the release, they claim). The performance for compiling without optimization was improved even more in 4.0 [gnu.org]. For Gentoo users and other OCD-level recompilers it might not matter, but it does help developers everywhere. This is what I would
Re:gentoo leads (Score:2, Informative)
Nowadays Gentoo encourages the use of split ebuilds that make for a much more efficient and less bloated desktop, not to mention faster compile times, since only explicitly requested stuff gets compiled and installed.
Gentoo KDE Split EBuilds HOWTO [gentoo.org]
Re:Is sid testing now? (Score:2)
The version names are all from Toy Story, and Sid is the kid that always broke the toys; therefore, sid will always be unstable.
Packages from unstable trickle down into testing, which eventually get released all in one go to stable (like Sarge did recently).
Re:wow (Score:2)