KDE and Canonical Developers Disagree Over Display Server
sfcrazy (1542989) writes "Robert Ancell, a Canonical software engineer, wrote a blog post titled 'Why the display server doesn't matter', arguing that: 'Display servers are the component in the display stack that seems to hog a lot of the limelight. I think this is a bit of a mistake, as it's actually probably the least important component, at least to a user.' KDE developers, who have long experience with Qt (something Canonical is moving towards for its mobile ambitions), have refuted Bob's claims and said that the display server does matter."
logic (Score:5, Insightful)
If they don't matter, why Mir?
Re:logic (Score:4, Insightful)
They're saying that it doesn't matter to an app developer if you're using a middleware framework, as most developers do, because the eventual output on the display will be the same.
The reasons for introducing Mir are performance, the ability to run on low-footprint devices, and cross-device compatibility.
So their point is that X11 vs Wayland vs Mir vs framebuffer vs blakjsrelhasifdj doesn't matter to a developer using the full Qt stack. They write their app against Qt, and developers on Qt write the backend that talks to whatever the end user is using. It's more work for Qt and the other frameworks, but "should" be "no" more work for an app developer.
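The abstraction the parent describes can be sketched in a few lines. This is a hypothetical toolkit, not Qt's actual code, but it is loosely modelled on how Qt selects a QPA platform plugin at startup (the real control knob there is the QT_QPA_PLATFORM environment variable; the names below are made up):

```python
# Hypothetical registry of display-server backends. In Qt this role is
# played by QPA platform plugins (xcb, wayland, and so on); app code
# never touches this layer directly.
BACKENDS = {
    "x11": "backend that speaks the X11 protocol",
    "wayland": "backend that speaks the Wayland protocol",
    "mir": "backend that speaks the Mir protocol",
}

def pick_backend(env):
    """Choose a display backend from the environment, defaulting to x11.

    The toolkit calls this once at startup; the application just draws
    widgets, which is why the display server "doesn't matter" to the
    app developer.
    """
    name = env.get("TOOLKIT_PLATFORM", "x11")  # cf. Qt's QT_QPA_PLATFORM
    if name not in BACKENDS:
        raise ValueError("no such backend: " + name)
    return name

print(pick_backend({}))                                   # x11
print(pick_backend({"TOOLKIT_PLATFORM": "wayland"}))      # wayland
```

The application's drawing code is identical in both cases; only the plugin that turns widget trees into protocol messages changes.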
Re: (Score:2)
Jolla would like to know why the need for Mir when they have a Wayland compositor and window manager running on low-end/mid-range mobile devices with excellent (compared to other similar-spec devices) performance.
Re: (Score:2)
Jolla would like to know why the need for Mir when they have a Wayland compositor and window manager running on low-end/mid-range mobile devices with excellent (compared to other similar-spec devices) performance
I have no idea, and I don't pretend to. I was pointing out that the +5 rated comment I replied to was not insightful and was missing the point of the original article. It was talking to app developers, not framework/OS/etc developers.
Personal blog (Score:3, Informative)
Re:Personal blog (Score:5, Insightful)
Re: (Score:2, Troll)
NOTHING to do with Canonical at all.
Yet there is Mark Shuttleworth, replying the same day [google.com] to this supposedly "personal" blog with:
It was amazing to me that competitors would take potshots at the fantastic free software work of the Mir team
But hey... that's Google+, not ubuntu.com or whatever, so that's got nothing to do with Canonical either. Right?
Re: (Score:3)
They are terrified, because it would mean more work for them and less advancement of the Linux graphics stack. Having three display servers (Xorg, Wayland, Mir) increases the number of code paths everything and everyone has to deal with.
It's not as trivial as Robert suggested, and, more importantly, it doesn't increase Robert's workload.
If there is one thing that's really annoying, it's someone telling you how easy your really difficult job is. So I understand the frustration apparent in the KDE blogs.
Re: (Score:2)
They are terrified, because it would mean more work for them and less advancement of the Linux graphics stack. Having three display servers (Xorg, Wayland, Mir) increases the number of code paths everything and everyone has to deal with.
No it doesn't. No one but Canonical will be supporting Mir and Xorg will go away. Leaving Wayland for the adults. No one besides Canonical gives two shits about Mir and once Wayland is stable enough for primary use people will switch to it faster than they did to systemd.
Re: (Score:2)
Users using applications on Ubuntu will care when those applications break because of the Mir backend. They'll care. A number of them will probably report to the application writers that the apps don't work, when the real issue is in the Mir support for the toolkits that Ubuntu will have to write. Thus, app developers will have to spend some time troubleshooting the problem.
This is the argument the KDE guys are advancing. It makes sense to me, but I must admit, I don't know the guts, nuts or bolts of Mir,
Re: (Score:2)
no, really? (Score:2, Funny)
Interesting how KDE and those responsible for Unity have differing perspectives... who would have thought?
Bollocks (Score:4, Insightful)
The display server is hugely important. The fact that the user doesn't know they're using it is irrelevant, because they're using it at all times.
Shh... (Score:4, Insightful)
You heard the man, it's not important. Now stop talking about it! That way Canonical can more easily save face when they cancel their failed cluster-fuck of a display server and switch back to Wayland...
Re: (Score:3, Insightful)
Re:Shh... (Score:5, Insightful)
X.org, not Wayland. Wayland is still under development. Wayland devs must be elated that Mir has made the debate "Wayland vs Mir" rather than "Tried, trusted, works, and feature complete X.org vs Wayland."
X.org is not "feature complete" in any meaningful sense. It is incapable of doing the kind of GPU-accelerated, alpha-blended compositing that is expected of a modern user interface. Sure, you can get around most of this by ignoring all the X11 primitives and using X.org to blit bitmaps for everything, with all the real work done by other toolkits. But in that case, it's those other toolkits doing the heavy lifting, and X.org is just a vestigial wart taking up system resources unnecessarily.
Re:Shh... (Score:5, Informative)
This is all wrong. X has something called GLX, which allows you to do hardware-accelerated OpenGL graphics: GLX lets OpenGL commands be sent over the X protocol connection. When client and server are on the same system, the X protocol runs over Unix domain sockets, which is very fast, and MIT-SHM additionally provides shared memory for transferring image data, so there is none of the latency of network transparency when X is used locally. Only when applications are being used over a network do they need to fall back to sending data over TCP/IP. Given this, the benefits of having network transparency are many, and there is no downside, because an application run locally can use the Unix domain sockets, MIT-SHM and DRI.
X has also had DRI for years which has allowed an X application direct access to video hardware.
As for support for traditional X graphics primitives, these have no negative impact on the performance of applications which do not use them and use a GLX or DRI channel instead. It's not as if hardware-accelerated DRI commands have to pass through XDrawCircle, so the existence of XDrawCircle does not impact a DRI operation in any significant way. The amount of memory this code consumes is insignificant, especially when compared to the amount used by Firefox. Maybe back in 1984 a few kilobytes was a lot of RAM; that is when many of these misconceptions started, but the fact is, these issues were found with any GUI that would run on 1980s hardware. People are just mindlessly repeating a myth started in the 1980s which has little relevance today. Today, X uses far less memory than Windows 8 does, and the traditional graphics commands consume an insignificant amount that is not worth worrying about, and which is needed to support the multitude of X applications that still use them.
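The local-versus-network distinction described above is visible in how an X client interprets the DISPLAY variable: a display name with no host part (":0") gets a local Unix domain socket, while a name with a host part ("remote:0") gets TCP. Here is a rough sketch of that dispatch logic in Python; it mirrors Xlib's behaviour in spirit only, and is not the actual Xlib code:

```python
def x_transport(display):
    """Guess the transport an X client would use for a DISPLAY string.

    ':0' or ':0.0'  -> local Unix domain socket (/tmp/.X11-unix/X0)
    'host:0'        -> TCP to host, port 6000 + display number
    Real Xlib also handles cases like 'localhost:10.0' (ssh X11
    forwarding), where the "local" host is still reached over TCP.
    """
    host, _, rest = display.rpartition(":")
    num = int(rest.split(".")[0])        # display number before the screen
    if host in ("", "unix"):
        return ("unix-socket", "/tmp/.X11-unix/X%d" % num)
    return ("tcp", host, 6000 + num)

print(x_transport(":0"))          # local case: unix domain socket
print(x_transport("bigiron:1"))   # remote case: TCP, port 6001
```

The point the parent makes is exactly this fork: the slow network path is only taken when the display name actually names another host.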
Re:Shh... (Score:5, Insightful)
Today, X uses far less memory than Windows 8
Nice, you just compared a single process on one OS to the entire OS and its subprocesses of another. Totally fair.
How about you compare X to the Win32 Desktop Window Manager instead? Which is a lot closer, though still not exact since Windows has this mentality that GUI in the kernel is a good idea.
My point however is that your comparison is not really a comparison.
Re: (Score:3)
I also forgot to mention that X has had the X Composite Extension and X Render Extension, which have allowed alpha-blending operations for quite some time. Your information is a bit out of date.
Re: (Score:2)
You did mention hardware-accelerated compositing, and I wanted to clarify that the X protocol can indeed support this; it is mainly internal improvements in the X server that may be needed. You don't really need an entirely new windowing system for this.
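For reference, the core operation behind XRender-style alpha blending (and behind compositing window managers generally) is the Porter-Duff "over" operator. A minimal sketch on premultiplied-alpha pixels, which is the convention Render uses:

```python
def over(src, dst):
    """Porter-Duff 'over' on premultiplied (r, g, b, a) tuples in [0, 1].

    Each destination component is attenuated by the source's remaining
    transparency (1 - src alpha), then the source is added on top.
    """
    sa = src[3]
    return tuple(s + d * (1.0 - sa) for s, d in zip(src, dst))

# 50%-opaque red composited over opaque green (premultiplied values):
print(over((0.5, 0.0, 0.0, 0.5), (0.0, 1.0, 0.0, 1.0)))
# -> (0.5, 0.5, 0.0, 1.0): an even red/green mix, fully opaque
```

Hardware acceleration just means the server runs this per-pixel arithmetic on the GPU instead of the CPU; the math is the same.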
Re: (Score:2)
Think you need to watch Daniel Stone's presentation on why X11, well, sucks: https://www.youtube.com/watch?... [youtube.com]
Long story short, X11 has been hacked to add things like GLX and Composite, and these things essentially go around the X protocol. X is pretty much a complicated and poorly-working IPC mechanism nowadays. Yet even if you removed all the cruft, you'd be left with the fact that X makes a very poor IPC mechanism. Also, with GLX and compositing, X is no longer network transparent. It's network-capable, but it's not
Re: (Score:2)
I tunnel over ssh to a remote server that runs an X application which pops open a window on my workstation that is indistinguishable from any other window on my desktop. That includes cut/paste just working. Try that with any of the alternatives you suggested. Now try throwing Xpra into the mix. Good luck with that.
Yes, perhaps once Wayland quits hand-waving and spending all its time talking about how it's going to cram itself down everyone's throat, it might get some traction.
Re: (Score:2)
So you're saying the solution is to run X? I agree. Not sure why, given that I will have X running at all times, I would want to bother with Wayland.
Currently, I don't have to care what toolkit was used and I don't have to run a display server on a machine that doesn't have a monitor and may not even have a video card at all (why would it, it's racked in a lights out facility). I just ssh with X11 forwarding turned on and it just works. Unless or until Wayland can do that with the same complete lack of muss
Re: (Score:2)
Also, tear-free video seems to be one god-awfully big workaround for limitations in X. The stated goal of Wayland was a system in which "every frame is perfect, by which I mean that applications will be able to control the rendering enough that we'll never see tearing, lag, redrawing or flicker." I doubt he'd say that if X had no tearing, lag, redrawing or flicker, which seem like rather huge deficiencies to me.
Re: (Score:2)
Except it doesn't really come up in X except under conditions Wayland doesn't even handle. It's easy to design a car that never has a fatal crash, just leave out the wheels and engine.
Just one question... (Score:4, Insightful)
Just one question. If the display server is of such minimal importance in the big scheme of things, then why did Canonical develop their own?
Display server matters for some people (Score:2)
True and false. (Score:2)
The reason so much attention is being put on display servers is as a distraction from the real problems, such as the fact that so much attention is being put on the display servers. They're not the weak point, there are a lot of them, and one exercise that remains THROUGHOU
Re: (Score:2)
I believe the main difference is that remote X is rootless. People like that. Somehow they forget that remote X is non-persistent, uselessly slow, and that session integration is almost entirely missing.
Do not misunderstand me, I would love a persistent rootless remote display with decent performance and session integration. Alas, X is not it.
So why did Apple and Google toss it? (Score:5, Interesting)
Re: (Score:3, Insightful)
Not only that, but each example (NeXT/OSX and Android) are undeniable success stories.
X11 has severe limitations, like a cramped network abstraction layer that can't share windows or desktops with multiple people. Supposedly the NX server gets around this, but the X11 people haven't shown any interest in adopting the NX features.
People need displays that look like their computer is operating smoothly (instead of barfing text-mode logs here and there when transitioning between users, runlevels, etc).
People ne
Re: (Score:3, Interesting)
And both are now incompatible ecosystems. Do we want to repeat this nonsense?
Re: (Score:2, Troll)
Single-digit market share really isn't "hugely successful". MacOS based on Unix really isn't that much more successful than MacOS NOT based on Unix. Whatever "success" this alleged Unix has had really has nothing to do with its Unix-ness. What meagre success it has had has come from being tied to a well-established brand name that's about as far away from Unix as you can get.
What's the point of a "successful Linux" if it abandons all of the useful design ideas of Unix?
At best, something like that is redundant.
Re:So why did Apple and Google toss it? (Score:4, Interesting)
WRT OSX, there is history. Back in the days of NeXT, Jobs & co. decided to use Display PostScript for a variety of reasons. A few of them: X back then was huge, ungainly and a total beast to work with given the limited memory and cycles available (the NeXTstation used a 25MHz 68040); their team was never going to be able to morph X into an object-oriented platform, which NeXT definitely was; Display PostScript was Adobe's new hotness; the NeXT folks could write drivers for Display PostScript that worked with the Texas Instruments signal processor (TM-9900? I forget), which was truly amazingly fast at screen manipulation; and the X architecture didn't fit well with either Display PostScript or that processor.
In 2001 I had a NeXTstation that I added some memory and a bigger disk to. The machine was by then more than 10 years old. For normal workstation duties, it was faster than my brand new desktop machine due entirely to the display architecture. But compiling almost anything on that 25MHz CPU was an overnight task - I had one compile that ran three days.
Re: (Score:2)
Isn't Quartz based on Display PDF, though?
Re: (Score:2)
I haven't kept up, but it sez here: [wikipedia.org]:
It is widely stated that Quartz "uses PDF" internally (notably by Apple in Quartz's early developer documentation[5]), often by people making comparisons with the Display PostScript technology used in NeXTSTEP and OPENSTEP (of which Mac OS X is a descendant). Quartz's internal imaging model correlates well with the PDF object graph, making it easy to output PDF to multiple devices.[6]
Re: (Score:3)
It's the not-invented-here syndrome, plus the fact that Google and Apple want to create a fleet of applications that are totally incompatible with other platforms in order to lock users into their respective platforms. Obviously business and political reasons, and nothing to do with technical issues. X would have been a fine display platform for either, but then the platforms would be compatible with mainstream Linux distros and you would have portable applications, so your users wouldn't be locked into you
Slow News Day (Score:2)
Is there an actual story here, or it just about two different groups of open-source developers having a difference of opinion on whether display servers are important or not? The summary doesn't suggest this disagreement is having any real ramifications on Ubuntu/Kubuntu.
He's Right (Score:4, Insightful)
KDE vs. Who? (Score:2)
I've been a KDE user for a very long time and hated Gnome. Frankly, I hate Unity even more than Gnome (which is a lot). I've seen KDE do things that Microsoft can't, using less CPU and with overall better performance, and it's always been compatible with X. So now we have a next-gen X and Canonical wants to disperse the market. Nothing new there; they did it with Unity. Fragmentation is good for some people, and I have to wonder if Canonical gets paid to cause fragmentation? Sure, they have a product that is "their
wayland, systemd (Score:5, Interesting)
Figured systemd would get dragged into this.
One of the biggest problems with systemd is simply documentation. System administrators have a lot of learning invested in SysV and BSD, and systemd changes nearly everything. Changing everything may be okay, may even be good, but doing it without explanation is bad no matter how good the changes are. I'd like to see some succinct explanation, with data and analysis to back it up. Likely there is such an explanation and I just don't know about it, but the official systemd site doesn't seem to have much. I'd also like to see a list with common system admin commands on one side and the systemd equivalents on the other, like this one [fedoraproject.org] but with more. For example, to look at the system log, "less /var/log/syslog" might be one way, while in systemd it is "journalctl". To restart networking it might be "/etc/rc.d/net restart", while in systemd it's "systemctl restart network.service". Or maybe the adapter is wrongly configured, DHCP didn't work or received the wrong info, in which case it may be something like "ifconfig eth0 down" followed by an "up" with corrected IP addresses and gateway info.
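The crib sheet asked for above is easy to represent as a simple lookup. Here is a sketch built from the examples in the comment plus a couple of well-known equivalents (the SysV-side commands vary by distro, so treat the left column as illustrative):

```python
# Traditional admin command -> systemd equivalent.
# The systemd commands (journalctl, systemctl restart/status/enable)
# are real; the SysV-style left-hand forms differ between distros.
EQUIVALENTS = {
    "less /var/log/syslog":   "journalctl",
    "/etc/rc.d/net restart":  "systemctl restart network.service",
    "service sshd status":    "systemctl status sshd.service",
    "chkconfig sshd on":      "systemctl enable sshd.service",
}

def systemd_equivalent(cmd):
    """Look up the systemd counterpart of a traditional command."""
    return EQUIVALENTS.get(cmd, "no known equivalent")

print(systemd_equivalent("less /var/log/syslog"))   # journalctl
print(systemd_equivalent("/etc/rc.d/net restart"))  # systemctl restart network.service
```

A table like this, maintained officially and covering the long tail of commands, is essentially what the comment is asking the systemd project to provide.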
When information is not available, it looks suspicious. How can we judge whether systemd is ready for production? Whether it is well designed? Whether the designers aren't trying to hide problems, or letting their egos blind them to problems? To be brusquely told that we shouldn't judge it, we should just accept it, and indeed ought to stop whining and complaining and be grateful someone is generously spending their free time on this problem (because we haven't invested the time to really learn it ourselves and don't know what we're talking about) doesn't sit well with me.
Same goes for Wayland and Mir. Improving X sounds like a fine idea. But these arguments the different camps are having: get some solid data, and let's see some resolution. Otherwise they're just guessing and flinging mud. It makes great copy, but I'd rather see the differences carefully examined and decisions made, not more shouting.
I've been using Wayland and systemd for nearly a m (Score:2)
SailfishOS, running on the current Jolla device, is quite smooth and nice, in a way that my N9 (despite the slickness of the design of the UI) never was. Both were underpowered hardware for their times, but Wayland allows the kinds of GPU-accelerated and compositing-oriented display that allow for what people are increasingly used to from other OSes.
Now, in terms of systemd I'm more on your side, there's certainly a baseline of arrogance that the primary devs have shown. On the other hand, they seem sometim
Re:I've been using Wayland and systemd for nearly (Score:4, Insightful)
My main issue with systemd is that it is monolithic; it violates the fundamental Unix philosophy in a most egregious way, and whenever anyone comments on this, we are (to quote the GP) "brusquely told that we shouldn't judge it, we should just accept it, and indeed ought to stop whining and complaining and be grateful someone is generously spending their free time on this problem, because we haven't invested the time to really learn it ourselves and don't know what we're talking about".
We used to have separate, replaceable systems for each aspect of systemd - e.g. if you didn't like syslog, there was syslog-ng, or metalog, or rsyslog; each different and meant for a different purpose. Now, it's "all or nothing" - except that it's becoming progressively more difficult to opt for "nothing" because it's integrating itself into fundamental bits like the kernel and udev.
Re: (Score:2)
Wayland allows the kinds of GPU-accelerated and compositing-oriented display that allow for what people are increasingly used to from other OSes
No, it doesn't. That's nothing to do with Wayland at all. Wayland is just a compositor protocol that lets you manage pixel buffers (and signal when they are ready) and input devices.
Any hardware GPU accelerated stuff is supported by OpenGL or other parts of the stack, just like on Xorg.
Display server does matter (Score:5, Interesting)
Obviously, the display server does matter to users. If users cannot use a whole set of applications because they are not compatible with Distro X's display server, that is a problem for users. This can be addressed by distros standardizing around display servers that use the same protocol. It is also possible, though more complex, for distros using different display protocols to support each other's protocols by running a copy of a rootless display server that speaks the other's protocol. Relying on widget sets to support all display protocols is too unreliable, as we are bound to end up with widget sets which do not support some display protocols. Needless to say, it is best to have a single standard; it would have been easiest and best if Canonical had gone with Wayland and actually worked with Wayland to address whatever needs they had.
It's also true that a new display protocol wasn't really necessary. The issue with X was the lack of vertical synchronisation; X already has DRI, XRender, XComposite, MIT-SHM, and so on for other purposes. An X extension could have been created to give an application the timing of the display: the milliseconds between refreshes, the time of the next refresh, and so on. X applications could then use this timing information, starting their graphics operations just after the last refresh, and use an X command to place their finished graphics pixmap for a window into a "current completed buffer" for the window, allowing double buffering. This could be either a command providing the memory address, or a shared memory location where the address would be placed. All of the current completed buffers for all windows are then composited in the server to generate the master video buffer for drawing to screen. There is a critical section during which the assembly of the master video buffer occurs; any completed-buffer swap attempted by an application during that time would have to be deferred to the next refresh cycle. A new XSetCompletedBuffer could be created which would provide a pointer to a pixmap. This is somewhat similar to XPutPixmap or setting the background of an X window, but given that XPutPixmap might do a memory copy it may not be appropriate, since the point is to provide a pointer to the pixmap that the X server would use in the next screen redraw. Said pixmaps would be used as drawables for OpenGL operations, traditional X primitives, and such, so this scheme would work with all of the existing X drawing methods. The pixmaps are of course transferred using MIT-SHM; it's also possible to use GLX to do rendering server side. For X clients used over the network, GLX is preferable, as otherwise the entire pixmap for the window would have to be sent over the network.
The GLX implementation already allows GL graphics to be rendered into a shared memory pixmap. Currently however, some drivers do not support GL rendering into a pixmap, only a pbuffer, which is not available in client memory at all, however, the DRI/GEM stuff is supposed to fix this and the X server should be updated to support GLX drawing to a pixmap with all such DRI drivers.
Another issue is window position and visibility and how they relate to vertical synchronization. Simplistically, the refresh cycle can be broken into an application render period and a master render period. If the X server has a whole pixmap buffer for a window, it grabs a snapshot of the window visibility/position state at the beginning of the master render period and uses that to generate the final master pixmap by copying the visible regions of windows into the master buffer.
It can be a good idea to allow the option for applications to render only the areas of their windows that are visible; this saves CPU resources and also avoids needless rasterization of offscreen vector data. To do this, applications would need access to visibility data at the beginning of the application render period. Applications would then have to, instead of providing a single
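The buffer-swap scheme proposed above is entirely hypothetical (XSetCompletedBuffer is the commenter's invention, not a real X request), but the core rule, that a swap arriving during the server's master-composite critical section is deferred to the next refresh, is simple state-machine logic and can be sketched directly:

```python
class WindowBuffer:
    """Sketch of the proposed per-window 'current completed buffer'.

    Swaps requested while the server is inside its master-composite
    critical section are deferred until the section ends, so each
    composited frame reads a consistent buffer.
    """

    def __init__(self):
        self.current = None              # buffer the server composites from
        self.pending = None              # swap deferred until composite ends
        self.in_critical_section = False

    def set_completed_buffer(self, buf):
        # Client side of the hypothetical XSetCompletedBuffer request.
        if self.in_critical_section:
            self.pending = buf           # defer to the next refresh cycle
        else:
            self.current = buf

    def begin_master_render(self):
        self.in_critical_section = True
        return self.current              # snapshot used for this frame

    def end_master_render(self):
        self.in_critical_section = False
        if self.pending is not None:     # apply the deferred swap now
            self.current, self.pending = self.pending, None

w = WindowBuffer()
w.set_completed_buffer("frame-1")
assert w.begin_master_render() == "frame-1"
w.set_completed_buffer("frame-2")   # arrives mid-composite: deferred
assert w.current == "frame-1"       # this frame still shows frame-1
w.end_master_render()
assert w.current == "frame-2"       # swap lands for the next frame
print("deferred swap ok")
```

This is also, in essence, what double buffering with vsync gives you in Wayland-style compositors: the client never replaces a buffer the compositor is currently reading.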
I don't think you know what "refuted" means (Score:2)
Typical Canoncial (Score:2)
oh slashdot (Score:2)
Why aren't you linking to the blogpost, instead of the article quoting the blogpost?
Re: (Score:2)
From your comment about a shitty UI one can only conclude you have never used KDE.
Although better graphics would be nice calling them amateurish is rather silly.
Re:oh good (Score:5, Interesting)
Too easy to downmod you.
From your comment about a shitty UI one can only conclude you have never used KDE.
Although better graphics would be nice calling them amateurish is rather silly.
I actually see KDE as the best Linux desktop right now: fast, feature-rich and stable. However I recently watched an interesting criticism piece [youtube.com] regarding some funky and misleading behavior of this desktop environment. The user experience could be improved.
Re:oh good (Score:5, Insightful)
KDE 4 is great except for Akonadi, which killed Kmail.
Re: (Score:2)
it (KDE) could have a few fewer bugs, though ;)
Re: (Score:2)
1. New Folder
Already fixed; the dialog now shows "New Folder 1", "New Folder 2", etc.
2. New Text File
Fixed, see 1.
3. Rename Dialog
Not fixed, behaves the same way as in the video.
4. Copy file
Fixed; it now says "Paste one file"
5. Dialog
Not fixed, behaves the same way as in the video.
6. Trash/delete dialog
I disagree. Showing the user all options in the menu is a good thing. But it was fixed anyway: now Shift+Menu shows "Delete", and without Shift it shows "Move to trash". Bad change.
7. Open and Edit Trashed fil
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re:oh good (Score:5, Insightful)
Although better graphics would be nice calling them amateurish is rather silly.
Why? The KDE desktop looks like the state-of-the-art from say 1993. If I wanted my desktop to look like Xaw3d, I'd just fall through a time warp and go back there. At least the music was better.
I'm pretty happy with my KDE desktop, but I use it as a tool to get work done, not because it looks pretty.
I bought a hammer from the hardware store that looks almost exactly like the 1920's era hammer my great grandfather used (though the handle is fiberglass instead of wood), but it works well and gets the job done. Just because a desktop "looks" old doesn't make it useless. I tried Unity and Windows Metro and found them to be much less usable for my developer/operations tasks.
Re: (Score:2)
But the professionals have moved on to nail guns.
Re: (Score:3)
Good luck extracting a nail with your nail gun Mr. Professional.
Re: (Score:2)
Miss once, try again... miss twice, keep on shootin' - pulling nails is a waste of time, time is money, professionals aren't paid to save nails.
Actually, "professionals" will miss the stud with 3 of 4 nails that are supposed to hold the sheathing, and "keep on rollin' 'till the day is done." This is why owner-built houses survive hurricanes and the same design built by a contractor doesn't.
Re: (Score:2)
I don't know nuthin' about construction, but I know the sound of a hammer and the sound of a nail gun, and it's been a long time since I've walked by a construction site and heard the former. I have a feeling they have it worked out.
Re: (Score:2)
Next time look at what is hanging from the tool belt of nearly every woodworker on the site. They don't carry a hammer for decoration.
Re: (Score:2)
To continue the analogy, sometimes you gotta use the shell...
Re: (Score:2)
Indeed. Pretty graphics only get you so far.
@MightyYar - Re:oh good (Score:2)
and it's been a long time since I've walked by a construction site and heard [a hammer]
You should have walked past my place 4 weeks ago. Contractors were re-roofing my house and they used hammers. Made a good job too.
Re: (Score:2)
Did you have some kind of special roofing? I definitely hear the guns for asphalt. Not sure how they handle slate or metal.
Re: (Score:2)
Did you have some kind of special roofing? I definitely hear the guns for asphalt. Not sure how they handle slate or metal.
It is slate (artificial, but slate-like). I should think a nail gun would shatter it. But they also used hammers to fix the underlying battens.
Not sure what advantage a nail gun would bring unless someone is making something like cheap furniture in a mass-production factory, the gun fed by nails down a flexible tube. Have guys become such pussies that they cannot move their arms any more, or is it that the 'Elf & Safety Nazis are terrified someone will hit their thumbnail?
I suspect the "advanta
Re: (Score:2)
It's faster and you don't deal with loose nails. I think hammers are still a must, but real experts use the best tool for the job, not the one that best demonstrates their unique skills.
Re: (Score:2)
I wouldn't consider myself a KDE fanboy, having used it only for oh, like 3 years but I moved to it after some of that Unity/Gnome2/Gnome3/I-forget-the-details mess. Suddenly I found I could tweak things to my preference (nothing fanboyish, just being able to turn on editable paths, different views, etc. in the file explorer; a searchable "Start" button). I did find the default appearance ugly, but customized it (KFaenza icon-set, Smaragd window theme engine-thingy that lets me use a really nice Emerald win
Re: (Score:2)
Re:oh good (Score:5, Insightful)
Sure a DE is an acquired taste but it does have to be functional without being ugly.
Nor is there outside the commercial offerings any DE with such a well integrated package of applications.
That doesn't mean I don't see the attractiveness of something like Enlightenment, but compared to KDE it is seriously limited, both in options and looks.
Re: (Score:2)
Win32 GDI? You mean like metro? At least kde is more usable than that. If you mean the win2k desktop, you might have a point there, but kde can be configured to mimic that.
Re: (Score:2)
Re: (Score:2)
They do that deliberately just to keep you away.
Re:oh good (Score:4, Interesting)
Because Metro sure knocked the socks off of everything...
It points to a new direction; you know, one where the UI designers cut the tops off their skulls, take an ice cream scoop, remove about two thirds of the brains, and put the top of the skull back on.
Metro UI concepts are actually showing up in more and more places. The biggest problem with it was that Microsoft, in their wisdom, tried to force a touchscreen interface on desktop users. The interface itself isn't the problem, the lack of choice for the primary user was...
Re: (Score:3)
Windows 7 sure has a better working and cleaner UI
Eh, maybe? If I turn off all the Aero or whatever and make it look like 9x, I can live with that. The translucent borders, Big Button start menu and "pins," the god-awful switcher...7 is probably my only choice for Windows moving forward, if I have to have it, but it won't look like it does after a fresh install for very long.
Re: (Score:2)
Or maybe he's somebody who:
* cares about real-estate on his screen and the density of information displayed vs shininess. Those borders and "modern" task bar are huge!
* is interested in having a UI that responds in a timely manner rather than having pretty but utterly useless animations that make him wait half a second every time he clicks on something.
* wants to use less memory for caching pretty animations and more for the programs he's running
* wants his processor to be working on the task he has assigne
Re:Of course it matters (Score:5, Funny)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Well, there is also SurfaceFlinger and Quartz...
Re: (Score:3)
Because the toolkits (Qt, GTK, and others) don't provide a complete abstraction layer, at least not once your project gets to the point of doing anything 'fancy'. If all your application does is display some forms, fine; but for more complex stuff (window managers, media players with odd shapes and overlays, etc.) you have to interact with the display server or its APIs directly anyway.
More display servers means more code paths, and it's not easy for one developer to test all that; sure, they can have a bunch of VMs but
Re:How are these things related? (Score:5, Informative)
The whole point of all of this, X/Wayland/Mir, is getting closer to the video card without having to yank one's hair out whilst doing it. Why would one need closer interaction with the bare metal? If you've ever used Linux and seen tearing while moving windows around, then you've hit on one of the reasons why closer to the metal is a bit more ideal.
With that said, let's not fool ourselves and think, "OMG, they just want access to the direct buffers!" That wouldn't be correct. However, developers want an ensured level of functionality in their application's visual appearance. If the app shows whited-out menus for half a second, blink, and then there are your menu options, then something is very wrong.
It was pretty clear that with X, politically speaking, developers couldn't fix a lot of the problems, due to legacy and the foaming-at-the-mouth hordes that would call said developer out for ruining their precious X. You can already see those hordes in all the "take X and my network transparency from my cold dead hands" comments. It is, to a degree, those people, along with a few other reasons, that provided the impetus for Wayland. You just cannot fix X the way it should be fixed.
Toolkits understand that display servers, and pretty much the whole display stack in general, suck. Granted, there are a few moments of awesome, but they are largely outweighed by the suck factor; usually when you code an application, you'll note that you gravitate to the "winning" parts of the toolkit being used versus the pure suck ones. Qt has backends for all the OSes/display servers it supports, be that Windows, Mac, X11, and so on. Likewise for GTK+, though to a lesser extent, but that is what makes GTK+ a pretty cool toolkit. Because let's face it, no display stack is perfect at delivering every single developer's wish to the monitor. Likewise, no toolkit is perfect either. The GNOME and KDE people know this; they write specific code to get around some of the "weirdness" that comes with GTK+ or Qt. Obviously, that task is made slightly easier with Wayland and the way it allows a developer to send specifics to the display stack or even to the metal itself.
Projects like KDE and GNOME have to write window managers, and a lot of the time those window managers have to work around some of the most sucktacular parts of the underlying display server. However, once those parts are isolated, the bulk of the remaining work is done in the toolkit. So display servers matter a bit to the desktop environments, because they need to find all of the pitfalls of using said display server and work around them. Sometimes it can be as simple as a patch to the toolkit or the display server upstream. Sometimes it can be as painful as a kludge that looks like the dream of a madman; it all depends on how far upstream a patch needs to land to be effective, and how effective it would be for other projects all around.
That leads into the problem with Mir. Mir seems pretty oriented toward its own ends. If KDE has a problem with Mir that could be easily fixed with a patch to Mir, or horribly fixed by a kludge in KDE's code base, it currently seems that the Mir team wouldn't be so happy-go-lucky about accepting the patch if it could potentially delay Ubuntu or break some future feature unknown to anyone outside of Mir. Additionally, you have the duplicated-work argument as well, which I think honestly holds a bit of water. I fondly remember the debates over aRts and Tomboy. While I think it's awesome that Ubuntu is developing their own display server, I pepper that thought with, "don't be surprised if everyone finds this whole endeavor a fool's errand."
I think the NIH argument gets tossed around way too much, like it's FOSS McCarthyism. Every team has their own goals, and by their very nature that would classify them as NIH heretics. Canonical's idea is this mobile/desktop nexus of funafication, and Mir helps them drive that in a way that is better suited to them. That being said
Re: (Score:2)
You: "You just cannot fix X the way it should be fixed."
Reality: "... It's entirely possible to incorporate the buffer exchange and update models that Wayland is built on into X..." (Wayland FAQ)
And now?
Re: How are these things related? (Score:2)
Re: (Score:2)
So just make sure you don't remove useful functionality with your 'improvement'. You claim X has cruft nobody uses? Fine, mark it deprecated and see if anyone actually cares (you might be surprised what is used). If nobody cares, remove it from the next version.
As for the network transport, that is not some minor and obscure feature; it gets used all the time. For example, virt-manager uses it along with ssh for remote VM consoles. Sounds pretty useful and modern to me. If Wayland can't do it well, it's as
Re: (Score:2)
So just make sure you don't remove useful functionality with your 'improvement'.
Which is why X cannot be fixed the way it should be fixed. There is always going to be some group of people that uses some random function of X that is horribly inefficient. I can pretty much bet you a lunch that one can point to any random function of X and find some group of people that relies on it. Most of these functions just get in the way of the majority of users, who just want to use desktop applications. I mean, come on, who really uses XPrint? Or needs a non-rectangular window? I gue
Re: (Score:2)
If Wayland doesn't fit for you, don't move to it; it's that easy. But X11 is old, slow, and bloated, and if that's what you need, then go for it.
If the Wayland supporters weren't continually talking about making sure X goes away and forcing everyone onto Wayland, that would be fine.
They could also stand to learn a thing or two about IPC. Did you know that modern PCs access memory by sending a request over a very local network and receiving a reply? In some cases the messages are routed through a neighboring CPU. UNIX sockets use shared memory.
The sad part is that if Wayland would learn that, it might actually be a great thing.
I haven't heard a lot o
Re: How are these things related? (Score:2)
Re: (Score:2)
I don't think X11 is going away, the Wayland people do. They seem anxious to get rid of it fast and cram Wayland down my throat. They are so incredibly rude they don't even seem to understand why that might make people want them to fail.
Communication between app (client) and display server is IPC. Even if you use shared memory, it is IPC.
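A toy sketch of the point above (this is not any real display-server protocol; the socketpair and the "buffer ready" message format are invented for illustration): even when the pixel data itself sits in shared memory, client and server still exchange messages to synchronize, a round trip like any other IPC.

```python
import socket
import struct

# Toy model of client/display-server IPC (invented message format):
# even with pixels in shared memory, a "buffer 3 is ready" message
# still has to cross a socket, and the reply has to come back.
client, server = socket.socketpair()

client.sendall(struct.pack("!I", 3))              # client: buffer #3 is ready

(buf_id,) = struct.unpack("!I", server.recv(4))   # server reads the request
server.sendall(struct.pack("!I", buf_id))         # server acks it

(ack,) = struct.unpack("!I", client.recv(4))      # client: round trip complete
```

Whether the payload is the pixel buffer itself or just a handle to a shared-memory region, the synchronization still looks like this: it is message passing either way.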
Re: (Score:2)
Also, I like how you pointed out X tunneling over ssh, which has been shown time and time again to be hands-down the slowest method for remote connections into a system. Yes, it's nice in that it is easy on admin costs and takes very little to get up and running; however, that comes at the cost of being slow. Just to compare: doing Matlab via X11 over ssh versus (randomly grabbing a tool out of thin air that I know is a really bad choice) VNC, X11 over ssh takes close to about 35 seconds to finally see th
Re: (Score:2)
I have used VNC, but it fails hard at the whole seamless-integration thing. Same for RDP. Networks are getting faster and lower-latency every day.
There are some X apps that make really inefficient use of the X protocol. Fixing them wouldn't be a bad thing at all. Personally, I find the remote apps to respond quite nicely even over a cablemodem connection.
So, if there is this vastly superior way to do that exact thing, name it.
Re: (Score:2)
There obviously isn't any point. I don't think I'll ever be able to convince you personally that the feverish hang-ups you have with X are the exact same things that have played out on the X11 mailing lists for the last decade and a half and have gotten X11 nowhere. The whole XFree/X.org breakup was all politically motivated. Same thing here. There is just too much drag to worry about trying to fix X11; it just isn't worth the headache. Fixing X just causes more problems and more headaches. Mig
Re: (Score:2)
Point me to that superior way you speak of. You do actually know of one, don't you? You sure implied that you did by confidently claiming there is one.
My guess is you were bluffing and just got called. Lay those cards down and let's just see!
As for XFree/XOrg, the big difference is that it resulted in XOrg, a capable display system that worked much better than XFree.
There is a capability that I see as being well worth maintaining: the ability to operate over a network without a crazy Rube Goldberg setup invo
Re: (Score:2)
Feverish much?
There isn't a point, because you'll just sit here and come up with half-brained reasons as to why it is wrong. So why even go down that road to begin with? It'll just lead to a conversation that neither of us really wants to have, wouldn't you agree?
X11's tunneling isn't worth it to me and that's the way I feel about the matter, case closed. It obviously is something you are worth foaming at the mouth about, so if that's what tickles your fancy so be it. We just don't agree on how we'd go abo
Re: (Score:2)
Network transparency isn't a problem anyway. X isn't network transparent anymore for the vast majority of applications (no one uses Motif any more), it's only network capable. And just take a look at Windows: it works just fine over a network, much better in fact than Linux, thanks to RDP. I hate to praise Windows over Linux, but for doing GUI work over a network, Windows wins hands-down. Even better, RDP is an open protocol (or at least it has an open-source implementation, which is why you can remote
Re: How are these things related? (Score:2, Funny)
This is all horseshit anyway. Any decent windowing system would be implemented with ncurses.
Re: (Score:2)
You just cannot fix X the way it should be fixed.
Translation: It's protocol-oriented.
Wayland is the obvious choice
Really? It's protocol-oriented too (and is slow-progressing because of that).
Mir is library-oriented, so DEs will no longer paper over the ugly parts; instead they'll just fix the client library.
Re: (Score:2)
As you know, there are a lot of protocol-oriented implementations out there that are insanely useful, and their effectiveness is hardly ever challenged: for example, TCP as a generic example and NTP as a more application-specific one.
Being protocol-based isn't instantly a mark of slowness, just like using a hash map isn't a sign of slowness versus a binary tree. It is how it is used, and X has had so much tacked on, with so many extensions that need to be negotiated, that it's just turned into a really bad use of
Re: (Score:2)
Why? When Canonical tries to make its own OS instead of a Linux distro, they should not require others to do the work for them.