Linux Gets Kernel-Based Modesetting

An anonymous reader writes "Next month, when Fedora 9 is released, it will be the first Linux distribution with support for kernel mode-setting, which is (surprisingly) a feature end-users should take note of. Kernel-based mode-setting provides a flicker-free boot process, faster and more reliable VT switching, a Linux BSOD, and, of most interest, much-improved suspend/resume support! The process of moving the mode-setting code from the X.Org video driver into the Linux kernel isn't easy, but it should become official with the Linux 2.6.27 kernel, and the Intel video driver can already use this technology. Phoronix has a preview of kernel-based mode-setting covering more of this new Linux feature, accompanied by videos showing the dramatic improvements in virtual terminal switching."
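As a rough illustration of how this looked in practice (a sketch, not a definitive recipe: the `modeset` parameter is the one the early Intel KMS work used, but exact spellings vary by kernel and driver version), kernel mode-setting was an opt-in on these early kernels:

```shell
# Sketch: on early KMS-capable kernels, the Intel DRM driver exposed
# modesetting as an opt-in module parameter.
modprobe i915 modeset=1        # load the Intel driver with KMS enabled

# The equivalent opt-in at boot time, on the kernel command line:
#   i915.modeset=1

# Once the kernel owns the display, its devices show up in sysfs:
ls /sys/class/drm/
```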
  • KGI was a damn good system - somewhat overshadowed by GGI and other similar efforts, though, as the argument at the time was that the kernel shouldn't do what userspace can do. KGI might have stood a better chance if development had been faster, if some significant card could not have been made to work correctly in userspace, or if there had been a demonstrable vulnerability.

    As I recall, there was also the argument that graphics in the kernel risked instability that would impact the system and be hard to trace. I can sympathize with this argument a bit more, but in the end it is true of all hardware drivers - hence the efforts of microkernel and exokernel developers to move such stuff into isolatable containers. It's a good idea, not terribly efficient because of all the message passing, but I can understand the reasoning.

  • by Anonymous Coward on Saturday April 19, 2008 @09:32PM (#23131638)
    If not, what is the least commented-on story of all time? Maybe starting from 1999.
  • by techno-vampire ( 666512 ) on Saturday April 19, 2008 @10:16PM (#23131906) Homepage
    As I recall, there was also the argument that graphics in the kernel risked instability that would impact the system...


    How true that is! I once worked at a shop where everybody was on NT4, and my box kept blue-screening because of a bug in the graphics driver. Putting drivers like that in kernel space just to get a little more speed is downright stupid, especially when you consider that NT 4 was largely marketed as a server OS where graphics weren't exactly important. I can't help but feel, given my experience, that this isn't exactly the best of ideas.

  • by laffer1 ( 701823 ) <luke&foolishgames,com> on Saturday April 19, 2008 @11:42PM (#23132338) Homepage Journal
    Apple uses a hybrid kernel called XNU actually. http://en.wikipedia.org/wiki/XNU [wikipedia.org]
  • by RAMMS+EIN ( 578166 ) on Sunday April 20, 2008 @12:19AM (#23132522) Homepage Journal
    ``KGI was a damn good system - somewhat overshadowed by GGI and other similar efforts, though, as the argument at the time was that the kernel shouldn't do what userspace can do.''

    There is a point to that. On the other hand, it is questionable whether, in Linux, userspace _should_ be able to do all the things needed to drive the graphics card. Userspace directly accessing hardware and reading and writing arbitrary memory locations?

    On the gripping hand, the reason this is unsafe is only that the languages we use are unsafe. They don't guarantee that processes don't access things that aren't theirs. Essentially, we solve this by imposing a sort of dynamic type checking: we run these unsafe processes in a restricted mode, where the hardware limits their access to memory and I/O ports. Of course, sometimes you _do_ need more than this restricted access, and that's what the kernel is for. We trust the kernel to do it right. But now we've introduced indirection: to access the hardware, a process needs to go through the kernel, which (hopefully) restricts the process to doing only benign things to the hardware. This is, of course, slower than it could be, especially on x86, where switching from user mode to kernel mode is quite an expensive operation. This is the real reason why microkernels are slow.

    An alternative would be to have the compiler perform or insert the checks that, in current systems, are performed by the kernel and the hardware at run-time. This way, processes don't have to run in restricted mode and go through the kernel anymore, because they aren't going to do any of the things the kernel would prevent them from doing anyway. Of course, this requires a rather safer type system than C's, and it shifts trust from the kernel to the compiler - which raises issues about how you can know that the code you want to run was indeed compiled by a trustworthy compiler. However, these issues can be solved, and you end up with a system that can be more modular _and_ more efficient.
  • by Kjella ( 173770 ) on Sunday April 20, 2008 @01:13AM (#23132708) Homepage

    as the argument of the time was that the kernel shouldn't do what userspace can do.
    Well, from what I can read in the description, this has absolutely zero benefit for servers, so I figure the discussion in 1998 went a little differently. KDE 1.0 was released in July 1998, Gnome 1.0 wasn't out either, and things like a "smooth graphical booting process" probably weren't a major issue, to say the least. There's always a balance between creating layers and hindering features - like ZFS, for example, which breaks the traditional file system model. At the time, I think it was probably right for Linux, as they had more important things to focus on.
  • Re:What for? (Score:3, Insightful)

    by MrHanky ( 141717 ) on Sunday April 20, 2008 @05:38AM (#23133464) Homepage Journal
    Then you don't have a recent ATI card. The free driver will hard-crash my system when playing video through Xv (this is with an experimental driver pulled from the Git tree), and the proprietary driver will freeze the system on log-out (with K/GDM). An X11 error can easily take down Linux, even if you don't use DRI.
  • by ThePhilips ( 752041 ) on Sunday April 20, 2008 @06:45AM (#23133602) Homepage Journal

    [...] the reason this is unsafe is only that the languages we use are unsafe.

    Yeeees. Right. Absolutely.

    This has nothing to do with sloppy programming and the bunch of incompetent monkeys who got themselves in front of a keyboard because it's cool.

    The difference between a good developer and a bad developer is that a good developer always listens to user feedback. For system developers, application developers are the users. For application developers, end-users are the users. For hardware developers, system developers are the users.

    NT4 in that respect set the all-time record for shittiest programming ever. M$ with Win2k/WinXP/Vista was doing essentially one thing: making the system simpler - a system that wouldn't drive developers insane.

    Most previous Linux attempts at moving parts of graphics into the kernel were hindered partly by lack of documentation and, most importantly, by the oversized egos of some developers calling for revolution. But in fact there was no revolution, and nothing extraordinarily new was done: people were trying to randomly move stuff around between kernel and user space, nothing more. GGI was a classic example: you could have done the same with X, but it was never tried or done, because its developers were dead set on changing everything - no compromises allowed. Recrafting stable kernel interfaces for the sake of a highly experimental project? That will never happen. The fact that most of the mode-setting development is now done under the hood of X.Org is a sign that it is going to be done right - on stable ground, not the shaky foundation of a highly experimental library.
