What to Expect from Linux 2.6.12 (505 comments)

apt-get writes "Saw this Linuxworld report from the annual Australian Linux conference, Linux.conf.au, in Canberra last week. The article outlines some of the new features we can expect in the 2.6.12 kernel release, including support for trusted computing and Security-Enhanced Linux. The kernel developers are also working on improving the 'feel' of the Linux desktop, with inotify for file managers and event notification so hardware 'just works'. Unfortunately no release date other than 'sometime soon' is given."
This discussion has been archived. No new comments can be posted.

What to Expect from Linux 2.6.12

Comments Filter:
  • Yay! (Score:3, Funny)

    by qwertphobia ( 825473 ) on Wednesday April 27, 2005 @08:16AM (#12357590)
    does this mean I can trust my computer now?

    we've been growing apart since it started cheating on me and got a virus :-(
  • Trusted Computing (Score:3, Interesting)

    by khujifig ( 875862 ) on Wednesday April 27, 2005 @08:18AM (#12357606)
    Is the inclusion of trusted computing a good thing here? Many people in the /. crowd didn't seem to like the idea of its inclusion in Windows...

    Was its inclusion in the kernel by choice?
    • by Anonymous Coward
      Trusted Computing is just a technology and, like probably all technology, it can be used to do something good (make computers safer) or something bad (take control away from the user to enforce DRM, for example).

      As the former seems to be what the inclusion in the Linux kernel is about, I think it's a good thing. And remember, it's a free system, so you'll always have the choice not to use it, or to use only the parts you want.
      • And remember, it's a free system, so you'll always have the choice not to use it, or to use only the parts you want.

        Sure, you'll have a choice if you decide to compile your own kernel. But I don't see grandma or uncle Bob downloading theirs from kernel.org and compiling it from source. I don't oppose its inclusion in the tree, but I think that if money is involved (the RIAA and MPAA have deep pockets), it won't be too difficult to persuade some of the more user-friendly distros to compile a stock kernel with
        • As much as I would like it to be so, I don't know any grandmas or uncle Bobs who install their own Linux machine. All the grandmas and uncle Bobs that I know don't even know how to install Windows (not that it's necessarily easier, but that is the general perception). They either get the OS shipped with the PC or get their grandson/cousin to install it.
          Those who get it with the PC will most probably end up with Windows anyway. The others will have the support of the half-geek to either install a distribution t
      • Re:Trusted Computing (Score:4, Interesting)

        by madaxe42 ( 690151 ) on Wednesday April 27, 2005 @08:48AM (#12357851) Homepage
        do something good (make computers safer)

        Call me silly, but how is 'making computers safer' a good thing? I don't *need* protecting from the big bad wide world; there are enough intrusions into my life to make it 'safer' as it is - and each and almost every one of them pisses me off.
        • by zootm ( 850416 )
          If you don't need protection, don't turn it on. I assume that the kernel segments that deal with trusted computing can be compiled out, and you'll be fine. As for the fact that it'll probably be included by default on many systems, I have to say that I don't consider "safe by default" a fallacy in any way.
          • If you don't need protection, don't turn it on.

            And then don't bother connecting to the internet either, because no web-site operators will let you view their pages without Trusted Computing enabled.

            Otherwise, you might republish their copyrighted works without compensation... that's just too much of a risk. Or you could execute many other forms of abusive programs to disrupt the experiences of their other users.

            Really, untrusted PCs are just too dangerously unpredictable to allow out in public.
        • Yeah, you might have good judgment, but is your box an island?
          In the case of cars, traffic lights, while an admitted PITA, do make commuting possible.
          Or are you one of those just-put-in-a-roundabout Brits? :)
    • by Anonymous Coward
      M$'s trusted computing is aimed at the MPAA/RIAA: "You can trust M$ not to allow users access to your data, even though it's on their computer."

      Linux trusted computing is aimed at users/admins: "You can trust that User A can't muck with User B, especially if User B is root!"
      • Linux trusted computing is aimed at users/admins: "You can trust that User A can't muck with User B, especially if User B is root!"
        That doesn't convince me. Why aren't the existing mechanisms (cpu running user code in protected mode) sufficient for that?
    • Re:Trusted Computing (Score:5, Informative)

      by Anonymous Coward on Wednesday April 27, 2005 @08:33AM (#12357716)
      It's a different thing. The 'trusted computing' in Windows is all about DRM, preventing you from getting access to data on your machine.

      The 'trusted computing' in Linux 2.6.12 is about being able to run a process that is restricted in what it can do (read and write to a pipe, essentially), so that you can run an arbitrary downloaded binary without worrying that it will do bad things (think: distributed.net, SETI, etc.).
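
      What's described here matches the seccomp facility merged in this timeframe. A minimal sketch of how a process might opt in, assuming a kernel built with CONFIG_SECCOMP; the PR_SET_SECCOMP constant below comes from later kernel headers, so treat the details as illustrative:

        /* Enter seccomp "strict" mode: afterwards only read(), write(),
         * _exit() and sigreturn() are allowed; any other syscall kills
         * the process. */
        #include <stdio.h>
        #include <unistd.h>
        #include <sys/prctl.h>

        #ifndef PR_SET_SECCOMP
        #define PR_SET_SECCOMP 22   /* value from <linux/prctl.h> */
        #endif

        int main(void)
        {
            if (prctl(PR_SET_SECCOMP, 1) != 0) {   /* 1 = strict mode */
                perror("prctl(PR_SET_SECCOMP)");
                return 1;
            }
            /* Untrusted work goes here, talking to the outside world
             * only through descriptors opened before the prctl(). */
            const char msg[] = "crunching safely\n";
            write(STDOUT_FILENO, msg, sizeof msg - 1);
            _exit(0);
        }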
    • Re:Trusted Computing (Score:4, Interesting)

      by paulpach ( 798828 ) on Wednesday April 27, 2005 @08:50AM (#12357866)

      Yes. Trusted computing is a very good thing. These are some of the things you can expect:

      When you compile or install software, you can sign it. The computer will not execute anything that is not signed. This stops many viruses and trojan horses, so you can trust that you authorized everything the computer executes. It is just a security layer, like the no-execute bit.

      The important thing here is that the user is in full control of the system. The user gets to sign the packages, or he can choose to use a distro that signs them for him. He chooses what the computer runs and what it doesn't. There is no third party that limits what the user can or cannot execute.

      Besides signing software, TCPA (the chip that is going to be supported by the kernel) does encryption in hardware. So you can have hardware-accelerated encryption/decryption, and your CPU will be free to do other things. This is not much different from hardware-accelerated 2D & 3D graphics. Again, this is a very good thing.

      Many people oppose trusted computing because they confuse it with DRM (digital rights management). DRM is technology that limits the right to open media. Trusted computing does not limit your rights at all. The confusion arises from the fact that Microsoft plans to use TCPA (trusted computing) to implement DRM.

      TCPA support will be totally optional. You can enable/disable it when compiling the kernel. You normally want it enabled to take advantage of hardware-accelerated encryption, but if you are still paranoid (read: misinformed) and think there is some evil corporation that is going to use TCPA to limit your rights, you can just turn it off.

      There is a nice article [ibm.com] from IBM that clarifies the issue.
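
      To make the signing idea above concrete: what follows is not the kernel mechanism, just a user-space sketch of "refuse to run anything the owner hasn't approved", using a SHA-1 allowlist in place of real signatures. The allowlist path and format are invented for illustration; build with -lcrypto.

        /* Toy launcher: exec a binary only if its SHA-1 digest appears
         * in an owner-maintained allowlist (one lowercase hex digest
         * per line). Illustrative only. */
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <openssl/sha.h>

        static int digest_hex(const char *path, char *out)
        {
            unsigned char md[SHA_DIGEST_LENGTH], buf[8192];
            SHA_CTX ctx;
            size_t n;
            FILE *f = fopen(path, "rb");

            if (!f)
                return -1;
            SHA1_Init(&ctx);
            while ((n = fread(buf, 1, sizeof buf, f)) > 0)
                SHA1_Update(&ctx, buf, n);
            fclose(f);
            SHA1_Final(md, &ctx);
            for (int i = 0; i < SHA_DIGEST_LENGTH; i++)
                sprintf(out + 2 * i, "%02x", md[i]);
            return 0;
        }

        int main(int argc, char **argv)
        {
            char hex[2 * SHA_DIGEST_LENGTH + 1], line[128];
            FILE *list;

            if (argc < 2 || digest_hex(argv[1], hex) != 0)
                return 1;
            list = fopen("/etc/approved-digests", "r"); /* hypothetical */
            if (!list)
                return 1;
            while (fgets(line, sizeof line, list)) {
                line[strcspn(line, "\n")] = '\0';
                if (strcmp(line, hex) == 0) {
                    execv(argv[1], argv + 1);  /* approved: run it */
                    return 1;                  /* execv failed */
                }
            }
            fclose(list);
            fprintf(stderr, "refusing unapproved binary %s\n", argv[1]);
            return 1;
        }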

      • Many ActiveX controls have to be signed for them to run, and they have no trouble getting signed by very high-profile companies such as VeriSign. Signing files doesn't prove anything. To get real security, you should run code in a sandbox. When a sandbox is properly implemented, you don't have to worry about whether the code is signed or not.
        • Re:Trusted Computing (Score:3, Informative)

          by Anonymous Coward
          (Posting anonymously, 'cuz I work at VeriSign, though not in any of the cert-related depts...)

          Free clue -- VeriSign's raison d'être is not to convince end users that Business X is "trustworthy", only to verify whether or not someone representing themselves as Business X is in fact Business X. We verify the connection(s) between a real-world/meatspace identity and an electronic identity.

          If We Install Spyware, Inc. applies for an SSL cert for www.weinstallspyware.com, our job is to verify that the guy re
      • These are some of the things you can expect:

        Already, we see software design flaws. The very fact that you list multiple unrelated things tells us that it's not a clean system, and that it ignores the traditional Unix dictate: "Do one thing, and do it well".

        You list the user blocking unsigned programs from running, and you also list hardware-accelerated encryption. Those are two entirely different features, and there is no good reason why they should be part of the same system. If I desire either of those,
      • Re:Trusted Computing (Score:4, Interesting)

        by Alsee ( 515537 ) on Wednesday April 27, 2005 @09:30PM (#12367279) Homepage
        Who modded this up? It is wrong on almost every point.

        When you compile or install software, you can sign it. The computer will not execute anything that is not signed.

        Which has absolutely nothing to do with Trusted Computing.

        If you want to do that, you can do it right now with a trivial change to the EXE loader code. Hell, you can do it on a Win98 machine without a patch - all you need to do is redirect the EXE and similar filetype associations to point to your own little stub code that does the check. You can obviously do it with a trivial patch to Linux or DOS or any system.

        Trusted Computing has nothing to do with signing files. In Trusted Computing, any code's hash *is* its "signature" and controls what data it may decrypt. It is that hash which is reported over the internet. No need for any signature from anyone. You can certainly add signatures for various purposes on top of the Trust system, but it really has nothing to do with Trusted Computing itself.

        The important thing here is that the user is in full control of the system.

        Sure - in the sense that if he does not "voluntarily" turn over total control to the Trust system and to other people, then it is impossible to install and run the new Trusted software, impossible to read or use any Trusted files, and impossible to view any Trusted website, and potentially in about 5-8 years he may be denied any internet access. The Trusted Computing Group has announced a project for routers that would deny an internet connection to any computer that is not locked down in Trusted-compliant mode. In fact, at the Washington DC Global Tech Summit, the president's Cyber Security Advisor called on ISPs to plan on making exactly this sort of system a mandatory part of their Terms of Service for internet access. I can dig up a link to this speech if you don't believe me.

        So short-term refusal to submit to Trusted Computing and give up control of your computer just means you can't use a few new pieces of software and you won't be able to buy the RIAA's and MPAA's new DRM download sales. However, the problem gets worse over a couple of years. Refusal to submit means you get locked out of more and more software, more and more files, and more and more websites. Eventually you may be effectively banned from the internet unless you 'voluntarily' activate the Trust chip.

        But yes, you are always 'free' to leave the Trust system off. You are 'free' to crawl into a hole in the ground and use nothing new and connect to no one. You are 'free' to choose to get locked in a prison cell instead of giving up control of your computer.

        So you can have hardware-accelerated encryption/decryption

        Lie.

        To be fair I assume *you* are not lying, merely that you are honestly echoing a lie that has been told to you.

        Trust chips are cheap, low-horsepower silicon. Running crypto on them is SLOWER than on even the lowest of ordinary low-end CPUs. In fact, a single basic crypto operation may take a full second or more to run on these very low-capability Trust chips.

        If you want crypto acceleration, great, get a standard hardware crypto accelerator. They've been around forever and they have absolutely nothing to do with Trusted Computing.

        Many people oppose trusted computing because they confuse it with DRM

        You could get EVERY claimed benefit to the owner of Trusted Computing with identical hardware where the owner is given a printed copy of his master key. The fundamental design requirement of Trusted Computing is that the owner is forbidden to know his master key, and the specification requires that the chip self-destruct and destroy your data if you attempt to get at your master key.

        The *ONLY* purpose of forbidding the owner to know his own key is to enable DRM enforcement and DRM-type functionality, to restrict the owner. Being forbidden to know your own key has the sole effect of restricting what you c
    • by Ford Prefect ( 8777 ) on Wednesday April 27, 2005 @08:52AM (#12357887) Homepage
      Is the inclusion of trusted computing a good thing here? Many people in the /. crowd didn't seem to like the idea of its inclusion in Windows...

      I think the complaints about locking machines down are more in who gets the keys...

    • Comment removed (Score:5, Interesting)

      by account_deleted ( 4530225 ) on Wednesday April 27, 2005 @08:58AM (#12357922)
      Comment removed based on user account deletion
      • Re:Trusted Computing (Score:5, Interesting)

        by Minna Kirai ( 624281 ) on Wednesday April 27, 2005 @11:58AM (#12360083)
        Trusted computing as a whole is a good thing, with one component that is a very bad thing: remote attestation.

        Nuclear bombs are on the whole good things, with one component that is a very bad thing: widespread death.

        You can't admit that the single motivating factor of a system is bad, but then say that the afterthoughts and bonus utilities somehow make up for it. And if you don't believe remote attestation was the driving factor to create Trusted Computing, just look at its history of sponsors.
  • Feature creep (Score:4, Insightful)

    by Dancin_Santa ( 265275 ) <DancinSanta@gmail.com> on Wednesday April 27, 2005 @08:19AM (#12357615) Journal
    I know I'm going to rub a few feathers the wrong way, but I think this kind of feature creep is actually good for the Linux kernel.

    The more features we can get into kernel mode, the less we need to rely on "chaining" and other Unix-way solutions and we can think more about applications and OS services as "whole units".

    And since the majority of installations of this latest version will be on desktops, the more hardware support, the better the hardware support, the more seamless the hardware support, the better.

    It would be nice to see some componentization of the kernel to allow for easy stripping of unnecessary features, but as the kernel will stand, the features are all necessary.
    • Re:Feature creep (Score:5, Insightful)

      by Tim C ( 15259 ) on Wednesday April 27, 2005 @08:30AM (#12357697)
      You got one thing right - you *are* going to rub a lot of feathers the wrong way saying that. I'm not saying I agree or disagree with the idea, but understand that having lots (and lots) of little tools that do one thing only, that can be chained together is the "Unix way".

      For a lot of people, that's a lot of the appeal of Unix and Unix-like systems.
      • Well, "solutions" are often built of lots of modules (functions) chained together... with a wrapper app that makes them appear to be one cohesive lump.
        It makes far more sense to have the functions available outside of the larger solution, so they can be adapted to serve other solutions as well. But that's not to say you can't have large monolithic-looking apps that internally use the same smaller components to get their work done.
    • And since the majority of installations of this latest version will be on desktops, the more hardware support, the better the hardware support, the more seamless the hardware support, the better.

      What on earth makes you say that? Linux desktop installations aren't suddenly going to ramp up just because we have a new kernel, and changes to the kernel alone will not make Linux suitable for the desktop.

      Linux's major market is still on the server. It's only now really starting to make the move from the
    • Re:Feature creep (Score:3, Informative)

      by JohnFluxx ( 413620 )
      I'm not sure what your post is saying.

      Hardware support has nothing to do with feature creep (directly, anyway - indirectly it affects underlying device systems like USB, SCSI, IDE, etc.).

      Seamless hardware support (HAL etc.) is a new feature, so you have a point there.

      The inotify thing is a replacement for dnotify (I know you didn't mention it, but it was in the article), so it doesn't really add any features, just fixes bugs.

      The whole thing about relying less on chaining... I just didn't get it.
      Can you give any example where
    • Re:Feature creep (Score:2, Informative)

      by Anonymous Coward
      "It would be nice to see some componentization of the kernel to allow for easy stripping of unnecessary features, but as the kernel will stand, the features are all necessary."

      Erm...you can do that now and have been able to for most (all?) of the last decade.

      At runtime, there are modules. At compile time, whole sections of code can be removed.

      The Linux kernel is only monolithic at the lowest levels; it's not a microkernel message-passing system, and that's not going to change. That's one of the reasons
    • Re:Feature creep (Score:3, Insightful)

      by Vo0k ( 760020 )
      Linux is supposed to be fun. That's the most important part about it. Not "good for mission-critical applications", not "suitable for enterprise solutions", not "desktop-ready", not "lower in TCO than competition", not "faster", not "more free", not "less bloated", not "more robust", not "scalable", "stable", "secure", "efficient", "competitive", "easy". Just fun. This is the ultimate priority and all the rest results directly from it. Make it a corporate monster and you take all the fun away from it, so le
  • What this means (Score:5, Informative)

    by JohnFluxx ( 413620 ) on Wednesday April 27, 2005 @08:20AM (#12357618)
    Just for those not in the know..

    Inotify is a replacement for dnotify. With both you can watch a file for changes. You can even watch a directory for changes. However, with dnotify you couldn't recursively watch a directory for changes. To do so required essentially 'opening' each folder, and you quickly use up the maximum number of files you can open.

    With inotify it still isn't possible to directly watch a directory recursively, but example code for doing so is given and doesn't have the same problems (a minimal sketch of the interface follows below). One distro uses this for watching /home recursively. I don't remember why or which. :)

    As for the notification thing - that's part of HAL, and means USB pens, cameras, etc. should be 'auto-detected' and the user can be notified and asked what to do automatically.
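
    For the curious, a minimal sketch of the watch interface, written against <sys/inotify.h> as it later stabilized in glibc (the 2.6.12-era interface was reached slightly differently, so treat the details as illustrative):

      /* Watch /tmp and print each create/delete/modify event. */
      #include <stdio.h>
      #include <unistd.h>
      #include <sys/inotify.h>

      int main(void)
      {
          char buf[4096] __attribute__((aligned(8)));
          int fd = inotify_init();

          if (fd < 0 || inotify_add_watch(fd, "/tmp",
                            IN_CREATE | IN_DELETE | IN_MODIFY) < 0) {
              perror("inotify");
              return 1;
          }
          for (;;) {
              ssize_t len = read(fd, buf, sizeof buf);  /* blocks */
              if (len <= 0)
                  break;
              for (ssize_t i = 0; i < len; ) {
                  struct inotify_event *ev =
                      (struct inotify_event *)(buf + i);
                  printf("mask 0x%x on %s\n", ev->mask,
                         ev->len ? ev->name : "(watched dir)");
                  i += (ssize_t)(sizeof *ev + ev->len);
              }
          }
          return 0;
      }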

    • One distro uses this for watching /home recursively. I don't remember why or which. :)

      It's probably for Beagle [gnome.org], for indexing and searching.
    • Re:What this means (Score:4, Informative)

      by JohnFluxx ( 413620 ) on Wednesday April 27, 2005 @08:26AM (#12357664)
      Oh, just to reply to myself... dnotify had this problem where if you watched a file, say on a CD, that file was 'opened' and hence the CD couldn't be ejected because it was in use.

      inotify fixes this.

      (waiting 2 mins between posts... sigh)
    • As for the notification thing - that's part of HAL, and means USB pens, cameras, etc. should be 'auto-detected' and the user can be notified and asked what to do automatically.

      This should be taken care of by hotplug, but I have an interesting problem with Ubuntu 5.04.

      The first time a USB storage device is connected to the system Gnome detects and automounts the disk. Unplugging the disk removes the icon from the gnome desktop.

      The next time the device is connected gnome does not mount the device. The only

      • I don't know what fixed the problem, but Fedora Core 2 had the same issue for me; it works quite differently (and correctly) in FC3.
      • Re:What this means (Score:3, Interesting)

        by dtfinch ( 661405 ) *
        I tried plugging a laptop hard drive into a USB adapter and then into a Windows desktop so I could recover the drive (the laptop was dead). It recognized it as a USB mass storage device, but did not give it a drive letter. I took a look in the Disk Management control panel. It saw the drive and its partitions, and acknowledged that there was no drive letter. I right-clicked the partition, and the option to assign it a drive letter was greyed out. So I tried the diskpart command-line tool. It said that the drive
      • Re:What this means (Score:4, Informative)

        by tialaramex ( 61643 ) on Wednesday April 27, 2005 @10:04AM (#12358542) Homepage
        hotplug isn't enough

        The hotplug system is part of the OS, running as root, and is intended to do things like insert driver modules, pump firmware around, and set permissions. This is useful even on a server, although it's more important for a laptop or desktop machine. It doesn't do anything to your desktop directly, though...

        HAL uses DBUS to notify the user's desktop software about these exciting events so that it can do something appropriate. The desktop doesn't have dangerous privileges (so it's unlikely to accidentally format your main SCSI drive instead of the freshly inserted USB flash) and is able to interact with the user through pop-ups and by making icons appear in file managers, etc.

        This system (Hotplug + HAL + DBUS) replaces earlier systems where desktop software polled for any interesting changes every few seconds. The new system is event driven, using resources only when they're needed, and should hopefully be more powerful too.
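
        A rough sketch of the listening end of that chain: an unprivileged process picking HAL's DeviceAdded signals off the system bus with libdbus. Interface and signal names are HAL's as documented at the time; error handling is pared down to the minimum.

          /* Print the UDI of every device HAL announces on the system
           * bus. Build against libdbus-1. */
          #include <stdio.h>
          #include <dbus/dbus.h>

          int main(void)
          {
              DBusError err;
              DBusConnection *conn;

              dbus_error_init(&err);
              conn = dbus_bus_get(DBUS_BUS_SYSTEM, &err);
              if (!conn) {
                  fprintf(stderr, "bus: %s\n", err.message);
                  return 1;
              }
              dbus_bus_add_match(conn,
                  "type='signal',interface='org.freedesktop.Hal.Manager'",
                  &err);

              while (dbus_connection_read_write(conn, -1)) {  /* block */
                  DBusMessage *msg;
                  while ((msg = dbus_connection_pop_message(conn))) {
                      const char *udi;
                      if (dbus_message_is_signal(msg,
                              "org.freedesktop.Hal.Manager", "DeviceAdded")
                          && dbus_message_get_args(msg, NULL,
                              DBUS_TYPE_STRING, &udi, DBUS_TYPE_INVALID))
                          printf("device added: %s\n", udi); /* UI hook */
                      dbus_message_unref(msg);
                  }
              }
              return 0;
          }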
    • Re:What this means (Score:3, Insightful)

      by Ford Prefect ( 8777 )
      Inotify is a replacement for dnotify. With both you can watch a file for changes. You can even watch a directory for changes.

      I've seen the older dnotify thing at work in KDE, and even that seemed to work much better than the equivalent in Mac OS X on my iBook. I've frequently saved a file from Safari, then gone to attach it in Mail, only to find it's not present in the file selection dialogue box (which I opened after saving the file). A quick click on the desktop makes things update, but it's bloody annoy
  • inotify (Score:2, Funny)

    by Anonymous Coward
    'iNotify' Apple about this release and let's see what they have to say about 'iT'.
  • by UnderAttack ( 311872 ) * on Wednesday April 27, 2005 @08:20AM (#12357620) Homepage
    I think these changes are nice. But what Linux needs is a rethinking of the way device drivers are integrated. Bundling them all with the kernel will just no longer work (have you tried configuring a kernel these days?). What I am looking for is a way to use the same driver (aka 'module') in different kernels without having to recompile all over again, and the ability to compile a driver without having the complete kernel source installed.
    • 'Have you tried configuring a kernel these days?' - no, my distro does it for me.

      I don't see what the problem is. My distro has all the drivers compiled for me. What use case do you have other than compiling your own kernel for the sake of it?

      On a practical level, Linus has said many times that he won't do this because it would require freezing the internal kernel API. While this might sound good to an outsider, you only have to consider how much, say, the USB structure has been reorganised to realis
      • by pkphilip ( 6861 ) on Wednesday April 27, 2005 @09:18AM (#12358112)
        I think it will be useful to have a system whereby drivers can be loaded without requiring the entire kernel to be compiled.

        Granted, most distributions do ship with as many of the drivers as possible, but I have found myself in a spot a few times when the Linux kernel did not have the drivers for something fairly critical that was needed during installation - for instance, I am trying to install Linux onto my AMD64 machine, but none of the Linux kernels (including 2.6.11) support the southbridge chipset on my motherboard, and so Linux cannot detect the hard disk on my computer... which means I cannot install Linux on the machine now.

        I installed XP on the same machine without a problem - just popped in the device driver CD and the hard disk was immediately recognized.

        It would be great to have that facility on Linux as well - changed your graphics card? Just pop in the driver CD, install the driver, and you are ready to go.
    • by Anonymous Coward

      I was just reading the latest Kernel Traffic [kerneltraffic.org] and it hit me how much of a flux the driver model seems to be in. Constantly.

      Microsoft Windows seems to have had a stable driver interface since at least Win2K (probably NT4 too). The weird thing is that eschewing binary compatibility, like Linus likes to do, really ought to make it easier to stabilize a model - I mean, they have all the upsides with none of the downsides.

      I really don't care personally -- I don't write drivers -- but isn't it a bit odd that th

    • by Anonymous Coward

      I've been an avid user of Linux for a long time.

      The days of compiling a custom kernel are over, except for people who like playing with the latest features, or Gentoo users.

      Not because it's impractical, just because there is no point.

      If you have a high-quality distro (SUSE, Mandrake, Debian, Ubuntu, Fedora, etc.) then the distro people are quicker to apply kernel patches to fix security issues, test them for bugs, and release updated kernels than what you can normally get through kernel.org.

      The performance adva
      • When I was younger I loved downloading the next kernel, going through every single option, reading the help, and deciding if I wanted it.

        Great fun :) Led to me contributing a tiny bit to the kernel (I have a 3-line patch in there still! :) )

        I don't these days... perhaps I should. Great for learning, even if I don't produce a working kernel, heh.
        It would be fun to play with SELinux and everything. I'm such a geek... ;)
  • by mi ( 197448 ) <slashdot-2017q4@virtual-estates.net> on Wednesday April 27, 2005 @08:21AM (#12357625) Homepage Journal
    Can Linux, please, implement the kqueue (PDF) [freebsd.org] interface?

    Also, how about growing files with mmap? Currently one cannot mmap() beyond the end of a file on Linux...

    • IANACC (I'm not a C coder), BUT according to ftruncate [gnu.org] it would seem that you can do just that:

      "However, sometimes it is necessary to reduce the size of a file. This can be done with the truncate and ftruncate functions. They were introduced in BSD Unix. ftruncate was later added to POSIX.1.

      Some systems allow you to extend a file (creating holes) with these functions. This is useful when using memory-mapped I/O (see Memory-mapped I/O), where files are not automatically extended. However, it is not port
      • You will not be able to mmap() the space "created" by ftruncate(). That's my gripe. I don't want the trouble of maintaining my own buffer(s) and write()ing it/them out.

        I want to mmap() way beyond the possible size of the result (quite feasible, especially on 64-bit platforms) and just write to that memory. Once I'm done, I'll ftruncate() the file to whatever length it ends up being.

        On BSD the method works fine. On Linux it does not :-(

        On Solaris (8 and 9 -- not sure about 10) mmap() is even worse, though
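
        For reference, the standard Linux workaround is the reverse order: pre-extend with ftruncate(), mmap() the whole range, then trim at the end. A minimal sketch (file path invented; whether this answers the parent's objection on their kernel is another question):

          /* Pre-extend, map, write through memory, trim to final size. */
          #include <fcntl.h>
          #include <stdio.h>
          #include <sys/mman.h>
          #include <unistd.h>

          int main(void)
          {
              const off_t max = 1 << 20;   /* generous upper bound */
              int fd = open("/tmp/out.dat", O_RDWR | O_CREAT | O_TRUNC, 0644);

              if (fd < 0 || ftruncate(fd, max) < 0)      /* 1: grow file */
                  return 1;

              char *p = mmap(NULL, (size_t)max, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
              if (p == MAP_FAILED)
                  return 1;

              /* 2: write straight through the mapping, no write() calls */
              int used = snprintf(p, (size_t)max, "hello, mapped world\n");

              munmap(p, (size_t)max);
              ftruncate(fd, (off_t)used);                /* 3: trim */
              close(fd);
              return 0;
          }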

    • For file descriptor events, Linux has a better implementation than kqueue, called epoll [xmailserver.org]. It's better because it works with all types of file descriptor (thanks to using the same kernel functions as poll/select internally), not just the subset documented in the kqueue man page. Which means you can use epoll in a generic replacement for select/poll, which you can't quite do with kqueue.

      kqueue can do other things, including aio which is useful, and it is marginally more efficient due to fewer system calls f
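
      For the curious, a minimal epoll loop of the sort described above, echoing stdin; epoll_create()/epoll_ctl()/epoll_wait() are the Linux 2.6 interface:

        #include <stdio.h>
        #include <unistd.h>
        #include <sys/epoll.h>

        int main(void)
        {
            int epfd = epoll_create(16);   /* size is only a hint */
            struct epoll_event ev = { .events = EPOLLIN,
                                      .data.fd = STDIN_FILENO };

            if (epfd < 0 ||
                epoll_ctl(epfd, EPOLL_CTL_ADD, STDIN_FILENO, &ev) < 0) {
                perror("epoll");
                return 1;
            }
            for (;;) {
                struct epoll_event ready[8];
                int n = epoll_wait(epfd, ready, 8, -1);  /* block */

                for (int i = 0; i < n; i++) {
                    char buf[256];
                    ssize_t len = read(ready[i].data.fd, buf, sizeof buf);
                    if (len <= 0)
                        return 0;            /* EOF or error */
                    fwrite(buf, 1, (size_t)len, stdout);
                }
            }
        }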

  • Amount of changes (Score:3, Informative)

    by spineboy ( 22918 ) on Wednesday April 27, 2005 @08:22AM (#12357631) Journal
    I'm surprised to see such a fine-grained version bump (2.6.11 -> 2.6.12) carry all of these changes - some sound pretty big. This really sounds more like a larger version bump, e.g. 2.8. I guess it's debatable, since what constitutes a version change is such a grey area.

    But all in all, these new improvements sound great:
    - address space randomization for defence against buffer overflow attacks and remote script kiddies
    - Reiser 4, Xen support, software suspend, trusted computing support, latency improvements and improved kernel-space notification
    WOW - lots o' stuff.

    • I think the metric so far has been that changes likely to make upgrading from one stable point release to the next difficult, or likely to require a few iterations of integration and testing, were deferred until the next development series was forked.
  • by Salk ( 17203 ) on Wednesday April 27, 2005 @08:24AM (#12357647)
    This seems like a good thing to me. It's one of the advantages of Linux not being driven by a need to produce revenue.
  • by cluge ( 114877 ) on Wednesday April 27, 2005 @08:28AM (#12357672) Homepage
    The current Linux kernel is pretty amazing if you think about it. It's running on everything from OS/390s right down to cell phones, with features for everything in between. This flexibility generally means that the kernel has a lot of untested combinations. That's a potential problem.

    The kernel needs a team of people that specifically tries to break it. Right now kernel testing is haphazard at best. By devoting a team of people (just like the developers) whose sole purpose in life is to break the kernel, we (the community) will improve the security and quality of future Linux kernels. It will also improve the quality of code going into the kernel.

    The new code sounds very good - but the linux development community needs some hackers to break stuff.

    Cluge
  • Some time soon... (Score:3, Insightful)

    by Progman3K ( 515744 ) on Wednesday April 27, 2005 @08:32AM (#12357711)
    Let's keep it that way!
    As long as the developers release it when it's done, and not according to some abstract schedule, we'll have the best operating system there is.

  • "Kernel advances such as position independent executables, non-executable memory regions, stack smashing protection and execution capabilities are introduced. Implementations such as PAX and exec-shield are compared." Now if they can just get those last few kernels to execute properly, we will have created flawless popcorn!
  • Are some drivers for my Promise TX4000 IDE controller so I can upgrade to a 2.6 kernel and benefit from the better software RAID...
  • SELinux has AFAIK been included in 2.6 for a long time already. What's new in 2.6.12? The article is pretty light on details.
    • It's been available for 2.6 yes, but that does not mean it's been included in the main distribution package... which it hasn't been.
  • Essential links.... (Score:5, Interesting)

    by ssj_195 ( 827847 ) on Wednesday April 27, 2005 @08:39AM (#12357772)
    ... for people wishing to know more about the possible ramifications of Trusted ("Treacherous"...?) Computing:

    Ross Anderson's Critique [cam.ac.uk]

    IBM's Rebuttal [google.co.uk]

    Trusted Gentoo [gentoo.org]

    IBM's rebuttal does a decent job of allaying some of the fears - for example, it states that it will not prevent you from running any OS & programs you wish to on your own computer (which, for the record, I believe - witness the Trusted Gentoo project and e.g. this [linuxworld.com.au] link). They state that their approach to Trusted Computing is not particularly well-suited to DRM, and on the face of it, I agree - there seems to be little attempt at restricting the user of a computer with the TPM from doing what they want. However, in my opinion, as a base for an utterly crippling DRM regime, distributors simply could not ask for a better setup, as I'll argue a little later.

    So to recap, it seems that if you are running Trusted hardware, there are no restrictions on what you can do on your computer in isolation; you can install Linux, run any number of Open Source apps, etc. But the keyword here is in isolation, and it is here that the dangers of Trusted Computing are revealed. For you see, Trusted Computing enables the usage of remote attestation, wherein a server may request a hash of all software currently running on your computer. This hash is, for all intents and purposes, unforgeable, and if you disable your TPM (as IBM stress that you can, and again for the record, I see no reason to disbelieve them), no hash will be sent. The server may then assess this hash of software (or note that no hash has been provided, in which case it may well treat your computer as Untrusted) and decide, based on what software you are running, to simply not serve you with whatever material you requested - for example, it may decide that it will not deliver MP3s to your computer unless it knows for a fact that the receiving application is one that is known to encrypt the content as soon as it is received (so that e.g. it simply cannot be viewed while not running in Trusted mode) and which will take every step to ensure that once received, the unencrypted content never leaves your machine (e.g. by being written to CD, e-mailed, etc.). As you can imagine, the above scenario is not at all far-fetched, as the **AA / other media distributors are positively *creaming* themselves at the thought of stamping out casual file-sharing or even making backups for your own use in some of your other devices.

    So we are left with the situation where someone who does not use Trusted hardware (and is thus unable to respond to attestation requests) or those who do run Trusted hardware but whose software fingerprint is not deemed acceptable by the server will simply not be granted access to certain material, rendering such people at a big disadvantage. And it's no good buying hardware free from Trust chips from China or such places on the "black market"; this offers no advantage at all as Trusted hardware, as mentioned, does not stop you using your computer the way you want in isolation; the problem only occurs when you try to interact with other computers.

    So far, this sounds unpleasant but not too bad (although I would urge you to read Anderson's linked essay for some more imaginative and serious abuses), but if we allow ourselves to follow the slippery slope, we end up at the state where ISPs will not allow your computer to access the internet at all (for surfing, e-mailing, anything) unless you are running Trusted hardware and software. Obviously, the social, political and legal barriers to this occurrence are non-trivial, but we've all seen ridiculous Acts qu

    • I'm buggered if I can find an answer to this, but if anyone is using Konqueror 3.4 with famd,

      No, I doubt anybody is using famd. At least, someone who uses removable media (like CD-ROMs) can't very well run it, because it will keep directories open and prevent unmounting.

      Maybe once linux 2.6.12 brings out the new inotify things, famd will become tolerable to run continually, and it'll start getting bugfixes in.

      PS. I am only 30% joking.
    • by swillden ( 191260 ) *

      The offered advantages are, in my opinion, fairly weak - you can eliminate online cheating in multi-player games, and media companies are more likely to allow downloads of materials (DRM'd up the wazoo, of course).

      The real advantages appear primarily in corporate environments. Using hashes plus remote attestation to report precisely what version of what OS you're running, in an unforgeable way, is theoretically possible, but, IMO, impractical. It also requires a "secure" BIOS that cannot be flashed wit

      • by OeLeWaPpErKe ( 412765 ) on Wednesday April 27, 2005 @11:30AM (#12359711) Homepage
        Did you read what you just said?

        Next time you find out by email your boss is about to fire you because you're e.g. colored, you will not be able to forward that mail to the authorities, and your boss will be able to destroy ALL traces of that email from his home.

        Next time the police really needs data on a person's computer it will not be possible to extract it, because "TPM" will prevent it.

        And... next time Microsoft needs all the internal documents from your company, they will just open their Explorer, which just happens to be implicitly trusted by your company's software, and it can do two things: read the documents, and make the documents AND any backup copies you may have inaccessible.

        Adobe will be able to do this for pdfs, your bank for your bank statements, ...

        THAT is what TPM is about.
  • Heh. (Score:2, Funny)

    by ggvaidya ( 747058 )
    so hardware 'just works'

    Begun, the Just Works wars have ...
  • Latency and preempt (Score:4, Interesting)

    by Anonymous Coward on Wednesday April 27, 2005 @08:52AM (#12357884)
    Apparently, according to some posts on the Linux Audio User list, the latency in native 2.6.12 is as good as the patched 2.4 for audio use.
    This is great news for all of us using Linux for audio. It's also a pretty mean feat, as the 2.4 low-latency patches were a little bit brute-force compared to the 'correct' method in 2.6 of fixing all the problem spinlock areas in the kernel, a much harder task.
    Now all we need is to get the RT LSM module into the main kernel. (It allows non-root users real-time scheduling without messing about; it's not vital for performance but nice for usability.)

    I have not tried 2.6.12 myself yet, but have got great results with unpatched 2.6.11 kernels.
  • As someone who was there, I can tell you that the highlight of the conference was the speech by Eben Moglen of the FSF, and the double standing ovation that followed.
  • by hanssprudel ( 323035 ) on Wednesday April 27, 2005 @09:08AM (#12358023)
    I realize that it is probably paid for by IBM as part of their campaign to try to dupe people into thinking that the DRM vehicle they call "trusted computing" (remember: that is "trusted" as in "other people can trust your computer to control you") is something benign. However, implementing "TC" in Linux feels like a gigantic waste of time: does anybody here REALLY think that the proprietary DRM applications that are the ONLY REASON WHY WE WOULD NEED "TC" are ever going to be ported to Linux?

    Do you see the DRMed "music stores" (it is more like a barter: "give us your money and control over your computer, and we'll let some Britney and Fiddy come from your speakers!") falling over themselves to run on Linux? Do you think that is because Linux doesn't support "TC", or because those companies couldn't possibly care less about Linux as a platform? I'll give you three guesses. And the ENTIRE POINT of "TC" is to make it impossible for us to reverse engineer and write our own replacements for those applications - so by definition we can forget about that alternative.

    All I can say is, I hope they had fun implementing it, and that they feel happy about all the people who believe the astroturfing that "TC" isn't the Trojan Horse of DRM.

    "TC" is DRM is the tool of closed networks, closed source, a closed society, and a closed future. People who believe it will coexist with Linux are so naive that it would be quaint if it wasn't so fucking scary...
    • What is interesting is that "Trusted" used to be a label applied to systems like Trusted Solaris that implemented mandatory access controls (similar to what SELinux does for Linux). Which version of Trusted computing are they talking about? Mandatory access controls or the DRM nonsense?
    • Your post is as incoherent and paranoid as it is long.

      The problem with what you understand as Trusted Computing is that someone else gets the keys. They can decide what your computer can run and what it can't. Obviously this is bad and justifies the acute paranoia from which you seem to be suffering.

      With the Linux implementation, you get the keys. So you can sign all of the executables you normally use and tell the kernel to run only them. Anything unsigned (e.g. trojans, rootkits, etc.) won't run.

      It's a
  • by t35t0r ( 751958 ) on Wednesday April 27, 2005 @10:09AM (#12358600)
    Every day I see a new bug on the ieee1394 mailing list. There are some serious issues with FireWire on Linux. It is nowhere near as mature as it is on WinXP or Mac OS X. dmesg spits out lots of errors, and sometimes my drives unmount themselves when I transfer 50 GB+ (ext3/reiser were massacres; XFS was slightly better). Even with the latest kernel these problems persist.
      There are some serious issues with FireWire on Linux. It is nowhere near as mature as it is on WinXP or Mac OS X.

      No doubt I'm opening myself up to a Troll/Flamebait mod, but...

      FreeBSD's FireWire support is much better than Linux's. FreeBSD had FireWire support before Linux, and it was considered stable and released in the default kernel before Linux even had its unstable FireWire drivers available as an option, IIRC.

      Having good firewire support leads to other interesting developments too, like the ability t

  • Go to the source (Score:4, Interesting)

    by Corbet ( 5379 ) on Wednesday April 27, 2005 @10:11AM (#12358614) Homepage
    Should you be curious, I've posted the slides to my talk on LWN.net [lwn.net].
  • by Jagasian ( 129329 ) on Wednesday April 27, 2005 @10:18AM (#12358678)
    One feature that isn't talked about much, but is very popular amongst gamers, is the configurable USB mouse polling rate. For years it has been available as a kernel patch, but now it has finally been included in the kernel [linux.dk]. This means no more recompiling your kernel just to increase your mouse polling rate from 125 Hz to 500 Hz. It can now be set from your boot loader or from the command prompt.

    Why is this so great? Well, the typical 125 Hz polling rate for USB mice is noticeably less smooth than a 500 Hz rate, whether you are using your mouse in games or in a desktop app. For this reason many people preferred to use PS/2 mice, as they could be polled at up to 200 Hz. Now, with this new feature, PS/2 can be retired. Get yourself a high-resolution USB optical mouse and set the polling rate to 500 Hz.

    You can feel the difference.
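
    Assuming the merged form matches the long-standing patch, the knob is usbhid's mousepoll module parameter, an interval in milliseconds: booting with usbhid.mousepoll=2 (or loading the module with modprobe usbhid mousepoll=2) polls every 2 ms, i.e. 500 Hz. The parameter name here is taken from the patch; check your kernel's documentation for the exact spelling.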
  • 286 ? (Score:3, Funny)

    by ultranova ( 717540 ) on Wednesday April 27, 2005 @12:35PM (#12360562)

    When I saw this story on the front page, it had 286 comments. Very appropriate, since the purpose of "Trusted Computing" is to turn the clock back to the bad old days.

  • Many people are complaining about what Trusted Computing can/will be used for. Quit whining, for two reasons:

    First, Linux is open source, so you can modify or disable whatever you want. Unlike with a binary kernel, you can remove code you don't like, and the rest of the kernel will work without it (if you remove it cleanly). In other words, it's not being forced upon you by the OS distributors. If a company decides to make software that requires it, that will be their decision to make and their problem to solve.

    Second, TC has uses other than the oft-cited "make sure the computer only has $OMINOUS_ADJECTIVE software on it", for Orwellian values of $OMINOUS_ADJECTIVE such as "permitted", "approved", and so on. In fact, Trusted Gentoo is setting up a system that uses the TPM (Trusted Platform Module--"the chip") to make sure your kernel and bootloader haven't been tampered with, and to keep your SSH keys from being compromised. "Trusted" simply means that there is an uncompromisable encryption and verification (signing) system in the computer. It can be used for good or evil. Linux gives you that choice.
