
Linux Kernel 2.6.30 Released

diegocgteleline.es writes "Linux kernel 2.6.30 has been released. The list of new features includes NILFS2 (a new, log-structured filesystem), a filesystem for object-based storage devices called exofs, local caching for NFS, the RDS protocol (which delivers high-performance reliable connections between the servers of a cluster), a new distributed networking filesystem (POHMELFS), automatic flushing of files on renames/truncates in ext3, ext4 and btrfs, preliminary support for the 802.11w drafts, support for the Microblaze architecture, the Tomoyo security MAC, DRM support for the Radeon R6xx/R7xx graphic cards, asynchronous scanning of devices and partitions for faster bootup, the preadv/pwritev syscalls, several new drivers and many other small improvements."
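Of the items in that list, the new preadv()/pwritev() syscalls (vectored I/O at an explicit file offset) are the easiest to demonstrate. A minimal sketch via Python's os wrappers, which expose these syscalls on Linux from Python 3.7 onward (the helper name is mine):

```python
import os
import tempfile

def vectored_roundtrip() -> bytes:
    """Gather-write two buffers with one pwritev() call, scatter-read them
    back with one preadv() call, and return the reassembled bytes."""
    fd, path = tempfile.mkstemp()
    try:
        # pwritev: write both buffers in a single syscall, at offset 0,
        # without moving the file descriptor's own position.
        os.pwritev(fd, [b"hello ", b"world"], 0)

        # preadv: read back into two mutable buffers, again at offset 0.
        b1, b2 = bytearray(6), bytearray(5)
        os.preadv(fd, [b1, b2], 0)
        return bytes(b1) + bytes(b2)
    finally:
        os.close(fd)
        os.unlink(path)
```

The point of the pair is combining readv()/writev()-style scatter-gather with pread()/pwrite()-style positioned I/O in one syscall, which previously required a seek or multiple calls.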
This discussion has been archived. No new comments can be posted.
  • DRM? (Score:4, Informative)

    by corsec67 ( 627446 ) on Wednesday June 10, 2009 @09:44AM (#28278743) Homepage Journal

    Why would DRM be listed as a "feature"?

    Oh, wrong kind of DRM?

  • by TheGratefulNet ( 143330 ) on Wednesday June 10, 2009 @09:51AM (#28278843)

    different DRM. this isn't 'rights mgmt' drm.

    sometimes, 3 letters can mean different things.

  • Re:DRM? (Score:5, Informative)

    by sakdoctor ( 1087155 ) on Wednesday June 10, 2009 @09:55AM (#28278893) Homepage

    The Direct Rendering Manager (DRM) is a component of the Direct Rendering Infrastructure, a system to provide efficient video acceleration (especially 3D rendering) on Unix-like operating systems, e.g. Linux, FreeBSD, NetBSD, and OpenBSD.
    It consists of two in-kernel drivers (realized as kernel modules on Linux), a generic drm driver, and another which has specific support for the video hardware. This pair of drivers allows a userspace client direct access to the video hardware.

    I assume it's this. Either that, or linux now has Direct response marketing in the kernel.

  • Re:DRM? (Score:4, Informative)

    by Stoian Ivanov ( 818158 ) on Wednesday June 10, 2009 @09:57AM (#28278915)
Direct Rendering Management - this DRM is not the bad one
  • DRM for Trolls (Score:4, Informative)

    by chill ( 34294 ) on Wednesday June 10, 2009 @09:57AM (#28278919) Journal

    The Direct Rendering Manager (DRM) is a component of the Direct Rendering Infrastructure, a system to provide efficient video acceleration (especially 3D rendering) on Unix-like operating systems, e.g. Linux, FreeBSD, NetBSD, and OpenBSD.

    It consists of two in-kernel drivers (realized as kernel modules on Linux), a generic drm driver, and another which has specific support for the video hardware. This pair of drivers allows a userspace client direct access to the video hardware.

From Wikipedia.

    Karma Whoring FTW!

  • by zevans ( 101778 ) <zacktesting.googlemail@com> on Wednesday June 10, 2009 @09:57AM (#28278935)

    If you're using 2.7.x Intel xorg drivers you NEED this kernel. Anyone struggling with weird freezes, font corruption, and various other troubles - turns out most of these problems weren't in the Intel drivers at all, but in the GEM and DRI code in the kernel. Mine's been rock solid since RC5 for stability, and RC8 finally fixed the problem with fonts under UXA.

  • by harryandthehenderson ( 1559721 ) on Wednesday June 10, 2009 @10:05AM (#28279027)
    Direct Rendering Manager [wikipedia.org]
  • by fbjon ( 692006 ) on Wednesday June 10, 2009 @10:07AM (#28279069) Homepage Journal
    This is something quite different and exciting: a log-structured file system, for storing your files on dead trees.
  • by mrpacmanjel ( 38218 ) on Wednesday June 10, 2009 @10:10AM (#28279101)

Have wireless "issues" been fixed with this release?

    I have a laptop with generic realtek rt2500 wifi hardware.
    For many kernel releases I have had to compile separate drivers (the legacy serialmonkey ones) because the "stock" drivers are woefully unstable.
    I either lose my connection, it is painfully slow (I have tried the "rate 54" fix), or I cannot reconnect to my network at all.

    I don't mind compiling separate drivers (a huge benefit of open source and Linux) but I am concerned about how long I will be able to keep doing this (e.g. something changes in the kernel that makes the "external" driver break - in fact, active development of the legacy drivers has ceased - http://rt2x00.serialmonkey.com/wiki/index.php/Main_Page [serialmonkey.com]).

    I know I should not be moaning about this, but this issue has been around for ages and seems to affect a lot of hardware.

    This is my only niggle with Linux and I am grateful for everything. Computing has become much more interesting and fun again.

    Huge thanks to Linus and the kernel developers.

  • by Vu1turEMaN ( 1270774 ) on Wednesday June 10, 2009 @10:19AM (#28279247)

    When they say "Support for rt3070 driver for recent RaLink Wi-Fi chipsets", they really mean support for RT2870, RT2770, RT307X, RT3572 chipsets (they're all the same, with just features enabled or disabled, or signal strength improved between them).

This was the one last thing for me to fully switch over to Linux. Netgear and a lot of other Wireless-N USB adapters use these chipsets, and they are the best around.

    Previously, the method of installing this driver was the largest pain in the ass I've ever had to go through as a linux noob (http://ubuntuforums.org/showthread.php?t=960642) and I'm so very very glad to see that this chipset is now supported.

    The reason it was so hard is that the normal controlling app for the USB device has many advanced features you normally don't see on a wireless adapter (act as a router, full cisco network compatibility, etc etc).

  • Re:POHMEL (Score:3, Informative)

    by Anonymous Coward on Wednesday June 10, 2009 @10:21AM (#28279269)

And Evgeniy Polyakov (the POHMELFS dev) sounds like a Russian name. I guess he knows.

    in soviet russia file systems name hangovers after you

  • by peppepz ( 1311345 ) on Wednesday June 10, 2009 @10:29AM (#28279423)

No kernel modesetting in 2.6.30 for anything but Intel chips.

    There is some work in progress [phoronix.com] for ATI chips, but nothing in the mainline kernel.

    In the meantime you can use uvesafb in the current kernel to get a framebuffer console if you like it. But you will get a bad vt switching experience.

  • by wolrahnaes ( 632574 ) <sean.seanharlow@info> on Wednesday June 10, 2009 @10:32AM (#28279467) Homepage Journal

    It's the combination of a bit of NIH plus the freedom that Linux brings to a programmer. If you know enough C to not break things horribly and can operate Google, you can create a filesystem. There are also hundreds of proprietary filesystems from older hardware running other OSes, and Linux supports a number of those thanks to users of those older systems developing drivers for them.

    I'd bet that the vast majority of filesystems supported by Linux are rarely if ever used, and when used they're operated in read-only mode to retrieve data from old disks.

There are still a number, probably in the low teens, of filesystems in active use on modern Linux systems. Those are typically chosen either for compatibility with other platforms (FAT and its derivatives, for example: no one sane would choose FAT when other options are available, but it's so widely supported that often there is no other option) or for specific job requirements (at one point I ran XFS on my file server because it supported growing the FS while mounted and seemed to be the best choice at the time for a box primarily handling large files).

    So I guess after all that, yes, it is that hard to get it right, because the definition of "right" varies. Some jobs might want a filesystem to just be incredibly fast with a certain type of data and rely on a nice RAID controller for reliability and caching; others might want the filesystem to handle everything and allow the controller to be dumb. SSDs bring an entirely different set of needs to the table, and a filesystem that was laid out to be fast on disk might have serious problems on some SSDs.

  • by morgan_greywolf ( 835522 ) on Wednesday June 10, 2009 @10:40AM (#28279585) Homepage Journal

    Because one filesystem isn't optimal for all cases?

Exactly. You wouldn't use a journaling filesystem (ext3, JFS, XFS) on an SD card. In networked environments, some filesystems are optimized for general use (CIFS, NFS), some are optimized for a clustered environment (GFS, VMFS), and others are optimized for a distributed environment (the Andrew File System, Coda). Log-structured filesystems are a new technology that maximizes write throughput, something that is key to optimizing speed in write-heavy environments; this is as opposed to conventional filesystems, which are optimized for randomly reading and writing files in place.

    You wouldn't necessarily want a log-structured filesystem in a database environment, for example, because the performance hit from the extra seeks that are necessarily a part of a log-structured filesystem would be prohibitive for queries.
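The append-only idea behind a log-structured filesystem can be sketched in a few lines. This is a toy in-memory model (the LogStore name and structure are illustrative, not any real kernel code):

```python
import io

class LogStore:
    """Toy log-structured store: writes only ever append to the tail of the
    log (sequential, hence fast on spinning disks); an in-memory index maps
    each key to the offset of its latest version. Stale versions linger as
    garbage until a cleaner reclaims them - and scattered live data is the
    seek-heavy part that can hurt read-heavy database workloads."""

    def __init__(self):
        self.log = io.BytesIO()  # stand-in for the on-disk log
        self.index = {}          # key -> (offset, length) of latest version

    def put(self, key: str, value: bytes) -> None:
        offset = self.log.seek(0, io.SEEK_END)  # always append, never overwrite
        self.log.write(value)
        self.index[key] = (offset, len(value))

    def get(self, key: str) -> bytes:
        offset, length = self.index[key]
        self.log.seek(offset)
        return self.log.read(length)
```

Overwriting a key just appends a new version and repoints the index, which is exactly why writes stay sequential while reads may have to seek.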

  • by Ephemeriis ( 315124 ) on Wednesday June 10, 2009 @10:44AM (#28279651)

    Can anyone explain to me why Linux has so many filesystems? Windows has had NTFS for years (admittedly, several versions, but never any compatibility issues that I've come across), and Linux has, what, 73 or something?! Is it really that hard to get it right?

    First up, you've got some incorrect assumptions/information about Windows.

    Windows has not had just NTFS for years. Windows has gone through several different flavors of FAT (FAT12, FAT16, FAT32, exFAT, VFAT).

    As far as NTFS goes... You dismiss the various versions, but then you're counting revisions to the various filesystems in Linux. NTFS has gone through four or five major revisions. Microsoft doesn't really advertise those revisions... They just keep calling it NTFS. But those revisions have added features and fixed bugs and basically changed the way the filesystem operates. Those revisions are no more or less significant than the changes from EXT3 to EXT4.

    Windows also offers a couple special-purpose filesystems... Like EFS and DFS...

    Windows can also handle NFS shares.

    You can also install support for other filesystems (EXT, HFS) in Windows.

    So, ultimately, Windows has at least 15ish filesystems going on... And that's just right off the top of my head, without doing any research at all.

    Now, as for why Linux has so many different filesystems available, it's simply because no single filesystem is perfect for everything. One filesystem might be good if you've got tons of tiny files... Another filesystem might be better if you've got tons of huge files... Another filesystem might be better if you need extensive journaling and reliability... Another filesystem might be better if you just need raw speed...

  • by Keruo ( 771880 ) on Wednesday June 10, 2009 @10:44AM (#28279659)
    There's clear roadmap posted here [lkml.org] describing features and implications of version numbers.
  • by TheRaven64 ( 641858 ) on Wednesday June 10, 2009 @10:54AM (#28279807) Journal

    Log-structured filesystems are a new technology

    Haha! This is the kind of wonderful comment I see a lot from Linux users. The first operating system to ship with a log-structured filesystem was the Sprite kernel in 1990. It was rewritten for 4.4BSD, which was released in 1995. Then, 15 years later, suddenly Linux developers hear about it and it's a brand new technology.

    Linux is not the whole world. Most of the 'new' technologies in Linux appeared in other UNIX-like systems first, and many of the implementations in Linux are inferior to the originals (although some are better).

  • Performance? (Score:3, Informative)

    by omuls are tasty ( 1321759 ) on Wednesday June 10, 2009 @11:06AM (#28279975)

Intel's integrated graphics performance has gotten progressively worse ever since the switch from XAA, and has been rather abysmal since Xorg 1.5. Every release of X/mesa/xf86-video-intel since then has made it even worse. Hopefully this release brings the entire GEM/UXA/KMS/whatever stack to a usable state. All this on a 945GM.

    What's your experience with it so far? I'll try it out myself in a few days, but I'm eager to hear the results...

  • by pjr.cc ( 760528 ) on Wednesday June 10, 2009 @11:44AM (#28280433)

Personally, I always get quite excited when I hear about a new FS in the Linux kernel. Every one of them is unique and inventive and does serve a purpose. I always wanted to write a tag-based file system (and ended up implementing one in FUSE once I realised I didn't care about how the data actually got onto the disk). The idea was to get rid of directory structures as we know them (I personally think they are a crap way of storing data, but oh well). I never got it finished, but it was going to work like this: file x.zip has the tags a, b and c, so it's accessible a variety of ways (/a/x.zip, /b/x.zip, /a/b/c/x.zip or /b/c/a/x.zip), and you could just move it to change its tags (i.e. mv /a/b/c/x.zip /d/x.zip to remove the a, b and c tags and replace them with d, or cd /c; mv x.zip ../d to remove the "c" tag and replace it with "d"). Back when I first thought of the notion I thought it was quite unique, but it's not without its drawbacks (think backup or any kind of filesystem-scraping tool).
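The tag semantics described above (paths as unordered tag sets, mv as tag rewriting) can be modelled in a handful of lines. This is purely illustrative - the class and method names are mine, and a real implementation would sit behind FUSE:

```python
class TagFS:
    """Toy model of a tag-based filesystem: a file carries a set of tags,
    and a path like /a/b/x.zip resolves iff every directory component is
    one of the file's tags (order doesn't matter)."""

    def __init__(self):
        self.tags = {}  # filename -> set of tags

    def add(self, name: str, *tags: str) -> None:
        self.tags[name] = set(tags)

    def resolve(self, path: str):
        """Return the filename if the path's components are all tags of it."""
        *parts, name = path.strip("/").split("/")
        return name if set(parts) <= self.tags.get(name, set()) else None

    def mv(self, src: str, dst: str) -> None:
        """mv /a/b/c/x.zip /d/x.zip: drop the source tags, add the
        destination tags, exactly as described in the comment above."""
        *src_tags, name = src.strip("/").split("/")
        *dst_tags, _ = dst.strip("/").split("/")
        self.tags[name] -= set(src_tags)
        self.tags[name] |= set(dst_tags)
```

The drawback the comment mentions falls straight out of the model: a naive backup tool walking "directories" would see the same file once per tag combination.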

But let's put some things in perspective first. Sure, Windows has NTFS, but it's not the only one (as has been pointed out numerous times) - I didn't see anyone list VxFS (Veritas), and there are probably as many Windows filesystems that aren't part of Windows (XFS, for example, is available for Windows - commercially). The other thing is that while it's always been called NTFS, it has gone through quite a few variants (much like Solaris and UFS), so the layman only ever sees "NTFS" and "FAT".

    So does it matter? The reality is that 99.999% of people only know NTFS and FAT for Windows, and the same goes for Linux really: 99.999% of people are using ext2/3 (now 4) plus FAT. In the Windows world they would all have just been called "ext", and you wouldn't have known that Windows NT used ext2 while XP used ext3 and Vista used ext4. Linux hasn't chosen to do it that way, and for good reasons.

    But you should also define "right". Show me a filesystem like ext3cow (essentially a compliance filesystem) for Windows. They do exist; they're not NTFS; you've just never heard of them because you don't need them, and you've probably not heard of ext3cow either, for that matter. The truth is that 99% of people just need ext2/3/4 for hard disks, will use FAT for flash, and won't care which one they get so long as it stores files. Which again is exactly like Windows: 99% of people know only NTFS for hard disks and FAT for removable storage, and don't need (or want) to know about the rest - but the rest do exist, and we haven't even mentioned things like Windows CE, XP Embedded, etc.

    You will get the occasional lunatic (I say that in a loving way) who'll run something like ReiserFS or XFS, but that's your getting-a-bit-hardcore Linux type.

    One other thing worth pointing out is that part of the reason there are so few FSes you've heard of for Windows is that it's a nightmare to code them (or was; that may not be true anymore). I tried to write one around the time XP/2003 were available, and the driver had many different ways of doing exactly the same thing that you had to implement (i.e. "get the list of files in this directory" had to be implemented 6 different ways for backward compatibility - that was painful). Probably a good reason why any third-party commercial FS for Windows costs a minor fortune.

  • by PitaBred ( 632671 ) <slashdot&pitabred,dyndns,org> on Wednesday June 10, 2009 @11:59AM (#28280649) Homepage
    You can get the absolute latest info from the horse's mouth (AKA the ATI dev's that are working on the open source driver) at the Phoronix open-source AMD forums [phoronix.com]
  • by Anonymous Coward on Wednesday June 10, 2009 @12:13PM (#28280889)

Linux has a lot of filesystem drivers, yes. However, pretty much all of them have a reason to exist, even though for most people there's no reason to bother still using them.

    The main reason for most filesystem drivers is compatibility with contemporary operating systems. At the moment, that includes FAT12, FAT16, FAT32 and NTFS for compatibility with Windows, and HFS+ for compatibility with Mac OS X. Aside from the native filesystems, these are the most commonly used, but they really have nothing to do with Linux itself.

    Since Linux has been around for a very long time, and has been ported to a lot of different platforms, there's been a need to share data with all kinds of other operating systems running on the same hardware. Just a few examples - the Amiga's FFS (and OFS), BeOS BFS, Acorn ADFS, HPFS for OS/2, and the old UFS filesystem used by lots of old Unix systems. Most (aside from trivial filesystems like the Amiga FFS) are read-only, because full read-write support was unnecessary, and probably too hard for the sheer number of filesystems out there. This makes up the bulk of the Linux filesystem drivers, and none of these are anything to do with Linux.

    Next, we have the filesystems for optical media - ISO9660 and UDF. Pretty much every OS needs to support those, nothing special about Linux support here.

    The Linux native filesystems - ext, ext2, ext3 (which is an updated ext2, not a new filesystem), ext4, and (arguably) ReiserFS. These were all developed specifically for use as the primary filesystem on Linux. ext -> ext2/3 -> ext4 form a single series, and it's possible to upgrade from ext2 to ext3 and then to ext4 quite painlessly. They're actually a good set of filesystems, and are at least as good as NTFS in their current iterations.

    Ported Linux filesystems - XFS and JFS. Originally written for other operating systems (IRIX and AIX / OS/2), and ported to Linux by their original developers, likely for compatibility with their own operating systems. Although they can be used as primary native Linux filesystems (I use XFS on my MythTV box), they usually aren't, and really have little to do with Linux anyway.

    The experimental Linux filesystems. These are either historical or current development filesystems, either as a testbed for future filesystems, or attempts to actually build a new filesystem for use. The current major experimental Linux filesystem is BTRFS. Has previously included Reiser4, Tux / Tux2 / Tux3, and probably a load more. Although pretty much all of them were usable as primary filesystems, they're all either still experimental (BTRFS) or have since become unmaintained (like Reiser4).

Network filesystems - NFS and SMB / CIFS are the major ones, but Linux also supports several others, either for compatibility with other (older) operating systems, or new ones. Again, nothing really to do with Linux.

    That just leaves oddball filesystems. Things like SquashFS (a read-only compressed FS for use on small read-only media), various filesystems designed for directly connected flash devices that can't host a conventional filesystem (JFFS / JFFS2, probably more), or other filesystems designed for specific kinds of storage device, like NILFS2. Most of these are for embedded systems, which have strange storage requirements anyway.

    So, you've got one set of native filesystem drivers, a few experimental filesystems, a few filesystems for embedded use, and almost everything else is for compatibility with another OS.

  • by DrogMan ( 708650 ) on Wednesday June 10, 2009 @12:30PM (#28281161) Homepage
    Downloaded. Compiled. Installed and rebooted, and it's running on a little test "embedded" box I'm playing with. (Geode LX800) It's passed all my own tests, and that's that.

    Like the new compression stuff. Compressed kernel under 1MB again - First time I've seen that for a while.

    Now to try it on my Acer Aspire One...

  • Re:Performance? (Score:2, Informative)

    by zevans ( 101778 ) <zacktesting.googlemail@com> on Wednesday June 10, 2009 @01:11PM (#28281791)

    GEM/UXA/KMS/whatever stack is now GEM/UXA/KMS without the "whatever" - the other options have been dropped out of the code tree for the drivers 2.7.99 and onwards.

    The kernel support for these now works properly in 2.6.30 for the first time, but that's only necessary, not sufficient!

At the time of writing, the nightly(ish) xorg-edgers version of the driver blows up on 3D apps, so I'm still on 2.7.1 with some Ubuntu patches (which have been pushed upstream because they really did fix stuff). With RC8 of the kernel and xorg-1.6 (or the edgers version 1.6.1.901.blahblahblah) it's all been nicely stable on UXA for the last few days, and quite snappy in 2D. UXA is still half the speed of EXA for 3D, even for simple stuff like ppracer. Maybe 2.7.99 will fix that soon.

  • by eean ( 177028 ) <slashdot@monrTIGERoe.nu minus cat> on Wednesday June 10, 2009 @02:11PM (#28282715) Homepage

Well, they changed their whole development methodology so that they no longer have an unstable branch, and instead do feature releases about every 6 months. So, kind of.

  • Comment removed (Score:3, Informative)

    by account_deleted ( 4530225 ) on Wednesday June 10, 2009 @03:57PM (#28284225)
    Comment removed based on user account deletion
  • by Atriqus ( 826899 ) on Wednesday June 10, 2009 @04:40PM (#28284911) Homepage

    Considering it's called a New Implementation of a Log File System, perhaps the people who think that this is a new concept aren't exactly the cream-of-the-crop of the userbase. Every group has that guy who'll say uninformed stuff; it's not exactly something worth getting a complex over.

  • by Cyberax ( 705495 ) on Wednesday June 10, 2009 @04:58PM (#28285173)

    http://www.rsdn.ru/forum/philosophy/1710544.1.aspx [www.rsdn.ru] - sorry, it's in Russian. You can download benchmark here: http://www.rsdn.ru/File/37054/benchmark.zip [www.rsdn.ru] Basically, it creates, stat()s and deletes lots of files. As you can see, performance in Windows is quite poor.

    I have several more microbenchmarks and _all_ of them work faster on Linux. As a not-very-micro-benchmark: git works way faster on Linux.

    And it's not the problem of NTFS itself, because ntfs-3g on my computer _still_ works faster for a lot of operations than the native NTFS in Windows!
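The create/stat/delete microbenchmark described above is easy to reproduce in outline. A sketch (the function name is mine, and any numbers it produces are machine-dependent, so none are claimed here):

```python
import os
import tempfile
import time

def fs_microbench(n: int = 1000) -> float:
    """Create n small files, stat() each one, then delete them all,
    returning the elapsed wall-clock time in seconds."""
    with tempfile.TemporaryDirectory() as d:
        start = time.perf_counter()
        for i in range(n):
            with open(os.path.join(d, f"f{i}"), "wb") as f:
                f.write(b"x")                      # create
        for i in range(n):
            os.stat(os.path.join(d, f"f{i}"))      # stat
        for i in range(n):
            os.unlink(os.path.join(d, f"f{i}"))    # delete
        return time.perf_counter() - start
```

Metadata-heavy loops like this are exactly what tools such as git hammer on, which is why the comment uses git as a not-very-micro benchmark.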

  • by Alsee ( 515537 ) on Saturday June 13, 2009 @09:19AM (#28319303) Homepage

    don't worry, if you have the source code, you have the power to remove the DRM. Freedom, yeah, baby.

    Unfortunately, incorrect. I'm a programmer and I have studied the Trusted Computing technical specifications in depth.

One of the central points of Trusted Computing is exactly to defeat that. Trusted Computing in fact manages to make the source code substantially useless. Under Trusted Computing you can "remove the DRM" lines of code from the source, but all that does is leave you with unreadable files and effectively non-functioning software.

    The key you need to unlock the files does not exist in the source code, the key does not exist in the executable. The key you need for successful internet communication does not exist in the source code, the key does not exist in the executable.

    I'm going to oversimplify here, but essentially the key you need is locked inside the Trusted Platform Module chip. The chip will only supply that key to the exact unmodified executable. If you alter so much as a single line of code, the chip hands over a completely different effectively random key to you. A useless random garbage key. Your modified program cannot read the files you want it to read, and the internet connections you want will fail to open. If you modify so much as a single bit of source - if you modify so much as a single bit of the executable - the software no longer works. The source code is largely useless.
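The "key released only to the exact unmodified executable" behaviour can be modelled crudely. This toy stands in for the chip by deriving the unseal key from a measurement (hash) of the binary; the names and the HMAC construction are illustrative only, not the real TPM protocol:

```python
import hashlib
import hmac

# Illustrative stand-in for a per-device secret locked inside the chip.
CHIP_SECRET = b"unique-per-device-burned-into-silicon"

def unseal(binary: bytes) -> bytes:
    """Toy TPM sealing: the returned key depends on the hash (the
    "measurement") of the exact binary, so flipping even one bit of the
    executable yields a completely different, useless key."""
    measurement = hashlib.sha256(binary).digest()
    return hmac.new(CHIP_SECRET, measurement, hashlib.sha256).digest()
```

This is why removing the DRM lines from the source doesn't help: the rebuilt binary hashes differently, so the chip hands it garbage instead of the real key.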

In general it's impossible to crack Trusted Computing in software. There's a chance you might find an exploitable bug to defeat DRM in some particular program, but it would only apply to that one program, and they have ways to actually FORCE down patches on everyone to fix that bug (the program will refuse to run at all until you accept the patch, and you could even be denied any access until you apply operating system patches). The Trusted Computing scheme itself is pretty well immune to software crackage.

    To beat the Trusted Computing system you need to physically crack the chip on your motherboard. With current designs you might be able to get away with hacking into the wiring on your motherboard, but once they move the Trust chip inside the CPU you need to physically rip open your Trust chip and read out your key locked in the silicon. And even that is a limited victory. Every computer or other Trusted device has its own unique key. You need to crack them open and physically read out the keys from the devices one by one. If you want four Trust-unlocked computers you pretty well need to physically rip open four microchips and read out four keys. You can't put one key to multiple use, because they will spot that multiple use and revoke the key. Any DRM files you unlocked with that key obviously still have their DRM removed, but the revoked key becomes useless for unlocking any more DRM files, and it no longer enables you to open Trusted internet connections.

    And even when you do physically crack systems one by one, you still have to be ultra careful that they do not detect that your system is Trust-cracked. If any of your internet connections or any of your software in any way leak the fact that you have cracked your system - in any way leak the fact that you can do things that you're not supposed to be able to do - again, your key gets revoked. That key becomes useless: it can no longer unlock any additional files, and all attempted Trust-related internet connections will be denied. If they detect that you managed to read your key, if they detect that you managed to regain full control of your computer and override the Trust locks, then they revoke that key, and you then need to go out and pay for new physical hardware with a new key locked inside, and you need to physically rip open that chip and read out a new key. Each time they detect that you cracked your computer, they revoke the key, and you pretty much need to buy a new computer over and over and over again.

I have simplified and glossed over a huge number of issues. If you have any questions or want a more technical explanation, just ask.

  • by Alsee ( 515537 ) on Saturday June 13, 2009 @11:20AM (#28320003) Homepage

    don't buy it if it's using the TPM hardware

    While I agree with that for moral and philosophical reasons, the fact is that from a strictly practical or functional view, that is essentially incorrect.

Trusted Computing is incredibly insidious. It is essentially the old Microsoft "Embrace, Extend, and Exterminate" tactic. The way Trusted Computing is designed, there is absolutely no practical or functional reason NOT to buy a computer with a TPM in it. That's the "Embrace" part. A TPM computer can do everything and anything a normal computer can do. Think of it like speakers - there is absolutely no reason NOT to buy a computer that has built-in speakers. If the price is cheaper, or if it's all the store carries, you might as well accept the computer-with-speakers, take it home, and simply never turn the speakers on.

    Their plan is to ship TPMs as standard hardware on all motherboards. If you go in and buy a new PC, you may as well buy the TPM computer, take it home, use it just like a normal computer, and just never "turn on" the TPM chip. It's incredibly insidious... there's no actual reason to reject a TPM computer, so they can just make it standard on all PCs and in just a few years everyone will have a TPM-equipped computer by default.

    A TPM computer is a normal computer "plus more", it is a normal computer "plus" it has the option of the new TPM mode. The TPM computer can run all your old programs and read all your old files, "plus" it can enter TPM mode and run the new Trusted programs and it can read the new Trusted files and it can access the new Trusted websites.

    If you have an old computer, or if you have a new TPM computer and refuse to use the TPM, then none of the new Trusted stuff works. You can't run the new Trusted software, you can't use any of the new Trusted files, you can't view any of the Trusted websites.

    If you buy a TPM computer you have a choice - you can "opt in" and "voluntarily" put on a pair of handcuffs and activate the TPM chip - in which case everything works, all the old stuff works and all the new stuff works. Or you can refuse to turn on the TPM chip and you get screwed, the old stuff still works but none of the new stuff works.

If you buy a computer without a TPM chip then you have no choice, you just plain get screwed. The old stuff still works, but you get locked out of all the new stuff.

The really evil part is Trusted Network Connect (TNC). TNC is currently targeted at businesses for securing their internal networks, but in a number of years - maybe a decade or so - it might be used by ISPs. In fact, government officials have already called on ISPs to implement something like TNC in order to "secure the National Information Infrastructure against terrorist attack". What does TNC do? It checks the "health" of your computer to make sure that it's not infected by viruses or trojans, and that your operating system has the latest patches to secure your computer against infection. Because your ISP doesn't want you connecting to their network and spewing virus infections to others. They are protecting their network and they are protecting you. Gee, isn't that good? Gee, isn't TNC a swell thing?

Oh, did I forget to mention... the way TNC works is that it uses Trusted Computing to do that "health check" on your computer. If your computer doesn't have a TPM, you can't perform the health check. If your computer does have a TPM but you refuse to turn it on, you can't perform the health check. If you can't or won't do the TPM health check, then you can't PASS the health check. And what do you think happens if you don't pass that health check? Well, your computer might be infected or be vulnerable to infection. And of course they can't allow an infected machine (or a vulnerable machine) onto their network. So what happens is that TNC "quarantines" your computer. It denies you any internet access until you "fix" your "problem" and pass the TPM health check.

If you reject the TPM you may eventually get effectively locked out.

  • by Alsee ( 515537 ) on Saturday June 13, 2009 @11:29AM (#28320063) Homepage

    Will there come a day when all computers will ship with TPM?

Members of the Trusted Computing Group have explicitly stated the intention for all motherboards to come with a TPM as standard hardware. An explicit design goal was to keep the chip low-processing-power and simple and small enough to be a sub-$5 item mounted on all motherboards and in all cellphones and included in all digital TVs and in all iPod-type media devices. A lot of work went into minimizing the chip's horsepower requirements and components, exactly so it could be "ubiquitous" in any networked device or any small electronic device that might come in contact with copyrighted content.

    -

UNIX is hot. It's more than hot. It's steaming. It's quicksilver lightning with a laserbeam kicker. -- Michael Jay Tucker
