What to Expect from Linux 2.6.12
apt-get writes "Saw this Linuxworld report from the annual Australian Linux conference, Linux.conf.au, in Canberra last week. The article outlines some of the new features we can expect in the 2.6.12 kernel release, including support for trusted computing and Security-Enhanced Linux. The kernel developers are also working on improving the 'feel' of the Linux desktop, with inotify for file managers and event notification so hardware 'just works'. Unfortunately no release date other than 'sometime soon' is given."
Yay! (Score:3, Funny)
we've been growing apart since it started cheating on me and got a virus
Re:Yay! (Score:5, Funny)
Not if you're using the spell checker at the moment, no.
Re:Yay! (Score:2)
we've been growing apart since it started cheating on me and got a virus
I thought we were talking about the Linux kernel?
Re:Yay! (Score:2)
Sadly not. More likely this means someone else can trust your computer but you can't.
Trusted Computing (Score:3, Interesting)
Was its inclusion in the kernel by choice?
Re:Trusted Computing (Score:2, Insightful)
As the former seems to be what the inclusion in the Linux kernel is about, I think it's a good thing. And remember, it's a free system, so you'll always have the choice not to use it, or to use it only for what you want.
Re:Trusted Computing (Score:2, Insightful)
Sure, you'll have a choice if you decide to compile your own kernel. But I don't see grandma or uncle Bob downloading theirs from kernel.org and compiling it from source. I don't oppose its inclusion in the tree, but I think that if money is involved (the RIAA and MPAA have deep pockets), it won't be too difficult to persuade some of the more user-friendly distros to compile a stock kernel with
Re:Trusted Computing (Score:2, Insightful)
Those who get it with the PC will most probably end up with Windows anyway. The others will have the support of the half-geek to either install a distribution t
Re:Trusted Computing (Score:4, Interesting)
Call me silly, but how is 'making computers safer' a good thing? I don't *need* protecting from the big bad wide world, there are enough intrusions into my life to make it 'safer' as it is - each and almost every one of them pisses me off.
Re:Trusted Computing (Score:3, Insightful)
Re:Trusted Computing (Score:3, Funny)
And then don't bother connecting to the internet either, because no web-site operators will let you view their pages without Trusted Computing enabled.
Otherwise, you might republish their copyrighted works without compensation... that's just too much of a risk. Or you could execute many other forms of abusive programs to disrupt the experiences of their other users.
Really, untrusted PCs are just too dangerously unpredictable to allow out in public.
Re:Trusted Computing (Score:3, Funny)
In the case of cars, traffic lights, while an admitted PITA, do make commuting possible.
Or are you one of those just-put-in-a-roundabout Brits?
This ain't M$'s "trusted computing" (Score:2, Informative)
Linux trusted computing is aimed at users/admins: "You can trust that User A can't muck with User B, especially if User B is root!"
Re:This ain't M$'s "trusted computing" (Score:2)
Re:Trusted Computing (Score:5, Informative)
The 'trusted computing' in Linux 2.6.12 is about being able to run a process that is restricted in what it can do (read and write to a pipe, essentially), so that you can run an arbitrary downloaded binary without worrying that it will do bad things. (Think: distributed.net, SETI, etc.)
Re:Trusted Computing (Score:5, Informative)
Re:Trusted Computing (Score:4, Interesting)
Yes. Trusted computing is a very good thing. These are some of the things you can expect:
When you compile or install software, you can sign it. The computer will not execute anything that is not signed. This stops many viruses and trojan horses, so you can trust that you authorized everything the computer executes. It is just a security layer, like the no-execute bit.
The important thing here is that the user is in full control of the system. The user gets to sign the packages or he can choose to use a distro that signs them for him. He chooses what the computer runs and what not. There is no third party that limits what the user can/cannot execute.
Besides signing software, TCPA (the chip that is going to be supported by the kernel) does encryption on hardware. So you can have hardware accelerated encryption/decryption, and your CPU will be free to do other things. This is not much different from hardware accelerated 2d & 3d graphics. Again, this is a very good thing.
Many people oppose trusted computing because they confuse it with DRM (Digital Rights Management). DRM is technology that limits the right to open media. Trusted computing does not limit your rights at all. The confusion arises from the fact that Microsoft plans to use TCPA (trusted computing) to implement DRM.
TCPA support will totally be optional. You can enable/disable it when compiling the kernel. You normally want it enabled to take advantage of hw accelerated encryption, but if you are still paranoid (read misinformed) and think there is some evil corporation that is going to use TCPA to limit your rights, you can just turn it off.
There is a nice article [ibm.com] from IBM that clarifies the issue.
Re:Trusted Computing (Score:3, Insightful)
Re:Trusted Computing (Score:3, Informative)
Free clue -- VeriSign's raison d'etre is not to convince end users that Business X is "trustworthy", only to verify whether or not someone representing themselves as Business X is in fact Business X. We verify the connection(s) between a real-world/meatspace identity and an electronic identity.
If We Install Spyware, Inc. applies for a SSL cert for www.weinstallspyware.com, our job is to verify that the guy re
Re:Trusted Computing (Score:3, Insightful)
Already, we see software design flaws. Just because you mention there are multiple things tells us that it's not a clean system, and that it ignores the traditional Unix dictate: "Do one thing, and do it well".
You list the user blocking unsigned programs from running, and you also list hardware-accelerated encryption. Those are two entirely different features, and there is no good reason why they should be part of the same system. If I desire either of those,
Re:Trusted Computing (Score:4, Interesting)
When you compile or install a software, you can sign it. The computer will not execute anything that is not signed.
Which has absolutely nothing to do with Trusted Computing.
If you want to do that you can do it right now with a trivial change to the EXE loader code. Hell, you can do it on a Win98 machine without a patch - all you need to do is redirect the EXE and similar filetype associations to point to your own little stub code that does the check. You can obviously do it with a trivial patch to Linux or DOS or any system.
Trusted Computing has nothing to do with signing files. In Trusted Computing any code's hash *is* its "signature" and controls what data it may decrypt. It is that hash which is reported over the internet. No need for any signature from anyone. You can certainly add signatures for various purposes on top of the Trust system, but it really has nothing to do with Trusted Computing itself.
The important thing here is that the user is in full control of the system.
Sure - in the sense that if he does not "voluntarily" turn over total control to the Trust system and to other people, then it is impossible to install and run the new Trusted software, impossible to read or use any Trusted files, impossible to view any Trusted website, and potentially in about 5-8 years he may be denied any internet access. The Trusted Computing Group has announced a project for routers that would deny an internet connection to any computer that is not locked down in Trusted Compliant mode. In fact, at the Washington DC Global Tech Summit, the president's Cyber Security Advisor called on ISPs to plan on making exactly this sort of system a mandatory part of their Terms of Service to get internet access. I can dig up a link to this speech if you don't believe me.
So short term, refusal to submit to Trusted Computing and give up control of your computer just means you can't use a few new pieces of software and you won't be able to buy the RIAA and MPAA's new DRM download sales. However the problem gets worse over a couple of years. Refusal to submit means you get locked out of more and more software and more and more files and more and more websites. Eventually you may be effectively banned from the internet unless you 'voluntarily' activate the Trust chip.
But yes, you are always 'free' to leave the Trust system off. You are 'free' to crawl into a hole in the ground and use nothing new and connect to no one. You are 'free' to choose to get locked in a prison cell instead of giving up control of your computer.
So you can have hardware accelerated encryption/decryption
Lie.
To be fair I assume *you* are not lying, merely that you are honestly echoing a lie that has been told to you.
Trust chips are cheap, low-horsepower silicon. Running crypto on them is SLOWER than on even the lowest of ordinary low-end CPUs. In fact a single basic crypto operation may take a full second or more to run on these very low-capability Trust chips.
If you want crypto acceleration, great, get a standard hardware crypto accelerator. They've been around forever and they have absolutely nothing to do with Trusted Computing.
Many people oppose trusted computing because they confuse it with DRM
You could get EVERY claimed benefit to the owner of Trusted Computing with identical hardware where the owner is given a printed copy of his master key. The fundamental design requirement of Trusted Computing is that the owner is forbidden to know his master key, and the specification requires that the chip must self-destruct and destroy your data if you attempt to get at your master key.
The *ONLY* purpose of forbidding the owner to know his own key is to enable DRM enforcement and DRM-type functionality, to restrict the owner. Being forbidden to know your own key has the sole effect of restricting what you c
Re:Trusted Computing (Score:3, Insightful)
The solution for this is an easily configurable sandbox, which the vapor factory in Redmond says they are working on. Maybe "Sandboxed computing" is a better term, but the DoD called it "trusted computing", so that's what we're stuck with.
"Sandbox computing" also has little relationship to the trust
Re:Trusted Computing (Score:5, Insightful)
Or, you could just combine ExecShield and SELinux by themselves and have a useful security layer, without needing Trusted Computing at all.
Brushing aside the minor side-features, Trusted Computing is really about tamper-resistant hardware enforcing the signatures of software on the PC. The main use of that is preventing the legal and physical owner of that PC from hacking programs on his own computer, so that RIAA music publishers can continue to trust it.
Re:Trusted Computing (Score:5, Insightful)
I think the complaints about locking machines down are more in who gets the keys...
Comment removed (Score:5, Interesting)
Re:Trusted Computing (Score:5, Interesting)
Nuclear bombs are on the whole good things, with one component that is a very bad thing: widespread death.
You can't admit that the single motivating factor of a system is bad, but then say that the afterthoughts and bonus utilities somehow make up for it. And if you don't believe remote attestation was the driving factor to create Trusted Computing, just look at its history of sponsors.
Feature creep (Score:4, Insightful)
The more features we can get into kernel mode, the less we need to rely on "chaining" and other Unix-way solutions and we can think more about applications and OS services as "whole units".
And since the majority of installations of this latest version will be on desktops, the more hardware support, the better the hardware support, the more seamless the hardware support, the better.
It would be nice to see some componentization of the kernel to allow for easy stripping of unnecessary features, but as the kernel will stand, the features are all necessary.
Re:Feature creep (Score:5, Insightful)
For a lot of people, that's a lot of the appeal of Unix and Unix-like systems.
Re:Feature creep (Score:2)
It does make far more sense to have the functions available outside of the larger solution, so they can be adapted to serve other solutions as well. But that's not to say you can't have large monolithic-looking apps that internally use the same smaller components to get their work done.
Re:Feature creep (Score:2)
What on earth makes you say that ? Linux Desktop installations aren't suddenly going to ramp up just because we have a new kernel, and changes to the kernel alone will not make Linux suitable for the desktop.
Linux's major market is still on the server. It's only now really starting to make the move from the
Re:Feature creep (Score:3, Informative)
Hardware support has nothing to do with feature creep (directly anyway - indirectly it affects underlying device systems like USB, SCSI, IDE etc).
Seamless hardware support (HAL etc) is a new feature, so point there.
The inotify thing is a replacement for dnotify (I know you didn't mention it, but it was in the article) so doesn't add any features really, just fixes bugs.
The whole thing about relying less on chaining... I just didn't get.
Can you give any example where
Re:Feature creep (Score:2, Informative)
Erm...you can do that now and have been able to for most (all?) of the last decade.
At runtime, there are modules. At compile time, whole sections of code can be removed.
The Linux kernel is only monolithic at the lowest levels; it's not a microkernel message-passing system and that's not going to change. That's one of the reasons
Re:Feature creep (Score:3, Insightful)
What this means (Score:5, Informative)
Inotify is a replacement for dnotify. With both you can watch a file for changes. You can even watch a directory for changes. However, with dnotify you couldn't recursively watch a directory for changes. To do so required basically 'opening' each folder, and you quickly use up the maximum number of files you can open.
With inotify it still doesn't directly support recursively watching a directory but example code for doing so is given and doesn't have the same problems. One distro uses this for watching
As for the notification thing - that's part of HAL, and means USB pens, cameras, etc. should be 'auto-detected' and the user can be notified and asked what to do automatically.
Re:What this means (Score:2)
It probably is for Beagle [gnome.org] for indexing and searching.
Re:What this means (Score:4, Informative)
inotify fixes this.
(waiting 2 mins between posts... sigh)
Re:What this means (Score:2)
This should be taken care of by hotplug, but I have an interesting problem with Ubuntu 4.05.
The first time a USB storage device is connected to the system Gnome detects and automounts the disk. Unplugging the disk removes the icon from the gnome desktop.
The next time the device is connected gnome does not mount the device. The only
Re:What this means (Score:2)
Re:What this means (Score:2)
Thanks. I have a FC3 system to test on. Most likely it will be the version of Nautilus.
Re:What this means (Score:3, Interesting)
Re:What this means (Score:4, Informative)
The hotplug system is part of the OS, running as root, and is intended to do things like insert driver modules, pump firmware around, and set permissions. This is useful even on a server, although it's more important for a laptop or desktop machine. It doesn't do anything to your desktop directly though...
HAL uses DBUS to notify the user's desktop software about these exciting events so that it can do something appropriate. The desktop doesn't have dangerous privileges (so it's unlikely to accidentally format your main SCSI drive instead of the freshly inserted USB flash) and is able to interact with the user through pop-ups and making icons appear in file managers etc.
This system (Hotplug + HAL + DBUS) replaces earlier systems where desktop software polled for any interesting changes every few seconds. The new system is event driven, using resources only when they're needed, and should hopefully be more powerful too.
Re:What this means (Score:4, Interesting)
I know, they may have trashed their data because they did not unmount. However it is silly to "punish" them by making it impossible to stick the disk back in to see if it is trashed.
Here is what I consider the ideal solution, far better than Windows or OS X. Let's see if somebody can actually do this right:
When the drive is pulled, the system checks to see if all I/O has been flushed to it. If so, it unmounts. The desktop environment responds instantly by removing any display of that drive or its contents in file browsers.
If I/O has not been flushed the disk indicator remains in the desktop display, with a big red mark indicating that it had been pulled. Usually sticking it back in and pulling it after a second will flush the rest of the data and unmount it correctly. The user can also ignore it and stick new USB drives in (getting new icons) or do something on the menu to make it forget about the drive.
Attempting to shut down or log off with any red marked disks will ask the user to stick them back in so the data can be flushed. The user can hit cancel if they don't want to.
This flushing of a reinserted device must check carefully that it is the same device and it has not been written to by another machine while it was pulled.
Re:What this means (Score:3, Informative)
Re:What this means (Score:3, Insightful)
I've seen the older dnotify thing at work in KDE, and even that seemed to work much better than the equivalent in MacOS X on my iBook. I've frequently saved a file from Safari, gone to attach it in Mail only to find it's not present in the file selection dialogue box (that I've opened after saving the file). A quick click on the desktop makes things update, but it's bloody annoy
Re:What this means (Score:3, Insightful)
Example: COM, DCOM, ActiveX, now the various
I am increasingly convinced of two things: One, this is why Windows is so bloated; once a feature gets in, it never gets out. I suspect this is the sole reason; if anything, Microsoft pro
inotify (Score:2, Funny)
What about a better solution for device drivers (Score:5, Interesting)
Re:What about a better solution for device drivers (Score:3, Insightful)
I don't see what the problem is. My distro has all the drivers compiled for me. What use case do you have other than compiling your own kernel for the sake of it?
On a practical level, Linus has said many times that he won't do this because it would require freezing the internal kernel API. While this might sound good to an outsider, you only have to consider how much, say, the USB structure has been reorganised to realis
Re:What about a better solution for device drivers (Score:4, Interesting)
Granted, most distributions do ship with as many of the drivers as possible, but I have found myself in a spot a few times when the Linux kernel did not have the drivers for something fairly critical which was needed during installation. For instance, I am trying to install Linux onto my AMD64 machine, but none of the Linux kernels (including 2.6.11) support the southbridge chipset on my motherboard, and so Linux cannot detect the hard disk on my computer... which means I cannot install Linux on the machine now.
I installed XP on the same machine without a problem - just popped in the device driver CD and the hard disk was immediately recognized.
It will be great to have that facility on Linux as well - changed your graphics card? just pop in the driver CD and install the driver and you are ready to go..
Re:What about a better solution for device drivers (Score:3, Insightful)
A script copies the compiler, driver sources, and kernel headers to a chroot; compiles the driver; exits the chroot; copies the driver to the appropriate location; and loads it. Another script remov
Re:What about a better solution for device drivers (Score:3, Interesting)
The way Linux driver development works is: release your driver under the GPL. Show that you are capable of maintaining it. Once it works well enough, get it merged into the Kernel. Continue to maintain it.
If you don't like it, fork it, and leave the developers with a development model that actually works.
If Linux had a HAL, we would have the Windows situation: hundreds of drivers that were written, worked for a while, and then were dumped as so
Re:What about a better solution for device drivers (Score:3, Informative)
I was bitten by WiFi too. I saw that prism54 was in the kernel, so I bought an SMC 2802W. Unfortunately, it turns out that the 2802W was silently replaced everywhere with the 2802Wv2 (same model number, FCC ID, no way to tell the cards apart).
Of course, the 2802Wv2 is totally different on the inside, and was produced after Conexant took over; they seem to have used the same shitty design as they did for their Winmodems; apparan
Re:What about a better solution for device drivers (Score:5, Insightful)
Drivers, drivers, drivers. (Score:2, Interesting)
I was just reading the latest Kernel Traffic [kerneltraffic.org] and it hit me how much the driver model seems to be in flux. Constantly.
Microsoft Windows seems to have had a stable driver interface since at least Win2K (probably NT4 too). The weird thing is that eschewing binary compatibility, as Linus likes to do, really ought to make it easier to stabilize a model. I mean, they'd have all the upsides with none of the downsides.
I really don't care personally -- I don't write drivers -- but isn't it a bit odd that th
Re:What about a better solution for device drivers (Score:3, Interesting)
I've been an avid user of Linux for a long time.
The days of compiling a custom kernel are over, except for people who like playing with the latest features, or Gentoo users.
Not because it's impractical, just because there is no point.
If you have a high-quality distro (SUSE, Mandrake, Debian, Ubuntu, Fedora, etc.) then the distro people are quicker to apply kernel patches to fix security issues, test them for bugs, and release updated kernels than what you can normally get through kernel.org.
The performance adva
Re:What about a better solution for device drivers (Score:3, Insightful)
Great fun
I don't these days.. perhaps I should. Great for learning even if I don't produce a working kernel heh.
It would be fun to play with SElinux and everything. I'm such a geek..
Boring missing features... (Score:5, Interesting)
Also, how about growing files with mmap? Currently one cannot mmap() beyond the end of the file on Linux...
Re:Boring missing features... (Score:2)
"However, sometimes it is necessary to reduce the size of a file. This can be done with the truncate and ftruncate functions. They were introduced in BSD Unix. ftruncate was later added to POSIX.1.
Some systems allow you to extend a file (creating holes) with these functions. This is useful when using memory-mapped I/O (see Memory-mapped I/O), where files are not automatically extended. However, it is not port
Re:Boring missing features... (Score:2)
I want to mmap() way beyond the possible size of the result (very possible especially on 64-bit platforms) and just write to that memory. Once I'm done, I'll ftruncate the file to whatever length it ends up being.
On BSD the method works fine. On Linux it does not :-(
On Solaris (8 and 9 -- not sure about 10) mmap() is even worse, though
Re:Boring missing features... (Score:3, Interesting)
For file descriptor events, Linux has a better implementation than kqueue, called epoll [xmailserver.org]. It's better because it works with all types of file descriptor (thanks to using the same kernel functions as poll/select internally), not just the subset documented in the kqueue man page. Which means you can use epoll in a generic replacement for select/poll, which you can't quite do with kqueue.
kqueue can do other things, including aio which is useful, and it is marginally more efficient due to fewer system calls f
Re:Boring missing features... (Score:3, Insightful)
The initial version [xmailserver.org] started in December 2001. epoll in its present form was added to the base kernel in 2.5.45, October 2002.
Wrong. It's in every 2.6-based distro, which is most of them right now, and for the "commercial enterprise class" users that includes Red Hat Enterprise Linux 4 and SuSE Linux Enterprise Server 9.
Re:Solutions in search of a problem (Score:3, Informative)
kqueue lets me know when the file grows. For example, tail(1) on FreeBSD uses it (with the -f and -F switches). How would you do that with select/poll?
Is this language normal for Linux-related discourse?
Funny, it works on FreeBSD -- once you ftruncate the file beyond its end, you
Amount of changes (Score:3, Informative)
But all in all, these new improvements sound great.
-address space randomization for defence against buffer overflow attacks and remote script kiddies.
Reiser 4, Xen support, software suspend, trusted computing support, latency improvements and improved kernel-space notification. - WOW - lot's o' stuff.
Re:Amount of changes (Score:2)
Re:Amount of changes (Score:3, Interesting)
"Unfortunately no release date" (Score:3, Insightful)
We need a "break the kernel" team (Score:5, Insightful)
The kernel needs a team of people that specifically tries to break it. Right now kernel testing is haphazard at best. By devoting a team of people (just like the developers) whose sole purpose in life is to break the kernel, we (the community) will improve the security and quality of future Linux kernels. It will also improve the quality of code going into the kernel.
The new code sounds very good - but the linux development community needs some hackers to break stuff.
Cluge
Re:We need a "break the kernel" team (Score:2, Funny)
Sorry mom! *ducks* No I mean it, I'm sor*smack*
Some time soon... (Score:3, Insightful)
As long as the developers release it when it's done, and not according to some abstract schedule, we'll have the best operating system there is.
Kernel advances (Score:2, Funny)
What I'd like to see... (Score:2)
SELinux (Score:2)
Re:SELinux (Score:2)
Essential links.... (Score:5, Interesting)
Ross Anderson's Critique [cam.ac.uk]
IBM's Rebuttal [google.co.uk]
Trusted Gentoo [gentoo.org]
IBM's rebuttal does a decent job of allaying some of the fears - for example, it states that it will not prevent you from running any OS & programs you wish to on your own computer (which, for the record, I believe - witness the Trusted Gentoo project and e.g. this [linuxworld.com.au] link). They state that their approach to Trusted Computing is not particularly well-suited to DRM, and on the face of it, I agree - there seems to be little attempt at restricting the user of a computer with the TPM from doing what they want. However, in my opinion, as a base for an utterly crippling DRM regime, distributors simply could not ask for a better setup, as I'll argue a little later.
So to recap, it seems that if you are running Trusted hardware, there are no restrictions on what you can do on your computer in isolation; you can install Linux, run any number of Open Source apps, etc. But the key phrase here is in isolation, and it is here that the dangers of Trusted Computing are revealed. For you see, Trusted Computing enables the use of remote attestation, wherein a server may request a hash of all software currently running on your computer. This hash is, for all intents and purposes, unforgeable, and if you disable your TPM (as IBM stress that you can, and again for the record, I see no reason to disbelieve them), no hash will be sent. The server may then assess this hash of software (or note that no hash has been provided, in which case it may well treat your computer as Untrusted) and decide, based on what software you are running, simply not to serve you with whatever material you requested - for example, it may decide that it will not deliver MP3s to your computer unless it knows for a fact that the receiving application is one that is known to encrypt the content as soon as it is received (so that e.g. it simply cannot be viewed while not running in Trusted mode) and which will take every step to ensure that once received, the unencrypted content never leaves your machine (e.g. by being written to CD, e-mailed, etc.). As you can imagine, the above scenario is not at all far-fetched, as the **AA/other media distributors are positively *creaming* themselves at the thought of stamping out casual file-sharing, or even making backups for your own use in some of your other devices.
So we are left with the situation where someone who does not use Trusted hardware (and is thus unable to respond to attestation requests), or those who do run Trusted hardware but whose software fingerprint is not deemed acceptable by the server, will simply not be granted access to certain material, putting such people at a big disadvantage. And it's no good buying hardware free from Trust chips from China or such places on the "black market"; this offers no advantage at all as Trusted hardware, as mentioned, does not stop you using your computer the way you want in isolation; the problem only occurs when you try to interact with other computers.
So far, this sounds unpleasant but not too bad (although I would urge you to read Anderson's linked essay for some more imaginative and serious abuses), but if we allow ourselves to follow the slippery slope, we end up at the state where ISPs will not allow your computer to access the internet at all (for surfing, e-mailing, anything) unless you are running Trusted hardware and software. Obviously, the social, political and legal barriers to this occurrence are non-trivial, but we've all seen ridiculous Acts qu
Re:Essential links.... (Score:3, Interesting)
No, I doubt anybody is using famd. At least, someone who uses removable media (like CD-ROMs) can't very well run it, because it will keep directories open and prevent unmounting.
Maybe once linux 2.6.12 brings out the new inotify things, famd will become tolerable to run continually, and it'll start getting bugfixes in.
PS. I am only 30% joking.
Re:Essential links.... (Score:3, Interesting)
The offered advantages are, in my opinion, fairly weak - you can eliminate online cheating in multi-player games, and media companies are more likely to allow downloads of materials (DRM'd up the wazoo, of course).
The real advantages appear primarily in corporate environments. Using hashes plus remote attestation to report precisely what version of what OS you're running, in an unforgeable way, is theoretically possible, but, IMO, impractical. It also requires a "secure" BIOS that cannot be flashed wit
Re:Essential links.... (Score:5, Insightful)
Next time you find out by email your boss is about to fire you because you're e.g. colored, you will not be able to forward that mail to the authorities, and your boss will be able to destroy ALL traces of that email from his home.
Next time the police really need data on a person's computer, it will not be possible to extract it, because "TPM" will prevent it.
And
Adobe will be able to do this for pdfs, your bank for your bank statements,
THAT is what TPM is about.
Heh. (Score:2, Funny)
Begun, the Just Works wars have
Latency and preempt (Score:4, Interesting)
This is great news for all of us using Linux for audio. It's also a pretty mean feat, as the 2.4 low latency patches were a little bit brute force compared to the 'correct' method in 2.6 of fixing all the problem spin lock areas in the kernel, a much harder task.
Now all we need is to get the RT LSM module into the main kernel. (It allows non-root users real-time scheduling without messing about; it's not vital for performance but nice for usability.)
I have not tried 2.6.12 myself yet, but have got great results with unpatched 2.6.11 kernels.
FSF Standing ovation (Score:2)
What a waste of effort... (Score:3, Insightful)
Do you see the DRMed "music stores" (it is more like a barter: "give us your money and control over your computer, and we'll let some Britney and Fiddy come from your speakers!") falling over themselves to run on Linux? Do you think that is because Linux doesn't support "TC" or because those companies couldn't possibly care less about Linux as a platform? I'll give you three guesses. And the ENTIRE POINT of "TC" is to make it impossible for us to reverse engineer and write our own replacements for those applications - so by definition we can forget about that alternative.
All I can say is, I hope they had fun implementing it, and that they feel happy about all the people who believe the astroturfing that "TC" isn't the Trojan Horse of DRM.
"TC" is DRM is the tool of closed networks, closed source, a closed society, and a closed future. People who believe it will coexist with Linux are so naive that it would be quaint if it wasn't so fucking scary...
Re:What a waste of effort... (Score:3, Interesting)
Re:What a waste of effort... (Score:3, Informative)
The problem with what you understand as Trusted Computing is that someone else gets the keys. They can decide what your computer can run and what it can't. Obviously this is bad and justifies the acute paranoia from which you seem to be suffering.
With the Linux implementation, you get the keys. So you can sign all of the executables you normally use and tell the kernel to only run them. Anything unsigned (e.g. trojans, rootkits, etc.) won't run.
It's a
I Wish firewire would just work (Score:3, Informative)
Re:I Wish firewire would just work (Score:3, Interesting)
No doubt I'm opening myself up to a Troll/Flamebait mod, but...
FreeBSD's Firewire support is much better than Linux's. FreeBSD had firewire support before Linux, and it was considered stable and released in the default kernel before Linux even had its unstable Firewire drivers available as an option, IIRC.
Having good firewire support leads to other interesting developments too, like the ability t
Go to the source (Score:4, Interesting)
Gamers: Configurable USB Mouse Polling Rate! (Score:5, Interesting)
Why is this so great? Well, the typical polling rate of 125hz for USB mice is noticeably less smooth than a polling rate of 500hz, whether you are using your mouse in games or a desktop app. For this reason many people preferred to use PS2 mice, as they could be polled at up to 200hz. Now with this new feature, PS2 can be retired. Get yourself a high resolution USB optical mouse and set the polling rate to 500hz.
You can feel the difference.
Re:Gamers: Configurable USB Mouse Polling Rate! (Score:4, Informative)
Is it a night and day difference? No.
Re:Gamers: Configurable USB Mouse Polling Rate! (Score:5, Insightful)
It is actually more complicated than that, but those lag values are for lag due to mouse rate alone. Of course the CRT refresh rate introduces its own lag. But in short: with the monitor refresh rate held constant, increasing the polling rate of the mouse still makes for an improvement, because the monitor is not synchronized with the mouse. Conversely, the same can be said for increasing the refresh rate of the monitor.
You don't have to take my word for it. If you are already using a good USB mouse at 125hz, try it at 500hz. You will notice the difference. Once you use 500hz for several days, try switching back to 125hz. You will hate it. The difference is even more noticeable with higher resolution mice, such as 800 dpi and 1600 dpi optical mice, because the movement delta can be quite large and a delay of 8 milliseconds on a large delta "feels" awkward.
Of course, if you use a very crappy low resolution USB mouse, the difference is harder to notice.
286 ? (Score:3, Funny)
When I saw this story on the front page, it had 286 comments. Very appropriate, since the purpose of "Trusted Computing" is to turn the clock back to the bad old days.
The Point of Trusted Computing on Linux (Score:3, Informative)
Many people are complaining about what Trusted Computing can/will be used for. Quit whining, for two reasons:
First, Linux is open-source, so you can modify or disable whatever you want. Unlike a binary kernel, you can remove code you don't like, and the rest of the kernel will work without it (if you remove it cleanly). In other words, it's not being forced upon you by the OS distributors. If a company decides to make software that requires it, that will be their decision to make and their problem to solve.
Second, TC has uses other than the oft-cited "make sure the computer only has $OMINOUS_ADJECTIVE software on it", for Orwellian values of $OMINOUS_ADJECTIVE such as "permitted", "approved", and so on. In fact, Trusted Gentoo is setting up a system that uses the TPM (Trusted Platform Module--"the chip") to make sure your kernel and bootloader haven't been tampered with and to keep your SSH keys from being compromised. "Trusted" simply means that there is an uncompromisable encryption and verification (signing) system in the computer. It can be used for good or evil. Linux gives you that choice.
Re:Those are pretty big changes (Score:3, Informative)
2. Why do you assume that the interest is sudden? Maybe the technology is simply deemed ready (as in tested and reliable enough) now to go into the main kernel?
Re:Those are pretty big changes (Score:2)
When you have a multi-developer environment with a single tree, you push your changes up and pull your and other people's changes down. Hopefully everything remains stable at every level except the individual developer's trees. Back in reality, of course, this is not entirely true
That's how Linux is developed... prolly on
Re:Those are pretty big changes (Score:3, Funny)
Re:Linux x (Score:5, Funny)
Eric, is that you?
Re:Linus retiring? (Score:2)
The kernel crew is making good coin at various companies or by consulting, working on a project that they enjoy.
Re:Linus retiring? (Score:2)
Maybe not but since you suggested it, go ahead.
It is free software, after all.
Re:And then... (Score:2)
This is why the bug fixes are passed back through the kernel releases.
If you constantly upgrade to the latest release then of course you will bump into glitches now and then.
The highest number is not a guarantee of stability; if you want stable, then keep your system on a kernel a few releases back and just keep it patched.
The very latest releases feature
2.6.12 will be "bleeding edge" when it's released and I wouldn't trust my working sy
Re:Just use Solaris (Score:3, Interesting)
SELinux, please. Solaris has had..
Reiser 4!? C'mon! Solaris 10 will have..
Xen you say? Eh, not to burst your bubbles but Solaris 10 now features...
Isn't that the exact point? This is noteworthy because these are features of LINUX, which LINUX didn't have before. By your arguments there would be no reason to ever start a new OS project. "Oh shit, we're adding harddisk support. That's b
I've had no problems at all. (Score:3, Interesting)
I'm running a circa-1999 machine, and have been running 2.6 since 2.6.0, and am currently running 2.6.11. I use it every day, so it isn't just sitting idle. Here is my current uptime:
At the risk of starting a religious war, are you running any binary modules ? They can cause some stability problems.
I avoid binary modules, or rather, make sure that the hardware I buy is supported by official kernel device drivers. Back in 1993, when I