Linux Kernel 2.6.30 Released
diegocgteleline.es writes "Linux kernel 2.6.30 has been released. The list of new features includes NILFS2 (a new, log-structured filesystem), a filesystem for object-based storage devices called exofs, local caching for NFS, the RDS protocol (which delivers high-performance reliable connections between the servers of a cluster), a new distributed networking filesystem (POHMELFS), automatic flushing of files on renames/truncates in ext3, ext4 and btrfs, preliminary support for the 802.11w drafts, support for the Microblaze architecture, the TOMOYO security MAC, DRM support for the Radeon R6xx/R7xx graphics cards, asynchronous scanning of devices and partitions for faster bootup, the preadv/pwritev syscalls, several new drivers and many other small improvements."
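A quick, hedged way to see which of these your distro actually enabled once you've booted the new kernel (assuming the kernel config is installed under /boot, as most distros do):

uname -r                                  # should report 2.6.30
grep -i nilfs /boot/config-$(uname -r)    # e.g. CONFIG_NILFS2_FS
grep -i tomoyo /boot/config-$(uname -r)   # CONFIG_SECURITY_TOMOYO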
DRM? (Score:4, Informative)
Why would DRM be listed as a "feature"?
Oh, wrong kind of DRM?
Re:DRM? (Score:5, Informative)
The Direct Rendering Manager (DRM) is a component of the Direct Rendering Infrastructure, a system to provide efficient video acceleration (especially 3D rendering) on Unix-like operating systems, e.g. Linux, FreeBSD, NetBSD, and OpenBSD.
It consists of two in-kernel drivers (realized as kernel modules on Linux), a generic drm driver, and another which has specific support for the video hardware. This pair of drivers allows a userspace client direct access to the video hardware.
I assume it's this. Either that, or linux now has Direct response marketing in the kernel.
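If you're curious, you can see the two-module split for yourself - a hedged sketch, assuming a Radeon card (other hardware will show a different card-specific module):

lsmod | grep drm    # shows the generic 'drm' core plus the hardware driver (e.g. 'radeon') that depends on it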
Re:DRM? (Score:4, Insightful)
The bad one (not in Linux thankfully) is Dumb Restrictions on Media.
Also stands for Dinosaurs Require Money.
Re: (Score:3, Funny)
Doesn't Really Matter:
Democrat/Republican Madness
Devours Remaining Milkshake
Re: (Score:2)
They should rename it to ADRM where A=AMD or ATI
When I saw "DRM" in the list of features I cringed.
In the world of computing, "DRM" has the same effect that calling a product/service "NAZI" would have in the rest of the world.
Re: (Score:2)
Nonono, the right kind of DRM!
Re: (Score:3, Insightful)
I have a hard time envisioning ethical uses for technology to weaponize pathogens
Would you consider it ethical to pursue the technology to gain an understanding of it for purposes of defending against it? Development of vaccines or treatments can come from such research; the US Army still practices and develops techniques for weaponizing biological and chemical agents even as the existing stockpiles are being destroyed. The military has no intentions of using them offensively, and concluded decades ago th
In related news (Score:5, Funny)
Meanwhile back at GNOME H.Q. the developers are still undecided whether to move the "Ok" button on the default help screen 10 pixels to the right. Most think it would be a good idea but a hard core few insist that such a momentous change requires further study as it may confuse new users.
A new version of the dialogue is expected in 2037.
Re: (Score:3, Insightful)
Say what you want about the glacial speed with which GNOME progresses. Their developers don't rip out 2/3 of the features of their applications and call it a "major upgrade."
There's also a key difference between 'minimalism' and 'feature-deprived'. Apple understands this, and the GNOME team seems to be catching on. Xfce's flexibility also makes it a surprisingly good environment to work in, despite being billed as a 'bare-bones' environment. KDE almost certainly doesn't understand this distinction, and
Re: (Score:3, Insightful)
Say what you want about the glacial speed with which GNOME progresses. Their developers don't rip out 2/3 of the features of their applications and call it a "major upgrade."
You obviously don't remember GNOME 2.0
POHMEL (Score:4, Funny)
Re: (Score:3, Informative)
And Evgeniy Polyakov (the POHMELFS dev) sounds like a Russian name. I guess he knows.
In Soviet Russia, file systems name hangovers after you!
DRM for Trolls (Score:4, Informative)
The Direct Rendering Manager (DRM) is a component of the Direct Rendering Infrastructure, a system to provide efficient video acceleration (especially 3D rendering) on Unix-like operating systems, e.g. Linux, FreeBSD, NetBSD, and OpenBSD.
It consists of two in-kernel drivers (realized as kernel modules on Linux), a generic drm driver, and another which has specific support for the video hardware. This pair of drivers allows a userspace client direct access to the video hardware.
From Wikipedia.
Karma Whoring FTW!
So when's KMS going to happen? (Score:2)
Does anyone know the status of kernel modesetting for R6/700? As in, being able to run a regular framebuffer console without X. I can't find any mention of anyone working on this.
Re:So when's KMS going to happen? (Score:4, Informative)
No kernel modesetting in 2.6.30 for anything but Intel chips.
There is some work in progress [phoronix.com] for ATI chips, but nothing in the mainline kernel.
In the meantime you can use uvesafb in the current kernel to get a framebuffer console if you like. But you will get a bad VT switching experience.
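For reference, uvesafb is enabled from the kernel command line - a hedged sketch (the mode string is just an example, and it also needs the v86d userspace helper installed):

# appended to the kernel line in your bootloader config:
# video=uvesafb:1024x768-32,mtrr:3,ywrap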
Re: (Score:3, Informative)
Intel integrated graphics now work properly (Score:5, Informative)
If you're using the 2.7.x Intel Xorg drivers you NEED this kernel. Anyone struggling with weird freezes, font corruption, and various other troubles: it turns out most of these problems weren't in the Intel drivers at all, but in the GEM and DRI code in the kernel. Mine's been rock solid since RC5, and RC8 finally fixed the problem with fonts under UXA.
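If you want to check which acceleration method you're actually running (UXA is selected with Option "AccelMethod" "uxa" in the Device section of xorg.conf), a quick hedged check:

grep -i accel /var/log/Xorg.0.log    # the intel driver logs whether EXA or UXA is in use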
Re: (Score:2)
I could never get XvMC working on anything, but using kernel modesetting/DRI2 crashed X every time I tried to play a video in Mplayer. I hope this will fix both issues.
Re: (Score:2)
Well, I noticed that it has to match the driver. And some installations change the association, so you end up with nVidia's XvMC paired with the stock Xorg driver, or something like that. It gets even worse when you didn't reinstall the external driver after a kernel update, so the module can't be loaded anyway.
In Gentoo you would do
emerge -atv nvidia-drivers
and
eselect xvmc set nvidia
after a kernel update, when using the nvidia binary blob drivers.
Performance? (Score:3, Informative)
Intel's integrated graphics performance has gotten progressively worse ever since the switch away from XAA, and has been rather abysmal ever since Xorg 1.5. Since then, every release of X/mesa/xf86-video-intel has made it even worse. Hopefully this release brings the entire GEM/UXA/KMS/whatever stack to a usable state. All this on a 945GM.
What's your experience with it so far? I'll try it out myself in a few days, but I'm eager to hear the results...
Throttle Capability (Score:5, Interesting)
Still no support for SLA-style percentage throttling of processing power allocated to VMs.
Case in point:
VM 1: 80% of processor allocation
VM 2: 20% of processor allocation; can borrow up to 20% of VM 1's allocation if unused.
The scheduler does great things, don't get me wrong, but when it comes to provisioning systems for various clients, some want a guarantee on the level of processing power that is available at any time. This is true in test systems as well, where your Integration, Acceptance, and Performance virtual environments may share bare iron with some production VMs.
Now this is old hat and easy with mainframes (MIP allocation\weights between LPARS\SYSPLEX), but with more and more focus on VMs and hosted VMs, SLAs on processing power are becoming more of an issue.
Nice values are not enough when writing contracts... Great work Linux team, but could we get some more granular control over VM provisioning with SLAs in mind? Yeah, we can build user-space systems to help manage VMs, but kernel-level provisioning and auditing is something we need with KVM. Gotta have the reports to show the customer you are meeting the agreed-upon SLAs.
And for my own personal use, I'd love to be able to throttle a DOS 6.22 VM to 486 speeds so some of those ancient programs can be run for historical purposes. (Without bombing the processor with dummy NOPs and other Mo'Slo-style tricks, so we keep our power consumption down.)
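For the 80/20 weighting above, the closest thing in today's kernel is probably the cgroups cpu controller - a hedged sketch, assuming CONFIG_FAIR_GROUP_SCHED is enabled and $VM1_PID/$VM2_PID stand in for the qemu-kvm process IDs. Shares give the borrow-when-idle behaviour, but they are proportional weights, not the hard SLA caps I'm asking for:

mkdir -p /cgroup
mount -t cgroup -o cpu cgroup /cgroup
mkdir /cgroup/vm1 /cgroup/vm2
echo 8192 > /cgroup/vm1/cpu.shares    # ~80% weight when both VMs are busy
echo 2048 > /cgroup/vm2/cpu.shares    # ~20% weight; may use more when vm1 is idle
echo $VM1_PID > /cgroup/vm1/tasks
echo $VM2_PID > /cgroup/vm2/tasks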
Just some musings as Linux rolls along...
Nice, But... (Score:4, Insightful)
If you want a mainframe, maybe calling IBM and ordering one is a better way to go?
Re: (Score:2)
Or he could buy commodity hardware and install a VM.
Re: (Score:2)
That is the problem, no way to throttle the VMs on commodity hardware, thus the whole point of the post.
Re: (Score:3, Insightful)
I know :-) I both strengthened your point and explained the issue to maz2331, who seems to have missed the point entirely.
Re:Throttle Capability (Score:4, Interesting)
In large enterprises, no; your test environments are still "production" machines, aka mission critical with the expected uptimes. The "test" part is what you are running in the system, not the maturity of the iron itself. When a test environment is down, that is just as important as the side the consumer sees. The hardware, especially with modern VM infrastructure, is all production class. And the whole point of VMs is to isolate an environment.
Bare iron in virtual infrastructure is just a resource now in most enterprises. It has become a fabric of sorts with SAN, iSCSI, etc. Along with clustering and failover, the model has changed drastically for how hardware and software are managed.
Virtual machines have changed the data center: VMs result in hardware pools and fabrics rather than discrete machines.
This is important for EOM\EOQ\EOY system activity.
By establishing high\med\low power fabrics VMs can be shifted as needed based on expected hardware resources.
During end of month at, say, a bank, you may transfer all of your test VMs to a low-power fabric to let production use all the power. As certain development phases come and go, you may want to shift which fabric your VM is running on. This is also crucial for testing VM functionality in various locations within the network fabric.
Example
Before we promote this code to production, let's move the ACPT systems to the HIPERF pool (where production always lives) to see if traffic is routed correctly, effectively transforming the ACPT environment VMs into a dress rehearsal environment.
For performance testing this may be necessary for a mid-sized corporation that cannot afford to duplicate its high-performance fabric. So if we know that in the third week of the second quarter the activity on HIPERF1 is at 5%, we can move the ACPT environment to HIPERF1 and run a full load test, reserving 10% for the existing HIPERF1 applications (so our load test can pin the system up to 90%).
That kind of provisioning is a pain in user space, but so damn useful. Same for facility relocation or hardware maintenance. Shutting down the MEDPERF1 fabric for hardware maintenance? Shove 50% of the VMs onto HIPERF1 and 50% onto LOWPERF1 until maintenance is complete.
The idea is that if you have a production ANYTHING in a VM, then it is usually part of a cluster or pool. If a test VM, or any VM, were capable of bringing down the whole bare-iron system, then you wouldn't have VMs at all to begin with. So if you do have PROD VMs, then the risk of another VM dropping the system has already been defined as an acceptable risk.
This is what is driving the debate with cloud computing and why mainframes still are around. Some things you can virtualize with low risk, some things can live in the cloud, and for everything else there is a mainframe.
Why another filesystem?! (Score:4, Interesting)
Can anyone explain to me why Linux has so many filesystems? Windows has had NTFS for years (admittedly, several versions, but never any compatibility issues that I've come across), and Linux has, what, 73 or something?! Is it really that hard to get it right?
Re:Why another filesystem?! (Score:5, Informative)
Re:Why another filesystem?! (Score:5, Funny)
Uh... you just got modded as informative. Genius.
Re:Why another filesystem?! (Score:5, Insightful)
Can anyone explain to me why Linux has so many filesystems?
Because one filesystem isn't optimal for all cases? Because people want to experiment with new things? Why does it matter?
Windows has had NTFS for years (admittedly, several versions, but never any compatibility issues that I've come across), and Linux has, what, 73 or something?! Is it really that hard to get it right?
And Windows has had FAT12, FAT16, FAT32, NTFS, exFAT, VFAT, FFS2, DFS, EFS. Was it really that hard to get it right?
Re:Why another filesystem?! (Score:5, Funny)
You forgot High Sierra, ISO9660, UDF.
And WinFS. Oh, wait...
Re: (Score:2, Informative)
Because one filesystem isn't optimal for all cases?
Exactly. You wouldn't use a journaling filesystem (ext3, JFS, XFS) on an SD card. In networked environments, some filesystems are optimized for general use (CIFS, NFS), while others are optimized for a clustered environment (GFS, VMFS), and others are optimized for a distributed environment (the Andrew File System, the Coda File System). Log-structured filesystems are a new technology that maximizes write throughput, something that is key to optimizing speed in write-heavy environments: this is as opposed to conv
Re:Why another filesystem?! (Score:5, Informative)
Log-structured filesystems are a new technology
Haha! This is the kind of wonderful comment I see a lot from Linux users. The first operating system to ship with a log-structured filesystem was the Sprite kernel in 1990. It was rewritten for 4.4BSD, which was released in 1993. Then, 15 years later, suddenly Linux developers hear about it and it's a brand new technology.
Linux is not the whole world. Most of the 'new' technologies in Linux appeared in other UNIX-like systems first, and many of the implementations in Linux are inferior to the originals (although some are better).
Re: (Score:3, Informative)
Considering it's called a New Implementation of a Log-structured File System, perhaps the people who think this is a new concept aren't exactly the cream of the crop of the userbase. Every group has that guy who'll say uninformed stuff; it's not exactly something worth getting a complex over.
Re:Why another filesystem?! (Score:4, Insightful)
Because one filesystem isn't optimal for all cases?
The thing is, Linux strives so hard for the "optimum" that, in doing so, it ends up in mediocrity. That's because its programmers are so concerned with micro-optimizations and top speed that they lack the ability to design properly and make good abstractions.
Would it really be that hard to have ONE good fs that you could tune to different use cases? Probably not. But the average Linux coder sees that something isn't fast in case X and goes ahead and reinvents the wheel. And why? Because the thing he just looked at wasn't designed very well either and can't be adapted easily to different use scenarios. And why? Because it was done by a half-assed coder like himself. And so the circle closes.
Linux needs more people who can properly design software and make good abstractions - instead of narrow-minded code monkeys who can't see beyond their own crap, which they are willing to completely rewrite within two revisions anyway because they lost the big picture.
Re:Why another filesystem?! (Score:4, Insightful)
Did you miss the abstraction layer Linux already has for file systems -- VFS? The layer that lets all file-related system calls be unified among all file systems, so that a file system is only responsible for actually talking to the disk? The same sort of system used by BSD and Windows? Doesn't that essentially make new file systems as minimal as possible while still allowing "tuning"?
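You can even watch the abstraction at work - every filesystem, disk-backed or not, registers with the VFS and shows up in the same list:

cat /proc/filesystems    # one line per registered fs type; 'nodev' marks those with no backing block device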
Re: (Score:2)
Why does it matter?
Complexity, attack vectors, having to spread out resources to commit to maintaining multiple file systems. I'm not saying I'm an advocate for anything specific here, and I'm not saying Windows is better, I'm just trying to answer your question on why it *can* be a bad idea to just keep pushing out file systems. Of course, Linux might need this direly, and then this fs might be a good idea.
Re: (Score:2)
Funny, but has Windows had many viruses lately? I know that "virus" has become a common term for all sorts of programs, but this is Slashdot. I thought that a virus was a program that would self-propagate, usually using a boot-block method. Not many people boot from floppies any more.
You do have some email viruses, but the vast majority of those I would call Trojans, since they depend on somebody actually running them.
Then of course you have all sorts of exploits that then try to infect other systems, but wouldn't those
Re: (Score:3, Insightful)
Must... not... feed... [kuro5hin.org] Ah, screw it.
There are a lot of reasons why Windows has so many viruses. The one touted by Windows fans is that 90% of PCs run Windows, making it a fat target. Of course, this discounts the fact that Apple sells millions of computers every year, which should make it a fat target too, but I don't see any Apple viruses either.
The 90% does seem to me to be the reason, but for a different reason - Microsoft has no incentive to "get it right". As long as they can get their OS preinstalled on
Re: (Score:2)
The reason why you have so much Windows malware and so little for Mac (aside from the smaller target) is simply the same reason you get more Windows software and less Mac software (at least in areas where core system knowledge is required, as it is for malware): fewer programmers who know the inner workings of the OS.
It's actually that simple. How many people do you know that could write a driver for MacOS? And how many for Windows?
Malware doesn't come to life automagically when someone wants it. Like any software
Re: (Score:2)
Because one virus isn't optimal for all cases? Because people want to experiment with new things? Why does it matter?
NILFS2 is better than MILFS2 (Score:5, Funny)
NILFS2 is the successor to MILFS2, which was based on the "Mother" specification.
NILFS2 is based on the "Nanny" specification, which means it is younger, firmer, *and* keeps the child nodes quiet when you are not actively updating its data.
Re: (Score:3, Informative)
It's the combination of a bit of NIH plus the freedom that Linux brings to a programmer. If you know enough C to not break things horribly and can operate Google, you can create a filesystem. There are also hundreds of proprietary filesystems from older hardware running other OSes, and Linux supports a number of those thanks to users of those older systems developing drivers for them.
I'd bet that the vast majority of filesystems supported by Linux are rarely if ever used, and when used they're operated in
Re: (Score:3, Informative)
Can anyone explain to me why Linux has so many filesystems? Windows has had NTFS for years (admittedly, several versions, but never any compatibility issues that I've come across), and Linux has, what, 73 or something?! Is it really that hard to get it right?
First up, you've got some incorrect assumptions/information about Windows.
Windows has not had just NTFS for years. Windows has gone through several different flavors of FAT (FAT12, FAT16, FAT32, exFAT, VFAT).
As far as NTFS goes... You dismiss the various versions, but then you're counting revisions to the various filesystems in Linux. NTFS has gone through four or five major revisions. Microsoft doesn't really advertise those revisions... They just keep calling it NTFS. But those revisions have added
Re: (Score:2)
Well, the idea that NTFS got it right is funny to start with. Defrag??? I still have to freaking defrag?
Take a look at the feature set of ZFS and NTFS and tell me that NTFS got it right.
Also, Windows does have a few more file systems, like FAT32 and VFAT. You can also install ext support in XP as well.
The real answer is that one file system doesn't work for everything. You only think that Windows has a single filesystem because that is the default option. If you install Ubuntu you will get EXT3 I think by
Re: (Score:2)
XP will kill Shadow Copy data from Vista on NTFS volumes. Granted, all the data should be there and reads/writes should work fine, so it's not really a serious "compatibility" issue; it's more just feature incompatibility. Of course, after going back to Vista, if you needed a prior version it's gone. And there might be some problems with System Restore if it's using the Shadow Copy featu
Re:Why another filesystem?! (Score:5, Funny)
Can anyone explain to me why Ford has so many kinds of cars? Tesla has had a 2 seat roadster for years (admittedly, several versions, but never any compatibility issues that I've come across), and Ford has, what, 73 or something?! Is it really that hard to get it right?
Re: (Score:2)
Sorry, lady, but no matter what your husband says, one size condom does NOT fit all.
Re: (Score:3, Informative)
http://www.rsdn.ru/forum/philosophy/1710544.1.aspx [www.rsdn.ru] - sorry, it's in Russian. You can download the benchmark here: http://www.rsdn.ru/File/37054/benchmark.zip [www.rsdn.ru] Basically, it creates, stat()s and deletes lots of files. As you can see, performance on Windows is quite poor.
I have several more microbenchmarks and _all_ of them run faster on Linux. As a not-very-micro benchmark: git works way faster on Linux.
And it's not a problem of NTFS itself, because ntfs-3g on my computer _still_ works faster for a lot of oper
Trusted Computing Slithered In? (Score:5, Interesting)
Integrity Management Architecture
Contributor: IBM
Recommended LWN article: http://lwn.net/Articles/227937/ [lwn.net]
The Trusted Computing Group(TCG) runtime Integrity Measurement Architecture(IMA) maintains a list of hash values of executables and other sensitive system files, as they are read or executed. If an attacker manages to change the contents of an important system file being measured, we can tell. If your system has a TPM chip, then IMA also maintains an aggregate integrity value over this list inside the TPM hardware, so that the TPM can prove to a third party whether or not critical system files have been modified.
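For the curious, a hedged peek at what IMA exposes at runtime (assuming the kernel was booted with the ima_tcb parameter and securityfs is available):

mount -t securityfs securityfs /sys/kernel/security         # if not already mounted
head /sys/kernel/security/ima/ascii_runtime_measurements    # one hash line per measured file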
From the recommended article, the key dilemma:
There are clear advantages to a structure like this. A Linux-based teller machine, say, or a voting machine could ensure that it has not been compromised and prove its integrity to the network. Administrators in charge of web servers can use the integrity code in similar ways. In general, integrity management can be a powerful tool for people who want to be sure that the systems they own (or manage) have not been reconfigured into spam servers when they weren't looking.
The other side of this coin is that integrity management can be a powerful tool for those who wish to maintain control over systems they do not own. Should it be merged, the kernel will come with the tools needed to create a locked-down system out of the box. As these modules get closer to mainline inclusion, we may begin to see more people getting worried about them. Quite a few kernel developers may oppose license terms intended to prevent "tivoization," but that doesn't mean they want to actively support that sort of use of their software. Certainly it would be harder to argue against the shipping of locked-down, Linux-based gadgets when the kernel, itself, provides the lockdown tools.
OK, maybe this is overdramatic, but trading freedom from third-party oversight through trusted computing for the security of first-party oversight through trusted computing seems a little like:
"They that can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." - Benjamin Franklin
But I can see both sides. Pondering... what are your thoughts?
Re: (Score:3, Informative)
don't worry, if you have the source code, you have the power to remove the DRM. Freedom, yeah, baby.
Unfortunately, incorrect. I'm a programmer and I have studied the Trusted Computing technical specifications in depth.
One of the central points of Trusted Computing is exactly to defeat that. Trusted Computing in fact manages to make the source code substantially useless. Under Trusted Computing you can "remove the DRM" lines of code from the source, but all that does is leave you with unreadable files and an
Re: (Score:3, Informative)
don't buy it if it's using the TPM hardware
While I agree with that for moral and philosophical reasons, the fact is that from a strictly practical or functional view, that is essentially incorrect.
Trusted Computing is incredibly insidious. It is essentially the old Microsoft "Embrace, Extend, and Exterminate" tactic. The way Trusted Computing is designed, there is absolutely no practical or functional reason NOT to buy a computer with a TPM in it. That's the "Embrace" part. A TPM computer can do everything an
Re: (Score:3, Informative)
Will there come a day when all computers will ship with TPM?
Members of the Trusted Computing Group have explicitly stated the intention for all motherboards to come with a TPM as standard hardware. An explicit design goal was to keep the chip low-processing-power and simple and small enough to be a sub-$5 item mounted on all motherboards and in all cellphones and included in all digital TVs and in all iPod-type media devices. A lot of work went into minimizing the chip's horsepower requirements and
Yes to NFS local caching! (Score:2)
Some Great Work...But "rt2500 Ralink Drivers" (Score:4, Informative)
Have wireless "issues" been fixed with this release?
I have a laptop with generic Ralink rt2500 wifi hardware.
For many kernel releases I have had to compile separate drivers (the legacy serialmonkey ones) because the "stock" drivers are woefully unstable.
I either lose my connection, it's painfully slow (I have tried the "rate 54" fix), or I cannot reconnect to my network at all.
I don't mind compiling separate drivers (a huge benefit of open source stuff & Linux) but I am concerned about how long I will be able to do this (e.g. something changes in the kernel and makes the "external" driver break - in fact, actual development of the legacy drivers has ceased - http://rt2x00.serialmonkey.com/wiki/index.php/Main_Page [serialmonkey.com])?
I know I should not be moaning about this, but this issue has been around for ages and seems to affect a lot of hardware.
This is my only niggle with Linux and I am grateful for everything. Computing has become much more interesting and fun again.
Huge thanks to Linus and the kernel developers.
Re: (Score:2)
Can't remember when I last had dealings with this, though I have used Ralink devices....
Looking at rt2x00, kernels from 2.6.24 onwards should have the rt2x00 driver right there in the kernel, so it should Just Work(TM). You shouldn't need to build the older, legacy drivers any more.
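A hedged way to check whether the in-kernel driver is actually binding to the card:

lsmod | grep rt2          # rt2500pci and rt2x00lib should both be loaded
dmesg | grep -i rt2x00    # probe and firmware errors show up here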
I agree though, it's a pain making sure to rebuild the driver modules every time you have a kernel update. I've had to do it with the Atheros chipset in my laptop. Hopefully, as these device drivers become official, we get to
Re: (Score:2)
That's right - I think it was that version of the kernel where I started having problems.
Maybe the rt2500 in some laptops affects the driver - I'm not sure.
I have to blacklist the following drivers: rt2x00pci, rt2500pci and rt2500lib. (I think that's right.)
They just don't seem to work properly.
I then compile the legacy version of the rt2500 drivers.
(Mind you, I could not get the drivers to compile properly on SUSE 11.1 - it's the only thing that stopped me using SUSE.)
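For anyone wanting to do the same, the blacklist usually goes in a modprobe config file - a hedged sketch (the file name varies by distro, and the lib module is rt2x00lib if memory serves):

# /etc/modprobe.d/blacklist-rt2500.conf
blacklist rt2x00pci
blacklist rt2500pci
blacklist rt2x00lib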
Re: (Score:2)
I don't have any issues with rt2500 in 2.6.28. However, I do have issues with ath9k, and also issues using smbfs and autofs, like my card is running at half duplex.
Re: (Score:3, Insightful)
That's not the half of it. The kernel devs appear to break things - intentionally - and leave them that way.
Case in point: PCMCIA was/is supposedly being rewritten. It broke around kernel 2.6.27 for me (I think) on several systems with Ricoh integrated chipsets: I'm unable to use my CardBus or CF slot unless I boot with the device in the slot (and never remove it). Supposedly (according to mailing list info I found) this is due to a 'rewrite' of the PCMCIA architecture code. I guess they didn't want to leave
Ralink Driver Clarification (Score:5, Informative)
When they say "Support for rt3070 driver for recent RaLink Wi-Fi chipsets", they really mean support for RT2870, RT2770, RT307X, RT3572 chipsets (they're all the same, with just features enabled or disabled, or signal strength improved between them).
This was the one last thing keeping me from fully switching over to Linux. Netgear and a lot of other Wireless-N USB adapters use these chipsets, and they are the best around.
Previously, installing this driver was the largest pain in the ass I've ever had to go through as a Linux noob (http://ubuntuforums.org/showthread.php?t=960642) and I'm very, very glad to see that this chipset is now supported.
The reason it was so hard is that the vendor's controlling app for the USB device has many advanced features you normally don't see on a wireless adapter (acting as a router, full Cisco network compatibility, etc.).
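With 2.6.30 it should be roughly plug-and-go - a hedged sketch, assuming your distro built the staging driver (the module name may be rt2870sta or rt3070sta depending on the kernel):

modprobe rt2870sta    # or rt3070sta
dmesg | tail          # watch the stick get probed
iwconfig              # the new wireless interface (usually ra0) should appear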
Re: (Score:2)
Another great installation tutorial for SLED
http://forums.novell.com/novell-product-support-forums/suse-linux-enterprise-desktop-sled/sled-hardware/340340-ralink-rt2870-usb-stick-works-well-sled-10-a.html [novell.com]
Just get your shit together or give up (Score:2, Insightful)
If Linux is ever going to make it on the desktop, developers are going to need to get their shit together and: make webcams work (they don't in the majority of cases at the moment); stop regressions in graphics drivers; get other hardware working, e.g. iPods; make dual-screen work without spending 20 minutes fucking around (see Lunduke's presentation); get GNOME onto Qt and develop a decent HIG (sorry, the current GNOME HIG is an excuse to put off doing anything about bugs; see Apple's for how this should
It's too easy these days ... (Score:3, Informative)
Like the new compression stuff. Compressed kernel under 1MB again - first time I've seen that in a while.
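The size win comes from the new kernel compression choices in 2.6.30 - a hedged sketch of picking LZMA (config symbols per the 2.6.30 Kconfig):

# General setup -> kernel compression mode, or set in .config:
# CONFIG_KERNEL_LZMA=y   (the old default is CONFIG_KERNEL_GZIP; CONFIG_KERNEL_BZIP2 is the third option)
make bzImage
ls -lh arch/x86/boot/bzImage    # compare against your old kernel image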
Now to try it on my Acer Aspire One...
NILFS2 (Score:3, Interesting)
So I've been reading that NILFS is the dog bollocks when it comes to solid-state disks in terms of speed and longevity of the disk. However, what I'd like to know is whether any of the advantages will hold for regular old mechanical disks as well. If so, I'd love to try NILFS. Having a real honest-to-goodness versioning filesystem with instant snapshots on my file servers would be so great, I can hardly find the words to describe it.
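For reference, the continuous-snapshot workflow looks roughly like this - hedged, assuming nilfs-utils is installed, the fs lives on /dev/sda5, and checkpoint 1234 is one that lscp showed you:

lscp                        # list the checkpoints NILFS has been creating automatically
chcp ss /dev/sda5 1234      # promote checkpoint 1234 to a persistent snapshot
mkdir -p /mnt/snap
mount -t nilfs2 -r -o cp=1234 /dev/sda5 /mnt/snap    # snapshots mount read-only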
Comment removed (Score:4, Funny)
Re:Sad, but true: (Score:4, Funny)
Eric Allman might well agree.
Re:Sad, but true: (Score:5, Funny)
Actually, you COULD use Linux as an OS for a British cigarette vending machine, in which case it WOULD be for fags!
Re:DRM support? In the kernel? (Score:5, Informative)
Different DRM. This isn't 'rights mgmt' DRM.
Sometimes, 3 letters can mean different things.
Re: (Score:2)
Yes, yes and yes.
Next please.
Re: (Score:2)
Can I run MS Office?
Yes.
Can I have a webcam conversation?
Yes.
Can I play games?
Yes.
Re: (Score:2)
Plenty of good games for Linux. Not really the kernel's domain though, dufus.
Re: (Score:2)
Yes, you can. No, it isn't.
Re: (Score:2)
There are no plans for an unstable branch. Without a 2.7, there will never be a 2.8.
Re:2.8.x kernel soon? (Score:4, Funny)
They should just drop the 2.8. prefix. Linux 30 sounds much cooler than 2.8.30, and man it's got to be light years ahead of Windows 7!
Re: (Score:3, Funny)
waiting for linux 3.1415
Re: (Score:2)
Didn't they change the version numbering so that they don't do the specific odd-numbered unstable branches anymore?
Re: (Score:3, Informative)
Well, they changed their whole development methodology so that they don't have an unstable branch anymore and do feature releases roughly every three months. So, kind of.
Re:2.8.x kernel soon? (Score:5, Funny)
- 2.6.<odd>: still a stable kernel, but accept bigger changes leading up
to it (timeframe: a month or two).
- 2.<odd>.x: aim for big changes that may destabilize the kernel for
several releases (timeframe: a year or two)
- <odd>.x.x: Linus went crazy, broke absolutely _everything_, and rewrote
the kernel to be a microkernel using a special message-passing version
of Visual Basic. (timeframe: "we expect that he will be released from
the mental institution in a decade or two").
Re:2.8.x kernel soon? (Score:5, Funny)
... rewrote the kernel to be a microkernel using a special message-passing version of Visual Basic.
Oh, so that is what GNU/Hurd guys are up to these days!
Re: (Score:3, Interesting)
In Windows, something like this Just Works(tm).
Not always. I had a USB WiFi adapter that I attempted to install on a Windows laptop and after several attempts at uninstalling and reinstalling the driver, I took it back to the store and got a different model. Probably that WiFi adapter just sucked, but still, just because something "Just Works(tm)" for one OS and one piece of hardware doesn't mean that is always the case.
Re:LINUX IS SHIT (Score:4, Interesting)
Interesting you'd bring up what "Just Works" in Windows.
My wifi card in my home PC doesn't work in Windows out of the box, and doesn't have a readily available XP driver. I had to hunt for a generic driver and jump through hoops to get it to work.
On the other hand, the same wifi card, in the same machine, Just Works in Linux. No fuss, no command line, no configuration. Just enter my WEP key when prompted.
In Windows, my sound card doesn't work *AT ALL*. Can't find a driver. Not even from the mainboard mfg.
On the other hand, the same sound card, in the same machine, Just Works in Linux.
Go figure... apparently my system is confused :P
Or maybe it's you that is confused. Linux now supports more hardware natively than any other operating system in existence. And thanks to projects like the Linux Driver Project, which develops drivers for companies *FOR FREE*, that's unlikely to change.
Don't get me wrong, I'm sure Windows has a place in this world, but Windows should no longer be allowed to lead the market on the desktop. It's far too dangerous.
ext4? (Score:4, Funny)
With Reiser in jail, the only thing you have left is to blame ext4. :)
Err, excuse me. The application developers.