
Linux Kernel 2.6.30 Released

diegocgteleline.es writes "Linux kernel 2.6.30 has been released. The list of new features includes NILFS2 (a new, log-structured filesystem), a filesystem for object-based storage devices called exofs, local caching for NFS, the RDS protocol (which delivers high-performance reliable connections between the servers of a cluster), a new distributed networking filesystem (POHMELFS), automatic flushing of files on renames/truncates in ext3, ext4 and btrfs, preliminary support for the 802.11w drafts, support for the Microblaze architecture, the Tomoyo security MAC, DRM support for the Radeon R6xx/R7xx graphic cards, asynchronous scanning of devices and partitions for faster bootup, the preadv/pwritev syscalls, several new drivers and many other small improvements."
  • Nice, But... (Score:4, Insightful)

    by maz2331 ( 1104901 ) on Wednesday June 10, 2009 @10:06AM (#28279053)

    If you want a mainframe, maybe calling IBM and ordering one is a better way to go?

  • by harryandthehenderson ( 1559721 ) on Wednesday June 10, 2009 @10:08AM (#28279089)

    Can anyone explain to me why Linux has so many filesystems?

    Because one filesystem isn't optimal for all cases? Because people want to experiment with new things? Why does it matter?

    Windows has had NTFS for years (admittedly, several versions, but never any compatibility issues that I've come across), and Linux has, what, 73 or something?! Is it really that hard to get it right?

    And Windows has had FAT12, FAT16, FAT32, NTFS, exFAT, VFAT, FFS2, DFS, EFS. Was it really that hard to get it right?

  • by Anonymous Coward on Wednesday June 10, 2009 @10:14AM (#28279175)

    Do you work in marketing? Who cares what it's called.

  • Re:LINUX IS SHIT (Score:1, Insightful)

    by Anonymous Coward on Wednesday June 10, 2009 @10:14AM (#28279177)

    And the Lunix community wonders why there is such a public relations problem between users and developers... User complains about the experience in Lunix and the difficulty of doing what should be something simple and the response is "RTFM", "PEBKAC", or other various insults.

    In Windows, something like this Just Works(tm). Perhaps you should learn something before throwing stones at others. A little humility would go a LONG way.

  • Re:DRM? (Score:4, Insightful)

    by mcgrew ( 92797 ) on Wednesday June 10, 2009 @10:16AM (#28279205) Homepage Journal

    The bad one (not in Linux thankfully) is Dumb Restrictions on Media.

Also stands for Dinosaurs Require Money.

  • Re:LINUX IS SHIT (Score:1, Insightful)

    by Anonymous Coward on Wednesday June 10, 2009 @10:32AM (#28279471)

Exactly. You're not alone in having had problems with wifi adapters in Windows.
    If it happens in Windows, it tends to be the peripheral that gets blamed, because Windows, allegedly, Just Works(tm). If it happens in Linux/GNU, it's the OS which is a POS and toys get thrown out of the pram.

  • Re:Nice, But... (Score:3, Insightful)

    by ta bu shi da yu ( 687699 ) on Wednesday June 10, 2009 @10:38AM (#28279567) Homepage

    I know :-) I both strengthened your point and explained the issue to maz2331, who seems to have missed the point entirely.

  • by Anonymous Coward on Wednesday June 10, 2009 @10:42AM (#28279621)

    Because one filesystem isn't optimal for all cases?

    The thing is, Linux strives so hard for the "optimum" that, while doing so, they end up in mediocrity. That's because its programmers are so concerned with micro-optimizations and top speed that they lack the ability to design properly and make good abstractions.

    Would it really be that hard to have ONE good fs that you could tune to different use cases? Probably not. But the average Linux coder sees that something isn't fast in case X and goes ahead redoing the entire wheel. And why? Because the thing he just looked at wasn't designed very well either and can't be adapted easily to different use scenarios. And why? Because it was done by a half-assed coder like himself. And so the circle closes.

    Linux needs more people that can properly design software and make good abstractions - instead of narrow-minded code monkeys that can't see beyond their own crap that they are willing to completely rewrite in two revisions anyway because they lost the big picture.

  • by mcgrew ( 92797 ) on Wednesday June 10, 2009 @10:45AM (#28279677) Homepage Journal

    Must... not... feed... [kuro5hin.org] Ah, screw it.

    There are a lot of reasons why windows has so many viruses. The one touted by Windows fans is that 90% of PCs have Windows, making it a fat target. Of course, this discounts the fact that Apple sells millions of computers every year, which should make it a fat target, too, but I don't see any Apple viruses either.

But the 90% does seem to me to be the reason, albeit a different one - Microsoft has no incentive to "get it right". As long as they can get their OS preinstalled on all the Dells and HPs and the rest, and aren't losing revenue by shipping an insecure OS, why bother? After all, their only aim, unlike Linux's aim, is to make money, like every other corporation. You don't start a corporation to better the world, you start a corporation to make money. Period. Apple makes their PCs secure because they have to - they don't own the market like MS does.

    And the 90% also means that Windows users are, on the whole, less tech-savvy, making not only the OS but its users easy targets. A non-tech savvy user will install a trojan where someone who knows better will think twice. A tech savvy user will have a password like Xc4-99_Zza?R2D2, while most Windows users will use something like 1234.

    Windows almost requires its users to run as admin (remember, Microsoft has no incentive to do it differently) while better written OSes don't need this. No other OS has anything as stupidly dangerous as Active-X.

    There are many, many more reasons. These are just a few that popped into the top of my head.

  • Re:DRM? (Score:3, Insightful)

    by MadKeithV ( 102058 ) on Wednesday June 10, 2009 @10:51AM (#28279763)
    Derivative Regurgitated Music?
  • by Anonymous Coward on Wednesday June 10, 2009 @10:54AM (#28279815)

If Linux is ever going to make it on the desktop, developers are going to need to get their shit together and: make webcams work (they don't in the majority of cases at the moment); stop regressions in graphics drivers; get other hardware working, e.g. iPods; make dual-screen work without spending 20 minutes fucking around (see Lunduke's presentation); get GNOME on to QT and develop a decent HIG (sorry, the current GNOME HIG is an excuse to put off doing anything about bugs, see Apple's for how this should be done); finally pick one -- namely .deb -- package format and stick to it, so that developers aren't put off by the idea of spending days creating packages for different platforms.

    I'm sure some smug twat will pop-up and say how they don't care about Linux on the desktop, my answer is: why are you bothering to reply, if you don't care? There are obviously loads of people who do care, just look around at all the advocates. They told me Linux is ready for the desktop, and I tried it, only to find everything's slower, my iPod didn't work, then upgrading hosed my sound and video!

    If you're thinking of advocating Linux to someone: stop! Go and do some work on getting drivers working instead, your time won't be wasted and you won't lose any friends.

  • Re:DRM? (Score:3, Insightful)

    by Martin Blank ( 154261 ) on Wednesday June 10, 2009 @11:57AM (#28280627) Homepage Journal

    I have a hard time envisioning ethical uses for technology to weaponize pathogens

    Would you consider it ethical to pursue the technology to gain an understanding of it for purposes of defending against it? Development of vaccines or treatments can come from such research; the US Army still practices and develops techniques for weaponizing biological and chemical agents even as the existing stockpiles are being destroyed. The military has no intentions of using them offensively, and concluded decades ago that the effectiveness over conventional weapons is non-existent when you factor in all of the costs of extra handling precautions and risks that come with actual use. However, since other nations (and more recently non-state entities) were continuing to develop weapons, the need to understand how they could be used and how to react was important.

  • by profplump ( 309017 ) <zach-slashjunk@kotlarek.com> on Wednesday June 10, 2009 @01:24PM (#28282023)

Did you miss the abstraction layer linux already has for file systems -- VFS? The layer that lets all file-related system calls be unified across all file systems, so that a file system is only responsible for actually talking to the disk? The same sort of system used by BSD and Windows? Doesn't that essentially make new file systems as minimal as possible while still allowing "tuning"?

  • by dotgain ( 630123 ) on Wednesday June 10, 2009 @03:20PM (#28283677) Homepage Journal
    • He's not trolling, he's moderated troll. He's talking about touchy subjects.
    • Tarballs have no inherent way of specifying package dependencies. Basically, a tarball is /just/ a tarball. Why .deb's? Well, they're prevalent, and seem to have worked well. There's a wealth of utilities for working with them. They're not the only choice, but they wouldn't be a bad one.
    • Yes, that's what he meant. I don't know if I entirely agree, but I see what he's getting at for sure. While GTK is indeed horrible, getting Gnome on QT just is not going to happen. Gnome will die first.
    • Exactly, as long as it's a hobby OS, it's always going to look and feel like one. That's what's really giving the commercial OS's the edge here. Bosses. Deadlines. Sackings. All alien to FOSS. Where on one hand you've got a leader taking responsibility for the direction of his team and project, on the other you get faction-ism, infighting and ultimately, forks (which the FOSS crowd talk about like it's a good thing).

I was explaining to my sister the other day that Linux is not one OS, but available through any of thousands of distributions. Yes, thousands. How did there come to be so many? I explained it with an analogy to the 'parallel universe'. For every single yes/no left/right up/down design consideration, every single distribution forks into exactly two, and goes on to the next design consideration. The process will continue until every single person including Steve Ballmer is running their own distro, and society as we know it will end with each person slicing himself in two while arguing over which suicidal filesystem to use as the default.

  • by CAIMLAS ( 41445 ) on Wednesday June 10, 2009 @06:36PM (#28286515)

    That's not the half of it. The kernel devs appear to break things - intentionally - and leave them that way.

    Case in point, PCMCIA was/is supposedly being rewritten. It broke around kernel 2.6.27 for me (I think) on several systems with ricoh integrated chipsets: I'm unable to use my cardbus or CF slot unless I boot with the device in the slot (and not remove it). Supposedly (according to mailing list info I found) this is due to a 'rewrite' of the pcmcia architecture code. I guess they didn't want to leave it well enough alone until they got it right.

    Likewise, I have a USB card reader (recognized as "Bus 002 Device 002: ID 0bda:0151 Realtek Semiconductor Corp. Mass Stroage Device") which does not work with the current Ubuntu 9.04 stock kernel. It's recognized, but no devices plugged in work. It worked under 8.10 just fine.

    Maybe it's Ubuntu breaking things, but since it appears to be a cross-distro problem (in both cases), I'm betting it's just the kernel devs doing "business as usual" and "letting the distros sort it out".

  • Re:In related news (Score:3, Insightful)

    by moosesocks ( 264553 ) on Wednesday June 10, 2009 @08:37PM (#28287635) Homepage

Say what you want about the glacial speed with which GNOME progresses. Their developers don't rip out 2/3 of the features of their applications and call it a "major upgrade."

There's also a key difference between 'minimalism' and 'feature-deprived'. Apple understands this, and the GNOME team seems to be catching on. XFce's flexibility also makes it a surprisingly good environment to work in, despite being billed as a 'bare bones' environment. KDE almost certainly doesn't understand this distinction, and I'd frankly be surprised if they had any sort of UI-design review process in place.

    Take a look [kde.org] at the most recent release of Amarok, and tell me how the user interface effectively helps the user complete the task that the program was designed to accomplish. Now consider the percentage of screen real-estate that the application devotes to this task (it's around 30%, although you could argue that it's even less than that).

Now compare it to Winamp's famous classic skin, which only takes up a fraction of a 640x480 monitor, has collapsible UI elements to make it smaller if desired, and offers more options to the user up-front with textually-labeled controls. I can only guess what any one of the 7 icons on the bottom right corner of the previously-linked screenshot actually does. I'll give credit to the KDE team for moving away from the 'Dozens of identical-looking blue icons' paradigm, although the new standard frankly isn't much better.

  • Re:In related news (Score:3, Insightful)

    by kigrwik ( 462930 ) on Thursday June 11, 2009 @04:04AM (#28290481)

Say what you want about the glacial speed with which GNOME progresses. Their developers don't rip out 2/3 of the features of their applications and call it a "major upgrade."

You obviously don't remember GNOME 2.0.
