
Linux Kernel 2.6.30 Released (341 comments)

diegocgteleline.es writes "Linux kernel 2.6.30 has been released. The list of new features includes NILFS2 (a new, log-structured filesystem), a filesystem for object-based storage devices called exofs, local caching for NFS, the RDS protocol (which delivers high-performance reliable connections between the servers of a cluster), a new distributed networking filesystem (POHMELFS), automatic flushing of files on renames/truncates in ext3, ext4 and btrfs, preliminary support for the 802.11w drafts, support for the Microblaze architecture, the Tomoyo security MAC, DRM support for the Radeon R6xx/R7xx graphics cards, asynchronous scanning of devices and partitions for faster bootup, the preadv/pwritev syscalls, several new drivers and many other small improvements."
This discussion has been archived. No new comments can be posted.

  • 2.8.x kernel soon? (Score:1, Interesting)

    by xjlm ( 1073928 ) on Wednesday June 10, 2009 @09:49AM (#28278809)
    I remember when I was running the 2.4.29 kernel in Mandrake 9.0, when it jumped to the 2.6 kernel. Maybe some big improvements are in the wind...
  • Throttle Capability (Score:5, Interesting)

    by kenp2002 ( 545495 ) on Wednesday June 10, 2009 @10:01AM (#28278997) Homepage Journal

    Still no support for SLA-style percentage throttling of processing power allocated to VMs.

    Case in point:

    VM 1: 80% of processor utilization
    VM 2: 20% of processor utilization; can borrow up to 20% of
          VM 1's allocation if unused.

    The scheduler does great things, don't get me wrong, but when it comes to provisioning systems for various clients, some want a guarantee on the level of processing power that is available at any time. This is true in test systems as well, where your Integration, Acceptance, and Performance virtual environments may share bare iron with some production VMs.

    Now this is old-hat easy with mainframes (MIP allocation/weights between LPARs in a SYSPLEX), but with more and more focus on VMs and hosted VMs, SLAs on processing power are becoming more of an issue.

    Nice values are not enough when writing contracts... Great work, Linux team, but could we get more granular control over VM provisioning with SLAs in mind? Yeah, we can build user-space systems to help manage VMs, but kernel-level provisioning and auditing is something we need with KVM. Gotta have the reports to show the customer you are meeting the agreed-upon SLAs.

    And for my own personal use, I'd love to be able to throttle a DOS 6.22 VM to 486 speeds so some of those ancient programs can be run for historical purposes. (Without bombing the processor with dummy NOPs and other MO'SLO-style tricks, so we keep our power consumption down.)

    Just some musings as Linux rolls along...
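    The cgroups work that landed in 2.6.24 gets partway there: cpu.shares is a relative weight per group, which gives exactly the "borrow when unused" behavior described above, though not a hard SLA-style cap. A minimal sketch of the 80/20 split, assuming cgroup v1 is mounted at /sys/fs/cgroup/cpu; the group names and the PID placeholder are hypothetical:

```shell
# cpu.shares is a relative weight; the default of 1024 corresponds to one
# "full share", so an 80/20 split maps to weights in a 4:1 ratio.
VM1_PCT=80
VM2_PCT=20
VM1_SHARES=$((VM1_PCT * 1024 / 100))
VM2_SHARES=$((VM2_PCT * 1024 / 100))
echo "$VM1_SHARES $VM2_SHARES"   # → 819 204

# On a real host (root required; group and PID names are hypothetical):
#   mkdir /sys/fs/cgroup/cpu/vm1 /sys/fs/cgroup/cpu/vm2
#   echo $VM1_SHARES > /sys/fs/cgroup/cpu/vm1/cpu.shares
#   echo $VM2_SHARES > /sys/fs/cgroup/cpu/vm2/cpu.shares
#   echo <qemu-pid-for-vm1> > /sys/fs/cgroup/cpu/vm1/tasks
```

    Because shares are relative weights, vm2 automatically borrows vm1's unused cycles; a hard, auditable per-VM cap is exactly the piece that is still missing.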

  • by Psiren ( 6145 ) on Wednesday June 10, 2009 @10:04AM (#28279015)

    Can anyone explain to me why Linux has so many filesystems? Windows has had NTFS for years (admittedly, several versions, but never any compatibility issues that I've come across), and Linux has, what, 73 or something?! Is it really that hard to get it right?

  • by Bob9113 ( 14996 ) on Wednesday June 10, 2009 @10:05AM (#28279029) Homepage

    Integrity Management Architecture

    Contributor: IBM

    Recommended LWN article: http://lwn.net/Articles/227937/ [lwn.net]

    The Trusted Computing Group (TCG) runtime Integrity Measurement Architecture (IMA) maintains a list of hash values of executables and other sensitive system files, as they are read or executed. If an attacker manages to change the contents of an important system file being measured, we can tell. If your system has a TPM chip, then IMA also maintains an aggregate integrity value over this list inside the TPM hardware, so that the TPM can prove to a third party whether or not critical system files have been modified.

    From the recommended article, the key dilemma:

    There are clear advantages to a structure like this. A Linux-based teller machine, say, or a voting machine could ensure that it has not been compromised and prove its integrity to the network. Administrators in charge of web servers can use the integrity code in similar ways. In general, integrity management can be a powerful tool for people who want to be sure that the systems they own (or manage) have not been reconfigured into spam servers when they weren't looking.

    The other side of this coin is that integrity management can be a powerful tool for those who wish to maintain control over systems they do not own. Should it be merged, the kernel will come with the tools needed to create a locked-down system out of the box. As these modules get closer to mainline inclusion, we may begin to see more people getting worried about them. Quite a few kernel developers may oppose license terms intended to prevent "tivoization," but that doesn't mean they want to actively support that sort of use of their software. Certainly it would be harder to argue against the shipping of locked-down, Linux-based gadgets when the kernel, itself, provides the lockdown tools.

    OK, maybe this is overdramatic, but trading freedom from third-party oversight through trusted computing for the security of first-party oversight through trusted computing seems a little like:

    "They that can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." - Benjamin Franklin

    But I can see both sides. Pondering... what are your thoughts?
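    For context, on an IMA-enabled kernel the measurement list described above is readable from securityfs as plain text, one entry per measured file. A sketch of pulling pathnames out of it; the sample entry and its truncated hashes are illustrative:

```shell
# On a real system (root, securityfs mounted):
#   cat /sys/kernel/security/ima/ascii_runtime_measurements
# Each "ima" template entry has the fields:
#   PCR  template-hash  template-name  file-hash  pathname
entry="10 8d7b3a2c... ima 1d8d5c9e... /bin/bash"
measured_path=$(echo "$entry" | awk '{print $5}')
echo "$measured_path"   # → /bin/bash
```

    Comparing the file-hash column against known-good values is what lets an administrator (or a third party holding the TPM quote) decide whether a measured file has been tampered with.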

  • by TooMuchToDo ( 882796 ) on Wednesday June 10, 2009 @10:12AM (#28279155)
    Have you looked at pNFS for performance reasons? We use it with upwards of 300 TB of spinning media and 17 PB of tape, and it works like a champ.
  • Re:LINUX IS SHIT (Score:3, Interesting)

    by morgan_greywolf ( 835522 ) on Wednesday June 10, 2009 @10:23AM (#28279315) Homepage Journal

    In Windows, something like this Just Works(tm).

    Not always. I had a USB WiFi adapter that I attempted to install on a Windows laptop and after several attempts at uninstalling and reinstalling the driver, I took it back to the store and got a different model. Probably that WiFi adapter just sucked, but still, just because something "Just Works(tm)" for one OS and one piece of hardware doesn't mean that is always the case.

  • by kenp2002 ( 545495 ) on Wednesday June 10, 2009 @12:43PM (#28281355) Homepage Journal

    In large enterprises, no: your test environments are still "production" machines, i.e. they are mission-critical with the expected uptimes. The "test" part is what you are running in the system, not the maturity of the iron itself. When a test environment is down, that is just as important as the side the consumer sees. The hardware, especially with modern VM infrastructure, is all production class; the whole point of VMs is to isolate an environment.

    Bare iron in virtual infrastructure is just a resource now in most enterprises. It has become a fabric of sorts, with SAN, iSCSI, etc. Along with clustering and failover, the model for how hardware and software are managed has changed drastically.

    Virtual machines have changed the data center: VMs now result in hardware pools and fabrics rather than discrete machines.

    This is important for EOM/EOQ/EOY system activity.

    By establishing high/medium/low-power fabrics, VMs can be shifted as needed based on expected hardware resources.

    During end of month at, say, a bank, you may transfer all of your test VMs to a low-power fabric to let production use all the power. As certain development phases come and go, you may want to shift which fabric your VM is running on. This is also crucial for testing VM functionality in various LOCATIONs within the network fabric.

    Example:
    Before we promote this code to production, let's move the ACPT systems to the HPERF pool (where production always exists) to see if traffic is routed correctly, effectively transforming the ACPT environment VMs into a dress-rehearsal environment.

    For performance testing this may be necessary for a mid-sized corporation that cannot afford to duplicate its high-performance fabric. Say we know that in the third week of the second quarter the activity on HIPERF1 is at 5%; we can move the ACPT environment to HIPERF1 and run a full load test, reserving 10% for the existing HIPERF1 applications (so our load test can pin the system up to 90%).

    That kind of provisioning is a pain in user space, but so damn useful. Same for facility relocation or hardware maintenance. Shutting down the MEDPERF1 fabric for hardware maintenance? Shove 50% of the VMs onto HIPERF1 and 50% onto LOWPERF1 until maintenance is complete.

    The idea is that if you have a production ANYTHING in a VM, then it is usually part of a cluster or pool. If a test VM, or any VM, were capable of bringing down the whole bare-iron system, then you wouldn't have VMs at all to begin with. So if you do have PROD VMs, then the risk of another VM dropping the system has already been defined as an acceptable risk.

    This is what is driving the debate with cloud computing and why mainframes still are around. Some things you can virtualize with low risk, some things can live in the cloud, and for everything else there is a mainframe.
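    The end-of-month shuffling described above can be approximated in user space today. A toy sketch of the pool-selection policy, with the fabric names taken from the comment; the thresholds and the virsh migration target are purely illustrative:

```shell
# Pick a fabric for test VMs based on expected production load (percent).
# Thresholds are made up for illustration.
pick_pool() {
  prod_load=$1
  if [ "$prod_load" -ge 80 ]; then
    echo LOWPERF1    # EOM/EOQ/EOY: production needs the fast fabric
  elif [ "$prod_load" -ge 50 ]; then
    echo MEDPERF1
  else
    echo HIPERF1     # quiet period: test VMs may use the fast fabric
  fi
}
pick_pool 90   # → LOWPERF1

# On a libvirt host the actual move is a live migration, e.g.:
#   virsh migrate --live acpt-vm qemu+ssh://lowperf1.example.com/system
```

    This is exactly the kind of policy the parent argues belongs closer to the kernel, so that the placement decision and the SLA accounting come from the same audited source.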

  • Re:LINUX IS SHIT (Score:4, Interesting)

    by Synchis ( 191050 ) on Wednesday June 10, 2009 @12:43PM (#28281369) Homepage Journal

    Interesting that you'd bring up what "Just Works" in Windows.

    My wifi card in my home PC doesn't work in windows out of the box, and doesn't have a readily available XP driver. I had to hunt for a generic driver and jump through hoops to get it to work.

    On the other hand, the same wifi card, in the same machine Just Works in Linux. No fuss, no command line, no configuration. Just enter my wep key when prompted.

    In Windows, my sound card doesn't work *AT ALL*. Can't find a driver. Not even from the mainboard mfg.

    On the other hand, the same sound card, in the same machine Just Works in Linux.

    Go figure... apparently my system is confused :P

    Or maybe it's you that is confused. Linux now supports more hardware natively than any other operating system in existence. And thanks to projects like the Linux Driver Project, which develops drivers for hardware companies *FOR FREE*, that's unlikely to change.

    Don't get me wrong, I'm sure Windows has a place in this world, but Windows should no longer be allowed to lead the market on the desktop. It's far too dangerous.

  • NILFS2 (Score:3, Interesting)

    by Eil ( 82413 ) on Wednesday June 10, 2009 @05:00PM (#28285203) Homepage Journal

    So I've been reading that NILFS is the dog bollocks when it comes to solid-state disks in terms of speed and longevity of the disk. However, what I'd like to know is whether any of the advantages will hold for regular old mechanical disks as well. If so, I'd love to try NILFS. Having a real honest-to-goodness versioning filesystem with instant snapshots on my file servers would be so great, I can hardly find the words to describe it.
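    For the record, the snapshot workflow NILFS2 ships with is short. The device, mountpoint, and checkpoint numbers below are illustrative; the only executable part here is parsing a sample lscp listing:

```shell
# Typical NILFS2 snapshot workflow (run as root on a real system):
#   mount -t nilfs2 /dev/sdb1 /srv/data
#   mkcp -s /srv/data                    # take a checkpoint marked as a snapshot
#   lscp /srv/data                       # list checkpoints and snapshots
#   mount -t nilfs2 -o ro,cp=2 /dev/sdb1 /mnt/snap   # mount snapshot #2 read-only
# lscp prints one checkpoint per line; the MODE column is "cp" (plain
# checkpoint, reclaimable by the cleaner) or "ss" (snapshot, pinned).
# Picking the snapshot checkpoint numbers out of a sample listing:
listing="1 2009-06-10 10:00 cp - 11 3
2 2009-06-10 11:00 ss - 204 3"
snapshots=$(echo "$listing" | awk '$4 == "ss" {print $1}')
echo "$snapshots"   # → 2
```

    The cp/ss distinction is what makes the "instant snapshots" cheap: every sync is already a checkpoint, and marking one as a snapshot just tells the garbage collector to keep it, on rotating media as much as on flash.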

  • by ejasons ( 205408 ) on Wednesday June 10, 2009 @07:59PM (#28287299)

    The reason you see so much Windows malware and so little for the Mac (aside from the smaller target) is the same reason you get more Windows software and less Mac software (at least in areas where core system knowledge is required, as it is for malware): fewer programmers who know the inner workings of the OS.

    This doesn't explain why MacOS8/9, which had much less penetration, had (relatively) quite a large number of viruses. No, I don't have an explanation...
