
Linux 3.4 Released

jrepin writes with news of today's release (here's Linus's announcement) of Linux 3.4: "This release includes several Btrfs updates: metadata blocks bigger than 4KB, much better metadata performance, and better error handling and recovery tools. There are other features: a new x32 ABI, which allows running in 64-bit mode with 32-bit pointers; several updates to the GPU drivers, including early modesetting of the Nvidia GeForce 600 'Kepler' series, support for the AMD Radeon HD 7xxx and AMD Trinity APU series, and support for Intel Medfield graphics; support for x86 CPU driver autoprobing; a device-mapper target that stores cryptographic hashes of blocks to check for intrusions; another target that uses an external read-only device as the origin source of a thinly provisioned LVM volume; several perf improvements, such as a GTK2 report GUI; and a new 'Yama' security module."
  • by Anonymous Coward

    Hi,

    I've always used Windowz and I consider myself an exceptional Visual Basic programmer, so I know computers pretty good. In fact I got an A- in my programming class last term. But I'm a little wary of how much power Microsoft has in the computer field. Many of my friends use RedHat and I've recently installed it on my machine at home. Although I haven't had as much chance to play with it as I'd like, I've been greatly impressed.

    This weekend I gave some thoughts to the things that are wrong with Linux

    • by Anonymous Coward on Sunday May 20, 2012 @09:39PM (#40060719)

      As much as Linux is doing rather well despite the plethora of different versions and the security risk from the open code base, using it is rather risky for legal reasons as well. Red Hat stole much of Linux from SCO's Caldera, and is distributing it without paying royalties, meaning users could be on the hook for several hundred dollars a license and casting the future of Red Hat's offerings into jeopardy. Litigation is ongoing now, and experts expect SCO to win a crushing verdict any day now. Linux has some neat features, but there's a lot of fear, uncertainty, and doubt in the community about its legal future.

    • From what I understand, Microsoft did write portions of NT in x86 assembly in the old days to make it not crawl, and that was one of the things that reduced portability between the different NTs. But these days, with CPU power being what it is, they don't have to, and neither does Linux. In fact, if I understand right, from Vista onwards they've done everything in C/C++ only, and not bothered with assembly. That's also what makes Windows 8 so portable to ARM.
    • by tqk ( 413719 )

      I've always used Windowz and I consider myself an exceptional Visual Basic programmer ...

      Ick.

  • by bzipitidoo ( 647217 ) <bzipitidoo@yahoo.com> on Sunday May 20, 2012 @09:38PM (#40060711) Journal

    I tried btrfs, and ended up going back to ext4. Hoped btrfs might be a good choice for a small hard drive, and it is-- it uses space more efficiently. But it's not a good choice for a slow hard drive or the obsolete computer that the small size goes with.

    Firefox ran especially poorly on btrfs. I was told this is because Firefox does lots of syncs, and btrfs had very poor performance on syncs. Maybe this improvement in performance on metadata is just the thing to fix that?

    • I'm not as up to speed on filesystems as I probably should be, but I agree. I tried it and the speed difference is noticeable. Like you, I went back to ext4.
      • by KiloByte ( 825081 ) on Monday May 21, 2012 @06:33AM (#40062895)

        Yes, there is a massive speed difference. Unpacking a particular tarball takes 10 seconds on btrfs, 124 seconds on ext4.

        The problem is with certain broken programs that fsync after every single write. But that's a problem with those programs, not btrfs. The words "fsync" and "performance" don't belong in the same sentence. fsync may legitimately be used when durability is actually needed, but in almost all use cases the programs want only consistency.

        An example: there's a file where a transaction consists of appending a few bytes to the file and then updating something in the header (sketched below). Let's say it's four bytes in both writes. The program can handle a power loss between transactions, or one between the first write and the header update, but not any other reordering. The disk in question has 100 MB/s linear speed and 10 ms seek time. Even with some kind of barrier syscall, ext4 would need to alternate the writes anyway, plus it needs to write to the journal and inode; at roughly four seek-bound writes of 10 ms each, that gives you a whopping 25 transactions per second. btrfs and log-structured filesystems with atomic flushes get 25M instead (assuming an infinitely fast CPU).

        The primary offender here is Firefox, which fsyncs all the time. This not only slows down writes but also causes insane fragmentation. The data it protects is not vital in the first place (mostly browsing history), and if it used sqlite in WAL mode on the relevant platforms instead, it would hardly have to fsync for consistency at all.
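
        A minimal sketch of that append-then-update-the-header transaction, assuming a plain file driven with POSIX calls (the function name and record layout are invented for illustration, not taken from any real program). The two fsync() calls are what turn each transaction into a pair of ordered, seek-bound writes; a program that only needs consistency, not durability, pays that cost for nothing.

          #include <stdint.h>
          #include <unistd.h>

          /* Hypothetical transaction: append a 4-byte record at the end of the
           * file, then update a 4-byte counter at offset 0.  Each fsync() forces
           * the preceding write to disk before the next one starts. */
          static int append_record(int fd, uint32_t rec, uint32_t new_count)
          {
              off_t end = lseek(fd, 0, SEEK_END);
              if (end == (off_t)-1)
                  return -1;
              if (pwrite(fd, &rec, sizeof rec, end) != (ssize_t)sizeof rec)
                  return -1;
              if (fsync(fd) != 0)              /* barrier after the append */
                  return -1;
              if (pwrite(fd, &new_count, sizeof new_count, 0) != (ssize_t)sizeof new_count)
                  return -1;
              return fsync(fd);                /* barrier after the header update */
          }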

    • by Gothmolly ( 148874 ) on Sunday May 20, 2012 @09:53PM (#40060841)

      Fix Firefox? Why does it "need" to do a lot of syncs?

      • by isopropanol ( 1936936 ) on Sunday May 20, 2012 @10:19PM (#40060967) Journal

        Also, put your firefox browser.cache.disk.parent_directory on tmpfs on single user systems.

      • by Sycraft-fu ( 314770 ) on Sunday May 20, 2012 @10:25PM (#40060981)

        It is something the FS should handle. "Just fix the program" is a bad answer, because even if you could change Firefox, you'll find another program that can't be changed because the nature of what it does requires many syncs.

        The low level systems should be robustly written to do what apps need, they shouldn't be telling apps "You can't do that."

        • I both agree and disagree. Obviously if you are writing low level systems you want to make them as efficient as possible. One can't forget though that there are sometimes inherent limits that an application should respect. It isn't unreasonable, for example, to suggest that an application shouldn't try to use an SD card as its display buffer. Sometimes there are good reasons to tell an app "you can't do that".

        • by hcs_$reboot ( 1536101 ) on Monday May 21, 2012 @01:01AM (#40061713)
          Yes, "It is something the FS should handle", but if the OS itself does not sync all the time, there is a reason. This is the applications responsibility to use and not abuse the sync feature. So the question is relevant, why would Firefox need to sync all the time?? What kind of critical data has to be written to the disk and not kept only in buffers in case of a crash? No, sounds like more a sync abuse imo.
        • by batkiwi ( 137781 )

          What options does the FS have though?

          The app is calling sync, which is used to flush disk buffers and ensure a change is physically written to disk. btrfs does this safely, with barriers, as it should.

          Should it simply ignore repeated sync() calls and NOOP? If not, what should it do?

          If I write a program that tries to allocate 20GB of ram on startup and never uses it, then complain that "linux" is making this huge swap file and making my app slow, should they change the malloc/etc APIs to cater for my using

      • by smellotron ( 1039250 ) on Sunday May 20, 2012 @11:12PM (#40061199)

        Fix Firefox? Why does it "need" to do a lot of syncs?

        Sync (or fsync) is the way to ensure that files are committed to disk and not just cached somewhere. This is a precondition for reliable "restore session" and similar functionality. However, application developers cannot rely on the OS to sync data in the background, because e.g. on a laptop where frequent disk access is both expensive (battery life) and risky (physical motion), the OS will cache as much as possible. If FF did not sync, the OS might delay writes for hours, which means a computer crash leads to lost hours of browsing history for the user. It doesn't sound like a big deal, but I can tell you that it is infuriating as a user to see a browser say, "whoops, I lost your tabbed windows, hope you weren't using the WWW for anything important!". Not having looked at the source myself, I don't know if it's possible to optimize FF's sync behavior; but I do know that it's impossible to eliminate it.
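
        As a rough illustration (a sketch only, not Firefox's actual code; the file names are made up), the usual way to make something like session restore both durable and crash-safe is to write a new copy, fsync it, and then rename it over the old one:

          #include <fcntl.h>
          #include <stdio.h>
          #include <string.h>
          #include <unistd.h>

          /* Durably replace "session.json": write a temp file, fsync it, then
           * atomically rename it over the old copy.  A crash at any point leaves
           * either the old or the new file, never a torn one.  (An fsync of the
           * containing directory is omitted here for brevity.) */
          static int save_session(const char *data)
          {
              int fd = open("session.json.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0600);
              if (fd < 0)
                  return -1;
              ssize_t len = (ssize_t)strlen(data);
              if (write(fd, data, len) != len || fsync(fd) != 0) {
                  close(fd);
                  return -1;
              }
              close(fd);
              return rename("session.json.tmp", "session.json");
          }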

        • by smash ( 1351 )

          This sort of application thinking is retarded. If the OS crashes, it is an OS problem. Firefox (which being a browser stores SFA that needs to be permanent) should not be forcing sync in the fear that the OS crashes. Let the cache work as intended, don't cripple it because the user is retarded or the underlying OS is crap.

          If the underlying OS is crap and causing data corruption due to crashing with outstanding cached writes, then the OS is broken and needs to be fixed. NOT the browser.

        • by linhux ( 104645 )

          This doesn't sound right. OS crashes should be very uncommon and thus you shouldn't design your software around them. Much more common is that Firefox itself crashes, or X11, in which case the data will be there even if the process didn't sync. If your computer crashes, you usually have bigger problems than a bit of lost browser history (not that I've ever seen *hours* of uncommitted data being lost due to a computer crash).

          • by Nutria ( 679911 )

            OS crashes should be very uncommon and thus you shouldn't design your software around them.

            By far, the predominant OS that FF runs on is Windows. Thus, the developers are concerned about frequent OS crashes.

            • by MrHanky ( 141717 )

              Right. Since Mozilla designs their browser with Windows 98 and ME in mind, and then drops support for them only at compile time. Windows hasn't been crash-prone for more than 10 years.

            • By far, the predominant OS that FF runs on is Windows. Thus, the developers are concerned about frequent OS crashes.

              Then, the sync should be conditional on the OS it is running on, and disabled on Linux.

              And even on Windows, it should be configurable. Some users might care more about decent battery life than about their browser history in the rare event of a crash. Even Windows crashes much less nowadays than it used to...

        • by nadaou ( 535365 )

          If FF did not sync, the OS might delay writes for hours, which means a computer crash leads to lost hours of browsing history for the user.

          For kernels tweaked into "laptop mode" this may be different, but for stock modern Linux the maximum time delay for disk cache writes is 30+5=35 seconds, not hours.


          $ cat /proc/sys/vm/dirty_expire_centisecs     # age at which dirty data must be written out; stock default 3000 (30 s)
          $ cat /proc/sys/vm/dirty_writeback_centisecs  # how often the flusher threads wake up; stock default 500 (5 s)
          $ cat /proc/sys/vm/laptop_mode                # 0 unless laptop mode is enabled

  • by Myria ( 562655 ) on Sunday May 20, 2012 @10:18PM (#40060955)

    The new x86-64 ABI with 32-bit pointers is cool because it allows you to get the architecture improvements of x86-64, such as extra registers and RIP-relative addressing, without increasing memory usage substantially due to larger data structures. Also, 64-bit operations will just use the 64-bit registers. The vast majority of programs simply do not need the extra address space.

    One reason that this ABI works so well is that the majority of the x86-64 instruction set uses 32-bit operations. Some operations involving pointers can be done in one instruction without using a temporary register to load a 64-bit constant.

    Windows can actually also support this, in theory, but you're on your own in trying to communicate with the Win32 API. The linker option /LARGEADDRESSAWARE:NO causes the NT kernel to limit your program's address space to 2^31 bytes.
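
    A quick way to see the size difference the comment is talking about (a sketch; it assumes a GCC toolchain and kernel with x32 support, and the file name is made up): build the same file with -m64 and with -mx32 and compare the reported sizes.

      #include <stdio.h>

      /* Build twice, e.g. "gcc -m64 ptrsize.c" and "gcc -mx32 ptrsize.c"
       * (the latter needs x32 support in the toolchain and kernel).  Under
       * the x32 ABI, pointers and longs are 32 bits wide while the code can
       * still use all of the x86-64 registers; under plain x86-64 they are
       * 64 bits. */
      int main(void)
      {
          printf("sizeof(void *) = %zu\n", sizeof(void *));
          printf("sizeof(long)   = %zu\n", sizeof(long));
          return 0;
      }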

    • But it will reduce the address space available for ASLR, am I right?

      • by Mr Z ( 6791 ) on Sunday May 20, 2012 @10:41PM (#40061041) Homepage Journal

        Yes, but so what? A system that supports x32 should also support x86-64. So, if you're relying on ASLR for security purposes, compile those sensitive apps as x86-64.

        Granted, the potential attack surface grows as you consider larger and larger threats. For example, a GCC compiled as x32 makes a fair bit of sense. What about Open/Libre Office? Well, that depends on whether you open untrusted documents that might try to exploit OOo / LO. (The odds seem pretty low, though.) And what about Firefox? There's far less to trust on the web...

        So, at some point, you have to make a tradeoff between the marginal benefit of increased performance/better memory footprint in x32 mode vs. increased security against certain overflow attacks that ASLR offers. For most people in most situations, the former likely wins for anything with a decent memory footprint. For people building hardened Internet-facing servers, the latter probably wins.

        • by smash ( 1351 )
          I very much doubt the former "wins" at all. Code is far, far smaller than the data it operates on, and the data's size has nothing to do with pointer or register size. Two ABIs is ambiguous and lets applications be written for the wrong security model. Just compile everything for x86-64 and be done with it.
          • by Mr Z ( 6791 )

            There are those much more famous than I who would disagree with you. [stanford.edu] (Scroll down to "A Flame...") Of course, appeal-to-authority is not a great way to argue a point that should be settled by data.

            Some workloads are amazingly pointer heavy. Compilers and interpreters are very pointer heavy, for example. At least one SPEC benchmark sped up by over 30% in early testing. Then again, a couple others slowed down, which seems odd. I imagine we'll just have to see what happens as the compilers get tuned and

    • It's true that most programs won't need 64-bit address space - right now - but that's only as long as their memory requirements are within 2GB. If Linux itself is 64-bit, then is there any compelling reason that the ABIs were made 32-bit? In fact, what exactly are the x86 targets for Linux - is it both 32-bit and 64-bit PCs? If that's the case, wouldn't there exist 2 versions of Linux in the tree, and wouldn't it make sense for the 32-bit Linux to have a 32-bit ABI, and the 64-bit Linux to have a 64-bit

      • by EvanED ( 569694 )

        It's true that most programs won't need 64-bit address space - right now - but that's only as long as their memory requirements are within 2GB.

        And for a lot of programs this is true, and will "always" be true. Will Emacs ever need more than 2GB for most people?

        (And actually it's 4GB on Linux, or at least close to it.)

        If that's the case, wouldn't there exist 2 versions of Linux in the tree

        It's more like 99% of the code is shared, and changes depending on how you compile it.

        wouldn't it make sense for the 32-bit L

        • I thought that Linux didn't have an ABI - at least not one for device drivers
          • The Linux kernel does not have a standard ABI, in the sense that the way modules communicate with the kernel and with each other is not fixed at the binary level. Linux as an OS most certainly does have a standard ABI - how else would you be able to take a binary that was compiled 10 years ago (say, a proprietary game) and run it today?

            ABI can also mean lower-level, more generalized things - like the function calling convention or how the stack is arranged.

            In this case, it's both - it's a generalized ABI for 64-b

    • by smash ( 1351 )
      You've seen the prices of 16+ GB of RAM recently, right? Shaving a few bytes here and there in your CODE (not in your data, which is far and away larger) by writing for 32-bit pointers, in days when 16GB of RAM is under a hundred bucks, is retarded.
      • by Narishma ( 822073 ) on Monday May 21, 2012 @01:50AM (#40061897)

        The problem is not the memory but the CPU cache. There's no reason to clog it with bloated 64-bit pointers when 32-bit pointers will do.
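
        To make the cache argument concrete (an illustrative sketch, not taken from the kernel or any real program): a pointer-heavy node roughly doubles in size when pointers go from 4 to 8 bytes, so fewer of them fit in each cache line.

          #include <stdio.h>

          /* A typical pointer-heavy node (say, a binary tree with a small key).
           * With 4-byte pointers it is about 12 bytes; with 8-byte pointers and
           * alignment padding it grows to 24, so a 64-byte cache line holds
           * roughly half as many nodes. */
          struct node {
              struct node *left;
              struct node *right;
              int          key;
          };

          int main(void)
          {
              printf("sizeof(struct node) = %zu\n", sizeof(struct node));
              return 0;
          }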

      • You've seen the prices of 16+ GB of ram recently, right?

        16GB is pretty cheap; beyond that it starts to get expensive, because you need either expensive 8GB modules or a high-end CPU platform (Intel LGA2011 or AMD G34) with more RAM channels.

        If you plan for all your RAM to be used by one big process, then x32 won't be of interest to you. OTOH, if you are serving webapps (lots of processes, but each individual process not using much RAM) written in languages that make heavy use of pointers, then x32 starts to look attractive.

        not in your data, which is far and away larger

        All depends on what form your data i

  • by Anonymous Coward on Sunday May 20, 2012 @11:12PM (#40061195)

    What is the rationale for moving up to 3.4 so soon?

    Obviously big tech companies, as well as the Mozilla Foundation, play the versioning game aggressively, but the Linux kernel has always had a reputation for being conservative.

    • I'd say too conservative, if they were only updating the third digit every few months. Just do it the OpenBSD way - increment every release by 0.1, so that #.0 versions are just as stable as every other release, and not bug-ridden.
      • by shaitand ( 626655 ) on Monday May 21, 2012 @12:34AM (#40061607) Journal

        "I'd say too conservative, if they were only updating the third digit every few months."

        I beg to differ. This is the kernel not some userland app or even a daemon. Stable releases are supposed to be reliable enough to trust with billions of dollars in data flow and human life support systems on the day of release.

        • by amorsen ( 7485 )

          I beg to differ. This is the kernel not some userland app or even a daemon. Stable releases are supposed to be reliable enough to trust with billions of dollars in data flow and human life support systems on the day of release.

          In Linux, that level of QA has been moved to the distributors. The only QA done on the official release is that volunteers have tried the release candidates. Some volunteers run compile/test farms, at least sometimes.

          People who run life critical systems can generally afford to pay for the kind of testing they need. It is certainly difficult to find volunteers willing to do it.

    • Versioning schemes should suit the software being versioned.

      Linux used to use a three-part version number. The first part was a 2 that basically never changed. The second part indicated which series it was (odd numbers were development series, even numbers were stable series). The third part indicated releases within a series.

      AIUI that worked for a while, but during the 2.4/2.5 era it became clear it was no longer working well. In the Linux kernel, different features were maturing at different times, and distr
