Linux

Linux 3.14 Kernel Released

An anonymous reader writes "The Linux 3.14 "Shuffling Zombie Juror" kernel has been released. Significant improvements to Linux 3.14 include the mainlining of SCHED_DEADLINE, stable support for Intel Broadwell CPU graphics, Xen PVH support, stable support for ZRAM, and many other additions. There's also a tentative feature list on KernelNewbies.org."
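
Since SCHED_DEADLINE is the headline addition, here is a minimal sketch of how a task might opt into the new policy. It calls sched_setattr() directly by syscall number, since no glibc wrapper existed at the time; the number 314 is x86-64 specific, and the runtime/deadline/period values are purely illustrative:

    /* Minimal SCHED_DEADLINE sketch: ask for 10 ms of CPU every 30 ms.
       Assumes x86-64 (syscall 314 = sched_setattr) and root privileges. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    struct sched_attr {
        uint32_t size;            /* sizeof(struct sched_attr) */
        uint32_t sched_policy;
        uint64_t sched_flags;
        int32_t  sched_nice;      /* SCHED_NORMAL/BATCH only */
        uint32_t sched_priority;  /* SCHED_FIFO/RR only */
        uint64_t sched_runtime;   /* ns of CPU time per period */
        uint64_t sched_deadline;  /* ns by which the work must finish */
        uint64_t sched_period;    /* ns between activations */
    };

    #define SCHED_DEADLINE 6

    int main(void) {
        struct sched_attr attr = {
            .size           = sizeof(attr),
            .sched_policy   = SCHED_DEADLINE,
            .sched_runtime  = 10 * 1000 * 1000,  /* 10 ms */
            .sched_deadline = 30 * 1000 * 1000,  /* 30 ms */
            .sched_period   = 30 * 1000 * 1000,  /* 30 ms */
        };
        if (syscall(314, 0, &attr, 0) != 0) {  /* pid 0 = this thread */
            perror("sched_setattr");
            return 1;
        }
        puts("now running under SCHED_DEADLINE");
        return 0;
    }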
  • PI KERNEL (Score:5, Funny)

    by Anonymous Coward on Monday March 31, 2014 @08:14AM (#46620297)

    Yay! We've finally reached that!

  • They should have released it on Pi Day
  • Comment removed (Score:3, Interesting)

    by account_deleted ( 4530225 ) on Monday March 31, 2014 @08:28AM (#46620427)
    Comment removed based on user account deletion
    • by Chrisq ( 894406 )

      The Intel Broadwell CPU has got a machine code pseudo random number generator in its extended instruction set! Immense! Gimme Gimme Gimme ...

      And what's more the pseudo random number generator is NSA approved [wikipedia.org].

      • Comment removed based on user account deletion
      • by TechyImmigrant ( 175943 ) on Monday March 31, 2014 @10:54AM (#46621963) Homepage Journal

        The Intel Broadwell CPU has got a machine code pseudo random number generator in its extended instruction set! Immense! Gimme Gimme Gimme ...

        And what's more the pseudo random number generator is NSA approved [wikipedia.org].

        No. In designing it, I plotted a path around the obvious back doors in SP800-90 and FIPS 140-2. I don't think the part of the NSA that likes weak RNGs likes that one. The obvious back doors are the Dual-EC-DRBG and FIPS 140-2 section 4.9.2, which I call the FIPS entropy destroyer.

        The reseeding 2 million times a second thing is an effective defense against a class of hypothetical attacks which wouldn't work anyway.

        It is FIPS compliant, but we won't be claiming FIPS certification until it is actually FIPS certified.

    • The current x86 instruction set is already so vast it passed "extended" about 10 years ago and is way too complex for most humans to grok in its entirety. It's the C++ of assembly languages these days. I'm not sure adding ever more instructions is really the way forward. x86 was always CISC, but even so, it seems to me Intel has deliberately taken the RISC how-to manual and, never mind ignoring it, set light to it with a blowtorch then pissed all over the ashes afterwards. Even their early decisions were du

      • Adding instructions is another way to take advantage of the (up until recently) ever increasing density of integrated circuits.

        The Intel instruction set may not be ideal in many ways, but Intel has done a pretty good job advancing the state of microprocessor design and execution for 40 years now, don't you think? I mean, they've driven desktop processors all the way to their end game, all the way to the end of Moore's Law. And you act like they've hamstrung the computer industry or something.

        Also your noti

        • by Viol8 ( 599362 )

          So you can remember and properly use the entire x86 instruction set? Really? Then I take my hat off to you. But in case you've got confused and think the latest chips just have the 386 set with a few extra bits and pieces, you might like to check this out:

          http://en.wikipedia.org/wiki/X... [wikipedia.org]

          And I don't think anyone should slavishly adhere to RISC, but Intel's default position on any new functionality seems to be to add yet another set of opcodes rather than letting smart compilers figure it out. These days its pow

          • It doesn't actually look that bad. I did a lot of Pentium 1 (pre-MMX) stuff many years ago, and I could remember and properly use a fair percentage of it. The modern list is only perhaps twice as long. I would imagine a compiler designer would be comfortable with the majority of that, and then some (pipeline optimizations, etc).

            But I do agree with you, RISC or near RISC is perhaps better. Especially now memory is cheaper. Cache memory never seems to get cheaper though, not the real fast Level 1 stuff anyway...

            • Cache memory never seems to get cheaper though, not the real fast Level 1 stuff anyway...

              The most important "price" for L1 cache is paid in latency. It won't become much cheaper unless transistors become arranged in 3D, and after that the game is mostly over.

      • Apparently TechyImmigrant below may not be human....

      • You are forgetting that the entire idea behind RISC was that, since compilers use only a tiny fraction of a typical instruction set, getting rid of redundant instruction types and reducing the number of instructions made for simpler, and thereby faster, CPU designs. From there on, there were two schools, the speed demons (super-pipelined) vs the brainiacs (super-scalar), that tried different approaches, but most smoked CISC designs.

        Ever since the Pentium debuted, the x86 has been more RISC-like

    • >The Intel Broadwell CPU has got a machine code pseudo random number generator in its extended instruction set! Immense! Gimme Gimme Gimme ...

      Actually, it's a hardware RNG feeding the instruction, and in Broadwell there are two instructions, RdRand and RdSeed. RdRand reads from an often-reseeded (2 million times a second) SP800-90A compliant AES-CTR-DRBG. RdSeed reads from an XOR-construction ENRNG built atop the DRBG, using the AES-CBC-MAC conditioner output for the full-entropy seed.

      I thought everyone knew that.
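
      For the curious, a minimal sketch of pulling values from both instructions with the GCC/Clang intrinsics (compile with -mrdrnd -mrdseed on a Broadwell-class part). The retry loops are there because both instructions can transiently fail and return 0, RdSeed especially when it outpaces the entropy source:

        #include <stdio.h>
        #include <stdint.h>
        #include <immintrin.h>

        /* Retry because RdRand/RdSeed can transiently fail (the intrinsic
           returns 0), e.g. when RdSeed outruns the conditioner. */
        static int rdrand64(uint64_t *out) {
            unsigned long long v;
            for (int i = 0; i < 10; i++)
                if (_rdrand64_step(&v)) { *out = v; return 1; }
            return 0;
        }

        static int rdseed64(uint64_t *out) {
            unsigned long long v;
            for (int i = 0; i < 1000; i++)
                if (_rdseed64_step(&v)) { *out = v; return 1; }
            return 0;
        }

        int main(void) {
            uint64_t r, s;
            if (rdrand64(&r)) printf("rdrand: %016llx\n", (unsigned long long)r);
            if (rdseed64(&s)) printf("rdseed: %016llx\n", (unsigned long long)s);
            return 0;
        }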

  • The anti-bufferbloat work drew my attention...
    Maybe it will be worth using at home for my custom firewall/gateway.
    It's at the end of the page [kernelnewbies.org].
  • by disi ( 1465053 )
    I use tmpfs a lot, but why would I use memory as swap space? Reading the Wikipedia article doesn't convince me; why provide any swap space in the first place?
    • by Rob Riggs ( 6418 )
      Compressing/decompressing the data in RAM is faster than writing/reading from disk. CPUs are getting much faster than disks. And flash/SSDs have a limited number of write cycles. It improves performance and preserves the life of SSDs. What's not to like?
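
      For context, turning zram on at the time was just a matter of loading the module and sizing the device. A rough sketch, assuming the module is loaded via modprobe zram so /sys/block/zram0 exists, and run as root:

        /* Rough sketch: give zram0 1 GiB of uncompressed capacity.
           The actual RAM consumed is the compressed size, usually a
           fraction of this. Assumes "modprobe zram" was already run. */
        #include <stdio.h>

        int main(void) {
            FILE *f = fopen("/sys/block/zram0/disksize", "w");
            if (!f) { perror("/sys/block/zram0/disksize"); return 1; }
            fprintf(f, "%llu\n", 1ULL << 30);  /* 1 GiB */
            fclose(f);
            /* then, in a shell: mkswap /dev/zram0 && swapon -p 100 /dev/zram0 */
            return 0;
        }

      The swapon priority just makes the kernel prefer the zram device over any disk-backed swap.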
      • by disi ( 1465053 )
        I do not dislike the module; more variety is good. The compression also makes sense and might help, but I would rather have my kernel swap only when needed, and that is when it runs out of memory.
        • by Rob Riggs ( 6418 )

          I would rather have my kernel swap only when needed, and that is when it runs out of memory.

          You really don't want your kernel to start swapping only when it runs out of memory. That is too late and will kill performance. Instead, your kernel moves pages that are not used to swap so that they can be freed for other, more important things when the need arises. That is a much more efficient way to manage memory.

          That said, the kernel provides tuning parameters that will give you what you want.

          sysctl vm.swappiness=0
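
          (To make that survive a reboot, the usual place is a vm.swappiness = 0 line in /etc/sysctl.conf.)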

    • That's how they get to compressed RAM - they don't build a new RAM subsystem, they allocate it as swap and then use the swap system to get at it. Saves on code, doesn't require duplicated work.

      I tried it on my wife's laptop, which at 2GB is apparently too anemic to run KDE on Fedora with 5 Facebook tabs open in Firefox while Thunderbird is also running (shakes fist, remembering a 32MB Mac running Netscape Communicator).

      Anyway, it seemed to make performance rather terrible, which was a bit surprising. That was la

      • by pmontra ( 738736 )

        I have a 16 GB laptop and I don't have any swap. I never run out of memory. free -m tells me that it's using about 4 GB for programs and data and almost 10 GB for file system buffers. I understand that I could get some more buffers if it compressed in RAM those pages that would have been swapped out, but is that really important? If you have little RAM you don't want to swap into it; if you have plenty, you don't swap.

        • Sure, it may not make sense for everyone, but I bet there are cases that will see significant gains.

          For example, imagine you're running a server with too much data to fit in RAM uncompressed but a lot more (maybe all of it) will fit in RAM if you compress it. So by doing compressed swap, you spend a bit of CPU power (to do the compression/decompression) to avoid a lot of waiting on I/O.

          Sure, if you put in a bunch more RAM you could fit it all, but that might require buying new hardware, or maybe you've alr
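
          (Rough numbers make the trade-off concrete: if pages compress around 2:1 at on the order of a gigabyte per second per core, pulling a 4 KB page back from compressed RAM costs a few microseconds, while a random read from a spinning disk costs several milliseconds, roughly three orders of magnitude more. The exact figures vary by workload, but that gap is the whole argument.)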

          • by mlts ( 1038732 )

            This isn't a completely new feature. AIX has had this since at least version 7.1.

            It is useful for virtualization. VMs that don't really do much (a tertiary DNS or a rarely used DB server, for example) can still be kept in RAM, with the RAM they use minimized so other tasks/VMs have it available.

            Of course, the downside is if all the VMs decide to go for maximum activity at the same time. On AIX, this will peg the CPU and cause swapping (especially if the compression ratio is set high). Not sure what this w

            • by Bengie ( 1121981 )
              To borrow the ZFS argument: most VM-type servers have way too much CPU and not enough memory or IO. ZFS can do 2GB/s per core for compression, and I assume a similar thing happens with this "zram" feature. Plenty of CPU and not enough IO, so compress it; lots of memory is full of zeros with all of that padding going on.

              I assume that the major workloads involve little memory being actively used. I bet a lot of it is just allocated and has data filling it up, but not being used. Compress it.
              • by mlts ( 1038732 )

                In the AIX world, compression does come in handy. Probably the ideal places are applications like low-volume Splunk indexers that end up getting handed redundant data (syslog entries, performance counters), so even the in-RAM read/write disk cache can be compressed.

                Then there are those Web servers that have something oddball internally, but have to remain. Someone wants an internal wiki which nobody maintains, so that one is ideal for turning compression to max and just forgetting about it.

                Of course, there

      • by robmv ( 855035 )

        Android 4.4 KitKat is using ZRAM on low-memory devices; apparently they got good enough results to use it on final production devices.

  • by tigersha ( 151319 ) on Monday March 31, 2014 @08:57AM (#46620683) Homepage

    I remember installing the 0.99.14 kernel in 1993. SLS Linux. My first distribution. So in more than 20 years we only went up 3 versions??!!

  • It looks like it transparently compresses pages going to swap; it's not like you need a SEPARATE block device to be your 'zswap' device.

  • by suso ( 153703 ) * on Monday March 31, 2014 @10:07AM (#46621409) Journal

    Linux 3.14159265358979323846264338327950288...

    • Obligatory
      1. Donald Knuth called, he wants his joke back.
      2. Kernel versions are integer vectors. You may have noticed, after 3.8 and 3.9 we didn't have 4.0, but 3.10, which is "greater than" 3.1 despite the apparent decimal equality.
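
      A toy illustration of that ordering, comparing versions field by field as integers rather than as decimals:

        #include <stdio.h>

        /* Kernel versions order as integer tuples, so (3,10) > (3,9)
           even though 3.10 == 3.1 as decimal numbers. */
        static int vercmp(int amaj, int amin, int bmaj, int bmin) {
            if (amaj != bmaj) return amaj < bmaj ? -1 : 1;
            if (amin != bmin) return amin < bmin ? -1 : 1;
            return 0;
        }

        int main(void) {
            printf("3.10 vs 3.9:  %d\n", vercmp(3, 10, 3, 9));   /*  1: newer */
            printf("3.10 vs 3.14: %d\n", vercmp(3, 10, 3, 14));  /* -1: older */
            return 0;
        }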

  • Out of the box RT? YAHOO! Let the games begin... OR, more to the point for those who couldn't care less about gaming but record music, stream transcoded AV and do serious studio work: LINUX will knock it out of the park! Provided ALSA, THE PULSE MONSTER, Rosegarden, Audacity and Ardour retool to use the rt headers correctly, so the Linux install does not have to have a hacked-up security_limits.conf and a patched kernel. HALLELUJAH I say. Mind you, one still might have to increase the frequency from stock 250 to 1000
