Topics: Open Source, Operating Systems, Linux

Linux 5.0 Released (phoronix.com)

An anonymous reader writes: Linus Torvalds has released Linux 5.0, kicking off the kernel's 28th year of development. Linux 5.0 features include AMD FreeSync support, open-source NVIDIA Turing GPU support, Intel Icelake graphics, Intel VT-d scalable mode, Spectre Variant Two mitigations for NXP PowerPC processors, and countless other additions. eWeek adds: Among the new features that have landed in Linux 5.0 is support for the Adiantum encryption system, developed by Google for low-power devices. Google's Android mobile operating system and ChromeOS desktop operating system both rely on the Linux kernel. "Storage encryption protects your data if your phone falls into someone else's hands," wrote Paul Crowley and Eric Biggers of the Android Security and Privacy Team at Google in a blog post. "Adiantum is an innovation in cryptography designed to make storage encryption more efficient for devices without cryptographic acceleration, to ensure that all devices can be encrypted." Memory management in Linux also gets a boost in the 5.0 kernel, with a series of improvements designed to help prevent memory fragmentation, which can reduce performance.
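For those who want to poke at the new cipher directly: the kernel exposes it through the crypto API under the name adiantum(xchacha12,aes), reachable from userspace via an AF_ALG socket. Below is a minimal Python sketch, assuming a 5.0+ kernel built with CONFIG_CRYPTO_ADIANTUM; the 32-byte key and 32-byte tweak sizes match the kernel's implementation.

```python
# Minimal sketch: driving the Adiantum cipher through the kernel crypto
# API (AF_ALG). Assumes Linux 5.0+ with CONFIG_CRYPTO_ADIANTUM and Python 3.6+.
import os
import socket

KEY = os.urandom(32)    # Adiantum takes a single 256-bit key
IV = os.urandom(32)     # the per-message tweak ("IV") is 32 bytes
DATA = b"A" * 4096      # e.g. one disk sector; any length >= 16 bytes is fine

alg = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET)
alg.bind(("skcipher", "adiantum(xchacha12,aes)"))
alg.setsockopt(socket.SOL_ALG, socket.ALG_SET_KEY, KEY)

# Each accept() yields an operation socket for one encrypt/decrypt stream.
op, _ = alg.accept()
op.sendmsg_afalg([DATA], op=socket.ALG_OP_ENCRYPT, iv=IV)
ciphertext = op.recv(len(DATA))

op2, _ = alg.accept()
op2.sendmsg_afalg([ciphertext], op=socket.ALG_OP_DECRYPT, iv=IV)
assert op2.recv(len(DATA)) == DATA   # decrypts back to the original
```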
This discussion has been archived. No new comments can be posted.
  • by Anonymous Coward
    I'm not sure I remember my login anymore.
    • you should be rebooting your server for some updates.

      • by jellomizer ( 103300 ) on Monday March 04, 2019 @09:20AM (#58212546)

        Also for a sanity check on your hardware. I never experienced this with a Linux box, but I did on specialized Sun hardware back in the early 2000s. Back then uptime was a big deal, because system crashes were common. With Linux at the time, you could get about a year of uptime; Windows NT, perhaps 3 months max. Sun hardware, however, could keep running for many years. And since server hardware was usually dedicated to specialized tasks, most of the storage requirements were cached in RAM (which back in the day, with 1 or 2 gigs of RAM, was enough for nearly anything). So the service would keep running constantly, even after the drive failed, because everything was running in RAM (and your logging went to another drive). Only to have a long power outage end your years of uptime, with a server that wouldn't start back up, because the boot and OS drive had failed years ago.

        • Too young to remember VAX/VMS uptimes... in the range of ten years (for clusters, not single nodes).
          • I had a machine with over 10 years of uptime at my last job; it was a cobbled-together firewall made from an old x86 load balancer, with the tiny little IDE flash drive replaced with a laptop drive (!), which eventually failed. It developed some bad sectors over the years on /home, which luckily wasn't needed for this machine's role. I unmounted that filesystem probably 6 years in. I had backups...

            Btw, it was a pf firewall running on OpenBSD 3.x; can't remember the exact release.

          • I wouldn't consider a cluster uptime to be a fair comparison against a single-computer server. This is why cloud services are not going down all the time: they too are clustered. And if the cloud service provider is actually good at their job, failures don't cause outages.
             

        • Wouldn't it be easier (and more uptime-friendly) to just schedule automated hardware diagnostics once a week or so? Especially if you're managing banks of computers, an automated email report that says "computer #2839 has a failing component X" is a lot more helpful than "#2839 failed its reboot, go find the problem".

          It also has the benefit of finding most problems sooner, and letting you be standing by, ready with a new, freshly imaged drive to swap in when the computer is shut down (see the sketch after this comment).

          Hmm, you'd probably n
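A rough Python sketch of the weekly report described above — the device list and addresses are placeholders, and it assumes smartmontools and a local mail daemon are available:

```python
# Sketch of a weekly hardware-health cron job: run smartctl on each disk
# and mail a report naming the failing component. Assumes smartmontools
# is installed and the script runs as root; names below are placeholders.
import smtplib
import subprocess
from email.message import EmailMessage

DISKS = ["/dev/sda", "/dev/sdb"]     # placeholder device list
ADMIN = "ops@example.com"            # placeholder address

failures = []
for disk in DISKS:
    # "smartctl -H" prints the drive's overall SMART health assessment
    result = subprocess.run(["smartctl", "-H", disk],
                            capture_output=True, text=True)
    if "PASSED" not in result.stdout:
        failures.append(f"{disk}:\n{result.stdout.strip()}")

if failures:
    msg = EmailMessage()
    msg["Subject"] = f"SMART failures on {len(failures)} disk(s)"
    msg["From"] = msg["To"] = ADMIN
    msg.set_content("\n\n".join(failures))
    with smtplib.SMTP("localhost") as smtp:   # assumes a local MTA
        smtp.send_message(msg)
```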

          • In today's more mature IT culture, yes. Back in the late 1990s and early 2000s, hot-swapping components was often a bad idea, and required expensive, often unreliable hardware. Also, reporting on failing components wasn't much of an option a lot of the time; it either worked or it didn't. The ultra-expensive systems that cost over $50k had the ability, but low-end systems (like a Sun Ultra 5) really didn't.

            Back in the 1990s and early 2000s, a lot of servers were run under some

            • Who said anything about hot swapping? That certainly simplifies things even more, but just knowing that a component has failed *before* it becomes a problem means that you could have a component in hand and ready to replace before shutting down the impaired computer, and thus lose only a few minutes of uptime during the replacement, instead of likely adding at least a few hours to discover the problem and image a replacement drive while the computer is down.

        • A company I worked for had an old HP server running Linux. It had an uptime record of two years when they shut it down to replace the power supply. Of course, now uptime has taken a backseat to updating software.

          • now uptime has taken a backseat to updating software

            Speak for Windows and Apple. With Linux it is normal to update without rebooting. You only need to reboot to change the kernel, and even then hot patching [wikipedia.org] is a thing. This isn't just servers, but even general-purpose computers that you are installing and uninstalling all kinds of things on constantly, including nasty things like games. I have often upgraded across major versions of Debian and even Ubuntu without rebooting (a quick check for pending reboots is sketched after this comment).

            I had my primary workstation up for over 600 days at one point, only ended by a blackout.
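For what it's worth, on Debian-family systems like the ones mentioned, it's easy to tell whether in-place updates have actually left a reboot pending: package hooks drop a flag file, and you can compare the running kernel against what's installed. A rough sketch (the paths follow the Debian/Ubuntu convention; other distros differ):

```python
# Sketch: has this Debian/Ubuntu box accumulated changes that want a reboot?
# /var/run/reboot-required is the flag file Debian-family package hooks
# create; listing installed kernels catches the kernel-update case.
import os

def reboot_pending() -> bool:
    return os.path.exists("/var/run/reboot-required")

def running_kernel() -> str:
    return os.uname().release              # e.g. "4.19.0-5-amd64"

def installed_kernels() -> list:
    # Kernel packages install /boot/vmlinuz-<version>
    return sorted(f[len("vmlinuz-"):] for f in os.listdir("/boot")
                  if f.startswith("vmlinuz-"))

if __name__ == "__main__":
    print("reboot-required flag:", reboot_pending())
    print("running kernel:      ", running_kernel())
    print("installed kernels:   ", ", ".join(installed_kernels()))
```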

        • by Shaitan ( 22585 )

          "back in the early 2000's. Back then Uptime was a big deal, because system crashes were common. Linux at the time, you can get about a year of uptime"

          What were you doing on a Linux box that only allowed you to get a year of uptime? Several of the systems I deployed in the early 2000s were replaced 6-7 years later without a reboot.

          That said, you are absolutely right. This is one of those outdated dick measuring contests that is doing nobody any good. Ideally, reboot every system on a rotation no longer than 60 days.

          • This is one of those outdated dick measuring contests that is doing nobody any good.

            Nonsense, it is one measure of the design quality.

            Ideally, reboot every system on a rotation no longer than 60 days.

            Why on earth would you do that? Does it make everything feel nice and fresh to you? If you enjoy that, then I suggest you also rotate the tires on your car every 60 days, and drop the engine for good measure. Of course you do that, don't you?

          • What were you doing on a Linux box that only allowed you to get a year of uptime?

            We used to dream of having an uptime of a year. Woulda' been a miracle to us. We used to run an old Windows 95 box found on a rubbish tip. We had a hundred and sixty of them situated in a small shoebox in the middle of the road. We got woken up every morning by having an alarm go off when Windows crashed, and then had a load of rotting fish dumped all over us! A year!? Hmph.

        • Also for a sanity check on your hardware.

          I would say that the person who reboots unnecessarily needs a sanity check. Obsessive/compulsive maybe?

        • Had the same thing in the 90s: a Sparc 1+ that had been running for years without interruption, doing whatever it was set up for. Then one day there was a power cut, and we found it had a trashed superblock and hadn't been capable of booting for an unknown period of time.
      • the OP is clearly joking! /sarcasm
      • But muh kpatch!!!!!
        I thought that was going to take care of everything!
    • by Shaitan ( 22585 )

      You should be rebooting regardless. Excessive uptimes are NOT a good thing. How do you even know your box can come back up from a power cycle?

    • Not if you use solutions like Oracle Ksplice Uptrack, Canonical Livepatch, Red Hat's kpatch or SUSE's kGraft. Anyway, on a libc update, yes, a reboot is required.
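Of those, kpatch and kGraft converged on the kernel's common livepatch core (Ksplice uses its own machinery), and patches loaded through it show up in sysfs. A quick sketch for checking what's applied, assuming a kernel built with CONFIG_LIVEPATCH:

```python
# Sketch: list patches known to the upstream livepatch core.
# Assumes CONFIG_LIVEPATCH=y; each loaded patch gets a directory under
# /sys/kernel/livepatch with an "enabled" attribute.
import os

LP = "/sys/kernel/livepatch"

if not os.path.isdir(LP):
    print("livepatch core not available on this kernel")
else:
    for patch in sorted(os.listdir(LP)):
        with open(os.path.join(LP, patch, "enabled")) as f:
            state = "enabled" if f.read().strip() == "1" else "disabled"
        print(f"{patch}: {state}")
```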
  • Adiantum encryption system will be supervised by systemd thus soooo much better security
  • by Anonymous Coward

    They forgot the most important new feature: Code of Conduct v2.0. Every user of Linux now has to agree to a mile-long EULA upon installation (or updating), stating that if they are white and male, they must consume a minimum of 750 mg of estrogen pills every day to become "Trans Tux".

    Also, the previously hardcoded DNS fallback to Google's DNS servers has now become enabled by default and impossible to disable.

  • Linux 5.0 (Score:2, Funny)

    by MeanE ( 469971 )
    ...or as known by its marketing name, Linux 2000.
  • Comment removed (Score:4, Informative)

    by account_deleted ( 4530225 ) on Monday March 04, 2019 @10:19AM (#58212836)
    Comment removed based on user account deletion
    • by stooo ( 2202012 )

      it actually IS something special when you run out of toes and fingers.

    • by Shaitan ( 22585 )

      Yeah but at the same time I recall Linus holding off on 4.0 so he could get more into it.

    • by harrkev ( 623093 )

      If Linus just dropped his pants, we could have gotten one more version out of 4.x.

  • by Anonymous Coward

    I came here for news about Tesla because Slashdot is the #1 source for all things Elon Musk and I'm finding this GNU/Linux stuff. What gives?

    • by Shaitan ( 22585 )

      It's a philosophical hijacking by GNU people who are butthurt their OS (Hurd) sucks and everyone uses Linux. Try to ignore it.

  • The real-world effect of a policy of arbitrarily incrementing the major version number is widespread, unnecessary confusion.

  • We can increment them!

