Linux 5.10 Solves the Year 2038 Problem Until 2486 (phoronix.com)

The Linux 5.10 kernel's XFS file-system will have two new on-disk meta-data capabilities, reports Phoronix:

1. The size of inode btrees in the allocation group is now recorded, to increase redundancy checks and to allow faster mount times.

2. Support for timestamps extending until the year 2486.

This "big timestamps" feature refactors the timestamp and inode encoding functions to handle timestamps as a 64-bit nanosecond counter, using bit shifting to increase the effective size. This allows XFS to run well past the Year 2038 problem (where storing the time since 1970 in seconds no longer fits in a signed 32-bit integer and thus wraps around) to the year 2486. Making a new XFS file-system with bigtime enabled allows a timestamp range from December 1901 to July 2486, rather than December 1901 to January 2038. To preserve backwards compatibility, the big timestamps feature is not currently enabled by default.
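
As a rough sanity check of those endpoints, here is a minimal C sketch based only on the description above (not on the actual XFS kernel code): an unsigned 64-bit nanosecond counter spans about 584.5 years, and counting from the classic signed-32-bit floor in December 1901 lands in mid-2486.

    /* Back-of-the-envelope check of the bigtime range described above.
     * Assumes (per the summary) an unsigned 64-bit nanosecond counter
     * anchored at the classic 32-bit floor, 2^31 seconds before 1970. */
    #include <stdio.h>

    int main(void)
    {
        const double SECONDS_PER_YEAR = 86400.0 * 365.25;

        /* 2^64 nanoseconds expressed in years */
        double range_years = 18446744073709551616.0 / (SECONDS_PER_YEAR * 1e9);

        /* classic signed 32-bit floor: 2^31 seconds before 1970 */
        double start_year = 1970.0 - 2147483648.0 / SECONDS_PER_YEAR;

        printf("range: %.1f years\n", range_years);   /* ~584.5 */
        printf("covers ~%.0f to ~%.1f\n",
               start_year, start_year + range_years); /* ~1902 to ~2486.4 */
        return 0;
    }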

  • Before the Time Child is awoken. This cannot be a coincidence. Oh, how the SC guys have fallen. The show used to be fun.
  • _Only_ For XFS (Score:5, Informative)

    by lobiusmoop ( 305328 ) on Sunday October 18, 2020 @09:58AM (#60621490) Homepage

    ext2 and ext3 filesystems are still borked in 2038; they need to be upgraded, usually to ext4.

    • Re:_Only_ For XFS (Score:5, Interesting)

      by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Sunday October 18, 2020 @10:12AM (#60621524) Homepage Journal

      I used to use XFS but I stopped because you need an assload of RAM to carry out journal recovery after a power failure. I'm using Pogoplugs for NAS (because they are wildly cheap, super low power, and have 2xUSB3, 1xSATA, and GigE, and they run Debian) and they literally do not have enough RAM to remount a dirty XFS. So if a disk is unsafely unmounted you have to use another machine to remount it. So I went to ext4, which does not have this problem.

      • Why did someone mod parent offtopic? Misclick?

      • How much RAM are we talking about?

        It's one thing if "not enough RAM" means 64KB, another if it means 64MB.

        Honestly I think file systems have gotten grossly over-complicated. They have become a layer on top of a database, but this database has too many masters. It's performing too many roles.
    • by Z00L00K ( 682162 )

      But why not a 64 bit timestamp?

      Will break compatibility, but why not do it right? If 64 bits aren't enough, then none of us will have to worry.

      • by jmccue ( 834797 )
        This is what I think the NetBSD and OpenBSD people did: they went directly to a 64-bit time_t, with some kind of abstraction for 32-bit software that cannot be recompiled against the new time_t.
      • For files in modern times, you still want finer than 1-second granularity. On some systems, that granularity has already caused problems with "make" and other build systems.
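
        For illustration, POSIX stat() already exposes nanosecond-resolution modification times via st_mtim. Here is a minimal sketch (assuming POSIX.1-2008) of the newer-than test a build tool performs; with only 1-second stamps, a .o built in the same second as its .c would compare "equal" and might never be rebuilt:

          /* Sketch: nanosecond mtime comparison, as a make-like tool might
           * do it. Assumes POSIX.1-2008 struct stat with st_mtim. */
          #include <stdio.h>
          #include <sys/stat.h>

          /* <0, 0, >0: is a older than, the same age as, or newer than b? */
          static int mtime_cmp(const struct stat *a, const struct stat *b)
          {
              if (a->st_mtim.tv_sec != b->st_mtim.tv_sec)
                  return a->st_mtim.tv_sec < b->st_mtim.tv_sec ? -1 : 1;
              if (a->st_mtim.tv_nsec != b->st_mtim.tv_nsec)
                  return a->st_mtim.tv_nsec < b->st_mtim.tv_nsec ? -1 : 1;
              return 0;
          }

          int main(int argc, char **argv)
          {
              struct stat src, obj;
              if (argc != 3 || stat(argv[1], &src) != 0 || stat(argv[2], &obj) != 0) {
                  fprintf(stderr, "usage: %s source object\n", argv[0]);
                  return 1;
              }
              puts(mtime_cmp(&src, &obj) > 0 ? "rebuild" : "up to date");
              return 0;
          }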

      • From the summary:

        --
        This "big timestamps" feature refactors the timestamp and inode encoding functions to handle timestamps as a 64-bit nanosecond counter
        --

        I believe ext4 is also 64-bit.

    • Which is fine. The biggest flaw I see with using unix time as a universal time is that unix time was never meant to be universal! It was meant solely for file dates when invented. Which would work until after 2100 if you make this an unsigned 32-bit number, because we have zero unix files that were created before 1970. Many embedded systems use the unsigned 32-bit date format for simplicity; 64-bit time is absurd if you're on an 8- or 16-bit system, and overkill on many 32-bit systems that only care about

  • IMO, any duration measured in nanoseconds should, at most, only be relative to the time that a system was last powered on, and absolutely never to some fixed date.

    time_t is 64-bit and is itself in seconds. If nanosecond precision were really required, I think that a second 32-bit field for the nanoseconds, which could be accessed as needed, would have been a far saner choice.
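
    For illustration, POSIX effectively already has this split: clock_gettime() returns seconds plus a separate nanosecond field, and CLOCK_MONOTONIC is a boot-relative counter much like the one described above. A minimal sketch, assuming a POSIX system:

      /* Sketch: seconds plus a separate nanosecond field, and a clock that
       * is relative to (roughly) boot rather than to a fixed date. */
      #include <stdio.h>
      #include <time.h>

      int main(void)
      {
          struct timespec wall, mono;

          clock_gettime(CLOCK_REALTIME,  &wall);  /* seconds since 1970      */
          clock_gettime(CLOCK_MONOTONIC, &mono);  /* since an arbitrary,
                                                     boot-ish starting point */

          printf("wall: %lld s + %09ld ns\n", (long long)wall.tv_sec, wall.tv_nsec);
          printf("mono: %lld s + %09ld ns\n", (long long)mono.tv_sec, mono.tv_nsec);
          return 0;
      }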

    • If we're still using Linux in a few centuries the problem can be solved again. But realistically if we actually make it that far we'll likely have very different architectures that would benefit from a new operating system.

      Preferably a microkernel. Maybe 2486 is the year of Hurd on the desktop ;)

      • by mark-t ( 151149 )

        I am actually suggesting that time periods on that scale do not typically require nanosecond precision in the first place, so they are needlessly throwing away many billions of years worth of perfectly reasonable timekeeping.

        Any sort of nanoseconds measurement should at most only ever be relative to some other fixed date, which should be permitted to vary every time that a computer is powered on or rebooted. There's simply no logic in it for time scales that can be measured on the order of years, w

        • Oh good, 2486, now I sleep again. - Rip Van Winkle
        • It could be necessary for ordering ... but if it's for ordering it would really need to be attoseconds too to get a comfortable safety margin against technological progress.

        • Erm ...
          Have you ever used a RAM disk?

          Guessed so ...

          • by mark-t ( 151149 )

            My point is that nanosecond precision isn't even really required for anything that might be measured in longer terms than just a few days.

            A 64-bit to-the-second timestamp would be adequate for all purposes that do not involve measuring time until long after the last stars in the universe have burned out and all that is left are black holes, and meanwhile a separate field could be queried for a nanosecond offset from any to-the-second absolute time, if desired.

            • by mark-t ( 151149 )
              Blargh... I should really think before I type. A to-the-second timestamp would measure time until only half a trillion years after the big bang, which is somewhat less than the 10^40 years I had suggested, to put it mildly.
            • In a RAM disk a file might be newer than another one by half a millisecond, which is 500,000 nanoseconds ...

              • by mark-t ( 151149 )

                I believe that for filesystems, which more often than not are persistent and not simply RAM-based, one is generally more concerned with far longer time durations than that.

                I'm not saying don't store nanoseconds at all, but it seems wasteful to lose out on billions of years of timekeeping for what is more likely to be an edge case than the primary use. A separate field could reasonably still be queried to measure a nanosecond offset from the one-second-resolution timestamp if it is really needed.

                • But that is not the use case.

                  You basically always only have two questions:
                  a) when did it happen, to be able to display that in a directory listing (it does not matter if it is ctime, atime, etc.) - here you are right, nanoseconds are irrelevant
                  b) which file is newer - file1.o or file1.c - and in that case it makes no sense to "fetch nanoseconds" from somewhere else - how would that work transparently for the tools asking which file is newer?

                  • by mark-t ( 151149 )

                    For a file that is newer, you could do an explicit nanoseconds query on a file that might only be meaningful on something like a ram-based filesystem, and would be reflective of the nanoseconds since the machine was last booted, and not reflect any kind of absolute time at all. On a persistent filesystem, the query would generally be meaningless.

                    Ultimately, as I said, they are throwing away most of the timekeeping potential that could be gained by using a 64-bit integer for a timestamp.

                    The fact that n

                    • Well,
                      perhaps somewhere there is an article that explains why they took that solution.
                      Usually there is an idea behind something "odd" (even if it is a dumb idea).

                      Regarding Linux still being used in 400 years: I doubt we will find anything much better, so I guess OSes at that time will be traced back to "*n*x" systems. However, raw computation will change: different main memory, probably no real filesystems but object storages that are part of the process space, optical computing, persistent RAM (via memristors, magneti

      • Maybe 2486 is the year of Hurd on the desktop ;)

        I expect instead of Hurd we'll be using FaceTop with our Oculus Quest 2000 headset -- that is if you can't afford the retina-embedded version. Just like X-Window Terminals, it doesn't need much power when it's only a display system.

        well past the Year 2038 problem (where storing the time since 1970 in seconds no longer fits in a signed 32-bit integer and thus wraps around)

        Signed int?? Really? Does ANYone have any dates that stretch before 1960? Or is Dr Who Cares also a kernel and file-system programmer? Why don't we "only" recompile things with a new cast-type, which would stretch things out another 60 years? (Although in that case, why NOT

      • If we're still using Linux in a few centuries, we will probably come up with another backwards-compatible kludge rather than just refactoring dates to use a 128-bit timestamp, as would make sense.

      • I was more thinking about Plan 9 :P

      • Not likely, because people would re-architect Linux along the way to handle these new architectures. These new architectures will also likely be developed in parallel with Linux, and their developers will probably be the ones to make Linux work on them. It will be much cheaper for everyone involved, and much more useful early on, and that compounding inertia will continue to carry it forward.
        Given how many billions of dollars worth of work has gone into Linux, it is basically impossible for anyone
    • There are two unused bits in a 32-bit field, and using them allows keeping the whole timestamp in a single word. It's much better to keep metadata in a single sector, and there are many timestamps: atime, mtime, ctime, btime, ....
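
      This sounds like the ext4-style encoding: nanoseconds need only 30 bits, leaving 2 bits of the 32-bit "extra" field to extend the signed 32-bit seconds, which pushes the limit out to roughly the year 2446. A sketch of that idea (field names are illustrative, not the real ext4 ones):

        /* Sketch of an ext4-style extended timestamp: a signed 32-bit
         * seconds field plus a 32-bit "extra" word holding 30 bits of
         * nanoseconds and 2 epoch bits that extend the seconds range. */
        #include <stdio.h>
        #include <stdint.h>

        static int64_t decode_seconds(int32_t sec32, uint32_t extra)
        {
            /* low 2 bits of 'extra' extend the seconds by up to 3 * 2^32 */
            return (int64_t)sec32 + ((int64_t)(extra & 0x3u) << 32);
        }

        static uint32_t decode_nsec(uint32_t extra)
        {
            return extra >> 2;  /* the remaining 30 bits: 0..999,999,999 */
        }

        int main(void)
        {
            /* a time just past the 2038 wrap: sec32 has wrapped negative,
             * and one epoch bit is set in the extra word */
            int32_t  sec32 = INT32_MIN;
            uint32_t extra = (123456789u << 2) | 1u;

            printf("seconds since 1970: %lld, nsec: %u\n",
                   (long long)decode_seconds(sec32, extra), decode_nsec(extra));
            return 0;  /* prints 2147483648, i.e. 2038-01-19 03:14:08 UTC */
        }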

      There's no way any current filesystems will be in use 50 years ahead, much less 466.

      • And then you have Time Lords who think anything less than 16777216-bit is unacceptable.

        • by mark-t ( 151149 )
          At just 512 bits, you can have Planck-time-level precision and still have more than enough bits to count time from the moment of the big bang to the heat death of the universe.
          • Time Lords include the Universe cycle in their timestamps. FYI, we're in cycle 543, or so I've been told.

            • by mark-t ( 151149 )
              I would suggest that cycles outside of the current one are irrelevant, as they do not affect what happens in this one. At best they are as meaningless as measurements more precise than the Planck length.
      • by mark-t ( 151149 )

        There's no way any current filesystems will be in use 50 years ahead, much less 466.

        Which is largely why nanosecond precision isn't really required there in the first place. If nanosecond precision is needed, it can usually just be asked for in a separate field or system query... it's just an edge case and not the norm for file timestamps. Also, it is rare to the point of being unheard of that any sort of nanosecond precision is needed for anything beyond something relative to the time that an application

        • it's just an edge case and not the norm for file timestamps

          Then show me a mainstream filesystem on a real operating system (i.e., not Windows) that uses a granularity other than 1ns. ext*, xfs, btrfs, nfs, tmpfs, ...

          This convention is so set in stone that a niche filesystem (sshfs) not obeying it breaks programs: synchronization ends in two or more full transfers either way, etc.

          • by mark-t ( 151149 )
            I am suggesting that the time that a file was modified or created does not *REQUIRE* nanosecond-level precision, because it is rare that such precision would actually be needed in any kind of absolute sense. If sub-second precision is required, it can be taken relative to some other point that could be computed when the application starts up, but there is no reason to use sub-second precision on something that is actually being measured from a fixed point in time many years ago.
            • If the timestamp format covers common as well as rare use cases, it's more useful than if it doesn't. And it seems far more likely that someone will need nanosecond precision than a time span measured in billions of years.

              • by mark-t ( 151149 )
                The only time someone would actually ever require nanosecond precision in a file timestamp is in the special case of some sort of entirely RAM-based filesystem, but not on a filesystem that is ever intended to be persistent. All they've done here with this choice is create what is basically just another Y2K bug. The fact that nobody needs to worry about it for 400 years doesn't mean that it isn't there and that nobody will have to try and engineer a solution for it that will cost who knows how much money
      • There's no way any current filesystems will be in use 50 years ahead, much less 466.

        This is very similar to the argument that caused the Y2K problem.

        "Two digit years in timestamps are just fine -- nobody will be using this software in 20 years!"

        We need to learn from history...

      • by skullandbones99 ( 3478115 ) on Sunday October 18, 2020 @12:36PM (#60621922)

        You do realise that 25-year-old MS-DOS FAT filesystems are still in use today, right? If you buy a car today with USB Mass Storage for audio playback, then the car is likely to have support for FAT. If the car lasts 25 years then FAT will have been supported for 50 years.

        The problem is the need to interoperate with equipment and standards over the lifetime of the car. For example, CD drives got replaced by memory sticks but cars still exist today with CD drives. So there is a 25 year delay for the old tech to die off.

        But Y2038 hits in 17 years' time, on 19th January 2038, which is within the lifetime of new vehicles sold today. Y2038 is going to be a lawyer's payday, with mass class-action lawsuits deployed over a well-understood defect that currently is not being taken seriously enough by industry.

        Also on the radar: the current Internet Network Time Protocol (NTP) rolls over to the next era in 2036. If your embedded device has no real-time clock and starts its system time at 1st January 1970, then your device will get a wrong time from NTP in 2036.

        It is also a myth to believe that 64-bit systems are immune from Y2038 because there is the need to interoperate with old standards and equipment.

        My expectation is that governments will have to pass Y2038 laws to protect consumer rights.

    • But time_t is also 32 bits on many systems. Most systems, though, split time into different domains rather than trying to have a single catch-all format. Thus seconds and sub-seconds.

      Also, who knows how time is going to be used anyway? So sure, you go with 64-bit seconds and 32-bit subseconds - but then it fails for an astronomer who wants longer time scales or for a physicist who wants shorter times.

      • by mark-t ( 151149 )

        64 bits' worth of seconds would be good for over half a trillion years as an unsigned integer representing the age of the universe since the big bang. Why do you think astronomers would find this problematic?

        Also, if you actually need precision to the sub-second, when do you actually need to also track long scale time-keeping? You could have wholly separate mechanisms for fetching the number of nanoseconds as a 64-bit integer since the machine was last rebooted, for instance.
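
        A quick check of those numbers as a throwaway C sketch: 2^64 seconds is about 5.85e11 years, and even the signed half is about 2.92e11.

          /* Quick check of the "over half a trillion years" claim for a
           * 64-bit seconds counter. */
          #include <stdio.h>

          int main(void)
          {
              const double SECONDS_PER_YEAR = 86400.0 * 365.25;
              double unsigned_years = 18446744073709551616.0 / SECONDS_PER_YEAR;
              double signed_years   = 9223372036854775808.0  / SECONDS_PER_YEAR;

              printf("unsigned 64-bit: ~%.3g years\n", unsigned_years); /* ~5.85e11 */
              printf("signed   64-bit: ~%.3g years\n", signed_years);   /* ~2.92e11 */
              return 0;
          }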

  • by Terje Mathisen ( 128806 ) on Sunday October 18, 2020 @10:58AM (#60621676)

    First, all 64-bit OSes have changed to a 64-bit time_t, which will last so long (2.92e11 years for the signed version) that the survival of the planet is not guaranteed.

    On the other hand, the Network Time Protocol has been using a fixed-point 32:32 format since day 1, and that is fine since NTP is only used to exchange/measure small time differences.
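
    For concreteness, a sketch of unpacking that 32:32 format as defined in RFC 5905 (upper 32 bits: seconds since 1900; lower 32 bits: fraction). The wire value below is made up:

      /* Sketch: unpacking NTP's 64-bit 32:32 fixed-point timestamp.
       * Seconds are counted from 1900-01-01 (era 0, which rolls over in
       * 2036); the fraction is in units of 2^-32 s. */
      #include <stdio.h>
      #include <stdint.h>

      #define NTP_UNIX_OFFSET 2208988800ULL  /* seconds from 1900 to 1970 */

      int main(void)
      {
          uint64_t ntp = 0xE60F7000ULL << 32 | 0x80000000ULL;  /* made-up value */

          uint32_t secs = (uint32_t)(ntp >> 32);
          uint32_t frac = (uint32_t)ntp;

          /* fraction -> nanoseconds: frac * 1e9 / 2^32 */
          uint64_t nsec = ((uint64_t)frac * 1000000000ULL) >> 32;

          printf("unix time: %lld.%09llu\n",
                 (long long)secs - (long long)NTP_UNIX_OFFSET,
                 (unsigned long long)nsec);
          return 0;
      }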

    On the gripping hand, the real problem here is that all these UTC timestamps, with ridiculously high resolution, are ignoring the elephant in the room: leap seconds! It makes zero sense to have fractional UTC seconds without also specifying exactly how timestamps taken during a leap event should be handled. :-(

    NTP, btw, has wrestled with the leap second issue for many years, so server timestamps can include a warning about an upcoming leap event.

    Terje

    • A 64-bit OS needs to be compatible with old standards and equipment especially in consumer markets. Despite using 64-bit time_t for the main system time, your 64-bit system can still fail due to the limitations of old standards and equipment. In other words, timestamps within or external to the system come from 3rd party entities such as Internet protocols and file systems.

      It is insufficient to just use 64-bit time_t to get a compliant Y2038 system. The problem is bigger than that.

      Linux will be supporting 6

    • by bosef1 ( 208943 )

      Concur on the need for support for leap seconds in time standards (and, presumably, leap everything else). Leap seconds have been happening since the 1970s, so the nature of their insertion into UTC is well-known. And it should be real support, like supporting 61 seconds in a minute; and not that weird thing where the clock smears out the previous and next seconds across the leap second to make it work.

    • The timestamps are well defined - the number of elapsed seconds (or nanoseconds) since a fixed epoch. The problem is converting that to and from day number and time.
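
      For illustration, a minimal sketch of that conversion, assuming a 64-bit time_t and POSIX gmtime_r(), which does the civil-calendar math and knows nothing of leap seconds:

        /* Sketch: turning an elapsed-seconds count into a calendar date.
         * The literal below only fits if time_t is 64-bit. */
        #include <stdio.h>
        #include <time.h>

        int main(void)
        {
            time_t t = 2147483648;  /* one second past the signed-32-bit max */
            struct tm tm;

            gmtime_r(&t, &tm);      /* UTC civil time; leap seconds ignored */
            printf("%04d-%02d-%02d %02d:%02d:%02d UTC\n",
                   tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday,
                   tm.tm_hour, tm.tm_min, tm.tm_sec);
            return 0;               /* 2038-01-19 03:14:08 UTC */
        }
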
      • I am not sure that they are as well defined as you say. Time is a rather difficult thing to correctly handle. (Fortunately it is rare that this is really needed.)

        When you start getting into the nanosecond scale, you need to start including your frame of reference. Has your laptop been in an airplane? Relativistic effects become measurable and can alter your time by a few nanoseconds compared to a "stationary" reference (in quotes because, of course, we're always moving/rotating/etc...). There isn't going to be

    • The correct approach would be everything to use TAI [wikipedia.org] internally (or, more properly, TT [ucolick.org] since TAI isn't supposed to be used in operational systems; there are also a lot of other timescales around [ucolick.org]), and only ever convert to UTC etc. (thus inserting the leap seconds via tzdata) when needing to display localized info to a human.
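
      A sketch of that approach, with a hypothetical tai_to_utc() helper and a deliberately truncated leap-second table (the real table must be shipped and updated, e.g. via tzdata, because future leap seconds are not predictable):

        /* Sketch: keep time internally on the monotonic TAI scale and apply
         * the published leap-second table only when converting to UTC.
         * tai_to_utc() and this three-entry table are illustrative only. */
        #include <stdio.h>
        #include <stdint.h>

        struct leap { int64_t utc_sec; int tai_minus_utc; };

        /* real offsets, but only the three most recent entries */
        static const struct leap table[] = {
            { 1341100800, 35 },  /* 2012-07-01 */
            { 1435708800, 36 },  /* 2015-07-01 */
            { 1483228800, 37 },  /* 2017-01-01 */
        };

        static int64_t tai_to_utc(int64_t tai_sec)
        {
            int off = 34;  /* offset in force before the first entry above */
            for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
                if (tai_sec - table[i].tai_minus_utc >= table[i].utc_sec)
                    off = table[i].tai_minus_utc;
            return tai_sec - off;
        }

        int main(void)
        {
            /* a TAI count landing just after the 2017 leap second */
            printf("%lld\n", (long long)tai_to_utc(1483228837));  /* 1483228800 */
            return 0;
        }
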
    • Leap seconds also aren't strictly determined indefinitely. As recently as 2015 [krqe.com]:

      A leap second will be added Tuesday to keep our clocks more closely synced with the Earth’s rotation, per the International Earth Rotation and Reference System Service in Paris. Universal time says the Earth completes a full rotation around the sun every 86,400.002 seconds. The “.002” is what causes the leap second.

      According to NASA, the “Earth’s rotation is gradually slowing down a bit, due to a kind of braking force caused by the gravitational tug of war between Earth, the moon and the sun.”

      So if 1-second precision is important in your embedded device, how do you make it account for manually-added seconds?

      • You don't!

        If you care about sub-second time intervals, and still want to measure large timescales with seconds, then you should use TAI (i.e. atomic) seconds.

        The TAI scale is monotonic, so the difference between TAI and UTC increases each time a leap second is added, and would decrease if we ever removed a leap second. The latter is possible, but exceedingly unlikely.

        The main/only problem with using TAI is that you can't use it for exact calendar style timestamps in the future, i.e. beyond 6-12 months you

  • This just means we'll kick the can down the road until we get close to 2486, and our descendants will discuss this on Slashdot then as well.
    • 2486 will be a very special year. It marks the end of the Great Development Cycle, and humanity will enter a new stage of existence: Linux on the desktop.

  • by martynhare ( 7125343 ) on Sunday October 18, 2020 @12:41PM (#60621936)
    No joke: try running a bunch of Linux-native games from the Humble Indie Bundle on XFS and rage wondering why the heck they don't work. Then try the same games on ext4 and watch them "just work" again (XFS uses 64-bit inodes without any indirection to keep old 32-bit binaries compatible). Also, who wants to use a filesystem which still doesn't handle bad blocks properly? Even FAT32 does better than that!
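
    What that failure looks like, per POSIX: a 32-bit binary built without large-file support gets EOVERFLOW from stat() when an inode number doesn't fit its ino_t. A minimal sketch of a hypothetical probing tool (not code from the games in question):

      /* Sketch: what breaks for old 32-bit binaries on filesystems with
       * 64-bit inode numbers: without large-file support, stat() must
       * fail with EOVERFLOW when st_ino doesn't fit the ABI's ino_t. */
      #include <stdio.h>
      #include <errno.h>
      #include <string.h>
      #include <sys/stat.h>

      int main(int argc, char **argv)
      {
          struct stat st;
          const char *path = argc > 1 ? argv[1] : ".";

          if (stat(path, &st) != 0) {
              /* a 32-bit build without -D_FILE_OFFSET_BITS=64 can land
               * here on XFS with 64-bit inode numbers */
              fprintf(stderr, "stat(%s): %s\n", path, strerror(errno));
              return errno == EOVERFLOW ? 2 : 1;
          }
          printf("inode: %llu\n", (unsigned long long)st.st_ino);
          return 0;
      }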

    Sorry for being pessimistic, I'm just not sure why XFS is seeing so much love when it's clearly dogshit for desktop and average server use cases.
  • Not my problem anymore.

    • I am 66 and retired, so I'm thinking of 2036; I will be 82 then, if I live that long. Will I be able to do anything on/with a computer by then? I guess it's also not my problem.

  • Nothing to see here....

    systemctl enable systemd-twentythirtyeightd.service