Linux 5.10 Solves the Year 2038 Problem Until 2486 (phoronix.com)
The Linux 5.10 kernel's XFS file-system will have two new on-disk meta-data capabilities, reports Phoronix:
1. The size of inode btrees in the allocation group is now recorded. This is for increasing redundancy checks and also allowing faster mount times.
2. Support for timestamps now until the year 2486.
This "big timestamps" feature is the refactoring of their timestamp and inode encoding functions to handle timestamps as a 64-bit nanosecond counter and bit shifting to increase the effective size. This now allows XFS to run well past the Year 2038 problem (where storing the time since 1970 in seconds will no longer fit in a signed 32-bit integer and thus wraparound) to now the Year 2486. Making a new XFS file-system with bigtime enabled allows a timestamp range from December 1901 to July 2486 rather than December 1901 to January 2038. For preserving backwards compatibility, the big timestamps feature is not currently enabled by default.
Exactly 60 years (Score:1)
Re: (Score:1)
I get the South Park reference, but what does "SC guys" refer to?
Re: (Score:3)
Sudo move the problem past the singularity (Score:2)
By that date, all operating systems will be created by AIs, not humans. We'll have a complete clean sheet, and in fact no human will even understand how it works.
The good news is that after the machines enslave us maybe they will be defeated by some glitch in their systems too.
Re: (Score:1)
Re: (Score:1)
But still no actual hoverboard a la BTTF2.
_Only_ For XFS (Score:5, Informative)
ext2 and ext3 filesystems are still borked in 2038, need to be upgraded, usually to ext4.
Re:_Only_ For XFS (Score:5, Interesting)
I used to use XFS but I stopped because you need an assload of RAM to carry out journal recovery after a power failure. I'm using Pogoplugs for NAS (because they are wildly cheap, super low power, and have 2xUSB3, 1xSATA, and GigE, and they run Debian) and they literally do not have enough RAM to remount a dirty XFS. So if a disk is unsafely unmounted you have to use another machine to remount it. So I went to ext4, which does not have this problem.
Re: (Score:2)
Why did someone mod parent offtopic? Misclick?
Re: (Score:2)
It's one thing if "not enough RAM" means 64KB, another if it means 64MB.
Honestly, I think file systems have gotten grossly over-complicated. They have become a layer on top of a database, but this database has too many masters. It's performing too many roles.
Re: (Score:3)
It means multiple GB if your filesystem is multiple TB. My poor little dockstars only have 256MB, which I realize is very little. But they were the cheapest thing I could find with decent I/O.
Re: (Score:3)
How much RAM are we talking about?
An assload. Can't you read?
Re: (Score:3)
Re: (Score:3)
But why not a 64 bit timestamp?
Will break compatibility, but why not do it right? If 64 bits isn't enough, then none of us will have to worry.
Re: (Score:3)
Re: (Score:2)
For files in modern times, you still want finer than 1-second granularity. Some systems have already had that granularity cause problems with "make" and other build systems.
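This is easy to see with the nanosecond stat fields that POSIX.1-2008 exposes, which is what build tools ultimately rely on. A minimal sketch, assuming a Linux/POSIX system (the helper name is mine):

#include <stdio.h>
#include <sys/stat.h>

/* like strcmp: <0 if a is older than b, 0 if equal, >0 if newer */
static int mtime_cmp(const struct stat *a, const struct stat *b)
{
    if (a->st_mtim.tv_sec != b->st_mtim.tv_sec)
        return (a->st_mtim.tv_sec < b->st_mtim.tv_sec) ? -1 : 1;
    if (a->st_mtim.tv_nsec != b->st_mtim.tv_nsec)
        return (a->st_mtim.tv_nsec < b->st_mtim.tv_nsec) ? -1 : 1;
    return 0;
}

int main(int argc, char **argv)
{
    struct stat sa, sb;
    if (argc < 3 || stat(argv[1], &sa) != 0 || stat(argv[2], &sb) != 0)
        return 1;
    printf("%s is %s than %s\n", argv[1],
           mtime_cmp(&sa, &sb) > 0 ? "newer" : "not newer", argv[2]);
    return 0;
}

With only 1-second granularity, two files written in quick succession compare equal, and make cannot tell which one is out of date.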
The summary says it is 64-bit (Score:2)
From the summary:
--
This "big timestamps" feature is the refactoring of their timestamp and inode encoding functions to handle timestamps as a 64-bit nanosecond counter
--
I believe ext4 is also 64-bit.
Re: The summary says it is 64-bit (Score:3)
ext4 comes in 2 flavours: 32-bit and 64-bit timestamps. You need to check which timestamp option was used when the ext4 filesystem was created (a sketch of the extended encoding follows after this comment).
The one to worry about is cpio.
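For the curious, the ext4 scheme mentioned above works roughly like this (a sketch based on the ext4 on-disk format documentation; the struct and function names are mine): inodes large enough to carry an "extra" 32-bit field per timestamp use its low 2 bits to extend the seconds counter and the remaining 30 bits for nanoseconds.

#include <stdint.h>

struct decoded_time {
    int64_t  sec;   /* seconds since the 1970 epoch */
    uint32_t nsec;  /* 0..999999999 */
};

static struct decoded_time ext4_decode(uint32_t disk_sec, uint32_t extra)
{
    struct decoded_time t;
    /* start from the classic signed 32-bit seconds value ... */
    t.sec  = (int64_t)(int32_t)disk_sec;
    /* ... then add the 2 epoch bits as bits 32-33 of the seconds */
    t.sec += (int64_t)(extra & 0x3) << 32;
    /* the upper 30 bits hold the nanoseconds */
    t.nsec = extra >> 2;
    return t;
}

Filesystems created with the old 128-byte inodes have no room for the extra field, which is why you have to check how the filesystem was created.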
Re: (Score:3)
Which is fine. The biggest flaw I see with using unix time as a universal time is that unix time was never meant to be universal! It was meant solely for file dates when invented. It would work until after 2100 if you made it an unsigned 32-bit number, because we have zero unix files that were created before 1970. Many embedded systems use the unsigned 32-bit date format for simplicity; 64-bit time is absurd if you're on an 8- or 16-bit system, and overkill on many 32-bit systems that only care about
Re:_Only_ For XFS (Score:5, Funny)
It's a similar situation with Windows. It was meant solely to run Solitaire, Minesweeper and calculator but things got out of hand really fast.
Re: (Score:2)
I once played Tetris at SCO on Windows 3.0 on a 286, and the score counter wrapped around backwards because they stored it in a signed int. And the rest is history.
Metadata steganography (Score:4, Interesting)
Re: (Score:2)
"Which is fine." Speak for yourself. This means that when the year 2486 rolls around, I will need to switch to some other OS. And think how many photos and videos (not to mention emails) I'll need to move from my trusty Linux machine to something else.
Re: (Score:2)
Though it may be true that zero unix files were created before 1970, I expect there were some use cases where the user wanted to preserve timestamps from files transferred from other systems. Or did no one ever want to do that?
This just pushes the problem back a few centuries (Score:4, Interesting)
IMO, any duration measured in nanoseconds should, at most, only be relative to the time that a system was last powered on, and absolutely never to some fixed date.
time_t is 64-bit and is itself in seconds. If nanosecond precision were really required, I think a second 32-bit field for the nanoseconds, which could be accessed as needed, would have been a far saner choice.
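What this describes is essentially the split POSIX already uses in struct timespec: a seconds field (a 64-bit time_t on modern systems) plus a separate nanoseconds field that you read only when you need it. A minimal sketch:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts;
    /* wall-clock time: whole seconds plus a separate nanosecond part */
    if (clock_gettime(CLOCK_REALTIME, &ts) != 0)
        return 1;
    printf("sec=%lld nsec=%ld\n", (long long)ts.tv_sec, ts.tv_nsec);
    return 0;
}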
Re:This just pushes the problem back a few centuri (Score:5, Insightful)
If we're still using Linux in a few centuries the problem can be solved again. But realistically if we actually make it that far we'll likely have very different architectures that would benefit from a new operating system.
Preferably a microkernel. Maybe 2486 is the year of Hurd on the desktop ;)
Re: (Score:2)
I am actually suggesting that time periods on that scale do not typically require nanosecond precision in the first place, so they are needlessly throwing away many billions of years worth of perfectly reasonable timekeeping.
Any sort of nanoseconds measurement should at most only ever be relative to some other fixed date, which should be permitted to vary every time that a computer is powered on or rebooted. There's simply no logic in it for time scales that can be measured on the order of years, w
Re: (Score:2)
Stay awake! There is another problem: (Score:2)
Don't go to sleep! The Sun will only last a few billion more years. [livescience.com] Worry about that.
Re: This just pushes the problem back a few centur (Score:2)
It could be necessary for ordering ... but if it's for ordering it would really need to be attoseconds too to get a comfortable safety margin against technological progress.
Re: (Score:2)
Re: (Score:2)
Erm ...
Have you ever used a RAM disk?
Guessed so ...
Re: (Score:2)
My point is that nanosecond precision isn't even really required for anything measured over terms longer than just a few days.
A 64-bit to-the-second timestamp would be adequate for all purposes that do not involve measuring time until long after the last stars in the universe have burned out and all that is left are black holes, and meanwhile a separate field could be queried for a nanosecond offset from any to-the-second absolute time, if desired.
Re: (Score:2)
Re: (Score:2)
In a RAM disk, a file might be newer than another one by half a microsecond, which is 500 nanoseconds ...
Re: (Score:2)
I believe that for filesystems, which more often than not are persistent and not simply based in RAM, one is generally more concerned with far longer time durations than that.
I'm not saying don't store nanoseconds at all, but it seems wasteful to lose out on billions of years of timekeeping for what is more likely an edge case than the primary use. A separate field could reasonably still be queried to measure a nanosecond offset from the one-second resolution if it is really needed.
Re: (Score:2)
But that is not the use case.
You basically only ever have two questions:
a) when did it happen, to be able to display that in a directory listing (does not matter if it is ctime, atime, etc.) - here you are right, nanoseconds are irrelevant
b) which file is newer - file1.o or file1.c - and in that case it makes no sense to "fetch nanoseconds" from somewhere else - how would that work transparently for the tools asking which file is newer?
Re: (Score:2)
For a file that is newer, you could do an explicit nanoseconds query, which might only be meaningful on something like a RAM-based filesystem and would reflect the nanoseconds since the machine was last booted, not any kind of absolute time at all. On a persistent filesystem, the query would generally be meaningless.
Ultimately, as I said, they are throwing away most of the timekeeping potential that could be gained by using a 64-bit integer for a timestamp.
The fact that n
Re: (Score:2)
Well,
perhaps somewhere there is an article that explains why they chose that solution.
Usually there is an idea behind something "odd" (even if it is a dumb idea).
Regarding Linux still being used in 400 years: I doubt we will find anything much better, so I guess OSes at that time will be traceable back to "*n*x" systems. However, raw computation will change: different main memory, probably no real filesystems but object storages that are part of the process space, optical computing, persistent RAM (via memristors, magneti
Re: (Score:2)
Maybe 2486 is the year of Hurd on the desktop ;)
I expect instead of Hurd we'll be using FaceTop with our Oculus Quest 2000 headset -- that is if you can't afford the retina-embedded version. Just like X-Window Terminals, it doesn't need much power when it's only a display system.
well past the Year 2038 problem (where storing the time since 1970 in seconds will no longer fit in a signed 32-bit integer and thus wraparound)
Signed int?? Really? Does ANYone have any dates that stretch before 1960? Or is Dr Who Cares also a kernel and file-system programmer? Why don't we "only" recompile things with a new cast type, which would stretch things out another 60 years? (Although in that case, why NOT
Re: This just pushes the problem back a few centur (Score:2)
time_t is a signed integer to allow it to be used in relative time calculations. This means negative times are valid, indicating that the event was in the past.
Re: This just pushes the problem back a few centur (Score:2)
If we're still using Linux in a few centuries, we will probably come up with another backwards-compatible kludge rather than just refactoring dates to use a 128-bit timestamp, as would make sense.
Re: (Score:2)
I was more thinking about Plan 9 :P
Re: (Score:2)
Given how many billions of dollars worth of work has gone into Linux, it is basically impossible for anyone
Re: (Score:2)
There are two unused bits in a 32-bit field; using them allows keeping the whole timestamp in a single word. It's much better to keep metadata in a single sector, and there are many timestamps: atime, mtime, ctime, btime, ...
There's no way any current filesystems will be in use 50 years ahead, much less 466.
Re: (Score:2)
And then you have Time Lords who think anything less than 16777216-bit is unacceptable.
Re: (Score:3)
Re: (Score:2)
Time Lords include the Universe cycle in their timestamps. FIY we're in cycle 543, or so I've been told.
Re: (Score:3)
Re: (Score:2)
Which is largely why nanosecond precision isn't really required there in the first place. If nanosecond precision is needed, it can usually just be asked for in a separate field or system query... it's just an edge case and not the norm for file timestamps. Also, it is rare to the point of being unheard of that any sort of nanosecond precision is needed for anything beyond something relative to the time that an application
Re: (Score:2)
it's just an edge case and not the norm for file timestamps
Then show me a mainstream filesystem on a real operating system (i.e., not Windows) that uses a granularity other than 1ns. ext*, xfs, btrfs, nfs, tmpfs, ...
This convention is so set in stone that a niche filesystem (sshfs) not obeying it breaks programs: synchronization ends in two or more full transfers either way, etc.
Re: (Score:2)
Re: (Score:2)
If the timestamp format covers common as well as rare use cases, it's more useful than if it doesn't. And it seems far more likely that someone will need nanosecond precision than a time span measured in billions of years.
Re: (Score:2)
Re: (Score:1)
There's no way any current filesystems will be in use 50 years ahead, much less 466.
This is very similar to the argument that caused the Y2K problem.
"Two digit years in timestamps are just fine -- nobody will be using this software in 20 years!"
We need to learn from history...
Re: This just pushes the problem back a few centur (Score:5, Interesting)
You do realise that 25-year-old MS-DOS FAT filesystems are still in use today, right? If you buy a car today with USB Mass Storage for audio playback, then the car is likely to have support for FAT. If the car lasts 25 years, then FAT will have been supported for 50 years.
The problem is the need to interoperate with equipment and standards over the lifetime of the car. For example, CD drives got replaced by memory sticks but cars still exist today with CD drives. So there is a 25 year delay for the old tech to die off.
But Y2038 hits in 17 years' time, on 19th January 2038, which is within the lifetime of new vehicles today. Y2038 is going to be a lawyer's payday, with mass class-action lawsuits being deployed for a well-understood defect that currently is not being taken seriously enough by industry.
Also on the radar: the current Internet Network Time Protocol (NTP) rolls over to the next era in 2036. If your embedded device has no real-time clock and starts its system time at 1st January 1970, then your device will get the wrong time from NTP in 2036 (see the sketch after this comment).
It is also a myth to believe that 64-bit systems are immune from Y2038 because there is the need to interoperate with old standards and equipment.
My expectation is that governments will have to pass Y2038 laws to protect consumer rights.
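To make the NTP point above concrete: NTP's 32-bit seconds field counts from 1900 and wraps in 2036, so a client has to guess the "era" from a rough local estimate of the current time, which is exactly what a device with no real-time clock lacks. A hypothetical sketch (the function and pivot logic are mine, not from any real NTP implementation):

#include <stdint.h>
#include <stdlib.h>

#define NTP_UNIX_OFFSET 2208988800ULL  /* seconds from 1900 to 1970 */

/* Convert NTP seconds to Unix time, choosing the 2^32-second era
 * that lands closest to a local estimate of "now". A box that boots
 * believing it is 1970 will pick era 0 and get the wrong answer. */
static int64_t ntp_to_unix(uint32_t ntp_sec, int64_t now_estimate)
{
    int64_t base = (int64_t)ntp_sec - (int64_t)NTP_UNIX_OFFSET;
    int64_t best = base;  /* era 0 */
    for (int64_t era = 1; era <= 3; era++) {
        int64_t cand = base + (era << 32);
        if (llabs(cand - now_estimate) < llabs(best - now_estimate))
            best = cand;
    }
    return best;
}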
Re: (Score:3)
But time_t is also 32 bits on many systems. Most systems, though, split time into different domains rather than trying to have a single catch-all format: thus seconds and sub-seconds.
Also, who knows how time is going to be used anyway? So sure, you go with 64-bit seconds and 32-bit subseconds - but then it fails for an astronomer who wants longer time scales or for physicists who want shorter times.
Re: (Score:2)
64 bits' worth of seconds would be good for over half a trillion years as an unsigned integer representing the age of the universe since the big bang. Why do you think astronomers would find this problematic?
Also, if you actually need sub-second precision, when do you actually also need long-scale timekeeping? You could have wholly separate mechanisms for fetching the number of nanoseconds since the machine was last rebooted as a 64-bit integer, for instance.
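The arithmetic behind the "half a trillion years" claim, as a quick sketch (approximate, using 365.25-day years):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const double SEC_PER_YEAR = 365.25 * 24 * 3600;  /* ~3.156e7 */
    printf("unsigned 64-bit seconds: ~%.2e years\n",
           (double)UINT64_MAX / SEC_PER_YEAR);       /* ~5.8e11 */
    printf("signed 64-bit seconds:   ~%.2e years\n",
           (double)INT64_MAX / SEC_PER_YEAR);        /* ~2.9e11 */
    return 0;
}

Either way, that is orders of magnitude longer than the ~1.4e10-year age of the universe.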
This is at best a hack, and not the best solution (Score:5, Interesting)
First, all 64-bit OSs have changed to a 64-bit time_t, which will last long enough (2.92e11 years for the signed version) that the survival of the planet is not guaranteed.
On the other hand, the Network Time Protocol has been using a fixed-point 32:32 format since day 1, and that is fine since NTP is only used to exchange/measure small time differences.
On the gripping hand, the real problem here is that all these UTC timestamps, with ridiculously high resolution, ignore the elephant in the room, which is leap seconds! It makes zero sense to have fractional UTC seconds without also specifying exactly how timestamps taken during a leap event should be handled. :-(
NTP, btw, has wrestled with the leap-second issues for many years, so that server timestamps can include a warning about an upcoming leap event.
Terje
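For reference, converting the fractional half of NTP's 32:32 fixed-point format (seconds in the high 32 bits, a binary fraction of a second in the low 32 bits) is a one-liner; a sketch, with a helper name of my own choosing:

#include <stdint.h>
#include <stdio.h>

/* low 32 bits are a binary fraction of a second:
 * nanoseconds = frac * 1e9 / 2^32 */
static uint32_t ntp_frac_to_nsec(uint32_t frac)
{
    return (uint32_t)(((uint64_t)frac * 1000000000ULL) >> 32);
}

int main(void)
{
    printf("%u\n", ntp_frac_to_nsec(0x80000000u)); /* 500000000 = half a second */
    return 0;
}

The resolution is 2^-32 s (about 233 picoseconds), which is plenty for exchanging small time differences.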
Re: This is at best a hack, and not the best solut (Score:2)
A 64-bit OS needs to be compatible with old standards and equipment especially in consumer markets. Despite using 64-bit time_t for the main system time, your 64-bit system can still fail due to the limitations of old standards and equipment. In other words, timestamps within or external to the system come from 3rd party entities such as Internet protocols and file systems.
It is insufficient to just use 64-bit time_t to get a compliant Y2038 system. The problem is bigger than that.
Linux will be supporting 6
Re: (Score:2)
Concur on the need for support for leap seconds in time standards (and, presumably, leap everything else). Leap seconds have been happening since the 1970s, so the nature of their insertion into UTC is well-known. And it should be real support, like supporting 61 seconds in a minute; and not that weird thing where the clock smears out the previous and next seconds across the leap second to make it work.
Re: This is at best a hack, and not the best solut (Score:2)
Re: (Score:2)
I am not sure that they are as well-defined as you say. Time is a rather difficult thing to handle correctly. (Fortunately it is rare that this is really needed.)
When you start getting into the nanosecond scale, you need to start including your frame of reference. Has your laptop been in an airplane? Relativistic effects become measurable and can alter your time by a few nanoseconds compared to a "stationary" reference (in quotes, because of course we're always moving/rotating/etc...). There isn't going to be
Re: (Score:2)
Re: (Score:2)
A leap second will be added Tuesday to keep our clocks more closely synced with the Earth’s rotation, per the International Earth Rotation and Reference System Service in Paris. Universal time says the Earth completes a full rotation every 86,400.002 seconds. The “.002” is what causes the leap second.
According to NASA, the “Earth’s rotation is gradually slowing down a bit, due to a kind of braking force caused by the gravitational tug of war between Earth, the moon and the sun.”
So if 1-second precision is important in your embedded device, how do you make it account for manually-added seconds?
Re: (Score:2)
You don't!
If you care about sub-second time intervals, and still want to measure large timescales with seconds, then you should use TAI (i.e. atomic) seconds.
The TAI scale is monotonic, so the difference between TAI and UTC increases each time a leap second is added, and would decrease if we ever removed a leap second. The latter is possible, but exceedingly unlikely.
The main/only problem with using TAI is that you can't use it for exact calendar style timestamps in the future, i.e. beyond 6-12 months you
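Linux does expose TAI directly via CLOCK_TAI, assuming the kernel has been told the current leap-second offset (e.g. by a leap-aware NTP daemon); otherwise the offset reads as zero. A minimal sketch:

#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec utc, tai;
    clock_gettime(CLOCK_REALTIME, &utc);  /* UTC-based wall clock */
    clock_gettime(CLOCK_TAI, &tai);       /* atomic timescale */
    /* 37 seconds as of the 2017 leap second, if the offset is set */
    printf("TAI - UTC = %lld s\n", (long long)(tai.tv_sec - utc.tv_sec));
    return 0;
}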
Just great (Score:1)
Re: Just great (Score:3)
2486 will be a very special year. It marks the end of the Great Development Cycle, and humanity will enter a new stage of existence: Linux on the desktop.
Fixes timestamps, breaks video games... (Score:3)
Sorry for being pessimistic; I'm just not sure why XFS is seeing so much love when it's clearly dogshit for desktop and average server use cases.
I'll be retired so... (Score:2)
Not my problem anymore.
Re: (Score:1)
I am 66 and retired, so in 2036 I will be 82, if I live that long. Will I still be able to do anything on/with a computer by then? I guess it's also not my problem.
Why worry? (Score:1)
Nothing to see here....
systemctl enable systemd-twentythirtyeightd.service