On Fri, Sep 29, 2023 at 3:19 AM Linus Torvalds
<torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
...
> So yes, real programs do cache stat information, and it matters
> for performance.
>
> But I don't think any actual reasonable program will have
> *correctness* issues, though -

I beg to disagree.

> because there are certainly filesystems
> out there that don't do nanosecond resolution (and other operations
> like copying trees around will obviously also change times).
>
> Anybody doing steganography in the timestamps is already not going to
> have a great time, really.
>

Your thesis implies that all applications are portable across
different filesystems and that all applications are expected to cope
with trees being copied around.

There are applications that work on specific filesystems, and those
applications are very much within sanity if they expect that past
observed values of nsec will not change if the file was not changed.

But even if we agree that this will "only" hurt performance, your
example of a performance hit (10s of git diff) is nowhere close to
the performance hit of invalidating the mtime cache of billions of
files at once (i.e. after a kernel upgrade), which means that
rsync-like programs need to re-read all the data from remote
locations (see the sketch in the P.S. below).

I am not saying that filesystems cannot decide to *stop storing*
nsec granularity from this day forth, but like btrfs pre-historic
timestamps, those fs have an obligation to preserve existing
metadata, unless users opted to throw it away.

OTOH, it is perfectly fine if the vfs wants to stop providing
sub-100ns services to filesystems. It's just going to be the fs
problem, and the preserved pre-historic/fine-grained time on existing
files would only need to be provided in getattr() (rough sketch in
the P.P.S. below). It does not need to be in __i_mtime.

Thanks,
Amir.
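
P.S. To make the cache-invalidation point concrete, here is a minimal
userspace sketch - hypothetical, not actual rsync code - of the skip
heuristic that rsync-like tools rely on. It compares the full
timespec, nanoseconds included, so a one-time truncation of stored
nsec values makes this check fail once for every file, forcing a full
re-read of data that never changed:

#include <stdbool.h>
#include <sys/stat.h>

struct cache_entry {
	struct timespec mtime;	/* mtime observed at the last sync */
	off_t size;		/* size observed at the last sync */
};

static bool needs_transfer(const char *path, const struct cache_entry *ce)
{
	struct stat st;

	if (stat(path, &st) != 0)
		return true;			/* cannot tell - be safe */
	if (st.st_size != ce->size)
		return true;
	/* full-resolution comparison, including the nsec part */
	return st.st_mtim.tv_sec != ce->mtime.tv_sec ||
	       st.st_mtim.tv_nsec != ce->mtime.tv_nsec;
}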
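
P.P.S. And a rough sketch of the getattr() idea. Everything here is
made up for illustration ("myfs", MYFS_I(), the ondisk_mtime field),
and the signatures are roughly what ~v6.6 kernels use, so check your
tree. The point is only that the fine-grained/pre-historic time can
be spliced in at getattr() time without ever living in __i_mtime:

static int myfs_getattr(struct mnt_idmap *idmap, const struct path *path,
			struct kstat *stat, u32 request_mask,
			unsigned int query_flags)
{
	struct inode *inode = d_inode(path->dentry);
	struct myfs_inode *mi = MYFS_I(inode);	/* hypothetical helper */

	/* Fill in the coarse-grained attributes cached by the VFS. */
	generic_fillattr(idmap, request_mask, inode, stat);

	/*
	 * Report the preserved full-resolution on-disk mtime, even if
	 * the copy cached in the VFS inode was truncated.
	 */
	stat->mtime = mi->ondisk_mtime;
	return 0;
}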