On Tue, 2024-07-02 at 05:04 -0700, Christoph Hellwig wrote:
> On Tue, Jul 02, 2024 at 07:44:19AM -0400, Jeff Layton wrote:
> > Complaining about it is fairly simple. We could just throw a pr_warn
> > in inode_set_ctime_to_ts when the time comes back as KTIME_MAX. This
> > might also be what we need to do for filesystems like NFS, where a
> > future ctime on the server is not necessarily a problem for the
> > client.
> >
> > Refusing to load the inode on disk-based filesystems is harder, but
> > is probably possible. There are ~90 calls to inode_set_ctime_to_ts
> > in the kernel, so we'd need to vet the places that are loading times
> > from disk images or the like and fix them to return errors in this
> > situation.
> >
> > Is warning acceptable, or do we really need to reject inodes that
> > have corrupt timestamps like this?
>
> inode_set_ctime_to_ts should return an error if things are out of
> range.

Currently it just returns the timespec64 we're setting it to (which
makes it easy to do several assignments), so we'd need to change its
prototype to handle this case and fix up the callers to recognize the
error.

Alternately, it may be easier to add a check for __i_ctime == KTIME_MAX
in the appropriate callers and have them error out. I'll have a look
and see what makes sense.

> How do we currently catch this when it comes from userland?

Not sure I understand this question. ctime values should never come
from userland. They should only ever come from the system clock.

-- 
Jeff Layton <jlayton@xxxxxxxxxx>
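
For illustration only, a rough, untested sketch of what the caller-side
check discussed above might look like, assuming the stored ctime
saturates to KTIME_MAX via the usual timespec64_to_ktime() conversion.
example_load_ctime() and the choice of -EUCLEAN are made up for this
sketch, not taken from the thread:

#include <linux/fs.h>
#include <linux/ktime.h>
#include <linux/printk.h>
#include <linux/errno.h>

/*
 * Hypothetical helper a disk filesystem might call while reading an
 * inode from its on-disk image.  Sets the ctime, then rejects the
 * inode if the value would saturate.
 */
static int example_load_ctime(struct inode *inode, struct timespec64 ondisk)
{
	struct timespec64 ts = inode_set_ctime_to_ts(inode, ondisk);

	/* ktime_set() saturates out-of-range seconds to KTIME_MAX */
	if (timespec64_to_ktime(ts) == KTIME_MAX) {
		pr_warn("inode %lu has a corrupt ctime\n", inode->i_ino);
		return -EUCLEAN;	/* or just warn, per the question above */
	}
	return 0;
}

Whether the check lives in the ~90 callers like this, or whether
inode_set_ctime_to_ts itself grows an error return, is the open
question above.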