On Mon, Jul 15, 2024 at 08:48:52AM -0400, Jeff Layton wrote:
> diff --git a/fs/stat.c b/fs/stat.c
> index 6f65b3456cad..df7fdd3afed9 100644
> --- a/fs/stat.c
> +++ b/fs/stat.c
> @@ -26,6 +26,32 @@
>  #include "internal.h"
>  #include "mount.h"
>  
> +/**
> + * fill_mg_cmtime - Fill in the mtime and ctime and flag ctime as QUERIED
> + * @stat: where to store the resulting values
> + * @request_mask: STATX_* values requested
> + * @inode: inode from which to grab the c/mtime
> + *
> + * Given @inode, grab the ctime and mtime out of it and store the result
> + * in @stat. When fetching the value, flag it as queried so the next write
> + * will ensure a distinct timestamp.
> + */
> +void fill_mg_cmtime(struct kstat *stat, u32 request_mask, struct inode *inode)
> +{
> +	atomic_t *pcn = (atomic_t *)&inode->i_ctime_nsec;
> +
> +	/* If neither time was requested, then don't report them */
> +	if (!(request_mask & (STATX_CTIME|STATX_MTIME))) {
> +		stat->result_mask &= ~(STATX_CTIME|STATX_MTIME);
> +		return;
> +	}
> +
> +	stat->mtime = inode_get_mtime(inode);
> +	stat->ctime.tv_sec = inode->i_ctime_sec;
> +	stat->ctime.tv_nsec = ((u32)atomic_fetch_or(I_CTIME_QUERIED, pcn)) & ~I_CTIME_QUERIED;
> +}
> +EXPORT_SYMBOL(fill_mg_cmtime);
> +

[trimmed the ginormous CC]

This performs the atomic every time (i.e., it sets the flag even when it is already set), serializing all fstats of the same file and hurting scalability. At bare minimum it should be conditional: if the flag is already set, don't dirty anything.

Even with that fixed, adding an atomic to stat is a bummer, but offhand I don't have a good solution for that.

Anyhow, this being in -next, perhaps the conditional dirty can be massaged into the thing as it stands? There are some cosmetic choices to be made in how to express it; it may be fastest if you guys just augment it however you see fit. If not, I can submit a patch tomorrow.