On Thu, 2024-06-13 at 01:00 -0400, trondmy@xxxxxxxxx wrote:
> From: Trond Myklebust <trond.myklebust@xxxxxxxxxxxxxxx>
>
> Now that https://datatracker.ietf.org/doc/draft-ietf-nfsv4-layrec/ is
> mostly done with the review process, I'd like to move the final patches
> for the client implementation upstream.
>
> The following patch series therefore adds support to the flexfiles pNFS
> driver so that if a metadata server reboot occurs while a client has
> layouts outstanding and is performing I/O, the client will report
> layoutstats and layout errors through a LAYOUTRETURN during the grace
> period, after the metadata server comes back up.
> This has implications for mirrored workloads, since it allows the client
> to report exactly which mirror data instances may have been corrupted
> due to the presence of errors during WRITEs or COMMITs.
>
> Trond Myklebust (11):
>   NFSv4/pnfs: Remove redundant list check
>   NFSv4.1: constify the stateid argument in nfs41_test_stateid()
>   NFSv4: Clean up encode_nfs4_stateid()
>   pNFS: Add a flag argument to pnfs_destroy_layouts_byclid()
>   NFSv4/pnfs: Add support for the PNFS_LAYOUT_FILE_BULK_RETURN flag
>   NFSv4/pNFS: Add a helper to defer failed layoutreturn calls
>   NFSv4/pNFS: Handle server reboots in pnfs_poc_release()
>   NFSv4/pNFS: Retry the layout return later in case of a timeout or
>     reboot
>   NFSv4/pnfs: Give nfs4_proc_layoutreturn() a flags argument
>   NFSv4/pNFS: Remove redundant call to unhash the layout
>   NFSv4/pNFS: Do layout state recovery upon reboot
>
>  fs/nfs/callback_proc.c                 |   5 +-
>  fs/nfs/flexfilelayout/flexfilelayout.c |   2 +-
>  fs/nfs/nfs4_fs.h                       |   3 +-
>  fs/nfs/nfs4proc.c                      |  53 ++++--
>  fs/nfs/nfs4state.c                     |   4 +-
>  fs/nfs/nfs4xdr.c                       |   7 +-
>  fs/nfs/pnfs.c                          | 223 +++++++++++++++++++------
>  fs/nfs/pnfs.h                          |  30 +++-
>  include/linux/nfs_fs_sb.h              |   1 +
>  include/linux/nfs_xdr.h                |   2 +-
>  10 files changed, 249 insertions(+), 81 deletions(-)
>

These have been in use for a while at Meta, against Hammerspace's servers,
and have been behaving well.

Reviewed-by: Jeff Layton <jlayton@xxxxxxxxxx>
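
For anyone skimming the thread, a rough user-space C sketch of the recovery
flow the cover letter describes (report layoutstats, plus layout errors for
any mirror that saw WRITE/COMMIT failures, via LAYOUTRETURN during the grace
period). Everything here — the struct layout type, its fields, and
recover_layout_after_mds_reboot() — is hypothetical and purely illustrative;
it is not the kernel code from this series.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical, simplified stand-in for per-layout client state. */
    struct layout {
        bool has_outstanding_io;  /* I/O was in flight when the MDS rebooted  */
        bool saw_write_errors;    /* WRITE/COMMIT errors were seen on a mirror */
        int  mirror_id;           /* which mirrored data instance was affected */
    };

    /*
     * After a metadata server reboot, rather than silently discarding state,
     * send a LAYOUTRETURN during the grace period that carries layoutstats
     * and any layout errors, so the server learns which mirror instances may
     * hold corrupted data.
     */
    static void recover_layout_after_mds_reboot(const struct layout *lo)
    {
        if (!lo->has_outstanding_io)
            return;  /* nothing to report for this layout */

        if (lo->saw_write_errors)
            printf("LAYOUTRETURN: report errors for mirror %d + layoutstats\n",
                   lo->mirror_id);
        else
            printf("LAYOUTRETURN: report layoutstats only\n");
    }

    int main(void)
    {
        struct layout lo = { .has_outstanding_io = true,
                             .saw_write_errors = true,
                             .mirror_id = 1 };
        recover_layout_after_mds_reboot(&lo);
        return 0;
    }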