On Fri, Aug 25, 2023 at 10:56:27AM -0700, Darrick J. Wong wrote:
> On Fri, Aug 25, 2023 at 05:09:20PM +0800, cheng.lin130@xxxxxxxxxx wrote:
> > > On Thu, Aug 24, 2023 at 03:43:52PM +0800, cheng.lin130@xxxxxxxxxx wrote:
> > >> From: Cheng Lin <cheng.lin130@xxxxxxxxxx>
> > >> An dir nlinks overflow which down form 0 to 0xffffffff, cause the
> > >> directory to become unusable until the next xfs_repair run.
> > > Hmmm. How does this ever happen?
> > > IMO, if it does happen, we need to fix whatever bug that causes it
> > > to happen, not issue a warning and do nothing about the fact we
> > > just hit a corrupt inode state...
> > Yes, I'm very agree with your opinion. But I don't know how it happened,
> > and how to reproduce it.
>
> Wait, is this the result of a customer problem? Or static analysis?
>
> > >> Introduce protection for drop nlink to reduce the impact of this.
> > >> And produce a warning for directory nlink error during remove.
> > >>
> > >> Signed-off-by: Cheng Lin <cheng.lin130@xxxxxxxxxx>
> > >> ---
> > >>  fs/xfs/xfs_inode.c | 16 +++++++++++++++-
> > >>  1 file changed, 15 insertions(+), 1 deletion(-)
> > >>
> > >> diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
> > >> index 9e62cc5..536dbe4 100644
> > >> --- a/fs/xfs/xfs_inode.c
> > >> +++ b/fs/xfs/xfs_inode.c
> > >> @@ -919,6 +919,15 @@ STATIC int xfs_iunlink_remove(struct xfs_trans *tp, struct xfs_perag *pag,
>
> I'm not sure why your diff program thinks this hunk is from
> xfs_iunlink_remove, seeing as the line numbers of the chunk point to
> xfs_droplink. Maybe that's what's going on in this part of the thread?

Yes. I don't expect patches to be mangled like this - I generally take
the hunk prefix to indicate what code is being modified when reading
patches, not expecting that the hunk is modifying code over a thousand
lines prior to the function in the prefix...

So, yeah, something went very wrong with the generation of this
patch...

-Dave.

--
Dave Chinner
david@xxxxxxxxxxxxx
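
For readers following the thread, here is a minimal userspace sketch of the
failure mode being discussed; it is not the actual xfs_droplink() patch. The
link count is an unsigned 32-bit value, so decrementing it when it is already
zero wraps around to 0xffffffff, which is the "overflow" the patch description
refers to. The struct and function names below (fake_inode, droplink_unguarded,
droplink_guarded) are hypothetical; the guarded variant simply refuses the
decrement and warns, roughly along the lines the patch description suggests.

/*
 * Minimal userspace sketch of the failure mode under discussion, NOT
 * the actual xfs_droplink() change: the link count is an unsigned
 * 32-bit value, so decrementing it when it is already zero wraps
 * around to 0xffffffff.  The struct and function names below are
 * hypothetical.
 */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

struct fake_inode {
	uint32_t	i_nlink;	/* link count, unsigned like the VFS i_nlink */
};

/* Unguarded decrement: drops straight through zero and wraps. */
static void droplink_unguarded(struct fake_inode *ip)
{
	ip->i_nlink--;
}

/*
 * Guarded decrement: treat an already-zero link count as corruption
 * instead of making it worse, and tell the caller.
 */
static int droplink_guarded(struct fake_inode *ip)
{
	if (ip->i_nlink == 0) {
		fprintf(stderr,
			"warning: dropping nlink on inode that already has zero links\n");
		return -1;
	}
	ip->i_nlink--;
	return 0;
}

int main(void)
{
	struct fake_inode a = { .i_nlink = 0 };
	struct fake_inode b = { .i_nlink = 0 };

	droplink_unguarded(&a);
	printf("unguarded drop from 0: i_nlink = 0x%" PRIx32 "\n", a.i_nlink);

	if (droplink_guarded(&b) != 0)
		printf("guarded drop refused:  i_nlink = %" PRIu32 "\n", b.i_nlink);

	return 0;
}

Compiled and run, the unguarded drop prints 0xffffffff while the guarded one
reports the refusal and leaves the count at zero. As the review above points
out, a guard like this only limits the damage; the real fix is whatever bug
lets the link count reach that state in the first place.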