Re: [PATCH] libfrog: fix the if condition in xfrog_bulk_req_v1_setup

On Sat, Jul 30, 2022 at 03:51:40PM +0800, Stephen Zhang wrote:
> Darrick J. Wong <djwong@xxxxxxxxxx> wrote on Sat, Jul 30, 2022 at 09:30:
> >
> > It's probably ok to resend with that change, but ... what were you doing
> > to trip over this error, anyway?
> >
> > --D
> >
> 
> Well, I was running xfs/285 and ran into some other error, which was
> already fixed by the latest xfsprogs. But in the process of examining
> the code logic in xfs_scrub, I still found what may be a flaw here,
> although it hasn't caused any problems so far. Maybe it's still
> necessary to submit the fix. Or am I just understanding the code in
> the wrong way?

FSBULKSTAT was always weird.  Look at the current kernel implementation,
which translates the V1 FSBULKSTAT call into a V5 BULKSTAT call:

	if (cmd == XFS_IOC_FSINUMBERS) {
		breq.startino = lastino ? lastino + 1 : 0;
		error = xfs_inumbers(&breq, xfs_fsinumbers_fmt);
		lastino = breq.startino - 1;
	} else if (cmd == XFS_IOC_FSBULKSTAT_SINGLE) {
		breq.startino = lastino;
		breq.icount = 1;
		error = xfs_bulkstat_one(&breq, xfs_fsbulkstat_one_fmt);
	} else {	/* XFS_IOC_FSBULKSTAT */
		breq.startino = lastino ? lastino + 1 : 0;
		error = xfs_bulkstat(&breq, xfs_fsbulkstat_one_fmt);
		lastino = breq.startino - 1;
	}

We always bump lastino by one, except in the case where it's 0, because
0 is the magic signal to start at the first inode in the filesystem.
This "only bump it if nonzero" behavior works solely because the fs
layout guarantees that inode 0 can never exist.
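That forward mapping can be sketched as a one-liner; the helper name here is illustrative, not an actual kernel symbol, but the logic mirrors the compat code quoted above:

```c
/* Sketch (not kernel code): advancing a v1 "lastino" cursor (inode
 * already stat'd) to a v5 "startino" (inode to stat next).  lastino == 0
 * is the magic "start from the first inode" value, which is safe only
 * because no XFS filesystem can ever have an inode 0. */
#include <stdint.h>

typedef uint64_t xfs_ino_t;

static xfs_ino_t
v1_lastino_to_v5_startino(xfs_ino_t lastino)
{
	return lastino ? lastino + 1 : 0;
}
```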

Now, why does it behave like that?  Before the creation of v5 bulkstat,
which made the cursor work like a standard cursor (i.e. breq->startino
points to the inode that should be stat'd next), the old bulkstat-v1
xfs_bulkstat_grab_chunk did this to mask off all inumbers before and
including the passed in *lastinop:

	idx = agino - irec->ir_startino + 1;
	if (idx < XFS_INODES_PER_CHUNK &&
	    (xfs_inobt_maskn(idx, XFS_INODES_PER_CHUNK - idx) & ~irec->ir_free)) {
		int	i;

		/* We got a right chunk with some left inodes allocated at it.
		 * Grab the chunk record.  Mark all the uninteresting inodes
		 * free -- because they're before our start point.
		 */
		for (i = 0; i < idx; i++) {
			if (XFS_INOBT_MASK(i) & ~irec->ir_free)
				irec->ir_freecount++;
		}

		irec->ir_free |= xfs_inobt_maskn(0, idx);
		*icount = irec->ir_count - irec->ir_freecount;
	}

Notice the "idx = agino - irec->ir_startino + 1".  That means that to go
from bulkstat v5 back to v1, we have to subtract 1 from the inode number
except in the case of zero, which is what libfrog does.  So I don't
think this patch is correct, though the reasons why are ... obscure and
took me several days to remember.
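The inverse mapping, which (as I read it) is what xfrog_bulk_req_v1_setup has to perform when translating a v5 request back to the v1 ioctl, would look like this sketch. Again, the helper name is hypothetical, not the actual libfrog symbol:

```c
/* Sketch: converting a v5 "startino" (inode to stat next) back to a v1
 * "lastino" (inode already stat'd) means subtracting one, except for the
 * special start-of-filesystem value 0, which passes through unchanged. */
#include <stdint.h>

typedef uint64_t xfs_ino_t;

static xfs_ino_t
v5_startino_to_v1_lastino(xfs_ino_t startino)
{
	return startino ? startino - 1 : 0;
}
```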

--D

> Thanks,
> 
> Stephen.
