Re: [PATCH] xfs_io: Implement inodes64 command - bug in XFS_IOC_FSINUMBERS?

Thanks Brian and Eric,

I'll rework my patch based on this discussion.


On Thu, Sep 24, 2015 at 09:10:40AM +1000, Dave Chinner wrote:
> On Wed, Sep 23, 2015 at 12:28:34PM +0200, Carlos Maiolino wrote:
> > Howdy folks,
> > 
> > I was working on implementing the feature suggested for my patch - getting
> > the next inode in use after a given one - and I hit something that I'm not
> > sure should be considered a bug or just the way it works.
> > 
> > XFS_IOC_FSINUMBERS is supposed to be called with a zeroed
> > xfs_fsop_bulkreq.lastip; on each call the kernel updates that field to the
> > last inode returned, and the next call returns, in xfs_inogrp.xi_startino,
> > the next existing inode after .lastip.
> > 
> > So I was expecting that, by passing a non-zero .lastip on the first call, I
> > would get the next inode right after the one passed in .lastip, but after
> > some tests and reading the code, I noticed that this is not the case.
> 
> XFS_IOC_FSINUMBERS is not a "does this inode exist" query API - you
> use the bulkstat interface for that. XFS_IOC_FSINUMBERS is for
> iterating the "inode table", and its API returns records, not
> individual inodes.
> 
> Those records contain information about a chunk of inodes, not
> individual inodes. The "lastino" cookie it uses always points to the
> last inode in the last chunk it returns - the next iteration will
> start at the chunk *after* the one that contains lastino.
> 
> Hence it is behaving as intended...
> 
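That clears it up, thanks. Just to check that I'm reading the record format
right: each xfs_inogrp describes a chunk of up to 64 inodes, with
xi_allocmask flagging which ones are in use. A helper like the sketch below
would decode a record (my own illustration - chunk_has_inode is not an
existing xfs_io function, and the struct is a userspace mirror of the
kernel's fs/xfs/xfs_fs.h definition):

```c
#include <stdint.h>

/* Mirrors struct xfs_inogrp from the kernel's fs/xfs/xfs_fs.h:
 * one record per inode chunk, not per inode. */
struct xfs_inogrp {
	uint64_t xi_startino;	/* first inode number in the chunk */
	int32_t  xi_alloccount;	/* number of bits set in xi_allocmask */
	uint64_t xi_allocmask;	/* bit i set => inode xi_startino + i in use */
};

/* Hypothetical helper: does this chunk record say 'ino' is allocated?
 * A chunk covers at most 64 inodes. */
int chunk_has_inode(const struct xfs_inogrp *g, uint64_t ino)
{
	if (ino < g->xi_startino || ino - g->xi_startino >= 64)
		return 0;
	return (int)((g->xi_allocmask >> (ino - g->xi_startino)) & 1);
}
```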
> > I'm not sure if this is the desired behaviour, but I'd say that if the
> > inode passed in .lastip is not the first in its chunk, the output should
> > start with that chunk instead of the next one. But I'd prefer to hear your
> > point of view before starting to fix something that may not actually be
> > broken :-)
> 
> It doesn't matter if it is "desired behaviour" or not - we can't
> change it. If we change it, we risk breaking userspace applications
> that rely on it working the way it currently does. Most likely
> that application will be xfsdump, and the breakage will be silent
> and very hard to detect....

I thought about this possibility too, but didn't mention it in my e-mail;
it's good to know.
> 
> Perhaps reading the recent history of fs/xfs/xfs_itable.c would be
> instructive. ;)
> 
I certainly will :)
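For what it's worth, the iteration loop I have in mind for inodes64 would
look roughly like the sketch below. It is untested; the struct layouts and
ioctl number are local mirrors of the kernel's fs/xfs/xfs_fs.h (in real
code you would just include the xfsprogs headers), and dump_inode_chunks
is a name I made up:

```c
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>

/* Local mirrors of the kernel's fs/xfs/xfs_fs.h definitions; real code
 * should include the xfsprogs headers instead of redefining these. */
struct xfs_fsop_bulkreq {
	uint64_t *lastip;	/* in/out: iteration cookie, start at 0 */
	int32_t   count;	/* in: number of records ubuffer can hold */
	void     *ubuffer;	/* out: array of struct xfs_inogrp */
	int32_t  *ocount;	/* out: records actually returned */
};

struct xfs_inogrp {
	uint64_t xi_startino;
	int32_t  xi_alloccount;
	uint64_t xi_allocmask;
};

#define XFS_IOC_FSINUMBERS	_IOR('X', 103, struct xfs_fsop_bulkreq)

/* Walk the whole inode table, one batch of chunk records at a time.
 * Needs an fd on an XFS filesystem. */
static int dump_inode_chunks(int fd)
{
	struct xfs_inogrp igrp[64];
	uint64_t lastip = 0;	/* cookie: last inode of the last chunk seen */
	int32_t ocount = 0;
	struct xfs_fsop_bulkreq req = {
		.lastip  = &lastip,	/* kernel advances this per call */
		.count   = 64,
		.ubuffer = igrp,
		.ocount  = &ocount,
	};

	for (;;) {
		if (ioctl(fd, XFS_IOC_FSINUMBERS, &req) < 0) {
			perror("XFS_IOC_FSINUMBERS");
			return -1;
		}
		if (ocount == 0)	/* no more chunks */
			return 0;
		for (int i = 0; i < ocount; i++)
			printf("chunk %llu: %d inodes, mask 0x%016llx\n",
			       (unsigned long long)igrp[i].xi_startino,
			       igrp[i].xi_alloccount,
			       (unsigned long long)igrp[i].xi_allocmask);
	}
}

int main(int argc, char **argv)
{
	int fd, ret;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <xfs-mount-point>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	ret = dump_inode_chunks(fd);
	close(fd);
	return ret ? 1 : 0;
}
```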


Cheers.

-- 
Carlos

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
