Re: [PATCH 1/4] man: add documentation for v5 bulkstat ioctl

On Thu, Sep 26, 2019 at 01:18:52PM -0500, Eric Sandeen wrote:
> On 9/25/19 4:32 PM, Darrick J. Wong wrote:
> > From: Darrick J. Wong <darrick.wong@xxxxxxxxxx>
> > 
> > Add a new manpage describing the V5 XFS_IOC_BULKSTAT ioctl.
> > 
> > Signed-off-by: Darrick J. Wong <darrick.wong@xxxxxxxxxx>
> > ---
> >  man/man2/ioctl_xfs_bulkstat.2   |  330 +++++++++++++++++++++++++++++++++++++++
> >  man/man2/ioctl_xfs_fsbulkstat.2 |    6 +
> >  2 files changed, 336 insertions(+)
> >  create mode 100644 man/man2/ioctl_xfs_bulkstat.2
> > 
> 
> couple errors/typos and a nitpick below
> 
> > diff --git a/man/man2/ioctl_xfs_bulkstat.2 b/man/man2/ioctl_xfs_bulkstat.2
> > new file mode 100644
> > index 00000000..f687cfe8
> > --- /dev/null
> > +++ b/man/man2/ioctl_xfs_bulkstat.2
> > @@ -0,0 +1,330 @@
> > +.\" Copyright (c) 2019, Oracle.  All rights reserved.
> > +.\"
> > +.\" %%%LICENSE_START(GPLv2+_DOC_FULL)
> > +.\" SPDX-License-Identifier: GPL-2.0+
> > +.\" %%%LICENSE_END
> > +.TH IOCTL-XFS-BULKSTAT 2 2019-05-23 "XFS"
> > +.SH NAME
> > +ioctl_xfs_bulkstat \- query information for a batch of XFS inodes
> > +.SH SYNOPSIS
> > +.br
> > +.B #include <xfs/xfs_fs.h>
> > +.PP
> > +.BI "int ioctl(int " fd ", XFS_IOC_BULKSTAT, struct xfs_bulkstat_req *" arg );
> > +.SH DESCRIPTION
> > +Query stat information for a group of XFS inodes.
> > +This ioctl uses
> > +.B struct xfs_bulkstat_req
> > +to set up a bulk transfer with the kernel:
> > +.PP
> > +.in +4n
> > +.nf
> > +struct xfs_bulkstat_req {
> > +	struct xfs_bulk_ireq    hdr;
> > +	struct xfs_bulkstat     bulkstat[];
> > +};
> > +
> > +struct xfs_bulk_ireq {
> > +	uint64_t                ino;
> > +	uint32_t                flags;
> > +	uint32_t                icount;
> > +	uint32_t                ocount;
> > +	uint32_t                agno;
> > +	uint64_t                reserved[5];
> > +};
> 
> maybe a "see below for the xfs_bulkstat structure definition?"

Dunno?  Seems fairly obvious to me?  <shrug> I'll put it in.

> > +.fi
> > +.in
> > +.PP
> > +.I hdr.ino
> > +should be set to the number of the first inode for which the caller wants
> > +information, or zero to start with the first inode in the filesystem
> 
> , or to a special value if XFS_BULK_IREQ_SPECIAL is set in the flags field.

Ok.

> > +Note that this is a different semantic than the
> > +.B lastip
> > +in the old
> > +.B FSBULKSTAT
> > +ioctl.
> > +After the call, this value will be set to the number of the next inode for
> > +which information could be supplied.
> > +This sets up the next call for an iteration loop.
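(For illustration only, not part of the patch: a minimal sketch of the
iteration loop this paragraph describes.  It assumes "req" points to an
xfs_bulkstat_req already sized for hdr.icount records and that "fd" is
open on a file in the target filesystem.)

    req->hdr.ino = 0;                   /* start with the first inode */
    for (;;) {
            if (ioctl(fd, XFS_IOC_BULKSTAT, req) < 0) {
                    perror("XFS_IOC_BULKSTAT");
                    break;
            }
            if (req->hdr.ocount == 0)   /* no more inodes */
                    break;
            for (uint32_t i = 0; i < req->hdr.ocount; i++)
                    printf("ino %llu\n",
                           (unsigned long long)req->bulkstat[i].bs_ino);
            /* hdr.ino now names the next inode; just call again. */
    }
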
> > +.PP
> > +If the
> > +.B XFS_BULK_REQ_SPECIAL
> 
> XFS_BULK_IREQ_SPECIAL
> 
> > +flag is set, this field is interpreted as follows:
> > +.RS 0.4i
> > +.TP
> > +.B XFS_BULK_IREQ_SPECIAL_ROOT
> > +Return stat information for the root directory inode.
> 
> How about something like:
> 
> ---
> 
> If the XFS_BULK_IREQ_SPECIAL flag is set in the flags field, then this ino field
> is interpreted according to the following special values:
> 
> XFS_BULK_IREQ_SPECIAL_ROOT
>     Return stat information for the root directory inode.
> 
> ---
> 
> or
> 
> If the XFS_BULK_REQ_SPECIAL flag is set, this field is interpreted according
> to the following special values:
> 
> XFS_BULK_IREQ_SPECIAL_ROOT
>     Return stat information for the root directory inode.

I'll go do that.

> ----
> 
> > +.RE
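(Illustrative sketch, not from the patch, of requesting the special root
inode value described above; "req" is assumed to come from the caller.)

    req->hdr.ino = XFS_BULK_IREQ_SPECIAL_ROOT;
    req->hdr.flags = XFS_BULK_IREQ_SPECIAL;
    req->hdr.icount = 1;
    /* ioctl(fd, XFS_IOC_BULKSTAT, req) then returns the root inode's
     * stat information in bulkstat[0]. */
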
> > +.PP
> > +.PP
> > +.I hdr.flags
> > +is a bit set of operational flags:
> > +.RS 0.4i
> > +.TP
> > +.B XFS_BULK_REQ_AGNO
> 
> XFS_BULK_IREQ_AGNO
> 
> > +If this is set, the call will only return results for the allocation group (AG)
> > +set in
> > +.BR hdr.agno .
> > +If
> > +.B hdr.ino
> > +is set to zero, results will be returned starting with the first inode in the
> > +AG.
> > +This flag may not be set at the same time as the
> > +.B XFS_BULK_REQ_SPECIAL
> 
> XFS_BULK_IREQ_SPECIAL
> 
> > +flag.
> > +.TP
> > +.B XFS_BULK_REQ_SPECIAL
> 
> XFS_BULK_IREQ_SPECIAL ...

Fixed all this stuff up.

--D

> 
> > +If this is set, results will be returned for only the special inode
> > +specified in the
> > +.B hdr.ino
> > +field.
> > +This flag may not be set at the same time as the
> > +.B XFS_BULK_REQ_AGNO
> > +flag.
> > +.RE
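(Illustrative sketch, not from the patch: restricting the walk to a single
allocation group with XFS_BULK_IREQ_AGNO.  "req" and "agno" are assumed to
come from the caller.)

    req->hdr.ino = 0;                   /* first inode in the AG */
    req->hdr.flags = XFS_BULK_IREQ_AGNO;
    req->hdr.agno = agno;               /* AG to walk */
    /* ...then call ioctl(fd, XFS_IOC_BULKSTAT, req) as usual; the loop
     * ends when hdr.ocount comes back as zero. */
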
> > +.PP
> > +.I hdr.icount
> > +is the number of inodes to examine.

"hdr.icount  is the maximum number of records to return.  This should be
the size of the array that comes after the header."

> > +.PP
> > +.I hdr.ocount
> > +will be set to the number of records returned.
> > +.PP
> > +.I hdr.agno
> > +is the number of the allocation group (AG) for which we want results.
> > +If the
> > +.B XFS_BULK_REQ_AGNO
> > +flag is not set, this field is ignored.
> > +.PP
> > +.I hdr.reserved
> > +must be set to zero.
> > +
> > +.PP
> > +.I bulkstat
> > +is an array of
> > +.B struct xfs_bulkstat
> > +which is described below.
> > +The array must have at least
> > +.I icount
> > +elements.
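(A hypothetical helper, not part of the patch, showing how a caller might
size the request so the array has icount elements as required above.)

    #include <stdlib.h>
    #include <stdint.h>
    #include <xfs/xfs_fs.h>

    static struct xfs_bulkstat_req *
    alloc_bulkstat_req(uint32_t nr)
    {
            struct xfs_bulkstat_req *req;

            /* header plus nr records; calloc() also zeroes hdr.reserved */
            req = calloc(1, sizeof(*req) + nr * sizeof(struct xfs_bulkstat));
            if (!req)
                    return NULL;
            req->hdr.icount = nr;       /* capacity of the bulkstat[] array */
            return req;
    }
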
> > +.PP
> > +.in +4n
> > +.nf
> > +struct xfs_bulkstat {
> > +	uint64_t                bs_ino;
> > +	uint64_t                bs_size;
> > +
> > +	uint64_t                bs_blocks;
> > +	uint64_t                bs_xflags;
> > +
> > +	uint64_t                bs_atime;
> > +	uint64_t                bs_mtime;
> > +
> > +	uint64_t                bs_ctime;
> > +	uint64_t                bs_btime;
> > +
> > +	uint32_t                bs_gen;
> > +	uint32_t                bs_uid;
> > +	uint32_t                bs_gid;
> > +	uint32_t                bs_projectid;
> > +
> > +	uint32_t                bs_atime_nsec;
> > +	uint32_t                bs_mtime_nsec;
> > +	uint32_t                bs_ctime_nsec;
> > +	uint32_t                bs_btime_nsec;
> > +
> > +	uint32_t                bs_blksize;
> > +	uint32_t                bs_rdev;
> > +	uint32_t                bs_cowextsize_blks;
> > +	uint32_t                bs_extsize_blks;
> > +
> > +	uint32_t                bs_nlink;
> > +	uint32_t                bs_extents;
> > +	uint32_t                bs_aextents;
> > +	uint16_t                bs_version;
> > +	uint16_t                bs_forkoff;
> > +
> > +	uint16_t                bs_sick;
> > +	uint16_t                bs_checked;
> > +	uint16_t                bs_mode;
> > +	uint16_t                bs_pad2;
> > +
> > +	uint64_t                bs_pad[7];
> > +};
> > +.fi
> > +.in
> > +.PP
> > +.I bs_ino
> > +is the inode number of this record.
> > +.PP
> > +.I bs_size
> > +is the size of the file, in bytes.
> > +.PP
> > +.I bs_blocks
> > +is the number of filesystem blocks allocated to this file, including metadata.
> > +.PP
> > +.I bs_xflags
> > +tells us what extended flags are set on this inode.
> > +These flags are the same values as those defined in the
> > +.B XFS INODE FLAGS
> > +section of the
> > +.BR ioctl_xfs_fsgetxattr (2)
> > +manpage.
> > +.PP
> > +.I bs_atime
> > +is the last time this file was accessed, in seconds.
> > +.PP
> > +.I bs_mtime
> > +is the last time the contents of this file were modified, in seconds.
> > +.PP
> > +.I bs_ctime
> > +is the last time this inode record was modified, in seconds.
> > +.PP
> > +.I bs_btime
> > +is the time this inode record was created, in seconds.
> > +.PP
> > +.I bs_gen
> > +is the generation number of the inode record.
> > +.PP
> > +.I bs_uid
> > +is the user id.
> > +.PP
> > +.I bs_gid
> > +is the group id.
> > +.PP
> > +.I bs_projectid
> > +is the project id.
> > +.PP
> > +.I bs_atime_nsec
> > +is the nanoseconds component of the last time this file was accessed.
> > +.PP
> > +.I bs_mtime_nsec
> > +is the nanoseconds component of the last time the contents of this file were
> > +modified.
> > +.PP
> > +.I bs_ctime_nsec
> > +is the nanoseconds component of the last time this inode record was modified.
> > +.PP
> > +.I bs_btime_nsec
> > +is the nanoseconds component of the time this inode record was created.
> > +.PP
> > +.I bs_blksize
> > +is the size of a data block for this file, in units of bytes.
> > +.PP
> > +.I bs_rdev
> > +is the encoded device id if this is a special file.
> > +.PP
> > +.I bs_cowextsize_blks
> > +is the Copy on Write extent size hint for this file, in units of data blocks.
> > +.PP
> > +.I bs_extsize_blks
> > +is the extent size hint for this file, in units of data blocks.
> > +.PP
> > +.I bs_nlink
> > +is the number of hard links to this inode.
> > +.PP
> > +.I bs_extents
> > +is the number of storage mappings associated with this file's data.
> > +.PP
> > +.I bs_aextents
> > +is the number of storage mappings associated with this file's extended
> > +attributes.
> > +.PP
> > +.I bs_version
> > +is the version of this data structure.
> > +Currently, only 1 or 5 are supported.
> > +.PP
> > +.I bs_forkoff
> > +is the offset of the attribute fork in the inode record, in bytes.
> > +.PP
> > +The fields
> > +.IR bs_sick " and " bs_checked
> > +indicate the relative health of various inode metadata.
> > +Please see the section
> > +.B XFS INODE METADATA HEALTH REPORTING
> > +for more information.
> > +.PP
> > +.I bs_mode
> > +is the file type and mode.
> > +.PP
> > +.I bs_pad[7]
> > +is zeroed.
> > +.SH RETURN VALUE
> > +On error, \-1 is returned, and
> > +.I errno
> > +is set to indicate the error.
> > +.PP
> > +.SH XFS INODE METADATA HEALTH REPORTING
> > +.PP
> > +The online filesystem checking utility scans inode metadata and records what it
> > +finds in the kernel incore state.
> > +The following scheme is used for userspace to read the incore health status of
> > +an inode:
> > +.IP \[bu] 2
> > +If a given sick flag is set in
> > +.IR bs_sick ,
> > +then that piece of metadata has been observed to be damaged.
> > +The same bit should be set in
> > +.IR bs_checked .
> > +.IP \[bu]
> > +If a given sick flag is set in
> > +.I bs_checked
> > +but is not set in
> > +.IR bs_sick ,
> > +then that piece of metadata has been checked and is not faulty.
> > +.IP \[bu]
> > +If a given sick flag is not set in
> > +.IR bs_checked ,
> > +then no conclusion can be made.
> > +.PP
> > +The following flags apply to these fields:
> > +.RS 0.4i
> > +.TP
> > +.B XFS_BS_SICK_INODE
> > +The inode's record itself.
> > +.TP
> > +.B XFS_BS_SICK_BMBTD
> > +File data extent mappings.
> > +.TP
> > +.B XFS_BS_SICK_BMBTA
> > +Extended attribute extent mappings.
> > +.TP
> > +.B XFS_BS_SICK_BMBTC
> > +Copy on Write staging extent mappings.
> > +.TP
> > +.B XFS_BS_SICK_DIR
> > +Directory information.
> > +.TP
> > +.B XFS_BS_SICK_XATTR
> > +Extended attribute data.
> > +.TP
> > +.B XFS_BS_SICK_SYMLINK
> > +Symbolic link target.
> > +.TP
> > +.B XFS_BS_SICK_PARENT
> > +Parent pointers.
> > +.RE
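(Sketch, not from the patch, of how the three rules above combine for one
flag; the helper name is made up for illustration.)

    static const char *
    inode_core_health(const struct xfs_bulkstat *bs)
    {
            if (bs->bs_sick & XFS_BS_SICK_INODE)
                    return "damaged";
            if (bs->bs_checked & XFS_BS_SICK_INODE)
                    return "checked, not faulty";
            return "no conclusion (not checked)";
    }
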
> > +.SH ERRORS
> > +Error codes can be one of, but are not limited to, the following:
> > +.TP
> > +.B EFAULT
> > +The kernel was not able to copy into the userspace buffer.
> > +.TP
> > +.B EFSBADCRC
> > +Metadata checksum validation failed while performing the query.
> > +.TP
> > +.B EFSCORRUPTED
> > +Metadata corruption was encountered while performing the query.
> > +.TP
> > +.B EINVAL
> > +One of the arguments was not valid.
> > +.TP
> > +.B EIO
> > +An I/O error was encountered while performing the query.
> > +.TP
> > +.B ENOMEM
> > +There was insufficient memory to perform the query.
> > +.SH CONFORMING TO
> > +This API is specific to the XFS filesystem on the Linux kernel.
> > +.SH SEE ALSO
> > +.BR ioctl (2),
> > +.BR ioctl_xfs_fsgetxattr (2)
> > diff --git a/man/man2/ioctl_xfs_fsbulkstat.2 b/man/man2/ioctl_xfs_fsbulkstat.2
> > index 8f880c5a..3f059942 100644
> > --- a/man/man2/ioctl_xfs_fsbulkstat.2
> > +++ b/man/man2/ioctl_xfs_fsbulkstat.2
> > @@ -15,6 +15,12 @@ ioctl_xfs_fsbulkstat \- query information for a batch of XFS inodes
> >  .BI "int ioctl(int " fd ", XFS_IOC_FSBULKSTAT_SINGLE, struct xfs_fsop_bulkreq *" arg );
> >  .SH DESCRIPTION
> >  Query stat information for a group of XFS inodes.
> > +.PP
> > +NOTE: This ioctl has been superseded.
> > +Please see the
> > +.BR ioctl_xfs_bulkstat (2)
> > +manpage for information about its replacement.
> > +.PP
> >  These ioctls use
> >  .B struct xfs_fsop_bulkreq
> >  to set up a bulk transfer with the kernel:
> > 


