Re: [PATCH 01/27] xfs: update mount options documentation

On 6/12/13 5:22 AM, Dave Chinner wrote:
> From: Dave Chinner <dchinner@xxxxxxxxxx>
> 
> Because it's horribly out of date.
> 
> And mark various deprecated options as deprecated and give them a
> removal date.

thanks for doing this.  some nitpicks below.

> Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
> ---
>  Documentation/filesystems/xfs.txt |  282 +++++++++++++++++++++++++------------
>  1 file changed, 192 insertions(+), 90 deletions(-)
> 
> diff --git a/Documentation/filesystems/xfs.txt b/Documentation/filesystems/xfs.txt
> index 83577f0..28afbd1 100644
> --- a/Documentation/filesystems/xfs.txt
> +++ b/Documentation/filesystems/xfs.txt
> @@ -25,6 +25,13 @@ When mounting an XFS filesystem, the following options are accepted.
>  	Valid values for this option are page size (typically 4KiB)
>  	through to 1GiB, inclusive, in power-of-2 increments.
>  
> +	The default behaviour is for dynamic end-of-file
> +	preallocation size, which uses a set of heuristics to
> +	optimise the preallocation size based on the current
> +	allocation patterns within the file and the access patterns
> +	to the file. Specifying a fixed allocsize value turns off
> +	the dynamic behaviour.
> +
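
Since the dynamic behaviour is being spelled out now, a one-line
example might not hurt, e.g. (device and mountpoint purely
illustrative):

	# pin EOF preallocation to 64KiB instead of the dynamic default
	mount -o allocsize=64k /dev/sdb1 /mnt/scratch
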
>    attr2/noattr2
>  	The options enable/disable (default is disabled for backward
>  	compatibility on-disk) an "opportunistic" improvement to be
> @@ -36,86 +43,108 @@ When mounting an XFS filesystem, the following options are accepted.
>  	CRC enabled filesystems always use the attr2 format, and so
>  	will reject the noattr2 mount option if it is set.
>  
> -  barrier
> -	Enables the use of block layer write barriers for writes into
> -	the journal and unwritten extent conversion.  This allows for
> -	drive level write caching to be enabled, for devices that
> -	support write barriers.
> +  barrier/nobarrier
> +	Enables/disables the use of block layer write barriers for
> +	writes into the journal and for data integrity operations.
> +	This allows for drive level write caching to be enabled, for
> +	devices that support write barriers.
>  
> -  discard
> -	Issue command to let the block device reclaim space freed by the
> -	filesystem.  This is useful for SSD devices, thinly provisioned
> -	LUNs and virtual machine images, but may have a performance
> -	impact.
> +	The default behaviour is to enable barriers.
>  
> -  dmapi
> -	Enable the DMAPI (Data Management API) event callouts.
> -	Use with the "mtpt" option.
> +  discard/nodiscard
>  
> -  grpid/bsdgroups and nogrpid/sysvgroups
> -	These options define what group ID a newly created file gets.
> -	When grpid is set, it takes the group ID of the directory in
> -	which it is created; otherwise (the default) it takes the fsgid
> -	of the current process, unless the directory has the setgid bit
> -	set, in which case it takes the gid from the parent directory,
> -	and also gets the setgid bit set if it is a directory itself.
> +	Enable/disable the issuing of commands to let the block
> +	device reclaim space freed by the filesystem.  This is
> +	useful for SSD devices, thinly provisioned LUNs and virtual
> +	machine images, but may have a performance impact.

should we talk about fstrim as an alternative here?

> -  ihashsize=value
> -	In memory inode hashes have been removed, so this option has
> -	no function as of August 2007. Option is deprecated.
> +	The default behaviour is disable discard commands.
> +
> +	Note: It is currently recommended that you use the fstrim
> +	application to discard unused blocks rather than the discard
> +	mount option because the performance impact of this option
> +	is quite severe.

oh right!
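
Maybe even show the alternative explicitly, something like
(mountpoint illustrative; run it periodically from cron or a systemd
timer):

	# batch-discard unused blocks instead of mounting with -o discard
	fstrim -v /mnt/xfs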

> +  grpid/bsdgroups and nogrpid/sysvgroups
> +	These options define what group ID a newly created file
> +	gets.  When grpid is set, it takes the group ID of the
> +	directory in which it is created; otherwise (the default) it
> +	takes the fsgid of the current process, unless the directory
> +	has the setgid bit set, in which case it takes the gid from
> +	the parent directory, and also gets the setgid bit set if it
> +	is a directory itself.
> +
> +  filestreams
> +	Make the data allocator use the filestreams allocation mode
> +	across the entire filesystem rather than just on directories
> +	configured to use it.
>  
>    ikeep/noikeep
> -	When ikeep is specified, XFS does not delete empty inode clusters
> -	and keeps them around on disk. ikeep is the traditional XFS
> -	behaviour. When noikeep is specified, empty inode clusters
> -	are returned to the free space pool. The default is noikeep for
> -	non-DMAPI mounts, while ikeep is the default when DMAPI is in use.
> +	When ikeep is specified, XFS does not delete empty inode
> +	clusters and keeps them around on disk.  When noikeep is
> +	specified, empty inode clusters are returned to the free
> +	space pool.
> +
> +	The default behaviour is delete inode clusters (noikeep).

is to delete inode clusters

>  
>    inode64
> -	Indicates that XFS is allowed to create inodes at any location
> -	in the filesystem, including those which will result in inode
> -	numbers occupying more than 32 bits of significance.  This is
> -	the default allocation option. Applications which do not handle
> -	inode numbers bigger than 32 bits, should use inode32 option.
> +	When inode64 is specified, it indicates that XFS is allowed
> +	to create inodes at any location in the filesystem,
> +	including those which will result in inode numbers occupying
> +	more than 32 bits of significance.  Applications which do
> +	not handle inode numbers bigger than 32 bits should use
> +	inode32 option.

While we're rewriting . . . applications don't use mount options,
not really.  So maybe:

If applications are in use which do not handle inode numbers bigger
than 32 bits, the inode32 option should be specified.

> +	This is the default allocation behaviour, even on 32 bit
> +	machines when neither inode64 or inode32 is specified.
>  
>    inode32
> -	Indicates that XFS is limited to create inodes at locations which
> -	will not result in inode numbers with more than 32 bits of
> -	significance. This is provided for backwards compatibility, since
> -	64 bits inode numbers might cause problems for some applications
> -	that cannot handle large inode numbers.
> +	When inode32 is specified, it indicates that XFS limits
> +	inode creation to locations which will not result in inode
> +	numbers with more than 32 bits of significance. This is
> +	provided for backwards compatibility with older systems and
> +	applications, since 64 bits inode numbers might cause
> +	problems for some applications that cannot handle large
> +	inode numbers.


Any point in talking about what this does to locality, etc.?

>    largeio/nolargeio
>  	If "nolargeio" is specified, the optimal I/O reported in
> -	st_blksize by stat(2) will be as small as possible to allow user
> -	applications to avoid inefficient read/modify/write I/O.
> -	If "largeio" specified, a filesystem that has a "swidth" specified
> -	will return the "swidth" value (in bytes) in st_blksize. If the
> -	filesystem does not have a "swidth" specified but does specify
> -	an "allocsize" then "allocsize" (in bytes) will be returned
> -	instead.
> +	st_blksize by stat(2) will be as small as possible to allow
> +	user applications to avoid inefficient read/modify/write
> +	I/O.  This is typically the page size of the machine, as
> +	this is the granularity of the page cache.
> +
> +	If "largeio" specified, a filesystem that was created with a
> +	"swidth" specified will return the "swidth" value (in bytes)
> +	in st_blksize. If the filesystem does not have a "swidth"
> +	specified but does specify an "allocsize" then "allocsize"
> +	(in bytes) will be returned instead. Otherwise the behaviour
> +	is the same as if "nolargeio" was specified.
> +
>  	If neither of these two options are specified, then filesystem
>  	will behave as if "nolargeio" was specified.

I have to wonder when anyone would want to use largeio.  This doesn't
tell me, either.  ;)

I wonder if there's any clearer way to show defaults.  Maybe:

     largeio
     nolargeio (*)
   	If "nolargeio" is specified, the optimal I/O reported in
	...
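
It might also be worth a hint on how to observe the effect; GNU stat
can print st_blksize directly, e.g. (path illustrative):

	# print the optimal I/O size hint the filesystem advertises in st_blksize
	stat -c %o /mnt/xfs/somefile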



>    logbufs=value
> -	Set the number of in-memory log buffers.  Valid numbers range
> -	from 2-8 inclusive.
> -	The default value is 8 buffers for filesystems with a
> -	blocksize of 64KiB, 4 buffers for filesystems with a blocksize
> -	of 32KiB, 3 buffers for filesystems with a blocksize of 16KiB
> -	and 2 buffers for all other configurations.  Increasing the
> -	number of buffers may increase performance on some workloads
> -	at the cost of the memory used for the additional log buffers
> -	and their associated control structures.
> +	Set the number of in-memory log buffers.  Valid numbers
> +	range from 2-8 inclusive.
> +
> +	The default value is 8 buffers.
> +
> +	If the memory cost of 8 log buffers is too high on small
> +	systems, then it may be reduced at some cost to performance
> +	on metadata intensive workloads.

	The logbsize option below controls the size of each buffer.

>    logbsize=value
> -	Set the size of each in-memory log buffer.
> -	Size may be specified in bytes, or in kilobytes with a "k" suffix.
> -	Valid sizes for version 1 and version 2 logs are 16384 (16k) and
> -	32768 (32k).  Valid sizes for version 2 logs also include
> -	65536 (64k), 131072 (128k) and 262144 (256k).
> -	The default value for machines with more than 32MiB of memory
> -	is 32768, machines with less memory use 16384 by default.
> +	Set the size of each in-memory log buffer.  The size may be
> +	specified in bytes, or in kilobytes with a "k" suffix.
> +	Valid sizes for version 1 and version 2 logs are 16384 (16k)
> +	and 32768 (32k).  Valid sizes for version 2 logs also
> +	include 65536 (64k), 131072 (128k) and 262144 (256k). The
> +	version 2 log size must be an integer multiple of the log
> +	stripe unit configured at mkfs time.

the version 2 log size, or logbsize?

> +	The default value for for version 1 logs is 32768, while the
> +	default value for version 2 logs is MAX(32768, log_sunit).
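
(Also a nit: "for for" in "The default value for for version 1 logs".)

An example showing the two log options together might be nice, e.g.
(values and device illustrative; only worth tweaking for
metadata-intensive workloads):

	# larger in-memory log buffers for a metadata-heavy workload
	mount -o logbufs=8,logbsize=256k /dev/sdb1 /mnt/xfs
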
>  
>    logdev=device and rtdev=device
>  	Use an external log (metadata journal) and/or real-time device.
> @@ -124,16 +153,12 @@ When mounting an XFS filesystem, the following options are accepted.
>  	optional, and the log section can be separate from the data
>  	section or contained within it.
>  
> -  mtpt=mountpoint
> -	Use with the "dmapi" option.  The value specified here will be
> -	included in the DMAPI mount event, and should be the path of
> -	the actual mountpoint that is used.
> -
>    noalign
> -	Data allocations will not be aligned at stripe unit boundaries.
>  
> -  noatime
> -	Access timestamps are not updated when a file is read.
> +	Data allocations will not be aligned at stripe unit
> +	boundaries. This is only relevant to filesystems created
> +	with non-zero data alignment parameters (sunit, swidth) by
> +	mkfs.

why would I use this?

>    norecovery
>  	The filesystem will be mounted without running log recovery.
> @@ -144,8 +169,14 @@ When mounting an XFS filesystem, the following options are accepted.
>  	the mount will fail.
>  
>    nouuid
> -	Don't check for double mounted file systems using the file system uuid.
> -	This is useful to mount LVM snapshot volumes.
> +	Don't check for double mounted file systems using the file
> +	system uuid.  This is useful to mount LVM snapshot volumes,
> +	and often used in combination with "norecovery" for mounting
> +	read-only snapshots.
> +
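
Since the snapshot use case is called out, maybe show it, e.g.
(device names illustrative):

	# mount an LVM snapshot of a live XFS filesystem, read-only
	mount -o ro,nouuid,norecovery /dev/vg0/snap /mnt/snap
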
> +  noquota
> +	Forcibly turns off all quota accounting and enforcement
> +	within the filesystem.
>  
>    uquota/usrquota/uqnoenforce/quota
>  	User disk quota accounting enabled, and limits (optionally)
> @@ -160,24 +191,68 @@ When mounting an XFS filesystem, the following options are accepted.
>  	enforced.  Refer to xfs_quota(8) for further details.
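
Not part of this change, but an example might help here too while
you're at it, e.g. (device illustrative):

	# enable user quota accounting and enforcement, then check usage
	mount -o uquota /dev/sdb1 /mnt/xfs
	xfs_quota -x -c report /mnt/xfs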
>  
>    sunit=value and swidth=value
> -	Used to specify the stripe unit and width for a RAID device or
> -	a stripe volume.  "value" must be specified in 512-byte block
> -	units.
> -	If this option is not specified and the filesystem was made on
> -	a stripe volume or the stripe width or unit were specified for
> -	the RAID device at mkfs time, then the mount system call will
> -	restore the value from the superblock.  For filesystems that
> -	are made directly on RAID devices, these options can be used
> -	to override the information in the superblock if the underlying
> -	disk layout changes after the filesystem has been created.
> -	The "swidth" option is required if the "sunit" option has been
> -	specified, and must be a multiple of the "sunit" value.
> +	Used to specify the stripe unit and width for a RAID device
> +	or a stripe volume.  "value" must be specified in 512-byte
> +	block units. These options are only relevant to filesystems
> +	that were created with non-zero data alignment parameters.

(i.e. su/sw or sunit/swidth was specified at mkfs time).

> +
> +	The sunit and swidth parameters specified must be compatible
> +	with the existing filesystem alignment characteristics. If
> +	the filesystem was not created with data alignment
> +	constraints, then it may be impossible to set a valid sunit
> +	(and hence swidth) value.  In general, that means the only
> +	valid changes to sunit are increasing it by a power-of-2
> +	multiple. Valid swidth values are any integer multiple of a
> +	valid sunit value.

Now I'm confused.  It says this is only relevant to a filesystem with
geometry specified, but then goes on to discuss the case where it
wasn't specified and says it "may be impossible" -- implying it may
sometimes be possible after all . . . ?

And if nothing was specified (i.e. 0 su/sw), then the only valid
change is to increase that 0 by a power-of-2 multiple?


> +	For filesystems that have existing data alignment values on
> +	disk (i.e. specified by mkfs), any new valid values passed
> +	in as mount options will overwrite the values stored on
> +	disk. Hence this mount option does not need to be specified
> +	for every mount operation in this case.

so I think this all needs to clarify whether it works on filesystems
w/o existing geometry, or not.  And "why you might want this" would
be helpful too.
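
A worked example might help with the units as well; e.g. for a 4+1
RAID5 with a 64KiB chunk size (numbers purely illustrative):

	# sunit = 64KiB chunk / 512 = 128; swidth = 4 data disks * 128 = 512
	mount -o sunit=128,swidth=512 /dev/md0 /mnt/xfs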

>    swalloc
>  	Data allocations will be rounded up to stripe width boundaries
>  	when the current end of file is being extended and the file
>  	size is larger than the stripe width size.
>  
> +  wsync
> +	When specified, all filesystem namespace operations are
> +	executed synchronously. This ensures that when the namespace
> +	operation (create, unlink, etc) completes, the change to the
> +	namespace is on stable storage. This is useful in HA setups
> +	where failover must not result in clients seeing
> +	inconsistent namespace presentation during or after a
> +	failover event.
> +
> +
> +Deprecated Mount Options
> +========================
> +
> +  delaylog/nodelaylog
> +	Delayed logging is the only logging method that XFS supports
> +	now, so these mount options are now ignored.
> +
> +	Due for removal in 3.12.
> +
> +  ihashsize=value
> +	In memory inode hashes have been removed, so this option has
> +	no function as of August 2007. Option is deprecated.
> +
> +	Due for removal in 3.12.
> +
> +  irixsgid
> +	This behaviour is now controlled by a sysctl, so the mount
> +	option is ignored.
> +
> +	Due for removal in 3.12.
> +
> +  osyncisdsync
> +  osyncisosync
> +	O_SYNC and O_DSYNC are fully supported, so there is no need
> +	for these options any more.
> +
> +	Due for removal in 3.12.

>  sysctls
>  =======
> @@ -189,15 +264,20 @@ The following sysctls are available for the XFS filesystem:
>  	in /proc/fs/xfs/stat.  It then immediately resets to "0".
>  
>    fs.xfs.xfssyncd_centisecs	(Min: 100  Default: 3000  Max: 720000)
> -  	The interval at which the xfssyncd thread flushes metadata
> -  	out to disk.  This thread will flush log activity out, and
> -  	do some processing on unlinked inodes.
> +	The interval at which the filesystem flushes metadata
> +	out to disk and runs internal cache cleanup routines.
>  
> -  fs.xfs.xfsbufd_centisecs	(Min: 50  Default: 100	Max: 3000)
> -	The interval at which xfsbufd scans the dirty metadata buffers list.
> +  fs.xfs.filestream_centisecs	(Min: 1  Default: 3000  Max: 360000)
> +	The interval at which the filesystem ages filestreams cache
> +	references and returns timed-out AGs back to the free stream
> +	pool.

I bet Documentation/filesystems/xfs-filestreams.txt would be handy, but *shrug*

> -  fs.xfs.age_buffer_centisecs	(Min: 100  Default: 1500  Max: 720000)
> -	The age at which xfsbufd flushes dirty metadata buffers to disk.
> +  fs.xfs.speculative_prealloc_lifetime
> +		(Units: seconds   Min: 1  Default: 300  Max: 86400)
> +	The interval at which the background scanning for inodes
> +	with unused speculative preallocation runs at. The scan

The interval at which the background scanning . . . runs.  (no at.)

> +	removes unused preallocation from clean inodes and releases
> +	the unused space back to the free pool.
>  
>    fs.xfs.error_level		(Min: 0  Default: 3  Max: 11)
>  	A volume knob for error reporting when internal errors occur.
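
While in the sysctl section: one generic example of how to set these
at runtime might be handy, e.g.:

	# crank up error reporting verbosity (persist it via /etc/sysctl.d/)
	sysctl -w fs.xfs.error_level=11
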
> @@ -254,9 +334,31 @@ The following sysctls are available for the XFS filesystem:
>  	by the xfs_io(8) chattr command on a directory to be
>  	inherited by files in that directory.
>  
> +  fs.xfs.inherit_nodefrag	(Min: 0  Default: 1  Max: 1)
> +	Setting this to "1" will cause the "nodefrag" flag set
> +	by the xfs_io(8) chattr command on a directory to be
> +	inherited by files in that directory.
> +
>    fs.xfs.rotorstep		(Min: 1  Default: 1  Max: 256)
>  	In "inode32" allocation mode, this option determines how many
>  	files the allocator attempts to allocate in the same allocation
>  	group before moving to the next allocation group.  The intent
>  	is to control the rate at which the allocator moves between
>  	allocation groups when allocating extents for new files.
> +
> +Deprecated Sysctls
> +==================
> +
> +  fs.xfs.xfsbufd_centisecs	(Min: 50  Default: 100	Max: 3000)
> +	Dirty metadata is now tracked by the log subsystem and
> +	flushing is driven by log space and idling demands. The
> +	xfsbufd no longer exists, so this syctl does nothing.
> +
> +	Due for removal in 3.14.
> +
> +  fs.xfs.age_buffer_centisecs	(Min: 100  Default: 1500  Max: 720000)
> +	Dirty metadata is now tracked by the log subsystem and
> +	flushing is driven by log space and idling demands. The
> +	xfsbufd no longer exists, so this syctl does nothing.
> +
> +	Due for removal in 3.14.
> 

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs



