Re: [dm-devel] REQUEST for new 'topology' metrics to be moved out of the 'queue' sysfs directory.

>>>>> "Neil" == Neil Brown <neilb@xxxxxxx> writes:

Neil> Providing the fields are clearly and unambiguously documented so
Neil> that I can use the documentation to verify the implementation
Neil> (in md at least), I will be satisfied.

The current sysfs documentation says:

/sys/block/<disk>/queue/minimum_io_size:
[...] For RAID arrays it is often the stripe chunk size.

/sys/block/<disk>/queue/optimal_io_size:
[...] For RAID devices it is usually the stripe width or the internal
block size.

The latter should be "internal track size".  But in the context of MD I
think those two definitions are crystal clear.  For a 4-drive MD RAID5
with 64 KiB chunks, say, that means a minimum_io_size of 64 KiB (the
chunk) and an optimal_io_size of 192 KiB (the three-data-disk stripe
width).


To make the application of these values more obvious, I propose the
following:

What:		/sys/block/<disk>/queue/minimum_io_size
Date:		April 2009
Contact:	Martin K. Petersen <martin.petersen@xxxxxxxxxx>
Description:
		Storage devices may report a granularity or minimum I/O
		size which is the device's preferred unit of I/O.
		Requests smaller than this may incur a significant
		performance penalty.

		For disk drives this value corresponds to the physical
		block size. For RAID devices it is usually the stripe
		chunk size.

		A properly aligned multiple of minimum_io_size is the
		preferred request size for workloads where a high number
		of I/O operations is desired.
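
		To illustrate (not as part of the proposed text), here is a
		minimal userspace sketch.  The queue_value() helper and the
		"sda" / 17000-byte request are made up for the example:

#include <stdio.h>

/* Read a single numeric value from a queue/ sysfs attribute.
 * Returns 0 on error so callers can fall back to a default. */
static unsigned long queue_value(const char *disk, const char *attr)
{
	char path[256];
	FILE *f;
	unsigned long val = 0;

	snprintf(path, sizeof(path), "/sys/block/%s/queue/%s", disk, attr);
	f = fopen(path, "r");
	if (!f)
		return 0;
	if (fscanf(f, "%lu", &val) != 1)
		val = 0;
	fclose(f);
	return val;
}

int main(void)
{
	unsigned long min_io = queue_value("sda", "minimum_io_size");
	unsigned long req = 17000;	/* arbitrary request size in bytes */

	/* Round the request up to an aligned multiple of
	 * minimum_io_size, as the description above suggests. */
	if (min_io)
		req = (req + min_io - 1) / min_io * min_io;

	printf("request size: %lu bytes\n", req);
	return 0;
}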


What:		/sys/block/<disk>/queue/optimal_io_size
Date:		April 2009
Contact:	Martin K. Petersen <martin.petersen@xxxxxxxxxx>
Description:
		Storage devices may report an optimal transfer length or
		streaming I/O size which is the device's preferred unit
		of sustained I/O.  This value is a multiple of the
		device's minimum_io_size.

		optimal_io_size is rarely reported for disk drives. For
		RAID devices it is usually the stripe width or the
		internal track size.

		A properly aligned multiple of optimal_io_size is the
		preferred request size for workloads where sustained
		throughput is desired.
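
		Again for illustration only, reusing the queue_value()
		helper above: a streaming consumer would size its requests
		off optimal_io_size when reported, falling back to
		minimum_io_size.  The 64 KiB and 1 MiB numbers below are
		arbitrary defaults of mine, not from the proposal:

/* Pick a request size for sustained I/O: a multiple of
 * optimal_io_size if the device reports one, else a multiple of
 * minimum_io_size, else a plain 64 KiB fallback. */
static unsigned long streaming_io_size(const char *disk)
{
	unsigned long opt = queue_value(disk, "optimal_io_size");
	unsigned long min = queue_value(disk, "minimum_io_size");
	unsigned long unit = opt ? opt : (min ? min : 65536);
	unsigned long size = unit;

	/* Doubling keeps size a multiple of the reported unit
	 * while getting it large enough for throughput. */
	while (size < 1048576)
		size *= 2;
	return size;
}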

After contemplating for a bit I think I prefer to keep them I/O
direction agnostic.  Granted, the potential penalties mostly apply to
writes.  But I think the values apply to reads as well.  They certainly
do in a hw RAID context.


Neil> I'm looking forward to seeing how you justify the name
Neil> "physical_block_size" in a way that encompasses possibilities like
Neil> a device that stripes over a heterogeneous set of disk drives ;-)

I explained that in my mails yesterday.  But that is of no concern to
MD.

-- 
Martin K. Petersen	Oracle Linux Engineering
