Re: [PATCH 1/1] dm mpath: add IO affinity path selector

On Thu, Oct 22 2020 at  8:27pm -0400,
Mike Christie <michael.christie@xxxxxxxxxx> wrote:

> This patch adds a path selector that selects paths based on a CPU to
> path mapping the user passes in and the CPU we are executing on. The
> primary user for this PS is a setup where the app is optimized to use
> specific CPUs, so other PSs undo the app's handiwork, and the storage
> and its transport are not a bottleneck.
> 
> For these io-affinity PS setups a path's transport/interconnect
> perf is not going to fluctuate a lot and there are no major differences
> between paths, so QL/HST smarts do not help and RR always messes up
> what the app is trying to do.
> 
> On a system with 16 cores, where you have a job per CPU:
> 
> fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=4k \
> --ioengine=libaio --iodepth=128 --numjobs=16
> 
> and a dm-multipath device setup where each CPU is mapped to one path:
> 
> // When in mq mode I had to set dm_mq_nr_hw_queues=$NUM_PATHS.

OK, the modparam was/is a means to an end, but the default of 1 is very
limiting (especially in that it becomes a one-size-fits-all setting for
every dm-multipath device in the system, which isn't appropriate).

If you have any ideas for what a sane heuristic would be for
dm_mq_nr_hw_queues I'm open to suggestions.  But DM target <-> DM core
<-> early block core interface coordination is "fun". ;)
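For reference, a sketch of the workaround mentioned above; this assumes dm_mq_nr_hw_queues is the dm_mod module parameter in question and uses the 16-path count from the fio example:

```shell
# Sketch of the workaround discussed above: bump dm_mod's
# dm_mq_nr_hw_queues to the path count before creating the device.
# The path count (16) is taken from the example in this thread.
modprobe dm_mod dm_mq_nr_hw_queues=16

# If dm_mod is already loaded, the current value can be inspected via:
cat /sys/module/dm_mod/parameters/dm_mq_nr_hw_queues
```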

> // Bio mode also showed similar results.
> 0 16777216 multipath 0 0 1 1 io-affinity 0 16 1 8:16 1 8:32 2 8:64 4
> 8:48 8 8:80 10 8:96 20 8:112 40 8:128 80 8:144 100 8:160 200 8:176
> 400 8:192 800 8:208 1000 8:224 2000 8:240 4000 65:0 8000
> 
> we can see an IOPS increase of 25%.

Great. What utility/code are you using to extract the path:cpu affinity?
Is it array specific?  Which hardware pins IO like this?
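For anyone decoding the table above by hand: each path's single argument appears to be a hex CPU mask (an inference consistent with the one-CPU-per-path setup described). A minimal sketch under that assumption:

```shell
# Decode a hex cpumask into the CPUs it covers. In the example table
# above each path carries one such mask as its path argument (assumed
# format, inferred from the one-CPU-per-path setup in this thread).
decode_mask() {
    mask=$((0x$1))
    cpu=0
    out=""
    while [ "$mask" -ne 0 ]; do
        if [ $((mask & 1)) -eq 1 ]; then
            out="$out $cpu"
        fi
        mask=$((mask >> 1))
        cpu=$((cpu + 1))
    done
    echo "${out# }"
}

decode_mask 1      # path 8:16 in the table -> CPU 0
decode_mask 2000   # path 8:224 in the table -> CPU 13
decode_mask c      # a shared mask would cover CPUs 2 3
```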

Will you, or others, be enhancing multipath-tools to allow passing such
io-affinity DM multipath tables?
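Until multipath-tools grows support, a table like the one above could presumably be loaded by hand. A minimal sketch with hypothetical devices (two paths, CPUs 0-7 and 8-15 split between them, again assuming the per-path argument is a hex cpumask):

```shell
# Hypothetical manual setup, lacking multipath-tools support: two
# paths, CPUs 0-7 -> 8:16 and CPUs 8-15 -> 8:32 (hex masks ff and
# ff00). Device names and numbers here are illustrative only.
SIZE=$(blockdev --getsz /dev/sdb)   # device size in 512-byte sectors
dmsetup create mpath-ioa --table \
    "0 $SIZE multipath 0 0 1 1 io-affinity 0 2 1 8:16 ff 8:32 ff00"
```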

> The percent increase depends on the device and interconnect. For a
> slower/medium speed path/device that can do around 180K IOPS a path
> when you run that fio command to it directly, we saw a 25% increase
> like above. Slower-pathed devices that could do around 90K per path
> showed maybe around a 2 - 5% increase. If you use something like
> null_blk or scsi_debug, which can do multi-million IOPS, and hack it
> up so each device they export shows up as a path, then you see 50%+
> increases.
> 
> Signed-off-by: Mike Christie <michael.christie@xxxxxxxxxx>
> ---
>  drivers/md/Kconfig          |   9 ++
>  drivers/md/Makefile         |   1 +
>  drivers/md/dm-io-affinity.c | 272 ++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 282 insertions(+)
>  create mode 100644 drivers/md/dm-io-affinity.c
> 
> diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
> index 30ba357..c82d8b6 100644
> --- a/drivers/md/Kconfig
> +++ b/drivers/md/Kconfig
> @@ -463,6 +463,15 @@ config DM_MULTIPATH_HST
>  
>  	  If unsure, say N.
>  
> +config DM_MULTIPATH_IOA
> +	tristate "I/O Path Selector based on CPU submission"
> +	depends on DM_MULTIPATH
> +	help
> +	  This path selector selects the path based on the CPU the IO is
> +	  executed on and the CPU to path mapping set up at path addition time.
> +
> +	  If unsure, say N.
> +
>  config DM_DELAY
>  	tristate "I/O delaying target"
>  	depends on BLK_DEV_DM
> diff --git a/drivers/md/Makefile b/drivers/md/Makefile
> index 6d3e234..4f95f33 100644
> --- a/drivers/md/Makefile
> +++ b/drivers/md/Makefile
> @@ -59,6 +59,7 @@ obj-$(CONFIG_DM_MULTIPATH)	+= dm-multipath.o dm-round-robin.o
>  obj-$(CONFIG_DM_MULTIPATH_QL)	+= dm-queue-length.o
>  obj-$(CONFIG_DM_MULTIPATH_ST)	+= dm-service-time.o
>  obj-$(CONFIG_DM_MULTIPATH_HST)	+= dm-historical-service-time.o
> +obj-$(CONFIG_DM_MULTIPATH_IOA)	+= dm-io-affinity.o
>  obj-$(CONFIG_DM_SWITCH)		+= dm-switch.o
>  obj-$(CONFIG_DM_SNAPSHOT)	+= dm-snapshot.o
>  obj-$(CONFIG_DM_PERSISTENT_DATA)	+= persistent-data/

Thinking about renaming all PS files to have a dm-ps prefix...

The fact that we already have dm-io.c makes dm-io-affinity.c all the
more confusing.

Can you rename to dm-ps-io-affinity.c and post v2?

(Code looks good, pretty simple)

Thanks,
Mike

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel



