On Wed, 17 Dec 2014, Mike Snitzer wrote:
> As for blk-mq support... I don't have access to any NVMe hardware, etc.
> I only tested with virtio-blk (to a ramdisk, scsi-debug, or a device on
> the host), so I'm really going to need to lean on Keith and others to
> validate blk-mq performance.
There's a reason no one has multipath-capable NVMe drives: they aren't generally available to anyone right now. :) Mine is a prototype, so it's not a good candidate for performance comparisons. I was able to get my loaner back a couple of hours ago, though, so I built and tested your tree and am happy to say it is very successful. While running fio on a filesystem, I simulated alternating hot-removal/re-add sequences across the paths and everything worked. So functionally it looks great, but I can't speak to performance right now.

One thing with dual-ported PCIe SSDs is that each port can sit on a different PCI domain, local to a different NUMA node. I think there's performance to be gained if we select the target path closest to the CPU the submitting thread is scheduled on. I don't have data to back that up yet, but could such a path selection algorithm be considered in the future?
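To make the idea concrete, here's a minimal userspace sketch of NUMA-local path selection. It assumes each path records the NUMA node of the PCI domain it hangs off; struct path_info and choose_path() are hypothetical names, not dm-multipath interfaces, and it stands in sched_getcpu() and libnuma's numa_node_of_cpu() for the kernel-side equivalents:

    /*
     * Minimal userspace sketch of NUMA-local path selection, assuming
     * each path records the NUMA node of the PCI domain it is attached
     * to.  Build: gcc -O2 -o numa_ps numa_ps.c -lnuma
     */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sched.h>          /* sched_getcpu() */
    #include <numa.h>           /* numa_node_of_cpu(), link with -lnuma */

    struct path_info {          /* hypothetical, not a dm-mpath type */
            const char *dev;    /* device node of one port */
            int numa_node;      /* node of the port's PCI domain */
    };

    /*
     * Prefer a path on the same NUMA node as the CPU the calling thread
     * is currently scheduled on; fall back to the first path otherwise.
     */
    static const struct path_info *
    choose_path(const struct path_info *paths, int npaths)
    {
            int node = numa_node_of_cpu(sched_getcpu());
            int i;

            for (i = 0; i < npaths; i++)
                    if (paths[i].numa_node == node)
                            return &paths[i];
            return npaths ? &paths[0] : NULL;
    }

    int main(void)
    {
            /* two ports of a hypothetical dual-ported NVMe drive */
            const struct path_info paths[] = {
                    { "/dev/nvme0n1", 0 },
                    { "/dev/nvme1n1", 1 },
            };
            const struct path_info *p = choose_path(paths, 2);

            if (p)
                    printf("CPU %d -> %s (node %d)\n",
                           sched_getcpu(), p->dev, p->numa_node);
            return 0;
    }

In the kernel this would presumably slot in as another dm path selector alongside round-robin, queue-length, and service-time, keyed off the NUMA node of the submitting CPU rather than libnuma.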