Firstly, thanks for the hard work you guys are doing on the dm drivers.
I'd like to check whether I correctly understand the limitations of the multipath target. I'm using kernel 4.19.0-rc5.
When attempting to create a multipath device on top of two NVMe devices connected over nvme_rdma / NVMe-oF, I get the following messages:
[43012.002418] device-mapper: table: table load rejected: including non-request-stackable devices
[43012.004067] device-mapper: table: unable to determine table type
Here's an example table load that fails:
dmsetup create test_path --table "0 1562824368 multipath 0 0 2 1 round-robin 0 1 1 /dev/nvme1n1 1 round-robin 0 1 1 /dev/nvme2n1 1"
The equivalent table using SCSI disks works:
dmsetup create test_path --table "0 1562824368 multipath 0 0 2 1 round-robin 0 1 1 /dev/sdc 1 round-robin 0 1 1 /dev/sdb 1"
After looking at dm-table.c, it seems the problem may be that these underlying devices aren't the kind the multipath target can stack on. I'd noticed the same error in other tests whenever the paths were "non-physical", such as when trying to multipath other virtual devices.
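For reference, this is the sort of all-virtual setup I mean (just a sketch using the dm zero target; I'd expect the same rejection, though I haven't checked that it hits exactly the same code path):

# two bio-based dm devices standing in as "non-physical" paths
dmsetup create fake0 --table "0 1562824368 zero"
dmsetup create fake1 --table "0 1562824368 zero"
dmsetup create test_virt --table "0 1562824368 multipath 0 0 2 1 round-robin 0 1 1 /dev/mapper/fake0 1 round-robin 0 1 1 /dev/mapper/fake1 1"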
Is there anything that can be done to get the (relatively "basic") multipath behaviour of detecting IO errors and switching to the other device? Fundamentally, multipath's error detection seems to work the same way raid1 detects IO errors on its underlying devices, so why does multipath have this limitation on the underlying devices when raid1 does not?
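(By raid1 I mean roughly the md setup below, over the same two fabric paths; the options are from memory, so treat it as a sketch.)

# md raid1 over the two fabric devices, for comparison
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme1n1 /dev/nvme2n1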
Is md-multipath.c still in use? It seems like it might not have this same restriction on the underlying device.
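If it is still usable, something like the following is what I had in mind (going from the mdadm man page; I haven't tried this exact invocation on these devices):

# md MULTIPATH personality over the two fabric paths
mdadm --create /dev/md/mpath --level=multipath --raid-devices=2 /dev/nvme1n1 /dev/nvme2n1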
Thank you so much for your time!