40% performance loss with multipath since kernel 3.12

Hi

We have an iSCSI-based SAN that uses dm-multipath to aggregate two 10 Gb links in multibus mode. With kernel 3.11.6 we got very good performance, achieving up to 1550 MB/s read on the multipath device.

After upgrading to kernel 4.0.5, performance has dropped by 40%. Read speed is now limited to 960 MB/s, which is slower than the 975 MB/s that the individual paths can achieve.

We did a lot of testing on 3.11.6 to find an optimal rr_min_io_rq value and settled on 9 with the round-robin path selector. With 4.0.5, we've tried all the path selectors with rr_min_io_rq values from 1 to 100, but can no longer beat single-path performance. Interestingly, the default rr_min_io_rq of 1 was actually among the worst-performing values.
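For reference, the tuning described above would look roughly like the sketch below in multipath.conf (only the relevant defaults shown; our actual device/blacklist sections are omitted, and hardware-specific values would differ per setup):

    defaults {
        # Spread I/O across both 10 Gb paths in one path group
        path_grouping_policy  multibus
        # Stock round-robin selector
        path_selector         "round-robin 0"
        # Requests sent down a path before switching; 9 was optimal on 3.11.6
        rr_min_io_rq          9
    }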

We also tried kernel 3.12, where performance was also degraded, though not as badly as with 4.0.5. I am wondering whether some significant change between 3.11 (dm-multipath 1.5.1) and 3.12 (dm-multipath 1.6.0) could explain this.

Regards,
Derick

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel