Hi, I have observed severe performance degradation of DM devices in linux-5.9 compared to linux-5.8, using a simple dd test.
The test consists of the following steps (a rough dmsetup sketch follows the list):
1. Create a dm-thin pool
2. Create a volume in this pool
3. Run "dd if=/dev/zero of=/dev/mapper/vol bs=512K count=10k"
In my setup, I use an SSD as the thin pool's metadata device and an HDD as the data device. Here is what I get on both kernel versions.
--- Thin Volume ---
linux-5.9.11 10.5 MB/s
linux-5.8.18 77.7 MB/s
--- Linear device over HDD ---
linux-5.9.11 77.0 MB/s
linux-5.8.18 136.0 MB/s
--- Linear device over SSD ---
linux-5.9.11 256.0 MB/s
linux-5.8.18 369.0 MB/s
From iostat, I can tell that in linux-5.9.11 the DM devices receive bios of only 1 sector, compared to 8 sectors in linux-5.8.18. I dug a little deeper into this issue, and it turns out this patch made the bio size for buffered I/O equal to the logical block size of the target block device, which is 512 bytes for every HDD and SSD in my cluster.
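In case anyone wants to reproduce the observation, this is roughly how it can be checked (device name is a placeholder):

  # logical block size the kernel reports for the drive (512 bytes on all of mine)
  cat /sys/block/sdb/queue/logical_block_size

  # watch the average request size while dd runs; the column is avgrq-sz
  # (sectors) or areq-sz (KiB) depending on the sysstat version
  iostat -xN 1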
After reverting this patch on linux-5.9.11, I get the same performance as linux-5.8. However, I am not sure whether reverting is the right fix, or whether something needs to be handled in the device-mapper layer instead.
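For anyone who wants to try the same revert, something along these lines should work, where <commit> stands for the patch in question:

  git revert <commit>
  make -j"$(nproc)"
  sudo make modules_install install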
Any comment would be highly appreciated.
Thanks for your time.
Dennis