On 03/23/2017 05:35 PM, Michael McCarthy wrote:
Hello, collective wisdom,
I'm new to the list and apologize if this is not the right
place to ask this type of question and if so, would be glad
receive pointers to the correct one.
Now the question:
I'm looking for ways to improve snapshot-merge target
performance. We're using CentOS 7.3 here. Both the
snapshot-origin and the snapshot (COW data holder) reside on
NVMe SSDs. What we've seen in our tests is that the speed of
the merge approaches neither the throughput nor the IOPS
limits of the NVMe devices. I suspect it might be because the
merge operation is single-threaded and uses a queue depth (QD) of 1.
Yes, that's what it is.
Using dm-kcopyd, the snapshot merge sequentially copies exception
store chunks back across to the origin.
Though it aims to coalesce consecutive chunks into a single copy,
the success of that optimization depends on sequential write
patterns to the origin and the snapshot having populated the
exception store.
If the writes were random, the worst case is one chunk
transferred per copy operation.
There's no knob to tune this (apart from I/O schedulers, which
won't help with your NVMe backends).
The "snapshot-merge" target would need to be enhanced to provide
higher throughput.
Heinz
Could anyone with enough knowledge about the DM code shed
some light on how it operates during the merge? Are there any
interfaces to improve the speed of this operation without
altering the code?
Thanks,
Mike
--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel