On 13 Sep 2017 at 21:18, Eivind Sarto wrote:
I am considering using dm-thin for a project and have been running some
performance tests.
After a snapshot has been taken, writes to existing blocks in the original
volume appear to take a severe performance hit unless the blocks written are
>= the thin-pool chunksize. If the writes are smaller than the chunksize,
dm-thin appears to do a read-modify-write (RMW) of the chunk being written,
and throughput drops to less than half. If the writes are >= the chunksize,
the chunk is simply overwritten (without reading it first) and throughput is
much better.
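The cost asymmetry described above can be sketched with a small model. This is
purely illustrative arithmetic, not the kernel's actual code path; the chunk
size and the helper name are assumptions chosen to show why a sub-chunk write
to a shared chunk roughly doubles the I/O.

```python
# Illustrative model of copy-on-write cost in a thin pool after a snapshot.
# Not real dm-thin code; it only demonstrates the RMW effect.

CHUNK_SIZE = 64 * 1024  # bytes; the hard-coded minimum chunksize

def io_bytes_for_write(write_size):
    """Total bytes moved to service one write to a still-shared chunk."""
    if write_size >= CHUNK_SIZE:
        # The chunk is fully overwritten: no need to read the old data.
        return write_size
    # Sub-chunk write: the old chunk must be read in full, then the
    # whole new chunk written out (read-modify-write).
    return 2 * CHUNK_SIZE

# A 4 KiB write costs 128 KiB of I/O; a 64 KiB write costs only 64 KiB.
print(io_bytes_for_write(4 * 1024))   # 131072
print(io_bytes_for_write(64 * 1024))  # 65536
```

Under this model a 4 KiB write moves 32x more data than it logically writes,
which is consistent with the observed throughput drop.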
The minimum chunksize is hard-coded to 64k (in both the lvm2 tools and the
kernel). If this chunksize could be set to 4k, then data (over)written in the
original volume would never incur any RMW.
My question is, why is 64k the minimum chunksize supported?
What would be the impact of reducing this hard-coded minimum?
Any feedback would be appreciated.
Hi
The minimum chunksize has been carefully selected with respect to the amount
of metadata that needs to be handled.
Yes, there is an unfortunate penalty paid when a block is provisioned for the
first time. However, maintaining commit points with smaller blocks under the
current thin-pool architecture would place a much higher load on system
resources...
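The metadata argument can be made concrete with a back-of-the-envelope count.
The figures below are illustrative assumptions, not the actual dm-thin on-disk
format: each provisioned chunk needs one mapping entry, so shrinking the
chunksize from 64k to 4k multiplies the number of mappings the pool must
track and commit by 16.

```python
# Rough mapping-count estimate for a fully provisioned thin volume.
# Hypothetical model only; the real metadata layout is a btree and
# has additional per-node overhead not counted here.

def mapping_count(volume_bytes, chunk_bytes):
    """Number of chunk mappings needed to fully provision the volume."""
    return volume_bytes // chunk_bytes

TIB = 1 << 40
KIB = 1 << 10

# A fully provisioned 1 TiB thin volume:
print(mapping_count(TIB, 64 * KIB))  # 16777216   (~16M mappings at 64k)
print(mapping_count(TIB, 4 * KIB))   # 268435456  (~268M mappings at 4k)
```

Every one of those mappings has to be created, journaled, and committed, which
is where the extra load on system resources comes from.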
Regards
Zdenek
--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel