On 6/14/22 10:54 PM, Gionatan Danti wrote:
On 2022-06-14 15:29 Zhiyong Ye wrote:
The reason for this may be that once the volume has a snapshot, each
write to an existing block triggers a COW (copy-on-write), and the COW
copies an entire chunk-sized data block; for example, with a 64k
chunksize, even a 4k write causes the whole 64k chunk to be copied.
I'm not sure if I understand this correctly.
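(As a quick sanity check, the pool's chunk size can be inspected with
lvs; the volume group and pool names below are only placeholders:)

    # report the chunk size of a thin pool (names are examples)
    lvs -o lv_name,chunk_size vg0/thinpool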
Yes, in your case, the added copies are lowering total available IOPs.
But note how the decrease is sub-linear (from 64K to 1M you have a 16x
increase in chunk size but "only" a 10x hit in IOPs): this is due to the
lowered metadata overhead.
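(A rough back-of-the-envelope illustration of the metadata side,
assuming a fully mapped volume: a 100 GiB thin volume needs about
100 GiB / 64 KiB ≈ 1.6M chunk mappings with 64K chunks, but only about
100 GiB / 1 MiB ≈ 102K mappings with 1M chunks, so larger chunks mean
far fewer metadata entries to allocate and update.)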
It seems that the cost of the COW copies when sending 4k requests is
much greater than the savings from the reduced metadata overhead.
A last try: if you can, please regenerate your thin volume with 64K
chunks and set fio to issue 64K requests. Let's see if LVM is at least
smart enough to avoid copying chunks that are about to be completely
overwritten.
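(A minimal sketch of such a test; the device and volume names are
placeholders, and the volume is written once beforehand so every chunk
is already allocated when the snapshot is taken:)

    # thin pool with 64k chunks, plus a thin volume on it (example names)
    lvcreate --type thin-pool --chunksize 64k -L 20G -n pool vg0
    lvcreate --thin -V 10G -n thinlv vg0/pool

    # pre-fill so all chunks are mapped before snapshotting
    dd if=/dev/zero of=/dev/vg0/thinlv bs=1M oflag=direct

    # take a thin snapshot, then run 64k random writes on the origin
    lvcreate -s -n thinlv_snap vg0/thinlv
    fio --name=randwrite --filename=/dev/vg0/thinlv --rw=randwrite \
        --bs=64k --direct=1 --ioengine=libaio --iodepth=32 \
        --runtime=60 --time_based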
I regenerated the thin volume with a chunksize of 64K; the random
write performance measured with fio 64k requests is as follows:
case                   iops
thin lv                9381
snapshotted thin lv    8307
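(For what it's worth, 8307 / 9381 ≈ 0.89, i.e., roughly an 11% IOPs
penalty, which would suggest that a whole-chunk overwrite largely
avoids the copy step.)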