On 20. 01. 20 at 11:36, Gionatan Danti wrote:
On 20-01-2020 10:22, Zdenek Kabelac wrote:
So having thousands of LVs in a single VG will become probably your
bottleneck.
Hi Zdenek, I was thinking more about having a few LVs, but with different amounts
of data/mappings.
For example, is a very fragmented volume (i.e. one written randomly)
significantly slower to snapshot than an almost empty volume? I fully expect
some small difference; however, if an empty volume takes 0.2s and a fragmented
one 20s, that would surely be significant.
Note that I have never had such a slow snapshot; rather, even on aged and big
volumes, it always takes <1s. However, other experiences are welcome.
Yep - the kernel metadata 'per thin LV' is reasonably small - so even for big
thin devices it should still fit within your time boundaries.
(Effectively, a thin snapshot just increases 'mapping' sharing between the origin
and its snapshot - so the time needed depends on how many bTree nodes need to be
updated. So if you managed to create a heavily fragmented multi-TiB thinLV,
the time depends on the speed of your metadata device - as long as that device
is fast (i.e. >= SSD), the operation should be quick.)
But ATM there is no scientific proof for the worst case scenario.
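For anyone wanting to compare an empty vs. a fragmented origin locally, a rough timing sketch could look like the one below - note the vg0/thinlv names are made-up placeholders, and the helper simply measures wall-clock time of whatever command you pass it:

```shell
# Print the wall-clock seconds a command takes (uses GNU date's %N).
snap_time() {
    start=$(date +%s.%N)
    "$@" >/dev/null 2>&1
    end=$(date +%s.%N)
    awk -v s="$start" -v e="$end" 'BEGIN { printf "%.3f\n", e - s }'
}

# Hypothetical usage (needs root, lvm2, and an existing thin LV named
# vg0/thinlv - adjust the names to your setup):
#   snap_time lvcreate -s -n snap1 vg0/thinlv
#   snap_time lvremove -f vg0/snap1

# Harmless demo so the helper can be exercised without touching LVM:
snap_time sleep 0.2
```

Running the lvcreate line against both an empty and a heavily written thinLV on the same pool should show whether fragmentation matters on your metadata device.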
Regards
Zdenek
_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/