The chunk size for thick snapshots can be set with the lvcreate command, and the automatic growing of a snapshot can be configured in the LVM configuration (lvm.conf).
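A minimal sketch of both settings (VG/LV names are placeholders):

    # thick snapshot with an explicit COW chunk size (-c/--chunksize)
    lvcreate -s -L 2G -c 512k -n snap vg/origin

    # lvm.conf, "activation" section: auto-grow a snapshot once it is
    # 70% full, extending it by 20% each time (needs dmeventd monitoring)
    activation {
        snapshot_autoextend_threshold = 70
        snapshot_autoextend_percent = 20
    }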
The same issues apply to both thin and thick snapshots if you run out of space.
//T
On Wed, 23 Oct 2019 at 13:24, Tomas Dalebjörk <tomas.dalebjork@xxxxxxxxx> wrote:
I have tested FusionIO together with old thick snapshots. I created the thick snapshot on a separate, old, traditional SATA drive, just to check whether that could be used as a snapshot target for high-performance disks like a FusionIO card. For those who don't know about FusionIO: these cards can handle 150-250,000 IOPS.

And to be honest, I couldn't bottleneck the SATA disk I used as the thick snapshot target. The reason why is simple: thick snapshots use sequential writes. If I had been using thin snapshots, the writes would most likely have been more randomized on disk, which would have required more spindles to cope with the load.

Anyhow, I am still eager to hear how to use an external device to import snapshots. And when I say "import", I am not talking about a copy-back; I mean using the device to read data from.

Regards,
Tomas

On Wed, 23 Oct 2019 at 13:08, Gionatan Danti <g.danti@xxxxxxxxxx> wrote:

On 23/10/19 12:46, Zdenek Kabelac wrote:
> Just a few 'comments' - it's not really comparable - the efficiency of
> thin-pool metadata outperforms the old snapshots in a BIG way (there is
> no point in talking about snapshots that take just a couple of MiB)
Yes, this matches my experience.
> There is also a BIG difference between the usage of the old snapshot's
> origin and the snapshot itself.
>
> The COW of an old snapshot effectively cuts performance in half if you
> write to the origin.
If used without a non-volatile RAID controller, 1/2 is generous: I have
measured performance as low as 1/5 (with a fat snapshot).
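For reference, a rough sketch of that kind of measurement (device names
are placeholders; exact numbers depend on the hardware):

    # baseline: direct sequential write to the origin, no snapshot
    dd if=/dev/zero of=/dev/vg/origin bs=1M count=1024 oflag=direct

    # add a thick snapshot, then repeat the same write: every write now
    # triggers a COW read+write cycle, so throughput drops sharply
    lvcreate -s -L 4G -n snap vg/origin
    dd if=/dev/zero of=/dev/vg/origin bs=1M count=1024 oflag=direct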
Talking about thin snapshots, an obvious performance optimization which
does not seem to be implemented is to skip reading the source data when
overwriting it in larger-than-chunksize blocks.
For example, consider a completely filled thin volume with a 64k chunk
size (with the thin pool having ample free space). Snapshotting it and
writing a 4k block to the origin will obviously cause a read of the
original 64k chunk, an in-memory change of the 4k block, and a write of
the entire modified 64k chunk to a new location. But writing, say, a 1 MB
block should *not* cause the same read of the source: after all, the read
data would be immediately discarded, overwritten by the changed 1 MB
block.
However, my testing shows that source chunks are always read, even when
completely overwritten.
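For reference, a minimal sketch of the kind of test I ran (VG/LV names
are placeholders):

    # thin pool with 64k chunks, and a thin volume filled so that every
    # chunk is allocated
    lvcreate -L 1G -c 64k -T vg/pool
    lvcreate -V 512M -T vg/pool -n thinvol
    dd if=/dev/urandom of=/dev/vg/thinvol bs=1M count=512 oflag=direct

    # take a thin snapshot, then overwrite the origin in 1 MB blocks
    lvcreate -s vg/thinvol -n thinsnap
    dd if=/dev/zero of=/dev/vg/thinvol bs=1M count=512 oflag=direct

    # while the second dd runs, iostat (-N resolves device-mapper names)
    # still shows reads on the pool data LV (vg-pool_tdata), even though
    # each 64k chunk is completely overwritten
    iostat -xN 1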
Am I missing something?
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@xxxxxxxxxx - info@xxxxxxxxxx
GPG public key ID: FF5F32A8
_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/