Using a classic snapshot for backup does not normally involve activating a
large CoW. I generally create a smallish snapshot (a few gigs) that will not
fill up during the backup process. If, for some reason, a snapshot were to
fill up before backup completion, reads from the snapshot get I/O errors
(I've tested this), which raises an alarm and aborts the backup. (A rough
command sketch is below, after the quoted message.)

Yes, keeping a snapshot around and activating it at boot can be a problem as
the CoW gets large. If you are going to keep snapshots around indefinitely,
thin pools are probably the way to go. (What happens when those fill up?
Hopefully the pool "freezes" rather than losing everything.)

On 04/07/2017 12:33 PM, Gionatan Danti wrote:
> For the logical volume itself, I target an 8+ TB size. However, what
> worries me is *not* the LV size by itself (I know that LVM can be used
> on volumes much bigger than that), rather the snapshot CoW table. In
> short, from reading this list and from first-hand testing, big
> snapshots (20+ GB) require lengthy activation, due to inefficiency in
> how classic (i.e. non-thinly-provisioned) metadata are laid out/used.
> However, I read that this was somewhat addressed lately. Do you have
> any insight?
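
For concreteness, here is roughly the backup sequence I mean. VG/LV names,
the snapshot size, and the mount/backup paths are made up; adjust to your
setup:

    # Small classic (non-thin) snapshot: the CoW only has to absorb writes
    # hitting the origin while the backup runs, so a few gigs is enough.
    lvcreate -s -L 5G -n data_bcksnap vg0/data

    # Back up from the snapshot. If the CoW fills mid-backup the snapshot
    # is invalidated and reads return I/O errors, so tar exits non-zero
    # instead of silently producing a bad backup.
    mount -o ro /dev/vg0/data_bcksnap /mnt/snap
    tar -czf /backup/data.tar.gz -C /mnt/snap . || echo "backup FAILED" >&2

    # Always remove the snapshot afterwards; a long-lived classic snapshot
    # is exactly what makes boot-time activation slow.
    umount /mnt/snap
    lvremove -f vg0/data_bcksnap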
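
And a minimal thin-pool sketch for the keep-snapshots-indefinitely case
(again, names and sizes are only illustrative):

    # Thin pool, plus an 8 TB thin volume carved out of it.
    lvcreate --type thin-pool -L 100G -n pool0 vg0
    lvcreate -V 8T --thinpool pool0 -n data vg0

    # Thin snapshots need no fixed CoW size and activate quickly even when
    # large. Note they get the activation-skip flag by default, so use
    # 'lvchange -ay -K' to activate one.
    lvcreate -s -n data_snap vg0/data

    # Watch the pool fill level; what happens on a full pool depends on
    # the thin_pool_autoextend_* settings in lvm.conf and on dmeventd
    # monitoring.
    lvs -o lv_name,data_percent,metadata_percent vg0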