While researching thin pool provisioning, one of the issues that came up is that the size of the metadata LV is fixed at creation time, and that if the metadata space fills up, the pool is corrupted. In many of the places where that concern was raised, it was also said that the ability to extend the metadata LV was coming soon, but I couldn't find anything confirming whether that functionality has actually been released. Is the size of the metadata LV still fixed?

My intention is to have a 4TB PV (4 x 2TB RAID10) allocated completely to a thin pool, with the metadata stored separately on a 256GB RAID1 of a pair of SSDs (the rest of the SSD mirror will eventually be used for dm-cache once LVM support for it is released). This storage will be used for virtualization with fairly heavy snapshot use: there will be half a dozen or so template volumes, each of which will be snapshotted when a new VM is created, and each VM volume will in turn carry some number of snapshots for backup purposes (although those backup snapshots will never be written to).

Given such a usage pattern, is there a best-practice recommendation for sizing the metadata LV? It looks like going with the defaults would result in approximately 3.6GB allocated for metadata.

Thanks.
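
For reference, the commands I have in mind look roughly like the sketch below. It is only a sketch of the intended layout: the device names (/dev/md0 for the RAID10, /dev/md1 for the SSD RAID1), the VG/LV names, and the 4G metadata size are placeholders, and the lvconvert step reflects my reading of the lvmthin documentation rather than something I have tested.

  # Data on the 4x2TB RAID10, metadata on the SSD RAID1 (placeholder device names)
  pvcreate /dev/md0 /dev/md1
  vgcreate vg_thin /dev/md0 /dev/md1

  # Metadata LV on the SSD mirror, data LV filling the RAID10
  lvcreate -n pool_meta -L 4G vg_thin /dev/md1
  lvcreate -n pool -l 100%PVS vg_thin /dev/md0

  # Combine the two into a thin pool
  lvconvert --thinpool vg_thin/pool --poolmetadata vg_thin/pool_meta

  # Templates and per-VM snapshots would then come out of the pool, e.g.:
  lvcreate -V 40G -T vg_thin/pool -n template_web
  lvcreate -s vg_thin/template_web -n vm_web01

  # and metadata usage would be watched with:
  lvs -o lv_name,data_percent,metadata_percent vg_thin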