Hello all,

I'm experimenting with lvmcache, trying to use an NVMe disk to speed up access to rotational disks. I was using the cache pool in writeback mode when a power failure occurred, which left the cache in an inconsistent state. So far I have managed to activate the underlying _corig LV and copy the data from there, but I'm wondering whether it's still possible to repair such a cache and, if not, how to remove the failed LVs to reclaim the disk space.

This is the relevant portion of the lvs output:

  LV                            VG  Attr       LSize   Pool                    Origin
  tpg1-wdata                    vg0 Cwi---C--- 500,00g [wdata_cachepool_cpool] [tpg1-wdata_corig]
  [tpg1-wdata_corig]            vg0 owi---C--- 500,00g
  [wdata_cachepool_cpool]       vg0 Cwi---C---  50,00g
  [wdata_cachepool_cpool_cdata] vg0 Cwi-------  50,00g
  [wdata_cachepool_cpool_cmeta] vg0 ewi-------  40,00m

Trying to activate the tpg1-wdata LV results in an error:

  sudo lvchange -ay -v vg0/tpg1-wdata
    Activating logical volume vg0/tpg1-wdata.
    activation/volume_list configuration setting not defined: Checking only host tags for vg0/tpg1-wdata.
    Creating vg0-wdata_cachepool_cpool_cdata
    Loading table for vg0-wdata_cachepool_cpool_cdata (253:17).
    Resuming vg0-wdata_cachepool_cpool_cdata (253:17).
    Creating vg0-wdata_cachepool_cpool_cmeta
    Loading table for vg0-wdata_cachepool_cpool_cmeta (253:18).
    Resuming vg0-wdata_cachepool_cpool_cmeta (253:18).
    Creating vg0-tpg1--wdata_corig
    Loading table for vg0-tpg1--wdata_corig (253:19).
    Resuming vg0-tpg1--wdata_corig (253:19).
    Executing: /usr/sbin/cache_check -q /dev/mapper/vg0-wdata_cachepool_cpool_cmeta
    /usr/sbin/cache_check failed: 1
    Piping: /usr/sbin/cache_check -V
    Found version of /usr/sbin/cache_check 0.9.0 is better then requested 0.7.0.
    Check of pool vg0/wdata_cachepool_cpool failed (status:1).
    Manual repair required!
    Removing vg0-tpg1--wdata_corig (253:19)
    Removing vg0-wdata_cachepool_cpool_cmeta (253:18)
    Removing vg0-wdata_cachepool_cpool_cdata (253:17)

I tried repairing the volume, but nothing changed:

  sudo lvconvert --repair -v vg0/tpg1-wdata
    activation/volume_list configuration setting not defined: Checking only host tags for vg0/lvol6_pmspare.
    Creating vg0-lvol6_pmspare
    Loading table for vg0-lvol6_pmspare (253:17).
    Resuming vg0-lvol6_pmspare (253:17).
    activation/volume_list configuration setting not defined: Checking only host tags for vg0/wdata_cachepool_cpool_cmeta.
    Creating vg0-wdata_cachepool_cpool_cmeta
    Loading table for vg0-wdata_cachepool_cpool_cmeta (253:18).
    Resuming vg0-wdata_cachepool_cpool_cmeta (253:18).
    Executing: /usr/sbin/cache_repair -i /dev/mapper/vg0-wdata_cachepool_cpool_cmeta -o /dev/mapper/vg0-lvol6_pmspare
    Removing vg0-wdata_cachepool_cpool_cmeta (253:18)
    Removing vg0-lvol6_pmspare (253:17)
    Preparing pool metadata spare volume for Volume group vg0.
    Archiving volume group "vg0" metadata (seqno 51).
    Creating logical volume lvol7
    Creating volume group backup "/etc/lvm/backup/vg0" (seqno 52).
    Activating logical volume vg0/lvol7.
    activation/volume_list configuration setting not defined: Checking only host tags for vg0/lvol7.
    Creating vg0-lvol7
    Loading table for vg0-lvol7 (253:17).
    Resuming vg0-lvol7 (253:17).
    Initializing 40,00 MiB of logical volume vg0/lvol7 with value 0.
    Temporary logical volume "lvol7" created.
    Removing vg0-lvol7 (253:17)
    Renaming lvol7 as pool metadata spare volume lvol7_pmspare.
    WARNING: If everything works, remove vg0/tpg1-wdata_meta1 volume.
    WARNING: Use pvmove command to move vg0/wdata_cachepool_cpool_cmeta on the best fitting PV.
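Before forcing anything destructive, I was also planning to look at the cache metadata directly with the thin-provisioning-tools. This is only an untested sketch of what I had in mind; I'm assuming the cmeta sub-LV can be component-activated the same way the _corig LV could, and the /tmp paths and the 40M scratch size are just placeholders from my setup:

  # activate only the cache metadata sub-LV (read-only component activation)
  sudo lvchange -ay vg0/wdata_cachepool_cpool_cmeta
  # dump the metadata to XML to see how badly it is damaged
  sudo cache_dump /dev/mapper/vg0-wdata_cachepool_cpool_cmeta > /tmp/cmeta.xml
  # attempt an offline repair into a scratch file instead of the pmspare LV
  truncate -s 40M /tmp/cmeta_repaired.bin
  sudo cache_repair -i /dev/mapper/vg0-wdata_cachepool_cpool_cmeta -o /tmp/cmeta_repaired.bin

If cache_repair can produce usable metadata that way, I suppose it could be copied back over the cmeta device afterwards, but I'd like to hear whether that is a sane approach at all.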
Trying to remove the cache volume also fails:

  sudo lvremove -ff vg0/tpg1-wdata
    Check of pool vg0/wdata_cachepool_cpool failed (status:1).
    Manual repair required!
    Failed to activate vg0/tpg1-wdata to flush cache.

Any help in resolving this is appreciated!

Thanks,
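P.S. If the metadata really is beyond repair, the fallback I'm considering is to force-detach the cache and accept losing whatever dirty blocks it still held, roughly as below. I haven't tried it yet, and I'm not certain that --force is enough to make lvconvert skip the failing flush, so corrections are welcome:

  sudo lvconvert --uncache --force vg0/tpg1-wdata

My expectation is that this would leave tpg1-wdata as a plain uncached LV on the rotational disks and free the 50G cache pool on the NVMe, but I'd rather ask before throwing data away.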