Hi all,

I accidentally ran into a weird situation which looks like a bug to me. It can be reproduced every time with the following steps:

1) Create a thin pool and a thin volume.
2) Write some data to the thin volume.
3) Reserve a metadata snapshot by sending the "reserve_metadata_snap" message to the pool.
4) Create a snapshot of the thin volume.
5) Release the metadata snapshot by sending the "release_metadata_snap" message to the pool.
6) Remove both the snapshot and the thin volume.

After these steps, the pool blocks allocated to the thin volume are never returned to the pool.

I traced the code that releases the metadata snapshot, and I may have found the root cause. When we reserve a metadata snapshot, the reference count of the data mapping root is incremented by 1. However, subsequent changes to the data mapping tree will split (copy-on-write) the tree, which increments the reference counts of all the bottom-level roots it shares. When the metadata snapshot is released, we simply decrement the reference count of the old data mapping root without propagating those decrements all the way down, so the shared blocks keep a stale reference and can never be freed.

IMHO, we should call dm_btree_del() on the old data mapping root instead of dm_sm_dec_refcount().

Any help would be appreciated.

Best Regards,
Dennis

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel