Hello all,

For dm-thin volumes that are snapshotted often, writes pay a performance penalty from COW overhead: each modified chunk must be copied into a freshly allocated chunk before the write can complete.

What if we implemented some sort of LRU score for COW operations on chunks? Chunks that are commonly COWed within the inter-snapshot interval could then be queued for background copying immediately after the next snapshot. This would hide the copy latency and increase effective throughput when the thin device is written by its user, since only the metadata would need an update once the chunk has already been copied.

I can imagine a simple algorithm: each COW increments the chunk's LRU score by 2, and every stored score is decremented by 1 when the volume is snapshotted. After the snapshot, any chunk with a score > 0 would be queued for early copy. The scores would live in memory only, probably stored in a red-black tree. Pre-copied chunks would not update the on-disk metadata unless a write actually lands on that chunk.

The allocator would also need to be updated to skip chunks in the LRU list that have already been pre-copied (except, perhaps, when the pool's free space is exhausted).

Does this sound viable?

--
Eric Wheeler

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
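To make the aging scheme concrete, here is a minimal userspace sketch of the scoring logic described above. It is not dm-thin code: the names (`cow_hit`, `snapshot_and_collect`) are hypothetical, and a flat array stands in for the proposed red-black tree keyed by chunk number.

```c
#define MAX_CHUNKS 1024

/* In-memory LRU scores per chunk. Sketch only: a flat array stands in
 * for the proposed red-black tree, and there is no locking. */
static int lru[MAX_CHUNKS];

/* Called when a write triggers a COW of @chunk: bump its score by 2. */
static void cow_hit(unsigned chunk)
{
	lru[chunk] += 2;
}

/* Called at snapshot time: age every stored score by 1, then collect
 * the chunks whose score is still > 0 into @queue; these are the
 * candidates for background pre-copy.  Returns the queue length. */
static int snapshot_and_collect(unsigned *queue)
{
	int n = 0;
	unsigned c;

	for (c = 0; c < MAX_CHUNKS; c++) {
		if (lru[c] > 0)
			lru[c]--;
		if (lru[c] > 0)
			queue[n++] = c;
	}
	return n;
}
```

Because each COW adds 2 and each snapshot subtracts only 1, a chunk COWed in every interval keeps a positive score and stays on the pre-copy queue, while a chunk that goes quiet decays to 0 within a few snapshots and drops out.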