Hi,

I'm seeing the following "dmsetup status" on one volume:

> 0 5368709120 cache 8 3926/32768 128 335265/1978880 5589425 8052258
> 2254781 3910141 0 335265 4294967293 1 writeback 2 migration_threshold
> 2048 mq 10 random_threshold 4 sequential_threshold 512
> discard_promote_adjustment 1 read_promote_adjustment 4
> write_promote_adjustment 8

Note the clearly wrong 4294967293 in the nr_dirty field.

Looking at the code, I see nr_dirty is set in the following functions in
dm-cache-target.c:

> static void set_dirty(struct cache *cache, dm_oblock_t oblock, dm_cblock_t cblock)
> {
>         if (!test_and_set_bit(from_cblock(cblock), cache->dirty_bitset)) {
>                 cache->nr_dirty = to_cblock(from_cblock(cache->nr_dirty) + 1);
>                 policy_set_dirty(cache->policy, oblock);
>         }
> }
>
> static void clear_dirty(struct cache *cache, dm_oblock_t oblock, dm_cblock_t cblock)
> {
>         if (test_and_clear_bit(from_cblock(cblock), cache->dirty_bitset)) {
>                 policy_clear_dirty(cache->policy, oblock);
>                 cache->nr_dirty = to_cblock(from_cblock(cache->nr_dirty) - 1);
>                 if (!from_cblock(cache->nr_dirty))
>                         dm_table_event(cache->ti->table);
>         }
> }

That looks like a race to me: nothing protects cache->nr_dirty from
concurrent read-modify-write, unlike cache->dirty_bitset, which is
updated with the atomic test_and_set_bit()/test_and_clear_bit(). Two
CPUs updating the counter at the same time could lose an update, which
would explain the counter wrapping to 4294967293 (i.e. (u32)-3).
Unless I'm missing something, as I'm not familiar with this code...

--
Anssi Hannula

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel