Hi Ramesh,

It looks like the fsck error I've been chasing on my branch is a general problem with the bitmap granularity. The ObjectStore/StoreTest.SyntheticMatrixCsumVsCompression/2 test sets min_alloc_size to 32k and then to something smaller after that. My branch adds an occasional umount+fsck+mount to the synthetic workload test, and that uncovers a problem: if we start with a small min_alloc_size, write some objects, and then umount and remount with a larger min_alloc_size (say, 32k), things can go wrong. The allocator defines its bits in terms of min_alloc_size, but some used extents are smaller than that, and when they get released we trigger an assert like

/home/sage/src/ceph/src/os/bluestore/BitMapAllocator.cc: In function 'void BitMapAllocator::insert_free(uint64_t, uint64_t)' thread 7ffb44deb700 time 2016-09-06 15:23:39.055902
/home/sage/src/ceph/src/os/bluestore/BitMapAllocator.cc: 76: FAILED assert(!(off % m_block_size))

There was a related issue with fsck: its used_blocks bitmap was min_alloc_size granularity.

I see two options: we can either unconditionally maintain the bitmap in block_size units, or we can persistently store the smallest min_alloc_size we have ever mounted with and use that ("min_min_alloc_size?").

What do you think?

sage
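
[To make the failure mode above concrete, here is a minimal standalone C++ sketch. It is a toy, not the actual Ceph allocator: ToyBitmapAllocator and its methods are hypothetical, and only the assert condition mirrors the real one from the trace above (FAILED assert(!(off % m_block_size))). It shows how an extent allocated under a small min_alloc_size becomes sub-granularity after a remount with a larger min_alloc_size, so releasing it trips the alignment check.]

#include <cassert>
#include <cinttypes>
#include <cstdint>
#include <cstdio>

// Toy stand-in for a bitmap allocator whose granularity (m_block_size)
// is fixed from min_alloc_size at mount time.
struct ToyBitmapAllocator {
  uint64_t m_block_size;  // bytes covered by one bitmap bit

  explicit ToyBitmapAllocator(uint64_t block_size)
      : m_block_size(block_size) {}

  void insert_free(uint64_t off, uint64_t len) {
    // Both offset and length must be multiples of the bitmap granularity;
    // this is the condition that fails in the trace above.
    assert(!(off % m_block_size));
    assert(!(len % m_block_size));
    printf("freed [0x%" PRIx64 ", 0x%" PRIx64 ")\n", off, off + len);
  }
};

int main() {
  // First mount: min_alloc_size = 4k. A 4k extent at offset 0x1000 is
  // aligned to the granularity, so releasing it is fine.
  ToyBitmapAllocator small(0x1000);
  small.insert_free(0x1000, 0x1000);  // ok

  // Remount with min_alloc_size = 32k. The same 4k extent is now smaller
  // than one bitmap bit, and releasing it fires the assert.
  ToyBitmapAllocator big(0x8000);
  big.insert_free(0x1000, 0x1000);  // asserts: off % m_block_size != 0
}

[Either proposed fix avoids this: a block_size-granularity bitmap makes every extent representable, and a persisted "min_min_alloc_size" guarantees the granularity never grows past the smallest one any existing extent was written under.]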