If I have only one rbd ssd pool (3x replicated) and 4 ssd OSDs, why are the objects so unevenly spread across the four OSDs? Should they not all hold roughly 162G?

[@c01 ]# ceph osd status 2>&1
+----+------+------+-------+--------+---------+--------+---------+-----------+
| id | host | used | avail | wr ops | wr data | rd ops | rd data | state     |
+----+------+------+-------+--------+---------+--------+---------+-----------+
| 19 | c01  | 133G | 313G  | 0      | 0       | 0      | 0       | exists,up |
| 20 | c02  | 158G | 288G  | 0      | 0       | 0      | 0       | exists,up |
| 21 | c03  | 208G | 238G  | 0      | 0       | 0      | 0       | exists,up |
| 30 | c04  | 149G | 297G  | 0      | 0       | 0      | 0       | exists,up |
+----+------+------+-------+--------+---------+--------+---------+-----------+

All objects in the rbd pool are 4MB, are they not? That should make them easy to spread evenly, so what am I missing here?
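To get a feel for how much skew pseudo-random placement alone can produce, here is a small Python sketch. It is purely illustrative and is not Ceph's actual CRUSH algorithm; the pg_num of 128 is an assumption, and the OSD ids are just the four from the table above. It scatters PG copies randomly across the four OSDs and prints each OSD's share:

#!/usr/bin/env python3
# Toy model only: place PG copies pseudo-randomly on 4 OSDs and see
# how uneven the per-OSD share comes out. NOT Ceph's real CRUSH code.
import random
from collections import Counter

OSDS = [19, 20, 21, 30]   # the four SSD OSDs from 'ceph osd status'
PG_NUM = 128              # assumed pg_num for the pool
REPLICAS = 3              # pool size (3x replication)

random.seed(1)
placement = Counter()
for pg in range(PG_NUM):
    # each PG picks REPLICAS distinct OSDs to hold a copy of its data
    for osd in random.sample(OSDS, REPLICAS):
        placement[osd] += 1

total = PG_NUM * REPLICAS
for osd in OSDS:
    share = placement[osd] / total * 100
    print(f"osd.{osd}: {placement[osd]:3d} PG copies ({share:.1f}% of the data)")

Even in this toy model the OSDs do not end up with identical PG counts, and since data is only balanced at PG granularity (not per 4MB object), a low pg_num makes the spread in used space correspondingly larger.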