ceph data replication not even across all OSDs

Hi,

I set the same weight for all hosts and the same weight for all OSDs under each host in the crushmap, and set the pool replica size to 3. However, after uploading 1M/4M/400M/900M files to the pool, I found that the data is not replicated evenly across the OSDs: their utilization ranges from 25% to 70%. Could you advise whether this is simply the nature of Ceph, or whether there is some tricky setting in the crushmap?
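For reference, here is a minimal simulation of uniform pseudo-random placement (not CRUSH itself, and the numbers of PGs and OSDs are purely illustrative, not taken from my cluster). It shows the kind of spread that a small PG count produces even with equal weights:

import random
from collections import Counter

NUM_OSDS = 9        # illustrative: e.g. 3 hosts x 3 OSDs, all weight 1
NUM_PGS = 128       # illustrative pg_num for the pool
REPLICAS = 3        # pool size 3, as in the question

random.seed(42)
pg_per_osd = Counter()
for pg in range(NUM_PGS):
    # Uniform pseudo-random choice of 3 distinct OSDs per PG.
    # (CRUSH additionally constrains replicas to distinct hosts.)
    for osd in random.sample(range(NUM_OSDS), REPLICAS):
        pg_per_osd[osd] += 1

avg = NUM_PGS * REPLICAS / NUM_OSDS
for osd, count in sorted(pg_per_osd.items()):
    print(f"osd.{osd}: {count} PGs ({count / avg:.0%} of average)")

With only a few dozen PGs per OSD, deviations of 20-30% from the average are common; a larger pg_num spreads the data more evenly.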

rule r1 {
         ruleset 0
         type replicated
         min_size 0
         max_size 10
         step take root
         step chooseleaf firstn 0 type host
         step emit
}
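To check whether the utilization imbalance simply tracks the PG counts, a minimal sketch like the following tallies PGs per OSD from ceph pg dump; the JSON field names (pg_stats, up) are assumptions about the dump layout and may differ between releases:

import json
import subprocess
from collections import Counter

# Dump PG state as JSON (requires access to the admin keyring).
raw = subprocess.check_output(["ceph", "pg", "dump", "--format=json"])
dump = json.loads(raw)

pg_per_osd = Counter()
for pg in dump.get("pg_stats", []):        # field name assumed
    for osd in pg.get("up", []):           # OSDs in the PG's up set
        pg_per_osd[osd] += 1

for osd, count in sorted(pg_per_osd.items()):
    print(f"osd.{osd}: {count} PGs")

If the OSDs at 70% utilization are also the ones holding the most PGs, the imbalance comes from CRUSH's pseudo-random placement rather than from an error in the crushmap.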

Wei Cao (Buddy)
