Dear Vasilily,

On 11/27/2015 02:00 PM, Irek Fasikhov wrote:
I think this is normal and should be no problem. I have seen this before as well.
I do not know exactly how the weights of the hosts contribute to determining where to store the third copy of a PG. As you explained, you have enough space on all hosts, but if the host weights do not add up, CRUSH may conclude that it is unable to place the PGs. What you can try is to artificially raise the weights of these hosts, to see if it starts mapping the third copies of the PGs onto the available host. I had a similar problem in the past, which was solved by upgrading to the latest CRUSH tunables. But be aware that this can cause massive data movement.
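As a rough sketch, the commands below show how you could inspect and raise CRUSH weights, and how to switch to the latest tunables. The OSD id and weight value are hypothetical examples; adapt them to your own cluster, and remember that the tunables change in particular can trigger heavy data movement:

```shell
# Show the current CRUSH tree with the per-host and per-OSD weights
ceph osd tree

# Artificially raise the CRUSH weight of an OSD on the affected host
# (osd.12 and the weight 1.2 are example values, not from your cluster)
ceph osd crush reweight osd.12 1.2

# Upgrade to the latest CRUSH tunables profile
# WARNING: this can cause massive data movement across the cluster
ceph osd crush tunables optimal
```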
Normally you get a warning in "ceph status" if time is out of sync. Nevertheless, you can try to restart the OSDs. I had timing issues in the past and discovered that it sometimes helps to restart the daemons *after* syncing the clocks, before they accepted the new time. But this was mostly the case with monitors, though.

Regards,

Mart
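For illustration, the sequence could look like this: sync the clock first, then restart the daemons so they pick up the corrected time. The NTP server, OSD id, and service names are assumptions; the exact service syntax depends on your init system and Ceph version:

```shell
# Step 1: sync the clock (example NTP server; use your site's own)
ntpdate pool.ntp.org

# Step 2: restart the daemons *after* the clocks are in sync
# (osd.0 is an example id; on systemd-based setups)
systemctl restart ceph-osd@0
systemctl restart ceph-mon@$(hostname -s)

# Step 3: check that the clock-skew warning is gone
ceph status
```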
--
Mart van Santen
Greenhost
E: mart@xxxxxxxxxxxx
T: +31 20 4890444
W: https://greenhost.nl

A PGP signature can be attached to this e-mail; you need PGP software to verify it. My public key is available in keyserver(s), see: http://tinyurl.com/openpgp-manual
PGP Fingerprint: CA85 EB11 2B70 042D AF66 B29A 6437 01A1 10A3 D3A5
_______________________________________________ ceph-users mailing list ceph-users@xxxxxxxxxxxxxx http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com