Re: 3 node setup with pools size=3

On 01/14/2014 09:44 AM, Dietmar Maurer wrote:
>>> When using a pool size of 3, I get the following behavior when one OSD
>>> fails:
>>> * the affected PGs get marked active+degraded
>>>
>>> * there is no data movement/backfill
>>
>> Works as designed, if you have the default crush map in place (all replicas must
>> be on DIFFERENT hosts). You need to tweak your crush map in this case, but be
>> aware that this can have serious effects (think of all your data residing on 3 disks
>> on a single host).
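
To spell out what that tweak typically looks like (rough sketch, untested
here, file names are just examples): change the chooseleaf step of the
replicated ruleset from 'type host' to 'type osd', so that replicas may
end up on different OSDs of the same host:

  # dump and decompile the current crush map
  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt

  # in crushmap.txt, change
  #   step chooseleaf firstn 0 type host
  # to
  #   step chooseleaf firstn 0 type osd

  # recompile and inject the modified map
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new
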
> 
> The old behavior was that data was automatically redistributed to the remaining 3 disks.
> So the question is: why is this different once we use 'ceph osd crush tunables optimal'?
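
Side note, in case it helps with the tunables question: the tunables
actually in effect can be printed with

  ceph osd crush show-tunables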

Sorry, it seems I misread your question: only a single OSD fails, not
the whole server? In that case backfilling should definitely kick in.
Check whether the 'noout' flag is set on your cluster.
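
A quick way to check (and clear) the flag:

  # the flag shows up in the cluster status and in the osdmap flags
  ceph -s
  ceph osd dump | grep flags

  # clear it if it is set and no longer needed
  ceph osd unset noout

Also keep in mind that backfill only starts once the failed OSD is marked
out, which by default happens after 'mon osd down out interval' (a few
minutes) has elapsed.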

-- 
http://www.wogri.com



