Re: 3 node setup with pools size=3

On 01/13/2014 12:39 PM, Dietmar Maurer wrote:
> I am still playing around with a small setup using 3 nodes, each running
> 4 OSDs (12 OSDs in total).
> 
> When using a pool size of 3, I get the following behavior when one OSD
> fails:
> * the affected PGs get marked active+degraded
> 
> * there is no data movement/backfill

This works as designed if you have the default CRUSH map in place: all
replicas must land on DIFFERENT hosts, and with only 3 hosts there is
nowhere else to put the third copy. You would need to tweak your CRUSH
map in this case, but be aware that this can have serious consequences
(think of all three replicas of a PG residing on disks in a single
host, so losing that one host loses the data).
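
For reference, the tweak is a one-word change in the CRUSH rule: the
failure domain in the "chooseleaf" step. A rough sketch of the workflow
follows; the file names are arbitrary, and the rule name and numbers in
your map may differ from this example:

    # dump and decompile the current CRUSH map
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # in crushmap.txt, edit the replicated rule so CRUSH may place
    # replicas on any OSD, regardless of which host it sits in:
    rule replicated_ruleset {
            ruleset 0
            type replicated
            min_size 1
            max_size 10
            step take default
            # was: step chooseleaf firstn 0 type host
            step chooseleaf firstn 0 type osd
            step emit
    }

    # recompile and inject the edited map (this triggers data movement)
    crushtool -c crushmap.txt -o crushmap-new.bin
    ceph osd setcrushmap -i crushmap-new.bin

With "type osd" CRUSH no longer distinguishes hosts at all, which is
exactly the danger mentioned above: nothing stops all three replicas of
a PG from ending up in the same chassis.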

Wolfgang


-- 
http://www.wogri.com