Re: 3 node setup with pools size=3

> > When using a pool size of 3, I get the following behavior when one OSD
> > fails:
> > * the affected PGs get marked active+degraded
> >
> > * there is no data movement/backfill
> 
> This works as designed if you have the default CRUSH map in place (all replicas
> must be on DIFFERENT hosts). You would need to tweak your CRUSH map in this case,
> but be aware that this can have serious consequences (think of all your data
> residing on 3 disks on a single host).
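
For reference, the tweak referred to above would look roughly like this (just a
sketch; the file names are placeholders and the rule shown is the stock replicated
rule, so double-check against your own decompiled map):

    # dump and decompile the current CRUSH map
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # in crushmap.txt, change the failure domain of the replicated rule
    # from host to osd, i.e.
    #     step chooseleaf firstn 0 type host
    # becomes
    #     step chooseleaf firstn 0 type osd
    # (this is exactly the "all data on one host" risk mentioned above)

    # recompile and inject the edited map
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new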

The old behavior was that the data was automatically redistributed to the remaining 3 disks.
So the question is: why is this different when we use 'ceph osd crush tunables optimal'?
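
For what it's worth, this is how I have been comparing the two tunable profiles on a
test cluster (a sketch; only the documented profile names are used, nothing
cluster-specific is assumed):

    # show the CRUSH tunables currently in effect
    ceph osd crush show-tunables

    # switch between the legacy and optimal profiles to compare the recovery behavior
    ceph osd crush tunables legacy
    ceph osd crush tunables optimal

    # note: changing tunables on a cluster that holds data can trigger a large
    # amount of data movement, so only do this on a test cluster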


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



