Re: rbd pool:replica size choose: 2 vs 3

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Ja. C.A.
> Sent: 23 September 2016 10:38
> To: nick@xxxxxxxxxx; ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  rbd pool:replica size choose: 2 vs 3
> 
> ummm....ok.
> 
> and how would the affected PG recover? Just by replacing the affected OSD/DISK, or would the affected PG migrate to another OSD/DISK?

Yes, Ceph would start recovering the PGs onto other OSDs, but until each affected PG has at least min_size copies again, I/O to it will be blocked. (A couple of example commands follow below, and a note on the min_size change is after the quoted thread.)
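
A rough way to watch this from the CLI (exact output varies by Ceph release, but these are standard commands):

    ceph health detail           # degraded/undersized PGs show up here while recovery runs
    ceph pg stat                 # quick summary of active+clean vs degraded/undersized PGs
    ceph pg dump_stuck inactive  # PGs stuck inactive are the ones blocking client I/O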

> 
> thx
> 
> On 23/09/16 10:56, Nick Fisk wrote:
> >
> >> -----Original Message-----
> >> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Ja. C.A.
> >> Sent: 23 September 2016 09:50
> >> To: ceph-users@xxxxxxxxxxxxxx
> >> Subject: Re:  rbd pool:replica size choose: 2 vs 3
> >>
> >> Hi
> >>
> >> with rep_size=2 and min_size=2, what drawbacks are removed compared
> >> with
> >> rep_size=2 and min_size=1?
> > If you lose a disk, I/O on the affected PGs will block until they are back at size=2 again.
> >
> >> thx
> >> J.
> >>
> >> On 23/09/16 10:07, Wido den Hollander wrote:
> >>>> On 23 September 2016 at 10:04, mj <lists@xxxxxxxxxxxxx> wrote:
> >>>>
> >>>>
> >>>> Hi,
> >>>>
> >>>> On 09/23/2016 09:41 AM, Dan van der Ster wrote:
> >>>>>> If you care about your data you run with size = 3 and min_size = 2.
> >>>>>>
> >>>>>> Wido
> >>>> We're currently running with min_size 1. Can we simply change this,
> >>>> online, with:
> >>>>
> >>>> ceph osd pool set vm-storage min_size 2
> >>>>
> >>>> and expect everything to continue running?
> >>>>
> >>> Yes, it will. No rebalance will happen. min_size = 2 just tells Ceph
> >>> that 2 replicas need to be online for I/O (Read and Write) to continue.
> >>> Wido
> >>>
> >>>> (our cluster is HEALTH_OK, enough disk space, etc, etc)
> >>>>
> >>>> MJ
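
As Wido notes above, the min_size change can be applied online and causes no data movement. Something along these lines should work for the vm-storage pool mentioned in the thread (swap in your own pool name):

    ceph osd pool get vm-storage size        # current replica count
    ceph osd pool get vm-storage min_size    # current minimum replicas required for I/O
    ceph osd pool set vm-storage min_size 2  # takes effect immediately, no rebalance
    ceph -s                                  # confirm the cluster stays HEALTH_OK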

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
