Re: 2x replication: A BIG warning


 



> On 9 December 2016 at 22:31, Oliver Humpage <oliver@xxxxxxxxxxxxxxx> wrote:
> 
> 
> 
> > On 7 Dec 2016, at 15:01, Wido den Hollander <wido@xxxxxxxx> wrote:
> > 
> > I would always run with min_size = 2 and manually switch to min_size = 1 if the situation really requires it at that moment.
> 
> Thanks for this thread, it’s been really useful.
> 
> I might have misunderstood, but does min_size=2 also mean that writes have to wait for at least 2 OSDs to have data written before the write is confirmed? I always assumed this would have a noticeable effect on performance and so left it at 1.
> 
> Our use case is RBDs being exported as iSCSI for ESXi. OSDs are journalled on enterprise SSDs, servers are linked with 10Gb, and we’re generally getting very acceptable speeds. Any idea as to how upping min_size to 2 might affect things, or should we just try it and see?
> 

As David already said, when all OSDs are up and in for a PG, Ceph will wait for ALL of them to ack the write before acknowledging it to the client. Writes in RADOS are always synchronous.

Only when OSDs go down does min_size come into play: a PG needs at least min_size OSDs up before writes or reads are accepted.

So with min_size = 2 and size = 3, you need at least 2 of the 3 OSDs for a PG online for I/O to take place.
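
For reference, a minimal sketch of how to check and change these settings on a replicated pool (the pool name 'rbd' below is just an example):

  # show the current number of replicas and the I/O threshold
  ceph osd pool get rbd size
  ceph osd pool get rbd min_size

  # keep 3 copies, require 2 of them to be up for I/O
  ceph osd pool set rbd size 3
  ceph osd pool set rbd min_size 2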

Wido

> Oliver.
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



