Re: Worst thing that can happen if I have size= 2

Thanks Simon, and thanks to the other people who have replied.
Sorry, let me try to explain myself better.
It is clear to me that with two copies of the data, if one disk breaks and,
while Ceph is recreating the missing copy, the disk holding the second copy
also breaks, the data is lost.
That is obvious, and a bit paranoid, because many servers at many customers
run on RAID1, and the same argument applies there: you have two copies of
the data, but both can break. Consider also that with Ceph recovery is
automatic, whereas with RAID1 someone must go to the customer and replace
the disks by hand. So even with size=2 Ceph is already an improvement in
this case. With size=3 and min_size=2 it is a bigger improvement, I know.
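
For reference, the settings I am talking about are per pool. A minimal
sketch of how to inspect and change them (assuming a hypothetical pool
named "mypool"; substitute your own pool name):

    # show the current replica count and the minimum replicas needed for I/O
    ceph osd pool get mypool size
    ceph osd pool get mypool min_size

    # the configuration I am describing (two copies, I/O still allowed
    # with only one copy available)
    ceph osd pool set mypool size 2
    ceph osd pool set mypool min_size 1

    # the safer configuration everybody recommends
    ceph osd pool set mypool size 3
    ceph osd pool set mypool min_size 2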

What I am asking is this: what happens with min_size=1 in the case of a
split brain, a network outage, or similar events? Does Ceph block writes
because the monitors have no quorum? Are there failure scenarios I have not
considered?
Thanks again!
Mario



On Wed, 3 Feb 2021 at 17:42, Simon Ironside <
sironside@xxxxxxxxxxxxx> wrote:

> On 03/02/2021 09:24, Mario Giammarco wrote:
> > Hello,
> > Imagine this situation:
> > - 3 servers with ceph
> > - a pool with size 2 min 1
> >
> > I know perfectly the size 3 and min 2 is better.
> > I would like to know what is the worst thing that can happen:
>
> Hi Mario,
>
> This thread is worth a read, it's an oldie but a goodie:
>
>
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-December/014846.html
>
> Especially this post, which helped me understand the importance of
> min_size=2
>
>
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-December/014892.html
>
> Cheers,
> Simon
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


