There is a big difference between traditional RAID1 and Ceph. Namely,
with Ceph there are nodes where OSDs run, and those nodes need
maintenance. You want to be able to perform maintenance even while one
OSD is broken, which is why the recommendation with Ceph is to keep
three copies. There is no such "maintenance" consideration with
traditional RAID1, so two copies are fine there.

On Thu, 4 Feb 2021 at 00:49, Mario Giammarco <mgiammarco@xxxxxxxxx> wrote:
>
> Thanks Simon, and thanks to the other people who have replied.
> Sorry, let me try to explain myself better.
> It is evident to me that if I have two copies of the data, one breaks,
> and while Ceph is recreating that copy the disk holding the second copy
> also breaks, then I lose the data.
> That is obvious and a bit paranoid, because many servers at many
> customers run on RAID1, so you are saying: yes, you have two copies of
> the data, but both can break. Consider that in Ceph recovery is
> automatic, while with RAID1 someone must go to the customer and change
> disks manually. So Ceph is already an improvement in this case, even
> with size=2. With size 3 and min 2 it is a bigger improvement, I know.
>
> What I am asking is this: what happens with min_size=1 and a split
> brain, a network outage or similar events: does Ceph block writes
> because it has no quorum on the monitors? Are there failure scenarios
> that I have not considered?
> Thanks again!
> Mario
>
> On Wed, 3 Feb 2021 at 17:42, Simon Ironside <sironside@xxxxxxxxxxxxx> wrote:
>
> > On 03/02/2021 09:24, Mario Giammarco wrote:
> > > Hello,
> > > Imagine this situation:
> > > - 3 servers with Ceph
> > > - a pool with size 2 min 1
> > >
> > > I know perfectly well that size 3 and min 2 is better.
> > > I would like to know what is the worst thing that can happen:
> >
> > Hi Mario,
> >
> > This thread is worth a read, it's an oldie but a goodie:
> >
> > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-December/014846.html
> >
> > Especially this post, which helped me understand the importance of
> > min_size=2:
> >
> > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-December/014892.html
> >
> > Cheers,
> > Simon

-- 
Alexander E. Patrakov
CV: http://u.pc.cd/wT8otalK
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
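
For reference, the size / min_size settings discussed above are per-pool
and can be inspected and changed with the standard ceph CLI. A minimal
sketch, assuming a replicated pool; "mypool" is a placeholder name:

    # show the current replica count and the write threshold of the pool
    ceph osd pool get mypool size
    ceph osd pool get mypool min_size

    # move to the recommended settings: 3 copies, writes need 2 available
    ceph osd pool set mypool size 3
    ceph osd pool set mypool min_size 2

Note that raising size on an existing pool makes Ceph backfill the
additional copies, so expect recovery traffic (and enough free capacity
for the third replica) after the change.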