No, the problem is that a storage system should never tell a client
that it has written data if it cannot guarantee that the data will
still be there after one device fails.

Scenario: one OSD is down for whatever reason and another one fails.
You've now lost all writes that happened while the first OSD was down.
You never want this scenario if you care about your data. I've worked
on a lot of broken clusters, and all cases of data loss were related
to running with min_size 1, or to erasure coding with the equivalent
setting.

Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Tue, Apr 16, 2019 at 12:02 PM Igor Podlesny <ceph-user@xxxxxxxx> wrote:
>
> On Tue, 16 Apr 2019 at 16:52, Paul Emmerich <paul.emmerich@xxxxxxxx> wrote:
> > On Tue, Apr 16, 2019 at 11:50 AM Igor Podlesny <ceph-user@xxxxxxxx> wrote:
> > > On Tue, 16 Apr 2019 at 14:46, Paul Emmerich <paul.emmerich@xxxxxxxx> wrote:
> [...]
> > > Looked at it, didn't see any explanation of your point of view. If
> > > there are 2 active data instances (and the 3rd is missing), how is
> > > it different from replicated pools with a 3/2 config(?)
> >
> > each of these "copies" has only half the data
>
> Still not seeing how.
>
> EC(2, 1) is conceptually RAID5 on 3 devices. You're basically saying
> that if one of those 3 disks is missing, you can't safely write to
> the 2 others that are still in. But Ceph's EC has no partial-update
> issue, does it?
>
> Can you elaborate?
>
> --
> End of message. Next message?

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
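
A minimal sketch of the failure sequence described above, in Python
rather than anything Ceph actually ships. It assumes a replicated pool
with size=2 where a write is acknowledged once min_size replicas hold
it; the names (Osd, Pool, write) are hypothetical and exist only to
illustrate why min_size 1 loses acknowledged writes:

    # Toy model, not Ceph code: shows the min_size=1 data-loss window.
    class Osd:
        def __init__(self, name):
            self.name = name
            self.up = True
            self.objects = {}

    class Pool:
        def __init__(self, osds, min_size):
            self.osds = osds
            self.min_size = min_size

        def write(self, key, value):
            # The client gets an ack as soon as min_size replicas store it.
            alive = [o for o in self.osds if o.up]
            if len(alive) < self.min_size:
                raise IOError("pool inactive: not enough replicas")
            for o in alive:
                o.objects[key] = value
            return True  # acknowledged to the client

    pool = Pool([Osd("osd.0"), Osd("osd.1")], min_size=1)
    pool.osds[0].up = False        # osd.0 goes down for maintenance
    pool.write("obj", "new data")  # ack'd, but stored only on osd.1
    pool.osds[1].up = False        # osd.1 dies before osd.0 recovers
    # "obj" is now lost: osd.0 never saw the write, osd.1 is gone.
    # With min_size=2 the write would have been refused, not lost.

With min_size=2 the pool goes read-only-or-blocked while degraded,
which is annoying but recoverable; with min_size=1 the client was told
the write succeeded when only a single copy ever existed.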