On Thu, 8 Dec 2022 at 18:52, Charles Hedrick <hedrick@xxxxxxxxxxx> wrote:
>
> I'm aware that the file system will remain available. My concern is about
> long jobs that use it failing because a single operation returns an error.
> While none of the discussion so far has been explicit, I assume this can
> happen if an OSD fails, since it might have sent an async acknowledgement
> for an operation that won't actually be possible to complete. I'm assuming
> it won't happen during a cephadm upgrade.

If your pool has repl=3, the ack only gets back to the client once all 3
replicas have the data, so a single (or even dual) OSD failure in that case
still would not mean that this particular write was lost. If you are down to
1 replica, the pool goes read-only until a new replica can be made, but the
last acknowledged write will still be present.

-- 
May the most significant bit of your life be positive.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
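For anyone following along: the replica counts discussed above map to a pool's `size` and `min_size` settings. As a sketch (the pool name "mypool" is hypothetical), they can be inspected and adjusted like this:

```shell
# Inspect replication settings for a pool (pool name "mypool" is hypothetical)
ceph osd pool get mypool size       # replica count, e.g. 3
ceph osd pool get mypool min_size   # replicas required for I/O, e.g. 2

# With size=3, a write is acked to the client only after every replica in
# the acting set has persisted it; falling below min_size blocks I/O on the
# affected PGs until recovery restores enough replicas.
ceph osd pool set mypool size 3
ceph osd pool set mypool min_size 2
```

These commands require a running cluster and admin keyring, so treat the snippet as a reference rather than something to paste blindly.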