Well said!

Brett

On Tue, Apr 23, 2024 at 7:05 AM Janne Johansson <icepic.dz@xxxxxxxxx> wrote:
> Den tis 23 apr. 2024 kl 11:32 skrev Frédéric Nass
> <frederic.nass@xxxxxxxxxxxxxxxx>:
>
> Ceph is strongly consistent. Either you read/write objects/blocks/files
> with an ensured strong consistency or you don't. The worst thing you can
> expect from Ceph, as long as it has been properly designed, configured
> and operated, is a temporary loss of access to the data.
>
> This is often more important than you think. All centralized storage
> systems have to face some kind of latency when sending data over the
> network, when splitting the data into replicas or erasure-coding shards,
> when waiting until all copies/shards have actually been written (perhaps
> via journals) to their final destination, and lastly when waiting for
> the write to be acknowledged back to the writing client. If some vendor
> says that "because of our special code, this part takes zero time", they
> are basically telling you that they are lying about the status of the
> write in order to finish more quickly, because this wins them contracts
> or benchmark competitions.
>
> It will not win you any smiles when there is an incident and data that
> was ACKed to be on disk suddenly isn't, because some write cache lost
> power at the same time as the storage box and now some database has
> half-written transactions in it. Ceph is by no means the fastest
> possible way to store data on a network, but it is very good while
> still retaining the strong consistency mentioned by Frédéric above,
> allowing many clients to do many IOs in parallel against the cluster.
>
> --
> May the most significant bit of your life be positive.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
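
For anyone who wants to see the acknowledgement behaviour Janne describes
from the client side, below is a minimal sketch using the Python rados
bindings (python3-rados). It assumes a reachable cluster, a ceph.conf and
client keyring in their default locations, and an existing pool; the pool
name 'mypool' and object name 'demo-object' are placeholders for
illustration only. With modern Ceph (BlueStore), the "complete"
acknowledgement implies the write is durable on every OSD in the acting
set, which is the strong-consistency ACK discussed above.

    import rados

    # Connect using the standard config; assumes a readable client keyring.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    try:
        ioctx = cluster.open_ioctx('mypool')  # placeholder pool name
        try:
            # Synchronous write: this call does not return until the object
            # has been written on all replicas / erasure-coded shards.
            ioctx.write_full('demo-object', b'hello, consistent world')

            # Asynchronous variant: the completion fires only once the
            # cluster has acknowledged the write, not when the data merely
            # leaves the client's buffer.
            comp = ioctx.aio_write_full(
                'demo-object', b'second version',
                oncomplete=lambda c: print('write ACKed by the cluster'))
            comp.wait_for_complete()

            # A read issued after the ACK is guaranteed to observe the
            # acknowledged data.
            print(ioctx.read('demo-object'))
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

The latency Janne mentions lives inside those blocking calls: the client
simply cannot get the ACK any sooner than the slowest replica or shard can
persist the data.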