Exactly, strong consistency is why we chose Ceph over other SDS solutions back in 2014 (and why we disabled any non-persistent cache along the I/O path, such as the HDD disk cache; a sketch of that step is appended at the end of this message). A major power outage in our town a few years back (a few days before Christmas), combined with a UPS malfunction, proved us right.

Another reason to adopt Ceph today is that a cluster you build now to match a specific workload (say, capacity) will accommodate any future workload (for example, performance) simply by adding purpose-built nodes to the cluster, whatever the hardware looks like in the decades to come (a second sketch below illustrates this).

Regards,
Frédéric.

----- On Apr 23, 2024, at 13:04, Janne Johansson icepic.dz@xxxxxxxxx wrote:

> On Tue, 23 Apr 2024 at 11:32, Frédéric Nass
> <frederic.nass@xxxxxxxxxxxxxxxx> wrote:
>> Ceph is strongly consistent. Either you read/write objects/blocks/files with an
>> ensured strong consistency OR you don't. The worst thing you can expect from Ceph,
>> as long as it has been properly designed, configured and operated, is a temporary
>> loss of access to the data.
>
> This is often more important than you think. All centralized storage
> systems have to face some kind of latency when sending data over
> the network, when splitting the data into replicas or erasure-coded
> shards, when waiting until all copies/shards have actually been written
> (perhaps via journals) to their final destination, and then
> lastly for the write to be acknowledged back to the writing client. If
> some vendor says that "because of our special code, this part takes
> zero time", they are basically telling you that they are lying about
> the status of the write in order to finish more quickly, because this
> wins them contracts or wins competitions.
>
> It will not win you any smiles when there is an incident and data that
> was ACKed to be on disk suddenly isn't, because some write cache lost
> power at the same time as the storage box, and now some database has
> half-written transactions in it. Ceph is by no means the fastest
> possible way to store data on a network, but it is very good while
> still retaining the strong consistency mentioned by Frédéric above,
> allowing many clients to do many IOs in parallel against the
> cluster.
>
> --
> May the most significant bit of your life be positive.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
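
First sketch (cache disabling): a minimal, illustrative script for the step mentioned at the top, assuming SATA/SAS drives that honour hdparm -W 0 (NVMe or RAID-controller-attached devices need their own tooling) and root privileges. The setting does not survive a power cycle, so in practice you would hook it into a udev rule or a boot-time unit; treat this as a starting point, not a drop-in tool.

#!/usr/bin/env python3
"""Disable the volatile (non-persistent) write cache on rotational drives
so that an acknowledged write really sits on the platters."""
import json
import subprocess

def rotational_disks():
    # Ask the kernel which whole disks are rotational (ROTA=1 -> HDD).
    out = subprocess.run(
        ["lsblk", "--json", "-d", "-o", "NAME,ROTA,TYPE"],
        capture_output=True, text=True, check=True,
    ).stdout
    for dev in json.loads(out)["blockdevices"]:
        if dev["type"] == "disk" and str(dev["rota"]).lower() in ("1", "true"):
            yield "/dev/" + dev["name"]

def disable_write_cache(dev):
    # hdparm -W 0 turns off the drive's volatile write cache.
    subprocess.run(["hdparm", "-W", "0", dev], check=True)

if __name__ == "__main__":
    for dev in rotational_disks():
        disable_write_cache(dev)
        print(f"volatile write cache disabled on {dev}")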
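
Second sketch (adding purpose-built nodes): once new NVMe-backed OSD hosts have joined the cluster and their OSDs have been classed as nvme, a CRUSH rule restricted to that device class steers a pool onto them while the rest of the cluster is untouched. The rule and pool names (fast-nvme, rbd-fast) are made up for the example; adjust them and the CRUSH root/failure domain to your layout.

#!/usr/bin/env python3
"""Steer a pool onto newly added NVMe nodes via a device-class CRUSH rule."""
import subprocess

def ceph(*args):
    # Thin wrapper around the ceph CLI; requires an admin keyring.
    return subprocess.run(["ceph", *args], capture_output=True,
                          text=True, check=True).stdout

# Replicated rule selecting only OSDs of device class "nvme",
# with replicas spread across hosts under the "default" CRUSH root.
ceph("osd", "crush", "rule", "create-replicated",
     "fast-nvme", "default", "host", "nvme")

# Point the (hypothetical) pool at the new rule; data rebalances onto
# the new nodes in the background.
ceph("osd", "pool", "set", "rbd-fast", "crush_rule", "fast-nvme")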