Re: Gluster -> Ceph


 




On December 17, 2023 5:40:52 AM PST, Diego Zuccato <diego.zuccato@xxxxxxxx> wrote:
>On 14/12/2023 16:08, Joe Julian wrote:
>
>> With ceph, if the placement database is corrupted, all your data is lost (happened to my employer, once, losing 5PB of customer data).
>
>From what I've been told (by experts), it's really hard to make that happen, even more so if proper redundancy of MON and MDS daemons is implemented on quality hardware.
>
LSI isn't exactly crap hardware. But when a flaw causes it to drop drives under heavy load, the rebalance triggered by those dropped drives generates exactly that kind of heavy load, so more drives drop and the failure cascades. And when the journal is never idle long enough to checkpoint, it fills its partition and ends up corrupted and unrecoverable.
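The feedback loop described above can be sketched in a few lines. This is an illustrative toy model, not Ceph code: the function, names, and numbers are all hypothetical, chosen only to show how rebalance traffic added per lost drive can keep total load above the controller's drop threshold so the cascade never stabilizes.

```python
# Toy model of a cascading drive-drop failure (hypothetical numbers, not Ceph
# internals): each dropped drive adds rebalance load; if total load stays at
# or above the controller's drop threshold, another drive drops next step.

def simulate_cascade(base_load=0.92, drop_threshold=0.9,
                     rebalance_load_per_drive=0.01, steps=20):
    """Return (drives_lost, final_load) after `steps` time steps."""
    lost = 0
    load = base_load
    for _ in range(steps):
        load = base_load + lost * rebalance_load_per_drive
        if load >= drop_threshold:
            lost += 1      # controller drops another drive under load
        else:
            break          # load fell below threshold; cascade stops
    return lost, load

# Starting above the threshold, every step loses a drive and load only grows.
print(simulate_cascade(base_load=0.92))
# Starting well below it, nothing drops and the system stays stable.
print(simulate_cascade(base_load=0.5))
```

The point of the sketch is that the stable and unstable regimes are separated only by whether steady-state load sits above or below the drop threshold, which is why a marginal controller flaw can turn one dropped drive into a cluster-wide event.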


>Neither Gluster nor Ceph are "backup solutions", so if the data is not easily replaceable it's better to have it elsewhere. Better if offline.
>

It's a nice idea, but when you're dealing with petabytes of data streaming in as fast as your storage will allow, it's just not physically possible.
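A back-of-envelope calculation makes the point concrete. The figures here are hypothetical (5 PB and a dedicated 100 Gb/s backup link are assumptions for illustration, not numbers from the thread):

```python
# Back-of-envelope: how long one full backup pass of a multi-PB cluster takes
# at a given backup bandwidth. Hypothetical sizes and link speeds.

PB = 10**15  # bytes in a petabyte (decimal)

def full_backup_days(capacity_pb, link_gbps):
    """Days to copy `capacity_pb` petabytes over a `link_gbps` Gb/s link."""
    bytes_per_second = link_gbps * 10**9 / 8   # Gb/s -> bytes/s
    seconds = capacity_pb * PB / bytes_per_second
    return seconds / 86400

# 5 PB over a dedicated 100 Gb/s link: roughly 4.6 days for one pass.
print(round(full_backup_days(5, 100), 1))
```

And that assumes the link is saturated the whole time and ingest pauses; if new data arrives at a rate comparable to the backup bandwidth, a full offline copy never catches up at all.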
________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users


