How do you handle a large Ceph object storage cluster?

Hi Everyone,

My company is running a quite large Ceph cluster (>10k OSDs, >60 PB of data), dedicated entirely to object storage with an S3 interface. Maintenance and expansion are getting more and more problematic and time consuming. We are considering splitting it into two or more completely separate clusters (without replicating data between them) and building an S3 abstraction layer with some additional metadata, so that these 2+ physically independent instances can be used as one logical cluster. Additionally, the newest data is the most frequently accessed, so we have to spread it evenly across the clusters to avoid skew in cluster load.
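
In case it makes the idea clearer, here is a rough sketch of what we mean by the abstraction layer: a thin routing service that keeps its own placement metadata and hashes new objects across the backend clusters. The endpoint URLs, the in-memory placement map, and the helper names are all made up for illustration; a real deployment would need a replicated metadata store and a migration path for existing objects.

    # Rough sketch of the S3 routing layer idea (Python + boto3).
    # Endpoints and the in-memory placement map are placeholders only.
    import hashlib
    import boto3

    # Two (or more) physically independent Ceph/RGW clusters.
    CLUSTERS = {
        "cluster-a": "http://rgw-a.example.internal:8080",
        "cluster-b": "http://rgw-b.example.internal:8080",
    }

    # object key -> name of the cluster that stores it
    placement = {}

    def pick_cluster(key):
        # Hash the key so new writes spread evenly across clusters.
        h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
        names = sorted(CLUSTERS)
        return names[h % len(names)]

    def s3(cluster):
        return boto3.client("s3", endpoint_url=CLUSTERS[cluster])

    def put_object(bucket, key, body):
        cluster = pick_cluster(key)
        s3(cluster).put_object(Bucket=bucket, Key=key, Body=body)
        placement[key] = cluster   # remember where the object lives

    def get_object(bucket, key):
        cluster = placement[key]   # look up the owning cluster first
        resp = s3(cluster).get_object(Bucket=bucket, Key=key)
        return resp["Body"].read()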

Do you have any similar experience? How did you handle it? Maybe you have some advice? I'm not a Ceph expert, just a Ceph user and software developer who doesn't like to duplicate someone else's work.

Best,
Paweł
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



