Re: How do you handle large Ceph object storage cluster?


Well, you are probably in the top 1% of cluster size. I would guess that
trying to cut your existing cluster in half without downtime, shuffling
existing buckets between the old cluster and the new one, would be harder
than redirecting all new buckets (or users) to a second cluster. Keep in
mind that each cluster has its own single bucket namespace, which you will
need to account for when redirecting requests across a "cluster of
clusters". There are lots of ways to skin this cat, and either way it would
be a large and complicated architectural undertaking.
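
To illustrate the routing idea: here is a rough sketch of what such a
bucket-to-cluster lookup could look like, assuming a hypothetical mapping
kept outside of Ceph (the endpoints, names, and placement policy below are
made up for illustration, not anything RGW gives you out of the box):

    # Hypothetical sketch of a thin routing layer in front of two independent
    # Ceph/RGW clusters; endpoints, credentials and the mapping are made up.
    import boto3

    # Bucket -> cluster mapping. In practice this would live in an external
    # metadata store (database, etcd, ...) rather than an in-memory dict.
    BUCKET_MAP = {}

    CLUSTERS = {
        "cluster-a": "https://rgw-a.example.com",
        "cluster-b": "https://rgw-b.example.com",
    }

    def assign_cluster(bucket):
        """Place a new bucket on the cluster with the fewest buckets, so new
        (hot) data is spread roughly evenly across clusters."""
        counts = {name: 0 for name in CLUSTERS}
        for owner in BUCKET_MAP.values():
            counts[owner] += 1
        target = min(counts, key=counts.get)
        BUCKET_MAP[bucket] = target
        return target

    def client_for(bucket):
        """Return a boto3 S3 client pointed at the cluster owning the bucket."""
        cluster = BUCKET_MAP.get(bucket) or assign_cluster(bucket)
        return boto3.client(
            "s3",
            endpoint_url=CLUSTERS[cluster],
            aws_access_key_id="...",       # per-cluster credentials
            aws_secret_access_key="...",
        )

    # All S3 calls go through the lookup, so callers see one logical cluster:
    #   client_for("my-bucket").put_object(Bucket="my-bucket", Key="k", Body=b"x")

Since bucket names only have to be unique within a single cluster, a mapping
layer like this is also the natural place to enforce a global namespace if
you need one.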

Respectfully,

*Wes Dillingham*
wes@xxxxxxxxxxxxxxxxx
LinkedIn <http://www.linkedin.com/in/wesleydillingham>


On Mon, Oct 16, 2023 at 10:53 AM <pawel.przestrzelski@xxxxxxxxx> wrote:

> Hi Everyone,
>
> My company is dealing with a quite large Ceph cluster (>10k OSDs, >60 PB of
> data). It is entirely dedicated to object storage with an S3 interface.
> Maintenance and extension are getting more and more problematic and time
> consuming. We are considering splitting it into two or more completely
> separate clusters (without replication of data among them) and creating an
> S3 layer of abstraction, with some additional metadata, that will allow us
> to use these 2+ physically independent instances as one logical cluster.
> Additionally, the newest data is in the highest demand, so we have to
> spread it evenly across the clusters to avoid skew in cluster load.
>
> Do you have any similar experience? How did you handle it? Do you have any
> advice? I'm not a Ceph expert; I'm just a Ceph user and software developer
> who does not like to duplicate someone else's work.
>
> Best,
> Paweł
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



