Re: max pool size (amount of data/number of OSDs)

Hi Chris,

The actual limits are not in the software. Ceph teams at cloud providers or universities usually run out of physical resources first: racks, rack power, or network (ports, EOL switches that can't be upgraded), or hardware lifetime. There is no point in buying old hardware, and new hardware is often too new to mix with the old; at the same time, replacing everything at once is very expensive (millions of dollars, depending on the region where the equipment is purchased and where it will be operated).
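
As for the PG part of the question quoted below: I don't know of a hard per-pool ceiling; the usual sizing guide is the old rule of thumb of roughly 100 PGs per OSD. Here is a rough Python sketch of that arithmetic (suggested_pg_num is just an illustrative name, and the ~100 target is the classic pre-autoscaler guideline, not anything enforced by Ceph):

    import math

    def suggested_pg_num(num_osds, replica_size, data_ratio=1.0,
                         target_pgs_per_osd=100):
        # Rule-of-thumb heuristic: aim for ~target_pgs_per_osd PG replicas
        # per OSD, weighted by the share of the cluster's data this pool
        # is expected to hold (data_ratio).
        raw = num_osds * target_pgs_per_osd * data_ratio / replica_size
        # pg_num should be a power of two; round to the nearest one.
        return 2 ** max(0, round(math.log2(raw)))

    # e.g. 1000 OSDs, 3x replication, one pool holding all the data:
    print(suggested_pg_num(1000, 3))  # -> 32768

On recent releases the pg_autoscaler adjusts pg_num for you anyway, so treat this as a sanity check rather than something to set by hand.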


k
Sent from my iPhone

> On 30 Dec 2022, at 19:52, Christopher Durham <caduceus42@xxxxxxx> wrote:
> 
> 
> Hi,
> Is there any information on this issue? Max number of OSDs per pool, or max pool size (data) as opposed to cluster size? Thanks!
> -Chris
> 
> 
> -----Original Message-----
> From: Christopher Durham <caduceus42@xxxxxxx>
> To: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
> Sent: Thu, Dec 15, 2022 5:36 pm
> Subject: max pool size (amount of data/number of OSDs)
> 
> 
> Hi,
> There are various articles, case studies, etc., about large ceph clusters storing tens of PiB, with CERN's being the largest cluster as far as I know.
> Is there a largest pool capacity limit? In other words, while you may have a 30PiB cluster, is there a limit or recommendation as to max pool capacity? For example, in the 30PiB case, is there a limit or recommendation that says do not have a pool capacity higher than 5PiB, i.e. 6 pools in that cluster for a total of 30PiB?
> 
> I know this would be contingent upon a variety of things, including, but not limited to, network throughput and individual server size (disk size and count, memory, compute). I am specifically talking about s3/rgw storage.
> 
> But is there a technical limit, or just a tested size, of a pool? Should I create different pools when a given pool would otherwise reach a capacity of X, or have N OSDs or PGs in it, when considering adding additional OSDs?
> Thanks for any info
> -Chris

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



