Hi,
for a replicated pool there's a hard-coded limit of 10:
$ ceph osd pool set test-pool size 20
Error EINVAL: pool size must be between 1 and 10
And it seems reasonable to limit a replicated pool: that many replicas
increase the cost and network traffic without much additional benefit.
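For comparison, a common replicated setup just stays at size 3 with
min_size 2; a minimal sketch (assuming the same test-pool as above):

$ ceph osd pool get test-pool size
$ ceph osd pool set test-pool size 3
$ ceph osd pool set test-pool min_size 2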
For an erasure-coded pool the number of OSDs is basically the limit.
The largest pool size we have in a customer environment is 18 chunks
(k7 m11) across two datacenters (to sustain the loss of one DC) and it
works quite well. They don't have a huge load on the cluster though,
so those 18 chunks don't really hurt. But I don't know what the impact
would be on a heavily used cluster. On a different cluster with a much
higher load we have an EC pool with 9 chunks (k4 m5) and it also works
perfectly fine.
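If it helps, this is roughly how such a profile would be created; a
sketch with an assumed profile/pool name and crush-failure-domain=host
(a DC-spanning layout like the k7/m11 one needs its own crush rule):

$ ceph osd erasure-code-profile set ec-k4-m5 k=4 m=5 crush-failure-domain=host
$ ceph osd pool create ecpool erasure ec-k4-m5
$ ceph osd erasure-code-profile get ec-k4-m5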
But what is your question aiming at? Usually you'd carefully plan what
your resiliency requirements are depending on the DCs/racks/hosts etc.
and choose a fitting EC profile or replicated size.
Regards,
Eugen
Quoting Christopher Durham <caduceus42@xxxxxxx>:
Hi,
I've seen Dan's talk:
https://www.youtube.com/watch?v=0i7ew3XXb7Q
and other similar ones that talk about CLUSTER size.
But, I see nothing (perhaps I have not looked hard enough), on any
recommendations regarding max POOL size.
So, are there any limitations on a given pool that has all OSDs of
the same type?
I know that this is vague, and may depend on device type, crush
rule, ec vs replicated, network bandwidth, etc. But if there are any
limitations or experiences that have exposed limits you don't want
to go over, it would be nice to know.
Also, an anecdotal 'our biggest pool is X, and we don't have
problems', or 'pools over Y started to show problem Z', would be
great too.
Thanks
-Chris
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx