Re: Advice on enabling autoscaler

On 2/7/22 12:34 PM, Alexander E. Patrakov wrote:
> Mon, 7 Feb 2022 at 17:30, Robert Sander <r.sander@xxxxxxxxxxxxxxxxxxx>:
>> And keep in mind that when PGs are increased you may also need to
>> increase the number of OSDs, as one OSD should carry a maximum of
>> around 200 PGs. But I do not know if that is still the case with
>> current Ceph versions.
> This is just the default limit. Even Nautilus can do 400 PGs per OSD,
> given "mon max pg per osd = 400" in ceph.conf. Of course, that doesn't
> mean you should allow it.
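For reference, the override mentioned above is a config fragment like the following (option names as documented by Ceph; the shipped default in recent releases is 250 PGs per OSD, so verify both the name and the default against your release):

```ini
# ceph.conf fragment: raise the monitor's PG-per-OSD cap
[global]
mon max pg per osd = 400
```

On releases with centralized configuration (Mimic and later), the same option can typically be set at runtime with `ceph config set` instead of editing ceph.conf.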

There are multiple factors that play into how many PGs you can have per OSD. Some are tied to the pglog length (and its associated memory usage), some to the amount of PG statistics sent to the mgr (the reporting interval can be tweaked to lower this if you have many PGs), and some to the pgmap size and mon limits. For small clusters it may be possible to tweak things to support far more PGs per OSD (I've tested well over 1,000 per OSD on small clusters), while extremely large clusters with many thousands of OSDs may struggle to reach even 100 PGs per OSD without tweaking settings. YMMV, which is why the defaults are fairly conservative estimates for typical clusters.
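The per-OSD figure being discussed can be estimated with the usual back-of-the-envelope formula: each PG is stored on `size` OSDs (replica count, or k+m for erasure coding), so the average load is the sum of pg_num times size over all pools, divided by the OSD count. A minimal sketch (function name and inputs are illustrative, not a Ceph API; real placement also depends on CRUSH rules and device classes):

```python
def pgs_per_osd(pools, num_osds):
    """Average PG replicas carried per OSD.

    pools: iterable of (pg_num, size) tuples, where size is the
           replica count (or k+m for an erasure-coded pool).
    """
    total_pg_replicas = sum(pg_num * size for pg_num, size in pools)
    return total_pg_replicas / num_osds

# Example: one 3-replica pool with 4096 PGs spread over 100 OSDs
print(pgs_per_osd([(4096, 3)], 100))  # 122.88
```

This is the average; individual OSDs will sit above or below it depending on CRUSH weight and balance, which is why limits like mon_max_pg_per_osd leave headroom.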


Mark

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
