Hi,

Last year this caused a long-lasting issue for me; the autoscaler's pg_num suggestion was completely off. I had an object store cluster with 2B objects, most of them small (around 50 KB), and the only way to avoid the slow ops in my cluster was to:

- increase the PG number
- use the balancer with max deviation 1
- remove WAL+RocksDB from the separate device, because it was continuously spilling over
- run 4 OSDs on each 15 TB SSD

The cluster has been fairly stable for the last half year now.

Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo@xxxxxxxxx
---------------------------------------------------

-----Original Message-----
From: Eugen Block <eblock@xxxxxx>
Sent: Tuesday, July 26, 2022 4:39 PM
To: ceph-users@xxxxxxx
Subject: Impact of many objects per PG

Hi *,

are there any known limitations or impacts of (too) many objects per PG? We're dealing with a performance decrease on Nautilus (I know, but it can't be upgraded at this time) while pushing a million emails (many small objects) into the cluster. At some point, maybe between 600,000 and 900,000 emails or so, the client requests became slower (CephFS kernel clients), although all entities look fine: the MDS daemons are not overloaded and the OSDs are not fully utilized (HDDs with shared RocksDB on SSDs).

The HDD OSDs currently have around 100 PGs per OSD, with 45 GB per PG (quite a lot) and around 180,000 objects per PG. The main data pool has an EC profile with k=4 m=5.

We want to increase the pg_num anyway and expect a general performance increase after the PG splitting, but we're still wondering where the limits are. For example, if we increased mon_max_pg_per_osd to more than 250, we could split the PGs even further to reduce the PG size and the number of objects per PG, provided the OSDs can cope with it utilization-wise, but I'm not sure if that's a good idea. Are there any other tunables we could tweak instead to reduce the performance impact?

Any comments or references are highly appreciated!

Thanks,
Eugen

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
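
For anyone following along, here is a back-of-the-envelope sketch of the split arithmetic Eugen describes, using the figures from his mail. The uniform-spread assumption and the Python framing are illustrative additions, not part of the original exchange:

  # Rough PG split math using the figures from Eugen's mail.
  # Assumption (not from the mail): data and objects are spread
  # uniformly over PGs, so an N-fold pg_num split divides GB/PG and
  # objects/PG by N and multiplies PGs per OSD by N.
  pgs_per_osd = 100          # current PGs per HDD OSD
  gb_per_pg = 45             # current data per PG
  objects_per_pg = 180_000   # current objects per PG
  mon_max_pg_per_osd = 250   # the guardrail mentioned in the mail

  for split in (2, 4, 8):
      pgs = pgs_per_osd * split
      flag = "ok" if pgs <= mon_max_pg_per_osd else "exceeds mon_max_pg_per_osd"
      print(f"x{split} split: {pgs} PGs/OSD ({flag}), "
            f"{gb_per_pg / split:.1f} GB/PG, "
            f"{objects_per_pg // split:,} objects/PG")

A 2x split stays under the 250 PGs/OSD guardrail but still leaves ~22.5 GB and ~90,000 objects per PG; a 4x split gets down to ~11 GB and ~45,000 objects per PG but requires raising mon_max_pg_per_osd, which is exactly the trade-off Eugen is asking about. This only sizes the split itself; the usual rule of thumb for picking a pg_num in the first place (total PGs ≈ OSD count × target PGs per OSD / pool size, where pool size is k+m for EC pools, rounded to a power of two) still applies.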