I’ve sprinkled minimizers below. Free advice and worth every penny. ymmv. Do not taunt Happy Fun Ball.

> during a lot of discussions in the past the comment that having "many
> PGs per OSD can lead to issues" came up without ever explaining what
> these issues will (not might!) be or how one would notice. It comes up
> as kind of a rumor without any factual or even anecdotal backing.

A handful of years ago Sage, IIRC, retconned the PG ratio guidance from 200 down to 100 to help avoid OOMing; the idea being that more PGs = more RAM usage on each daemon that stores the maps. With BlueStore’s osd_memory_target, my sense is that the ballooning we saw with Filestore is much less of an issue.

> As far as I can tell from experience, any increase of resource
> utilization due to an increase of the PG count per OSD is more than
> offset by the performance impact of the reduced size of the PGs.
> Everything seems to benefit from smaller PGs: recovery, user IO,
> scrubbing.

My understanding is that there is serialization in the PG code, so the PG ratio can be thought of as the degree of parallelism the OSD device can handle. SAS/SATA SSDs don’t seek, so they can handle more than HDDs, and NVMe devices can handle more than SAS/SATA.

> Yet, I'm holding back on an increase of PG count due to these rumors.

My personal sense:

* HDD OSD: PG ratio 100-200
* SATA/SAS SSD OSD: 200-300
* NVMe SSD OSD: 300-400

These are not empirical figures. ymmv.

> My situation: I would like to split PGs on large HDDs. Currently, we
> have on average 135 PGs per OSD and I would like to go for something
> like 450.

The good Mr. Nelson may have more precise advice, but my personal sense is that I wouldn’t go higher than 200 on an HDD. If you were at, like, 20 (I’ve seen it!) that would be a different story; my sense is that there are diminishing returns over, say, 150. Seek thrashing fu, elevator scheduling fu, op re-ordering fu, etc. Assuming you’re on Nautilus or later, it doesn’t hurt to experiment with your actual workload, since you can scale pg_num back down. Without Filestore colocated journals, the seek thrashing may be less of an issue than it used to be.

> I heard in related rumors that some users have 1000+ PGs per OSD
> without problems.

On spinners? Or NVMe? On a 60-120 TB NVMe OSD I’d be sorely tempted to try 500-1000.

> I would be very much interested in a non-rumor answer, that is, not an
> answer of the form "it might use more RAM", "it might stress xyz". I
> don't care what a rumor says it might do. I would like to know what it
> will do.

It WILL use more RAM.

> I'm looking for answers of the form "a PG per OSD requires X amount of
> RAM fixed plus Y amount per object"

Derive the size of your map and multiply by the number of OSDs per system. My sense is that it’s on the order of MBs per OSD. Past a certain point, the RAM delta might do more good spent on raising osd_memory_target instead.
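If you want to put a rough number on that derivation, something like the sketch below is about the level of rigor I’d bother with: dump the current binary osdmap, take its size, multiply by the OSDs per host, and eyeball it against osd_memory_target. The 12-OSDs-per-host and 4 GiB figures are placeholders, not your numbers, and this ignores PG log / pg_stat overhead, so treat it as a floor, not a profile. ymmv.

# Rough sketch of the "derive the size of your map" arithmetic above:
# grab the current binary osdmap, multiply its size by the number of OSD
# daemons per host, and compare against osd_memory_target.
import os
import subprocess
import tempfile

OSDS_PER_HOST = 12            # placeholder: set to your chassis density
OSD_MEMORY_TARGET = 4 << 30   # placeholder: the 4 GiB default

def osdmap_size_bytes():
    """Write the current binary osdmap to a temp file and return its size."""
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        path = tmp.name
    try:
        subprocess.run(["ceph", "osd", "getmap", "-o", path], check=True,
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return os.path.getsize(path)
    finally:
        os.unlink(path)

map_size = osdmap_size_bytes()
print(f"osdmap size:          {map_size / 2**20:.2f} MiB")
print(f"x {OSDS_PER_HOST} OSDs per host:  {map_size * OSDS_PER_HOST / 2**20:.2f} MiB")
print(f"osd_memory_target:    {OSD_MEMORY_TARGET / 2**20:.0f} MiB per OSD")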
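To translate the ratio ranges I listed above into an actual pool pg_num, the usual back-of-the-napkin arithmetic looks like the following. The per-device-class targets are just the midpoints of my gut ranges, not project guidance, and the pool_share fudge factor is something you have to estimate for your own workload.

# PGs per OSD ~= pg_num * replica_size / num_osds, so pick pg_num to hit
# the target ratio and round to a power of two.
import math

TARGET_RATIO = {"hdd": 150, "ssd": 250, "nvme": 350}   # placeholders, ymmv

def suggested_pg_num(num_osds, replica_size, device_class, pool_share=1.0):
    """Power-of-two pg_num for a pool expected to hold `pool_share` (0..1)
    of the data on `num_osds` OSDs of the given device class."""
    target = num_osds * TARGET_RATIO[device_class] * pool_share / replica_size
    return 2 ** round(math.log2(max(target, 1)))

# Example: 100 HDD OSDs, 3x replication, one pool holding ~80% of the data
print(suggested_pg_num(100, 3, "hdd", 0.8))   # -> 4096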
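And since Nautilus and later can merge PGs back down, experimenting is cheap: bump pg_num on a pool with `ceph osd pool set <pool> pg_num <n>`, run your actual workload, and then check where the ratio really landed with something like the snippet below (this assumes the "pgs" field in `ceph osd df -f json`, which recent releases report).

# Check where the PGs-per-OSD ratio actually lands after an experiment.
import json
import statistics
import subprocess

out = subprocess.run(["ceph", "osd", "df", "-f", "json"],
                     check=True, capture_output=True, text=True).stdout
pgs = [n["pgs"] for n in json.loads(out)["nodes"]]
print(f"PGs per OSD  min/median/max: "
      f"{min(pgs)} / {statistics.median(pgs)} / {max(pgs)}")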
> or "searching/indexing stuff of kind A in N PGs per OSD requires
> N log N/N²/... operations", "peering of N PGs per OSD requires
> N/N log N/N²/N*#peers/... operations". In other words, what are the
> *actual* resources required to host N PGs with M objects on an OSD
> (note that N*M is a constant per OSD). With that info one could make an
> informed decision, informed by facts not rumors.
>
> An additional question of interest is: Has anyone ever observed any
> detrimental effects of increasing the PG count per OSD to large
> values > 500?

Consider this scenario: an unmanaged lab setup used for successive OpenStack deployments, each of which created two RBD pools and the panoply of RGW pools. Which nobody cleaned up before redeploys, so they accreted like plaque in the arteries of an omnivore. Such that the PG ratio hits 9000. Yes, 9000. Then the building loses power. The systems don’t have nearly enough RAM to boot, peer, and activate, so the entire cluster has to be wiped and redeployed from scratch. An extreme example, but remember that I don’t make stuff up.

> Thanks a lot for any clarifications in this matter!
> =================
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx