Wido,

I am walking an example OSD now and counting the files. 3096 PGs for this pool. So far the file counts inside the pool's pg_head directories are all coming in around ~80k. Is this an issue?

I will report back with all pg_head file counts in this example OSD once it finishes.

Best Regards,
Wade

On Thu, Jun 16, 2016 at 9:38 AM, Wido den Hollander <wido@xxxxxxxx> wrote:
>
>> On 16 June 2016 at 14:14, Wade Holler <wade.holler@xxxxxxxxx> wrote:
>>
>>
>> Hi All,
>>
>> I have a repeatable condition: when the object count in a pool reaches
>> 320-330 million, the object write time increases by as much as 10X,
>> almost instantly, exhibited by fs_apply_latency going from 10ms to
>> 100s of ms.
>>
>
> My first guess is filestore splitting and the number of files per directory.
>
> You have 3*16=48 OSDs, is that correct? With roughly 100 PGs per OSD you have, let's say, 4800 PGs in total?
>
> That means you have ~66k objects per PG.
>
>> Can someone point me in a direction / have an explanation?
>
> If you take a look at one of the OSDs, is there a huge number of files in a single directory? Look inside the 'current' directory on that OSD.
>
> Wido
>
>>
>> I can add a new pool and it performs normally.
>>
>> The config is generally:
>> 3 nodes, 24 physical cores each, 768GB RAM each, 16 OSDs per node, all SSD
>> with NVMe for journals. CentOS 7.2, XFS.
>>
>> Jewel is the release; I am inserting objects with librados via some
>> Python test code.
>>
>> Best Regards,
>> Wade
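
For reference, a minimal sketch of the kind of librados insert loop Wade describes above, using the python-rados bindings. The pool name, object naming scheme, object count, and payload size here are assumptions for illustration, not Wade's actual test code:

    # Minimal librados insert loop (sketch; pool name, object names,
    # and 4 KB payload are assumptions, not the actual test code).
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('testpool')   # hypothetical pool name
        payload = b'x' * 4096                    # hypothetical object size
        for i in range(1000):
            ioctx.write_full('obj-%d' % i, payload)
        ioctx.close()
    finally:
        cluster.shutdown()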
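Wido's arithmetic, plus the filestore directory-split threshold, can be written out as a quick back-of-the-envelope calculation. As I read the filestore docs, a PG subdirectory splits once it exceeds filestore_split_multiple * abs(filestore_merge_threshold) * 16 files; the default values used below (2 and 10) are my assumption for a stock Jewel install:

    # Back-of-the-envelope numbers from this thread, not measured values.
    osds = 3 * 16                  # 3 nodes x 16 OSDs
    pgs_total = 4800               # Wido's estimate: ~100 PGs per OSD
    objects = 320 * 10**6          # object count where latency jumps

    objects_per_pg = objects / pgs_total   # ~66k, matching Wido's figure

    # Filestore splits a PG subdirectory once it holds more than
    # split_multiple * abs(merge_threshold) * 16 files
    # (assumed Jewel defaults: 2 and 10, i.e. 320 files per directory).
    split_multiple = 2
    merge_threshold = 10
    split_threshold = split_multiple * abs(merge_threshold) * 16

    print('objects per PG: ~%dk' % (objects_per_pg / 1000))
    print('files per dir before split: %d' % split_threshold)

At ~66k objects per PG, each PG has long since split into a deep subdirectory tree, and each split pass rehashes files into new subdirectories, which is one plausible source of the sudden latency jump.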
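
Finally, a rough sketch of how the per-PG file counts Wade is gathering could be collected. The OSD mount path below is an assumption (a typical default layout); adjust it to the cluster in question:

    # Count files under each *_head directory of a filestore OSD.
    # The OSD path is an assumption; adjust to your cluster layout.
    import os
    from collections import Counter

    osd_current = '/var/lib/ceph/osd/ceph-0/current'  # hypothetical OSD id

    counts = Counter()
    for entry in os.listdir(osd_current):
        if not entry.endswith('_head'):
            continue
        pg_dir = os.path.join(osd_current, entry)
        total = 0
        for _root, _dirs, files in os.walk(pg_dir):
            total += len(files)
        counts[entry] = total

    # Print the ten fullest PG directories.
    for pg, n in counts.most_common(10):
        print('%s: %d files' % (pg, n))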