Re: Dramatic performance drop at certain number of objects in pool

> On 16 June 2016 at 18:08, Wade Holler <wade.holler@xxxxxxxxx> wrote:
> 
> 
> OK.  Of the 202 PGs on this example OSD:
> 
> 65 of them have ~160k files
> 137 (the rest) have ~80k files
> 

Are those files in the same directory or spread out over multiple subdirectories?
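
For example, a quick script along these lines will print the total file count per PG _head directory (an untested sketch; the OSD path is just an example, adjust for your cluster):

    import os

    # example OSD data dir (an assumption, adjust for your cluster)
    base = '/var/lib/ceph/osd/ceph-0/current'
    for entry in sorted(os.listdir(base)):
        if entry.endswith('_head'):
            pg_dir = os.path.join(base, entry)
            # count files in the PG directory and all of its subdirectories
            total = sum(len(files) for _, _, files in os.walk(pg_dir))
            print(entry, total)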

You might want to take a look at: http://docs.ceph.com/docs/jewel/rados/configuration/filestore-config-ref/

"filestore split multiple"

Wido

> 
> 
> On Thu, Jun 16, 2016 at 10:47 AM, Wade Holler <wade.holler@xxxxxxxxx> wrote:
> > Wido,
> >
> > I am walking an example OSD now and counting the files. There are 3096 PGs for this pool.
> > So far the file counts inside the pool.pg_head directories are all coming
> > in at ~80k.
> >
> > Is this an issue?
> >
> > I will report back with all pg_head file counts in this example OSD
> > once it finishes.
> >
> > Best Regards,
> > Wade
> >
> >
> > On Thu, Jun 16, 2016 at 9:38 AM, Wido den Hollander <wido@xxxxxxxx> wrote:
> >>
> >>> On 16 June 2016 at 14:14, Wade Holler <wade.holler@xxxxxxxxx> wrote:
> >>>
> >>>
> >>> Hi All,
> >>>
> >>> I have a repeatable condition: when the object count in a pool reaches
> >>> 320-330 million, object write time increases dramatically and almost
> >>> instantly, by as much as 10x, with fs_apply_latency
> >>> going from 10 ms to hundreds of ms.
> >>>
> >>
> >> My first guess is filestore splitting and the number of files per directory.
> >>
> >> You have 3*16 = 48 OSDs, is that correct? With roughly 100 PGs per OSD, that is about 4800 PGs in total?
> >>
> >> That means you have roughly 320 million / 4800 ≈ 66k objects per PG.
> >>
> >>> Can someone point me in a direction or offer an explanation?
> >>
> >> If you take a look at one of the OSDs, are there a huge number of files in a single directory? Look inside the 'current' directory on that OSD.
> >>
> >> Wido
> >>
> >>>
> >>> I can add a new pool and it performs normally.
> >>>
> >>> The config is roughly:
> >>> 3 nodes with 24 physical cores each, 768 GB RAM each, 16 OSDs per node, all SSD
> >>> with NVMe for journals. CentOS 7.2, XFS.
> >>>
> >>> Jewel is the release; I am inserting objects with librados via some Python
> >>> test code.
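> >>>
> >>> (The test loop is essentially the following; a simplified sketch with
> >>> made-up pool and object names, not the exact code:)
> >>>
> >>>     import rados
> >>>
> >>>     # connect using the cluster config (path is the usual default)
> >>>     cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
> >>>     cluster.connect()
> >>>     ioctx = cluster.open_ioctx('testpool')   # hypothetical pool name
> >>>     try:
> >>>         for i in range(1000000):
> >>>             # one small object per iteration
> >>>             ioctx.write_full('obj-%d' % i, b'x' * 4096)
> >>>     finally:
> >>>         ioctx.close()
> >>>         cluster.shutdown()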
> >>>
> >>> Best Regards
> >>> Wade


