Re: Frequent LARGE_OMAP_OBJECTS in cephfs metadata pool

On Mon, Feb 24, 2020 at 2:28 PM Uday Bhaskar jalagam
<jalagam.ceph@xxxxxxxxx> wrote:
>
> Thanks Patrick,
>
> is this the bug you are referring to https://tracker.ceph.com/issues/42515 ?

Yes

> We also see performance issues, mainly on metadata operations such as file stat lookups; however, mds perf dump shows no sign of any latencies. Could this bug cause any performance issues?

Unlikely.

> Do you see any clue in this that could cause a slowdown in such operations? Our metadata pool has around 1.7 GB of data, and I gave the MDS cache 3 GB.

A 3 GB cache is probably too small for your cluster. How many users? Your
perf dump indicates it is actually about 8 GB, not 3 GB.
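For reference, the limit can be checked and raised roughly like this (a
sketch; the daemon name mds.a and the 8 GiB value are just placeholders
for your setup):

    # limit as seen by one running MDS daemon (via its admin socket)
    ceph daemon mds.a config get mds_cache_memory_limit
    # raise the limit for all MDS daemons to 8 GiB (value is in bytes)
    ceph config set mds mds_cache_memory_limit 8589934592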

> I am not sure where to check how much of the 3 GB is used, or what the hit and miss count/ratio is in the cache.

      "mds_co_bytes": 8160499164,

in your perf dump. You can also look at inodes added/removed (to
identify churn):

    "mds_mem": {
        "ino": 2740340,
        "ino+": 19461742,
        "ino-": 16721402,

-- 
Patrick Donnelly, Ph.D.
He / Him / His
Senior Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


