Re: CephFS ghost usage/inodes

On Tue, Jan 14, 2020 at 5:15 AM Florian Pritz
<florian.pritz@xxxxxxxxxxxxxx> wrote:
> `ceph daemon mds.$hostname perf dump | grep stray` shows:
>
> > "num_strays": 0,
> > "num_strays_delayed": 0,
> > "num_strays_enqueuing": 0,
> > "strays_created": 5097138,
> > "strays_enqueued": 5097138,
> > "strays_reintegrated": 0,
> > "strays_migrated": 0,

Can you also paste the purge queue ("pq") perf dump?
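For reference, the request above amounts to filtering the same `perf dump` output for the purge-queue section instead of the stray counters. A minimal sketch follows; the `purge_queue` section name and the `pq_*` counter values in the sample blob are assumptions for illustration, not output from this cluster:

```shell
# On a live MDS this would be something like:
#   ceph daemon mds.$hostname perf dump | grep -A8 '"purge_queue"'
# Offline illustration against a fabricated perf-dump fragment
# (counter names/values assumed, not real cluster output):
sample='{"mds_cache":{"num_strays":0},"purge_queue":{"pq_executing":0,"pq_executed":5097138}}'
echo "$sample" | grep -o '"purge_queue":{[^}]*}'
```

If `strays_enqueued` keeps climbing while the `pq_executed` counter stalls, that points at the purge queue rather than stray accounting.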

It's possible the MDS has hit an ENOSPC condition that caused the MDS
to go read-only. This would prevent the MDS PurgeQueue from cleaning
up. Do you see a health warning that the MDS is in this state? If so,
please try restarting the MDS.
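Checking for that condition might look like the following sketch; the exact health-warning wording varies by release (the string here is an assumption), and the restart command assumes a systemd deployment:

```shell
# On a live cluster:
#   ceph health detail | grep -i 'read.only'
# and, if the warning is present, restart the daemon:
#   systemctl restart ceph-mds@$hostname
# Offline illustration against a fabricated health line
# (warning text assumed, not real cluster output):
health='HEALTH_WARN 1 MDSs are read only'
echo "$health" | grep -ci 'read.only'
```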

-- 
Patrick Donnelly, Ph.D.
He / Him / His
Senior Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


