Re: CephFS ghost usage/inodes

Hello Patrick,

    "purge_queue": {
        "pq_executing_ops": 0,
        "pq_executing": 0,
        "pq_executed": 5097138
    },

We already restarted the MDS daemons, but no change.
There are no health warnings other than the one Florian already
mentioned.
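
As far as we can tell, a read-only MDS would also surface as an explicit warning (along the lines of "MDS in read-only mode"), e.g.:

    # look for an explicit MDS read-only warning in cluster health
    ceph health detail

but we see nothing like that either.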

cheers Oskar

On 14.01.20 at 17:32, Patrick Donnelly wrote:
> On Tue, Jan 14, 2020 at 5:15 AM Florian Pritz
> <florian.pritz@xxxxxxxxxxxxxx> wrote:
>> `ceph daemon mds.$hostname perf dump | grep stray` shows:
>>
>>> "num_strays": 0,
>>> "num_strays_delayed": 0,
>>> "num_strays_enqueuing": 0,
>>> "strays_created": 5097138,
>>> "strays_enqueued": 5097138,
>>> "strays_reintegrated": 0,
>>> "strays_migrated": 0,
> Can you also paste the purge queue ("pq") perf dump?
>
> It's possible the MDS has hit an ENOSPC condition that caused the MDS
> to go read-only. This would prevent the MDS PurgeQueue from cleaning
> up. Do you see a health warning that the MDS is in this state? If so,
> please try restarting the MDS.
>