Re: CephFS ghost usage/inodes

$ ceph daemon mds.who flush journal
{
    "message": "",
    "return_code": 0
}
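
(Side note: before the destructive steps below, the journal can be
inspected non-destructively; I believe the matching invocation for
this filesystem would be the following, with the rank name taken from
the journal reset command further down:

$ cephfs-journal-tool --rank=cephfs_test1:0 journal inspect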


$ cephfs-table-tool 0 reset session
{
    "0": {
        "data": {},
        "result": 0
    }
}

$ cephfs-table-tool 0 reset snap
{
    "result": 0
}

$ cephfs-table-tool 0 reset inode
{
    "0": {
        "data": {},
        "result": 0
    }
}
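
If I read the cephfs-table-tool man page correctly, the same tool can
also dump a table instead of resetting it, which would let one check
the state first, e.g.:

$ cephfs-table-tool 0 show inode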

$ cephfs-journal-tool --rank=cephfs_test1:0 journal reset
old journal was 98282151365~92872
new journal start will be 98285125632 (2881395 bytes past old end)
writing journal head
writing EResetJournal entry
done
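
Note for anyone following along: the disaster-recovery docs recommend
exporting a backup of the journal before resetting it, along the
lines of:

$ cephfs-journal-tool --rank=cephfs_test1:0 journal export backup.bin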

$ cephfs-data-scan init
Inode 0x0x1 already exists, skipping create.  Use --force-init to
overwrite the existing object.
Inode 0x0x100 already exists, skipping create.  Use --force-init to
overwrite the existing object.

Should I run with the --force-init flag?
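
For context, the procedure in the docs Patrick linked continues after
"init" with the scan steps (the data pool name is a placeholder, as
in the docs):

$ cephfs-data-scan scan_extents <data pool>
$ cephfs-data-scan scan_inodes <data pool>
$ cephfs-data-scan scan_links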

On 14.01.20 at 18:48, Patrick Donnelly wrote:
> Please try flushing the journal:
>
> ceph daemon mds.foo flush journal
>
> The problem may be caused by this bug: https://tracker.ceph.com/issues/43598
>
> As for what to do next, you would likely need to recover the deleted
> inodes from the data pool so you can retry deleting the files:
> https://docs.ceph.com/docs/master/cephfs/disaster-recovery-experts/#recovery-from-missing-metadata-objects
>
>
> On Tue, Jan 14, 2020 at 9:30 AM Oskar Malnowicz
> <oskar.malnowicz@xxxxxxxxxxxxxx> wrote:
>> Hello Patrick,
>>
>>     "purge_queue": {
>>         "pq_executing_ops": 0,
>>         "pq_executing": 0,
>>         "pq_executed": 5097138
>>     },
>>
>> We already restarted the MDS daemons, but nothing changed.
>> There are no health warnings other than the one Florian already
>> mentioned.
>>
>> cheers Oskar
>>
>> On 14.01.20 at 17:32, Patrick Donnelly wrote:
>>> On Tue, Jan 14, 2020 at 5:15 AM Florian Pritz
>>> <florian.pritz@xxxxxxxxxxxxxx> wrote:
>>>> `ceph daemon mds.$hostname perf dump | grep stray` shows:
>>>>
>>>>> "num_strays": 0,
>>>>> "num_strays_delayed": 0,
>>>>> "num_strays_enqueuing": 0,
>>>>> "strays_created": 5097138,
>>>>> "strays_enqueued": 5097138,
>>>>> "strays_reintegrated": 0,
>>>>> "strays_migrated": 0,
>>> Can you also paste the purge queue ("pq") perf dump?
>>>
>>> It's possible the MDS has hit an ENOSPC condition that caused the MDS
>>> to go read-only. This would prevent the MDS PurgeQueue from cleaning
>>> up. Do you see a health warning that the MDS is in this state? If so,
>>> please try restarting the MDS.
>>>
>
>
> --
> Patrick Donnelly, Ph.D.
> He / Him / His
> Senior Software Engineer
> Red Hat Sunnyvale, CA
> GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



