Re: CephFS ghost usage/inodes

As Florian already wrote, `du -hs` shows a total usage of 31G, but `ceph
df` reports a usage of 2.1 TiB:

# du -hs
31G

# ceph df
POOL         ID    STORED        OBJECTS     USED        %USED       MAX AVAIL
cephfs_data  6     2.1 TiB       2.48M       2.1 TiB     25.00       3.1 TiB
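A gap this large usually means the data pool holds objects that no longer
belong to any file in the tree. A quick cross-check (a sketch only; the pool
name cephfs_data comes from the output above, while the mount point
/mnt/cephfs is an assumption, adjust to yours):

$ rados -p cephfs_data ls | wc -l    # objects actually stored in the pool
$ find /mnt/cephfs -type f | wc -l   # files visible in the mounted tree

With the default 4 MiB object size, 31G of file data needs on the order of
10k objects (small files take one object each), so 2.48M objects would only
be plausible if the tree held millions of tiny files.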

On 14.01.20 at 20:44, Patrick Donnelly wrote:
> On Tue, Jan 14, 2020 at 11:40 AM Oskar Malnowicz
> <oskar.malnowicz@xxxxxxxxxxxxxx> wrote:
>> I ran these commands, but the problems persist.
> Which problems?
>
>> $ cephfs-data-scan scan_extents cephfs_data
>>
>> $ cephfs-data-scan scan_inodes cephfs_data
>>
>> $ cephfs-data-scan scan_links
>> 2020-01-14 20:36:45.110 7ff24200ef80 -1 mds.0.snap  updating last_snap 1 -> 27
>>
>> $ cephfs-data-scan cleanup cephfs_data
>>
>> Do you have other ideas?
> After you complete this, you should see the deleted files in your file
> system tree (if this is indeed the issue). What's the output of `du
> -hc`?
>
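For reference, the upstream disaster-recovery documentation runs these tools
only while the filesystem is offline, and files recovered by the scans are
re-linked under a lost+found directory at the root of the filesystem. A
minimal sketch of that sequence (the filesystem name cephfs is an assumption,
and the journal should be backed up first):

$ ceph fs set cephfs down true    # take all MDS ranks offline first
$ cephfs-data-scan scan_extents cephfs_data
$ cephfs-data-scan scan_inodes cephfs_data
$ cephfs-data-scan scan_links
$ cephfs-data-scan cleanup cephfs_data
$ ceph fs set cephfs down false   # bring the MDS back up

If the ghost data was recoverable, `du -hc` on a fresh mount should then
account for it under lost+found.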

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



