Re: CephFS ghost usage/inodes

This is the new state. The results are the same as Florian's:

$ time cephfs-data-scan scan_extents cephfs_data
cephfs-data-scan scan_extents cephfs_data  1.86s user 1.47s system 21% cpu 15.397 total

$ time cephfs-data-scan scan_inodes cephfs_data
cephfs-data-scan scan_inodes cephfs_data  2.76s user 2.05s system 26% cpu 17.912 total

$ time cephfs-data-scan scan_links
cephfs-data-scan scan_links  0.13s user 0.11s system 31% cpu 0.747 total

$ time cephfs-data-scan scan_links
cephfs-data-scan scan_links  0.13s user 0.12s system 33% cpu 0.735 total

$ time cephfs-data-scan cleanup cephfs_data
cephfs-data-scan cleanup cephfs_data  1.64s user 1.37s system 12% cpu 23.922 total
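
If deleted files were stuck in the MDS stray directories instead of being
purged, that should show up in the MDS perf counters. A quick check (just a
sketch; replace mds.a with the id of your active MDS daemon):

$ ceph daemon mds.a perf dump | grep -E '"num_strays|"pq_'
# num_strays*: deleted but not yet purged inodes held by the MDS
# pq_*: purge queue progress counters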

mds / $ du -sh
31G

$ df -h
ip1,ip2,ip3:/  5.2T  2.1T  3.1T  41% /storage/cephfs_test1
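
For comparison, the recursive statistics the MDS itself keeps for the tree
can be read through the virtual xattrs on the mount point (assuming the mount
shown above and a client that exposes the ceph.dir.* xattrs):

$ getfattr -n ceph.dir.rbytes /storage/cephfs_test1
$ getfattr -n ceph.dir.rfiles /storage/cephfs_test1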

$ ceph df detail
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
    hdd       7.8 TiB     7.5 TiB     312 GiB      329 GiB          4.14
    TOTAL     7.8 TiB     7.5 TiB     312 GiB      329 GiB          4.14

POOLS:
    POOL                ID     STORED      OBJECTS     USED        %USED     MAX AVAIL     QUOTA OBJECTS     QUOTA BYTES     DIRTY     USED COMPR     UNDER COMPR
    cephfs_data          6     2.1 TiB       2.48M     2.1 TiB     25.00       3.1 TiB     N/A               N/A             2.48M            0 B             0 B
    cephfs_metadata      7     7.3 MiB         379     7.3 MiB         0       3.1 TiB     N/A               N/A               379            0 B             0 B
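
One way to check whether the extra ~2 TiB is held by objects that no longer
belong to any file is to compare the inode prefixes of the data pool objects
with the inodes visible under the mount. A rough sketch, using the pool name
and mount path from above (listing 2.48M objects will take a while):

# data objects are named <inode-hex>.<block-hex>; collect the distinct inodes
$ rados -p cephfs_data ls | cut -d. -f1 | sort -u > /tmp/pool_inodes

# inodes reachable from the mount, converted to hex to match the object names
$ find /storage/cephfs_test1 -printf '%i\n' \
      | awk '{printf "%x\n", $1}' | sort -u > /tmp/fs_inodes

# inodes that still have objects in the pool but no file in the tree
$ comm -23 /tmp/pool_inodes /tmp/fs_inodes | wc -l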


On 14.01.20 at 21:06, Patrick Donnelly wrote:
> I'm asking that you get the new state of the file system tree after
> recovering from the data pool. Florian wrote that before I asked you
> to do this...
>
> How long did it take to run the cephfs-data-scan commands?
>
> On Tue, Jan 14, 2020 at 11:58 AM Oskar Malnowicz
> <oskar.malnowicz@xxxxxxxxxxxxxx> wrote:
>> As Florian already wrote, `du -hc` shows a total usage of 31G, but `ceph
>> df` shows a usage of 2.1 TiB.
>>
>> </ mds># du -hs
>> 31G
>>
>> # ceph df
>> cephfs_data  6     2.1 TiB       2.48M     2.1 TiB     25.00       3.1 TiB
>>
>> On 14.01.20 at 20:44, Patrick Donnelly wrote:
>>> On Tue, Jan 14, 2020 at 11:40 AM Oskar Malnowicz
>>> <oskar.malnowicz@xxxxxxxxxxxxxx> wrote:
>>>> I ran these commands, but the problems are still the same.
>>> Which problems?
>>>
>>>> $ cephfs-data-scan scan_extents cephfs_data
>>>>
>>>> $ cephfs-data-scan scan_inodes cephfs_data
>>>>
>>>> $ cephfs-data-scan scan_links
>>>> 2020-01-14 20:36:45.110 7ff24200ef80 -1 mds.0.snap  updating last_snap 1 -> 27
>>>>
>>>> $ cephfs-data-scan cleanup cephfs_data
>>>>
>>>> Do you have other ideas?
>>> After you complete this, you should see the deleted files in your file
>>> system tree (if this is indeed the issue). What's the output of `du
>>> -hc`?
>>>
>>
>
