Re: Inconsistent Space Usage reporting

Thanks. Let me try it and I'll report back.

-----Original Message-----
From: Adam Tygart <mozes@xxxxxxx> 
Sent: Tuesday, November 3, 2020 12:42 PM
To: Vikas Rana <vrana@xxxxxxxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxx>
Subject: Re: Re: Inconsistent Space Usage reporting

I'm not sure exactly what you're doing with your volumes.

It looks like fcp might be size 3, while nfs is size 1, possibly with a 200TB RBD volume inside it that is nbd-mapped and mounted on another box. If so, you can likely reclaim the space held by deleted files with fstrim, if your filesystem supports it.
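
A minimal sketch of that, assuming the filesystem in question is the one on /dev/nbd0 mounted at /vol/dir_research in the df output below, that it supports discard (XFS and ext4 do), and that the nbd mapping passes discards through to RBD:

# fstrim -v /vol/dir_research

fstrim asks the mounted filesystem to discard its unused blocks; with -v it reports how many bytes were trimmed. Once RBD sees the discards it can deallocate the backing objects, so the USED figure for the nfs pool in "ceph df" should drop accordingly. You can compare the image's actual allocated size before and after with "rbd du" on that image.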

--
Adam

On Tue, Nov 3, 2020 at 11:00 AM Vikas Rana <vrana@xxxxxxxxxxxx> wrote:
>
> Any help or direction on the case below is highly appreciated.
>
> Thanks,
> -Vikas
>
> -----Original Message-----
> From: Vikas Rana <vrana@xxxxxxxxxxxx>
> Sent: Monday, November 2, 2020 12:53 PM
> To: ceph-users@xxxxxxx
> Subject: Inconsistent Space Usage reporting
>
> Hi Friends,
>
>
>
> We are seeing some inconsistent space usage reporting. We have used only
> 46TB with a single copy, but the space used on the pool is close to 128TB.
>
> Any idea where the extra space is being used and how to reclaim it?
>
>
>
> Ceph version: 12.2.11 with XFS OSDs. We are planning to upgrade soon.
>
>
>
> # ceph df detail
>
> GLOBAL:
>     SIZE       AVAIL      RAW USED     %RAW USED     OBJECTS
>     363TiB     131TiB       231TiB         63.83      43.80M
>
> POOLS:
>     NAME    ID    QUOTA OBJECTS    QUOTA BYTES    USED        %USED    MAX AVAIL    OBJECTS     DIRTY     READ       WRITE      RAW USED
>     fcp     15    N/A              N/A            23.6TiB     42.69    31.7TiB      3053801     3.05M     6.10GiB    12.6GiB    47.3TiB
>     nfs     16    N/A              N/A            128TiB      66.91    63.4TiB      33916181    33.92M    3.93GiB    4.73GiB    128TiB
>
> # df -h
>
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/nbd0       200T   46T  155T  23% /vol/dir_research
>
> # ceph osd pool get nfs all
>
> size: 1
> min_size: 1
> crash_replay_interval: 0
> pg_num: 128
> pgp_num: 128
> crush_rule: replicated_ruleset
> hashpspool: true
> nodelete: false
> nopgchange: false
> nosizechange: false
> write_fadvise_dontneed: false
> noscrub: false
> nodeep-scrub: false
> use_gmt_hitset: 1
> auid: 0
> fast_read: 0
>
> Appreciate your help.
>
>
>
> Thanks,
>
> -Vikas
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


