Re: Why is my cephfs almost full?

Hi Jorge,

On 4/6/23 07:09, Jorge Garcia wrote:
We have a ceph cluster with a cephfs filesystem that we use mostly for backups. When I do a "ceph -s" or a "ceph df", it reports lots of space:

    data:
      pools:   3 pools, 4104 pgs
      objects: 1.09 G objects, 944 TiB
      usage:   1.5 PiB used, 1.0 PiB / 2.5 PiB avail

  GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED
    2.5 PiB     1.0 PiB      1.5 PiB         59.76
  POOLS:
    NAME                ID     USED        %USED     MAX AVAIL     OBJECTS
    cephfs_data         2      944 TiB     87.63     133 TiB       880988429
    cephfs_metadata     3      128 MiB     0         62 TiB        206535313
    .rgw.root           4      0 B         0         62 TiB        0

The whole thing consists of 2 pools: metadata (regular default replication) and data (erasure k:5 m:2). The global raw space reports 2.5 PiB total, with 1.0 PiB still available. But, when the ceph filesystem is mounted, it only reports 1.1 PB total, and the filesystem is almost full:

   Filesystem         Size  Used Avail Use% Mounted on
   x.x.x.x:yyyy:/    1.1P  944T  134T  88% /backups

So, where is the rest of my space? Or what am I missing?

Which Ceph version are you using?

On recent Ceph releases the output looks like this:

$ ./bin/ceph df
--- RAW STORAGE ---
CLASS     SIZE    AVAIL    USED  RAW USED  %RAW USED
hdd    303 GiB  288 GiB  15 GiB    15 GiB       4.98
TOTAL  303 GiB  288 GiB  15 GiB    15 GiB       4.98

--- POOLS ---
POOL           ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
.mgr            1    1  577 KiB        2  1.7 MiB      0     95 GiB
cephfs.a.meta   2   16   28 MiB       29   84 MiB   0.03     95 GiB
cephfs.a.data   3   32  4.0 GiB    1.02k   12 GiB   4.04     95 GiB

$ df -h
Filesystem Size  Used Avail Use% Mounted on
192.168.0.104:40554,192.168.0.104:40556,192.168.0.104:40558:/ 99G  4.0G   95G   5% /mnt/kcephfs

At the mountpoint "/mnt/kcephfs" the used disk space is 4 GB, because I just created a single 4 GB file there. In `ceph df`, STORED for the data pool is also 4 GB, while USED is 12 GB, because the pool is replica x3.
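To make the arithmetic explicit, here is a tiny sketch (plain Python, not Ceph code; the numbers are simply the ones from the output above) of how the replicated-pool figures relate:

# Rough sketch of how `ceph df` and `df` relate for a replica x3 pool.
# Numbers taken from the example output above.
replica_size = 3
stored = 4.0                       # GiB logically written (the 4 GiB file)
used = stored * replica_size       # ~12 GiB raw, the USED column
max_avail = 95.0                   # GiB, MAX AVAIL of cephfs.a.data
df_size = stored + max_avail       # ~99 GiB, the Size that `df -h` shows
print(used, df_size)               # 12.0 99.0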

In your case, with the data pool being erasure coded (k:5 m:2), the 1.1P reported by df covers only the efficient (STORED) data space; it does not include compression, allocation and the other overheads.
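Applying the same idea to your numbers gives roughly the figures you see. This is only a back-of-the-envelope sketch (again plain Python) that ignores the metadata pool, compression and allocation overhead:

# Back-of-the-envelope for an erasure-coded pool with k=5, m=2.
k, m = 5, 2
stored = 944.0                     # TiB shown as USED for cephfs_data
raw_used = stored * (k + m) / k    # ~1322 TiB (~1.3 PiB) of raw space,
                                   # close to the 1.5 PiB from `ceph -s`
                                   # once the other overheads are added
max_avail = 133.0                  # TiB, MAX AVAIL for cephfs_data
df_size = stored + max_avail       # ~1077 TiB, i.e. the ~1.1P that df shows
print(raw_used, df_size)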

You can get more detail from [1], [2] and [3].


[1] https://tracker.ceph.com/issues/20870
[2] https://tracker.ceph.com/issues/22159
[3] https://github.com/ceph/ceph/commit/db5c5cce5513080ab81c69a705440174645332a8

Thanks

- Xiubo



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



