Why is my cephfs almost full?

We have a Ceph cluster with a CephFS filesystem that we use mostly for backups. When I run "ceph -s" or "ceph df", both report plenty of free space:

    data:
      pools:   3 pools, 4104 pgs
      objects: 1.09 G objects, 944 TiB
      usage:   1.5 PiB used, 1.0 PiB / 2.5 PiB avail

  GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED
    2.5 PiB     1.0 PiB      1.5 PiB         59.76
  POOLS:
    NAME                ID     USED        %USED     MAX AVAIL     OBJECTS
    cephfs_data         2      944 TiB     87.63       133 TiB   880988429
    cephfs_metadata     3      128 MiB         0        62 TiB   206535313
    .rgw.root           4          0 B         0        62 TiB           0
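
For reference, my own back-of-the-envelope check on the raw usage. This is just a quick Python sketch; the 7/5 factor is my assumption from the stated k=5, m=2 profile, and it ignores allocation overhead:

    # Rough raw-space estimate for the EC data pool: with k=5, m=2,
    # every 5 units of data should consume about 7 units of raw space.
    K, M = 5, 2
    data_stored_tib = 944                      # cephfs_data USED from "ceph df"
    raw_estimate_tib = data_stored_tib * (K + M) / K
    print(f"{raw_estimate_tib:.0f} TiB ~= {raw_estimate_tib / 1024:.2f} PiB")
    # -> 1322 TiB ~= 1.29 PiB, at least in the ballpark of the reported
    #    1.5 PiB RAW USED (the rest presumably metadata and overhead).

So the raw numbers more or less add up; it is the mounted size that puzzles me.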

The filesystem consists of two pools: cephfs_metadata (regular default replication) and cephfs_data (erasure coded, k=5, m=2). The global raw numbers show 2.5 PiB total with 1.0 PiB still available, but when the filesystem is mounted, df reports only 1.1P total and it is almost full:

   Filesystem         Size  Used Avail Use% Mounted on
   x.x.x.x:yyyy:/    1.1P  944T  134T  88% /backups
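
What strikes me (and maybe this is already the answer, so please correct me if I am reading it wrong) is that the mounted size seems to track the data pool's USED + MAX AVAIL rather than the raw cluster totals. A quick check with the numbers above:

    # Data pool figures from the "ceph df" output above.
    used_tib = 944        # cephfs_data USED
    max_avail_tib = 133   # cephfs_data MAX AVAIL
    total_tib = used_tib + max_avail_tib
    print(f"{total_tib} TiB ~= {total_tib / 1024:.2f} PiB")
    # -> 1077 TiB ~= 1.05 PiB, which lines up with the 1.1P "Size" df
    #    reports, and the 134T "Avail" matches the pool's MAX AVAIL.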

So, where is the rest of my space? Or what am I missing?

Thanks!