Re: Why is my cephfs almost full?

Hi Jorge,

First of all, it would be really helpful if you did not truncate the output of 'ceph status' or omit the output of commands you refer to, like 'ceph df'. We have seen far too many cases where the clue was in the omitted part.

Without that information, my best guesses, in order of likelihood (based on the many cases of this type seen on this list), are:

- the pool does not actually use all OSDs
- you have an imbalance in your cluster and at least one OSD/failure domain is 85-90% full
- you have a huge amount of small files/objects in the data pool and suffer from allocation amplification
- you have a quota on the data pool
- there is an error in the crush map

If you provide a reasonable amount of information, such as the full output of 'ceph status', 'ceph df detail' and 'ceph osd df tree' (please use https://pastebin.com/), I'm willing to give it a second try. You may also - before replying - investigate a bit on your own and include any potentially relevant information *in addition* to the full output of these commands, plus anything else that looks odd.
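
For reference, a quick sketch of the commands that would cover the points above (the pool name cephfs_data is taken from your 'ceph df' output below; adjust it if yours differs). All of these only read state, nothing is changed:

    # full cluster overview
    ceph status
    ceph df detail
    ceph osd df tree

    # per-pool settings: crush rule, EC profile, quotas
    ceph osd pool ls detail
    ceph osd pool get-quota cephfs_data
    ceph osd crush rule dump

    # balancer state, in case a single OSD/failure domain is running full
    ceph balancer status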

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Jorge Garcia <jgarcia@xxxxxxxxxxxx>
Sent: Thursday, April 6, 2023 1:09 AM
To: ceph-users
Subject:  Why is my cephfs almost full?

We have a ceph cluster with a cephfs filesystem that we use mostly for
backups. When I do a "ceph -s" or a "ceph df", it reports lots of space:

     data:
       pools:   3 pools, 4104 pgs
       objects: 1.09 G objects, 944 TiB
       usage:   1.5 PiB used, 1.0 PiB / 2.5 PiB avail

   GLOBAL:
     SIZE        AVAIL       RAW USED     %RAW USED
     2.5 PiB     1.0 PiB      1.5 PiB         59.76
   POOLS:
     NAME                ID     USED        %USED     MAX AVAIL     OBJECTS
     cephfs_data         2      944 TiB     87.63       133 TiB     880988429
     cephfs_metadata     3      128 MiB         0        62 TiB     206535313
     .rgw.root           4          0 B         0        62 TiB             0

The whole thing consists of 2 pools: metadata (regular default
replication) and data (erasure k:5 m:2). The global raw space reports
2.5 PiB total, with 1.0 PiB still available. But, when the ceph
filesystem is mounted, it only reports 1.1 PB total, and the filesystem
is almost full:

    Filesystem         Size  Used Avail Use% Mounted on
    x.x.x.x:yyyy:/    1.1P  944T  134T  88% /backups

So, where is the rest of my space? Or what am I missing?
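
For what it's worth, the df numbers seem to come straight from the data pool (assuming the mounted size is reported as the data pool's USED + MAX AVAIL):

    944 TiB (cephfs_data USED) + 133 TiB (MAX AVAIL) ~= 1077 TiB ~= 1.1 PiB   <- the df "Size"
    1.0 PiB raw free at k=5/m=2 would be roughly 1.0 PiB * 5/7 ~= 730 TiB usable

so I guess the real question is why MAX AVAIL is only 133 TiB when there is 1.0 PiB of raw space free.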

Thanks!
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx