Nautilus CephFS usage

Hello,

I have a Nautilus cluster with a CephFS volume. Grafana shows that the cephfs_data pool is almost full[1], but when I look at the pool
usage, it seems I have plenty of space. Which metrics does Grafana use?

1. https://framapic.org/5r7J86s55x6k/jGSIsjEUPYMU.png
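
If the panel comes from the stock Ceph dashboards fed by the ceph-mgr
prometheus module, I assume it plots raw usage, i.e. something like
ceph_pool_bytes_used against ceph_pool_max_avail, rather than the logical
STORED figure. Here is a minimal sketch to dump the values such a panel
would see (the metric names are the ones the Nautilus prometheus module
exports as far as I can tell; the Prometheus URL is a placeholder):

    #!/usr/bin/env python3
    # Sketch: print each pool's fill ratio the way a dashboard panel might
    # compute it, by querying Prometheus for the ceph-mgr module's metrics.
    # PROM_URL is a placeholder; pool_id is the label Nautilus attaches.
    import json
    import urllib.parse
    import urllib.request

    PROM_URL = "http://prometheus.example:9090"  # placeholder, adjust

    def instant_query(expr):
        qs = urllib.parse.urlencode({"query": expr})
        with urllib.request.urlopen(PROM_URL + "/api/v1/query?" + qs) as resp:
            return json.load(resp)["data"]["result"]

    # Raw used / (raw used + projected avail), per pool.
    expr = "ceph_pool_bytes_used / (ceph_pool_bytes_used + ceph_pool_max_avail)"
    for sample in instant_query(expr):
        print(sample["metric"].get("pool_id"), sample["value"][1])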

pool usage:

> artemis@icitsrv5:~$ ceph df detail
> RAW STORAGE:
>     CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED 
>     hdd       662 TiB     296 TiB     366 TiB      366 TiB         55.32 
>     TOTAL     662 TiB     296 TiB     366 TiB      366 TiB         55.32 
>  
> POOLS:
>     POOL                           ID     STORED      OBJECTS     USED        %USED     MAX AVAIL     QUOTA OBJECTS     QUOTA BYTES     DIRTY       USED COMPR     UNDER COMPR 
>     .rgw.root                       3     8.1 KiB          15     2.8 MiB         0        63 TiB     N/A               N/A                  15            0 B             0 B 
>     default.rgw.control             4         0 B           8         0 B         0        63 TiB     N/A               N/A                   8            0 B             0 B 
>     default.rgw.meta                5      26 KiB          85      16 MiB         0        63 TiB     N/A               N/A                  85            0 B             0 B 
>     default.rgw.log                 6         0 B         207         0 B         0        63 TiB     N/A               N/A                 207            0 B             0 B 
>     cephfs_data                     7     113 TiB     139.34M     186 TiB     49.47       138 TiB     N/A               N/A             139.34M            0 B             0 B 
>     cephfs_metadata                 8      54 GiB      10.21M      57 GiB      0.03        63 TiB     N/A               N/A              10.21M            0 B             0 B 
>     default.rgw.buckets.data        9     122 TiB      54.57M     173 TiB     47.70       138 TiB     N/A               N/A              54.57M            0 B             0 B 
>     default.rgw.buckets.index      10     2.6 GiB      19.97k     2.6 GiB         0        63 TiB     N/A               N/A              19.97k            0 B             0 B 
>     default.rgw.buckets.non-ec     11      67 MiB         186     102 MiB         0        63 TiB     N/A               N/A                 186            0 B             0 B 
>     device_health_metrics          12     1.2 MiB         145     1.2 MiB         0        63 TiB     N/A               N/A                 145            0 B             0 B 
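
What puzzles me: for cephfs_data, STORED is 113 TiB (logical data) while
USED is 186 TiB (raw, after replication/EC overhead), with 138 TiB MAX
AVAIL and %USED at 49.47. If the dashboard divides raw USED by a different
denominator than ceph df does, the two views can disagree. To compare the
numbers directly, a small script against the JSON output (the field names
stored, bytes_used, max_avail, percent_used match my Nautilus output, and
percent_used looks like a 0-1 fraction there; worth double-checking with
`ceph df -f json-pretty`):

    #!/usr/bin/env python3
    # Sketch: print per-pool usage from `ceph df -f json` and compare Ceph's
    # own percent_used with a naive used / (used + max_avail) ratio.
    import json
    import subprocess

    df = json.loads(subprocess.check_output(["ceph", "df", "-f", "json"]))
    for pool in df["pools"]:
        s = pool["stats"]
        used, avail = s["bytes_used"], s["max_avail"]
        naive = used / (used + avail) if used + avail else 0.0
        print(f'{pool["name"]:30s} '
              f'stored={s["stored"] / 2**40:8.2f} TiB  '
              f'used={used / 2**40:8.2f} TiB  '
              f'percent_used={100 * s["percent_used"]:5.2f}%  '  # 0-1 fraction
              f'naive={100 * naive:5.2f}%')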

Best,

-- 
Yoann Moulin
EPFL IC-IT


