"libceph: FULL or reached pool quota" wat does this mean?

Hi,

The OSDs are not full, and I don't see any pool that is full either. The message also doesn't say which pool it is talking about.
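
As far as I can tell the pool should be findable from the per-pool quotas and the per-pool flags in the OSD map; this is a rough sketch (the exact flag a pool carries once it hits its quota may differ per release):

    # list the byte/object quota of every pool
    for p in $(ceph osd pool ls); do
        echo "== $p"; ceph osd pool get-quota "$p"
    done

    # per-pool flags in the OSD map; a pool that reached its quota
    # should be marked full here
    ceph osd dump | grep '^pool'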

The cluster state is healthy, yet users can't write into one of the pools.
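
Since the message itself says "FULL or reached pool quota", I understand it can mean either the global OSDMAP full flag or a single pool at its quota, so the global flag is worth ruling out as well, for example:

    # global OSDMAP flags; "full" would show up here if the whole map were full
    ceph osd dump | grep '^flags'

    # a pool at quota should also surface in health detail (e.g. as POOL_FULL)
    ceph health detail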

The cluster runs 4 OSDs per NVMe device.

This is the ceph df detail output:

RAW STORAGE:
    CLASS     SIZE        AVAIL       USED       RAW USED     %RAW USED
    nvme      143 TiB     105 TiB     38 TiB       39 TiB         26.98
    TOTAL     143 TiB     105 TiB     38 TiB       39 TiB         26.98

POOLS:
    POOL        ID     STORED      OBJECTS     USED        %USED     MAX AVAIL     QUOTA OBJECTS     QUOTA BYTES     DIRTY       USED COMPR     UNDER COMPR
    w-mdc       12     958 GiB     246.69k     1.4 TiB      1.70        40 TiB     N/A               978 GiB         246.69k            0 B             0 B
    w-cdb       14     1.9 TiB     508.19k     2.9 TiB      3.55        40 TiB     N/A               2.7 TiB         508.19k            0 B             0 B
    w-mdb       16     2.3 TiB     615.01k     3.5 TiB      4.24        40 TiB     N/A               2.2 TiB         615.01k            0 B             0 B
    w-bfd       18     281 GiB      72.90k     468 GiB      0.57        40 TiB     N/A               483 GiB          72.90k            0 B             0 B
    w-app       20     1.5 TiB     390.38k     2.6 TiB      3.19        40 TiB     N/A               2.2 TiB         390.38k            0 B             0 B
    w-pay       22     883 GiB     226.49k     1.2 TiB      1.50        40 TiB     N/A               1.4 TiB         226.49k            0 B             0 B
    w-his       24     851 GiB     229.81k     1.6 TiB      1.94        40 TiB     N/A               2.4 TiB         229.81k            0 B             0 B
    w-dfs       26     206 GiB      54.50k     407 GiB      0.50        40 TiB     N/A               373 GiB          54.50k            0 B             0 B
    w-dfh       28     591 GiB     173.75k     1.1 TiB      1.42        40 TiB     N/A               1 TiB           173.75k            0 B             0 B
    w-dbm       30      31 GiB       9.98k      61 GiB      0.07        40 TiB     N/A               466 GiB           9.98k            0 B             0 B
    client      32     1.9 TiB     492.20k     3.8 TiB      4.49        40 TiB     N/A               14 TiB          492.20k            0 B             0 B
    airflow     33     403 GiB     119.27k     806 GiB      0.98        40 TiB     N/A               500 GiB         119.27k            0 B             0 B
    1212        34     265 GiB      72.32k     529 GiB      0.64        40 TiB     N/A               1 TiB            72.32k            0 B             0 B
    12121       35     141 GiB      37.22k     277 GiB      0.34        40 TiB     N/A               186 GiB          37.22k            0 B             0 B
    121212121   36     189 GiB      75.21k     378 GiB      0.46        40 TiB     N/A               466 GiB          75.21k            0 B             0 B
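
If I read the table right, the pool closest to its limit is w-mdc: 958 GiB stored against a 978 GiB byte quota, about 98%. A sketch to rank pools by quota headroom from the JSON output, assuming this release's ceph df detail JSON exposes stats.stored and stats.quota_bytes:

    # name, stored bytes, quota bytes, percent of quota used
    ceph df detail -f json | jq -r '
        .pools[]
        | select(.stats.quota_bytes > 0)
        | [.name, .stats.stored, .stats.quota_bytes,
           (100 * .stats.stored / .stats.quota_bytes | floor)]
        | @tsv'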

Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo@xxxxxxxxx<mailto:istvan.szabo@xxxxxxxxx>
---------------------------------------------------

