On Fri, Mar 11, 2022 at 12:02 PM Szabo, Istvan (Agoda)
<Istvan.Szabo@xxxxxxxxx> wrote:
>
> Hi,
>
> OSDs are not full and pool I don't really see full either.
> This doesn't say anything like which pool it is talking about.

Hi Istvan,

Yes, that's unfortunate.  But you should be able to tell which pool
reached quota from "ceph osd pool ls detail" output.

> Cluster state is healthy however user can't write into 1.
>
> 4osd/nvme I have in this cluster.
>
> This is the ceph df detail:
>
> RAW STORAGE:
>     CLASS     SIZE        AVAIL       USED       RAW USED     %RAW USED
>     nvme      143 TiB     105 TiB     38 TiB     39 TiB           26.98
>     TOTAL     143 TiB     105 TiB     38 TiB     39 TiB           26.98
>
> POOLS:
>     POOL          ID     STORED      OBJECTS     USED        %USED     MAX AVAIL     QUOTA OBJECTS     QUOTA BYTES     DIRTY       USED COMPR     UNDER COMPR
>     w-mdc         12     958 GiB     246.69k     1.4 TiB     1.70      40 TiB        N/A               978 GiB         246.69k     0 B            0 B
>     w-cdb         14     1.9 TiB     508.19k     2.9 TiB     3.55      40 TiB        N/A               2.7 TiB         508.19k     0 B            0 B
>     w-mdb         16     2.3 TiB     615.01k     3.5 TiB     4.24      40 TiB        N/A               2.2 TiB         615.01k     0 B            0 B
>     w-bfd         18     281 GiB     72.90k      468 GiB     0.57      40 TiB        N/A               483 GiB         72.90k      0 B            0 B
>     w-app         20     1.5 TiB     390.38k     2.6 TiB     3.19      40 TiB        N/A               2.2 TiB         390.38k     0 B            0 B
>     w-pay         22     883 GiB     226.49k     1.2 TiB     1.50      40 TiB        N/A               1.4 TiB         226.49k     0 B            0 B
>     w-his         24     851 GiB     229.81k     1.6 TiB     1.94      40 TiB        N/A               2.4 TiB         229.81k     0 B            0 B
>     w-dfs         26     206 GiB     54.50k      407 GiB     0.50      40 TiB        N/A               373 GiB         54.50k      0 B            0 B
>     w-dfh         28     591 GiB     173.75k     1.1 TiB     1.42      40 TiB        N/A               1 TiB           173.75k     0 B            0 B
>     w-dbm         30     31 GiB      9.98k       61 GiB      0.07      40 TiB        N/A               466 GiB         9.98k       0 B            0 B
>     client        32     1.9 TiB     492.20k     3.8 TiB     4.49      40 TiB        N/A               14 TiB          492.20k     0 B            0 B
>     airflow       33     403 GiB     119.27k     806 GiB     0.98      40 TiB        N/A               500 GiB         119.27k     0 B            0 B
>     1212          34     265 GiB     72.32k      529 GiB     0.64      40 TiB        N/A               1 TiB           72.32k      0 B            0 B
>     12121         35     141 GiB     37.22k      277 GiB     0.34      40 TiB        N/A               186 GiB         37.22k      0 B            0 B
>     121212121     36     189 GiB     75.21k      378 GiB     0.46      40 TiB        N/A               466 GiB         75.21k      0 B            0 B

Quota is set on all of your pools (QUOTA BYTES column).  Your
"ceph df detail" output is misaligned here so it is a bit hard to read,
but I'm guessing that the problem is with w-mdb (at least).

Thanks,

                Ilya
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
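
For anyone hitting the same situation, a minimal sketch of the quota check
and adjustment discussed above.  The pool name w-mdb is taken from the
table, where its STORED value (2.3 TiB) exceeds its QUOTA BYTES (2.2 TiB);
the 4 TiB figure below is only an illustrative assumption, not a
recommendation:

    # show the object/byte quota currently set on one pool
    ceph osd pool get-quota w-mdb

    # quotas also appear in the per-pool listing Ilya mentions
    ceph osd pool ls detail

    # raise the byte quota (example value: 4 TiB = 4398046511104 bytes)
    ceph osd pool set-quota w-mdb max_bytes 4398046511104

    # or remove the byte quota entirely (0 disables it)
    ceph osd pool set-quota w-mdb max_bytes 0

Once the pool is no longer at quota (either because data was removed or the
quota was raised), writes to it should be accepted again.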