Re: 2 pgs backfill_toofull but plenty of space

What else is going on? (ceph -s). If there is a lot of data being shuffled around, it may just be that it's waiting for some other actions to complete first.
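
A quick way to check (just the standard CLI, nothing fancy) would be something
along the lines of:

"
ceph -s
ceph health detail
ceph pg dump pgs_brief | grep backfill_toofull
"

which should show whether recovery/backfill is still in flight and what state
those two PGs are in right now.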

Thanks,
Kevin

________________________________________
From: Torkil Svensgaard <torkil@xxxxxxxx>
Sent: Tuesday, January 10, 2023 2:36 AM
To: ceph-users@xxxxxxx
Cc: Ruben Vestergaard
Subject:  2 pgs backfill_toofull but plenty of space

Hi

Ceph version 17.2.3 (dff484dfc9e19a9819f375586300b3b79d80034d) quincy
(stable)

Looking at this:

"
Low space hindering backfill (add storage if this doesn't resolve
itself): 2 pgs backfill_toofull
"

"
[WRN] PG_BACKFILL_FULL: Low space hindering backfill (add storage if
this doesn't resolve itself): 2 pgs backfill_toofull
     pg 3.11f is active+remapped+backfill_wait+backfill_toofull, acting
[98,51,39,100]
     pg 3.74c is active+remapped+backfill_wait+backfill_toofull, acting
[96,120,58,48]
"

But the disks are nowhere near full as far as I can determine, so why
backfill_toofull? The PGs in question are in the rbd_data pool.
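
If I understand it right, backfill_toofull is decided per backfill target OSD
(the up set) rather than the acting set shown above, so the actual targets can
be listed with something like:

"
ceph pg map 3.11f
ceph pg map 3.74c
"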

"
# ceph df
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
hdd    1.4 PiB  730 TiB  686 TiB   686 TiB      48.46
ssd    1.3 TiB  1.2 TiB  162 GiB   162 GiB      12.11
TOTAL  1.4 PiB  731 TiB  686 TiB   686 TiB      48.42

--- POOLS ---
POOL             ID   PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
.mgr              1     1  1.1 GiB      273  545 MiB   0.05    549 GiB
rbd_data          3  4096  294 TiB   78.56M  450 TiB  45.72    267 TiB
rbd               4    32  4.1 MiB       26  3.5 MiB      0    549 GiB
rbd_internal      5    32   54 KiB       16  172 KiB      0    549 GiB
cephfs_data       6  2048  127 TiB  148.64M  229 TiB  29.99    267 TiB
cephfs_metadata   7   128   71 GiB    2.84M  142 GiB  11.46    549 GiB
libvirt           8    32   37 MiB      221   74 MiB      0    549 GiB
nfs-ganesha       9    32  2.7 KiB        7   52 KiB      0    366 GiB
.nfs             10    32   53 KiB       47  306 KiB      0    366 GiB
"

The most utilized disk is at 57% and the PGs in that pool are ~50 GB.

"
        TOP                             BOTTOM
USE     WEIGHT  PGS     ID      |USE    WEIGHT  PGS     ID
--------------------------------+--------------------------------
57.71%  1.00000 54      osd.68  |46.60% 1.00000 286     osd.17
57.08%  1.00000 53      osd.80  |46.55% 1.00000 286     osd.99
54.95%  1.00000 70      osd.86  |46.48% 1.00000 284     osd.106
54.86%  1.00000 52      osd.63  |45.88% 1.00000 187     osd.27
54.06%  1.00000 68      osd.88  |45.81% 1.00000 279     osd.5
53.89%  1.00000 51      osd.79  |44.95% 1.00000 272     osd.13
53.65%  1.00000 51      osd.67  |43.63% 1.00000 269     osd.16
53.59%  1.00000 52      osd.65  |43.30% 1.00000 261     osd.12
53.58%  1.00000 51      osd.82  |32.17% 1.00000 172     osd.4
53.52%  1.00000 50      osd.72  |0%     0       0       osd.49
--------------------------------+--------------------------------
"

Best regards,

Torkil

--
Torkil Svensgaard
Systems Administrator
Danish Research Centre for Magnetic Resonance DRCMR, Section 714
Copenhagen University Hospital Amager and Hvidovre
Kettegaard Allé 30, 2650 Hvidovre, Denmark
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx