Re: osd nearfull is not detected

Yes, the pool is growing (~30k obj/sec), and there is client IO of ~10k op/s write and ~20k op/s read.
Maybe you're right and the PGs are still migrating; I had reweighted this OSD.
For osd.761 and osd.718 the nearfull warning is also not triggered (see the output below).
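
For reference, the thresholds configured on the cluster can be checked like this (the stock defaults are nearfull 0.85, backfillfull 0.90, full 0.95):

    ceph osd dump | grep ratio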

ID   CLASS WEIGHT     REWEIGHT SIZE    RAW USE DATA    OMAP    META    AVAIL   %USE  VAR  PGS STATUS TYPE NAME
 705  nvme    0.91199  1.00000 913 GiB 743 GiB 613 GiB   8 KiB 130 GiB 170 GiB 81.39 1.30  46     up             osd.705
  12  nvme    0.91199  0.73999 912 GiB 744 GiB 609 GiB  12 KiB 135 GiB 168 GiB 81.55 1.31  45     up             osd.12
 651  nvme    0.91199  1.00000 913 GiB 749 GiB 619 GiB  12 KiB 130 GiB 164 GiB 82.04 1.31  45     up             osd.651
 759  nvme    0.91199  1.00000 913 GiB 760 GiB 624 GiB   8 KiB 136 GiB 153 GiB 83.23 1.33  45     up             osd.759
 667  nvme    0.91199  0.96999 913 GiB 762 GiB 633 GiB   8 KiB 129 GiB 151 GiB 83.41 1.34  46     up             osd.667
 720  nvme    0.91199  1.00000 913 GiB 762 GiB 626 GiB  12 KiB 136 GiB 151 GiB 83.45 1.34  47     up             osd.720
   2  nvme    0.91199  1.00000 912 GiB 763 GiB 626 GiB  12 KiB 137 GiB 149 GiB 83.66 1.34  47     up             osd.2
 712  nvme    0.91199  0.92999 913 GiB 770 GiB 632 GiB  12 KiB 138 GiB 143 GiB 84.32 1.35  46     up             osd.712
  38  nvme    0.91199  0.95999 912 GiB 773 GiB 639 GiB  12 KiB 133 GiB 139 GiB 84.71 1.36  48     up             osd.38
 761  nvme    0.91199  1.00000 913 GiB 778 GiB 639 GiB   8 KiB 138 GiB 135 GiB 85.19 1.36  48     up             osd.761
 718  nvme    0.91199  0.90999 913 GiB 783 GiB 643 GiB  12 KiB 139 GiB 130 GiB 85.73 1.37  48     up             osd.718
 696  nvme    0.91199  0.90999 912 GiB 837 GiB 689 GiB   4 KiB 148 GiB  75 GiB 91.79 1.47  47     up             osd.696
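
Both osd.761 and osd.718 are above the default nearfull_ratio of 0.85, and osd.696 is even above the default backfillfull_ratio of 0.90, so I would expect HEALTH_WARN here. As a cross-check independent of the health report, something like this should list every OSD over 85% utilization (assuming jq is available):

    ceph osd df -f json | jq '.nodes[] | select(.utilization > 85) | {id, utilization}'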


k

> On 21 Apr 2021, at 21:21, Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
> 
> Are you currently doing IO on the relevant pool? Maybe nearfull isn't
> reported until some pgstats are reported.
> 
> Otherwise sorry I haven't seen this.



