Re: Cluster in ERR status when rebalancing

On Monday, 9 December 2019 at 11:46:34 CET, huang jun wrote:

> what about the pool's backfill_full_ratio value?

>

That value, as far as I can see, is 0.9000, which is not reached by any OSD:

root@s1:~# ceph osd df
ID CLASS  WEIGHT REWEIGHT    SIZE RAW USE    DATA    OMAP    META   AVAIL  %USE  VAR PGS STATUS
 0   hdd 3.63869  1.00000 3.6 TiB 2.4 TiB 2.3 TiB 1.1 GiB 7.0 GiB 1.3 TiB 64.66 1.12 149 up
 3   hdd 3.63869  1.00000 3.6 TiB 2.6 TiB 2.6 TiB 2.8 GiB 7.4 GiB 1.0 TiB 72.12 1.25 164 up
 6   hdd 3.63869  1.00000 3.6 TiB 2.5 TiB 2.5 TiB 442 MiB 6.9 GiB 1.2 TiB 67.75 1.18 157 up
 9   hdd 3.63869  1.00000 3.6 TiB 2.4 TiB 2.4 TiB 1.3 GiB 6.9 GiB 1.2 TiB 66.91 1.16 154 up
12   hdd 3.63869        0     0 B     0 B     0 B     0 B     0 B     0 B     0    0 131 up
15   hdd 3.63869  1.00000 3.6 TiB 2.5 TiB 2.5 TiB 1.7 GiB 7.0 GiB 1.1 TiB 69.93 1.22 154 up
18   hdd 3.63869  1.00000 3.6 TiB 2.4 TiB 2.4 TiB 1.4 GiB 6.9 GiB 1.3 TiB 65.15 1.13 147 up
21   hdd 3.63869  1.00000 3.6 TiB 2.1 TiB 2.1 TiB 900 MiB 6.5 GiB 1.5 TiB 57.46 1.00 136 up
26   hdd 3.63869  1.00000 3.6 TiB 107 GiB 106 GiB 533 MiB 1.1 GiB 3.5 TiB  2.88 0.05   8 up
 1   hdd 3.63869  1.00000 3.6 TiB 2.0 TiB 2.0 TiB 615 MiB 5.1 GiB 1.7 TiB 53.93 0.94 129 up
 4   hdd 3.63869  1.00000 3.6 TiB 2.0 TiB 2.0 TiB  30 MiB 5.7 GiB 1.6 TiB 55.38 0.96 127 up
 7   hdd 3.63869  1.00000 3.6 TiB 1.9 TiB 1.9 TiB 1.3 MiB 5.4 GiB 1.7 TiB 52.97 0.92 125 up
10   hdd 3.63869  1.00000 3.6 TiB 2.3 TiB 2.3 TiB 486 KiB 6.0 GiB 1.3 TiB 64.13 1.12 148 up
13   hdd 3.63869  1.00000 3.6 TiB 2.3 TiB 2.3 TiB 707 MiB 5.9 GiB 1.3 TiB 63.90 1.11 150 up
16   hdd 3.63869  1.00000 3.6 TiB 2.1 TiB 2.0 TiB 981 KiB 5.7 GiB 1.6 TiB 56.38 0.98 134 up
19   hdd 3.63869  1.00000 3.6 TiB 2.1 TiB 2.1 TiB 536 MiB 6.2 GiB 1.5 TiB 58.78 1.02 135 up
23   hdd 3.63869  1.00000 3.6 TiB 1.9 TiB 1.9 TiB 579 MiB 6.2 GiB 1.8 TiB 51.72 0.90 122 up
25   hdd 3.63869  1.00000 3.6 TiB 2.1 TiB 2.0 TiB 564 MiB 6.6 GiB 1.6 TiB 56.48 0.98 130 up
 2   hdd 3.63869  1.00000 3.6 TiB 2.1 TiB 2.1 TiB 358 MiB 6.4 GiB 1.5 TiB 58.47 1.02 137 up
 5   hdd 3.63869  1.00000 3.6 TiB 2.2 TiB 2.2 TiB 1.4 GiB 6.7 GiB 1.4 TiB 60.67 1.06 140 up
 8   hdd 3.63869  1.00000 3.6 TiB 2.0 TiB 2.0 TiB 376 MiB 6.1 GiB 1.7 TiB 53.88 0.94 125 up
11   hdd 3.63869  1.00000 3.6 TiB 2.0 TiB 2.0 TiB 2.0 GiB 6.2 GiB 1.6 TiB 55.48 0.97 132 up
14   hdd 3.63869  1.00000 3.6 TiB 1.9 TiB 1.9 TiB 990 MiB 5.7 GiB 1.8 TiB 51.64 0.90 124 up
17   hdd 3.63869  1.00000 3.6 TiB 2.3 TiB 2.3 TiB 182 MiB 6.8 GiB 1.4 TiB 62.70 1.09 146 up
20   hdd 3.63869  1.00000 3.6 TiB 2.1 TiB 2.1 TiB 901 MiB 6.4 GiB 1.5 TiB 57.73 1.00 134 up
22   hdd 3.63869  1.00000 3.6 TiB 2.0 TiB 2.0 TiB 621 MiB 6.0 GiB 1.6 TiB 55.15 0.96 128 up
24   hdd 3.63869  1.00000 3.6 TiB 2.1 TiB 2.1 TiB 425 MiB 6.4 GiB 1.5 TiB 58.21 1.01 134 up
                    TOTAL  98 TiB  57 TiB  56 TiB  21 GiB 166 GiB  42 TiB 57.48
MIN/MAX VAR: 0.05/1.25  STDDEV: 12.26
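
For reference, this is roughly how the ratios and any full/backfillfull flags can be double-checked from the CLI; a minimal sketch using standard ceph commands (the exact output wording varies by release):

root@s1:~# ceph osd dump | grep ratio        # prints full_ratio, backfillfull_ratio and nearfull_ratio from the OSD map
root@s1:~# ceph health detail | grep -i full # lists any OSDs currently flagged nearfull or backfillfull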

 


--

Simone Lazzaris
Staff R&D

Qcom S.p.A.
Via Roggia Vignola, 9 | 24047 Treviglio (BG)
T +39 0363 47905 | D +39 0363 1970352
simone.lazzaris@xxxxxxx | www.qcom.it

Qcom Official Pages
LinkedIn | Facebook





