Sorry, I had lost access to my email. Setting the affected OSDs out
would have been one of the next steps; great that it worked for you.
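For the record, marking an OSD out while its daemon keeps running makes
CRUSH remap the PGs it holds to other OSDs, e.g.:

   # mark the suspect OSDs out; data is remapped while the daemons stay up
   ceph osd out 18 54
   # follow recovery progress
   ceph -s

If the disks themselves are healthy, the OSDs can later be taken back
in with 'ceph osd in <id>'.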
Quoting Boris Behrens <bb@xxxxxxxxx>:
I've marked osd.18 and osd.54 out and let the cluster rebalance for a
while, and now the problem is gone.
*shrugs*
Thank you for the hints.
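For anyone who finds this thread later: while the remapping runs, the
stuck PGs and the degraded object count can be watched with, e.g.:

   # overall health and recovery status
   ceph health detail
   # list PGs that are stuck unclean
   ceph pg dump_stuck unclean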
On Mon, 8 Feb 2021 at 14:46, Boris Behrens <bb@xxxxxxxxx> wrote:
Hi,
sure
 ID CLASS WEIGHT    TYPE NAME       STATUS REWEIGHT PRI-AFF
 -1       672.68457 root default
 -2        58.20561     host s3db1
 23   hdd  14.55269         osd.23     up  1.00000 1.00000
 69   hdd  14.55269         osd.69     up  1.00000 1.00000
 73   hdd  14.55269         osd.73     up  1.00000 1.00000
 79   hdd   3.63689         osd.79     up  1.00000 1.00000
 80   hdd   3.63689         osd.80     up  1.00000 1.00000
 81   hdd   3.63689         osd.81     up  1.00000 1.00000
 82   hdd   3.63689         osd.82     up  1.00000 1.00000
-11        50.94173     host s3db10
 63   hdd   7.27739         osd.63     up  1.00000 1.00000
 64   hdd   7.27739         osd.64     up  1.00000 1.00000
 65   hdd   7.27739         osd.65     up  1.00000 1.00000
 66   hdd   7.27739         osd.66     up  1.00000 1.00000
 67   hdd   7.27739         osd.67     up  1.00000 1.00000
 68   hdd   7.27739         osd.68     up  1.00000 1.00000
 70   hdd   7.27739         osd.70     up  1.00000 1.00000
-12        50.94173     host s3db11
 46   hdd   7.27739         osd.46     up  1.00000 1.00000
 47   hdd   7.27739         osd.47     up  1.00000 1.00000
 48   hdd   7.27739         osd.48     up  1.00000 1.00000
 49   hdd   7.27739         osd.49     up  1.00000 1.00000
 50   hdd   7.27739         osd.50     up  1.00000 1.00000
 51   hdd   7.27739         osd.51     up  1.00000 1.00000
 72   hdd   7.27739         osd.72     up  1.00000 1.00000
-37        58.55478     host s3db12
 19   hdd   3.68750         osd.19     up  1.00000 1.00000
 71   hdd   3.68750         osd.71     up  1.00000 1.00000
 75   hdd   3.68750         osd.75     up  1.00000 1.00000
 76   hdd   3.68750         osd.76     up  1.00000 1.00000
 77   hdd  14.60159         osd.77     up  1.00000 1.00000
 78   hdd  14.60159         osd.78     up  1.00000 1.00000
 83   hdd  14.60159         osd.83     up  1.00000 1.00000
 -3        58.49872     host s3db2
  1   hdd  14.65039         osd.1      up  1.00000 1.00000
  3   hdd   3.63689         osd.3      up  1.00000 1.00000
  4   hdd   3.63689         osd.4      up  1.00000 1.00000
  5   hdd   3.63689         osd.5      up  1.00000 1.00000
  6   hdd   3.63689         osd.6      up  1.00000 1.00000
  7   hdd  14.65039         osd.7      up  1.00000 1.00000
 74   hdd  14.65039         osd.74     up  1.00000 1.00000
 -4        58.49872     host s3db3
  2   hdd  14.65039         osd.2      up  1.00000 1.00000
  9   hdd  14.65039         osd.9      up  1.00000 1.00000
 10   hdd  14.65039         osd.10     up  1.00000 1.00000
 12   hdd   3.63689         osd.12     up  1.00000 1.00000
 13   hdd   3.63689         osd.13     up  1.00000 1.00000
 14   hdd   3.63689         osd.14     up  1.00000       0
 15   hdd   3.63689         osd.15     up  1.00000 1.00000
 -5        58.49872     host s3db4
 11   hdd  14.65039         osd.11     up  1.00000 1.00000
 17   hdd  14.65039         osd.17     up  1.00000 1.00000
 18   hdd  14.65039         osd.18     up  1.00000 1.00000
 20   hdd   3.63689         osd.20     up  1.00000 1.00000
 21   hdd   3.63689         osd.21     up  1.00000 1.00000
 22   hdd   3.63689         osd.22     up  1.00000 1.00000
 24   hdd   3.63689         osd.24     up  1.00000 1.00000
 -6        58.89636     host s3db5
  0   hdd   3.73630         osd.0      up  1.00000 1.00000
 25   hdd   3.73630         osd.25     up  1.00000 1.00000
 26   hdd   3.73630         osd.26     up  1.00000 1.00000
 27   hdd   3.73630         osd.27     up  1.00000 1.00000
 28   hdd  14.65039         osd.28     up  1.00000 1.00000
 29   hdd  14.65039         osd.29     up  1.00000 1.00000
 30   hdd  14.65039         osd.30     up  1.00000 1.00000
 -7        58.89636     host s3db6
 32   hdd   3.73630         osd.32     up  1.00000 1.00000
 33   hdd   3.73630         osd.33     up  1.00000 1.00000
 34   hdd   3.73630         osd.34     up  1.00000 1.00000
 35   hdd   3.73630         osd.35     up  1.00000 1.00000
 36   hdd  14.65039         osd.36     up  1.00000 1.00000
 37   hdd  14.65039         osd.37     up  1.00000 1.00000
 38   hdd  14.65039         osd.38     up  1.00000 1.00000
 -8        58.89636     host s3db7
 39   hdd   3.73630         osd.39     up  1.00000 1.00000
 40   hdd   3.73630         osd.40     up  1.00000 1.00000
 41   hdd   3.73630         osd.41     up  1.00000 1.00000
 42   hdd   3.73630         osd.42     up  1.00000 1.00000
 43   hdd  14.65039         osd.43     up  1.00000 1.00000
 44   hdd  14.65039         osd.44     up  1.00000 1.00000
 45   hdd  14.65039         osd.45     up  1.00000 1.00000
 -9        50.92773     host s3db8
  8   hdd   7.27539         osd.8      up  1.00000 1.00000
 16   hdd   7.27539         osd.16     up  1.00000 1.00000
 31   hdd   7.27539         osd.31     up  1.00000 1.00000
 52   hdd   7.27539         osd.52     up  1.00000 1.00000
 53   hdd   7.27539         osd.53     up  1.00000 1.00000
 54   hdd   7.27539         osd.54     up  1.00000 1.00000
 55   hdd   7.27539         osd.55     up  1.00000 1.00000
-10        50.92773     host s3db9
 56   hdd   7.27539         osd.56     up  1.00000 1.00000
 57   hdd   7.27539         osd.57     up  1.00000 1.00000
 58   hdd   7.27539         osd.58     up  1.00000 1.00000
 59   hdd   7.27539         osd.59     up  1.00000 1.00000
 60   hdd   7.27539         osd.60     up  1.00000 1.00000
 61   hdd   7.27539         osd.61     up  1.00000 1.00000
 62   hdd   7.27539         osd.62     up  1.00000 1.00000
On Mon, 8 Feb 2021 at 14:42, Eugen Block <eblock@xxxxxx> wrote:
Can you share 'ceph osd tree'? Are the weights of this OSD
appropriate? I've seen stuck PGs because of OSD weight imbalance. Is
the OSD in the correct subtree?
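If the tree does show a wrong weight or a misplaced OSD, it could be
checked and corrected along these lines (the weight value below is just
an example, matching what the OSD's siblings use):

   # show the CRUSH location of osd.14
   ceph osd find 14
   # adjust its CRUSH weight to match its siblings
   ceph osd crush reweight osd.14 3.63689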
Quoting Boris Behrens <bb@xxxxxxxxx>:
> Hi Eugen,
>
> I've set it to 0 but the "degraded objects" count does not go down.
>
> On Mon, 8 Feb 2021 at 14:23, Eugen Block <eblock@xxxxxx> wrote:
>
>> Hi,
>>
>> one option would be to decrease (or set to 0) the primary-affinity of
>> osd.14 and see if that brings the pg back.
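>> The command for that would be something like:
>>
>>    ceph osd primary-affinity osd.14 0
>>
>> The change is visible in the PRI-AFF column of 'ceph osd tree'.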
>>
>> Regards,
>> Eugen
>>
>>
--
The "UTF-8-Probleme" self-help group will, as an exception, meet in
the large hall this time.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx