What troubleshooting have you tried? You don't provide any log output
or information about the cluster setup, for example 'ceph osd tree' or
'ceph status'. Are the failing OSDs random, or do they all belong to
the same pool? Any log output from the failing OSDs and the RGWs might
help; otherwise it's just wild guessing. Is the cluster a new
installation with cephadm, or an older cluster upgraded to Quincy?
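If you can collect that information, something along these lines would
be a reasonable starting point (a sketch assuming a cephadm-managed
cluster; the OSD and RGW daemon names are placeholders, substitute the
ones that actually fail):

  ceph status
  ceph health detail
  ceph osd tree
  ceph versions
  ceph crash ls                      # list recent daemon crashes
  ceph crash info <crash-id>         # backtrace of a specific crash
  ceph orch ps                       # daemon names per host (cephadm)
  cephadm logs --name osd.<id>       # run on the host of a failed OSD
  cephadm logs --name <rgw-daemon>   # run on one of the RGW hosts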
Quoting Monish Selvaraj <monish@xxxxxxxxxxxxxxx>:
Hi all,
I have a critical issue in my prod cluster. When the customer's data
comes in at around 600 MiB, between 8 and 20 of my 238 OSDs go down.
I then bring the OSDs back up manually, but after a few minutes all my
RGWs crash.
We did some troubleshooting, but nothing worked. The issue was resolved
when we upgraded Ceph from 17.2.0 to 17.2.1. We have faced this issue
twice, and both times we resolved it by upgrading Ceph.
*Node schema:*
*Node 1 to Node 5 --> mon, mgr and osds*
*Node 6 to Node 15 --> only osds*
*Node 16 to Node 20 --> only rgws*
Kindly check this issue and let me know the correct troubleshooting method.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx