Re: ceph IO are interrupted when OSD goes down


 



Octopus 15.2.14?
I have exactly the same issue, and it is causing a production problem for me.

Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo@xxxxxxxxx<mailto:istvan.szabo@xxxxxxxxx>
---------------------------------------------------

On 2021. Oct 18., at 12:01, Denis Polom <denispolom@xxxxxxxxx> wrote:

________________________________

Hi,

I have an EC pool with these settings:

crush-device-class= crush-failure-domain=host crush-root=default
jerasure-per-chunk-alignment=false k=10 m=2 plugin=jerasure
technique=reed_sol_van w=8
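(For anyone debugging something similar: the settings above can be cross-checked with a few read-only commands. This is only a sketch; the pool name `cephfs_data` is an assumption, substitute your own. With k=10/m=2 the default min_size is k+1=11, so losing two OSDs serving one PG can make that PG inactive and block client IO, which would match the symptom described below.)

```shell
# Hypothetical pool name -- substitute your own.
POOL=cephfs_data

# Which erasure-code profile does the pool use?
ceph osd pool get "$POOL" erasure_code_profile

# Show the profile's k, m, failure domain, etc.
# (substitute the profile name printed above)
ceph osd erasure-code-profile get default

# min_size is how many chunks must be available before IO is served.
# With k=10/m=2 the default is k+1 = 11.
ceph osd pool get "$POOL" min_size

# While an OSD is down/flapping, look for inactive or undersized PGs.
ceph pg dump_stuck inactive
ceph health detail
```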

and my understanding is that if some OSDs go down because of read
errors or flapping (mostly read errors / bad sectors in my case),
client IO shouldn't be disturbed, because the other chunks of each
object are still available and Ceph should handle it. But client IO
is disturbed: the CephFS mount point becomes inaccessible on clients,
even though they mount CephFS against all 3 monitors.

It doesn't happen every time, just sometimes. Is my understanding
correct that this can happen when a read error or flapping occurs on
an active OSD?


Thx!

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



