Hi,
min_size = k is not the safest option; it should only be used temporarily, e.g. during disaster recovery. But in this case it doesn't seem to be the cause of the IO interruptions. Are some disks utilized at around 100% (iostat) when this happens?
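If you want to go back to the safer value, something like this should do it (the pool name is just a placeholder, use your actual EC pool):

  ceph osd pool set <your-ec-pool> min_size 11

And for the disk utilization, I'd watch the OSD nodes with e.g.:

  iostat -x 1

and keep an eye on the %util column while the flapping happens.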
Quoting Denis Polom <denispolom@xxxxxxxxx>:
Hi,
it's
min_size: 10
On 10/18/21 14:43, Eugen Block wrote:
What is your min_size for the affected pool?
Quoting Denis Polom <denispolom@xxxxxxxxx>:
Hi,
I have 18 OSD nodes in this cluster, and it happens even when only a single OSD daemon goes down or flaps.
Running
ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be)
octopus (stable)
thx!
On 10/18/21 12:12, Eugen Block wrote:
Hi,
with this EC setup your pool's min_size would by default be 11 (k+1), so if one host goes down (or several OSDs fail on that host), your clients should not be affected. But as soon as a second host fails you'll see an IO pause until at least one host has recovered. Do you have more than 12 hosts in this cluster, so that it can recover from a single host failure?
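You can check both quickly with something like this (replace the pool name with yours):

  ceph osd pool get <your-ec-pool> min_size
  ceph osd tree

The first shows the current min_size, the second lists the host buckets and their OSDs so you can count your failure domains.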
Regards,
Eugen
Quoting Denis Polom <denispolom@xxxxxxxxx>:
Hi,
I have an EC pool with these settings:
crush-device-class= crush-failure-domain=host crush-root=default
jerasure-per-chunk-alignment=false k=10 m=2 plugin=jerasure
technique=reed_sol_van w=8
My understanding is that if some OSDs go down because of read errors or just flap for some reason (mostly read errors / bad sectors in my case), client IO shouldn't be disturbed, because the data is still available from the remaining chunks and Ceph should handle it. But client IO is disturbed: the cephfs mount point becomes inaccessible on the clients, even though they mount cephfs against all 3 monitors.
It doesn't happen every time, just sometimes. Is my understanding correct that this can happen when a read error or flapping occurs on an active OSD?
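For completeness, the clients mount cephfs roughly like this (monitor names and the client name below are placeholders, not the real ones):

  mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs -o name=cephfs-client,secretfile=/etc/ceph/client.secret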
Thx!
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx