Hello,
Correct me if I'm wrong, but isn't your configuration just twice as bad as running with replication size=2? With replication size=2, when you lose a disk you lose data if even one defective block is found while Ceph is reconstructing the PGs that had a replica on the failed disk. Now, with your setup (k=2, m=1) you have to be able to read twice as much data correctly in order to reconstruct the PGs, because rebuilding a lost shard needs both surviving chunks rather than a single surviving replica. When using EC I think you have to use m>1 in production.
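If it helps, here is a minimal sketch of creating an m=2 profile and an EC pool through the python-rados bindings; the profile name, pool name, PG count and failure domain below are placeholders rather than values from this thread, and the exact mon command fields may differ between Ceph releases.

import json
import rados

# Connect with a standard ceph.conf and the client.admin keyring
# (assumed to be present on the node running this sketch).
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

def mon(cmd):
    """Send a JSON-encoded mon command; raise if the monitor returns an error."""
    ret, out, status = cluster.mon_command(json.dumps(cmd), b'')
    if ret != 0:
        raise RuntimeError(status)
    return out

# A profile with m=2 keeps the data recoverable after losing any two shards,
# whereas k=2, m=1 only survives a single failure.
mon({
    "prefix": "osd erasure-code-profile set",
    "name": "ec-2-2",                               # placeholder profile name
    "profile": ["k=2", "m=2", "crush-failure-domain=host"],
})

# Create an EC pool that uses the profile (pool name and PG count are
# placeholders, not values taken from this thread).
mon({
    "prefix": "osd pool create",
    "pool": "ecpool",
    "pg_num": 64,
    "pool_type": "erasure",
    "erasure_code_profile": "ec-2-2",
})

cluster.shutdown()

The same can be done on the CLI with "ceph osd erasure-code-profile set" followed by "ceph osd pool create <pool> <pg_num> erasure <profile>".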
-- Eino Tuominen
From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of Jorge Pinilla López <jorpilo@xxxxxxxxx>
Sent: Tuesday, October 24, 2017 11:24
To: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Erasure Pool OSD fail

Okay, I think I can answer myself: the pool is created with a default min_size of 3, so when one of the OSDs goes down the pool doesn't perform any IO; manually changing the pool min_size to 2 worked great.
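For reference, a minimal sketch of that min_size change through the python-rados bindings (the pool name "ecpool" is a placeholder; on the CLI this is simply "ceph osd pool set <pool> min_size 2"):

import json
import rados

# Connect with the default config and admin credentials (assumed).
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# Here the EC pool came up with min_size=3 (k+1 for k=2, m=1), so losing one
# OSD blocks I/O; lowering min_size to 2 (=k) lets the pool keep serving.
cmd = json.dumps({
    "prefix": "osd pool set",
    "pool": "ecpool",      # placeholder: substitute the real pool name
    "var": "min_size",
    "val": "2",            # the mon command expects the value as a string
})
ret, out, status = cluster.mon_command(cmd, b'')
if ret != 0:
    raise RuntimeError(status)

cluster.shutdown()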
On 24/10/2017 at 10:13, Jorge Pinilla López wrote:

I am testing erasure code pools and doing a rados write test to try fault tolerance.

--
Jorge Pinilla López
jorpilo@xxxxxxxxx
Computer engineering student
Systems area intern (SICUZ)
Universidad de Zaragoza
PGP-KeyID: A34331932EBC715A