Hi,
Yes, I realised that you are correct: it's not twice as bad, it's just as bad. I made a trivial error when doing the math in my head, which made this case of erasure coding look worse than it is.
But I still stand by my previous statement: with m=1 you will lose data; it must not be used in production.
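(To illustrate the point: an m=1 profile behaves like single parity, RAID5-style. Here is a minimal sketch in plain Python, not Ceph code, using k=2 data chunks purely for illustration:)

    # Minimal sketch (not Ceph code): m=1 is essentially single-parity (RAID5-style).
    # With k=2 data chunks and one parity chunk, any ONE lost chunk can be rebuilt,
    # but losing a second chunk before recovery finishes means data loss.

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    data = b"abcdefgh"                               # the object to store
    k = 2
    chunk_size = len(data) // k
    d1, d2 = data[:chunk_size], data[chunk_size:]    # k=2 data chunks
    parity = xor_bytes(d1, d2)                       # the single (m=1) coding chunk

    # Lose one chunk (say d1): rebuild it from the survivors.
    rebuilt_d1 = xor_bytes(d2, parity)
    assert rebuilt_d1 == d1

    # Lose two chunks (d1 and the parity): only d2 survives, the object is gone.
    survivors = [d2]
    print("recoverable after 2 losses:", len(survivors) >= k)   # False -> data loss

Any single lost chunk can be rebuilt, but if a second chunk fails before recovery completes, the object is unrecoverable.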
-- Eino Tuominen
From: Jorge Pinilla López <jorpilo@xxxxxxxxx>
Sent: Wednesday, October 25, 2017 01:37
To: Eino Tuominen; ceph-users@xxxxxxxxxxxxxx
Subject: Re: [ceph-users] Erasure Pool OSD fail

Well, you should use m > 1; the more you have, the less risk and the more performance. You don't read twice as much data, you read it from different sources; furthermore, you may even read less data and then rebuild it, because on erasure pools you don't replicate the data.
On the other hand, the configuration isn't as bad as you think, it's just different. Take a 3-node cluster:

Replicated pool, size = 2
- You can take one failure, re-balance, and then take another failure (max 2, but only one at a time).
- You use 2x the data in raw space.
- You have to write 2x the data: the full data on one node and the full data on a second one.

Erasure-coded pool
- You can only lose one node.
- You use less raw space.
- As you don't write 2x the data, writes are also faster: you write half of the data on one node, half on another, and the parity on a separate node, so the write work is much more distributed.
- Reads are slower because you need all of the data parts.
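To put rough numbers on the comparison above, here is a back-of-the-envelope sketch in plain Python (not Ceph code); k=2/m=1 and the 10 TB figure are assumptions based on the half-data + half-data + parity layout described above:

    # Rough comparison of the two layouts on a 3-node cluster (illustrative only).

    def replicated(size: int, data_tb: float) -> dict:
        """Replicated pool: every object is written 'size' times."""
        return {
            "raw space used (TB)": size * data_tb,
            "write amplification (x)": size,
            "simultaneous node failures tolerated": size - 1,
        }

    def erasure_coded(k: int, m: int, data_tb: float) -> dict:
        """Erasure-coded pool: k data chunks plus m coding chunks per object."""
        return {
            "raw space used (TB)": (k + m) / k * data_tb,
            "write amplification (x)": (k + m) / k,
            "simultaneous node failures tolerated": m,
        }

    print(replicated(size=2, data_tb=10))       # 20 TB raw, 2x writes, 1 simultaneous failure
    print(erasure_coded(k=2, m=1, data_tb=10))  # 15 TB raw, 1.5x writes, 1 simultaneous failure

Both layouts tolerate only one simultaneous node failure on three nodes; the replicated pool can take a second failure only after it has re-balanced, while the erasure-coded pool saves raw space.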
In both configurations, if you have corrupted data you lose your data, so that's not really a point of comparison. A replicated pool can handle much more read-intensive workloads, while erasure pools are designed for big writes but really few reads.
I have checked myself that both configurations can work with a 3-node cluster, so it's not a matter of a better and a worse configuration, it really depends on your workload. And the best thing :) you can have both on the same OSDs!

On 24/10/2017 at 12:37, Eino Tuominen wrote:
--
Jorge Pinilla López <jorpilo@xxxxxxxxx>
Computer engineering student
Intern, systems area (SICUZ), Universidad de Zaragoza
PGP-KeyID: A34331932EBC715A