Re: Erasure code profile

Yes, you can. But just like a RAID 5 array with a lost disk, it is not a comfortable way to run your cluster for any significant time, and you also get performance degradation.
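
If you want to see what degraded operation looks like in practice, a minimal sketch (assuming a Luminous-era ceph CLI; "ecpool" is a placeholder pool name):

    ceph health detail
    ceph osd pool get ecpool min_size

Reads and writes keep working as long as each PG still has at least min_size shards available; below that the PG goes inactive and I/O to it blocks.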

Having a warning active all the time also makes it harder to detect new issues; one becomes numb to the warning always being on.

Strive to have your cluster in HEALTH_OK all the time, and design so that you have the fault tolerance you want as overhead. Having more nodes than strictly needed allows Ceph to self-heal quickly, and also gives better performance by spreading load over more machines.
10+4 on 14 nodes means every single node is hit on each write.
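
For illustration, a minimal sketch of creating such a profile and pool (profile and pool names are placeholders; flags assume a Luminous-era ceph CLI):

    ceph osd erasure-code-profile set ec104 k=10 m=4 crush-failure-domain=host
    ceph osd pool create ecpool 128 128 erasure ec104

With crush-failure-domain=host, CRUSH has to place all 14 chunks of every object (10 data + 4 coding) on 14 different hosts, which is why every node takes part in every write.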


kind regards
Ronny Aasen


On 23 Oct 2017 at 21:12, Jorge Pinilla López wrote:
I have one question: what can and can't a cluster do while working in degraded mode?

With K=10 + M=4, if one of my OSD nodes fails the pool will start working in degraded mode, but can I still do writes and reads on that pool?


On 23/10/2017 at 21:01, Ronny Aasen wrote:
On 23.10.2017 20:29, Karun Josy wrote:
Hi,

While creating a pool with erasure code profile k=10, m=4, I get PG status as
"200 creating+incomplete"

While creating pool with profile k=5, m=3 it works fine.

The cluster has 8 OSD nodes with 23 disks in total.

Are there any requirements for setting the first profile?


You need K+M+X OSD nodes. K and M come from the profile; X is how many node failures you want to be able to tolerate without becoming degraded (i.e. how many failed nodes Ceph should be able to heal from automatically).

So with K=10 + M=4 you need a minimum of 14 nodes, and you have zero fault tolerance (a single failure = a degraded cluster), so you have to scramble to replace the node to get back to HEALTH_OK. If you have 15 nodes you can lose one node, Ceph will automatically rebalance onto the 14 needed nodes, and you can replace the lost node at your leisure.
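
That also explains the "creating+incomplete" PGs above: with only 8 OSD nodes, CRUSH cannot find 14 distinct hosts for the 14 chunks of a k=10, m=4 pool, while k=5, m=3 fits exactly (5+3=8, with X=0 headroom). A minimal sketch of a profile that fits 8 nodes (profile and pool names are placeholders):

    ceph osd erasure-code-profile set ec53 k=5 m=3 crush-failure-domain=host
    ceph osd pool create ecpool53 64 64 erasure ec53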

kind regards
Ronny Aasen


--
------------------------------------------------------------------------
*Jorge Pinilla López*
jorpilo@xxxxxxxxx
Computer engineering student
Intern in the systems area (SICUZ)
Universidad de Zaragoza
PGP-KeyID: A34331932EBC715A <http://pgp.rediris.es:11371/pks/lookup?op=get&search=0xA34331932EBC715A>
------------------------------------------------------------------------




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



