Re: Erasure coded pools and ceph failure domain setup

Hello, 

My question is about how CRUSH distributes chunks throughout the cluster with erasure coded pools. Currently, we have 4 OSD nodes with 36 drives (OSD daemons) per node. If we use crush_failure_domain=host, then we are necessarily limited to k=3,m=1 or k=2,m=2. We would like to explore k>3, m>2 coding schemes, but are unsure how the CRUSH rule will distribute the chunks if we set crush_failure_domain=osd.

Ideally, we would like CRUSH to distribute the chunks hierarchically so that they are spread evenly across the nodes, rather than, for example, all chunks landing on a single node.

Are chunks spread evenly across hosts by default? If not, how might we go about configuring CRUSH to do so?
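For reference, something along these lines is what we had in mind: a custom CRUSH rule that first picks hosts and then picks OSDs within each host, e.g. for k=4,m=2 (6 chunks), pick 3 hosts and 2 OSDs per host, so a single host failure costs at most m=2 chunks. This is only a sketch (the rule name, id, and sizes are placeholders), edited into a decompiled CRUSH map and recompiled with crushtool:

```
rule ecpool_host_osd {
    id 2
    type erasure
    min_size 3
    max_size 6
    step set_chooseleaf_tries 5
    step take default
    # pick 3 distinct hosts...
    step choose indep 3 type host
    # ...then 2 OSDs within each chosen host (3 x 2 = 6 chunk slots)
    step chooseleaf indep 2 type osd
    step emit
}
```

Is that roughly the right approach, or is there a supported way to get this placement from the erasure-code-profile / crush rule tooling directly?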

Cheers, 
Ravi

---

Ravi Patel, PhD
Machine Learning Systems Lead



Kheiron Medical Technologies

kheironmed.com | supporting radiologists with deep learning


Kheiron Medical Technologies Ltd. is a registered company in England and Wales. This e-mail and its attachment(s) are intended for the above named only and are confidential. If they have come to you in error then you must take no action based upon them but contact us immediately. Any disclosure, copying, distribution or any action taken or omitted to be taken in reliance on it is prohibited and may be unlawful. Although this e-mail and its attachments are believed to be free of any virus, it is the responsibility of the recipient to ensure that they are virus free. If you contact us by e-mail then we will store your name and address to facilitate communications. Any statements contained herein are those of the individual and not the organisation.

Registered number: 10184103. Registered office: RocketSpace, 40 Islington High Street, London, N1 8EQ

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
