Help Configuring Replication

I'm starting to get a small Ceph cluster running. I've gotten to the point where I've created a pool and stored some test data in it, but I'm having trouble configuring the level of replication I want.

The goal is to have two OSD host nodes, each with 20 OSDs. The target replication will be:

osd_pool_default_size = 4
osd_pool_default_min_size = 2

That is, I want two copies on each host, so that the cluster can tolerate individual OSD failures, or the failure of an entire host, without data loss.
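
Since the pool already exists, I assume I would also need to apply these settings to it directly, with something like the following ("testpool" standing in for my actual pool name):

    ceph osd pool set testpool size 4
    ceph osd pool set testpool min_size 2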

What is the best way to achieve this replication? Is this strictly a CRUSH map rule, or can it be done in the cluster config? Pointers or examples would be greatly appreciated.
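
From reading the CRUSH documentation, my best guess is a rule along these lines, though I haven't tested it, and the rule name and ruleset number are just placeholders of mine:

    rule replicated_2x2 {
        ruleset 1
        type replicated
        min_size 4
        max_size 4
        step take default
        step choose firstn 2 type host
        step chooseleaf firstn 2 type osd
        step emit
    }

The intent is that "step choose firstn 2 type host" selects both hosts, and "step chooseleaf firstn 2 type osd" then picks two OSDs under each, for four copies total. Does that look right, or is there a simpler way?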

Thanks!

--
Chris
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


