That seems to have worked. Thanks much!
And yes, I realize my setup is less than ideal, but I'm planning on
migrating from another storage system, and this is the hardware I have
to work with. I'll definitely keep your recommendations in mind when I
start to grow the cluster.
On 04/23/2018 12:22 PM, Paul Emmerich wrote:
Hi,
this doesn't sound like a good idea: a two-host cluster is usually a poor
configuration for Ceph.
Also, fewer disks spread across more servers typically works better than
many disks in a few servers.
But to answer your question: you could use a crush rule like this (shown
here as a complete rule definition; the rule name and ruleset id are
placeholders, so adjust them to fit your map):

rule replicated_2x2 {
    ruleset 1                          # placeholder; use an unused ruleset id
    type replicated
    min_size 4
    max_size 4
    step take default                  # start from the default root
    step choose firstn 2 type host     # pick two distinct hosts
    step choose firstn 2 type osd      # then two OSDs on each host = 4 total
    step emit
}
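One way to apply it is the usual export/edit/reinject cycle (a sketch; the
file names here are arbitrary):

ceph osd getcrushmap -o crushmap.bin        # export the current map
crushtool -d crushmap.bin -o crushmap.txt   # decompile to editable text
# add the rule above to crushmap.txt, then:
crushtool -c crushmap.txt -o crushmap.new   # recompile
ceph osd setcrushmap -i crushmap.new        # inject the edited map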
And then create a pool with size=4/min_size=2 and assign this crush rule to it.
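For example (the pool name and PG count here are placeholders, so pick
values that fit your cluster):

ceph osd pool create mypool 128 128 replicated replicated_2x2
ceph osd pool set mypool size 4
ceph osd pool set mypool min_size 2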
See http://docs.ceph.com/docs/jewel/rados/operations/crush-map/
Paul
2018-04-23 17:17 GMT+02:00 Christopher Meadors
<christopher.meadors@xxxxxxxxxxxxxxxxxxxxx>:
I'm starting to get a small Ceph cluster running. I'm to the point
where I've created a pool and stored some test data in it, but I'm
having trouble configuring the level of replication that I want.
The goal is to have two OSD host nodes, each with 20 OSDs. The
target replication will be:
osd_pool_default_size = 4
osd_pool_default_min_size = 2
That is, I want two copies on each host, allowing for OSD failures
or host failures without data loss.
What's the best way to achieve this replication? Is this strictly a
CRUSH map rule, or can it be done with the cluster conf? Pointers or
examples would be greatly appreciated.
Thanks!
--
Chris