Re: CEPH Replication

It will put each object on 2 OSDs, on 2 separate nodes.
All nodes and all OSDs will have approximately the same used space.
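
To the capacity question: with size = 2 every object is stored twice, so
usable space is roughly half of raw. A quick back-of-the-envelope sketch,
assuming (hypothetically) 4 TB per OSD:

    3 nodes * 12 OSDs * 4 TB = 144 TB raw
    144 TB / 2 replicas      = ~72 TB usable
    (less in practice: by default OSDs stop accepting writes at the 95%
    full ratio, and you want headroom to re-replicate after a node failure)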

If you want to allow both copies of an object to be stored on the same
node, you should use osd_crush_chooseleaf_type = 0 (see
http://docs.ceph.com/docs/master/rados/operations/crush-map/#crush-map-bucket-types
and
http://docs.ceph.com/docs/hammer/rados/configuration/pool-pg-config-ref/)
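
For reference, the host-level separation comes from the default CRUSH
rule, which on a hammer-era cluster looks roughly like this (a sketch;
the rule and root names may differ on your cluster):

    rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host    # one replica per host
        step emit
    }

and osd_crush_chooseleaf_type goes in ceph.conf, e.g.:

    [global]
    osd_crush_chooseleaf_type = 0    # 0 = osd, 1 = host (the default)

As far as I know that option is only read when the initial CRUSH map is
generated, so on an existing cluster you would instead edit the rule
itself (change "type host" to "type osd").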


On 01/07/2016 13:49, Ashley Merrick wrote:
> Hello,
> 
> Looking at setting up a new Ceph cluster, starting with the following.
> 
> 3 x CEPH OSD Servers
> 
> Each Server:
> 
> 20Gbps Network
> 12 OSDs
> SSD Journal
> 
> Looking at running with a replication of 2. Will there be any issues using 3 nodes with a replication of 2? This should "technically" give me ½ the available total capacity of the 3 nodes?
> 
> Will the CRUSH map automatically set up each server's 12 OSDs as a separate group, so that the two replicated objects are put on separate OSD servers?
> 
> Thanks,
> Ashley
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



