Re: PGs get placed in the same datacenter (Trying to make a hybrid NVMe/HDD pool with 6 servers, 2 in each datacenter)

Yes, I realized that; I've updated it to 3.

On 10/7/2017 8:41 PM, Sinan Polat wrote:
You are talking about the min_size, which should be 2 according to your text.

Please be aware that the min_size in your CRUSH rule is _not_ the replica size. The replica size is set on your pools.
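To illustrate the difference (the pool name "hybrid" is just an example here):

    # The replica count is a property of the pool:
    ceph osd pool set hybrid size 3       # keep 3 copies of each object
    ceph osd pool set hybrid min_size 2   # serve I/O with at least 2 copies up

    # The min_size / max_size fields inside a CRUSH rule are something
    # else entirely: they only declare the range of pool sizes for
    # which that rule is valid.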

On 7 Oct 2017 at 19:39, Peter Linder <peter.linder@xxxxxxxxxxxxxx> wrote:

On 10/7/2017 7:36 PM, Дробышевский, Владимир wrote:
Hello!

2017-10-07 19:12 GMT+05:00 Peter Linder <peter.linder@xxxxxxxxxxxxxx>:

The idea is to select an NVMe OSD first, and then select the rest from
HDD OSDs in different datacenters (see the CRUSH map below for the
hierarchy).
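To make the intent concrete, the kind of rule being described would look roughly like this (a sketch only; the rule name, id, root "default", and the nvme/hdd device classes are assumptions, not taken from the actual map):

    rule hybrid_nvme_hdd {
        id 1
        type replicated
        min_size 3
        max_size 3
        # first replica: one NVMe OSD
        step take default class nvme
        step chooseleaf firstn 1 type datacenter
        step emit
        # remaining replicas: HDD OSDs, each in a distinct datacenter
        step take default class hdd
        step chooseleaf firstn -1 type datacenter
        step emit
    }

Note that the two take/emit passes choose independently, so nothing stops the HDD pass from landing in the same datacenter as the NVMe copy, which is presumably how PGs end up co-located as the subject line describes.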

It's a little aside from the question, but why do you want to mix SSDs and HDDs in the same pool? Do you have a read-intensive workload and plan to use primary-affinity to get all reads from NVMe?
 

Yes, this is pretty much the idea: getting the read performance of NVMe while still maintaining triple redundancy at a reasonable cost.
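For the record, the primary-affinity part would be something like this (the OSD ids are made up for the example; older releases may also require mon_osd_allow_primary_affinity = true):

    # Assume osd.1 and osd.2 are the HDD replicas, osd.0 the NVMe one.
    ceph osd primary-affinity osd.1 0   # never act as primary
    ceph osd primary-affinity osd.2 0
    # osd.0 keeps the default affinity of 1, so it serves the reads.

That said, with a CRUSH rule that selects the NVMe OSD first, that OSD is already first in the acting set and therefore primary, so the affinity tweak may not even be needed.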


--
Regards,
Vladimir


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
