Just to make sure you understand: reads are served by the primary OSD of the PG, not the nearest OSD, so reads will go between the datacenters. Also, each write will not ack until all 3 replicas have been written, which adds that latency to both writes and reads.
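You can check which OSD is the primary for any given object's PG with something like the following (the pool and object names are just placeholders):

    ceph osd map <pool> <object>

The first OSD listed in the acting set is the primary, and that is the OSD that serves client reads regardless of which datacenter the client is closest to.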
On Sat, Oct 7, 2017, 1:48 PM Peter Linder <peter.linder@xxxxxxxxxxxxxx> wrote:
On 10/7/2017 7:36 PM, Дробышевский, Владимир wrote:
Hello!
2017-10-07 19:12 GMT+05:00 Peter Linder <peter.linder@xxxxxxxxxxxxxx>:
The idea is to select an NVMe OSD, and
then select the rest from HDD OSDs in different datacenters (see CRUSH
map below for the hierarchy).
It's a little bit aside of the question, but why do you want to mix SSDs and HDDs in the same pool? Do you have a read-intensive workload and are you planning to use primary-affinity to get all reads from the NVMe drives?
Yes, this is pretty much the idea: getting the performance from NVMe reads, while still maintaining triple redundancy and a reasonable cost.
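For reference, a rule roughly along these lines is the kind of thing being discussed. This is only a sketch: it assumes Luminous-style device classes (nvme/hdd), a 'datacenter' bucket type in the hierarchy, and a root named 'default'; the rule name is made up and this is not the actual map from the thread.

    rule nvme_primary_hdd_rest {
            id 1
            type replicated
            min_size 3
            max_size 3
            # pick one NVMe OSD first; the first OSD emitted becomes the PG's primary
            step take default class nvme
            step chooseleaf firstn 1 type datacenter
            step emit
            # fill the remaining (size - 1) replicas from HDD OSDs
            step take default class hdd
            step chooseleaf firstn -1 type datacenter
            step emit
    }

Because the NVMe OSD comes first in the result, it ends up as the primary without needing primary-affinity. Note that a rule like this does not by itself guarantee that the HDD replicas land in datacenters different from the NVMe OSD's; getting that separation is the tricky part.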