Hi Roland,
You would need to tune your Ceph CRUSH map with a custom rule in order to do that (write first on s3, then on the other hosts). That custom rule is then applied to your Proxmox pool.
(What you want to do is only worthwhile if you run your VMs from host s3.)
Can you give us your CRUSH map?
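As a rough illustration (not a drop-in solution, since we haven't seen your CRUSH map yet), such a rule could take the first replica from s3 and the remaining replicas from the rest of the tree. The bucket names (s3, default) and the ruleset id are assumptions:

```
# Hypothetical CRUSH rule: first replica on host s3, remaining
# replicas chosen from the default root. Adjust names/ids to
# match your actual CRUSH map.
rule s3_first {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take s3                        # assumed host bucket name
        step chooseleaf firstn 1 type osd   # primary replica on s3
        step emit
        step take default                   # assumed root bucket name
        step chooseleaf firstn -1 type host # remaining replicas elsewhere
        step emit
}
```

Note that with this naive two-step rule the second pass can still land a replica back on s3 unless your hierarchy separates s3 from the other hosts (e.g. a custom root excluding s3). You would then point the pool at the rule, e.g. `ceph osd pool set <pool> crush_ruleset 1`.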
2015-01-13 22:03 GMT+01:00 Roland Giesler <roland@xxxxxxxxxxxxxx>:
I have a 4-node Ceph cluster, but the disks are not equally distributed across all machines (they are substantially different from each other). One machine has 12 x 1TB SAS drives (h1), another has 8 x 300GB SAS (s3), and two machines have only two 1TB drives each (s2 & s1).

Now machine s3 has by far the most CPUs and RAM, so I'm running my VMs mostly from there, but I want to make sure that writes to the Ceph cluster go to the "local" OSDs on s3 first, and that the additional writes/copies then happen over the network.

Is this possible with Ceph? The VMs are KVM in Proxmox, in case it's relevant.
regards
Roland
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com