I was able to get this working with the
crushmap in my last post! I now have the intended behavior
together with the primary affinity change on the slow HDDs.
Very happy, and performance is excellent.
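
(In case it's useful to anyone else: the primary affinity part is just the standard per-OSD setting; the OSD id below is only an example, not one from my cluster.)

    # make a slow HDD OSD unlikely to be chosen as primary
    ceph osd primary-affinity osd.12 0.0
    # on older releases this also needs "mon osd allow primary affinity = true" in ceph.conf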
One thing was a little weird though: I had to manually change the weight of each hostgroup so that they are in the same ballpark. If they were too far apart, Ceph couldn't properly allocate 3 buckets for each PG, and some PGs ended up in state "remapped" or "degraded". When I changed the weights to similar values the problem went away (the CRUSH rule selects 3 out of 3 hostgroups anyway, so weight shouldn't even be a consideration there). Perhaps that is a bug? Roughly what I did is sketched below.
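
(The edit itself was the usual get/decompile/edit/recompile cycle; the group names and weights here are made up for illustration, not my real map.)

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # in crushmap.txt, give the hostgroup items similar weights in the root bucket, e.g.
    #   root default {
    #       ...
    #       item groupA weight 10.000
    #       item groupB weight 10.000
    #       item groupC weight 10.000
    #   }
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new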
/Peter

On 10/8/2017 3:22 PM, David Turner wrote: