Re: read performance, separate client CRUSH maps or limit osd read access from each client

On 11/16/18 11:57 AM, Vlad Kopylov wrote:
> Exactly. But write operations should go to all nodes.

This can be set via primary affinity [1]: when a Ceph client reads or writes data, it always contacts the primary OSD in the acting set.
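For example, a minimal sketch of steering the primary role (assuming the osd.0..osd.8 IDs from the layout below; on older pre-Luminous releases you may also need to enable mon_osd_allow_primary_affinity first):

ceph osd primary-affinity osd.0 0    # never act as primary
ceph osd primary-affinity osd.1 1    # preferred as primary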


If you want to totally segregate I/O, you can use device classes.

Create the OSDs with different device classes (a sketch of the class-assignment commands follows the layout):

dc1
  host1
    red   osd.0 primary
    blue  osd.1
    green osd.2
dc2
  host2
    red   osd.3
    blue  osd.4 primary
    green osd.5
dc3
  host3
    red   osd.6
    blue  osd.7
    green osd.8 primary
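Each OSD then needs its class set by hand; a sketch for host1 (if an OSD already carries an automatically assigned class such as hdd or ssd, it must be removed first):

ceph osd crush rm-device-class osd.0 osd.1 osd.2
ceph osd crush set-device-class red osd.0
ceph osd crush set-device-class blue osd.1
ceph osd crush set-device-class green osd.2

and the same pattern for osd.3..osd.8 on host2 and host3.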


Create 3 CRUSH rules:

ceph osd crush rule create-replicated red default host red
ceph osd crush rule create-replicated blue default host blue
ceph osd crush rule create-replicated green default host green
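You can sanity-check what got created, e.g.:

ceph osd crush rule ls
ceph osd crush rule dump red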


and 3 pools (note the third pool is green, not blue):

ceph osd pool create red 64 64 replicated red
ceph osd pool create blue 64 64 replicated blue
ceph osd pool create green 64 64 replicated green
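and confirm each pool picked up its rule:

ceph osd pool get red crush_rule
ceph osd pool get blue crush_rule
ceph osd pool get green crush_rule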


[1] http://docs.ceph.com/docs/master/rados/operations/crush-map/#primary-affinity



k




