You either need to accept that reads/writes will land on different data centers, ensure the primary OSD for a given pool is always in the desired data center, or use some other non-Ceph solution, which will have either expensive, eventual, or false consistency.
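As a rough sketch of the second option (assuming the CRUSH map already has datacenter buckets named dc1/dc2/dc3 containing the hosts; the rule name primary_in_dc1 and its id are made up), a rule that emits one OSD from dc1 first will, with default primary affinity, make that OSD the primary for every PG of a size-3 pool while the other replicas still land in dc2 and dc3:

    rule primary_in_dc1 {
            id 10
            type replicated
            min_size 1
            max_size 10
            step take dc1
            step chooseleaf firstn 1 type host
            step emit
            step take dc2
            step chooseleaf firstn 1 type host
            step emit
            step take dc3
            step chooseleaf firstn 1 type host
            step emit
    }

    # decompile, edit, recompile and inject the map, then point the pool at the rule
    ceph osd getcrushmap -o map.bin
    crushtool -d map.bin -o map.txt        # add the rule above
    crushtool -c map.txt -o map.new
    ceph osd setcrushmap -i map.new
    ceph osd pool set <pool> crush_rule primary_in_dc1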
On Fri, Nov 16, 2018, 10:07 AM Vlad Kopylov <vladkopy@xxxxxxxxx> wrote:
This is what Jean suggested. I understand it and it works with primary.
But what I need is for all clients to access the same files, not separate sets (like red, blue, green).
Thanks Konstantin.

On Fri, Nov 16, 2018 at 3:43 AM Konstantin Shalygin <k0ste@xxxxxxxx> wrote:
On 11/16/18 11:57 AM, Vlad Kopylov wrote:
> Exactly. But write operations should go to all nodes.
This can be set via primary affinity [1]: when a Ceph client reads or
writes data, it always contacts the primary OSD in the acting set.
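For example (just a sketch; the OSD IDs are illustrative), clients can be steered toward a particular primary by zeroing the primary affinity of the OSDs that should never be primary:

    ceph osd primary-affinity osd.3 0    # never selected as primary
    ceph osd primary-affinity osd.6 0
    # osd.0 keeps the default affinity of 1 and ends up primary for its PGs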
If you want to totally segregate I/O, you can use device classes.
Just create OSDs with different device classes (a sketch of the class-assignment commands follows the layout below):
dc1
  host1
    red   osd.0  (primary)
    blue  osd.1
    green osd.2
dc2
  host2
    red   osd.3
    blue  osd.4  (primary)
    green osd.5
dc3
  host3
    red   osd.6
    blue  osd.7
    green osd.8  (primary)
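For what it's worth, a hedged sketch of assigning such custom classes to existing OSDs (IDs taken from the layout above):

    ceph osd crush rm-device-class osd.0          # drop the auto-assigned hdd/ssd class first
    ceph osd crush set-device-class red osd.0
    # repeat with blue/green for osd.1, osd.2, and the OSDs on host2/host3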
create 3 CRUSH rules:
ceph osd crush rule create-replicated red default host red
ceph osd crush rule create-replicated blue default host blue
ceph osd crush rule create-replicated green default host green
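A quick sanity check that the rules and the per-class shadow trees came out as intended (standard CLI, given for illustration):

    ceph osd crush rule ls
    ceph osd crush tree --show-shadow    # shows the red/blue/green shadow hierarchies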
and 3 pools:
ceph osd pool create red 64 64 replicated red
ceph osd pool create blue 64 64 replicated blue
ceph osd pool create green 64 64 replicated green
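Each pool should then report its matching rule, e.g.:

    ceph osd pool get red crush_rule      # expected: crush_rule: red
    ceph osd pool get blue crush_rule
    ceph osd pool get green crush_rule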
[1] http://docs.ceph.com/docs/master/rados/operations/crush-map/#primary-affinity
k
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com