Re: Influencing reads/writes

My reading of http://ceph.com/releases/v0-63-released/ is that this is available for RBD reads in the dev branch.
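The release notes talk about RBD, but RBD sits on librados, and the per-operation read flags there (BALANCE_READS / LOCALIZE_READS) look like the relevant knob for reading from a nearby replica instead of always the primary. A rough sketch with the librados C API, assuming a placeholder pool "rbd" and object "foo" -- not claiming this is exactly the mechanism those release notes describe, and the flag only affects reads; writes still go to the primary:

    /* Sketch: read one object with the LOCALIZE_READS hint so the read
     * may be served by a replica considered "close" to the client.
     * Pool and object names are placeholders. */
    #include <rados/librados.h>
    #include <stdio.h>

    int main(void)
    {
        rados_t cluster;
        rados_ioctx_t io;
        char buf[4096];
        size_t bytes_read = 0;
        int rval = 0;
        int r;

        rados_create(&cluster, "admin");                 /* client.admin */
        rados_conf_read_file(cluster, "/etc/ceph/ceph.conf");
        r = rados_connect(cluster);
        if (r < 0) { fprintf(stderr, "connect failed: %d\n", r); return 1; }

        r = rados_ioctx_create(cluster, "rbd", &io);     /* placeholder pool */
        if (r < 0) { fprintf(stderr, "ioctx failed: %d\n", r); rados_shutdown(cluster); return 1; }

        rados_read_op_t op = rados_create_read_op();
        rados_read_op_read(op, 0, sizeof(buf), buf, &bytes_read, &rval);

        /* LOCALIZE_READS is only a hint to prefer a nearby replica. */
        r = rados_read_op_operate(op, io, "foo", LIBRADOS_OPERATION_LOCALIZE_READS);
        if (r == 0)
            printf("read %zu bytes (rval=%d)\n", bytes_read, rval);

        rados_release_read_op(op);
        rados_ioctx_destroy(io);
        rados_shutdown(cluster);
        return 0;
    }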

Sent from my iPad

On Jun 15, 2013, at 7:27 PM, Matthew Walster <matthew@xxxxxxxxxxx> wrote:

In the same way that we have CRUSH maps for determining the placement of placement groups, I was wondering whether anyone had stumbled across a way to influence a *client* (be it CephFS or RBD) as to where it should read/write data from/to.

That is to say, if I have three OSDs:

1 in $city[west]
1 in $city[east]
1 in $city[central]

I quite rightly want my data to end up in more than one location in case of failure. However, is there a way I could signal where a client is via the monitors (one mon per site, with the client doing some ICMP to each monitor to work out the closest node, or possibly hard-coding it into the mount options) so that it writes to the local OSD, which then replicates the data out to the other OSDs? This would essentially prevent the block from being passed over the inter-site link twice (once in each direction).

Similarly, if I read data, I'd rather it came over my 10G backbone within the datacenter instead of being fetched from another DC all the way across town over my 1G backhaul.

I'd appreciate any input, even if it is "stop being blind and RTFM again" -- I looked first, honest ;)

M
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com