Re: Cache data consistency among multiple RGW instances

Hi Greg/Zhou,

I have a similar setup, with one HAProxy node and three RadosGW clients, and the rgw cache is disabled.
Earlier, when I had only one node running RadosGW, I could see a difference between inbound and outbound network traffic, sometimes by a factor of 10: if the traffic received from the OSDs was some 700-800 MB, only 90-100 MB of data was sent back to the client. Can you please guide me on what could be the reason for that?
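For reference, the cache is disabled via the rgw cache enabled option in the RGW section of ceph.conf; the section name below is just an example, not necessarily my exact one:

    [client.radosgw.gateway]
        rgw cache enabled = false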

Next, I set up one HAProxy node and three RadosGW nodes. I can still see a difference between inbound and outbound traffic, but it is smaller, around 100 MB.
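The relevant part of my haproxy.cfg looks roughly like this (server names, addresses, and ports are placeholders rather than my exact values):

    frontend rgw_front
        bind *:80
        mode http
        default_backend rgw_back

    backend rgw_back
        mode http
        balance roundrobin
        server rgw0 10.0.0.1:80 check
        server rgw1 10.0.0.2:80 check
        server rgw2 10.0.0.3:80 check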

I am not sure what is happening. Is it because Ceph does not support parallelized reads, or is it something else? Please help.

I am running CentOS 7, and the Ceph version is Firefly.



On Tue, Jan 20, 2015 at 10:37 AM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
You don't need to list them anywhere for this to work. They set up the necessary communication on their own by making use of watch-notify.
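At the librados level, the primitive looks roughly like this (a minimal sketch against the classic librados C API; the pool and object names are made up for illustration, and this is not the actual RGW cache code):

    #include <stdio.h>
    #include <string.h>
    #include <rados/librados.h>

    /* Runs in every watcher when someone notifies the object; RGW's
     * equivalent of this callback drops the stale cache entry. */
    static void on_notify(uint8_t opcode, uint64_t ver, void *arg)
    {
        printf("got notify, invalidating cached entry\n");
    }

    int main(void)
    {
        rados_t cluster;
        rados_ioctx_t io;
        uint64_t cookie;

        rados_create(&cluster, NULL);
        rados_conf_read_file(cluster, "/etc/ceph/ceph.conf");
        rados_connect(cluster);
        rados_ioctx_create(cluster, "mypool", &io);  /* made-up pool */

        /* Ensure the control object exists, then register interest. */
        rados_write_full(io, "cache.control", "", 0);
        rados_watch(io, "cache.control", 0, &cookie, on_notify, NULL);

        /* After updating cached metadata, a writer wakes all watchers,
         * including itself: */
        rados_notify(io, "cache.control", 0, "inval", strlen("inval"));

        rados_unwatch(io, "cache.control", cookie);
        rados_ioctx_destroy(io);
        rados_shutdown(cluster);
        return 0;
    }

Each instance registers its own watches this way, which is why no list of instances is needed anywhere.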
On Mon, Jan 19, 2015 at 6:55 PM ZHOU Yuan <dunk007@xxxxxxxxx> wrote:
Thanks Greg, that's an awesome feature I missed. I found some
explanation of the watch-notify mechanism here:
http://www.slideshare.net/Inktank_Ceph/sweil-librados.

Just to confirm: it looks like I need to list all the RGW
instances in ceph.conf, and then these RGW instances will
automatically do the cache invalidation when necessary?


Sincerely, Yuan


On Mon, Jan 19, 2015 at 10:58 PM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
> On Sun, Jan 18, 2015 at 6:40 PM, ZHOU Yuan <dunk007@xxxxxxxxx> wrote:
>> Hi list,
>>
>> I'm trying to understand the RGW cache consistency model. My Ceph
>> cluster has multiple RGW instances with HAProxy as the load balancer.
>> HAProxy chooses one RGW instance to serve each request (with
>> round-robin).
>> The question is: if the RGW cache is enabled, which is the default
>> behavior, there seems to be a potential cache inconsistency issue.
>> E.g., object0 is cached in both RGW-0 and RGW-1 at the same time.
>> Some time later it is updated through RGW-0. If the next read is
>> then issued to RGW-1, the outdated cached copy would be served,
>> since RGW-1 isn't aware of the update, and the data would be
>> inconsistent. Is this behavior expected, or is there anything I
>> missed?
>
> The RGW instances make use of the watch-notify primitive to keep their
> caches consistent. It shouldn't be a problem.
> -Greg





--
    .- <O> -.        .-====-.      ,-------.      .-=<>=-.
   /_-\'''/-_\      / / '' \ \     |,-----.|     /__----__\
  |/  o) (o  \|    | | ')(' | |   /,'-----'.\   |/ (')(') \|
   \   ._.   /      \ \    / /   {_/(') (')\_}   \   __   /
   ,>-_,,,_-<.       >'=jf='<     `.   _   .'    ,'--__--'.
 /      .      \    /        \     /'-___-'\    /    :|    \
(_)     .     (_)  /          \   /         \  (_)   :|   (_)
 \_-----'____--/  (_)        (_) (_)_______(_)   |___:|____|
  \___________/     |________|     \_______/     |_________|

Thanks and Regards
Ashish Chandra
OpenStack Developer, Cloud Engineering
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
