Oops, the list bounced my reply because of HTML.
-------- Forwarded Message --------
Subject: Re: Consistency problem with multiple rgws
Date: Thu, 15 Dec 2016 11:05:03 -0500
From: Casey Bodley <cbodley@xxxxxxxxxx>
To: 18896724396 <zhang_shaowen@xxxxxxx>, yehuda <yehuda@xxxxxxxxxx>
CC: ceph-devel <ceph-devel@xxxxxxxxxxxxxxx>, 郭占东
<guozhandong@xxxxxxxxxxxxxxxxxxxx>, lvshuhua
<lvshuhua@xxxxxxxxxxxxxxxxxxxx>
Hi,
On 12/15/2016 02:55 AM, 18896724396 wrote:
> Hi,
>
> We have two RGWs in the master zone and two in the slave zone. We
> used cosbench to upload 50,000 objects to a single bucket. After the
> data sync finished, the bucket stats were not the same between the
> master and slave zones.
The data sync may take a while with that many objects. How are you
verifying that data sync finished? Have you tried 'radosgw-admin bucket
sync status --bucket=<name>'?
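For example, assuming a bucket named 'test' (just a placeholder), you
can compare the sync state and the stats from a gateway in each zone:

    # run on a gateway host in each zone
    radosgw-admin bucket sync status --bucket=test
    radosgw-admin bucket stats --bucket=test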
> Then we tested the same case with one RGW in the master zone and one
> in the slave zone, and the stats still did not match. Finally we
> tested with one RGW and set rgw_num_rados_handles to 1 (we had set it
> to 2 before), and this time the stats were the same and correct. With
> multiple RGWs the problem remains.
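For reference, that knob lives in ceph.conf in the gateway's client
section; the section name below is only an example:

    [client.rgw.gateway]
    rgw_num_rados_handles = 1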
> Reading the code, I found that when we update the bucket index, rgw
> calls cls_rgw_bucket_complete_op to update the bucket stats, and the
> osd ultimately calls rgw_bucket_complete_op. In that function the osd
> first reads the bucket header, then updates the stats, and finally
> writes the header back. So I think two concurrent requests updating
> the stats may lead to this consistency problem, and maybe some other
> operations have the same problem. How can we solve it?
The osd guarantees that two operations in the same placement group won't
run concurrently, so this kind of logic in cls should be safe. How far
off are the bucket stats? Can you share some example output?
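To make the pattern concrete, here is a minimal, self-contained C++
sketch of that read-modify-write; all names are illustrative, this is
not the actual cls_rgw code:

    // Models the bucket-stats update performed by a cls method such as
    // rgw_bucket_complete_op. Illustrative only, not the cls_rgw code.
    #include <cstdint>
    #include <iostream>

    struct bucket_header {        // stands in for the bucket index header
      uint64_t num_objects = 0;
      uint64_t total_size = 0;
    };

    static bucket_header stored;  // header as persisted on the index object

    bucket_header read_header() { return stored; }
    void write_header(const bucket_header& h) { stored = h; }

    void complete_op(uint64_t obj_size) {
      bucket_header h = read_header();  // 1. read the current header
      h.num_objects += 1;               // 2. update the stats in memory
      h.total_size  += obj_size;
      write_header(h);                  // 3. write the header back
    }

    int main() {
      // Steps 1-3 are not atomic on their own, but the osd serializes
      // operations within a placement group, so two complete_op calls
      // on the same bucket index object never interleave.
      complete_op(4096);
      complete_op(8192);
      std::cout << stored.num_objects << " objects, "
                << stored.total_size << " bytes\n";
      return 0;
    }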
> Best regards,
> Zhang Shaowen
Thanks,
Casey