Ceph multisite: after reads, writes, and deletes through the master zone, the master zone has leftover data

In my test environment, the Ceph version is v14.2.5, and there are two
RGWs, each an instance of a different zone: rgwA (master zone) and
rgwB (slave zone). Cosbench reads, writes, and deletes through rgwA.
The final result: rgwA has residual data, but rgwB has none.

Looking at the logs later, I found that this happened:
1. When rgwA deletes the object, rgwA has not yet started data sync
(or the sync is slow), so the copy of the object in the slave zone is
untouched.
2. When rgwA then starts data sync, rgwB still has the object, so
during full sync rgwA fetches the object back from the slave zone.
rgwA then enters the incremental sync state and replays the bilog, but
the bilog entry for the deleted object is filtered out, because its
sync trace already contains the master zone.
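To see what the bilog actually records on each side, the index log and the bucket's sync progress can be inspected on each zone's cluster. This is a hedged sketch: the bucket name `bucket1` is a placeholder from my reproduction below; substitute your own.

```shell
# List the bucket index log entries (creates, deletes, and their state)
# as seen by the local zone.  Run against each zone's cluster in turn.
radosgw-admin bilog list --bucket=bucket1

# Show this zone's replication progress for the bucket
# (full sync vs. incremental sync, and whether it is caught up).
radosgw-admin bucket sync status --bucket=bucket1
```

Comparing the bilog output from rgwA and rgwB should show the delete entry present on rgwA but never applied on rgwB.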

Below is a similar reproduction (both on the master branch and on
Ceph 14.2.5).
rgwA and rgwB are two zones of the same zonegroup; both are running
with rgw_run_sync_thread=true.
t1: Set rgw_run_sync_thread=false on rgwA and restart it for the
change to take effect. Use s3cmd to create a bucket in rgwA and upload
object1 to it. Use s3cmd to check whether object1 has been
synchronized to rgwB, or check that radosgw-admin bucket sync status
reports caught up. Once it has synchronized, proceed to the next step.
t2: Set rgw_run_sync_thread=false on rgwB and restart it for the
change to take effect. Delete object1 through rgwA.
t3: Set rgw_run_sync_thread=true on rgwA and restart it for the
change to take effect. Check that radosgw-admin bucket sync status
reports caught up.
t4: Set rgw_run_sync_thread=true on rgwB and restart it for the
change to take effect. Check that radosgw-admin bucket sync status
reports caught up.
The result: rgwA has object1, rgwB doesn't have object1.
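The timeline above, expressed as a command sketch. This is hedged: the RGW client IDs (`client.rgw.a` / `client.rgw.b`), the endpoints, and the bucket/object names are placeholders for my setup, not fixed names; adjust them to your deployment.

```shell
# t1: stop sync on rgwA, then create the bucket and upload object1
ceph config set client.rgw.a rgw_run_sync_thread false
systemctl restart ceph-radosgw@rgw.a
s3cmd mb s3://bucket1 --host=<rgwA-endpoint>
s3cmd put object1 s3://bucket1 --host=<rgwA-endpoint>
# wait until object1 appears on rgwB before continuing
s3cmd ls s3://bucket1 --host=<rgwB-endpoint>

# t2: stop sync on rgwB, then delete object1 through rgwA
ceph config set client.rgw.b rgw_run_sync_thread false
systemctl restart ceph-radosgw@rgw.b
s3cmd del s3://bucket1/object1 --host=<rgwA-endpoint>

# t3: re-enable sync on rgwA and wait until the bucket is caught up
ceph config set client.rgw.a rgw_run_sync_thread true
systemctl restart ceph-radosgw@rgw.a
radosgw-admin bucket sync status --bucket=bucket1

# t4: re-enable sync on rgwB and wait until the bucket is caught up
ceph config set client.rgw.b rgw_run_sync_thread true
systemctl restart ceph-radosgw@rgw.b
radosgw-admin bucket sync status --bucket=bucket1

# result: object1 is back on rgwA but absent on rgwB
s3cmd ls s3://bucket1 --host=<rgwA-endpoint>
s3cmd ls s3://bucket1 --host=<rgwB-endpoint>
```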
This problem is also tracked at https://tracker.ceph.com/issues/47555

Could someone help me? Alternatively, if the bucket on the rgwA
instance is not yet in the incremental sync state, can we prevent rgwA
from deleting object1?


