Re: RGW Multisite delete weirdness

On Thu, Jun 2, 2016 at 6:01 AM, Abhishek Lekshmanan <abhishek@xxxxxxxx> wrote:
> [..]
> Yehuda Sadeh-Weinraub writes:
>>
>> Yes, that would be normal behaviour. The primary should not run
>> concurrent sync operations on the same object if the object has not
>> completed its previous sync operation. Looking at the log, it really seems
>> that we don't identify the concurrent sync operation on the same
>> object. This should have been fixed by commit
>> edea6d58dd25995bcc1ed4fc5be6f72ce4a6835a. Can you verify what
>> went wrong there (whether can_do_op() returned true, and why)?
>
> Looked into this a bit: can_do_op() returned true for the case where the
> primary issues a Fetch (or GET) and a delete is then issued, even though
> the Fetch has not completed yet. By putting a debug log around the point
> where we clear the keys, I can see that both the delete op and the get op
> create and delete the same key successfully.
>
> Which makes me suspect that different instances of
> RGWBucketIncSyncShardMarkerTrack are at play here, leading to
> independent values for key_to_marker. Is that possible?
>
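The key_to_marker gating being discussed can be sketched roughly like this. This is a minimal illustration, not the actual Ceph code; the struct and method names are loosely modeled on the identifiers mentioned in this thread:

```cpp
#include <cassert>
#include <map>
#include <string>

// Sketch of the idea behind RGWBucketIncSyncShardMarkerTrack's gating:
// an op on a key may only start if no other op on the same key is still
// in flight, which is tracked via the key_to_marker map.
struct MarkerTrack {
  std::map<std::string, std::string> key_to_marker;  // in-flight key -> marker

  // True only if no sync op on this key is currently in flight.
  bool can_do_op(const std::string& key) const {
    return key_to_marker.find(key) == key_to_marker.end();
  }

  void start_op(const std::string& key, const std::string& marker) {
    key_to_marker[key] = marker;
  }

  void finish_op(const std::string& key) {
    key_to_marker.erase(key);
  }
};
```

With a single shared tracker, can_do_op() for the delete would return false while the Fetch is in flight. If two independent tracker instances existed, as suspected above, each would see an empty key_to_marker of its own and both ops would be allowed through.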
Shouldn't happen, but maybe something went wrong. Try adding some more
info to the log message to see if that's the case.
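One cheap way to add that info: include the tracker's own address in the message emitted when a key is cleared, so two independent instances would show up as two different pointers in the log. A self-contained sketch (the names here are illustrative, not from the Ceph tree; in RGW this would be an ldout line rather than std::cout):

```cpp
#include <iostream>
#include <map>
#include <sstream>
#include <string>

// Hypothetical instrumentation: log the tracker's address alongside the
// key being cleared, making independent tracker instances visible.
struct InstrumentedTrack {
  std::map<std::string, std::string> key_to_marker;

  // Clears the key and returns the log line that was emitted.
  std::string clear_key(const std::string& key) {
    std::ostringstream line;
    line << "track=" << static_cast<const void*>(this)
         << " clearing key=" << key;
    key_to_marker.erase(key);
    std::cout << line.str() << '\n';  // stand-in for a debug-log statement
    return line.str();
  }
};
```

If the debug output shows two different track= addresses clearing the same key, that would confirm the two-instances suspicion.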

Thanks,
Yehuda


