Cloudsync with AFR

Hi,

We recently started testing the cloudsync xlator on a replica volume and have noticed a few issues. We would like some advice on how to proceed with them.

1) As we know, when stubbing a file, cloudsync uses the file's mtime to decide whether the file should be truncated.

If the mtime provided as part of the setfattr operation is less than the current mtime of the file on the brick, stubbing is not completed.

This works fine on a plain distribute volume. But on a replica volume, the mtime can differ across the copies of the file on each of the replica bricks.
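In code, the check amounts to something like the following minimal sketch; the names here are ours, not the actual cloudsync symbols. The important point is that each brick evaluates this independently against its own on-disk mtime:

    #include <stdbool.h>
    #include <time.h>

    /* illustrative names, not the actual cloudsync symbols:
     * stub_mtime is the mtime supplied with the setfattr,
     * brick_mtime is the file's current mtime on this brick */
    static bool
    should_truncate(time_t stub_mtime, time_t brick_mtime)
    {
            /* stubbing proceeds only if the supplied mtime is at
             * least as new as the file's mtime on this brick */
            return stub_mtime >= brick_mtime;
    }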


During our testing, we came across the following scenario on a replica 3 volume with 3 bricks:

    We performed `setfattr -n "trusted.glusterfs.csou.complete" -v m1 file1` from our gluster mount to stub the files.
    On brick1 this operation succeeded and truncated file1 as it should have. But on brick2 and brick3, the mtime found
    on file1 was greater than m1, so the operation failed there.

    From AFR's perspective, the operation failed as a whole because quorum could not be met. But on the brick where the
    setxattr succeeded, the truncate had already been performed. So now one of the replica bricks is out of sync and AFR
    has no awareness of this. The file needs to be rolled back to its state before the setfattr.
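For clarity, here is an illustrative sketch (not actual AFR code) of why the operation fails as a whole even though one brick applied it: AFR counts per-brick successes against quorum, but by the time quorum is evaluated, the successful brick has already truncated the file:

    #include <stdio.h>

    /* illustrative, not actual AFR code: with quorum "auto" on an
     * odd replica count, more than half of the child bricks must
     * succeed for the transaction to succeed */
    static int
    txn_result(int success_count, int child_count)
    {
            if (success_count > child_count / 2)
                    return 0;   /* success reported to the client */
            return -1;          /* failure, even though some bricks
                                 * already applied the operation */
    }

    int
    main(void)
    {
            /* our scenario: 1 of 3 bricks applied the stubbing setxattr */
            printf("result = %d\n", txn_result(1, 3));   /* prints -1 */
            return 0;
    }

So the client sees a failure while brick1 keeps the truncated file.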

Ideally, it appears we need to add intelligence to AFR to handle this (a rough sketch of what we mean follows below). How do you suggest we do that?

This case is, of course, also applicable to EC volumes.
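To make the question concrete, below is purely a sketch of the kind of intelligence we mean, not a proposal for actual AFR internals; it will not compile as-is, and all of the helper names (mark_dirty_on_all_children, wind_setxattr_to_children, and so on) are hypothetical:

    int
    stub_setxattr_txn(call_frame_t *frame, xlator_t *this, loc_t *loc,
                      dict_t *xattr)
    {
            /* pre-op: mark all children dirty so a partial failure
             * is visible to self-heal (hypothetical helper) */
            mark_dirty_on_all_children(frame, this, loc);

            int succeeded = wind_setxattr_to_children(frame, this, loc, xattr);

            if (succeeded <= child_count(this) / 2) {
                    /* quorum lost: blame the children where the setxattr
                     * (and hence the truncate) went through, so self-heal
                     * rolls them back from the untouched copies */
                    blame_children_that_succeeded(frame, this, loc);
                    return -1;
            }

            /* post-op: quorum met; heal the laggards forward instead */
            clear_dirty(frame, this, loc);
            return 0;
    }

Since the untouched bricks form the majority in our scenario, normal self-heal source selection should restore brick1 from them, provided AFR is made aware that brick1 diverged.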

2) Given that cloudsync depends on mtime to make the truncation decision, how do we ensure that we don't end up in this situation again?
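To illustrate why mtime is fragile here: as far as we can tell, each brick's backend filesystem stamps mtime from its own local clock when a write lands, so the same replicated write can leave different mtimes behind. A toy example of the resulting divergence (the constants are made up):

    #include <stdio.h>
    #include <time.h>

    int
    main(void)
    {
            time_t brick1_mtime = 1546300800;  /* write landed at t        */
            time_t brick2_mtime = 1546300801;  /* same write, at t + 1 sec */
            time_t stub_mtime   = 1546300800;  /* the m1 sent by cloudsync */

            /* the same stub request gets different answers per brick */
            printf("brick1 truncates: %d\n", stub_mtime >= brick1_mtime); /* 1 */
            printf("brick2 truncates: %d\n", stub_mtime >= brick2_mtime); /* 0 */
            return 0;
    }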

Thanks,
Anuradha

