Hi Dietmar,
I am trying to understand the problem and have a few questions. Usually this would be because of a gfid mismatch, but I don't see that in your case, so I am a little more interested.
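If you want to double-check on your side, the gfid of the failing directory can be read directly from the bricks with getfattr and compared between master and slave (a minimal sketch; run as root on the brick hosts — the master path is taken from your log, the slave brick path is an assumption since it is not shown in your mail):

# on a master brick:
getfattr -n trusted.gfid -e hex /brick1/mvol1/.trashcan/test1/b1/Oracle_VM_VirtualBox_Extension
# on the corresponding slave brick (adjust the path to your slave layout):
getfattr -n trusted.gfid -e hex /<slave-brick>/.trashcan/test1/b1/Oracle_VM_VirtualBox_Extension

Differing hex values would indicate a gfid mismatch; a missing directory or xattr on the slave would suggest the entry was never created there.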
On Mon, Mar 12, 2018 at 10:13 PM, Dietmar Putz <dietmar.putz@xxxxxxxxx> wrote:
Hello,
in regard to
https://bugzilla.redhat.com/show_bug.cgi?id=1434066
I have run into another issue when using the trashcan feature on a distributed-replicated volume running geo-replication (GlusterFS 3.12.6 on Ubuntu 16.04.4),
e.g. removing an entire directory with subfolders:
tron@gl-node1:/myvol-1/test1/b1$ rm -rf *
afterwards, listing the files in the trashcan:
tron@gl-node1:/myvol-1/test1$ ls -la /myvol-1/.trashcan/test1/b1/
leads to an outage of the geo-replication.
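(the session state can be checked with the standard CLI, e.g.:
root@gl-node1:~# gluster volume geo-replication mvol1 gl-node5-int::mvol1 status detail)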
error on master-01 and master-02:
[2018-03-12 13:37:14.827204] I [master(/brick1/mvol1):1385:crawl] _GMaster: slave's time stime=(1520861818, 0)
[2018-03-12 13:37:14.835535] E [master(/brick1/mvol1):784:log_failures] _GMaster: ENTRY FAILED data=({'uid': 0, 'gfid': 'c38f75e3-194a-4d22-9094-50ac8f8756e7', 'gid': 0, 'mode': 16877, 'entry': '.gfid/5531bd64-ac50-462b-943e-c0bf1c52f52c/Oracle_VM_VirtualBox_Extension', 'op': 'MKDIR'}, 2, {'gfid_mismatch': False, 'dst': False})
[2018-03-12 13:37:14.835911] E [syncdutils(/brick1/mvol1):299:log_raise_exception] <top>: The above directory failed to sync. Please fix it to proceed further.
both gfids of the directories as shown in the log:
brick1/mvol1/.trashcan/test1/b1 0x5531bd64ac50462b943ec0bf1c52f52c
brick1/mvol1/.trashcan/test1/b1/Oracle_VM_VirtualBox_Extension 0xc38f75e3194a4d22909450ac8f8756e7
The directory shown contains just one file, which is stored on gl-node3 and gl-node4, while gl-node1 and gl-node2 are in the geo-replication error state.
Since the file-size limitation of the trashcan is obsolete, I am really interested in using the trashcan feature, but I am concerned that it will interrupt the geo-replication entirely.
Has anybody else faced this situation... any hints, workarounds?
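As an interim measure I could of course disable the trashcan again and restart the session; a minimal sketch with the standard CLI (untested whether this alone clears the faulty state):

root@gl-node1:~# gluster volume geo-replication mvol1 gl-node5-int::mvol1 stop
root@gl-node1:~# gluster volume set mvol1 features.trash off
root@gl-node1:~# gluster volume geo-replication mvol1 gl-node5-int::mvol1 start

But that would mean giving up the trashcan feature, which is not what I want.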
best regards
Dietmar Putz
root@gl-node1:~/tmp# gluster volume info mvol1
Volume Name: mvol1
Type: Distributed-Replicate
Volume ID: a1c74931-568c-4f40-8573-dd344553e557
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gl-node1-int:/brick1/mvol1
Brick2: gl-node2-int:/brick1/mvol1
Brick3: gl-node3-int:/brick1/mvol1
Brick4: gl-node4-int:/brick1/mvol1
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
features.trash-max-filesize: 2GB
features.trash: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
root@gl-node1:/myvol-1/test1# gluster volume geo-replication mvol1 gl-node5-int::mvol1 config
special_sync_mode: partial
gluster_log_file: /var/log/glusterfs/geo-replication/mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1.gluster.log
ssh_command: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem
change_detector: changelog
use_meta_volume: true
session_owner: a1c74931-568c-4f40-8573-dd344553e557
state_file: /var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/monitor.status
gluster_params: aux-gfid-mount acl
remote_gsyncd: /nonexistent/gsyncd
working_dir: /var/lib/misc/glusterfsd/mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1
state_detail_file: /var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1-detail.status
gluster_command_dir: /usr/sbin/
pid_file: /var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/monitor.pid
georep_session_working_dir: /var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/
ssh_command_tar: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/tar_ssh.pem
master.stime_xattr_name: trusted.glusterfs.a1c74931-568c-4f40-8573-dd344553e557.d62bda3a-1396-492a-ad99-7c6238d93c6a.stime
changelog_log_file: /var/log/glusterfs/geo-replication/mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1-changes.log
socketdir: /var/run/gluster
volume_id: a1c74931-568c-4f40-8573-dd344553e557
ignore_deletes: false
state_socket_unencoded: /var/lib/glusterd/geo-replication/mvol1_gl-node5-int_mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1.socket
log_file: /var/log/glusterfs/geo-replication/mvol1/ssh%3A%2F%2Froot%40192.168.178.65%3Agluster%3A%2F%2F127.0.0.1%3Amvol1.log
access_mount: true
root@gl-node1:/myvol-1/test1#
--
Thanks and Regards,
Kotresh H R