Re: geo-replication 3.6.7 - no transition from hybrid to changelog crawl

Hello Aravinda,

Thank you for your reply.
I ran 'find /gluster-export -type f -exec ls -lisa {} \; > ls-lisa-gluster-export-`hostname`.out' on each brick and checked the output for files with a link count of less than 2.
I found nothing: all files on each brick have exactly 2 links.

The combined output for all bricks contains more than 7 million lines, including .glusterfs but excluding irrelevant directories and files:

tron@dp-server:~/geo_rep_3$ cat ls-lisa-gluster-wien-0* | egrep -v 'indices|landfill|changelogs|health_check' | wc -l
7007316

The link count is in field $4:

tron@dp-server:~/geo_rep_3$ cat ls-lisa-gluster-wien-0* | egrep -v 'indices|landfill|changelogs|health_check' | awk '{if($4=="2")print}' | tail -1
62648153697 4 -rw-rw-rw- 2 root root 1713 Jan 4 01:44 /gluster-export/3500/files/16/01/387233/3500-6dqMmBcVby97PQtR.ism

tron@dp-server:~/geo_rep_3$ cat ls-lisa-gluster-wien-0* | egrep -v 'indices|landfill|changelogs|health_check' | awk '{if($4=="1")print}'
tron@dp-server:~/geo_rep_3$
tron@dp-server:~/geo_rep_3$ cat ls-lisa-gluster-wien-0* | egrep -v 'indices|landfill|changelogs|health_check' | awk '{if($4!="2")print}'
tron@dp-server:~/geo_rep_3$
tron@dp-server:~/geo_rep_3$ cat ls-lisa-gluster-wien-0* | egrep -v 'indices|landfill|changelogs|health_check' | awk '{print $4}' | sort | uniq -c
7007316 2
tron@dp-server:~/geo_rep_3$

If I understood you correctly, this cannot be the reason for the problem.
Is there anything else I can check on the master or slave to analyse the problem?

Any help would be very much appreciated.
best regards
dietmar



On 04.01.2016 at 07:14, Aravinda wrote:
Hi,

This looks like a Geo-replication issue caused by a race between Create and Rename. Geo-replication uses gfid-access (mounting the volume with aux-gfid-mount) to create and rename files. If a Create and a Rename are replayed more than once, Geo-rep creates two files with the same GFID (not hardlinks), which leaves one file without a backend GFID link.

Milind is working on a patch to disallow creating a second file with the same GFID.
@Milind, please provide an update on your patch.

As a workaround, identify all files in the Slave volume that do not have backend links and delete them (only on the Slaves; keep a backup if required).
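For context: each regular file on a brick normally has a second hardlink under the brick's .glusterfs directory, named after its GFID and fanned out over the first two byte pairs of the GFID. A minimal sketch of that path mapping (the helper name `gfid_path` is made up for illustration; the sample GFID is one from the slave log in this thread):

```shell
# Map a GFID to its backend hardlink path inside a brick.
# Assumed layout: <brick>/.glusterfs/<first 2 hex chars>/<next 2>/<full gfid>
gfid_path() {
    g=$1
    a=$(printf '%s' "$g" | cut -c1-2)
    b=$(printf '%s' "$g" | cut -c3-4)
    printf '.glusterfs/%s/%s/%s\n' "$a" "$b" "$g"
}

# GFID taken from the slave log in this thread:
gfid_path 5e436e5b-086b-4720-9e70-0e49c8e09698
# -> .glusterfs/5e/43/5e436e5b-086b-4720-9e70-0e49c8e09698
```

A file whose backend GFID path is missing (link count 1 instead of 2) is exactly the kind of orphan the workaround targets.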

In the brick backend, crawl and look for files with a link count of less than 2 (excluding the .glusterfs and .trashcan directories).
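The crawl described above can be sketched with find. The miniature brick below is fabricated purely to demonstrate the filter; on a real brick you would point find at the brick root (e.g. /gluster-export) and keep a backup before deleting anything:

```shell
# Fabricated mini-brick: one healthy file (hardlinked into .glusterfs,
# link count 2) and one orphan without a backend GFID link (count 1).
BRICK=$(mktemp -d)
mkdir -p "$BRICK/.glusterfs/5e/43" "$BRICK/data"
echo healthy > "$BRICK/data/good.webm"
ln "$BRICK/data/good.webm" "$BRICK/.glusterfs/5e/43/5e436e5b-086b-4720-9e70-0e49c8e09698"
echo orphan > "$BRICK/data/orphan.webm"

# Crawl the brick, pruning .glusterfs (and .trashcan, if present), and
# print regular files whose link count is less than 2:
find "$BRICK" \( -name .glusterfs -o -name .trashcan \) -prune \
    -o -type f -links -2 -print
# prints only $BRICK/data/orphan.webm
```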

regards
Aravinda

On 01/02/2016 09:56 PM, Dietmar Putz wrote:
Hello all,

One more time I need some help with a geo-replication problem.
I recently started a new geo-replication session. The master volume contains about 45 TB of data, and the slave volume was newly created before the geo-replication setup was done. Master and slave are 6-node distributed-replicated volumes running glusterfs-server 3.6.7-ubuntu1~trusty1. Geo-rep started without problems. For a few days now the slave volume has contained about 200 GB more data than the master volume, and I expected the crawl status to change from 'hybrid crawl' to 'changelog crawl', but it remains in 'hybrid crawl'. The 'status detail' output far below shows more than 10 million synced files, while the entire master volume contains only about 2 million files. Some tests show that files are not deleted on the slave volume. As far as I know, the hybrid crawl has the limitation of not replicating deletes and renames to the slave, so geo-rep needs to reach the 'changelog crawl' status after the initial sync.
Usually this should happen more or less automatically, is that right?

Geo-rep frequently fails with the "OSError: [Errno 16] Device or resource busy" shown below; this error appears about every 3-4 hours on each active master node. I guess the frequent occurrence of this error prevents geo-rep from changing to 'changelog crawl'. Has anybody experienced such a problem, and is this the cause?

I found some similar reports on gluster.org for GlusterFS 3.5, 3.6 and 3.7, but none of them pointed me to a solution. Does anybody know a solution, or is there a workaround to reach the 'changelog crawl' status?

Any help would be very much appreciated.
best regards
dietmar




Master gluster-ger-ber-07:
-----------------------------

[2016-01-02 11:39:48.122546] I [master(/gluster-export):1343:crawl] _GMaster: processing xsync changelog /var/lib/misc/glusterfsd/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01/9d7139ecf10a6fc33a6e41d8d6e56984/xsync/XSYNC-CHANGELOG.1451724692
[2016-01-02 11:42:55.182342] I [master(/gluster-export):1343:crawl] _GMaster: processing xsync changelog /var/lib/misc/glusterfsd/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01/9d7139ecf10a6fc33a6e41d8d6e56984/xsync/XSYNC-CHANGELOG.1451724751
[2016-01-02 11:44:11.168962] I [master(/gluster-export):1340:crawl] _GMaster: finished hybrid crawl syncing, stime: (-1, 0)
[2016-01-02 11:44:11.246845] I [master(/gluster-export):490:crawlwrap] _GMaster: primary master with volume id 6a071cfa-b150-4f0b-b1ed-96ab5d4bd671 ...
[2016-01-02 11:44:11.265209] I [master(/gluster-export):501:crawlwrap] _GMaster: crawl interval: 3 seconds
[2016-01-02 11:44:11.896940] I [master(/gluster-export):1192:crawl] _GMaster: slave's time: (-1, 0)
[2016-01-02 11:44:12.171761] E [repce(/gluster-export):207:__call__] RepceClient: call 18897:139899553576768:1451735052.09 (entry_ops) failed on peer with OSError
[2016-01-02 11:44:12.172101] E [syncdutils(/gluster-export):270:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 164, in main
    main_i()
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 643, in main_i
    local.service_loop(*[r for r in [remote] if r])
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 1344, in service_loop
    g2.crawlwrap()
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 539, in crawlwrap
    self.crawl(no_stime_update=no_stime_update)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 1204, in crawl
    self.process(changes)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 956, in process
    self.process_change(change, done, retry)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 920, in process_change
    self.slave.server.entry_ops(entries)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/repce.py", line 226, in __call__
    return self.ins(self.meth, *a)
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/repce.py", line 208, in __call__
    raise res
OSError: [Errno 16] Device or resource busy
[2016-01-02 11:44:12.258982] I [syncdutils(/gluster-export):214:finalize] <top>: exiting.
[2016-01-02 11:44:12.321808] I [repce(agent):92:service_loop] RepceServer: terminating on reaching EOF.
[2016-01-02 11:44:12.349766] I [syncdutils(agent):214:finalize] <top>: exiting.
[2016-01-02 11:44:12.435992] I [monitor(monitor):141:set_state] Monitor: new state: faulty
[2016-01-02 11:44:23.164284] I [monitor(monitor):215:monitor] Monitor: ------------------------------------------------------------
[2016-01-02 11:44:23.169981] I [monitor(monitor):216:monitor] Monitor: starting gsyncd worker
[2016-01-02 11:44:23.216662] I [changelogagent(agent):72:__init__] ChangelogAgent: Agent listining...
[2016-01-02 11:44:23.239778] I [gsyncd(/gluster-export):633:main_i] <top>: syncing: gluster://localhost:ger-ber-01 -> ssh://root@gluster-wien-07-int:gluster://localhost:aut-wien-vol-01
[2016-01-02 11:44:26.358613] I [master(/gluster-export):75:gmaster_builder] <top>: setting up xsync change detection mode
[2016-01-02 11:44:26.358983] I [master(/gluster-export):413:__init__] _GMaster: using 'rsync' as the sync engine
[2016-01-02 11:44:26.359985] I [master(/gluster-export):75:gmaster_builder] <top>: setting up changelog change detection mode
[2016-01-02 11:44:26.360243] I [master(/gluster-export):413:__init__] _GMaster: using 'rsync' as the sync engine
[2016-01-02 11:44:26.361159] I [master(/gluster-export):75:gmaster_builder] <top>: setting up changeloghistory change detection mode
[2016-01-02 11:44:26.361377] I [master(/gluster-export):413:__init__] _GMaster: using 'rsync' as the sync engine
[2016-01-02 11:44:26.402601] I [master(/gluster-export):1311:register] _GMaster: xsync temp directory: /var/lib/misc/glusterfsd/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01/9d7139ecf10a6fc33a6e41d8d6e56984/xsync
[2016-01-02 11:44:26.402848] I [resource(/gluster-export):1318:service_loop] GLUSTER: Register time: 1451735066
[2016-01-02 11:44:27.26012] I [master(/gluster-export):490:crawlwrap] _GMaster: primary master with volume id 6a071cfa-b150-4f0b-b1ed-96ab5d4bd671 ...
[2016-01-02 11:44:27.31605] I [master(/gluster-export):501:crawlwrap] _GMaster: crawl interval: 1 seconds
[2016-01-02 11:44:27.66868] I [master(/gluster-export):1226:crawl] _GMaster: starting history crawl... turns: 1, stime: (-1, 0)
[2016-01-02 11:44:27.67043] I [master(/gluster-export):1229:crawl] _GMaster: stime not available, abandoning history crawl
[2016-01-02 11:44:27.112426] I [master(/gluster-export):490:crawlwrap] _GMaster: primary master with volume id 6a071cfa-b150-4f0b-b1ed-96ab5d4bd671 ...
[2016-01-02 11:44:27.117506] I [master(/gluster-export):501:crawlwrap] _GMaster: crawl interval: 60 seconds
[2016-01-02 11:44:27.140610] I [master(/gluster-export):1333:crawl] _GMaster: starting hybrid crawl..., stime: (-1, 0)
[2016-01-02 11:45:23.417233] I [monitor(monitor):141:set_state] Monitor: new state: Stable
[2016-01-02 11:45:48.225915] I [master(/gluster-export):1343:crawl] _GMaster: processing xsync changelog /var/lib/misc/glusterfsd/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01/9d7139ecf10a6fc33a6e41d8d6e56984/xsync/XSYNC-CHANGELOG.1451735067
[2016-01-02 11:47:08.65231] I [master(/gluster-export):1343:crawl] _GMaster: processing xsync changelog /var/lib/misc/glusterfsd/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01/9d7139ecf10a6fc33a6e41d8d6e56984/xsync/XSYNC-CHANGELOG.1451735148
...


slave gluster-wien-07 :
------------------------

[2016-01-02 11:44:12.007744] W [fuse-bridge.c:1261:fuse_err_cbk] 0-glusterfs-fuse: 1959820: SETXATTR() /.gfid/5e436e5b-086b-4720-9e70-0e49c8e09698 => -1 (File exists)
[2016-01-02 11:44:12.010970] W [client-rpc-fops.c:240:client3_3_mknod_cbk] 0-aut-wien-vol-01-client-5: remote operation failed: File exists. Path: <gfid:666bceac-7c14-4efd-81fe-8185458fcf1f>/11-kxyrM3NgdtBWPFv4.webm
[2016-01-02 11:44:12.011327] W [client-rpc-fops.c:240:client3_3_mknod_cbk] 0-aut-wien-vol-01-client-4: remote operation failed: File exists. Path: <gfid:666bceac-7c14-4efd-81fe-8185458fcf1f>/11-kxyrM3NgdtBWPFv4.webm
[2016-01-02 11:44:12.012054] W [fuse-bridge.c:1261:fuse_err_cbk] 0-glusterfs-fuse: 1959822: SETXATTR() /.gfid/666bceac-7c14-4efd-81fe-8185458fcf1f => -1 (File exists)
[2016-01-02 11:44:12.024743] W [client-rpc-fops.c:240:client3_3_mknod_cbk] 0-aut-wien-vol-01-client-5: remote operation failed: File exists. Path: <gfid:5bfd6f99-07e8-4b2f-844b-aa0b6535c055>/Gf4FYbpDTC7yK2mv.png
[2016-01-02 11:44:12.024970] W [client-rpc-fops.c:240:client3_3_mknod_cbk] 0-aut-wien-vol-01-client-4: remote operation failed: File exists. Path: <gfid:5bfd6f99-07e8-4b2f-844b-aa0b6535c055>/Gf4FYbpDTC7yK2mv.png
[2016-01-02 11:44:12.025601] W [fuse-bridge.c:1261:fuse_err_cbk] 0-glusterfs-fuse: 1959823: SETXATTR() /.gfid/5bfd6f99-07e8-4b2f-844b-aa0b6535c055 => -1 (File exists)
[2016-01-02 11:44:12.100688] I [dht-selfheal.c:1065:dht_selfheal_layout_new_directory] 0-aut-wien-vol-01-dht: chunk size = 0xffffffff / 57217563 = 0x4b
[2016-01-02 11:44:12.100765] I [dht-selfheal.c:1103:dht_selfheal_layout_new_directory] 0-aut-wien-vol-01-dht: assigning range size 0x5542c4a3 to aut-wien-vol-01-replicate-0
[2016-01-02 11:44:12.100785] I [dht-selfheal.c:1103:dht_selfheal_layout_new_directory] 0-aut-wien-vol-01-dht: assigning range size 0x5542c4a3 to aut-wien-vol-01-replicate-1
[2016-01-02 11:44:12.100800] I [dht-selfheal.c:1103:dht_selfheal_layout_new_directory] 0-aut-wien-vol-01-dht: assigning range size 0x5542c4a3 to aut-wien-vol-01-replicate-2
[2016-01-02 11:44:12.100839] I [MSGID: 109036] [dht-common.c:6296:dht_log_new_layout_for_dir_selfheal] 0-aut-wien-vol-01-dht: Setting layout of <gfid:d4815ee4-3348-4105-9136-d0219d956ed8>/.dstXXX0HUpRD with [Subvol_name: aut-wien-vol-01-replicate-0, Err: -1 , Start: 0 , Stop: 1430439074 ], [Subvol_name: aut-wien-vol-01-replicate-1, Err: -1 , Start: 1430439075 , Stop: 2860878149 ], [Subvol_name: aut-wien-vol-01-replicate-2, Err: -1 , Start: 2860878150 , Stop: 4294967295 ],
[2016-01-02 11:44:12.114192] W [client-rpc-fops.c:306:client3_3_mkdir_cbk] 0-aut-wien-vol-01-client-2: remote operation failed: File exists. Path: <gfid:cd3fd9ba-34b8-4c6b-ba72-4796b80b0ff2>/.dstXXb70G3x
[2016-01-02 11:44:12.114275] W [client-rpc-fops.c:306:client3_3_mkdir_cbk] 0-aut-wien-vol-01-client-3: remote operation failed: File exists. Path: <gfid:cd3fd9ba-34b8-4c6b-ba72-4796b80b0ff2>/.dstXXb70G3x
[2016-01-02 11:44:12.114879] W [fuse-bridge.c:1261:fuse_err_cbk] 0-glusterfs-fuse: 1959831: SETXATTR() /.gfid/cd3fd9ba-34b8-4c6b-ba72-4796b80b0ff2 => -1 (File exists)
[2016-01-02 11:44:12.118473] I [dht-layout.c:663:dht_layout_normalize] 0-aut-wien-vol-01-dht: Found anomalies in /.gfid/cd3fd9ba-34b8-4c6b-ba72-4796b80b0ff2/.dstXXb70G3x (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0
[2016-01-02 11:44:12.118537] I [dht-selfheal.c:1065:dht_selfheal_layout_new_directory] 0-aut-wien-vol-01-dht: chunk size = 0xffffffff / 57217563 = 0x4b
[2016-01-02 11:44:12.118562] I [dht-selfheal.c:1103:dht_selfheal_layout_new_directory] 0-aut-wien-vol-01-dht: assigning range size 0x5542c4a3 to aut-wien-vol-01-replicate-2
[2016-01-02 11:44:12.118579] I [dht-selfheal.c:1103:dht_selfheal_layout_new_directory] 0-aut-wien-vol-01-dht: assigning range size 0x5542c4a3 to aut-wien-vol-01-replicate-0
[2016-01-02 11:44:12.118613] I [dht-selfheal.c:1103:dht_selfheal_layout_new_directory] 0-aut-wien-vol-01-dht: assigning range size 0x5542c4a3 to aut-wien-vol-01-replicate-1
[2016-01-02 11:44:12.120352] I [MSGID: 109036] [dht-common.c:6296:dht_log_new_layout_for_dir_selfheal] 0-aut-wien-vol-01-dht: Setting layout of /.gfid/cd3fd9ba-34b8-4c6b-ba72-4796b80b0ff2/.dstXXb70G3x with [Subvol_name: aut-wien-vol-01-replicate-0, Err: -1 , Start: 1430439075 , Stop: 2860878149 ], [Subvol_name: aut-wien-vol-01-replicate-1, Err: -1 , Start: 2860878150 , Stop: 4294967295 ], [Subvol_name: aut-wien-vol-01-replicate-2, Err: -1 , Start: 0 , Stop: 1430439074 ],
[2016-01-02 11:44:12.630949] I [fuse-bridge.c:4927:fuse_thread_proc] 0-fuse: unmounting /tmp/gsyncd-aux-mount-tOUOsz
[2016-01-02 11:44:12.633952] W [glusterfsd.c:1211:cleanup_and_exit] (--> 0-: received signum (15), shutting down
[2016-01-02 11:44:12.633964] I [fuse-bridge.c:5607:fini] 0-fuse: Unmounting '/tmp/gsyncd-aux-mount-tOUOsz'.
[2016-01-02 11:44:23.946702] I [MSGID: 100030] [glusterfsd.c:2035:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.6.7 (args: /usr/sbin/glusterfs --aux-gfid-mount --log-file=/var/log/glusterfs/geo-replication-slaves/6a071cfa-b150-4f0b-b1ed-96ab5d4bd671:gluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01.gluster.log --volfile-server=localhost --volfile-id=aut-wien-vol-01 --client-pid=-1 /tmp/gsyncd-aux-mount-otU3wS)
[2016-01-02 11:44:24.042128] I [dht-shared.c:337:dht_init_regex] 0-aut-wien-vol-01-dht: using regex rsync-hash-regex = ^\.(.+)\.[^.]+$
[2016-01-02 11:44:24.046315] I [client.c:2268:notify] 0-aut-wien-vol-01-client-0: parent translators are ready, attempting connect on transport
[2016-01-02 11:44:24.046532] I [client.c:2268:notify] 0-aut-wien-vol-01-client-1: parent translators are ready, attempting connect on transport
[2016-01-02 11:44:24.046664] I [client.c:2268:notify] 0-aut-wien-vol-01-client-2: parent translators are ready, attempting connect on transport
[2016-01-02 11:44:24.046806] I [client.c:2268:notify] 0-aut-wien-vol-01-client-3: parent translators are ready, attempting connect on transport
[2016-01-02 11:44:24.046940] I [client.c:2268:notify] 0-aut-wien-vol-01-client-4: parent translators are ready, attempting connect on transport
[2016-01-02 11:44:24.047070] I [client.c:2268:notify] 0-aut-wien-vol-01-client-5: parent translators are ready, attempting connect on transport
Final graph:
+------------------------------------------------------------------------------+
  1: volume aut-wien-vol-01-client-0
  2:     type protocol/client
  3:     option ping-timeout 10
  4:     option remote-host gluster-wien-02-int
  5:     option remote-subvolume /gluster-export
  6:     option transport-type socket
  7:     option username 6b3d1fae-fa3e-4305-a4a0-fd27c7ac9929
  8:     option password 8777e154-476c-449a-89b2-3199872e4a1f
  9:     option send-gids true
 10: end-volume
 11:
 12: volume aut-wien-vol-01-client-1
 13:     type protocol/client
 14:     option ping-timeout 10
 15:     option remote-host gluster-wien-03-int
 16:     option remote-subvolume /gluster-export
 17:     option transport-type socket
 18:     option username 6b3d1fae-fa3e-4305-a4a0-fd27c7ac9929
 19:     option password 8777e154-476c-449a-89b2-3199872e4a1f
 20:     option send-gids true
 21: end-volume
 22:
 23: volume aut-wien-vol-01-replicate-0
 24:     type cluster/replicate
 25:     subvolumes aut-wien-vol-01-client-0 aut-wien-vol-01-client-1
 26: end-volume
 27:
 28: volume aut-wien-vol-01-client-2
 29:     type protocol/client
 30:     option ping-timeout 10
 31:     option remote-host gluster-wien-04-int
 32:     option remote-subvolume /gluster-export
 33:     option transport-type socket
 34:     option username 6b3d1fae-fa3e-4305-a4a0-fd27c7ac9929
 35:     option password 8777e154-476c-449a-89b2-3199872e4a1f
 36:     option send-gids true
 37: end-volume
 38:
 39: volume aut-wien-vol-01-client-3
 40:     type protocol/client
 41:     option ping-timeout 10
 42:     option remote-host gluster-wien-05-int
 43:     option remote-subvolume /gluster-export
 44:     option transport-type socket
 45:     option username 6b3d1fae-fa3e-4305-a4a0-fd27c7ac9929
 46:     option password 8777e154-476c-449a-89b2-3199872e4a1f
 47:     option send-gids true
 48: end-volume
 49:
 50: volume aut-wien-vol-01-replicate-1
 51:     type cluster/replicate
 52:     subvolumes aut-wien-vol-01-client-2 aut-wien-vol-01-client-3
 53: end-volume
 54:
 55: volume aut-wien-vol-01-client-4
 56:     type protocol/client
 57:     option ping-timeout 10
 58:     option remote-host gluster-wien-06-int
 59:     option remote-subvolume /gluster-export
 60:     option transport-type socket
 61:     option username 6b3d1fae-fa3e-4305-a4a0-fd27c7ac9929
 62:     option password 8777e154-476c-449a-89b2-3199872e4a1f
 63:     option send-gids true
 64: end-volume
 65:
 66: volume aut-wien-vol-01-client-5
 67:     type protocol/client
 68:     option ping-timeout 10
 69:     option remote-host gluster-wien-07-int
 70:     option remote-subvolume /gluster-export
 71:     option transport-type socket
 72:     option username 6b3d1fae-fa3e-4305-a4a0-fd27c7ac9929
 73:     option password 8777e154-476c-449a-89b2-3199872e4a1f
 74:     option send-gids true
 75: end-volume
 76:
 77: volume aut-wien-vol-01-replicate-2
 78:     type cluster/replicate
 79:     subvolumes aut-wien-vol-01-client-4 aut-wien-vol-01-client-5
 80: end-volume
 81:
 82: volume aut-wien-vol-01-dht
 83:     type cluster/distribute
 84:     subvolumes aut-wien-vol-01-replicate-0 aut-wien-vol-01-replicate-1 aut-wien-vol-01-replicate-2
 85: end-volume
 86:
 87: volume aut-wien-vol-01-write-behind
 88:     type performance/write-behind
 89:     subvolumes aut-wien-vol-01-dht
 90: end-volume
 91:
 92: volume aut-wien-vol-01-read-ahead
 93:     type performance/read-ahead
 94:     subvolumes aut-wien-vol-01-write-behind
 95: end-volume
 96:
 97: volume aut-wien-vol-01-io-cache
 98:     type performance/io-cache
 99:     option min-file-size 0
100:     option cache-timeout 2
101:     option cache-size 1024MB
102:     subvolumes aut-wien-vol-01-read-ahead
103: end-volume
104:
105: volume aut-wien-vol-01-quick-read
106:     type performance/quick-read
107:     option cache-size 1024MB
108:     subvolumes aut-wien-vol-01-io-cache
109: end-volume
110:
111: volume aut-wien-vol-01-open-behind
112:     type performance/open-behind
113:     subvolumes aut-wien-vol-01-quick-read
114: end-volume
115:
116: volume aut-wien-vol-01-md-cache
117:     type performance/md-cache
118:     subvolumes aut-wien-vol-01-open-behind
119: end-volume
120:
121: volume aut-wien-vol-01
122:     type debug/io-stats
123:     option latency-measurement off
124:     option count-fop-hits off
125:     subvolumes aut-wien-vol-01-md-cache
126: end-volume
127:
128: volume gfid-access-autoload
129:     type features/gfid-access
130:     subvolumes aut-wien-vol-01
131: end-volume
132:
133: volume meta-autoload
134:     type meta
135:     subvolumes gfid-access-autoload
136: end-volume
137:
+------------------------------------------------------------------------------+
[2016-01-02 11:44:24.047642] I [rpc-clnt.c:1761:rpc_clnt_reconfig] 0-aut-wien-vol-01-client-5: changing port to 49153 (from 0)
[2016-01-02 11:44:24.047927] I [client-handshake.c:1413:select_server_supported_programs] 0-aut-wien-vol-01-client-5: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-01-02 11:44:24.048044] I [client-handshake.c:1200:client_setvolume_cbk] 0-aut-wien-vol-01-client-5: Connected to aut-wien-vol-01-client-5, attached to remote volume '/gluster-export'.
[2016-01-02 11:44:24.048050] I [client-handshake.c:1210:client_setvolume_cbk] 0-aut-wien-vol-01-client-5: Server and Client lk-version numbers are not same, reopening the fds
[2016-01-02 11:44:24.048088] I [MSGID: 108005] [afr-common.c:3684:afr_notify] 0-aut-wien-vol-01-replicate-2: Subvolume 'aut-wien-vol-01-client-5' came back up; going online.
[2016-01-02 11:44:24.048114] I [client-handshake.c:188:client_set_lk_version_cbk] 0-aut-wien-vol-01-client-5: Server lk version = 1
[2016-01-02 11:44:24.048124] I [rpc-clnt.c:1761:rpc_clnt_reconfig] 0-aut-wien-vol-01-client-0: changing port to 49153 (from 0)
[2016-01-02 11:44:24.048132] I [rpc-clnt.c:1761:rpc_clnt_reconfig] 0-aut-wien-vol-01-client-1: changing port to 49153 (from 0)
[2016-01-02 11:44:24.048138] I [rpc-clnt.c:1761:rpc_clnt_reconfig] 0-aut-wien-vol-01-client-2: changing port to 49153 (from 0)
[2016-01-02 11:44:24.048146] I [rpc-clnt.c:1761:rpc_clnt_reconfig] 0-aut-wien-vol-01-client-3: changing port to 49153 (from 0)
[2016-01-02 11:44:24.048153] I [rpc-clnt.c:1761:rpc_clnt_reconfig] 0-aut-wien-vol-01-client-4: changing port to 49153 (from 0)
[2016-01-02 11:44:24.049070] I [client-handshake.c:1413:select_server_supported_programs] 0-aut-wien-vol-01-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-01-02 11:44:24.049094] I [client-handshake.c:1413:select_server_supported_programs] 0-aut-wien-vol-01-client-3: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-01-02 11:44:24.049113] I [client-handshake.c:1413:select_server_supported_programs] 0-aut-wien-vol-01-client-2: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-01-02 11:44:24.049131] I [client-handshake.c:1413:select_server_supported_programs] 0-aut-wien-vol-01-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-01-02 11:44:24.049224] I [client-handshake.c:1413:select_server_supported_programs] 0-aut-wien-vol-01-client-4: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-01-02 11:44:24.049307] I [client-handshake.c:1200:client_setvolume_cbk] 0-aut-wien-vol-01-client-0: Connected to aut-wien-vol-01-client-0, attached to remote volume '/gluster-export'.
[2016-01-02 11:44:24.049312] I [client-handshake.c:1210:client_setvolume_cbk] 0-aut-wien-vol-01-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2016-01-02 11:44:24.049324] I [MSGID: 108005] [afr-common.c:3684:afr_notify] 0-aut-wien-vol-01-replicate-0: Subvolume 'aut-wien-vol-01-client-0' came back up; going online.
[2016-01-02 11:44:24.049384] I [client-handshake.c:1200:client_setvolume_cbk] 0-aut-wien-vol-01-client-3: Connected to aut-wien-vol-01-client-3, attached to remote volume '/gluster-export'.
[2016-01-02 11:44:24.049389] I [client-handshake.c:1210:client_setvolume_cbk] 0-aut-wien-vol-01-client-3: Server and Client lk-version numbers are not same, reopening the fds
[2016-01-02 11:44:24.049400] I [MSGID: 108005] [afr-common.c:3684:afr_notify] 0-aut-wien-vol-01-replicate-1: Subvolume 'aut-wien-vol-01-client-3' came back up; going online.
[2016-01-02 11:44:24.049418] I [client-handshake.c:1200:client_setvolume_cbk] 0-aut-wien-vol-01-client-2: Connected to aut-wien-vol-01-client-2, attached to remote volume '/gluster-export'.
[2016-01-02 11:44:24.049422] I [client-handshake.c:1210:client_setvolume_cbk] 0-aut-wien-vol-01-client-2: Server and Client lk-version numbers are not same, reopening the fds
[2016-01-02 11:44:24.049460] I [client-handshake.c:1200:client_setvolume_cbk] 0-aut-wien-vol-01-client-1: Connected to aut-wien-vol-01-client-1, attached to remote volume '/gluster-export'.
[2016-01-02 11:44:24.049465] I [client-handshake.c:1210:client_setvolume_cbk] 0-aut-wien-vol-01-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2016-01-02 11:44:24.049493] I [client-handshake.c:188:client_set_lk_version_cbk] 0-aut-wien-vol-01-client-0: Server lk version = 1
[2016-01-02 11:44:24.049567] I [client-handshake.c:188:client_set_lk_version_cbk] 0-aut-wien-vol-01-client-3: Server lk version = 1
[2016-01-02 11:44:24.049632] I [client-handshake.c:1200:client_setvolume_cbk] 0-aut-wien-vol-01-client-4: Connected to aut-wien-vol-01-client-4, attached to remote volume '/gluster-export'.
[2016-01-02 11:44:24.049638] I [client-handshake.c:1210:client_setvolume_cbk] 0-aut-wien-vol-01-client-4: Server and Client lk-version numbers are not same, reopening the fds
[2016-01-02 11:44:24.052103] I [fuse-bridge.c:5086:fuse_graph_setup] 0-fuse: switched to graph 0
[2016-01-02 11:44:24.052150] I [client-handshake.c:188:client_set_lk_version_cbk] 0-aut-wien-vol-01-client-2: Server lk version = 1
[2016-01-02 11:44:24.052163] I [client-handshake.c:188:client_set_lk_version_cbk] 0-aut-wien-vol-01-client-4: Server lk version = 1
[2016-01-02 11:44:24.052192] I [client-handshake.c:188:client_set_lk_version_cbk] 0-aut-wien-vol-01-client-1: Server lk version = 1
[2016-01-02 11:44:24.052204] I [fuse-bridge.c:4015:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 kernel 7.20
[2016-01-02 11:44:24.053991] I [afr-common.c:1491:afr_local_discovery_cbk] 0-aut-wien-vol-01-replicate-2: selecting local read_child aut-wien-vol-01-client-5
[2016-01-02 11:45:48.613563] W [client-rpc-fops.c:306:client3_3_mkdir_cbk] 0-aut-wien-vol-01-client-5: remote operation failed: File exists. Path: /keys
[2016-01-02 11:45:48.614131] W [client-rpc-fops.c:306:client3_3_mkdir_cbk] 0-aut-wien-vol-01-client-4: remote operation failed: File exists. Path: /keys
[2016-01-02 11:45:48.614436] W [fuse-bridge.c:1261:fuse_err_cbk] 0-glusterfs-fuse: 12: SETXATTR() /.gfid/00000000-0000-0000-0000-000000000001 => -1 (File exists)
...


[ 13:41:40 ] - root@gluster-ger-ber-07 /var/log/glusterfs/geo-replication/ger-ber-01 $gluster volume geo-replication ger-ber-01 gluster-wien-07::aut-wien-vol-01 status detail

MASTER NODE           MASTER VOL    MASTER BRICK       SLAVE                                   STATUS     CHECKPOINT STATUS    CRAWL STATUS    FILES SYNCD    FILES PENDING    BYTES PENDING    DELETES PENDING    FILES SKIPPED
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-ger-ber-07    ger-ber-01    /gluster-export    gluster-wien-07-int::aut-wien-vol-01    Active     N/A                  Hybrid Crawl    10743644       8192             0                0                  0
gluster-ger-ber-11    ger-ber-01    /gluster-export    gluster-wien-03-int::aut-wien-vol-01    Active     N/A                  Hybrid Crawl    16037091       8192             0                0                  0
gluster-ger-ber-10    ger-ber-01    /gluster-export    gluster-wien-02-int::aut-wien-vol-01    Passive    N/A                  N/A             0              0                0                0                  0
gluster-ger-ber-12    ger-ber-01    /gluster-export    gluster-wien-06-int::aut-wien-vol-01    Passive    N/A                  N/A             0              0                0                0                  0
gluster-ger-ber-09    ger-ber-01    /gluster-export    gluster-wien-05-int::aut-wien-vol-01    Active     N/A                  Hybrid Crawl    16180514       8192             0                0                  0
gluster-ger-ber-08    ger-ber-01    /gluster-export    gluster-wien-04-int::aut-wien-vol-01    Passive    N/A                  N/A             0              0                0                0                  0


[ 13:41:55 ] - root@gluster-ger-ber-07 /var/log/glusterfs/geo-replication/ger-ber-01 $gluster volume geo-replication ger-ber-01 gluster-wien-07::aut-wien-vol-01 config
special_sync_mode: partial
state_socket_unencoded: /var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-07_aut-wien-vol-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01.socket
gluster_log_file: /var/log/glusterfs/geo-replication/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01.gluster.log
ssh_command: ssh -p 2503 -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem
ignore_deletes: true
change_detector: changelog
ssh_command_tar: ssh -p 2503 -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/tar_ssh.pem
state_file: /var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-07_aut-wien-vol-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01.status
remote_gsyncd: /nonexistent/gsyncd
log_file: /var/log/glusterfs/geo-replication/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01.log
changelog_log_file: /var/log/glusterfs/geo-replication/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01-changes.log
socketdir: /var/run
working_dir: /var/lib/misc/glusterfsd/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01
state_detail_file: /var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-07_aut-wien-vol-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01-detail.status
session_owner: 6a071cfa-b150-4f0b-b1ed-96ab5d4bd671
gluster_command_dir: /usr/sbin/
pid_file: /var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-07_aut-wien-vol-01/ssh%3A%2F%2Froot%4082.199.131.132%3Agluster%3A%2F%2F127.0.0.1%3Aaut-wien-vol-01.pid
georep_session_working_dir: /var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-07_aut-wien-vol-01/
gluster_params: aux-gfid-mount
volume_id: 6a071cfa-b150-4f0b-b1ed-96ab5d4bd671
[ 13:42:11 ] - root@gluster-ger-ber-07 /var/log/glusterfs/geo-replication/ger-ber-01 $


_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users




