Hi all,

once again I need some help to get our geo-replication running again...

Master and slave are 6-node distributed-replicated volumes running Ubuntu 14.04 and GlusterFS 3.7.6 from the Ubuntu PPA. The master volume already contains about 45 TB of data; the slave volume was created from scratch before geo-replication was set up and started.

Both clusters have existed since Gluster 3.3 and were updated step by step to 3.4, 3.5, 3.6 and 3.7. Since the update to 3.5, geo-replication has not been running anymore.

Geo-replication started with 3 active and 3 passive connections in hybrid crawl mode and transferred about 99% of the data to the slave volume. Afterwards, geo-replication on the first master pair (the active node and its corresponding passive node) became faulty; about two hours later, the second master pair followed. The last active master node remained in hybrid crawl for a further 36 hours and kept transferring data to the slave until it failed too. Currently I can sometimes see an active master in history crawl for a very short moment before the status goes faulty again.

While writing this mail I noticed some failures reported for gluster-ger-ber-07; this was the last active master node...

Does anybody have an idea what to do next? Any help is welcome...

Best regards,
Dietmar
[ 15:47:45 ] - root@gluster-ger-ber-07 ~/tmp/geo-rep-376 $ gluster volume geo-replication ger-ber-01 gluster-wien-02::wien-01 status detail

MASTER NODE           MASTER VOL    MASTER BRICK      SLAVE USER   SLAVE                      SLAVE NODE            STATUS   CRAWL STATUS    LAST_SYNCED           ENTRY   DATA   META   FAILURES   CHECKPOINT TIME   CHECKPOINT COMPLETED   CHECKPOINT COMPLETION TIME
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-ger-ber-07    ger-ber-01    /gluster-export   root         gluster-wien-02::wien-01   N/A                   Faulty   N/A             N/A                   N/A     N/A    N/A    N/A        N/A               N/A                    N/A
gluster-ger-ber-10    ger-ber-01    /gluster-export   root         gluster-wien-02::wien-01   N/A                   Faulty   N/A             N/A                   N/A     N/A    N/A    N/A        N/A               N/A                    N/A
gluster-ger-ber-12    ger-ber-01    /gluster-export   root         gluster-wien-02::wien-01   N/A                   Faulty   N/A             N/A                   N/A     N/A    N/A    N/A        N/A               N/A                    N/A
gluster-ger-ber-09    ger-ber-01    /gluster-export   root         gluster-wien-02::wien-01   N/A                   Faulty   N/A             N/A                   N/A     N/A    N/A    N/A        N/A               N/A                    N/A
gluster-ger-ber-11    ger-ber-01    /gluster-export   root         gluster-wien-02::wien-01   N/A                   Faulty   N/A             N/A                   N/A     N/A    N/A    N/A        N/A               N/A                    N/A
gluster-ger-ber-08    ger-ber-01    /gluster-export   root         gluster-wien-02::wien-01   gluster-wien-04-int   Active   History Crawl   N/A                   0       0      0      0          N/A               N/A                    N/A
[ 15:47:14 ] - root@gluster-ger-ber-07 ~/tmp/geo-rep-376 $ gluster volume geo-replication ger-ber-01 gluster-wien-02::wien-01 status detail

MASTER NODE           MASTER VOL    MASTER BRICK      SLAVE USER   SLAVE                      SLAVE NODE            STATUS   CRAWL STATUS    LAST_SYNCED           ENTRY   DATA   META   FAILURES   CHECKPOINT TIME   CHECKPOINT COMPLETED   CHECKPOINT COMPLETION TIME
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
gluster-ger-ber-07    ger-ber-01    /gluster-export   root         gluster-wien-02::wien-01   gluster-wien-07-int   Active   History Crawl   2016-02-07 22:12:51   0       0      0      2601       N/A               N/A                    N/A
gluster-ger-ber-12    ger-ber-01    /gluster-export   root         gluster-wien-02::wien-01   N/A                   Faulty   N/A             N/A                   N/A     N/A    N/A    N/A        N/A               N/A                    N/A
gluster-ger-ber-11    ger-ber-01    /gluster-export   root         gluster-wien-02::wien-01   N/A                   Faulty   N/A             N/A                   N/A     N/A    N/A    N/A        N/A               N/A                    N/A
gluster-ger-ber-10    ger-ber-01    /gluster-export   root         gluster-wien-02::wien-01   N/A                   Faulty   N/A             N/A                   N/A     N/A    N/A    N/A        N/A               N/A                    N/A
gluster-ger-ber-09    ger-ber-01    /gluster-export   root         gluster-wien-02::wien-01   N/A                   Faulty   N/A             N/A                   N/A     N/A    N/A    N/A        N/A               N/A                    N/A
gluster-ger-ber-08    ger-ber-01    /gluster-export   root         gluster-wien-02::wien-01   N/A                   Faulty   N/A             N/A                   N/A     N/A    N/A    N/A        N/A               N/A                    N/A
The master .log of the first faulty master node: it starts once with the 'line 165' message and since then repeats the 'line 133' part about every 40 seconds.
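For what it's worth, this is how I am eyeballing the repetition interval: just a grep over the geo-rep master log (log path taken from the config output further below):

    # timestamps of the recurring worker death; they show up roughly 40 seconds apart
    grep 'died in startup phase' \
        /var/log/glusterfs/geo-replication/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Awien-01.log \
        | awk '{print $1, $2}' | tail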
[2016-02-06 17:09:38.696657] W [master(/gluster-export):1050:process]
_GMaster: incomplete sync, retrying changelogs: CHANGELOG.1454585443
CHANGELOG.1454585458 CHANGELOG.1454585473 CHANGELOG.1454585488 ....very
long list.
[2016-02-06 17:09:44.589468] E [repce(/gluster-export):207:__call__]
RepceClient: call 22269:139717586876224:1454778584.49 (entry_ops) failed
on peer with OSError
[2016-02-06 17:09:44.590142] E
[syncdutils(/gluster-export):276:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 165, in main
    main_i()
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 660, in main_i
    local.service_loop(*[r for r in [remote] if r])
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 1451, in service_loop
    g2.crawlwrap()
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 591, in crawlwrap
    self.crawl()
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 1115, in crawl
    self.changelogs_batch_process(changes)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 1090, in changelogs_batch_process
    self.process(batch)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 968, in process
    self.process_change(change, done, retry)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 923, in process_change
    failures = self.slave.server.entry_ops(entries)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/repce.py", line 226, in __call__
    return self.ins(self.meth, *a)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/repce.py", line 208, in __call__
    raise res
OSError: [Errno 16] Device or resource busy
[2016-02-06 17:09:44.605959] I
[syncdutils(/gluster-export):220:finalize] <top>: exiting.
[2016-02-06 17:09:45.846096] I [repce(agent):92:service_loop]
RepceServer: terminating on reaching EOF.
[2016-02-06 17:09:45.846317] I [syncdutils(agent):220:finalize] <top>:
exiting.
[2016-02-06 17:09:56.206406] I [monitor(monitor):221:monitor] Monitor:
------------------------------------------------------------
[2016-02-06 17:09:56.206561] I [monitor(monitor):222:monitor] Monitor:
starting gsyncd worker
[2016-02-06 17:09:56.242666] I [gsyncd(/gluster-export):650:main_i]
<top>: syncing: gluster://localhost:ger-ber-01 ->
ssh://root@gluster-wien-06-int:gluster://localhost:wien-01
[2016-02-06 17:09:56.243144] I [changelogagent(agent):75:__init__]
ChangelogAgent: Agent listining...
[2016-02-06 17:10:00.844651] I
[master(/gluster-export):83:gmaster_builder] <top>: setting up xsync
change detection mode
[2016-02-06 17:10:00.845278] I [master(/gluster-export):404:__init__]
_GMaster: using 'rsync' as the sync engine
[2016-02-06 17:10:00.846812] I
[master(/gluster-export):83:gmaster_builder] <top>: setting up changelog
change detection mode
[2016-02-06 17:10:00.847233] I [master(/gluster-export):404:__init__]
_GMaster: using 'rsync' as the sync engine
[2016-02-06 17:10:00.848237] I
[master(/gluster-export):83:gmaster_builder] <top>: setting up
changeloghistory change detection mode
[2016-02-06 17:10:00.848622] I [master(/gluster-export):404:__init__]
_GMaster: using 'rsync' as the sync engine
[2016-02-06 17:10:03.320529] I [master(/gluster-export):1229:register]
_GMaster: xsync temp directory:
/var/lib/misc/glusterfsd/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Awien-01/9d7139ecf10a6fc33a6e41d8d6e56984/xsync
[2016-02-06 17:10:03.320837] I
[resource(/gluster-export):1432:service_loop] GLUSTER: Register time:
1454778603
[2016-02-06 17:10:03.462205] I [master(/gluster-export):530:crawlwrap]
_GMaster: primary master with volume id
6a071cfa-b150-4f0b-b1ed-96ab5d4bd671 ...
[2016-02-06 17:10:03.471538] I [master(/gluster-export):539:crawlwrap]
_GMaster: crawl interval: 1 seconds
[2016-02-06 17:10:03.477514] I [master(/gluster-export):486:mgmt_lock]
_GMaster: Got lock : /gluster-export : Becoming ACTIVE
[2016-02-06 17:10:03.481477] I [master(/gluster-export):1144:crawl]
_GMaster: starting history crawl... turns: 1, stime: (1454585427,
414669), etime: 1454778603
[2016-02-06 17:10:03.539712] E [repce(agent):117:worker] <top>: call
failed:
Traceback (most recent call last):
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/repce.py", line 113, in worker
    res = getattr(self.obj, rmeth)(*in_data[2:])
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/changelogagent.py", line 54, in history
    num_parallel)
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/libgfchangelog.py", line 100, in cl_history_changelog
    cls.raise_changelog_err()
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/libgfchangelog.py", line 27, in raise_changelog_err
    raise ChangelogException(errn, os.strerror(errn))
ChangelogException: [Errno 61] No data available
[2016-02-06 17:10:03.554912] E [repce(/gluster-export):207:__call__]
RepceClient: call 25488:140658334455616:1454778603.48 (history) failed
on peer with ChangelogException
[2016-02-06 17:10:03.555037] E
[resource(/gluster-export):1447:service_loop] GLUSTER: Changelog History
Crawl failed, [Errno 61] No data available
[2016-02-06 17:10:03.555208] I
[syncdutils(/gluster-export):220:finalize] <top>: exiting.
[2016-02-06 17:10:03.557096] I [repce(agent):92:service_loop]
RepceServer: terminating on reaching EOF.
[2016-02-06 17:10:03.557208] I [syncdutils(agent):220:finalize] <top>:
exiting.
[2016-02-06 17:10:03.848047] I [monitor(monitor):282:monitor] Monitor:
worker(/gluster-export) died in startup phase
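Regarding the ChangelogException ([Errno 61] No data available) that kills the history crawl: the stime in the log is 1454585427, and as far as I understand, the history crawl reads the changelogs stored below the brick. This is only a sketch of how I would check whether changelogs around that time still exist on the brick (assuming the usual <brick>/.glusterfs/changelogs layout; please correct me if that is the wrong place to look):

    # the stime from the log as a human-readable date
    date -d @1454585427
    # changelog files on the brick around that timestamp (directory layout assumed)
    ls /gluster-export/.glusterfs/changelogs/ | grep 'CHANGELOG.145458' | head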
The corresponding master .gluster.log:
[2016-02-06 09:16:29.304985] E [fuse-bridge.c:3347:fuse_xattr_cbk]
0-glusterfs-fuse: extended attribute not supported by the backend storage
[2016-02-06 16:57:06.963149] E [fuse-bridge.c:3347:fuse_xattr_cbk]
0-glusterfs-fuse: extended attribute not supported by the backend storage
[2016-02-06 16:57:24.800801] E [fuse-bridge.c:3347:fuse_xattr_cbk]
0-glusterfs-fuse: extended attribute not supported by the backend storage
[2016-02-06 16:57:49.709426] E [fuse-bridge.c:3347:fuse_xattr_cbk]
0-glusterfs-fuse: extended attribute not supported by the backend storage
[2016-02-06 16:58:52.414601] E [fuse-bridge.c:3347:fuse_xattr_cbk]
0-glusterfs-fuse: extended attribute not supported by the backend storage
[2016-02-06 16:59:21.483831] E [fuse-bridge.c:3347:fuse_xattr_cbk]
0-glusterfs-fuse: extended attribute not supported by the backend storage
[2016-02-06 17:04:40.133143] E [fuse-bridge.c:3347:fuse_xattr_cbk]
0-glusterfs-fuse: extended attribute not supported by the backend storage
[2016-02-06 17:09:42.530686] E [fuse-bridge.c:3347:fuse_xattr_cbk]
0-glusterfs-fuse: extended attribute not supported by the backend storage
[2016-02-06 17:09:45.845886] I [fuse-bridge.c:4984:fuse_thread_proc]
0-fuse: unmounting /tmp/gsyncd-aux-mount-ouVkPU
[2016-02-06 17:09:45.863741] W [glusterfsd.c:1236:cleanup_and_exit]
(-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x8182) [0x7f8ee9657182]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xd5) [0x7f8eea3947c5]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x69) [0x7f8eea394659] ) 0-:
received signum (15), shutting down
[2016-02-06 17:09:45.863758] I [fuse-bridge.c:5683:fini] 0-fuse:
Unmounting '/tmp/gsyncd-aux-mount-ouVkPU'.
[2016-02-06 17:09:59.685091] I [MSGID: 100030] [glusterfsd.c:2318:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.6
(args: /usr/sbin/glusterfs --aux-gfid-mount --acl
--log-file=/var/log/glusterfs/geo-replication/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Awien-01.%2Fgluster-export.gluster.log
--volfile-server=localhost --volfile-id=ger-ber-01 --client-pid=-1
/tmp/gsyncd-aux-mount-DtzYIL)
[2016-02-06 17:09:59.709638] I [MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[2016-02-06 17:09:59.802030] I [graph.c:269:gf_add_cmdline_options]
0-ger-ber-01-md-cache: adding option 'cache-posix-acl' for volume
'ger-ber-01-md-cache' with value 'true'
[2016-02-06 17:09:59.806111] I [MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 2
[2016-02-06 17:09:59.806941] I [MSGID: 114020] [client.c:2118:notify]
0-ger-ber-01-client-0: parent translators are ready, attempting connect
on transport
[2016-02-06 17:09:59.807222] I [MSGID: 114020] [client.c:2118:notify]
0-ger-ber-01-client-1: parent translators are ready, attempting connect
on transport
[2016-02-06 17:09:59.807422] I [MSGID: 114020] [client.c:2118:notify]
0-ger-ber-01-client-2: parent translators are ready, attempting connect
on transport
[2016-02-06 17:09:59.807560] I [rpc-clnt.c:1847:rpc_clnt_reconfig]
0-ger-ber-01-client-1: changing port to 49152 (from 0)
[2016-02-06 17:09:59.807589] I [MSGID: 114020] [client.c:2118:notify]
0-ger-ber-01-client-3: parent translators are ready, attempting connect
on transport
[2016-02-06 17:09:59.807866] I [MSGID: 114020] [client.c:2118:notify]
0-ger-ber-01-client-4: parent translators are ready, attempting connect
on transport
[2016-02-06 17:09:59.807919] I [MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-ger-ber-01-client-1: Using Program GlusterFS 3.3, Num (1298437),
Version (330)
[2016-02-06 17:09:59.808053] I [rpc-clnt.c:1847:rpc_clnt_reconfig]
0-ger-ber-01-client-2: changing port to 49152 (from 0)
[2016-02-06 17:09:59.808067] I [rpc-clnt.c:1847:rpc_clnt_reconfig]
0-ger-ber-01-client-0: changing port to 49153 (from 0)
[2016-02-06 17:09:59.808145] I [MSGID: 114020] [client.c:2118:notify]
0-ger-ber-01-client-5: parent translators are ready, attempting connect
on transport
Final graph:
...
[2016-02-06 17:09:59.808826] I [MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk] 0-ger-ber-01-client-2:
Connected to ger-ber-01-client-2, attached to remote volume
'/gluster-export'.
[2016-02-06 17:09:59.808830] I [MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-ger-ber-01-client-0: Using Program GlusterFS 3.3, Num (1298437),
Version (330)
[2016-02-06 17:09:59.808835] I [MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk] 0-ger-ber-01-client-2:
Server and Client lk-version numbers are not same, reopening the fds
[2016-02-06 17:09:59.808866] I [MSGID: 108005]
[afr-common.c:3841:afr_notify] 0-ger-ber-01-replicate-1: Subvolume
'ger-ber-01-client-2' came back up; going online.
[2016-02-06 17:09:59.808882] I [rpc-clnt.c:1847:rpc_clnt_reconfig]
0-ger-ber-01-client-4: changing port to 49152 (from 0)
[2016-02-06 17:09:59.808954] I [MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-ger-ber-01-client-2: Server lk version = 1
[2016-02-06 17:09:59.809530] I [MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-ger-ber-01-client-4: Using Program GlusterFS 3.3, Num (1298437),
Version (330)
[2016-02-06 17:09:59.809762] I [rpc-clnt.c:1847:rpc_clnt_reconfig]
0-ger-ber-01-client-3: changing port to 49152 (from 0)
[2016-02-06 17:09:59.809910] I [MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk] 0-ger-ber-01-client-4:
Connected to ger-ber-01-client-4, attached to remote volume
'/gluster-export'.
[2016-02-06 17:09:59.809920] I [MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk] 0-ger-ber-01-client-4:
Server and Client lk-version numbers are not same, reopening the fds
[2016-02-06 17:09:59.809941] I [MSGID: 108005]
[afr-common.c:3841:afr_notify] 0-ger-ber-01-replicate-2: Subvolume
'ger-ber-01-client-4' came back up; going online.
[2016-02-06 17:09:59.810134] I [MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-ger-ber-01-client-4: Server lk version = 1
[2016-02-06 17:09:59.810431] I [MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-ger-ber-01-client-3: Using Program GlusterFS 3.3, Num (1298437),
Version (330)
[2016-02-06 17:09:59.813085] I [MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk] 0-ger-ber-01-client-0:
Connected to ger-ber-01-client-0, attached to remote volume
'/gluster-export'.
[2016-02-06 17:09:59.813096] I [MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk] 0-ger-ber-01-client-0:
Server and Client lk-version numbers are not same, reopening the fds
[2016-02-06 17:09:59.813309] I [MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-ger-ber-01-client-0: Server lk version = 1
[2016-02-06 17:09:59.813555] I [rpc-clnt.c:1847:rpc_clnt_reconfig]
0-ger-ber-01-client-5: changing port to 49152 (from 0)
[2016-02-06 17:09:59.813894] I [MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk] 0-ger-ber-01-client-3:
Connected to ger-ber-01-client-3, attached to remote volume
'/gluster-export'.
[2016-02-06 17:09:59.813902] I [MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk] 0-ger-ber-01-client-3:
Server and Client lk-version numbers are not same, reopening the fds
[2016-02-06 17:09:59.814104] I [MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-ger-ber-01-client-5: Using Program GlusterFS 3.3, Num (1298437),
Version (330)
[2016-02-06 17:09:59.814211] I [MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-ger-ber-01-client-3: Server lk version = 1
[2016-02-06 17:09:59.825371] I [MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk] 0-ger-ber-01-client-5:
Connected to ger-ber-01-client-5, attached to remote volume
'/gluster-export'.
[2016-02-06 17:09:59.825415] I [MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk] 0-ger-ber-01-client-5:
Server and Client lk-version numbers are not same, reopening the fds
[2016-02-06 17:09:59.827978] I [fuse-bridge.c:5137:fuse_graph_setup]
0-fuse: switched to graph 0
[2016-02-06 17:09:59.828041] I [MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-ger-ber-01-client-5: Server lk version = 1
[2016-02-06 17:09:59.828104] I [fuse-bridge.c:4030:fuse_init]
0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22
kernel 7.20
[2016-02-06 17:09:59.829484] I [MSGID: 108031]
[afr-common.c:1782:afr_local_discovery_cbk] 0-ger-ber-01-replicate-0:
selecting local read_child ger-ber-01-client-1
[2016-02-06 17:10:03.556996] I [fuse-bridge.c:4984:fuse_thread_proc]
0-fuse: unmounting /tmp/gsyncd-aux-mount-DtzYIL
[2016-02-06 17:10:03.557225] W [glusterfsd.c:1236:cleanup_and_exit]
(-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x8182) [0x7f1e9d542182]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xd5) [0x7f1e9e27f7c5]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x69) [0x7f1e9e27f659] ) 0-: received signum (15), shutting down
[2016-02-06 17:10:03.557247] I [fuse-bridge.c:5683:fini] 0-fuse:
Unmounting '/tmp/gsyncd-aux-mount-DtzYIL'.
The slave .log:
[2016-02-06 17:09:44.556490] E [repce(slave):117:worker] <top>: call
failed:
Traceback (most recent call last):
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/repce.py", line 113, in worker
    res = getattr(self.obj, rmeth)(*in_data[2:])
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 725, in entry_ops
    [ENOENT, EEXIST], [ESTALE])
  File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/syncdutils.py", line 475, in errno_wrap
    return call(*arg)
OSError: [Errno 16] Device or resource busy
[2016-02-06 17:09:45.858520] I [repce(slave):92:service_loop]
RepceServer: terminating on reaching EOF.
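For the OSError ([Errno 16] Device or resource busy) in entry_ops on the slave side, the only idea I have so far is to look at the gsyncd aux mount on the affected slave node while the worker is up, something like this (just a sketch; the mount point name changes with every worker restart):

    # on the slave node: find the current gsyncd aux mount and see what keeps it busy
    mount | grep gsyncd-aux-mount
    fuser -vm /tmp/gsyncd-aux-mount-*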
The geo-rep config:

root@gluster-ger-ber-07 ~/tmp/geo-rep-376 $ gluster volume geo-replication ger-ber-01 gluster-wien-02::wien-01 config
special_sync_mode: partial
session_owner: 6a071cfa-b150-4f0b-b1ed-96ab5d4bd671
state_socket_unencoded:
/var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-02_wien-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Awien-01.socket
gluster_log_file:
/var/log/glusterfs/geo-replication/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Awien-01.gluster.log
ssh_command: ssh -p 2503 -oPasswordAuthentication=no
-oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem
ignore_deletes: false
change_detector: changelog
gluster_command_dir: /usr/sbin/
state_file:
/var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-02_wien-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Awien-01.status
remote_gsyncd: /nonexistent/gsyncd
log_file:
/var/log/glusterfs/geo-replication/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Awien-01.log
changelog_log_file:
/var/log/glusterfs/geo-replication/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Awien-01-changes.log
socketdir: /var/run/gluster
working_dir:
/var/lib/misc/glusterfsd/ger-ber-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Awien-01
state_detail_file:
/var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-02_wien-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Awien-01-detail.status
use_meta_volume: true
ssh_command_tar: ssh -p 2503 -oPasswordAuthentication=no
-oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/tar_ssh.pem
pid_file:
/var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-02_wien-01/ssh%3A%2F%2Froot%4082.199.131.2%3Agluster%3A%2F%2F127.0.0.1%3Awien-01.pid
georep_session_working_dir:
/var/lib/glusterd/geo-replication/ger-ber-01_gluster-wien-02_wien-01/
gluster_params: aux-gfid-mount acl
volume_id: 6a071cfa-b150-4f0b-b1ed-96ab5d4bd671
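If nobody has a better idea, I am considering raising the gsyncd log level first and, as a last resort, falling back to a full xsync crawl by switching the change detector. Both are only sketches, so please tell me if this is a bad idea (change_detector is taken from the config output above; I am assuming log_level is the right option name here):

    gluster volume geo-replication ger-ber-01 gluster-wien-02::wien-01 config log_level DEBUG
    gluster volume geo-replication ger-ber-01 gluster-wien-02::wien-01 config change_detector xsync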