after upgrade to 3.6.7: Internal error xfs_attr3_leaf_write_verify

Hello all,

On December 1st I upgraded two 6-node clusters from GlusterFS 3.5.6 to 3.6.7.
All nodes are identical in hardware, OS and patch level, currently running Ubuntu 14.04 LTS via a do-release-upgrade from 12.04 LTS (this was done before the upgrade to GlusterFS 3.5.6, not directly before the upgrade to 3.6.7). Because of a geo-replication issue, all nodes have rsync 3.1.1.3 installed instead of the 3.1.0 that comes with the repositories; this is the only deviation from the Ubuntu 14.04 LTS repositories.

Since the upgrade to GlusterFS 3.6.7, the brick processes on two nodes of the same cluster go offline after the underlying bricks hit an xfs_attr3_leaf_write_verify error, as shown below. This happens about every 4-5 hours after the problem has been worked around by an umount/remount of the brick. Running xfs_check / xfs_repair before the remount makes no difference; neither tool reports any faults. The underlying hardware is a RAID 5 volume on an LSI 9271-8i; MegaCli does not show any errors.
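For reference, each recovery cycle looks roughly like this (a sketch of my manual steps, not a fix; device and mount point are the real ones from the xfs_info output further below, and restarting the brick via 'gluster volume start ... force' is just the usual way to bring an offline brick process back):

  umount /gluster-export                  # XFS has already shut the filesystem down at this point
  xfs_check /dev/sdc1                     # reports no faults
  xfs_repair -n /dev/sdc1                 # dry run: no faults either
  xfs_repair /dev/sdc1                    # full run: still nothing to repair
  mount /dev/sdc1 /gluster-export         # XFS replays its internal log during mount
  gluster volume start ger-ber-01 force   # restart the offline brick process

After that the brick runs cleanly for about 4-5 hours until the next verifier error.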
The syslog does not show more than the dmesg output below.
Every time, the same two nodes of the same cluster are affected.
As shown in dmesg and syslog, the system notices the xfs_attr3_leaf_write_verify error about 38 minutes before finally giving up, yet for both events I cannot find corresponding entries in the Gluster logs, which is strange. The cluster has grown historically from 3.2.5 through 3.3 to 3.4.6/7, which ran well for months; GlusterFS 3.5.6 ran for about two weeks, and the upgrade to 3.6.7 was done because of a geo-replication log flood. Even though I have no hint or evidence that GlusterFS 3.6.7 is the cause, I somehow believe it is. Has anybody experienced such an error, or does anyone have hints for getting out of this big problem? Unfortunately, the affected cluster is the master of a geo-replication that has not been running well since the update from GlusterFS 3.4.7; fortunately, the two affected nodes are not part of the same sub-volume.
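Since the first verifier hit shows up well before the forced shutdown, a crude watch loop on the affected nodes would at least give a chance to unmount cleanly in time. A minimal sketch (my own ad-hoc idea, assuming local mail delivery is configured; this is nothing from the Gluster tooling):

  while sleep 60; do
      if dmesg | grep -q 'Internal error xfs_attr3_leaf_write_verify'; then
          echo "XFS verifier error on $(hostname), brick needs attention" \
              | mail -s 'XFS alert' root
          break
      fi
  done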

Any help is appreciated...

Best regards
Dietmar




[ 09:32:29 ] - root@gluster-ger-ber-10  /var/log $gluster volume info

Volume Name: ger-ber-01
Type: Distributed-Replicate
Volume ID: 6a071cfa-b150-4f0b-b1ed-96ab5d4bd671
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: gluster-ger-ber-11-int:/gluster-export
Brick2: gluster-ger-ber-12-int:/gluster-export
Brick3: gluster-ger-ber-09-int:/gluster-export
Brick4: gluster-ger-ber-10-int:/gluster-export
Brick5: gluster-ger-ber-07-int:/gluster-export
Brick6: gluster-ger-ber-08-int:/gluster-export
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
cluster.min-free-disk: 200GB
geo-replication.indexing: on
auth.allow: 10.0.1.*,188.138.82.*,188.138.123.*,82.193.249.198,82.193.249.200,31.7.178.137,31.7.178.135,31.7.180.109,31.7.180.98,82.199.147.*,104.155.22.202,104.155.30.201,104.155.5.117,104.155.11.253,104.155.15.34,104.155.25.145,146.148.120.255,31.7.180.148
nfs.disable: off
performance.cache-refresh-timeout: 2
performance.io-thread-count: 32
performance.cache-size: 1024MB
performance.read-ahead: on
performance.cache-min-file-size: 0
network.ping-timeout: 10
[ 09:32:52 ] - root@gluster-ger-ber-10  /var/log $




[ 19:10:55 ] - root@gluster-ger-ber-10  /var/log $gluster volume status
Status of volume: ger-ber-01
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick gluster-ger-ber-11-int:/gluster-export    49152   Y       15994
Brick gluster-ger-ber-12-int:/gluster-export    N/A     N       N/A
Brick gluster-ger-ber-09-int:/gluster-export    49152   Y       10965
Brick gluster-ger-ber-10-int:/gluster-export    N/A     N       N/A
Brick gluster-ger-ber-07-int:/gluster-export    49152   Y       18542
Brick gluster-ger-ber-08-int:/gluster-export    49152   Y       20275
NFS Server on localhost                         2049    Y       13658
Self-heal Daemon on localhost                   N/A     Y       13666
NFS Server on gluster-ger-ber-09-int            2049    Y       13503
Self-heal Daemon on gluster-ger-ber-09-int      N/A     Y       13511
NFS Server on gluster-ger-ber-07-int            2049    Y       21526
Self-heal Daemon on gluster-ger-ber-07-int      N/A     Y       21534
NFS Server on gluster-ger-ber-08-int            2049    Y       24004
Self-heal Daemon on gluster-ger-ber-08-int      N/A     Y       24011
NFS Server on gluster-ger-ber-11-int            2049    Y       18944
Self-heal Daemon on gluster-ger-ber-11-int      N/A     Y       18952
NFS Server on gluster-ger-ber-12-int            2049    Y       19138
Self-heal Daemon on gluster-ger-ber-12-int      N/A     Y       19146

Task Status of Volume ger-ber-01
------------------------------------------------------------------------------
There are no active volume tasks

- root@gluster-ger-ber-10  /var/log $

- root@gluster-ger-ber-10  /var/log $dmesg -T
...
[Wed Dec  2 12:43:47 2015] XFS (sdc1): xfs_log_force: error 5 returned.
[Wed Dec  2 12:43:48 2015] XFS (sdc1): xfs_log_force: error 5 returned.
[Wed Dec  2 12:45:58 2015] XFS (sdc1): Mounting Filesystem
[Wed Dec  2 12:45:58 2015] XFS (sdc1): Starting recovery (logdev: internal)
[Wed Dec  2 12:45:59 2015] XFS (sdc1): Ending recovery (logdev: internal)
[Wed Dec  2 13:11:53 2015] XFS (sdc1): Mounting Filesystem
[Wed Dec  2 13:11:54 2015] XFS (sdc1): Ending clean mount
[Wed Dec 2 13:12:29 2015] init: statd main process (25924) killed by KILL signal
[Wed Dec  2 13:12:29 2015] init: statd main process ended, respawning
[Wed Dec 2 13:13:24 2015] init: statd main process (13433) killed by KILL signal
[Wed Dec  2 13:13:24 2015] init: statd main process ended, respawning
[Wed Dec  2 17:22:28 2015] ffff8807076b1000: 00 00 00 00 00 00 00 00 fb ee 00 00 00 00 00 00  ................
[Wed Dec  2 17:22:28 2015] ffff8807076b1010: 10 00 00 00 00 20 0f e0 00 00 00 00 00 00 00 00  ..... ..........
[Wed Dec  2 17:22:28 2015] ffff8807076b1020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
[Wed Dec  2 17:22:28 2015] ffff8807076b1030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
[Wed Dec  2 17:22:28 2015] XFS (sdc1): Internal error xfs_attr3_leaf_write_verify at line 216 of file /build/linux-XHaR1x/linux-3.13.0/fs/xfs/xfs_attr_leaf.c. Caller 0xffffffffa01a66f0
[Wed Dec  2 17:22:28 2015] CPU: 4 PID: 13162 Comm: xfsaild/sdc1 Not tainted 3.13.0-67-generic #110-Ubuntu
[Wed Dec  2 17:22:28 2015] Hardware name: Supermicro X10SLL-F/X10SLL-F, BIOS 1.1b 11/01/2013
[Wed Dec  2 17:22:28 2015] 0000000000000001 ffff8801c5691bd0 ffffffff817240e0 ffff8801b15c3800
[Wed Dec  2 17:22:28 2015] ffff8801c5691be8 ffffffffa01aa6fb ffffffffa01a66f0 ffff8801c5691c20
[Wed Dec  2 17:22:28 2015] ffffffffa01aa755 000000d800200200 ffff8804a59ac780 ffff8800d917e658
[Wed Dec  2 17:22:28 2015] Call Trace:
[Wed Dec  2 17:22:28 2015]  [<ffffffff817240e0>] dump_stack+0x45/0x56
[Wed Dec  2 17:22:28 2015]  [<ffffffffa01aa6fb>] xfs_error_report+0x3b/0x40 [xfs]
[Wed Dec  2 17:22:28 2015]  [<ffffffffa01a66f0>] ? _xfs_buf_ioapply+0x70/0x3a0 [xfs]
[Wed Dec  2 17:22:28 2015]  [<ffffffffa01aa755>] xfs_corruption_error+0x55/0x80 [xfs]
[Wed Dec  2 17:22:28 2015]  [<ffffffffa01c7b70>] xfs_attr3_leaf_write_verify+0x100/0x120 [xfs]
[Wed Dec  2 17:22:28 2015]  [<ffffffffa01a66f0>] ? _xfs_buf_ioapply+0x70/0x3a0 [xfs]
[Wed Dec  2 17:22:28 2015]  [<ffffffffa01a83d5>] ? xfs_bdstrat_cb+0x55/0xb0 [xfs]
[Wed Dec  2 17:22:28 2015]  [<ffffffffa01a66f0>] _xfs_buf_ioapply+0x70/0x3a0 [xfs]
[Wed Dec  2 17:22:28 2015]  [<ffffffff8109ac90>] ? wake_up_state+0x20/0x20
[Wed Dec  2 17:22:28 2015]  [<ffffffffa01a83d5>] ? xfs_bdstrat_cb+0x55/0xb0 [xfs]
[Wed Dec  2 17:22:28 2015]  [<ffffffffa01a8336>] xfs_buf_iorequest+0x46/0x90 [xfs]
[Wed Dec  2 17:22:28 2015]  [<ffffffffa01a83d5>] xfs_bdstrat_cb+0x55/0xb0 [xfs]
[Wed Dec  2 17:22:28 2015]  [<ffffffffa01a856b>] __xfs_buf_delwri_submit+0x13b/0x210 [xfs]
[Wed Dec  2 17:22:28 2015]  [<ffffffffa01a9000>] ? xfs_buf_delwri_submit_nowait+0x20/0x30 [xfs]
[Wed Dec  2 17:22:28 2015]  [<ffffffffa0207af0>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[Wed Dec  2 17:22:28 2015]  [<ffffffffa01a9000>] xfs_buf_delwri_submit_nowait+0x20/0x30 [xfs]
[Wed Dec  2 17:22:28 2015]  [<ffffffffa0207d27>] xfsaild+0x237/0x5c0 [xfs]
[Wed Dec 2 17:22:28 2015] [<ffffffffa0207af0>] ? xfs_trans_ail_cursor_first+0x90/0x90 [xfs]
[Wed Dec  2 17:22:28 2015]  [<ffffffff8108b7d2>] kthread+0xd2/0xf0
[Wed Dec 2 17:22:28 2015] [<ffffffff8108b700>] ? kthread_create_on_node+0x1c0/0x1c0
[Wed Dec  2 17:22:28 2015]  [<ffffffff81734c28>] ret_from_fork+0x58/0x90
[Wed Dec  2 17:22:28 2015]  [<ffffffff8108b700>] ? kthread_create_on_node+0x1c0/0x1c0
[Wed Dec  2 17:22:28 2015] XFS (sdc1): Corruption detected. Unmount and run xfs_repair
[Wed Dec  2 17:22:28 2015] XFS (sdc1): xfs_do_force_shutdown(0x8) called from line 1320 of file /build/linux-XHaR1x/linux-3.13.0/fs/xfs/xfs_buf.c. Return address = 0xffffffffa01a671c
[Wed Dec  2 17:22:28 2015] XFS (sdc1): Corruption of in-memory data detected. Shutting down filesystem
[Wed Dec  2 17:22:28 2015] XFS (sdc1): Please umount the filesystem and rectify the problem(s)
[Wed Dec  2 17:22:28 2015] XFS (sdc1): xfs_log_force: error 5 returned.
[Wed Dec  2 17:22:49 2015] XFS (sdc1): xfs_log_force: error 5 returned.
...

[ 19:10:49 ] - root@gluster-ger-ber-10  /var/log $xfs_info /gluster-export
meta-data=/dev/sdc1              isize=256    agcount=32, agsize=152596472 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=4883087099, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[ 19:10:55 ] - root@gluster-ger-ber-10  /var/log $

[ 09:36:37 ] - root@gluster-ger-ber-10  /var/log $stat /gluster-export
stat: cannot stat ‘/gluster-export’: Input/output error
[ 09:36:45 ] - root@gluster-ger-ber-10  /var/log $
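As far as I understand it, xfs_attr3_leaf_write_verify is the write-time verifier for XFS extended-attribute leaf blocks, and GlusterFS keeps its metadata (gfid, replication changelog, geo-rep stime) in exactly those xattrs. So after the next remount, one thing to try is dumping the xattrs on the brick root to check whether they are still readable, e.g.:

  getfattr -d -m . -e hex /gluster-export
  # on a healthy brick root this should include at least:
  # trusted.gfid=0x00000000000000000000000000000001
  # trusted.glusterfs.volume-id=0x6a071cfab1504f0bb1ed96ab5d4bd671

(the volume-id matches the Volume ID from 'gluster volume info' above).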


[ 08:50:43 ] - root@gluster-ger-ber-10 ~/tmp/syslog $dmesg -T | grep xfs_attr3_leaf_write_verify
[Di Dez  1 23:24:53 2015] XFS (sdc1): Internal error xfs_attr3_leaf_write_verify at line 216 of file /build/linux-XHaR1x/linux-3.13.0/fs/xfs/xfs_attr_leaf.c. Caller 0xffffffffa01a66f0
[Di Dez  1 23:24:53 2015] [<ffffffffa01c7b70>] xfs_attr3_leaf_write_verify+0x100/0x120 [xfs]
[Mi Dez  2 12:19:16 2015] XFS (sdc1): Internal error xfs_attr3_leaf_write_verify at line 216 of file /build/linux-XHaR1x/linux-3.13.0/fs/xfs/xfs_attr_leaf.c. Caller 0xffffffffa01a66f0
[Mi Dez  2 12:19:16 2015] [<ffffffffa01c7b70>] xfs_attr3_leaf_write_verify+0x100/0x120 [xfs]
[Mi Dez  2 17:22:28 2015] XFS (sdc1): Internal error xfs_attr3_leaf_write_verify at line 216 of file /build/linux-XHaR1x/linux-3.13.0/fs/xfs/xfs_attr_leaf.c. Caller 0xffffffffa01a66f0
[Mi Dez  2 17:22:28 2015] [<ffffffffa01c7b70>] xfs_attr3_leaf_write_verify+0x100/0x120 [xfs]
[Mi Dez  2 23:06:32 2015] XFS (sdc1): Internal error xfs_attr3_leaf_write_verify at line 216 of file /build/linux-XHaR1x/linux-3.13.0/fs/xfs/xfs_attr_leaf.c. Caller 0xffffffffa01a66f0
[Mi Dez  2 23:06:32 2015] [<ffffffffa01c7b70>] xfs_attr3_leaf_write_verify+0x100/0x120 [xfs]

[ 08:06:28 ] - root@gluster-ger-ber-10 /var/log/glusterfs/geo-replication $grep xfs_attr3_leaf_write_verify /root/tmp/syslog/syslog*
Dec  2 00:01:50 gluster-ger-ber-10 kernel: [2278489.906268] XFS (sdc1): Internal error xfs_attr3_leaf_write_verify at line 216 of file /build/linux-XHaR1x/linux-3.13.0/fs/xfs/xfs_attr_leaf.c. Caller 0xffffffffa01a66f0
Dec  2 00:01:50 gluster-ger-ber-10 kernel: [2278489.906448] [<ffffffffa01c7b70>] xfs_attr3_leaf_write_verify+0x100/0x120 [xfs]
Dec  2 12:56:57 gluster-ger-ber-10 kernel: [2324952.509891] XFS (sdc1): Internal error xfs_attr3_leaf_write_verify at line 216 of file /build/linux-XHaR1x/linux-3.13.0/fs/xfs/xfs_attr_leaf.c. Caller 0xffffffffa01a66f0
Dec  2 12:56:57 gluster-ger-ber-10 kernel: [2324952.510414] [<ffffffffa01c7b70>] xfs_attr3_leaf_write_verify+0x100/0x120 [xfs]
(xfs_check / xfs_repair run at this point -> no fault)
Dec  2 18:00:27 gluster-ger-ber-10 kernel: [2343144.298098] XFS (sdc1): Internal error xfs_attr3_leaf_write_verify at line 216 of file /build/linux-XHaR1x/linux-3.13.0/fs/xfs/xfs_attr_leaf.c. Caller 0xffffffffa01a66f0
Dec  2 18:00:27 gluster-ger-ber-10 kernel: [2343144.298259] [<ffffffffa01c7b70>] xfs_attr3_leaf_write_verify+0x100/0x120 [xfs]
Dec  2 23:44:52 gluster-ger-ber-10 kernel: [2363788.969849] XFS (sdc1): Internal error xfs_attr3_leaf_write_verify at line 216 of file /build/linux-XHaR1x/linux-3.13.0/fs/xfs/xfs_attr_leaf.c. Caller 0xffffffffa01a66f0
Dec  2 23:44:52 gluster-ger-ber-10 kernel: [2363788.970217] [<ffffffffa01c7b70>] xfs_attr3_leaf_write_verify+0x100/0x120 [xfs]
[ 08:06:37 ] - root@gluster-ger-ber-10 /var/log/glusterfs/geo-replication $

[ 08:04:51 ] - root@gluster-ger-ber-12 ~/tmp/syslog $grep xfs_attr3_leaf_write_verify syslog*
Dec  2 00:01:10 gluster-ger-ber-12 kernel: [2276785.772229] XFS (sdc1): Internal error xfs_attr3_leaf_write_verify at line 216 of file /build/linux-XHaR1x/linux-3.13.0/fs/xfs/xfs_attr_leaf.c. Caller 0xffffffffa019a6f0
Dec  2 00:01:10 gluster-ger-ber-12 kernel: [2276785.772504] [<ffffffffa01bbb70>] xfs_attr3_leaf_write_verify+0x100/0x120 [xfs]
Dec  2 12:59:08 gluster-ger-ber-12 kernel: [2323418.198659] XFS (sdc1): Internal error xfs_attr3_leaf_write_verify at line 216 of file /build/linux-XHaR1x/linux-3.13.0/fs/xfs/xfs_attr_leaf.c. Caller 0xffffffffa019a6f0
Dec  2 12:59:08 gluster-ger-ber-12 kernel: [2323418.199085] [<ffffffffa01bbb70>] xfs_attr3_leaf_write_verify+0x100/0x120 [xfs]
(xfs_check / xfs_repair run at this point -> no fault)
Dec  2 18:30:47 gluster-ger-ber-12 kernel: [2343298.342473] XFS (sdc1): Internal error xfs_attr3_leaf_write_verify at line 216 of file /build/linux-XHaR1x/linux-3.13.0/fs/xfs/xfs_attr_leaf.c. Caller 0xffffffffa019a6f0
Dec  2 18:30:47 gluster-ger-ber-12 kernel: [2343298.342850] [<ffffffffa01bbb70>] xfs_attr3_leaf_write_verify+0x100/0x120 [xfs]
Dec  2 23:48:38 gluster-ger-ber-12 kernel: [15001.493190] XFS (sdc1): Internal error xfs_attr3_leaf_write_verify at line 216 of file /build/linux-XHaR1x/linux-3.13.0/fs/xfs/xfs_attr_leaf.c. Caller 0xffffffffa01936f0
Dec  2 23:48:38 gluster-ger-ber-12 kernel: [15001.493550] [<ffffffffa01b4b70>] xfs_attr3_leaf_write_verify+0x100/0x120 [xfs]
[ 08:05:02 ] - root@gluster-ger-ber-12  ~/tmp/syslog $

gluster-ger-ber-10-int:
glustershd.log :
[2015-12-02 23:45:33.160852] W [socket.c:620:__socket_rwv] 0-ger-ber-01-client-3: readv on 10.0.1.103:49152 failed (No data available)
[2015-12-02 23:45:33.170590] I [client.c:2203:client_rpc_notify] 0-ger-ber-01-client-3: disconnected from ger-ber-01-client-3. Client process will keep trying to connect to glusterd until brick's port is available
[2015-12-02 23:45:43.784388] E [client-handshake.c:1496:client_query_portmap_cbk] 0-ger-ber-01-client-3: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
[2015-12-02 23:45:43.784543] I [client.c:2203:client_rpc_notify] 0-ger-ber-01-client-3: disconnected from ger-ber-01-client-3. Client process will keep trying to connect to glusterd until brick's port is available
[2015-12-02 23:45:50.000203] W [client-rpc-fops.c:1090:client3_3_getxattr_cbk] 0-ger-ber-01-client-3: remote operation failed: Transport endpoint is not connected. Path: / (00000000-0000-0000-0000-000000000001). Key: trusted.glusterfs.pathinfo
[2015-12-02 23:49:33.524740] W [socket.c:620:__socket_rwv] 0-ger-ber-01-client-1: readv on 10.0.1.107:49152 failed (No data available)
[2015-12-02 23:49:33.524934] I [client.c:2203:client_rpc_notify] 0-ger-ber-01-client-1: disconnected from ger-ber-01-client-1. Client process will keep trying to connect to glusterd until brick's port is available
[2015-12-02 23:49:43.882976] E [client-handshake.c:1496:client_query_portmap_cbk] 0-ger-ber-01-client-1: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.

sdn.log :
[2015-12-02 23:45:33.160963] W [socket.c:620:__socket_rwv] 0-ger-ber-01-client-3: readv on 10.0.1.103:49152 failed (No data available)
[2015-12-02 23:45:33.168504] I [client.c:2203:client_rpc_notify] 0-ger-ber-01-client-3: disconnected from ger-ber-01-client-3. Client process will keep trying to connect to glusterd until brick's port is available
[2015-12-02 23:45:43.395787] E [client-handshake.c:1496:client_query_portmap_cbk] 0-ger-ber-01-client-3: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.

nfs.log :
[2015-12-02 23:45:33.160856] W [socket.c:620:__socket_rwv] 0-ger-ber-01-client-3: readv on 10.0.1.103:49152 failed (No data available)
[2015-12-02 23:45:33.180366] I [client.c:2203:client_rpc_notify] 0-ger-ber-01-client-3: disconnected from ger-ber-01-client-3. Client process will keep trying to connect to glusterd until brick's port is available
[2015-12-02 23:45:43.780186] E [client-handshake.c:1496:client_query_portmap_cbk] 0-ger-ber-01-client-3: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
[2015-12-02 23:45:43.780340] I [client.c:2203:client_rpc_notify] 0-ger-ber-01-client-3: disconnected from ger-ber-01-client-3. Client process will keep trying to connect to glusterd until brick's port is available

geo-replication log :
[2015-12-02 23:44:34.624957] I [master(/gluster-export):514:crawlwrap] _GMaster: 0 crawls, 0 turns
[2015-12-02 23:44:54.798414] E [syncdutils(/gluster-export):270:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 164, in main main_i() File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/gsyncd.py", line 643, in main_i local.service_loop(*[r for r in [remote] if r]) File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 1325, in service_loop g3.crawlwrap(oneshot=True) File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 527, in crawlwrap brick_stime = self.xtime('.', self.slave) File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 362, in xtime return self.xtime_low(rsc, path, **opts) File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/master.py", line 132, in xtime_low xt = rsc.server.stime(path, self.uuid) File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 1259, in <lambda> uuid + '.' + gconf.slave_id) File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 322, in ff return f(*a) File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/resource.py", line 510, in stime 8) File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/libcxattr.py", line 55, in lgetxattr return cls._query_xattr(path, siz, 'lgetxattr', attr) File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/libcxattr.py", line 47, in _query_xattr cls.raise_oserr() File "/usr/lib/x86_64-linux-gnu/glusterfs/python/syncdaemon/libcxattr.py", line 37, in raise_oserr raise OSError(errn, os.strerror(errn))
OSError: [Errno 5] Input/output error
[2015-12-02 23:44:54.845763] I [syncdutils(/gluster-export):214:finalize] <top>: exiting.
[2015-12-02 23:44:54.847527] I [repce(agent):92:service_loop] RepceServer: terminating on reaching EOF.
[2015-12-02 23:44:54.847784] I [syncdutils(agent):214:finalize] <top>: exiting.
[2015-12-02 23:44:54.849092] I [monitor(monitor):141:set_state] Monitor: new state: faulty
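The traceback boils down to a single lgetxattr() on the brick root returning EIO once XFS has shut the filesystem down, the same EIO the plain stat above runs into. A manual check would be something like the following (the real stime key contains the master/slave volume UUIDs; the name below is only a placeholder):

  getfattr -n 'trusted.glusterfs.<master-uuid>.<slave-uuid>.stime' /gluster-export
  # while the brick filesystem is shut down this fails with:
  # /gluster-export: Input/output error

So the geo-replication worker going faulty here looks like a consequence of the XFS shutdown rather than a separate fault.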



_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users



