Hi list,
I am running a three-node replicated Gluster volume in an oVirt environment. After putting one node into maintenance mode and rebooting it, the "Online" flag in gluster volume status does not go back to "Y":
[root@node1 glusterfs]# gluster volume status
Status of volume: my_volume
Gluster process                                          TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.22.1.95:/gluster_bricks/my_gluster/my_gluster   N/A       N/A        N       N/A
Brick 10.22.1.97:/gluster_bricks/my_gluster/my_gluster   49152     0          Y       4954
Brick 10.22.1.94:/gluster_bricks/my_gluster/my_gluster   49152     0          Y       3574
Self-heal Daemon on localhost                            N/A       N/A        Y       3585
Self-heal Daemon on node2                                N/A       N/A        Y       3557
Self-heal Daemon on node3                                N/A       N/A        Y       4973
Task Status of Volume my_volume
------------------------------------------------------------------------------
There are no active volume tasks
Shouldn't it go back to Online "Y" automatically?
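For reference, this is what I was planning to try next to bring the brick back up. I am assuming that a force-start only respawns the brick process that is missing and leaves the healthy bricks untouched, but please correct me if that is wrong:

# on the rebooted node (node1)
systemctl status glusterd                 # confirm the management daemon came back after the reboot
gluster volume start my_volume force      # ask glusterd to respawn any brick process that is not running
gluster volume status my_volume           # check whether the brick now shows Online = Y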
This is the output of gluster volume info on the same node:
[root@node1 glusterfs]# gluster volume info
Volume Name: my_volume
Type: Replicate
Volume ID: 78b9299c-1df5-4780-b108-4d3a6dee225d
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.22.1.95:/gluster_bricks/my_gluster/my_gluster
Brick2: 10.22.1.97:/gluster_bricks/my_gluster/my_gluster
Brick3: 10.22.1.94:/gluster_bricks/my_gluster/my_gluster
Options Reconfigured:
cluster.granular-entry-heal: enable
storage.owner-gid: 36
storage.owner-uid: 36
cluster.lookup-optimize: off
server.keepalive-count: 5
server.keepalive-interval: 2
server.keepalive-time: 10
server.tcp-user-timeout: 20
network.ping-timeout: 30
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
performance.strict-o-direct: on
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
auth.allow: *
user.cifs: off
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
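Since cluster.quorum-type is auto on a 1 x 3 replica, I assume the volume stays writable with two bricks up and that pending heals are simply accumulating while the third brick is offline. This is what I would run to check the heal backlog (output omitted here):

gluster volume heal my_volume info                    # list entries that still need healing per brick
gluster volume heal my_volume statistics heal-count   # per-brick count of pending heal entries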
Regards,
Martin