On 10/13/2014 09:07 AM, Jesus Carretero wrote:
I updated my CentOS Gluster installation to version 3.5.2.1.el6, and now the volume doesn't start.
This is the peer:
# gluster peer status
Number of Peers: 1
Hostname: sandra
Uuid: a6234622-a205-4311-8f60-8e30236fdd72
State: Peer in Cluster (Connected)
This is the volume:
# gluster volume create storage transport rdma julia:/mnt/.storage1 sandra:/mnt/.storage2 force
volume create: storage: success: please start the volume to access data
# gluster volume info
Volume Name: storage
Type: Distribute
Volume ID: 5ea2f8f5-c404-484b-b7b4-5a26fe66d6ac
Status: Created
Number of Bricks: 2
Transport-type: rdma
Bricks:
Brick1: julia:/mnt/.storage1
Brick2: sandra:/mnt/.storage2
This is what happens when I try to start the storage volume:
# gluster volume start storage
volume start: storage: failed: Commit failed on localhost. Please check the log file for more details.
This is from the log /var/log/glusterfs/etc-glusterfs-glusterd.vol.log:
[2014-10-13 16:02:17.084383] I [glusterd-pmap.c:271:pmap_registry_remove] 0-pmap: removing brick (null) on port 49158
[2014-10-13 16:02:17.091475] E [glusterd-utils.c:4704:glusterd_brick_start] 0-management: Unable to start brick julia:/mnt/.storage1
[2014-10-13 16:02:17.091523] E [glusterd-syncop.c:1014:gd_commit_op_phase] 0-management: Commit of operation 'Volume Start' failed on localhost
What could be happening?
Look at the brick log on julia in /var/log/glusterfs/bricks/mnt-.storage1.log — the "Commit failed" message from glusterd only says that the brick process failed to start; the brick's own log should contain the actual reason.
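When glusterd reports only "Commit failed on localhost", the useful detail is usually in the E-level entries of the logs it points at. As a minimal sketch, this filters a log for error-level lines; the excerpt below is the one quoted in this thread, but on a real system you would grep /var/log/glusterfs/etc-glusterfs-glusterd.vol.log or the brick log directly:

```shell
# Filter a glusterd-style log for error ("E") entries. The $log excerpt is
# copied from this thread; replace it with the real log file on your system,
# e.g.:  grep '\] E \[' /var/log/glusterfs/bricks/mnt-.storage1.log
log='[2014-10-13 16:02:17.084383] I [glusterd-pmap.c:271:pmap_registry_remove] 0-pmap: removing brick (null) on port 49158
[2014-10-13 16:02:17.091475] E [glusterd-utils.c:4704:glusterd_brick_start] 0-management: Unable to start brick julia:/mnt/.storage1'

# "] E [" only matches error-level lines, not the informational "] I [" ones
errors=$(printf '%s\n' "$log" | grep '\] E \[')
printf '%s\n' "$errors"
```

The same pattern works on any GlusterFS component log, since they all share the "[timestamp] LEVEL [file:line:function]" prefix format.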
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users