I'm hoping that someone here can point me in the right direction to help me solve a problem I am having. I've got 3 gluster peers, and for some reason glusterd will not start on one of them. All are running glusterfs version 3.4.0-8.el6 on CentOS 6.4 (2.6.32-358.el6.x86_64).

In /var/log/glusterfs/etc-glusterfs-glusterd.vol.log I see this error repeated 36 times (alternating between brick-0 and brick-1):

    E [glusterd-store.c:1845:glusterd_store_retrieve_volume] 0-: Unknown key: brick-0

This makes some sense to me, since I have 18 replica 2 volumes, for a total of 36 bricks. Then there are a few more "I" messages, and this is the rest of the file:

    E [glusterd-store.c:2472:glusterd_resolve_all_bricks] 0-glusterd: resolve brick failed in restore
    E [xlator.c:390:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
    E [graph.c:292:glusterfs_graph_init] 0-management: initializing translator failed
    E [graph.c:479:glusterfs_graph_activate] 0-graph: init failed
    W [glusterfsd.c:1002:cleanup_and_exit] (-->/usr/sbin/glusterd(main+0x5d2) [0x406802] (-->/usr/sbin/glusterd(glusterfs_volumes_init+0xb7) [0x4051b7] (-->/usr/sbin/glusterd(glusterfs_process_volfp+0x103) [0x4050c3]))) 0-: received signum (0), shutting down

Here are the contents of /etc/glusterfs/glusterd.vol:

    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option transport-type socket,rdma
        option transport.socket.keepalive-time 10
        option transport.socket.keepalive-interval 2
        option transport.socket.read-fail-log off
    end-volume

glusterd.vol is identical on all of the peers, and the other two work. Any help on where to look next would be greatly appreciated.

Thanks,
Mark
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://supercolony.gluster.org/pipermail/gluster-users/attachments/20131007/017d7fa3/attachment.html>