Auto-mount on boot failing, works later

I'm running gluster 3.2.5 in a bog-standard 2-node AFR config on Ubuntu 10.04. I've set gluster to start on boot, and the fstab entry waits for networking to start; however, it still appears to try to mount the volume before gluster is running, resulting in a mount failure. Issuing a mount -a later on works fine. This is my fstab line:

127.0.0.1:/shared /var/lib/sitedata glusterfs defaults,noatime,_netdev 0 0

This is in the volume log:

[2012-01-16 16:54:00.716693] W [socket.c:1494:__socket_proto_state_machine] 0-glusterfs: reading from socket failed. Error (Transport endpoint is not connected), peer (127.0.0.1:24007)
[2012-01-16 16:54:00.716889] W [socket.c:1494:__socket_proto_state_machine] 0-socket.glusterfsd: reading from socket failed. Error (Transport endpoint is not connected), peer ()
[2012-01-16 16:54:02.405297] W [glusterfsd.c:727:cleanup_and_exit] (-->/lib/libc.so.6(clone+0x6d) [0x7ff600bcf70d] (-->/lib/libpthread.so.0(+0x69ca) [0x7ff600e729ca] (-->/usr/sbin/glusterfsd(glusterfs_sigwaiter+0xd5) [0x406335]))) 0-: received signum (15), shutting down
[2012-01-16 16:54:02.406082] I [socket.c:2275:socket_submit_request] 0-glusterfs: not connected (priv->connected = 0)
[2012-01-16 16:54:02.406116] W [rpc-clnt.c:1417:rpc_clnt_submit] 0-glusterfs: failed to submit rpc-request (XID: 0x3x Program: Gluster Portmap, ProgVers: 1, Proc: 5) to rpc-transport (glusterfs)
[2012-01-16 16:55:27.857675] I [glusterfsd.c:1493:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 3.2.5

You can see it's failing to connect to the local node, and then recording the gluster server starting a short time later. I'm pointing at localhost to get the share point rather than either of the external IPs. Is there some way to make it wait for gluster to be up? Alternatively, is it safer to point the mount at the other node on the basis that it's unlikely to be down at the same time as the current node? OTOH it's nice for each node to be self-sufficient.
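One workaround I've been considering (just a sketch, not something I've tested in anger): add noauto to the fstab line so boot doesn't attempt the mount at all, then retry the mount from a late boot script like /etc/rc.local until glusterd is actually accepting connections. The retry helper below is generic; the mount command in the comment reuses the paths from my fstab entry, and the attempt count and sleep interval are arbitrary guesses.

```shell
#!/bin/sh
# Retry a command until it succeeds or the attempt limit is reached.
retry() {
    tries="$1"; shift
    i=0
    while [ "$i" -lt "$tries" ]; do
        "$@" && return 0    # stop as soon as the command succeeds
        i=$((i + 1))
        sleep 2             # give glusterd a moment to finish starting
    done
    return 1
}

# With "noauto" added to the fstab line, something like this could run
# from /etc/rc.local (volume and mount point as in the fstab entry above):
#   retry 30 mount -t glusterfs 127.0.0.1:/shared /var/lib/sitedata
```

It still feels like a hack compared to the init system doing the right thing, but it would at least keep each node self-sufficient.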

I've also seen this in the logs:

[2012-01-13 15:17:42.529690] E [name.c:253:af_inet_client_get_remote_sockaddr] 0-glusterfs: DNS resolution failed on host 127.0.0.1

I've no idea why it's looking for 127.0.0.1 in DNS! FWIW localhost has an entry in /etc/hosts.

Marcus
-- 
Marcus Bointon
Synchromedia Limited: Creators of http://www.smartmessages.net/
UK info at hand CRM solutions
marcus at synchromedia.co.uk | http://www.synchromedia.co.uk/




