On 2014-10-03 16:23, Niels de Vos wrote:
On Fri, Oct 03, 2014 at 03:26:04PM +0200, Peter Haraldson wrote:
Hi all!
Hi Peter!
I'm rather new to GlusterFS and am trying it out as redundant storage
for my very small company.
I have a minimal setup: two servers (storage1 & storage2) with one
brick each, both added to the volume "testvol1". I then mount "testvol1"
on a third server (app1). This works fine as long as I use either
server's IP with filesystem type glusterfs, but mounting the volume
using a file "/owndata/conf/glusterfs/storage.vol" does not work. I also
can't use NFS; I don't need it, but maybe it's related.
Using the volume-file for mounting is not recommended. Current
versions of Gluster manage the volume-file for you; there is no need
to change it or to use it directly.
Mounting with "-t glusterfs ..." is the recommended way. If you would
like to fall back on the second server while mounting, you can use the
"backupvolfile-server=storage2" mount option.
I cannot say why mounting over NFS fails. The output of
"gluster volume status" below shows that the NFS server is running and
listening on port 2049. You can find the logs for the NFS server in
/var/log/glusterfs/nfs.log; combine that with the output of
# mount -vvv -t nfs storage1:/testvol1 /mnt/tmp
to get some ideas on what might go wrong.
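One common stumbling block with NFS (an assumption, not a confirmed
diagnosis): the Gluster NFS server only speaks NFSv3 over TCP, so it
helps to be explicit about the version and to check that the services
are registered with the portmapper:

# force NFSv3 over TCP when mounting
mount -t nfs -o vers=3,tcp storage1:/testvol1 /mnt/tmp
# list the RPC services and exports registered on the server
rpcinfo -p storage1
showmount -e storage1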
HTH,
Niels
Hi Niels,
thanks for your answer.
When I experimented with this last Friday I tried the
"backup-volfile-server=storage2" mount option, but when I took down the
first server the connection to Gluster was lost completely. Trying
again today, however, it works fine: the backup server is written to!
What I want to achieve is to avoid having a SPOF, and the
backupvolfile option does exactly that; I don't know why it didn't
work during my first tests. (I could of course use round-robin DNS
instead, but for my small needs that is a bit overkill.)
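For the record, a persistent mount in /etc/fstab could look something
like this (a sketch; the option spelling depends on the release, as
noted above):

# fetch the volfile from storage1, or from storage2 if storage1 is down
storage1:/testvol1  /mnt/tmp  glusterfs  defaults,_netdev,backupvolfile-server=storage2  0 0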
So everything is fine. I won't bother with the NFS mounting problem,
as it is not going to be used anyway.
Kind regards
Peter
So:
"mount -t glusterfs 192.168.160.21:/testvol1 /mnt/tmp/" works. I write a
file to /mnt/tmp/filename, then mount 192.168.12.210:/testvol1 and the newly
created file is there.
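(As a side check, a sketch: the same file should also show up in the
brick directory on both storage servers, though one should never write
to the bricks directly.)

# run on each storage server; the brick backend should hold the file
ls -l /export/vdb/brick1/filename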
Trying to mount using the config file storage.vol:
mount -t glusterfs /owndata/conf/glusterfs/storage.vol /mnt/tmp
Mount failed. Please check the log file for more details.
The main error in the log is:
E [client-handshake.c:1778:client_query_portmap_cbk] 0-remote1:
failed to get the port number for remote subvolume. Please run
'gluster volume status' on server to see if brick process is running.
There are lots and lots of pages on the net about this error message;
none of the solutions I've found have worked.
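One thing worth comparing (a suggestion; the exact path can differ
between versions) is the client volfile that glusterd itself generates
on the servers:

# the generated FUSE client volfile for the volume
cat /var/lib/glusterd/vols/testvol1/testvol1-fuse.vol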
CentOS 6.5 on all servers; they are all KVM guests under oVirt (this is
just the testing stage, it will be on real iron in production).
No firewall anywhere, SELinux is permissive.
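To double-check that, something like this on each server (a quick
sketch) confirms that nothing filters glusterd's port 24007 or the
brick ports (49152 and up):

# list the active iptables rules and the SELinux mode
iptables -L -n
getenforce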
*File storage.vol:*
volume remote1
type protocol/client
option transport-type tcp
option remote-host 192.168.12.210
option remote-subvolume testvol1
end-volume
volume remote2
type protocol/client
option transport-type tcp
option remote-host 192.168.160.21
option remote-subvolume testvol1
end-volume
volume replicate
type cluster/replicate
subvolumes remote1 remote2
end-volume
volume writebehind
type performance/write-behind
option window-size 1MB
subvolumes replicate
end-volume
volume cache
type performance/io-cache
option cache-size 256MB
subvolumes writebehind
end-volume
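(One likely culprit here, though this is an assumption based on the
portmap error above: glusterd's portmapper registers bricks by their
path, not by the volume name, so "remote-subvolume testvol1" cannot be
resolved to a port. A hand-written client block would need something
like:

volume remote1
type protocol/client
option transport-type tcp
option remote-host 192.168.12.210
# the brick path as glusterd registered it, not the volume name
option remote-subvolume /export/vdb/brick1
end-volume

and the matching change in remote2.)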
*# gluster volume info*
Volume Name: testvol1
Type: Replicate
Volume ID: bcca4aa2-46c0-44a2-8175-1305faa8b4f9
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.12.210:/export/vdb/brick1
Brick2: 192.168.160.21:/export/vdb/brick1
*# gluster volume status*
Status of volume: testvol1
Gluster process                                  Port    Online  Pid
------------------------------------------------------------------------------
Brick 192.168.12.210:/export/vdb/brick1          49152   Y       1656
Brick 192.168.160.21:/export/vdb/brick1          49152   Y       139090
NFS Server on localhost                          2049    Y       1670
Self-heal Daemon on localhost                    N/A     Y       1674
NFS Server on 192.168.160.21                     2049    Y       1481
Self-heal Daemon on 192.168.160.21               N/A     Y       139105
Task Status of Volume testvol1
------------------------------------------------------------------------------
There are no active volume tasks
*Complete log after fail:*
[2014-10-02 14:38:22.252235] I [glusterfsd.c:2026:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version
3.4.0.57rhs (/usr/sbin/glusterfs
--fuse-mountopts=allow_other,default_permissions,max_read=131072
--volfile=/owndata/conf/glusterfs/storage.vol
--fuse-mountopts=allow_other,default_permissions,max_read=131072
/mnt/glust)
[2014-10-02 14:38:22.284438] W [options.c:848:xl_opt_validate]
0-writebehind: option 'window-size' is deprecated, preferred is
'cache-size', continuing with correction
[2014-10-02 14:38:22.284476] W [io-cache.c:1672:init] 0-cache:
dangling volume. check volfile
[2014-10-02 14:38:22.294306] I [socket.c:3505:socket_init]
0-remote2: SSL support is NOT enabled
[2014-10-02 14:38:22.294339] I [socket.c:3520:socket_init]
0-remote2: using system polling thread
[2014-10-02 14:38:22.294832] I [socket.c:3505:socket_init]
0-remote1: SSL support is NOT enabled
[2014-10-02 14:38:22.294848] I [socket.c:3520:socket_init]
0-remote1: using system polling thread
[2014-10-02 14:38:22.294870] I [client.c:2171:notify] 0-remote1:
parent translators are ready, attempting connect on transport
[2014-10-02 14:38:22.306697] I [client.c:2171:notify] 0-remote2:
parent translators are ready, attempting connect on transport
Final graph:
+------------------------------------------------------------------------------+
1: volume remote1
2: type protocol/client
3: option remote-subvolume testvol1
4: option remote-host 192.168.12.210
5: option transport-type socket
6: end-volume
7:
8: volume remote2
9: type protocol/client
10: option remote-subvolume testvol1
11: option remote-host 192.168.160.21
12: option transport-type socket
13: end-volume
14:
15: volume replicate
16: type cluster/replicate
17: subvolumes remote1 remote2
18: end-volume
19:
20: volume writebehind
21: type performance/write-behind
22: option cache-size 1MB
23: subvolumes replicate
24: end-volume
25:
26: volume cache
27: type performance/io-cache
28: option cache-size 256MB
29: subvolumes writebehind
30: end-volume
31:
+------------------------------------------------------------------------------+
[2014-10-02 14:38:22.310830] E
[client-handshake.c:1778:client_query_portmap_cbk] 0-remote1: failed
to get the port number for remote subvolume. Please run 'gluster
volume status' on server to see if brick process is running.
[2014-10-02 14:38:22.310887] I [client.c:2103:client_rpc_notify]
0-remote1: disconnected from 192.168.12.210:24007. Client process
will keep trying to connect to glusterd until brick's port is
available.
[2014-10-02 14:38:22.311031] E
[client-handshake.c:1778:client_query_portmap_cbk] 0-remote2: failed
to get the port number for remote subvolume. Please run 'gluster
volume status' on server to see if brick process is running.
[2014-10-02 14:38:22.311059] I [client.c:2103:client_rpc_notify]
0-remote2: disconnected from 192.168.160.21:24007. Client process
will keep trying to connect to glusterd until brick's port is
available.
[2014-10-02 14:38:22.311070] E [afr-common.c:4025:afr_notify]
0-replicate: All subvolumes are down. Going offline until atleast
one of them comes back up.
[2014-10-02 14:38:22.314827] I [fuse-bridge.c:5874:fuse_graph_setup]
0-fuse: switched to graph 0
[2014-10-02 14:38:22.316140] I [fuse-bridge.c:4811:fuse_init]
0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13
kernel 7.13
[2014-10-02 14:38:22.321404] W [fuse-bridge.c:1134:fuse_attr_cbk]
0-glusterfs-fuse: 2: LOOKUP() / => -1 (Transport endpoint is not
connected)
[2014-10-02 14:38:22.324731] I [fuse-bridge.c:5715:fuse_thread_proc]
0-fuse: unmounting /mnt/glust
[2014-10-02 14:38:22.324931] W [glusterfsd.c:1099:cleanup_and_exit]
(-->/lib64/libc.so.6(clone+0x6d) [0x7f6e2ec5e86d]
(-->/lib64/libpthread.so.0(+0x79d1) [0x7f6e2f2f19d1]
(-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xcd) [0x4052ad]))) 0-:
received signum (15), shutting down
[2014-10-02 14:38:22.324946] I [fuse-bridge.c:6412:fini] 0-fuse:
Unmounting '/mnt/glust'.
Regards
Peter H
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users