Problems Mounting GlusterFS 3.1 Volumes

That worked for me. Thanks!
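
For the archives: the volume had only been created and never started, which is
why the client's portmap queries in the mnt-glusterfs log below kept failing.
A quick, approximate check before mounting is:

n128:~ # gluster volume info test    # the Status line should read Started, not Created
n128:~ # gluster volume start test   # if it does not, start the volume, then remount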

On Thu, Nov 4, 2010 at 12:46 AM, Lakshmipathi <lakshmipathi at gluster.com> wrote:
> Hi Jeremy-
> Did you start the volume, e.g.
> n128:~ # gluster volume start test
> before mounting it? If not, please refer to http://www.gluster.com/community/documentation/index.php/Gluster_3.1:_Configuring_Distributed_Volumes
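>
> For reference, a minimal end-to-end sequence on a single node (assuming the
> same brick path and mount point as in your mail) would look roughly like:
>
> n128:~ # gluster volume create test transport rdma n128:/scratch
> n128:~ # gluster volume start test
> n128:~ # gluster volume info test   # Status should read "Started"
> n128:~ # mount -t glusterfs n128:/test /mnt/glusterfs
>
> A volume that has only been created has no running brick process for clients
> to connect to, which is what the portmap errors in your client log indicate.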
>
>
> --
> ----
> Cheers,
> Lakshmipathi.G
> FOSS Programmer.
>
>
> ----- Original Message -----
> From: "Jeremy Stout" <stout.jeremy at gmail.com>
> To: gluster-users at gluster.org
> Sent: Thursday, November 4, 2010 12:11:38 AM
> Subject: Problems Mounting GlusterFS 3.1 Volumes
>
> Hello. I'm having problems with GlusterFS 3.1. Whenever I mount a
> GlusterFS volume and run 'df', the command hangs. The same thing
> happens if I mount the volume and then try to list a directory on it.
> This happens whether I mount the volume locally or over the InfiniBand
> network.
>
> Here is a breakdown of my setup:
> Platform: x86_64
> Distro: openSUSE 11.3
> Kernel: 2.6.32.22
> OFED: 1.5.1
> GlusterFS: 3.1
>
> Assuming I've just performed a fresh install and started the daemon,
> here is what I do:
> n128:~ # gluster volume create test transport rdma n128:/scratch/
> Creation of volume test has been successful
> n128:~ # mount -t glusterfs n128:/test /mnt/glusterfs/
> n128:~ # df
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/sda1            103210272   6269264  91698232   7% /
> devtmpfs              12358632       176  12358456   1% /dev
> tmpfs                 12367684         4  12367680   1% /dev/shm
> /dev/sda3            104804356   2004256 102800100   2% /scratch
> /dev/sda4              9174072    151580   8556472   2% /spool
>
> The df command hangs when it reaches /mnt/glusterfs. If I manually kill
> the glusterfs mount process from a second terminal, the terminal running
> df returns.
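>
> For completeness, this is roughly how I recover the stuck terminal (the
> actual PID differs each time):
>
> n128:~ # ps ax | grep '[g]lusterfs'   # find the client process for the mount
> n128:~ # kill <PID>                   # the hung df/ls then returns
> n128:~ # umount -l /mnt/glusterfs     # lazily detach the dead mount point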
>
> Here are the log files:
> n128:/var/log/glusterfs # cat /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
> [2010-11-03 14:12:17.261374] I [glusterd.c:274:init] management: Using
> /etc/glusterd as working directory
> [2010-11-03 14:12:17.284874] I [glusterd.c:86:glusterd_uuid_init]
> glusterd: retrieved UUID: 869f621b-622c-4ab9-9f61-3ee602c8ddf6
> Given volfile:
> +------------------------------------------------------------------------------+
>  1: volume management
>  2:     type mgmt/glusterd
>  3:     option working-directory /etc/glusterd
>  4:     option transport-type socket,rdma
>  5:     option transport.socket.keepalive-time 10
>  6:     option transport.socket.keepalive-interval 2
>  7: end-volume
>  8:
>
> +------------------------------------------------------------------------------+
> [2010-11-03 14:20:43.874738] I
> [glusterd-handler.c:775:glusterd_handle_create_volume] glusterd:
> Received create volume req
> [2010-11-03 14:20:43.876305] I [glusterd-utils.c:223:glusterd_lock]
> glusterd: Cluster lock held by 869f621b-622c-4ab9-9f61-3ee602c8ddf6
> [2010-11-03 14:20:43.876326] I
> [glusterd-handler.c:2653:glusterd_op_txn_begin] glusterd: Acquired
> local lock
> [2010-11-03 14:20:43.876337] I
> [glusterd-op-sm.c:5061:glusterd_op_sm_inject_event] glusterd:
> Enqueuing event: 'GD_OP_EVENT_START_LOCK'
> [2010-11-03 14:20:43.876357] I [glusterd-op-sm.c:5109:glusterd_op_sm]
> : Dequeued event of type: 'GD_OP_EVENT_START_LOCK'
> [2010-11-03 14:20:43.876369] I
> [glusterd3_1-mops.c:1105:glusterd3_1_cluster_lock] glusterd: Sent lock
> req to 0 peers
> [2010-11-03 14:20:43.876378] I
> [glusterd-op-sm.c:5061:glusterd_op_sm_inject_event] glusterd:
> Enqueuing event: 'GD_OP_EVENT_ALL_ACC'
> [2010-11-03 14:20:43.876386] I
> [glusterd-op-sm.c:4740:glusterd_op_sm_transition_state] :
> Transitioning from 'Default' to 'Lock sent' due to event
> 'GD_OP_EVENT_START_LOCK'
> [2010-11-03 14:20:43.876395] I [glusterd-op-sm.c:5109:glusterd_op_sm]
> : Dequeued event of type: 'GD_OP_EVENT_ALL_ACC'
> [2010-11-03 14:20:43.876552] I
> [glusterd3_1-mops.c:1247:glusterd3_1_stage_op] glusterd: Sent op req
> to 0 peers
> [2010-11-03 14:20:43.876568] I
> [glusterd-op-sm.c:5061:glusterd_op_sm_inject_event] glusterd:
> Enqueuing event: 'GD_OP_EVENT_ALL_ACC'
> [2010-11-03 14:20:43.876576] I
> [glusterd-op-sm.c:4740:glusterd_op_sm_transition_state] :
> Transitioning from 'Lock sent' to 'Stage op sent' due to event
> 'GD_OP_EVENT_ALL_ACC'
> [2010-11-03 14:20:43.876584] I [glusterd-op-sm.c:5109:glusterd_op_sm]
> : Dequeued event of type: 'GD_OP_EVENT_ALL_ACC'
> [2010-11-03 14:20:43.876592] I
> [glusterd-op-sm.c:5061:glusterd_op_sm_inject_event] glusterd:
> Enqueuing event: 'GD_OP_EVENT_STAGE_ACC'
> [2010-11-03 14:20:43.876600] I
> [glusterd-op-sm.c:5061:glusterd_op_sm_inject_event] glusterd:
> Enqueuing event: 'GD_OP_EVENT_ALL_ACC'
> [2010-11-03 14:20:43.876608] I
> [glusterd-op-sm.c:4740:glusterd_op_sm_transition_state] :
> Transitioning from 'Stage op sent' to 'Stage op sent' due to event
> 'GD_OP_EVENT_ALL_ACC'
> [2010-11-03 14:20:43.876615] I [glusterd-op-sm.c:5109:glusterd_op_sm]
> : Dequeued event of type: 'GD_OP_EVENT_STAGE_ACC'
> [2010-11-03 14:20:43.880391] I
> [glusterd3_1-mops.c:1337:glusterd3_1_commit_op] glusterd: Sent op req
> to 0 peers
> [2010-11-03 14:20:43.880426] I
> [glusterd-op-sm.c:5061:glusterd_op_sm_inject_event] glusterd:
> Enqueuing event: 'GD_OP_EVENT_ALL_ACC'
> [2010-11-03 14:20:43.880436] I
> [glusterd-op-sm.c:4740:glusterd_op_sm_transition_state] :
> Transitioning from 'Stage op sent' to 'Commit op sent' due to event
> 'GD_OP_EVENT_STAGE_ACC'
> [2010-11-03 14:20:43.880445] I [glusterd-op-sm.c:5109:glusterd_op_sm]
> : Dequeued event of type: 'GD_OP_EVENT_ALL_ACC'
> [2010-11-03 14:20:43.880455] I
> [glusterd3_1-mops.c:1159:glusterd3_1_cluster_unlock] glusterd: Sent
> unlock req to 0 peers
> [2010-11-03 14:20:43.880463] I
> [glusterd-op-sm.c:5061:glusterd_op_sm_inject_event] glusterd:
> Enqueuing event: 'GD_OP_EVENT_ALL_ACC'
> [2010-11-03 14:20:43.880470] I
> [glusterd-op-sm.c:4740:glusterd_op_sm_transition_state] :
> Transitioning from 'Commit op sent' to 'Unlock sent' due to event
> 'GD_OP_EVENT_ALL_ACC'
> [2010-11-03 14:20:43.880478] I [glusterd-op-sm.c:5109:glusterd_op_sm]
> : Dequeued event of type: 'GD_OP_EVENT_ALL_ACC'
> [2010-11-03 14:20:43.880491] I
> [glusterd-op-sm.c:4561:glusterd_op_txn_complete] glusterd: Cleared
> local lock
> [2010-11-03 14:20:43.880562] I
> [glusterd-op-sm.c:4740:glusterd_op_sm_transition_state] :
> Transitioning from 'Unlock sent' to 'Default' due to event
> 'GD_OP_EVENT_ALL_ACC'
> [2010-11-03 14:20:43.880576] I [glusterd-op-sm.c:5109:glusterd_op_sm]
> : Dequeued event of type: 'GD_OP_EVENT_ALL_ACC'
> [2010-11-03 14:20:43.880584] I
> [glusterd-op-sm.c:4740:glusterd_op_sm_transition_state] :
> Transitioning from 'Default' to 'Default' due to event
> 'GD_OP_EVENT_ALL_ACC'
> [2010-11-03 14:20:57.327303] E [rdma.c:4370:rdma_event_handler]
> rpc-transport/rdma: rdma.management: pollin received on tcp socket
> (peer: 192.168.40.128:1021) after handshake is complete
> [2010-11-03 14:21:01.166661] E [rdma.c:4370:rdma_event_handler]
> rpc-transport/rdma: rdma.management: pollin received on tcp socket
> (peer: 192.168.40.128:1023) after handshake is complete
> [2010-11-03 14:21:05.169694] E [rdma.c:4370:rdma_event_handler]
> rpc-transport/rdma: rdma.management: pollin received on tcp socket
> (peer: 192.168.40.128:1020) after handshake is complete
> [2010-11-03 14:21:09.172853] E [rdma.c:4370:rdma_event_handler]
> rpc-transport/rdma: rdma.management: pollin received on tcp socket
> (peer: 192.168.40.128:1019) after handshake is complete
> [2010-11-03 14:21:13.176023] E [rdma.c:4370:rdma_event_handler]
> rpc-transport/rdma: rdma.management: pollin received on tcp socket
> (peer: 192.168.40.128:1018) after handshake is complete
> [2010-11-03 14:21:17.179219] E [rdma.c:4370:rdma_event_handler]
> rpc-transport/rdma: rdma.management: pollin received on tcp socket
> (peer: 192.168.40.128:1017) after handshake is complete
> [2010-11-03 14:21:21.182370] E [rdma.c:4370:rdma_event_handler]
> rpc-transport/rdma: rdma.management: pollin received on tcp socket
> (peer: 192.168.40.128:1016) after handshake is complete
> [2010-11-03 14:21:25.185607] E [rdma.c:4370:rdma_event_handler]
> rpc-transport/rdma: rdma.management: pollin received on tcp socket
> (peer: 192.168.40.128:1015) after handshake is complete
> [2010-11-03 14:21:29.188771] E [rdma.c:4370:rdma_event_handler]
> rpc-transport/rdma: rdma.management: pollin received on tcp socket
> (peer: 192.168.40.128:1014) after handshake is complete
>
>
> n128:/var/log/glusterfs # cat /var/log/glusterfs/mnt-glusterfs-.log
> [2010-11-03 14:20:57.163632] W [io-stats.c:1637:init] test: dangling
> volume. check volfile
> [2010-11-03 14:20:57.163670] W [dict.c:1204:data_to_str] dict: @data=(nil)
> [2010-11-03 14:20:57.163681] W [dict.c:1204:data_to_str] dict: @data=(nil)
> Given volfile:
> +------------------------------------------------------------------------------+
>  1: volume test-client-0
>  2:     type protocol/client
>  3:     option remote-host n128
>  4:     option remote-subvolume /scratch
>  5:     option transport-type rdma
>  6: end-volume
>  7:
>  8: volume test-write-behind
>  9:     type performance/write-behind
> 10:     subvolumes test-client-0
> 11: end-volume
> 12:
> 13: volume test-read-ahead
> 14:     type performance/read-ahead
> 15:     subvolumes test-write-behind
> 16: end-volume
> 17:
> 18: volume test-io-cache
> 19:     type performance/io-cache
> 20:     subvolumes test-read-ahead
> 21: end-volume
> 22:
> 23: volume test-quick-read
> 24:     type performance/quick-read
> 25:     subvolumes test-io-cache
> 26: end-volume
> 27:
> 28: volume test
> 29:     type debug/io-stats
> 30:     subvolumes test-quick-read
> 31: end-volume
>
> +------------------------------------------------------------------------------+
> [2010-11-03 14:20:57.327245] E
> [client-handshake.c:773:client_query_portmap_cbk] test-client-0:
> failed to get the port number for remote subvolume
> [2010-11-03 14:20:57.327308] E [rdma.c:4370:rdma_event_handler]
> rpc-transport/rdma: test-client-0: pollin received on tcp socket
> (peer: 192.168.40.128:24008) after handshake is complete
> [2010-11-03 14:21:01.166622] E
> [client-handshake.c:773:client_query_portmap_cbk] test-client-0:
> failed to get the port number for remote subvolume
> [2010-11-03 14:21:01.166667] E [rdma.c:4370:rdma_event_handler]
> rpc-transport/rdma: test-client-0: pollin received on tcp socket
> (peer: 192.168.40.128:24008) after handshake is complete
> [2010-11-03 14:21:05.169655] E
> [client-handshake.c:773:client_query_portmap_cbk] test-client-0:
> failed to get the port number for remote subvolume
> [2010-11-03 14:21:05.169699] E [rdma.c:4370:rdma_event_handler]
> rpc-transport/rdma: test-client-0: pollin received on tcp socket
> (peer: 192.168.40.128:24008) after handshake is complete
> [2010-11-03 14:21:09.172815] E
> [client-handshake.c:773:client_query_portmap_cbk] test-client-0:
> failed to get the port number for remote subvolume
> [2010-11-03 14:21:09.172858] E [rdma.c:4370:rdma_event_handler]
> rpc-transport/rdma: test-client-0: pollin received on tcp socket
> (peer: 192.168.40.128:24008) after handshake is complete
> [2010-11-03 14:21:13.175981] E
> [client-handshake.c:773:client_query_portmap_cbk] test-client-0:
> failed to get the port number for remote subvolume
> [2010-11-03 14:21:13.176028] E [rdma.c:4370:rdma_event_handler]
> rpc-transport/rdma: test-client-0: pollin received on tcp socket
> (peer: 192.168.40.128:24008) after handshake is complete
> [2010-11-03 14:21:17.179181] E
> [client-handshake.c:773:client_query_portmap_cbk] test-client-0:
> failed to get the port number for remote subvolume
> [2010-11-03 14:21:17.179226] E [rdma.c:4370:rdma_event_handler]
> rpc-transport/rdma: test-client-0: pollin received on tcp socket
> (peer: 192.168.40.128:24008) after handshake is complete
> [2010-11-03 14:21:21.182325] E
> [client-handshake.c:773:client_query_portmap_cbk] test-client-0:
> failed to get the port number for remote subvolume
> [2010-11-03 14:21:21.182375] E [rdma.c:4370:rdma_event_handler]
> rpc-transport/rdma: test-client-0: pollin received on tcp socket
> (peer: 192.168.40.128:24008) after handshake is complete
> [2010-11-03 14:21:25.185567] E
> [client-handshake.c:773:client_query_portmap_cbk] test-client-0:
> failed to get the port number for remote subvolume
> [2010-11-03 14:21:25.185611] E [rdma.c:4370:rdma_event_handler]
> rpc-transport/rdma: test-client-0: pollin received on tcp socket
> (peer: 192.168.40.128:24008) after handshake is complete
> [2010-11-03 14:21:29.188732] E
> [client-handshake.c:773:client_query_portmap_cbk] test-client-0:
> failed to get the port number for remote subvolume
> [2010-11-03 14:21:29.188776] E [rdma.c:4370:rdma_event_handler]
> rpc-transport/rdma: test-client-0: pollin received on tcp socket
> (peer: 192.168.40.128:24008) after handshake is complete
>
> I've tried different versions of the mount command, but receive the
> same results.
>
> Any help would be appreciated.
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>

