Why is volume creation unsuccessful?

On Monday 20 December 2010 08:23:58 Maurice R Volaski wrote:
> I'm trying the following command on OpenIndiana where the mpools are zfs
> pools and glusterfs is 3.1.1.
> 
> gluster volume create glustervol transport tcp 192.168.1.54:/mpool
> 192.168.1.55:/mpool2

Please try

gluster volume create glustervol replica 2 transport tcp 192.168.1.54:/mpool 
192.168.1.55:/mpool2

The paths /mpool and /mpool2 must exist on their respective hosts.
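
If you are not sure they do, a quick check on each node (a sketch assuming
your zpools are simply named mpool and mpool2 and mount at their default
mountpoints):

  on 192.168.1.54:  zfs list -o name,mountpoint mpool; ls -d /mpool
  on 192.168.1.55:  zfs list -o name,mountpoint mpool2; ls -d /mpool2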

I recommend using the interactive gluster CLI; it has built-in help and
partial command completion.
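
For example (volume start/info shown as the usual follow-up steps):

  # gluster
  gluster> help
  gluster> volume create glustervol replica 2 transport tcp 192.168.1.54:/mpool 192.168.1.55:/mpool2
  gluster> volume start glustervol
  gluster> volume info glustervol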

> 
> The first time I run it, it runs for a while, produces no output, and no
> volumes get created. When I run it again, it reports Creation of volume
> glustervol has been unsuccessful
> 
> The debug log from the first run follows:
> 
>  [socket.c:1809:socket_server_event_handler] socket.management: Failed to
> set keep-alive: Option not supported by protocol
> 
>  [glusterd-op-sm.c:5385:glusterd_op_set_cli_op] : Returning 0
> 
>  [glusterd-handler.c:785:glusterd_handle_create_volume] glusterd: Received
> create volume req
> 
>  [glusterd-utils.c:484:glusterd_check_volume_exists] : Volume glustervol
> does not exist.stat failed with errno : 2 on path:
> /etc/glusterd/vols/glustervol
> 
>  [glusterd-utils.c:610:glusterd_brickinfo_new] : Returning 0
> 
>  [glusterd-utils.c:667:glusterd_brickinfo_from_brick] : Returning 0
> 
>  [glusterd-utils.c:2101:glusterd_friend_find_by_hostname] glusterd: Friend
> 192.168.1.54 found.. state: 3
> 
>  [glusterd-utils.c:2182:glusterd_hostname_to_uuid] : returning 0
> 
>  [glusterd-utils.c:622:glusterd_resolve_brick] : Returning 0
> 
>  [glusterd-utils.c:2344:glusterd_new_brick_validate] : returning 0
> 
>  [glusterd-utils.c:708:glusterd_volume_brickinfo_get] : Returning -1
> 
>  [glusterd-utils.c:610:glusterd_brickinfo_new] : Returning 0
> 
>  [glusterd-utils.c:667:glusterd_brickinfo_from_brick] : Returning 0
> 
>  [glusterd-utils.c:2101:glusterd_friend_find_by_hostname] glusterd: Friend
> 192.168.1.55 found.. state: 3
> 
>  [glusterd-utils.c:2182:glusterd_hostname_to_uuid] : returning 0
> 
>  [glusterd-utils.c:622:glusterd_resolve_brick] : Returning 0
> 
>  [glusterd-utils.c:2062:glusterd_friend_find_by_uuid] glusterd: Friend
> found.. state: Peer in Cluster
> 
>  [glusterd-utils.c:2344:glusterd_new_brick_validate] : returning 0
> 
>  [glusterd-utils.c:708:glusterd_volume_brickinfo_get] : Returning -1
> 
>  [glusterd-utils.c:232:glusterd_lock] glusterd: Cluster lock held by
> 1a5716a5-7a15-4dcd-85dd-6d30c6024d0f
> 
>  [glusterd-handler.c:2835:glusterd_op_txn_begin] glusterd: Acquired local
> lock
> 
>  [glusterd-op-sm.c:5242:glusterd_op_sm_inject_event] glusterd: Enqueuing
> event: 'GD_OP_EVENT_START_LOCK'
> 
>  [glusterd-handler.c:2839:glusterd_op_txn_begin] glusterd: Returning 0
> 
>  [glusterd-utils.c:560:glusterd_volume_bricks_delete] : Returning 0
> 
>  [glusterd-op-sm.c:5290:glusterd_op_sm] : Dequeued event of type:
> 'GD_OP_EVENT_START_LOCK'
> 
>  [glusterd3_1-mops.c:1091:glusterd3_1_cluster_lock] glusterd: Sent lock req
> to 2 peers
> 
>  [glusterd3_1-mops.c:1094:glusterd3_1_cluster_lock] glusterd: Returning 0
> 
>  [glusterd-op-sm.c:4068:glusterd_op_ac_send_lock] : Returning with 0
> 
>  [glusterd-utils.c:2599:glusterd_sm_tr_log_transition_add] glusterd:
> Transitioning from 'Default' to 'Lock sent' due to event
> 'GD_OP_EVENT_START_LOCK'
> 
>  [glusterd-utils.c:2601:glusterd_sm_tr_log_transition_add] : returning 0
> 
>  [glusterd-handler.c:426:glusterd_handle_cluster_lock] glusterd: Received
> LOCK from uuid: 1a5716a5-7a15-4dcd-85dd-6d30c6024d0f
> 
>  [glusterd-op-sm.c:5242:glusterd_op_sm_inject_event] glusterd: Enqueuing
> event: 'GD_OP_EVENT_LOCK'
> 
>  [glusterd-handler.c:442:glusterd_handle_cluster_lock] : Returning 0
> 
>  [glusterd-op-sm.c:5290:glusterd_op_sm] : Dequeued event of type:
> 'GD_OP_EVENT_LOCK'
> 
>  [glusterd-op-sm.c:4041:glusterd_op_ac_none] : Returning with 0
> 
>  [glusterd-utils.c:2599:glusterd_sm_tr_log_transition_add] glusterd:
> Transitioning from 'Lock sent' to 'Lock sent' due to event
> 'GD_OP_EVENT_LOCK'
> 
>  [glusterd-utils.c:2601:glusterd_sm_tr_log_transition_add] : returning 0
> 
> 
> On the second run, the log says
> 
> 
> [socket.c:1809:socket_server_event_handler] socket.management: Failed to
> set keep-alive: Option not supported by protocol
> 
> [glusterd-op-sm.c:5385:glusterd_op_set_cli_op] : Returning 16
> 
> [glusterd3_1-mops.c:1357:glusterd_handle_rpc_msg] : Unable to set cli op:
> 16
> 
> [glusterd-op-sm.c:4702:glusterd_op_send_cli_response] : Returning 0
> 
> --
> Maurice Volaski, maurice.volaski at einstein.yu.edu
> Computing Support, Dominick P. Purpura Department of Neuroscience
> Albert Einstein College of Medicine of Yeshiva University
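
The second-run log also tells you why the CLI reports "unsuccessful":
glusterd_op_set_cli_op returns 16, which is errno EBUSY. The first run
acquired the cluster lock ("Cluster lock held by 1a5716a5-...") and the log
ends before that lock is ever released, so the second attempt is rejected
because glusterd still thinks an operation is in progress. The keep-alive
message at the top of both logs looks like a harmless Solaris socket-option
warning and is probably unrelated.

If that reading is right, restarting glusterd on all peers should clear the
stale lock before you retry. A sketch, assuming glusterd is started by hand
rather than via SMF on your OpenIndiana boxes (adjust to however you manage
it):

  on each node:  pkill glusterd; glusterd
  then retry:    gluster volume create glustervol replica 2 transport tcp \
                     192.168.1.54:/mpool 192.168.1.55:/mpool2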

