add brick fails

Hi, Mohit and Pranith,

Thanks for your reply; that was very helpful.
Restarting glusterd on all the machines with "/etc/init.d/glusterd stop &&
/etc/init.d/glusterd start" worked well!
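For reference, the restart can be applied to every node in one pass. A minimal sketch, assuming passwordless SSH as root and the hostnames used in this thread (natty2-natty4):

```shell
# Restart glusterd on every peer in the cluster.
# Hostnames are assumptions taken from this thread; adjust to your setup.
for host in natty2 natty3 natty4; do
    echo "== restarting glusterd on $host =="
    ssh root@"$host" '/etc/init.d/glusterd stop && /etc/init.d/glusterd start'
done
```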

Again, thank you very much!

mkey


(2011/05/17 11:47), Pranith Kumar. Karampuri wrote:
> Seems like you have run into the glusterd lock problem, most probably because you ran a script with both peer probes and volume operations.
> Can you check whether the volumes/bricks you created exist on all the peers? If yes, just restart glusterd on all the machines and you should be fine.
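The check suggested above can be scripted. A minimal sketch, assuming passwordless SSH as root and the hostnames from this thread; every peer should report the same volume and brick list:

```shell
# Compare each peer's view of the configured volumes and bricks.
# Hostnames natty2-natty4 are assumptions taken from this thread.
for host in natty2 natty3 natty4; do
    echo "== $host =="
    ssh root@"$host" gluster volume info
done
```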
>
> Pranith.
> ----- Original Message -----
> From: "Mohit Anchlia"<mohitanchlia at gmail.com>
> To: "mkey"<mkey at inter7.jp>
> Cc: gluster-users at gluster.org
> Sent: Tuesday, May 17, 2011 12:28:45 AM
> Subject: Re: add brick fails
>
> Not sure if this is related but do you know why you are seeing "
> (127.0.0.1:1020)" ? Can you look at gluster peer status on all the
> hosts and see if they can see each other?
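The cross-host peer check can be done in one loop. A minimal sketch, assuming passwordless SSH as root and the hostnames from this thread; each host should list the other two as connected:

```shell
# Print each peer's view of the cluster membership.
# Hostnames natty2-natty4 are assumptions taken from this thread.
for host in natty2 natty3 natty4; do
    echo "== $host =="
    ssh root@"$host" gluster peer status
done
```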
>
> On Mon, May 16, 2011 at 11:17 AM, mkey<mkey at inter7.jp>  wrote:
>> Hi,
>> I am trying out glusterfs_3.2.0 on Ubuntu natty (11.04).
>> I have 3 servers, and 2 servers are already added to peer by following
>> command.
>>> root at natty3:~# gluster peer probe natty3
>>> root at natty3:~# gluster peer probe natty4
>> also, I created volume.
>>> root at natty3:~# gluster volume create test-volume transport tcp
>> natty4:/opt/gluster/distributed natty3:/opt/gluster/distributed
>>> root at natty3:~# gluster volume info
>>> Volume Name: test-volume
>>> Type: Distribute
>>> Status: Created
>>> Number of Bricks: 2
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: natty4:/opt/gluster/distributed
>>> Brick2: natty3:/opt/gluster/distributed
>>
>> But when I tried to add a brick, it failed.
>>> root at natty3:~# gluster peer probe natty2
>>> root at natty3:~# gluster volume add-brick test-volume
>> natty2:/opt/gluster/distributed
>>> Another operation is in progress, please retry after some time
>> /var/log/glusterfs/etc-glusterfs-glusterd.vol.log said
>>> [2011-05-13 09:42:29.77565] E
>> [glusterd-handler.c:1288:glusterd_handle_add_brick] 0-: Unable to set
>> cli op: 16
>>> [2011-05-13 09:42:29.82016] W
>> [socket.c:1494:__socket_proto_state_machine] 0-socket.management:
>> reading from socket failed. Error (Transport endpoint is not connected),
>> peer (127.0.0.1:1020)
>>
>> /var/log/glusterfs/cli.log said
>>> [2011-05-13 09:43:28.135429] W
>> [rpc-transport.c:604:rpc_transport_load] 0-rpc-transport: missing
>> 'option transport-type'. defaulting to "socket"
>>> [2011-05-13 09:43:28.221217] I
>> [cli-rpc-ops.c:1010:gf_cli3_1_add_brick_cbk] 0-cli: Received resp to add
>> brick
>>> [2011-05-13 09:43:28.221348] I [input.c:46:cli_batch] 0-: Exiting with: -1
>> Mounting the volume is OK: if I mount test-volume from natty3 or natty4
>> and create some files, I can see them from both hosts. In addition,
>> probing natty2 (the new server) is OK because "gluster peer status"
>> lists natty2.
>>
>> I have no idea how to get past this.
>> Any help is appreciated.
>>
>> mkey
>>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>
