Trouble adding a brick

Look in the Gluster logs (/var/log/gluster*) and see what they say.
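For example (a rough sketch - exact log file names depend on the GlusterFS
version, but glusterd's own log usually records why a probe or rebalance
failed):

# tail -n 100 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
# grep -iE 'error|rejected|failed' /var/log/glusterfs/*.log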

If your intention is to migrate data onto the new bricks, you can alternatively do:

gluster volume rebalance VOLNAME start
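
You can then watch its progress, or stop it, with (assuming your gluster CLI
version has these subcommands):

gluster volume rebalance VOLNAME status
gluster volume rebalance VOLNAME stop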


On Tue, May 24, 2011 at 2:59 AM, Karim Latouche
<klatouche at digitalchocolate.com> wrote:
> Hi
>
> Thx for your help.
>
> I finally managed to add the last brick.
>
> I now have a Distributed-Replicate volume:
>
>> gluster volume info
>>
>> Volume Name: dccbackup
>> Type: Distributed-Replicate
>> Status: Started
>> Number of Bricks: 2 x 2 = 4
>> Transport-type: tcp
>> Bricks:
>> Brick1: mysqldccbackup:/export
>> Brick2: mysqldccbackupmirror:/export
>> Brick3: gfsbrick1:/brick1
>> Brick4: gfsbrick2:/brick2
>
> On the client side though, I don't see any increase in storage size.
> I tried to rebalance
>
> gluster volume rebalance dccbackup fix-layout start
>
> and got:
>
>> starting rebalance on volume dccbackup has been unsuccessful
>
> I'm kind of stuck here.
>
> Thank you
>
> KL
>
>
>> What is the output of:
>>
>> # ls /etc/glusterd/peers
>> 39bd607b-b355-4e0d-80ae-ef507e0dbabe  4583d8b7-f724-4017-b549-9aa9e27dd50f
>> a70858c6-cac3-43fa-b743-4d2226f8a433
>>
>> Above output is mine.
>>
>> If you cat each of these files (there should be one for each peer OTHER
>> than the host that you are logged into), you will see a uuid, state, and
>> hostname in each file, like this:
>> # cat /etc/glusterd/peers/*
>> uuid=39bd607b-b355-4e0d-80ae-ef507e0dbabe
>> state=3
>> hostname1=jc1letgfs15-pfs1
>> uuid=4583d8b7-f724-4017-b549-9aa9e27dd50f
>> state=3
>> hostname1=jc1letgfs18-pfs1
>> uuid=a70858c6-cac3-43fa-b743-4d2226f8a433
>> state=3
>> hostname1=jc1letgfs14-pfs1
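>>
>> A quicker variant, relying on grep's standard behavior of prefixing each
>> matched line with its filename when given multiple files:
>>
>> # grep . /etc/glusterd/peers/*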
>>
>> Here is a listing of all the peer files on all four nodes of my GlusterFS
>> servers - I created this configuration the same way you did: two servers
>> first, then added two more:
>>
>> jc1letgfs14
>> /etc/glusterd/peers/39bd607b-b355-4e0d-80ae-ef507e0dbabe
>> uuid=39bd607b-b355-4e0d-80ae-ef507e0dbabe
>> state=3
>> hostname1=jc1letgfs15-pfs1
>> /etc/glusterd/peers/4583d8b7-f724-4017-b549-9aa9e27dd50f
>> uuid=4583d8b7-f724-4017-b549-9aa9e27dd50f
>> state=3
>> hostname1=jc1letgfs18-pfs1
>> /etc/glusterd/peers/94f28f8c-8958-435b-80e4-3bd6d4fc5156
>> uuid=94f28f8c-8958-435b-80e4-3bd6d4fc5156
>> state=3
>> hostname1=10.20.72.191
>>
>> jc1letgfs15
>> /etc/glusterd/peers/4583d8b7-f724-4017-b549-9aa9e27dd50f
>> uuid=4583d8b7-f724-4017-b549-9aa9e27dd50f
>> state=3
>> hostname1=jc1letgfs18-pfs1
>> /etc/glusterd/peers/94f28f8c-8958-435b-80e4-3bd6d4fc5156
>> uuid=94f28f8c-8958-435b-80e4-3bd6d4fc5156
>> state=3
>> hostname1=10.20.72.191
>> /etc/glusterd/peers/a70858c6-cac3-43fa-b743-4d2226f8a433
>> uuid=a70858c6-cac3-43fa-b743-4d2226f8a433
>> state=3
>> hostname1=jc1letgfs14-pfs1
>>
>> jc1letgfs17
>> /etc/glusterd/peers/39bd607b-b355-4e0d-80ae-ef507e0dbabe
>> uuid=39bd607b-b355-4e0d-80ae-ef507e0dbabe
>> state=3
>> hostname1=jc1letgfs15-pfs1
>> /etc/glusterd/peers/4583d8b7-f724-4017-b549-9aa9e27dd50f
>> uuid=4583d8b7-f724-4017-b549-9aa9e27dd50f
>> state=3
>> hostname1=jc1letgfs18-pfs1
>> /etc/glusterd/peers/a70858c6-cac3-43fa-b743-4d2226f8a433
>> uuid=a70858c6-cac3-43fa-b743-4d2226f8a433
>> state=3
>> hostname1=jc1letgfs14-pfs1
>>
>> jc1letgfs18
>> /etc/glusterd/peers/39bd607b-b355-4e0d-80ae-ef507e0dbabe
>> uuid=39bd607b-b355-4e0d-80ae-ef507e0dbabe
>> state=3
>> hostname1=jc1letgfs15-pfs1
>> /etc/glusterd/peers/94f28f8c-8958-435b-80e4-3bd6d4fc5156
>> uuid=94f28f8c-8958-435b-80e4-3bd6d4fc5156
>> state=3
>> hostname1=10.20.72.191
>> /etc/glusterd/peers/a70858c6-cac3-43fa-b743-4d2226f8a433
>> uuid=a70858c6-cac3-43fa-b743-4d2226f8a433
>> state=3
>> hostname1=jc1letgfs14-pfs1
>>
>> I hope this is of some help.
>>
>> James Burnash
>> Unix Engineer
>>
>>
>> -----Original Message-----
>> From: gluster-users-bounces at gluster.org
>> [mailto:gluster-users-bounces at gluster.org] On Behalf Of Karim Latouche
>> Sent: Monday, May 23, 2011 1:02 PM
>> To: Mohit Anchlia
>> Cc: gluster-users at gluster.org
>> Subject: Re: Trouble adding a brick
>>
>> hi
>>
>> Here is the output of the gluster peer status command on the four nodes.
>>
>> mysqldccbackup and mysqldccbackupmirror are the first two nodes.
>>
>> I got issues when I tried to add gfsbrick1 and gfsbrick2 to the already
>> working cluster.
>>
>>
>>
>>> [root at gfsbrick1 ~]# gluster peer status
>>> Number of Peers: 1
>>>
>>> Hostname: 10.0.4.225
>>> Uuid: 00000000-0000-0000-0000-000000000000
>>> State: Establishing Connection (Connected)
>>
>>> [root at gfsbrick2 ~]# gluster peer status
>>> Number of Peers: 2
>>>
>>> Hostname: 10.0.4.225
>>> Uuid: 2f4a1869-8c7e-49ea-a7b1-d9b2d364e5d6
>>> State: Peer in Cluster (Connected)
>>>
>>> Hostname: mysqldccbackupmirror
>>> Uuid: 0961e0d2-e2e8-497d-9c51-4b96c061f939
>>> State: Peer Rejected (Connected)
>>> [root at mysqldccbackup ~]# gluster peer status
>>> Number of Peers: 3
>>>
>>> Hostname: mysqldccbackupmirror
>>> Uuid: 0961e0d2-e2e8-497d-9c51-4b96c061f939
>>> State: Peer in Cluster (Connected)
>>>
>>> Hostname: gfsbrick2
>>> Uuid: 14f07412-b2e5-4b34-8980-d180a64b720f
>>> State: Peer in Cluster (Connected)
>>>
>>> Hostname: gfsbrick1
>>> Uuid: 00000000-0000-0000-0000-000000000000
>>> State: Establishing Connection (Connected)
>>
>>
>>> [root at mysqldccbackupmirror ~]# gluster peer status
>>> Number of Peers: 1
>>>
>>> Hostname: 10.0.4.225
>>> Uuid: 2f4a1869-8c7e-49ea-a7b1-d9b2d364e5d6
>>> State: Peer in Cluster (Connected)
>>
>> Thx a lot
>>
>> KL
>>
>>> Please send the complete output of gluster peer status from both
>>> nodes.
>>>
>>> On Mon, May 23, 2011 at 9:44 AM, Karim Latouche
>>> <klatouche at digitalchocolate.com> wrote:
>>>>
>>>> hi
>>>>
>>>> Thx for your answer.
>>>>
>>>> I should have mentioned that I already set up a proper hosts file to
>>>> avoid DNS resolution issues, but I still have the same issue.
>>>>
>>>> The last brick is always the one that has trouble being added. I tried
>>>> adding gfsbrick2 first, then gfsbrick1, and the result is the same.
>>>>
>>>> I get
>>>>
>>>>> Hostname: gfsbrick2
>>>>> Uuid: 14f07412-b2e5-4b34-8980-d180a64b720f
>>>>> State: Peer in Cluster (Connected)
>>>>>
>>>>> Hostname: gfsbrick1
>>>>> Uuid: 00000000-0000-0000-0000-000000000000
>>>>> State: Establishing Connection (Connected)
>>>>
>>>> instead of
>>>>
>>>>
>>>>> Hostname: gfsbrick2
>>>>> Uuid: 00000000-0000-0000-0000-000000000000
>>>>> State: Establishing Connection (Connected)
>>>>
>>>> It's really strange.
>>>>
>>>>
>>>> Thx
>>>>
>>>> KL
>>>>
>>>>> Karim,
>>>>> gfsbrick1 is not able to contact gfsbrick2 yet. This generally happens
>>>>> if DNS resolution doesn't work as expected. You can map the hosts
>>>>> manually in /etc/hosts on both machines.
>>>>>
>>>>> something like:
>>>>> on gfsbrick1
>>>>> <ip-of-brick2>     gfsbrick2
>>>>> on gfsbrick2
>>>>> <ip-of-brick1>     gfsbrick1
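>>>>>
>>>>> After editing, you can verify the mapping from each box with standard
>>>>> Linux tools (nothing Gluster-specific), e.g.:
>>>>>
>>>>> getent hosts gfsbrick2
>>>>> ping -c 1 gfsbrick2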
>>>>>
>>>>> Pranith.
>>>>>
>>>>> ----- Original Message -----
>>>>> From: "Karim Latouche" <klatouche at digitalchocolate.com>
>>>>> To: gluster-users at gluster.org
>>>>> Sent: Monday, May 23, 2011 7:56:22 PM
>>>>> Subject: Trouble adding a brick
>>>>>
>>>>> Hi
>>>>>
>>>>> I've got a strange problem.
>>>>>
>>>>> I run a two-brick replicated volume and tried to add 2 more bricks.
>>>>>
>>>>> The first one went OK:
>>>>>
>>>>>> gluster peer probe gfsbrick1
>>>>>> Probe successful
>>>>>
>>>>> It seems I can't add the second one.
>>>>>
>>>>> I get:
>>>>>
>>>>>> Hostname: gfsbrick2
>>>>>> Uuid: 00000000-0000-0000-0000-000000000000
>>>>>> State: Establishing Connection (Connected)
>>>>>
>>>>> Did anyone encounter this issue?
>>>>>
>>>>> Thx
>>>>>
>>>>> KL
>>>>>
>>>>>
>>>>
>>>>
>
>

