(3.1.5-1) "Another operation is in progress, please retry after some time"

  Hi Tomoaki,

Issuing peer-related commands such as 'peer probe' and 'peer detach' 
concurrently with volume operations
can leave the cluster in an undefined state. We are working 
on making the glusterd cluster handle concurrent commands robustly. See 
http://bugs.gluster.com/show_bug.cgi?id=3320 for updates on this issue.
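
In the meantime, the workaround you found is the right idea: running every 
peer and volume command from a single node serializes them, so no two 
glusterd operations overlap. A minimal sketch of that approach, assuming 
the names from your setup (baz-1-private as the coordinating node, 
/mnt/brick as the brick directory, glusterd already running on every node); 
the 5-second pause is only a guess to let the peer handshake settle:

  # run on baz-1-private, after 'volume create' and 'volume start'
  for n in 2 3 4 5; do
      gluster peer probe "baz-${n}-private"
      sleep 5   # allow the probe/handshake to settle before add-brick
      gluster volume add-brick baz "baz-${n}-private:/mnt/brick"
  done
  gluster peer status
  gluster volume info baz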

thanks,
kp

On 08/22/2011 10:39 AM, Tomoaki Sato wrote:
> Hi kp,
>
> I've reproduced the issue in my environment.
> please find the attached taz.
>
> there are 5 VMs, baz-1-private through baz-5-private.
> on each VM, the following commands are issued concurrently.
>
> on baz-1-private:
> # gluster volume create baz baz-1-private:/mnt/brick
> # gluster volume start baz
> # <register baz-1-private to DNS>
>
> on baz-2-private through baz-5-private:
> # <wait baz-1-private appears on the DNS>
> # ssh baz-1-private gluster peer probe <me>
> # gluster volume add-brick baz <me>:/mnt/brick
> # <register me to the DNS>
>
> <me> = baz-n-private (n: 2,3,4,5)
>
> I've noticed that the following command sequence is stable.
>
> on baz-2-private through baz-5-private:
> # <wait baz-1-private appears on the DNS>
> # ssh baz-1-private gluster peer probe <me>
> # ssh baz-1-private gluster volume add-brick baz <me>:/mnt/brick
> # <register me to the DNS>
>
> thanks,
>
> tomo
>
>
> (2011/08/20 14:37), krish wrote:
>> Hi Tomoaki,
>>
>> Can you attach the glusterd log files of the peers seeing the problem?
>> Restarting the glusterd daemons should solve the problem. Let me look at the
>> log files and I will let you know if anything else can be done to
>> resolve it.
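>>
>> In case it helps while you collect the logs, a rough recovery sketch
>> (assuming glusterd was installed with the stock init script; adjust the
>> restart command to whatever your distribution uses):
>>
>> # /etc/init.d/glusterd restart                 # on each affected peer
>> # gluster peer status                          # peers should show 'Connected'
>> # gluster volume info baz                      # check the volume and its bricks
>> # gluster volume add-brick baz baz-X-private:/mnt/brick   # then retry the add-brick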
>>
>> thanks,
>> kp
>>
>>
>> On 08/18/2011 07:37 AM, Tomoaki Sato wrote:
>>> Hi,
>>>
>>> baz-X-private and baz-Y-private, two newly probed peers, each issued 
>>> 'gluster volume add-brick baz baz-{X|Y}-private:/mnt/brick' within a 
>>> very short period.
>>> Both add-brick commands returned without the "Add Brick successful" 
>>> message.
>>> Since then, 'add-brick' has returned "Another operation is in 
>>> progress, please retry after some time" on both peers every time.
>>> How should I clear this situation?
>>>
>>> Best,
>>>
>>> tomo
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>
>


