Gluster 3.1.4 - Replica out-of-sync


 



Thank you Pranith,

Yes, it appears to be the same bug. Is there a planned release date for v3.1.5
with this bug fixed?

Thank you for your very fast reply and your great work!

Jorge

On Tue, Apr 19, 2011 at 1:18 PM, Pranith Kumar Karampuri <pranithk at gluster.com> wrote:

> Looks like bug 2500: http://bugs.gluster.com/show_bug.cgi?id=2500.
>
> The fix is not available in 3.1.4; it will be available in future versions.
>
> Pranith.
> ----- Original Message -----
> From: "Jorge Pascual" <jordy.pascual at gmail.com>
> To: gluster-users at gluster.org
> Sent: Tuesday, April 19, 2011 4:38:49 PM
> Subject: Gluster 3.1.4 - Replica out-of-sync
>
>
>
> Hi Guys!
>
>
> I'm trying this amazing tool and my conclusion is that it's the best open
> source approach to elastic storage that I've seen.
>
>
> However, I have a problem with the sync of the replica.
>
>
> I've set up a configuration with 2 servers / 2 bricks (one disk each) in
> replica 2, plus one native client.
>
>
> When all servers are up and running, the system works like a charm. I
> shut down one machine (Brick2) and perform 2 operations on the client side:
>
>
> a) Create a new file
> b) Modify an old file (very small... only 7 bytes)
>
>
> The new file is created on the server side (Brick1) and the modified one
> is also correct on the server side (Brick1).
> I start Brick2 and trigger a self-heal. On the client I run:
> find /opt/shared -print0 | xargs --null stat >/dev/null
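>
> (For reference, one way to check whether the replica has flagged the file
> for healing is to look at its AFR changelog extended attributes directly on
> each brick. This is only a sketch: the file path below is hypothetical and
> the exact trusted.afr.* attribute names depend on the volume name.
>
> # run as root on each server, against the brick path, not the client mount
> getfattr -d -m . -e hex /opt/storage1/path/to/modified-file
>
> Non-zero trusted.afr.myvol-client-* values on the up-to-date brick indicate
> pending operations that self-heal should replay onto the other brick.)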
>
>
> I look at what has happened on the server side (Brick2) and see that:
>
>
> a) The new file has been created on Brick2 <-- that's right!
> b) The old file still has the old content. It hasn't been synced <---
> that's bad!
>
>
> So, the modified file is OK on the client and on Brick1, but it's out of
> sync on Brick2.
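>
> (A quick way to confirm the divergence, sketched here with a hypothetical
> file name, is to checksum the file on each brick and then stat it through
> the client mount, which should trigger self-heal for that single file:
>
> # on 192.168.1.136 and on 192.168.1.134, against the brick path
> md5sum /opt/storage1/path/to/modified-file
> # on the client, to trigger a heal check for just this file
> stat /opt/shared/path/to/modified-file
>
> If the checksums still differ afterwards, the file really is not being
> healed.)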
>
>
> Does anyone know what could be happening?
>
>
> The server set-up is:
>
>
>
> gluster> volume create myvol replica 2 transport tcp 192.168.1.136:/opt/storage1 192.168.1.134:/opt/storage1
> gluster> volume set myvol auth.allow 192.168.1.*
> gluster> volume start myvol
> gluster> volume info myvol
>
> Volume Name: myvol
> Type: Replicate
> Status: Started
> Number of Bricks: 2
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.1.136:/opt/storage1
> Brick2: 192.168.1.134:/opt/storage1
> Options Reconfigured:
> auth.allow: 192.168.1.*
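>
> (This assumes the two servers were already joined into a trusted pool
> before the volume create, e.g.:
>
> gluster> peer probe 192.168.1.134
> gluster> peer status
> )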
>
>
> The client set-up is:
> mount -t glusterfs 192.168.1.136:/myvol /opt/shared
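>
> (To make the mount persistent across reboots, a minimal /etc/fstab sketch
> using the same server and mount point would be:
>
> 192.168.1.136:/myvol  /opt/shared  glusterfs  defaults,_netdev  0 0
> )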
>
>
> Regards!!
>
>
> Jorge
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>


