Existing Data and self mounts ?

Hi David,

Thanks for clearing that up. With regards to the "self-heal" command:
find /mnt/gfstest -noleaf -print0 | xargs --null stat >/dev/null
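As I understand it, this simply walks the mount and stats every file, and it is the lookup of each file that triggers the heal check. The pipeline itself is plain find + stat; here's a minimal local sketch (run against an ordinary temp directory, not a Gluster mount):

```shell
# Sketch of what the self-heal trigger does mechanically: stat every
# file under a directory tree. On a GlusterFS mount, each lookup is
# what prompts the replicate translator to check and repair the file.
dir=$(mktemp -d)
touch "$dir/a.txt" "$dir/b.txt"

# -print0 / --null keep filenames with spaces or newlines intact;
# the stat output is discarded -- only the lookup side effect matters.
find "$dir" -noleaf -print0 | xargs --null stat >/dev/null

# Count what was walked, then clean up.
count=$(find "$dir" -type f | wc -l)
echo "walked $count files"
rm -rf "$dir"
```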


a) Do I run this on server1, server2, or a client, or does it not matter?
(Assume server1 is the one with the latest copy of the data.)

b) Would the self-heal I've been reading about in 3.3 not take care of this
over time?

c) server1:/data has the latest copy of the data compared to
server2:/data, so if there is ever any question about which data to
replicate/replace/use, Gluster MUST use the data from server1 (until the
bricks are fully in sync again). Does it matter in what order I list the
servers at the create stage, keeping in mind that I want to force all data
to come from server1:/data?

gluster volume create testvol replica 2 server1:/data server2:/data
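For completeness, the full sequence I have in mind (just a sketch, using the hostnames and paths from this thread; I haven't verified the heal behaviour on 3.3 myself yet):

```shell
# Sketch only -- hostnames and paths are the examples from this thread.
# server1 is listed first, since it holds the authoritative data.
gluster volume create testvol replica 2 server1:/data server2:/data
gluster volume start testvol

# Mount the volume (on a client, or on the servers themselves if needed):
mount -t glusterfs server1:/testvol /mnt/gfstest

# Walk the mount to trigger a self-heal check on every file:
find /mnt/gfstest -noleaf -print0 | xargs --null stat >/dev/null
```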


Best Regards
Jacques


On Mon, Jun 4, 2012 at 12:21 PM, David Coulson <david at davidcoulson.net> wrote:

>
>
> On 6/4/12 4:05 AM, Jacques du Rand wrote:
>
> Hi guys,
> This all applies to Gluster 3.3.
>
> I love Gluster, but I'm having some difficulties understanding a few things.
>
>  1. Replication (with existing data):
> Two servers in a simple single-brick replicated setup, i.e. one volume (testvol):
> - server1:/data && server2:/data
> - server1 has a few million files in its /data dir
> - server2 has no files in its /data dir
>
>  So after I created and started testvol:
> QUESTION (1): Do I need to mount the volume on each server like so? If
> yes, why?
> ---> on server1: mount -t glusterfs 127.0.0.1:/testvol /mnt/gfstest
> ---> on server2: mount -t glusterfs 127.0.0.1:/testvol /mnt/gfstest
>
> Only if you want to access the files within the volume on the two servers
> which have the bricks on them.
>
>
>  CLIENT:
> Then I mount on the client:
> mount -t glusterfs server-1-ip:/testvol /mnt/gfstest
>
>  Question (2):
> Why do I only see files from server2?
>
> Probably hit and miss what you see, since your bricks are not consistent.
>
>
>  Question (3):
> Whenever I'm writing/updating/working with the files on the SERVER, I
> should ALWAYS do it via the (local) mount /mnt/gfstest, and never work
> with files directly in the brick's /data directory?
>
> Correct - Gluster can't keep track of writes if you don't do it through
> the glusterfs mount point.
>
>
>  Question (4.1):
> What's the best practice for syncing existing data?
>
> You will need to force a manual self-heal and see if that copies all the
> data over to the other brick.
>
> find /mnt/gfstest -noleaf -print0 | xargs --null stat >/dev/null
>
>
>
>  Question (4.2):
> Is it safe to create a brick in a directory that already has files in it?
>
>
> As long as you force a self-heal on it before you use it.
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>

