Question about Replicate (GFS 2.0)

Hello:
    I have a question about Replicate. I am using two servers and one client; the configuration files are as follows.

GFS servers 1 and 2
glusterfsd.vol
=======================================================
volume posix1
  type storage/posix                    # POSIX FS translator
  option directory /data1        # Export this directory
end-volume

volume posix2
  type storage/posix                    # POSIX FS translator
  option directory /data2        # Export this directory
end-volume
### Add POSIX record locking support to the storage brick
volume brick1
  type features/posix-locks
  #option mandatory-locks on          # enables mandatory locking on all files
  subvolumes posix1
end-volume

volume brick2
  type features/posix-locks
  #option mandatory-locks on          # enables mandatory locking on all files
  subvolumes posix2
end-volume

volume ns
  type storage/posix                    # POSIX FS translator
  option directory /export    # Export this directory
end-volume

volume name
  type features/posix-locks
  #option mandatory-locks on          # enables mandatory locking on all files
  subvolumes ns
end-volume

### Add network serving capability to above brick.
volume server
  type protocol/server
  option transport-type tcp                 # For TCP/IP transport
  subvolumes    brick1 brick2 name
  option auth.addr.brick1.allow *               # access to "brick1" volume
  option auth.addr.brick2.allow *               # access to "brick2" volume
  option auth.addr.name.allow *                 # access to "name" volume
end-volume
=================================================================
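On each server this export file is loaded by glusterfsd. A minimal invocation, assuming the file is saved as /etc/glusterfs/glusterfsd.vol:

              glusterfsd -f /etc/glusterfs/glusterfsd.vol
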
GFS client
glusterfs.vol
volume client1
  type protocol/client
  option transport-type tcp     # for TCP/IP transport
  option remote-host 172.20.92.249      # IP address of the remote brick
  option remote-subvolume brick1        # name of the remote volume
end-volume
### Attach to the second exported brick of the same server (172.20.92.249)
volume client2
  type protocol/client
  option transport-type tcp     # for TCP/IP transport
  option remote-host 172.20.92.249      # IP address of the remote brick
  option remote-subvolume brick2        # name of the remote volume
end-volume

volume client3
  type protocol/client
  option transport-type tcp     # for TCP/IP transport
  option remote-host 172.20.92.250      # IP address of the remote brick
  option remote-subvolume brick1        # name of the remote volume
end-volume

volume client4
  type protocol/client
  option transport-type tcp     # for TCP/IP transport
  option remote-host 172.20.92.250     # IP address of the remote brick
  option remote-subvolume brick2        # name of the remote volume
end-volume

volume ns1
  type protocol/client
  option transport-type tcp     # for TCP/IP transport
  option remote-host 172.20.92.249      # IP address of the remote brick
  option remote-subvolume name        # name of the remote volume
end-volume

volume ns2
  type protocol/client
  option transport-type tcp     # for TCP/IP transport
  option remote-host 172.20.92.250      # IP address of the remote brick
  option remote-subvolume name        # name of the remote volume
end-volume

## Add replicate feature.
volume rep1
  type cluster/replicate
  subvolumes client1 client3
end-volume

volume rep2
  type cluster/replicate
  subvolumes client2 client4 
end-volume

volume rep-ns
  type cluster/replicate
  subvolumes ns1 ns2            # replicate the namespace across both servers
end-volume

volume bricks
  type cluster/unify
  option namespace rep-ns # this will not be storage child of unify.
  subvolumes rep1 rep2
  option self-heal background   # background|foreground|off; default is foreground
  option scheduler rr
end-volume
========================================================================
              glusterfs -f /etc/glusterfs/glusterfs.vol /data
After mounting, I touched four files (11, 22, 33, 44) in /data; because of Replicate, all four files exist on both 92.249 and 92.250.
On the GFS client I ran echo "aaaaaaaaaaaaaaa" > 11. Then, on 92.249, I ran rm -rf /data1/11, as if the file had been lost. After that the client could not read 11 correctly: after an "ls -lh" the file reappeared on 92.249, but its contents were not "aaaaaaaaaaaaaaa"; they were garbage like "@@@@@@@@@@@". If I copy 11 from 92.250 to 92.249 by hand, the client then reads the correct contents "aaaaaaaaaaaaaaa". Is my configuration wrong? Why is the file not restored accurately?
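
For clarity, here are the exact steps as shell commands (a minimal transcript of what I described above; the scp line is just one way to do the manual copy, assuming root SSH access between the servers):

========================================================================
# On the GFS client: create the test files on the mounted volume
cd /data
touch 11 22 33 44
echo "aaaaaaaaaaaaaaa" > 11

# On server 1 (172.20.92.249): remove the file directly from the
# backend export, simulating data loss
rm -rf /data1/11

# Back on the client: list and read the file; the lookup makes the
# file reappear on 92.249, but its contents come back as garbage
ls -lh /data/11
cat /data/11          # expected "aaaaaaaaaaaaaaa", got "@@@@@@@@@@@"

# Manual workaround that does work: copy the good replica back
# (run on server 1)
scp root@172.20.92.250:/data1/11 /data1/11
========================================================================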


    
2009-03-04

eagleeyes