Hello:

I have hit this problem twice when copying files into the GlusterFS space. I have five clients and two servers. When I copy files into /data (the GlusterFS mount) on client A, the problem appears: under the same path, client A can see all of the files, but clients B, C and D cannot, as if some files were missing. When I mount again, the files appear. Why? Has anybody else run into this problem?

My config file is this:

volume client1
  type protocol/client
  option transport-type tcp
  option remote-host 10.4.11.134    # IP address of the remote brick
  option remote-port 6996
  option transport-timeout 10       # seconds to wait for a reply
  option remote-subvolume brick1    # name of the remote volume
end-volume

volume client2
  type protocol/client
  option transport-type tcp
  option remote-host 10.4.11.134    # IP address of the remote brick
  option remote-port 6996
  option transport-timeout 10       # seconds to wait for a reply
  option remote-subvolume brick2    # name of the remote volume
end-volume

volume client3
  type protocol/client
  option transport-type tcp
  option remote-host 10.4.11.134    # IP address of the remote brick
  option remote-port 6996
  option transport-timeout 10       # seconds to wait for a reply
  option remote-subvolume brick3    # name of the remote volume
end-volume

volume client4
  type protocol/client
  option transport-type tcp
  option remote-host 10.4.11.135    # IP address of the remote brick
  option remote-port 6996
  option transport-timeout 10       # seconds to wait for a reply
  option remote-subvolume brick1    # name of the remote volume
end-volume

volume client5
  type protocol/client
  option transport-type tcp
  option remote-host 10.4.11.135    # IP address of the remote brick
  option remote-port 6996
  option transport-timeout 10       # seconds to wait for a reply
  option remote-subvolume brick2    # name of the remote volume
end-volume

volume client6
  type protocol/client
  option transport-type tcp
  option remote-host 10.4.11.135    # IP address of the remote brick
  option remote-port 6996
  option transport-timeout 10       # seconds to wait for a reply
  option remote-subvolume brick3    # name of the remote volume
end-volume

volume afr1
  type cluster/afr
  subvolumes client1 client4
  option favorite-child client1
end-volume

volume afr2
  type cluster/afr
  subvolumes client2 client5
  option favorite-child client2
end-volume

volume afr3
  type cluster/afr
  subvolumes client3 client6
  option favorite-child client3
end-volume

volume dht
  type cluster/dht
  subvolumes afr1 afr2 afr3
end-volume

### Add readahead feature
volume readahead
  type performance/read-ahead
  option page-size 1MB      # unit in bytes
  option page-count 2       # cache per file = (page-count x page-size)
  subvolumes dht
end-volume

### Add IO-Cache feature
volume iocache
  type performance/io-cache
  option page-size 256KB
  option page-count 2
  subvolumes readahead
end-volume

### Add writeback feature
volume writeback
  type performance/write-behind
  option block-size 1MB
  option cache-size 2MB
  option flush-behind off
  subvolumes iocache
end-volume

Is "option flush-behind off" the reason? I am waiting urgently for your help.

Thanks a lot.

2009-05-11
eagleeyes