client hung during self heal/replicate

Hi,

- 1 client 
- 3 servers

One server was down when I started writing test data. Once my writes finished, I brought the third server back up and the self-heal/replication began. However, I noticed that an 'ls -l' from the client hung. Is it normal for the client to hang while 1 of 3 servers is healing?
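(For reference, this is the exact call that blocked; the mount point /mnt/glusterfs is just an example of where I have the volume mounted:)

ls -l /mnt/glusterfs    # blocks while the heal of / is in progress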

My client and server configs are at the bottom.

[2009-10-02 12:29:20] D [afr-self-heal-entry.c:1859:afr_sh_entry_sync_prepare] replicate: self-healing directory / from subvolume remote2 to 1 other
[2009-10-02 12:29:20] D [afr-self-heal-entry.c:1161:afr_sh_entry_impunge_mknod] replicate: creating missing file /test_file2 on remote1
[2009-10-02 12:29:20] D [afr-self-heal-entry.c:1161:afr_sh_entry_impunge_mknod] replicate: creating missing file /test_file3 on remote1
[2009-10-02 12:29:20] D [afr-self-heal-entry.c:1161:afr_sh_entry_impunge_mknod] replicate: creating missing file /test_file on remote1
[2009-10-02 12:29:20] D [afr-self-heal-entry.c:1161:afr_sh_entry_impunge_mknod] replicate: creating missing file /test_file4 on remote1
[2009-10-02 12:29:20] D [afr-self-heal-metadata.c:379:afr_sh_metadata_sync] replicate: self-healing metadata of /test_file2 from remote2 to remote1
[2009-10-02 12:29:20] D [afr-self-heal-data.c:797:afr_sh_data_sync_prepare] replicate: self-healing file /test_file2 from subvolume remote2 to 1 other
[2009-10-02 12:29:20] D [afr-self-heal-metadata.c:379:afr_sh_metadata_sync] replicate: self-healing metadata of /test_file from remote2 to remote1
[2009-10-02 12:29:20] D [afr-self-heal-data.c:797:afr_sh_data_sync_prepare] replicate: self-healing file /test_file from subvolume remote2 to 1 other
[2009-10-02 12:29:33] D [afr-self-heal-metadata.c:379:afr_sh_metadata_sync] replicate: self-healing metadata of /test_file3 from remote2 to remote1
[2009-10-02 12:29:33] D [afr-self-heal-data.c:797:afr_sh_data_sync_prepare] replicate: self-healing file /test_file3 from subvolume remote2 to 1 other
[2009-10-02 12:29:33] D [afr-self-heal-metadata.c:379:afr_sh_metadata_sync] replicate: self-healing metadata of /test_file4 from remote2 to remote1
[2009-10-02 12:29:33] D [afr-self-heal-data.c:797:afr_sh_data_sync_prepare] replicate: self-healing file /test_file4 from subvolume remote2 to 1 other
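In case it matters: my understanding is that in this release self-heal is triggered by lookups, so a full crawl of the mount point (the path below is just an example) is the usual way to force everything to heal:

find /mnt/glusterfs -noleaf -print0 | xargs --null stat >/dev/null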


# client config
# file: /etc/glusterfs/glusterfs.vol
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host rh1
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host rh2
  option remote-subvolume brick
end-volume

volume remote3
  type protocol/client
  option transport-type tcp
  option remote-host rh3
  option remote-subvolume brick
end-volume

volume replicate
  type cluster/replicate
  subvolumes remote1 remote2 remote3
end-volume

volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes replicate
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume
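The client is mounted from that volfile with something like this (the mount point is just an example):

glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/glusterfs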

#
# server config
#
volume posix
  type storage/posix
  option directory /bigpartition
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow *
  subvolumes brick
end-volume
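Each server runs glusterfsd against this volfile with something like the following (the volfile path is just where I happen to keep it):

glusterfsd -f /etc/glusterfs/glusterfsd.vol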

Thanks!
=cm
