Thanks for the hint. I changed network.ping-timeout to "5", but the behavior is only slightly different. I would expect the same failover behavior from Gluster as I get with DRBD!?

[root at ctdb1 ~]# gluster volume info all

Volume Name: samba-vol
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.132.56:/glusterfs/export
Brick2: 192.168.132.57:/glusterfs/export
Options Reconfigured:
network.ping-timeout: 5

What about network.frame-timeout? Can I adjust this parameter to react more quickly when a node is down?

[root at ctdb1 ~]# gluster peer status
Number of Peers: 1

Hostname: 192.168.132.57
Uuid: 9c52b89f-a232-4f20-8ff8-9bbc6351ab79
State: Peer in Cluster (Connected)

Or does this belong in my /etc/glusterfs/glusterd.vol:

[root at ctdb1 glusterfs]# cat glusterd.vol
volume management
    type mgmt/glusterd
    option working-directory /etc/glusterd
    option transport-type tcp,socket,rdma
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
end-volume

-----------------------------------------------
EDV
Daniel Müller
Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: mueller at tropenklinik.de
Internet: www.tropenklinik.de
-----------------------------------------------

-----Original Message-----
From: Jacob Shucart [mailto:jacob at gluster.com]
Sent: Tuesday, 21 December 2010 18:40
To: mueller at tropenklinik.de; 'Daniel Maher'; gluster-users at gluster.org
Subject: RE: Gluster 3.1 newbie question

Hello,

Please do not write to /glusterfs/export directly, as this is not compatible with Gluster. There is a ping timeout setting which controls how long Gluster will wait to write to a node that has gone down. By default this value is very high, so please run:

gluster volume set samba-vol network.ping-timeout 15

Then mount your Gluster volume somewhere and try writing to it. You will see that it pauses for a while and then resumes writing.

-Jacob

-----Original Message-----
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Daniel Müller
Sent: Tuesday, 21 December 2010 7:29 AM
To: mueller at tropenklinik.de; 'Daniel Maher'; gluster-users at gluster.org
Subject: Re: Gluster 3.1 newbie question

Hm, this time I did not use the mount point of the volume. I wrote directly into /glusterfs/export, and Gluster did not hang while the other peer restarted. But now the files I wrote in the meantime are not replicated. Is there a command to get them replicated to the other node?

-----------------------------------------------
EDV
Daniel Müller
Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: mueller at tropenklinik.de
Internet: www.tropenklinik.de
-----------------------------------------------
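In Gluster 3.1, replication of files that a replica missed is performed by self-heal, which is triggered when each file is looked up through a client mount point. A minimal sketch of forcing that lookup across the whole volume, assuming it is mounted at /mnt/glusterfs as in the messages below (files written directly to the brick directories bypass Gluster, so healing them this way is not guaranteed):

# Force a lookup (and thus a self-heal check) on every file,
# going through the glusterfs client mount, not the brick directory:
find /mnt/glusterfs -noleaf -print0 | xargs --null stat >/dev/null

Once the command completes, the files should be present on both bricks; comparing the two /glusterfs/export directories is a quick way to confirm.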
-----Original Message-----
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Daniel Müller
Sent: Tuesday, 21 December 2010 16:07
To: 'Daniel Maher'; gluster-users at gluster.org
Subject: Re: Gluster 3.1 newbie question

Even with the volume started it is the same; perhaps I missed something:

[root at ctdb1 ~]# gluster volume info

Volume Name: samba-vol
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 192.168.132.56:/glusterfs/export
Brick2: 192.168.132.57:/glusterfs/export

I created the volume like this:

gluster volume create samba-vol replica 2 transport tcp 192.168.132.56:/glusterfs/export 192.168.132.57:/glusterfs/export

Both are mounted:

# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
glusterfs#192.168.132.56:/samba-vol on /mnt/glusterfs type fuse (rw,allow_other,default_permissions,max_read=131072)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

-----------------------------------------------
EDV
Daniel Müller
Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen
Tel.: 07071/206-463, Fax: 07071/206-499
eMail: mueller at tropenklinik.de
Internet: www.tropenklinik.de
-----------------------------------------------

-----Original Message-----
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Daniel Maher
Sent: Tuesday, 21 December 2010 15:21
To: gluster-users at gluster.org
Subject: Re: Gluster 3.1 newbie question

On 12/21/2010 02:54 PM, Daniel Müller wrote:
> I have built up a two-peer gluster on CentOS 5.5 x64.
> My version:
> glusterfs --version
> glusterfs 3.1.0 built on Oct 13 2010 10:06:10
> Repository revision: v3.1.0
> Copyright (c) 2006-2010 Gluster Inc. <http://www.gluster.com>
> GlusterFS comes with ABSOLUTELY NO WARRANTY.
> You may redistribute copies of GlusterFS under the terms of the GNU
> Affero General Public License.
>
> I set up my bricks and volumes successfully:
> [root at ctdb1 peers]# gluster volume info
>
> Volume Name: samba-vol
> Type: Replicate
> Status: Created
> Number of Bricks: 2
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.132.56:/glusterfs/export
> Brick2: 192.168.132.57:/glusterfs/export
>
> And mounted them, everything great. But when testing, writing on one
> peer server while the other is restarted or down hangs until the
> second is online again. How do I get around this? Users should be
> able to work without interruption or waiting, and the files should be
> replicated again once the peers are back online.

Hello,

I have exactly the same setup and am (literally) testing it as I type this, and I can bring one of the nodes up and down as much as I like without causing an interruption on the other.

I notice that your output says "Status: Created" instead of "Status: Started". I don't know if that has anything to do with it, but it is notable.

--
Daniel Maher <dma+gluster AT witbe DOT net>
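Following up on the "Created" versus "Started" observation: a volume that has only been created is not serving data yet. A minimal sketch of bringing it online and re-running the failover test, assuming the volume name, server address, and mount point quoted above (the test file name is arbitrary):

# Start the volume (the earliest posting showed "Status: Created"):
gluster volume start samba-vol

# Mount through the glusterfs client rather than writing to the brick:
mkdir -p /mnt/glusterfs
mount -t glusterfs 192.168.132.56:/samba-vol /mnt/glusterfs

# Write through the mount while rebooting the other peer; with
# network.ping-timeout lowered, the write should stall for roughly that
# many seconds and then resume on its own:
dd if=/dev/zero of=/mnt/glusterfs/testfile bs=1M count=100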