New to GlusterFS

Hi, 
    I use a 5-second ping-timeout so the gluster-nfs IP can shift quickly 
when a node powers off, but under heavy I/O load clients often time out; 
a longer timeout may be needed. 
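
The failover window is the client-side network.ping-timeout volume 
option; a minimal sketch of adjusting it, assuming a volume named DATA 
(the 30-second value is only an example): 

    # raise the client-side ping-timeout (in seconds)
    gluster volume set DATA network.ping-timeout 30

    # verify it under "Options Reconfigured"
    gluster volume info DATA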

> The reason for the long (42-second) ping-timeout is that 
> re-establishing fds and locks can be a very expensive operation. 
> Allowing a longer time to re-establish connections is logical, unless 
> you have servers that frequently die.
> 
> If you shut down a server through the normal kill process, the TCP 
> connections will be closed properly. The client will be aware that the 
> server is going away and there will be no timeout. This allows server 
> maintenance without encountering that issue.
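> 
> A minimal sketch of that "normal kill" path, assuming the brick 
> processes carry the stock name glusterfsd:
> 
>     # SIGTERM (the killall default) lets each brick close its TCP
>     # connections cleanly, so clients fail over without waiting out
>     # ping-timeout
>     killall glusterfsd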
> 
> One issue with a 42-second timeout is that ext4 may detect an error 
> and remount itself read-only should that happen while the VM is 
> running. You can override this behavior by specifying the mount 
> option "errors=continue" in fstab ("errors=remount-ro" is the 
> default). The default can also be changed by setting the superblock 
> option with tune2fs.
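> 
> For illustration, hedged sketches of both approaches (the device name 
> is a placeholder):
> 
>     # /etc/fstab inside the VM: keep running on ext4 errors
>     /dev/vda1  /  ext4  defaults,errors=continue  0  1
> 
>     # or set the behavior in the superblock itself
>     tune2fs -e continue /dev/vda1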
> 
> On 10/22/2013 03:12 AM, John Mark Walker wrote:
> >
> > Hi JC,
> >
> > Yes, the default is a 42-second timeout for failover. You can 
> > configure that to be a smaller window.
> >
> > -JM
> >
> > On Oct 22, 2013 10:57 AM, "JC Putter" <jcputter at gmail.com> wrote:
> >
> >     Hi,
> >
> >     I am new to GlusterFS. I am trying to accomplish something that I am
> >     not 100% sure is the correct use case, but hear me out.
> >
> >     I want to use GlusterFS to host KVM VMs. From what I've read this was
> >     not recommended due to poor write performance, but since
> >     libgfapi/qemu 1.3 it is now viable?
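> >
> >     (For context, a sketch of what the libgfapi path looks like, using
> >     the host and volume names from the volume info below; it assumes
> >     qemu >= 1.3 built with GlusterFS support, and vm1.qcow2 is a
> >     hypothetical image name:
> >
> >         # create an image directly on the volume via libgfapi
> >         qemu-img create -f qcow2 \
> >             gluster://glusterfs1.example.com/DATA/vm1.qcow2 20G
> >
> >         # boot from it, bypassing the FUSE mount
> >         qemu-system-x86_64 -m 2048 -drive \
> >             file=gluster://glusterfs1.example.com/DATA/vm1.qcow2,if=virtio
> >     )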
> >
> >
> >     Currently I am testing GlusterFS with two nodes, both running as
> >     server and client.
> >
> >     I have the following volume:
> >
> >     Volume Name: DATA
> >     Type: Replicate
> >     Volume ID: eaa7746b-a1c1-4959-ad7d-743ac519f86a
> >     Status: Started
> >     Number of Bricks: 1 x 2 = 2
> >     Transport-type: tcp
> >     Bricks:
> >     Brick1: glusterfs1.example.com:/data
> >     Brick2: glusterfs2.example.com:/data
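> >
> >     (A volume like this would have been created with something along
> >     the lines of the following; the exact commands are an assumption:
> >
> >         gluster volume create DATA replica 2 \
> >             glusterfs1.example.com:/data glusterfs2.example.com:/data
> >         gluster volume start DATA
> >     )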
> >
> >
> >     I am mounting the volume locally on each server as /mnt/gluster.
> >     Replication works, but as soon as I kill one node, the directory
> >     /mnt/gluster/ becomes unavailable for 30-40 seconds.
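> >
> >     (A typical mount of that kind, for reference:
> >
> >         mount -t glusterfs glusterfs1.example.com:/DATA /mnt/gluster
> >     )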
> >
> >     The log shows:
> >
> >     [2013-10-22 11:55:48.055571] W [socket.c:514:__socket_rwv]
> >     0-DATA-client-0: readv failed (No data available)
> >
> >
> >     Thanks in advance!
> 


