At 01:14 PM 3/6/2010, Chad wrote:
>I don't disagree that other network file systems may have issues.
>It does not change the fact that while 5 seconds may be "remarkable,"
>it does not make it any more acceptable when my services die and my
>clients complain.
>The point of high availability is that any single point of failure
>does not take down your services.

The point is that it doesn't take down your services. If you have 
good-quality equipment, you should only experience this 5-second delay 
on the rarest of occasions, and only if something is terribly wrong.

Perhaps it's a perception problem. The goal of HA isn't necessarily to 
prevent any notice of problems in the infrastructure, but to ensure 
that services remain available, often in a degraded mode.

If a 5-second delay is unacceptable, then there's unlikely to be 
anything that will suit your needs. If the delay were shorter, you'd 
risk mirrors breaking under ordinary network latency, which would make 
the situation worse, since resyncs would happen much more frequently. 
Your users might not see a 5-second delay, but instead they'd have a 
system that feels much slower, since it would constantly be doing all 
this unnecessary extra work.

>^C
>
>
>
>Keith Freedman wrote:
>>At 08:13 AM 3/6/2010, Chad wrote:
>>>I second this question/request.
>>>When the 1st server goes down, how do we eliminate the hang time?
>>>5 seconds is a long time for a file system to be hung.
>>It is a long time, but if you think about other HA filesystems, this
>>is one of the shortest I've seen.
>>Hardware NAS devices, when there's a node failure, will often take 30
>>seconds to 2 minutes to fully recover. In light of the alternatives,
>>5 seconds is remarkable.
>>If you're getting these delays on a regular basis, then something is
>>wrong, but if it's something that just happens in the face of a
>>failure, then it should be relatively rare.
>>If it happens all the time, then you really need to figure out why
>>your systems are failing and resolve that problem.
>>Just my .02
>>
>>>^C
>>>
>>>
>>>
>>>Richard de Vries wrote:
>>>>Hello Eduardo,
>>>>We had the same problem over here, with two nodes that are both
>>>>server and client.
>>>>You can try to lower the ping-timeout in the client volume file:
>>>>option ping-timeout 5
>>>>Sadly, 5 seconds is the lowest possible ping-timeout; our
>>>>applications on the main node can hang for about 5 seconds in case
>>>>the standby node fails (although we have a dedicated interconnect).
>>>>Maybe the Gluster developers have a better solution to this.
>>>>Regards,
>>>>Richard
>>>>
>>>>>Hi, I'm using GlusterFS v3.0.2 on Fedora 12.
>>>>>
>>>>>I have configured AFR with 3 nodes and mounted the volume on the
>>>>>client. It works fine, but when a node fails, the file system on
>>>>>the client locks and I can't execute any operations for about 40
>>>>>to 50 seconds.
>>>>>
>>>>>After 40 to 50 seconds, the file system on the client starts to
>>>>>work again. How can I resolve this problem? The file system can't
>>>>>be inaccessible for so long.
>>>>_______________________________________________
>>>>Gluster-users mailing list
>>>>Gluster-users at gluster.org
>>>>http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
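
For anyone following along, here is a minimal sketch of where the 
`option ping-timeout 5` line Richard mentions would sit in a 
GlusterFS 3.0-era client volume file. The volume names, host 
addresses, and remote subvolume name below are illustrative 
assumptions, not taken from the thread; adapt them to your own setup. 
ping-timeout is a per-connection option on each protocol/client 
volume, so it has to be set on every client volume feeding the 
replicate (AFR) translator:

```
# Hypothetical client volfile sketch -- names and addresses are
# placeholders, not from this thread.
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.1.10      # assumed address of server 1
  option remote-subvolume brick        # assumed exported subvolume name
  option ping-timeout 5                # 5s: the lowest value accepted
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.1.11      # assumed address of server 2
  option remote-subvolume brick
  option ping-timeout 5
end-volume

volume mirror
  type cluster/replicate               # AFR: mirrors remote1 and remote2
  subvolumes remote1 remote2
end-volume
```

As discussed above, setting the timeout lower than this would risk 
tearing down healthy connections on ordinary network latency spikes, 
triggering frequent and expensive resyncs.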