If you have AFR on the server side and that server goes down, then all the
FDs associated with files on that server will return ENOTCONN errors (if
that is how your setup is). But if you had AFR on the client side, it would
have worked seamlessly. This situation will also be handled when we bring
out the HA translator.

Krishna

On Nov 30, 2007 3:01 AM, Mickey Mazarick <mic@xxxxxxxxxxxxxxxxxx> wrote:
> Is this true for files that are currently open? For example, I have a
> virtual machine running that has a file open at all times. Errors are
> bubbling back to the application layer instead of just waiting. After
> that I have to unmount/remount the gluster vol. Is there a way of
> preventing this?
>
> (This is the latest tla, btw.)
> Thanks!
>
>
> Anand Avati wrote:
> > This is possible already; it's just that the files from the node which
> > is down will not be accessible for the time the server is down. When
> > the server is brought back up, the files are made accessible again.
> >
> > avati
> >
> > 2007/11/30, Mickey Mazarick <mic@xxxxxxxxxxxxxxxxxx
> > <mailto:mic@xxxxxxxxxxxxxxxxxx>>:
> >
> >     Is there currently a way to force a client connection to retry
> >     disk I/O until a failed resource comes back online?
> >     If a disk in a unified volume drops, I have to remount on all the
> >     clients. Is there a way around this?
> >
> >     I'm using afr/unify on 6 storage bricks, and I want to be able to
> >     change a server config setting and restart the server bricks one
> >     at a time without losing the mount point on the clients. Is this
> >     currently possible without doing IP failover?
> >     --
> >     _______________________________________________
> >     Gluster-devel mailing list
> >     Gluster-devel@xxxxxxxxxx <mailto:Gluster-devel@xxxxxxxxxx>
> >     http://lists.nongnu.org/mailman/listinfo/gluster-devel
> >
> >
> >
> >
> > --
> > It always takes longer than you expect, even when you take into
> > account Hofstadter's Law.
> >      -- Hofstadter's Law
>
> --
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@xxxxxxxxxx
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>
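For readers finding this thread later: Krishna's suggestion to move AFR to the client side means the replication happens in each client's volume spec rather than on the servers, so a client simply keeps using the surviving replica when one server drops. A minimal sketch of such a client spec is below; the volume names, hostnames, and remote-subvolume name are illustrative only and not taken from this thread (the original poster's six-brick afr/unify layout would need more subvolumes):

```
# Hypothetical client-side volume spec with client-side AFR.
# Names (brick1, brick2, server1, server2, posix-brick, afr0)
# are made up for illustration.

volume brick1
  type protocol/client
  option transport-type tcp/client
  option remote-host server1
  option remote-subvolume posix-brick
end-volume

volume brick2
  type protocol/client
  option transport-type tcp/client
  option remote-host server2
  option remote-subvolume posix-brick
end-volume

# Replicate across the two remote bricks on the client side,
# so a single server going down does not take the mount with it.
volume afr0
  type cluster/afr
  subvolumes brick1 brick2
end-volume
```

With this layout, open FDs on files replicated across brick1 and brick2 can be served from the remaining brick while the other server is restarted, which is why Krishna says client-side AFR "would have worked seamlessly" for rolling server restarts.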