On Wed, 8 Jun 2011 at 12:12pm, Mohit Anchlia wrote

> On Wed, Jun 8, 2011 at 12:09 PM, Joshua Baker-LePain <jlb17 at duke.edu> wrote:
>> On Wed, 8 Jun 2011 at 9:40am, Mohit Anchlia wrote
>>
>>> On Tue, Jun 7, 2011 at 11:20 PM, Joshua Baker-LePain <jlb17 at duke.edu>
>>> wrote:
>>>>
>>>> On Wed, 8 Jun 2011 at 8:16am, bxmatus at gmail.com wrote
>>>>
>>>>> When a client connects to any gluster node, it automatically receives
>>>>> a list of all the other nodes for that volume.
>>>>
>>>> Yes, but what if the node it first tries to contact (i.e., the one on
>>>> the fstab line) is down?
>>>
>>> For the client side, use DNS round robin with all the hosts in your
>>> cluster.
>>
>> And if you use /etc/hosts rather than DNS...?
>
> What do you think should happen?

I'm simply trying to find the most robust way to automatically mount
GlusterFS volumes at boot time in my environment.  In previous versions of
Gluster there was no issue, since the volume files lived on the clients.  I
understand that can still be done, but then one loses the ability to manage
all changes from the servers.

Prior to this thread, I thought the best method was to run ucarp on the
servers and mount using the ucarp address.  If that won't work reliably (I
haven't had time to fully test my setup yet), then I need to find another
way.  I don't run DNS on my cluster, so that solution is out.  As far as I
can tell, the only other option is to mount in rc.local, with logic to
detect a mount failure and move on to the next server.

-- 
Joshua Baker-LePain
QB3 Shared Cluster Sysadmin
UCSF
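
P.S.  In case it's useful to anyone, here is a rough, untested sketch of
the rc.local approach I described above.  The server names (server1,
server2, server3), the volume name (myvol), and the mount point
(/mnt/gluster) are just placeholders for whatever your setup uses:

    #!/bin/sh
    # Try a GlusterFS native mount against each server in turn until one
    # of them produces a working mount.
    SERVERS="server1 server2 server3"
    VOLUME="myvol"
    MOUNTPOINT="/mnt/gluster"

    for s in $SERVERS; do
        mount -t glusterfs ${s}:/${VOLUME} ${MOUNTPOINT}
        # Verify the mount directly rather than trusting the exit status
        # alone, then stop trying further servers.
        if mountpoint -q ${MOUNTPOINT}; then
            break
        fi
    done

    # If nothing worked, at least leave a note in syslog.
    if ! mountpoint -q ${MOUNTPOINT}; then
        echo "GlusterFS mount of ${VOLUME} failed on all servers" | logger -t rc.local
    fi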