Amar,
Is this documentation relevant for Diego?
"If backupvolfile-server option is added while mounting fuse client, when the first volfile server fails, then the server specified in backupvolfile-server option is used as volfile server to mount the client."

Or is there 'better' documentation?
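For concreteness, a typical /etc/fstab entry using that option might look like the following sketch (the hostnames server1/server2 and the volume name Vol01 are placeholders, not from this thread; adjust to your setup):

```
server1:/Vol01 /mnt glusterfs defaults,_netdev,backupvolfile-server=server2 0 0
```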
On Thu, Jan 24, 2019 at 8:51 AM Amar Tumballi Suryanarayan <atumball@xxxxxxxxxx> wrote:
Also note that this way of mounting with a 'static' volfile is not recommended, as you wouldn't get any features out of Gluster's Software Defined Storage behavior. This was an approach we used to have, say, 8 years ago. With the introduction of the management daemon called glusterd, the way of dealing with volfiles has changed, and they are now created with the gluster CLI.

About having /etc/fstab not hang when a server is down, search for the 'backup-volfile-server' option with glusterfs; that is what should be used.

Regards,
Amar

On Thu, Jan 24, 2019 at 7:17 PM Diego Remolina <dijuremo@xxxxxxxxx> wrote:

Show us the output of:
gluster v status

Have you configured firewall rules properly for all ports being used?

Diego
_______________________________________________

On Thu, Jan 24, 2019 at 8:44 AM Gilberto Nunes <gilberto.nunes32@xxxxxxxxx> wrote:

> I think your mount statement in /etc/fstab is only referencing ONE of the gluster servers.
>
> Please take a look at "More redundant mount" section:
>
> Then try taking down one of the gluster servers and report back results.

Guys! I have followed the very same instructions found on James's website. One of the methods mentioned there is to create a file in the /etc/glusterfs directory, named datastore.vol for instance, with this content:

volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host server1
  option remote-subvolume /data/storage
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host server2
  option remote-subvolume /data/storage
end-volume

volume remote3
  type protocol/client
  option transport-type tcp
  option remote-host server3
  option remote-subvolume /data/storage
end-volume

volume replicate
  type cluster/replicate
  subvolumes remote1 remote2 remote3
end-volume

volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes replicate
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume

and then include this line in fstab:
/etc/glusterfs/datastore.vol [MOUNT] glusterfs rw,allow_other,default_permissions,max_read=131072 0 0

---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
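For contrast, the glusterd-managed approach mounts by volume name rather than by a static volfile; a rough sketch of the equivalent mount command (assuming a hypothetical volume named Vol01 on hosts server1/server2):

```
mount -t glusterfs -o backupvolfile-server=server2 server1:/Vol01 /mnt
```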
On Thu, Jan 24, 2019 at 11:27, Scott Worthington <scott.c.worthington@xxxxxxxxx> wrote:

I think your mount statement in /etc/fstab is only referencing ONE of the gluster servers.

Please take a look at the "More redundant mount" section:

Then try taking down one of the gluster servers and report back results.

On Thu, Jan 24, 2019 at 8:24 AM Gilberto Nunes <gilberto.nunes32@xxxxxxxxx> wrote:

Yep! But as I mentioned in a previous e-mail, this issue occurs even with 3 or 4 servers. I don't know what's happening.

---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
On Thu, Jan 24, 2019 at 10:43, Diego Remolina <dijuremo@xxxxxxxxx> wrote:

GlusterFS needs quorum, so if you have two servers and one goes down, there is no quorum, and all writes stop until the server comes back up. You can add a third server as an arbiter, which does not store file data in the bricks but still uses some minimal space (to keep metadata for the files).

HTH,
Diego

On Wed, Jan 23, 2019 at 3:06 PM Gilberto Nunes <gilberto.nunes32@xxxxxxxxx> wrote:

Hi there...

I have set up two servers as a replica, like this:

gluster vol create Vol01 server1:/data/storage server2:/data/storage

Then I created a config file on the client, like this:

volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host server1
  option remote-subvolume /data/storage
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host server2
  option remote-subvolume /data/storage
end-volume

volume replicate
  type cluster/replicate
  subvolumes remote1 remote2
end-volume

volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes replicate
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume

And added this line in /etc/fstab:

/etc/glusterfs/datastore.vol /mnt glusterfs defaults,_netdev 0 0

After mounting /mnt, I can access the servers. So far so good! But when I make server1 crash, I am unable to access /mnt or even run

gluster vol status

on server2. Everything hangs!

I have tried with replicated, distributed, and distributed-replicated too.

I am using Debian Stretch, with the gluster package installed via apt, provided by the standard Debian repo: glusterfs-server 3.8.8-1.

I am sorry if this is a newbie question, but isn't a glusterfs share supposed to stay online if one server goes down?

Any advice will be welcome.

Best,
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
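Diego's quorum point comes down to a simple majority rule. A minimal illustrative sketch (my own, not Gluster's actual code) of why a 2-node replica stops accepting writes when one node is down, while a third node, even a metadata-only arbiter, preserves the majority:

```python
def has_quorum(bricks_up: int, replica_count: int) -> bool:
    """Writes are allowed only while a strict majority of replicas is reachable."""
    return bricks_up > replica_count // 2

# 2-way replica: losing one server leaves 1 of 2 up, no majority -> writes stop.
print(has_quorum(1, 2))  # False
# 3-way replica (or replica 2 + arbiter): one server down still leaves 2 of 3.
print(has_quorum(2, 3))  # True
```

This is why the arbiter costs almost no disk space yet prevents the hang Gilberto observed: it contributes a vote toward the majority without holding file data.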
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
--
Amar Tumballi (amarts)