Re: Access to servers hangs after stopping one server...


 



OK! Now I get it... However, on one server, server2, I get a timeout when I run gluster v status. The gluster v info command works fine, but gluster v status hangs and eventually gives up with a timeout message.
Here is the gluster v status output:
gluster v status
Status of volume: Vol01
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick server1:/data/storage                 49164     0          Y       684  
Brick server2:/data/storage                 49157     0          Y       610  
Brick server3:/data/storage                 49155     0          Y       621  
Brick server4:/data/storage                 49155     0          Y       594  
Self-heal Daemon on localhost               N/A       N/A        Y       667  
Self-heal Daemon on server4                 N/A       N/A        Y       623  
Self-heal Daemon on server1                 N/A       N/A        Y       720  
Self-heal Daemon on server3                 N/A       N/A        Y       651  
 
Task Status of Volume Vol01
------------------------------------------------------------------------------
There are no active volume tasks

And here is gluster v info:
gluster vol info
 
Volume Name: Vol01
Type: Distributed-Replicate
Volume ID: 7b71f498-8512-4160-93a0-e32a8c70ecac
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: server1:/data/storage
Brick2: server2:/data/storage
Brick3: server3:/data/storage
Brick4: server4:/data/storage
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet

Now fstab looks like this:
server1:/Vol01 /mnt glusterfs defaults,_netdev,backupvolfile-server=server2:server3:server4 0 0

Since I am working here with 4 KVM VMs (it's just a lab), when I suspended server1 (virsh suspend server1), after a few seconds I was able to access the /mnt mount point with no problems.
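Roughly, the test looked like this (a minimal sketch; the VM name and mount point match my lab setup above):

# on the KVM host: pause server1 to simulate a failure
virsh suspend server1

# on the client: after a few seconds the mount responds again
ls /mnt
touch /mnt/failover-test

# on the KVM host: bring server1 back
virsh resume server1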

So it seems backupvolfile-server works fine after all!
Thanks



---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





On Thu, Jan 24, 2019 at 11:55, Gilberto Nunes <gilberto.nunes32@xxxxxxxxx> wrote:
Thanks, I'll check it out. 
---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





On Thu, Jan 24, 2019 at 11:50, Amar Tumballi Suryanarayan <atumball@xxxxxxxxxx> wrote:
Also note that this way of mounting with a 'static' volfile is not recommended, as you wouldn't get any of the features of Gluster's software-defined storage behavior.

This was the approach we used, say, 8 years ago. With the introduction of the management daemon, glusterd, the way volfiles are handled has changed, and they are now generated through the gluster CLI.

To keep the /etc/fstab mount from hanging when a server is down, look up the 'backup-volfile-server' option of glusterfs; that is what should be used.
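For example, a minimal fstab sketch using that option (hostnames and volume name here are placeholders; newer mount.glusterfs versions also accept the plural spelling 'backup-volfile-servers' with a colon-separated list of hosts):

# /etc/fstab: fetch the volfile from server1, fall back to server2 or server3
server1:/Vol01  /mnt  glusterfs  defaults,_netdev,backup-volfile-servers=server2:server3  0 0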

Regards,
Amar

On Thu, Jan 24, 2019 at 7:17 PM Diego Remolina <dijuremo@xxxxxxxxx> wrote:
Show us the output of:

gluster v status

Have you configured firewall rules properly for all ports being used?
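For reference, a rough iptables sketch of what usually needs to be open between peers and clients (the glusterd management port plus one brick port per brick; exact brick port ranges vary by version, but they match the 49xxx ports shown in gluster v status):

# glusterd management port
iptables -A INPUT -p tcp --dport 24007 -j ACCEPT
# brick ports (one per brick, allocated from 49152 upward on current releases)
iptables -A INPUT -p tcp --dport 49152:49251 -j ACCEPT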

Diego

On Thu, Jan 24, 2019 at 8:44 AM Gilberto Nunes <gilberto.nunes32@xxxxxxxxx> wrote:
>I think your mount statement in /etc/fstab is only referencing ONE of the gluster servers.
>
>Please take a look at "More redundant mount" section:
>
>
>Then try taking down one of the gluster servers and report back results.

Guys! I have followed the very same instructions found on James's website.
One of the methods he mentions on that website is to create a file in the /etc/glusterfs directory, named datastore.vol for instance, with this content:

volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host server1
  option remote-subvolume /data/storage
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host server2
  option remote-subvolume /data/storage
end-volume

volume remote3
  type protocol/client
  option transport-type tcp
  option remote-host server3
  option remote-subvolume /data/storage
end-volume

volume replicate
  type cluster/replicate
  subvolumes remote1 remote2 remote3
end-volume

volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes replicate
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume


and then include this line in fstab:

/etc/glusterfs/datastore.vol [MOUNT] glusterfs rw,allow_other,default_permissions,max_read=131072 0 0

What am I doing wrong?

Thanks 






---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





On Thu, Jan 24, 2019 at 11:27, Scott Worthington <scott.c.worthington@xxxxxxxxx> wrote:
I think your mount statement in /etc/fstab is only referencing ONE of the gluster servers.

Please take a look at "More redundant mount" section:


Then try taking down one of the gluster servers and report back results.

On Thu, Jan 24, 2019 at 8:24 AM Gilberto Nunes <gilberto.nunes32@xxxxxxxxx> wrote:
Yep! 
But as I mentioned in a previous e-mail, this issue occurs even with 3 or 4 servers.
I don't know what's happening.

---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





On Thu, Jan 24, 2019 at 10:43, Diego Remolina <dijuremo@xxxxxxxxx> wrote:
GlusterFS needs quorum, so if you have two servers and one goes down, there is no quorum and all writes stop until the server comes back up. You can add a third server as an arbiter, which does not store file data in its brick but still uses some minimal space (to keep metadata for the files).
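A rough sketch of what that looks like with the gluster CLI (server3 and the /data/arbiter path here are placeholders):

# create a replica volume with an arbiter from scratch
gluster volume create Vol01 replica 3 arbiter 1 \
    server1:/data/storage server2:/data/storage server3:/data/arbiter

# or convert an existing replica 2 volume by adding an arbiter brick
gluster volume add-brick Vol01 replica 3 arbiter 1 server3:/data/arbiter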

HTH,

Diego

On Wed, Jan 23, 2019 at 3:06 PM Gilberto Nunes <gilberto.nunes32@xxxxxxxxx> wrote:
Hi there...

I have set up two servers as a replica, like this:

gluster vol create Vol01 server1:/data/storage server2:/data/storage

Then I created a config file on the client, like this:
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host server1
  option remote-subvolume /data/storage
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host server2
  option remote-subvolume /data/storage
end-volume

volume replicate
  type cluster/replicate
  subvolumes remote1 remote2
end-volume

volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes replicate
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume

And added this line to /etc/fstab:

/etc/glusterfs/datastore.vol /mnt glusterfs defaults,_netdev 0 0

After mounting /mnt, I can access the servers. So far so good!
But when I make server1 crash, I am unable to access /mnt or even run
gluster vol status
on server2.

Everything hangs!

I have tried replicated, distributed, and distributed-replicated volumes too.
I am using Debian Stretch, with the gluster package installed via apt from the standard Debian repo: glusterfs-server 3.8.8-1.

I am sorry if this is a newbie question, but isn't a glusterfs share supposed to stay online if one server goes down?

Any advice will be welcome.

Best






---
Gilberto Nunes Ferreira

(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram

Skype: gilberto.nunes36





--
Amar Tumballi (amarts)
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
