backupvolfile-server (servers) not working for new mounts?

I have a Gluster 4.1 system with three servers running Docker/Kubernetes. The pods mount their filesystems via Gluster.

10.13.112.31 is the primary server [A]; all mounts specify it, with the two other servers, 10.13.113.116 [B] and 10.13.114.16 [C], listed in backup-volfile-servers.
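For context, the mounts look roughly like this (the volume name gv0 and mount point are made up for illustration; the IPs are the ones above):

```shell
# Sketch of a client mount, assuming a hypothetical volume named "gv0".
# [A] is the primary volfile server; [B] and [C] are only consulted to
# fetch the volfile when [A] is unreachable at mount time.
mount -t glusterfs \
  -o backup-volfile-servers=10.13.113.116:10.13.114.16 \
  10.13.112.31:/gv0 /mnt/gv0
```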

I'm testing what happens when a server goes down.

If I bring down [B] or [C], no problem, everything restages and works.

But if I bring down [A], any *existing* mount continues to work, while any *new* mount fails: the pod logs messages saying all subvolumes are down.

Yet I've mounted this exact same volume on the same system (before bringing the server down) and I can access all the data fine.

Why do new mounts fail? I'm on AWS with each server in a different availability zone, but I don't see how that would be an issue.

I also tried using just backupvolfile-server (the singular form), and that didn't work either.
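For what it's worth, as I understand it the two spellings take different values: backupvolfile-server expects a single host, while backup-volfile-servers takes a colon-separated list. In /etc/fstab form that would be something like the following (hypothetical entries; the volume name gv0 is made up):

```shell
# Older single-server option spelling:
#   10.13.112.31:/gv0  /mnt/gv0  glusterfs  defaults,backupvolfile-server=10.13.113.116  0 0
# Newer list form, multiple fallbacks separated by colons:
#   10.13.112.31:/gv0  /mnt/gv0  glusterfs  defaults,backup-volfile-servers=10.13.113.116:10.13.114.16  0 0
```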



_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
