Glusterfs-client VM on proxmox

Guys, I have a working GlusterFS volume. The servers run Debian 11. Here is the volume information:

Volume Name: pool-gluster01
Type: Replicate
Volume ID: ab9c0268-0942-495f-acca-9de567581a40
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster01:/brick1/pool-gluster01
Brick2: gluster02:/brick1/pool-gluster01
Brick3: arbiter01:/brick1/pool-gluster01 (arbiter)
Options Reconfigured:
cluster.data-self-heal-algorithm: full
cluster.favorite-child-policy: mtime
network.ping-timeout: 2
cluster.quorum-count: 1
cluster.quorum-reads: false
cluster.self-heal-daemon: enable
cluster.heal-timeout: 5
cluster.granular-entry-heal: enable
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off


The client accessing the volume is an Ubuntu VM running on a node of a Proxmox cluster. When I live-migrate this VM to another cluster node, the gluster volume stops working; if I migrate it back to the source node, it works again. Has anyone seen this happen?
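For reference, the client mounts the volume over FUSE. The exact mount line below is an assumption (the mount point and options are not shown above); one thing worth checking is whether the client names only a single volfile server, since the native client supports fallbacks:

```
# /etc/fstab on the Ubuntu client VM -- hypothetical example, the real
# mount options were not included in this post.
# backup-volfile-servers lets the FUSE client fetch the volfile from
# another server if gluster01 is unreachable from the new node.
gluster01:/pool-gluster01  /mnt/gluster  glusterfs  defaults,_netdev,backup-volfile-servers=gluster02:arbiter01  0 0
```

It may also be worth verifying, after migration, that the VM can still resolve gluster01/gluster02/arbiter01 (e.g. identical /etc/hosts entries) and that the target Proxmox node's bridge/VLAN and firewall pass TCP 24007 plus the brick ports to the gluster servers.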
--
André Probst
Consultor de Tecnologia
43 99617 8765
________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
