Clients are CentOS 7.5, using the gluster network "url" disk setup in libvirt for the VM disks, e.g.:
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source protocol='gluster' name='glu-vol02-lab/build-plugin-01'>
    <host name='gluvm-vol02-b' port='24007'/>
  </source>
  ...
</disk>
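If it helps with debugging, the same image can be probed directly over libgfapi from a client command line (a minimal sketch, assuming qemu-img on the client was built with gluster protocol support; host/volume/image names are the ones from the XML above):

# Hypothetical sanity check that the client side can open the volume:
qemu-img info gluster://gluvm-vol02-b:24007/glu-vol02-lab/build-plugin-01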
We have the problem that gluster on the 2-brick server keeps running healing operations. It's
hard to understand why, as we cannot see any problems with the network traffic between the 2 bricks
(neither errors nor bandwidth issues).
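For reference, this is roughly how the heal backlog can be inspected from one of the servers (a sketch using the volume name from the XML above):

# Files currently pending heal, per brick:
gluster volume heal glu-vol02-lab info
# Count of pending entries only (quicker on a busy volume):
gluster volume heal glu-vol02-lab statistics heal-count
# Any files in split-brain:
gluster volume heal glu-vol02-lab info split-brain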
- Is there a fundamental versioning conflict?
  (libvirt-daemon-driver-storage-gluster-3.9.0-14.el7_5.5.x86_64 vs. gluster 4.1)
- Is there a limit to how many clients a gluster server can handle?
- Some setting we need to adjust? (A few checks we could run are sketched below.)
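The kind of checks behind these questions, sketched with the volume name from above (assuming the gluster 4.1 CLI on the servers and stock CentOS 7.5 packages on the hypervisors):

# Client-side library versions on a hypervisor:
glusterfs --version
rpm -q glusterfs-api libvirt-daemon-driver-storage-gluster
# Connected clients as seen by the servers:
gluster volume status glu-vol02-lab clients
# Cluster op-version vs. the maximum the installed binaries support:
gluster volume get all cluster.op-version
gluster volume get all cluster.max-op-version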
I hope someone has an idea,
Thanx,
Claus.
--
Claus Jeppesen | Manager, Network Services | Datto, Inc.
p +45 6170 5901 | Copenhagen Office | www.datto.com