Hi,
it happened again: today I upgraded some packages on node #3. Since the kernel had a minor update, I was asked to reboot the server, and did so. At that time only one (non-critical) VM was running on that node. I checked twice, and Gluster was *not* healing when I rebooted.

After the reboot, while *automatic* healing was in progress, one VM started to get HDD corruption again, to the point that it could no longer boot(!). That VM was one of only two VMs still using NFS to access the Gluster storage - if that matters. The second VM survived the healing, even though it has rather large disks (~380 GB) and is quite busy. All other ~13 VMs had been moved to the native glusterfs mount days before and had no problem with the reboot. So the Gluster access type may or may not be related - I don't know...

All Gluster packages are at version "3.5.2-2+deb8u1" on all three servers - so Gluster has *not* been upgraded this time.

Kernel on node #3: Linux metal3 4.2.6-1-pve #1 SMP Wed Dec 9 10:49:55 CET 2015 x86_64 GNU/Linux
Kernel on node #1: Linux metal1 4.2.3-2-pve #1 SMP Sun Nov 15 16:08:19 CET 2015 x86_64 GNU/Linux

Any ideas?

Udo
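PS: by "Gluster was *not* healing" I mean that the self-heal status showed nothing pending before the reboot. The check is roughly the following (the volume name "datastore" is only a placeholder for our actual volume):

    # show files still pending heal on any brick (placeholder volume name)
    gluster volume heal datastore info
    # show files currently in split-brain, if any
    gluster volume heal datastore info split-brain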