Hi Lindsay,

Can you share the glusterd log and the glfsheal log for the volume from the system on which you ran the heal command? This will help us understand why the volfile fetch failed. The files will be `/var/log/glusterfs/etc-glusterfs-glusterd.vol.log` and `/var/log/glusterfs/glfsheal-<volname>.log`. A small sketch for bundling both files is appended at the end of this mail.

On Wed, Jun 29, 2016 at 2:26 PM, <lindsay.mathieson@xxxxxxxxx> wrote:
> Yes, but I hadn't restarted the servers either, so the clients (qemu/gfapi)
> were still 3.7.11 until then.
>
> Still have the same problems after reverting the settings.
>
> Waiting for the heal to finish before I revert to 3.7.11.
>
> Any advice on the best way to use apt for that?
>
> Sent from my Windows 10 phone
>
> From: Kevin Lemonnier
> Sent: Wednesday, 29 June 2016 6:49 PM
> To: gluster-users@xxxxxxxxxxx
> Subject: Re: 3.7.12 disaster
>
>> cluster.shd-max-threads:4
>> cluster.locking-scheme:granular
>
> So you had no problems before setting those? I'm currently re-installing my test
> servers; as you can imagine, I'm really hoping 3.7.12 fixes the corruption problem.
> I hope there isn't a new horrible bug...
>
> --
> Kevin Lemonnier
> PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@xxxxxxxxxxx
> http://www.gluster.org/mailman/listinfo/gluster-users

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
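
In case it helps, here is a minimal Python sketch for collecting the two logs mentioned above into a single tarball you can attach to your reply. It assumes the default `/var/log/glusterfs` log directory and uses a hypothetical volume name (`myvol`); substitute your actual volume name before running.

    #!/usr/bin/env python
    # Minimal sketch: bundle the glusterd and glfsheal logs into one tarball.
    # Assumes the default log directory; VOLNAME is a hypothetical placeholder,
    # replace it with the real volume name.
    import os
    import tarfile

    LOG_DIR = "/var/log/glusterfs"
    VOLNAME = "myvol"  # hypothetical, replace with your volume name

    logs = [
        os.path.join(LOG_DIR, "etc-glusterfs-glusterd.vol.log"),
        os.path.join(LOG_DIR, "glfsheal-%s.log" % VOLNAME),
    ]

    with tarfile.open("gluster-heal-logs.tar.gz", "w:gz") as tar:
        for path in logs:
            if os.path.exists(path):
                # store only the file name inside the archive
                tar.add(path, arcname=os.path.basename(path))
            else:
                print("missing: %s" % path)

    print("wrote gluster-heal-logs.tar.gz")

Run it as root (or another user that can read the logs) on the node where you ran the heal command, then attach the resulting `gluster-heal-logs.tar.gz`.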