If your volume uses replication or erasure coding, then it is mandatory.
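A minimal sketch of what that pre-upgrade check can look like. This is not from the thread above; the volume name "myvol" is a placeholder, and the script only wraps the standard `gluster volume heal <vol> info` command, which lists entries still awaiting heal per brick:

```shell
#!/bin/sh
# Hypothetical pre-upgrade heal check for a replicated/dispersed volume.
# "myvol" is a placeholder name; substitute your own volume.
VOL="myvol"

if command -v gluster >/dev/null 2>&1; then
    # Sum the "Number of entries:" counts reported for every brick.
    pending=$(gluster volume heal "$VOL" info \
        | awk '/Number of entries:/ {sum += $NF} END {print sum + 0}')
else
    # No gluster CLI on this host; nothing to check here.
    echo "gluster CLI not found; skipping heal check"
    pending=0
fi

if [ "$pending" -eq 0 ]; then
    echo "no pending heal entries; safe to proceed with the upgrade"
else
    echo "$pending entries still pending heal; wait before upgrading"
fi
```

Only proceed with taking bricks down once every brick reports zero pending entries; otherwise the surviving copies may not be complete.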
On Fri, Apr 21, 2017 at 1:05 AM, mabi <mabi@xxxxxxxxxxxxx> wrote:
Thanks for pointing me to the documentation. That's perfect, I can now
plan my upgrade to 3.8.11. By the way, I was wondering why a self-heal
is part of the upgrade procedure. Is it just a precaution, or is it
mandatory?

Regards
M.

-------- Original Message --------
Subject: Re: [Gluster-users] Bugfix release GlusterFS 3.8.11 has landed
Local Time: April 20, 2017 5:17 PM
UTC Time: April 20, 2017 3:17 PM
From: ndevos@xxxxxxxxxx
To: mabi <mabi@xxxxxxxxxxxxx>

On Wed, Apr 19, 2017 at 01:46:14PM -0400, mabi wrote:
> Sorry for insisting, but where can I find the upgrading to 3.8 guide?
> This is the only guide missing from the docs... I would like to
> upgrade from 3.7 and would like to follow the documentation to make
> sure everything goes well.

The upgrade guide for 3.8 has been lingering in a GitHub pull request
for a while now. I've just updated it again and hope it will be merged
soon.

You can see the proposed document here:

HTH,
Niels

>
> -------- Original Message --------
> Subject: Bugfix release GlusterFS 3.8.11 has landed
> Local Time: April 18, 2017 4:34 PM
> UTC Time: April 18, 2017 2:34 PM
> From: ndevos@xxxxxxxxxx
> To: announce@xxxxxxxxxxx
>
> Bugfix release GlusterFS 3.8.11 has landed
>
> Another month has passed, and more bugs have been squashed in the
> 3.8 release. Packages should be available or arrive soon at the usual
> repositories. The next 3.8 update is expected to be made available just
> after the 10th of May.
>
> Release notes for Gluster 3.8.11
>
> This is a bugfix release. The Release Notes for 3.8.0, 3.8.1, 3.8.2,
> 3.8.3, 3.8.4, 3.8.5, 3.8.6, 3.8.7, 3.8.8, 3.8.9 and 3.8.10 contain a
> listing of all the new features that were added and bugs fixed in the
> GlusterFS 3.8 stable release.
>
> Bugs addressed
>
> A total of 15 patches have been merged, addressing 13 bugs:
> * #1422788: [Replicate] "RPC call decoding failed" leading to IO hang & mount inaccessible
> * #1427390: systemic testing: seeing lot of ping time outs which would lead to splitbrains
> * #1430845: build/packaging: Debian and Ubuntu don't have /usr/libexec/; results in bad packages
> * #1431592: memory leak in features/locks xlator
> * #1434298: [Disperse] Metadata version is not healing when a brick is down
> * #1434302: Move spit-brain msg in read txn to debug
> * #1435645: Disperse: Provide description of disperse.eager-lock option.
> * #1436231: Undo pending xattrs only on the up bricks
> * #1436412: Unrecognized filesystems (i.e. btrfs, zfs) log many errors about "getinode size"
> * #1437330: Sharding: Fix a performance bug
> * #1438424: [Ganesha + EC] : Input/Output Error while creating LOTS of smallfiles
> * #1439112: File-level WORM allows ftruncate() on read-only files
> * #1440635: Application VMs with their disk images on sharded-replica 3 volume are unable to boot after performing rebalance
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users
--
Pranith