[ repost from these pages, with clicky links:
  http://blog.nixpanic.net/2016/08/glusterfs-382-bugfix-release.html
  https://github.com/gluster/glusterfs/blob/release-3.8/doc/release-notes/3.8.2.md ]

Pretty much according to the release schedule, GlusterFS 3.8.2 has been released this week. Packages are available in the standard repositories and are moving from testing status to the regular update channels in the different distributions. A minimal sketch for picking up the update follows after the bug list below.

Release notes for Gluster 3.8.2

This is a bugfix release. The Release Notes for 3.8.0 and 3.8.1 contain a listing of all the new features that were added and bugs fixed in the GlusterFS 3.8 stable release.

Bugs addressed

A total of 54 patches have been merged, addressing 50 bugs:

* #1339928: Misleading error message on rebalance start when one of the glusterd instances is down
* #1346133: tiering: Multiple brick processes crashed on tiered volume while taking snapshots
* #1351878: client ID should be logged when SSL connection fails
* #1352771: [DHT]: Rebalance info for remove brick operation is not showing after glusterd restart
* #1352926: "gluster volume status client" isn't showing any information when one of the nodes in a 3-way Distributed-Replicate volume is shut down
* #1353814: Bricks are starting when server quorum is not met
* #1354250: Gluster fuse client crashed generating core dump
* #1354395: rpc-transport: compiler warning format string
* #1354405: process glusterd set TCP_USER_TIMEOUT failed
* #1354429: [Bitrot] Need a way to set scrub interval to a minute, for ease of testing
* #1354499: service file is executable
* #1355609: [granular entry sh] - Clean up (stale) directory indices in the event of an rm -rf and also in the normal flow while a brick is down
* #1355610: Fix timing issue in tests/bugs/glusterd/bug-963541.t
* #1355639: [Bitrot]: Scrub status - certain fields continue to show previous run's details, even if the current run is in progress
* #1356439: Upgrade from 3.7.8 to 3.8.1 doesn't regenerate the volfiles
* #1357257: observing "Too many levels of symbolic links" after adding bricks and then issuing a replace brick
* #1357773: [georep]: If a georep session is recreated, the existing files which are deleted from the slave don't get synced again from the master
* #1357834: Gluster/NFS does not accept dashes in hostnames in exports/netgroups files
* #1357975: [Bitrot+Sharding] Scrub status shows incorrect values for 'files scrubbed' and 'files skipped'
* #1358262: Trash translator fails to create 'internal_op' directory under already existing trash directory
* #1358591: Fix spurious failure of tests/bugs/glusterd/bug-1111041.t
* #1359020: [Bitrot]: Sticky bit files considered and skipped by the scrubber, instead of getting ignored
* #1359364: changelog/rpc: Memory leak - rpc_clnt_t object is never freed
* #1359625: remove hardcoding in get_aux function
* #1359654: Polling failure errors seen when volume is started & stopped with SSL enabled setup
* #1360122: Tiering related core observed with "uuid_is_null ()" message
* #1360138: [Stress/Scale]: I/O errors out from gNFS mount points during high load on an erasure coded volume, logs flooded with error messages
* #1360174: IO error seen with rolling or non-disruptive upgrade of a distribute-disperse (EC) volume from 3.7.5 to 3.7.9
* #1360556: afr coverity fixes
* #1360573: Fix spurious failures in split-brain-favorite-child-policy.t
* #1360574: multiple failures of tests/bugs/disperse/bug-1236065.t
* #1360575: Fix spurious failures in ec.t
* #1360576: [Disperse volume]: IO hang seen on mount with file ops
* #1360579: tests: ./tests/bitrot/br-stub.t fails intermittently
* #1360985: [SNAPSHOT]: The PID for snapd is displayed even after the snapd process is killed
* #1361449: Direct io to sharded files fails when on zfs backend
* #1361483: posix: leverage FALLOC_FL_ZERO_RANGE in zerofill fop
* #1361665: Memory leak observed with upcall polling
* #1362025: Add output option --xml to man page of gluster
* #1362065: tests: ./tests/bitrot/bug-1244613.t fails intermittently
* #1362069: [GSS] Rebalance crashed
* #1362198: [tiering]: Files of size greater than that of the high watermark level should not be promoted
* #1363598: File not found errors during rpmbuild: /var/lib/glusterd/hooks/1/delete/post/S57glusterfind-delete-post.py{c,o}
* #1364326: Spurious failure in tests/bugs/glusterd/bug-1089668.t
* #1364329: Glusterd crashes upon receiving SIGUSR1
* #1364365: Bricks don't come online after reboot [ Brick Full ]
* #1364497: posix: honour fsync flags in posix_do_zerofill
* #1365265: Glusterd not operational due to snapshot conflicting with nfs-ganesha export file in "/var/lib/glusterd/snaps"
* #1365742: inode leak in brick process
* #1365743: GlusterFS - Memory Leak - High Memory Utilization
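As mentioned above, the update lands through the normal package channels. A minimal sketch of picking it up, assuming a dnf/yum-based system with the glusterfs packages already installed (the package glob is an assumption, adjust for your distribution):

    # assumption: dnf-based system; on older setups use yum instead
    dnf update 'glusterfs*'
    # confirm the installed version; should report glusterfs 3.8.2
    glusterfs --version

Note that already-running gluster daemons keep using the old code until they are restarted, so plan a restart of the services (or the node) to actually run 3.8.2.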