Hi,
glusterfs-3.6.2beta2 has been released and can be found here.
http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.6.2beta2/
This beta release fixes the bugs listed below that were reported since
3.6.2beta1 was made available. Thanks to all who submitted the patches
and reviewed the changes.
1180404 - nfs server restarts when a snapshot is deactivated
1180411 - CIFS:[USS]: glusterfsd OOM killed when 255 snapshots were
browsed at CIFS mount and Control+C is issued
1180070 - [AFR] getfattr on fuse mount gives error : Software caused
connection abort
1175753 - [readdir-ahead]: indicate EOF for readdirp
1175752 - [USS]: On a successful lookup, snapd logs are filled with
warnings: "dict OR key (entry-point) is NULL"
1175749 - glusterfs client crashed while migrating the fds
1179658 - Add brick fails if the parent dir of the new brick and an
existing brick is the same and the volume was accessed using libgfapi and smb.
1146524 - glusterfs.spec.in - synch minor diffs with fedora dist-git
glusterfs.spec
1175744 - [USS]: Unable to access .snaps after snapshot restore after
directories were deleted and recreated
1175742 - [USS]: browsing .snaps directory with CIFS fails with
"Invalid argument"
1175739 - [USS]: A non-root user who has no access to a directory is
able, from an NFS mount, to access the files under .snaps under that directory
1175758 - [USS]: Rebalance process tries to connect to snapd; if snapd
crashes, it might affect the rebalance process
1175765 - [USS]: When snapd has crashed, gluster volume stop/delete
operation fails, leaving the cluster in an inconsistent state
1173528 - Change in volume heal info command output
1166515 - [Tracker] RDMA support in glusterfs
1166505 - mount fails for nfs protocol in rdma volumes
1138385 - [DHT:REBALANCE]: Rebalance failures are seen with error
message " remote operation failed: File exists"
1177418 - entry self-heal in 3.5 and 3.6 are not compatible
1170954 - Fix mutex problems reported by coverity scan
1177899 - nfs: ls shows "Permission denied" with root-squash
1175738 - [USS]: data unavailability for a period of time when USS is
enabled/disabled
1175736 - [USS]: After deactivating a snapshot, trying to access the
remaining activated snapshots from an NFS mount gives an 'Invalid argument' error
1175735 - [USS]: snapd process is not killed once the glusterd comes back
1175733 - [USS]: If the snap name is the same as the snap-directory,
then cd to the virtual snap directory fails
1175756 - [USS] : Snapd crashed while trying to access the snapshots
under .snaps directory
1175755 - SNAPSHOT[USS]: gluster volume set for uss does not check any
boundaries
1175732 - [SNAPSHOT]: nouuid is appended for every snapshotted brick,
which causes duplication if the original brick already has nouuid
1175730 - [USS]: creating files/directories under .snaps shows a wrong
error message
1175754 - [SNAPSHOT]: If the node goes down before the snap is marked
to be deleted, then the snaps are propagated on other nodes and glusterd
hangs
1159484 - ls -alR can not heal the disperse volume
1138897 - NetBSD port
1175728 - [USS]: All uss-related logs are reported under
/var/log/glusterfs; it makes sense to move them into a subfolder
1170548 - [USS] : don't display the snapshots which are not activated
1170921 - [SNAPSHOT]: snapshot should be deactivated by default when
created
1175694 - [SNAPSHOT]: snapshotted volume is read only but it shows rw
attributes in mount
1161885 - Possible file corruption on dispersed volumes
1170959 - EC_MAX_NODES is defined incorrectly
1175645 - [USS]: Typo error in the description for USS under "gluster
volume set help"
1171259 - mount.glusterfs does not understand -n option
Regards,
Raghavendra Bhat
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel