GlusterFS 3.5.3beta2 is now available for testing

Many thanks to all our users who have reported bugs against the 3.5
version of GlusterFS! glusterfs-3.5.3beta2 has been made available for
testing.

Reporters of bugs are kindly requested to verify whether the fixed bugs
listed in the Release Notes have indeed been fixed. When testing is
done, please update any verified/failed bugs as soon as possible. If any
assistance is needed, do not hesitate to send a request to the Gluster
Users mailing list (gluster-users@xxxxxxxxxxx) or start a discussion in
the #gluster channel on Freenode IRC.

The release notes can be found on my blog; they will soon be synced
to blog.gluster.org as well:
- http://blog.nixpanic.net/2014/11/glusterfs-353beta2-release-notes.html

Packages for different distributions can be found on the main download
server:
- http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.5.3beta2/

Thank you in advance for testing,
Niels


[The notes below do not have functioning links; those work in the blog post.]

Release Notes for GlusterFS 3.5.3

This is a bugfix release. The Release Notes for 3.5.0, 3.5.1 and 3.5.2
contain a listing of all the new features that were added and the bugs
that were fixed in the GlusterFS 3.5 stable release.

Bugs Fixed:

    1081016: glusterd needs xfsprogs and e2fsprogs packages
    1100204: brick failure detection does not work for ext4 filesystems
    1126801: glusterfs logrotate config file pollutes global config
    1129527: DHT :- data loss - file is missing on renaming same file from multiple client at same time
    1129541: [DHT:REBALANCE]: Rebalance failures are seen with error message " remote operation failed: File exists"
    1132391: NFS interoperability problem: stripe-xlator removes EOF at end of READDIR
    1133949: Minor typo in afr logging
    1136221: The memories are exhausted quickly when handle the message which has multi fragments in a single record
    1136835: crash on fsync
    1138922: DHT + rebalance : rebalance process crashed + data loss + few Directories are present on sub-volumes but not visible on mount point + lookup is not healing directories
    1139103: DHT + Snapshot :- If snapshot is taken when Directory is created only on hashed sub-vol; On restoring that snapshot Directory is not listed on mount point and lookup on parent is not healing
    1139170: DHT :- rm -rf is not removing stale link file and because of that unable to create file having same name as stale link file
    1139245: vdsm invoked oom-killer during rebalance and Killed process 4305, UID 0, (glusterfs nfs process)
    1140338: rebalance is not resulting in the hash layout changes being available to nfs client
    1140348: Renaming file while rebalance is in progress causes data loss
    1140549: DHT: Rebalance process crash after add-brick and `rebalance start' operation
    1140556: Core: client crash while doing rename operations on the mount
    1141558: AFR : "gluster volume heal <volume_name> info" prints some random characters
    1141733: data loss when rebalance + renames are in progress and bricks from replica pairs goes down and comes back
    1142052: Very high memory usage during rebalance
    1142614: files with open fd's getting into split-brain when bricks goes offline and comes back online
    1144315: core: all brick processes crash when quota is enabled
    1145000: Spec %post server does not wait for the old glusterd to exit
    1147156: AFR client segmentation fault in afr_priv_destroy
    1147243: nfs: volume set help says the rmtab file is in "/var/lib/glusterd/rmtab"
    1149857: Option transport.socket.bind-address ignored
    1153626: Sizeof bug for allocation of memory in afr_lookup
    1153629: AFR : excessive logging of "Non blocking entrylks failed" in glfsheal log file.
    1153900: Enabling Quota on existing data won't create pgfid xattrs
    1153904: self heal info logs are filled with messages reporting ENOENT while self-heal is going on
    1155073: Excessive logging in the self-heal daemon after a replace-brick
    1157661: GlusterFS allows insecure SSL modes

Known Issues:

  * The following configuration changes are necessary for QEMU and the Samba VFS plugin to integrate with libgfapi seamlessly:

     1. Allow connections from unprivileged client ports on the volume:

         gluster volume set <volname> server.allow-insecure on

     2. Restart the volume for the option to take effect:

         gluster volume stop <volname>
         gluster volume start <volname>

     3. Edit /etc/glusterfs/glusterd.vol to contain this line (see the sketch after this list):

         option rpc-auth-allow-insecure on

     4. Restart glusterd:

         service glusterd restart

        More details are also documented in the Gluster Wiki on the Libgfapi with qemu libvirt page.
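
        For reference, after step 3 a glusterd.vol would look roughly like the sketch below. This assumes an otherwise stock file; the surrounding option lines vary between installations and versions:

            volume management
                type mgmt/glusterd
                option working-directory /var/lib/glusterd
                option transport-type socket,rdma
                # ... other existing options stay as-is ...
                option rpc-auth-allow-insecure on
            end-volume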

  * For Block Device translator based volumes, the open-behind translator on the client side needs to be disabled:

      gluster volume set <volname> performance.open-behind disable

  * libgfapi clients calling glfs_fini before a successful glfs_init will cause the client to hang, as reported here. The workaround is NOT to call glfs_fini for error cases encountered before a successful glfs_init (a sketch follows below). This is being tracked in Bug 1134050 for glusterfs-3.5 and Bug 1093594 for mainline.
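
    A minimal sketch of the workaround in C follows; the volume name "myvol", the host "server1", and the build command are illustrative assumptions, not part of this release:

        /* build (assumption): gcc workaround.c $(pkg-config --cflags --libs glusterfs-api) */
        #include <stdio.h>
        #include <stdlib.h>
        #include <glusterfs/api/glfs.h>

        int main(void)
        {
            glfs_t *fs = glfs_new("myvol");        /* placeholder volume name */
            if (!fs)
                return EXIT_FAILURE;

            glfs_set_volfile_server(fs, "tcp", "server1", 24007);

            if (glfs_init(fs) != 0) {
                /* Workaround (Bug 1134050): do NOT call glfs_fini() here,
                 * since calling it before a successful glfs_init() can hang. */
                fprintf(stderr, "glfs_init failed\n");
                return EXIT_FAILURE;
            }

            /* ... use the volume through other glfs_*() calls ... */

            glfs_fini(fs);                         /* safe after a successful glfs_init() */
            return EXIT_SUCCESS;
        }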

  * If the /var/run/gluster directory does not exist, enabling quota will likely fail (Bug 1117888).
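
    A likely workaround (inferred from the bug summary, not verified against this release) is to create the directory before enabling quota:

        mkdir -p /var/run/gluster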


_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
