When I installed the 3.5.3 beta on my HPC cluster, I got the following
warning during the mounts:
WARNING: getfattr not found, certain checks will be skipped..
I do not have attr installed on my compute nodes. Is this something
I need in order for gluster to work properly, or can it safely be
ignored?
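A quick way to check for getfattr on a node looks like this (a sketch; the
"attr" package name is an assumption for RHEL/CentOS-family systems, where
it is the usual provider of getfattr/setfattr):

```shell
#!/bin/sh
# Check whether getfattr is on the PATH and report where it lives.
if command -v getfattr >/dev/null 2>&1; then
    echo "getfattr found: $(command -v getfattr)"
else
    # Package name assumed; on Debian/Ubuntu the package is also "attr".
    echo "getfattr missing; try: yum install attr"
fi
```

Running this across the compute nodes (e.g. via pdsh) would show which of
them trigger the warning.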
David
------ Original Message ------
From: "Niels de Vos" <ndevos@xxxxxxxxxx>
To: gluster-users@xxxxxxxxxxx; gluster-devel@xxxxxxxxxxx
Sent: 10/5/2014 8:44:59 AM
Subject: [Gluster-users] glusterfs-3.5.3beta1 has been released for
testing
GlusterFS 3.5.3 (beta1) has been released and is now available for
testing. Get the tarball from here:
- http://bits.gluster.org/pub/gluster/glusterfs/src/glusterfs-3.5.3beta1.tar.gz
Packages for different distributions will land on the download server
over the next few days. When packages become available, the package
maintainers will send a notification to this list.
With this beta release, we make it possible for bug reporters and
testers to check if issues have indeed been fixed. All community
members are invited to test and/or comment on this release.
This release for the 3.5 stable series includes the following bug
fixes:
- 1081016: glusterd needs xfsprogs and e2fsprogs packages
- 1129527: DHT :- data loss - file is missing on renaming same file
from multiple client at same time
- 1129541: [DHT:REBALANCE]: Rebalance failures are seen with error
message " remote operation failed: File exists"
- 1132391: NFS interoperability problem: stripe-xlator removes EOF at
end of READDIR
- 1133949: Minor typo in afr logging
- 1136221: The memories are exhausted quickly when handle the message
which has multi fragments in a single record
- 1136835: crash on fsync
- 1138922: DHT + rebalance : rebalance process crashed + data loss +
few Directories are present on sub-volumes but not visible on mount
point + lookup is not healing directories
- 1139103: DHT + Snapshot :- If snapshot is taken when Directory is
created only on hashed sub-vol; On restoring that snapshot Directory is
not listed on mount point and lookup on parent is not healing
- 1139170: DHT :- rm -rf is not removing stale link file and because of
that unable to create file having same name as stale link file
- 1139245: vdsm invoked oom-killer during rebalance and Killed process
4305, UID 0, (glusterfs nfs process)
- 1140338: rebalance is not resulting in the hash layout changes being
available to nfs client
- 1140348: Renaming file while rebalance is in progress causes data
loss
- 1140549: DHT: Rebalance process crash after add-brick and `rebalance
start' operation
- 1140556: Core: client crash while doing rename operations on the
mount
- 1141558: AFR : "gluster volume heal <volume_name> info" prints some
random characters
- 1141733: data loss when rebalance + renames are in progress and
bricks from replica pairs goes down and comes back
- 1142052: Very high memory usage during rebalance
- 1142614: files with open fd's getting into split-brain when bricks
goes offline and comes back online
- 1144315: core: all brick processes crash when quota is enabled
- 1145000: Spec %post server does not wait for the old glusterd to exit
- 1147243: nfs: volume set help says the rmtab file is in
"/var/lib/glusterd/rmtab"
To get more information about the above bugs, go to
https://bugzilla.redhat.com, enter the bug number in the search box and
press enter.
If a bug from this list has not been sufficiently fixed, please open
the bug report, leave a comment with details of the testing, and change
the status of the bug to ASSIGNED.
In case someone has successfully verified a fix for a bug, please
change the status of the bug to VERIFIED.
The release notes have been posted for review, and a blog post contains
a more readable version:
- http://review.gluster.org/8903
- http://blog.nixpanic.net/2014/10/glusterfs-353beta1-has-been-released.html
Comments in bug reports, over email, or on IRC (#gluster on Freenode)
are much appreciated.
Thanks for testing,
Niels
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-devel