Wow, you guys seem to be having a real bugfixathon in your tla repository;
there was an impressive list of patches this week!
I grabbed the latest tla update a few hours ago and tried it out. My
observations:
1) NFS reexport seems to work! Glusterfsd no longer blows up to an
enormous size and then dies. It's completely happy, so far.
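In case anyone wants to reproduce this, here's a minimal sketch of one way to set up such a reexport with the kernel NFS server; the path and fsid are purely illustrative, and the explicit fsid is needed because the kernel won't export a FUSE mount without one:

/etc/exports:
/mnt/glusterfs  *(rw,sync,fsid=10,no_subtree_check)

then exportfs -ra and a plain NFS mount from the client.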
2) Stat-prefetch is still glitchy. I noticed the following when doing a
du of a subdirectory that was NFS reexported:
du -sk * |sort -n
du: `rkhunter-install/lib/rkhunter/docs/lustre': No such file or directory
du: `rkhunter-install/lib/rkhunter/docs/lost+found': No such file or directory
du: `rkhunter-install/lib/rkhunter/docs/system': No such file or directory
du: `rkhunter-install/lib/rkhunter/docs/internal': No such file or directory
du: `solaris/openldap-2.1.29/include/lustre': No such file or directory
du: `solaris/openldap-2.1.29/include/lost+found': No such file or directory
du: `solaris/openldap-2.1.29/include/system': No such file or directory
du: `solaris/openldap-2.1.29/include/internal': No such file or directory
du: `solaris/openldap-2.1.29/tests/progs/lustre': No such file or directory
du: `solaris/openldap-2.1.29/tests/progs/lost+found': No such file or directory
du: `solaris/openldap-2.1.29/tests/progs/system': No such file or directory
du: `solaris/openldap-2.1.29/tests/progs/internal': No such file or directory
du: `solaris/openldap-2.1.29/doc/devel/lustre': No such file or directory
du: `solaris/openldap-2.1.29/doc/devel/lost+found': No such file or directory
du: `solaris/openldap-2.1.29/doc/devel/system': No such file or directory
du: `solaris/openldap-2.1.29/doc/devel/internal': No such file or directory
...and then it gave what was presumably normal du output. lustre, lost+found,
system, and internal are the top-level directories of the filesystem and
definitely don't exist anywhere in the subtree being du'ed.
Further, after unmounting the NFS mount, I attempted the same du on the machine
that had reexported the filesystem, and its glusterfs promptly died.
When I repeated these tests without stat-prefetch, everything was perfect!
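For reference, a minimal sketch of the stat-prefetch layering I mean, with
stat-prefetch sitting directly on top of the protocol/client volume (volume
names, host address, and remote-subvolume are illustrative):

volume client
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.0.0.1
  option remote-subvolume brick
end-volume

volume prefetch
  type performance/stat-prefetch
  subvolumes client
end-volume

Dropping the prefetch volume from a chain like that (and mounting the client
volume directly) is what "without stat-prefetch" refers to above.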
3) Running glusterfs -s against a different machine to retrieve the spec
file now fails, while glusterfs -s against the local machine succeeds. It
looks like a small buglet was introduced in the -s support.
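To make the symptom concrete, the two invocations look roughly like this
(host name and mount point are illustrative):

glusterfs -s localhost /mnt/glusterfs    # retrieves the spec and mounts fine
glusterfs -s otherhost /mnt/glusterfs    # fails to retrieve the spec file

where otherhost is another machine running glusterfsd.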
I noticed that there were a lot of io-threads fixes this week. I assume,
however, that none of them fix the related mtime issue, so I haven't tried
io-threads.
In any case, with NFS reexport and our particular subset of modules all
appearing to be stable, it looks like we'll move to the next stage of
testing in our environment, with some internal deployments for our own use.
Many thanks,
Brent