2) Stat-prefetch is still glitchy. I noticed the following when doing a du of a subdirectory that was NFS reexported:

  du -sk * |sort -n
  du: `rkhunter-install/lib/rkhunter/docs/lustre': No such file or directory
  du: `rkhunter-install/lib/rkhunter/docs/lost+found': No such file or directory
  du: `rkhunter-install/lib/rkhunter/docs/system': No such file or directory
  du: `rkhunter-install/lib/rkhunter/docs/internal': No such file or directory
  du: `solaris/openldap-2.1.29/include/lustre': No such file or directory
  du: `solaris/openldap-2.1.29/include/lost+found': No such file or directory
  du: `solaris/openldap-2.1.29/include/system': No such file or directory
  du: `solaris/openldap-2.1.29/include/internal': No such file or directory
  du: `solaris/openldap-2.1.29/tests/progs/lustre': No such file or directory
  du: `solaris/openldap-2.1.29/tests/progs/lost+found': No such file or directory
  du: `solaris/openldap-2.1.29/tests/progs/system': No such file or directory
  du: `solaris/openldap-2.1.29/tests/progs/internal': No such file or directory
  du: `solaris/openldap-2.1.29/doc/devel/lustre': No such file or directory
  du: `solaris/openldap-2.1.29/doc/devel/lost+found': No such file or directory
  du: `solaris/openldap-2.1.29/doc/devel/system': No such file or directory
  du: `solaris/openldap-2.1.29/doc/devel/internal': No such file or directory
  ...

...and then it gave presumably normal du output. lustre, lost+found, system, and internal are the top-level directories in the filesystem and definitely don't exist in the area that was being du'ed. Further, after unmounting the NFS, I attempted the same du on the machine that had reexported the filesystem, and its glusterfs promptly died.
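For reference, a minimal sketch of the kind of client-side spec this exercises, with stat-prefetch stacked on top of the protocol/client volume (the hostnames and volume names below are placeholders, not our actual spec file); the test without stat-prefetch mentioned further down just amounts to dropping the prefetch volume and mounting the client volume directly:

  volume client
    type protocol/client
    option transport-type tcp/client
    option remote-host server1          # placeholder server name
    option remote-subvolume brick       # placeholder remote volume name
  end-volume

  volume prefetch
    type performance/stat-prefetch      # the translator in question
    subvolumes client
  end-volume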
do you have the glusterfs log backtrace? or the core dump?
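if a core was left behind, a backtrace can be pulled with something like the following gdb session (the binary and core paths here are just placeholders):

  gdb /usr/sbin/glusterfs /path/to/core     # load the glusterfs binary together with the core
  (gdb) bt full                             # full backtrace of the crashing thread
  (gdb) thread apply all bt                 # backtraces from all threads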
Repeating these tests without stat-prefetch, everything was perfect!
Hmm, I can imagine a few conflicts between the way stat-prefetch works and NFS re-exports. Will run some more tests to get a better idea.
3) When doing glusterfs -s to a different machine to retrieve the spec file, it now fails. A glusterfs -s to the local machine succeeds. It looks like a small buglet was introduced in the -s support.
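For clarity, the two invocations were along these lines (the hostname and mount point here are placeholders, not the exact commands from our setup):

  glusterfs -s localhost /mnt/glusterfs       # spec fetched from the local machine: works
  glusterfs -s specserver /mnt/glusterfs      # spec fetched from a different machine: now fails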
this is fixed now; it was caused by an unrelated change triggered by the new way -s works.
I noticed that there were a lot of io-threads fixes this week. I assume, however, that none of them fix the related mtime issue, so I haven't tried io-threads.
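For context, trying io-threads would just mean layering another performance volume into the spec, along these lines (the volume name and thread count are purely illustrative):

  volume iot
    type performance/io-threads
    option thread-count 4       # illustrative value
    subvolumes client
  end-volume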
right.
In any case, with NFS reexport and our particular subset of modules all appearing to be stable, it looks like we're going to move to the next stage of testing in our environment, with some internal deployments for our own usage.
great to hear that :) we'll be more than happy to fix your issues on priority if you happen to face any. the current tla will become the next pre3 release, happening today (and eventually 1.3 stable if no more new bugs are found). thanks!

avati

--
Anand V. Avati