Dear team,

In our build of glusterfs 9.3 (the latest version we can use because of internal testing requirements), we made a patch because SLES15 has moved rpc.statd to /usr/sbin with no symlink at the standard location. Although we are not using locking, the missing binary was producing error messages. I know this isn't the right fix for submission, but it illustrates what we did in a SLES15-only build of glusterfs 9.3 (a rough sketch of a runtime fallback is at the end of this mail). I skimmed the release notes and don't see that this has been resolved, but I didn't look through git.

diff -Narup glusterfs-9.3.sgi-orig/xlators/nfs/server/src/nlm4.h glusterfs-9.3.sgi/xlators/nfs/server/src/nlm4.h
--- glusterfs-9.3.sgi-orig/xlators/nfs/server/src/nlm4.h	2021-06-29 00:27:44.662408609 -0500
+++ glusterfs-9.3.sgi/xlators/nfs/server/src/nlm4.h	2022-02-12 10:30:55.934953279 -0600
@@ -69,6 +69,10 @@
 #define GF_SM_NOTIFY_PIDFILE "/var/run/sm-notify.pid"
 #endif
 
+/* sles15 has statd only in /usr/sbin */
+#define GF_RPC_STATD_PROG "/usr/sbin/rpc.statd"
+
+
 extern rpcsvc_program_t *
 nlm4svc_init(xlator_t *nfsx);
 

PS: I understand you may not fix this since we are supposed to be using Ganesha. We have actually made a lot of progress with Ganesha since I last wrote about it. Our NFS root node boot problems are better now (the last problem distro was SLES15 SP3, but the SLES15 SP4 beta changed something and we no longer hang at boot while starting nscd!). The reasons we haven't switched yet are a) we still have a supported distro with boot hangs, and b) it's slower for our workload, but we're miles closer than a few months back! When we get a chance to breathe, we will ask some questions in the Ganesha forum about performance comparisons between gluster NFS and Ganesha using the pynamic library-load simulation (it uses MPI) on the NFS clients. We have something much closer to a repeatable test case than we had months ago, and no more daemon crashes either.

Thank you.
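
PPS: For completeness, a more general alternative would be to probe for rpc.statd at runtime instead of hardcoding one path. This is only a sketch under our assumptions, not something we have tested; nlm_statd_path() is an illustrative name, not an existing glusterfs symbol, and the real nlm code would call something like it wherever it currently uses GF_RPC_STATD_PROG:

/* Sketch only, not glusterfs code: pick whichever rpc.statd exists at
 * runtime instead of hardcoding a single location. */
#include <unistd.h>

#define RPC_STATD_SBIN    "/sbin/rpc.statd"     /* traditional location */
#define RPC_STATD_USRSBIN "/usr/sbin/rpc.statd" /* SLES15 location */

static const char *
nlm_statd_path(void)
{
    /* Prefer the traditional path; fall back to /usr/sbin (SLES15). */
    if (access(RPC_STATD_SBIN, X_OK) == 0)
        return RPC_STATD_SBIN;
    return RPC_STATD_USRSBIN;
}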