Dear team,

I made a new PR (sorry, my inexperience with github.com is showing: I created a new PR instead of updating the old one; it seemed easier to close the old one and use the new one than to fix the old one). In the new PR I integrated the feedback I received - thank you so much: https://github.com/gluster/glusterfs/pull/4322 If I missed anything, please let me know! Thanks again to all.

Erik
Test environment setup
------------------------------------------------------------------------------
1. Set up 3 SLES15 SP5 virtual machines:
   - 80G virtual disk
   - "house" network (NAT)
   - "private network" (shared just among the 3 servers)
2. Install SLES15 SP5 pretty normally:
   - SUSE Linux Enterprise Server 15 SP5
   - no special software added
   - text mode
   - partitioning: I took the defaults but turned /home into /data (XFS)
3. Set the hostnames to 'gluster1', 'gluster2', and 'gluster3' (hostnamectl set-hostname).

Now we have 3 SLES15 SP5 servers with a default setup, except that all have an XFS filesystem mounted at /data (to be used with gluster). All have a SLES15 SP5 (virtual) DVD, and I enabled its repos by default in zypper and added the HPC repo for pdsh. I installed pdsh and pdsh-dshgroup to make this task easier and defined a pdsh group named 'gluster' that holds the 3 nodes.

Some dependencies
------------------------------------------------------------------------------
pdsh -g gluster zypper install --no-confirm liburing1

Install unpatched gluster
------------------------------------------------------------------------------
Build glusterfs 9.6 without the patch and install the rpms on the 3 test servers. The unpatched gluster 9.6 packages were copied to all three servers in /root/gluster-nopatch.

Install (didn't bother to make a repo):

pdsh -g gluster rpm -Uvh \
  /root/gluster-nopatch/glusterfs-9.6-150400.100.7730.1550.240320T1310.a.sles15sp5hpeerikjno_errno_patch.x86_64.rpm \
  /root/gluster-nopatch/libglusterfs0-9.6-150400.100.7730.1550.240320T1310.a.sles15sp5hpeerikjno_errno_patch.x86_64.rpm \
  /root/gluster-nopatch/libgfchangelog0-9.6-150400.100.7730.1550.240320T1310.a.sles15sp5hpeerikjno_errno_patch.x86_64.rpm \
  /root/gluster-nopatch/libglusterd0-9.6-150400.100.7730.1550.240320T1310.a.sles15sp5hpeerikjno_errno_patch.x86_64.rpm \
  /root/gluster-nopatch/libgfapi0-9.6-150400.100.7730.1550.240320T1310.a.sles15sp5hpeerikjno_errno_patch.x86_64.rpm \
  /root/gluster-nopatch/libgfrpc0-9.6-150400.100.7730.1550.240320T1310.a.sles15sp5hpeerikjno_errno_patch.x86_64.rpm \
  /root/gluster-nopatch/libgfxdr0-9.6-150400.100.7730.1550.240320T1310.a.sles15sp5hpeerikjno_errno_patch.x86_64.rpm

Configure gluster - base setup
------------------------------------------------------------------------------
Note: the kernel NFS server is not installed (on purpose, as this test is about gluster NFS).

pdsh -g gluster systemctl enable glusterd
pdsh -g gluster systemctl start glusterd

# Simple test case - let us not worry about the firewall
pdsh -g gluster systemctl stop firewalld
pdsh -g gluster systemctl disable firewalld

gluster peer probe 192.168.128.2  # not needed since localhost
gluster peer probe 192.168.128.3
gluster peer probe 192.168.128.4

Verified each host shows two peers.

Configure volume - sharded example
------------------------------------------------------------------------------
pdsh -g gluster mkdir /data/sharded
gluster volume create sharded replica 3 transport tcp 192.168.128.2:/data/sharded 192.168.128.3:/data/sharded 192.168.128.4:/data/sharded
gluster volume set sharded performance.cache-size 512MB
gluster volume set sharded performance.client-io-threads on
gluster volume set sharded performance.nfs.io-cache on
gluster volume set sharded nfs.nlm off
gluster volume set sharded nfs.ports-insecure off
gluster volume set sharded nfs.export-volumes on
gluster volume set sharded features.shard on
gluster volume set sharded nfs.disable off  # answer yes
gluster volume start sharded
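Before reproducing the problem, it can be worth confirming that the volume actually started and that the gluster NFS service is exporting it. A minimal sketch of that check (assuming the showmount client tool from the NFS client utilities is installed; exact output varies by version):

gluster volume info sharded      # should show "Status: Started" and nfs.disable: off
gluster volume status sharded    # the "NFS Server on ..." rows should be Online "Y" with a PID
showmount -e localhost           # gluster NFS registers with rpcbind, so the export should be listed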
Sharded Problem Duplication
------------------------------------------------------------------------------
I just ran this on one of the servers locally (192.168.128.2), but testing has shown that when the problem happens, it happens from any NFS client anywhere.

mkdir -p /mnt/sharded/fuse
mkdir -p /mnt/sharded/nfs

# FUSE mount:
mount -t glusterfs localhost:/sharded /mnt/sharded/fuse
systemctl restart glusterd  # Not sure why I had to restart glusterd again here

# NFS mount:
mount -t nfs -o vers=3 localhost:/sharded /mnt/sharded/nfs

# Make big and small test files from the fuse mount. First the big one:
cd /mnt/sharded/fuse
dd if=/dev/random of=testfile bs=1024k count=1024

# Confirm the md5sum is the same on both the nfs mount and the fuse mount.
# It should be. This always works (big files always work).
md5sum /mnt/sharded/fuse/testfile /mnt/sharded/nfs/testfile

# Now make a 1-byte file on the fuse mount
echo -n 1 > /mnt/sharded/fuse/small-testfile

# Now do an md5sum of fuse vs nfs - we reproduce the problem. Output:

gluster1:/mnt/sharded/fuse # md5sum /mnt/sharded/fuse/small-testfile /mnt/sharded/nfs/small-testfile
c4ca4238a0b923820dcc509a6f75849b  /mnt/sharded/fuse/small-testfile
md5sum: /mnt/sharded/nfs/small-testfile: Input/output error

Configure volume - NON-sharded example
------------------------------------------------------------------------------
pdsh -g gluster mkdir /data/NON-sharded
gluster volume create NON-sharded replica 3 transport tcp 192.168.128.2:/data/NON-sharded 192.168.128.3:/data/NON-sharded 192.168.128.4:/data/NON-sharded
gluster volume set NON-sharded performance.cache-size 512MB
gluster volume set NON-sharded performance.client-io-threads on
gluster volume set NON-sharded performance.nfs.io-cache on
gluster volume set NON-sharded nfs.nlm off
gluster volume set NON-sharded nfs.ports-insecure off
gluster volume set NON-sharded nfs.export-volumes on
gluster volume set NON-sharded nfs.disable off  # answer yes
gluster volume start NON-sharded

NON-Sharded Problem Duplication
------------------------------------------------------------------------------
Like the sharded case, I just ran this locally on one of the servers, but testing has shown it happens from an NFS client in any location.

mkdir -p /mnt/NON-sharded/fuse
mkdir -p /mnt/NON-sharded/nfs

# FUSE mount:
mount -t glusterfs localhost:/NON-sharded /mnt/NON-sharded/fuse
systemctl restart glusterd  # Not sure why I had to restart glusterd again here

# NFS mount:
mount -t nfs -o vers=3 localhost:/NON-sharded /mnt/NON-sharded/nfs

# Make big and small test files from the fuse mount. First the big one:
cd /mnt/NON-sharded/fuse
dd if=/dev/random of=testfile bs=1024k count=1024

# Confirm the md5sum is the same on both the nfs mount and the fuse mount.
# It should be. (This works.)
md5sum /mnt/NON-sharded/fuse/testfile /mnt/NON-sharded/nfs/testfile

# Now make a 1-byte file on the fuse mount
echo -n 1 > /mnt/NON-sharded/fuse/small-testfile

# Check the md5sum. NFS gives an IO error in the fault condition.
md5sum /mnt/NON-sharded/fuse/small-testfile /mnt/NON-sharded/nfs/small-testfile

PROBLEM reproduced here too:

gluster1:/mnt/NON-sharded/fuse # md5sum /mnt/NON-sharded/fuse/small-testfile /mnt/NON-sharded/nfs/small-testfile
c4ca4238a0b923820dcc509a6f75849b  /mnt/NON-sharded/fuse/small-testfile
md5sum: /mnt/NON-sharded/nfs/small-testfile: Input/output error
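Since the same fuse-vs-nfs comparison gets repeated for both volumes and both file sizes, it can be scripted for reuse while testing builds. A minimal sketch (check-vols.sh is a hypothetical helper name; it assumes the mount points and file names created in the steps above):

#!/bin/bash
# check-vols.sh - compare fuse vs nfs md5sums for both test volumes
for vol in sharded NON-sharded; do
    for f in testfile small-testfile; do
        fuse_sum=$(md5sum < "/mnt/$vol/fuse/$f")
        nfs_sum=$(md5sum < "/mnt/$vol/nfs/$f" 2>/dev/null)  # empty on EIO with unpatched builds
        if [ -n "$fuse_sum" ] && [ "$fuse_sum" = "$nfs_sum" ]; then
            echo "OK   $vol/$f"
        else
            echo "FAIL $vol/$f (nfs read failed or checksums differ)"
        fi
    done
done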
Problem Resolved with Patch
------------------------------------------------------------------------------
Created a new set of packages, but with the errno patch (corrected per the PR) included.

pdsh -g gluster mkdir -p /root/gluster-with-patch

# This copies the rpms to the first gluster server
scp *.rpm root@192.168.1.61:gluster-with-patch/

# This copies them to the other two
pdcp -g gluster /root/gluster-with-patch/* /root/gluster-with-patch/

# Perform the update
pdsh -g gluster rpm -Fvh /root/gluster-with-patch/*.rpm

# It is good to check whether the glusterfs process serving NFS restarted;
# it must have restarted for the fix to take effect (see the check sketched
# at the end of this section).

Now we repeat the previously working and previously failing md5sums.

SUCCESS - all md5sums report values now, with no IO errors:

gluster1:~ # md5sum /mnt/NON-sharded/fuse/testfile /mnt/NON-sharded/nfs/testfile
b4b85d33d083374ea2b6cf1cb2e3039a  /mnt/NON-sharded/fuse/testfile
b4b85d33d083374ea2b6cf1cb2e3039a  /mnt/NON-sharded/nfs/testfile

gluster1:~ # md5sum /mnt/NON-sharded/fuse/small-testfile /mnt/NON-sharded/nfs/small-testfile
c4ca4238a0b923820dcc509a6f75849b  /mnt/NON-sharded/fuse/small-testfile
c4ca4238a0b923820dcc509a6f75849b  /mnt/NON-sharded/nfs/small-testfile

gluster1:~ # md5sum /mnt/sharded/fuse/testfile /mnt/sharded/nfs/testfile
50397a73f68f272c28dd212671e22722  /mnt/sharded/fuse/testfile
50397a73f68f272c28dd212671e22722  /mnt/sharded/nfs/testfile

gluster1:~ # md5sum /mnt/sharded/fuse/small-testfile /mnt/sharded/nfs/small-testfile
c4ca4238a0b923820dcc509a6f75849b  /mnt/sharded/fuse/small-testfile
c4ca4238a0b923820dcc509a6f75849b  /mnt/sharded/nfs/small-testfile
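As noted in the update step above, the fix only takes effect once the glusterfs process serving NFS has restarted; rpm -Fvh alone does not guarantee that. A minimal sketch of the check, assuming glusterd spawns the gnfs server as a glusterfs process with --volfile-id gluster/nfs (as in setups like this one):

# Before the update, note the PID of the gnfs process on each node:
pdsh -g gluster 'pgrep -fa "volfile-id gluster/nfs"'

# ...run the rpm -Fvh update as above...

# Afterwards the PIDs should have changed; restarting glusterd (as was
# done after mounting, above) is one way to get the process respawned.
# gluster volume status also lists an "NFS Server on <host>" PID per
# node for cross-checking:
gluster volume status sharded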