Hi All,
I tried many things to troubleshoot my issues: recreating the volumes with different configurations, using different installation media for the OS, and re-installing the gluster environment several times.
My issues were resolved by reformatting my /opt partition from XFS to EXT4, then recreating the gluster volumes with EXT4-backed bricks rather than XFS-backed bricks.
Is there any reason to believe that XFS is unsuitable as a file system for gluster bricks?
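For anyone who finds this thread later, the fix boiled down to steps like the following. This is only a sketch: the device name /dev/sdb1 and the two-brick volume layout are my guesses, not confirmed details from this thread (the real cluster had eight bricks), and the commands are echoed rather than executed because mkfs and volume deletion destroy data.

```shell
#!/bin/sh
# Dry-run sketch of moving gluster bricks from XFS to EXT4. Commands are
# echoed, not executed, because mkfs and "volume delete" are destructive.
# /dev/sdb1 and the brick layout below are assumptions, not thread details.
run() { echo "+ $*"; }

run gluster volume stop DSR
run gluster volume delete DSR
run umount /opt
run mkfs.ext4 /dev/sdb1        # previously mkfs.xfs
run mount /dev/sdb1 /opt
run mkdir -p /opt/dsr
# ...repeat the brick preparation on each server, then recreate the volume:
run gluster volume create DSR glusterfs1:/opt/dsr glusterfs2:/opt/dsr
run gluster volume start DSR
```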
On Tue, Oct 6, 2015 at 9:20 AM, Cobin Bluth <cbluth@xxxxxxxxx> wrote:
I appreciate your response, Joe.

Even before asking for help in IRC and on this mailing list, I had googled and come across information about healing and split-brain, but nothing I found helped.

I tried "gluster volume heal DSR info" and this is the output I received:

[root@glusterfs1 ~]# gluster volume heal DSR info
Brick glusterfs1:/opt/dsr/
Number of entries: 0
Brick glusterfs2:/opt/dsr/
Number of entries: 0
Brick glusterfs3:/opt/dsr/
Number of entries: 0
Brick glusterfs4:/opt/dsr/
Number of entries: 0
Brick glusterfs5:/opt/dsr/
Number of entries: 0
Brick glusterfs6:/opt/dsr/
Number of entries: 0
Brick glusterfs7:/opt/dsr/
Number of entries: 0
Brick glusterfs8:/opt/dsr/
Number of entries: 0
[root@glusterfs1 ~]# gluster volume heal DSR info split-brain
Brick glusterfs1:/opt/dsr/
Number of entries in split-brain: 0
Brick glusterfs2:/opt/dsr/
Number of entries in split-brain: 0
Brick glusterfs3:/opt/dsr/
Number of entries in split-brain: 0
Brick glusterfs4:/opt/dsr/
Number of entries in split-brain: 0
Brick glusterfs5:/opt/dsr/
Number of entries in split-brain: 0
Brick glusterfs6:/opt/dsr/
Number of entries in split-brain: 0
Brick glusterfs7:/opt/dsr/
Number of entries in split-brain: 0
Brick glusterfs8:/opt/dsr/
Number of entries in split-brain: 0
[root@glusterfs1 ~]#

If it helps, I am using CentOS 7 and the latest repo found here:
http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo

[root@glusterfs1 ~]# cat /etc/centos-release
CentOS Linux release 7.1.1503 (Core)
[root@glusterfs1 ~]# yum repolist
Loaded plugins: fastestmirror
Determining fastest mirrors
 * base: lug.mtu.edu
 * epel: mirrors.cat.pdx.edu
 * extras: centos.mirrors.tds.net
 * updates: mirrors.cat.pdx.edu
repo id                   repo name                                       status
!base/7/x86_64            CentOS-7 - Base                                 8,652
!epel/x86_64              Extra Packages for Enterprise Linux 7 - x86_64  8,524
!extras/7/x86_64          CentOS-7 - Extras                               214
!glusterfs-epel/7/x86_64  GlusterFS is a clustered file-system capable of scaling to several petabytes.  14
!glusterfs-noarch-epel/7  GlusterFS is a clustered file-system capable of scaling to several petabytes.  2
!updates/7/x86_64         CentOS-7 - Updates                              1,486
repolist: 18,892
[root@glusterfs1 ~]# yum list installed | grep -i gluster
glusterfs.x86_64                 3.7.4-2.el7      @glusterfs-epel
glusterfs-api.x86_64             3.7.4-2.el7      @glusterfs-epel
glusterfs-cli.x86_64             3.7.4-2.el7      @glusterfs-epel
glusterfs-client-xlators.x86_64  3.7.4-2.el7      @glusterfs-epel
glusterfs-fuse.x86_64            3.7.4-2.el7      @glusterfs-epel
glusterfs-libs.x86_64            3.7.4-2.el7      @glusterfs-epel
glusterfs-rdma.x86_64            3.7.4-2.el7      @glusterfs-epel
glusterfs-server.x86_64          3.7.4-2.el7      @glusterfs-epel
samba-vfs-glusterfs.x86_64       4.1.12-23.el7_1  @updates
[root@glusterfs1 ~]#

And I am still experiencing the same error. I am not very familiar with troubleshooting this; is there something that I am missing? I have since set up another cluster in the same fashion and hit the issue again. I appreciate the information regarding split-brain, but that doesn't seem to be the case here; could there be another culprit?

On Mon, Oct 5, 2015 at 7:44 PM, Joe Julian <joe@xxxxxxxxxxxxxxxx> wrote:
[19:33] <JoeJulian> AceFacee check "gluster volume heal DSR info"
[19:33] <JoeJulian> it sounds like split-brain.
[19:34] <JoeJulian> @split-brain
[19:34] <glusterbot> JoeJulian: To heal split-brains, see https://gluster.readthedocs.org/en/release-3.7.0/Features/heal-info-and-split-brain-resolution/ For additional information, see this older article https://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/ Also see splitmount https://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/
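The check those links describe can be sketched as follows: a file is in split-brain when its trusted.afr.* changelog xattrs are non-zero on more than one replica. This is a dry-run sketch (the command is echoed, not executed), and the file path is illustrative, not taken from this thread.

```shell
#!/bin/sh
# Dry-run sketch of the split-brain check described in the links above:
# inspect the AFR changelog xattrs of the same file directly on each brick.
# Non-zero trusted.afr.* counters on more than one replica mean split-brain.
# The file path is illustrative, not taken from this thread.
run() { echo "+ $*"; }

# Run on each brick server that holds a replica of the suspect file:
run getfattr -d -m trusted.afr -e hex /opt/dsr/path/to/suspect-file
```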
On 10/05/2015 05:43 PM, Cobin Bluth wrote:
Please see the following:
root@Asus:/mnt/GlusterFS-POC# mount
[ ...truncated... ]
GlusterFS1:DSR on /mnt/GlusterFS-POC type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
root@Asus:/mnt/GlusterFS-POC# pwd
/mnt/GlusterFS-POC
root@Asus:/mnt/GlusterFS-POC# for i in {1..10}; do cat /mnt/10MB-File > $i.tmp; done
cat: write error: Input/output error
cat: write error: Input/output error
cat: write error: Input/output error
cat: write error: Input/output error
cat: write error: Input/output error
cat: write error: Input/output error
cat: write error: Input/output error
cat: write error: Input/output error
cat: write error: Input/output error
cat: write error: Input/output error
cat: write error: Input/output error
cat: write error: Input/output error
cat: write error: Input/output error
cat: write error: Input/output error
cat: write error: Input/output error
cat: write error: Input/output error
cat: write error: Input/output error
cat: write error: Input/output error
cat: write error: Input/output error
cat: write error: Input/output error
root@Asus:/mnt/GlusterFS-POC# cat /mnt/10MB-File > test-file
cat: write error: Input/output error
cat: write error: Input/output error
root@Asus:/mnt/GlusterFS-POC# dd if=/dev/zero of=test-file bs=1M count=10
dd: error writing ‘test-file’: Input/output error
dd: closing output file ‘test-file’: Input/output error
root@Asus:/mnt/GlusterFS-POC# ls -lsha
total 444K
9.0K drwxr-xr-x  4 root root 4.1K Oct 5 17:32 .
4.0K drwxr-xr-x 10 root root 4.0K Oct 5 17:16 ..
 13K -rw-r--r--  1 root root 257K Oct 5 17:30 10.tmp
1.0K -rw-r--r--  1 root root 129K Oct 5 17:30 1.tmp
 13K -rw-r--r--  1 root root 257K Oct 5 17:30 2.tmp
1.0K -rw-r--r--  1 root root 129K Oct 5 17:30 3.tmp
 13K -rw-r--r--  1 root root 257K Oct 5 17:30 4.tmp
 13K -rw-r--r--  1 root root 257K Oct 5 17:30 5.tmp
 13K -rw-r--r--  1 root root 257K Oct 5 17:30 6.tmp
 13K -rw-r--r--  1 root root 257K Oct 5 17:30 7.tmp
 13K -rw-r--r--  1 root root 257K Oct 5 17:30 8.tmp
 13K -rw-r--r--  1 root root 257K Oct 5 17:30 9.tmp
329K -rw-r--r--  1 root root 385K Oct 5 17:32 test-file
   0 drwxr-xr-x  3 root root   48 Oct 5 16:05 .trashcan
root@Asus:/mnt/GlusterFS-POC#
I am trying to do tests on my gluster volume to see how well it will work for me. I am getting that error when I try to use dd on it.
What would be the best way to troubleshoot this?
Thanks,
Cobin
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users