> ________________________________________
> From: Anuradha Talur [atalur@xxxxxxxxxx]
> Sent: 19 May 2016 14:59
> To: Jesper Led Lauridsen TS Infra server
> Cc: gluster-users@xxxxxxxxxxx
> Subject: Re: heal info report a gfid
>
> ----- Original Message -----
> > From: "Jesper Led Lauridsen TS Infra server" <JLY@xxxxx>
> > To: gluster-users@xxxxxxxxxxx
> > Sent: Thursday, May 19, 2016 2:49:33 PM
> > Subject: heal info report a gfid
> >
> > Hi,
> >
> > I have a replicated volume where "gluster volume heal <volume> info" reports
> > a GFID only on one of the bricks.
> >
> > The GFID refers to this file, but I can't locate the file on the brick
> > located on glustertst01 or on a mounted volume.
> > File =
> > /bricks/brick1/glu_rhevtst_dr2_data_01/6bdc67d1-4ae5-47e3-86c3-ef0916996862/master/tasks/ad75ad79-d90f-483d-8061-0ca640ad93d8/ad75ad79-d90f-483d-8061-0ca640ad93d8.task
> >
> > How do I solve this?
> >
> > # gluster volume info glu_rhevtst_dr2_data_01
> > Brick5: glustoretst01.net.dr.dk:/bricks/brick1/glu_rhevtst_dr2_data_01
> > Brick6: glustoretst02.net.dr.dk:/bricks/brick1/glu_rhevtst_dr2_data_01
> >
> > # gluster volume heal glu_rhevtst_dr2_data_01 info split-brain
> > Brick glustoretst01.net.dr.dk:/bricks/brick1/glu_rhevtst_dr2_data_01
> > Number of entries: 0
> > Brick glustoretst02.net.dr.dk:/bricks/brick1/glu_rhevtst_dr2_data_01
> > Number of entries: 0
> >
> > # gluster volume heal glu_rhevtst_dr2_data_01 info
> > Brick glustertst01.net.dr.dk:/bricks/brick1/glu_rhevtst_dr2_data_01/
> > Number of entries: 0
> > Brick glustertst02.net.dr.dk:/bricks/brick1/glu_rhevtst_dr2_data_01/
> > <gfid:325ccd9f-a7f1-4ad0-bfc8-6d4b73930b9f>
> > Number of entries: 1
>
> The self-heal daemon (if it is in the on state) will heal this file from
> glustertst02 to glustertst01. I'm not sure why you are trying to locate it.

The self-heal daemon is running, but it does not seem to heal the file. The
GFID entry has been there for a long time.
# gluster volume status glu_rhevtst_dr2_data_01
Brick glustoretst01.net.dr.dk:/bricks/brick1/glu_rhevtst_dr2_data_01   49156   Y   2676
Brick glustoretst02.net.dr.dk:/bricks/brick1/glu_rhevtst_dr2_data_01   49156   Y   2650
Self-heal Daemon on localhost                                          N/A     Y   38645
Quota Daemon on localhost                                              N/A     Y   38652
Self-heal Daemon on glustoretst01.net.dr.dk                            N/A     Y   14881
Quota Daemon on glustoretst01.net.dr.dk                                N/A     Y   14888

> If you want to locate it, this is how you do it:
> 1) Run ls -i /bricks/brick1/glu_rhevtst_dr2_data_01/.glusterfs/32/5c/325ccd9f-a7f1-4ad0-bfc8-6d4b73930b9f
>    on glustoretst02.net.dr.dk.
> 2) Find the file on the same brick with the inode number you got
>    (you can use the -inum option of find).
>
> You should be able to locate the file. I hope this answers your question.

I am not trying to locate the file; I did that already. What puzzles me is:

1. Why I can't find the file on the mounted gluster volume
   (/var/run/gluster/glu_rhevtst_dr2_data_01/), when I can find it on the
   brick of glustertst02.
2. Why the file doesn't get healed and synced to glustertst01.

The purpose of my quest is to get rid of this GFID entry before I upgrade
to 3.6.9. I'm currently running 3.6.2.

> > # stat
> > /var/run/gluster/glu_rhevtst_dr2_data_01/6bdc67d1-4ae5-47e3-86c3-ef0916996862/master/tasks/ad75ad79-d90f-483d-8061-0ca640ad93d8/ad75ad79-d90f-483d-8061-0ca640ad93d8.task
> > stat: cannot stat
> > `/var/run/gluster/glu_rhevtst_dr2_data_01/6bdc67d1-4ae5-47e3-86c3-ef0916996862/master/tasks/ad75ad79-d90f-483d-8061-0ca640ad93d8/ad75ad79-d90f-483d-8061-0ca640ad93d8.task':
> > No such file or directory
> >
> > glustertst02 ~]# getfattr -d -m .
> > -e hex
> > /bricks/brick1/glu_rhevtst_dr2_data_01/6bdc67d1-4ae5-47e3-86c3-ef0916996862/master/tasks/ad75ad79-d90f-483d-8061-0ca640ad93d8/ad75ad79-d90f-483d-8061-0ca640ad93d8.task
> > getfattr: Removing leading '/' from absolute path names
> > # file:
> > bricks/brick1/glu_rhevtst_dr2_data_01/6bdc67d1-4ae5-47e3-86c3-ef0916996862/master/tasks/ad75ad79-d90f-483d-8061-0ca640ad93d8/ad75ad79-d90f-483d-8061-0ca640ad93d8.task
> > security.selinux=0x73797374656d5f753a6f626a6563745f723a66696c655f743a733000
> > trusted.afr.glu_rhevtst_dr2_data_01-client-4=0x000000010000000200000000
> > trusted.afr.glu_rhevtst_dr2_data_01-client-5=0x000000000000000000000000
> > trusted.gfid=0x325ccd9fa7f14ad0bfc86d4b73930b9f
> > trusted.glusterfs.dht.linkto=0x676c755f726865767473745f6472325f646174615f30312d7265706c69636174652d3300
> > trusted.glusterfs.quota.bf0a8e25-e918-4ae3-a947-7971b7b8a372.contri=0x0000000000000000
> >
> > glustertst01 ~]# getfattr -n "trusted.gfid" -e hex
> > /bricks/brick1/glu_rhevtst_dr2_data_01/6bdc67d1-4ae5-47e3-86c3-ef0916996862/master/tasks/ad75ad79-d90f-483d-8061-0ca640ad93d8/ad75ad79-d90f-483d-8061-0ca640ad93d8.task
> > getfattr:
> > /bricks/brick1/glu_rhevtst_dr2_data_01/6bdc67d1-4ae5-47e3-86c3-ef0916996862/master/tasks/ad75ad79-d90f-483d-8061-0ca640ad93d8/ad75ad79-d90f-483d-8061-0ca640ad93d8.task:
> > No such file or directory
> >
> > Any help appreciated
> >
> > Thanks
> > Jesper
> >
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users@xxxxxxxxxxx
> > http://www.gluster.org/mailman/listinfo/gluster-users
>
> --
> Thanks,
> Anuradha.

Thanks
Jesper

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
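P.S. For anyone finding this thread later, Anuradha's two locate steps can be
wrapped into a small helper. This is only a sketch (the function name
gfid_to_path is mine, not a gluster tool); it assumes, as is the case for
regular files, that the .glusterfs/<aa>/<bb>/<gfid> entry on the brick is a
hard link to the real file and therefore shares its inode number:

```shell
#!/bin/sh
# Resolve a GFID to its path on a brick (regular files only).
gfid_to_path() {
    brick=$1
    gfid=$2
    # The backend entry lives under .glusterfs/<first 2 chars>/<next 2 chars>/<gfid>.
    backend="$brick/.glusterfs/$(printf '%s' "$gfid" | cut -c1-2)/$(printf '%s' "$gfid" | cut -c3-4)/$gfid"
    # Step 1: inode number of the backend hard link.
    inode=$(ls -i "$backend" | awk '{print $1}')
    # Step 2: the real path on the brick with that inode.
    find "$brick" -inum "$inode" -not -path "*/.glusterfs/*"
}

# e.g. on glustoretst02.net.dr.dk:
# gfid_to_path /bricks/brick1/glu_rhevtst_dr2_data_01 325ccd9f-a7f1-4ad0-bfc8-6d4b73930b9f
```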
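P.P.S. The trusted.afr values in the getfattr output above can be read
directly: each is three big-endian 32-bit counters of pending data, metadata,
and entry operations, in that order, that this brick holds against the other
replica. A sketch of decoding them (decode_afr is my name, not a gluster
command):

```shell
#!/bin/sh
# Decode a trusted.afr changelog value into its three big-endian
# 32-bit counters: pending data, metadata, and entry operations.
decode_afr() {
    hex=${1#0x}   # strip the leading 0x
    printf 'data=%d metadata=%d entry=%d\n' \
        "0x$(printf '%s' "$hex" | cut -c1-8)" \
        "0x$(printf '%s' "$hex" | cut -c9-16)" \
        "0x$(printf '%s' "$hex" | cut -c17-24)"
}

decode_afr 0x000000010000000200000000   # client-4: non-zero, heals pending
decode_afr 0x000000000000000000000000   # client-5: clean
```

A non-zero value against client-4 would explain why the entry keeps showing
up in heal info for glustertst01.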