Just a quick update. I was wrong in saying the issue is reproducible in 3.7; what I can see is that this issue is fixed in 3.7. Now I need to find the patch which fixed it and backport it to 3.6. Would it be possible for you to upgrade the setup to 3.7 if you want a quick solution?

~Atin

On 08/17/2015 07:23 PM, Atin Mukherjee wrote:
> I've not got a chance to look at it, which I will do now. Thanks for
> the reminder!
>
> -Atin
> Sent from one plus one
>
> On Aug 17, 2015 7:19 PM, "Davy Croonen" <davy.croonen@xxxxxxxxxxx> wrote:
>
>> Hi Atin
>>
>> Any news on this one?
>>
>> KR
>> Davy
>>
>> On 12 Aug 2015, at 16:41, Atin Mukherjee <atin.mukherjee83@xxxxxxxxx> wrote:
>>
>> Davy,
>>
>> I will check this with Kaleb and get back to you.
>>
>> -Atin
>> Sent from one plus one
>>
>> On Aug 12, 2015 7:22 PM, "Davy Croonen" <davy.croonen@xxxxxxxxxxx> wrote:
>>
>>> Atin
>>>
>>> No problem to raise a bug for this, but isn't this already addressed here:
>>>
>>> Bug 1111670 <https://bugzilla.redhat.com/show_bug.cgi?id=1111670> -
>>> continuous log entries "failed to get inode size"
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1111670#c2
>>>
>>> KR
>>> Davy
>>>
>>> On 12 Aug 2015, at 14:56, Atin Mukherjee <amukherj@xxxxxxxxxx> wrote:
>>>
>>> Well, this looks like a bug in 3.7 as well. I've posted a fix [1]
>>> to address it.
>>>
>>> [1] http://review.gluster.org/11898
>>>
>>> Could you please raise a bug for this?
>>>
>>> ~Atin
>>>
>>> On 08/12/2015 01:32 PM, Davy Croonen wrote:
>>>
>>> Hi Atin
>>>
>>> Thanks for your answer. The op-version was indeed an old one, 30501 to be
>>> precise. I've updated the op-version to the one you suggested with the
>>> command: gluster volume set all cluster.op-version 30603. From testing it
>>> seems this issue is solved for the moment.
>>>
>>> Considering the errors in the etc-glusterfs-glusterd.vol.log file I'm
>>> looking forward to hearing from you.
>>>
>>> Thanks in advance.
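[Editor's note: a quick way to verify on each node that the op-version bump took effect is to read it back from glusterd's info file. The sketch below parses a sample line rather than the live file, so it runs anywhere; the path /var/lib/glusterd/glusterd.info assumes a default install, and the value shown is just an illustration.]

```shell
# glusterd records the cluster op-version in its info file, e.g.:
#   grep operating-version /var/lib/glusterd/glusterd.info
# Here we parse a sample line instead of the live file.
line="operating-version=30603"
opver="${line#*=}"            # strip everything up to and including '='
echo "current op-version: $opver"

# Anything below 30603 still uses the old cluster-wide lock; raise it with:
#   gluster volume set all cluster.op-version 30603
[ "$opver" -ge 30603 ] && echo "per-volume locking available"
```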
>>>
>>> KR
>>> Davy
>>>
>>> On 11 Aug 2015, at 19:28, Atin Mukherjee <atin.mukherjee83@xxxxxxxxx> wrote:
>>>
>>> -Atin
>>> Sent from one plus one
>>>
>>> On Aug 11, 2015 7:54 PM, "Davy Croonen" <davy.croonen@xxxxxxxxxxx> wrote:
>>>
>>> Hi all
>>>
>>> Our etc-glusterfs-glusterd.vol.log is filling up with entries like these:
>>>
>>> [2015-08-11 11:40:33.807940] E
>>> [glusterd-utils.c:7410:glusterd_add_inode_size_to_dict] 0-management:
>>> tune2fs exited with non-zero exit status
>>> [2015-08-11 11:40:33.807962] E
>>> [glusterd-utils.c:7436:glusterd_add_inode_size_to_dict] 0-management:
>>> failed to get inode size
>>>
>>> I will check this and get back to you.
>>>
>>> From the mailing list archive I understood this was a problem in
>>> gluster version 3.4 and should have been fixed. We started out from
>>> version 3.5 and upgraded in the meantime to version 3.6.4, but the
>>> error in the log still appears.
>>>
>>> We are also unable to execute the command
>>>
>>> $ gluster volume status all inode
>>>
>>> and afterwards gluster hangs with the message "Another transaction is in
>>> progress. Please try again after sometime." while executing the command
>>>
>>> $ gluster volume status
>>>
>>> Have you bumped up the op-version to 30603? Otherwise glusterd will still
>>> use cluster-wide locking, and then multiple commands can't run
>>> simultaneously.
>>>
>>> Are the error messages in the logs related to gluster hanging while
>>> executing the mentioned commands? And any ideas about how to fix this?
>>>
>>> The error messages are not because of this.
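[Editor's note: for context on those log lines, the probe in glusterd_add_inode_size_to_dict effectively runs `tune2fs -l <brick-device>` and scrapes the "Inode size" field; tune2fs only understands ext2/3/4, so on other brick filesystems it exits non-zero and glusterd logs the errors quoted above. A minimal sketch of the scraping step, using a hypothetical sample output line so it runs without a real ext4 device:]

```shell
# Parse the "Inode size" field roughly the way glusterd does after
# running `tune2fs -l <brick-device>`. The sample line stands in for
# real tune2fs output.
sample="Inode size:               256"
inode_size=$(printf '%s\n' "$sample" | awk -F: '{gsub(/[[:space:]]/, "", $2); print $2}')
echo "inode size: $inode_size"

# On a non-ext filesystem tune2fs itself fails, e.g.:
#   tune2fs: Bad magic number in super-block while trying to open /dev/sdb1
# which is when glusterd logs "tune2fs exited with non-zero exit status".
```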
>>>
>>> Kind regards
>>> Davy
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users@xxxxxxxxxxx
>>> http://www.gluster.org/mailman/listinfo/gluster-users

--
~Atin
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel