Status on the crash in dht_fsync reported by Vijay Bellur: I am not able
to reproduce the issue. However, I am consistently hitting a failure in
the last test (test 18) of ./tests/performance/open-behind.t. The test
reads as:

    gluster volume top $V0 open | grep -w "$F0" >/dev/null 2>&1
    TEST [ $? -eq 0 ];

"gluster volume top" is not listing the file name in its output, which
causes the grep, and hence the test, to fail.
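If the name is missing only because the top counters get populated after
the test samples them, a retry-based check might make the test more
robust. This is a rough sketch, not a verified fix: it assumes the
EXPECT_WITHIN helper from the regression-test framework, and the
file_in_top_open helper name is my own:

    # Poll "gluster volume top" for a while instead of sampling its
    # output exactly once; print "Y" as soon as $F0 shows up.
    function file_in_top_open {
            gluster volume top $V0 open | grep -qw "$F0" && echo "Y"
    }
    EXPECT_WITHIN 10 "Y" file_in_top_open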
regards,
Raghavendra.

----- Original Message -----
> From: "Vijaikumar M" <vmallika@xxxxxxxxxx>
> To: "Gluster Devel" <gluster-devel@xxxxxxxxxxx>
> Sent: Tuesday, May 26, 2015 4:43:23 PM
> Subject: Re: Moratorium on new patch acceptance
>
> Here is the status on the quota test-case spurious failures.
>
> There were three issues:
>
> 1) Quota exceeding the limit because of parallel writes - merged
> upstream, patch submitted to release-3.7 (#10910)
> ./tests/bugs/quota/bug-1038598.t
> ./tests/bugs/distribute/bug-1161156.t
>
> 2) Quota accounting going wrong - patch submitted (#10918)
> ./tests/basic/ec/quota.t
> ./tests/basic/quota-nfs.t
>
> 3) Quota with anonymous FDs on NetBSD:
> This is an NFS client caching issue on NetBSD. Sachin and I are
> working on it.
> ./tests/basic/quota-anon-fd-nfs.t
>
>
> Thanks,
> Vijay
>
>
> On Friday 22 May 2015 11:45 PM, Vijay Bellur wrote:
> > On 05/21/2015 12:07 AM, Vijay Bellur wrote:
> >> On 05/19/2015 11:56 PM, Vijay Bellur wrote:
> >>> On 05/18/2015 08:03 PM, Vijay Bellur wrote:
> >>>> On 05/16/2015 03:34 PM, Vijay Bellur wrote:
> >>>>
> >>>>>
> >>>>> I will send daily status updates from Monday (05/18) about this
> >>>>> so that we are clear about where we are and what needs to be
> >>>>> done to remove this moratorium. Appreciate your help in having a
> >>>>> clean set of regression tests going forward!
> >>>>>
> >>>>
> >>>> We have made some progress since Saturday. The problem with
> >>>> glupy.t has been fixed - thanks to Niels! All but the following
> >>>> tests have developers looking into them:
> >>>>
> >>>> ./tests/basic/afr/entry-self-heal.t
> >>>> ./tests/bugs/replicate/bug-976800.t
> >>>> ./tests/bugs/replicate/bug-1015990.t
> >>>> ./tests/bugs/quota/bug-1038598.t
> >>>> ./tests/basic/ec/quota.t
> >>>> ./tests/basic/quota-nfs.t
> >>>> ./tests/bugs/glusterd/bug-974007.t
> >>>>
> >>>> Can the submitters of these test cases or the current feature
> >>>> owners pick these up and start looking into the failures, please?
> >>>> Do update the spurious-failures etherpad [1] once you pick up a
> >>>> particular test.
> >>>>
> >>>>
> >>>> [1] https://public.pad.fsfe.org/p/gluster-spurious-failures
> >>>
> >>>
> >>> Update for today - all tests that are known to fail have owners.
> >>> Thanks everyone for chipping in! I think we should be able to lift
> >>> this moratorium and resume normal patch acceptance shortly.
> >>>
> >>
> >> Today's update - Pranith fixed a bunch of failures in erasure
> >> coding and Avra removed a test that was no longer relevant - thanks
> >> for that!
> >>
> >> Quota, afr, snapshot & tiering tests are being looked into. Will
> >> provide an update on where we are with these tomorrow.
> >>
> >
> > A few tests have not been readily reproducible. Of the remaining
> > tests, all but the following have either been root-caused or have
> > patches in review:
> >
> > ./tests/basic/mount-nfs-auth.t
> > ./tests/performance/open-behind.t
> > ./tests/basic/ec/ec-5-2.t
> > ./tests/basic/quota-nfs.t
> >
> > With some reviews and investigations of failing tests happening over
> > the weekend, I am optimistic about being able to accept patches as
> > usual from early next week.
> >
> > Thanks,
> > Vijay

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel