Point noted, will keep you informed next time!

Thanks and Regards,
Kotresh H R

----- Original Message -----
> From: "Kaushal M" <kshlmster@xxxxxxxxx>
> To: "Kotresh Hiremath Ravishankar" <khiremat@xxxxxxxxxx>
> Cc: "Aravinda" <avishwan@xxxxxxxxxx>, "Gluster Devel" <gluster-devel@xxxxxxxxxxx>, maintainers@xxxxxxxxxxx
> Sent: Thursday, March 31, 2016 7:32:58 PM
> Subject: Re: [Gluster-Maintainers] Update on 3.7.10 - on schedule to be tagged at 2200PDT 30th March.
>
> This is a really hard-to-hit issue that requires a lot of things to
> be in place for it to happen.
> But it is an unexpected data loss issue.
>
> I'll wait tonight for the change to be merged, though I really don't like it.
>
> You could have informed me on this thread earlier.
> Please, in the future, keep release managers/maintainers updated about
> any critical changes.
>
> The only reason this is getting merged now is because of the Jenkins
> migration, which got completed surprisingly quickly.
>
> On Thu, Mar 31, 2016 at 7:08 PM, Kotresh Hiremath Ravishankar
> <khiremat@xxxxxxxxxx> wrote:
> > Kaushal,
> >
> > I just replied to Aravinda's mail. Anyway, pasting the snippet in case someone
> > missed it.
> >
> > "In the scenario mentioned by Aravinda below, when an unlink comes on an
> > entry, its 'loc->pargfid' gets modified to "/" in the changelog xlator.
> > The consequence is that, by the time it hits posix, 'loc->pargfid' points
> > to "/" instead of the actual parent. This is not so terrible yet, as we
> > are saved by posix. Posix checks 'loc->path' first; only if it is not
> > filled does it use the 'pargfid/bname' combination. So only for clients
> > like self-heal, which do not populate 'loc->path', and only when the same
> > basename exists on root, does the unlink happen on root instead of the
> > actual path."
> >
> > Thanks and Regards,
> > Kotresh H R
> >
> > ----- Original Message -----
> >> From: "Kaushal M" <kshlmster@xxxxxxxxx>
> >> To: "Aravinda" <avishwan@xxxxxxxxxx>
> >> Cc: "Gluster Devel" <gluster-devel@xxxxxxxxxxx>, maintainers@xxxxxxxxxxx,
> >> "Kotresh Hiremath Ravishankar" <khiremat@xxxxxxxxxx>
> >> Sent: Thursday, March 31, 2016 6:56:18 PM
> >> Subject: Re: [Gluster-Maintainers] Update on 3.7.10 - on schedule to be
> >> tagged at 2200PDT 30th March.
> >>
> >> Kotresh, could you please provide the details?
> >>
> >> On Thu, Mar 31, 2016 at 6:43 PM, Aravinda <avishwan@xxxxxxxxxx> wrote:
> >> > Hi Kaushal,
> >> >
> >> > We have a changelog bug which can lead to data loss if Glusterfind is
> >> > enabled (to be specific, when the changelog.capture-del-path and
> >> > changelog.changelog options are enabled on a replica volume).
> >> >
> >> > http://review.gluster.org/#/c/13861/
> >> >
> >> > This is a very corner case, but good to go with the release. We tried
> >> > to merge this before the merge window for 3.7.10, but the regression
> >> > runs are not yet complete :(
> >> >
> >> > Do you think we should wait for this patch?
> >> >
> >> > @Kotresh can provide more details about this issue.
> >> >
> >> > regards
> >> > Aravinda
> >> >
> >> >
> >> > On 03/31/2016 01:29 PM, Kaushal M wrote:
> >> >>
> >> >> The last change for 3.7.10 has been merged now. Commit 2cd5b75 will be
> >> >> used for the release. I'll be preparing release notes and tagging the
> >> >> release soon.
> >> >>
> >> >> After running verification tests and checking for any perf
> >> >> improvements, I'll be making the release tarball.
> >> >>
> >> >> Regards,
> >> >> Kaushal
> >> >>
> >> >> On Wed, Mar 30, 2016 at 7:00 PM, Kaushal M <kshlmster@xxxxxxxxx> wrote:
> >> >>>
> >> >>> Hi all,
> >> >>>
> >> >>> I'll be taking over the release duties for 3.7.10. Vijay is busy and
> >> >>> could not get the time to do a scheduled release.
> >> >>>
> >> >>> The .10 release has been scheduled for tagging on the 30th (i.e. today).
> >> >>> In the interests of providing some heads-up to developers wishing to
> >> >>> get changes merged,
> >> >>> I'll be waiting till 10PM PDT, 30th March (0500UTC/1030IST, 31st
> >> >>> March) to tag the release.
> >> >>>
> >> >>> So you have ~15 hours to get any required changes merged.
> >> >>>
> >> >>> Thanks,
> >> >>> Kaushal
> >> >>
> >> >> _______________________________________________
> >> >> maintainers mailing list
> >> >> maintainers@xxxxxxxxxxx
> >> >> http://www.gluster.org/mailman/listinfo/maintainers
> >> >
> >> >
> >> > _______________________________________________
> >> > Gluster-devel mailing list
> >> > Gluster-devel@xxxxxxxxxxx
> >> > http://www.gluster.org/mailman/listinfo/gluster-devel