On Mon, Jun 26, 2017 at 8:01 PM, Shyam <srangana@xxxxxxxxxx> wrote:
On 06/20/2017 08:41 AM, Pranith Kumar Karampuri wrote:
1) Are there any pending *blocker* bugs that need to be tracked for
3.11.1? If so, mark them against the provided tracker [1] as blockers
for the release, or at the very least post them as a response to this
mail.
I just added https://bugzilla.redhat.com/show_bug.cgi?id=1463250 as a
blocker for this release. We just completed the discussion about the
solution on gluster-devel. We are hoping to get the patch in by EOD
tomorrow IST. This is a geo-rep regression we introduced by changing
the node-uuid behavior. My mistake :-(
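To illustrate the kind of behavior change in question, here is a rough sketch (not the actual fix under review for BZ 1463250) of how a consumer such as geo-rep could read the node-uuid virtual xattr defensively; the xattr key "trusted.glusterfs.node-uuid", the mount path, and the helper name are assumptions for the example, and the value is treated as either a single UUID (old behavior) or a space-separated list of UUIDs (new behavior):

    # Illustrative sketch only, not the patch merged for BZ 1463250.
    # Assumes the DHT virtual xattr key "trusted.glusterfs.node-uuid" and
    # that its value is one UUID or a space-separated list of UUIDs.
    import os
    import uuid

    def get_node_uuids(mount_path):
        # Read the virtual xattr from a FUSE-mounted volume (the trusted.*
        # namespace generally requires root privileges).
        raw = os.getxattr(mount_path, "trusted.glusterfs.node-uuid")
        text = raw.decode("utf-8").strip("\x00").strip()
        uuids = []
        for token in text.split():
            try:
                uuids.append(str(uuid.UUID(token)))  # keep only valid UUIDs
            except ValueError:
                continue
        return uuids

    if __name__ == "__main__":
        # Hypothetical mount point; adjust to an actual GlusterFS mount.
        print(get_node_uuids("/mnt/glustervol"))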
As we wait for the final patch in this series that fixes the bug mentioned above, I wanted to say something about regressions here.
The patches that introduced the problem were present in 3.11.0 itself, hence this is not a regression for 3.11.1.
The problem is a regression from 3.8.x.
- https://review.gluster.org/#/c/17312/
- https://review.gluster.org/#/c/17336/
- https://review.gluster.org/#/c/17318/
Delaying the release should be based on regressions and blockers (think data corruption, cores, or larger-than-sustainable breakage of code).
In this instance, I think we overreacted, and hence I wanted to take the opportunity to point that out (I should have done my part as well, checking when this problem was introduced, etc., before delaying the release).
In the future, we would like to stick with the release calendar, as that is published and well known, rather than delay releases. Hence, when raising blockers for a release or asking to delay it, expect more questions and more diligence to be required.
Maybe a good rule of thumb to come up with, I think, would be: what should be done when a bug is a regression relative to an earlier release, but this branch has already shipped a release containing that regression? That clarity would prevent these kinds of issues from repeating. Another question is what should be communicated to users when such a slip happens.
Thanks,
Shyam
_______________________________________________
maintainers mailing list
maintainers@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/maintainers
--
Pranith
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-devel