On 3/1/2022 7:09 AM, Derrick Stolee wrote:
> On 3/1/2022 2:33 AM, Junio C Hamano wrote:
>> Jacob Keller <jacob.keller@xxxxxxxxx> writes:
>>
>>> On Mon, Feb 28, 2022 at 6:36 PM Junio C Hamano <gitster@xxxxxxxxx> wrote:
>>>>
>>>> Jacob Keller <jacob.e.keller@xxxxxxxxx> writes:
>>>>
>>>>> +test_expect_success 'name-rev without commitGraph does not handle non-monotonic timestamps' '
>>>>> +	test_config -C non-monotonic core.commitGraph false &&
>>>>> +	(
>>>>> +		cd non-monotonic &&
>>>>> +
>>>>> +		rm -rf .git/info/commit-graph* &&
>>>>> +
>>>>> +		echo "main~3 undefined" >expect &&
>>>>> +		git name-rev --tags main~3 >actual &&
>>>>> +
>>>>> +		test_cmp expect actual
>>>>> +	)
>>>>> +'
>>>>
>>>> I doubt it is wise to "test" that a program does _not_ produce a
>>>> correct output, or even worse, that it produces a particular wrong
>>>> output. This test, for example, casts in stone that any future
>>>> optimization that does not depend on the commit-graph is forever
>>>> prohibited.
>>>>
>>>> Just dropping the test would be fine, I would think.
>>>
>>> Stolee mentioned it. We could also convert it to a
>>> "test_expect_failure" with the expected output too... But that makes
>>> it look like something we'll fix.
>>
>> Neither sounds like a good idea anyway. What we care about most is
>> that, with the commit graph, the algorithm will not be fooled by
>> skewed timestamps.
>
> I'm fine with losing this test.
>
> I perhaps lean too hard on "tests should document current behavior"
> so we know when we are changing behavior, and the commit can justify
> that change. For this one, we are really documenting that we have
> an optimization that doesn't walk all commits based on the date of
> the target commit. If we dropped that optimization accidentally,
> then we have no test that verifies we don't walk the entire commit
> history with these name-rev queries.

I think the "tests should document current behavior" concern is handled
by the fact that this specific test fails if you revert the name-rev
changes but keep the test.

> If there is value in documenting that optimization, then a
> comment before the test could describe that the output is not
> desirable, but it's due to an optimization that we want to keep in
> place.
>
> Thanks,
> -Stolee

What about a test which uses something like the trace system to list
all the commits it checked? I guess that might get a bit messy, but it
could cover the "this optimization is important" concern, and it would
apply to the commit-graph implementation rather than keeping a negative
test of the other implementation. A rough sketch is below.

Thanks,
Jake
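The sketch mentioned above would look something like this. It is
hypothetical: name-rev emits no trace2 data today, so the
"name-rev"/"commits_walked" category and key (and the
trace2_data_intmax() call in name-rev that would produce them) are
invented here for illustration.

test_expect_success 'name-rev with commitGraph does not walk all commits' '
	GIT_TRACE2_EVENT="$(pwd)/trace.event" \
		git -C non-monotonic name-rev --tags main~3 &&

	# A data event from the hypothetical counter would look like:
	# {"event":"data", ..., "category":"name-rev",
	#  "key":"commits_walked", "value":"4"}
	grep "\"category\":\"name-rev\"" trace.event |
		grep "\"key\":\"commits_walked\"" >data &&
	test_line_count = 1 data
'

That only proves the counter exists; asserting an upper bound on its
value is what would actually protect the optimization, at the cost of
a test that is sensitive to the exact shape of the walk.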