On Thu, Jun 13, 2019 at 01:19:13PM -0400, Jeff King wrote:

> On Sun, Jun 09, 2019 at 11:22:59AM +0200, SZEDER Gábor wrote:
>
> > So, 'test oidmap' from the previous patch prints the value we want to
> > check with:
> >
> >   printf("%u\n", sha1hash(oid.hash));
> >
> > First, since object ids inherently make more sense as hex values, it
> > would be more appropriate to print that hash with the '%x' format
> > specifier, and then we wouldn't need Perl's hex() anymore, and thus
> > could swap the order of the first four bytes in oidmap's hash without
> > relying on Perl, e.g. with:
> >
> >   sed -e 's/^\(..\)\(..\)\(..\)\(..\).*/\4\3\2\1/'
> >
> > Second, and more importantly, the need for swapping the byte order
> > indicates that this test would fail on big-endian systems, I'm afraid.
> > So I think we need an additional bswap32() on the printing side, and
> > then could further simplify 'test_oidhash':
>
> I agree with all your points about using hex and pushing the logic into
> test-oidmap.c.  BUT.
>
> At the point where we are normalizing byte order of the hashes, I have
> to wonder: why do we care about testing the hash value in the first
> place?  We care that oidmap can store and retrieve values, and that it
> performs well.  But as long as it does those things, I don't think
> anybody cares if it uses the first 4 bytes of the sha1 or the last 4.
>
> I know there are testing philosophies that go to this level of
> white-box testing, but I don't think we usually do in Git.  A unit
> test of oidmap's externally visible behavior seems like the right
> level to me.

That's a good point... but then why does 't0011-hashmap.sh' do it in
the first place?  As far as I understood, t0016 mainly follows t0011's
lead.
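
FWIW, here is a quick standalone sketch of the endianness point above
(made-up hash bytes, not the actual test-oidmap.c code, and using
explicit byte composition instead of bswap32() just to keep it
self-contained): memcpy()ing the first four hash bytes into an integer
the way sha1hash() does gives a host-dependent value, while composing
the bytes explicitly prints the same hex digits as the object id on
any host:

  #include <stdio.h>
  #include <stdint.h>
  #include <string.h>

  int main(void)
  {
          /* pretend these are the first four bytes of an object id */
          unsigned char hash[4] = { 0x89, 0xab, 0xcd, 0xef };
          uint32_t raw, be;

          /* host byte order, like sha1hash(): differs between hosts */
          memcpy(&raw, hash, sizeof(raw));

          /* explicit big-endian composition: 89abcdef everywhere */
          be = ((uint32_t)hash[0] << 24) | ((uint32_t)hash[1] << 16) |
               ((uint32_t)hash[2] << 8)  |  (uint32_t)hash[3];

          printf("host-dependent: %x\n", (unsigned)raw);
          printf("normalized:     %x\n", (unsigned)be);
          return 0;
  }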