On Mon, Sep 21, 2020 at 03:13:05PM +0200, Han-Wen Nienhuys wrote:

> The problem is actually triggered by the Git-provided put_be64()
> (which appears unused in the Git source code). The definition
>
>   #define put_be64(p, v) do { *(uint64_t *)(p) = htonll(v); } while (0)
>
> is trying to reinterpret a char* as a uint64_t*, which is illegal in
> strict aliasing rules?

Ah, thanks for finding that. I think this resolves the mystery I had
around aliasing warnings with put_be32() in f39ad38410 (fast-export:
use local array to store anonymized oid, 2020-06-25).

I got confused because when I looked at the implementation of
put_be32(), I stupidly looked at the NO_ALIGNED_LOADS fallback code,
which indeed does not alias. But the one we use on x86 does.

I thought this would be OK because C allows aliasing through "char *"
pointers. But I think it may only go the other way. I.e., you can
type-pun a uint64_t as a char, but not the other way around. But then
why doesn't basically every use of put_be32() and put_be64() complain?

> I originally had
>
>   +void put_be64(uint8_t *out, uint64_t v)
>   +{
>   +	int i = sizeof(uint64_t);
>   +	while (i--) {
>   +		out[i] = (uint8_t)(v & 0xff);
>   +		v >>= 8;
>   +	}
>   +}
>
> in my reftable library, which is portable. Is there a reason for the
> magic with htonll and friends?

Presumably it was thought to be faster. This comes originally from the
block-sha1 code in 660231aa97 (block-sha1: support for architectures
with memory alignment restrictions, 2009-08-12). I don't know how it
compares in practice, especially these days. Our fallback routines are
similar to an unrolled version of what you wrote above.

-Peff
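
For illustration, an unrolled byte-wise big-endian store along the
lines described above might look something like this (the name
put_be64_bytewise is made up for this sketch, and this is not the
exact code in Git's compat/bswap.h):

	#include <stdint.h>

	/*
	 * Store a 64-bit value in big-endian byte order, one byte at a
	 * time. Writing through a uint8_t pointer sidesteps both the
	 * strict-aliasing problem of casting the destination to
	 * uint64_t * and any alignment requirement on it.
	 */
	static inline void put_be64_bytewise(uint8_t *out, uint64_t v)
	{
		out[0] = (uint8_t)(v >> 56);
		out[1] = (uint8_t)(v >> 48);
		out[2] = (uint8_t)(v >> 40);
		out[3] = (uint8_t)(v >> 32);
		out[4] = (uint8_t)(v >> 24);
		out[5] = (uint8_t)(v >> 16);
		out[6] = (uint8_t)(v >> 8);
		out[7] = (uint8_t)(v >> 0);
	}

Modern compilers often recognize this pattern and emit a single
byte-swapped store on architectures where that is safe, so the
pointer-casting version may not buy much in practice these days.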