Hi René,

On Sun, Sep 06, 2020 at 10:59:06AM +0200, René Scharfe wrote:

> -		uint32_t hash_value = htonl(entry->hash);
> -		hashwrite(f, &hash_value, sizeof(hash_value));
> +		hashwrite_be32(f, entry->hash);

This is an obviously correct translation of what's already written, and
indeed it is shorter and easier to read.

Unfortunately, I think there is some more subtlety here, since the hash
cache isn't guaranteed to be aligned, and so blindly calling htonl()
(either directly in write_hash_cache(), or indirectly in
hashwrite_be32()) might cause tools like ASan to complain when loading
data on architectures that don't support fast unaligned reads.

So, I think that we could do one of three things, depending on how much
you care about improving this case ;-).

  - Leave your patch alone, accepting that this case, which was broken
    before, will remain broken, and leave it as #leftoverbits.

  - Discard your patch as-is, and replace the 'htonl()' with
    'get_be32()' before handing the value off to 'hashwrite()'; or

  - Change the 'hashwrite_beXX()' implementations to use the correct
    'get_beXX()' wrappers, which behave like htonl() on architectures
    with fast unaligned loads, and fall back to byte reads and shifts on
    architectures that don't.

Credit goes to Peff for finding this issue in GitHub's fork. We were
planning on sending those patches to the list soon, but they are tied up
with a longer series in the meantime.

For what it's worth, I think doing any of the above would be fine.

Thanks,
Taylor