On Tue, 23 Jun 2009, Arnd Bergmann wrote:
>
> @@ -71,7 +75,7 @@ static unsigned int do_csum(const unsigned char *buff, int len)
>  	if (count) {
>  		unsigned long carry = 0;
>  		do {
> -			unsigned long w = *(unsigned long *) buff;
> +			unsigned long w = *(unsigned int *) buff;
>  			count--;
>  			buff += 4;
>  			result += carry;

I don't think this is sufficient. You might need to make 'result', 'carry',
and 'w' be 'unsigned int' too.

Why? Because the final folding is only done from 32-bit to 16-bit; we don't
do the whole 64-bit to 32-bit to 16-bit chain.

Now, it's possible (even likely) that even with a 64-bit word we'll never
actually checksum large enough areas that 'result' would ever have very many
bits set in the 32+ bit region, and since we do end up folding to 16 bits
twice (once after the loop and once at the end), it _probably_ gets things
right in most cases.

But I doubt "probably" is strong enough. Somebody should check. Or just see
arch/alpha/lib/checksum.c, which does the whole 64-bit case. Maybe
lib/checksum.c should be lib/checksum_{32,64}.c.

		Linus
--
To unsubscribe from this list: send the line "unsubscribe linux-arch" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html