From: Jan Engelhardt <jengelh@xxxxxxxxxx>
Date: Tue, 24 Mar 2009 22:17:17 +0100 (CET)

> On Tuesday 2009-03-24 22:06, Eric Dumazet wrote:
>
> >>> +/*
> >>> + * Unfortunately, _b and _mask are not aligned to an int (or long int).
> >>> + * Some arches don't care; unrolling the loop is a win on them.
> >>> + */
> >>> +static unsigned long ifname_compare(const char *_a, const char *_b, const char *_mask)
> >>> +{
> >>> +#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
> >>> +	const unsigned long *a = (const unsigned long *)_a;
> >>> +	const unsigned long *b = (const unsigned long *)_b;
> >>
> >> I think we can at least give some help for the platforms which
> >> require alignment.
> >>
> >> We can, for example, assume 16-bit alignment and thus loop
> >> over u16's.
> >
> > Right. How about this incremental patch?
> >
> > Thanks
> >
> > [PATCH] arp_tables: ifname_compare() can assume 16bit alignment
> >
> > Arches without efficient unaligned access can still perform a loop
> > assuming 16bit alignment in ifname_compare()
>
> Allow me some skepticism, but the code looks pretty much like a
> standard memcmp.

memcmp() can't make any assumptions about alignment.

Whereas we _know_ this thing is exactly 16-bit aligned.

All of the optimized memcmp() implementations look for 32-bit
alignment and punt to byte-at-a-time comparison loops if things
are not aligned enough.

--
To unsubscribe from this list: send the line "unsubscribe netfilter-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
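
For readers following the thread, a u16 loop along the lines suggested above might look like the sketch below. This is an illustrative userspace version, not the actual kernel patch: the name ifname_compare16 and the use of plain <stdint.h> types instead of the kernel's u16 are my own; only 2-byte alignment of the inputs is assumed.

```c
#include <stdint.h>
#include <string.h>

#define IFNAMSIZ 16 /* fixed interface-name buffer size, as in the kernel */

/*
 * Masked compare of two IFNAMSIZ-byte buffers, reading 16 bits at a time.
 * Returns 0 when (a ^ b) & mask is zero for every halfword, nonzero
 * otherwise.  Unlike the unsigned-long version quoted in the patch, this
 * only requires the inputs to be 2-byte aligned.
 */
static unsigned long ifname_compare16(const void *_a, const void *_b,
				      const void *_mask)
{
	const uint16_t *a = _a;
	const uint16_t *b = _b;
	const uint16_t *mask = _mask;
	unsigned long ret = 0;
	size_t i;

	for (i = 0; i < IFNAMSIZ / sizeof(uint16_t); i++)
		ret |= (a[i] ^ b[i]) & mask[i];
	return ret;
}
```

Accumulating differences into ret with |= instead of returning early keeps the loop branch-free, which is also how the quoted unsigned-long variant works.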