Re: netfilter 07/41: arp_tables: unfold two critical loops in arp_packet_match()

On Tuesday 2009-03-24 22:06, Eric Dumazet wrote:
>>> +/*
>>> + * Unfortunately, _b and _mask are not aligned to an int (or long int).
>>> + * Some arches don't care; unrolling the loop is a win on them.
>>> + */
>>> +static unsigned long ifname_compare(const char *_a, const char *_b, const char *_mask)
>>> +{
>>> +#ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
>>> +	const unsigned long *a = (const unsigned long *)_a;
>>> +	const unsigned long *b = (const unsigned long *)_b;
>> 
>> I think we can at least give some help for the platforms which
>> require alignment.
>> 
>> We can, for example, assume 16-bit alignment and thus loop
>> over u16's
>
>Right. How about this incremental patch?
>
>Thanks
>
>[PATCH] arp_tables: ifname_compare() can assume 16bit alignment
>
>Architectures without efficient unaligned access can still perform the loop
>assuming 16-bit alignment in ifname_compare().

Allow me some skepticism, but the code looks pretty much like a
standard memcmp.

> 	unsigned long ret = 0;
>+	const u16 *a = (const u16 *)_a;
>+	const u16 *b = (const u16 *)_b;
>+	const u16 *mask = (const u16 *)_mask;
> 	int i;
> 
>-	for (i = 0; i < IFNAMSIZ; i++)
>-		ret |= (_a[i] ^ _b[i]) & _mask[i];
>+	for (i = 0; i < IFNAMSIZ/sizeof(u16); i++)
>+		ret |= (a[i] ^ b[i]) & mask[i];
> #endif
> 	return ret;
> }
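For readers following along, the two variants being discussed boil down to the
sketch below. The byte-wise loop and the u16 loop mirror the quoted kernel
code; everything else (the local IFNAMSIZ definition, the alignment attributes
and the main() test harness) is only an illustrative userspace approximation,
not part of the patch.

/* Userspace approximation of the ifname_compare() variants above.
 * The two loops mirror the quoted code; main() is a made-up harness. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define IFNAMSIZ 16	/* same value as in <linux/if.h> */

/* Reference implementation: byte-wise masked compare, no alignment
 * requirement.  Returns 0 if the names match under the mask. */
static unsigned long ifname_compare_bytes(const char *_a, const char *_b,
					  const char *_mask)
{
	unsigned long ret = 0;
	int i;

	for (i = 0; i < IFNAMSIZ; i++)
		ret |= (_a[i] ^ _b[i]) & _mask[i];
	return ret;
}

/* 16-bit variant from the incremental patch: assumes the buffers are
 * at least 2-byte aligned, which holds for the ifname fields embedded
 * in struct arpt_entry. */
static unsigned long ifname_compare_u16(const char *_a, const char *_b,
					const char *_mask)
{
	unsigned long ret = 0;
	const uint16_t *a = (const uint16_t *)_a;
	const uint16_t *b = (const uint16_t *)_b;
	const uint16_t *mask = (const uint16_t *)_mask;
	unsigned int i;

	for (i = 0; i < IFNAMSIZ / sizeof(uint16_t); i++)
		ret |= (a[i] ^ b[i]) & mask[i];
	return ret;
}

int main(void)
{
	/* Hypothetical test vectors: packet device vs. rule device/mask. */
	char dev[IFNAMSIZ]  __attribute__((aligned(2))) = "eth0";
	char rule[IFNAMSIZ] __attribute__((aligned(2))) = "eth0";
	char mask[IFNAMSIZ] __attribute__((aligned(2))) = { 0 };

	memset(mask, 0xff, strlen(rule) + 1);	/* match "eth0" exactly */

	printf("bytes: %lu, u16: %lu (0 means match)\n",
	       ifname_compare_bytes(dev, rule, mask),
	       ifname_compare_u16(dev, rule, mask));
	return 0;
}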