Ralf Baechle wrote:
On Wed, Jan 27, 2010 at 01:22:53PM -0800, David Daney wrote:
After aligning the blocks returned by kmalloc, we need to save the
original pointer so they can be correctly freed.
There are no guarantees about the alignment of SKB data, so we need to
handle worst case alignment.
Since right shift does not distribute over subtraction, we need to fix
the back pointer calculation.
Signed-off-by: David Daney <ddaney@xxxxxxxxxxxxxxxxxx>
---
The original in the linux-queue tree is broken as it assumes the
kmalloc returns aligned blocks. This is not the case when slab
debugging is enabled.
Queue updated - but shouldn't the magic numbers 128 resp. 256 all over this
patch be replaced by L1_CACHE_BYTES resp. 2 * L1_CACHE_BYTES?
Although the cache line size and alignment happen to match the size and
alignment used by the FPA, they are different things. So it should
probably be a different symbolic constant with a value of 128.
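The point above could look like the following sketch. The constant name is hypothetical (not taken from the actual driver); the idea is simply that the FPA alignment gets its own symbol instead of borrowing L1_CACHE_BYTES, so the two values can diverge without silently breaking the driver.

```c
/* Hypothetical constant for illustration: the FPA requires 128-byte
 * aligned buffers. This only coincidentally equals the L1 cache line
 * size on this hardware, so it deserves its own name. */
#define CVMX_FPA_ALIGNMENT 128
```

If the alignment ever needed to track some hardware property, a compile-time check (e.g. a BUILD_BUG_ON in the kernel) could enforce the relationship explicitly rather than relying on the coincidence.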
David Daney