Re: [RFC][PATCH 2/3] mm: slab: move around slab ->freelist for cmpxchg

On Thu, Dec 12, 2013 at 05:46:02PM +0000, Christoph Lameter wrote:
> On Wed, 11 Dec 2013, Dave Hansen wrote:
> 
> >
> > The write-argument to cmpxchg_double() must be 16-byte aligned.
> > We used to align 'struct page' itself in order to guarantee this,
> > but that wastes 8 bytes per page.  Instead, we take 8 bytes
> > internal to the page before page->counters and move freelist
> > between there and the existing 8 bytes after counters.  That way,
> > no matter how 'struct page' itself is aligned, we can ensure that
> > we have a 16-byte area with which to do this cmpxchg.
> 
> Well this adds additional branching to the fast paths.

The branch should be predictable, and compare the cost of a branch
(near nothing on a modern OOO CPU with low-IPC code like this, when
predicted) to the cost of a cache miss (due to a larger struct page).
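
For illustration, here is a rough sketch of the slot-picking scheme
being discussed (the struct, field, and helper names are made up for
the example; this is not the actual patch):

#include <stdint.h>

/*
 * Two candidate 8-byte slots bracket ->counters.  With struct page
 * only guaranteed 8-byte alignment, exactly one of &slot_lo and
 * &counters lands on a 16-byte boundary, so freelist can live in
 * whichever slot completes that aligned pair.
 */
struct fake_page {
        uintptr_t slot_lo;      /* 8 bytes taken just before counters */
        uintptr_t counters;
        uintptr_t slot_hi;      /* the existing 8 bytes after counters */
};

/* Return the 16-byte-aligned area to hand to cmpxchg_double(). */
static inline uintptr_t *cmpxchg_area(struct fake_page *page)
{
        if (!((uintptr_t)&page->slot_lo & 15))
                return &page->slot_lo;  /* pair is (slot_lo, counters) */
        return &page->counters;         /* pair is (counters, slot_hi) */
}

The test resolves the same way every time a given page is touched, so
the extra branch in the fast path should stay predicted.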

-Andi

-- 
ak@xxxxxxxxxxxxxxx -- Speaking for myself only
