On Mon, 2012-03-19 at 15:34 +0100, Andrea Arcangeli wrote:
> On Mon, Mar 19, 2012 at 03:07:59PM +0100, Peter Zijlstra wrote:
> > And no, I really don't think giving up 0.5% of RAM is acceptable.
>
> Fine, it's up to you :).
>
> Also note 16 bytes of those 24 bytes, you need to spend them too if
> you remotely hope to perform as good as AutoNUMA (I can already tell
> you...), they've absolutely nothing to do with the background scanning
> that AutoNUMA does to avoid modifying the apps.

Going by that size it can only be the list head, and you use that for
enqueueing the page on target-node lists for page migration (the size
sketch at the end of this mail spells out that accounting).

The thing is, since you work on page-granular objects you have to keep
this information per page. I work on vma objects and can make do with
this information per vma.

It would be ever so much more helpful if, instead of talking in clues
and riddles, you just said what you mean. Also, try to say it without
writing a book. I still haven't completely read your first email of
today (and probably never will -- it's just too big).

> The blame on autonuma you can give is 8 bytes per page only, so 0.07%,
> which I can probably reduce 0.03% if I screw the natural alignment of
> the list pointers and MAX_NUMNODES is < 32768 at build time, not sure
> if it's worth it.

Well, no, I can blame the entire size increase on auto-numa. I don't
need to enqueue individual pages to a target node; I simply unmap
everything that's on the wrong node and the migrate-on-fault logic
computes the target node from the vma information (see the toy model
below).
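To make that concrete, here is a tiny stand-alone C model of the
per-vma, migrate-on-fault scheme. All names here are made up for
illustration; this is a sketch of the idea, not the actual sched-numa
or kernel code:

#include <stdio.h>

/*
 * Toy model: the NUMA hint lives in the vma, one field per mapping
 * instead of per page, and misplaced pages are caught when a
 * previously unmapped page is next touched.
 */
struct vma  { int target_nid; };        /* per-vma NUMA target */
struct page { int nid; };               /* node the page lives on now */

/* Stand-in for migrate-on-fault: the target node is recomputed from
 * the vma at fault time, so no per-page queueing state is needed. */
static void numa_fault(struct vma *vma, struct page *page)
{
        if (page->nid != vma->target_nid) {
                printf("migrate page: node %d -> node %d\n",
                       page->nid, vma->target_nid);
                page->nid = vma->target_nid;    /* model the migration */
        }
}

int main(void)
{
        struct vma vma = { .target_nid = 1 };
        struct page page = { .nid = 0 };

        numa_fault(&vma, &page);        /* misplaced: gets migrated */
        numa_fault(&vma, &page);        /* now local: nothing to do */
        return 0;
}

The point being: the only NUMA state is one field in the vma, and the
target node falls out of the fault path, so there is no per-page list
head to carry around.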
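And, for reference, the size accounting behind the numbers above, again
as a stand-alone sketch with made-up names (assuming 64-bit pointers
and 4 KiB pages):

#include <stdio.h>

/*
 * On 64-bit a list_head is two pointers, i.e. 16 bytes -- hence the
 * deduction that 16 of the 24 bytes must be the list head used to
 * queue pages on per-node migration lists.
 */
struct list_head { void *next, *prev; };        /* 16 bytes on 64-bit */

struct page_numa {                      /* hypothetical per-page state */
        struct list_head migrate_list;  /* 16 bytes: migration queueing */
        long last_nid;                  /*  8 bytes: last NUMA node */
};                                      /* 24 bytes total */

int main(void)
{
        const double page_size = 4096.0;        /* assuming 4 KiB pages */

        /* 24 / 4096 = ~0.59%, i.e. the "0.5% of RAM" objected to above */
        printf("%zu bytes per page = %.2f%% of RAM\n",
               sizeof(struct page_numa),
               100.0 * sizeof(struct page_numa) / page_size);
        return 0;
}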