On Thu, Mar 27, 2014 at 07:06:03PM +0800, Jianyu Zhan wrote:
> Presently, after we fail the first walk of the pcpu_slot list to
> find a chunk to allocate from, we drop the pcpu_lock spinlock and go
> allocate a new chunk.  We then re-take pcpu_lock and, hoping that in
> the meantime somebody has freed space for us (we hold pcpu_alloc_mutex
> throughout, so only freeing or reclaiming can happen), we do a full
> rewalk of the pcpu_slot list.
>
> However, if nobody freed any space, this full rewalk is wasted work,
> and we eventually fall back to the new chunk anyway.
>
> Since we hold pcpu_alloc_mutex, only the freeing and reclaiming paths
> can touch pcpu_slot (they only need to hold pcpu_lock), so we can
> maintain a pcpu_slot_stat bitmap recording whether, during the window
> in which we don't hold pcpu_lock, anybody freed space into a slot we
> are interested in.  If so, we retry allocation from just those slots;
> if not, we allocate from the newly allocated, fully free chunk.

The patch probably needs to be refreshed on top of percpu/for-3.15.

Hmmm... I'm not sure whether the added complexity is worthwhile.  It's
a fairly cold path.  Can you show how helpful this optimization is?

Thanks.

--
tejun
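
For illustration, here is a minimal sketch of the pcpu_slot_stat scheme
described in the quoted text.  It assumes the mm/percpu.c internals of
that era (pcpu_slot, pcpu_nr_slots, struct pcpu_chunk and its list
member, all manipulated under pcpu_lock); PCPU_NR_SLOTS_MAX and
pcpu_chunk_fits() are hypothetical names invented for the sketch, not
part of the patch or the kernel:

/*
 * Sketch only: assumes pcpu_slot, pcpu_nr_slots and struct pcpu_chunk
 * from mm/percpu.c are in scope.  PCPU_NR_SLOTS_MAX is a hypothetical
 * compile-time upper bound on pcpu_nr_slots (a real patch might size
 * the bitmap dynamically instead).
 */
#include <linux/bitmap.h>
#include <linux/bitops.h>
#include <linux/list.h>

#define PCPU_NR_SLOTS_MAX	64	/* hypothetical upper bound */

static DECLARE_BITMAP(pcpu_slot_stat, PCPU_NR_SLOTS_MAX);

/* Freeing path, under pcpu_lock: remember which slot gained space. */
static void pcpu_mark_slot_freed(int slot)
{
	__set_bit(slot, pcpu_slot_stat);
}

/*
 * Alloc retry path, after re-taking pcpu_lock: rescan only the slots
 * that were freed into while the lock was dropped, instead of walking
 * every slot.  Returns a candidate chunk, or NULL to fall back to the
 * newly allocated, fully free chunk.
 */
static struct pcpu_chunk *pcpu_retry_dirty_slots(int size, int align)
{
	struct pcpu_chunk *chunk;
	int slot;

	for_each_set_bit(slot, pcpu_slot_stat, pcpu_nr_slots) {
		list_for_each_entry(chunk, &pcpu_slot[slot], list) {
			if (pcpu_chunk_fits(chunk, size, align))
				return chunk;	/* hypothetical fit check */
		}
	}
	bitmap_zero(pcpu_slot_stat, pcpu_nr_slots);
	return NULL;
}

The free path pays one __set_bit() per free, and the retry path only
degenerates to the old full rewalk when every slot was freed into; the
question raised above is whether that saving on a cold path justifies
the extra bookkeeping.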