On Wed, Mar 15, 2017 at 03:18:14PM +0100, Michal Hocko wrote:
> On Wed 15-03-17 16:59:59, Aaron Lu wrote:
> [...]
> > The proposed parallel free did this: if the process has many pages to be
> > freed, accumulate them in these struct mmu_gather_batch(es) one after
> > another till 256K pages are accumulated. Then take this singly linked
> > list starting from tlb->local.next off struct mmu_gather *tlb and free
> > them in a worker thread. The main thread can return to continue zapping
> > other pages (after freeing the pages pointed to by tlb->local.pages).
>
> I didn't have a look at the implementation yet but there are two
> concerns that arise from this description. Firstly, how are we going
> to tune the number of workers? I assume there will be some upper bound
> (one of the patch subjects mentions debugfs for tuning) and secondly

The workers are put in a dedicated workqueue, which is introduced in
patch 3/5, and the number of workers can be tuned through that
workqueue's sysfs interface: max_active (a rough sketch of such a
workqueue is appended at the end of this mail).

> if we offload the page freeing to the worker then the original context
> can consume much more cpu cycles than it was configured via cpu
> controller. How are we going to handle that? Or is this considered
> acceptable?

I'll need to think about this and take a look at the cpu controller
(I'm not familiar with it).

Thanks.
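
To make the max_active point above concrete, here is a rough sketch of
how such a dedicated workqueue could be set up. The names batch_free_wq
and batch_free_wq_init are made up for illustration and are not taken
from the actual patches; the relevant detail is the WQ_SYSFS flag, which
exposes the queue under /sys/devices/virtual/workqueue/<name>/ so that
max_active can be adjusted at runtime.

#include <linux/errno.h>
#include <linux/init.h>
#include <linux/workqueue.h>

/* Hypothetical workqueue for the deferred page freeing (name made up). */
static struct workqueue_struct *batch_free_wq;

static int __init batch_free_wq_init(void)
{
        /*
         * WQ_UNBOUND: work items are not pinned to the submitting CPU.
         * WQ_SYSFS:   exposes max_active (and other attributes) in sysfs.
         * The third argument is only the initial max_active value.
         */
        batch_free_wq = alloc_workqueue("batch_free", WQ_UNBOUND | WQ_SYSFS, 1);
        if (!batch_free_wq)
                return -ENOMEM;
        return 0;
}
subsys_initcall(batch_free_wq_init);

With that in place, writing a value to
/sys/devices/virtual/workqueue/batch_free/max_active would bound how
many freeing workers run concurrently.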
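
And a similarly hand-waving sketch of the offload step described in the
quoted text: detach the batch chain hanging off tlb->local.next and hand
it to a worker for freeing, so the zapping context can continue with a
fresh local batch. Again, batch_free_work and offload_batches are
illustrative names rather than the functions from the series, it reuses
the batch_free_wq from the sketch above, and error handling is reduced
to "fall back to synchronous freeing".

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/swap.h>
#include <linux/workqueue.h>
#include <asm/tlb.h>

struct batch_free_work {
        struct work_struct work;
        struct mmu_gather_batch *list;  /* detached tlb->local.next chain */
};

static void batch_free_fn(struct work_struct *work)
{
        struct batch_free_work *bfw =
                container_of(work, struct batch_free_work, work);
        struct mmu_gather_batch *batch, *next;

        for (batch = bfw->list; batch; batch = next) {
                next = batch->next;
                /* free the pages recorded in this batch ... */
                free_pages_and_swap_cache(batch->pages, batch->nr);
                /* ... and then the page backing the batch itself */
                free_pages((unsigned long)batch, 0);
        }
        kfree(bfw);
}

/* Called once enough pages (e.g. 256K) have been accumulated. */
static bool offload_batches(struct mmu_gather *tlb)
{
        struct batch_free_work *bfw = kmalloc(sizeof(*bfw), GFP_NOWAIT);

        if (!bfw)
                return false;   /* caller frees synchronously as before */

        INIT_WORK(&bfw->work, batch_free_fn);
        bfw->list = tlb->local.next;    /* take the chain off the tlb */
        tlb->local.next = NULL;
        /*
         * The caller still frees tlb->local.pages itself and resets the
         * gather state, as described above.
         */
        queue_work(batch_free_wq, &bfw->work);
        return true;
}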