On Thu, 21 Jul 2011 13:36:06 +0200 Michal Hocko <mhocko@xxxxxxx> wrote:
> On Thu 21-07-11 19:12:50, KAMEZAWA Hiroyuki wrote:
> > On Thu, 21 Jul 2011 09:38:00 +0200
> > Michal Hocko <mhocko@xxxxxxx> wrote:
> >
> > > drain_all_stock_async tries to optimize the work to be done on the
> > > work queue by excluding any work for the current CPU, because it
> > > assumes that the context we are called from has already tried to
> > > charge from that cache and failed, so it must be empty already.
> > > While the assumption is correct, we can achieve the same by checking
> > > the current number of pages in the cache. This will also reduce work
> > > on other CPUs with an empty stock.
> > >
> > > Signed-off-by: Michal Hocko <mhocko@xxxxxxx>
> >
> > At first look, when a charge against TransParentHugepage() goes
> > into the reclaim routine, stock->nr_pages != 0 and this will
> > kick an additional kworker.
>
> True. We will drain a charge which could be used by other allocations
> in the meantime, so we have a good chance to reclaim less. But how big
> a problem is that?
> I mean, I can add a new parameter that would force checking the current
> CPU, but it doesn't look nice. I cannot add that condition
> unconditionally because the code will be shared with the sync path in
> the next patch, and that one needs to drain _all_ CPUs.
>
> What would you suggest?

Two methods:
 - just check nr_pages.
 - drain the "local stock" without calling schedule_work(). It's fast.

Thanks,
-Kame
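
For illustration, here is a minimal sketch of what combining the two suggestions could look like: skip any CPU whose stock is already empty (the nr_pages check) and flush the local CPU's stock inline rather than scheduling a kworker for it. It assumes the surrounding mm/memcontrol.c context of that era (struct memcg_stock_pcp, the memcg_stock per-cpu variable, drain_local_stock, schedule_work_on); filtering stocks by root_mem's hierarchy and any flag-based synchronization against concurrent drains are omitted, so treat it as an outline rather than a drop-in patch.

/*
 * Sketch only: drain per-CPU charge caches, skipping empty ones and
 * draining the local CPU synchronously instead of via the workqueue.
 */
static void drain_all_stock_async(struct mem_cgroup *root_mem)
{
	int cpu, curcpu;

	get_online_cpus();
	curcpu = get_cpu();
	for_each_online_cpu(cpu) {
		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);

		/* Nothing cached on this CPU, no point in waking a kworker. */
		if (!stock->nr_pages)
			continue;

		/* (Hierarchy check against root_mem omitted in this sketch.) */

		if (cpu == curcpu) {
			/* The local stock can be flushed directly; it is fast. */
			drain_local_stock(&stock->work);
			continue;
		}

		schedule_work_on(cpu, &stock->work);
	}
	put_cpu();
	put_online_cpus();
}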