On Mon 13-09-21 08:54:01, yongw.pur@xxxxxxxxx wrote:
> From: wangyong <wang.yong12@xxxxxxxxxx>
>
> Use the global variable num_events to record the number of vmpressure
> events registered by the system, and wake up work only when there is a
> registered event.
> Usually, no vmpressure event is registered in the system, so this patch
> can avoid waking up work that would do nothing.

How much of an improvement does this bring?

> Refer to Michal Hocko's suggestion:
> https://lore.kernel.org/linux-mm/YR%2FNRJEhPKRQ1r22@xxxxxxxxxxxxxx/

Let me also point out that we do have means to implement conditional
branches with zero overhead. Have a look at static branches.

> Tested-by: Zeal Robot <zealci@xxxxxxxxxx>
> Signed-off-by: wangyong <wang.yong12@xxxxxxxxxx>
> ---
>  mm/vmpressure.c | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/mm/vmpressure.c b/mm/vmpressure.c
> index 76518e4..dfac76b 100644
> --- a/mm/vmpressure.c
> +++ b/mm/vmpressure.c
> @@ -67,6 +67,11 @@ static const unsigned int vmpressure_level_critical = 95;
>   */
>  static const unsigned int vmpressure_level_critical_prio = ilog2(100 / 10);
>
> +/*
> + * Count the number of vmpressure events registered in the system.
> + */
> +static atomic_t num_events = ATOMIC_INIT(0);
> +
>  static struct vmpressure *work_to_vmpressure(struct work_struct *work)
>  {
>  	return container_of(work, struct vmpressure, work);
> @@ -277,7 +282,7 @@ void vmpressure(gfp_t gfp, struct mem_cgroup *memcg, bool tree,
>  		vmpr->tree_reclaimed += reclaimed;
>  		spin_unlock(&vmpr->sr_lock);
>
> -		if (scanned < vmpressure_win)
> +		if (scanned < vmpressure_win || atomic_read(&num_events) == 0)
> 			return;
>  		schedule_work(&vmpr->work);
>  	} else {

This is a very odd place to put the check. It is past locks being held,
and schedule_work() on its own is unlikely to add much overhead. I would
have expected the check at the very beginning of vmpressure().
-- 
Michal Hocko
SUSE Labs