On Sun, 17 May 2020 06:44:52 -0700 Shakeel Butt wrote:
> > @@ -2583,12 +2606,23 @@ static int try_charge(struct mem_cgroup *memcg, gfp_t gfp_mask,
> >  	 * reclaim, the cost of mismatch is negligible.
> >  	 */
> >  	do {
> > -		if (page_counter_read(&memcg->memory) > READ_ONCE(memcg->high)) {
> > -			/* Don't bother a random interrupted task */
> > -			if (in_interrupt()) {
> > +		bool mem_high, swap_high;
> > +
> > +		mem_high = page_counter_read(&memcg->memory) >
> > +			READ_ONCE(memcg->high);
> > +		swap_high = page_counter_read(&memcg->swap) >
> > +			READ_ONCE(memcg->swap_high);
> > +
> > +		/* Don't bother a random interrupted task */
> > +		if (in_interrupt()) {
> > +			if (mem_high) {
> >  				schedule_work(&memcg->high_work);
> >  				break;
> >  			}
> > +			continue;
>
> break?

On a closer look I think continue is correct. In irq context we only care
about mem_high, because there's nothing we can do from a work context to
penalize swap. So the loop body just gets shortened for that case.

> > +		}
> > +
> > +		if (mem_high || swap_high) {
> >  			current->memcg_nr_pages_over_high += batch;
> >  			set_notify_resume(current);
> >  			break;
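
To make the continue-vs-break question concrete, here is roughly how the
loop reads with the hunk applied (paraphrased from the quote above; the
parent-walk condition at the bottom is assumed from the existing
try_charge() code, and the comments are mine, not from the patch):

	do {
		bool mem_high, swap_high;

		mem_high = page_counter_read(&memcg->memory) >
			READ_ONCE(memcg->high);
		swap_high = page_counter_read(&memcg->swap) >
			READ_ONCE(memcg->swap_high);

		/* Don't bother a random interrupted task */
		if (in_interrupt()) {
			if (mem_high) {
				/* memory.high can be dealt with from a work item */
				schedule_work(&memcg->high_work);
				break;
			}
			/*
			 * A work item can't penalize swap, so just walk up
			 * and keep checking memory.high on the ancestors.
			 */
			continue;
		}

		if (mem_high || swap_high) {
			/* punt the penalty to return-to-userspace */
			current->memcg_nr_pages_over_high += batch;
			set_notify_resume(current);
			break;
		}
	} while ((memcg = parent_mem_cgroup(memcg)));

Since continue in a do-while jumps to the condition, the irq case still
walks up the hierarchy checking memory.high at each level; a break there
would stop at the first level whose memory usage is under its high limit.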