Re: [PATCH v1 10/14] mm: multigenerational lru: core

On Tue, Mar 16, 2021 at 02:52:52PM +0800, Huang, Ying wrote:
> Yu Zhao <yuzhao@xxxxxxxxxx> writes:
> 
> > On Tue, Mar 16, 2021 at 10:08:51AM +0800, Huang, Ying wrote:
> >> Yu Zhao <yuzhao@xxxxxxxxxx> writes:
> >> [snip]
> >> 
> >> > +/* Main function used by foreground, background and user-triggered aging. */
> >> > +static bool walk_mm_list(struct lruvec *lruvec, unsigned long next_seq,
> >> > +			 struct scan_control *sc, int swappiness)
> >> > +{
> >> > +	bool last;
> >> > +	struct mm_struct *mm = NULL;
> >> > +	int nid = lruvec_pgdat(lruvec)->node_id;
> >> > +	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
> >> > +	struct lru_gen_mm_list *mm_list = get_mm_list(memcg);
> >> > +
> >> > +	VM_BUG_ON(next_seq > READ_ONCE(lruvec->evictable.max_seq));
> >> > +
> >> > +	/*
> >> > +	 * For each walk of the mm list of a memcg, we decrement the priority
> >> > +	 * of its lruvec. For each walk of memcgs in kswapd, we increment the
> >> > +	 * priorities of all lruvecs.
> >> > +	 *
> >> > +	 * So if this lruvec has a higher priority (smaller value), it means
> >> > +	 * other concurrent reclaimers (global or memcg reclaim) have walked
> >> > +	 * its mm list. Skip it for this priority to balance the pressure on
> >> > +	 * all memcgs.
> >> > +	 */
> >> > +#ifdef CONFIG_MEMCG
> >> > +	if (!mem_cgroup_disabled() && !cgroup_reclaim(sc) &&
> >> > +	    sc->priority > atomic_read(&lruvec->evictable.priority))
> >> > +		return false;
> >> > +#endif
> >> > +
> >> > +	do {
> >> > +		last = get_next_mm(lruvec, next_seq, swappiness, &mm);
> >> > +		if (mm)
> >> > +			walk_mm(lruvec, mm, swappiness);
> >> > +
> >> > +		cond_resched();
> >> > +	} while (mm);
> >> 
> >> It appears that we need to scan the whole address space of multiple
> >> processes in this loop?
> >> 
> >> If so, I have some concerns about the duration of the function.  Do
> >> you have any numbers on the distribution of the function's duration?
> >> And maybe the number of mm_structs and the number of pages scanned?
> >> 
> >> In comparison, in the traditional LRU algorithm, for each round, only a
> >> small subset of the whole physical memory is scanned.
> >
> > Reasonable concerns, and insightful too. We are sensitive to direct
> > reclaim latency, and we tuned another path carefully so that direct
> > reclaim virtually never hits this path :)
> >
> > Some numbers from the cover letter first:
> >   In addition, direct reclaim latency is reduced by 22% at 99th
> >   percentile and the number of refaults is reduced by 7%. These
> >   metrics are important to phones and laptops as they correlate with
> >   user experience.
> >
> > And "another path" is the background aging in kswapd:
> >   age_active_anon()
> >     age_lru_gens()
> >       try_walk_mm_list()
> >         /* try to spread pages out across spread+1 generations */
> >         if (old_and_young[0] >= old_and_young[1] * spread &&
> >             min_nr_gens(max_seq, min_seq, swappiness) > max(spread, MIN_NR_GENS))
> >                 return;
> >
> >         walk_mm_list(lruvec, max_seq, sc, swappiness);
> >
> > By default, spread = 2, which makes kswapd slightly more aggressive
> > than direct reclaim for our use cases. This can be entirely disabled
> > by setting spread to 0, for workloads that don't care about direct
> > reclaim latency, or set to larger values for workloads that are more
> > latency-sensitive than ours.
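
To spell out the heuristic above in a self-contained form -- a toy
sketch, where the counter roles, the helper names and the MIN_NR_GENS
value are my paraphrase rather than the exact patch code:

	#include <stdbool.h>

	#define MIN_NR_GENS	2	/* assumed value, for illustration only */

	static unsigned long max_ul(unsigned long a, unsigned long b)
	{
		return a > b ? a : b;
	}

	/*
	 * Skip the aging when pages are already spread out across enough
	 * generations: the old pages outnumber the young ones by the given
	 * ratio and there are more generations than max(spread, MIN_NR_GENS).
	 * old_and_young[0] is assumed to count old pages, [1] young ones.
	 */
	static bool should_run_aging(const unsigned long old_and_young[2],
				     unsigned long nr_gens, unsigned long spread)
	{
		if (old_and_young[0] >= old_and_young[1] * spread &&
		    nr_gens > max_ul(spread, MIN_NR_GENS))
			return false;	/* enough old pages; don't age yet */

		return true;	/* walk the mm list to create a new generation */
	}

E.g. with spread = 2 and 4 generations, 500 old vs. 200 young pages
skips the walk (500 >= 400), while 300 old vs. 200 young triggers it.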
> 
> OK, I see.  That can avoid the long latency in direct reclaim path.
> 
> > It's worth noting that walk_mm_list() is multithreaded -- reclaiming
> > threads can work on different mm_structs on the same list
> > concurrently. We do occasionally see this function in direct reclaims
> > on heavily overcommitted systems, i.e., when kswapd CPU usage is at
> > 100%. Under the same conditions, we have seen the current page reclaim
> > live-lock and trigger hardware watchdog timeouts (our hardware
> > watchdog is set to 2 hours) many times.
> 
> Just to confirm: in the current page reclaim, kswapd will keep running
> until the watchdog fires?  Is this avoided in your algorithm mainly via
> multi-threading, or via scanning page tables directly instead of
> through the reverse mapping?

Well, don't tell me you've seen the problem :) Let me explain one
subtle difference in how the aging works between the current page
reclaim and this series, and point you to the code.

In the current page reclaim, we can't scan a page via the rmap without
isolating the page first. So the aging basically isolates a batch of
pages from an LRU list, walks the rmap for each of those pages, and
puts the active ones back on the list.
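
In toy form (compilable C with stand-in types; the helper name is
invented and the real code is shrink_active_list() in mm/vmscan.c):

	#include <stdbool.h>
	#include <stddef.h>

	struct toy_page {
		struct toy_page *next;
		bool referenced;	/* stands in for the rmap walk result */
	};

	/*
	 * 1. Detach a batch from the LRU list -- the isolation. These pages
	 *    are now invisible to other reclaimers, which is exactly what
	 *    too_many_isolated() keeps count of.
	 * 2. "Walk the rmap" for each page (here reduced to a flag test).
	 * 3. Collect the active ones to put back on the list.
	 */
	static struct toy_page *toy_age_batch(struct toy_page *batch)
	{
		struct toy_page *active = NULL;

		while (batch) {
			struct toy_page *page = batch;

			batch = page->next;
			if (page->referenced) {
				page->next = active;
				active = page;
			}
		}
		return active;	/* to be spliced back onto the LRU list */
	}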

In this series, the aging walks page tables to update the generation
numbers of active pages without isolating them. The isolation is the
subtle difference: it's not a problem when there are few threads, but
it causes live locks when hundreds of threads run the aging and hit
the following in shrink_inactive_list():

	while (unlikely(too_many_isolated(pgdat, file, sc))) {
		if (stalled)
			return 0;

		/* wait a bit for the reclaimer. */
		msleep(100);
		stalled = true;

		/* We are about to die and free our memory. Return now. */
		if (fatal_signal_pending(current))
			return SWAP_CLUSTER_MAX;
	}

Thanks to Michal, who improved this considerably with commit
db73ee0d4637 ("mm, vmscan: do not loop on too_many_isolated for
ever"). But we still occasionally see live locks on heavily
overcommitted machines: reclaiming threads step on each other while
interleaving between the msleep() above and the aging, on 100+ CPUs.
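
For contrast, the same kind of toy model for the aging in this series
(again with stand-in types; the real walk is walk_mm()/walk_mm_list()
quoted above): the accessed bit is harvested from the PTE and the
page's generation is bumped in place, so nothing is ever isolated and
there is no too_many_isolated() to sleep on.

	#include <stdatomic.h>
	#include <stdbool.h>

	struct toy_page {
		atomic_ulong gen;	/* generation; kept in page flags in the patch */
	};

	struct toy_pte {
		bool accessed;		/* the hardware accessed bit */
		struct toy_page *page;
	};

	/*
	 * Scan PTEs; for each one with the accessed bit set, clear the bit
	 * (as ptep_test_and_clear_young() does) and bump the page to the
	 * youngest generation, max_seq. Pages stay on their LRU lists the
	 * whole time, so any number of concurrent walkers can run without
	 * piling up isolated pages.
	 */
	static void toy_walk_ptes(struct toy_pte *ptes, int nr,
				  unsigned long max_seq)
	{
		for (int i = 0; i < nr; i++) {
			if (!ptes[i].accessed)
				continue;

			ptes[i].accessed = false;
			atomic_store(&ptes[i].page->gen, max_seq);
		}
	}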



