On 01/16/2014 02:53 AM, Andrew Morton wrote:
> On Wed, 15 Jan 2014 19:55:11 +0400 Vladimir Davydov <vdavydov@xxxxxxxxxxxxx> wrote:
>
>>> We could avoid the "scan 32 then scan just 1" issue with something like
>>>
>>>	if (total_scan > batch_size)
>>>		total_scan %= batch_size;
>>>
>>> before the loop.  But I expect the effects of that will be unmeasurable
>>> - on average the number of objects which are scanned in the final pass
>>> of the loop will be batch_size/2, yes?  That's still a decent amount.
>>
>> Let me try to summarize. We want to scan batch_size objects in one pass,
>> not more (to keep latency low) and not less (to avoid cpu cache
>> pollution due to too frequent calls). If the calculated value of
>> nr_to_scan is less than batch_size, we should accumulate it in
>> nr_deferred instead of calling ->scan(), and add nr_deferred to
>> nr_to_scan on the next pass, i.e. in pseudo-code:
>>
>>	/* calculate current nr_to_scan */
>>	max_pass = shrinker->count();
>>	delta = max_pass * nr_user_pages_scanned / nr_user_pages;
>>
>>	/* add nr_deferred */
>>	total_scan = delta + nr_deferred;
>>
>>	while (total_scan >= batch_size) {
>>		shrinker->scan(batch_size);
>>		total_scan -= batch_size;
>>	}
>>
>>	/* save the remainder to nr_deferred */
>>	nr_deferred = total_scan;
>>
>> That would work, but if max_pass is < batch_size, it would not scan the
>> objects immediately even if prio is high (we want to scan all objects).
>
> Yes, that's a problem.
>
>> For example, dropping caches would not work on the first attempt - the
>> user would have to call it batch_size / max_pass times.
>
> And we do want drop_caches to work immediately.
>
>> This could be fixed by making the code proceed to ->scan() not only if
>> total_scan is >= batch_size, but also if max_pass is < batch_size and
>> total_scan is >= max_pass, i.e.
>>
>>	while (total_scan >= batch_size ||
>>	       (max_pass < batch_size && total_scan >= max_pass)) ...
>>
>> which is equivalent to
>>
>>	while (total_scan >= batch_size ||
>>	       total_scan >= max_pass) ...
>>
>> The latter is the loop condition from the current patch, i.e. this patch
>> would do the trick if shrink_slab() followed the pseudo-code above. In
>> real life it does not, actually - we have to bias total_scan before the
>> while loop in order to avoid dropping fs meta caches on light memory
>> pressure due to a large number being built up in nr_deferred:
>>
>>	if (delta < max_pass / 4)
>>		total_scan = min(total_scan, max_pass / 2);
>
> Oh, is that what it's for.  Where did you discover this gem?

From 3567b59aa80ac4417002bf58e35dce5c777d4164 Mon Sep 17 00:00:00 2001
From: Dave Chinner <dchinner@xxxxxxxxxx>
Date: Fri, 8 Jul 2011 14:14:36 +1000
Subject: [PATCH] vmscan: reduce wind up shrinker->nr when shrinker can't do work

>>	while (total_scan >= batch_size) ...
>>
>> With this biasing, it is impossible to achieve the ideal behavior I've
>> described above, because we will never accumulate max_pass objects in
>> nr_deferred if memory pressure is low. So, if applied to the real code,
>> this patch takes on a slightly different sense, which I tried to reflect
>> in the comment to the code: it will call ->scan() with nr_to_scan <
>> batch_size only if:
>>
>> 1) max_pass < batch_size && total_scan >= max_pass
>>
>> and
>>
>> 2) we're tight on memory, i.e. the current delta is high (otherwise
>> total_scan will be biased to max_pass / 2 and condition 1 won't be
>> satisfied).
>
> (is max_pass misnamed?)

Yes, the name is misleading. I guess it should be called freeable_cnt,
because we actually scan up to 2*max_pass objects in one pass.

>> From our discussion it seems condition 2 is not necessary at all, but
>> it follows directly from the biasing rule.
>> So I propose to tweak the biasing a bit so that total_scan won't be
>> lowered below batch_size:
>>
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index eea668d..78ddd5e 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -267,7 +267,7 @@ shrink_slab_node(struct shrink_control *shrinkctl, struct shrinker *shrinker,
>>  	 * a large delta change is calculated directly.
>>  	 */
>>  	if (delta < max_pass / 4)
>> -		total_scan = min(total_scan, max_pass / 2);
>> +		total_scan = min(total_scan, max(max_pass / 2, batch_size));
>>
>>  	/*
>>  	 * Avoid risking looping forever due to too large nr value:
>> @@ -281,7 +281,7 @@ shrink_slab_node(struct shrink_control *shrinkctl, struct shrinker *shrinker,
>>  				nr_pages_scanned, lru_pages,
>>  				max_pass, delta, total_scan);
>>
>> -	while (total_scan >= batch_size) {
>> +	while (total_scan >= batch_size || total_scan >= max_pass) {
>>  		unsigned long ret;
>>
>>  		shrinkctl->nr_to_scan = batch_size;
>>
>> The first hunk guarantees that total_scan will always be allowed to
>> accumulate at least batch_size objects no matter how small max_pass is.
>> That means that when max_pass is < batch_size, we will eventually get
>> >= max_pass objects to scan and shrink the slab to 0 as we need. What
>> do you think about that?
>
> I'm a bit lost :(

I'll try to clean up shrink_slab_node() and resend the patch then.

Thanks.