Re: [PATCH v4 02/31] vmscan: take at least one pass with shrinkers

On 04/30/2013 07:37 PM, Mel Gorman wrote:
> On Tue, Apr 30, 2013 at 05:31:32PM +0400, Glauber Costa wrote:
>> On 04/30/2013 05:22 PM, Mel Gorman wrote:
>>> On Sat, Apr 27, 2013 at 03:18:58AM +0400, Glauber Costa wrote:
>>>> In very low free kernel memory situations, it may be the case that we
>>>> have fewer objects to free than our initial batch size. If this is the
>>>> case, it is better to shrink those and open space for the new workload
>>>> than to keep them and fail the new allocations.
>>>>
>>>> More specifically, this happens because we encode this in a loop with
>>>> the condition: "while (total_scan >= batch_size)". So if we are in such
>>>> a case, we'll not even enter the loop.
>>>>
>>>> This patch turns it into a do {} while () loop, which guarantees that
>>>> we scan it at least once, while keeping the behaviour exactly the same
>>>> for the cases in which total_scan > batch_size.
>>>>
>>>> Signed-off-by: Glauber Costa <glommer@xxxxxxxxxx>
>>>> Reviewed-by: Dave Chinner <david@xxxxxxxxxxxxx>
>>>> Reviewed-by: Carlos Maiolino <cmaiolino@xxxxxxxxxx>
>>>> CC: "Theodore Ts'o" <tytso@xxxxxxx>
>>>> CC: Al Viro <viro@xxxxxxxxxxxxxxxxxx>
>>>
>>> There are two cases where this *might* cause a problem and worth keeping
>>> an eye out for.
>>>
>>
>> Is there any test case you envision that could help bring those issues
>> forward, should they exist? (aside from getting it into upstream trees
>> early?)
>>
> 
> hmm.
> 
> fsmark multi-threaded in a small-memory machine with a small number of
> very large files greater than the size of physical memory might trigger
> it. There should be a small number of inodes active so less than the 128
> that would have been ignored before the patch. As the files are larger
> than memory, kswapd will be awake and calling shrinkers so if the
> shrinker is really discarding active inodes then the performance will
> degrade.
> 
FYI: the weird behavior you found in your benchmarks is due to this patch.
The problem is twofold:

first, by always scanning, we fail to differentiate the zero case, which
means "skip this shrinker". We should only scan when the counter reports
at least one object.

second, nr_to_scan is still a full batch. So when count returns, say,
3 objects, the shrinker will still try to free, say, 128 objects, and in
some situations it might very well succeed.

I have already fixed this, and as soon as I finish merging all your
suggestions I will send an updated version.

--
To unsubscribe from this list: send the line "unsubscribe cgroups" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
