Re: [RFC PATCH] mm: memcontrol: memory+swap accounting for cgroup-v2

On Thu, Dec 21, 2017 at 5:37 AM, Tejun Heo <tj@xxxxxxxxxx> wrote:
> Hello, Shakeel.
>
> On Wed, Dec 20, 2017 at 05:15:41PM -0800, Shakeel Butt wrote:
>> Let's say we have a job that allocates 100 MiB memory and suppose 80
>> MiB is anon and 20 MiB is non-anon (file & kmem).
>>
>> [With memsw] Scheduler sets the memsw limit of the job to 100 MiB and
>> memory to max. Now suppose the job tries to allocate more than
>> 100 MiB; it will hit the memsw limit and will try to reclaim non-anon
>> memory. The memcg OOM behavior will only depend on the reclaim of
>> non-anon memory and will be independent of the underlying swap device.
>
> Sure, the direct reclaim on memsw limit won't reclaim anon pages, but
> think about how the state at that point would have formed.  You're
> claiming that memsw makes memory allocation and balancing behavior an
> invariant against the performance of the swap device that the machine
> has.  It's simply not possible.
>

I am claiming that memory allocations under global pressure will be
affected by the performance of the underlying swap device, but that
memory allocations under memcg memory pressure, with memsw, will not
be. A job with a 100 MiB limit running on a machine without global
memory pressure will never touch swap on hitting its 100 MiB memsw
limit.
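For concreteness, the two setups being compared could be sketched as follows (the cgroup path `job` is illustrative; memsw accounting exists only in cgroup-v1, which is the point of this thread):

```shell
# cgroup-v1 with memsw: cap memory+swap together at 100 MiB and leave
# the plain memory limit unset. Anon usage can never exceed 100 MiB,
# regardless of how fast or slow the swap device is.
echo $((100 * 1024 * 1024)) > /sys/fs/cgroup/memory/job/memory.memsw.limit_in_bytes
echo -1 > /sys/fs/cgroup/memory/job/memory.limit_in_bytes

# cgroup-v2: memory and swap are limited independently, so anon can
# grow past memory.max by being swapped out, up to memory.swap.max.
echo $((100 * 1024 * 1024)) > /sys/fs/cgroup/job/memory.max
echo max > /sys/fs/cgroup/job/memory.swap.max
```

In the v1 case, hitting the memsw limit forces reclaim of file and kmem pages only; in the v2 case, the same workload's footprint depends on how much anon gets pushed to swap.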

> On top of that, what's the point?
>
> 1. As I wrote earlier, given the current OOM killer implementation,
>    whether OOM kicks in or not is not even that relevant in
>    determining the health of the workload.  There are frequent failure
>    modes where OOM killer fails to kick in while the workload isn't
>    making any meaningful forward progress.
>

Deterministic oom-killing is not the point. The point is to
"consistently limit the anon memory" allocated by the job, which only
memsw can provide. For a job owner who has requested 100 MiB for a
job, seeing some instances of the job suffer at 100 MiB and other
instances suffer at 150 MiB is inconsistent behavior.

> 2. On hitting memsw limit, the OOM decision is dependent on the
>    performance of the file backing devices.  Why is that necessarily
>    better than being dependent on swap or both, which would increase
>    the reclaim efficiency anyway?  You can't avoid being affected by
>    the underlying hardware one way or the other.
>

This is a separate discussion, but the amount of file-backed pages is
known and controlled by the job owner. The job owner also has the
option to use a storage service that provides consistent performance
across different data centers, instead of the physical disks of the
system where the job is running, thus isolating the job's performance
from the speed of the local disk. This is not possible with swap.
Swap (and its performance) is, and should be, transparent to the job
owners.
--
To unsubscribe from this list: send the line "unsubscribe cgroups" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html