Chunxin Zang writes:
On Tue, Sep 22, 2020 at 5:51 PM Chris Down <chris@xxxxxxxxxxxxxx> wrote:
Chunxin Zang writes:
>My use case is that there are two types of services on one server, with
>different priorities. Type_A has the highest priority; we need to
>guarantee its scheduling latency, I/O latency, and memory availability.
>Type_B has the lowest priority; we expect it not to affect Type_A when
>it runs.
>So Type_A may use memory without any limit, while Type_B may use memory
>only when memory is absolutely sufficient. But we cannot estimate how
>much memory Type_B should use, because everything is dynamic, so we
>can't set Type_B's memory.high.
>
>So we want to reclaim Type_B's memory when global memory is
>insufficient, in order to ensure the quality of service of Type_A. In
>the past, we used the 'force_empty' interface of cgroup v1.
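For reference, cgroup v1's 'force_empty' is a single write to the
per-cgroup control file. A minimal sketch, where the memory controller
mount point and the 'type_b' cgroup name are assumptions for
illustration:

    # cgroup v1: reclaim as many pages charged to this cgroup as possible.
    # Mount point and cgroup name are illustrative, not from the thread.
    echo 0 > /sys/fs/cgroup/memory/type_b/memory.force_empty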
This sounds like a perfect use case for memory.low on Type_A, and it's pretty
much exactly what we invented it for. What's the problem with that?
But we cannot estimate even the minimum amount of memory Type_A uses.
memory.low allows ballparking; you don't have to know exactly how much it uses.
Any amount of protection biases reclaim away from that cgroup.
For example:
Total memory: 100G.
At the beginning, Type_A is idle and uses only 10G of memory. Its load
is very low, so we run Type_B to avoid wasting machine resources.
After Type_B has run for a while, it is using 80G of memory. At this
point Type_A becomes busy and needs more memory.
Ok, so set memory.low for Type_A close to your maximum expected value.
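For the 100G scenario above, that might look like the following sketch;
the cgroup v2 paths, the 'type_a'/'type_b' names, and the 80G figure are
assumptions for illustration:

    # Protect Type_A up to its expected peak usage; reclaim avoids its
    # pages while it stays under this amount. (Names/values illustrative.)
    echo 80G > /sys/fs/cgroup/type_a/memory.low

    # Type_B keeps the default of no protection (0), so global reclaim
    # targets it first when Type_A grows and memory becomes tight.
    echo 0 > /sys/fs/cgroup/type_b/memory.low

Per the ballparking point above, even a rough figure biases reclaim away
from Type_A; it doesn't need to be an exact estimate.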