Re: [RFC 0/5] kernel: Introduce CPU Namespace

Hello Tejun,


On 20/10/21 10:05 pm, Tejun Heo wrote:
> Hello,
>
> On Wed, Oct 20, 2021 at 04:14:25PM +0530, Pratik Sampat wrote:
>> As you have elucidated, it doesn't look like an easy feat to
>> define metrics like ballpark numbers as there are many variables
>> involved.
> Yeah, it gets tricky and we want to get the basics right from the get go.

>> For the CPU example, cpusets control the resource space whereas
>> period-quota controls resource time. These seem like two vectors on
>> different axes.
>> Conveying these restrictions in one metric doesn't seem easy. Some
>> container runtimes convert the period-quota time dimension into an
>> "X CPUs worth of runtime" space dimension. However, we need to
>> carefully model what a ballpark metric in this sense would be and
>> provide clearer constraints, as both of these restrictions can be
>> active at a given point in time and can influence how something is run.
> So, for CPU, the important functional number is the number of threads needed
> to saturate available resources and that one is pretty easy.

I'm speculating, and please correct me if I'm wrong; suggesting an
optimal number of threads to spawn to saturate the available resources
can get convoluted, right?

In the nginx example illustrated in the cover patch, it worked best
when the thread count was N+1 (N worker threads, 1 master thread).
However, different applications can work better with a different
configuration of spawned threads based on their use case and
multi-threading requirements.

Eventually, by looking at the load, we may be able to suggest more or
fewer threads to spawn, but initially we may have to suggest the thread
count as a direct function of the N CPUs available or N CPUs worth of
runtime available?
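
For illustration, an untested userspace sketch of how a runtime could
derive that "N CPUs worth of runtime" number by combining
cpuset.cpus.effective with cpu.max could look like the below (the
"mygrp" path and the parsing are simplified assumptions, not part of
this series):

/*
 * Untested sketch: combine the cpuset restriction (space) with the
 * cpu.max bandwidth limit (time) into one ballpark "CPUs worth of
 * runtime" number.  "mygrp" and the parsing are illustrative only.
 */
#include <stdio.h>
#include <string.h>

static int count_cpus(const char *list)	/* e.g. "0-3,8" */
{
	const char *p = list;
	int n = 0, a, b;

	while (p && *p) {
		if (sscanf(p, "%d-%d", &a, &b) == 2)
			n += b - a + 1;
		else if (sscanf(p, "%d", &a) == 1)
			n++;
		p = strchr(p, ',');
		if (p)
			p++;
	}
	return n;
}

int main(void)
{
	char cpus[256], quota_s[32];
	long quota, period;
	double eff;
	FILE *f;

	f = fopen("/sys/fs/cgroup/mygrp/cpuset.cpus.effective", "r");
	if (!f || fscanf(f, "%255s", cpus) != 1)
		return 1;
	fclose(f);

	f = fopen("/sys/fs/cgroup/mygrp/cpu.max", "r");
	if (!f || fscanf(f, "%31s %ld", quota_s, &period) != 2)
		return 1;
	fclose(f);

	eff = count_cpus(cpus);
	if (strcmp(quota_s, "max") && sscanf(quota_s, "%ld", &quota) == 1 &&
	    (double)quota / period < eff)
		eff = (double)quota / period;

	printf("ballpark CPUs worth of runtime: %.2f\n", eff);
	return 0;
}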

> The other
> metric would be the maximum available fractions of CPUs available to the
> cgroup subtree if the cgroup stays saturating. This number is trickier as it
> has to consider how much others are using but would be determined by the
> smaller of what would be available through cpu.weight and cpu.max.

I agree, this would be a very useful metric to have. Knowing how much
further we can scale when we're saturating our limits, while keeping
the other running applications in mind, can be really useful not just
for the applications themselves but for the container orchestrators as
well.
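
To make the "smaller of cpu.weight and cpu.max" idea concrete, a rough
calculation could look something like the below. This is only a
hypothetical sketch: it assumes a flat weight distribution among
saturating siblings at one level, not the full hierarchical picture.

/*
 * Untested sketch: ceiling on the CPU a saturating cgroup could get,
 * taken as the smaller of its weight-proportional share and its
 * cpu.max bandwidth.  All parameters are hypothetical inputs;
 * sibling_weight_sum includes this cgroup's own cpu.weight.
 */
static double cpu_ceiling(double parent_cpus, long my_weight,
			  long sibling_weight_sum,
			  long quota_us, long period_us)
{
	double by_weight = parent_cpus * (double)my_weight / sibling_weight_sum;
	double by_max = quota_us < 0 ? parent_cpus	/* cpu.max is "max" */
				     : (double)quota_us / period_us;

	return by_weight < by_max ? by_weight : by_max;
}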

> IO likely is in a similar boat. We can calculate metrics showing the
> rbps/riops/wbps/wiops available to a given cgroup subtree. This would factor
> in the limits from io.max and the resulting distribution from io.weight in
> iocost's case (iocost will give a % number but we can translate that to
> bps/iops numbers).

Yes, that's a useful metric to expose this way as well.
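
As a starting point for surfacing those numbers, something along these
lines could read back the io.max side (untested sketch; "mygrp" is an
assumed path, and the iocost weight-based distribution you mention
would still need to be layered on top):

/*
 * Untested sketch: print the per-device rbps/wbps/riops/wiops ceilings
 * from a cgroup's io.max file.  Only devices with limits configured
 * show up here; translating iocost's % share into bps/iops numbers is
 * not covered by this sketch.
 */
#include <stdio.h>

int main(void)
{
	char line[256], rbps[16], wbps[16], riops[16], wiops[16];
	unsigned int major, minor;
	FILE *f = fopen("/sys/fs/cgroup/mygrp/io.max", "r");

	while (f && fgets(line, sizeof(line), f)) {
		if (sscanf(line, "%u:%u rbps=%15s wbps=%15s riops=%15s wiops=%15s",
			   &major, &minor, rbps, wbps, riops, wiops) == 6)
			printf("%u:%u rbps=%s wbps=%s riops=%s wiops=%s\n",
			       major, minor, rbps, wbps, riops, wiops);
	}
	if (f)
		fclose(f);
	return 0;
}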

>> Restrictions for memory are even more complicated to model, as you have
>> pointed out as well.
> Yeah, this one is the most challenging.
>
> Thanks.

Thank you,
Pratik




