On 2/6/24 07:56, Frederic Weisbecker wrote:
On Wed, Jan 17, 2024 at 12:15:07PM -0500, Waiman Long wrote:
On 1/17/24 12:07, Tejun Heo wrote:
Hello,
On Wed, Jan 17, 2024 at 11:35:03AM -0500, Waiman Long wrote:
The first 2 patches are adapted from Frederic with minor twists to fix
merge conflicts and a compilation issue. The rest are for implementing
the new cpuset.cpus.isolation_full interface, which is essentially a flag
to globally enable or disable full CPU isolation on isolated partitions.
I think the interface is a bit premature. The cpuset partition feature is
already pretty restrictive and makes it really clear that it's to isolate
the CPUs. I think it'd be better to just enable all the isolation features
by default. If there are valid use cases which can't be served without
disabling some isolation features, we can worry about adding the interface
at that point.
My current thought is to make isolated partitions act like isolcpus=domain;
additional CPU isolation capabilities would be optional and could be turned on
using isolation_full. However, I am fine with turning all of these on by
default if that is the consensus.
Right, that was the consensus last time I tried, along with the fact that
mutating this isolation_full set has to be done on offline CPUs to simplify the
whole picture.
So lemme try to summarize what needs to be done:
1) An all-isolation feature file (that is, all the HK_TYPE_* things) on/off for
now. And if it ever proves needed, provide a way later for more fine-grained
tuning.
That is more or less the current plan. As detailed below, HK_TYPE_DOMAIN
& HK_TYPE_WQ isolation are included in the isolated partitions by
default. I am also thinking about including other relatively cheap
isolation flags by default. The expensive ones will have to be enabled
via isolation_full.
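
Just to make the intended usage concrete, here is a rough userspace sketch
(not part of the series) of setting up an isolated partition and flipping the
proposed isolation_full knob. The cgroup v2 mount point, the child cgroup name
"rt" and the accepted value "1" are assumptions for illustration only:

/*
 * Sketch: carve CPUs 2-3 into an isolated partition and opt in to full
 * isolation. Assumes cgroup v2 mounted at /sys/fs/cgroup; the cgroup
 * name "rt" and the "1" value for isolation_full are illustrative.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

static int write_str(const char *path, const char *val)
{
        int fd = open(path, O_WRONLY);

        if (fd < 0) {
                perror(path);
                return -1;
        }
        if (write(fd, val, strlen(val)) < 0)
                perror(path);
        close(fd);
        return 0;
}

int main(void)
{
        /* Make the cpuset controller available to child cgroups. */
        write_str("/sys/fs/cgroup/cgroup.subtree_control", "+cpuset");
        mkdir("/sys/fs/cgroup/rt", 0755);

        /* Give the child CPUs 2-3 and turn it into an isolated partition. */
        write_str("/sys/fs/cgroup/rt/cpuset.cpus", "2-3");
        write_str("/sys/fs/cgroup/rt/cpuset.cpus.partition", "isolated");

        /* Opt in to the remaining, more expensive isolation features. */
        write_str("/sys/fs/cgroup/rt/cpuset.cpus.isolation_full", "1");
        return 0;
}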
2) This file must only apply to offline CPUs because it avoids migrations and
stuff.
Well, the process of first moving the CPUs offline is rather
expensive. I wouldn't mind doing some partial offlining based on the
existing set of teardown and bringup callbacks, but I would try to avoid
fully offlining the CPUs first.
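
For comparison, this is roughly what a userspace tool would have to do per
CPU if settings could only be changed while the CPU is fully offline; the
teardown/bringup cycle below (standard hotplug sysfs files, sketch only) is
the expensive part I would like to avoid:

/*
 * Sketch: full offline/online cycle of CPU 3 around a (hypothetical)
 * isolation reconfiguration step, using the standard hotplug sysfs files.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int set_online(int cpu, int online)
{
        char path[64];
        char val = online ? '1' : '0';
        int fd;

        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu%d/online", cpu);
        fd = open(path, O_WRONLY);
        if (fd < 0 || write(fd, &val, 1) < 0) {
                perror(path);
                return -1;
        }
        return close(fd);
}

int main(void)
{
        /* Full teardown: tasks, IRQs and per-CPU kthreads are moved away. */
        set_online(3, 0);

        /* ... change the isolation settings for CPU 3 here ... */

        /* Full bringup: all hotplug callbacks have to run again. */
        set_online(3, 1);
        return 0;
}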
3) I need to make RCU NOCB tunable only on offline CPUs, which isn't that many
changes.
4) HK_TYPE_TIMER:
* Wrt. timers in general, not much needs to be done since the CPUs are
offline. But:
* arch/x86/kvm/x86.c does something weird
* drivers/char/random.c might need some care
* watchdog needs to be (de-)activated
5) HK_TYPE_DOMAIN:
* This one I fear is not mutable, this is isolcpus...
HK_TYPE_DOMAIN is already available via the current cpuset isolated
partition functionality. What I am currently doing is to extend that to
other HK_TYPE* flags.
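
For reference, kernel code consults these flags through the housekeeping API
in <linux/sched/isolation.h>. A minimal sketch of that pattern (not from this
series, function names made up):

/*
 * Sketch (not from this series): the housekeeping API that kernel code
 * uses to honor the HK_TYPE_* flags. Function names are made up.
 */
#include <linux/sched/isolation.h>
#include <linux/types.h>

/* Pick a CPU that still performs HK_TYPE_WQ housekeeping work. */
static int pick_cpu_for_unbound_work(void)
{
        return housekeeping_any_cpu(HK_TYPE_WQ);
}

/* True if @cpu is excluded from both domain and workqueue housekeeping. */
static bool cpu_excluded_from_domain_and_wq(int cpu)
{
        return !housekeeping_test_cpu(cpu, HK_TYPE_DOMAIN) &&
               !housekeeping_test_cpu(cpu, HK_TYPE_WQ);
}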
6) HK_TYPE_MANAGED_IRQ:
* I prefer not to think about it :-)
7) HK_TYPE_TICK:
* Maybe some tiny ticks internals to revisit, I'll check that.
* There is a remote tick to take into consideration, but again the
CPUs are offline so it shouldn't be too complicated.
8) HK_TYPE_WQ:
* Fortunately we already have all the mutable interface in place.
But we must make it live nicely with the sysfs workqueue affinity
files.
HK_TYPE_WQ is basically done, and it is going to work properly with the
workqueue affinity sysfs files. From the workqueue point of view, HK_TYPE_WQ
is currently treated the same as HK_TYPE_DOMAIN.
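
For completeness, the existing mutable knob on the userspace side is the
global unbound-workqueue cpumask in sysfs. A small sketch of shrinking it by
hand, which is roughly the effect an isolated partition now has for its CPUs
from inside the kernel:

/*
 * Sketch: exclude CPUs 2-3 from unbound workqueues by hand through the
 * existing sysfs knob. The hex mask "3" (CPUs 0-1) assumes a 4-CPU box.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        const char *path = "/sys/devices/virtual/workqueue/cpumask";
        const char *mask = "3\n";       /* keep unbound work on CPUs 0-1 */
        int fd = open(path, O_WRONLY);

        if (fd < 0 || write(fd, mask, strlen(mask)) < 0) {
                perror(path);
                return 1;
        }
        close(fd);
        return 0;
}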
9) HK_FLAG_SCHED:
* Oops, this one is ignored by nohz_full/isolcpus, isn't it?
Should be removed?
I don't think HK_FLAG_SCHED is being used at all. So I believe we should
remove it to avoid confusion.
10) HK_TYPE_RCU:
* That's point 3) and also some kthreads to affine, which leads us
to the following in HK_TYPE_KTHREAD:
11) HK_FLAG_KTHREAD:
* I'm guessing it's fine as long as isolation_full is also an
isolated partition. Then unbound kthreads shouldn't run there.
Yes, isolation_full applies only to isolated partitions. It extends the
amount of CPU isolation by enabling all the other available CPU
isolation flags.
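
On the kthread side, the sketch below (kernel code, not from this series,
names made up) shows the pattern that keeps an unbound kthread on
housekeeping CPUs, which is why CPUs in an isolated partition should stop
seeing such kthreads once HK_TYPE_KTHREAD is honored there:

/*
 * Sketch (not from this series): an unbound kthread kept on the
 * HK_TYPE_KTHREAD housekeeping CPUs. Names are made up for illustration.
 */
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/sched/isolation.h>

static int demo_thread_fn(void *data)
{
        while (!kthread_should_stop())
                msleep_interruptible(1000);
        return 0;
}

static struct task_struct *start_demo_thread(void)
{
        struct task_struct *t = kthread_create(demo_thread_fn, NULL, "demo_hk");

        if (!IS_ERR(t)) {
                /* Confine it to CPUs that keep doing kthread housekeeping. */
                kthread_bind_mask(t, housekeeping_cpumask(HK_TYPE_KTHREAD));
                wake_up_process(t);
        }
        return t;
}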
Cheers,
Longman