On 11/18/24 4:39 AM, Juri Lelli wrote:
On 15/11/24 12:55, Waiman Long wrote:
On 11/15/24 5:54 AM, Juri Lelli wrote:
Hi Waiman,
On 14/11/24 13:19, Waiman Long wrote:
Some recent proposed changes [1] in the deadline server code have caused a
test failure in test_cpuset_prs.sh when a change is made to an isolated
partition. This is due to failing the cpuset_cpumask_can_shrink() check
for SCHED_DEADLINE tasks in validate_change().
What sort of change is being made to that isolated partition? Which test
is failing from the test_cpuset_prs.sh collection? Asking because I now
see "All tests PASSED" running that locally (with all my 3 patches on
top of cgroup/for-6.13 w/o this last patch from you).
The failing test isn't about an isolated partition. The actual test failure is:
Test TEST_MATRIX[62] failed result check!
C0-4:X2-4:S+ C1-4:X2-4:S+:P2 C2-4:X4:P1 . . X5 . . 0 A1:0-4,A2:1-4,A3:2-4
A1:P0,A2:P-2,A3:P-1
In this particular case, cgroup A3 has the following settings before the X5
operation:
A1/A2/A3/cpuset.cpus: 2-4
A1/A2/A3/cpuset.cpus.exclusive: 4
A1/A2/A3/cpuset.cpus.effective: 4
A1/A2/A3/cpuset.cpus.exclusive.effective: 4
A1/A2/A3/cpuset.cpus.partition: root
Right, and is this problematic already?
We allow nested partition setups, so there can be a child partition
underneath a parent partition. This is OK.
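For reference, a nested partition like the one in the test can be set up
along these lines. This is a minimal sketch, assuming cgroup v2 is mounted
at /sys/fs/cgroup with the cpuset controller available; the paths and CPU
ranges are illustrative, not the exact test_cpuset_prs.sh steps, and the
script skips itself when it cannot write to cgroupfs:

```shell
#!/bin/sh
# Sketch: a child partition root (A2) nested under a parent partition
# root (A1). Assumes cgroup v2 at /sys/fs/cgroup (an assumption, not
# taken from the thread).
CG=/sys/fs/cgroup

if [ ! -w "$CG/cgroup.subtree_control" ]; then
    echo "cgroup v2 not writable here; skipping actual setup" >&2
else
    mkdir -p "$CG/A1/A2"
    echo "+cpuset" > "$CG/cgroup.subtree_control"
    echo "+cpuset" > "$CG/A1/cgroup.subtree_control"

    # Parent partition root with CPUs 1-4.
    echo 1-4  > "$CG/A1/cpuset.cpus"
    echo root > "$CG/A1/cpuset.cpus.partition"

    # Child partition root carved out of the parent's CPUs.
    echo 2-4  > "$CG/A1/A2/cpuset.cpus"
    echo root > "$CG/A1/A2/cpuset.cpus.partition"
fi
```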
Then the test, I believe, does
# echo 5 >cgroup/A1/A2/cpuset.cpus.exclusive
and that goes through and makes the setup invalid - root domain reconf
and the following
# cat cgroup/A1/cpuset.cpus.partition
member
# cat cgroup/A1/A2/cpuset.cpus.partition
isolated invalid (Parent is not a partition root)
# cat cgroup/A1/A2/A3/cpuset.cpus.partition
root invalid (Parent is an invalid partition root)
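The steps above can be condensed into a small reproduction sketch. Again a
sketch under assumptions, not the test script itself: the A1/A2/A3 hierarchy
is presumed to already exist under /sys/fs/cgroup as described earlier in the
thread, and the script skips itself when that hierarchy is not writable:

```shell
#!/bin/sh
# Sketch: writing a CPU outside A1's cpuset.cpus range (2-4) into
# A2's cpuset.cpus.exclusive, which invalidates the nested partitions.
CG=/sys/fs/cgroup

if [ -w "$CG/A1/A2/cpuset.cpus.exclusive" ]; then
    # CPU 5 is not in A1's 2-4 range, so the write goes through but
    # leaves the partition setup invalid.
    echo 5 > "$CG/A1/A2/cpuset.cpus.exclusive"

    cat "$CG/A1/cpuset.cpus.partition"        # member
    cat "$CG/A1/A2/cpuset.cpus.partition"     # isolated invalid (...)
    cat "$CG/A1/A2/A3/cpuset.cpus.partition"  # root invalid (...)
else
    echo "A1/A2 hierarchy not present; skipping" >&2
fi
```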
Is this what shouldn't happen?
A3 should become invalid because none of the CPUs in its
cpuset.cpus.exclusive can be granted. However, A2 should remain a valid
partition. I will look further into that. Thanks for spotting this
inconsistency.
Cheers,
Longman