Re: [PATCH-cgroup] cgroup/cpuset: Enable invalid to valid local partition transition

On 10/2/23 06:06, Pierre Gondois wrote:
Hello Waiman,

I was able to test the patch using the for-next branch in your tree.
Just a nit: it seems that the message indicating why the isolated
configuration is invalid is no longer printed:

Commands:
# mkdir cgroup
# mount -t cgroup2 none cgroup/
# mkdir cgroup/A1 cgroup/B1
# echo "+cpuset" > cgroup/cgroup.subtree_control
# echo 0-3 > cgroup/A1/cpuset.cpus
# echo isolated > cgroup/A1/cpuset.cpus.partition
# echo 4-6 > cgroup/B1/cpuset.cpus
# cat cgroup/A1/cpuset.cpus.partition
isolated
# echo 0-4 > cgroup/A1/cpuset.cpus
# cat cgroup/A1/cpuset.cpus.partition
isolated invalid                      <--- used to have '(Cpu list in cpuset.cpus not exclusive)'
# echo 0-3 > cgroup/A1/cpuset.cpus
# cat cgroup/A1/cpuset.cpus.partition
isolated                              <--- now working!


But when creating an isolated partition from cpusets with overlapping
CPU lists, the message is printed:
# mkdir cgroup
# mount -t cgroup2 none cgroup/
# mkdir cgroup/A1 cgroup/B1
# echo "+cpuset" > cgroup/cgroup.subtree_control
# echo 0-4 > cgroup/A1/cpuset.cpus
# echo 4-6 > cgroup/B1/cpuset.cpus
# echo isolated > cgroup/B1/cpuset.cpus.partition

# cat cgroup/A1/cpuset.cpus.partition
member
# cat cgroup/B1/cpuset.cpus.partition
isolated invalid (Cpu list in cpuset.cpus not exclusive) <--- Complete message printed


On 9/30/23 05:44, Waiman Long wrote:
When a local partition becomes invalid, it won't transition back to
a valid partition automatically even if a proper "cpuset.cpus.exclusive"
or "cpuset.cpus" change is made. Instead, system administrators have to
explicitly echo "root" or "isolated" into the "cpuset.cpus.partition"
file at the partition root.
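
(As an illustration of that explicit recovery, a minimal sketch of the
behavior before this change, reusing the A1 layout from the tests in this
thread and assuming A1 was invalidated by an earlier "cpuset.cpus" overlap:)

# cat cgroup/A1/cpuset.cpus.partition
isolated invalid (Cpu list in cpuset.cpus not exclusive)
# echo 0-3 > cgroup/A1/cpuset.cpus                   <--- remove the overlap; without this patch the partition stays invalid
# echo isolated > cgroup/A1/cpuset.cpus.partition    <--- explicit write needed to become valid again
# cat cgroup/A1/cpuset.cpus.partition
isolated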

This patch now enables the automatic transition of an invalid local
partition back to valid when there is a proper "cpuset.cpus.exclusive"
or "cpuset.cpus" change.

Automatic transition of an invalid remote partition to a valid one,
however, is not covered by this patch. It still needs an explicit
write to "cpuset.cpus.partition" to become valid again.

I'm not sure I understand what is meant by 'remote partition';
could you explain? Or does the following illustrate what you
mean?

# mkdir cgroup
# mount -t cgroup2 none cgroup/
# mkdir cgroup/A1 cgroup/B1
# echo "+cpuset" > cgroup/cgroup.subtree_control
# echo 0-3 > cgroup/A1/cpuset.cpus
# echo isolated > cgroup/A1/cpuset.cpus.partition
# echo 4-6 > cgroup/B1/cpuset.cpus
# echo isolated > cgroup/B1/cpuset.cpus.partition

# echo 0-4 > cgroup/A1/cpuset.cpus
# cat cgroup/A1/cpuset.cpus.partition
isolated invalid
# cat cgroup/B1/cpuset.cpus.partition
isolated invalid

# echo 0-3 > cgroup/A1/cpuset.cpus
# cat cgroup/A1/cpuset.cpus.partition
isolated
# cat cgroup/B1/cpuset.cpus.partition
isolated invalid        <--- The B1 partition is not updated

It is probably another corner case that has not been handled. I will look into that.

Thanks for the test.

-Longman




