Hi,

On 18/06/18 12:13, Waiman Long wrote:
> v10:
>  - Remove the cpuset.sched.load_balance patch for now as it may not
>    be that useful.
>  - Break the large patch 2 into smaller patches to make them a bit
>    easier to review.
>  - Test and fix issues related to changing "cpuset.cpus" and cpu
>    online/offline in a domain root.
>  - Rename isolated_cpus to reserved_cpus as this cpumask holds CPUs
>    reserved for child sched domains.
>  - Rework the scheduling domain debug printing code in the last patch.
>  - Document update to the newly moved
>    Documentation/admin-guide/cgroup-v2.rst.

There seem to be two (similar but different) 6/9 patches in the set.
Did something go wrong?

Also, I don't seem to be able to create a subgroup with an isolated
domain root. I think that, when doing the following

 # mount -t cgroup2 none /sys/fs/cgroup
 # echo "+cpuset" >/sys/fs/cgroup/cgroup.subtree_control
 # mkdir /sys/fs/cgroup/g1
 # echo 0-1 >/sys/fs/cgroup/g1/cpuset.cpus
 # echo 1 >/sys/fs/cgroup/g1/cpuset.sched.domain_root

rebuild_sched_domains_locked() exits early, since
top_cpuset.effective_cpus != cpu_active_mask (effective_cpus being 2-3
at this point, since I'm testing this on a 0-3 system).

In your v9, [1] added a special condition to make the rebuilding of
domains happen anyway. Was the change intentional? (I've pasted a
sketch of the check I believe I'm hitting below, after the reference.)

Thanks,

- Juri

[1] https://marc.info/?l=linux-kernel&m=152760142531222&w=2
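
For reference, this is roughly the check in question; a paraphrased
sketch of rebuild_sched_domains_locked() from kernel/cgroup/cpuset.c,
not the exact body in your series:

	/* Rebuild sched domains to reflect the current cpuset config. */
	static void rebuild_sched_domains_locked(void)
	{
		struct sched_domain_attr *attr;
		cpumask_var_t *doms;
		int ndoms;

		lockdep_assert_held(&cpuset_mutex);

		/*
		 * Intended to catch a race with CPU hotplug, so we don't
		 * pass doms with offlined CPUs to partition_sched_domains().
		 * In the reproducer above it also fires without any hotplug:
		 * once g1 takes 0-1, top_cpuset.effective_cpus is 2-3 while
		 * cpu_active_mask is still 0-3.
		 */
		if (!cpumask_equal(top_cpuset.effective_cpus, cpu_active_mask))
			return;

		/* Generate domain masks and attrs */
		ndoms = generate_sched_domains(&doms, &attr);

		/* Have the scheduler rebuild the domains */
		partition_sched_domains(ndoms, doms, attr);
	}

If I read this right, with a domain root child the root's effective_cpus
is by construction a strict subset of cpu_active_mask, so the early
return always fires; which is presumably why v9 carried the special
condition in [1].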