On 06/18/2018 10:20 PM, Juri Lelli wrote:
> Hi,
>
> On 18/06/18 12:13, Waiman Long wrote:
>> v10:
>>  - Remove the cpuset.sched.load_balance patch for now as it may not
>>    be that useful.
>>  - Break the large patch 2 into smaller patches to make them a bit
>>    easier to review.
>>  - Test and fix issues related to changing "cpuset.cpus" and cpu
>>    online/offline in a domain root.
>>  - Rename isolated_cpus to reserved_cpus as this cpumask holds CPUs
>>    reserved for child sched domains.
>>  - Rework the scheduling domain debug printing code in the last patch.
>>  - Document update to the newly moved
>>    Documentation/admin-guide/cgroup-v2.rst.
>
> There seem to be two (similar but different) 6/9 in the set. Something
> went wrong?

The isolated_cpus patch is old. I forgot to remove it before sending out
the patchset.

> Also I can't seem to create a subgroup with an isolated domain root.
> I think that, when doing the following
>
>  # mount -t cgroup2 none /sys/fs/cgroup
>  # echo "+cpuset" >/sys/fs/cgroup/cgroup.subtree_control
>  # mkdir /sys/fs/cgroup/g1
>  # echo 0-1 >/sys/fs/cgroup/g1/cpuset.cpus
>  # echo 1 >/sys/fs/cgroup/g1/cpuset.sched.domain_root
>
> rebuild_sched_domains_locked() exits early, since
> top_cpuset.effective_cpus != cpu_active_mask (effective_cpus being 2-3
> at this point, since I'm testing this on a 0-3 system).
>
> In your v9 this [1] was adding a special condition to make rebuilding
> of domains happen. Was the change intentional?

Can you reply to the relevant patch to pinpoint which condition you are
talking about? I do try to eliminate domain rebuilds as much as
possible, but I am just not sure which condition you have a question
about.

Cheers,
Longman
--
To unsubscribe from this list: send the line "unsubscribe cgroups" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html