On Fri, Feb 19, 2021 at 09:16:11AM +0100, Daniel Wagner wrote:
> Hi Peter,

Hi, Daniel,

>
> On Thu, Feb 18, 2021 at 02:27:29PM -0500, Peter Xu wrote:
> > parse_cpumask() is too strict for oslat, in that use_current_cpuset() will
> > filter out all the cores that the current process is not allowed to run on.
> > This seems unnecessary, at least for oslat.  For example, the bash process
> > that runs the oslat program may have a sched affinity of 0-2, but it is
> > still legal for it to start an oslat thread on cores outside 0-2, as long
> > as the follow-up sched_setaffinity() call succeeds.
> >
> > numa_parse_cpustring_all() suits this case exactly, and it should already
> > take the sysconf(_SC_NPROCESSORS_ONLN) limit into account.  Use that
> > instead.
> >
> > While at it, also remove the initialization of the cpu_set variable, which
> > would otherwise be leaked (as it was with the previous parse_cpumask() call
> > too): numa_parse_cpustring_all() already returns a newly allocated buffer.
> > Quoting from the manual:
> >
> >   numa_parse_nodestring() parses a character string list of nodes into a
> >   bit mask.  The bit mask is allocated by numa_allocate_nodemask().
> >
> >   numa_parse_nodestring_all() is similar to numa_parse_nodestring, but can
> >   parse all possible nodes, not only the current nodeset.
>
> My 2 cents: if parse_cpumask() is too strict, fix parse_cpumask() so that all
> tools do the same thing. Note that parse_cpumask() contains a couple of bugs:
>
>   "rt-numa: Use mask size for iterator limit"
>   "rt-numa: Remove max_cpus argument from parse_cpusmask"

Yes, I explicitly avoided touching parse_cpumask() because I don't want to
change the behavior of the other tools while I'm not confident about that.

Would the above two patches fix oslat too (I didn't check)?  If so, I'm fine
with having this patch dropped.  Otherwise I'd prefer to fix oslat first.

We can still rework the common code later, but if the existing tools are fine,
then that rework is not a bugfix, so there is no need to rush it.  This patch,
on the other hand, I would count as a real bugfix, so I hope we can consider
merging it earlier.

Thanks,

--
Peter Xu
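
For illustration, a minimal standalone sketch (not the actual oslat or
rt-numa code) of the approach the commit message describes: parse a CPU
list with numa_parse_cpustring_all(), which accepts any online CPU rather
than only those in the caller's current cpuset, then hand the result to
sched_setaffinity().  It assumes libnuma headers are installed and is
built with -lnuma; the CPU-list argument is just an example.

/* build: gcc -o cpulist cpulist.c -lnuma */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <numa.h>

int main(int argc, char **argv)
{
	const char *cpulist = argc > 1 ? argv[1] : "0-3";
	struct bitmask *mask;
	cpu_set_t set;
	int cpu;

	/* Returns a newly allocated bitmask; NULL on parse error. */
	mask = numa_parse_cpustring_all(cpulist);
	if (!mask) {
		fprintf(stderr, "invalid CPU list: %s\n", cpulist);
		return 1;
	}

	/* Copy the libnuma bitmask into a cpu_set_t for the scheduler. */
	CPU_ZERO(&set);
	for (cpu = 0; cpu < numa_num_possible_cpus(); cpu++)
		if (numa_bitmask_isbitset(mask, cpu))
			CPU_SET(cpu, &set);

	/*
	 * This may still succeed for CPUs outside the parent shell's
	 * affinity, which is the point made in the commit message above.
	 */
	if (sched_setaffinity(0, sizeof(set), &set))
		perror("sched_setaffinity");

	numa_bitmask_free(mask);
	return 0;
}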