> -----Original Message-----
> From: fio-owner@xxxxxxxxxxxxxxx [mailto:fio-owner@xxxxxxxxxxxxxxx] On Behalf Of Sitsofe Wheeler
> Sent: Monday, October 29, 2018 3:12 PM
> Subject: Re: fio win affinity broken
>
> On Mon, 29 Oct 2018 at 20:04, Jeff Furlong <jeff.furlong@xxxxxxx> wrote:
> >
> > Back in http://git.kernel.dk/cgit/fio/commit/?id=c479640d6208236744f0562b1e79535eec290e2b
> > several updates were made for Windows CPU affinity. However, using the cpus_allowed option
> > since fio 3.6 and with latest git appears to throw error messages. Consider a job:
> >
> > fio --name=test --ioengine=windowsaio --direct=1 --rw=write --overwrite=1 --filename=\\.\PHYSICALDRIVE1 --runtime=1s --thread --cpus_allowed=0-1 --cpus_allowed_policy=shared

Usually split is a better choice than shared - do you really want threads hopping between CPUs?

> > fio_setaffinity: failed to set thread affinity (pid 2768, group 0, mask 10000, GetLastError=87)
> > fio_setaffinity: failed to set thread affinity (pid 2372, group 0, mask 20000, GetLastError=87)
> > clock setaffinity failed: No error

Those are the clock threads, not the job threads. At startup, fio spawns threads on all CPUs
to measure the clocks (fio_monotonic_clocktest).

A difference between Windows and Linux that I mentioned last year: if you've constrained the
CPU affinity outside fio, some of those threads will fail. In Windows, something like

    START /AFFINITY 0x55555555 fio ...

can cause half of the clock threads to fail. In Linux, processes and threads are not
restricted to their parent's affinity mask; although they inherit it by default, they may
call sched_setaffinity() with any values.

I hoped to work on a patch limiting the clock threads to the CPUs within the parent affinity
mask and the logical OR of all the CPUs that are going to be used by the jobs, but then I
lost access to the Windows system I was using and never went back to it.

> > It seems err 87 is ERROR_INVALID_PARAMETER, but any idea why "GROUP_AFFINITY struct's
> > Reserved members are not initialised to 0?"

That's just one possible reason for failure; more likely causes are an invalid Mask or Group
for the system under test.

---
Robert Elliott, HPE Persistent Memory
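
P.S. For anyone who wants to poke at the Windows side outside fio, here's a minimal
standalone sketch (my own illustration, not fio's actual code; the helper name
set_this_thread_affinity is made up) of setting a thread's group affinity. Error 87
(ERROR_INVALID_PARAMETER) comes back if the GROUP_AFFINITY Reserved members aren't zero,
or if Group/Mask don't fit the system under test (e.g. bits outside the process affinity
mask):

    /* affinity_sketch.c - illustrative only, not fio's code. */
    #include <windows.h>
    #include <string.h>
    #include <stdio.h>

    static int set_this_thread_affinity(WORD group, KAFFINITY mask)
    {
        GROUP_AFFINITY ga;

        memset(&ga, 0, sizeof(ga));      /* zeroes the Reserved[] members too */
        ga.Group = group;
        ga.Mask  = mask;

        if (!SetThreadGroupAffinity(GetCurrentThread(), &ga, NULL)) {
            fprintf(stderr, "SetThreadGroupAffinity: GetLastError=%lu\n",
                    (unsigned long) GetLastError());
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        /* CPUs 0-1 in processor group 0, like cpus_allowed=0-1 */
        return set_this_thread_affinity(0, 0x3) ? 1 : 0;
    }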
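
For contrast, a sketch of the Linux behavior described above (again illustrative only;
pin_self_to_cpu is a made-up name): the calling thread can set its own affinity to any
online CPU, even one outside the mask inherited from its parent, cpuset limits aside:

    /* linux_affinity_sketch.c - illustrative only. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    static int pin_self_to_cpu(int cpu)
    {
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(cpu, &set);

        /* pid 0 means the calling thread; this can succeed even if 'cpu'
         * is outside the inherited mask (subject to cpuset limits). */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        return pin_self_to_cpu(1) ? 1 : 0;
    }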