On Wed, May 03, 2023 at 09:41:37AM +0000, Chaitanya Kulkarni wrote:
> On 5/3/23 01:02, Daniel Wagner wrote:
> > Limit the number of random threads to 32 for big machines. This still
> > gives enough randomness but limits the resource usage.
> >
> > Signed-off-by: Daniel Wagner <dwagner@xxxxxxx>
> > ---
>
> I don't think we should change this. The point of all the tests is
> not to limit the resources but to use at least as many threads as
> $(nproc). See the recent patches from Lenovo: they have 448 cores,
> so limiting to 32 is less than 10% of the CPUs, which is a really
> small number for a large machine if we decide to run the tests on
> such a machine ...

I just wonder how to handle the limits for the job size. Hannes asked
to limit it to 32 CPUs so that the job size doesn't get too small:
with nvme_img_size=16M and 448 CPUs, the size per job is roughly 36kB.
Is this good, bad, or does it even make sense? I don't know.

My question is what the policy should be. Should we reject
configurations which try to run with too small job sizes, rejecting
anything below 1M for example? Or is there a metric which we could use
as the base for a limit calculation (disk geometry)?
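
For illustration, here is a minimal shell sketch of the arithmetic
above (the variable names are just for this example, not blktests
internals):

    nproc=448
    img_size=$((16 * 1024 * 1024))    # nvme_img_size=16M, in bytes
    job_size=$((img_size / nproc))    # 37449 bytes, i.e. roughly 36kB
    echo "size per job: ${job_size} bytes"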
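
And if we went with a hard lower bound, a guard along these lines would
make the idea concrete (the 1M cut-off and the names are purely
hypothetical, not an existing blktests helper):

    min_job_size=$((1024 * 1024))     # hypothetical 1M lower bound
    if (( img_size / nproc < min_job_size )); then
            echo "per-job size below 1M, rejecting configuration" >&2
            exit 1    # or skip the test via blktests' skip mechanism
    fi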