On Wed, Jan 22, 2025 at 09:15:48AM +1100, Dave Chinner wrote:
> check-parallel on my 64p machine runs the full auto group test in
> under 10 minutes.
>
> i.e. if you have a typical modern server (64-128p, 256GB RAM and a
> couple of NVMe SSDs), then check-parallel allows a full test run in
> the same time that './check -g smoketest' will run....

Interesting.  I would have thought that even with NVMe SSDs, you'd
be I/O speed constrained, especially given that some of the tests
(especially the ENOSPC hitters) can take quite a lot of time to fill
the storage device, even if they are using fallocate.  How do you
have your test and scratch devices configured?

> Yes, and I've previously made the point about how check-parallel
> changes the way we should be looking at dev-test cycles. We no
> longer have to care that auto group testing takes 4 hours to run and
> have to work around that with things like smoketest groups. If you
> can run the whole auto test group in 10-15 minutes, then we don't
> need "quick", "smoketest", etc to reduce dev-test cycle time
> anymore...

Well, yes, if the only consideration is test run time latency.  I can
think of two offsetting considerations.

The first is cost.  The cheapest 64 CPU, 240 GB VM you can get on
Google Cloud is $3.04 USD/hour (n1-standard-64 in an Iowa data
center), so ten minutes of run time is about 51 cents USD (ignoring
the storage costs).  Not bad.  But running xfs/4k on the auto group
on an e2-standard-2 VM takes 3.2 hours, and the e2-standard-2 VM is
much cheaper, coming in at $0.087651 USD/hour.  That translates to
28 cents for the VM, and that's not taking into account the fact
that you almost certainly need much more expensive, high-performance
storage to support the 64 CPU VM.  So if you don't care about time
to completion (for example, when I'm monitoring the 5.15, 6.1, 6.6,
and 6.12 LTS rc git trees, and kicking off a build whenever Greg or
Sasha updates them), a serialized xfstests run is going to be
cheaper because it can use less expensive cloud resources.

The second concern is that for a certain class of failures (UBSAN,
KCSAN, Lockdep, RCU soft lockups, WARN_ON, BUG_ON, and other
panics/OOPSes), if you are running 64 tests in parallel it might not
be obvious which test caused the failure.  Today, even if the test
VM crashes or hangs, I can have the test manager (which runs on an
e2-small VM costing $0.021913 USD/hour and can manage dozens of test
VMs at the same time) restart the test VM; we know which test is at
fault, and we mark that particular test with the JUnit XML status of
"error" (as distinct from "success" or "failure").  If there are 64
tests running in parallel and I want automated recovery when the
test appliance hangs or crashes, life gets a lot more
complicated.....

I suppose we could have a human (or the test automation) re-run each
individual test that had been running at the time of the crash, but
that's a lot more complicated, and what if the tests pass when run
one at a time?  I guess we should be happy that check-parallel found
a bug that plain check didn't find, but a human being still has to
root cause the failure.

Cheers,

					- Ted
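P.S.  For anyone who wants to play with the numbers, here's the
back-of-the-envelope math above as a quick sketch.  The prices are
the on-demand Iowa rates quoted above, and run_cost is just an
illustrative helper, nothing more:

    # Cost comparison of the two VM shapes discussed above,
    # ignoring storage.  Rates are USD/hour on-demand prices;
    # runtimes are the wall-clock times quoted in this thread.

    def run_cost(usd_per_hour, runtime_hours):
        """Cost of a single xfstests run on one VM."""
        return usd_per_hour * runtime_hours

    # check-parallel on an n1-standard-64: ~10 minutes
    parallel = run_cost(3.04, 10 / 60)        # ~$0.51

    # serialized ./check on an e2-standard-2: ~3.2 hours
    serial = run_cost(0.087651, 3.2)          # ~$0.28

    print(f"parallel: ${parallel:.2f}, serial: ${serial:.2f}")

The crossover obviously moves around with storage pricing and with
how long you're willing to wait for results.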
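P.P.S.  To make the "error" vs. "failure" distinction concrete,
here's a rough sketch of how a test manager might record a test that
was running when the appliance died.  The test names and message
text are made up for illustration; this is not my actual test-runner
code, just the standard JUnit XML shape:

    # Mark a test that could not complete (VM crashed/hung) as
    # "error" rather than "failure" (which means the test ran and
    # its assertions failed).
    import xml.etree.ElementTree as ET

    testsuite = ET.Element("testsuite", name="xfs/4k")
    tc = ET.SubElement(testsuite, "testcase",
                       name="generic/475", classname="xfs/4k")
    # The VM crashed while this test was running, so record an
    # <error> element instead of a <failure> element.
    ET.SubElement(tc, "error",
                  message="test VM crashed; manager restarted it")

    print(ET.tostring(testsuite, encoding="unicode"))

With 64 tests in flight at crash time, you'd have 64 candidate
testcases to mark this way, which is exactly the attribution problem
described above.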