Eric Auger <eric.auger@xxxxxxxxxx> writes:

> Hi Cleber,
>
> On 12/13/23 21:08, Cleber Rosa wrote:
>> Alex Bennée <alex.bennee@xxxxxxxxxx> writes:
>>
>>> Cleber Rosa <crosa@xxxxxxxxxx> writes:
>>>
>>>> Based on many runs, the average run time for these 4 tests is around
>>>> 250 seconds, with 320 seconds being the ceiling. In any case, the
>>>> default 120 second timeout is inappropriate in my experience.
>>>
>>> I would rather see these tests updated to fix:
>>>
>>> - Don't use such an old Fedora 31 image
>>
>> I remember proposing a bump in the Fedora version used by default in
>> avocado_qemu.LinuxTest (which would propagate to tests such as
>> boot_linux.py and others), but that was not well received. I can
>> definitely work on such a version bump again.
>>
>>> - Avoid updating image packages (when will RH stop serving them?)
>>
>> IIUC the only reason for updating the packages is to test the network
>> from the guest, and that could/should be done another way.
>>
>> Eric, could you confirm this?
>
> Sorry for the delay. Yes, effectively I used the dnf install to stress
> the viommu. In the past I was able to trigger viommu bugs that way,
> whereas just getting an IP address for the guest would still succeed.
>>
>>> - The "test" is a fairly basic check of dmesg/sysfs output
>>
>> Maybe the network is also an implicit check here. Let's see what Eric
>> has to say.
>
> To be honest I do not remember how avocado does the check in itself; my
> guess is that if the dnf install does not complete you get a timeout and
> the test fails. But you may be more knowledgeable on this than me ;-)

I guess the problem is that relying on external infrastructure can lead
to unpredictable results. However it's a lot easier to configure user
mode networking just to pull something off the internet than to have a
local netperf or some such setup to generate local traffic. I guess
there is no loopback-like setup which would sufficiently exercise the
code?

>
> Thanks
>
> Eric
>>
>>> I think building a buildroot image with the tools pre-installed (with
>>> perhaps more testing) would be a better use of our limited test time.
>>>
>>> FWIW the runtime on my machine is:
>>>
>>> ➜ env QEMU_TEST_FLAKY_TESTS=1 ./pyvenv/bin/avocado run ./tests/avocado/intel_iommu.py
>>> JOB ID     : 5c582ccf274f3aee279c2208f969a7af8ceb9943
>>> JOB LOG    : /home/alex/avocado/job-results/job-2023-12-11T16.53-5c582cc/job.log
>>>  (1/4) ./tests/avocado/intel_iommu.py:IntelIOMMU.test_intel_iommu: PASS (44.21 s)
>>>  (2/4) ./tests/avocado/intel_iommu.py:IntelIOMMU.test_intel_iommu_strict: PASS (78.60 s)
>>>  (3/4) ./tests/avocado/intel_iommu.py:IntelIOMMU.test_intel_iommu_strict_cm: PASS (65.57 s)
>>>  (4/4) ./tests/avocado/intel_iommu.py:IntelIOMMU.test_intel_iommu_pt: PASS (66.63 s)
>>> RESULTS    : PASS 4 | ERROR 0 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
>>> JOB TIME   : 255.43 s
>>>
>> Yes, I've also seen similar runtimes in other environments... so it
>> looks like it depends a lot on the "dnf -y install numactl-devel". If
>> that can be removed, the tests would have much more predictable
>> runtimes.

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro
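
For reference, the change under discussion is a per-test timeout bump.
A minimal sketch of what that could look like, assuming avocado's
per-class "timeout" attribute (expressed in seconds), the LinuxTest
base class from the avocado_qemu helpers mentioned above, and an
illustrative value of 400 chosen to sit above the ~320 second ceiling
reported in this thread; the value and the trimmed test body are
assumptions, not the actual patch:

# tests/avocado/intel_iommu.py (illustrative sketch only)
from avocado_qemu import LinuxTest


class IntelIOMMU(LinuxTest):
    # Avocado interrupts a test once "timeout" seconds have elapsed;
    # 400 s is an assumed value comfortably above the observed worst
    # case of ~320 s for these tests.
    timeout = 400

    def test_intel_iommu(self):
        # The real test boots a guest with an intel-iommu device and
        # checks dmesg/sysfs output (and currently runs a dnf install
        # over the network); body elided here.
        ...

When the timeout expires avocado stops the test and reports it as an
interruption rather than a pass or failure, which is why legitimately
slow runs need the limit raised rather than left at the default.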