Thank you all very much for your input. A long time ago I needed to run some graphical tests in %check, and I vaguely remember using some dummy driver which was then superseded(?) by xvfb; then for some reason I stopped needing to care about all that, and that cache in my brain got flushed. Thanks for putting that info back.

Thinking it was the easiest option, I went with Karolina's suggestion and skipped the offending module. Sure enough, that allowed the tests to proceed and led to a wall of failures:

= 475 failed, 12 passed, 4 deselected, 4 warnings, 112 errors in 146.02s (0:02:26) =

https://koji.fedoraproject.org/koji/taskinfo?taskID=129355209

I was hoping that Ben, who authored the pytest part of the spec file, would be able to shed some light on what is going on, though I suspect I need to get in touch with upstream. I can't make heads or tails of anything. Why does test_autoload fail? What should the "Patch is already started" error message mean to me?

I know our guidelines about unit tests, and I can see the point in upstream having them in place, but do we as a downstream distribution need to run all of them? This is not like a math library that might produce different results on different architectures; this program either works or it doesn't. 500 tests that ultimately tell us nothing about the state of the program once it is installed seem like overkill.

-- 
_______________________________________________
devel mailing list -- devel@xxxxxxxxxxxxxxxxxxxxxxx
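[For anyone following along, the module-skip approach in %check can be sketched like this; tests/test_gui.py is a hypothetical name standing in for the offending module, and xvfb-run is only needed if the remaining tests still touch a display:]

```spec
%check
# Skip collection of the offending module entirely; xvfb-run -a provides
# a virtual X server on a free display number for tests that need one.
xvfb-run -a %pytest --ignore tests/test_gui.py
```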