Re: Need help with "cannot open display" error when %pyproject_check_import is run

I gather that you have already found that this will work for the import-only "smoke test":

    # Make sure everything is at least importable. Skip
    # inputremapper.bin.input_remapper_gtk because it requires a display.
    %pyproject_check_import -e inputremapper.bin.input_remapper_gtk

I spent a little time attempting an update to input-remapper 2.1.1 locally. I think that errors like

    _________________________ ERROR at setup of test_setup _________________________
    file /builddir/build/BUILD/input-remapper-2.1.1-build/input-remapper-2.1.1/tests/lib/test_setup.py, line 33
      def test_setup(cls):
    E       fixture 'cls' not found
    >       available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory
    >       use 'pytest --fixtures [testpath]' for help on them.

    /builddir/build/BUILD/input-remapper-2.1.1-build/input-remapper-2.1.1/tests/lib/test_setup.py:33

suggest that upstream has accidentally made the tests incompatible with pytest as a runner: because test_setup takes an argument, pytest collects it as a test function and treats the cls parameter as a request for a fixture named 'cls', which does not exist. That’s unfortunate, because using pytest as the test runner let us skip known-failing tests easily with command-line options, while skipping an individual test under unittest requires patching the test source to add a decorator (see the sketch below).
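
To make the contrast concrete, here is a sketch of both approaches. The test name is borrowed from the failures quoted later in this thread; the file path and skip reason are my assumptions:

    # With pytest as the runner, a known-failing test can be deselected from
    # the command line in %check, with no change to the sources, e.g.:
    #
    #     %pytest --deselect tests/unit/test_config.py::TestConfig::test_autoload
    #
    # Under unittest, the equivalent is a downstream patch that adds a skip
    # decorator to the test source:
    import unittest

    class TestConfig(unittest.TestCase):
        @unittest.skip("fails in the Fedora buildroot; needs investigation")
        def test_autoload(self):
            ...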

Still, I thought it was worth trying to run the tests closer to the way upstream runs them in .github/workflows/test.yml:

    %{py3_test_envvars} %{python3} -m unittest discover tests/unit

Unfortunately, I still see a wall of errors like

    ======================================================================
    ERROR: test_autoload (test_config.TestConfig.test_autoload)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "/builddir/build/BUILD/input-remapper-2.1.1-build/input-remapper-2.1.1/tests/lib/test_setup.py", line 79, in setUp
        patch.start()
        ~~~~~~~~~~~^^
      File "/usr/lib64/python3.13/unittest/mock.py", line 1652, in start
        result = self.__enter__()
      File "/usr/lib64/python3.13/unittest/mock.py", line 1474, in __enter__
        raise RuntimeError("Patch is already started")
    RuntimeError: Patch is already started

I really don’t have any immediate insight into why that is happening.
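
For whatever it’s worth, unittest.mock raises that error whenever start() is called on a patcher that is already active, i.e. a patcher object is being reused without a matching stop(). A minimal reproduction, unrelated to input-remapper’s actual fixtures:

    # Minimal reproduction of "Patch is already started" with unittest.mock.
    from unittest.mock import patch

    patcher = patch("os.getcwd", return_value="/fake")
    patcher.start()  # first start: fine
    patcher.start()  # RuntimeError: Patch is already started
    patcher.stop()

So one guess, which I have not verified, is that the support code in tests/lib/test_setup.py keeps its patchers at module or class scope and starts them again in every setUp without stopping them in between.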

It’s a shame to disable the tests altogether: this package has extensive test coverage, and the tests serve as a very useful early warning of incompatibilities with new Python interpreter versions, system libraries, and so on. However, running tests is a "should," not a "must" (https://docs.fedoraproject.org/en-US/packaging-guidelines/Python/#_tests), and the difficulty of getting them working in the current release looks high enough that simply omitting them would be justifiable. You’ll want to keep the %pyproject_check_import "smoke test," of course; see the sketch below.
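
If you go that route, the %check section could shrink to just the smoke test. A sketch, to be reconciled with whatever else the spec currently does there:

    %check
    # Upstream's test suite is currently not runnable in the buildroot (see
    # above); keep the import smoke test, excluding the module that needs a
    # display.
    %pyproject_check_import -e inputremapper.bin.input_remapper_gtk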

- Ben Beasley (FAS: music)

On 2/17/25 7:55 PM, Alexander Ploumistos wrote:
Thank you all very much for your input.

A long time ago I needed to run some graphical tests in %check, and
I vaguely remember using some dummy driver that was later
superseded(?) by xvfb; then for some reason I stopped needing to
care about all that, and that cache in my brain got flushed. Thanks
for putting that info back.

Thinking it was the easiest option, I decided to go with Karolina's
suggestion and skip the offending module, which, sure enough, allowed
the tests to proceed and led to a wall of failures:

= 475 failed, 12 passed, 4 deselected, 4 warnings, 112 errors in 146.02s (0:02:26) =

https://koji.fedoraproject.org/koji/taskinfo?taskID=129355209

I was hoping that Ben, who authored the pytest part of the spec file,
would be able to shed some light on what is going on, though I
suspect I need to get in touch with upstream. I can't make heads or
tails of anything. Why does test_autoload fail? What should the "Patch
is already started" error message mean to me?

I know our guidelines about unit tests, and I can see the point in
upstream having them in place, but do we as a downstream distribution
need to run all of them? It's not like a math library that might
produce different results on different architectures; this program
either works or it doesn't. 500 tests that ultimately tell us nothing
about the state of the program once it is installed seem like
overkill.



