On 10. 04. 21 7:50, Stef Walter wrote:
Hey Miro,
Sad to hear that it's been so rough.
On Wed, Apr 7, 2021 at 9:59 AM Miro Hrončok <mhroncok@xxxxxxxxxx> wrote:
Hello,
I was torn about whether to share this here or not. I don't want to be the one who
always complains about things, but in the end I decided that without honest
feedback, there cannot be progress (and I've realized I already am that guy).
Please don't take this feedback personally; I know that building things is hard.
I am not criticizing people, only the tools.
Almost two years ago, we decided to be early adopters of gating in Fedora
with the python-virtualenv package:
https://src.fedoraproject.org/rpms/python-virtualenv/c/66b7533376f
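For readers who have not set this up themselves: gating is enabled by adding a
gating.yaml file to the package's dist-git repository, which tells Greenwave
which test results must pass before an update ships. A minimal sketch, based on
the documented Fedora CI policy format (the test case name here is illustrative,
not necessarily the one python-virtualenv gates on):

```yaml
# gating.yaml -- sketch of a Greenwave gating policy
# (test_case_name below is illustrative, not virtualenv's actual test)
--- !Policy
product_versions:
  - fedora-*
decision_context: bodhi_update_push_testing
subject_type: koji_build
rules:
  - !PassingTestCaseRule {test_case_name: fedora-ci.koji-build.tier0.functional}
```

Once this file is present, updates for the package wait on the listed test
results, and failures must be fixed or waived before the update proceeds.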
Gating has proved more problematic than useful. It almost never works reliably,
and the problems are impossible to decipher or debug. Too often we had to ask
for human intervention from a CI expert, or outright waive the results.
The humans we contacted were always very friendly and helpful, and they were
able to solve our issues. However, human-operated CIs unfortunately don't scale
very well.
Heh heh.
At first, we assumed the issues would get ironed out with time, but there seems
to be no visible progress.
Moreover, gating has caught zero issues, because we already test our changes via
pull requests.
I'm not sure if others have a similar experience, or if we just got unlucky :(
Martin Pitt recently posted a blog post about how he's been using the same tests
and environments upstream in pull requests and downstream in Fedora gating. He
also talks about "Fedora gating woes" there. Perhaps you share similar concerns,
and some of his pragmatic solutions may apply.
https://cockpit-project.org/blog/fmf-unified-testing.html
Thanks for the link, Stef.
We will certainly look into fmf and tmt; it has been on my TODO list for a
while. I had no idea it is meant to be more reliable than the Standard Test
Interface, which certainly moves it higher on my priority list :)
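For anyone else curious what fmf/tmt look like in practice: tests and plans are
small fmf (YAML-like) metadata files stored in the package or upstream
repository. A minimal sketch, following tmt's documented plan format (the file
name and summary are made up for illustration):

```yaml
# plans/smoke.fmf -- hypothetical plan file
summary: Run the package's smoke tests
discover:
  how: fmf   # collect test definitions from .fmf files in this repo
execute:
  how: tmt   # run the discovered tests via tmt
```

Such a plan can be exercised locally with `tmt run`, and the same metadata can
be consumed downstream by Fedora CI, which is the unification Martin's post
describes.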
As for what is said in the blog post, I am not sure that running tests upstream
using the *exact same environment* is what we want. We want to run tests
downstream using a Fedora environment that is as up to date as possible. The
problem we are trying to solve is "make sure that if we do this downstream, it
still works in up-to-date Fedora", not "if upstream commits this change, make
sure it does not break in this pinned environment". For me at least, it's all
about integration into the distro, not about regression testing -- upstream
already has that covered.
(I believe the Python situation in this regard is different from projects that
target the Fedora ecosystem as their primary deployment platform, such as
Cockpit or Anaconda.)
--
Miro Hrončok
--
Phone: +420777974800
IRC: mhroncok
_______________________________________________
devel mailing list -- devel@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to devel-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/devel@xxxxxxxxxxxxxxxxxxxxxxx
Do not reply to spam on the list, report it: https://pagure.io/fedora-infrastructure