James Antill wrote:
>>> 1. Too many people want to be consumers of the testing but not the
>>> providers of it.
>>
>> I think that's an unwarranted assumption. How many people even know
>> about updates-testing, compared to the people who never change
>> defaults?
>
> Certainly everyone in this thread knows about it.

So maybe a dozen people...

>> How does someone using updates-testing ensure that their usage
>> 'provides' something?
>
>   bodhi -k +1 -c 'pkg FOO, works for me'
>
> ...or even just leave the comment.

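For reference, my reading of those flags -- which I haven't verified
against the client's help output, so check 'bodhi --help' yourself:

    # assumed meanings of the flags in the one-liner above, unverified:
    #   -k : karma to attach (+1 = "works for me")
    #   -c : free-form comment text
    bodhi -k +1 -c 'pkg FOO, works for me'
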
That's (a) not something many people are likely to do, and (b) not quite
what I want to know. I want to know that the combination of packages
installed on machine X at time Y worked together correctly (including
the kernel and base libs that can affect the package in question without
being specifically tied to it) before I install that same set on machine
Z at time Y + something.

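As a concrete sketch of what I mean -- nothing fancy, just a manifest
diff, and the file names here are made up:

    # on machine X, once the set has proven itself:
    rpm -qa --qf '%{name}-%{version}-%{release}.%{arch}\n' \
        | sort > /tmp/known-good.txt

    # on machine Z, after a 'yum update':
    rpm -qa --qf '%{name}-%{version}-%{release}.%{arch}\n' \
        | sort > /tmp/machine-z.txt
    diff /tmp/known-good.txt /tmp/machine-z.txt

That tells me after the fact whether Z ended up matching X; what's
missing is a way to make 'yum update' produce that match in the first
place.
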
>>> Indeed IMO the whole updates-tested argument seems to devolve to "I'm
>>> going to be clever and switch to this, but I'm pretty sure a bunch of
>>> other people aren't going to know immediately and so will become my
>>> unwilling testers".

>> No, the argument is this:
>>
>> If I had a way to be moderately sure that my main work machine would
>> be usable every day running Fedora, and I could test things on a less
>> important machine, I'd be much more likely to run Fedora more of the
>> time and on more machines.

> So subscribe your work machine to just updates, and your test machine
> to updates-testing ... what is the problem here?

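The mechanics of that part are easy enough -- on the test box you either
flip enabled=1 in the updates-testing repo file (fedora-updates-testing.repo
under /etc/yum.repos.d/, if I remember the name right) or enable it per
run:

    # pull in pending updates on the test machine only:
    yum --enablerepo=updates-testing update

That much isn't the problem.
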
Is the flow exactly predictable? That is, can I know that the package
set I get from updates will correspond exactly to what I tested earlier?
What is the process for problems detected in testing?

>>> 2. The people who are the providers of the testing aren't necessarily
>>> running the same kinds of workloads as the people who want to just be
>>> consumers of the testing.

>> Exactly - it doesn't work that well as is. And even if I wanted to
>> test exactly the same workload on exactly the same kind of machine, I
>> don't think I could predictably 'consume' that testing value - that
>> is, there is no way for me to know when or if a 'yum update' on my
>> production machine is going to reproduce exactly the software
>> installed on my test machine.
>>
>> (Personally I think this is a generic yum problem and it should
>> provide an option for reproducible installs regardless of what is
>> going on in the repositories, but that's a slightly different
>> issue...).

> Sure, it's one of the many things on the TODO list to fix ... and with
> yum-debug-dump / yum shell / etc. there are a couple of ways of hacking
> this kind of thing in.
>
> However if you were running updates-testing refreshes fairly often,
> then anything going into updates would be fine for you, by definition.

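A rough sketch of the yum shell route, for the record -- the file name
is made up, and it only helps if the exact versions are still present in
the repos, which is the crux of the matter:

    # on the test machine: capture the proven package set as a yum
    # shell script of explicit name-version-release.arch installs
    rpm -qa --qf 'install %{name}-%{version}-%{release}.%{arch}\n' \
        | sort > /tmp/pkgs.yumshell
    echo 'run' >> /tmp/pkgs.yumshell

    # on the production machine: try to replay that exact set
    yum shell /tmp/pkgs.yumshell

But that's a hack, not a guarantee, which brings me back to the question:
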
Are sets of updates moved atomically from one repo to the next, holding
back the whole set for a problem in one package, or will I really get an
unpredictable grouping out of updates?
--
Les Mikesell
lesmikesell@xxxxxxxxx