On 5/14/20 4:59 PM, Nico Kadel-Garcia wrote:
It's an ongoing problem. EPEL's decision to show only the most recent versions of RPMs, and to trim old RPMs out, is a destabilizing problem and why I make hardlinked snapshots of EPEL using "rsnapshot" for internal access to old packages.
So that's a problem we have with Fedora (e.g. a hardware regression that keeps some hardware from working on the latest kernel, *but* if users were not updating often enough, the last known-good version might no longer be available unless they pull it from Koji).
But it does not really apply to EPEL package retirement: if a decision is made to retire packages for a given branch, it does not matter how many versions you keep; they will all be retired, right?
Perhaps having additional metadata in dist-git (e.g. "retire from EL 8.2 onwards") would allow maintainers to keep building as normal. We could then publish epel/8-all, epel/8.1, epel/8.2, etc. repos, where the first repo has all packages and the others are trees of symlinks into 8-all with the retired packages filtered out.
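To make that concrete, here's a rough sketch (Python; the retirements.json format and all paths are invented for illustration, nothing like this exists in dist-git today) of how a filtered per-minor-release tree could be generated out of 8-all:

    # Sketch only: build epel/8.2 as symlinks into epel/8-all, skipping
    # packages whose (hypothetical) metadata says "retired from 8.2 onwards".
    import json
    import os

    ALL_REPO = "/srv/repos/epel/8-all/Packages"      # invented path
    FILTERED_REPO = "/srv/repos/epel/8.2/Packages"   # invented path
    TARGET_MINOR = (8, 2)

    # Hypothetical metadata, e.g. {"foo": "8.2", "bar": "8.1"}, meaning
    # "retired starting with that minor release".
    with open("/srv/repos/epel/retirements.json") as f:
        retired_from = {name: tuple(int(p) for p in rel.split("."))
                        for name, rel in json.load(f).items()}

    def rpm_name(filename):
        # name-version-release.arch.rpm -> name (drop the last two dashes)
        return filename.rsplit("-", 2)[0]

    os.makedirs(FILTERED_REPO, exist_ok=True)
    for fn in os.listdir(ALL_REPO):
        if not fn.endswith(".rpm"):
            continue
        name = rpm_name(fn)
        if name in retired_from and retired_from[name] <= TARGET_MINOR:
            continue  # already retired as of this minor release, leave it out
        link = os.path.join(FILTERED_REPO, fn)
        if not os.path.lexists(link):
            os.symlink(os.path.join(ALL_REPO, fn), link)

Each filtered tree would still need its own repodata run (createrepo_c), and someone would have to own the retirement metadata, so it's not free either.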
But that won't work if there are ABI incompatibilities between the minor releases... so yeah, overall I think the best solution is to just try and get CentOS releases out the door sooner rather than later.
Or have a "purgatory" repo where packages retired in EL 8.2 get to live until, say, a month after CentOS 8.2 is GA? Again, seems like too much work.
In our case, hopefully this is a one-off (it's because we deploy RPM-with-zstd internally).
--
Michel Alexandre Salim
profile: https://keybase.io/michel_slm
chat via email: https://delta.chat/
GPG key: 96A7 A6ED FB4D 2113 4056 3257 CAF9 AD10 ACB1 BEF2