On Mon, Oct 05, 2020 at 08:30:12AM -0400, Stephen John Smoogen wrote:
> On Mon, 5 Oct 2020 at 02:24, Adrian Reber <adrian@xxxxxxxx> wrote:
> > On Wed, Jul 01, 2020 at 09:16:05AM -0700, Kevin Fenzi wrote:
> > > On Tue, Jun 30, 2020 at 02:17:30PM +0200, Adrian Reber wrote:
> > > > On Mon, Jun 15, 2020 at 03:36:23PM -0700, Kevin Fenzi wrote:
> > > > > On Wed, Jun 10, 2020 at 11:09:49AM +0200, Adrian Reber wrote:
> > > > > > Then I just have to wait a bit. No problem.
> > > > > >
> > > > > > Having the possibility to generate the mirrorlist input data in
> > > > > > about a minute would significantly reduce the load on the database
> > > > > > server and enable us to react much faster if broken protobuf data
> > > > > > has been synced to the mirrorlist servers on the proxies.
> > > > >
> > > > > Yeah, and I wonder if it would let us revisit the entire sequence
> > > > > from 'update push finished' to updated mirrorlist server.
> > >
> > > This would help us with the case of:
> > > - updates push/rawhide finishes, master mirror is updated.
> > > - openqa/other internal thing tries to get images or updates in that
> > >   change and gets a metalink with the old checksum so it can't get the
> > >   new stuff.
> > > - mm-backend01 generates and pushes out a new protobuf.
> > >
> > > > Probably. As the new code will not run on the current RHEL 7 based
> > > > mm-backend01 would it make sense to run a short-running service like
> > > > this on Fedora's OpenShift? We could also create a new read-only
> > > > (SELECT only) database account for this.
> > >
> > > We could, or as smooge suggests make a mm-backend02?
> > >
> > > But I guess now mm-backend02 just generates new protobuf files and
> > > copies them to mirrorlists? If that's all it's doing, perhaps we could
> > > indeed replace it with an openshift project.
> > > > We need a system to run the tool and copy the data to all proxies.
> > > >
> > > > I would like to see a new MirrorManager database user who can only do
> > > > selects as that is all we need.
> > > >
> > > > Currently we copy the files via SSH to the proxies, if we continue
> > > > doing it that way, then we would also need the existing SSH key to
> > > > copy the data to the proxies.
> > > >
> > > > Easiest would probably be a small Fedora 32 based VM with 2GB of
> > > > memory.
> > >
> > > I'm not sure f32 will work with 2gb memory anymore. I don't think it
> > > installs at any rate.
> > >
> > > I do like the idea of just making it an openshift pod. Perhaps this
> > > could even fit with pingou's 'toddlers' setup. ie:
> >
> > I tried to create a toddler, but that setup is too complicated for me.
> > Especially if something is not working it will be almost impossible for
> > me to debug it if it is running somewhere I cannot reach via SSH.
> >
> > I just tried to build the generate-mirrorlist-cache on RHEL 7 (using
> > Rust from EPEL) and it works fine. Instead of 20 minutes it needs 30
> > seconds to generate the mirrorlist cache file on mm-backend01.
> >
> > Although an RPM is available in Fedora I am not sure the RPM can be
> > made available in EPEL 7.
> >
> > RPM Fusion has been using the Rust-based generate-mirrorlist-cache for
> > some months already and I do not see any problems with it.
>
> We are not wanting to deploy new EL7 systems but would probably install
> an EL8 box for this. Does this change the need for moving to Fedora on
> it?

I just asked on #fedora-rust, but it seems it is not easily possible to
build the Fedora Rust packages for EL8. If I am understanding it
correctly, we need to run the Rust-based mirrorlist cache generation on
a Fedora host.
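The SELECT-only MirrorManager database account requested earlier in the thread could be sketched roughly as below, assuming a PostgreSQL-backed MirrorManager. The role name, password, database name, and schema here are illustrative placeholders, not the real infrastructure values:

```shell
#!/bin/sh
# Sketch: emit the SQL for a read-only (SELECT-only) MirrorManager account.
# All names below are assumptions for illustration; the real database,
# schema, and role names in Fedora infrastructure may differ.
make_readonly_sql() {
    cat <<'EOF'
CREATE USER mirrormanager_ro WITH PASSWORD 'changeme';
GRANT CONNECT ON DATABASE mirrormanager TO mirrormanager_ro;
GRANT USAGE ON SCHEMA public TO mirrormanager_ro;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO mirrormanager_ro;
-- Also cover tables created after this grant is issued:
ALTER DEFAULT PRIVILEGES IN SCHEMA public
    GRANT SELECT ON TABLES TO mirrormanager_ro;
EOF
}

# On the database server this would be piped into psql, e.g.:
#   make_readonly_sql | psql -U postgres mirrormanager
make_readonly_sql
```

Since the account can only SELECT, a compromise of the cache-generation host could not modify the MirrorManager data.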
If we have a second mm-backend system (mm-backend02) that is Fedora
based to generate the mirrorlist cache, we could decrease the amount of
RAM on mm-backend01 from 32GB to something like 8GB.

		Adrian
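The generate-and-push flow the thread describes (regenerate the protobuf mirrorlist cache, then copy it over SSH to each proxy) could be sketched as below. The proxy host names, file paths, SSH user, and the `--config` flag are hypothetical, and the script defaults to a dry run that only prints the commands it would execute:

```shell
#!/bin/sh
# Sketch of the mirrorlist cache generation and push step.
# Hostnames, paths, user name, and tool flags are assumptions, not the
# actual Fedora infrastructure configuration.
DRY_RUN=${DRY_RUN:-1}

# In dry-run mode print the command instead of executing it.
run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

push_cache() {
    # Regenerate the protobuf cache; the Rust tool needs ~30 seconds
    # where the old implementation needed ~20 minutes, per the thread.
    run generate-mirrorlist-cache --config /etc/mirrormanager/mirrormanager2.cfg

    # Copy the fresh cache to every proxy over the existing SSH channel.
    for proxy in proxy01.fedoraproject.org proxy02.fedoraproject.org; do
        run rsync -az -e ssh /srv/mirrorlist/mirrorlist_cache.proto \
            "mirroradmin@${proxy}:/srv/mirrorlist/"
    done
}

push_cache
```

Run with `DRY_RUN=0` to actually execute; a faster cycle here is what would let the mirrorlist servers pick up a fresh push (or a fix for broken protobuf data) within minutes instead of waiting for the next long run.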
_______________________________________________
infrastructure mailing list -- infrastructure@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to infrastructure-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/infrastructure@xxxxxxxxxxxxxxxxxxxxxxx