On Tue, Jun 30, 2020 at 02:17:30PM +0200, Adrian Reber wrote:
> On Mon, Jun 15, 2020 at 03:36:23PM -0700, Kevin Fenzi wrote:
> > On Wed, Jun 10, 2020 at 11:09:49AM +0200, Adrian Reber wrote:
> > >
> > > Then I just have to wait a bit. No problem.
> > >
> > > > > Having the possibility to generate the mirrorlist input data in about a
> > > > > minute would significantly reduce the load on the database server and
> > > > > enable us to react much faster if broken protobuf data has been synced
> > > > > to the mirrorlist servers on the proxies.
> > > >
> > > > Yeah, and I wonder if it would let us revisit the entire sequence from
> > > > 'update push finished' to updated mirrorlist server.
> >
> > This would help us with the case of:
> > - updates push/rawhide finishes, master mirror is updated.
> > - openqa/other internal thing tries to get images or updates in that
> >   change and gets a metalink with the old checksum so it can't get the
> >   new stuff.
> > - mm-backend01 generates and pushes out a new protobuf.
> >
> > > Probably. As the new code will not run on the current RHEL 7 based
> > > mm-backend01, would it make sense to run a short-running service like
> > > this on Fedora's OpenShift? We could also create a new read-only (SELECT
> > > only) database account for this.
> >
> > We could, or as smooge suggests, make a mm-backend02?
> >
> > But I guess now mm-backend02 just generates new protobuf files and copies
> > them to the mirrorlists? If that's all it's doing, perhaps we could indeed
> > replace it with an OpenShift project.
>
> We need a system to run the tool and copy the data to all proxies.
>
> I would like to see a new MirrorManager database user who can only do
> SELECTs, as that is all we need.
>
> Currently we copy the files via SSH to the proxies; if we continue doing
> it that way, then we would also need the existing SSH key to copy the
> data to the proxies.
>
> Easiest would probably be a small Fedora 32 based VM with 2GB of memory.

I'm not sure f32 will work with 2GB of memory anymore. I don't think it
installs at any rate.

I do like the idea of just making it an OpenShift pod. Perhaps this could
even fit with pingou's 'toddlers' setup, i.e.:

* listen for a message saying a repo has updated
* update the db
* create the protobuf
* push out to the proxies

(a rough sketch of such a consumer is included below)

The only weird part of putting it in OpenShift is that we would need to
have fedora_ftp (ro) available there as a volume, but that is doable...

kevin
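To make that concrete, here is a minimal sketch of what such a toddlers-style
fedora-messaging consumer could look like. The trigger topic, the command
names, the paths and the proxy hosts are placeholders for illustration only;
the real tool invocations and locations would need to be filled in.

import subprocess

from fedora_messaging import api, message

# Placeholders only: the real "repo has updated" topic, MirrorManager commands
# and proxy hosts are not specified here and would need to be confirmed.
TRIGGER_TOPIC_SUFFIX = "compose.status.change"
PROXIES = ["proxy01.example.org", "proxy02.example.org"]


def on_message(msg: message.Message):
    """Regenerate the mirrorlist input data and push it to the proxies."""
    if not msg.topic.endswith(TRIGGER_TOPIC_SUFFIX):
        return

    # 1. update the db: scan the updated content on the master mirror
    #    (placeholder command standing in for the real MirrorManager tooling)
    subprocess.run(["mm-update-master-directory-list"], check=True)

    # 2. create the protobuf input for the mirrorlist servers
    #    (placeholder command name)
    subprocess.run(["generate-mirrorlist-cache"], check=True)

    # 3. push out to the proxies over SSH, reusing the existing key
    for proxy in PROXIES:
        subprocess.run(
            ["rsync", "-a", "/srv/mirrorlist/", f"{proxy}:/srv/mirrorlist/"],
            check=True,
        )


if __name__ == "__main__":
    # Blocks and calls on_message() for every message matching the bindings
    # configured in /etc/fedora-messaging/config.toml.
    api.consume(on_message)

In the OpenShift case the fedora_ftp (ro) volume and the SSH key would just be
mounted into the pod so the scan and rsync steps can reach them.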