On Wed, Jul 01, 2020 at 09:16:05AM -0700, Kevin Fenzi wrote:
> On Tue, Jun 30, 2020 at 02:17:30PM +0200, Adrian Reber wrote:
> > On Mon, Jun 15, 2020 at 03:36:23PM -0700, Kevin Fenzi wrote:
> > > On Wed, Jun 10, 2020 at 11:09:49AM +0200, Adrian Reber wrote:
> > > >
> > > > Then I just have to wait a bit. No problem.
> > > >
> > > > > > Having the possibility to generate the mirrorlist input data in about a
> > > > > > minute would significantly reduce the load on the database server and
> > > > > > enable us to react much faster if broken protobuf data has been synced
> > > > > > to the mirrorlist servers on the proxies.
> > > > >
> > > > > Yeah, and I wonder if it would let us revisit the entire sequence from
> > > > > 'update push finished' to updated mirrorlist server.
> > >
> > > This would help us with the case of:
> > > - updates push/rawhide finishes, master mirror is updated.
> > > - openqa/other internal thing tries to get images or updates in that
> > >   change and gets a metalink with the old checksum, so it can't get the
> > >   new stuff.
> > > - mm-backend01 generates and pushes out a new protobuf.
> > >
> > > > Probably. As the new code will not run on the current RHEL 7 based
> > > > mm-backend01, would it make sense to run a short-running service like
> > > > this on Fedora's OpenShift? We could also create a new read-only (SELECT
> > > > only) database account for this.
> > >
> > > We could, or, as smooge suggests, make an mm-backend02?
> > >
> > > But I guess now mm-backend02 just generates new protobuf files and copies
> > > them to mirrorlists? If that's all it's doing, perhaps we could indeed
> > > replace it with an OpenShift project.
> >
> > We need a system to run the tool and copy the data to all proxies.
> >
> > I would like to see a new MirrorManager database user who can only do
> > selects, as that is all we need.
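For illustration, a SELECT-only account could look roughly like this in PostgreSQL. This is only a sketch: the role name, password, and database name are placeholders, not the real MirrorManager setup.

```sql
-- Hypothetical SELECT-only role; all names are placeholders.
CREATE ROLE mirrormanager_ro WITH LOGIN PASSWORD 'changeme';
GRANT CONNECT ON DATABASE mirrormanager TO mirrormanager_ro;
GRANT USAGE ON SCHEMA public TO mirrormanager_ro;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO mirrormanager_ro;
-- Also cover tables created after this point:
ALTER DEFAULT PRIVILEGES IN SCHEMA public
    GRANT SELECT ON TABLES TO mirrormanager_ro;
```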
> >
> > Currently we copy the files via SSH to the proxies; if we continue doing
> > it that way, then we would also need the existing SSH key to copy the
> > data to the proxies.
> >
> > Easiest would probably be a small Fedora 32 based VM with 2 GB of memory.
>
> I'm not sure f32 will work with 2 GB of memory anymore. I don't think it
> installs, at any rate.
>
> I do like the idea of just making it an OpenShift pod. Perhaps this
> could even fit with pingou's 'toddlers' setup, i.e.:
>
> * listen for a message saying a repo has updated
> * update the db
> * create the protobuf
> * push out to the proxies
>
> The only weird part of putting it in OpenShift is that we would need to
> have fedora_ftp (ro) available there as a volume, but that is doable...

No, this part only needs to talk read-only to the database. It does not
touch anything on disk besides writing the output. I guess you were
thinking about the umdl (update-master-directory-listing) part. That
would need read-only access to the file system.

The part I am talking about just reads the database and creates a
protobuf snapshot of the database, which is then used by the mirrorlist
servers on the proxies. Currently it runs once every hour, which has
worked pretty well so far. Triggering it on a message makes only limited
sense, as it depends on the results of the crawler. We could run it
twice an hour to have newer database snapshots on the proxies.

How can I prepare it for running in OpenShift? Can I use the
configuration for toddlers? Where can I find that?

		Adrian
_______________________________________________
infrastructure mailing list -- infrastructure@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to infrastructure-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/infrastructure@xxxxxxxxxxxxxxxxxxxxxxx
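[Editor's note: to make the "read the database with SELECT only, write a snapshot, push it to the proxies" flow above concrete, here is a rough, self-contained sketch. Everything in it is hypothetical: sqlite3 and pickle stand in for the real PostgreSQL database and the protobuf mirrorlist cache, and the table and file names are made up, not MirrorManager's actual schema.]

```python
#!/usr/bin/env python3
# Hypothetical sketch of the hourly snapshot job discussed above.
# sqlite3 and pickle stand in for PostgreSQL and the protobuf cache;
# table and file names are invented for illustration.
import os
import pickle
import sqlite3
import tempfile


def build_snapshot(conn):
    """Read the mirror data with SELECT-only queries into a plain dict."""
    cur = conn.execute("SELECT name, url FROM mirrors")
    return {name: url for name, url in cur.fetchall()}


def write_snapshot_atomically(snapshot, path):
    """Write to a temp file and rename it into place, so the mirrorlist
    servers never read a half-written cache file."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    with os.fdopen(fd, "wb") as f:
        pickle.dump(snapshot, f)
    os.replace(tmp, path)  # atomic rename on POSIX filesystems


if __name__ == "__main__":
    # Simulate the read-only database the job would connect to.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE mirrors (name TEXT, url TEXT)")
    conn.execute("INSERT INTO mirrors VALUES ('example', 'https://example.org/pub')")
    snap = build_snapshot(conn)
    write_snapshot_atomically(snap, "mirrorlist_cache.pkl")
    # The real job would then copy the file out to all proxies over SSH.
    print(sorted(snap))  # prints ['example']
```

The write-then-rename step is the part worth keeping regardless of the rest: it means a proxy can pick up the new snapshot at any moment without ever seeing a truncated file.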