On Tue, 2019-07-23 at 14:57 -0400, Josh Boyer wrote:
>
> > Also, we can't really solve the machine resources of mirrors. Well, I
> > mean, I guess we *could*, but I doubt anyone in RH is going to sign off
> > on us buying a ton of expensive storage hardware and shipping it off to
> > random universities around the world...
>
> Honestly, I'm less concerned about this. Why? Because anything new
> like this does not immediately require the full weight of a mirror
> system. The level of interest is likely to be small enough at the
> start that we can and should approach it in a measured way.

True, but given the way our build process, repo layout and mirroring
system work, it's not easy to leave a bit of Fedora out when you're
mirroring it. rsync modules can't really do the job in cases like
this, because they're based on directory structures, combined with how
we structure our repos. Mirrors have to use some kind of script with a
filter instead; quick-fedora-mirror helps, but you still have to write
and maintain the filters yourself (rough sketches of both approaches
below, after my sig).

My mirror currently rejoices in this:

FILTEREXP='(/i386/|/armhfp/|/aarch64/|/source/|/SRPMS|/debug/|\.iso|\.qcow2|\.raw\.xz|\.box|/releases/test|/22/|/23/|/24/|/25/|/26/|/27/)'

to try to reduce the amount of bandwidth it eats. Which is, of course,
fun to remember about and maintain. And hey, look: indeed I haven't,
because I didn't add 28 to it yet...

Of course we could completely rearrange how we build and store things,
but...see under 'human resources' :)
--
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net
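
For the hand-rolled "script with a filter" approach, it boils down to
rsync with a pile of exclude patterns. A minimal sketch, assuming the
public fedora-enchilada rsync module and a made-up local target path
(both illustrative, not anyone's production config):

# Rough sketch of a filtered mirror run. Patterns ending in '/'
# exclude any directory with that name; '*.ext' excludes files.
# Old releases would need one more exclude each (e.g. '27/'),
# hand-added every cycle, which is exactly the maintenance burden.
rsync -avSH --delete \
    --exclude='i386/' --exclude='armhfp/' --exclude='aarch64/' \
    --exclude='source/' --exclude='SRPMS/' --exclude='debug/' \
    --exclude='*.iso' --exclude='*.qcow2' --exclude='*.raw.xz' \
    --exclude='*.box' --exclude='releases/test/' \
    rsync://dl.fedoraproject.org/fedora-enchilada/linux/ \
    /srv/mirror/fedora/linux/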
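
As for FILTEREXP, it's just an extended regex matched against candidate
paths. A sketch of the idea (not quick-fedora-mirror's actual
internals) showing exactly how 28 slipped through:

FILTEREXP='(/i386/|/armhfp/|/aarch64/|/source/|/SRPMS|/debug/|\.iso|\.qcow2|\.raw\.xz|\.box|/releases/test|/22/|/23/|/24/|/25/|/26/|/27/)'

# /27/ matches the filter, so grep -v drops it: correctly skipped.
echo '/fedora/linux/releases/27/Everything/x86_64/os/p/foo.rpm' \
    | grep -Ev "$FILTEREXP"
# /28/ matches nothing in the list, so the path prints: it would
# still get synced, eating bandwidth, until someone edits the regex.
echo '/fedora/linux/releases/28/Everything/x86_64/os/p/foo.rpm' \
    | grep -Ev "$FILTEREXP"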