On Wed, Jan 10, 2018 at 12:01 PM, Stephen John Smoogen <smooge@xxxxxxxxx> wrote:
>
> On 10 January 2018 at 14:46, Andrew Lutomirski <luto@xxxxxxx> wrote:
> >
> > On Wed, Jan 10, 2018 at 11:38 AM, Stephen John Smoogen <smooge@xxxxxxxxx>
> > wrote:
> >>
> >> On 10 January 2018 at 14:23, Andrew Lutomirski <luto@xxxxxxx> wrote:
> >> >> On Jan 9, 2018, at 9:59 AM, Kevin Fenzi <kevin@xxxxxxxxx> wrote:
> >> >>
> >> >>> On 01/08/2018 10:53 AM, Kevin Kofler wrote:
> >> >>> Kevin Fenzi wrote:
> >> >>>> Well, if this firefox update was urgent, shouldn't it have been
> >> >>>> marked urgent?
> >> >>>
> >> >>> Urgency is always in the eye of the beholder. I as a user consider all
> >> >>> security updates "urgent", and in addition, I want ALL updates as soon
> >> >>> as they have passed testing, no matter whether they actually are urgent.
> >> >>
> >> >> You also don't want updates-testing to even exist, right?
> >> >>
> >> >>>>> I really don't understand why we do this "batched" thing to begin
> >> >>>>> with.
> >> >>>>
> >> >>>> To reduce the constant flow of updates that are very minor or affect
> >> >>>> very few people, mixed in with the major updates that affect lots of
> >> >>>> people and are urgent.
> >> >>>
> >> >>> But the users were already able to opt to update only weekly. So why
> >> >>> force a fixed schedule on them?
> >> >>
> >> >> To save all the Fedora users in the world from having to update
> >> >> metadata for minor changes. Since there's an hourly dnf makecache,
> >> >> every user in the world pulls down new metadata every time we update
> >> >> a repo.
> >> >
> >> > Could Fedora, perhaps, come up with a way to make incremental metadata
> >> > updates fast? This shouldn't be particularly hard -- a tool like
> >> > casync or even svn should work pretty well. Or it could be a simple
> >>
> >> This sounds a lot like the Atomic project and how it does things...
> >>
>
> Maybe some of Atomic's infrastructure could be used to distribute metadata
> for regular old Fedora.
>
> OK, clearly I should not have tried to help with a single-sentence email,
> as it didn't help anyone. I should have asked more questions about what
> you meant by metadata, what you meant by distribution, and how svn would
> have been used instead.

SVN is probably a poor model, but imagine that the various metadata files
(repodata, I think) were checked in, uncompressed, to an SVN repo. Then the
client could do 'svn up' and take advantage of delta compression. I suspect
the server load would be too high, though.

Basically, what I'm getting at is that Fedora seems to distribute comps,
prestodelta, primary, filelists, and updateinfo, and they add up to 20 MB
compressed for the updates repo. AFAICT it gets re-downloaded from scratch
every time, even though the actual list of changes is probably minimal from
day to day.

>
> I should have also asked if you knew enough about how atomic did things
> to give an informed opinion on how that was similar to the request.
> Without that we ended up talking past each other.
>

I don't know too much about the details. I suspect that, if the .xml.gz
files were, instead, one file per package, then they could be served up
from an ostree repo.

_______________________________________________
devel mailing list -- devel@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to devel-leave@xxxxxxxxxxxxxxxxxxxxxxx
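[Editor's sketch, not part of the original thread.] The delta-compression argument above can be illustrated with a small, self-contained example. The file contents are invented, and Python's difflib merely stands in for whatever delta encoding svn, casync, or zchunk would actually use over the wire:

```python
import difflib

# Hypothetical stand-in for two daily snapshots of an uncompressed
# metadata file (e.g. primary.xml); names and contents are made up.
day1 = ["<package name='pkg%d' version='1.0'/>" % i for i in range(10000)]
day2 = list(day1)
day2[42] = "<package name='pkg42' version='1.1'/>"    # only two packages
day2[999] = "<package name='pkg999' version='2.0'/>"  # changed overnight

# Full transfer: the client re-downloads every line from scratch.
full_size = sum(len(line) for line in day2)

# Delta transfer (roughly what 'svn up' would move): only the changed
# hunks plus a few lines of context around each change.
delta = list(difflib.unified_diff(day1, day2, lineterm=""))
delta_size = sum(len(line) for line in delta)

print("full: %d bytes, delta: %d bytes" % (full_size, delta_size))
```

With only a handful of packages changing between snapshots, the delta is a tiny fraction of the full file, which is the whole appeal of an incremental scheme for daily repo metadata.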