Re: disabling yum modular repos by default?

On Tue, May 09, 2023 at 02:21:46PM +0200, Petr Pisar wrote:
> On Tue, May 09, 2023 at 12:31:54PM +0800, Jens-Ulrik Petersen wrote:
> > I have been thinking about proposing a Change to Fedora 39,
> > which would disable yum modular repos by default in installs.
> > I thought I would float the idea here first.
> > 
> > I suspect the vast majority of Fedora users don't use
> > the modular repos
> 
> Probably. But I don't know. We will know from countme statistics after we
> disable them.
> 
> >, so I don't see the point of enabling
> > them by default anymore. Does this make sense?
> >
> That makes sense.
> 
> > I know dnf5 is coming with performance improvements
> > but I still think turning off the modular repos would speed up dnf
> > and save users a lot of time.
> >
> How much is a lot of time?
> 
> I measured a cached "dnf upgrade" (i.e. DNF4) on Rawhide, 5 times each without and
> with the modular repository; the times were 1.022 vs. 1.090 seconds, i.e. a 6.2% speedup.
> 
> Then I removed the caches and looked at download times. I again did 5 tries, but
> the variance in the download times reported by DNF was too large for the numbers
> to be meaningful. So I can only say that the size of the transmitted data is
> 71.0 MB for the non-modular repositories and 1.6 MB for the modular repository.
> However, even this comparison is not entirely fair, as different compression
> algorithms are used.
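For reference, the 6.2% figure can be reproduced from those two timings with a couple of lines of Python (taking the speedup relative to the slower run, with modular repos enabled):

```python
# Timings quoted above: cached "dnf upgrade" with and without modular repos.
with_modular = 1.090     # seconds, modular repos enabled
without_modular = 1.022  # seconds, modular repos disabled

# Speedup relative to the run with modular repos enabled.
speedup = (with_modular - without_modular) / with_modular * 100
print(f"{speedup:.1f}% speedup")  # 6.2% speedup
```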

Those results will depend a lot on the network. (I *did* say "spotty
network" in my original message.)  At home I have the problem that
most dnf download operations are OK, but every once in a while I get a
slow connection (I don't know whether it's a question of a different
mirror, or just some temporary congestion, etc.)  So each extra
metadata download increases the chances of hitting a bad connection.
This doesn't scale proportionally to the size of the downloads, but
rather is some complex function that grows with the number of
downloads and their sizes.
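As a toy illustration (purely hypothetical numbers, not measurements): if each metadata download independently hits a bad connection with some probability p, the chance of at least one slow download per update grows quickly with the number of repos:

```python
def p_at_least_one_bad(p: float, n_downloads: int) -> float:
    """Probability that at least one of n independent downloads is slow."""
    return 1 - (1 - p) ** n_downloads

# With a made-up 5% chance per download, compare few vs. many repos:
print(p_at_least_one_bad(0.05, 2))  # ~0.10
print(p_at_least_one_bad(0.05, 6))  # ~0.26
```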

Another case is a network with high latency: even if the data is fresh, just
checking if it needs to be re-downloaded takes some time that grows with
the number of repos.
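Back-of-the-envelope, assuming (hypothetically) that each repo's freshness check costs roughly one round trip, that overhead scales linearly with the repo count:

```python
# Hypothetical figures: one metadata freshness check ~ one round trip per repo.
rtt_seconds = 0.3  # assumed round-trip time on a high-latency link
for n_repos in (4, 8):
    cost = n_repos * rtt_seconds
    print(f"{n_repos} repos -> ~{cost:.1f}s spent just checking freshness")
```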

(FWIW, I did some measurements when writing this, and essentially the
standard deviation of the download time is larger than the download time
on this machine, so I'm not posting those meaningless numbers.)
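In case anyone wants to repeat the exercise, the mean vs. standard-deviation comparison is trivial with the standard library (the timings below are placeholders, not my actual measurements):

```python
import statistics

# Placeholder download times in seconds; substitute your own measurements.
samples = [2.1, 9.8, 1.7, 14.3, 2.4]

mean = statistics.mean(samples)
stdev = statistics.stdev(samples)  # sample standard deviation
print(f"mean={mean:.2f}s stdev={stdev:.2f}s")  # mean=6.06s stdev=5.70s
# When the stdev is comparable to (or larger than) the mean, the numbers
# say little about the underlying download time.
```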

Zbyszek
_______________________________________________
devel mailing list -- devel@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to devel-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/devel@xxxxxxxxxxxxxxxxxxxxxxx
Do not reply to spam, report it: https://pagure.io/fedora-infrastructure/new_issue



