Re: Proposal: Faster composes by eliminating deltarpms and using zchunked rpms instead

On 21. 11. 18 at 14:36, Kamil Paral wrote:
On Fri, Nov 16, 2018 at 11:13 PM Jonathan Dieter <jdieter@xxxxxxxxx> wrote:

    For reference, this is in reply to Paul's email about lifecycle
    objectives, specifically focusing on problem statement #1[1].

    <tl;dr>
    Have rpm use zchunk as its compression format, removing the need for
    deltarpms and thus reducing compose time.  This will require changes
    to the rpm format as well as new features in the zchunk format.
    </tl;dr>


Hey Jonathan,

thanks for working on this. The proposed changes sound good to me. I'm a bit worried that zchunk is not yet a proven format, so it might be a good idea to use it for metadata first, see whether it works as expected, and then push it for RPM files. But that's for more technical people to judge.

I have some concrete questions, though:
1. I have noticed that especially with large RPMs (firefox, chrome, atom, game data like 0ad-data, etc), my PCs are mostly bottlenecked by CPU when installing them. And that's with a modern 3.5+GHz CPU. That's because RPM decompression runs in a single thread only, and xz is just unbelievably slow. I wonder, would zchunk used as an RPM compression algorithm improve this substantially? Can it decompress in multiple threads and/or does it have much faster decompression speeds (and how much)? I don't care about RPM size increase, but I'd really like to have them installed fast. (That's of course just my personal preference, but this also affects the speed of mock builds and such, so I think it's relevant.)
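For what it's worth, the property of zchunk that matters here is that each chunk is compressed independently, which is what makes multi-threaded decompression possible at all. A minimal sketch of that idea, using zlib from the Python standard library as a stand-in for zstd (the helper names are mine, not zchunk's API):

```python
# Sketch only: zchunk splits a stream into independently compressed chunks;
# that independence is what allows parallel decompression. zlib stands in
# for zstd here because it ships with Python.
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_chunks(data, chunk_size=64 * 1024):
    # Compress each fixed-size slice on its own, as zchunk does per chunk.
    return [zlib.compress(data[i:i + chunk_size])
            for i in range(0, len(data), chunk_size)]

def decompress_parallel(chunks, workers=4):
    # Chunks share no state, so they can be inflated concurrently
    # (zlib releases the GIL during decompression).
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return b"".join(pool.map(zlib.decompress, chunks))

payload = b"example payload " * 10_000
restored = decompress_parallel(compress_chunks(payload))
```

Whether this actually beats single-threaded xz on real packages depends on chunk size, compression ratio, and I/O, so the speedup is something to measure rather than assume.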

Well, at the moment I'm far more concerned about the absurd size of the rpm repo metadata.

Often my upgrade first downloads 200 MB of metadata to update 20 MB of actual RPMs.

Please, anyone - fix this first, before anyone starts to parallelise decompression. That is a minor problem compared with the amount of metadata being processed...
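As it happens, this is exactly the case that zchunked metadata is meant to address: the client keeps the chunks it already has locally and fetches only the chunks whose checksums changed in the remote index. A rough sketch of that comparison (the index layout and names here are illustrative, not zchunk's real on-disk format):

```python
# Sketch only: a zchunk-style client downloads the remote chunk index
# (a list of checksums), compares it with chunks it already has, and
# fetches only the missing ones instead of the whole metadata file.
import hashlib

def chunk_ids(chunks):
    # One checksum per chunk identifies its exact content.
    return [hashlib.sha256(c).hexdigest() for c in chunks]

def chunks_to_fetch(remote_index, local_chunks):
    # Any remote chunk whose checksum we already hold locally is reused.
    have = set(chunk_ids(local_chunks))
    return [i for i, cid in enumerate(remote_index) if cid not in have]

old = [b"pkg-a 1.0", b"pkg-b 2.0", b"pkg-c 3.0"]
new = [b"pkg-a 1.0", b"pkg-b 2.1", b"pkg-c 3.0"]  # only pkg-b changed
needed = chunks_to_fetch(chunk_ids(new), old)
```

With chunk boundaries chosen sensibly (e.g. per package entry), an update that touches 10% of the metadata should transfer roughly 10% of it, plus the index.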

The next topic: replace/rewrite ONLY new files - files that have NOT changed need not be written at all (writing a file takes far more time than decompressing it); in fact, such files might not even need to be decompressed (depending on the compression layout).
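The skip-unchanged idea boils down to a digest check before the write: rpm already records per-file digests in the package header, so the installer can compare the on-disk file against that recorded value and skip matching files entirely. A minimal sketch, assuming sha256 digests (the helper names are hypothetical, not rpm's API):

```python
# Sketch only: compare the digest recorded for an incoming file with the
# file already on disk, and skip the (expensive) write when they match.
import hashlib
import tempfile
from pathlib import Path

def file_digest(path):
    # Digest of the current on-disk file, or None if it does not exist.
    try:
        return hashlib.sha256(path.read_bytes()).hexdigest()
    except FileNotFoundError:
        return None

def install_file(path, new_content, expected_digest):
    # Returns True if the file was written, False if it was skipped.
    if file_digest(path) == expected_digest:
        return False  # on-disk copy already matches: no write needed
    path.write_bytes(new_content)
    return True

with tempfile.TemporaryDirectory() as d:
    target = Path(d) / "example.conf"
    content = b"key = value\n"
    digest = hashlib.sha256(content).hexdigest()
    first = install_file(target, content, digest)   # no file yet: written
    second = install_file(target, content, digest)  # unchanged: skipped
```

Skipping the decompression as well, as suggested above, additionally requires a compression layout where unchanged files can be located and skipped without inflating the whole payload, which is precisely what a chunked format could provide.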

Thanks

Zdenek
_______________________________________________
devel mailing list -- devel@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to devel-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://getfedora.org/code-of-conduct.html
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/devel@xxxxxxxxxxxxxxxxxxxxxxx



