On Mon, 2018-11-19 at 19:58 +0000, Jonathan Dieter wrote:
> On Mon, 2018-11-19 at 20:16 +0100, Jan Pokorný wrote:
> > On 19/11/18 13:13 +0100, Nicolas Mailhot wrote:
> > > On 2018-11-19 12:28, Martin Kolman wrote:
> > > >
> > > > Many people might think RAM would not be an issue in 2018, but in
> > > > practice there are, and likely always will be, memory-constrained
> > > > installation targets, such as massive deployments of "small" VMs
> > > > or the IoT use cases mentioned above.
> > >
> > > Sure, that’s the artificial small-VM case.
> > >
> > > The average old/limited hardware is limited in memory, CPU and
> > > storage. Therefore, if you have one factor to sacrifice, it's CPU
> > > time, because you can always let the CPU run a little longer, but a
> > > limited system won't magically grow more memory or more storage.
> > >
> > > Storage would not be such a problem if dnf were smart enough to
> > > automatically split big upgrades into lots of small partial
> > > upgrades before downloading gigabytes of data that do not fit on
> > > disk.
> >
> > https://bugzilla.redhat.com/show_bug.cgi?id=1609824
> >
> > Also, I'm not familiar with the zchunk way of doing things, but
> > couldn't rpm-integrity-verified installed files be mapped back to
> > "chunks" to further alleviate space concerns for the machine
> > receiving updates in some cases?
>
> That's an interesting thought. I was picturing using the zchunk
> library in the dnf download stage to build a local rpm from the
> verified locally installed files and the downloaded changed chunks,
> but, if I understand your suggestion correctly, you're saying we could
> just download the changed chunks and have RPM automatically get the
> rpm-integrity-verified chunks during the *install* stage.

How do you know which chunks to download without a stored (or
recomputed) list of existing chunks?

> The advantage of this method is that you don't need to store the local
> data twice, but the danger is that the local files get changed
> elsewhere during the install process.
>
> It's an interesting thought, though, and I wonder if there's a way we
> could work around that danger?

I do not think you can just trust random metadata somewhere; one of the
points of an rpm reinstall is to fix damaged files, for example. It does
no good if you skip those because some file somewhere says they are "OK"
(if I understood your comment about "just downloading changed chunks"
correctly).

A couple more questions. I quickly skimmed the format and there are two
things I did not immediately see an answer for:

1) Why are you still supporting SHA-1 in a new format?
2) What are the chunk sizes?

Sorry if this is already answered somewhere.

Finally, what signature scheme were you planning to use? And how do you
deal with the data you want to "exclude" from signing: omit it, or feed
in blank "sectors"?

Thanks for any answer.

Simo.

-- 
Simo Sorce
Sr. Principal Software Engineer
Red Hat, Inc
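
As a rough illustration of the chunk-selection question above, here is a
minimal sketch (Python, with hypothetical names; this is not zchunk's
actual API, only the digest-comparison bookkeeping being discussed):
walk the chunk index published for the new file, recompute digests for
whatever verified local data is available, and fetch only the chunks
whose digests are not found locally.

import hashlib

def chunks_to_download(remote_index, local_chunks, algo="sha256"):
    """Return indices of remote chunks that cannot be satisfied locally.

    remote_index -- list of hex digests, one per chunk of the new file
    local_chunks -- iterable of byte strings recomputed from locally
                    installed (and rpm-verified) data
    """
    # Recompute digests of whatever is already on disk; this is the
    # "stored (or recomputed) list of existing chunks" in the question.
    local_digests = {hashlib.new(algo, c).hexdigest() for c in local_chunks}
    return [i for i, digest in enumerate(remote_index)
            if digest not in local_digests]

# Example: only the middle chunk changed, so only it needs fetching.
old_chunks = [b"common prologue", b"old payload", b"common epilogue"]
new_index = [hashlib.sha256(c).hexdigest()
             for c in (b"common prologue", b"new payload", b"common epilogue")]
print(chunks_to_download(new_index, old_chunks))   # -> [1]

Whether those local digests are recomputed at download time or read from
stored metadata is exactly the trust question raised above: digests taken
from stored metadata cannot tell you that the files on disk are still
intact.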