Re: Small rant: installer environment size

On Thu, Dec 8, 2022, at 10:06 AM, Daniel P. Berrangé wrote:

> Is there something that can be done to optimize the RAM usage,
> in spite of the large installer env size ?

What's wrong with the RAM usage now?

We do semi-regularly run into issues with openQA VMs running out of memory. So far they've all been considered bugs when we hit them, and the offending change gets reverted, including one system-wide change that bumped the tmpfs size on /tmp quite a lot. Keeping the VMs at 2G has helped surface changes that, in retrospect, shouldn't have been made. So even though it's a pain to regularly hit these bugs, I don't think there's a problem per se with the 2G RAM selection. When everything is working as intended, swap on zram gets used quite heavily and behaves as expected.
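For anyone who wants to see what zram is actually doing in one of these VMs, here's a rough Python sketch that reads the kernel's documented /sys/block/zram0/mm_stat interface. zram0 is the usual Fedora device name, but that's an assumption on my part:

    #!/usr/bin/env python3
    # Rough sketch: report zram usage from /sys/block/zram0/mm_stat.
    # Field order per Documentation/admin-guide/blockdev/zram.rst:
    #   orig_data_size compr_data_size mem_used_total mem_limit
    #   mem_used_max same_pages pages_compacted huge_pages
    # Assumes the default Fedora zram device name, zram0.
    fields = open("/sys/block/zram0/mm_stat").read().split()
    orig, compr, used = (int(fields[i]) for i in (0, 1, 2))

    MIB = 1024 * 1024
    print(f"original data : {orig / MIB:8.1f} MiB")
    print(f"compressed    : {compr / MIB:8.1f} MiB")
    print(f"total mem used: {used / MIB:8.1f} MiB")
    if compr:
        print(f"ratio         : {orig / compr:.2f}:1")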


> If we're installing off DVD media, it shouldn't be required to
> pull all of the content into RAM, since it can be fetched on
> demand from the media. 

AFAIK this doesn't happen. Files are read in only on demand; there's no monolithic read of everything into the page cache.

> IOW, 99% of the firmware never need
> leave the ISO, so shouldn't matter if firmware is GBs in size [1]
> if we never load it off the media. Same for languages, only
> the one we actually want to use should ever get into RAM.

At first glance it might seem like you'd save memory by not enabling swap on zram.

But without it, anonymous pages can't be compressed, and they can't be dropped either, since they have no files backing them. The result is that file pages get dropped in memory-tight situations, and we end up taking (file) page faults, which are super expensive here. Yes, you can read those pages back from their files, but even when only a 4K read is needed, it can translate into upwards of 1 MiB of actual reads: finding the 4K extent requires reading multiple 4K ext4 metadata blocks, which aren't necessarily colocated in a single 128 KiB squashfs block, so we end up reading 1 to 10 squashfs blocks, taking the CPU and memory hit to decompress them just to reveal the 4K we need, and dropping the rest. And then doing that on every page fault. So I'd say it's asking for a performance hit that isn't really going to save much memory.
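To put rough numbers on that (my own back-of-the-envelope illustration in Python, assuming the default 128 KiB squashfs block size and the 1-to-10 block range above):

    #!/usr/bin/env python3
    # Back-of-the-envelope read amplification for a single 4 KiB page
    # fault against a squashfs-backed file. The 1-10 block range is the
    # rough estimate from above, not a measurement.
    PAGE = 4 * 1024               # the 4 KiB we actually want
    SQUASHFS_BLOCK = 128 * 1024   # default squashfs block size

    for blocks in (1, 10):
        read = blocks * SQUASHFS_BLOCK
        print(f"{blocks:2d} block(s): read and decompress {read // 1024} KiB "
              f"to satisfy one {PAGE // 1024} KiB fault "
              f"({read // PAGE}x amplification)")

Worst case, that's about 1.25 MiB read and decompressed for every 4 KiB we actually wanted.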

On high-latency devices like USB sticks, it makes for a bad user experience.


> If we're installing off a network source, we need to pull content
> into RAM, but that doesn't mean we should pull everything in at
> once upfront.

Pretty sure the netinstaller's big RAM hit is repo metadata. All of it is downloaded before partitioning is done, so the repo metadata isn't stored on disk but in memory, on a tmpfs, where it may not be compressed either (at the least, it isn't subject to swap on zram out of the gate). I'm pretty sure partitioning does happen before packages are downloaded, though, which means the packages themselves get stored on disk, not in memory.

But the repo metadata is pretty big these days, and that's a significant memory hit for netinstalls.
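If someone wants to put a number on it in the installer environment: tmpfs pages show up as Shmem in /proc/meminfo, so a quick Python sketch like this gives an upper bound (Shmem covers all tmpfs and shared memory, not just the repo metadata):

    #!/usr/bin/env python3
    # Sketch: upper bound on tmpfs-resident memory via /proc/meminfo.
    # Shmem counts all tmpfs and shared memory pages, so the repo
    # metadata is at most this much.
    meminfo = {}
    for line in open("/proc/meminfo"):
        key, rest = line.split(":")
        meminfo[key] = int(rest.split()[0])  # values are in kB

    print(f"Shmem (tmpfs + shm): {meminfo['Shmem'] / 1024:.1f} MiB")
    print(f"MemAvailable       : {meminfo['MemAvailable'] / 1024:.1f} MiB")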



-- 
Chris Murphy
_______________________________________________
devel mailing list -- devel@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to devel-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/devel@xxxxxxxxxxxxxxxxxxxxxxx
Do not reply to spam, report it: https://pagure.io/fedora-infrastructure/new_issue



