On Fri, 2020-11-06 at 10:59 -0800, Samuel Sieb wrote:
> On 11/6/20 3:36 AM, Patrick O'Callaghan wrote:
> > On Thu, 2020-11-05 at 15:23 -0800, Samuel Sieb wrote:
> > > On 11/5/20 2:23 PM, Patrick O'Callaghan wrote:
> > > > On Thu, 2020-11-05 at 10:47 -0800, Samuel Sieb wrote:
> > > > > Yes. But have you enabled zram yet for swap?
> > > >
> > > > This is a clean install, so it's enabled by default and has reserved
> > > > 4GB. I think that's what's actually causing the OOMs. I have 16GB of
> > > > RAM. On F32 I could run an 8GB VM with hugepages, i.e. dedicated
> > > > memory, plus normal stuff including multiple browser tabs etc. and
> > > > never had a problem. Now as soon as I try to start the VM it gets OOM
> > > > errors, and multiple Firefox tabs are failing and have to be restarted.
> > >
> > > Check what's using the memory. zram doesn't *reserve* memory. It
> > > doesn't use any memory until you start swapping out. Try increasing the
> > > zram size. I would suggest at least 12GB. On my 12GB laptop, I have it
> > > set to 12GB.
> >
> > Trying to get my head around that. You mean all of your RAM is
> > potentially usable as compressed swap? How does that work? Surely it
> > can never reach that limit?
>
> No, that's the uncompressed size. In general, the compression is at
> least 3:1, so the 12GB of swap takes up a maximum of 4GB of RAM. My zram
> config appears to be a little confused at this point, but here's what
> one device looks like:
>
> # zramctl
> NAME       ALGORITHM DISKSIZE  DATA COMPR TOTAL STREAMS MOUNTPOINT
> /dev/zram1 lz4             5G  4.9G  1.3G  1.4G       4
>
> It's currently storing 4.9GB of swap data using 1.4GB of RAM.

OK.
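For anyone wanting to act on that advice: on recent Fedora the zram swap device is set up by zram-generator, so the uncompressed size can be raised in its config file. A rough sketch, assuming the default zram-generator config path and key names; the 12288 (MB) value is illustrative, matching the 12GB suggested above:

```ini
# /etc/systemd/zram-generator.conf
[zram0]
# Uncompressed device size in MB. With ~3:1 compression, a full
# 12GB device should occupy at most ~4GB of actual RAM.
zram-size = 12288
compression-algorithm = lz4
```

If I have the unit name right, `systemctl restart systemd-zram-setup@zram0.service` (or a reboot) should apply it, and `zramctl` will show the new DISKSIZE.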
poc
_______________________________________________
users mailing list -- users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to users-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/users@xxxxxxxxxxxxxxxxxxxxxxx