
Re: Re: Re: Linux EXT3 optimizations for Squid

Heinz Diehl wrote:
On 09.08.2010, Marcus Kool wrote:
I think at least swappiness should better be 100 here, to free as much
memory as possible. Unused applications hanging around for a long time
can take up quite a lot of memory which could otherwise be used as
page cache.
Do you have any proof to support this theory?

These are my thoughts, based on how vm.swappiness works:
unused applications may consume a lot of memory when swappiness is low (or
zero), because they rarely or never get swapped out. With swappiness at 100,
applications which have not been in use for some time will quickly be
swapped out, freeing memory which can be used by active
applications, e.g. squid. If the system runs really low on memory, the
kernel will start to swap out anyway, even when you set swappiness to 0
(at least in the mainline kernel; there's a patch in Con Kolivas'
BFS/CK patchset which addresses this).

What you really want is that the system utilizes all of its physical memory.
This gives better overall system performance.
Aggressively freeing memory just because one likes to see free memory
only produces page misses and swap-ins.  Your thoughts seem to be based
on systems with very high memory loads, which need more memory or fewer
applications.

Changing vm.swappiness from its default of 60 to 20 means that a system
with low to medium memory pressure (read: healthy pressure) utilizes
more of its memory and does fewer swap-outs and swap-ins.  On systems with
high memory pressure, this parameter has little effect: the system has
a set of processes that are too hungry for memory...  buy more memory
or run fewer applications.
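To make this concrete, a minimal sketch (my illustration, not part of the
original recommendation) of how the setting would look in /etc/sysctl.conf:

# Prefer keeping application pages resident instead of swapping them out.
# The kernel default is 60; lower values mean fewer swap-outs and swap-ins
# under low to medium memory pressure.
vm.swappiness=20

Apply it with 'sysctl -p', or test it temporarily with
'sysctl -w vm.swappiness=20'.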

The vfs_cache_pressure is used when there is pressure: when the system
is running low on memory. That is always bad, but Linux has a configurable
choice: free up some memory by reducing the file cache, or free up
some memory by reducing the inode cache.  Squid uses a lot of inodes/files,
and inodes are the index of the file system: you need them to access
files.  Making a preference for inodes over file buffers is a good
choice.
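As an illustration only (the value 50 is an assumption on my part, not a
number from this thread; the kernel default is 100), such a preference
would look like this in /etc/sysctl.conf:

# Values below the default of 100 make the kernel reclaim the dentry and
# inode caches less aggressively, keeping file system metadata in memory.
vm.vfs_cache_pressure=50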

Don't know if this really matters in such a situation; if you're running
out of vfs cache, the oom-killer begins to nuke your applications. This
happens more quickly if you prefer the inode cache over the vfs cache,
which is what you do by lowering vfs_cache_pressure. This also matches
what we observed on our servers, which were oom-killed regularly.
Increasing vfs_cache_pressure to 10000 helped a lot.

Ah, the OOM-killer: this task runs when Linux gives out too much virtual
memory to the set of all processes. Bad idea.  See my recommendation in
the original post:
# Overcommit or not memory allocations:
# 2      Don't overcommit. The total address space commit
#        for the system is not permitted to exceed swap + a
#        configurable percentage (default is 50) of physical RAM.
#        Depending on the percentage you use, in most situations
#        this means a process will not be killed while accessing
#        pages but will receive errors on memory allocation as
#        appropriate.
#        Also ensure that the size of swap is at least 50%
#        of physical RAM with a minimum of 2 GB.
vm.overcommit_memory=2

For very large memory systems you might want to tune vm.overcommit_ratio,
but the idea is the same: manage (virtual) memory well and do not hand
out more than you have, to prevent a kernel job from randomly killing
important processes.
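A sketch of such tuning (the value 80 is a made-up example, not a
recommendation from this post): with overcommit_memory=2, the commit limit
becomes swap plus this percentage of physical RAM.

# CommitLimit = swap + (overcommit_ratio percent of physical RAM).
# The default ratio is 50; large-memory machines may want a higher value.
vm.overcommit_ratio=80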

If you start with vm.overcommit_memory=2 you will see that systems behave
very differently under high memory loads.  After this you can tune
vfs_cache_pressure, also a parameter for systems with high memory loads.
With very high memory loads one should consider installing more memory
or distributing applications over more systems.
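A generic way to watch the effect of strict overcommit accounting (my own
sketch, not something from the original post) is to compare the kernel's
commit limit with what is currently committed:

# Show the commit limit and the address space currently committed:
grep -E '^(CommitLimit|Committed_AS)' /proc/meminfo
# Show the current overcommit settings:
sysctl vm.overcommit_memory vm.overcommit_ratio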

If you or others here on the list have useful opinions on this or can show
me where I'm wrong, I would be very thankful!

