Re: shared_memory_locking failed to mlock

On 07/16/2018 05:08 PM, Gordon Hsiao wrote:
> On an x86/64-bit Ubuntu machine, if I set 'workers 4' and run:

> squid --foreground -f /etc/squid.conf 2>&1 |grep mlock
>   mlock(0x7f2e5bfb2000, 8)                = 0
>   mlock(0x7f2e5bf9f000, 73912)            = -1 ENOMEM

> squid -N -f /etc/squid.conf 2>&1 |grep mlock
>   mlock(0x7f8e4b7c0000, 8)                = 0
>   mlock(0x7f8e4b7ad000, 73912)            = -1 ENOMEM

> Note 1: -N and --foreground made no difference as long as 'workers 4' is
> set. I was expecting -N to ignore "workers 4"; does it?

IIRC, -N does not start workers. However, some (memory allocation) code
may not honor -N and still allocate memory necessary for those (disabled
by -N) workers. That would be a bug AFAICT.


> Now I set 'workers 2', ran the same two commands above, and got the same
> output from both, which means Squid started successfully:
>   mlock(0x7f0c441cc000, 8)                = 0
>   mlock(0x7f0c441c3000, 32852)            = 0
>   mlock(0x7f0c441c2000, 52)               = 0

The second allocation is probably smaller because two workers need fewer
SMP queues (or similar shared memory resources) than four workers.


> I have more than 4GB of RAM free (this is an 8GB RAM laptop) and this
> is an Intel i7, so the mlock failure is strange.

The default amount of shared memory available to a program is often much
smaller than the total amount of RAM. I do not recall which Ubuntu
commands or sysctl settings control the former, but the Squid wiki or
other web resources should have that info. The question you should ask
yourself is: "How much shared memory is available to the Squid process?"
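
Off the top of my head, something like the following should show the
relevant numbers; the exact names and defaults differ between
distributions, so treat this as a starting point rather than a recipe:

  ulimit -l          # per-process locked-memory limit (RLIMIT_MEMLOCK), in kilobytes
  df -h /dev/shm     # size and usage of the tmpfs backing POSIX shared memory segments
  ipcs -lm           # System V shared memory limits, for comparison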


> On my target system, which has 512MB RAM, even 'workers 0' won't help; I
> still get:
> 
>   mlock(0x778de000, 2101212)              = -1 ENOMEM (Out of memory)

For "workers 0" concerns, please see the -N discussion above. The two
should be equivalent.


> I have to disable memory locking for now, and it puzzles me why the very
> first 2MB mlock can fail.

Most likely, your OS is configured (or defaults) to provide very little
shared memory to a process when the total RAM is only 512MB.
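
If that turns out to be what you are hitting, the limits can usually be
raised. A rough sketch, run as root, with values that are only examples
for a 512MB box:

  ulimit -l unlimited                    # raise the locked-memory limit for this shell
  mount -o remount,size=64m /dev/shm     # grow the tmpfs used for POSIX shared memory

For a persistent change, the memlock item in /etc/security/limits.conf
(or whatever starts Squid at boot) is the usual place.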


> I ran strace and grepped for shmget and shmat but found nothing,

mlock() is a system call, so strace should see it, but it may appear
under a different name.
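
For what it is worth, strace can filter on just that call; something
like this, reusing your -N invocation, should show each attempt and its
result:

  strace -f -e trace=mlock squid -N -f /etc/squid.conf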


> instead, there are lots of mmap() calls, so Squid is using mmap()
> for its shared memory mapping,

Squid creates segments using shm_open() and attaches to them using mmap().


> the only question is: is this mlock
> file-backed or anonymously mmap()ed (in which case on Linux it will
> use /dev/shm by default)?

On Ubuntu, Squid shared memory segments should all be in /dev/shm by
default. Squid does not want them to be backed by real files. See
shm_open(3).

Please note that some libc calls manipulating regular files are
translated into mmap() calls by the standard library (or some such). Not
all mmap() calls you see in strace are Squid mmap() calls.
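
One rough way to separate the two is to key on /dev/shm paths. With
strace's -y option, file descriptors are printed with their paths, so a
sketch like this should isolate the Squid segment calls:

  strace -f -y -e trace=open,openat,mmap squid -N -f /etc/squid.conf 2>&1 | grep /dev/shm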


HTH,

Alex.


> On Mon, Jul 16, 2018 at 11:58 AM Alex Rousskov wrote:
> 
>     On 07/15/2018 08:47 PM, Gordon Hsiao wrote:
>     > Just upgraded Squid to 4.1; however, when I enabled
>     > shared_memory_locking, I failed to start Squid:
>     >
>     > "FATAL: shared_memory_locking on but failed to
>     > mlock(/squid-tls_session_cache.shm, 2101212): (12) Out of memory"
> 
>     > How do I know how much memory it is trying to mlock? Is 2101212 (~2MB)
>     > the shm size or not,
> 
>     Yes, Squid tried to lock a 2101212-byte segment and failed.
> 
> 
>     > any way to debug/look into/configure this size?
> 
>     I am not sure what you mean, but please keep in mind that the failed
>     segment could be the last straw -- most of the shared memory could be
>     allocated earlier. You can observe all allocations/locks with 54,7
>     debugging. Look for "mlock(".
> 
>     You can also run "strace" or a similar command line tool to track
>     allocations, but analyzing strace output may be more difficult than
>     looking through Squid logs.
> 
> 
>     > Again, I disabled the cache etc. for a memory-restricted environment and
>     > used a minimal configuration with a few enable-flags; in the meantime,
>     > I want to avoid memory overcommit from Squid (thus mlock).
> 
>     I am glad the new code is working to prevent runtime crashes in your
>     memory-restricted environment. If studying previous mlock() calls does
>     not help, please suggest what else Squid could do to help you.
> 
> 
>     Thank you,
> 
>     Alex.
> 

_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users



