
Re: shared_memory_locking failed to mlock

On an x86/64-bit Ubuntu machine, if I set 'workers 4' and run:

squid --foreground -f /etc/squid.conf 2>&1 |grep mlock
  mlock(0x7f2e5bfb2000, 8)                = 0
  mlock(0x7f2e5bf9f000, 73912)            = -1 ENOMEM (Cannot allocate memory)
squid -N -f /etc/squid.conf 2>&1 |grep mlock
  mlock(0x7f8e4b7c0000, 8)                = 0
  mlock(0x7f8e4b7ad000, 73912)            = -1 ENOMEM (Cannot allocate memory)
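
If I read the mlock(2) man page correctly, ENOMEM from mlock() usually means
the locked-memory resource limit (RLIMIT_MEMLOCK) would be exceeded, not that
the machine is out of RAM. Assuming a bash-like shell, the per-process limit
can be checked with:

  ulimit -l

which prints the limit in kilobytes; on many systems it defaults to as little
as 64. That would at least match my numbers: the tiny 8-byte lock succeeds
while the ~72 KB one fails.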

Note 1: -N and --foreground made no difference as long as 'workers 4' is set. I was expecting -N to ignore 'workers 4'; does it?

Now I set 'workers 2' and ran the same two commands above, and I got the following output (identical for both), which means Squid started successfully:
  mlock(0x7f0c441cc000, 8)                = 0
  mlock(0x7f0c441c3000, 32852)            = 0
  mlock(0x7f0c441c2000, 52)               = 0

Note that as long as 'workers <= 2' I can run Squid as expected and mlock the memory. I have more than 4 GB of RAM free (this is an 8 GB laptop with an Intel i7), so the mlock failure is strange.
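
Assuming the locked-memory limit really is the culprit, I suppose it could be
raised, e.g. with a line like this in /etc/security/limits.conf (where "proxy"
stands for whichever user Squid runs as):

  proxy  -  memlock  unlimited

or with LimitMEMLOCK=infinity in a systemd unit, but I would still like to
understand where the numbers come from first.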

On my target system, which has 512 MB of RAM, even 'workers 0' does not help; I still get:

  mlock(0x778de000, 2101212)              = -1 ENOMEM (Out of memory)

I have to disable shared_memory_locking for now, and it puzzles me why even the very first ~2 MB mlock can fail. I ran strace and grepped for shmget and shmat and found nothing; instead there are lots of mmap calls, so Squid is using mmap for its shared memory mapping. The remaining question is whether this mlocked mapping is file-backed or anonymously mmapped (in which case, on Linux, it would go through /dev/shm by default)?
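
One way I can think of to check, assuming the name from the FATAL message
("/squid-tls_session_cache.shm") is a POSIX shared memory name, is to look for
a backing file under /dev/shm and for the corresponding open in the trace; as
far as I know glibc implements shm_open() as a plain open of /dev/shm/<name>,
which would also explain why grepping for shmget/shmat finds nothing:

  ls -l /dev/shm/
  strace -f -e trace=open,openat,mmap,mlock squid -N -f /etc/squid.conf 2>&1 | grep -E '/dev/shm|mlock'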

Thanks a lot,

Gordon

On Mon, Jul 16, 2018 at 11:58 AM Alex Rousskov <rousskov@xxxxxxxxxxxxxxxxxxxxxxx> wrote:
On 07/15/2018 08:47 PM, Gordon Hsiao wrote:
> Just upgraded squid to 4.1, however if I enabled shared_memory_locking I
> failed to start squid:
>
> "FATAL: shared_memory_locking on but failed to
> mlock(/squid-tls_session_cache.shm, 2101212): (12) Out of memory"

> How do I know how much memory it is trying to mlock? Is 2101212 (~2 MB)
> the shm size or not?

Yes, Squid tried to lock a 2101212-byte segment and failed.


> any way to debug, inspect, or configure this size?

I am not sure what you mean, but please keep in mind that the failed
segment could be the last straw -- most of the shared memory could be
allocated earlier. You can observe all allocations/locks with 54,7
debugging. Look for "mlock(".
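
For example, something like this in squid.conf (keeping everything else at the
default level) should produce those lines in cache.log:

  debug_options ALL,1 54,7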

You can also run "strace" or a similar command line tool to track
allocations, but analyzing strace output may be more difficult than
looking through Squid logs.
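
If you do go the strace route, remember to follow the kid processes too, for
example (adjust paths and filters to your setup):

  strace -f -e trace=mlock -o /tmp/squid-mlock.trace squid -f /etc/squid.conf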


> Again, I disabled caching etc. for a memory-restricted environment and
> used a minimal configuration with a few enable flags; in the meantime
> I want to avoid memory overcommit from Squid (thus mlock).

I am glad the new code is working to prevent runtime crashes in your
memory-restricted environment. If studying previous mlock() calls does
not help, please suggest what else Squid could do to help you.


Thank you,

Alex.
_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users
