
Re: FATAL: shm_open(/squid-ssl_session_cache.shm)

On 08/28/2017 10:27 AM, Aaron Turner wrote:

> So I guess what I'd like to know is how squid handles a multi-layer
> cache config with ssl bumping?

If you are asking how to SSL bump requests in one Squid worker and then
satisfy those bumped requests in another Squid worker (and/or another
Squid instance), then the answer is that you cannot do that because
Squid does not support exporting decrypted bumped requests (without
encrypting them) from a Squid worker.


> For obvious performance reasons, I
> don't want to bump the same connection twice.  Much rather have the
> first layer bump the connection and have a memory cache.  If that
> cache is a miss, then hit the slower disk cache/outbound network
> connection.

Your desires require Squid code that does not exist yet (code to
export bumped requests) _and_ they clash with the current Squid
Project policy of prohibiting the export of bumped requests.

If performance is important, consider using SMP-aware rock cache_dirs
instead of multiple Squid instances (including hacks that emulate
multiple Squid instances in a single Squid instance by abusing SMP macros).
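
For example, a minimal single-instance SMP sketch might look like the
following (the worker count, sizes, and paths are placeholders, not
recommendations):

  # one Squid instance with several kid workers
  workers 4

  # a single SMP-aware rock cache_dir shared by all workers
  cache_dir rock /var/spool/squid/rock 16384 max-size=32768

  # hot objects come from a memory cache shared by the workers
  memory_cache_shared on
  cache_mem 1024 MB

With a setup like that, every worker sees the same shared memory cache
and the same rock cache_dir, so there is no need for a frontend/backend
split (and no bumping the same connection twice).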


HTH,

Alex.


> On Fri, Aug 25, 2017 at 3:13 PM, Alex Rousskov wrote:
>> On 08/25/2017 11:21 AM, Aaron Turner wrote:
>>> FATAL: Ipc::Mem::Segment::open failed to
>>> shm_open(/squid-ssl_session_cache.shm): (2) No such file or directory
>>
>>> I've verified that /dev/shm is mounted and based on the list of files
>>> in there, clearly squid is able to create files there, so it's not a
>>> Linux/shm config issue.
>>
>> Yes, moreover, this is not a segment creation failure. This is a failure
>> to open a segment that should exist but is missing. That segment should
>> have been created by the master process, but since your config (ab)uses
>> SMP macros, I am guessing that depending on the configuration details,
>> the master process may not know that it needs to create that segment.
>>
>> For the record, the same error happens in older Squids (including v3.5)
>> when there are two concurrent Squid instances running. However, I
>> speculate that you are suffering from a misconfiguration, not broken PID
>> file management here.
>>
>>
>>> So here's the funny thing... this worked fine until I enabled
>>> ssl-bumping on the backends (I was debugging some problems and on a
>>> whim I tried enabling it).  That didn't solve my problem and so I
>>> disabled ssl bumping on the backends.  And that's when this SHM error
>> started happening with my frontend.  Re-enabling ssl-bump on the
>>> backends fixes the SHM error, but I don't think that would be a
>>> correct config?
>>
>> This is one of the reasons folks should not abuse SMP Squid for
>> implementing CARP clusters IMHO -- the config on that wiki page is
>> conceptually wrong, even though it may work in some cases.
>>
>> SMP macros are useful for simple, localized hacks like splitting
>> cache.log into worker-specific files or adding worker ID to access.log
>> entries. However, the more process-specific changes you introduce, the
>> higher are the chances that Squid will get confused.
>>
>> The overall principle is that all Squid processes should see the same
>> configuration. YMMV, but the number of places where SMP Squid relies on
>> that principle keeps growing...
>>
>> Alex.
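
For reference, the kind of localized SMP-macro hack mentioned in the
quoted text above might look roughly like this in squid.conf (an
untested sketch; the log file names are just examples):

  # split cache.log into worker-specific files
  if ${process_number} = 1
  cache_log /var/log/squid/cache-kid1.log
  endif
  if ${process_number} = 2
  cache_log /var/log/squid/cache-kid2.log
  endif

  # tag access.log entries with the worker number; the macro is
  # expanded when each kid process parses its configuration
  logformat kidsquid kid${process_number} %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru
  access_log daemon:/var/log/squid/access.log kidsquid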

_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users



