Hi Alex/Amos,
Do you still need memory logs from version 5.3 after stopping traffic through the squid? We disabled traffic to the 5.3 squid about 6 hours ago and have not seen any memory freed up since. This node has used ~50G more memory than a 4.17 squid taking similar traffic over the last 3+ weeks. I am collecting hourly memory logs on 5.3 now that traffic has stopped. Let me know and I can attach the log tomorrow morning.
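(Each hourly snapshot is just a memory pools report pulled off the cache manager - on this node it is roughly a call like the one below, with host/port and log path being whatever the local setup uses, appended to a file:

    squidclient -h 127.0.0.1 -p 3128 mgr:mem >> /var/log/squid/mem-hourly.log

so the per-pool allocation counters can be compared hour to hour.)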
Thanks
Praveen
On Mon, Dec 27, 2021 at 4:58 PM Praveen Ponakanti <pponakanti@xxxxxxxxxx> wrote:
I can't make any changes to our prod squids this week. I have a squid instance (5.3) in a test env but could not reproduce the leak by starting and stopping traffic with a bulk HTTP request generator (wrk). I was able to send 175k rps at 20k concurrent sessions (each doing a GET on a 1KB object) through the 30-worker squid. This initially caused a 3G increase in memory usage, which then flattened out after stopping the requests. If I restart the bulk requests, memory usage only goes up ~0.5GB and then drops back down. Live traffic is probably exercising a different code path within squid's memory pools.

On Mon, Dec 27, 2021 at 2:26 AM Lukáš Loučanský <loucansky.lukas@xxxxxx> wrote:

After one day of running without clients my squid memory is stable
29345 proxy 20 0 171348 122360 14732 S 0.0 0.7 0:25.96 (squid-1) --kid squid-1 -YC -f /etc/squid5/squid.conf
29343 root 20 0 133712 79264 9284 S 0.0 0.5 0:00.00 /usr/sbin/squid -YC -f /etc/squid5/squid.conf
Storage Mem size: 3944 KB
Storage Mem capacity: 0.2% used, 99.8% free
Maximum Resident Size: 489440 KB
Page faults with physical i/o: 0
Memory accounted for:
Total accounted: 15741 KB
memPoolAlloc calls: 1061495
memPoolFree calls: 1071691
Total allocated 15741 kB

So this does not seem to be the problem...

L

On 26.12.2021 at 10:02, Lukáš Loučanský wrote:
ok - it seems my squid quacked on low memory again today -
Dec 26 00:04:25 gw (squid-1): FATAL: Too many queued store_id requests; see on-persistent-overload.#012 current master transaction: master4629331
Dec 26 00:04:28 gw squid[15485]: Squid Parent: squid-1 process 15487 exited with status 1
Dec 26 00:04:28 gw squid[15485]: Squid Parent: (squid-1) process 28375 started
2021/12/26 00:01:20 kid1| helperOpenServers: Starting 5/64 'storeid_file_rewrite' processes
2021/12/26 00:01:20 kid1| ipcCreate: fork: (12) Cannot allocate memory
2021/12/26 00:01:20 kid1| WARNING: Cannot run '/lib/squid5/storeid_file_rewrite' process.
2021/12/26 00:01:20 kid1| ipcCreate: fork: (12) Cannot allocate memory
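If I read that FATAL right, it points at the on-persistent-overload option on the store_id helper directive - something like the line below (child/startup counts just guessed from the "Starting 5/64" above) should make the helper return ERR instead of killing the worker when the queue stays overloaded, though that only papers over the memory problem:

store_id_children 64 startup=5 idle=5 on-persistent-overload=ERR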
I'm going to reroute my clients (which are on their days off anyway) to direct connections and run it "dry" - on its own. But I'm not able to test it before "lack of memory issues occur" - because my clients are offline. So I'll watch squid's own memory consumption. It's all I can do right now - my squid has already restarted and its memory has been freed - so I think just now I have no power to fill it up again :-]
L
On 26.12.2021 at 7:41, Amos Jeffries wrote:
If possible, can one of you run a Squid until you see this behaviour, then stop new clients connecting to it before the lack-of-memory issues occur, and see whether the memory usage disappears or reduces after a 24-48 hr wait.
A series of regular mempools report dumps taken across the test (e.g. something like the loop below) may help Alex, or whoever works on the bug, further narrow down which cache- and client-related allocations are and are not being released properly.
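Something along these lines would do (a rough sketch using squidclient against the cache manager; adjust host, port, interval and output path to your setup):

while true; do
  date >> /tmp/squid-mempools.log
  squidclient -h 127.0.0.1 -p 3128 mgr:mem >> /tmp/squid-mempools.log
  sleep 3600
done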
Amos
_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users