Eliezer Croitoru wrote
> You can try to use the atime and not the mtime. Each time the fetcher
> script runs, all of the request files will be accessed and their atime
> will be refreshed.

I think for the "request" directory it should be "mtime", and for the "body" directory it should be "atime".

Eliezer Croitoru wrote
> It is possible that some fetchers will consume lots of memory and some of
> the requests are indeed un-needed, but... don't delete them.
> Try to archive them and only then remove some of them by their age or
> something similar.
> Once you have the request you have the option to fetch the files, and since
> it's such a small thing (max 64k per request) it's better to save and
> archive first and wonder later whether some request file is missing.

But currently there are more than 230,000 files in the old request directory. Maybe the Go garbage collector does not release the memory after processing each file.

Eliezer Croitoru wrote
> * If you want me to test or analyze your archived requests, archive them
> inside an xz and send them over to me.

I sent you the request directory in a previous private email.

Thanks

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/Windows-Updates-a-Caching-Stub-zone-A-windows-updates-store-tp4678454p4682360.html
Sent from the Squid - Users mailing list archive at Nabble.com.
_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users