Hi Amos,

It works now. I made a proper test between 2 clients using Robocopy.
Here is the cache HIT result:

-----------------------------------------------------------------
Cache information for squid:
        Hits as % of all requests:        5min: 15.2%, 60min: 14.4%
        Hits as % of bytes sent:          5min: 67.4%, 60min: 67.4%
        Memory hits as % of hit requests: 5min: 100.0%, 60min: 100.0%
        Disk hits as % of hit requests:   5min: 0.0%, 60min: 0.0%
        Storage Swap size:      70752 KB
        Storage Swap capacity:  69.1% used, 30.9% free
        Storage Mem size:       70968 KB
        Storage Mem capacity:   54.1% used, 45.9% free
        Mean Object Size:       1768.80 KB
        Requests given to unlinkd:      0
-----------------------------------------------------------------

Robocopy log from WORKSTATION1
    Files : 40
    Copied: 40
    Time  : 00:48

Robocopy log from WORKSTATION2
    Files : 40
    Copied: 40
    Time  : 00:14

If I clear the WebDAV client cache on WORKSTATION1 and run the copy test
again, it also downloads from the cache: the overall copy time drops below
15 seconds instead of around 50 seconds.

I no longer get any error when reading a file from the cache (as I did
before), and the copied files are healthy.

Great! I will wait before calling it a victory, but at least we can now read
files from the Squid cache, which was the most important step before going
any further. 😊

Thank you very much for your help and accurate answers.

Regards,

Olivier MARCHETTA

-----Original Message-----
From: Amos Jeffries [mailto:squid3@xxxxxxxxxxxxx]
Sent: Wednesday, August 30, 2017 4:56 PM
To: Olivier MARCHETTA <olivier.marchetta@xxxxxxxxxxx>; squid-users@xxxxxxxxxxxxxxxxxxxxx
Subject: Re: Squid Reverse Proxy and WebDAV caching

On 31/08/17 03:35, Olivier MARCHETTA wrote:
> Hello,
>
> I have made many tests, but it does not seem to want to deliver from the cache.
> I think the objects are in the cache; I have modified the in-memory cache object size.
> And now I can see the memory filling up as I transfer / GET the files from SharePoint Online / Office 365.
>
> Do you think that any configuration change would help?

What you have now should be caching responses like the one in your previous
mail, AND serving them to clients. I can only guess that something is wrong
with your tests, or that the transaction in your previous mail is not
actually a typical object.

> I was thinking about rewriting URLs upfront, before the Squid Cache proxy, in a chain configuration.
> But I am trying to avoid it for now.

It would not help. The URL is just part of the hash key for caching. The
other HTTP mechanisms are what cause the HIT vs MISS vs REFRESH behaviour,
and you have already configured Squid to override those.

Amos

_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users
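
[Archive note: the "override" mechanisms Amos refers to are refresh_pattern
options in squid.conf. The snippet below is only a minimal sketch of what
such overrides can look like, assuming Squid 3.5; the regex, TTLs and size
values are illustrative and are not taken from Olivier's actual
configuration, which is not shown in this thread.]

    # Cache SharePoint/WebDAV responses that would otherwise be refused or
    # revalidated: ignore client reloads and restrictive Cache-Control
    # headers, and override absent/short Expires headers.
    refresh_pattern -i \.(docx|xlsx|pptx|pdf)$ 1440 80% 10080 override-expire ignore-reload ignore-no-store ignore-private

    # Raise the in-memory object ceiling so multi-megabyte documents can be
    # served as memory hits (consistent with the ~1.7 MB mean object size and
    # 100% memory-hit figures reported above).
    maximum_object_size_in_memory 8 MB
    cache_mem 256 MB

    # Cache statistics like the block at the top of this mail can be pulled
    # from a running Squid with:
    #   squidclient mgr:info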