Hello,

About a year ago I posted the following message to the list:

*************************************************************
We have a very unusual Squid application: we want to use Squid as a
distribution point for a few very large files (~300 MB) to hundreds of
computers. If the first computer requests the file and it isn't in the
disk cache, a request is made to the origin server. That's fine. But
what happens if a second computer requests the same file before the
first download from the origin server is completely in the Squid
cache? Is Squid smart enough to realize that the file has already been
requested from the origin server and wait, or will the second request
initiate a second download from the origin server?
**************************************************************

I received several replies saying that Squid should forward only one
request to the origin server. During some recent testing, however, I
was sent the following message:

********************************************************
I have now done a controlled test, and Squid is definitely NOT
consolidating the requests. I started 40 clients at nearly the same
time, 4 on each of 10 machines, and I have seen as many as 27 copies
of the same query go all the way through to the database. This is
going through two Squids, where one is the cache_peer parent of the
other. The client programs do 5 queries that each return a 9.5 MB
Squid object. One time I actually saw all 40 copies of all 5 queries
get through, but that was so consistent that I hardly trust the
results; the client nodes were general-purpose lxplus nodes, so I
don't expect them all to track so closely together.
*************************************************************

We are using Squid 2.5.STABLE14 on 64-bit Linux. These objects are
definitely cacheable.

What can we do to debug this? Any help/advice would be appreciated.

Thanks very much,
Barry
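
P.S. For what it's worth, here is a minimal squid.conf sketch of the
behavior we are after. It assumes a Squid release that supports the
collapsed_forwarding directive (which, as far as I can tell, our
2.5.STABLE14 does not), and the size limit below is only illustrative:

```
# Hold concurrent cache misses for the same URL and forward only one
# request to the origin server, instead of one request per client.
# (Directive availability depends on the Squid release.)
collapsed_forwarding on

# Make sure our ~300 MB payloads are allowed into the cache at all;
# the default maximum_object_size is far smaller than that.
maximum_object_size 400 MB
```

If someone can confirm which releases actually implement this, that
would also be helpful.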