Hello,

I have a Squid application where a storeurl_rewriter program is needed to normalize incoming fetch URLs to a single canonical URL. I am running squid-2.7_9, and I have a storeurl_rewriter program that gets launched at Squid startup. In my config file I set storeurl_rewrite_children to 100, and I only allow HTTP requests through to the helper (using a storeurl_access rule that allows proto HTTP, so any cache_object://localhost requests are ignored).

The problem I am having is that I keep getting cache.log entries that look like "storeClientReadHeader: URL mismatch", comparing the incoming fetch URL with my normalized URL. I am getting thousands of these errors an hour. I made sure to clear out the Squid cache when I launched my new storeurl_rewriter program, so I don't understand how the cache can still contain any of the original fetch URLs to be causing this URL mismatch.

Earlier I was getting warnings about not enough storeurl_rewriter programs running, which is when I bumped the children count up to 100. I was also having an issue where the logs showed my program shutting down whenever a cache_object://localhost request came through, which is why I added the access rule. I no longer see either of those warnings in the logs, yet I still see tons of URL mismatch errors. Does anyone have any ideas?

To clarify a bit more: suppose a fetch URL looks like http://www.fetchme.com/123456. It should go through my storeurl_rewriter program, where it is turned into http://www.normalized.com/123456, and that is the URL that should be used to fetch and then cache. My assumption is that every fetch request goes through storeurl_rewriter, so how is it that I see so many log entries like "storeClientReadHeader: URL mismatch {http://www.normalized.com/123456} != {http://www.fetchme.com/123456}"? Is there some case where a fetch request does not go through storeurl_rewriter? If the helpers were too busy, I think I would see a log warning saying so, which I am not since I bumped children to 100.
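For reference, the relevant part of my squid.conf looks roughly like this (the helper path and acl name here are placeholders, not my real ones):

```
# Launch the normalizing helper at Squid startup (path is hypothetical)
storeurl_rewrite_program /usr/local/bin/storeurl_rewriter
storeurl_rewrite_children 100

# Only pass HTTP requests to the helper, so cache_object://localhost
# (cachemgr) requests never reach it
acl http_proto proto HTTP
storeurl_access allow http_proto
storeurl_access deny all
```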
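And the helper itself does essentially the following; this is just a minimal sketch, and the normalization rule shown (rewriting the host to www.normalized.com while keeping the path) is only the example from above, not my real logic:

```python
import sys

def normalize(url):
    """Map an incoming fetch URL onto the canonical cache URL.
    Example rule only: swap the host, keep scheme and path."""
    scheme, rest = url.split("://", 1)
    host, _, path = rest.partition("/")
    return "%s://www.normalized.com/%s" % (scheme, path)

def main():
    # Squid feeds the helper one request per line on stdin; the first
    # whitespace-separated token is the URL (the remaining fields are
    # client ip/fqdn, ident, and method).
    for line in sys.stdin:
        parts = line.split()
        if not parts:
            continue
        # Reply with the rewritten URL on one line.
        print(normalize(parts[0]))
        sys.stdout.flush()  # flush each reply, or Squid waits forever

if __name__ == "__main__":
    main()
```

The stdout flush after every reply matters: with block-buffered output, Squid queues up on the helper and you get "not enough storeurl_rewriter programs" warnings even with plenty of children.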
Thanks so much, Kathleen