On 4/03/2015 9:35 a.m., Sebastian Goicochea wrote:
> Hello everyone, I'm experimenting with the cache_peer directive and node.js:
>
> cache_peer 10.0.0.90 parent 8888 0 no-query no-digest proxy-only name=test
>
> On that port I have a node.js proxy receiving connections on the same
> host. It extracts some information I need and saves it to a DB, then
> redirects Squid with a 302 response with some garbage added to the URL.
> I use that garbage to match an access list so I can prevent looping.
>
> Squid is working in transparent mode. The problem I'm facing is that if
> I don't configure a tcp_outgoing_address, Squid does not reach port 8888
> on localhost. If I set a tcp_outgoing_address, Squid can reach
> localhost:8888, but with its own IP address, and I need it to be
> transparent: I need the real client IP address.

Why? What is all this for? HTTP is designed to operate just fine without
forging client IPs on proxy outgoing traffic. Some web applications are
seriously broken, but since it's on your localhost you are obviously in a
great position to fix this one.

Also, it sounds to me like you are using all this complex 4-party
interaction to replicate something that an ICAP/eCAP service does much
faster and more simply. Or perhaps you are trying to implement Squid's
StoreID feature using layers of proxies.

> Is there a way to configure tcp_outgoing_address to use the client's IP
> when fetching something?

No. You can only bind to IP addresses which have been assigned to the
machine Squid is running on. Look up the "triangular routing" problem if
you want all the gory details.

Amos

_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users
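For reference, the loop-prevention setup described in the question might look roughly like the fragment below. This is a hedged sketch, not from the thread: the ACL name and the "sqloop=1" marker are hypothetical stand-ins for whatever "garbage" the node.js service appends.

```
# Parent peer from the original post
cache_peer 10.0.0.90 parent 8888 0 no-query no-digest proxy-only name=test

# Requests already carrying the marker must not be sent back to the peer
acl marked url_regex -i [?&]sqloop=1
cache_peer_access test deny marked
cache_peer_access test allow all

# Send marked (already-processed) requests straight to the origin server
always_direct allow marked
```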