The trick to supporting thousands of users is to increase the ip_conntrack parameters for iptables and the maximum in-memory object size in squid.conf. (Rough sketches of the sysctl/squid.conf tuning and of the Cisco policy-routing setup I mean are at the bottom of this mail, below the quoted message.)

I load-tested an HP DL380 G4 (4 GB of memory, dual Xeon 3.2 GHz) with Web Polymix-4; the box handled 10,000 concurrent IP clients with throughput above 300 MB/sec (not Mbit/sec). My OS is Ubuntu 7.04 with the standard kernel and sysctl.conf tuned Web100-style.

I don't use WCCP because it has many problems and the hello messages and GRE are difficult to troubleshoot. Instead I use a Foundry ServerIron at HQ and policy routing on the Cisco, because policy routing consumes less of the router's processor than the WCCP process does.

Good luck

-----Original Message-----
From: Dan Letkeman [mailto:danletkeman@xxxxxxxxx]
Sent: Wednesday, December 26, 2007 1:57 AM
To: squid-users
Subject: design, wccp, hardware recommendations

Hello,

I would like to implement a couple of squid caching boxes on our network.

These are my current requirements:
-needs to be transparent
-needs to have a failsafe in case of caching server problems
-should use wccp
-should be able to be taken down at any time without interruption

This is what I currently have in place:
-The network is divided into two sides, with about 400 users on one side and 600 users on the other.
-Each side will have one squid cache to start.
-Each side will have a Cisco 2801 router with a dedicated FastEthernet port for the caching server.

So my questions are:
-Which documentation for squid and WCCP with transparent proxying would you recommend?
-Which server operating system would you recommend? (Currently Debian is the only one that works with the SCSI card that I have in the server.)
-For this number of users, what size of server would you recommend? (If I had to guess, I would say that up to half of the users could be using the internet at the same time. The average during the day would probably be 1/4 of the users at one time, so around 150 users on one caching server and 100 users on the other caching server.)

Any other recommendations are welcome!

Thanks,
Dan
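
Here is the kind of tuning I mean. The values are only examples, not the figures from my test box, and they assume a 2.6 kernel with the old ip_conntrack module and Squid 2.6; size them for your own RAM and traffic:

  # /etc/sysctl.conf -- let iptables track many simultaneous connections
  # (on later kernels the knob is net.netfilter.nf_conntrack_max)
  net.ipv4.netfilter.ip_conntrack_max = 131072
  # Web100-style TCP buffer tuning for high throughput
  net.core.rmem_max = 16777216
  net.core.wmem_max = 16777216
  net.ipv4.tcp_rmem = 4096 87380 16777216
  net.ipv4.tcp_wmem = 4096 65536 16777216

  # squid.conf -- give Squid more RAM and let larger objects stay in memory
  cache_mem 1024 MB
  maximum_object_size_in_memory 512 KB

Load the sysctl settings with "sysctl -p" and apply the squid.conf changes with "squid -k reconfigure".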
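
And this is roughly what the policy-routing side looks like on the Cisco instead of WCCP. The subnet, ACL number and next-hop address are made up for the example; use your own, and keep the deny line so the cache's own traffic is never redirected back to itself:

  ! match client HTTP traffic, but never traffic from the cache itself
  access-list 110 deny   tcp host 10.0.0.10 any eq www
  access-list 110 permit tcp 10.0.0.0 0.0.255.255 any eq www
  !
  route-map CACHE-REDIRECT permit 10
   match ip address 110
   set ip next-hop 10.0.0.10
  !
  interface FastEthernet0/1
   description LAN toward the clients
   ip policy route-map CACHE-REDIRECT

On the Squid box itself you still have to intercept port 80 locally, e.g. with "iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3128" and "http_port 3128 transparent" in squid.conf (Squid 2.6 syntax).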