Did you mean 1 CPU with a couple of cores?

For Squid, most of the time goes into loading these lists into RAM. That is
not wrong; in many cases it is the right thing to do. If these lists are
stale for at least a day, I assume it should be fine. However, there are
tools like ufdbGuard and others which are very good in terms of memory
footprint and fast URL lookup.

How do you add these URLs? I am not mistaken that these are URLs and not
domains, right?

Eliezer

----
Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1ltd@xxxxxxxxx

From: m k <tamurin0525@xxxxxxxxx>
Sent: Thursday, August 6, 2020 8:29 AM
To: Eliezer Croitor <ngtech1ltd@xxxxxxxxx>
Cc: squid-users@xxxxxxxxxxxxxxxxxxxxx
Subject: Re: I would like to know performance sizing aspects. Squid's default setting is 1 core CPU, 16GB mem. How many URLs (blacklist) will degrade Squid's performance?

Kitamura,

About the tens of thousands of URLs: have you considered using a
blacklisting utility? It might lower the memory footprint.

Eliezer

----
Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1ltd@xxxxxxxxx

Thank you for your reply.

> That number was gained before HTTPS became so popular. So YMMV depending
> on how many CONNECT tunnels you have to deal with. That HTTPS traffic
> can possibly be decrypted and cached but performance trade-offs are
> quite large.

I am very worried about the internet slowing down due to HTTPS decryption,
and I am also worried about it slowing down due to using a blacklist. I
load tens of thousands of URLs (a blacklist file) every time I set up the
ACL. How many requests per second can SSL-Bump handle?

On 5/08/20 11:28 am, m k wrote:
>> We are considering using Squid for our proxy, and would like to know
>> about performance sizing aspects.
>>
>> Current web access request averages per 1 hour are as follows:
>> Clients: 30,000
>> Page Views: 141,741/hour
>> Requests: 4,893,106
>>
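On the blacklist-loading question above: the usual approach is a file-based
acl in squid.conf. A minimal sketch, assuming hypothetical list files under
/etc/squid/ - and note the distinction being asked about, since domain
lists (dstdomain) match far faster and leaner than full-URL lists
(url_regex):

    # Domain list: one domain per line, fast tree lookup:
    acl blocked_domains dstdomain "/etc/squid/blocked_domains.txt"

    # URL list: regex match per request, slower and more RAM-hungry:
    acl blocked_urls url_regex -i "/etc/squid/blocked_urls.txt"

    http_access deny blocked_domains
    http_access deny blocked_urls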
Okay. Requests and client count are the important numbers there.
The ~1359 req/sec (4,893,106 requests / 3600 seconds) is well within the capabilities of a default Squid, which can extend up to around 10k req/sec before needing careful tuning.
That number was gained before HTTPS became so popular. So YMMV depending on how many CONNECT tunnels you have to deal with. That HTTPS traffic can possibly be decrypted and cached but performance trade-offs are quite large.
>> We will install Squid on CentOS 8.1. Please kindly share your
>> thoughts / advice.
Whatever OS you are most comfortable with administering. Be aware that the official CentOS Squid packages are very slow to update - apparently they still have only v4.4 (8 months old) despite the CentOS 8.2 point release only a few weeks ago.
So you may need to build your own from source and/or use other semi-official packagers, such as the ones from Eliezer at NGTech when he gets around to CentOS 8 packages. <https://wiki.squid-cache.org/KnowledgeBase/CentOS>
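If building from source, a rough sketch of the steps, assuming Squid 4.x
sources with OpenSSL development headers installed (the version and prefix
here are illustrative, not a recommendation):

    # Illustrative build; --with-openssl and --enable-ssl-crtd are
    # needed if SSL-Bump will be used later:
    tar xzf squid-4.12.tar.gz && cd squid-4.12
    ./configure --prefix=/usr/local/squid --with-openssl --enable-ssl-crtd
    make && sudo make install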
FYI: if you find yourself having to use SSL-Bump, then we highly recommend following the latest Squid releases with fairly frequent updates (at minimum a few times per year; worst case monthly). If you like CentOS, you may find Fedora more suitable for tracking the volatility and update churn of the security environment.
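For context, a bare-bones SSL-Bump configuration looks roughly like the
sketch below. The CA certificate path, helper path, and cache sizes are
assumptions for illustration; the real bump/splice policy deserves careful
thought:

    # squid.conf - minimal SSL-Bump sketch (assumed paths):
    http_port 3128 ssl-bump cert=/etc/squid/ca.pem \
        generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
    sslcrtd_program /usr/lib64/squid/security_file_certgen \
        -s /var/lib/squid/ssl_db -M 4MB
    acl step1 at_step SslBump1
    ssl_bump peek step1    # read the TLS client hello / SNI first
    ssl_bump bump all      # then decrypt (or splice chosen sites instead)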
>> Is there sizing methodology and tools?
There are a couple of methodologies, depending on what aspect you are tuning towards - and one for identifying the limitation points to begin a tuning process.
The info you gave above is the beginning: checking to see whether your traffic rate is reasonably within the capability of a single Squid instance.
Yours is reasonable, so the next step is to get Squid running and see where the trouble points (if any) are.
For more see <https://wiki.squid-cache.org/SquidFaq/>
>> What resources are generally recommended for our environment?
>> CPU:
>> Memory:
>> Disk space:
>> Other factors to be considered, if any:
>>
>> Do you have generally recommended performance testing tools? Any
>> suggested guidelines?
>>
CPU - Squid is still mostly single-process, so prioritize faster clock speeds over core count. Multi-core can help of course, but not as much as cycle speed does. Hyper-threading is useless for Squid.
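That said, if you do want to put extra cores to work, Squid's SMP mode runs
multiple worker processes; a sketch, with the worker count being an
assumption to size against your own hardware:

    # squid.conf - two kid workers, each pinned to its own core:
    workers 2
    cpu_affinity_map process_numbers=1,2 cores=1,2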
Memory - Squid will use as much as you can give it. Let your budget govern this.
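The main knob is cache_mem, which caps only the memory cache - the cache
index and in-transit objects use RAM on top of it. A sketch with assumed
numbers for a 16GB machine:

    # squid.conf - leave headroom for the cache index and the OS:
    cache_mem 8 GB
    maximum_object_size_in_memory 512 KB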
Disk - Squid will happily run with no disk - or lots of large ones.
- Avoid RAID. Squid *will* shorten disk lifetimes with its unusually high write I/O pattern; how much shorter varies by disk type (HDD vs SSD). So you may find it better to budget for the maintenance cost of replacing disks in the future rather than buying multiple disks up-front for RAID use. See <https://wiki.squid-cache.org/SquidFaq/RAID> for details.
- Up to a few hundred GB per cache_dir can be good for large caches. Going up to TB is not (yet) worth the disk cost as Squid has a per-cache limit on stored objects.
- Disk caches can be re-tuned, added, moved, removed, and/or extended at any time, and the right setup will depend on the profile of object sizes your proxy handles - which itself likely changes over time. So in general, let your budget decide the initial disks and work from there.
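In squid.conf terms each disk becomes its own cache_dir line, which you can
add or retune later. A sketch with assumed mount points and sizes (the
100000 is in MB, i.e. roughly the "few hundred GB" scale mentioned above):

    # squid.conf - one cache_dir per physical disk, no RAID:
    cache_dir aufs /cache1/squid 100000 16 256
    cache_dir aufs /cache2/squid 100000 16 256
    maximum_object_size 1 GB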
Load Testing - the tools we devs use to review performance are listed at the bottom of the profiling FAQ page. These are best for testing the theoretical limits of a particular installation - real traffic tends to run somewhat lower. So I personally prefer taking stats from the running proxy on real traffic and seeing what I can learn from those.
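For stats from a running proxy, the bundled cache manager interface is the
usual starting point, e.g. via the squidclient tool that ships with Squid:

    # overall service times, hit ratios, memory and FD usage:
    squidclient mgr:info

    # rolling 5-minute averages, including requests/sec:
    squidclient mgr:5min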
HTH
Amos

_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users