The workers are there to use most of the CPU cores (and not just 1, which is not enough).
The hard drives are there to increase I/O.
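Something along these lines in squid.conf (just a sketch; the worker
count and core numbers are examples, and the cpu_affinity_map line is
optional):

  workers 4
  # pin each worker to its own core (cores are numbered from 1)
  cpu_affinity_map process_numbers=1,2,3,4 cores=1,2,3,4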
2015-02-09 18:28 GMT-03:00 Amos Jeffries <squid3@xxxxxxxxxxxxx>:
On 10/02/2015 5:11 a.m., Alfredo Rezinovsky wrote:
> I have one of these lines for each cache disk (sdb, sdc, etc.):
>
> cache_dir aufs /cache/sdb/${process_number} 230000 16 256 min-size=1
> max-size=838860800
>
> I'm using 4 or 5 disks to increase the cache I/O.
>
> The cache size is a little less than disk_size/workers.
>
> Is there a way to run a full store rebuild and then exit, so I can be
> sure the stores are clean before starting the real Squid and enabling
> the transparent proxy iptables rules?
I already mentioned disk I/O contention to you, right?
The rule of thumb of one cache_dir per physical disk spindle is not
removed by multiple workers. A *single* Squid process can push any disk
hardware to its bare-metal I/O write capacity; sharing a disk between
workers just makes it max out faster.
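If you want to keep one cache_dir per spindle while still using SMP
workers, conditionals on ${process_number} do it. An untested sketch,
assuming your four disks are mounted at /cache/sdb through /cache/sde:

  workers 4
  if ${process_number} = 1
  cache_dir aufs /cache/sdb 230000 16 256 min-size=1 max-size=838860800
  endif
  if ${process_number} = 2
  cache_dir aufs /cache/sdc 230000 16 256 min-size=1 max-size=838860800
  endif
  # ... likewise workers 3 and 4 for /cache/sdd and /cache/sde

That way no two workers ever compete for the same spindle.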
During that startup, all Squid is doing with those disks is reading the
swap.state file from each AUFS cache_dir and scanning across the rock
cache_dir slots in MB-sized chunks. That is assuming you are right
about the caches being clean (i.e. having a non-corrupt swap.state
journal).
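If you want to be certain the rebuild has finished before any client
traffic arrives, a rough sequence is (the interface and intercept port
below are placeholders for whatever your http_port line uses):

  squid -z   # create any missing cache_dir structures, then exit
  squid -F   # start, but serve no requests until the store rebuild completes
  # only once it is serving, divert client traffic to it:
  iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
    -j REDIRECT --to-ports 3129

-F does not exit after the rebuild, but it does guarantee the rebuild
is complete before the iptables rules send it any work.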
... so why do you think increasing the number of workers makes them
slow down until one of them hits that 10-second timeout?
And consider what will happen later when your full network bandwidth
gets thrown at the workers (hint: approx. 4/5 of network I/O gets
written to disk).
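To put rough numbers on that (assuming a single 1 Gbps link, purely as
an example):

  1 Gbps         ~= 125 MB/s of network I/O
  125 MB/s x 4/5 ~= 100 MB/s of disk writes

A lone SATA spindle manages maybe 100-150 MB/s sequential, and much
less under a cache's random-write pattern, so one busy link is already
enough to saturate a disk or two.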
Amos
_______________________________________________
squid-users mailing list
squid-users@xxxxxxxxxxxxxxxxxxxxx
http://lists.squid-cache.org/listinfo/squid-users
Alfrenovsky