
Re: Maximum disk cache size per worker

On 22/03/2013 7:21 p.m., Sokvantha YOUK wrote:
Dear Amos,

I would definitely love to go down the path of trying the SMP
equivalent of a CARP peering. Please guide me.

The design is laid out here with config files for the pre-SMP versions of Squid:
 http://wiki.squid-cache.org/ConfigExamples/MultiCpuSystem

The SMP worker version of that is only slightly different. Only one "squid" instance is started, using a single main squid.conf that contains a series of if-conditions assigning each worker to a frontend or backend configuration file, like so:

squid.conf:
  workers 3
  if ${process_number} = 1
  include /etc/squid/backend.conf
  endif
  if ${process_number} = 2
  include /etc/squid/backend.conf
  endif
  if ${process_number} = 3
  include /etc/squid/frontend.conf
  endif


The backends can share one config file by using ${process_number} in all the places where a unique '1' or '2' would otherwise appear (hostname, cache_dir path, last digit of the port number, etc.).
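As a concrete sketch of that shared backend file (the hostname, port numbers and paths below are my own illustrative assumptions, not part of the original design):

backend.conf:
  visible_hostname backend${process_number}.example.local
  http_port 127.0.0.1:400${process_number}
  cache_dir aufs /var/spool/squid/backend${process_number} 10000 16 256

Worker 1 then listens on port 4001 with its own AUFS directory, worker 2 on port 4002, with no duplicated configuration.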

The frontend must reference the backends without using ${process_number}. It can also have a rock cache_dir to quickly service small objects from, although YMMV on this.
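A sketch of the matching frontend file (the ports, paths and peer names here are illustrative assumptions; the 'carp' option on cache_peer is what provides the CARP selection among backends):

frontend.conf:
  http_port 3128
  cache_peer 127.0.0.1 parent 4001 0 carp no-query name=backend1
  cache_peer 127.0.0.1 parent 4002 0 carp no-query name=backend2
  cache_dir rock /var/spool/squid/rock 1000 max-size=32768

Note the fixed port numbers (rather than ${process_number}): they are what let every frontend worker reach every backend.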

You can expand this out with multiple frontends if you like, or with more than 2 backends.


Amos


---
Regards,
Vantha

On Fri, Mar 22, 2013 at 1:13 PM, Amos Jeffries <squid3@xxxxxxxxxxxxx> wrote:
On 22/03/2013 4:39 p.m., Alex Rousskov wrote:
On 03/21/2013 08:11 PM, Sokvantha YOUK wrote:

Thank you for your advice. If I want large files to be cached when
first seen by a worker, should my config change so that the first
worker to see a large file caches it, and otherwise leaves it to the
remaining rock store workers?
Your OS assigns workers to incoming connections. Squid does not control
that assignment. For the purposes of designing your storage, you may
assume that the next request goes to a random worker. Thus, each of your
workers must cache large files for files to be reliably cached.


I don't want cached content to be duplicated
among AUFS cache_dirs, and I want to take advantage of the rock store,
which can be shared between workers in an SMP deployment.
The above is not yet possible using official code. Your options include:

1. Do not cache large files.

2. Cache large files in isolated per-worker ufs-based cache_dirs,
     one ufs-based cache_dir per worker,
     suffering from false misses and duplicates.
     I believe somebody reported success with this approach. YMMV.

3. Cache large files in SMP-aware rock cache_dirs,
     using unofficial experimental Large Rock branch
     that does not limit the size of cached objects to 32KB:
     http://wiki.squid-cache.org/Features/LargeRockStore

4. Setup the SMP equivalent of a CARP peering hierarchy, with the frontend
workers using shared rock caches and the backends using UFS. This minimizes
cache duplication, but in the current SMP code it requires disabling loop
detection (probably not a good thing) and some advanced configuration
trickery.
If you want to actually go down that path let me know and I'll put the
details together.
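For the record, option 2 is mostly a matter of putting ${process_number} into each worker's cache_dir path so the directories never collide. A sketch (the sizes and paths are my own example values):

squid.conf:
  workers 3
  cache_dir aufs /var/spool/squid/worker${process_number} 20000 16 256

Each worker then writes to its own isolated AUFS directory, with the false misses and duplicates already mentioned.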

Amos



