Re: Why `Storage Mem capacity` has a value larger than 100%.

On 31/07/19 3:23 pm, kmiku7 wrote:
> 
> Thanks for your reply.
> I am running a 64-bit build of squid on a 64-bit system. The output of
> top/ps shows that squid is using as much memory as claimed in the report.

Okay. That means the negative values are just an artifact of 32-bit
types used in the report display, not actually overflow bugs in the
store code.
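
To illustrate the kind of display artifact I mean, here is a generic C
sketch (not Squid's actual reporting code; the byte count is taken from
the ps output you quote below). A byte count like this one, squeezed
through a signed 32-bit field, can come out as a negative number:

    #include <stdio.h>
    #include <inttypes.h>

    int main(void) {
        /* ~7.4GB resident size, roughly what ps reports for kid squid-1 */
        uint64_t bytes_in_use = 7771408ULL * 1024;

        /* Pushing that through a signed 32-bit type, as a display routine
         * limited to 32-bit fields would, makes the number appear negative. */
        int32_t as_32bit = (int32_t)bytes_in_use;

        printf("64-bit value: %" PRIu64 " bytes\n", bytes_in_use);
        printf("32-bit view : %" PRId32 "\n", as_32bit);
        return 0;
    }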

> I configured the cache directory with a size of 4T:
> cache_dir ufs PATH 4194304 128 256

FYI: On 64-bit systems each 1GB of disk storage needs approximately 15MB
of RAM for the index and metadata.
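
For reference, the arithmetic behind that figure for your 4TB cache_dir:

    4194304 MB of cache_dir  =  4096 GB
    4096 GB x 15 MB per GB   =  61440 MB  ~  60 GB of index RAM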

Does your machine actually have 60GB of RAM available for the proxy to
use for this large cache_dir index?


Those index RAM figures are relative to the average object size. So I
advise tuning the min-size= parameter so that only the many-MB objects
get stored there. That should cut the index RAM requirement by a few
orders of magnitude.
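
For example, a sketch only (the 64MB threshold here is a placeholder to
adapt to your own traffic mix, and PATH is your existing cache path):

    # only objects of at least 64MB (67108864 bytes) go into this cache_dir
    cache_dir ufs PATH 4194304 128 256 min-size=67108864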


> 
> There are many child processes started; the following is the output of `ps`:
> USER    20401  0.0  0.0  71020  2728 ?        Ss   Feb14   0:00
> /PATH/TO/squid -f /PATH/TO/CONFIG/FILE -n squidHot
> USER    20405  0.5 11.8 8258832 7771408 ?     S    Feb14 1298:30 (squid-1)
> --kid squid-1 -f /PATH/TO/CONFIG/FILE -n squidHot
> USER    20440  0.0  0.0  29468  1444 ?        S    Feb14   0:16
> (logfile-daemon) /PATH/TO/access.log
> USER    20441  0.0  0.0  29460  1256 ?        S    Feb14   0:00 (unlinkd)
> USER    20444  0.0  0.0  29468  1252 ?        S    Feb14   0:00
> (logfile-daemon) /PATH/TO/store.log
> 
> Process 20405 uses the most memory.
> 
> Other parts of the report also puzzle me:
> 	Internal Data Structures:
> 		  1185 StoreEntries

ie. Total number of objects being cached by this proxy.


> 		  1184 StoreEntries with MemObjects

ie. Total number of objects which have at least some portion stored in
RAM for fast access.

This includes:
 * all objects in cache_mem
 * all cacheable objects currently being received from a server
 * all cacheable objects currently being delivered to a client
(though objects may match multiple of those criteria, each is only
counted once).

The difference between this and total objects (1185 - 1184 = 1) is the
number of objects *only* stored in a cache_dir.


> 		     8 Hot Object Cache Items

ie. Total count of items in cache_mem area of RAM.

> 		     9 on-disk objects

ie. Total count of objects stored in all configured cache_dir.


> `9 on-disk objects` means only 9 entries of 1185 are stored on disk, and
> others are stored in memory?
> 

Essentially, yes.


> 
> 
> Amos Jeffries wrote
>> Also, your proxy is apparently trying to fit objects with an *average*
>> size exceeding 70MB into that 256MB of cache. The bit of the report you
>> elided shows how many it is trying to fit in there.
> 
> Yes, we have many files larger than 256MB. But what problem will this
> lead to? And why?
> 

That will lead to disk I/O capacity being a major limiting factor in
delivery speed for all those objects, since their data has to be written
to or read from disk in order to be used.

Modulo bugs, the store is only supposed to keep a small portion of each
object in memory: the part awaiting delivery (when sending) or awaiting
swapout to disk (when receiving). maximum_object_size_in_memory does play
a role there, though: objects under that limit *may* be loaded fully into
cache_mem.

Check your object size limits:
<http://www.squid-cache.org/Doc/config/minimum_object_size/>
<http://www.squid-cache.org/Doc/config/maximum_object_size/>
<http://www.squid-cache.org/Doc/config/maximum_object_size_in_memory/>
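
As a rough sketch of what that tuning might look like (the values below
are placeholders for illustration, not recommendations for your traffic):

    # example limits only - adjust to your own object size profile
    minimum_object_size 0 KB
    maximum_object_size 512 MB
    maximum_object_size_in_memory 512 KB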


PS. I also advise upgrading to the latest v4 release to avoid the
security issues and a memory leak that have been fixed since v4.4.

Amos