On 07/14/2017 02:11 AM, bugreporter wrote:
> By doing so I'll get a new (or the same) rough estimation which is not
> what I'm really looking for.

You will get an accurate-enough formula, which is what you should be
looking for.

> Actually I need to have a formula based on the mean object size

That is what I am trying to give you. Sorry if I was not explicit
enough. Idle Squid memory requirements can be computed using the
following formula:

  RAM used for HTTP caching purposes =
      RAM used by all cache indexes (cache_dirs and cache_mem) +
      cache_mem

where

  RAM used by a single cache index = C + v*n

where

* C is an unknown constant representing the size of the in-RAM overhead
  of having a single (empty) cache (cache_dir or cache_mem). C depends
  on the Squid build and configuration. C is normally a lot smaller than
  v*n, so you might just ignore it.

* v is an unknown constant representing the size of the in-RAM overhead
  of indexing a single cached object. v depends on the Squid build
  (e.g., 32- vs. 64-bit). v should be close to the sum of the StoreEntry
  and LruNode sizes.

* n is the number of objects in the cache, which you can estimate by
  dividing the cache size by the mean cached object size.

The experiments I suggested can be used to estimate the C and v
constants required to compute the "RAM used by cache indexes"
component. You can measure/estimate/configure/control everything else
in the formula. The formula works well for large values of n, where
various rounding effects become negligible.

How you use this formula/model is up to you. If your experiments prove
the formula wrong, please discuss!

Thank you,

Alex.

P.S. Please note that a busy Squid also consumes memory for in-transit
transactions and other caches. If you know how much Squid consumes for
HTTP caching, then you can effectively measure other overheads, which
will also vary from one deployment environment to another.
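For what it's worth, the model above is easy to put into a few lines of
code. The sketch below is only an illustration of the arithmetic: the
values of C, v, the cache sizes, and the mean object size are all
made-up placeholders that you would replace with numbers measured or
estimated for your own Squid build and configuration.

```python
def index_ram(C, v, n):
    """RAM used by a single cache index: C + v*n."""
    return C + v * n

def caching_ram(cache_sizes_bytes, mean_object_size, C, v, cache_mem_bytes):
    """RAM used for HTTP caching purposes: the sum of index RAM over all
    caches (cache_dirs and cache_mem), plus cache_mem itself."""
    total = cache_mem_bytes
    for size in cache_sizes_bytes:
        n = size / mean_object_size  # estimated number of cached objects
        total += index_ram(C, v, n)
    return total

# Example with hypothetical numbers: one 100 GB cache_dir, a 256 MB
# cache_mem, a 32 KB mean object size, v = 112 bytes per indexed object
# (illustrative only), and C ignored because it is dwarfed by v*n.
GB, MB, KB = 1024**3, 1024**2, 1024
estimate = caching_ram(
    cache_sizes_bytes=[100 * GB, 256 * MB],  # cache_dir and cache_mem indexes
    mean_object_size=32 * KB,
    C=0,     # empty-cache overhead, usually negligible compared to v*n
    v=112,   # assumed per-object index overhead in bytes
    cache_mem_bytes=256 * MB,
)
print(round(estimate / MB))  # about 607 MiB for these made-up inputs
```

Plugging in your own v (from the experiments) and your own mean object
size is all it takes to turn the formula into a concrete estimate.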
_______________________________________________ squid-users mailing list squid-users@xxxxxxxxxxxxxxxxxxxxx http://lists.squid-cache.org/listinfo/squid-users