Re: Squid 2.x maximum_object_size related to memory usage

Adrian Chadd wrote:
On Fri, Jul 27, 2007, rihad wrote:

Ok, now I understand that if you have a cache_mem of, say, 300 MB, it's never a good idea to set maximum_object_size = 64 MB, since an average of 10-20 concurrent downloads will surely fill the memory. I had hoped Squid would only keep in memory the buffer (up to read_ahead_gap) needed to relay the file being downloaded to the client, while still writing it to disk in the process.

The memory cache and the disk cache are pretty separate. Squid can
remove an object from the "memory cache" (i.e. the whole object is
held in memory) whilst still writing it to disk as it comes off the
network.

I don't believe anyone's done much work investigating Squid's behaviour
with small and large memory objects.
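
For reference, the directives mentioned above look roughly like this in
squid.conf; the values here are purely illustrative, not a recommendation:

    # Total RAM budget for objects held in the memory cache.
    cache_mem 300 MB

    # Largest response Squid will cache at all (in memory or on disk).
    maximum_object_size 64 MB

    # Largest object allowed into the memory cache itself; bigger
    # objects can still be cached, but only on disk.
    maximum_object_size_in_memory 64 KB

    # How far Squid will read ahead of the client while relaying
    # an object from the origin server.
    read_ahead_gap 16 KB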




Let me be the pioneer ;-)

KEY 83FC85024B526D35AD958302B5253591
	GET http://example.com/path/to/large.file
	STORE_PENDING NOT_IN_MEMORY SWAPOUT_WRITING PING_DONE
	CACHABLE,DISPATCHED,VALIDATED
	LV:1185550831 LU:1185550831 LM:1185513900 EX:-1
	5 locks, 1 clients, 1 refs
	Swap Dir 0, File 0X0D6F8D
	inmem_lo: 10432512
	inmem_hi: 10485475
	swapout: 10481664 bytes queued
	swapout: 10481843 bytes written
	Client #0, 0x0
		copy_offset: 10432820
		seen_offset: 10432820
		copy_size: 4096
		flags:

About inmem_lo and inmem_hi: aren't they saying that there are currently
inmem_hi - inmem_lo bytes buffered in memory, and that the file is being
written to disk on the fly (SWAPOUT_WRITING)?
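
If that reading is right, the dump above shows roughly inmem_hi - inmem_lo
= 10485475 - 10432512 = 52963 bytes (about 52 KB) held in RAM, while about
10 MB has already been queued and written to disk. Below is a minimal C
sketch of that sliding-window idea, using the numbers from the dump; it is
only an illustration of the concept, not Squid's actual code or structures:

    /* Sketch of an inmem_lo/inmem_hi style window (illustration only). */
    #include <stdio.h>

    struct mem_window {
        long inmem_lo;  /* lowest byte offset still buffered in memory  */
        long inmem_hi;  /* highest byte offset received from the server */
    };

    /* Bytes currently buffered in RAM for this object. */
    static long bytes_in_memory(const struct mem_window *w)
    {
        return w->inmem_hi - w->inmem_lo;
    }

    /* Once data up to 'offset' has been written to disk and delivered to
     * every attached client, the low edge can advance and the memory
     * below it can be freed. */
    static void release_up_to(struct mem_window *w, long offset)
    {
        if (offset > w->inmem_lo)
            w->inmem_lo = offset;
    }

    int main(void)
    {
        struct mem_window w = { 10432512L, 10485475L };
        printf("buffered now: %ld bytes\n", bytes_in_memory(&w)); /* 52963 */

        /* The low edge is bounded by the slowest reader; here the
         * client's copy_offset from the dump. */
        release_up_to(&w, 10432820L);
        printf("after release: %ld bytes\n", bytes_in_memory(&w)); /* 52655 */
        return 0;
    }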

