Re: Squid Under High Load

On Fri, Feb 02, 2007, Manoj Rajkarnikar wrote:
> On Thu, 1 Feb 2007, Michel Santos wrote:
> 
> > It depends how you look at it. Disk space is cheap, and serving one
> > 650MB object is a fat win even if it happens only twice a month.
> 
> Yes, disk space is cheap, but it's not only about disk space. The more 
> disk you use, the more RAM you'll need, and the more files you'll have 
> to search through to find an object. Also, byte hit ratio alone is not 
> our goal; it's also how fast you can deliver the cached objects to your 
> clients. From here, reaching sites located in other countries means a 
> satellite hop, 600ms+. So it's more about giving better response times 
> than saving a little bandwidth. As you said, it all depends on the 
> situation you're in.

Part of the work I did quite a while ago was to try to let people store
very large objects on a separate spool. My guess was that large objects
are accessed less frequently and so could happily be stored in a plain UNIX
filesystem. The file open rate for a "normal" UNIX filesystem is what, 50-ish
requests a second for a single-spindle disk? Maybe slightly higher
if all your directory entries are cached?

Research has mostly shown that to be true; i.e., the overhead of UNIX
filesystems becomes less of a concern once the object size grows past a
couple hundred kilobytes. I'd quote the references but I don't have them
handy - I'll make sure they appear in the document library once the new
Squid website is released.

So as long as you're able to store small objects separately from large
objects, and make sure one doesn't starve the other of IO, you'll be able
to have your cake and eat it too. :P




Adrian

