On Sat, Aug 11, 2007, Michel Santos wrote:

> I must admit I can't speak to that because I never could really test it,
> but I don't convince myself easily just by reading papers.

Good! That's why you take the papers and try to duplicate/build from them
to convince yourself.

Google for "UCFS web cache"; that should bring up one of the papers in question.

2 to 3 times the small-object performance is what people are seeing with COSS
under certain circumstances, because it eliminates the multiple seeks required
in the worst cases on "normal" UNIX filesystems. It also reduces write overhead
and fragmentation by writing in larger chunks. Issuing a 512-byte write vs a
16k write to the same sector of disk is pretty much an equivalent operation in
terms of time taken (there's a rough timing sketch at the end of this mail).

The stuff to do, basically, involves:

* planning out better object memory cache management;

* sorting out a smarter method of writing stuff to disk - i.e., exploit locality;

* not writing everything cacheable to disk! Only write stuff that has a good
  chance of being read again;

* doing IO ops in larger chunks than 512 bytes - I think the sweet spot from my
  own semi-scientific tests is ~64k, but what I still need to do is detect the
  physical geometry of the disk and make sure my write sizes line up with
  physical sector boundaries (i.e., so my X-kbyte writes aren't kicking off a
  seek to an adjacent sector, plus another rotation to reposition the head
  where it needs to be) - see the chunk-buffering sketch at the end of this mail;

* handling larger objects / partial object replies better.

I think I've said most/all of that before. We've identified what needs doing -
what we lack is people to do it and/or to fund it. In fact, I'd be happy to do
all the work as long as the money were available once it was done (so I'm not
being paid for the mere "hope" that the work gets done).

Trouble is, we're coders, not sales/marketing people, and sometimes I think
that's sorely what the Squid project needs to get itself back into the full
swing of things.

Adrian
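
A rough timing sketch for the 512-byte vs larger-write point above (my own
illustration, not code from COSS or the papers; the file name, iteration count
and 64k upper size are arbitrary assumptions). It times synchronous writes of
each size to the same offset; on a rotating disk the per-write cost is
dominated by getting the head there, not by how many bytes follow it:

    /* write-size microbenchmark sketch; file name and sizes are arbitrary */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    static double time_writes(int fd, const char *buf, size_t len, int iters)
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < iters; i++) {
            if (pwrite(fd, buf, len, 0) != (ssize_t) len) {
                perror("pwrite");
                exit(1);
            }
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void)
    {
        /* O_SYNC so each write actually reaches the disk, not just the page cache */
        int fd = open("bench.dat", O_WRONLY | O_CREAT | O_SYNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        static char buf[64 * 1024];
        memset(buf, 'x', sizeof buf);

        printf("512 byte writes: %.3fs\n", time_writes(fd, buf, 512, 200));
        printf("64 kbyte writes: %.3fs\n", time_writes(fd, buf, 64 * 1024, 200));

        close(fd);
        return 0;
    }

(Build with something like "gcc -std=gnu99 -o bench bench.c"; older glibc may
also want -lrt for clock_gettime.)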
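And a minimal sketch of the "larger, aligned chunks" item from the list above
(again my own illustration, not COSS source; the chunk size, file name, struct
and function names are all made up for the example). Small objects get packed
into a 64k in-memory chunk and hit the disk as a single write; on Linux, the
BLKSSZGET ioctl reports the device's logical sector size, so the chunk size can
be kept a multiple of it:

    /* chunk-buffered writes; names and sizes are illustrative only */
    #include <fcntl.h>
    #include <linux/fs.h>     /* BLKSSZGET (Linux block devices) */
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    #define CHUNK_SIZE (64 * 1024)    /* the ~64k "sweet spot" mentioned above */

    struct chunk {
        char   buf[CHUNK_SIZE];
        size_t used;
    };

    /* Logical sector size of the underlying device; falls back to 512 for
     * plain files, where the block-device ioctl does not apply. */
    static int sector_size(int fd)
    {
        int sz;
        if (ioctl(fd, BLKSSZGET, &sz) == 0)
            return sz;
        return 512;
    }

    /* Append one small object to the chunk; when the next object would not
     * fit, write the chunk out as one full, chunk-aligned operation
     * (anything past 'used' is just padding). */
    static void chunk_add(struct chunk *c, int fd, const void *obj, size_t len)
    {
        if (c->used + len > CHUNK_SIZE) {
            if (write(fd, c->buf, CHUNK_SIZE) != CHUNK_SIZE)
                perror("write");
            c->used = 0;
        }
        memcpy(c->buf + c->used, obj, len);
        c->used += len;
    }

    int main(void)
    {
        static struct chunk c;
        char obj[3000];
        int i, fd;

        fd = open("coss-demo.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        printf("logical sector size: %d bytes\n", sector_size(fd));

        /* pretend we're caching a stream of ~3 kbyte objects */
        memset(obj, 'o', sizeof obj);
        for (i = 0; i < 100; i++)
            chunk_add(&c, fd, obj, sizeof obj);

        /* flush the final, partly-filled chunk as a full chunk so writes stay
         * chunk-aligned on disk (the tail is padding) */
        if (c.used && write(fd, c.buf, CHUNK_SIZE) != CHUNK_SIZE)
            perror("write");

        close(fd);
        return 0;
    }

A real implementation would obviously also need the index/replacement logic,
and would round CHUNK_SIZE to a multiple of the reported sector size rather
than assume 64k is already aligned; this only shows the write path.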