caching gzip pages

Dear all,

www.yahoo.com and other websites send gzip-compressed versions of their web pages. Squid simply stores (caches) the compressed body as-is somewhere on disk, and the browser interprets/deflates the gzip data and displays the page.
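To make sure I understand the flow: the proxy keeps the compressed bytes untouched and only the client decompresses them. Something like this sketch (the `cache` dict here is just an illustration of the idea, not Squid's actual store):

```python
import gzip

# The origin server compresses the page body (Content-Encoding: gzip).
page = b"<html><body>hello</body></html>"
compressed = gzip.compress(page)

# The proxy caches the compressed bytes as-is; it never inflates them.
cache = {}  # illustrative stand-in for the on-disk store
cache["http://www.yahoo.com/"] = compressed

# The browser fetches from the cache and deflates the body itself.
body = gzip.decompress(cache["http://www.yahoo.com/"])
assert body == page
```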

Given that, how does Squid cache pages that arrive in gzip format?
I have seen other HTTP proxies (wcol, for example) create a directory named www.yahoo.com containing a file named no_name.html; as you follow further links from that page, they keep creating subdirectories named after each requested link. Does Squid also work this way? Could you send me a pointer to the code where I can check out its caching implementation?
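For the record, my current understanding (please correct me if this is wrong) is that Squid does not mirror the site's URL layout the way wcol does: with the default ufs store it assigns each object a swap file number and hashes that number into two levels of hex-named directories under cache_dir. A small sketch of that mapping, assuming the default 16 first-level and 256 second-level directories:

```python
import os

def ufs_swap_path(cache_dir: str, file_number: int,
                  l1: int = 16, l2: int = 256) -> str:
    """Map a swap file number to an on-disk path, ufs-style.

    Follows the %02X/%02X/%08X layout I believe Squid's ufs store
    uses: two levels of hex-named directories, then a hex file name.
    """
    d1 = (file_number // l2 // l2) % l1   # first-level directory
    d2 = (file_number // l2) % l2         # second-level directory
    return os.path.join(cache_dir, "%02X" % d1, "%02X" % d2,
                        "%08X" % file_number)

print(ufs_swap_path("/var/spool/squid", 0x123))
```

If that is right, I believe the relevant code lives under src/fs/ufs/ in the Squid source tree, but a confirmation would be appreciated.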

Thanks and Regards
Geetha


