
Re: saving web page body to file... help needed


 



Thank you for your quick reply. We will try to use the ICAP server as
you suggested.

But currently we are concentrating only on the filtering part, before
we get to embedding the code.

Once that is completed we will extend it to any server. Right now we
have Squid installed on our work machines.

So we currently need to know the source file in which Squid receives
the body content of a web page from the network, the function that
handles it, and the name of the data structure it is temporarily
stored in before being written to the cache. We can then save it to a
file at that point and use it as required.

Studying the source code is taking a very long time and we are working
under a time constraint, so any help identifying the source file, the
function name, and the data structure that holds the body would be
greatly appreciated.

thanks a lot

Siddhesh Pai Raikar


On 1/17/07, Alex Rousskov <rousskov@xxxxxxxxxxxxxxxxxxxxxxx> wrote:
On Wed, 2007-01-17 at 19:07 +0530, Siddhesh PaiRaikar wrote:

> we are trying to develop a small enhancement to the existing application of
> squidguard using the squid proxy server... which can later be embedded into
> squid itself as a html web page body scanner for unwanted content.

Please consider using an ICAP (RFC 3507, i-cap.org) server for content
scanning, blocking, and manipulation instead of building this feature
directly into Squid. To implement your functionality, you can modify one
of the free or for-a-fee ICAP servers available on the web.

Besides having to work with a much simpler and smaller code base, you
will have an advantage of being compatible with other popular proxies
because they all speak ICAP.

Good luck,

Alex.



