Adam Fyne wrote:
Hi,
Here at work we have programs running on computers that request large
files from our storage computer(s) (also on our network).
Instead of copying the file locally each time for every computer, we
would like to find out whether we could use Squid for this purpose.
On the first request, Squid would copy the desired file into its cache
(the file could be gigabytes in size), and on subsequent requests the
proxy would serve the file from the cache instead of copying it again.
Can we use Squid for this, and if so, how?
Thank you in advance,
Adam
There is one huge IFF in this.
Squid is an HTTP proxy. IFF those files are transferred over HTTP, it's
possible Squid could sit in between.
However, in a setup like the one proposed, Squid acts as if it were just
another of the storage computers (which it really is, storing the files
in its own cache for as long as they are unchanged).
The only benefit you would get out of it is if the 'storage' computers
were actually under some other workload, such as generating new content;
Squid could take some of the delivery workload away from them.
It would also let you provide access to 'more' sources of the same files.
Squid is built primarily for distributed delivery of websites and
gatewaying access for many users to the WWW.
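If those transfers are HTTP, the usual way is to run Squid as a reverse
proxy ("accelerator") in front of the storage server. A minimal
squid.conf sketch, assuming a single storage server at the hypothetical
hostname storage.example.com, a local network of 192.168.0.0/16, and a
Squid build with large-file support (stock 2.x builds may cap cached
objects at 2 GB):

  # Listen as an accelerator (reverse proxy) for the storage server.
  http_port 3128 accel defaultsite=storage.example.com

  # On a cache miss, fetch from the real storage server.
  cache_peer storage.example.com parent 80 0 no-query originserver

  # Raise the object size limit; the default is only a few megabytes.
  maximum_object_size 8192 MB

  # Disk cache: 100 GB (value is in MB) under /var/spool/squid.
  cache_dir ufs /var/spool/squid 102400 16 256

  # Only machines on the local network may fetch through this proxy.
  acl ournet src 192.168.0.0/16
  http_access allow ournet
  http_access deny all

Clients would then request http://<squid-host>:3128/path/to/file rather
than hitting the storage server directly; the first request fills the
cache, and later requests are served from Squid's disk for as long as
the object stays fresh.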
Amos
--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.