Re: Building local repo - eliminating dups - why some new x86_64?




>> (...) When a computer on the network asks for a file
>> that's been downloaded previously, there is no need to go into the
>> Internet.
>
> Yes and no.
>
> Arch packages are not exactly small. I run a squid cache, and a cache object
> size of 128KB serves me pretty well. To accommodate all Arch packages, this
> setting would have to go up to maybe 150MB (for openoffice). If the cache starts
> caching every object up to 150MB, it won't be as effective, or it will
> balloon dramatically. Not to mention the memory requirement will go up
> too.

I'm under the impression you can restrict it by more than just object
size, e.g. only caching Arch packages coming from your favourite
mirrors rather than everything that passes through. Yes, it does raise
the requirements, but that should be manageable.
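
A minimal squid.conf sketch of the sort of thing I mean (untested; the
mirror domain and the sizes are just placeholders for whatever you
actually use):

  # cache only Arch packages coming from the mirrors I use,
  # and let cached objects be up to 150MB (openoffice-sized)
  acl archmirror dstdomain .archlinux.org
  acl archpkg urlpath_regex \.pkg\.tar\.
  maximum_object_size 150 MB
  cache allow archmirror archpkg
  cache deny all
  # released packages don't change, so keep them fresh for a week
  refresh_pattern \.pkg\.tar\. 0 100% 10080

Everything that doesn't match goes through uncached, so the disk and
memory footprint only grows with the packages themselves.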

>
> But no doubt HTTP access will be dramatically faster :)
>
> Not to mention, squid is only an HTTP caching proxy, not FTP.

"Squid is a caching proxy for the Web supporting HTTP, HTTPS, FTP, and
more." -- their website.

> Squid is great, but I doubt it can help with multiple computers running
> Arch. It only handles download caching, and that's not enough.
>
> (snip)
>

Yeah, some decent ideas there.
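
On the "only download caching" point: the other boxes do at least get
the benefit once they're pointed at the proxy. As far as I know,
pacman's downloader (and wget, if you use an XferCommand) honours the
usual proxy environment variables, so something like this on each
client should do (host and port are made up):

  export http_proxy=http://proxybox:3128
  export ftp_proxy=http://proxybox:3128

It won't eliminate the duplicate copies sitting in each machine's
cache directory, but it does mean each package only crosses the
Internet link once.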

-AT

