
Re: Accelerating Proxy options?


 



Amos Jeffries wrote:
On Mon, 18 Apr 2011 18:30:51 -0700, Linda Walsh wrote:
[wondering about squid accelerator features such as...]
1)  Parsing fetched webpages and looking for statically included content
 and starting a "fetch" on those files as soon as it determines
 page-requisites

Squid is designed not to touch the content. Doing so makes things slower.
----
   Um, you mean:  "Doing so can often make things slower."   :-)

It depends on the relative speed of the CPU where Squid is running vs. the external line speed. Certainly, you would agree that if the external line speed were, say, 30 Bps, Squid would have much greater latitude to "diddle" with the content before any performance impact was noticed.

   I would agree that doing such processing "in-line" would create
a performance impact, since even right now, with no such processing being
done, I note Squid impacting performance by about 10-30% over a direct
connection to *fast* sites.  However, I would only think about doing
such work outside of the direct i/o chain, via separate threads or processes.

   Picture this: I (on a client sys) pull in a web page.  At the same time
I get it, it's handed over to a separate process running on a separate core
that begins parsing it.  Even if the server and client parse at the same
speed, the server would have an edge in formulating the "pre-fetch" requests, simply because it's on the same physical machine and doesn't incur client-server latency.  The server might have an additional edge since it would only be looking through the fetched content for "pre-fetchables" and not concerning itself with rendering issues.
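To make the idea concrete, here's a minimal sketch (my own illustration, not anything Squid does) of the scanning half: pull the statically included resources out of a fetched page so a side process could start fetching them. All names here are hypothetical; it uses only Python's stdlib HTML parser.

```python
# Hypothetical sketch: scan a fetched page for statically included
# content ("page requisites") that a separate worker could pre-fetch.
from html.parser import HTMLParser

class RequisiteScanner(HTMLParser):
    """Collects URLs of statically included content:
    images, scripts, and stylesheets."""
    def __init__(self):
        super().__init__()
        self.requisites = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("img", "script") and "src" in attrs:
            self.requisites.append(attrs["src"])
        elif tag == "link" and attrs.get("rel") == "stylesheet" and "href" in attrs:
            self.requisites.append(attrs["href"])

def find_prefetchables(page_html):
    """Return the list of prefetchable resource URLs found in the page."""
    scanner = RequisiteScanner()
    scanner.feed(page_html)
    return scanner.requisites

page = ('<html><head><link rel="stylesheet" href="/site.css"></head>'
        '<body><img src="/logo.png"><script src="/app.js"></script></body></html>')
print(find_prefetchables(page))  # → ['/site.css', '/logo.png', '/app.js']
```

The point being: this scan is cheap relative to a network round trip, and it runs off the critical path, so the client never waits on it.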

There are ICAP server apps and eCAP modules floating around that people have written to plug into Squid and do it. The only public one AFAICT is the one doing gzipping, the others are all proprietary or private projects.
---
  Too bad there is no "CSAN" repository akin to Perl's CPAN, nor,
seemingly, the same level of community motivation to add to such
a repository.




2. Another level would be pre-inclusion of included content for pages
that have already been fetched and are in cache.  [...]

ESI does this, but it requires the website to support ESI syntax in the page code.
---
ESI? Is there a TLA URL for that? ;-)
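(Answering my own question after a quick search: ESI is "Edge Side Includes", a small markup language for assembling pages at the edge. If I understand it right, a page marks its dynamic fragments with tags something like the following, and the proxy stitches the fragments in; the URL here is just an illustration.)

```
<html>
  <body>
    ...static shell of the page, cacheable as-is...
    <esi:include src="http://example.com/fragments/news.html"/>
  </body>
</html>
```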

  Anyway, just some wonderings...
What will it take for Sq3 to get to the feature level of Sq2 and allow, for example, caching of dynamic content?
  Also, what will it take for Sq3 to get full, included HTTP1.1 support?

Though it's been out for years, it __seems__ like Sq3 hasn't made much progress on those fronts. Are they simply not a priority?

 Especially reaching the 1st goal (Sq3 >= Sq2) would, I think, consolidate community efforts at improvement and module construction
(e.g. caching dynamic content like that from YouTube, where the associated wiki directions for doing so under Sq2 are inapplicable to Sq3)...
(champing at the bit for Sq2 to be obsoleted by Sq3)...
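For anyone following along, the Sq2 wiki recipe I'm referring to boils down to a squid.conf fragment roughly like the one below (a sketch from memory; the rewrite-helper path and the URL pattern are placeholders, and `storeurl_rewrite_program`/`storeurl_access` exist in Squid 2.7 but not in Sq3, which is exactly the gap):

```
# Squid 2.7-style caching of dynamic video content (sketch).
# The helper canonicalizes varying CDN URLs to one stable store key.
acl youtube_videos url_regex -i ^http://.*\.youtube\.com/
storeurl_access allow youtube_videos
storeurl_access deny all
storeurl_rewrite_program /usr/local/bin/store_url_rewrite
refresh_pattern -i \.flv$ 10080 90% 999999 ignore-no-cache override-expires
```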




