On 06/09/2013 06:29 AM, Eliezer Croitoru wrote:
I have seen http://www.squidblacklist.org/, which is a very nice idea, but I am wondering whether squid.conf and other Squid-based products are the right choice for every deployment. On a mission-critical proxy server you need to prevent any "reload" of the proxy, since a reload can cause a *small* amount of download corruption. Admins usually don't think about reloading a process, but it can cause problems for a department in an enterprise, and from an ISP's point of view you must prevent any problem for the client. The current solutions for filtering are: a helper, squid.conf ACLs, or an ICAP/eCAP service. A helper such as squidGuard is a nice solution, but it also requires a reload of the whole Squid service.
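As a concrete example of the helper approach, here is a minimal squid.conf fragment wiring in squidGuard as a URL rewrite helper (the paths and child count are illustrative values, not a recommendation):

    # squid.conf fragment (illustrative paths): squidGuard as a rewrite helper
    url_rewrite_program /usr/local/bin/squidGuard -c /etc/squid/squidGuard.conf
    url_rewrite_children 10

As noted above, a blacklist update behind such a helper still requires a restart or "squid -k reconfigure" of Squid itself before it takes effect, which is exactly the kind of reload being discussed.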
ufdbGuard must be excluded from this comparison, because ufdbGuard reloads without Squid reloading. ufdbGuard reloads its database in 10 seconds, during which time Squid operates normally and there is no disruption of any kind for an end user unless it is configured otherwise. During the reload ufdbGuard has a configurable behaviour:
- all URLs are allowed (default)
- all URLs are allowed, but URL lookups are intentionally delayed by a small amount; the goal is to lower the number of URLs that pass through without being filtered
- all URLs are blocked, and the user receives a message explaining the reload and asking them to try again in a few moments
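To make the three modes concrete, here is a small Python sketch of the pattern; it is a hypothetical illustration, not ufdbGuard's actual code, and the policy names, stub loader, and delay value are invented for the example:

    import time

    # hypothetical policy names; the real configuration option names may differ
    ALLOW, ALLOW_DELAYED, BLOCK = "allow", "allow-delayed", "block"

    def load_database():
        # stub loader; a real daemon would parse the blacklist files here
        return set()

    class FilterDaemon:
        def __init__(self, reload_policy=ALLOW, delay=0.2):
            self.db = load_database()
            self.reloading = False
            self.reload_policy = reload_policy
            self.delay = delay  # seconds to stall each lookup in delayed mode

        def lookup(self, url):
            if self.reloading:
                if self.reload_policy == BLOCK:
                    return "block: database reload in progress, please retry shortly"
                if self.reload_policy == ALLOW_DELAYED:
                    time.sleep(self.delay)  # slow clients so fewer URLs slip through
                return "allow"  # pass unfiltered during the reload window
            return "block" if url in self.db else "allow"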
On a small-scale system that is easy to do, but for a 24x7 filtering solution we can't just reload the Squid process or recreate the DB. That is why I wrote my ICAP service for this specific issue: the need for a more persistent solution that doesn't require downtime, from either the proxy's or the filtering service's point of view. Would you prefer filtering based on a reload, or a persistent DB like MongoDB or Tokyo Tyrant?
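As a sketch of what a persistent lookup can look like, here is a minimal Python example against MongoDB; the database name, collection name, and schema are illustrative assumptions, not the actual service:

    # Minimal sketch: URL lookups against a persistent MongoDB store.
    from urllib.parse import urlparse
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017/")
    blacklist = client["filtering"]["blacklist"]  # illustrative names

    def is_blocked(url: str) -> bool:
        """Check a URL's host against the persistent blacklist.
        Entries inserted into the collection are visible to the very
        next lookup -- no process reload is ever needed."""
        host = urlparse(url).hostname or ""
        return blacklist.find_one({"domain": host}) is not None

    # An ICAP server would call is_blocked() for each REQMOD request and
    # either return a block page or pass the request through unmodified.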
ufdbGuard has its database in memory; it simply discards the current in-memory database and loads a new version. Because of the in-memory database it is very fast: 50,000 URL verifications per second on a single core (5-year-old technology). The API based on ufdbGuard is even faster and reaches 90,000 URL verifications per second on a single core.
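The swap pattern can be illustrated with a short Python sketch (a hypothetical rendering of the idea, not ufdbGuard's C implementation): the new table is built off to the side and then published with a single reference assignment, so lookups never see a half-loaded database:

    class UrlDatabase:
        """In-memory URL table: cheap reads, replaced wholesale on reload."""

        def __init__(self, entries):
            self._blocked = frozenset(entries)  # immutable snapshot

        def contains(self, url):
            return url in self._blocked

    current_db = UrlDatabase([])  # snapshot served to lookups

    def reload(path):
        """Build a fresh database, then swap it in with one assignment.
        Readers keep using the old snapshot until the swap; the old
        database is simply discarded (garbage-collected) afterwards."""
        global current_db
        with open(path) as f:
            entries = (line.strip() for line in f)
            new_db = UrlDatabase(e for e in entries if e)
        current_db = new_db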
Since some bug fixes in Squid, ICAP-based solutions can give more throughput, and I got up to 8k persistent requests on an Intel Atom based filtering system.
I do not understand the performance figure. Can you give more details?

Best regards,
Marcus
Regards, Eliezer