On 02/12/2014 09:23 AM, k simon wrote:

> I created a 16GB "rock" cache_dir and limited the swap rate to 200 and
> the swap timeout to 300. When it was completely full, I reconfigured
> it. Iostat displayed about 200 disk reads/s and about 4 MBytes/s of
> throughput. It took 61 minutes to rebuild successfully, but the worker
> reported "register failed" 30 minutes earlier.

Registration should proceed regardless of the cache rebuild. If it does
not, it is a bug. If there is such a bug, it may have been fixed in
trunk or even v3.4, but I have not tested that use case recently.

> Rebuilding wastes too much time; it is a real problem for a forward
> proxy,

Apart from the registration failure that you attribute to the slow
rebuild, why is a slow rebuild a problem for you? Squid should work
while rebuilding its cache. There is nothing good about a slow rebuild,
but specific solutions to it depend on why it is a problem in your
specific case.

> and I am interested in what the "many factors" are and whether I can
> recognize or tune them.

You may have misinterpreted my comment. I said:

  Whether that time is "too long" depends on many factors, but we have
  not optimized anything in that area (yet).

For you, 61 minutes may be too long. For somebody else it is not a
problem at all. Others may see a 5-minute rebuild. The determination of
whether something is "too long" depends on many local factors, so it was
difficult to answer your loaded "Does X solve rebuilding time too long
issue?" question without implying that there is a rebuild issue that
affects everybody.

For example, if I restart Squid once a month, and there is a secondary
Squid that can serve traffic during the cache rebuild, a 61-minute
rebuild is not a problem for me. On the other hand, if I have just one
Squid and restart it every hour during peak usage, the performance
degradation associated with the rebuild may be prohibitive even if the
rebuild takes 5 minutes.
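For reference, the setup described at the top of the thread corresponds to a rock cache_dir line roughly like the one below. The disk path and directory size are assumptions for illustration; the two options are the documented rock parameters (max-swap-rate in swaps/second, swap-timeout in milliseconds):

```
# 16 GB rock cache_dir; /var/spool/squid is an assumed path.
# max-swap-rate=200 caps disk swap I/O at ~200 swaps/second;
# swap-timeout=300 skips swapping an object if the disk cannot
# service it within ~300 milliseconds.
cache_dir rock /var/spool/squid 16384 max-swap-rate=200 swap-timeout=300
```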
As I said, Rock rebuild has not been optimized yet, so you are unlikely
to see faster rebuild times with Large Rock. The next steps may include:

1. optimizing Rock code so that it can rebuild faster
2. optimizing your OS so that it can read the disk faster
3. optimizing your disks so that they can read faster
4. reducing your cache size so that there is less to read
5. reducing reload impact by reloading less frequently
6. reducing reload impact by reloading fewer entries at a time
7. using a "slower" cache_dir module with a faster rebuild time

I would suggest #1, but I am biased and do not really know which of the
above are feasible (or even a good idea) in your specific environment.

HTH,

Alex.


>> On 01/27/2014 05:26 AM, k simon wrote:
>>
>>> I noticed large rock has been merged into squid 3.5, and I have some
>>> questions about your large rock patch.
>>
>> Hello Simon,
>>
>>> 1. Does large rock support non-SMP instances?
>>
>> Yes. Rock store can be used in non-SMP mode (as defined at [1]). Rock
>> store uses blocking disk I/O in non-SMP mode.
>>
>> Please note that if your Squid is SMP-capable (e.g., atomics are
>> supported) and SMP is allowed (e.g., no -N), then Squid will start
>> one disker process per Rock cache_dir and, hence, will use SMP mode,
>> even if you have a single worker.
>>
>>> I know the rock store must be used with a worker.
>>
>> Not sure what you mean. Squid cannot work without a worker. Diskers
>> are not required for Rock store. Without diskers, the performance
>> should be significantly worse though, because blocking disk I/O will
>> block the entire worker process. For SMP terminology, please see [1].
>>
>>> 2. Does large rock solve the rebuilding-time-too-long issue?
>>
>> Current large rock has about the same cache index building time as
>> small rock. Whether that time is "too long" depends on many factors,
>> but we have not optimized anything in that area (yet).
>>
>>> 3. Can large rock support range splicing?
>>
>> I do not think Squid itself supports range splicing. If Squid
>> supports it, Rock store should support it as well.
>>
>> HTH,
>>
>> Alex.
>>
>> [1] http://wiki.squid-cache.org/Features/SmpScale
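As a side note on the numbers at the top of the thread: a 61-minute rebuild of a 16 GB store is roughly what a full sequential scan at the iostat-reported ~4 MBytes/s predicts. A back-of-the-envelope check (plain arithmetic, no Squid specifics):

```python
# Estimate rebuild time from the figures reported in the thread:
# a 16 GB rock store scanned at ~4 MBytes/s of read throughput.
cache_size_mb = 16 * 1024   # 16 GB store, expressed in MBytes
throughput_mb_s = 4.0       # iostat-reported read throughput
rebuild_minutes = cache_size_mb / throughput_mb_s / 60
print(round(rebuild_minutes, 1))  # ~68 minutes, close to the observed 61
```

So in this case the rebuild appears disk-bound; options 2-4 in the list above (faster reads or a smaller store) would shorten it proportionally.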