Amos, I recompiled 3.5.8 with this configuration (removed ipv6 and ssl):

Squid Cache: Version 3.5.8
Service Name: squid
configure options: '--prefix=/usr/local' '--datadir=/usr/local/share' '--bindir=/usr/local/sbin' '--libexecdir=/usr/local/lib/squid' '--localstatedir=/var' '--sysconfdir=/etc/squid3' '--enable-delay-pools' '--enable-linux-netfilter' '--enable-eui' '--enable-snmp' '--enable-gnuregex' '--enable-ltdl-convenience' '--enable-removal-policies=lru heap' '--enable-http-violations' '--with-openssl' '--with-filedescriptors=24321' '--enable-poll' '--enable-epoll' '--enable-storeio=ufs,aufs,diskd,rock'

I formatted the partitions again and started with this config (removed the option that turned shared memory off, removed all refresh_pattern lines), with no workers directive at all:

http_access allow localhost manager
http_access deny manager
acl purge method PURGE
http_access allow purge localhost
http_access deny purge
acl all src all
acl localhost src 127.0.0.1/32
acl localnet src 127.0.0.0/8
acl Safe_ports port 80
acl snmppublic snmp_community public
http_access deny !Safe_ports
http_access allow all
dns_v4_first on
cache_mem 1024 MB
maximum_object_size_in_memory 64 KB
memory_cache_mode always
maximum_object_size 260000 KB
minimum_object_size 100 bytes
collapsed_forwarding on
logfile_rotate 5
mime_table /etc/squid3/mime.conf
debug_options ALL,1
store_id_access deny all
store_id_bypass on
quick_abort_min 0 KB
quick_abort_max 0 KB
quick_abort_pct 100
range_offset_limit 0
negative_ttl 1 minute
negative_dns_ttl 1 minute
read_ahead_gap 128 KB
request_header_max_size 100 KB
reply_header_max_size 100 KB
via off
half_closed_clients off
cache_mgr webmaster
cache_effective_user squid
cache_effective_group squid
httpd_suppress_version_string on
snmp_access allow snmppublic localhost
snmp_access deny all
snmp_incoming_address 127.0.0.1
error_directory /etc/squid3/errors/English
max_filedescriptors 65535
ipcache_size 1024
forwarded_for off
log_icp_queries off
icp_access allow localnet
icp_access deny all
htcp_access allow localnet
htcp_access deny all
digest_rebuild_period 15 minutes
digest_rewrite_period 15 minutes
strip_query_terms off
max_open_disk_fds 150
cache_replacement_policy heap LFUDA
memory_pools off
http_port 9001
http_port 901 tproxy
pid_filename /var/run/squid1.pid
visible_hostname localhost
snmp_port 1611
icp_port 3131
htcp_port 4828
cachemgr_passwd admin admin
if ${process_number} = 1
access_log stdio:/var/log/squid/1/access.log squid
cache_log /var/log/squid/1/cache.log
cache_store_log none
cache_swap_state /var/log/squid/1/%s.swap.state
else
access_log none
cache_log /dev/null
endif
cache_dir rock /cache1/rock1 256 min-size=500 max-size=2000
cache_dir rock /cache1/rock2 2000 min-size=2000 max-size=30000
cache_dir diskd /cache1/diskd2 60000 16 256 min-size=30000 max-size=400000
cache_dir diskd /cache2/2 100000 16 256 min-size=400000 max-size=1048576
cache_dir diskd /cache2/1 680000 16 256 min-size=1048576

This config generates these processes:

# ps ax | grep squid
 9768 ?  Ss  0:00 /usr/local/sbin/squid -f /etc/squid3/squid1.conf
 9770 ?  S   0:00 (squid-coord-4) -f /etc/squid3/squid1.conf
 9771 ?  S   0:01 (squid-disk-3) -f /etc/squid3/squid1.conf
 9772 ?  S   0:00 (squid-disk-2) -f /etc/squid3/squid1.conf
 9773 ?  S   1:13 (squid-1) -f /etc/squid3/squid1.conf

But I am still seeing all those Vary loops all the time :(
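In case I ever put the workers directive back, my understanding from your reply below is that each worker would need its own diskd directories, for example wrapped in the same kind of if/endif blocks I already use for the logs. A rough sketch only (the -w1/-w2 directory names are invented for illustration, not paths I actually have):

if ${process_number} = 1
cache_dir diskd /cache1/diskd2-w1 60000 16 256 min-size=30000 max-size=400000
endif
if ${process_number} = 2
cache_dir diskd /cache1/diskd2-w2 60000 16 256 min-size=30000 max-size=400000
endif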
Thanks,
Sebastian

On 03/09/15 at 15:48, Amos Jeffries wrote:
> On 4/09/2015 6:24 a.m., Sebastián Goicochea wrote:
>> Regarding configure options: I disable IPv6 because of the latency it adds to DNS queries, enable-ssl could be removed, and gnuregex gave no problems (or so I think). Those options in the config file are the core of my configuration. I just stripped ACLs and that kind of stuff to make it shorter, and I also stripped the rewriter part (because I have it commented out at the moment). Could any of the misconfigurations you mention be causing this Vary loop?
>
> In summary: it looks like you may have been using SMP workers in an unsafe manner (sometime recently perhaps) and screwed over your cache_dir. A full cache re-scan is probably in order to fix it.
>
> In detail: what I noticed particularly was that you have a section of SMP configuration, and that later you have "cache_dir diskd ..." without any SMP protections. But what you posted did not include a "workers" directive, so I was unsure.
>
> If you have at any time run that config file with the "workers" directive in it, then those diskd caches will have been randomly overwriting each other's stored content, almost guaranteeing these types of problem and other SWAPFAIL events as well. Even if workers was only enabled for a short time, disk cache corruption is persistent.
>
> You have two options. 1) Wait until all the collisions have been found and erased. That could take a while to happen naturally. 2) Stop Squid, erase the swap.state in those cache_dir and restart Squid. The slow "DIRTY" rebuild will fix collision-type corruptions.
>
> In related settings, you have the shared memory cache disabled and rock store in use. Disabling shared memory and running with SMP workers might make rock store collide as well, though I'm not sure of that. It does nothing in a non-SMP configuration. If the rock is corrupted it self-heals pretty quickly. Just restart Squid and that happens.
>
> Amos
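For my own notes, I read option 2 as roughly the following (a sketch only; it assumes the default swap.state location inside each diskd cache_dir, and my cache_swap_state line relocates it for worker 1, so the paths may need adjusting):

# stop Squid cleanly
/usr/local/sbin/squid -k shutdown -f /etc/squid3/squid1.conf

# erase the swap.state index in each diskd cache_dir (paths taken from the cache_dir lines above)
rm -f /cache1/diskd2/swap.state /cache2/2/swap.state /cache2/1/swap.state

# start again; with no index, Squid does the slow "DIRTY" rebuild that re-scans the objects on disk
/usr/local/sbin/squid -f /etc/squid3/squid1.conf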