
Re: 2.6-stable3 performance weirdness with polymix-4 testing


 



On 8/28/06, Adrian Chadd <adrian@xxxxxxxxxxxxxxx> wrote:
On Mon, Aug 28, 2006, Pranav Desai wrote:

> >A single drive and you expect 1000 req/s? Is this a solid-state drive or
> >something with infinitely small seek time?
> >
> No, it's just a regular drive. I am already working on that, but it's
> kind of difficult to get things changed quickly :-).
> But the good part is that even with 1 disk it's able to do 1000 req/s,
> at least for the first 6 hrs, so I am sure I can push it more once I
> get some better hardware.

I think you'll find that the performance is degrading because:

* (a) the disk write queues are slowly filling up and taking longer to write;
* (b) the disk is filling up, and fragmentation + object replacement + non-linear
      file allocation kick in.

Faster hardware won't fix that. More disks may. COSS will help a lot for
the smaller objects. But a lot more work needs to be done to improve the
Squid disk store under load.
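
For reference, a COSS store is just another cache_dir type; a minimal
sketch, assuming the 2.6 coss syntax (the path, the 1 GB stripe size, and
the 64 KB small-object cutoff here are only illustrative):

# COSS keeps small objects in one pre-allocated file, written log-style,
# which avoids the per-object seeks that hurt aufs/ufs
cache_dir coss /var/cache/squid/coss 1024 max-size=65536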


for (a), shouldn't the disk queue also affect the performance during
the first phase of the test? The first phase is almost perfect, with no
degradation in response times.
If it's all right, I can send you the graphs; the degradation happens
so suddenly that it looks very wrong.
Maybe I will try a few tests with a longer first phase to see if the
disk queue catches up and degrades the performance.
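
One way I could check that (I have not tried it yet): watch the drive's
queue while the test runs, using sysstat's extended iostat output. If (a)
is the cause, avgqu-sz and await should climb steadily through the first
phase:

# sample extended device stats every 5 seconds;
# avgqu-sz = average request queue length, await = ms per request
iostat -x 5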

I will be trying COSS shortly; I'm also expecting it to do much
better. What other kind of work do you have in mind? Another
filesystem, or something else?
I am thinking of giving http://logfs.sourceforge.net/ a try,
but I don't know whether it needs changes in Squid to make efficient
use of the filesystem.

> It didn't in the first phase. In fact it did very well in the first
> phase compared to the first phase of 2.5, but somehow after the idle
> phase it just wasn't able to recover, and polybench wasn't even able
> to push to 1000 req/s because the response times were so high.

What's your squid.conf look like? It sounds like the disks just couldn't
keep up with your request rate.


visible_hostname        10.51.6.102

# Single 30 GB aufs store on one spindle (16 L1 / 256 L2 directories)
cache_dir       aufs /var/cache/squid 30720 16 256

http_port       8080
request_body_max_size   0
snmp_port       3401
negative_ttl    0 minutes
pid_filename    /var/run/squid.pid
coredump_dir    /var/log/squid
cache_effective_user    squid
cache_effective_group   squid

# Logging: store log disabled, 10 rotated generations kept
cache_access_log        /var/log/squid/access.log
cache_log       /var/log/squid/cache.log
cache_store_log none
cache_swap_log  /var/log/squid/swap.log
logfile_rotate  10

icp_port        3130
icp_query_timeout       20000
log_icp_queries on
extension_methods       SEARCH PROPPATCH
forwarded_for on

# ACLs: effectively wide open for the benchmark clients
acl all src     0.0.0.0/0.0.0.0
acl localhost   src     127.0.0.1 10.51.6.102
acl manager     proto   cache_object
acl snmppublic  snmp_community  public
http_access allow       localhost
miss_access allow       all
http_access allow       all
snmp_access allow snmppublic    all

memory_pools off
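
If the COSS test works out, I imagine the single aufs line above gets
split into two stores, roughly like this (a sketch only; the sizes and
the 64 KB cutoff are guesses on my part, and it assumes 2.6's
min-size/max-size cache_dir options):

# small objects go to the log-structured COSS file, large ones stay on aufs
cache_dir coss /var/cache/squid/coss 4096 max-size=65536
cache_dir aufs /var/cache/squid 26624 16 256 min-size=65537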




Adrian




--

------------------------------
http://pd.dnsalias.org

