Re: successive bonnie++ tests taking longer and longer to run (system load steadily increasing)

* What does the top output for the glusterfs client process say? What are its memory and CPU usage?
* Do you find anything interesting in the glusterfs client log files? Can we get the log files? (A capture sketch follows.)
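
A quick way to grab those numbers might look like the sketch below (the process name "glusterfs" and the log location /var/log/glusterfs are assumptions; adjust to the actual setup):

---
#!/bin/bash
# Snapshot glusterfs CPU/memory as seen by top (batch mode) and ps,
# and collect the client log for attaching to a reply.
top -b -n 1 | grep -i gluster > glusterfs-snapshot.txt
ps -o pid,rss,pcpu,comm -C glusterfs >> glusterfs-snapshot.txt
cp /var/log/glusterfs/*.log . 2>/dev/null   # log location may differ
---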

regards,
On Mon, Feb 22, 2010 at 1:29 PM, Daniel Maher <dma+gluster@xxxxxxxxx> wrote:
Hello all,

Four-node client-side replication setup running 3.0.0 throughout.
Config generated via:
$ /usr/bin/glusterfs-volgen --name replicated --raid 1 s01:/opt/gluster s02:/opt/gluster
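
(For anyone reproducing this: the clients then mount the generated client volfile, something like the sketch below. The volfile name replicated-tcp.vol is an assumption based on --name; check what volgen actually wrote.)

---
# Hypothetical client mount; the volfile name is an assumption --
# look in /etc/glusterfs/ for the file volgen actually generated.
glusterfs -f /etc/glusterfs/replicated-tcp.vol /opt/gluster
---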

Connected via private gigabit LAN.  The hardware in each of the four
nodes is identical.

Running the following script on one of the clients:

---
#!/bin/bash

loop=0

while [ $loop -lt 100 ]
do
        # Route stderr (where `time` writes its report) into the pipe
        # via fd 3 so that tee captures the timings as well.
        (time /usr/sbin/bonnie++ -d /opt/gluster/bonnie -u root) 3>&1 2>&3 | tee -a bonnie-single-c01.log
        loop=$((loop + 1))  # increment so the loop actually stops at 100
done
---
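
To correlate the slowdown with client-side resource usage, a companion logger along these lines could run next to the benchmark (again, the process name "glusterfs" is an assumption):

---
#!/bin/bash
# Hypothetical companion logger: once a minute, record the glusterfs
# client's resident memory (KB) and the 1-minute load average.
while :
do
        rss=$(ps -o rss= -C glusterfs | head -n 1)
        load=$(cut -d' ' -f1 /proc/loadavg)
        echo "$(date +%s) rss_kb=${rss} load1=${load}" >> gluster-usage-c01.log
        sleep 60
done
---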

[root@c01 ~]# bonnie++ 2>&1 | grep Version
Version: 1.03

The time to completion has been slowly creeping upwards. Granted, the
full run hasn't completed yet, but after 89 iterations a clear trend is
forming:

[root@c01 ~]# grep real bonnie-single-c01.log
real    15m4.444s
real    15m49.245s
real    16m44.263s
real    16m55.868s
real    17m16.479s
real    18m4.800s
real    19m18.089s
real    20m59.218s
real    22m58.493s
real    22m25.134s
real    23m36.715s
real    25m4.022s
real    29m36.527s
real    26m26.325s
real    28m37.396s
real    35m4.289s
real    32m41.834s
real    34m33.418s
real    37m1.098s
real    38m10.750s
real    46m14.002s
real    40m15.416s
real    36m53.414s
real    44m46.302s
real    38m57.842s
real    41m55.745s
real    46m41.493s
real    54m12.818s
real    52m53.408s
real    36m20.212s
real    40m1.991s
real    54m54.110s
real    57m57.162s
real    57m53.896s
real    62m56.905s
real    65m53.524s
real    67m54.105s
real    74m40.195s
real    80m30.843s
real    81m15.571s
real    77m50.950s
real    91m46.081s
real    87m50.726s
real    94m7.863s
real    94m37.050s
real    86m43.104s
real    96m22.384s
real    98m42.054s
real    103m56.266s
real    109m33.518s
real    101m49.557s
real    93m25.883s
real    87m26.209s
real    97m36.759s
real    111m18.548s
real    97m53.249s
real    97m44.629s
real    100m36.912s
real    102m15.858s
real    97m16.631s
real    116m44.927s
real    102m11.010s
real    129m3.242s
real    99m32.759s
real    117m23.075s
real    112m1.927s
real    116m45.873s
real    125m22.549s
real    129m7.776s
real    114m19.009s
real    120m21.973s
real    111m15.333s
real    113m4.147s
real    96m20.848s
real    107m0.043s
real    128m11.131s
real    118m58.662s
real    116m40.381s
real    118m50.392s
real    118m17.404s
real    117m29.695s
real    94m45.270s
real    104m35.469s
real    125m56.206s
real    108m40.728s
real    133m24.056s
real    103m16.188s
real    146m42.629s
real    101m5.182s
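
(A quick one-liner to turn those lines into plain seconds, e.g. for plotting -- a sketch, assuming the "NmS.SSSs" format shown above:)

---
# Convert "real    15m4.444s" lines to seconds, one value per line.
grep real bonnie-single-c01.log | awk '{ split($2, t, "[ms]"); print t[1]*60 + t[2] }'
---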

As a control, I ran the same script with a local filesystem target on
the other client:

[root@c02 ~]# grep real bonnie-local-c02.log
real    3m42.788s
real    3m43.251s
real    3m42.568s
real    3m42.533s
real    3m41.764s
real    3m43.305s
real    3m42.536s
real    3m43.052s
real    3m44.406s
real    3m42.894s
real    3m43.639s
(cut for brevity: all 100 runs completed in roughly this time.)

It is perhaps worth noting that the load on c01 (running the gluster
tests) has been creeping up as well. It started at basically 0, and is
now at 4.03, 3.94, 3.27.

Any ideas what's going on?



--
Daniel Maher <dma+gluster AT witbe DOT net>


_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel



--
Raghavendra G

