successive bonnie++ tests taking longer and longer to run (system load steadily increasing)

Hello all,

Four-node client-side replication setup running GlusterFS 3.0.0 throughout.
The config was generated via:

$ /usr/bin/glusterfs-volgen --name replicated --raid 1 s01:/opt/gluster s02:/opt/gluster

The nodes are connected via a private gigabit LAN, and the hardware in
each of the four nodes is identical.
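
For completeness, the build on each node can be confirmed directly (just a
sanity check; the binary's path may vary by distro):

$ glusterfs --version | head -1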

Running the following script on one of the clients:

---
#!/bin/bash

loop=0

while [ $loop -lt 100 ]
do
	# stderr (where `time` writes) is redirected into the pipe so that
	# tee captures the timings along with bonnie++'s own output
	(time /usr/sbin/bonnie++ -d /opt/gluster/bonnie -u root) 3>&1 2>&3 | tee -a bonnie-single-c01.log
	loop=$((loop+1))
done
---
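
For what it's worth, the file-descriptor juggling is only there to get the
output of time (which goes to stderr) into the pipe; a plain 2>&1 would
have the same effect here:

	(time /usr/sbin/bonnie++ -d /opt/gluster/bonnie -u root) 2>&1 | tee -a bonnie-single-c01.log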

[root@c01 ~]# bonnie++ 2>&1 | grep Version
Version: 1.03

The time to completion has been slowly creeping upwards.  Granted, the 
full run hasn't completed yet, but a clear trend appears to be forming:

[root@c01 ~]# grep real bonnie-single-c01.log
real    15m4.444s
real    15m49.245s
real    16m44.263s
real    16m55.868s
real    17m16.479s
real    18m4.800s
real    19m18.089s
real    20m59.218s
real    22m58.493s
real    22m25.134s
real    23m36.715s
real    25m4.022s
real    29m36.527s
real    26m26.325s
real    28m37.396s
real    35m4.289s
real    32m41.834s
real    34m33.418s
real    37m1.098s
real    38m10.750s
real    46m14.002s
real    40m15.416s
real    36m53.414s
real    44m46.302s
real    38m57.842s
real    41m55.745s
real    46m41.493s
real    54m12.818s
real    52m53.408s
real    36m20.212s
real    40m1.991s
real    54m54.110s
real    57m57.162s
real    57m53.896s
real    62m56.905s
real    65m53.524s
real    67m54.105s

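To put a number on the trend, the times convert readily to seconds (a
quick awk sketch against the log format above):

$ grep real bonnie-single-c01.log | awk '{ split($2, t, /[ms]/); print t[1]*60 + t[2] }'

which shows the most recent run (67m54s, ~4074s) taking roughly four and a
half times as long as the first (15m4s, ~904s).
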
As a control, I ran the same script with a local filesystem target on 
the other client:

[root@c02 ~]# grep real bonnie-local-c02.log
real    3m42.788s
real    3m43.251s
real    3m42.568s
real    3m42.533s
real    3m41.764s
real    3m43.305s
real    3m42.536s
real    3m43.052s
real    3m44.406s
real    3m42.894s
real    3m43.639s
(cut for brevity - all 100 runs are around this time mark.)

It is perhaps worth noting that the load on c01 (running the gluster 
tests) has been creeping up as well.  It started at basically 0, and is 
now at 4.03, 3.94, 3.27.
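
If something client-side is growing (a cache, a leak, etc.), a loop along
these lines run next to the test should show the glusterfs process's
memory climbing in step with the load (a rough sketch; the log file name
is arbitrary):

---
#!/bin/bash

# Once a minute, record the load average alongside the RSS and VSZ of
# the glusterfs client process, to see whether they climb together.
while true
do
	echo "$(date '+%F %T') $(cat /proc/loadavg) $(ps -o rss=,vsz= -C glusterfs)" >> gluster-usage.log
	sleep 60
done
---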

Any ideas what's going on?


-- 
Daniel Maher <dma+gluster AT witbe DOT net>

