Re: clustered afr


 



Could you try 10GB files? My read leaks with read-ahead were triggered part-way through reading files of this size.

I, too, was seeing slow memory increases in the glusterfs client; you might check whether it still happens without stat-prefetch, as I believe it did in my tests.

Thanks,

Brent

On Tue, 13 Mar 2007, Tibor Veres wrote:

The scheduler is an option of the unify translator. As of now you are using
plain AFR across three bricks with 2 copies. To achieve what you want,
you can try the following: create 3 AFR volumes across the 3 bricks in
pairs of two (1-2, 2-3 and 3-1), then unify all three volumes using the
rr scheduler.
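A sketch of what such a spec file might look like, based on the glusterfs 1.x volume-spec syntax (volume names and the client1..client3 bricks are illustrative, not from this message; option names may differ between releases):

```
# three AFR pairs over three bricks, then unify with round-robin
volume afr12
  type cluster/afr
  subvolumes client1 client2
end-volume

volume afr23
  type cluster/afr
  subvolumes client2 client3
end-volume

volume afr31
  type cluster/afr
  subvolumes client3 client1
end-volume

volume unify0
  type cluster/unify
  option scheduler rr
  subvolumes afr12 afr23 afr31
end-volume
```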

I set up the test environment like this, and it works fine. Setting it
up for more nodes/replicas would require writing a script to generate
my config files, but it's doable :)

Anyway, I checked out the latest source (patch-75), recompiled with
-O3, and enabled all the performance translators:
- io-threads in the server config, after the brick and before the
protocol/server volume
- write-behind/read-ahead/stat-prefetch in the client config, just like
in client.vol.sample

I ran some tests to check for memory leaks. I didn't see any leak on
writes or reads, but stat-prefetch is leaking memory:
gluster-cl2:/mnt/gluster/test/lib# for j in `seq 1 4`; do ps uax | grep -v grep | grep glus; for i in `seq 1 100`; do ls -la >/dev/null; done; done
root     29575  0.6  0.1   1916   988 ?        Ss   11:53   0:28 [glusterfsd]
root     29597  2.2  2.7  15396 14324 ?        Ss   11:54   1:40 [glusterfs]

root     29575  0.6  0.1   1916   988 ?        Ss   11:53   0:30 [glusterfsd]
root     29597  2.3  3.2  18196 17108 ?        Ss   11:54   1:45 [glusterfs]

root     29575  0.7  0.1   1916   988 ?        Ss   11:53   0:32 [glusterfsd]
root     29597  2.4  3.8  21236 20104 ?        Ss   11:54   1:51 [glusterfs]

root     29575  0.7  0.1   1916   988 ?        Ss   11:53   0:34 [glusterfsd]
root     29597  2.5  4.4  24292 23168 ?        Ss   11:54   1:56 [glusterfs]
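Rough arithmetic on the RSS column above (a sketch; it assumes ps reports RSS in KB and that each sample is separated by 100 `ls -la` calls, as in the loop):

```python
# RSS samples (KB) of the glusterfs client process, one per outer
# loop iteration, i.e. every 100 `ls -la` invocations
rss_kb = [14324, 17108, 20104, 23168]

# growth between consecutive samples
deltas = [b - a for a, b in zip(rss_kb, rss_kb[1:])]

# average KB leaked per single `ls -la` call
per_ls = sum(deltas) / len(deltas) / 100

print(deltas)         # [2784, 2996, 3064]
print(round(per_ls))  # 29  -> roughly 30 KB leaked per directory listing
```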

Write/read tests and performance. No write memory leak:
root     29575  0.6  0.1   1916   988 ?        Ss   11:53   0:17 [glusterfsd]
root     29597  1.7  2.2  12772 11716 ?        Ss   11:54   0:47 [glusterfs]
gluster-cl2:/mnt/gluster/test# for i in `seq 1 10`; do dd if=/dev/zero of=test_$i bs=10000 count=10000; done
100000000 bytes (100 MB) copied, 2.93355 seconds, 34.1 MB/s
100000000 bytes (100 MB) copied, 3.47932 seconds, 28.7 MB/s
100000000 bytes (100 MB) copied, 3.38781 seconds, 29.5 MB/s
100000000 bytes (100 MB) copied, 7.33118 seconds, 13.6 MB/s
100000000 bytes (100 MB) copied, 3.45481 seconds, 28.9 MB/s
100000000 bytes (100 MB) copied, 9.05226 seconds, 11.0 MB/s
100000000 bytes (100 MB) copied, 9.37289 seconds, 10.7 MB/s
100000000 bytes (100 MB) copied, 3.17613 seconds, 31.5 MB/s
100000000 bytes (100 MB) copied, 6.28839 seconds, 15.9 MB/s
100000000 bytes (100 MB) copied, 6.46795 seconds, 15.5 MB/s
gluster-cl2:/mnt/gluster/test# ps uax |grep glus
root     29575  0.6  0.1   1916   988 ?        Ss   11:53   0:20 [glusterfsd]
root     29597  2.4  2.2  12772 11716 ?        Ss   11:54   1:09 [glusterfs]

No read memory leak:
gluster-cl2:/mnt/gluster/test# for i in `seq 1 10`; do cat test_$i >/dev/null; done
root     29575  0.6  0.1   1916   988 ?        Ss   11:53   0:24 [glusterfsd]
root     29597  2.2  2.3  13100 12060 ?        Ss   11:54   1:27 [glusterfs]
gluster-cl2:/mnt/gluster/test# for i in `seq 1 10`; do cat test_$i >/dev/null; done
root     29575  0.6  0.1   1916   988 ?        Ss   11:53   0:24 [glusterfsd]
root     29597  2.2  2.3  13100 12060 ?        Ss   11:54   1:28 [glusterfs]
gluster-cl2:/mnt/gluster/test# for i in `seq 1 10`; do cat test_$i >/dev/null; done
root     29575  0.6  0.1   1916   988 ?        Ss   11:53   0:25 [glusterfsd]
root     29597  2.2  2.3  13164 12124 ?        Ss   11:54   1:30 [glusterfs]
gluster-cl2:/mnt/gluster/test# for i in `seq 1 10`; do cat test_$i >/dev/null; done
root     29575  0.6  0.1   1916   988 ?        Ss   11:53   0:25 [glusterfsd]
root     29597  2.3  2.3  13100 12060 ?        Ss   11:54   1:31 [glusterfs]

Read performance:
gluster-cl2:/mnt/gluster/test# for i in `seq 1 10`; do time cat test_$i >/dev/null; done
real    0m2.001s
real    0m2.660s
real    0m2.344s
real    0m1.979s
real    0m2.314s
real    0m2.627s
real    0m2.635s
real    0m2.815s
real    0m2.907s
real    0m2.088s
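Converting the wall-clock times above into throughput (a sketch; each file is 100,000,000 bytes, and I use dd's decimal MB as in the write test):

```python
# `real` times (seconds) from the ten `cat test_$i >/dev/null` runs
times_s = [2.001, 2.660, 2.344, 1.979, 2.314,
           2.627, 2.635, 2.815, 2.907, 2.088]
file_mb = 100.0  # 100,000,000 bytes per file, decimal MB

# per-file read rates
rates = [file_mb / t for t in times_s]

print(round(min(rates), 1), round(max(rates), 1))        # 34.4 50.5
print(round(file_mb * len(times_s) / sum(times_s), 1))   # 41.0 MB/s overall
```

So the read path here sustains roughly 34-50 MB/s per file, ~41 MB/s overall, somewhat faster than the write runs above.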



--
Tibor Veres
tibor.veres@xxxxxxxxx


_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxx
http://lists.nongnu.org/mailman/listinfo/gluster-devel




