I have a crash-test cluster where I have been testing the new
version of GlusterFS (v3.7) before upgrading my production HPC
cluster.
But… all my tests show very poor performance.
For my benchmarks, as you can read below, I perform a series of
operations (untar, du, find, tar, rm) on the Linux kernel
sources, dropping caches around each step, on distributed,
replicated, distributed-replicated and single-brick volumes, as
well as on the native FS of one brick.
# time (echo 3 > /proc/sys/vm/drop_caches; tar xJf ~/linux-4.1-rc5.tar.xz; sync; echo 3 > /proc/sys/vm/drop_caches)
# time (echo 3 > /proc/sys/vm/drop_caches; du -sh linux-4.1-rc5/; echo 3 > /proc/sys/vm/drop_caches)
# time (echo 3 > /proc/sys/vm/drop_caches; find linux-4.1-rc5/ | wc -l; echo 3 > /proc/sys/vm/drop_caches)
# time (echo 3 > /proc/sys/vm/drop_caches; tar czf linux-4.1-rc5.tgz linux-4.1-rc5/; echo 3 > /proc/sys/vm/drop_caches)
# time (echo 3 > /proc/sys/vm/drop_caches; rm -rf linux-4.1-rc5.tgz linux-4.1-rc5/; echo 3 > /proc/sys/vm/drop_caches)
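For reference, the loop I run can be sketched as the script below (the `drop_caches` and `bench` helper names are mine, not a Gluster tool; writing to /proc/sys/vm/drop_caches needs root, so it is skipped silently otherwise):

```shell
#!/bin/sh
# Drop the page/dentry/inode caches so each step starts cold.
drop_caches() {
    sync
    { echo 3 > /proc/sys/vm/drop_caches; } 2>/dev/null || true
}

# bench LABEL CMD [ARGS...] -- run CMD between two cache drops and
# print "LABEL: Ns" with the elapsed wall-clock seconds.
bench() {
    label=$1; shift
    drop_caches
    start=$(date +%s)
    "$@" >/dev/null 2>&1
    end=$(date +%s)
    drop_caches
    echo "$label: $((end - start))s"
}

# The five steps from the benchmark above would then be:
#   bench untar tar xJf ~/linux-4.1-rc5.tar.xz
#   bench du    du -sh linux-4.1-rc5/
#   bench find  sh -c 'find linux-4.1-rc5/ | wc -l'
#   bench tar   tar czf linux-4.1-rc5.tgz linux-4.1-rc5/
#   bench rm    rm -rf linux-4.1-rc5.tgz linux-4.1-rc5/
```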
And here are the process times:
---------------------------------------------------------------
|             | UNTAR  | DU    | FIND   | TAR    | RM     |
---------------------------------------------------------------
| single      | ~3m45s | ~43s  | ~47s   | ~3m10s | ~3m15s |
| replicated  | ~5m10s | ~59s  | ~1m6s  | ~1m19s | ~1m49s |
| distributed | ~4m18s | ~41s  | ~57s   | ~2m24s | ~1m38s |
| dist-repl   | ~8m18s | ~1m4s | ~1m11s | ~1m24s | ~2m40s |
| native FS   | ~11s   | ~4s   | ~2s    | ~56s   | ~10s   |
---------------------------------------------------------------
I get the same results whether I use the default configuration
or a custom one.
Looking at the output of the ifstat command, I note that my
write I/O never exceeds 3 MB/s…
The native EXT4 FS seems to be faster than the XFS one, but only
by roughly 15-20%.
My [test] storage cluster is composed of 2 identical servers
(dual-CPU Intel Xeon X5355, 8 GB of RAM, 2x2TB HDD (no RAID)
and Gigabit Ethernet).
My volume settings:
single:
  1 server, 1 brick
replicated:
  2 servers, 1 brick each
distributed:
  2 servers, 2 bricks each
dist-repl:
  2 bricks on the same server, with replica 2
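In other words, the volumes were created roughly as follows (server names "srv1"/"srv2" and brick paths are placeholders for illustration, not my actual layout):

```shell
# single: 1 server, 1 brick
gluster volume create vol-single srv1:/export/sdb/brick

# replicated: 2 servers, 1 brick each
gluster volume create vol-repl replica 2 \
    srv1:/export/sdb/brick srv2:/export/sdb/brick

# distributed: 2 servers, 2 bricks each
gluster volume create vol-dist \
    srv1:/export/sdb/brick srv1:/export/sdc/brick \
    srv2:/export/sdb/brick srv2:/export/sdc/brick

# dist-repl: replica 2 -- Gluster groups consecutive bricks into
# replica sets, so listing two bricks of the same server side by
# side places both copies of a file on that one server.
gluster volume create vol-distrepl replica 2 \
    srv1:/export/sdb/brick srv1:/export/sdc/brick \
    srv2:/export/sdb/brick srv2:/export/sdc/brick
```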
Everything looks OK in the gluster status command output.
Do you have any idea why I get such bad results?
Thanks in advance.
Geoffrey
-----------------------------------------------
Geoffrey Letessier
Responsable informatique & ingénieur système
CNRS - UPR 9080 - Laboratoire de Biochimie Théorique
Institut de Biologie Physico-Chimique
13, rue Pierre et Marie Curie - 75005 Paris
Tel: 01 58 41 50 93 - eMail:
geoffrey.letessier@xxxxxxx