investigate & troubleshoot speed bottleneck(s) - how?

hi guys/gals

I realize that this question must have been asked before; I googled around and found some posts on the web on how to tweak/tune gluster, however..

What I hope is that some experts and/or devs could write a bit more, maybe compose a doc on how to investigate and troubleshoot gluster's speed/performance bottlenecks.

Why do I think such a thorough guide would be important? Well.. I guess many of us look at how a "raw" fs does versus glusterfs and wonder - how come!? I know such a comparison is an oversimplification, maybe even unfair, but when I see such a gigantic performance difference I hope that, with some detective work, it must be possible to unravel whatever bottlenecks hamper glusterfs so badly - almost to the point where one wonders... what is the point.

Today I did just such an oversimplified test: dbench on a raw XFS filesystem on LVM RAID 0 across four SSD PVs (no hardware RAID).

$ dbench -t 60 10
...
8 of 10 processes prepared for launch   0 sec
10 of 10 processes prepared for launch   0 sec
releasing clients
  10     21573  1339.94 MB/sec  warmup   1 sec  latency 35.500 ms
  10     50505  1448.58 MB/sec  warmup   2 sec  latency 10.027 ms
  10     78424  1467.54 MB/sec  warmup   3 sec  latency 8.810 ms
  10    105338  1462.94 MB/sec  warmup   4 sec  latency 19.670 ms
  10    134820  1488.04 MB/sec  warmup   5 sec  latency 11.237 ms
  10    164380  1505.12 MB/sec  warmup   6 sec  latency 4.007 ms
...
Throughput 1662.91 MB/sec  10 clients  10 procs  max_latency=38.879 ms

The cluster hosts 9 volumes, each volume on three peers in replica mode. Most of the time volume utilization is really low; the data is regular office work, single files read at random. The peers are connected via a network switch stack, each peer attaching to the stack via a two-port LACP-bonded 1GbE link with jumbo MTU.
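Before pointing fingers at gluster I figured I should at least rule out the network itself. Something like the below should verify raw TCP throughput between two peers - just a sketch, assuming iperf3 is installed on both ends; PEERHOST is a placeholder for the other peer's hostname:

# on one peer, run the server side
$ iperf3 -s

# on another peer: one stream, then 4 parallel streams
# (LACP hashes per connection, so a single stream only ever uses one 1GbE link)
$ iperf3 -c PEERHOST
$ iperf3 -c PEERHOST -P 4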

So I ran the same dbench on one of the volumes:
...
8 of 10 processes prepared for launch   0 sec
10 of 10 processes prepared for launch   0 sec
releasing clients
  10        98    45.41 MB/sec  warmup   1 sec  latency 113.146 ms
  10       212    41.52 MB/sec  warmup   2 sec  latency 93.800 ms
  10       343    41.23 MB/sec  warmup   3 sec  latency 53.545 ms
  10       468    41.06 MB/sec  warmup   4 sec  latency 54.450 ms
  10       612    41.89 MB/sec  warmup   5 sec  latency 152.659 ms
  10       866    35.99 MB/sec  warmup   6 sec  latency 31.377 ms
  10      1074    32.74 MB/sec  warmup   7 sec  latency 39.923 ms
  10      1307    29.77 MB/sec  warmup   8 sec  latency 42.388 ms
...
Throughput 15.3757 MB/sec  10 clients  10 procs  max_latency=54.371 ms
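If it helps, I can also paste the volume's configuration and per-brick status - these should be the standard gluster CLI commands for that, with VOLNAME as a placeholder:

$ gluster volume info VOLNAME
$ gluster volume status VOLNAME detail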

So yes.. gee.. how can I make my gluster-cluster faster???
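In case it helps anyone point me in the right direction, this is roughly what I was planning to try next to get per-op latency numbers out of gluster itself - a sketch from my reading of the docs, VOLNAME again being a placeholder:

# start collecting per-brick I/O statistics
$ gluster volume profile VOLNAME start

# ... re-run the dbench workload on the mounted volume ...

# dump cumulative stats: call counts plus min/max/avg latency per FOP
$ gluster volume profile VOLNAME info

# per-brick top lists, e.g. read/write throughput
$ gluster volume top VOLNAME read-perf
$ gluster volume top VOLNAME write-perf

# stop profiling when done
$ gluster volume profile VOLNAME stop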

many thanks, L.
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users



