I've made a mistake: we are using 30 Mbit connectivity on all of the nodes. Below is an iperf test between the node and the client:

[root@gfs4 ~]# iperf -c 93.123.32.41
------------------------------------------------------------
Client connecting to 93.123.32.41, TCP port 5001
TCP window size: 23.2 KByte (default)
------------------------------------------------------------
[  3] local 93.123.32.44 port 49838 connected with 93.123.32.41 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.1 sec  49.9 MBytes  41.5 Mbits/sec
[root@gfs4 ~]#

But when copying a 1 GB file onto the client's mounted volume, the speed between the client and the node is only ~500 KB/s.

Yavor Marinov
System Administrator
Neterra Ltd.
Telephone: +359 2 975 16 16
Fax: +359 2 975 34 36
Mobile: +359 888 610 048
www.neterra.net

On 05/23/2013 12:16 PM, Nux! wrote:
> On 23.05.2013 09:41, Yavor Marinov wrote:
>> Thanks for your reply.
>>
>> No matter how many nodes (currently the volume has only its own
>> node), the speed is really slow. For testing purposes, I made a volume
>> with only one node, without any replication; however, the speed is
>> still ~500 KB/s. The cloud servers are limited to 30 Gbit/s, but still
>> the traffic when writing to the node is ~500 KB/s.
>>
>> I'm using glusterfsd 3.3.1 with kernel 2.6.18-348.el5xen, and I need
>> to know whether the problem is within the kernel.
>
> I don't think it is a problem with gluster; I never used el5 for this,
> but I doubt there's an inherent problem with it either. That speed
> limit looks odd to me and I think it's somewhere in your setup.
> Have you done any actual speed tests in the VMs?
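For the kind of speed test Nux! is asking about, comparing a direct write on the server with the same write through the FUSE mount would show whether the bottleneck is the disk, the network path, or Gluster itself. A minimal sketch, assuming the brick's filesystem is mounted under /data on the server and the client mounts the volume at /mnt/gluster; both paths are placeholders to adjust to the actual layout:

# On the server: raw throughput of the filesystem backing the brick.
# Write outside the exported brick directory so Gluster never sees the
# file; conv=fdatasync forces a flush at the end so the reported rate
# reflects real disk speed, not the page cache.
dd if=/dev/zero of=/data/ddtest bs=1M count=256 conv=fdatasync

# On the client: the same 256 MB write, this time through the FUSE mount.
dd if=/dev/zero of=/mnt/gluster/ddtest bs=1M count=256 conv=fdatasync

rm -f /data/ddtest          # cleanup on the server
rm -f /mnt/gluster/ddtest   # cleanup on the client

If the first dd is fast but the second crawls at the same ~500 KB/s, the problem sits between the client and the brick rather than in the el5 kernel or the disk: even a 30 Mbit/s link should sustain roughly 3.5 MB/s, an order of magnitude more than what the copy is getting.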