Re: Finding performance bottlenecks

On 01/05/2018 02:27, Thing wrote:
> Hi,
> 
> So is it KVM or VMware as the host(s)?  I basically have the same
> setup, i.e. 3 x 1TB RAID1 nodes and VMs, but 1Gb networking.  I did
> notice that with VMware using NFS, disk was pretty slow (about 40% of
> a single disk), but that was over 1Gb networking, which was clearly
> saturated.  Hence I am moving to KVM to use GlusterFS, hoping for
> better performance and bonding; it will be interesting to see which
> host type runs faster.

1Gb will always be the bottleneck in that situation - it's going to max
out at the speed of a single disk or lower.  At minimum you need to
bond interfaces, and preferably move to 10Gb.
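
If it helps anyone, bonding on Debian is just a few lines in
/etc/network/interfaces - a minimal sketch, assuming the ifenslave
package is installed, with example interface names and address
(802.3ad also needs LACP support on the switch):

    auto bond0
    iface bond0 inet static
        address 192.168.1.10/24
        bond-slaves eth0 eth1
        bond-mode 802.3ad
        bond-miimon 100

Bear in mind that with 802.3ad a single TCP stream still only uses one
link, so bonding helps with aggregate load from several VMs rather
than with one transfer.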

Our NFS actually ends up faster than local disk, because the aggregate
read speed of the RAID is higher than the read speed of a single local
disk.
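
That's easy to verify with hdparm (device names here are examples -
/dev/md0 for the array, /dev/sda for a single member disk):

    hdparm -t /dev/md0    # buffered sequential reads from the RAID
    hdparm -t /dev/sda    # the same test on one member disk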

> Which operating system is Gluster on?

Debian Linux.  Supermicro motherboards, 24-core i7 with 128GB of RAM on
the VM hosts.

> Did you do iperf between all nodes?

Yes, around 9.7Gb/s
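
For anyone wanting to repeat the test, the usual pattern with iperf3 is
(the address is an example - substitute each node in turn):

    iperf3 -s             # run as server on one node
    iperf3 -c 10.0.0.2    # from another node, measure throughput to it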

It doesn't appear to be raw read speed but iowait.  Under NFS load with
multiple VMs I get an iowait of around 0.3%.  Under Gluster it's never
less than 10%, and glusterfsd is often at the top of the CPU usage.
This causes a load average of ~12, compared to 3 over NFS, and it
absolutely kills VMs, especially Windows ones - one machine I set
booting was still booting 30 minutes later!
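
If anyone wants to compare numbers on their own setup, the standard
tools show all of this (top from procps, the rest from sysstat):

    top            # %wa in the header line is iowait
    iostat -x 5    # per-device utilisation and await, every 5 seconds
    pidstat -d 5   # per-process disk I/O, e.g. glusterfsd's share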

Tony



