Re: [Gluster-users] High CPU Usage - Glusterfsd

Dear Ben,

Very interesting answer of yours on how to find out where the bottleneck is. These commands and parameters (iostat, sar) should perhaps be documented on the Gluster wiki.

I have a question for you: in order to make better use of my CPU cores (6 cores per node), I was wondering whether I should create two bricks per node even if both bricks would point to the same RAID array. What do you think? Would there be a performance gain in my case, and is it recommended? I have read that, ideally, a second brick on the same server should sit on a separate RAID array or HBA controller.

Best regards
ML





On Sunday, February 22, 2015 7:40 PM, Ben England <bengland@xxxxxxxxxx> wrote:
Renchu, 

I didn't see anything about average file size or read/write mix. One way to observe both of these, as well as latency and throughput, is to run the following commands on the server:

# gluster volume profile your-volume start
# gluster volume profile your-volume info > /tmp/dontcare    (discard, to reset the interval counters)
# sleep 60
# gluster volume profile your-volume info > profile-for-last-minute.log

There is also a "gluster volume top" command that may be of use to you in understanding what your users are doing with Gluster.

Also, you may want to run "top -H" and see whether any threads in either glusterfsd or smbd are at or near 100% CPU; if so, you really are hitting a CPU bottleneck. Looking at per-process CPU utilization can be deceptive, since a process may contain multiple threads. "sar -n DEV 2" will show you network utilization, and "iostat -mdx /dev/sd? 2" on your server will show block device queue depth (the latter two tools require the sysstat rpm). Together these can help you understand what kind of bottleneck you are seeing.
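If you want to spot a pinned thread without watching "top -H" interactively, something like the sketch below may help. It assumes a Linux procps "ps"; `hot_threads` is a hypothetical helper name, and it prints nothing if the named process isn't running:

```shell
# hot_threads NAME: show the busiest threads of processes named NAME,
# sorted by per-thread CPU%. A single thread pinned near 100 here means
# a per-thread CPU bottleneck that extra cores alone will not fix.
hot_threads() {
    ps -C "$1" -Lo pcpu,tid,comm --sort=-pcpu | head -10
}

# On your servers you would run e.g. "hot_threads glusterfsd" and
# "hot_threads smbd"; as a self-contained demo, inspect the current shell:
hot_threads "$(ps -o comm= -p $$)"
```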

I don't see how many bricks are in your Gluster volume, but it sounds like you have only one glusterfsd per server. If you have idle cores on your servers, you can harness more CPU power by using multiple bricks per server, which results in multiple glusterfsd processes on each server and allows greater parallelism. For example, you can do this by presenting individual disk drives as bricks rather than RAID volumes.
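As a sketch only (the hostnames, volume name, and brick paths below are made up, and on a two-node replica the order in which bricks are listed decides which ones pair up), a two-bricks-per-server layout might look like:

```shell
# Hypothetical 2-node replica volume with two bricks per server instead
# of one. Each brick gets its own glusterfsd process, so the volume can
# put more cores to work. Bricks are ordered so replica pairs span servers.
gluster volume create myvol replica 2 \
    server1:/bricks/b1 server2:/bricks/b1 \
    server1:/bricks/b2 server2:/bricks/b2
gluster volume start myvol
```

These commands need a running Gluster cluster, so treat them as an illustration of the layout rather than something to paste as-is.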

Let us know if these suggestions helped

-ben england

----- Original Message -----
> From: "Renchu Mathew" <renchu@xxxxxxxxxxxxx>
> To: gluster-users@xxxxxxxxxxx
> Cc: gluster-devel@xxxxxxxxxxx
> Sent: Sunday, February 22, 2015 7:09:09 AM
> Subject:  High CPU Usage - Glusterfsd
> 
> Dear all,
> 
> I have implemented glusterfs storage on my company – 2 servers with
> replicate. But glustherfsd shows more than 100% CPU utilization most of the
> time. So it is so slow to access the gluster volume. My setup is two
> glusterfs servers with replication. The gluster volume (almost 10TB of data)
> is mounted on another server (glusterfs native client) and using samba share
> for the network users to access those files. Is there any way to reduce the
> processor usage on these servers? Please give a solution ASAP since the
> users are complaining about the poor performance. I am using glusterfs
> version 3.6.
> 
> Regards
> 
> Renchu Mathew | Sr. IT Administrator
> 
> CRACKNELL DUBAI | P.O. Box 66231 | United Arab Emirates | T +971 4 3445417 |
> F +971 4 3493675 | M +971 50 7386484
> 
> ABU DHABI | DUBAI | LONDON | MUSCAT | DOHA | JEDDAH
> 
> EMAIL renchu@xxxxxxxxxxxxx | WEB www.cracknell.com
> 
> This email, its content and any files transmitted with it are intended solely
> for the addressee(s) and may be legally privileged and/or confidential. If
> you are not the intended recipient please let us know by email reply and
> delete it from the system. Please note that any views or opinions presented
> in this email do not necessarily represent those of the company. Email
> transmissions cannot be guaranteed to be secure or error-free as information
> could be intercepted, corrupted, lost, destroyed, arrive late or incomplete,
> or contain viruses. The company therefore does not accept liability for any
> errors or omissions in the contents of this message which arise as a result
> of email transmission.
> 
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@xxxxxxxxxxx
> http://www.gluster.org/mailman/listinfo/gluster-devel

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users




