On Thu, May 18, 2017 at 6:39 PM, Joe Julian <joe@xxxxxxxxxxxxxxxx> wrote:
On the other hand, tracking that stat between versions with a known test sequence may be valuable for watching for performance issues or improvements.
+1
Once we have the nightly build setup, we should take this up. We can do it after branch-out, to allow time for fixing major issues, if any.
-Amar
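For illustration only, a minimal sketch of that idea, assuming a nightly job exists; the workload script, history file, and the 2x threshold below are hypothetical placeholders. It records the context-switch rate observed while a fixed test sequence runs and flags a large jump against the previous nightly number (reading the cumulative "ctxt" counter from /proc/stat, which is where vmstat's "cs" figure comes from):

#!/usr/bin/env python3
"""Minimal sketch: record the context-switch rate seen while a fixed
workload runs and flag large regressions against the previous nightly run.
The workload command, history file, and threshold are hypothetical."""

import subprocess
import time
from pathlib import Path

HISTORY = Path("ctxt-history.txt")             # hypothetical per-run history file
WORKLOAD = ["bash", "fixed-test-sequence.sh"]  # hypothetical known test sequence

def total_ctxt() -> int:
    """Total context switches since boot, from the "ctxt" line in /proc/stat."""
    for line in Path("/proc/stat").read_text().splitlines():
        if line.startswith("ctxt "):
            return int(line.split()[1])
    raise RuntimeError("no ctxt line in /proc/stat")

before, t0 = total_ctxt(), time.time()
subprocess.run(WORKLOAD, check=True)
rate = (total_ctxt() - before) / (time.time() - t0)

previous = None
if HISTORY.exists():
    values = HISTORY.read_text().split()
    previous = float(values[-1]) if values else None

with HISTORY.open("a") as f:
    f.write(f"{rate:.0f}\n")

print(f"context switches/sec during workload: {rate:.0f}")
if previous and rate > 2 * previous:
    print(f"WARNING: more than double the previous run ({previous:.0f}/s)")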
On May 17, 2017 10:03:28 PM PDT, Ravishankar N <ravishankar@xxxxxxxxxx> wrote:
On 05/17/2017 11:07 PM, Pranith Kumar Karampuri wrote:
Okay, that could be due to the syscalls themselves, or to pre-emptive multitasking if there aren't enough CPU cores. I think the spike in numbers is due to more users accessing the files at the same time, as you observed, translating into more syscalls. You can try capturing the gluster volume profile info the next time it occurs and correlate it with the cs count. If you don't see any negative performance impact, I don't think you need to be bothered much by the numbers.
+ gluster-devel
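As a rough illustration of the profiling suggestion above (not an official tool): assuming profiling has already been started with "gluster volume profile <volname> start", a small sampler could capture the profile output together with the system-wide context-switch delta so the two can be correlated later. The volume name, interval, and output directory below are placeholders.

#!/usr/bin/env python3
"""Rough sketch: sample the system-wide context-switch counter alongside
"gluster volume profile <vol> info" so the two can be correlated later.
Volume name, interval, and output directory are placeholders."""

import subprocess
import time
from pathlib import Path

VOLUME = "myvol"                  # placeholder volume name
INTERVAL = 60                     # seconds between samples
OUTDIR = Path("profile-samples")  # placeholder output directory
OUTDIR.mkdir(exist_ok=True)

def total_ctxt() -> int:
    """Cumulative context switches since boot, from /proc/stat."""
    for line in Path("/proc/stat").read_text().splitlines():
        if line.startswith("ctxt "):
            return int(line.split()[1])
    raise RuntimeError("no ctxt line in /proc/stat")

# Profiling must already be running, e.g.: gluster volume profile myvol start
prev = total_ctxt()
while True:
    time.sleep(INTERVAL)
    now = total_ctxt()
    stamp = time.strftime("%Y%m%d-%H%M%S")
    profile = subprocess.run(
        ["gluster", "volume", "profile", VOLUME, "info"],
        capture_output=True, text=True, check=True,
    ).stdout
    (OUTDIR / f"{stamp}.txt").write_text(
        f"context switches in last {INTERVAL}s: {now - prev}\n\n{profile}"
    )
    prev = now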
On Wed, May 17, 2017 at 10:50 PM, mabi <mabi@xxxxxxxxxxxxx> wrote:
I don't know exactly what kind of context switches they were, but what I do know is that it is the "cs" number under "system" when you run vmstat.
HTH,
Ravi
Also, I use the Percona Linux monitoring template for cacti (https://www.percona.com/doc/percona-monitoring-plugins/LATEST/cacti/linux-templates.html), which monitors context switches too. If that is of any use, interrupts were also quite high during that time, with peaks of up to 50k interrupts.
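For reference, vmstat's "cs" and "in" columns are (as far as I know) derived from the cumulative "ctxt" and "intr" counters in /proc/stat. A toy sampler that prints the same per-second rates, nothing to do with the Percona/cacti template itself, might look like this:

#!/usr/bin/env python3
"""Toy sampler: print per-second context-switch and interrupt rates,
derived from the cumulative "ctxt" and "intr" counters in /proc/stat
(the same sources vmstat's "cs" and "in" columns are based on)."""

import time
from pathlib import Path

def counters():
    """Return (context switches, interrupts) totals since boot."""
    ctxt = intr = 0
    for line in Path("/proc/stat").read_text().splitlines():
        fields = line.split()
        if fields[0] == "ctxt":
            ctxt = int(fields[1])
        elif fields[0] == "intr":
            intr = int(fields[1])  # first value is the total interrupt count
    return ctxt, intr

prev = counters()
while True:
    time.sleep(1)
    cur = counters()
    print(f"cs/s: {cur[0] - prev[0]:>8}   in/s: {cur[1] - prev[1]:>8}")
    prev = cur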
-------- Original Message --------
Subject: Re: 120k context switches on GlusterFS nodes
Local Time: May 17, 2017 2:37 AM
UTC Time: May 17, 2017 12:37 AM
From: ravishankar@xxxxxxxxxx
On 05/16/2017 11:13 PM, mabi wrote:
Today I even saw up to 400k context switches for around 30 minutes on my two-node replica... Does anyone else see such high context switches on their GlusterFS nodes?
I am wondering what is "normal" and if I should be worried...
-------- Original Message --------
Subject: 120k context switches on GlusterFS nodes
Local Time: May 11, 2017 9:18 PM
UTC Time: May 11, 2017 7:18 PM
From: mabi@xxxxxxxxxxxxx
To: Gluster Users <gluster-users@xxxxxxxxxxx>
Hi,
Today I noticed that for around 50 minutes my two GlusterFS 3.8.11 nodes had a very high number of context switches, around 120k. Usually the average is more like 1k-2k. So I checked what was happening, and there were simply more users accessing (downloading) their files at the same time. These are directories with typical cloud files, meaning files of all sizes ranging from a few kB to several MB, and of course a lot of them.
I have never seen such a high number of context switches, so I wanted to ask if this is normal or to be expected. I do not find any signs of errors or warnings in any log files.
What context switches are you referring to (syscall context switches on the bricks?), and how did you measure them?
-Ravi
My volume is a replicated volume on two nodes with ZFS as the underlying filesystem, and the volume is mounted using FUSE on the client (the cloud server). On that cloud server the glusterfs process was using quite a lot of system CPU, but that server (a VM) only has 2 vCPUs, so maybe I should increase the number of vCPUs...
Any ideas or recommendations?
Regards,
M.
Pranith
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-devel
--
Amar Tumballi (amarts)