Re: Lots of connections on clients - appropriate values for various thread parameters


 



What does the per-thread CPU usage look like on these clients? With highly concurrent workloads we've seen the single thread that reads requests from /dev/fuse (the fuse reader thread) become a bottleneck. It would be good to know what the CPU usage of that thread looks like (you can use top -H).
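A quick way to check, assuming the mount runs as the usual glusterfs
client process (the PID below is a placeholder, and the exact thread
name can differ between versions):

    # find the glusterfs client process for the mount
    pgrep -af glusterfs

    # show live per-thread CPU usage for that process;
    # look for the thread that reads from /dev/fuse
    top -H -p <PID>

If that one thread sits close to 100% of a single core while the other
threads are mostly idle, it is the bottleneck.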

On Mon, Mar 4, 2019 at 3:39 PM Hu Bert <revirii@xxxxxxxxxxxxxx> wrote:
Good morning,

we use gluster v5.3 (replicate across 3 servers, 2 volumes, RAID10 as
bricks) with currently 10 clients; 3 of them do heavy I/O
(Apache Tomcats, reading and writing (small) images). These 3
clients show quite a high I/O wait (stats from yesterday), as can be
seen here:

client: https://abload.de/img/client1-cpu-dayulkza.png
server: https://abload.de/img/server1-cpu-dayayjdq.png

The iowait in the graphs differs a lot. I checked netstat on the
different clients; the other clients have 8 open connections each:
https://pastebin.com/bSN5fXwc

4 for each server and each volume. The 3 clients with the heavy I/O
currently have, according to netstat, 170, 139 and 153
connections. An example from one of these clients can be found here:
https://pastebin.com/2zfWXASZ
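(For the counts above I simply counted the established TCP connections
owned by the fuse client process, roughly like this; the exact command
isn't important:)

    # count TCP connections belonging to the glusterfs client
    netstat -ntp 2>/dev/null | grep -c glusterfs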

gluster volume info: https://pastebin.com/13LXPhmd
gluster volume status: https://pastebin.com/cYFnWjUJ

I was just wondering whether the iowait comes from the clients and
their workload: they request a lot of files (up to hundreds per
second) and open a lot of connections, and the servers can't answer
fast enough. Maybe something can be tuned here?

In particular the server|client.event-threads options (both set to 4),
performance.(high|normal|low|least)-prio-threads (all at the default
value of 16) and performance.io-thread-count (32); maybe these aren't
configured properly for up to 170 client connections.
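For reference, this is how I check the current values and how I would
tentatively raise them; <volname> is just a placeholder and the numbers
are only a guess on my side, not something I've verified:

    # show the current thread-related settings for a volume
    gluster volume get <volname> all | grep -E 'event-threads|io-thread-count|prio-threads'

    # possible candidates to raise, e.g.:
    gluster volume set <volname> client.event-threads 8
    gluster volume set <volname> server.event-threads 8
    gluster volume set <volname> performance.io-thread-count 64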

Both servers and clients have a Xeon CPU (6 cores, 12 threads), a 10
GBit connection and 128 GB (servers) or 256 GB (clients) of RAM.
Enough power :-)


Thx for reading && best regards,

Hubert
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
