On 2020-11-27 06:53, Dmitry Antipov wrote:
Thanks, it seems you're right. Running a local replica 3 volume on 3x1GB
ramdisks, I'm seeing:
top - 08:44:35 up 1 day, 11:51,  1 user,  load average: 2.34, 1.94, 1.00
Tasks: 237 total,   2 running, 235 sleeping,   0 stopped,   0 zombie
%Cpu(s): 38.7 us, 29.4 sy,  0.0 ni, 23.6 id,  0.0 wa,  0.4 hi,  7.9 si,  0.0 st
MiB Mem :  15889.8 total,   1085.7 free,   1986.3 used,  12817.8 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.  12307.3 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
63651 root      20   0  664124  41676   9600 R 166.7  0.3   0:24.20 fio
63282 root      20   0 1235336  21484   8768 S 120.4  0.1   2:43.73 glusterfsd
63298 root      20   0 1235368  20512   8856 S 120.0  0.1   2:42.43 glusterfsd
63314 root      20   0 1236392  21396   8684 S 119.8  0.1   2:41.94 glusterfsd
So, a 32-core server-class system with a lot of RAM can't perform much
faster for an individual I/O client; it just scales better when there
are many clients, right?
Yes, it should scale with additional clients and bricks.
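To see the aggregate side of that on a single box, you can approximate
multiple clients by raising fio's job count (again, parameters are
illustrative):

# approximate several concurrent clients with parallel fio jobs;
# per-job throughput stays roughly flat, but the aggregate grows
# until the brick processes (glusterfsd) saturate the CPU
fio --name=multi --directory=/mnt/testvol --rw=randwrite \
    --bs=4k --size=512m --numjobs=8 --group_reporting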
As a side note, this high-CPU, (relatively) low-performance result was
the reason I abandoned the idea of using a 3-way replicated Gluster
volume as the backing store for a hyperconverged KVM setup (with the VMs
running on the same Gluster hosts): while adequate for "normal" VMs, it
would not fit the bill for high-performance guests.
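For context, in such a hyperconverged setup the VM images would live
directly on the Gluster volume, either through the FUSE mount or via
libgfapi, e.g. (names illustrative; requires qemu built with gluster
support):

# create a VM disk image directly on the volume via libgfapi,
# bypassing the FUSE layer
qemu-img create -f qcow2 gluster://myhost/testvol/vm01.qcow2 20G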
Increasing the number of bricks/clients would ameliorate the situation,
but that quickly turns into a "rack full of Gluster servers" setup
(which is not compatible with my customers' requests).
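For completeness, growing a volume that way means adding bricks in
multiples of the replica count, which turns it into a
distributed-replicated volume (hostnames illustrative):

# add another replica set; the volume becomes distribute-replicate,
# spreading files (and load) across the two sets
gluster volume add-brick testvol \
    host4:/bricks/b1 host5:/bricks/b1 host6:/bricks/b1
gluster volume rebalance testvol start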
If anyone has some suggestions, I am all ears!
Regards.
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@xxxxxxxxxx - info@xxxxxxxxxx
GPG public key ID: FF5F32A8