Re: Poor performance on a server-class system vs. desktop

Silly question to all though -

Akin to the problems that Linus Tech Tips experienced with ZFS and a multi-disk NVMe SSD array -- is GlusterFS written with how NVMe SSDs operate in mind?

(i.e. might the code itself block on synchronous commands, waiting for each one to finish before issuing the next?)

cf. https://forum.level1techs.com/t/fixing-slow-nvme-raid-performance-on-epyc/151909

I'm not a programmer or a developer, so I don't really understand the software internals, but I am wondering whether GlusterFS might have the same issue with NVMe storage devices that ZFS did: the underlying code/system was written with mechanically rotating disks in mind, or at best SATA 3.0 6 Gbps SSDs, rather than NVMe SSDs.

Could this be a possible cause here as well, by analogy?
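To illustrate what "waiting for synchronous commands" means in practice, here is a minimal, hypothetical sketch (not GlusterFS code) of the two I/O submission patterns. The first issues one write at a time and waits for the device after each, which is roughly how HDD-era code often behaves (effective queue depth 1); the second keeps many writes in flight, which is closer to how NVMe devices need to be driven to reach their rated IOPS. Block size, counts, and worker numbers are arbitrary choices for illustration:

```python
# Hypothetical sketch: queue-depth-1 vs. many-in-flight write submission.
import concurrent.futures
import os
import tempfile

BLOCK = b"\0" * 4096  # 4K blocks, as in random-write IOPS benchmarks
N = 256               # arbitrary number of blocks for the sketch

def write_sync(path):
    """Queue depth 1: each write waits for the previous one to hit the device."""
    with open(path, "wb") as f:
        for _ in range(N):
            f.write(BLOCK)
            f.flush()
            os.fsync(f.fileno())  # block until the device acknowledges

def write_parallel(path):
    """Many writes in flight at once, each to its own offset via pwrite."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        with concurrent.futures.ThreadPoolExecutor(max_workers=16) as ex:
            futures = [ex.submit(os.pwrite, fd, BLOCK, i * 4096)
                       for i in range(N)]
            for fut in futures:
                fut.result()  # collect results (and any errors)
    finally:
        os.close(fd)

tmp = tempfile.mkdtemp()
write_sync(os.path.join(tmp, "sync.bin"))
write_parallel(os.path.join(tmp, "par.bin"))
```

Both functions produce the same 1 MiB file, but on an NVMe device the first pattern leaves almost all of the drive's internal parallelism idle, which is the kind of mismatch the Level1Techs thread describes.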




From: gluster-users-bounces@xxxxxxxxxxx <gluster-users-bounces@xxxxxxxxxxx> on behalf of Dmitry Antipov <dmantipov@xxxxxxxxx>
Sent: November 26, 2020 8:36 AM
To: gluster-users@xxxxxxxxxxx List <gluster-users@xxxxxxxxxxx>
Subject: Re: Poor performance on a server-class system vs. desktop
 
To whom it may be interesting, this paper says that ~80K IOPS (4K random writes) is real:

https://archive.fosdem.org/2018/schedule/event/optimizing_sds/attachments/slides/2300/export/events/attachments/optimizing_sds/slides/2300/GlusterOnNVMe_FOSDEM2018.pdf
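For anyone trying to reproduce that figure, the workload in the slides is 4K random writes; a fio job along these lines should approximate it (the queue depth, job count, size, and mount path below are my assumptions, not values taken from the slides):

```ini
; hypothetical fio job approximating a 4K random-write IOPS test
[global]
ioengine=libaio
direct=1
rw=randwrite
bs=4k
iodepth=32        ; assumption: keep many I/Os in flight for NVMe
numjobs=4         ; assumption
size=4g
group_reporting

[gluster-randwrite]
directory=/mnt/glustervol   ; replace with your Gluster mount point
```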

On the same class of server hardware, following their tuning recommendations, etc., I run about 8 times slower.
So it seems that RH insiders are the only people who know how to set up a real GlusterFS installation properly :(.

Dmitry
________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
