On 21-01-2020 18:40 Jeff Brown wrote:
Looking at your setup, you have two disk drives satisfying your random read and write requests. You're mirroring for redundancy and striping across the mirrors, so each read transaction will logically be served by two disks. As I recall, this class of device delivers roughly 120 random small-block IOPS, so I think you're getting what you should expect. Am I missing something in your configuration here?
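The back-of-the-envelope arithmetic behind that expectation can be sketched as follows. This assumes a generic striped-mirror (RAID10-style) layout and the ~120 IOPS/disk figure quoted above; the constants are illustrative, not measurements from this setup.

```python
DISK_IOPS = 120          # assumed: typical 7.2k RPM disk, random small IOs
MIRRORS = 2              # assumed: striping across two mirrored pairs
DISKS_PER_MIRROR = 2

# A random read can be served by any single disk in any mirror,
# so all disks contribute.
read_iops = DISK_IOPS * MIRRORS * DISKS_PER_MIRROR

# A write must land on every disk of one mirror before it completes,
# so only the stripe width multiplies write throughput.
write_iops = DISK_IOPS * MIRRORS

print(read_iops, write_iops)   # 480 240
```

Under these assumptions, ~240 synchronous write IOPS would indeed be in line with the ~250 observed, which is the point being made here.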
Hi Jeff, you are missing the fact that both bricks (one per machine, replica 2) were located in /dev/shm. In other words, they are really backed by memory rather than disks. Hence my surprise at seeing only 250 IOPS with 4k fsynced writes. Bypassing any networking (with both bricks located on the *same* server, again under /dev/shm) produced only 500 IOPS.
Is gluster really that slow for small fsynced writes, or am I doing it wrong?
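For reference, the kind of measurement being discussed (4k writes, each followed by fsync) can be reproduced with a small sketch like the one below. This is a minimal stand-in for a proper benchmark such as fio, not the exact tool used in this thread; the file path is whatever mount point you want to test.

```python
import os
import time

def fsync_write_iops(path, block=4096, seconds=1.0):
    """Measure synchronous write IOPS: each 4 KiB write is
    immediately followed by fsync(), as in the test above."""
    buf = b"\0" * block
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o600)
    ops = 0
    start = time.monotonic()
    try:
        while time.monotonic() - start < seconds:
            os.write(fd, buf)
            os.fsync(fd)   # force the write to stable storage
            ops += 1
    finally:
        elapsed = time.monotonic() - start
        os.close(fd)
    return ops / elapsed
```

Pointing this at a file on the Gluster mount versus a file directly under /dev/shm makes the overhead being questioned here directly visible.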
Thanks.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@xxxxxxxxxx - info@xxxxxxxxxx
GPG public key ID: FF5F32A8
________

Community Meeting Calendar:
APAC Schedule - Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968
NA/EMEA Schedule - Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users