On Thu, Mar 31, 2022 at 11:22:03PM +0000, Michael Marod wrote:
> # /usr/local/bin/fio -name=randrw -filename=/opt/foo -direct=1 -iodepth=1 -thread -rw=randrw -ioengine=psync -bs=4k -size=10G -numjobs=16 -group_reporting=1 -runtime=120
>
> // Ubuntu 16.04 / Linux 4.4.0:
> Run status group 0 (all jobs):
>    READ: bw=54.5MiB/s (57.1MB/s), 54.5MiB/s-54.5MiB/s (57.1MB/s-57.1MB/s), io=6537MiB (6854MB), run=120002-120002msec
>   WRITE: bw=54.5MiB/s (57.2MB/s), 54.5MiB/s-54.5MiB/s (57.2MB/s-57.2MB/s), io=6544MiB (6862MB), run=120002-120002msec
>
> // Ubuntu 18.04 / Linux 5.4.0:
> Run status group 0 (all jobs):
>    READ: bw=23.5MiB/s (24.7MB/s), 23.5MiB/s-23.5MiB/s (24.7MB/s-24.7MB/s), io=2821MiB (2959MB), run=120002-120002msec
>   WRITE: bw=23.5MiB/s (24.6MB/s), 23.5MiB/s-23.5MiB/s (24.6MB/s-24.6MB/s), io=2819MiB (2955MB), run=120002-120002msec
>
> // Ubuntu 18.04 / Linux 5.17:
> Run status group 0 (all jobs):
>    READ: bw=244MiB/s (255MB/s), 244MiB/s-244MiB/s (255MB/s-255MB/s), io=28.6GiB (30.7GB), run=120001-120001msec
>   WRITE: bw=244MiB/s (256MB/s), 244MiB/s-244MiB/s (256MB/s-256MB/s), io=28.6GiB (30.7GB), run=120001-120001msec

Thanks for the info. I don't know of anything block or nvme specific that
might explain an order of magnitude perf difference. Could you try the same
test without the filesystems? You mentioned using mdraid, so try
'--filename=/dev/mdX'. If that also shows a similar performance difference,
try using one of your nvme member drives directly, like
'--filename=/dev/nvme1n1'. That should isolate which subsystem is
contributing to the difference.
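
For example, something like your original job with only the target changed
should work ('/dev/md0' here is just a stand-in for whatever your md device
is actually called, and note the write half of randrw will clobber data on
the raw devices):

  /usr/local/bin/fio -name=randrw -filename=/dev/md0 -direct=1 -iodepth=1 -thread -rw=randrw -ioengine=psync -bs=4k -size=10G -numjobs=16 -group_reporting=1 -runtime=120
  /usr/local/bin/fio -name=randrw -filename=/dev/nvme1n1 -direct=1 -iodepth=1 -thread -rw=randrw -ioengine=psync -bs=4k -size=10G -numjobs=16 -group_reporting=1 -runtime=120

Running both of those on the old and new kernels and comparing against your
filesystem numbers should tell us whether the regression is above md, in md,
or down in the nvme/block layer.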