Re: [RFC PATCH 00/11] Rust null block driver

Hi Niklas,

Niklas Cassel <Niklas.Cassel@xxxxxxx> writes:

> On Wed, May 03, 2023 at 11:06:57AM +0200, Andreas Hindborg wrote:
>> From: Andreas Hindborg <a.hindborg@xxxxxxxxxxx>
>> 
>
> (cut)
>
>> 
>> For each measurement the drivers are loaded, a drive is configured with memory
>> backing and a size of 4 GiB. C null_blk is configured to match the implemented
>> modes of the Rust driver: `blocksize` is set to 4 KiB, `completion_nsec` to 0,
>> `irqmode` to 0 (IRQ_NONE), `queue_mode` to 2 (MQ), `hw_queue_depth` to 256 and
>> `memory_backed` to 1. For both the drivers, the queue scheduler is set to
>> `none`. These measurements are made using 30 second runs of `fio` with the
>> `PSYNC` IO engine with workers pinned to separate CPU cores. The measurements
>> are done inside a virtual machine (qemu/kvm) on an Intel Alder Lake workstation
>> (i5-12600).
>
> Hello Andreas,
>
> I'm curious why you used psync ioengine for the benchmarks.
>
> As psync is a sync ioengine, it means queue depth == 1.
>
> Wouldn't it have been more interesting to see an async ioengine,
> together with different queue depths?

That would also be interesting. I was a bit constrained on CPU cycles,
so I had to choose. I intend to produce the numbers you ask for. For
now, here are two runs of random read using io_uring at queue depth 128
(same table style):


For iodepth_batch_submit=1, iodepth_batch_complete=1:
+---------+----------+---------------------+---------------------+
| jobs/bs | workload |          1          |          6          |
+---------+----------+---------------------+---------------------+
|    4k   | randread | 2.97 0.00 (0.9,0.0) | 4.06 0.00 (1.8,0.0) |
+---------+----------+---------------------+---------------------+

For iodepth_batch_submit=16, iodepth_batch_complete=16:
+---------+----------+---------------------+---------------------+
| jobs/bs | workload |          1          |          6          |
+---------+----------+---------------------+---------------------+
|    4k   | randread | 4.40 0.00 (1.1,0.0) | 4.87 0.00 (1.8,0.0) |
+---------+----------+---------------------+---------------------+

The numbers above are from 60 second runs on bare metal, so they are
not directly comparable to the ones in the cover letter.
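
For reference, the fio invocation for those runs looks roughly like the
sketch below. It is reconstructed from the parameters above, not the
exact script I used; the device path and the CPU pinning options are
assumptions.

  # Sketch of the io_uring runs above (reconstructed, not the exact script).
  # /dev/nullb0 and the cpus_allowed settings are assumptions.
  fio --name=randread-qd128 \
      --filename=/dev/nullb0 --direct=1 \
      --ioengine=io_uring --rw=randread --bs=4k \
      --iodepth=128 \
      --iodepth_batch_submit=16 --iodepth_batch_complete=16 \
      --numjobs=6 --cpus_allowed=0-5 --cpus_allowed_policy=split \
      --group_reporting --time_based --runtime=60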

> You might want to explain your table a bit more.

I understand that the table can be difficult to read. It is not easy to
convey all this information in an ASCII email. The numbers in
parentheses in the cells _are_ IOPS x 10^6 (read,write). Referring to
the second table above, for 1 job at 4k bs the Rust driver performed
4.4 percent more IOPS than the C driver, and the C driver did 1.1M
IOPS. I hope this clarifies the table; otherwise let me know!

> It might be nice to see IOPS and average latencies.

I did collect latency info as well, including completion latency
percentiles. It is just difficult to fit all that data in an email. I
have the fio json output; let me know if you want it and I will find a
way to get it to you. I am considering setting up some kind of CI that
publishes the performance results online automatically, so that I can
share a link instead of an inline table.
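
As a rough sketch, the headline numbers can be pulled out of that JSON
with jq along the lines below. The field names assume a recent fio 3.x,
where completion latency is reported in nanoseconds under clat_ns, and
the exact percentile keys can differ between fio builds.

  # Sketch: per-job read IOPS, mean and p99 completion latency (in us) from
  # fio JSON output, e.g. produced with --output-format=json --output=result.json.
  # Assumes fio 3.x field names (clat_ns, percentile keys like "99.000000").
  jq -r '.jobs[] |
         [ .jobname,
           (.read.iops | floor),
           (.read.clat_ns.mean / 1000 | floor),
           (.read.clat_ns.percentile."99.000000" / 1000 | floor) ] | @tsv' \
     result.json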

>
> As an example of a table that I find easier to interpret,
> see e.g. the table on page 29 in the SPDK performance report:
> https://ci.spdk.io/download/performance-reports/SPDK_nvme_bdev_perf_report_2301.pdf

Thanks for the input; I will be sure to reference that next time. Just
for clarity, as you mentioned, there is only one queue depth in play
for the numbers in the cover letter, since psync is a synchronous
engine.
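
For completeness, the C null_blk configuration quoted at the top
corresponds roughly to the configfs setup below. It is a sketch rather
than the exact script I used; the nullb name and the unit of the
configfs `size` attribute (MB) are assumptions based on the null_blk
documentation.

  # Rough configfs equivalent of the C null_blk setup from the cover letter
  # (a sketch; the nullb name and the size unit (MB) are assumptions).
  modprobe null_blk nr_devices=0
  mkdir /sys/kernel/config/nullb/nullb0
  cd /sys/kernel/config/nullb/nullb0
  echo 4096 > size            # 4 GiB (size is in MB)
  echo 4096 > blocksize       # 4 KiB
  echo 0 > completion_nsec
  echo 0 > irqmode            # IRQ_NONE
  echo 2 > queue_mode         # MQ
  echo 256 > hw_queue_depth
  echo 1 > memory_backed
  echo 1 > power              # creates /dev/nullb0
  echo none > /sys/block/nullb0/queue/scheduler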

Best regards
Andreas


