On 3/27/20 3:58 PM, Keith Busch wrote:
> On 3/27/20 3:25 PM, Jens Axboe wrote:
>> On 3/27/20 1:01 PM, Alexey Dobriyan wrote:
>>> On Fri, Mar 27, 2020 at 08:47:08AM -0600, Jens Axboe wrote:
>>>> On 3/27/20 8:25 AM, Keith Busch wrote:
>>>>> I'm not completely against fio providing an nvme ioengine, and libnvme
>>>>> could easily fit in, but I don't think that faithfully represents how
>>>>> these devices are actually used. The passthrough interface is not really
>>>>> our fast path, and being a synchronous interface is a bit limiting in
>>>>> testing device capabilities.
>>>>
>>>> I guess my main question is what purpose it fills. Since it's not
>>>> a performant interface, it's not a benchmarking thing.
>>>> Hence it's for testing the feature? If so, would it be better to have in
>>>> nvme-cli or a standalone tool?
>>>
>>> This engine can easily create QD=NR_CPUS, it is not much but it is something.
>>
>> Sure, just like any other sync engine can also be somewhat parallel if
>> you just run multiple threads/processes on it. I just don't want people
>> to get the idea that it's something that's useful for benchmarking. And
>> if it isn't, then what is the point?
>>
>> As I said, I'm not vehemently opposed to the idea of adding the engine,
>> because it is just a simple wrapper around the submit ioctl. But it'd be
>> nice to have some clear purpose behind it, justifying the inclusion.
>
> I think this is the nvme equivalent to the 'sg' ioengine using the
> SG_IO ioctl. I don't think SG_IO is a good benchmarking interface
> either, but if it's useful for something I'm not considering, then I
> guess 'nvme' should have a similar justification.

True, and at least this one is easier to maintain, whereas the async sg
engine is a lot more complicated. My main worry is, as usual, that taking
on a maintenance burden needs a clear justification to go with it.

--
Jens Axboe
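
For readers following the thread: below is a minimal sketch of what "a
simple wrapper around the submit ioctl" amounts to, a synchronous read
issued through NVME_IOCTL_SUBMIT_IO on an NVMe namespace block device.
It is not the fio engine's actual code; the device path, starting LBA,
and 4 KiB buffer size are placeholder assumptions.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/nvme_ioctl.h>

int main(void)
{
	struct nvme_user_io io;
	void *buf;
	int fd;

	/* Placeholder namespace block device; adjust for the system under test. */
	fd = open("/dev/nvme0n1", O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* 4 KiB aligned buffer; assumes the LBA format is 4 KiB or smaller. */
	if (posix_memalign(&buf, 4096, 4096)) {
		close(fd);
		return 1;
	}

	memset(&io, 0, sizeof(io));
	io.opcode  = 0x02;		/* nvme_cmd_read */
	io.nblocks = 0;			/* zero-based count: one logical block */
	io.slba    = 0;			/* starting LBA, placeholder */
	io.addr    = (uintptr_t)buf;

	/*
	 * Synchronous: the ioctl does not return until the command
	 * completes, so a single caller holds only one I/O in flight.
	 */
	if (ioctl(fd, NVME_IOCTL_SUBMIT_IO, &io) < 0)
		perror("NVME_IOCTL_SUBMIT_IO");

	free(buf);
	close(fd);
	return 0;
}

Because the ioctl blocks until completion, the only way to drive more
than QD=1 is to run one such submission per thread or process, which is
the QD=NR_CPUS ceiling Alexey mentions above.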