On Fri, Mar 27, 2020 at 08:47:08AM -0600, Jens Axboe wrote:
> On 3/27/20 8:25 AM, Keith Busch wrote:
> > On 3/27/20 12:14 AM, Alexey Dobriyan wrote:
> >> On Fri, Mar 27, 2020 at 12:56:00AM +0000, Damien Le Moal wrote:
> >>>
> >>> On 2020/03/27 5:44, Alexey Dobriyan wrote:
> >>>> Add simple iodepth=1 NVMe engine:
> >>>>
> >>>> 	ioengine=nvme
> >>>>
> >>>> It works via standard Linux NVMe ioctls.
> >>>
> >>> Keith is working on splitting up nvme-cli into the cli part and libnvme,
> >>> which uses the kernel ioctl interface for NVMe command passthrough. So I
> >>> think it may be better to implement ioengine=libnvme using Keith's
> >>> libnvme library. That will remove the need to define all the NVMe
> >>> command stuff here.
> >>
> >> Sure. It is just a standalone file you can send to colleagues and forget,
> >> similar to how header-only C++ libraries work.
> >>
> >>>> It will be used for testing upcoming ZNS stuff.
> >
> > I'm not completely against fio providing an nvme ioengine, and libnvme
> > could easily fit in, but I don't think that faithfully represents how
> > these devices are actually used. The passthrough interface is not really
> > our fast path, and being a synchronous interface is a bit limiting in
> > testing device capabilities.
>
> I guess my main question is what purpose it fills. Since it's not
> a performant interface, it's not a benchmarking thing.
> Hence it's for testing the feature? If so, would it be better to have it
> in nvme-cli or a standalone tool?

This engine can easily create QD=NR_CPUS; it is not much, but it is
something.

Another thing: I've wasted a week (and counting!) struggling with userspace
NVMe drivers, and they are such a pain. Anything from broken hardware to
code like this:

	spin_lock()		// sic, before copy_from_user
	copy_from_user(...)	// sic, no checking

It is much easier to implement new commands via passthrough ioctls and have
a load/stress testing generator.
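
For the record, below is a minimal sketch of the kind of passthrough I/O
such an engine sits on top of: a single synchronous Read issued with
NVME_IOCTL_SUBMIT_IO on a namespace block device. The device path, starting
LBA and buffer size are made up for illustration; this is a sketch of the
ioctl path, not the engine code itself.

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <unistd.h>
	#include <linux/nvme_ioctl.h>

	int main(void)
	{
		/* hypothetical namespace device, adjust as needed */
		int fd = open("/dev/nvme0n1", O_RDONLY);
		if (fd < 0) {
			perror("open");
			return 1;
		}

		/* buffer for one logical block; 4096 assumes a 4K LBA format */
		void *buf;
		if (posix_memalign(&buf, 4096, 4096)) {
			close(fd);
			return 1;
		}

		struct nvme_user_io io;
		memset(&io, 0, sizeof(io));
		io.opcode  = 0x02;		/* NVMe Read */
		io.addr    = (unsigned long)buf;
		io.slba    = 0;			/* arbitrary starting LBA */
		io.nblocks = 0;			/* zero-based: one block */

		/* synchronous: the thread blocks until the command completes */
		if (ioctl(fd, NVME_IOCTL_SUBMIT_IO, &io) < 0)
			perror("NVME_IOCTL_SUBMIT_IO");

		free(buf);
		close(fd);
		return 0;
	}

Each fio job would presumably block in such an ioctl, so queue depth scales
only with the number of jobs, which is where the QD=NR_CPUS above comes
from.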