Re: [RFC PATCH 3/3] nvme: add the "debug" host driver

On 04.02.2022 07:58, Chaitanya Kulkarni wrote:
On 2/3/22 22:28, Damien Le Moal wrote:
On 2/4/22 12:12, Chaitanya Kulkarni wrote:

One can instantiate SCSI devices with QEMU by using fake SCSI devices,
but one can also just use scsi_debug to do the same. I see both efforts
as desirable, so long as someone maintains this.


Why do you think both efforts are desirable?

When testing code that uses the functionality, it is far easier to get
said functionality with a simple "modprobe" than having to set up a
VM. Cf. running blktests or fstests.
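
Concretely, a minimal scsi_debug setup looks something like this
(dev_size_mb is a real module parameter; the value is just an example):

  # RAM-backed fake SCSI disk, 256 MB; shows up as a normal /dev/sdX
  modprobe scsi_debug dev_size_mb=256

  # tear it down again when done
  rmmod scsi_debug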


Agreed on simplicity, but then why do we have QEMU implementations for
the NVMe features (e.g. ZNS, NVMe Simple Copy)? We can just build a
memory-backed NVMeOF test target for NVMe controller features.

Also, recognizing that simplicity, I initially proposed fabrics-based
NVMe ZNS emulation over QEMU (I think I still have the initial state
machine implementation code for ZNS somewhere). Those were "nacked" for
the right reason: we had decided to go with QEMU and use it as the
primary platform for testing. So I fail to understand what has
changed, given that QEMU already supports NVMe Simple Copy...

I was not part of this conversation, but as I see it, each approach
gives a benefit. QEMU is fantastic for compliance testing, and I am not
sure you get the same level of command analysis anywhere else; at least
not without writing dedicated code for this in a target.
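
For reference, QEMU can expose a zoned namespace on its emulated
controller along these lines (device properties as documented for
QEMU's hw/nvme; sizes and names here are illustrative):

  qemu-img create -f raw zns.img 32G
  qemu-system-x86_64 ... \
    -drive file=zns.img,id=nvmezns0,format=raw,if=none \
    -device nvme,id=nvme0,serial=deadbeef \
    -device nvme-ns,drive=nvmezns0,bus=nvme0,nsid=1,zoned=true,zoned.zone_size=64M,zoned.zone_capacity=64M,zoned.max_open=16,zoned.max_active=32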

This said, when we want to test for race conditions, QEMU is very slow.
For a software-only solution, we have experimented with something
similar to the nvme-debug code that Mikulas is proposing. Adam pointed
to the nvme-loop target as an alternative, and this seems to work pretty
nicely. I do not believe many changes would be needed to support copy
offload on top of it.

So in my view, having both is not duplication, and it gives more
flexibility for validation, which I believe is always good.


So personally, I also think it would be great to have a kernel-based
emulation of copy offload, and that should be very easy to implement
with the fabric code. Then loop back onto a nullblk device and you get
a quick, easy-to-set-up copy-offload device that can even be of the ZNS
variant if you want, since nullblk supports zones.
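
As a rough sketch of that setup, assuming the standard nvmet configfs
layout (the "testnqn" name is arbitrary):

  modprobe null_blk nr_devices=1 zoned=1
  modprobe nvmet
  modprobe nvme-loop

  cd /sys/kernel/config/nvmet

  # subsystem with the zoned null_blk device as namespace 1
  mkdir subsystems/testnqn
  echo 1 > subsystems/testnqn/attr_allow_any_host
  mkdir subsystems/testnqn/namespaces/1
  echo -n /dev/nullb0 > subsystems/testnqn/namespaces/1/device_path
  echo 1 > subsystems/testnqn/namespaces/1/enable

  # loop port, then connect from the host side
  mkdir ports/1
  echo loop > ports/1/addr_trtype
  ln -s /sys/kernel/config/nvmet/subsystems/testnqn \
        /sys/kernel/config/nvmet/ports/1/subsystems/testnqn

  nvme connect -t loop -n testnqn

The resulting /dev/nvmeXnY node is then a zoned namespace backed
entirely by null_blk.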


One can do that by creating a null_blk-based NVMeOF target namespace;
there is no need to emulate memory-backed simple copy code in the
fabrics with nvme-loop. It is as simple as inserting the module and
configuring the ns with nvmetcli once we have finalized the solution
for copy offload. If you remember, I already have patches for that...
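
For reference, the nvmetcli route drives the same configfs setup from a
saved config. Assuming a loop.json along the lines of the example file
shipped with nvmetcli (a loop port plus a subsystem whose namespace
points at /dev/nullb0; the "testnqn" NQN is whatever the config
defines):

  modprobe null_blk nr_devices=1
  nvmetcli restore loop.json
  nvme connect -t loop -n testnqn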


The NVMe ZNS QEMU implementation proved to work just fine for testing,
and copy offload is no exception.

For instance, blktests uses scsi_debug for simplicity.

In the end you decide what you want to use.

Can we use the nvme-loop target instead?

I am advocating for this approach as well. It presents a virtual NVMe
controller already.


It does that assuming the underlying block device (such as null_blk) or
the QEMU implementation supports the required features, so as not to
bloat the NVMeOF target.

-ck



