Thanks for the quick reply, Bjorn!
Actually, performance is not the biggest concern.
Mmiotrace has a documented SMP race condition:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/trace/mmiotrace.rst#n135
Also, playing correctly with page faults is quite a challenge. I'm trying
to find a simpler/easier solution :)
Thanks for the QEMU tip! I'll take a look.
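If I understand correctly, QEMU already does exactly this kind of thing:
a device's BAR can be served by MemoryRegionOps callbacks, so every guest
access to the BAR lands in C code (hw/misc/edu.c looks like a good
example). Below is a rough, untested sketch of that pattern; "fakedev",
the vendor/device IDs and the register layout are placeholders I made up,
not an existing QEMU device:

/*
 * Rough sketch only: a minimal QEMU PCI device whose BAR0 is served by
 * MemoryRegionOps callbacks.  "fakedev", the IDs and the register layout
 * are made up for illustration.
 */
#include "qemu/osdep.h"
#include "hw/pci/pci.h"
#include "qom/object.h"

#define TYPE_FAKEDEV "fakedev"
typedef struct FakeDevState FakeDevState;
DECLARE_INSTANCE_CHECKER(FakeDevState, FAKEDEV, TYPE_FAKEDEV)

struct FakeDevState {
    PCIDevice parent_obj;
    MemoryRegion bar0;
    uint32_t regs[1024];        /* backing store for emulated registers */
};

static uint64_t fakedev_bar0_read(void *opaque, hwaddr addr, unsigned size)
{
    FakeDevState *s = opaque;

    /* Every guest MMIO read of BAR0 arrives here. */
    return s->regs[addr / 4];
}

static void fakedev_bar0_write(void *opaque, hwaddr addr, uint64_t val,
                               unsigned size)
{
    FakeDevState *s = opaque;

    /* Every guest MMIO write of BAR0 arrives here. */
    s->regs[addr / 4] = val;
}

static const MemoryRegionOps fakedev_bar0_ops = {
    .read = fakedev_bar0_read,
    .write = fakedev_bar0_write,
    .endianness = DEVICE_LITTLE_ENDIAN,
};

static void fakedev_realize(PCIDevice *pdev, Error **errp)
{
    FakeDevState *s = FAKEDEV(pdev);

    memory_region_init_io(&s->bar0, OBJECT(s), &fakedev_bar0_ops, s,
                          "fakedev-bar0", sizeof(s->regs));
    pci_register_bar(pdev, 0, PCI_BASE_ADDRESS_SPACE_MEMORY, &s->bar0);
}

static void fakedev_class_init(ObjectClass *klass, void *data)
{
    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);

    k->realize = fakedev_realize;
    k->vendor_id = PCI_VENDOR_ID_QEMU;      /* placeholder IDs */
    k->device_id = 0x0001;
    k->class_id = PCI_CLASS_OTHERS;
}

static void fakedev_register_types(void)
{
    static InterfaceInfo interfaces[] = {
        { INTERFACE_CONVENTIONAL_PCI_DEVICE },
        { },
    };
    static const TypeInfo fakedev_info = {
        .name          = TYPE_FAKEDEV,
        .parent        = TYPE_PCI_DEVICE,
        .instance_size = sizeof(FakeDevState),
        .class_init    = fakedev_class_init,
        .interfaces    = interfaces,
    };

    type_register_static(&fakedev_info);
}

type_init(fakedev_register_types)

If that works out, the emulated NVMe registers could live entirely in the
read/write callbacks above.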
On Wed, Jan 31 2024 at 15:20:09 -06:00:00, Bjorn Helgaas
<helgaas@xxxxxxxxxx> wrote:
On Wed, Jan 31, 2024 at 08:42:18PM +0000, nowicki@xxxxxxxxxx wrote:
Hello,
I'm trying to implement a fake PCIe device and I'm looking for guidance
(by fake I mean a fully software device).
So far I have implemented:
- a fake PCIe bus with custom pci_ops.read & pci_ops.write functions
- a fake PCIe switch
- a fake PCIe endpoint
The fake devices have PCIe registers implemented and are visible in user
space via the lspci tool. The registers can be edited via the setpci tool.
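For reference, the config-space side is just a custom struct pci_ops along
these lines (simplified; fake_cfg_read()/fake_cfg_write() are placeholders
for the in-memory config-space model behind the fake devices):

/*
 * Simplified sketch of the fake config accessors.
 * fake_cfg_read()/fake_cfg_write() are placeholders for the in-memory
 * config-space model behind the fake devices.
 */
static int fake_pci_read(struct pci_bus *bus, unsigned int devfn,
                         int where, int size, u32 *val)
{
        *val = fake_cfg_read(bus->number, devfn, where, size);
        return PCIBIOS_SUCCESSFUL;
}

static int fake_pci_write(struct pci_bus *bus, unsigned int devfn,
                          int where, int size, u32 val)
{
        fake_cfg_write(bus->number, devfn, where, size, val);
        return PCIBIOS_SUCCESSFUL;
}

static struct pci_ops fake_pci_ops = {
        .read  = fake_pci_read,
        .write = fake_pci_write,
};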
Now I'm looking for a way to implement BAR regions with custom memory
handlers. Is that even possible?
Basically, I'd like to capture each MemoryWrite & MemoryRead targeted at
the PCIe endpoint's BAR region and emulate NVMe registers.
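Conceptually, what I'm after for the BARs is a similar per-region callback
interface, something like the purely hypothetical sketch below (no such
kernel API exists as far as I know; NVME_REG_* and NVME_CSTS_RDY are the
usual constants from <linux/nvme.h>):

/*
 * Purely hypothetical interface -- not an existing kernel API.  This is
 * just the shape of the hook I wish I could register for a BAR.
 */
struct fake_bar_ops {
        u32  (*read)(void *priv, loff_t offset, int size);
        void (*write)(void *priv, loff_t offset, int size, u32 val);
};

/* Example handler: emulate a couple of NVMe controller registers. */
static u32 fake_nvme_bar_read(void *priv, loff_t offset, int size)
{
        switch (offset) {
        case NVME_REG_VS:               /* Version */
                return 0x00010400;      /* report NVMe 1.4.0 */
        case NVME_REG_CSTS:             /* Controller Status */
                return NVME_CSTS_RDY;   /* pretend the controller is ready */
        default:
                return 0;
        }
}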
I'm at a dead end right now and I see only two options:
- generate page faults on every access to the fake BAR region and execute
  the fake PCIe endpoint's callbacks, similar to (or the same as) mmiotrace
- periodically scan the fake BAR region for any changes (rough sketch
  below)
Both solutions have drawbacks.
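To make the second option more concrete, what I have in mind is roughly
the sketch below: keep a shadow copy of the memory backing the fake BAR
and diff it periodically from a workqueue. fake_bar_mem and
fake_nvme_reg_write() are placeholders for the backing memory and the
emulation hook; note that this only notices writes after the fact and
never sees reads at all, which is part of the drawback:

/* Rough sketch of the periodic-scan idea (option two above). */
#include <linux/jiffies.h>
#include <linux/sizes.h>
#include <linux/slab.h>
#include <linux/types.h>
#include <linux/workqueue.h>

#define FAKE_BAR_SIZE   SZ_4K
#define SCAN_PERIOD_MS  10

static u8 *fake_bar_mem;        /* placeholder: memory backing the fake BAR */
static u8 *fake_bar_shadow;     /* last-observed copy */
static struct delayed_work fake_bar_scan_work;

static void fake_bar_scan(struct work_struct *work)
{
        size_t off;

        for (off = 0; off < FAKE_BAR_SIZE; off += 4) {
                u32 cur = *(u32 *)(fake_bar_mem + off);
                u32 old = *(u32 *)(fake_bar_shadow + off);

                if (cur != old) {
                        /* fake_nvme_reg_write() is a placeholder hook. */
                        fake_nvme_reg_write(off, cur);
                        *(u32 *)(fake_bar_shadow + off) = cur;
                }
        }

        schedule_delayed_work(&fake_bar_scan_work,
                              msecs_to_jiffies(SCAN_PERIOD_MS));
}

static int fake_bar_scan_start(void)
{
        fake_bar_shadow = kzalloc(FAKE_BAR_SIZE, GFP_KERNEL);
        if (!fake_bar_shadow)
                return -ENOMEM;

        INIT_DELAYED_WORK(&fake_bar_scan_work, fake_bar_scan);
        schedule_delayed_work(&fake_bar_scan_work,
                              msecs_to_jiffies(SCAN_PERIOD_MS));
        return 0;
}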
Is there another way to implement a fake BAR region?
Sounds kind of cool and potentially useful for building kernel test tools.
Is the page fault on access option a problem because you want better
performance? I assume you really *want* to know about every write and
possibly even every read, so a page fault seems like the way to do
that.
Maybe qemu would have some ideas? I assume it implements some similar
things.
Bjorn