On 10/22/20 7:20 AM, Dmitry Osipenko wrote:
> 20.10.2020 12:18, Mikko Perttunen wrote:
>>> I'm asking this question because right now there is only one potential
>>> client for this IOCTL, the VIC. If other clients aren't supposed to be
>>> part of the DRM driver, like for example NVDEC, which should probably
>>> be a V4L driver, then the DRM driver will have only the single VIC, and
>>> in that case we shouldn't need this IOCTL because DRM and V4L should
>>> use the generic dma-buf API for importing and exporting buffers.
>> This IOCTL is required for GR2D/GR3D too, as they need to access memory
>> as well. This is a different step from import/export: first you import
>> or allocate your memory so you have a GEM handle, then you map it to
>> the channel, which sets up the IOMMU mapping (if there is an IOMMU).

> This doesn't answer my question. I don't have the full picture and for
> now will remain dubious about this IOCTL, but it should be fine to have
> it in the form of WIP patches (maybe as a staging feature) until
> userspace code and hardware specs become available.
>
> Some more comments:
>
> 1. Older Tegra SoCs do not have restrictions that would prevent the
> kernel driver from grouping IOMMU domains as it wishes. It's fine to
> have one static mapping per GEM that can be accessed by all DRM
> devices, which is why CHANNEL_MAP is questionable.

Sure, on older Tegras this is not strictly necessary because the
firewall can check pointers. It's not related to IOMMU grouping.

> 2. IIUC, the mappings need to be done per device group/stream and not
> per channel_ctx. It looks like nothing stops one channel context from
> guessing the mapping addresses of another context, does it?
>
> I'm suggesting that each GEM should have a per-device mapping, and that
> the new IOCTL should create these GEM mappings instead of the
> channel_ctx mappings.

In the absence of context isolation, this is correct. But with context
isolation (which is next on my upstreaming todo list), on supported chips
(T186+), we can map to individual contexts, which are associated with
channel_ctxs.

Without context isolation, this IOCTL just maps to the underlying engine.
The list of mappings can also be used by the firewall later; as mentioned,
just the relocs would be enough for that, but I think there's good
orthogonality in having CHANNEL_MAP describe the memory regions accessible
to the engine, with relocations just doing relocations.
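
To make that concrete, here is a rough sketch of the shape of IOCTL
argument I have in mind. This is illustrative only; the struct name,
fields and layout are not the final UAPI:

	/* Illustrative sketch only, not final UAPI. */
	struct drm_tegra_channel_map {
		__u32 channel_ctx;	/* context handle from CHANNEL_OPEN */
		__u32 handle;		/* GEM handle to map for the engine */
		__u64 offset;		/* offset into the GEM object */
		__u64 length;		/* length of the region to map */
		__u32 flags;		/* e.g. engine read/write access */
		__u32 mapping_id;	/* out: referenced by relocs/unmap */
	};

The set of mappings created this way then describes exactly the memory
the engine may access, which is what the firewall would validate against.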

> 3. We shouldn't need channel contexts at all; a single "DRM file"
> context should be enough.

Yeah, maybe we could just have one "inline" channel context in the DRM
file that's just initialized by the CHANNEL_OPEN IOCTL.
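
Something like this, purely as a sketch of the idea (not actual driver
code):

	/* Sketch: one channel context embedded in the DRM file private. */
	struct tegra_drm_file {
		struct tegra_drm_context context; /* set up by CHANNEL_OPEN */
		/* ... other per-file state ... */
	};

CHANNEL_OPEN would then initialize this embedded context rather than
allocating a separate one per call.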

> 4. The new UAPI needs to be separated into several parts in the next
> revision, one patch for each new feature.

I'll try to split up where possible.

> The first patches should be the ones that are relevant to the existing
> userspace code, like support for the waits.

Can you elaborate what you mean by this?

> Partial mappings should be a separate feature because it's a
> questionable feature that needs to be proven by a real userspace first.
> Maybe it would be even better to drop it for starters, to ease
> reviewing.

Considering that the "no-op" support for it (map the whole buffer but
just keep track of the starting offset) is only a couple of lines, I'd
like to keep it in.
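
For reference, the "no-op" support is essentially just offset
bookkeeping; a sketch, with a made-up helper name for illustration:

	/* Map the whole buffer; the partial part is just bookkeeping. */
	iova = tegra_gem_map_whole(ctx, gem);	/* hypothetical helper */
	mapping->iova = iova + args->offset;	/* engine-visible address */
	mapping->size = args->length;

So dropping it wouldn't meaningfully shrink the series.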

> Waiting for fences on host1x should be a separate feature.

OK.

> The syncfile support needs to be a separate feature as well because I
> don't see a use-case for it right now.

I'll separate it. The reason it's there is to avoid the overhead of the
extra ID/threshold -> sync_file conversion IOCTL when you do need a
sync_file.
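
To illustrate that overhead: without the inline path, userspace needs a
second IOCTL per submission whenever it wants a sync_file. The IOCTL and
field names below are hypothetical, just to show the flow:

	/* Two-step flow; names are illustrative only. */
	drmIoctl(fd, DRM_IOCTL_TEGRA_CHANNEL_SUBMIT, &submit);
	conv.id = submit.syncpt_id;		/* returned syncpoint ID */
	conv.threshold = submit.syncpt_value;	/* returned threshold */
	drmIoctl(fd, DRM_IOCTL_TEGRA_SYNCPT_TO_SYNCFILE, &conv);
	/* conv.fd now holds the sync_file */

With the inline option, the submit IOCTL returns the sync_file fd
directly and the second call disappears.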

> I'd like to see the DRM_SCHED and syncobj support. I can help you with
> it if it's out of your scope for now.

I already wrote some code for syncobj that I can probably pull in.
Regarding DRM_SCHED, help is welcome. However, we should keep using the
hardware scheduler, and considering it's a bigger piece of work, let's
not block this series on it.

Mikko