On Tue, Jan 07, 2025 at 09:10:11AM -0800, Nicolin Chen wrote:
> +/*
> + * Typically called in driver's threaded IRQ handler.
> + * The @type and @event_data must be defined in include/uapi/linux/iommufd.h
> + */
> +int iommufd_viommu_report_event(struct iommufd_viommu *viommu,
> +				enum iommu_veventq_type type, void *event_data,
> +				size_t data_len)
> +{
> +	struct iommufd_veventq *veventq;
> +	struct iommufd_vevent *vevent;
> +	int rc = 0;
> +
> +	if (!viommu)
> +		return -ENODEV;
> +	if (WARN_ON_ONCE(!viommu->ops || !viommu->ops->supports_veventq ||
> +			 !viommu->ops->supports_veventq(type)))
> +		return -EOPNOTSUPP;
> +	if (WARN_ON_ONCE(!data_len || !event_data))
> +		return -EINVAL;
> +
> +	down_read(&viommu->veventqs_rwsem);
> +
> +	veventq = iommufd_viommu_find_veventq(viommu, type);
> +	if (!veventq) {
> +		rc = -EOPNOTSUPP;
> +		goto out_unlock_veventqs;
> +	}
> +
> +	vevent = kmalloc(struct_size(vevent, event_data, data_len), GFP_KERNEL);
> +	if (!vevent) {
> +		rc = -ENOMEM;
> +		goto out_unlock_veventqs;
> +	}
> +	memcpy(vevent->event_data, event_data, data_len);

The page fault path is self limited because end point devices are only
able to issue a certain number of PRIs before they have to stop. But
the async events generated by something like the SMMU are not self
limiting, and we can have a huge barrage of them. I think you need to
add some kind of limiting here, otherwise we will OOM the kernel and
crash, eg if the VM spams protection errors.

The virtual event queue should behave the same as if the physical event
queue overflows, and that logic should be in the SMMU driver - this
should return some Exxx to indicate the queue is full. I suppose we
will need a way to indicate lost events to userspace on top of this?

Presumably userspace should specify the max queue size.

Jason
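
[Editor's sketch of the limiting Jason asks for, not part of the posted patch:
the max_events, num_events, and overflowed fields on struct iommufd_veventq
and the iommufd_veventq_alloc_event() helper are hypothetical names assumed
here to illustrate bounding the queue to a userspace-specified depth and
returning an error when it is full.]

	/*
	 * Hypothetical additions to struct iommufd_veventq; not in the
	 * posted patch.
	 */
	struct iommufd_veventq {
		/* ... existing members ... */
		u32 max_events;		/* queue depth chosen by userspace */
		atomic_t num_events;	/* events currently queued */
		bool overflowed;	/* at least one event was dropped */
	};

	static struct iommufd_vevent *
	iommufd_veventq_alloc_event(struct iommufd_veventq *veventq,
				    void *event_data, size_t data_len)
	{
		struct iommufd_vevent *vevent;

		/* Bound the queue so a VM spamming events cannot OOM the kernel */
		if (atomic_inc_return(&veventq->num_events) > veventq->max_events) {
			atomic_dec(&veventq->num_events);
			/* Record the overflow so the read path can tell userspace */
			WRITE_ONCE(veventq->overflowed, true);
			return ERR_PTR(-EOVERFLOW);
		}

		vevent = kmalloc(struct_size(vevent, event_data, data_len),
				 GFP_KERNEL);
		if (!vevent) {
			atomic_dec(&veventq->num_events);
			return ERR_PTR(-ENOMEM);
		}
		memcpy(vevent->event_data, event_data, data_len);
		return vevent;
	}

In this sketch iommufd_viommu_report_event() would drop the event on
-EOVERFLOW, and the read path would decrement num_events and surface the
overflowed flag to userspace, mirroring how a physical SMMU event queue
signals that it overflowed.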