On Thu, Jul 13, 2017 at 1:35 AM, Sricharan R <sricharan@xxxxxxxxxxxxxx> wrote:
> Hi Vivek,
>
> On 7/13/2017 10:43 AM, Vivek Gautam wrote:
>> Hi Stephen,
>>
>> On 07/13/2017 04:24 AM, Stephen Boyd wrote:
>>> On 07/06, Vivek Gautam wrote:
>>>> @@ -1231,12 +1237,18 @@ static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
>>>>  static size_t arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova,
>>>>                               size_t size)
>>>>  {
>>>> -        struct io_pgtable_ops *ops = to_smmu_domain(domain)->pgtbl_ops;
>>>> +        struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
>>>> +        struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
>>>> +        size_t ret;
>>>>
>>>>          if (!ops)
>>>>                  return 0;
>>>>
>>>> -        return ops->unmap(ops, iova, size);
>>>> +        pm_runtime_get_sync(smmu_domain->smmu->dev);
>>> Can these map/unmap ops be called from an atomic context? I seem
>>> to recall that being a problem before.
>>
>> That's something which was dropped in the following patch merged in master:
>> 523d7423e21b iommu/arm-smmu: Remove io-pgtable spinlock
>>
>> Looks like we don't need locks here anymore?
>
> Apart from the locking, I wonder why an explicit pm_runtime is needed
> from unmap. Somehow it looks like some path in the master using it
> should have enabled the pm?

Yes, there are a bunch of scenarios where unmap can happen with a
disabled master (but not in atomic context).

On the GPU side we opportunistically keep a buffer mapping until the
buffer is freed (which can happen after the GPU is disabled). Likewise,
v4l2 won't unmap an exported dmabuf while some other driver holds a
reference to it (which can be dropped when the v4l2 device is
suspended).

Since unmap triggers a TLB flush, which touches iommu registers, the
iommu driver *definitely* needs a pm_runtime_get_sync().

BR,
-R
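
[Editorial sketch for readers following the thread: the quoted hunk only
shows the pm_runtime_get_sync() side of the change, so the matching
pm_runtime_put() placement and the use of the ret variable below are
assumptions about how the unmap path could be bracketed, not a quote of
the posted patch. The arm_smmu_domain/to_smmu_domain definitions are the
driver-internal ones from arm-smmu.c.]

#include <linux/iommu.h>
#include <linux/pm_runtime.h>

static size_t arm_smmu_unmap(struct iommu_domain *domain, unsigned long iova,
                             size_t size)
{
        struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
        struct io_pgtable_ops *ops = smmu_domain->pgtbl_ops;
        size_t ret;

        if (!ops)
                return 0;

        /*
         * Power up the SMMU before ops->unmap() issues TLB maintenance,
         * since the master(s) attached to this domain may already be
         * runtime-suspended and cannot be relied on to keep it powered.
         */
        pm_runtime_get_sync(smmu_domain->smmu->dev);
        ret = ops->unmap(ops, iova, size);
        pm_runtime_put(smmu_domain->smmu->dev);        /* placement is an assumption */

        return ret;
}

Taking the reference around the whole ops->unmap() call keeps the SMMU
powered for the register writes that the TLB flush requires, which is the
point Rob makes above about unmap racing with a disabled master.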