On Wed, Nov 27, 2019 at 05:09:41PM +0000, Robin Murphy wrote:
> On 27/11/2019 2:16 pm, Thierry Reding wrote:
> [...]
> > Nevermind that, I figured out that I was missing the initialization of
> > some of the S2CR variables. I've got something that I think is working
> > now, though I don't know yet how to go about cleaning up the initial
> > mapping and "recycling" it.
> > 
> > I'll clean things up a bit, run some more tests and post a new patch
> > that can serve as a basis for discussion.
> 
> I'm a little puzzled by the smmu->identity domain - disregarding the fact
> that it's not actually used by the given diff ;) - if isolation is the
> reason for not simply using a bypass S2CR for the window between reset and
> the relevant .add_device call where the default domain proper comes in[1],
> then surely exposing the union of memory regions to the union of all
> associated devices isn't all that desirable either.

A bypass S2CR was what I had originally in mind, but Will objected to
that because it "leaves the thing wide open if we don't subsequently
probe the master."[0] Will went on to suggest setting up a page table
early for stream IDs with reserved regions, so that's what I
implemented. It ends up working fairly nicely (see attached patch).

I suppose putting all the masters into the same bucket isn't an ideal
solution, but it's pretty simple and straightforward. Also, I don't
expect this to be a very common use-case. In fact, the only case where
I'm aware that this is needed is display controllers scanning out a
splash screen. So the worst that could happen is that they somehow get
the addresses mixed up and read each other's framebuffers, which would
really only be possible if they were already doing so before the SMMU
was initialized. Any harm from that would already have been done.

I don't think there's a real risk here. Before the ARM SMMU driver
takes over and configures all contexts to fault by default, all of
these devices are reading from physical memory without any isolation.
Setting up this identity domain allows them to keep accessing the
regions that they were meant to access, while still faulting on any
access outside of those regions.

> Either way, I'll give you the pre-emptive warning that this is the SMMU in
> the way of my EFI framebuffer ;)
> 
> ...
> arm-smmu 7fb20000.iommu: 1 context banks (1 stage-2 only)
> ...

Interesting. How did you avoid getting the faults by default? Do you
just enable bypass by default? If I understand correctly, that would
mean you can have only a single IOMMU domain in that case, right?

In that case it would perhaps be better to keep a list of identity
IOMMU domains and later on somehow pass them on when the driver takes
over. Basically these would have to become the IOMMU groups' default
domains.

> Robin.
> 
> [1] the fact that it currently depends on probe order whether getting that
> .add_device call requires a driver probing for the device is an error as
> discussed elsewhere, and will get fixed separately, I promise.

I'm not sure I understand how that would fix anything. You'd still need
to program the SMMU first before calling ->add_device() for all of the
masters, in which case you're still going to run into faults.

Thierry

[0]: https://lkml.org/lkml/2019/9/17/745
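To make concrete what the identity domain buys us, here's a tiny
stand-alone model of the resulting behaviour (plain C, all addresses
made up; purely illustrative, not kernel code and not part of the patch
below): IOVA == PA inside the reserved regions, a context fault
everywhere else.

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdio.h>

  struct region {
          unsigned long start, size;
  };

  /* hypothetical splash-screen carveout, as from a memory-region node */
  static const struct region reserved[] = {
          { 0x80000000UL, 0x00800000UL }, /* 8 MiB framebuffer */
  };

  /* identity translation: succeeds only inside a reserved region */
  static bool translate(unsigned long iova, unsigned long *pa)
  {
          for (size_t i = 0; i < sizeof(reserved) / sizeof(reserved[0]); i++) {
                  const struct region *r = &reserved[i];

                  if (iova >= r->start && iova - r->start < r->size) {
                          *pa = iova; /* 1:1 mapping, IOVA == PA */
                          return true;
                  }
          }

          return false; /* outside the carveouts -> context fault */
  }

  int main(void)
  {
          unsigned long pa;

          /* scanout from within the carveout keeps working... */
          printf("0x80001000: %s\n",
                 translate(0x80001000UL, &pa) ? "mapped 1:1" : "fault");

          /* ...while stray accesses still fault */
          printf("0x90000000: %s\n",
                 translate(0x90000000UL, &pa) ? "mapped 1:1" : "fault");

          return 0;
  }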
From cd7be912e74bdd463384e42f1aa275e959f4bee2 Mon Sep 17 00:00:00 2001
From: Thierry Reding <treding@xxxxxxxxxx>
Date: Thu, 28 Nov 2019 12:03:58 +0100
Subject: [PATCH] iommu: arm-smmu: Add support for early direct mappings

On some platforms, the firmware will set up hardware to read from a
given region of memory. One such example is a display controller that
is scanning out a splash screen from physical memory.

During Linux's boot process, the ARM SMMU will configure all contexts
to fault by default. This means that any memory access performed by an
SMMU master before its driver has had a chance to properly set up the
IOMMU will cause a fault. This is especially annoying for something
like the display controller scanning out a splash screen because the
faults will result in the display controller getting bogus data
(all-ones on Tegra) and, since it repeatedly scans out that
framebuffer, it will keep triggering such faults and spam the boot log
with them.

In order to work around such problems, scan the device tree for IOMMU
masters and set up a special identity domain that will map 1:1 all of
the reserved regions associated with them. This happens before the
SMMU is enabled, so that the mappings are already set up before
translations begin.

TODO: remove identity domain when no longer in use

Signed-off-by: Thierry Reding <treding@xxxxxxxxxx>
---
 drivers/iommu/arm-smmu.c | 172 ++++++++++++++++++++++++++++++++++++++-
 drivers/iommu/arm-smmu.h |   2 +
 2 files changed, 173 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index 58ec52d3c5af..3d6c58ce3bab 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -1887,6 +1887,172 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu)
 	return 0;
 }
 
+static int arm_smmu_identity_map_regions(struct arm_smmu_device *smmu,
+					 struct device_node *np)
+{
+	struct device *dev = smmu->dev;
+	struct of_phandle_iterator it;
+	unsigned long page_size;
+	unsigned int count = 0;
+	int ret;
+
+	page_size = 1UL << __ffs(smmu->identity->pgsize_bitmap);
+
+	/* parse memory regions and add them to the identity mapping */
+	of_for_each_phandle(&it, ret, np, "memory-region", NULL, 0) {
+		int prot = IOMMU_READ | IOMMU_WRITE;
+		dma_addr_t start, limit, iova;
+		struct resource res;
+
+		ret = of_address_to_resource(it.node, 0, &res);
+		if (ret < 0) {
+			dev_err(dev, "failed to parse memory region %pOF: %d\n",
+				it.node, ret);
+			continue;
+		}
+
+		/* check that region is not empty */
+		if (resource_size(&res) == 0) {
+			dev_dbg(dev, "skipping empty memory region %pOF\n",
+				it.node);
+			continue;
+		}
+
+		start = ALIGN(res.start, page_size);
+		limit = ALIGN(res.start + resource_size(&res), page_size);
+
+		for (iova = start; iova < limit; iova += page_size) {
+			phys_addr_t phys;
+
+			/* check that this IOVA isn't already mapped */
+			phys = iommu_iova_to_phys(smmu->identity, iova);
+			if (phys)
+				continue;
+
+			ret = iommu_map(smmu->identity, iova, iova, page_size,
+					prot);
+			if (ret < 0) {
+				dev_err(dev, "failed to map %pad for %pOF: %d\n",
+					&iova, it.node, ret);
+				continue;
+			}
+		}
+
+		dev_dbg(dev, "identity mapped memory region %pR\n", &res);
+		count++;
+	}
+
+	return count;
+}
+
+static int arm_smmu_identity_add_master(struct arm_smmu_device *smmu,
+					struct of_phandle_args *args)
+{
+	struct arm_smmu_domain *identity = to_smmu_domain(smmu->identity);
+	struct arm_smmu_smr *smrs = smmu->smrs;
+	struct device *dev = smmu->dev;
+	unsigned int index;
+	u16 sid, mask;
+	u32 fwid;
+	int ret;
+
+	/* skip masters that aren't ours */
+	if (args->np != dev->of_node)
+		return 0;
+
+	fwid = arm_smmu_of_parse(args->np, args->args, args->args_count);
+	sid = FIELD_GET(SMR_ID, fwid);
+	mask = FIELD_GET(SMR_MASK, fwid);
+
+	ret = arm_smmu_find_sme(smmu, sid, mask);
+	if (ret < 0) {
+		dev_err(dev, "failed to find SME: %d\n", ret);
+		return ret;
+	}
+
+	index = ret;
+
+	if (smrs && smmu->s2crs[index].count == 0) {
+		smrs[index].id = sid;
+		smrs[index].mask = mask;
+		smrs[index].valid = true;
+	}
+
+	smmu->s2crs[index].type = S2CR_TYPE_TRANS;
+	smmu->s2crs[index].privcfg = S2CR_PRIVCFG_DEFAULT;
+	smmu->s2crs[index].cbndx = identity->cfg.cbndx;
+	smmu->s2crs[index].count++;
+
+	return 0;
+}
+
+static int arm_smmu_identity_add_device(struct arm_smmu_device *smmu,
+					struct device_node *np)
+{
+	struct device *dev = smmu->dev;
+	struct of_phandle_args args;
+	unsigned int index = 0;
+	int ret;
+
+	/* add stream IDs to the identity mapping */
+	while (!of_parse_phandle_with_args(np, "iommus", "#iommu-cells",
+					   index, &args)) {
+		ret = arm_smmu_identity_add_master(smmu, &args);
+		if (ret < 0)
+			return ret;
+
+		index++;
+	}
+
+	return 0;
+}
+
+static int arm_smmu_setup_identity(struct arm_smmu_device *smmu)
+{
+	struct arm_smmu_domain *identity;
+	struct device *dev = smmu->dev;
+	struct device_node *np;
+	int ret;
+
+	/* create early identity mapping */
+	smmu->identity = arm_smmu_domain_alloc(IOMMU_DOMAIN_UNMANAGED);
+	if (!smmu->identity) {
+		dev_err(dev, "failed to create identity domain\n");
+		return -ENOMEM;
+	}
+
+	smmu->identity->pgsize_bitmap = smmu->pgsize_bitmap;
+	smmu->identity->type = IOMMU_DOMAIN_UNMANAGED;
+	smmu->identity->ops = &arm_smmu_ops;
+
+	ret = arm_smmu_init_domain_context(smmu->identity, smmu);
+	if (ret < 0) {
+		dev_err(dev, "failed to initialize identity domain: %d\n", ret);
+		return ret;
+	}
+
+	identity = to_smmu_domain(smmu->identity);
+
+	for_each_node_with_property(np, "iommus") {
+		ret = arm_smmu_identity_map_regions(smmu, np);
+		if (ret < 0)
+			continue;
+
+		/*
+		 * Do not add devices to the early identity mapping if they
+		 * do not define any memory-regions.
+		 */
+		if (ret == 0)
+			continue;
+
+		ret = arm_smmu_identity_add_device(smmu, np);
+		if (ret < 0)
+			continue;
+	}
+
+	return 0;
+}
+
 struct arm_smmu_match_data {
 	enum arm_smmu_arch_version version;
 	enum arm_smmu_implementation model;
@@ -2128,6 +2294,10 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
 	if (err)
 		return err;
 
+	err = arm_smmu_setup_identity(smmu);
+	if (err)
+		return err;
+
 	if (smmu->version == ARM_SMMU_V2) {
 		if (smmu->num_context_banks > smmu->num_context_irqs) {
 			dev_err(dev,
@@ -2170,8 +2340,8 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
 	}
 
 	platform_set_drvdata(pdev, smmu);
-	arm_smmu_device_reset(smmu);
 	arm_smmu_test_smr_masks(smmu);
+	arm_smmu_device_reset(smmu);
 
 	/*
 	 * We want to avoid touching dev->power.lock in fastpaths unless
diff --git a/drivers/iommu/arm-smmu.h b/drivers/iommu/arm-smmu.h
index 6b6b877135de..001e60a3d18c 100644
--- a/drivers/iommu/arm-smmu.h
+++ b/drivers/iommu/arm-smmu.h
@@ -280,6 +280,8 @@ struct arm_smmu_device {
 
 	/* IOMMU core code handle */
 	struct iommu_device		iommu;
+
+	struct iommu_domain		*identity;
 };
 
 enum arm_smmu_context_fmt {
-- 
2.23.0
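In case it helps review: the IOVA walk in arm_smmu_identity_map_regions()
above boils down to the following arithmetic, pulled out here into a
stand-alone sketch (userspace C with a made-up region; the ALIGN() macro
mirrors the kernel's power-of-two round-up semantics):

  #include <stdio.h>

  /* round x up to the next multiple of a (a power of two) */
  #define ALIGN(x, a) (((x) + (a) - 1) & ~((unsigned long)(a) - 1))

  int main(void)
  {
          /* page_size = 1UL << __ffs(pgsize_bitmap), e.g. 4 KiB */
          unsigned long page_size = 4096;

          /* hypothetical reserved region: 64 KiB at 0x80000000 */
          unsigned long res_start = 0x80000000UL, res_size = 0x10000UL;

          unsigned long start = ALIGN(res_start, page_size);
          unsigned long limit = ALIGN(res_start + res_size, page_size);

          /* each page is mapped 1:1, i.e. iommu_map(domain, iova, iova, ...) */
          for (unsigned long iova = start; iova < limit; iova += page_size)
                  printf("map IOVA %#lx -> PA %#lx\n", iova, iova);

          return 0;
  }

Note that both ends of the region are rounded up to the IOMMU page size,
so this assumes that reserved regions are page-aligned to begin with.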