Hi,
On 11/13/19 3:16 PM, Christoph Hellwig wrote:
On Wed, Nov 13, 2019 at 10:51:43AM +0100, Thomas Hellström (VMware) wrote:
From: Thomas Hellstrom <thellstrom@xxxxxxxxxx>
We're gradually moving towards using DMA coherent memory in most
situations, although TTM's interaction with the DMA layers is still a
work in progress. Meanwhile, use coherent memory when size
restrictions mean that streaming DMA mapping of large buffer objects
may fail.
Also move DMA mask settings to the vmw_dma_select_mode function, since
it's important that we set the correct DMA masks before calling the
dma_max_mapping_size() function.
Cc: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Signed-off-by: Thomas Hellstrom <thellstrom@xxxxxxxxxx>
Reviewed-by: Brian Paul <brianp@xxxxxxxxxx>
---
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 31 +++++++----------------------
1 file changed, 7 insertions(+), 24 deletions(-)
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
index fc0283659c41..1e1de83908fe 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
@@ -569,7 +569,10 @@ static int vmw_dma_select_mode(struct vmw_private *dev_priv)
[vmw_dma_map_populate] = "Caching DMA mappings.",
[vmw_dma_map_bind] = "Giving up DMA mappings early."};
- if (vmw_force_coherent)
+ (void) dma_set_mask_and_coherent(dev_priv->dev->dev, DMA_BIT_MASK(64));
Please don't use a void cast on returns. Also this generally can't
fail, so if it does fail anyway it is fatal, and you should actually
return an error.
OK. Will fix.
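For v2 the call would then look roughly like this (a minimal sketch of
the shape being asked for, not the final patch; the surrounding error
path is assumed):

	int ret;

	ret = dma_set_mask_and_coherent(dev_priv->dev->dev, DMA_BIT_MASK(64));
	if (ret)
		/* Failure to set a 64-bit mask is treated as fatal. */
		return ret;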
+ if (vmw_force_coherent ||
+ dma_max_mapping_size(dev_priv->dev->dev) != SIZE_MAX)
I don't think this is the right check to see if swiotlb would be
used. You probably want to call dma_addressing_limited(). Please
also add a comment explaining the call here.
If I understand things correctly, dma_addressing_limited() would always
return false on vmwgfx, at least if the "restrict to 44 bit" option is
removed. The "odd" modes we want to catch are someone setting
swiotlb=force, or someone enabling SEV. In both cases, vmwgfx would
break down due to limited DMA space even if streaming DMA were fixed
with appropriate sync calls.
The same holds for vmw_pvscsi (which we discussed before), where the
large queue depth will typically fill the SWIOTLB, causing excessive
non-fatal error logging.
dma_max_mapping_size() currently catches these modes, but I guess the
best option would be a dma_swiotlb_forced() function that could be used
in cases like this?
if (dev_priv->map_mode != vmw_dma_phys &&
AFAIK vmw_dma_phys can't even be set in current mainline and is dead
code. Can you check if I'm missing something? Certainly none of the
three branches above set it.
I'll remove that dead code for v2.
(sizeof(unsigned long) == 4 || vmw_restrict_dma_mask)) {
DRM_INFO("Restricting DMA addresses to 44 bits.\n");
- return dma_set_mask_and_coherent(dev->dev, DMA_BIT_MASK(44));
+ return dma_set_mask_and_coherent(dev_priv->dev->dev,
+ DMA_BIT_MASK(44));
Can you keep setting the DMA mask together?
E.g. make the whole thing something like:
static int vmw_dma_select_mode(struct vmw_private *dev_priv)
{
	if (dma_addressing_limited(dev_priv->dev->dev) || vmw_force_coherent) {
		/*
		 * XXX: what about AMD iommu? virtio-iommu? Also
		 * swiotlb should be always available where we can
		 * address more than 32-bit of memory..
		 */
		if (!IS_ENABLED(CONFIG_SWIOTLB) &&
		    !IS_ENABLED(CONFIG_INTEL_IOMMU))
			return -EINVAL;
The above check is only there to see whether the coherent memory
functionality is really compiled into TTM. We have a patchset that got
lost in the last merge window to fix this properly and avoid confusion
about this. I'll rebase on that.
		dev_priv->map_mode = vmw_dma_alloc_coherent;
	} else if (vmw_restrict_iommu) {
		dev_priv->map_mode = vmw_dma_map_bind;
	} else {
		dev_priv->map_mode = vmw_dma_map_populate;
	}

	/*
	 * On 32-bit systems we can only handle 32 bit PFNs. Optionally set
	 * that restriction also for 64-bit systems.
	 */
	return dma_set_mask_and_coherent(dev_priv->dev->dev,
			DMA_BIT_MASK((IS_ENABLED(CONFIG_64BIT) &&
				      !vmw_restrict_dma_mask) ? 64 : 44));
}
Thanks,
Thomas