Hi,
On 2023/6/22 00:07, Lucas Stach wrote:
> And as the HW guarantees it on your platform, your platform
> implementation makes this function effectively a no-op. Skipping the
> call to this function is breaking the DMA API abstraction, as now the
> driver is second guessing the DMA API implementation. I really see no
> reason to do this.
That is exactly the reason you chose the word 'effectively', not 'definitely'.
We don't want to waste the CPU's time running the dma_sync_sg_for_cpu() function:
```
void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
			 int nelems, enum dma_data_direction dir)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);

	BUG_ON(!valid_dma_direction(dir));
	if (dma_map_direct(dev, ops))
		dma_direct_sync_sg_for_cpu(dev, sg, nelems, dir);
	else if (ops->sync_sg_for_cpu)
		ops->sync_sg_for_cpu(dev, sg, nelems, dir);
	debug_dma_sync_sg_for_cpu(dev, sg, nelems, dir);
}
```
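To make the wasted work concrete: on a dma-coherent device the direct-mapping branch above still walks the whole scatterlist, only to do nothing per entry. A simplified sketch of that path (the name sketch_dma_direct_sync_sg_for_cpu is made up; the real dma_direct_sync_sg_for_cpu() in kernel/dma/direct.c also handles SWIOTLB bouncing and more):
```
/* Simplified sketch, not the verbatim kernel code. */
static void sketch_dma_direct_sync_sg_for_cpu(struct device *dev,
		struct scatterlist *sgl, int nents,
		enum dma_data_direction dir)
{
	struct scatterlist *sg;
	int i;

	/* The function call, the BUG_ON() check and this scatterlist
	 * walk are all still paid for on every invocation ... */
	for_each_sg(sgl, sg, nents, i) {
		/* ... only to find, per entry, that a coherent device
		 * needs no cache maintenance at all. */
		if (!dev_is_dma_coherent(dev))
			arch_sync_dma_for_cpu(sg_phys(sg), sg->length, dir);
	}
}
```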
on every call to this:
```
int etnaviv_gem_cpu_fini(struct drm_gem_object *obj)
{
	struct drm_device *dev = obj->dev;
	struct etnaviv_gem_object *etnaviv_obj = to_etnaviv_bo(obj);
	struct etnaviv_drm_private *priv = dev->dev_private;

	if (!priv->dma_coherent && etnaviv_obj->flags & ETNA_BO_CACHED) {
		/* fini without a prep is almost certainly a userspace error */
		WARN_ON(etnaviv_obj->last_cpu_prep_op == 0);

		dma_sync_sgtable_for_device(dev->dev, etnaviv_obj->sgt,
			etnaviv_op_to_dma_dir(etnaviv_obj->last_cpu_prep_op));
		etnaviv_obj->last_cpu_prep_op = 0;
	}

	return 0;
}
```
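For reference, the etnaviv_op_to_dma_dir() helper used above only translates the recorded prep op into a DMA direction, roughly like this (paraphrased from etnaviv_gem.c, modulo version drift):
```
static inline enum dma_data_direction etnaviv_op_to_dma_dir(u32 op)
{
	if (op & ETNA_PREP_READ)
		return DMA_FROM_DEVICE;
	else if (op & ETNA_PREP_WRITE)
		return DMA_TO_DEVICE;
	else
		return DMA_BIDIRECTIONAL;
}
```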
But this is acceptable, because we can drop the GEM_CPU_PREP and
GEM_CPU_FINI ioctl calls entirely in userspace for cached buffers, as
they are simply not needed for cached mappings on our platform.
A minimal sketch of that userspace change follows.
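Assuming the allocation flags are remembered alongside the BO, it could look like this (my_bo, my_bo_cpu_prep() and my_bo_cpu_fini() are hypothetical wrappers; etna_bo_cpu_prep()/etna_bo_cpu_fini() are the existing libdrm-etnaviv calls):
```
#include <stdint.h>
#include <etnaviv_drmif.h>	/* libdrm: etna_bo_cpu_prep()/etna_bo_cpu_fini() */
#include <etnaviv_drm.h>	/* uapi header shipped with libdrm: ETNA_BO_CACHED */

/* Hypothetical wrapper remembering the flags passed to etna_bo_new(). */
struct my_bo {
	struct etna_bo *bo;
	uint32_t flags;
};

static int my_bo_cpu_prep(struct my_bo *b, uint32_t op)
{
	/* Assumption: on our dma-coherent platform, cached mappings
	 * need no explicit sync, so skip the ioctl round trip. */
	if (b->flags & ETNA_BO_CACHED)
		return 0;
	return etna_bo_cpu_prep(b->bo, op);
}

static void my_bo_cpu_fini(struct my_bo *b)
{
	if (b->flags & ETNA_BO_CACHED)
		return;
	etna_bo_cpu_fini(b->bo);
}
```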
We'll leave the ioctls in place for the WC mapping only.
OK?
--
Jingfeng