On 2/18/20 8:06 PM, Daniel Vetter wrote:
On Tue, Feb 18, 2020 at 07:37:44PM +0100, Christian König wrote:
Am 18.02.20 um 19:28 schrieb Thomas Zimmermann:
Hi
Am 18.02.20 um 19:23 schrieb Christian König:
Am 18.02.20 um 19:16 schrieb Thomas Zimmermann:
Hi
Am 18.02.20 um 18:13 schrieb Nirmoy:
On 2/18/20 1:44 PM, Christian König wrote:
Am 18.02.20 um 13:40 schrieb Thomas Zimmermann:
Hi
Am 17.02.20 um 16:04 schrieb Nirmoy Das:
GPU address handling is device specific and should be handled by its
device driver.
Signed-off-by: Nirmoy Das <nirmoy.das@xxxxxxx>
---
drivers/gpu/drm/ttm/ttm_bo.c | 7 -------
include/drm/ttm/ttm_bo_api.h | 2 --
include/drm/ttm/ttm_bo_driver.h | 1 -
3 files changed, 10 deletions(-)
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 151edfd8de77..d5885cd609a3 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -85,7 +85,6 @@ static void ttm_mem_type_debug(struct ttm_bo_device *bdev, struct drm_printer *p
 	drm_printf(p, " has_type: %d\n", man->has_type);
 	drm_printf(p, " use_type: %d\n", man->use_type);
 	drm_printf(p, " flags: 0x%08X\n", man->flags);
-	drm_printf(p, " gpu_offset: 0x%08llX\n", man->gpu_offset);
 	drm_printf(p, " size: %llu\n", man->size);
 	drm_printf(p, " available_caching: 0x%08X\n", man->available_caching);
 	drm_printf(p, " default_caching: 0x%08X\n", man->default_caching);
@@ -345,12 +344,6 @@ static int ttm_bo_handle_move_mem(struct ttm_buffer_object *bo,
moved:
bo->evicted = false;
-	if (bo->mem.mm_node)
-		bo->offset = (bo->mem.start << PAGE_SHIFT) +
-			bdev->man[bo->mem.mem_type].gpu_offset;
-	else
-		bo->offset = 0;
-
After moving this into users, the else branch has been lost. Is
'bo->mem.mm_node' always true?
At least for the amdgpu and radeon use cases, yes.
But that is a rather good question. I mean, it is illegal to get the
GPU address of a BO while it is inaccessible (e.g. in the system domain).
It could be that some driver relied on this behavior to get 0 for the
system domain here.
I wonder how to verify that?
If I understand correctly:
1. qxl uses bo->offset only in qxl_bo_physical_address(), which is not in
the system domain.
2. Unfortunately I can't say the same for bochs, but it works with this
patch series, so I think bochs is fine as well.
3. vmwgfx uses bo->offset only when bo->mem.mem_type == TTM_PL_VRAM, so
vmwgfx should be fine.
4. amdgpu and radeon run with 'bo->mem.mm_node' always true.
I am not sure about nouveau, as bo->offset is used in many places.
I could probably mirror the removed logic in nouveau as follows:
I suggest introducing a TTM helper that contains the original branching
and using it everywhere. Something like:
s64 ttm_bo_offset(struct ttm_buffer_object *bo)
{
	if (WARN_ON_ONCE(!bo->mem.mm_node))
		return 0;
	return bo->mem.start << PAGE_SHIFT;
}
Could be static inline. The warning should point to broken drivers. This
also gets rid of the ugly shift in the drivers.
Big NAK on this. That is exactly what we should NOT do.
See, the shift belongs in the driver, because it is driver-dependent
whether we work with page or byte offsets.
For amdgpu, for example, we want to work with byte offsets, and TTM should
not make any assumptions about what bo->mem.start actually contains.
OK. What about something like ttm_bo_pg_offset()? Same code without the
shift. Would also make it clear that it's a page offset.
That is a rather good idea. We could name that ttm_bo_man_offset() and put
it into ttm_bo_manager.c next to the manager which allocates them.
It's just that this manager is not used by everybody, so amdgpu, radeon and
nouveau would still need a separate function.
Let me pile on with my bikeshed color choice :-)
I'd do this as a wrapper, but for drivers individually. And only for those
we haven't audited yet, or where we think the WARN_ON actually provides
value. So maybe in vram helpers, since that might be used by some drivers
that get stuff wrong. Or maybe nouveau isn't audited yet.
I like this idea more. I will modify and resend the patches with the above
suggestions.
That also sidesteps the "what should we call this, drivers are different?"
problem.
Anyway feel free to ignore me, really just a bikeshed at this point.
Cheers, Daniel
Christian.
Best regards
Thomas
Regards,
Christian.
Best regards
Thomas
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
index f8015e0318d7..5a6a2af91318 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -1317,6 +1317,10 @@ nouveau_bo_move_ntfy(struct ttm_buffer_object *bo, bool evict,
 		list_for_each_entry(vma, &nvbo->vma_list, head) {
 			nouveau_vma_map(vma, mem);
 		}
+		if (bo->mem.mm_node)
+			nvbo->offset = (new_reg->start << PAGE_SHIFT);
+		else
+			nvbo->offset = 0;
 	} else {
 		list_for_each_entry(vma, &nvbo->vma_list, head) {
 			WARN_ON(ttm_bo_wait(bo, false, false));
Regards,
Nirmoy
Regards,
Christian.
Best regards
Thomas
 	ctx->bytes_moved += bo->num_pages << PAGE_SHIFT;
 	return 0;
diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
index b9bc1b00142e..d6f39ee5bf5d 100644
--- a/include/drm/ttm/ttm_bo_api.h
+++ b/include/drm/ttm/ttm_bo_api.h
@@ -213,8 +213,6 @@ struct ttm_buffer_object {
 	 * either of these locks held.
 	 */
 
-	uint64_t offset; /* GPU address space is independent of CPU word size */
-
 	struct sg_table *sg;
 };
diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h
index c9e0fd09f4b2..c8ce6c181abe 100644
--- a/include/drm/ttm/ttm_bo_driver.h
+++ b/include/drm/ttm/ttm_bo_driver.h
@@ -177,7 +177,6 @@ struct ttm_mem_type_manager {
 	bool has_type;
 	bool use_type;
 	uint32_t flags;
-	uint64_t gpu_offset; /* GPU address space is independent of CPU word size */
 	uint64_t size;
 	uint32_t available_caching;
 	uint32_t default_caching;
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel