Re: [PATCH 1/2] drm/ttm: add a way to bo_wait for either the last read or last write

On 10/08/2011 12:03 AM, Marek Olšák wrote:
On Fri, Oct 7, 2011 at 10:00 AM, Thomas Hellstrom <thomas@xxxxxxxxxxxx> wrote:
OK. First I think we need to make a distinction: bo sync objects vs. driver
fences. The bo sync obj api is there strictly to provide functionality that
the ttm bo subsystem uses, and that follows a simple set of rules:

1) The bo subsystem never assumes sync objects are ordered. That means
the bo subsystem needs to wait on a sync object before removing it from a
buffer (see the sketch after rule 4 below). Any other assumption is buggy and
must be fixed. BUT, if that assumption is made in the driver, unknown to the
ttm bo subsystem (which is usually the case), it's OK.

2) When the sync object(s) attached to the bo are signaled, the ttm bo
subsystem is free to copy the bo contents and to unbind the bo.

3) The ttm bo subsystem allows sync objects to be signaled in different ways,
opaque to the subsystem, via sync_obj_arg. The driver is responsible for
setting up that argument.

4) Driver fences may be used for, or expose, other functionality or API
adaptations as long as the sync obj api exported to the bo subsystem follows
the above rules.
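
To illustrate rule 1) in code (nothing below is existing TTM code; the struct
and helper names are made up and the callback shapes are only assumed):

/*
 * Illustrative sketch only: this merely mirrors the shape of the opaque
 * sync object hooks the rules above talk about.
 */
struct example_sync_driver {
    bool (*sync_obj_signaled)(void *sync_obj, void *sync_obj_arg);
    int (*sync_obj_wait)(void *sync_obj, void *sync_obj_arg,
                         bool lazy, bool interruptible);
    void (*sync_obj_unref)(void **sync_obj);
};

/*
 * Rule 1) in code: wait on this specific sync object before detaching it
 * from the buffer.  Never infer that it has signaled just because some
 * other (possibly newer) sync object has.
 */
static int example_detach_sync_obj(struct example_sync_driver *drv,
                                   void **sync_obj, void *sync_obj_arg,
                                   bool interruptible)
{
    int ret = 0;

    if (!drv->sync_obj_signaled(*sync_obj, sync_obj_arg))
        ret = drv->sync_obj_wait(*sync_obj, sync_obj_arg,
                                 false, interruptible);
    if (ret == 0)
        drv->sync_obj_unref(sync_obj);    /* only now safe to drop it */

    return ret;
}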

This means the following w.r.t. the patch:

A) It violates 1). This is a bug that must be fixed. Assumptions that if one
sync object is signaled, another sync object is also signaled must be made
in the driver and not in the bo subsystem. Hence we need to explicitly wait
on a fence before removing it from the bo.

B) The sync_obj_arg carries *per-sync-obj* information on how it should be
signaled. If we need to attach multiple sync objects to a buffer object, we
also need multiple sync_obj_args. This is a bug and needs to be fixed.
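
To make that concrete: if a bo ever carries several sync objects, each one
has to travel with its own arg, roughly like this (the type name below is
made up for illustration):

struct example_sync_entry {
    void *sync_obj;
    void *sync_obj_arg;    /* how *this* sync object gets signaled */
};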

C) There is really only one reason that the ttm bo subsystem should care
about multiple sync objects, and that is because the driver can't order them
efficiently. One such example would be hardware with multiple pipes reading
simultaneously from the same texture buffer. Currently we don't support this,
so only the *last* sync object needs to be known by the bo subsystem. Keeping
track of multiple fences generates a lot of completely unnecessary code in
the ttm_bo_util file, the ttm_bo_vm file, and will be a nightmare if/when
we truly support pipelined moves.

As I understand it from your patches, you want to keep multiple fences
around only to track rendering history. If we want to do that generically, I
suggest doing it in the execbuf util code in the following way:

struct ttm_eu_rendering_history {
    void *last_read_sync_obj;
    void *last_read_sync_obj_arg;
    void *last_write_sync_obj;
    void *last_write_sync_obj_arg;
};

Embed this structure in the radeon_bo and build a small api around it,
including *optionally* passing it to the existing execbuf utilities, and you
should be done. The bo_util code and bo_vm code don't care about the
rendering history, only that the bo is completely idle.
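
A minimal sketch of what such a small api could look like, with made-up
helper names and no locking or sync object reference counting; the execbuf
path (and, per the note below, an accelerated bo move) would call the first
helper for every access:

/* Record the last read or write sync object in the history (sketch only). */
static void example_history_note_access(struct ttm_eu_rendering_history *h,
                                        void *sync_obj, void *sync_obj_arg,
                                        bool is_write)
{
    if (is_write) {
        h->last_write_sync_obj = sync_obj;
        h->last_write_sync_obj_arg = sync_obj_arg;
    } else {
        h->last_read_sync_obj = sync_obj;
        h->last_read_sync_obj_arg = sync_obj_arg;
    }
}

/*
 * A new reader only needs the last writer to be finished, while a new
 * writer must also wait for the last reader.  Per rule 1) each object is
 * waited on explicitly; no ordering between them is assumed.  The wait
 * callback stands in for the driver's opaque sync obj wait hook.
 */
static int example_history_wait(struct ttm_eu_rendering_history *h,
                                int (*wait)(void *sync_obj, void *arg,
                                            bool lazy, bool interruptible),
                                bool caller_will_write, bool interruptible)
{
    int ret = 0;

    if (h->last_write_sync_obj)
        ret = wait(h->last_write_sync_obj,
                   h->last_write_sync_obj_arg, false, interruptible);
    if (ret == 0 && caller_will_write && h->last_read_sync_obj)
        ret = wait(h->last_read_sync_obj,
                   h->last_read_sync_obj_arg, false, interruptible);

    return ret;
}

Whatever lock protects the history, and how sync object references are taken
and dropped, is left to the driver here.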

Note also that when an accelerated bo move is scheduled, the driver needs to
update this struct.
OK, sounds good. I'll fix what should be fixed and send a patch when
it's ready. I am now not so sure whether doing this generically is a
good idea. :)

Marek

Marek,
Any progress on this? I guess the merge window is about to open soon, and we need a fix by then.

/Thomas





