Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> writes:

>> [PATCH 1/4] Support 64-bit MSC values. Handle kernel vagaries about
>
> Some spurious assignments that appear to intentionally drop the error
> code could be clarified,

I can't find any dropped error codes in this patch to add comments to;
please provide patch excerpts for this review.

> and intel_crtc_msc_to_sequence() is always called
> with a derived current_msc already to hand. The latter present path
> obfuscates its derived current_msc.

Present always computes absolute MSC values and provides those to the
driver, instead of expecting every driver to duplicate that logic.

>> [PATCH 2/4] Restructure DRM event handling.
>
> This won't compile against older Xorg due to xorg_list in the common
> code.

Can switch to intel_list, but that would need list_for_each_entry_safe
added. How many versions back is this supposed to compile against?
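For reference, the missing iterator would just follow the pattern of
the existing intel_list macros; an untested sketch, assuming
intel_list.h keeps the usual __container_of helper:

/* Iterate a list while it is safe to unlink the current entry:
 * tmp holds the next element before pos may be removed. */
#define list_for_each_entry_safe(pos, tmp, head, member)               \
    for (pos = __container_of((head)->next, pos, member),              \
         tmp = __container_of(pos->member.next, pos, member);          \
         &pos->member != (head);                                       \
         pos = tmp,                                                    \
         tmp = __container_of(tmp->member.next, tmp, member))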
>> [PATCH 3/4] Add DRI3 and miSyncShm support
>
> O_CLOEXEC needs protecting, also would appear to be candidate for a
> render-node.

Yes, obviously this wants to use a render node. I haven't had
complaints about O_CLOEXEC from BSD or Solaris developers for
libxshmfence; what systems do not have support for this?

> The imported and exported DRI3 pixmaps need to be pinned
> to prevent the driver using BO exchanges on that pixmap.

I don't understand this comment.

> DRI3 doesn't respect the xorg.conf Option for disabling.

Ok, it should check intel->directRenderingType == DRI_DISABLED.

> A fence is only tied to a
> screen and no XID or Client in particular?

DRI3 fences are screen-specific; otherwise you'd have no way of hooking
the fence to a specific driver.

> So it is a global operation
> akin to intel_flush_callback() which would be called before the Sync
> reply was sent.

Yes, the hardware queue is to be flushed before the Sync event is sent
(and before the xshmfence object is triggered, of course). Note that
this is just the mi version of sync fences, which uses libxshmfence;
the driver is free to use different code there. If we find that the
code for handling these xshmfence objects is common across drivers, we
can move that into the X server to share.
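To make the ordering concrete, a rough sketch (not the actual mi code;
flush_hardware_queue() is a hypothetical stand-in for whatever the
driver uses to kick its rendering queue, e.g. its batch submit):

#include <X11/xshmfence.h>

extern void flush_hardware_queue(void); /* hypothetical driver batch submit */

/* Complete a fence backed by an xshmfence object: flush first, so a
 * client waiting on the fence can never observe it triggered before
 * the rendering has reached the hardware. */
static void
driver_fence_set_triggered(struct xshmfence *fence)
{
    flush_hardware_queue();
    xshmfence_trigger(fence);
}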
>> [PATCH 4/4] Add Present extension support
>
> Yikes. The patch is itself fairly innocuous, but only because the
> Present extension in the server appears to be repeating the worst of
> DRI2, including its original bugs.

Please provide more specific comments here.

> The fallback/non-fullscreen case is not
> synchronised to screen refresh (if the Client so desired), and should
> be passed through to the driver.

The fallback case is synchronized, as the Present code triggers the
CopyArea call from the vblank hook. In practice, this has proven
sufficient to get images onto the screen without tearing and without
requiring a huge amount of driver and kernel infrastructure.

> The whole driver interface seems to be too low a level, baking in many
> assumptions, rather than the usual approach of providing a set of mi
> routines that the driver can plug into or not as the case may be.

Patches to the X server to change the API for better hardware support
are welcome, of course.

> That the WindowPixmap no longer points to the actual bo leads
> to a few problems, such as the CRTC misconfiguration and GetImage being
> broken after a PresentFlip.

A patch for the X server to fix that has been posted.

> After a vblank_event, Present must check that
> the flip is still valid before execution.

The flip proc may return FALSE to indicate failure of any kind; Present
will then fall back to a simple blt.

> In the backend it is not clear whether the RRCrtc should be the
> primary CRTC or the only CRTC to flip.

There is only one screen pixmap, so of course every CRTC must flip
together. The CRTC provided indicates which one the MSC is from.

> Damage is processed after the fallback but not the Flip path, the lack
> of Damage notification would upset Prime amongst others.

Sounds easy to fix in the X server.
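The fix in the server's flip path could be as small as posting
full-window damage once the flip completes; a sketch along these lines
(present_flip_notify_damage is a made-up name, not existing server
code):

#include "regionstr.h"
#include "windowstr.h"
#include "damage.h"

/* Hypothetical: report full-window damage after a completed flip so
 * Prime slaves and other Damage listeners see the new frame. */
static void
present_flip_notify_damage(WindowPtr window)
{
    BoxRec box = {
        .x1 = 0,
        .y1 = 0,
        .x2 = window->drawable.width,
        .y2 = window->drawable.height,
    };
    RegionRec region;

    RegionInit(&region, &box, 1);
    DamageDamageRegion(&window->drawable, &region);
    RegionUninit(&region);
}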
Thanks for your review!

-- 
keith.packard@xxxxxxxxx