Re: [PATCH 04/11] drm/i915: Support for GuC interrupts


 





On 6/28/2016 7:14 PM, Tvrtko Ursulin wrote:

On 28/06/16 12:12, Goel, Akash wrote:


On 6/28/2016 3:33 PM, Tvrtko Ursulin wrote:

On 27/06/16 13:16, akash.goel@xxxxxxxxx wrote:
From: Sagar Arun Kamble <sagar.a.kamble@xxxxxxxxx>

There are certain types of interrupts which Host can receive from GuC.
GuC ukernel sends an interrupt to Host for certain events, for example
to retrieve/consume the logs generated by the ukernel.
This patch adds support to receive interrupts from GuC but currently
enables & partially handles only the interrupt sent by the GuC ukernel.
Future patches will add support for handling other interrupt types.

v2: Use common low level routines for PM IER/IIR programming (Chris)
     Rename interrupt functions to gen9_xxx from gen8_xxx (Chris)
     Replace disabling of wake ref asserts with rpm get/put (Chris)

Signed-off-by: Sagar Arun Kamble <sagar.a.kamble@xxxxxxxxx>
Signed-off-by: Akash Goel <akash.goel@xxxxxxxxx>
---
  drivers/gpu/drm/i915/i915_drv.h            |  1 +
  drivers/gpu/drm/i915/i915_guc_submission.c |  5 ++
  drivers/gpu/drm/i915/i915_irq.c            | 95 ++++++++++++++++++++++++++++--
  drivers/gpu/drm/i915/i915_reg.h            | 11 ++++
  drivers/gpu/drm/i915/intel_drv.h           |  3 +
  drivers/gpu/drm/i915/intel_guc.h           |  5 ++
  drivers/gpu/drm/i915/intel_guc_loader.c    |  4 ++
  7 files changed, 120 insertions(+), 4 deletions(-)

+static void gen9_guc2host_events_work(struct work_struct *work)
+{
+    struct drm_i915_private *dev_priv =
+        container_of(work, struct drm_i915_private, guc.events_work);
+
+    spin_lock_irq(&dev_priv->irq_lock);
+    /* Speed up work cancellation while disabling GuC interrupts. */
+    if (!dev_priv->guc.interrupts_enabled) {
+        spin_unlock_irq(&dev_priv->irq_lock);
+        return;
+    }
+
+    /* Though this work item gets synced during rpm suspend, we still need
+     * a rpm get/put to avoid the warning, as it could get executed in a
+     * window where the rpm ref count has dropped to zero but rpm suspend
+     * has not kicked in. Generally the device is expected to be active at
+     * this time, so the get/put should be really quick.
+     */
+    intel_runtime_pm_get(dev_priv);
+
+    gen6_enable_pm_irq(dev_priv, GEN9_GUC_TO_HOST_INT_EVENT);
+    spin_unlock_irq(&dev_priv->irq_lock);
+
+    /* TODO: Handle the events for which GuC interrupted host */
+
+    intel_runtime_pm_put(dev_priv);
+}

  static bool bxt_port_hotplug_long_detect(enum port port, u32 val)
@@ -1653,6 +1722,20 @@ static void gen6_rps_irq_handler(struct drm_i915_private *dev_priv, u32 pm_iir)
      }
  }

+static void gen9_guc_irq_handler(struct drm_i915_private *dev_priv, u32 gt_iir)
+{
+    if (gt_iir & GEN9_GUC_TO_HOST_INT_EVENT) {
+        spin_lock(&dev_priv->irq_lock);
+        if (dev_priv->guc.interrupts_enabled) {

So it is expected that interrupts will always be enabled when
i915.guc_log_level is set, correct?

Yes, currently the interrupt should be enabled only when guc_log_level > 0.

But we need to disable/enable the interrupt upon suspend/resume and
across GPU reset, so the interrupt may not always be in an enabled state
when guc_log_level > 0.

Also, do you need to check against dev_priv->guc.interrupts_enabled at
all then? Or, from the opposite angle, would you instead need to log the
fact that an unexpected interrupt was received here?

I think this check is needed to avoid a race when disabling the interrupt.
Please refer to the sequence in the interrupt disabling function (same as
rps disabling): there we first set the interrupts_enabled flag to false,
then wait for the work item to finish execution and then program the IMR
register.
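
For reference, a rough sketch of that disable sequence (mirroring the
existing rps disable pattern); the function name
intel_guc_disable_interrupts() below is only illustrative, not
necessarily what the patch will use:

static void intel_guc_disable_interrupts(struct drm_i915_private *dev_priv)
{
	spin_lock_irq(&dev_priv->irq_lock);
	/* 1. Tell the work item (and irq handler) to bail out early */
	dev_priv->guc.interrupts_enabled = false;
	spin_unlock_irq(&dev_priv->irq_lock);

	/* 2. Wait for any already queued/running work item to finish */
	cancel_work_sync(&dev_priv->guc.events_work);

	spin_lock_irq(&dev_priv->irq_lock);
	/* 3. Only now mask the event in the PM IMR */
	gen6_disable_pm_irq(dev_priv, GEN9_GUC_TO_HOST_INT_EVENT);
	spin_unlock_irq(&dev_priv->irq_lock);
}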

Right, I see now that it is the existing sequence copy-pasted. In that
case I won't question it further. :)


+            /* Process all the GuC to Host events in bottom half */
+            gen6_disable_pm_irq(dev_priv,
+                GEN9_GUC_TO_HOST_INT_EVENT);

Why is it important to disable the interrupt here? Not for the
queue_work, I think.

We want to & can handle only one interrupt at a time; until the queued work
item has executed we can't process the next interrupt, so it is better to
keep the interrupt masked.
Sorry, this is just my understanding.

So it is queued in hardware and will get asserted when unmasked?

As per my understanding, if the interrupt is masked (in IMR), it won't be
queued; it will be ignored & so will not be asserted on unmasking.

If the interrupt wasn't masked, but was disabled (in IER), then it will
be asserted (in IIR) when it is enabled.
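
To put that understanding (as described above, not verified here against the
hardware docs) next to the helpers this patch already uses:

/* Masking in the PM IMR: per the above, the event is dropped while masked,
 * so nothing is replayed when it is unmasked again.
 */
gen6_disable_pm_irq(dev_priv, GEN9_GUC_TO_HOST_INT_EVENT);	/* mask */
/* ... a GuC-to-Host event arriving here would be ignored ... */
gen6_enable_pm_irq(dev_priv, GEN9_GUC_TO_HOST_INT_EVENT);	/* unmask, no replay */

/* Disabling the event in the PM IER instead would let it still latch in the
 * IIR, so it gets delivered as soon as it is re-enabled.
 */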




Also, is it safe with regards to potentially losing the interrupt?

Particularly for the FLUSH_LOG_BUFFER case, GuC won't send a new flush
interrupt unless it gets an acknowledgement (flush signal) of the
previous one from the Host.

Ah, so the scenario from the previous comment is really impossible? I mean the need to mask?

Sorry, my comments were not fully correct. GuC can send a new flush
interrupt even if the previous one is pending, but that will be for a
different log buffer type (there are 3 types of log buffer: ISR, DPC, CRASH).
For the same buffer type, GuC won't send a new flush interrupt unless it
gets an acknowledgement of the previous one from the Host.

But as you said the workqueue is ordered, and furthermore there is a
single instance of the work item, so the serialization will be provided
implicitly and there is no real need to mask the interrupt.

As mentioned above, a new flush interrupt can come while the previous one
is being processed on the Host, but due to the single instance of the work
item, that new interrupt will either effectively do nothing (if the work
item was still pending) or will re-queue the work item (if it was being
executed at that time).

Also, the state of all 3 log buffer types is parsed irrespective of which
one the interrupt actually came for, and the whole buffer is captured (this
is how it has been recommended to handle the flush interrupts on the Host
side). So if a new interrupt comes while the work item is still pending,
the work for this new interrupt will effectively also be done when the work
item is executed later.

So I will remove the masking then?


Possibly just put a comment up there explaining that.
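
I.e. something along these lines, perhaps (a sketch of the unmasked variant
with such a comment; illustrative only, reusing the names from the patch):

static void gen9_guc_irq_handler(struct drm_i915_private *dev_priv, u32 gt_iir)
{
	if (gt_iir & GEN9_GUC_TO_HOST_INT_EVENT) {
		spin_lock(&dev_priv->irq_lock);
		if (dev_priv->guc.interrupts_enabled) {
			/*
			 * No need to mask the interrupt here: dev_priv->wq is
			 * ordered and there is a single events_work instance, so
			 * back-to-back flush interrupts (which can only be for
			 * different log buffer types) simply coalesce into one
			 * execution of the work item, which samples the state of
			 * all the log buffer types anyway.
			 */
			queue_work(dev_priv->wq, &dev_priv->guc.events_work);
		}
		spin_unlock(&dev_priv->irq_lock);
	}
}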


+            queue_work(dev_priv->wq, &dev_priv->guc.events_work);

Because dev_priv->wq is a one-at-a-time, in-order wq, if something else is
running on it and taking time, can that also be a cause of dropping an
interrupt, or of being late with sending the flush signal to the GuC and so
losing some logs?

It's the driver's private workqueue, and the Turbo work item, which also
needs to be executed without much delay, is queued on this workqueue too.
But yes, the flush work item can get substantially delayed if there are
other work items queued before it, especially mm.retire_work (though that
generally executes only every ~1 second).

Best would be if the log buffer (44KB of data) could be sampled in IRQ
context (or tasklet context) itself.

I was just trying to understand if you perhaps need a dedicated wq. I
don't have a feel at all for how much data GuC logging generates per
second. If the interrupt is low frequency even with a lot of cmd
submission happening, it could be fine as it is.

Actually, with the maximum verbosity level I am seeing a flush interrupt every ms with the 'gem_exec_nop' IGT, as there are a lot of submissions being done. But that may not happen in a real-life scenario.

I think, if needed, later on we can either have a dedicated high-priority
workqueue for the logging work or use the tasklet context to do the
processing.
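
If a dedicated queue does turn out to be needed, a minimal sketch could look
like the below (the guc.log_wq field name is made up, purely for
illustration):

/* At init: an ordered, high-priority workqueue so the log-flush work is not
 * stuck behind mm.retire_work and friends on dev_priv->wq.
 */
dev_priv->guc.log_wq = alloc_ordered_workqueue("i915-guc_log", WQ_HIGHPRI);
if (!dev_priv->guc.log_wq)
	return -ENOMEM;

/* In the interrupt handler, queue onto it instead of dev_priv->wq */
queue_work(dev_priv->guc.log_wq, &dev_priv->guc.events_work);

/* At teardown */
destroy_workqueue(dev_priv->guc.log_wq);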

Best regards
Akash
Regards,

Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx



