From: Chris Wilson <chris@chris-wilson.co.uk>
Date: Sat, 23 Jun 2018 11:39:51 +0100
Subject: drm/i915: Defer modeset cleanup to a secondary task
Git-commit: 8d52e447807b350b98ffb4e64bc2fcc1f181c5be
Patch-mainline: v4.19-rc1
References: FATE#326289 FATE#326079 FATE#326049 FATE#322398 FATE#326166

If we avoid cleaning up the old state immediately in
intel_atomic_commit_tail() and defer it to a second task, we can avoid
taking heavily contended locks when the caller is ready to proceed.
Subsequent modesets will wait for the cleanup operation (either directly
via the ordered modeset wq or indirectly through the atomic helper)
which keeps the number of inflight cleanup tasks in check.

As an example, during reset an immediate modeset is performed to disable
the displays before the HW is reset, which must avoid struct_mutex to
avoid recursion. Moving the cleanup to a separate task defers acquiring
the struct_mutex until after the GPU is running again, allowing the
modeset to complete. Even in a few patches' time (optimist!) when we no
longer require struct_mutex to unpin the framebuffers, it will still be
good practice to minimise the number of contention points along the
reset path. The mutex dependency still exists (as one modeset flushes
the other), but in the short term it resolves the deadlock for simple
reset cases.

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=101600
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180623103951.23889-1-chris@chris-wilson.co.uk
Acked-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Petr Tesarik <ptesarik@suse.com>
---
 drivers/gpu/drm/i915/intel_display.c |   30 +++++++++++++++++++++++-------
 1 file changed, 23 insertions(+), 7 deletions(-)

--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -12554,6 +12554,19 @@ static void intel_atomic_commit_fence_wa
 	finish_wait(&dev_priv->gpu_error.wait_queue, &wait_reset);
 }
 
+static void intel_atomic_cleanup_work(struct work_struct *work)
+{
+	struct drm_atomic_state *state =
+		container_of(work, struct drm_atomic_state, commit_work);
+	struct drm_i915_private *i915 = to_i915(state->dev);
+
+	drm_atomic_helper_cleanup_planes(&i915->drm, state);
+	drm_atomic_helper_commit_cleanup_done(state);
+	drm_atomic_state_put(state);
+
+	intel_atomic_helper_free_state(i915);
+}
+
 static void intel_atomic_commit_tail(struct drm_atomic_state *state)
 {
 	struct drm_device *dev = state->dev;
@@ -12714,13 +12727,16 @@ static void intel_atomic_commit_tail(str
 		intel_display_power_put(dev_priv, POWER_DOMAIN_MODESET);
 	}
 
-	drm_atomic_helper_cleanup_planes(dev, state);
-
-	drm_atomic_helper_commit_cleanup_done(state);
-
-	drm_atomic_state_put(state);
-
-	intel_atomic_helper_free_state(dev_priv);
+	/*
+	 * Defer the cleanup of the old state to a separate worker to not
+	 * impede the current task (userspace for blocking modesets) that
+	 * are executed inline. For out-of-line asynchronous modesets/flips,
+	 * deferring to a new worker seems overkill, but we would place a
+	 * schedule point (cond_resched()) here anyway to keep latencies
+	 * down.
+	 */
+	INIT_WORK(&state->commit_work, intel_atomic_cleanup_work);
+	schedule_work(&state->commit_work);
 }
 
 static void intel_atomic_commit_work(struct work_struct *work)