From baec997285e63ad3e03d8b8d45e14776cd737f62 Mon Sep 17 00:00:00 2001
From: Colin Xu <colin.xu@intel.com>
Date: Fri, 16 Oct 2020 13:40:59 +0800
Subject: [PATCH] drm/i915/gvt: Only pin/unpin intel_context along with workload
Git-commit: baec997285e63ad3e03d8b8d45e14776cd737f62
Patch-mainline: v5.10-rc3
References: git-fixes

After the commit below, the system freezes on suspend once a vGPU has been
created (the vGPU does not even need to be activated):
commit e6ba76480299 ("drm/i915: Remove i915->kernel_context")

The old implementation pinned the intel_context in setup_submission and
unpinned it in clean_submission, so once a vGPU was created the
intel_context stayed pinned even though no workload was using it. That
in turn blocked i915 from entering the suspend state.

There is no need to keep it pinned all the time; pinning and unpinning
it around the workload lifecycle is more reasonable. Now that GVT
supports suspend/resume, the pinned intel_context also gets unpinned
when userspace puts the VM process into the suspend state: all
workloads are retired by then, so it is safe to unpin the intel_context
of every created workload. Therefore move the pin/unpin to
create_workload and destroy_workload, while keeping the create/destroy
in their original places.

V2: Rebase.

Fixes: e6ba76480299 ("drm/i915: Remove i915->kernel_context")
Reviewed-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Colin Xu <colin.xu@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20201016054059.238371-1-colin.xu@intel.com
Acked-by: Takashi Iwai <tiwai@suse.de>

---
 drivers/gpu/drm/i915/gvt/scheduler.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
index 3c3b9842bbbd..68b2d10108fd 100644
--- a/drivers/gpu/drm/i915/gvt/scheduler.c
+++ b/drivers/gpu/drm/i915/gvt/scheduler.c
@@ -1268,7 +1268,7 @@ void intel_vgpu_clean_submission(struct intel_vgpu *vgpu)
 
 	i915_context_ppgtt_root_restore(s, i915_vm_to_ppgtt(s->shadow[0]->vm));
 	for_each_engine(engine, vgpu->gvt->gt, id)
-		intel_context_unpin(s->shadow[id]);
+		intel_context_put(s->shadow[id]);
 
 	kmem_cache_destroy(s->workloads);
 }
@@ -1360,11 +1360,6 @@ int intel_vgpu_setup_submission(struct intel_vgpu *vgpu)
 			ce->ring = __intel_context_ring_size(ring_size);
 		}
 
-		ret = intel_context_pin(ce);
-		intel_context_put(ce);
-		if (ret)
-			goto out_shadow_ctx;
-
 		s->shadow[i] = ce;
 	}
 
@@ -1396,7 +1391,6 @@ int intel_vgpu_setup_submission(struct intel_vgpu *vgpu)
 		if (IS_ERR(s->shadow[i]))
 			break;
 
-		intel_context_unpin(s->shadow[i]);
 		intel_context_put(s->shadow[i]);
 	}
 	i915_vm_put(&ppgtt->vm);
@@ -1470,6 +1464,7 @@ void intel_vgpu_destroy_workload(struct intel_vgpu_workload *workload)
 {
 	struct intel_vgpu_submission *s = &workload->vgpu->submission;
 
+	intel_context_unpin(s->shadow[workload->engine->id]);
 	release_shadow_batch_buffer(workload);
 	release_shadow_wa_ctx(&workload->wa_ctx);
 
@@ -1715,6 +1710,12 @@ intel_vgpu_create_workload(struct intel_vgpu *vgpu,
 		return ERR_PTR(ret);
 	}
 
+	ret = intel_context_pin(s->shadow[engine->id]);
+	if (ret) {
+		intel_vgpu_destroy_workload(workload);
+		return ERR_PTR(ret);
+	}
+
 	return workload;
 }
 
-- 
2.26.2