From: Pixel Ding <Pixel.Ding@amd.com>
Date: Tue, 24 Apr 2018 22:52:45 -0400
Subject: drm/scheduler: don't update last scheduled fence in TDR
Git-commit: 5dd3f9efd4199f0d9e8244322934494ebd140dfd
Patch-mainline: v4.18-rc1
References: FATE#326289 FATE#326079 FATE#326049 FATE#322398 FATE#326166

The current sequence in the scheduler thread is:
1. update the last scheduled fence
2. job begin (add to the mirror list)
3. job finish (remove from the mirror list)
4. back to 1

Since the last scheduled fence is updated before the job joins the
mirror list, every job on the mirror list has already passed the last
scheduled fence. TDR only re-runs the jobs on the mirror list, so the
last scheduled fences must not be updated in TDR.
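
To make the ordering concrete, here is a minimal, self-contained
userspace C sketch of the sequence above (the struct layouts and
function names are illustrative only, not the real scheduler code):
the normal loop bumps last_scheduled before the job joins the mirror
list, while the recovery path only re-runs mirror-list jobs and leaves
last_scheduled untouched.

  #include <stdio.h>

  struct fence  { int seq; };
  struct entity { struct fence *last_scheduled; };
  struct job {
          struct fence  finished;
          struct entity *entity;
          int           on_mirror_list;
  };

  /* normal scheduler loop: steps 1 and 2 of the sequence above */
  static void sched_main_iteration(struct job *job)
  {
          job->entity->last_scheduled = &job->finished; /* 1. update last scheduled fence */
          job->on_mirror_list = 1;                      /* 2. job begin (join mirror list) */
  }

  /* TDR/recovery: re-run mirror-list jobs, do not touch last_scheduled */
  static void sched_job_recovery(struct job *job)
  {
          if (job->on_mirror_list)
                  printf("re-run job %d, last_scheduled stays at %d\n",
                         job->finished.seq, job->entity->last_scheduled->seq);
  }

  int main(void)
  {
          struct entity e = { 0 };
          struct job j = { .finished = { 42 }, .entity = &e };

          sched_main_iteration(&j); /* last_scheduled now points at j.finished */
          sched_job_recovery(&j);   /* must leave last_scheduled where it is   */
          return 0;
  }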

Signed-off-by: Pixel Ding <Pixel.Ding@amd.com>
Reviewed-by: Monk Liu <monk.liu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Acked-by: Petr Tesarik <ptesarik@suse.com>
---
 drivers/gpu/drm/scheduler/gpu_scheduler.c |    3 ---
 1 file changed, 3 deletions(-)

--- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
+++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
@@ -575,9 +575,6 @@ void drm_sched_job_recovery(struct drm_g
 		fence = sched->ops->run_job(s_job);
 		atomic_inc(&sched->hw_rq_count);
 
-		dma_fence_put(s_job->entity->last_scheduled);
-		s_job->entity->last_scheduled = dma_fence_get(&s_fence->finished);
-
 		if (fence) {
 			s_fence->parent = dma_fence_get(fence);
 			r = dma_fence_add_callback(fence, &s_fence->cb,