From: Vincent Guittot <vincent.guittot@linaro.org>
Date: Wed, 24 Jun 2020 21:27:12 +0100
Subject: [PATCH] sched/cfs: change initial value of runnable_avg

References: bsc#1158765
Patch-mainline: v5.8-rc3
Git-commit: e21cf43406a190adfcc4bfe592768066fb3aaa9b

A performance regression on the reaim benchmark has been reported with
  commit 070f5e860ee2 ("sched/fair: Take into account runnable_avg to classify group")

The problem comes from the initial value of runnable_avg, which is set
to the maximum value. This can be a problem if the newly forked task
turns out to be a short-running one, because the group of CPUs is then
wrongly classified as overloaded and tasks are pulled less aggressively.

Set the initial value of runnable_avg equal to util_avg to reflect that
there is no waiting time so far.

Fixes: 070f5e860ee2 ("sched/fair: Take into account runnable_avg to classify group")
Reported-by: kernel test robot <rong.a.chen@intel.com>
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 18e86a70144b..63ab71d51db4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -795,7 +795,7 @@ void post_init_entity_util_avg(struct task_struct *p)
 		}
 	}
 
-	sa->runnable_avg = cpu_scale;
+	sa->runnable_avg = sa->util_avg;
 
 	if (p->sched_class != &fair_sched_class) {
 		/*