From: Aubrey Li <aubrey.li@intel.com>
Date: Thu, 26 Mar 2020 14:23:07 +0000
Subject: [PATCH] sched/fair: Fix negative imbalance in imbalance calculation

Patch-mainline: v5.7-rc1
Git-commit: 111688ca1c4a43a7e482f5401f82c46326b8ed49
References: bnc#1155798 (CPU scheduler functional and performance backports)

A negative imbalance value was observed after the imbalance calculation:
this happens when the local sched group type is group_fully_busy, and the
average load of the local group is greater than that of the selected
busiest group. Fix this by comparing the average load of the local and
busiest groups before applying the imbalance calculation formula.
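
To illustrate how the value goes negative, below is a minimal standalone
sketch of the formula that runs right after the added check in
calculate_imbalance(). The struct and the numbers are hypothetical
stand-ins for the sg_lb_stats fields, not values from a real trace:

  #include <stdio.h>

  #define SCHED_CAPACITY_SCALE 1024

  /* Illustrative stand-ins for the sg_lb_stats fields used below. */
  struct grp { long avg_load, group_capacity; };

  int main(void)
  {
  	/*
  	 * Hypothetical case: the local group is group_fully_busy and
  	 * more loaded than the selected busiest group, so the domain
  	 * average (sds_avg_load) sits between the two.
  	 */
  	struct grp local   = { .avg_load = 1100, .group_capacity = 1024 };
  	struct grp busiest = { .avg_load = 1000, .group_capacity = 1024 };
  	long sds_avg_load  = 1050;

  	/*
  	 * Mirror of the imbalance formula at the end of
  	 * calculate_imbalance(): both terms of the min() go negative
  	 * when local.avg_load >= busiest.avg_load.
  	 */
  	long a = (busiest.avg_load - sds_avg_load) * busiest.group_capacity;
  	long b = (sds_avg_load - local.avg_load) * local.group_capacity;
  	long imbalance = (a < b ? a : b) / SCHED_CAPACITY_SCALE;

  	printf("imbalance = %ld\n", imbalance);	/* prints -50 */
  	return 0;
  }

With the check added by this patch, local->avg_load >= busiest->avg_load
is detected first and env->imbalance is set to 0 instead, so the formula
never produces the negative value.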

Suggested-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Aubrey Li <aubrey.li@linux.intel.com>
Cc: Phil Auld <pauld@redhat.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 kernel/sched/fair.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2806158aa964..cc8fd4976b01 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8970,6 +8970,14 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 
 		sds->avg_load = (sds->total_load * SCHED_CAPACITY_SCALE) /
 				sds->total_capacity;
+		/*
+		 * If the local group is more loaded than the selected
+		 * busiest group don't try to pull any tasks.
+		 */
+		if (local->avg_load >= busiest->avg_load) {
+			env->imbalance = 0;
+			return;
+		}
 	}
 
 	/*