From: Vincent Guittot <vincent.guittot@linaro.org>
Date: Tue, 18 Feb 2020 15:49:49 +0000
Subject: [PATCH] sched/fair: fix statistics for find_idlest_group()

References: bnc#1155798 (CPU scheduler functional and performance backports)
Patch-mainline: v5.6-rc5
Git-commit: 289de35984815576793f579ec27248609e75976e

sgs->group_weight is not set while gathering statistics in
update_sg_wakeup_stats(), so it keeps its zero-initialized value. As a
result, a group can be classified as fully busy even though it has 0
running tasks, if its utilization is high enough.

This path is mainly used for fork and exec.
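
With group_weight left at 0, the spare-capacity test in
group_has_capacity() can never succeed, so classification falls
through to the pure utilization check. A simplified sketch of the
relevant helper (paraphrased from v5.6 kernel/sched/fair.c and
trimmed for illustration):

	static inline bool
	group_has_capacity(unsigned int imbalance_pct, struct sg_lb_stats *sgs)
	{
		/* 0 < 0 never holds when group_weight was never set */
		if (sgs->sum_nr_running < sgs->group_weight)
			return true;

		/* so (possibly stale) utilization alone decides */
		if ((sgs->group_capacity * 100) >
				(sgs->group_util * imbalance_pct))
			return true;

		return false;
	}

When group_has_capacity() returns false, group_classify() falls back
to group_fully_busy, which is wrong for a group with no running tasks.
Copying group->group_weight into sgs before classification restores
the intended "fewer running tasks than CPUs means spare capacity"
shortcut.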

Fixes: 57abff067a08 ("sched/fair: Rework find_idlest_group()")
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 kernel/sched/fair.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 358a9734e79b..6de10afc117d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8503,6 +8503,8 @@ static inline void update_sg_wakeup_stats(struct sched_domain *sd,
 
 	sgs->group_capacity = group->sgc->capacity;
 
+	sgs->group_weight = group->group_weight;
+
 	sgs->group_type = group_classify(sd->imbalance_pct, group, sgs);
 
 	/*