From 27100d6f4cb5588e312cd0bf490a33e22e929076 Mon Sep 17 00:00:00 2001
From: Bharata B Rao <bharata@amd.com>
Date: Mon, 4 Oct 2021 16:27:05 +0530
Subject: [PATCH] sched/numa: Fix a few comments

References: bsc#1189999 (Scheduler functional and performance backports)
Patch-mainline: v5.16-rc1
Git-commit: 7d380f24fe662033fd21a65f678057abd293f76e

Fix a few comments to help understand them better.

Signed-off-by: Bharata B Rao <bharata@amd.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Mel Gorman <mgorman@suse.de>
Link: https://lkml.kernel.org/r/20211004105706.3669-4-bharata@amd.com
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 kernel/sched/fair.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5b7d6c444333..0ea8bcbb0cec 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2136,7 +2136,7 @@ static void numa_migrate_preferred(struct task_struct *p)
 }
 
 /*
- * Find out how many nodes on the workload is actively running on. Do this by
+ * Find out how many nodes the workload is actively running on. Do this by
  * tracking the nodes from which NUMA hinting faults are triggered. This can
  * be different from the set of nodes where the workload's memory is currently
  * located.
@@ -2190,7 +2190,7 @@ static void update_task_scan_period(struct task_struct *p,
 
 	/*
 	 * If there were no record hinting faults then either the task is
-	 * completely idle or all activity is areas that are not of interest
+	 * completely idle or all activity is in areas that are not of interest
 	 * to automatic numa balancing. Related to that, if there were failed
 	 * migration then it implies we are migrating too quickly or the local
 	 * node is overloaded. In either case, scan slower