From 3df05a1195907d351c66ac7d67487b1d226cf7a9 Mon Sep 17 00:00:00 2001
From: Mel Gorman <mgorman@techsingularity.net>
Date: Wed, 16 Feb 2022 13:07:14 +0000
Subject: [PATCH] mm/page_alloc: Drain the requested list first during bulk
 free

References: bnc#1193239,bnc#1193199,bnc#1193329
Patch-mainline: v5.18-rc1
Git-commit: d61372bc41cfe91d6170434fc44b6af49cd2c755

Prior to the series, pindex 0 (order-0 MIGRATE_UNMOVABLE) was always
skipped first, and the precise reason is forgotten. A potential reason
may have been to artificially preserve MIGRATE_UNMOVABLE, but there is
no reason why that would be optimal as it depends on the workload. The
more likely reason is that a pre-increment was less complicated than a
post-increment in terms of overall code flow. As free_pcppages_bulk()
now typically receives the pindex of the PCP list that exceeded high,
always start draining that list.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 mm/page_alloc.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 040416b5edbf..fffa9dfc1a8a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1448,6 +1448,10 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 	 * below while (list_empty(list)) loop.
 	 */
 	count = min(pcp->count, count);
+
+	/* Ensure requested pindex is drained first. */
+	pindex = pindex - 1;
+
 	while (count > 0) {
 		struct list_head *list;
 		int nr_pages;
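
For illustration, a minimal userspace sketch of why backing pindex up by
one makes the requested list drain first. Assumptions: the NR_PCP_LISTS
value and the counts[] contents below are made up for the example; the
do/while walk models the pre-increment round-robin selection loop that
follows this hunk in free_pcppages_bulk().

#include <stdio.h>

/* Illustrative only; the real NR_PCP_LISTS depends on kernel config. */
#define NR_PCP_LISTS 15

/*
 * Model of the pre-increment round-robin walk: advance pindex first,
 * wrap at the end, stop at the first non-empty list. counts[] stands
 * in for the per-order/migratetype PCP lists; nonzero means non-empty.
 */
static int next_nonempty(int pindex, const int counts[])
{
	do {
		pindex++;
		if (pindex > NR_PCP_LISTS - 1)
			pindex = 0;
	} while (counts[pindex] == 0);
	return pindex;
}

int main(void)
{
	int counts[NR_PCP_LISTS] = { [0] = 4, [3] = 2, [7] = 5 };
	int requested = 3;	/* the list that exceeded pcp->high */

	/* Old behaviour: the pre-increment skips list 3; list 7 drains first. */
	printf("first drained without the fix: %d\n",
	       next_nonempty(requested, counts));

	/* New behaviour: start one slot back so the pre-increment lands on
	 * the requested list; list 3 drains first. */
	printf("first drained with the fix:    %d\n",
	       next_nonempty(requested - 1, counts));

	return 0;
}

Note that the wrap check also covers the pindex == 0 case: -1
pre-increments to 0, so the decrement needs no special-casing.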