From 1979ba9634fc017fe0e68bc6b3032eb6943e17fe Mon Sep 17 00:00:00 2001
From: Andrey Ryabinin <aryabinin@virtuozzo.com>
Date: Wed, 4 Oct 2017 08:39:27 +0100
Subject: [PATCH] mm/mempolicy: fix NUMA_INTERLEAVE_HIT counter

Patch-Mainline: v4.14-rc5
Git-commit: de55c8b251974247edda38e952da8e8dd71683ec
References: VM Performance, bnc#959436

Commit 3a321d2a3dde ("mm: change the call sites of numa statistics items")
separated NUMA counters from zone counters, but the NUMA_INTERLEAVE_HIT
call site wasn't updated to use the new interface.  As a result,
alloc_page_interleave() actually increments NR_ZONE_INACTIVE_FILE instead
of NUMA_INTERLEAVE_HIT: both resolve to index 3 in their respective
enums, so the stale inc_zone_page_state() call silently bumps the wrong
counter.

Fix this by using the __inc_numa_state() interface to increment
NUMA_INTERLEAVE_HIT.
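
For reference, the interface change looks roughly like this (a minimal
sketch, not the diff below; __inc_numa_state() updates a raw per-cpu
counter via __this_cpu ops, so the caller must disable preemption around
it):

	/* Before 3a321d2a3dde: NUMA items were plain zone counters. */
	inc_zone_page_state(page, NUMA_INTERLEAVE_HIT);

	/* After: NUMA items live in a separate per-cpu array and need
	 * the NUMA-specific helper; it is not preempt-safe on its own,
	 * hence the explicit preempt_disable()/preempt_enable() pair.
	 */
	preempt_disable();
	__inc_numa_state(page_zone(page), NUMA_INTERLEAVE_HIT);
	preempt_enable();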

Link: http://lkml.kernel.org/r/20171003191003.8573-1-aryabinin@virtuozzo.com
Fixes: 3a321d2a3dde ("mm: change the call sites of numa statistics items")
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Cc: Kemi Wang <kemi.wang@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 mm/mempolicy.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index e0157546e6b5..ccfb949b4e7f 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1932,8 +1932,11 @@ static struct page *alloc_page_interleave(gfp_t gfp, unsigned order,
 
 	zl = node_zonelist(nid, gfp);
 	page = __alloc_pages(gfp, order, zl);
-	if (page && page_zone(page) == zonelist_zone(&zl->_zonerefs[0]))
-		inc_zone_page_state(page, NUMA_INTERLEAVE_HIT);
+	if (page && page_zone(page) == zonelist_zone(&zl->_zonerefs[0])) {
+		preempt_disable();
+		__inc_numa_state(page_zone(page), NUMA_INTERLEAVE_HIT);
+		preempt_enable();
+	}
 	return page;
 }
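
With the fix applied, interleaved allocations should again show up in the
interleave_hit field of /sys/devices/system/node/node*/numastat rather
than silently inflating the NR_ZONE_INACTIVE_FILE zone counter.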