From 0b3901b38d9d916f634e903ce7cd2a8ddd5b1559 Mon Sep 17 00:00:00 2001
From: Jan Kara <jack@suse.cz>
Date: Fri, 28 Dec 2018 00:39:01 -0800
Subject: [PATCH] mm: migration: factor out code to compute expected number of
page references
Git-commit: 0b3901b38d9d916f634e903ce7cd2a8ddd5b1559
Patch-mainline: v4.21-rc1
References: bsc#1084216

Patch series "mm: migrate: Fix page migration stalls for blkdev pages".

This patchset deals with page migration stalls that were reported by our
customer due to a block device page that had a buffer head in the bh LRU
cache.

The patchset modifies the page migration code so that buffer heads are
completely handled inside buffer_migrate_page() and then provides a new
migration helper for pages with buffer heads that is safe to use even for
block device pages and that also deals with the bh LRUs.

This patch (of 6):
Factor out a function to compute the expected number of page references in
migrate_page_move_mapping().  Note that we move the hpage_nr_pages() and
page_has_private() checks from under xas_lock_irq(); however, this is safe
since we hold the page lock.
[jack@suse.cz: fix expected_page_refs()]
Link: http://lkml.kernel.org/r/20181217131710.GB8611@quack2.suse.cz
Link: http://lkml.kernel.org/r/20181211172143.7358-2-jack@suse.cz
Signed-off-by: Jan Kara <jack@suse.cz>
Acked-by: Mel Gorman <mgorman@suse.de>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Acked-by: Jan Kara <jack@suse.cz>
---
mm/migrate.c | 26 +++++++++++++++++---------
 1 file changed, 17 insertions(+), 9 deletions(-)

--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -392,6 +392,22 @@ static inline bool buffer_migrate_lock_b
 }
 #endif /* CONFIG_BLOCK */
 
+static int expected_page_refs(struct page *page)
+{
+	int expected_count = 1;
+
+	/*
+	 * Device public or private pages have an extra refcount as they are
+	 * ZONE_DEVICE pages.
+	 */
+	expected_count += is_device_private_page(page);
+	expected_count += is_device_public_page(page);
+	if (page_mapping(page))
+		expected_count += 1 + page_has_private(page);
+
+	return expected_count;
+}
+
/*
* Replace the page in the mapping.
*
@@ -407,16 +423,9 @@ int migrate_page_move_mapping(struct add
 {
 	struct zone *oldzone, *newzone;
 	int dirty;
-	int expected_count = 1 + extra_count;
+	int expected_count = expected_page_refs(page) + extra_count;
 	void **pslot;
 
-	/*
-	 * Device public or private pages have an extra refcount as they are
-	 * ZONE_DEVICE pages.
-	 */
-	expected_count += is_device_private_page(page);
-	expected_count += is_device_public_page(page);
-
 	if (!mapping) {
 		/* Anonymous page without mapping */
 		if (page_count(page) != expected_count)
@@ -439,7 +448,6 @@ int migrate_page_move_mapping(struct add
 	pslot = radix_tree_lookup_slot(&mapping->page_tree,
 					page_index(page));
 
-	expected_count += 1 + page_has_private(page);
 	if (page_count(page) != expected_count ||
 	    radix_tree_deref_slot_protected(pslot, &mapping->tree_lock) != page) {
 		spin_unlock_irq(&mapping->tree_lock);