From: Shaohua Li <shli@fb.com>
Date: Tue, 3 Oct 2017 16:15:32 -0700
Subject: mm: fix data corruption caused by lazyfree page
Git-commit: 9625456cc76391b7f3f2809579126542a8ed4d39
Patch-mainline: v4.14-rc4
References: VM Functionality, bsc#1061775

MADV_FREE clears the pte dirty bit and then marks the page lazyfree
(clears SwapBacked).  There is no lock to prevent the page from being
added to swap cache between these two steps by page reclaim.  If page
reclaim finds such a page, it simply adds the page to swap cache
without paging it out to swap, because the page is marked as clean.
The next page fault will then read data from the swap slot, which
doesn't contain the original data, so we have data corruption.  To fix
the issue, we mark the page dirty and page it out.
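
For illustration, a minimal userspace sketch of the MADV_FREE pattern
involved (the constants here are illustrative; actually hitting the
race also requires page reclaim to run in the window between the two
steps, e.g. under memory pressure):

  #include <string.h>
  #include <sys/mman.h>

  int main(void)
  {
  	char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
  		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

  	if (p == MAP_FAILED)
  		return 1;

  	memset(p, 0xaa, 4096);		/* dirty the anonymous page */
  	madvise(p, 4096, MADV_FREE);	/* clear pte dirty, then SwapBacked */

  	/*
  	 * If reclaim adds the page to swap cache between those two
  	 * steps, the clean page is never written to its swap slot.  A
  	 * later fault can then read whatever the slot happens to
  	 * contain, rather than the original data or the zero page that
  	 * MADV_FREE semantics would allow.
  	 */
  	return p[0] ? 0 : 2;	/* fault the page back in */
  }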

However, we shouldn't dirty all pages which are clean and in swap
cache; a swapped-in page is in swap cache and clean too.  So we only
dirty pages that are added to swap cache by page reclaim, which cannot
be swapped-in pages.  As Minchan suggested, simply dirtying the page in
add_to_swap does the job.

Fixes: 802a3a92ad7a ("mm: reclaim MADV_FREE pages")
Link: http://lkml.kernel.org/r/08c84256b007bf3f63c91d94383bd9eb6fee2daa.1506446061.git.shli@fb.com
Signed-off-by: Shaohua Li <shli@fb.com>
Reported-by: Artem Savkov <asavkov@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Hillf Danton <hdanton@sina.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: <stable@vger.kernel.org>	[4.12+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 mm/swap_state.c |   13 +++++++++++++
 1 file changed, 13 insertions(+)

--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -208,6 +208,19 @@ int add_to_swap(struct page *page, struc
 			__GFP_HIGH|__GFP_NOMEMALLOC|__GFP_NOWARN);
 
 	if (!err) {
+		/*
+		 * Normally the page will be dirtied in unmap because its pte
+		 * should be dirty. A special case is a MADV_FREE page: its
+		 * pte could have the dirty bit cleared while the page's
+		 * SwapBacked bit is still set, because clearing the dirty
+		 * bit and the SwapBacked bit is not done under a lock. For
+		 * such a page, unmap will not set the dirty bit, so page
+		 * reclaim will not write the page out. This can cause data
+		 * corruption when the page is swapped in later. Always
+		 * setting the dirty bit for the page solves the problem.
+		 */
+		set_page_dirty(page);
+
 		return 1;
 	} else {	/* -ENOMEM radix-tree allocation failure */
 		/*