From 7892d3555aa9b3713806d9385ecd0b2b3ece6790 Mon Sep 17 00:00:00 2001
From: Davidlohr Bueso <dave@stgolabs.net>
Date: Wed, 26 Sep 2018 14:16:21 -0700
Subject: [PATCH] Revert "mm,vmacache: optimize overflow system-wide flushing"
Patch-mainline: Never, SLE specific
References: bsc#1108399 CVE-2018-17182

This reverts:

    6b4ebc3a9078 (mm,vmacache: optimize overflow system-wide flushing)

Due to kABI constraints, this fix is a SLE-specific change
that is equivalent in effect to upstream's approach:

   7a9cdebdcc17 (mm: get rid of vmacache_flush_all() entirely)

By disabling the "fastpath" (which buys little anyway, since
these code paths are rarely run) we address the security flaw.

Signed-off-by: Davidlohr Bueso <dbueso@suse.de>

---
 mm/vmacache.c | 10 ----------
 1 file changed, 10 deletions(-)

diff --git a/mm/vmacache.c b/mm/vmacache.c
index 7ffa0ee341b5..117d849513ed 100644
--- a/mm/vmacache.c
+++ b/mm/vmacache.c
@@ -20,16 +20,6 @@ void vmacache_flush_all(struct mm_struct *mm)
 
 	count_vm_vmacache_event(VMACACHE_FULL_FLUSHES);
 
-	/*
-	 * Single threaded tasks need not iterate the entire
-	 * list of process. We can avoid the flushing as well
-	 * since the mm's seqnum was increased and don't have
-	 * to worry about other threads' seqnum. Current's
-	 * flush will occur upon the next lookup.
-	 */
-	if (atomic_read(&mm->mm_users) == 1)
-		return;
-
 	rcu_read_lock();
 	for_each_process_thread(g, p) {
 		/*
-- 
2.16.4