From: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Subject: s390/mm: fix write access check in gup_huge_pmd()
Patch-mainline: v4.14-rc2
Git-commit: ba385c0594e723d41790ecfb12c610e6f90c7785
References: bnc#1066983, LTC#160551

Description:  s390/mm: fix write access check in gup_huge_pmd()
Symptom:      Possible data corruption when using async I/O on large page
              mappings (transparent hugepages).
Problem:      The check for the _SEGMENT_ENTRY_PROTECT bit in
              gup_huge_pmd() is the wrong way around. It must not be set
              for write==1, and not be checked for write==0. Allowing
              write==1 with protection bit set, instead of breaking out
              to the slow path, will result in a missing faultin_page()
              to clear the protection bit (for valid writable mappings),
              and the async I/O write operation will fail to write to
              such a mapping.
Solution:     Fix it by correctly checking the protection bit like it is
              also done in gup_pte_range() and gup_huge_pud().
Reproduction: Async I/O workload on buffers that are mapped as transparent
              hugepages.

Upstream-Description:

              s390/mm: fix write access check in gup_huge_pmd()

              The check for the _SEGMENT_ENTRY_PROTECT bit in gup_huge_pmd() is the
              wrong way around. It must not be set for write==1, and not be checked for
              write==0. Fix this similarly to how it was fixed for ptes a long
              time ago in commit 25591b070336 ("[S390] fix get_user_pages_fast").

              One impact of this bug would be unnecessarily using the gup slow path for
              write==0 on r/w mappings. A potentially more severe impact would be that
              gup_huge_pmd() will succeed for write==1 on r/o mappings.

              Cc: <stable@vger.kernel.org>
              Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
              Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>


Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Acked-by: Hannes Reinecke <hare@suse.com>
---
 arch/s390/mm/gup.c |    7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

--- a/arch/s390/mm/gup.c
+++ b/arch/s390/mm/gup.c
@@ -56,13 +56,12 @@ static inline int gup_pte_range(pmd_t *p
 static inline int gup_huge_pmd(pmd_t *pmdp, pmd_t pmd, unsigned long addr,
 		unsigned long end, int write, struct page **pages, int *nr)
 {
-	unsigned long mask, result;
 	struct page *head, *page;
+	unsigned long mask;
 	int refs;
 
-	result = write ? 0 : _SEGMENT_ENTRY_PROTECT;
-	mask = result | _SEGMENT_ENTRY_INVALID;
-	if ((pmd_val(pmd) & mask) != result)
+	mask = (write ? _SEGMENT_ENTRY_PROTECT : 0) | _SEGMENT_ENTRY_INVALID;
+	if ((pmd_val(pmd) & mask) != 0)
 		return 0;
 	VM_BUG_ON(!pfn_valid(pmd_val(pmd) >> PAGE_SHIFT));