From: Ralph Campbell <rcampbell@nvidia.com>
Date: Fri, 23 Aug 2019 15:17:53 -0700
Subject: mm/hmm: hmm_range_fault() infinite loop
Git-commit: c18ce674d548c00faa6b7e760bacbaf1f39315f3
Patch-mainline: v5.4-rc1
References: HMM Functionality, jsc#SLE-8176, jsc#SLE-16387

Normally, callers to handle_mm_fault() are supposed to check
vma->vm_flags first. hmm_range_fault() checks for VM_READ but not for
VM_WRITE when the caller requests that a page be faulted in with write
permission (via the hmm_range.pfns[] value). If the vma is
write-protected, this can result in an infinite loop:

  hmm_range_fault()
    walk_page_range()
      ...
      hmm_vma_walk_hole()
        hmm_vma_walk_hole_()
          hmm_vma_do_fault()
            handle_mm_fault(FAULT_FLAG_WRITE)
            /* returns VM_FAULT_WRITE */
          /* returns -EBUSY */
        /* returns -EBUSY */
      /* returns -EBUSY */
    /* loops on -EBUSY and range->valid */

Prevent this by checking for vma->vm_flags & VM_WRITE before calling
handle_mm_fault().
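
A minimal sketch of the resulting guard as it lands in
hmm_vma_walk_hole_() (the exact lines appear in the diff below; the
comment is illustrative only):

  /* Refuse a write fault on a vma that does not permit writes,
   * surfacing a hard error instead of looping on -EBUSY.
   */
  if (write_fault && walk->vma && !(walk->vma->vm_flags & VM_WRITE))
          return -EPERM;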

Link: https://lore.kernel.org/r/20190823221753.2514-3-rcampbell@nvidia.com
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 mm/hmm.c |    3 +++
 1 file changed, 3 insertions(+)

--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -292,6 +292,9 @@ static int hmm_vma_walk_hole_(unsigned l
 	hmm_vma_walk->last = addr;
 	i = (addr - range->start) >> PAGE_SHIFT;
 
+	if (write_fault && walk->vma && !(walk->vma->vm_flags & VM_WRITE))
+		return -EPERM;
+
 	for (; addr < end; addr += PAGE_SIZE, i++) {
 		pfns[i] = range->values[HMM_PFN_NONE];
 		if (fault || write_fault) {