From: Ralph Campbell <rcampbell@nvidia.com>
Date: Fri, 23 Aug 2019 15:17:53 -0700
Subject: mm/hmm: hmm_range_fault() infinite loop
Git-commit: c18ce674d548c00faa6b7e760bacbaf1f39315f3
Patch-mainline: v5.4-rc1
References: HMM Functionality, jsc#SLE-8176

Normally, callers to handle_mm_fault() are supposed to check the
vma->vm_flags first. hmm_range_fault() checks for VM_READ but doesn't
check for VM_WRITE if the caller requests a page to be faulted in with
write permission (via the hmm_range.pfns[] value). If the vma is write
protected, this can result in an infinite loop:

  hmm_range_fault()
    walk_page_range()
      ...
        hmm_vma_walk_hole()
          hmm_vma_walk_hole_()
            hmm_vma_do_fault()
              handle_mm_fault(FAULT_FLAG_WRITE)
                /* returns VM_FAULT_WRITE */
            /* returns -EBUSY */
        /* returns -EBUSY */
    /* returns -EBUSY */
  /* loops on -EBUSY and range->valid */

Prevent this by checking for vma->vm_flags & VM_WRITE before calling
handle_mm_fault().

Link: https://lore.kernel.org/r/20190823221753.2514-3-rcampbell@nvidia.com
Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 mm/hmm.c | 3 +++
 1 file changed, 3 insertions(+)

--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -357,6 +357,9 @@ static int hmm_vma_walk_hole_(unsigned l
 	page_size = hmm_range_page_size(range);
 	i = (addr - range->start) >> range->page_shift;
 
+	if (write_fault && walk->vma && !(walk->vma->vm_flags & VM_WRITE))
+		return -EPERM;
+
 	for (; addr < end; addr += page_size, i++) {
 		pfns[i] = range->values[HMM_PFN_NONE];
 		if (fault || write_fault) {