From: Doug Ledford <dledford@redhat.com>
Date: Fri, 21 Sep 2018 11:30:13 -0400
Subject: RDMA/umem: Fix potential addition overflow
Patch-mainline: v4.20-rc1
Git-commit: c6ce580716372d71cd119bacf73f14a62e9af2ea
References: bsc#1103992 FATE#326009

Given a large enough memory allocation, it is possible to wrap the
pinned_vm counter.  Check for addition overflow to prevent such
eventualities.

Fixes: 40ddacf2dda9 ("RDMA/umem: Don't hold mmap_sem for too long")
Reported-by: Jason Gunthorpe <jgg@ziepe.ca>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Acked-by: Thomas Bogendoerfer <tbogendoerfer@suse.de>
---
 drivers/infiniband/core/umem.c |    8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -85,6 +85,7 @@ struct ib_umem *ib_umem_get(struct ib_uc
 	struct page **page_list;
 	struct vm_area_struct **vma_list;
 	unsigned long lock_limit;
+	unsigned long new_pinned;
 	unsigned long cur_base;
 	struct mm_struct *mm;
 	unsigned long npages;
@@ -160,12 +161,13 @@ struct ib_umem *ib_umem_get(struct ib_uc
 	lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
 
 	down_write(&mm->mmap_sem);
-	mm->pinned_vm += npages;
-	if ((mm->pinned_vm > lock_limit) && !capable(CAP_IPC_LOCK)) {
+	if (check_add_overflow(mm->pinned_vm, npages, &new_pinned) ||
+	    (new_pinned > lock_limit && !capable(CAP_IPC_LOCK))) {
 		up_write(&mm->mmap_sem);
 		ret = -ENOMEM;
-		goto vma;
+		goto out;
 	}
+	mm->pinned_vm = new_pinned;
 	up_write(&mm->mmap_sem);
 
 	cur_base = addr & PAGE_MASK;
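
A minimal userspace sketch of the accounting pattern the patch introduces, for illustration only (it is not part of the patch). The kernel's check_add_overflow() helper from <linux/overflow.h> resolves to __builtin_add_overflow() on current GCC/Clang, so that builtin is used here directly. The names pinned, npages and lock_limit mirror the variables in ib_umem_get(), but the helper, its signature and the sample values are made up for this sketch.

/*
 * Standalone demonstration (userspace, not kernel code) of checking the
 * addition before committing it, as the patched ib_umem_get() now does.
 */
#include <stdbool.h>
#include <stdio.h>

/*
 * Returns false and leaves *pinned untouched when the addition would wrap
 * or exceed the limit; commits the new value and returns true otherwise.
 */
static bool try_account_pages(unsigned long *pinned, unsigned long npages,
			      unsigned long lock_limit, bool cap_ipc_lock)
{
	unsigned long new_pinned;

	if (__builtin_add_overflow(*pinned, npages, &new_pinned) ||
	    (new_pinned > lock_limit && !cap_ipc_lock))
		return false;	/* would wrap, or exceeds the MEMLOCK limit */

	*pinned = new_pinned;	/* commit only after both checks pass */
	return true;
}

int main(void)
{
	unsigned long pinned = 0;
	unsigned long lock_limit = 16384;	/* pages; stands in for rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT */

	/* A request within the limit is accounted normally. */
	printf("small pin: %s\n",
	       try_account_pages(&pinned, 1024, lock_limit, false) ? "ok" : "rejected");

	/*
	 * A request large enough to wrap the counter is rejected, instead of
	 * silently leaving a tiny pinned_vm value as the unchecked addition did.
	 */
	printf("huge pin: %s\n",
	       try_account_pages(&pinned, (unsigned long)-2, lock_limit, false) ? "ok" : "rejected");

	printf("pinned_vm = %lu pages\n", pinned);
	return 0;
}

Compiled with gcc and run, this prints "small pin: ok", "huge pin: rejected" and "pinned_vm = 1024 pages", matching the patched behaviour: on failure the counter is never incremented, which is also why the error path can branch to out instead of the unwind label that used to subtract npages again.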