From: Shiraz Saleem <shiraz.saleem@intel.com>
Date: Thu, 28 Mar 2019 11:49:44 -0500
Subject: RDMA/cxgb: Use correct sizing on buffers holding page DMA addresses
Patch-mainline: v5.2-rc1
Git-commit: 5f818d676ac455bbc812ffaaf5bf780be5465114
References: bsc#1136348 jsc#SLE-4684

The PBL array that holds the page DMA addresses is sized off umem->nmap.
This can potentially cause out-of-bounds accesses on the PBL array when
iterating the umem DMA-mapped SGL. This is because if umem pages are
combined, umem->nmap can be much lower than the number of system pages
in umem.

Use ib_umem_num_pages() to size this array.

Cc: Potnuri Bharat Teja <bharat@chelsio.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Acked-by: Thomas Bogendoerfer <tbogendoerfer@suse.de>
---
drivers/infiniband/hw/cxgb3/iwch_provider.c | 2 +-
drivers/infiniband/hw/cxgb4/mem.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
--- a/drivers/infiniband/hw/cxgb3/iwch_provider.c
+++ b/drivers/infiniband/hw/cxgb3/iwch_provider.c
@@ -549,7 +549,7 @@ static struct ib_mr *iwch_reg_user_mr(st
 	shift = mhp->umem->page_shift;
-	n = mhp->umem->nmap;
+	n = ib_umem_num_pages(mhp->umem);
 	err = iwch_alloc_pbl(mhp, n);
 	if (err)
--- a/drivers/infiniband/hw/cxgb4/mem.c
+++ b/drivers/infiniband/hw/cxgb4/mem.c
@@ -543,7 +543,7 @@ struct ib_mr *c4iw_reg_user_mr(struct ib
 	shift = mhp->umem->page_shift;
-	n = mhp->umem->nmap;
+	n = ib_umem_num_pages(mhp->umem);
 	err = alloc_pbl(mhp, n);
 	if (err)
 		goto err_umem_release;