From: Eric Dumazet <edumazet@google.com>
Date: Wed, 29 Jul 2020 18:57:55 -0700
Subject: RDMA/umem: Add a schedule point in ib_umem_get()
Patch-mainline: v5.9-rc1
Git-commit: 928da37a229f344424ffc89c9a58feb2368bb018
References: jsc#SLE-15176

Mapping as little as 64GB can take more than 10 seconds, triggering issues
on kernels with CONFIG_PREEMPT_NONE=y.

ib_umem_get() already splits the work into 2MB units on x86_64, so adding a
cond_resched() in the long-lasting loop is enough to solve the issue.
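
As a hedged illustration of the pattern this patch applies (pin_region(),
pin_one_chunk(), struct region and nr_chunks are hypothetical names used
only for this sketch, not part of the driver): on a CONFIG_PREEMPT_NONE=y
kernel a long-running loop in process context never yields the CPU unless
it reaches an explicit scheduling point, so cond_resched() is placed at the
top of each iteration:

	#include <linux/sched.h>	/* cond_resched() */
	#include <linux/types.h>

	/* Hypothetical per-mapping state; stands in for struct ib_umem. */
	struct region;

	/*
	 * Hypothetical helper pinning one 2MB chunk; stands in for the
	 * get_user_pages() call inside ib_umem_get().
	 */
	int pin_one_chunk(struct region *r, unsigned long idx);

	static int pin_region(struct region *r, unsigned long nr_chunks)
	{
		unsigned long i;
		int ret;

		for (i = 0; i < nr_chunks; i++) {
			cond_resched();	/* voluntary scheduling point */
			ret = pin_one_chunk(r, i);
			if (ret)
				return ret;
		}
		return 0;
	}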

Note that sg_alloc_table() can still use more than 100 ms, which is also
problematic. This might be addressed later in ib_umem_add_sg_table(),
adding new blocks in sgl on demand.

Link: https://lore.kernel.org/r/20200730015755.1827498-1-edumazet@google.com
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Acked-by: Thomas Bogendoerfer <tbogendoerfer@suse.de>
---
 drivers/infiniband/core/umem.c |    1 +
 1 file changed, 1 insertion(+)

--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -261,6 +261,7 @@ struct ib_umem *ib_umem_get(struct ib_de
 	sg = umem->sg_head.sgl;
 
 	while (npages) {
+		cond_resched();
 		down_read(&mm->mmap_sem);
 		ret = get_user_pages(cur_base,
 				     min_t(unsigned long, npages,