From: Ben Skeggs <bskeggs@redhat.com>
Date: Thu, 7 Dec 2017 15:25:14 +1000
Subject: drm/nouveau: avoid GPU page sizes > PAGE_SIZE for buffer objects in
 host memory
Git-commit: f29f18eb952bc3e71deedf8bd8fc902f66853c48
Patch-mainline: v4.15-rc5
References: FATE#326289 FATE#326079 FATE#326049 FATE#322398 FATE#326166

While the Tegra (GK20A, GM20B, GP10B) MMUs support large pages in host
memory, we're currently lacking IOMMU support for merging system pages
into large enough chunks to be mapped as such by the GPU.

The core VMM code can automatically determine the best page size to
map with, which is intended for exactly these situations, but for
various complicated reasons the DRM currently forces the page size
selection on a per-BO basis.
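
As a rough illustration only (this is not the kernel code; the
standalone framing, names, and values are assumptions), the per-BO
selection behaves like the following model, where host placements
are now capped at PAGE_SHIFT:

  #include <stdbool.h>
  #include <stdio.h>

  #define PAGE_SHIFT 12 /* assume 4 KiB kernel pages */

  struct page_size {
          int shift;  /* GPU page size, log2 */
          bool host;  /* usable for system (host) memory */
          bool vram;  /* usable for VRAM */
  };

  /* Walk the table (sorted largest-first) and return the first
   * page size compatible with the BO's placement. */
  static int select_page_shift(const struct page_size *page, int n,
                               bool host_bo)
  {
          for (int i = 0; i < n; i++) {
                  if (!host_bo && !page[i].vram)
                          continue;
                  /* The fix: without IOMMU merging, a GPU page larger
                   * than a CPU page cannot be backed by host memory. */
                  if (host_bo &&
                      (!page[i].host || page[i].shift > PAGE_SHIFT))
                          continue;
                  return page[i].shift;
          }
          return -1;
  }

  int main(void)
  {
          /* Illustrative Tegra-like table: 128 KiB big pages plus
           * 4 KiB small pages, host memory only (no VRAM). */
          const struct page_size page[] = {
                  { .shift = 17, .host = true, .vram = false },
                  { .shift = 12, .host = true, .vram = false },
          };

          /* Prints 12: the 128 KiB entry is skipped for a host BO. */
          printf("host BO -> shift %d\n",
                 select_page_shift(page, 2, true));
          return 0;
  }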

This should fix breakage reported on Tegra GPUs in the meantime, until
one or both of the above issues are resolved properly.

Reported-by: Mikko Perttunen <cyndis@kapsi.fi>
Fixes: 7dc6a446da7c ("drm/nouveau: improve selection of GPU page size")
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Tested-by: Thierry Reding <treding@nvidia.com>
Acked-by: Petr Tesarik <ptesarik@suse.com>
---
 drivers/gpu/drm/nouveau/nouveau_bo.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -262,7 +262,8 @@ nouveau_bo_new(struct nouveau_cli *cli,
 		if (cli->device.info.family > NV_DEVICE_INFO_V0_CURIE &&
 		    (flags & TTM_PL_FLAG_VRAM) && !vmm->page[i].vram)
 			continue;
-		if ((flags & TTM_PL_FLAG_TT  ) && !vmm->page[i].host)
+		if ((flags & TTM_PL_FLAG_TT) &&
+		    (!vmm->page[i].host || vmm->page[i].shift > PAGE_SHIFT))
 			continue;
 
 		/* Select this page size if it's the first that supports