From: Halil Pasic <pasic@linux.ibm.com>
Date: Wed, 24 Jul 2019 00:51:55 +0200
Subject: s390/dma: provide proper ARCH_ZONE_DMA_BITS value
Git-commit: 1a2dcff881059dedc14fafc8a442664c8dbd60f1
Patch-mainline: v5.3-rc2
References: jsc#SLE-6197 FATE#327012 bsc#1140559 LTC#173150

On s390 ZONE_DMA covers the first 2G of memory, i.e. ARCH_ZONE_DMA_BITS
should be 31. The current value is 24, which makes
__dma_direct_alloc_pages() take a wrong turn first (although it recovers
afterwards).

Let's correct the ARCH_ZONE_DMA_BITS value and avoid the wrong turn.
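
For reference, a minimal standalone sketch of the zone selection done by
__dma_direct_optimal_gfp_mask() in kernel/dma/direct.c. The helper below
is a simplified, hypothetical stand-in for that function, not the kernel
code itself; the string return values are illustrative only:

    #include <stdio.h>
    #include <stdint.h>

    /* Mirrors the kernel's DMA_BIT_MASK() from linux/dma-mapping.h. */
    #define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

    /* Simplified stand-in for __dma_direct_optimal_gfp_mask():
     * optimistically pick the lowest zone the mask fits into. */
    static const char *optimal_zone(uint64_t phys_mask,
                                    unsigned int zone_dma_bits)
    {
        if (phys_mask <= DMA_BIT_MASK(zone_dma_bits))
            return "GFP_DMA";
        if (phys_mask <= DMA_BIT_MASK(32))
            return "GFP_DMA32";
        return "GFP_KERNEL"; /* kernel returns 0: no zone restriction */
    }

    int main(void)
    {
        uint64_t mask31 = DMA_BIT_MASK(31); /* s390 ZONE_DMA limit: 2G */

        /* Old default (24): the first pick is GFP_DMA32, a no-op on
         * s390 (no ZONE_DMA32), hence the wrong turn. */
        printf("bits=24: %s\n", optimal_zone(mask31, 24));
        /* With 31, the same 2G mask is served from ZONE_DMA directly. */
        printf("bits=31: %s\n", optimal_zone(mask31, 31));
        return 0;
    }

This prints GFP_DMA32 for the old value and GFP_DMA for the corrected
one, which is the detour (and its fix) described above.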

Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
Reported-by: Petr Tesarik <ptesarik@suse.cz>
Fixes: c61e9637340e ("dma-direct: add support for allocation from ZONE_DMA and ZONE_DMA32")
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Petr Tesarik <ptesarik@suse.com>
---
 arch/s390/include/asm/page.h |    2 ++
 1 file changed, 2 insertions(+)

--- a/arch/s390/include/asm/page.h
+++ b/arch/s390/include/asm/page.h
@@ -172,6 +172,8 @@ static inline int devmem_is_allowed(unsi
 #define VM_DATA_DEFAULT_FLAGS	(VM_READ | VM_WRITE | \
 				 VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
 
+#define ARCH_ZONE_DMA_BITS	31
+
 #include <asm-generic/memory_model.h>
 #include <asm-generic/getorder.h>