From: Heiko Carstens <hca@linux.ibm.com>
Date: Tue, 24 Oct 2023 10:15:19 +0200
Subject: s390/cmma: fix detection of DAT pages
Git-commit: 44d93045247661acbd50b1629e62f415f2747577
Patch-mainline: v6.7-rc1
References: LTC#203996 bsc#1217087

If the cmma no-dat feature is available, the kernel page tables are walked
to identify and mark all pages which are used for address translation (all
region, segment, and page tables). In a subsequent loop all other pages are
marked as "no-dat" pages with the ESSA instruction.
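As a rough sketch of the two phases (simplified, not the actual
mark_kernel_*() code; for_each_kernel_page() and essa_set_no_dat() are
made-up placeholder names, while mark_kernel_pgd() and PG_arch_1 are the
real ones used below):

	/* Phase 1: walk the kernel page tables and set PG_arch_1 on every
	 * page that backs a region, segment, or page table.
	 */
	mark_kernel_pgd();

	/* Phase 2: every page left without PG_arch_1 is not used for DAT,
	 * so report it to the hypervisor as "no-dat" via ESSA.
	 */
	for_each_kernel_page(page) {
		if (!test_bit(PG_arch_1, &page->flags))
			essa_set_no_dat(page);
	}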

This information is visible to the hypervisor, so that it can optimize
purging of guest TLB entries. The initial loop, however, is incorrect:
only the first three of the four pages which belong to segment and region
tables will be marked as being used for DAT. The last page is incorrectly
marked as no-dat.
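
For context: on s390 a region or segment table has 2048 entries of 8 bytes
each, i.e. 16KB, which spans four 4KB pages. With an upper bound of 3 the
loop therefore leaves the fourth page of every such table unmarked, and the
subsequent pass reports it to the hypervisor as no-dat.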

This can result in incorrect guest TLB flushes.

Fix this by simply marking all four pages.

Cc: <stable@vger.kernel.org>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Acked-by: Miroslav Franc <mfranc@suse.cz>
---
 arch/s390/mm/page-states.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/arch/s390/mm/page-states.c
+++ b/arch/s390/mm/page-states.c
@@ -137,7 +137,7 @@ static void mark_kernel_pud(p4d_t *p4d,
 		if ((pud_val(*pud) & _REGION_ENTRY_TYPE_MASK) >=
 		    _REGION_ENTRY_TYPE_R3) {
 			page = virt_to_page(pud_val(*pud));
-			for (i = 0; i < 3; i++)
+			for (i = 0; i < 4; i++)
 				set_bit(PG_arch_1, &page[i].flags);
 		}
 		mark_kernel_pmd(pud, addr, next);
@@ -167,7 +167,7 @@ static void mark_kernel_pgd(void)
 		if ((pgd_val(*pgd) & _REGION_ENTRY_TYPE_MASK) >=
 		    _REGION_ENTRY_TYPE_R2) {
 			page = virt_to_page(pgd_val(*pgd));
-			for (i = 0; i < 3; i++)
+			for (i = 0; i < 4; i++)
 				set_bit(PG_arch_1, &page[i].flags);
 		}
 		mark_kernel_pud(pgd, addr, next);