From: Mark Rutland <mark.rutland@arm.com>
Date: Wed, 14 Aug 2019 14:28:47 +0100
Subject: arm64: memory: fix flipped VA space fallout
Patch-mainline: v5.4-rc1
Git-commit: 233947ef16a18952d22786770dab1ddafa1ac377
References: jsc#SLE-16407
VA_START used to be the start of the TTBR1 address space, but now it's a
point midway through. In a couple of places we still use VA_START to get
the start of the TTBR1 address space, so let's fix these up to use
PAGE_OFFSET instead.
Fixes: 14c127c957c1c607 ("arm64: mm: Flip kernel VA space")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Acked-by: Lee, Chun-Yi <jlee@suse.com>
---
arch/arm64/mm/dump.c | 2 +-
arch/arm64/mm/fault.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
--- a/arch/arm64/mm/dump.c
+++ b/arch/arm64/mm/dump.c
@@ -391,7 +391,7 @@ void ptdump_check_wx(void)
.check_wx = true,
};
- walk_pgd(&st, &init_mm, VA_START);
+ walk_pgd(&st, &init_mm, PAGE_OFFSET);
note_page(&st, 0, 0, 0);
if (st.wx_pages || st.uxn_pages)
pr_warn("Checked W+X mappings: FAILED, %lu W+X pages found, %lu non-UXN pages found\n",
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -109,7 +109,7 @@ static inline bool is_ttbr0_addr(unsigne
static inline bool is_ttbr1_addr(unsigned long addr)
{
/* TTBR1 addresses may have a tag if KASAN_SW_TAGS is in use */
- return arch_kasan_reset_tag(addr) >= VA_START;
+ return arch_kasan_reset_tag(addr) >= PAGE_OFFSET;
}
/*