From e3f408b12a3df1aa787dd6f70c285e359a4c00c3 Mon Sep 17 00:00:00 2001
From: Michael Neuling <mikey@neuling.org>
Date: Mon, 10 Sep 2018 15:44:05 +1000
Subject: [PATCH] powerpc: Avoid code patching freed init sections

References: bnc#1107735
Patch-mainline: submitted http://patchwork.ozlabs.org/patch/967867/

This stops us from doing code patching in init sections after they've
been freed.

In this chain:
  kvm_guest_init() ->
    kvm_use_magic_page() ->
      fault_in_pages_readable() ->
        __get_user() ->
          __get_user_nocheck() ->
            barrier_nospec();

We have a code patching location at barrier_nospec() and
kvm_guest_init() is an init function. This whole chain gets inlined,
so when we free the init section (and with it kvm_guest_init()), this
code goes away and should no longer be patched.

We've seen this as userspace memory corruption when using a memory
checker while doing partition migration testing on PowerVM (this
starts the code patching post migration via
/sys/kernel/mobility/migration). In theory, it could also happen when
using /sys/kernel/debug/powerpc/barrier_nospec.

With this patch there is a small chance of a race if we code patch
between the init section being freed and system_state being set to
SYSTEM_RUNNING (in kernel_init()), but that is a small window and an
unlikely time for any code patching to occur.
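
As a stand-alone illustration only (not part of the patch), the guard
behaves roughly as in the sketch below; init_text, init_begin, init_end
and init_is_freed are made-up stand-ins for the kernel's init section,
its __init_begin/__init_end bounds and the system_state >=
SYSTEM_RUNNING test:

  #include <stdbool.h>
  #include <stdio.h>

  static unsigned int init_text[4];            /* stand-in init section */
  static unsigned int *init_begin = init_text;
  static unsigned int *init_end = init_text + 4;
  static bool init_is_freed;                   /* stand-in for system_state check */

  static bool in_init_section(unsigned int *addr)
  {
          return addr >= init_begin && addr < init_end;
  }

  static int patch_instruction(unsigned int *addr, unsigned int instr)
  {
          /* Skip, rather than write through, a patch site that was freed */
          if (in_init_section(addr) && init_is_freed)
                  return 0;
          *addr = instr;
          return 0;
  }

  int main(void)
  {
          patch_instruction(&init_text[0], 1);  /* applied: init still live */
          init_is_freed = true;
          patch_instruction(&init_text[0], 2);  /* skipped: init freed */
          printf("%u\n", init_text[0]);         /* prints 1 */
          return 0;
  }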

cc: stable@vger.kernel.org # 4.13+
Signed-off-by: Michael Neuling <mikey@neuling.org>
Acked-by: Michal Suchanek <msuchanek@suse.de>
---
 arch/powerpc/lib/code-patching.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index 3dfe98786ab6..99ee07a05a2f 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -13,15 +13,33 @@
 #include <linux/init.h>
 #include <linux/mm.h>
 #include <asm/page.h>
+#include <asm/sections.h>
 #include <asm/code-patching.h>
 #include <linux/uaccess.h>
 #include <linux/kprobes.h>
 
+static inline bool in_init_section(unsigned int *patch_addr)
+{
+	if (patch_addr < (unsigned int *)__init_begin)
+		return false;
+	if (patch_addr >= (unsigned int *)__init_end)
+		return false;
+	return true;
+}
+
+static inline bool init_freed(void)
+{
+	return (system_state >= SYSTEM_RUNNING);
+}
 
 int patch_instruction(unsigned int *addr, unsigned int instr)
 {
 	int err;
 
+	/* Make sure we aren't patching a freed init section */
+	if (in_init_section(addr) && init_freed())
+		return 0;
+
 	__put_user_size(instr, addr, 4, err);
 	if (err)
 		return err;
-- 
2.13.7