From: Alexei Starovoitov <ast@kernel.org>
Date: Tue, 30 Jul 2019 18:38:26 -0700
Subject: bpf: fix x64 JIT code generation for jmp to 1st insn
Patch-mainline: v5.3-rc6
Git-commit: 7c2e988f400e83501e0a3568250780609b7c8263
References: bsc#1178163

The introduction of bounded loops exposed an old bug in the x64 JIT.
The JIT maintains an array of offsets to the end of all instructions to
compute jmp offsets.
addrs[0] - offset of the end of the 1st insn (that includes the prologue).
addrs[1] - offset of the end of the 2nd insn.
The JIT didn't keep the offset of the beginning of the 1st insn,
since classic BPF didn't have backward jumps and valid extended BPF
couldn't branch to the 1st insn, because it didn't allow loops.
With bounded loops it's possible to construct a valid program that
jumps backwards to the 1st insn.
Fix JIT by computing:
addrs[0] - offset of the end of prologue == start of the 1st insn.
addrs[1] - offset of the end of 1st insn.
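
For illustration, jump displacements are resolved through this table;
simplified from the BPF_JMP handling in do_jit(), with comments added
here:

	/* displacement from the end of the current (i-th) insn to the
	 * start of the target insn; a backward branch to the 1st insn
	 * has insn->off == -i, so the lookup is addrs[0], which holds
	 * a valid offset (start of the 1st insn) only after this fix.
	 */
	jmp_offset = addrs[i + insn->off] - addrs[i];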

v1->v2:
- Yonghong noticed a bug in jit linfo.
  Fix it by passing 'addrs + 1' to bpf_prog_fill_jited_linfo(),
  since it expects the insn_to_jit_off array to be offsets to the last byte.
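
  For reference, the upstream v2 call site becomes the following (this
  hunk is dropped from the SLE12-SP5 backport, see the NOTE below):

	/* skip the prologue entry so that entry i is the end of insn i */
	bpf_prog_fill_jited_linfo(prog, addrs + 1);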

Reported-by: syzbot+35101610ff3e83119b1b@syzkaller.appspotmail.com
Fixes: 2589726d12a1 ("bpf: introduce bounded loops")
Fixes: 0a14842f5a3c ("net: filter: Just In Time compiler for x86-64")
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Acked-by: Gary Lin <glin@suse.com>

NOTE from Gary:
  This patch was modified to fit SLE12-SP5:
  + Replace kmalloc_array with kmalloc
  + Remove the diff for bpf_prog_fill_jited_linfo()
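
  For comparison, the corresponding mainline allocation (assumed from
  the upstream commit, which SLE12-SP5 open-codes with plain kmalloc)
  is:

	addrs = kmalloc_array(prog->len + 1, sizeof(*addrs), GFP_KERNEL);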

---
 arch/x86/net/bpf_jit_comp.c |    8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -365,7 +365,9 @@ static int do_jit(struct bpf_prog *bpf_p
 	emit_prologue(&prog, bpf_prog->aux->stack_depth,
 		      bpf_prog_was_classic(bpf_prog));
 
-	for (i = 0; i < insn_cnt; i++, insn++) {
+	addrs[0] = prog - temp;
+
+	for (i = 1; i <= insn_cnt; i++, insn++) {
 		const s32 imm32 = insn->imm;
 		u32 dst_reg = insn->dst_reg;
 		u32 src_reg = insn->src_reg;
@@ -1039,7 +1041,7 @@ struct bpf_prog *bpf_int_jit_compile(str
 		extra_pass = true;
 		goto skip_init_addrs;
 	}
-	addrs = kmalloc(prog->len * sizeof(*addrs), GFP_KERNEL);
+	addrs = kmalloc((prog->len + 1) * sizeof(*addrs), GFP_KERNEL);
 	if (!addrs) {
 		prog = orig_prog;
 		goto out_addrs;
@@ -1048,7 +1050,7 @@ struct bpf_prog *bpf_int_jit_compile(str
 	/* Before first pass, make a rough estimation of addrs[]
 	 * each bpf instruction is translated to less than 64 bytes
 	 */
-	for (proglen = 0, i = 0; i < prog->len; i++) {
+	for (proglen = 0, i = 0; i <= prog->len; i++) {
 		proglen += 64;
 		addrs[i] = proglen;
 	}