
Commit 1cedfe1

Author: Alexei Starovoitov

Merge branch 'emit-endbr-bti-instructions-for-indirect'
Xu Kuohai says:

====================
emit ENDBR/BTI instructions for indirect

On architectures with CFI protection enabled that require landing pad instructions at indirect jump targets, such as x86 with CET/IBT enabled and arm64 with BTI enabled, the kernel panics when an indirect jump lands on a target without a landing pad. The JIT must therefore emit landing pad instructions for indirect jump targets.

The verifier already recognizes which instructions are indirect jump targets during the verification phase. So we can store this information in env->insn_aux_data and pass it to the JIT as a new parameter, allowing the JIT to consult env->insn_aux_data to determine which instructions are indirect jump targets.

During JIT, constants blinding is performed. It rewrites the private copy of instructions for the JITed program, but it does not adjust the global env->insn_aux_data array. As a result, after constants blinding, the instruction indexes used by the JIT may no longer match the indexes in env->insn_aux_data, so the JIT cannot use env->insn_aux_data directly. To avoid this mismatch, and given that all existing arch-specific JITs already implement constants blinding with largely duplicated code, move constants blinding from the JITs to generic code.
v15:
- Rebase and target bpf tree
- Restore subprog_start of the fake 'exit' subprog on failure
- Fix wrong function name used in comment

v14: https://lore.kernel.org/all/cover.1776062885.git.xukuohai@hotmail.com/
- Rebase
- Fix comment style
- Fix incorrect variable and function name used in commit message

v13: https://lore.kernel.org/bpf/20260411133847.1042658-1-xukuohai@huaweicloud.com
- Use vmalloc to allocate memory for insn_aux_data copies to match with vfree
- Do not free the copied memory of insn_aux_data when restoring from failure
- Code cleanup

v12: https://lore.kernel.org/bpf/20260403132811.753894-1-xukuohai@huaweicloud.com
- Restore env->insn_aux_data on JIT failure
- Fix incorrect error code sign (-EFAULT vs EFAULT)
- Fix incorrect prog used in the restore path

v11: https://lore.kernel.org/bpf/20260403090915.473493-1-xukuohai@huaweicloud.com
- Restore env->subprog_info after jit_subprogs() fails
- Clear prog->jit_requested and prog->blinding_requested on failure
- Use the actual env->insn_aux_data size in clear_insn_aux_data() on failure

v10: https://lore.kernel.org/bpf/20260324122052.342751-1-xukuohai@huaweicloud.com
- Fix the incorrect call_imm restore in jit_subprogs
- Define a dummy void version of bpf_jit_prog_release_other and bpf_patch_insn_data when the corresponding config is not set
- Remove the unnecessary #ifdef in x86_64 JIT (Leon Hwang)

v9: https://lore.kernel.org/bpf/20260312170255.3427799-1-xukuohai@huaweicloud.com
- Make constant blinding available for classic bpf (Eduard)
- Clear prog->bpf_func, prog->jited ... on the error path of extra pass (Eduard)
- Fix spelling errors and remove unused parameter (Anton Protopopov)

v8: https://lore.kernel.org/bpf/20260309140044.2652538-1-xukuohai@huaweicloud.com
- Define void bpf_jit_blind_constants() function when CONFIG_BPF_JIT is not set
- Move indirect_target fixup for insn patching from bpf_jit_blind_constants() to adjust_insn_aux_data()

v7: https://lore.kernel.org/bpf/20260307103949.2340104-1-xukuohai@huaweicloud.com
- Move constants blinding logic back to bpf/core.c
- Compute ip address before switch statement in x86 JIT
- Clear JIT state from error path on arm64 and loongarch

v6: https://lore.kernel.org/bpf/20260306102329.2056216-1-xukuohai@huaweicloud.com
- Move constants blinding from JIT to verifier
- Move call to bpf_prog_select_runtime from bpf_prog_load to verifier

v5: https://lore.kernel.org/bpf/20260302102726.1126019-1-xukuohai@huaweicloud.com
- Switch to passing env to the JIT directly to get rid of copying private insn_aux_data for each prog

v4: https://lore.kernel.org/all/20260114093914.2403982-1-xukuohai@huaweicloud.com
- Switch to the approach proposed by Eduard, using insn_aux_data to identify indirect jump targets, and emit ENDBR on x86

v3: https://lore.kernel.org/bpf/20251227081033.240336-1-xukuohai@huaweicloud.com
- Get rid of unnecessary enum definition (Yonghong Song, Anton Protopopov)

v2: https://lore.kernel.org/bpf/20251223085447.139301-1-xukuohai@huaweicloud.com
- Exclude instruction arrays not used for indirect jumps (Anton Protopopov)

v1: https://lore.kernel.org/bpf/20251127140318.3944249-1-xukuohai@huaweicloud.com
====================

Link: https://patch.msgid.link/20260416064341.151802-1-xukuohai@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 parents a204466 + f6606a4 commit 1cedfe1

19 files changed: 529 additions & 555 deletions


arch/arc/net/bpf_jit_core.c

Lines changed: 15 additions & 26 deletions
@@ -79,7 +79,6 @@ struct arc_jit_data {
  * The JIT pertinent context that is used by different functions.
  *
  * prog:		The current eBPF program being handled.
- * orig_prog:		The original eBPF program before any possible change.
  * jit:			The JIT buffer and its length.
  * bpf_header:		The JITed program header. "jit.buf" points inside it.
  * emit:		If set, opcodes are written to memory; else, a dry-run.
@@ -94,12 +93,10 @@ struct arc_jit_data {
  * need_extra_pass:	A forecast if an "extra_pass" will occur.
  * is_extra_pass:	Indicates if the current pass is an extra pass.
  * user_bpf_prog:	True, if VM opcodes come from a real program.
- * blinded:		True if "constant blinding" step returned a new "prog".
  * success:		Indicates if the whole JIT went OK.
  */
 struct jit_context {
 	struct bpf_prog			*prog;
-	struct bpf_prog			*orig_prog;
 	struct jit_buffer		jit;
 	struct bpf_binary_header	*bpf_header;
 	bool				emit;
@@ -114,7 +111,6 @@ struct jit_context {
 	bool				need_extra_pass;
 	bool				is_extra_pass;
 	bool				user_bpf_prog;
-	bool				blinded;
 	bool				success;
 };

@@ -161,13 +157,7 @@ static int jit_ctx_init(struct jit_context *ctx, struct bpf_prog *prog)
 {
 	memset(ctx, 0, sizeof(*ctx));

-	ctx->orig_prog = prog;
-
-	/* If constant blinding was requested but failed, scram. */
-	ctx->prog = bpf_jit_blind_constants(prog);
-	if (IS_ERR(ctx->prog))
-		return PTR_ERR(ctx->prog);
-	ctx->blinded = (ctx->prog != ctx->orig_prog);
+	ctx->prog = prog;

 	/* If the verifier doesn't zero-extend, then we have to do it. */
 	ctx->do_zext = !ctx->prog->aux->verifier_zext;
@@ -214,27 +204,26 @@ static inline void maybe_free(struct jit_context *ctx, void **mem)
  */
 static void jit_ctx_cleanup(struct jit_context *ctx)
 {
-	if (ctx->blinded) {
-		/* if all went well, release the orig_prog. */
-		if (ctx->success)
-			bpf_jit_prog_release_other(ctx->prog, ctx->orig_prog);
-		else
-			bpf_jit_prog_release_other(ctx->orig_prog, ctx->prog);
-	}
-
 	maybe_free(ctx, (void **)&ctx->bpf2insn);
 	maybe_free(ctx, (void **)&ctx->jit_data);

 	if (!ctx->bpf2insn)
 		ctx->bpf2insn_valid = false;

 	/* Freeing "bpf_header" is enough. "jit.buf" is a sub-array of it. */
-	if (!ctx->success && ctx->bpf_header) {
-		bpf_jit_binary_free(ctx->bpf_header);
-		ctx->bpf_header = NULL;
-		ctx->jit.buf = NULL;
-		ctx->jit.index = 0;
-		ctx->jit.len = 0;
+	if (!ctx->success) {
+		if (ctx->bpf_header) {
+			bpf_jit_binary_free(ctx->bpf_header);
+			ctx->bpf_header = NULL;
+			ctx->jit.buf = NULL;
+			ctx->jit.index = 0;
+			ctx->jit.len = 0;
+		}
+		if (ctx->is_extra_pass) {
+			ctx->prog->bpf_func = NULL;
+			ctx->prog->jited = 0;
+			ctx->prog->jited_len = 0;
+		}
 	}

 	ctx->emit = false;
@@ -1411,7 +1400,7 @@ static struct bpf_prog *do_extra_pass(struct bpf_prog *prog)
  * (re)locations involved that their addresses are not known
  * during the first run.
  */
-struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+struct bpf_prog *bpf_int_jit_compile(struct bpf_verifier_env *env, struct bpf_prog *prog)
 {
 	vm_dump(prog);
14171406

arch/arm/net/bpf_jit_32.c

Lines changed: 8 additions & 35 deletions
@@ -2142,11 +2142,9 @@ bool bpf_jit_needs_zext(void)
 	return true;
 }

-struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+struct bpf_prog *bpf_int_jit_compile(struct bpf_verifier_env *env, struct bpf_prog *prog)
 {
-	struct bpf_prog *tmp, *orig_prog = prog;
 	struct bpf_binary_header *header;
-	bool tmp_blinded = false;
 	struct jit_ctx ctx;
 	unsigned int tmp_idx;
 	unsigned int image_size;
@@ -2156,20 +2154,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	 * the interpreter.
 	 */
 	if (!prog->jit_requested)
-		return orig_prog;
-
-	/* If constant blinding was enabled and we failed during blinding
-	 * then we must fall back to the interpreter. Otherwise, we save
-	 * the new JITed code.
-	 */
-	tmp = bpf_jit_blind_constants(prog);
-
-	if (IS_ERR(tmp))
-		return orig_prog;
-	if (tmp != prog) {
-		tmp_blinded = true;
-		prog = tmp;
-	}
+		return prog;

 	memset(&ctx, 0, sizeof(ctx));
 	ctx.prog = prog;
@@ -2179,10 +2164,8 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	 * we must fall back to the interpreter
 	 */
 	ctx.offsets = kcalloc(prog->len, sizeof(int), GFP_KERNEL);
-	if (ctx.offsets == NULL) {
-		prog = orig_prog;
-		goto out;
-	}
+	if (ctx.offsets == NULL)
+		return prog;

 	/* 1) fake pass to find in the length of the JITed code,
 	 * to compute ctx->offsets and other context variables
@@ -2194,10 +2177,8 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	 * being successful in the second pass, so just fall back
 	 * to the interpreter.
 	 */
-	if (build_body(&ctx)) {
-		prog = orig_prog;
+	if (build_body(&ctx))
 		goto out_off;
-	}

 	tmp_idx = ctx.idx;
 	build_prologue(&ctx);
@@ -2213,10 +2194,8 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	ctx.idx += ctx.imm_count;
 	if (ctx.imm_count) {
 		ctx.imms = kcalloc(ctx.imm_count, sizeof(u32), GFP_KERNEL);
-		if (ctx.imms == NULL) {
-			prog = orig_prog;
+		if (ctx.imms == NULL)
 			goto out_off;
-		}
 	}
 #else
 	/* there's nothing about the epilogue on ARMv7 */
@@ -2238,10 +2217,8 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 	/* Not able to allocate memory for the structure then
 	 * we must fall back to the interpretation
 	 */
-	if (header == NULL) {
-		prog = orig_prog;
+	if (header == NULL)
 		goto out_imms;
-	}

 	/* 2.) Actual pass to generate final JIT code */
 	ctx.target = (u32 *) image_ptr;
@@ -2278,16 +2255,12 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 #endif
 out_off:
 	kfree(ctx.offsets);
-out:
-	if (tmp_blinded)
-		bpf_jit_prog_release_other(prog, prog == orig_prog ?
-					   tmp : orig_prog);
+
 	return prog;

 out_free:
 	image_ptr = NULL;
 	bpf_jit_binary_free(header);
-	prog = orig_prog;
 	goto out_imms;
 }