| author | Alexei Starovoitov <ast@kernel.org> | 2026-03-13 19:09:35 -0700 |
|---|---|---|
| committer | Alexei Starovoitov <ast@kernel.org> | 2026-03-13 19:09:35 -0700 |
| commit | bb41fcef5c7932e61ce87f573497ab0472cfe496 (patch) | |
| tree | 631e074d556a9837683969324576bd03987428c3 /tools | |
| parent | 2af3aa702c05ecd05850db9d9e110be9ffa3cf47 (diff) | |
| parent | 0a753d8cd61e31cc438a4fc414cc01655d3f3b72 (diff) | |
Merge branch 'optimize-bounds-refinement-by-reordering-deductions'
Paul Chaignon says:
====================
Optimize bounds refinement by reordering deductions
This patchset optimizes bounds refinement (reg_bounds_sync) by
reordering the deductions in __reg_deduce_bounds. The reordering
slightly improves precision while saving one call to
__reg_deduce_bounds.
The first patch from Eduard refactors the __reg_deduce_bounds
subfunctions, the second patch implements the reordering, and the last
one adds a selftest.
Changes in v3:
- Added a first commit from Eduard that significantly improves the
  readability of the second commit.
- Reshuffled the functions in the second commit a bit more to improve
  precision (Eduard).
- Rebased.
Changes in v2:
- Updated description to mention potential precision improvement and
to clarify the sequence of refinements (Shung-Hsi).
- Added the second patch.
- Rebased.
====================
Link: https://patch.msgid.link/cover.1773401138.git.paul.chaignon@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Diffstat (limited to 'tools')
| -rw-r--r-- | tools/testing/selftests/bpf/progs/verifier_bounds.c | 33 |
1 file changed, 33 insertions, 0 deletions
```diff
diff --git a/tools/testing/selftests/bpf/progs/verifier_bounds.c b/tools/testing/selftests/bpf/progs/verifier_bounds.c
index ce09379130aa..3724d5e5bcb3 100644
--- a/tools/testing/selftests/bpf/progs/verifier_bounds.c
+++ b/tools/testing/selftests/bpf/progs/verifier_bounds.c
@@ -2037,4 +2037,37 @@ __naked void signed_unsigned_intersection32_case2(void *ctx)
 	: __clobber_all);
 }
 
+/* After instruction 3, the u64 and s64 ranges look as follows:
+ * 0        umin=2                          umax=0xff..ff00..03 U64_MAX
+ * |        [xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx]|
+ * |----------------------------|------------------------------|
+ * |xx]                         [xxxxxxxxxxxxxxxxxxxxxxxxxxxx|
+ * 0  smax=2                      smin=0x800..02              -1
+ *
+ * The two ranges can't be refined because they overlap in two places. Once we
+ * add an upper-bound to u64 at instruction 4, the refinement can happen. This
+ * test validates that this refinement does happen and is not overwritten by
+ * the less-precise 32bits ranges.
+ */
+SEC("socket")
+__description("bounds refinement: 64bits ranges not overwritten by 32bits ranges")
+__msg("3: (65) if r0 s> 0x2 {{.*}} R0=scalar(smin=0x8000000000000002,smax=2,umin=smin32=umin32=2,umax=0xffffffff00000003,smax32=umax32=3)")
+__msg("4: (25) if r0 > 0x13 {{.*}} R0=2")
+__success __log_level(2)
+__naked void refinement_32bounds_not_overwriting_64bounds(void *ctx)
+{
+	asm volatile("					\
+	call %[bpf_get_prandom_u32];			\
+	if w0 < 2 goto +5;				\
+	if w0 > 3 goto +4;				\
+	if r0 s> 2 goto +3;				\
+	if r0 > 19 goto +2;				\
+	if r0 == 2 goto +1;				\
+	r10 = 0;					\
+	exit;						\
+"	:
+	: __imm(bpf_get_prandom_u32)
+	: __clobber_all);
+}
+
 char _license[] SEC("license") = "GPL";
```
