author     Alexei Starovoitov <ast@kernel.org>  2022-09-29 09:25:48 -0700
committer  Alexei Starovoitov <ast@kernel.org>  2022-09-29 09:25:48 -0700
commit     5ee35abb461e34ec8727dd8abc621ba9abec3e31 (patch)
tree       95473ec30ee99a86de01e2abeb0c6636e4eb9cc3 /crypto/asymmetric_keys/x509.asn1
parent     8526f0d6135f77451566463ace6f0fb8b72cedaa (diff)
parent     3411c5b6f8d6e08d98e606dcf74fc42e2f9d731f (diff)
download   linux-5ee35abb461e34ec8727dd8abc621ba9abec3e31.tar.gz
Merge branch 'bpf: Remove recursion check for struct_ops prog'
Martin KaFai Lau says:
====================
From: Martin KaFai Lau <martin.lau@kernel.org>
The struct_ops progs share the tracing trampoline's enter/exit
functions, which track prog->active to avoid recursion. It turns
out a struct_ops bpf prog can trip this prog->active check and be
unnecessarily skipped. For example, the '.ssthresh' op may run
in_task() and then be interrupted by a softirq that runs the same
'.ssthresh'.
The kernel does not call the tcp-cc's ops recursively, so this set
removes the recursion check for struct_ops progs.
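For readers unfamiliar with struct_ops tcp-cc progs, the following is
a minimal, hypothetical sketch of one, loosely modeled on the style of
the selftests' bpf_dctcp/bpf_cubic. All names (sketch_cc,
sketch_ssthresh, "bpf_sketch_cc") are illustrative only and not part
of this series; it assumes a bpftool-generated vmlinux.h.

// SPDX-License-Identifier: GPL-2.0
/* Hypothetical struct_ops tcp-cc sketch; not from this patch set. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

SEC("struct_ops/sketch_ssthresh")
__u32 BPF_PROG(sketch_ssthresh, struct sock *sk)
{
	/* struct tcp_sock has struct sock as its first member, so the
	 * usual tcp_sk()-style cast is valid here.
	 */
	const struct tcp_sock *tp = (const struct tcp_sock *)sk;
	__u32 half = tp->snd_cwnd >> 1;

	/* Called by the TCP stack on loss. It may run in_task() and
	 * also from softirq, which is what tripped the shared
	 * prog->active counter.
	 */
	return half > 2 ? half : 2;
}

SEC("struct_ops/sketch_undo_cwnd")
__u32 BPF_PROG(sketch_undo_cwnd, struct sock *sk)
{
	const struct tcp_sock *tp = (const struct tcp_sock *)sk;

	/* max(snd_cwnd, prior_cwnd), like tcp_reno_undo_cwnd(). */
	return tp->snd_cwnd > tp->prior_cwnd ? tp->snd_cwnd : tp->prior_cwnd;
}

SEC("struct_ops/sketch_cong_avoid")
void BPF_PROG(sketch_cong_avoid, struct sock *sk, __u32 ack, __u32 acked)
{
	/* Left empty for brevity; a real CC grows cwnd here. */
}

SEC(".struct_ops")
struct tcp_congestion_ops sketch_cc = {
	.ssthresh	= (void *)sketch_ssthresh,
	.cong_avoid	= (void *)sketch_cong_avoid,
	.undo_cwnd	= (void *)sketch_undo_cwnd,
	.name		= "bpf_sketch_cc",
};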
v3:
- Clear the bpf_chg_cc_inprogress bit from the newly cloned tcp_sock
  in tcp_create_openreq_child() because the listen sk can
  be cloned without its lock being held. (Eric Dumazet)
v2:
- v1 [0] turned into a long discussion on a few cases and also
  on whether it needs to follow the bpf_run_ctx chain when a
  tracing bpf_run_ctx (kprobe/trace/trampoline) is running in
  between. That is a good signal that the approach is not obvious
  enough to reason about, and that a more straightforward approach
  is the better tradeoff. This revision uses one bit out of an
  existing 1-byte hole in the tcp_sock. It is in Patch 4.
[0]: https://lore.kernel.org/bpf/20220922225616.3054840-1-kafai@fb.com/T/#md98d40ac5ec295fdadef476c227a3401b2b6b911
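As a usage example for the sketch above, the following hypothetical
user-space loader registers the struct_ops map with libbpf and then
switches a socket to it; from that point the kernel invokes the BPF
ops for that socket. The skeleton header (sketch_cc.skel.h) and the
CC name are assumptions carried over from the sketch, not from this
series.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <bpf/libbpf.h>
#include "sketch_cc.skel.h"	/* bpftool gen skeleton of the sketch above */

int main(void)
{
	const char cc[] = "bpf_sketch_cc";
	struct sketch_cc *skel;
	struct bpf_link *link;
	int fd, err = 1;

	skel = sketch_cc__open_and_load();
	if (!skel)
		return 1;

	/* Register the tcp_congestion_ops with the kernel. */
	link = bpf_map__attach_struct_ops(skel->maps.sketch_cc);
	if (!link)
		goto out;

	fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0)
		goto out_link;

	/* The kernel now calls the BPF ops for this socket. */
	err = setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, cc, strlen(cc));
	if (err)
		perror("setsockopt(TCP_CONGESTION)");

	close(fd);
out_link:
	bpf_link__destroy(link);
out:
	sketch_cc__destroy(skel);
	return err ? 1 : 0;
}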
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>