author: Alexei Starovoitov <ast@kernel.org> 2019-04-17 19:09:25 -0700
committer: Alexei Starovoitov <ast@kernel.org> 2019-04-17 19:09:26 -0700
commit: 193d0002ef04d331466f4d211d008ff8257bfa6a
tree: ac3d55b7b5731f7e19fd2e27b242135b3fd453d2 /tools/testing/selftests/bpf
parent: 00967e84f742f87603e769529628e32076ade188
parent: 86d231459d6dc9094e70c35c3517f4ef860b2f1e
Merge branch 'bulk-cpumap-redirect'
Jesper Dangaard Brouer says:
====================
This patchset utilizes a number of different kernel bulk APIs to optimize
the performance of the XDP cpumap redirect feature.
Benchmark details are available here:
https://github.com/xdp-project/xdp-project/blob/master/areas/cpumap/cpumap03-optimizations.org
The performance measurements should be considered micro-benchmarks, as they
measure dropping packets at different stages in the network stack.
Summary based on the above:
Baseline benchmarks
- baseline-redirect: UdpNoPorts: 3,180,074
- baseline-redirect: iptables-raw drop: 6,193,534
Patch1: bpf: cpumap use ptr_ring_consume_batched
- redirect: UdpNoPorts: 3,327,729
- redirect: iptables-raw drop: 6,321,540
Patch2: net: core: introduce build_skb_around
- redirect: UdpNoPorts: 3,221,303
- redirect: iptables-raw drop: 6,320,066
Patch3: bpf: cpumap do bulk allocation of SKBs
- redirect: UdpNoPorts: 3,290,563
- redirect: iptables-raw drop: 6,650,112
Patch4: bpf: cpumap memory prefetchw optimizations for struct page
- redirect: UdpNoPorts: 3,520,250
- redirect: iptables-raw drop: 7,649,604
In this V2 submission I have chosen to drop the SKB-list patch using
netif_receive_skb_list(), as it did not show a performance improvement in
these micro-benchmarks.
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Diffstat (limited to 'tools/testing/selftests/bpf')
0 files changed, 0 insertions, 0 deletions