| author | Eric Dumazet <edumazet@google.com> | 2025-10-16 18:29:11 +0000 |
|---|---|---|
| committer | Jakub Kicinski <kuba@kernel.org> | 2025-10-20 17:37:57 -0700 |
| commit | a5cd3a60aa1d0366265638c43012e1ac1b3f6d6a | |
| tree | 91d9a2b42314ae51cc0e1618f794de607c003632 | |
| parent | 6ae022f8ac7ad581c31d3309afacdb8e33ef9c7f | |
net: shrink napi_skb_cache_{put,get}() and napi_skb_cache_get_bulk()
The following loop in napi_skb_cache_put() is unrolled by the compiler
even if CONFIG_KASAN is not enabled:
	for (i = NAPI_SKB_CACHE_HALF; i < NAPI_SKB_CACHE_SIZE; i++)
		kasan_mempool_unpoison_object(nc->skb_cache[i],
					      kmem_cache_size(net_hotdata.skbuff_cache));
We get 32 copies of the following 12-byte sequence, for a total of 384 bytes:
	48 8b 3d 00 00 00 00	mov    net_hotdata.skbuff_cache,%rdi
	e8 00 00 00 00      	call   kmem_cache_size
This is because kmem_cache_size() is not an inline function and is not
const, while kasan_unpoison_object_data() is an inline function: the
KASAN helper itself folds away when CONFIG_KASAN is off, but its size
argument must still be evaluated, and the compiler cannot prove that
the out-of-line kmem_cache_size() call is free of side effects.
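With CONFIG_KASAN unset, the helper called in the loop reduces to an
empty static inline, roughly as follows (a sketch of the !CONFIG_KASAN
stub in include/linux/kasan.h, not a verbatim quote):

	/* !CONFIG_KASAN: the body is empty, but the size argument at
	 * each call site must still be evaluated, hence the 32
	 * mov/call pairs above.
	 */
	static inline void kasan_mempool_unpoison_object(void *ptr, size_t size)
	{
	}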
Cache the kmem_cache_size() result in a local variable, so that the
compiler can remove the dead code (and the variable itself) when
CONFIG_KASAN is unset.
After this patch, napi_skb_cache_put() is inlined in its callers,
and we avoid one kmem_cache_size() call in napi_skb_cache_get()
and napi_skb_cache_get_bulk().
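As a minimal sketch, the cached-size pattern in napi_skb_cache_put()
looks roughly like this (abridged, not the verbatim upstream diff):

	if (unlikely(nc->skb_count == NAPI_SKB_CACHE_SIZE)) {
		/* One kmem_cache_size() call instead of 32. With
		 * CONFIG_KASAN unset the unpoison helper is an empty
		 * inline, leaving the loop and 'size' as dead code.
		 */
		u32 size = kmem_cache_size(net_hotdata.skbuff_cache);

		for (i = NAPI_SKB_CACHE_HALF; i < NAPI_SKB_CACHE_SIZE; i++)
			kasan_mempool_unpoison_object(nc->skb_cache[i], size);
		...
	}

The same hoist removes the per-call kmem_cache_size() evaluation at the
unpoison sites in napi_skb_cache_get() and napi_skb_cache_get_bulk().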
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Reviewed-by: Kuniyuki Iwashima <kuniyu@google.com>
Reviewed-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Link: https://patch.msgid.link/20251016182911.1132792-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
