<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux.git/include/linux/rhashtable.h, branch v7.1-rc3</title>
<subtitle>Linux kernel source tree</subtitle>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/'/>
<entry>
<title>rhashtable: Bounce deferred worker kick through irq_work</title>
<updated>2026-04-21T06:10:50+00:00</updated>
<author>
<name>Tejun Heo</name>
<email>tj@kernel.org</email>
</author>
<published>2026-04-21T06:03:26+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=4fe985292709eeb6a4653c71660f893e26c2f2dd'/>
<id>4fe985292709eeb6a4653c71660f893e26c2f2dd</id>
<content type='text'>
Inserts past 75% load call schedule_work(&amp;ht-&gt;run_work) to kick an
async resize. If a caller holds a raw spinlock (e.g. an
insecure_elasticity user), schedule_work() under that lock records

  caller_lock -&gt; pool-&gt;lock -&gt; pi_lock -&gt; rq-&gt;__lock

A cycle forms if any of these locks is acquired in the reverse
direction elsewhere. sched_ext, the only current insecure_elasticity
user, hits this: it holds scx_sched_lock across rhashtable inserts of
sub-schedulers, while scx_bypass() takes rq-&gt;__lock -&gt; scx_sched_lock.
Exercising the resize path produces:

  Chain exists of:
    &amp;pool-&gt;lock --&gt; &amp;rq-&gt;__lock --&gt; scx_sched_lock

Bounce the kick from the insert paths through irq_work so
schedule_work() runs from hard IRQ context with the caller's lock no
longer held. rht_deferred_worker()'s self-rearm on error stays on
schedule_work(&amp;ht-&gt;run_work) - the worker runs in process context with
no caller lock held, and keeping the self-requeue on @run_work lets
cancel_work_sync() in rhashtable_free_and_destroy() drain it.
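
The shape of the change, as a rough sketch only (not the patch itself;
the kick_work irq_work field name below is hypothetical):

```c
/* Hypothetical sketch of the bounce; the kick_work field name is
 * illustrative, not from the patch. */
static void rht_kick_fn(struct irq_work *work)
{
	struct rhashtable *ht = container_of(work, struct rhashtable, kick_work);

	/* Hard IRQ context: the caller's raw spinlock is no longer held,
	 * so recording pool-&gt;lock inside schedule_work() is safe. */
	schedule_work(&amp;ht-&gt;run_work);
}

/* On the insert paths, possibly under a caller-held raw spinlock: */
if (rht_grow_above_75(ht, tbl))
	irq_work_queue(&amp;ht-&gt;kick_work);
```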

v3: Keep rht_deferred_worker()'s self-rearm on schedule_work(&amp;run_work).
    Routing it through irq_work in v2 broke cancel_work_sync()'s
    self-requeue handling - an irq_work queued after irq_work_sync()
    returned but while cancel_work_sync() was still waiting could fire
    post-teardown.

v2: Bounce unconditionally instead of gating on insecure_elasticity,
    as suggested by Herbert.

Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
Acked-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
</content>
</entry>
<entry>
<title>rhashtable: Restore insecure_elasticity toggle</title>
<updated>2026-04-19T15:47:21+00:00</updated>
<author>
<name>Herbert Xu</name>
<email>herbert@gondor.apana.org.au</email>
</author>
<published>2026-04-18T01:41:21+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=73bd1227787bfe73eea3d04c63a89cb55db9c23e'/>
<id>73bd1227787bfe73eea3d04c63a89cb55db9c23e</id>
<content type='text'>
Some users of rhashtable cannot handle insertion failures, and
are happy to accept the consequences of a hash table having
very long chains.

Restore the insecure_elasticity toggle for these users.  In
addition to disabling the chain length checks, this also removes
the emergency resize that would otherwise occur when the hash
table occupancy hits 100% (an async resize is still scheduled
at 75%).
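
Opting in looks roughly like this (a sketch; the surrounding struct
and the other fields are illustrative, only the toggle is the point):

```c
/* Illustrative params for a hypothetical my_obj user; only
 * .insecure_elasticity is what this patch restores. */
static const struct rhashtable_params my_params = {
	.head_offset         = offsetof(struct my_obj, node),
	.key_offset          = offsetof(struct my_obj, key),
	.key_len             = sizeof(u32),
	.insecure_elasticity = true,	/* accept long chains, never fail inserts */
};
```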

Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
</content>
</entry>
<entry>
<title>rhashtable: consolidate hash computation in rht_key_get_hash()</title>
<updated>2026-03-07T05:12:20+00:00</updated>
<author>
<name>Mykyta Yatsenko</name>
<email>yatsenko@meta.com</email>
</author>
<published>2026-02-24T19:29:54+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=c9429bf56405a326845a8a35357b5bdf1dc4558c'/>
<id>c9429bf56405a326845a8a35357b5bdf1dc4558c</id>
<content type='text'>
The else-if and else branches in rht_key_get_hash() both compute a hash
using either params.hashfn or jhash, differing only in the source of
key_len (params.key_len vs ht-&gt;p.key_len). Merge the two branches into
one by using the ternary `params.key_len ?: ht-&gt;p.key_len` to select
the key length, removing the duplicated logic.

This also improves the performance of the else branch which previously
always used jhash and never fell through to jhash2. This branch is going
to be used by BPF resizable hashmap, which wraps rhashtable:
https://lore.kernel.org/bpf/20260205-rhash-v1-0-30dd6d63c462@meta.com/

Signed-off-by: Mykyta Yatsenko &lt;yatsenko@meta.com&gt;
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
</content>
</entry>
<entry>
<title>rhashtable: Enable context analysis</title>
<updated>2026-01-05T15:43:35+00:00</updated>
<author>
<name>Marco Elver</name>
<email>elver@google.com</email>
</author>
<published>2025-12-19T15:40:21+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=322366b8f13a8cafe169dc1dc6f6ec0d82ff8734'/>
<id>322366b8f13a8cafe169dc1dc6f6ec0d82ff8734</id>
<content type='text'>
Enable context analysis for rhashtable, which was used as an initial
test as it contains a combination of RCU, mutex, and bit_spinlock usage.

Users of rhashtable now also benefit from annotations on the API, which
will now warn if the RCU read lock is not held where required.

Signed-off-by: Marco Elver &lt;elver@google.com&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Link: https://patch.msgid.link/20251219154418.3592607-33-elver@google.com
</content>
</entry>
<entry>
<title>rhashtable: use likely for rhashtable lookup</title>
<updated>2025-10-20T04:10:28+00:00</updated>
<author>
<name>Menglong Dong</name>
<email>menglong8.dong@gmail.com</email>
</author>
<published>2025-10-11T01:48:55+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=aa653654ee67f9cbbebb7d4c18f360ad4fef3180'/>
<id>aa653654ee67f9cbbebb7d4c18f360ad4fef3180</id>
<content type='text'>
Sometimes, the result of rhashtable_lookup() is expected to be found.
Therefore, we can use likely() for such cases.

The following new functions are introduced; they use likely() or
unlikely() during the lookup:

 rhashtable_lookup_likely
 rhltable_lookup_likely

A micro-benchmark was run for these new functions: looking up an
existing entry 100000000 times in a row, rhashtable_lookup_likely()
gets a ~30% speedup.

Signed-off-by: Menglong Dong &lt;dongml2@chinatelecom.cn&gt;
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
</content>
</entry>
<entry>
<title>rhashtable: Use rcu_dereference_all and rcu_dereference_all_check</title>
<updated>2025-09-20T12:21:03+00:00</updated>
<author>
<name>Herbert Xu</name>
<email>herbert@gondor.apana.org.au</email>
</author>
<published>2025-09-09T09:50:56+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=bee8a520eb84950193d0566ea2c2e46406a4b6ce'/>
<id>bee8a520eb84950193d0566ea2c2e46406a4b6ce</id>
<content type='text'>
Add rcu_dereference_all and rcu_dereference_all_check so that
library code such as rhashtable can be used with any RCU variant.

As it stands rcu_dereference is used within rhashtable, which
creates false-positive warnings if the user calls it from another
RCU context, such as preempt_disable().

Use the rcu_dereference_all and rcu_dereference_all_check calls
in rhashtable to suppress these warnings.

Also replace the rcu_dereference_raw calls in the list iterators
with rcu_dereference_all to uncover buggy calls.

Reported-by: Menglong Dong &lt;dongml2@chinatelecom.cn&gt;
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
Reviewed-by: Paul E. McKenney &lt;paulmck@kernel.org&gt;
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
</content>
</entry>
<entry>
<title>rhashtable: Use __always_inline instead of inline</title>
<updated>2025-09-06T07:57:23+00:00</updated>
<author>
<name>Menglong Dong</name>
<email>menglong8.dong@gmail.com</email>
</author>
<published>2025-08-29T07:28:44+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=1544344563376b2a2ae2af5af1db00d6410c18e0'/>
<id>1544344563376b2a2ae2af5af1db00d6410c18e0</id>
<content type='text'>
Sometimes, the compiler is not clever enough to inline
rhashtable_lookup() for us, even if the "obj_cmpfn" and "key_len" in
params are const. This can introduce extra overhead.

Therefore, use __always_inline for the rhashtable.

Signed-off-by: Menglong Dong &lt;dongml2@chinatelecom.cn&gt;
Signed-off-by: Herbert Xu &lt;herbert@gondor.apana.org.au&gt;
</content>
</entry>
<entry>
<title>rhashtable: remove needless return in three void APIs</title>
<updated>2025-03-17T06:24:15+00:00</updated>
<author>
<name>Zijun Hu</name>
<email>quic_zijuhu@quicinc.com</email>
</author>
<published>2025-02-21T13:02:21+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=758502918e77323bf999a3eddd71466bf6135173'/>
<id>758502918e77323bf999a3eddd71466bf6135173</id>
<content type='text'>
Remove needless 'return' in the following void APIs:

 rhltable_walk_enter()
 rhltable_free_and_destroy()
 rhltable_destroy()

Both the APIs and the callees involved are void functions.

Link: https://lkml.kernel.org/r/20250221-rmv_return-v1-16-cc8dff275827@quicinc.com
Signed-off-by: Zijun Hu &lt;quic_zijuhu@quicinc.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>rhashtable: Improve grammar</title>
<updated>2024-04-03T01:03:32+00:00</updated>
<author>
<name>Jonathan Neuschäfer</name>
<email>j.neuschaefer@gmx.net</email>
</author>
<published>2024-03-29T16:26:27+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=8db2509faa331865903a81a92f15c449e821b1d7'/>
<id>8db2509faa331865903a81a92f15c449e821b1d7</id>
<content type='text'>
Change "a" to "an" according to the usual rules, fix an "if" that
was mistyped as "in", improve grammar in "considerable slow" -&gt;
"considerably slower".

Signed-off-by: Jonathan Neuschäfer &lt;j.neuschaefer@gmx.net&gt;
Reviewed-by: Randy Dunlap &lt;rdunlap@infradead.org&gt;
Link: https://lore.kernel.org/r/20240329-misc-rhashtable-v1-1-5862383ff798@gmx.net
Signed-off-by: Jakub Kicinski &lt;kuba@kernel.org&gt;
</content>
</entry>
<entry>
<title>rhashtable: Allow rhashtable to be used from irq-safe contexts</title>
<updated>2022-12-09T10:42:56+00:00</updated>
<author>
<name>Tejun Heo</name>
<email>tj@kernel.org</email>
</author>
<published>2022-12-06T21:36:32+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=e47877c7aa821c413b45e05f804819579bdfa1a3'/>
<id>e47877c7aa821c413b45e05f804819579bdfa1a3</id>
<content type='text'>
rhashtable currently only does bh-safe synchronization making it impossible
to use from irq-safe contexts. Switch it to use irq-safe synchronization to
remove the restriction.

v2: Update the lock functions to return the ulong flags value and unlock
    functions to take the value directly instead of passing around the
    pointer. Suggested by Linus.
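
The resulting calling convention, as a sketch (argument lists
abbreviated; tbl and bkt stand for the bucket_table and bucket
arguments):

```c
/* Sketch of the v2 convention: the lock helper returns the saved irq
 * flags, and the unlock helper takes that value directly instead of a
 * pointer to it. */
unsigned long flags;

flags = rht_lock(tbl, bkt);
/* ... modify the bucket under the bit spinlock, irqs disabled ... */
rht_unlock(tbl, bkt, flags);
```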

Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
Reviewed-by: David Vernet &lt;dvernet@meta.com&gt;
Acked-by: Josh Don &lt;joshdon@google.com&gt;
Acked-by: Hao Luo &lt;haoluo@google.com&gt;
Acked-by: Barret Rhoden &lt;brho@google.com&gt;
Cc: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
</feed>
