author    Vlastimil Babka <vbabka@suse.cz>  2026-01-23 07:52:48 +0100
committer Vlastimil Babka <vbabka@suse.cz>  2026-01-29 09:27:51 +0100
commit    ed30c4adfc2b56909ca43fb5e4750a646928cbf4 (patch)
tree      ad79f4ee2b87f210305ea8c6573256c1a48c1a39
parent    913ffd3a1bf5d154995c6cfab44994b07b3c103f (diff)
slab: add optimized sheaf refill from partial list
At this point we have sheaves enabled for all caches, but their refill is done via __kmem_cache_alloc_bulk(), which relies on cpu (partial) slabs - now a redundant caching layer that we are about to remove. The refill will thus be done from slabs on the node partial list. Introduce new functions that can do that in an optimized way, as it's easier than modifying the __kmem_cache_alloc_bulk() call chain.

Introduce struct partial_bulk_context, a variant of struct partial_context that can return a list of slabs from the partial list with the sum of free objects in them within the requested min and max.

Introduce get_partial_node_bulk() that removes the slabs from the freelist and returns them in the list. There is a racy read of slab->counters, so make sure the non-atomic write in __update_freelist_slow() is not tearing.

Introduce get_freelist_nofreeze(), which grabs the freelist without freezing the slab.

Introduce alloc_from_new_slab(), which can allocate multiple objects from a newly allocated slab where we don't need to synchronize with freeing. In some aspects it's similar to alloc_single_from_new_slab(), but it assumes the cache is a non-debug one so it can avoid some actions. It supports the allow_spin parameter, which we always set true here; a followup change will reuse the function in a context where it may be false.

Introduce __refill_objects(), which uses the functions above to fill an array of objects. It has to handle the possibility that the slabs will contain more objects than were requested, due to concurrent freeing of objects to those slabs. When no more slabs on partial lists are available, it will allocate new slabs. It is intended to be used only in contexts where spinning is allowed, so add a WARN_ON_ONCE check there.

Finally, switch refill_sheaf() to use __refill_objects(). Sheaves are only refilled from contexts that allow spinning, or even blocking.
Reviewed-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Hao Li <hao.li@linux.dev>
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>