<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux.git/mm/vmalloc.c, branch v7.1-rc2</title>
<subtitle>Linux kernel source tree</subtitle>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/'/>
<entry>
<title>vmalloc: fix buffer overflow in vrealloc_node_align()</title>
<updated>2026-04-27T12:54:23+00:00</updated>
<author>
<name>Marco Elver</name>
<email>elver@google.com</email>
</author>
<published>2026-04-20T11:47:26+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=82d1f01292d3f09bf063f829f8ab8de12b4280a1'/>
<id>82d1f01292d3f09bf063f829f8ab8de12b4280a1</id>
<content type='text'>
Commit 4c5d3365882d ("mm/vmalloc: allow to set node and align in
vrealloc") added the ability to force a new allocation if the current
pointer is on the wrong NUMA node, or if an alignment constraint is not
met, even if the user is shrinking the allocation.

On this path (need_realloc), the code allocates a new object of 'size'
bytes and then memcpy()s 'old_size' bytes into it.  If the request is to
shrink the object (size &lt; old_size), this results in an out-of-bounds
write on the new buffer.

Fix this by bounding the copy length by the new allocation size.
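
The bound can be modelled in a few lines of userspace Python (a sketch of
the logic only; bounded_copy() is an illustrative stand-in, not the kernel
code):

```python
def bounded_copy(old, size):
    """Model of the need_realloc path: allocate 'size' bytes and copy
    at most min(size, len(old)) of the old contents, so a shrinking
    request can no longer write past the end of the new buffer."""
    new = bytearray(size)
    n = min(size, len(old))
    new[:n] = old[:n]
    return bytes(new)

# Shrinking 64 bytes down to 16 copies only 16 bytes.
shrunk = bounded_copy(b"\xaa" * 64, 16)
assert len(shrunk) == 16
```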

Link: https://lore.kernel.org/20260420114805.3572606-2-elver@google.com
Fixes: 4c5d3365882d ("mm/vmalloc: allow to set node and align in vrealloc")
Signed-off-by: Marco Elver &lt;elver@google.com&gt;
Reported-by: Harry Yoo (Oracle) &lt;harry@kernel.org&gt;
Reviewed-by: Uladzislau Rezki (Sony) &lt;urezki@gmail.com&gt;
Acked-by: Vlastimil Babka (SUSE) &lt;vbabka@kernel.org&gt;
Reviewed-by: Harry Yoo (Oracle) &lt;harry@kernel.org&gt;
Cc: &lt;stable@vger.kernel.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>Merge tag 'mm-hotfixes-stable-2026-04-19-00-14' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm</title>
<updated>2026-04-19T21:45:37+00:00</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2026-04-19T21:45:37+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=c1f49dea2b8f335813d3b348fd39117fb8efb428'/>
<id>c1f49dea2b8f335813d3b348fd39117fb8efb428</id>
<content type='text'>
Pull MM fixes from Andrew Morton:
 "7 hotfixes. 6 are cc:stable and all are for MM. Please see the
  individual changelogs for details"

* tag 'mm-hotfixes-stable-2026-04-19-00-14' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm:
  mm/damon/core: disallow non-power of two min_region_sz on damon_start()
  mm/vmalloc: take vmap_purge_lock in shrinker
  mm: call -&gt;free_folio() directly in folio_unmap_invalidate()
  mm: blk-cgroup: fix use-after-free in cgwb_release_workfn()
  mm/zone_device: do not touch device folio after calling -&gt;folio_free()
  mm/damon/core: disallow time-quota setting zero esz
  mm/mempolicy: fix weighted interleave auto sysfs name
</content>
</entry>
<entry>
<title>mm/vmalloc: take vmap_purge_lock in shrinker</title>
<updated>2026-04-19T06:24:27+00:00</updated>
<author>
<name>Uladzislau Rezki (Sony)</name>
<email>urezki@gmail.com</email>
</author>
<published>2026-04-13T19:26:46+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=ec05f51f1e65bce95528543eb73fda56fd201d94'/>
<id>ec05f51f1e65bce95528543eb73fda56fd201d94</id>
<content type='text'>
decay_va_pool_node() can be invoked concurrently from two paths:
__purge_vmap_area_lazy() when pools are being purged, and the shrinker via
vmap_node_shrink_scan().

However, decay_va_pool_node() is not safe to run concurrently, and the
shrinker path currently lacks serialization, leading to races and possible
leaks.

Protect decay_va_pool_node() by taking vmap_purge_lock in the shrinker
path to ensure serialization with purge users.
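
The locking rule can be modelled in userspace Python, with a plain Lock
standing in for vmap_purge_lock and decay() for decay_va_pool_node()
(names and structure here are illustrative, not the kernel code):

```python
import threading

# Stand-in for vmap_purge_lock; decay() models decay_va_pool_node(),
# which is not safe to run concurrently.
purge_lock = threading.Lock()
pool = list(range(8))

def decay():
    if pool:
        pool.pop()

def purge_path():            # __purge_vmap_area_lazy() analogue
    with purge_lock:
        decay()

def shrinker_path():         # the fix: take the same lock here too
    with purge_lock:
        decay()
```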

Link: https://lore.kernel.org/20260413192646.14683-1-urezki@gmail.com
Fixes: 7679ba6b36db ("mm: vmalloc: add a shrinker to drain vmap pools")
Signed-off-by: Uladzislau Rezki (Sony) &lt;urezki@gmail.com&gt;
Reviewed-by: Baoquan He &lt;baoquan.he@linux.dev&gt;
Cc: chenyichong &lt;chenyichong@uniontech.com&gt;
Cc: &lt;stable@vger.kernel.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>mm: vmalloc: update outdated comment for renamed vread()</title>
<updated>2026-04-05T20:53:34+00:00</updated>
<author>
<name>Kexin Sun</name>
<email>kexinsun@smail.nju.edu.cn</email>
</author>
<published>2026-03-21T10:58:20+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=3cb0dc0d0eab18d6ef738e10d5634e3a71121044'/>
<id>3cb0dc0d0eab18d6ef738e10d5634e3a71121044</id>
<content type='text'>
The function vread() was renamed to vread_iter() in commit 4c91c07c93bb
("mm: vmalloc: convert vread() to vread_iter()"), converting from a
buffer-based to an iterator-based interface.

Update the kdoc of vread_iter() to reflect the new interface: replace
references to @buf with @iter, drop the stale "kernel's buffer"
requirement, and update the self-reference from vread() to vread_iter().

Also update the stale vread() reference in pstore's ram_core.c.

Assisted-by: unnamed:deepseek-v3.2 coccinelle
Link: https://lkml.kernel.org/r/20260321105820.7134-1-kexinsun@smail.nju.edu.cn
Signed-off-by: Kexin Sun &lt;kexinsun@smail.nju.edu.cn&gt;
Reviewed-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Cc: "Guilherme G. Piccoli" &lt;gpiccoli@igalia.com&gt;
Cc: Julia Lawall &lt;julia.lawall@inria.fr&gt;
Cc: Kees Cook &lt;kees@kernel.org&gt;
Cc: Tony Luck &lt;tony.luck@intel.com&gt;
Cc: "Uladzislau Rezki (Sony)" &lt;urezki@gmail.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>vmalloc: support __GFP_RETRY_MAYFAIL and __GFP_NORETRY</title>
<updated>2026-04-05T20:53:12+00:00</updated>
<author>
<name>Michal Hocko</name>
<email>mhocko@suse.com</email>
</author>
<published>2026-03-02T11:47:40+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=3caedb3b99eabe9f67b7b6c704ab8a92fe35dcec'/>
<id>3caedb3b99eabe9f67b7b6c704ab8a92fe35dcec</id>
<content type='text'>
__GFP_RETRY_MAYFAIL and __GFP_NORETRY haven't been supported so far
because their semantics (i.e. not triggering the OOM killer) cannot be
honored by the existing vmalloc page table allocation, which allows for
the OOM killer.

Example: __vmalloc(size, GFP_KERNEL | __GFP_RETRY_MAYFAIL);

&lt;snip&gt;
 vmalloc_test/55 invoked oom-killer:
 gfp_mask=0x40dc0(GFP_KERNEL|__GFP_ZERO|__GFP_COMP), order=0, oom_score_adj=0
 active_anon:0 inactive_anon:0 isolated_anon:0
  active_file:0 inactive_file:0 isolated_file:0
  unevictable:0 dirty:0 writeback:0
  slab_reclaimable:700 slab_unreclaimable:33708
  mapped:0 shmem:0 pagetables:5174
  sec_pagetables:0 bounce:0
  kernel_misc_reclaimable:0
  free:850 free_pcp:319 free_cma:0
 CPU: 4 UID: 0 PID: 639 Comm: vmalloc_test/55 ...
 Hardware name: QEMU Standard PC (i440FX + PIIX, ...
 Call Trace:
  &lt;TASK&gt;
  dump_stack_lvl+0x5d/0x80
  dump_header+0x43/0x1b3
  out_of_memory.cold+0x8/0x78
  __alloc_pages_slowpath.constprop.0+0xef5/0x1130
  __alloc_frozen_pages_noprof+0x312/0x330
  alloc_pages_mpol+0x7d/0x160
  alloc_pages_noprof+0x50/0xa0
  __pte_alloc_kernel+0x1e/0x1f0
  ...
&lt;snip&gt;

There are use cases for these modifiers where a large allocation request
should rather fail than trigger the OOM killer, which wouldn't be able to
handle the situation anyway [1].

While we cannot easily change the existing page table allocation code, we
can piggyback on the scoped NOWAIT allocation that we already have in
place.  The rationale is that the bulk of the consumed memory sits in the
pages backing the vmalloc allocation; page tables contribute only a tiny
fraction.  Moreover, page tables for virtually allocated areas are never
reclaimed, so the longer the system runs, the less likely new page table
allocations become.  It makes sense to allow an approximation of
__GFP_RETRY_MAYFAIL and __GFP_NORETRY even if the page table allocation
part is much weaker.  This doesn't break the failure mode while it allows
for the no-OOM semantics.
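
As a rough sketch of the flag check (in Python, with gfp flags as a set of
strings; pagetable_scope_nowait() is a hypothetical helper, not the kernel
API):

```python
GFP_NORETRY = "__GFP_NORETRY"
GFP_RETRY_MAYFAIL = "__GFP_RETRY_MAYFAIL"

def pagetable_scope_nowait(gfp_flags):
    """If the caller asked for no-OOM semantics, run the page table
    allocations under a NOWAIT scope: they may fail, but they can no
    longer invoke the OOM killer."""
    no_oom = {GFP_NORETRY, GFP_RETRY_MAYFAIL}
    return bool(no_oom.intersection(gfp_flags))
```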

[1] https://lore.kernel.org/all/32bd9bed-a939-69c4-696d-f7f9a5fe31d8@redhat.com/T/#u

Link: https://lkml.kernel.org/r/20260302114740.2668450-2-urezki@gmail.com
Signed-off-by: Michal Hocko &lt;mhocko@suse.com&gt;
Signed-off-by: Uladzislau Rezki (Sony) &lt;urezki@gmail.com&gt;
Tested-by: Uladzislau Rezki (Sony) &lt;urezki@gmail.com&gt;
Cc: Baoquan He &lt;bhe@redhat.com&gt;
Cc: Mikulas Patocka &lt;mpatocka@redhat.com&gt;
Cc: Vishal Moola (Oracle) &lt;vishal.moola@gmail.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>mm/vmalloc: fix incorrect size reporting on allocation failure</title>
<updated>2026-04-05T20:53:12+00:00</updated>
<author>
<name>Uladzislau Rezki (Sony)</name>
<email>urezki@gmail.com</email>
</author>
<published>2026-03-02T11:47:39+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=0edd78cd4d40a752dc6d1bc661ce297c40baea29'/>
<id>0edd78cd4d40a752dc6d1bc661ce297c40baea29</id>
<content type='text'>
When __vmalloc_area_node() fails to allocate pages, the failure message
may report an incorrect allocation size, for example:

  vmalloc error: size 0, failed to allocate pages, ...

This happens because the warning prints area-&gt;nr_pages * PAGE_SIZE.  At
this point, area-&gt;nr_pages may be zero or only partly populated, so it
is not a valid measure of the request.

Report the originally requested allocation size instead by using
nr_small_pages * PAGE_SIZE, which reflects the actual number of pages
requested by the user.
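
A minimal Python model of the reporting change (failure_report_size() is
an illustrative stand-in, not the kernel function):

```python
PAGE_SIZE = 4096

def failure_report_size(nr_pages_populated, nr_small_pages):
    """nr_pages_populated models area.nr_pages, which may be zero or
    only partly filled when the warning fires; report the originally
    requested size instead."""
    del nr_pages_populated          # not valid at this point
    return nr_small_pages * PAGE_SIZE

# Even if no page was populated yet, the message shows the request.
assert failure_report_size(0, 3) == 3 * PAGE_SIZE
```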

Link: https://lkml.kernel.org/r/20260302114740.2668450-1-urezki@gmail.com
Signed-off-by: Uladzislau Rezki (Sony) &lt;urezki@gmail.com&gt;
Reviewed-by: Mikulas Patocka &lt;mpatocka@redhat.com&gt;
Reviewed-by: Vishal Moola (Oracle) &lt;vishal.moola@gmail.com&gt;
Cc: Baoquan He &lt;bhe@redhat.com&gt;
Cc: Michal Hocko &lt;mhocko@suse.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>mm/vmalloc: export clear_vm_uninitialized_flag()</title>
<updated>2026-04-05T20:53:06+00:00</updated>
<author>
<name>Pasha Tatashin</name>
<email>pasha.tatashin@soleen.com</email>
</author>
<published>2026-02-25T22:38:56+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=ec106365394dc6c4e9ecf00842186d367dcc955a'/>
<id>ec106365394dc6c4e9ecf00842186d367dcc955a</id>
<content type='text'>
Patch series "Fix KASAN support for KHO restored vmalloc regions".

When KHO restores a vmalloc area, it maps existing physical pages into a
newly allocated virtual memory area.  However, because these areas were
not properly unpoisoned, KASAN would treat any access to the restored
region as out-of-bounds, as seen in the following trace:

BUG: KASAN: vmalloc-out-of-bounds in kho_test_restore_data.isra.0+0x17b/0x2cd
Read of size 8 at addr ffffc90000025000 by task swapper/0/1
[...]
Call Trace:
[...]
kasan_report+0xe8/0x120
kho_test_restore_data.isra.0+0x17b/0x2cd
kho_test_init+0x15a/0x1f0
do_one_initcall+0xd5/0x4b0

The fix involves deferring KASAN's default poisoning by using the
VM_UNINITIALIZED flag during allocation, manually unpoisoning the memory
once it is correctly mapped, and then clearing the uninitialized flag
using a newly exported helper.


This patch (of 2):

Make clear_vm_uninitialized_flag() available to other parts of the kernel
that need to manage vmalloc areas manually, such as KHO for restoring
vmallocs.

Link: https://lkml.kernel.org/r/20260225220223.1695350-1-pasha.tatashin@soleen.com
Link: https://lkml.kernel.org/r/20260225223857.1714801-2-pasha.tatashin@soleen.com
Signed-off-by: Pasha Tatashin &lt;pasha.tatashin@soleen.com&gt;
Acked-by: Pratyush Yadav (Google) &lt;pratyush@kernel.org&gt;
Cc: Alexander Graf &lt;graf@amazon.com&gt;
Cc: Liam Howlett &lt;liam.howlett@oracle.com&gt;
Cc: Lorenzo Stoakes &lt;lorenzo.stoakes@oracle.com&gt;
Cc: Michal Hocko &lt;mhocko@suse.com&gt;
Cc: Mike Rapoport &lt;rppt@kernel.org&gt;
Cc: Suren Baghdasaryan &lt;surenb@google.com&gt;
Cc: "Uladzislau Rezki (Sony)" &lt;urezki@gmail.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>mm: memcontrol: switch to native NR_VMALLOC vmstat counter</title>
<updated>2026-04-05T20:53:04+00:00</updated>
<author>
<name>Johannes Weiner</name>
<email>hannes@cmpxchg.org</email>
</author>
<published>2026-02-23T16:01:07+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=c466412c73c339e33e83b68770e5b556457c03de'/>
<id>c466412c73c339e33e83b68770e5b556457c03de</id>
<content type='text'>
This eliminates the custom memcg counter and results in a single,
consolidated accounting call in the vmalloc code.

Link: https://lkml.kernel.org/r/20260223160147.3792777-2-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner &lt;hannes@cmpxchg.org&gt;
Acked-by: Shakeel Butt &lt;shakeel.butt@linux.dev&gt;
Reviewed-by: Uladzislau Rezki (Sony) &lt;urezki@gmail.com&gt;
Reviewed-by: Roman Gushchin &lt;roman.gushchin@linux.dev&gt;
Reviewed-by: Vishal Moola (Oracle) &lt;vishal.moola@gmail.com&gt;
Cc: Joshua Hahn &lt;joshua.hahnjy@gmail.com&gt;
Cc: Michal Hocko &lt;mhocko@kernel.org&gt;
Cc: Muchun Song &lt;muchun.song@linux.dev&gt;
Cc: Shakeel Butt &lt;shakeel.butt@linux.dev&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>mm: vmalloc: streamline vmalloc memory accounting</title>
<updated>2026-04-05T20:53:04+00:00</updated>
<author>
<name>Johannes Weiner</name>
<email>hannes@cmpxchg.org</email>
</author>
<published>2026-02-23T16:01:06+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=b9ec0ed907062a67a7cca2d04e7652aec06a0c35'/>
<id>b9ec0ed907062a67a7cca2d04e7652aec06a0c35</id>
<content type='text'>
Use a vmstat counter instead of a custom, open-coded atomic. This has
the added benefit of making the data available per-node, and prepares
for cleaning up the memcg accounting as well.

Link: https://lkml.kernel.org/r/20260223160147.3792777-1-hannes@cmpxchg.org
Acked-by: Shakeel Butt &lt;shakeel.butt@linux.dev&gt;
Signed-off-by: Johannes Weiner &lt;hannes@cmpxchg.org&gt;
Reviewed-by: Roman Gushchin &lt;roman.gushchin@linux.dev&gt;
Reviewed-by: Vishal Moola (Oracle) &lt;vishal.moola@gmail.com&gt;
Reviewed-by: Uladzislau Rezki (Sony) &lt;urezki@gmail.com&gt;
Cc: Joshua Hahn &lt;joshua.hahnjy@gmail.com&gt;
Cc: Michal Hocko &lt;mhocko@kernel.org&gt;
Cc: Muchun Song &lt;muchun.song@linux.dev&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>Convert 'alloc_obj' family to use the new default GFP_KERNEL argument</title>
<updated>2026-02-22T01:09:51+00:00</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2026-02-22T00:37:42+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=bf4afc53b77aeaa48b5409da5c8da6bb4eff7f43'/>
<id>bf4afc53b77aeaa48b5409da5c8da6bb4eff7f43</id>
<content type='text'>
This was done entirely with mindless brute force, using

    git grep -l '\&lt;k[vmz]*alloc_objs*(.*, GFP_KERNEL)' |
        xargs sed -i 's/\(alloc_objs*(.*\), GFP_KERNEL)/\1)/'

to convert the new alloc_obj() users that had a simple GFP_KERNEL
argument to just drop that argument.

Note that due to the extreme simplicity of the scripting, any slightly
more complex cases spread over multiple lines would not be triggered:
they definitely exist, but this covers the vast bulk of the cases, and
the resulting diff is also then easier to check automatically.

For the same reason the 'flex' versions will be done as a separate
conversion.

Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
</feed>
