path: root/kernel
Age    Commit message    Author
2026-03-23tracing: Remove unnecessary check for EVENT_FILE_FL_FREEDPetr Pavlu
The event_filter_write() function calls event_file_file() to retrieve a trace_event_file associated with a given file struct. If a non-NULL pointer is returned, the function then checks whether the trace_event_file instance has the EVENT_FILE_FL_FREED flag set. This check is redundant because event_file_file() already performs this validation and returns NULL if the flag is set. The err value is also already initialized to -ENODEV. Remove the unnecessary check for EVENT_FILE_FL_FREED in event_filter_write(). Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Tom Zanussi <zanussi@kernel.org> Link: https://patch.msgid.link/20260219162737.314231-4-petr.pavlu@suse.com Signed-off-by: Petr Pavlu <petr.pavlu@suse.com> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2026-03-23tracing: move __printf() attribute on __ftrace_vbprintk()Arnd Bergmann
The sunrpc change to use trace_printk() for debugging caused a new warning for every instance of dprintk() in some configurations, when -Wformat-security is enabled: fs/nfs/getroot.c: In function 'nfs_get_root': fs/nfs/getroot.c:90:17: error: format not a string literal and no format arguments [-Werror=format-security] 90 | nfs_errorf(fc, "NFS: Couldn't getattr on root"); I've been slowly chipping away at those warnings over time with the intention of enabling them by default in the future. While I could not figure out why this only happens for this one instance, I see that the __trace_bprintk() function is always called with a local variable as the format string, rather than a literal. Move the __printf(2,3) annotation on this function from the declaration to the caller. As this can only be validated for literals, the attribute on the declaration causes the warnings every time, but removing it entirely introduces a new warning on the __ftrace_vbprintk() definition. The format strings still get checked because the underlying literal keeps getting passed into __trace_printk() in the "else" branch, which is not taken but still evaluated for compile-time warnings. Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Anna Schumaker <anna.schumaker@oracle.com> Cc: Chuck Lever <chuck.lever@oracle.com> Cc: Simon Horman <horms@kernel.org> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Yury Norov <ynorov@nvidia.com> Cc: Randy Dunlap <rdunlap@infradead.org> Link: https://patch.msgid.link/20260203164545.3174910-1-arnd@kernel.org Fixes: ec7d8e68ef0e ("sunrpc: add a Kconfig option to redirect dfprintk() output to trace buffer") Acked-by: Jeff Layton <jlayton@kernel.org> Acked-by: Steven Rostedt (Google) <rostedt@goodmis.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
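A minimal userspace illustration of the warning class involved; the kernel's __printf() macro expands to the same GCC format attribute, and all names below are illustrative rather than the kernel's:

    #include <stdio.h>

    #define __printf(a, b) __attribute__((format(printf, a, b)))

    /* With the attribute on the declaration, the compiler validates
     * literal format strings, and -Wformat-security flags any call
     * whose format is not a literal and has no format arguments. */
    static __printf(1, 2) void log_fmt(const char *fmt, ...);
    static void log_fmt(const char *fmt, ...) { (void)fmt; }

    int main(void)
    {
        const char *msg = "not a literal";

        log_fmt("value: %d", 42); /* checked against the literal */
        log_fmt(msg);             /* warns with -Wformat-security */
        return 0;
    }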
2026-03-23sched_ext: Fix invalid kobj cast in scx_uevent()Cheng-Yang Chou
When scx_alloc_and_add_sched() creates the sub-scheduler kset, it sets sch->kobj as the parent. Because sch->kobj.kset points to scx_kset, registering this sub-kset triggers a KOBJ_ADD uevent. The uevent walk finds scx_kset and calls scx_uevent() with the sub-kset's kobject. scx_uevent() unconditionally uses container_of() to cast the incoming kobject to struct scx_sched, producing a wild pointer when the kobject belongs to the kset itself rather than a scheduler instance. Accessing sch->ops.name through this pointer causes a KASAN slab-out-of-bounds read: BUG: KASAN: slab-out-of-bounds in string+0x3b6/0x4c0 Read of size 1 at addr ffff888004d04348 by task scx_enable_help/748 Call Trace: string+0x3b6/0x4c0 vsnprintf+0x3ec/0x1550 add_uevent_var+0x160/0x3a0 scx_uevent+0x22/0x30 kobject_uevent_env+0x5dc/0x1730 kset_register+0x192/0x280 scx_alloc_and_add_sched+0x130d/0x1c60 ... Fix this by checking the kobject's ktype against scx_ktype before performing the cast, and returning 0 for non-matching kobjects. Tested with vng and scx_qmap without triggering any KASAN errors. Fixes: ebeca1f930ea ("sched_ext: Introduce cgroup sub-sched support") Signed-off-by: Cheng-Yang Chou <yphbchou0911@gmail.com> Signed-off-by: Tejun Heo <tj@kernel.org>
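A hedged sketch of the guard described above, using the symbol names from the commit text; the function body is illustrative, not the verbatim fix:

    static int scx_uevent(const struct kobject *kobj,
                          struct kobj_uevent_env *env)
    {
        struct scx_sched *sch;

        /* The sub-kset's own kobject also reaches this callback
         * through scx_kset; only scheduler instances may be cast
         * with container_of(). */
        if (get_ktype(kobj) != &scx_ktype)
            return 0;

        sch = container_of(kobj, struct scx_sched, kobj);
        return add_uevent_var(env, "SCXOPS=%s", sch->ops.name);
    }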
2026-03-23signal: update outdated comment for removed freezable_schedule()Kexin Sun
The function freezable_schedule() was removed in commit f5d39b020809 ("freezer,sched: Rewrite core freezer logic"), which rewrote the freezer to use a dedicated TASK_FROZEN state instead. do_signal_stop() and ptrace_stop() no longer call freezable_schedule(); they now set TASK_STOPPED/TASK_TRACED and the freezer handles those states directly via TASK_FROZEN. Update the comment to reflect the current mechanism. Assisted-by: unnamed:deepseek-v3.2 coccinelle Signed-off-by: Kexin Sun <kexinsun@smail.nju.edu.cn> Link: https://patch.msgid.link/20260321105927.7979-1-kexinsun@smail.nju.edu.cn Signed-off-by: Christian Brauner <brauner@kernel.org>
2026-03-23PM: sleep: Drop spurious WARN_ON() from pm_restore_gfp_mask()Youngjun Park
Commit 35e4a69b2003f ("PM: sleep: Allow pm_restrict_gfp_mask() stacking") introduced refcount-based GFP mask management that warns when pm_restore_gfp_mask() is called with saved_gfp_count == 0. Some hibernation paths call pm_restore_gfp_mask() defensively where the GFP mask may or may not be restricted depending on the execution path. For example, the uswsusp interface invokes it in SNAPSHOT_CREATE_IMAGE, SNAPSHOT_UNFREEZE, and snapshot_release(). Before the stacking change this was a silent no-op; it now triggers a spurious WARNING. Remove the WARN_ON() wrapper from the !saved_gfp_count check while retaining the check itself, so that defensive calls remain harmless without producing false warnings. Fixes: 35e4a69b2003f ("PM: sleep: Allow pm_restrict_gfp_mask() stacking") Signed-off-by: Youngjun Park <youngjun.park@lge.com> [ rjw: Subject tweak ] Link: https://patch.msgid.link/20260322120528.750178-1-youngjun.park@lge.com Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
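A minimal sketch of the resulting logic, assuming the saved_gfp_count refcount named in the commit text; illustrative, not the verbatim kernel code:

    void pm_restore_gfp_mask(void)
    {
        /* Defensive callers may arrive with no prior restriction;
         * treat that as a harmless no-op instead of a WARN_ON(). */
        if (!saved_gfp_count)
            return;

        if (--saved_gfp_count == 0)
            gfp_allowed_mask = saved_gfp_mask;
    }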
2026-03-23PM: hibernate: Drain trailing zero pages on userspace restoreAlberto Garcia
Commit 005e8dddd497 ("PM: hibernate: don't store zero pages in the image file") added an optimization to skip zero-filled pages in the hibernation image. On restore, zero pages are handled internally by snapshot_write_next() in a loop that processes them without returning to the caller. With the userspace restore interface, writing the last non-zero page to /dev/snapshot is followed by the SNAPSHOT_ATOMIC_RESTORE ioctl. At this point there are no more calls to snapshot_write_next() so any trailing zero pages are not processed, snapshot_image_loaded() fails because handle->cur is smaller than expected, the ioctl returns -EPERM and the image is not restored. The in-kernel restore path is not affected by this because the loop in load_image() in swap.c calls snapshot_write_next() until it returns 0. It is this final call that drains any trailing zero pages. Fixed by calling snapshot_write_next() in snapshot_write_finalize(), giving the kernel the chance to drain any trailing zero pages. Fixes: 005e8dddd497 ("PM: hibernate: don't store zero pages in the image file") Signed-off-by: Alberto Garcia <berto@igalia.com> Acked-by: Brian Geffon <bgeffon@google.com> Link: https://patch.msgid.link/ef5a7c5e3e3dbd17dcb20efaa0c53a47a23498bb.1773075892.git.berto@igalia.com Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
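One possible shape of the fix, per the description above (illustrative only): in snapshot_write_finalize(), give snapshot_write_next() a final chance to consume trailing zero pages before the loaded check runs:

    /* Trailing zero pages are consumed inside snapshot_write_next()
     * without returning to the caller, so one more call drains them
     * before snapshot_image_loaded() checks handle->cur. */
    snapshot_write_next(handle);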
2026-03-23tracing: Adjust cmd_check_undefined to show unexpected undefined symbolsNathan Chancellor
When the check_undefined command in kernel/trace/Makefile fails, there is no output, making it hard to understand why the build failed. Capture the output of the $(NM) + grep command and print it when failing to make it clearer what the problem is. Fixes: a717943d8ecc ("tracing: Check for undefined symbols in simple_ring_buffer") Signed-off-by: Nathan Chancellor <nathan@kernel.org> Reviewed-by: Vincent Donnefort <vdonnefort@google.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Link: https://patch.msgid.link/20260320-cmd_check_undefined-verbose-v1-1-54fc5b061f94@kernel.org Signed-off-by: Marc Zyngier <maz@kernel.org>
2026-03-22Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf 7.0-rc5Alexei Starovoitov
Cross-merge BPF and other fixes after downstream PR. Minor conflicts in: tools/testing/selftests/bpf/progs/exceptions_fail.c tools/testing/selftests/bpf/progs/verifier_bounds.c Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2026-03-22sched_ext: Use irq_work_queue_on() in schedule_deferred()Tejun Heo
schedule_deferred() uses irq_work_queue() which always queues on the calling CPU. The deferred work can run from any CPU correctly, and the _locked() path already processes remote rqs from the calling CPU. However, when falling through to the irq_work path, queuing on the target CPU is preferable as the work can run sooner via IPI delivery rather than waiting for the calling CPU to re-enable IRQs. Currently, only reenqueue operations use this path - either BPF-initiated reenqueue targeting a remote rq, or IMMED reenqueue when the target CPU is busy running userspace (not in balance or wakeup, so the _locked() fast paths aren't available). Use irq_work_queue_on() to target the owning CPU. This improves IMMED reenqueue latency when tasks are dispatched to remote local DSQs. Testing on a 24-CPU AMD Ryzen 3900X with scx_qmap -I -F 50 (ALWAYS_ENQ_IMMED, every 50th enqueue forced to prev_cpu's local DSQ) under heavy mixed load (2x CPU oversubscription, yield and context-switch pressure, SCHED_FIFO bursts, periodic fork storms, mixed nice levels, C-states disabled), measuring local DSQ residence time (insert to remove) over 5 x 120s runs (~1.2M tasks per set): >128us outliers: 71 -> 39 (-45%) >256us outliers: 59 -> 36 (-39%) Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Emil Tsalapatis <emil@etsalapatis.com>
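A sketch of the queueing change described above; the scx_rq field name is an assumption based on the commit text, not verified against the source:

    /* Queue on the rq's owning CPU so the deferred work can run via
     * IPI instead of waiting for this CPU to re-enable IRQs. */
    irq_work_queue_on(&rq->scx.deferred_irq_work, cpu_of(rq));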
2026-03-22sched_ext: Guard cpu_smt_mask() with CONFIG_SCHED_SMTAndrea Righi
Wrap cpu_smt_mask() usage with CONFIG_SCHED_SMT to avoid build failures on kernels built without SMT support. Fixes: 2197cecdb02c ("sched_ext: idle: Prioritize idle SMT sibling") Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202603221422.XIueJOE9-lkp@intel.com/ Signed-off-by: Andrea Righi <arighi@nvidia.com> Reviewed-by: Cheng-Yang Chou <yphbchou0911@gmail.com> Signed-off-by: Tejun Heo <tj@kernel.org>
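A minimal sketch of the guard; the idle-pick helper here is hypothetical:

    #ifdef CONFIG_SCHED_SMT
        /* cpu_smt_mask() only exists with CONFIG_SCHED_SMT. */
        if (sched_smt_active()) {
            s32 sibling = pick_idle_cpu(cpu_smt_mask(prev_cpu));

            if (sibling >= 0)
                return sibling;
        }
    #endif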
2026-03-22sched_ext: Fix build errors and unused label warning in non-cgroup configsCheng-Yang Chou
When building with SCHED_CLASS_EXT=y but CGROUPS=n, clang reports errors for undeclared cgroup_put() and cgroup_get() calls, and a warning for the unused err_stop_helper label. EXT_SUB_SCHED is def_bool y depending only on SCHED_CLASS_EXT, but it fundamentally requires cgroups (cgroup_path, cgroup_get, cgroup_put, cgroup_id, etc.). Add the missing CGROUPS dependency to EXT_SUB_SCHED in init/Kconfig. Guard cgroup_put() and cgroup_get() in the common paths with: #if defined(CONFIG_EXT_GROUP_SCHED) || defined(CONFIG_EXT_SUB_SCHED) Guard the err_stop_helper label with #ifdef CONFIG_EXT_SUB_SCHED since all gotos targeting it are inside that same ifdef block. Tested with both CGROUPS enabled and disabled. Fixes: ebeca1f930ea ("sched_ext: Introduce cgroup sub-sched support") Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202603210903.IrKhPd6k-lkp@intel.com/ Signed-off-by: Cheng-Yang Chou <yphbchou0911@gmail.com> Acked-by: Andrea Righi <arighi@nvidia.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2026-03-22Merge tag 'bpf-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpfLinus Torvalds
Pull bpf fixes from Alexei Starovoitov: - Fix how linked registers track zero extension of subregisters (Daniel Borkmann) - Fix unsound scalar fork for OR instructions (Daniel Wade) - Fix exception exit lock check for subprogs (Ihor Solodrai) - Fix undefined behavior in interpreter for SDIV/SMOD instructions (Jenny Guanni Qu) - Release module's BTF when module is unloaded (Kumar Kartikeya Dwivedi) - Fix constant blinding for PROBE_MEM32 instructions (Sachin Kumar) - Reset register ID for END instructions to prevent incorrect value tracking (Yazhou Tang) * tag 'bpf-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf: selftests/bpf: Add a test cases for sync_linked_regs regarding zext propagation bpf: Fix sync_linked_regs regarding BPF_ADD_CONST32 zext propagation selftests/bpf: Add tests for maybe_fork_scalars() OR vs AND handling bpf: Fix unsound scalar forking in maybe_fork_scalars() for BPF_OR selftests/bpf: Add tests for sdiv32/smod32 with INT_MIN dividend bpf: Fix undefined behavior in interpreter sdiv/smod for INT_MIN selftests/bpf: Add tests for bpf_throw lock leak from subprogs bpf: Fix exception exit lock checking for subprogs bpf: Release module BTF IDR before module unload selftests/bpf: Fix pkg-config call on static builds bpf: Fix constant blinding for PROBE_MEM32 stores selftests/bpf: Add test for BPF_END register ID reset bpf: Reset register ID for BPF_END value tracking
2026-03-22Merge tag 'trace-v7.0-rc4' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace Pull tracing fixes from Steven Rostedt: - Revert "tracing: Remove pid in task_rename tracing output" A change was made to remove the pid field from the task_rename event because it was thought that it was always done for the current task and recording the pid would be redundant. This turned out to be incorrect: there are a few corner cases where this is not true, and it caused some regressions in tooling. - Fix the reading from user space for migration The reading of user space uses a seq lock type of logic where it uses a per-cpu temporary buffer and disables migration, then enables preemption, does the copy from user space, disables preemption, enables migration and checks if there were any schedule switches while preemption was enabled. If there was a context switch, then it is considered that the per-cpu buffer could be corrupted and it tries again. There's a protection check: if it takes a hundred tries, a warning is issued and it exits out to prevent a live lock. This was triggered because the task was selected by the load balancer to be migrated to another CPU; every time preemption was enabled, the migration task would schedule in, try to migrate the task, fail because migration was disabled, and let it run again. This caused the scheduler to schedule out the task every time it enabled preemption and made the loop never exit (until the 100 iteration test triggered). Fix this by enabling and disabling preemption and keeping migration enabled if the reading from user space needs to be done again. This will let the migration thread migrate the task and the copy from user space will likely pass on the next iteration. - Fix trace_marker copy option freeing The "copy_trace_marker" option allows a tracing instance to get a copy of a write to the trace_marker file of the top level instance. This is managed by a link list protected by RCU. When an instance is removed, a check is made if the option is set, and if so synchronize_rcu() is called. The problem is that the iteration that resets all the flags to what they were when the instance was created (to perform clean ups) was done before the check of the copy_trace_marker option, and since that option was already cleared, the synchronize_rcu() was never called. Move the clearing of all the flags to after the check of copy_trace_marker so that the option is still set if it was before and the synchronization is performed. - Fix entries setting when validating the persistent ring buffer When validating the persistent ring buffer on boot up, the number of events per sub-buffer is added to the sub-buffer meta page. The validator was updating cpu_buffer->head_page (the first sub-buffer of the per-cpu buffer) and not the "head_page" variable that was iterating the sub-buffers. This was causing the first sub-buffer to be assigned the entries for each sub-buffer and not the sub-buffer that was supposed to be updated. - Use "hash" value to update the direct callers When updating the ftrace direct callers, it assigned a temporary callback to all the callback functions of the ftrace ops and not just the functions represented by the passed in hash. This causes an unnecessary slow down of the functions of the ftrace_ops that are not being modified. Only update the functions that are going to be modified to call the ftrace loop function so that the update can be made on those functions.
* tag 'trace-v7.0-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace: ftrace: Use hash argument for tmp_ops in update_ftrace_direct_mod ring-buffer: Fix to update per-subbuf entries of persistent ring buffer tracing: Fix trace_marker copy link list updates tracing: Fix failure to read user space from system call trace events tracing: Revert "tracing: Remove pid in task_rename tracing output"
2026-03-22Merge tag 'perf-urgent-2026-03-22' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull perf fixes from Ingo Molnar: - Fix a PMU driver crash on AMD EPYC systems, caused by a race condition in x86_pmu_enable() - Fix a possible counter-initialization bug in x86_pmu_enable() - Fix a counter inheritance bug in inherit_event() and __perf_event_read() - Fix an Intel PMU driver branch constraints handling bug found by UBSAN - Fix the Intel PMU driver's new Off-Module Response (OMR) support code for Diamond Rapids / Nova lake, to fix a snoop information parsing bug * tag 'perf-urgent-2026-03-22' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: perf/x86/intel: Fix OMR snoop information parsing issues perf/x86/intel: Add missing branch counters constraint apply perf: Make sure to use pmu_ctx->pmu for groups x86/perf: Make sure to program the counter value for stopped events on migration perf/x86: Move event pointer setup earlier in x86_pmu_enable()
2026-03-21workqueue: Fix false positive stall reportsSong Liu
On weakly ordered architectures (e.g., arm64), the lockless check in wq_watchdog_timer_fn() can observe a reordering between the worklist insertion and the last_progress_ts update. Specifically, the watchdog can see a non-empty worklist (from a list_add) while reading a stale last_progress_ts value, causing a false positive stall report. This was confirmed by reading pool->last_progress_ts again after holding pool->lock in wq_watchdog_timer_fn(): workqueue watchdog: pool 7 false positive detected! lockless_ts=4784580465 locked_ts=4785033728 diff=453263ms worklist_empty=0 To avoid slowing down the hot path (queue_work, etc.), recheck last_progress_ts with pool->lock held. This will eliminate the false positive with minimal overhead. Remove two extra empty lines in wq_watchdog_timer_fn() while we are at it. Fixes: 82607adcf9cd ("workqueue: implement lockup detector") Cc: stable@vger.kernel.org # v4.5+ Assisted-by: claude-code:claude-opus-4-6 Signed-off-by: Song Liu <song@kernel.org> Signed-off-by: Tejun Heo <tj@kernel.org>
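A hedged sketch of the recheck-under-lock pattern described above; field names follow the commit, the body is simplified:

    unsigned long ts = READ_ONCE(pool->last_progress_ts);

    if (!list_empty(&pool->worklist) &&
        time_after(jiffies, ts + thresh)) {
        /* On weakly ordered CPUs the list_add can become visible
         * before the last_progress_ts store, so the lockless read
         * may be stale.  Recheck under pool->lock; this only runs
         * on the (rare) would-be-report path. */
        raw_spin_lock(&pool->lock);
        ts = pool->last_progress_ts;
        raw_spin_unlock(&pool->lock);

        if (time_after(jiffies, ts + thresh))
            report_stall(pool); /* hypothetical reporting helper */
    }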
2026-03-22padata: Put CPU offline callback in ONLINE section to allow failureDaniel Jordan
syzbot reported the following warning: DEAD callback error for CPU1 WARNING: kernel/cpu.c:1463 at _cpu_down+0x759/0x1020 kernel/cpu.c:1463, CPU#0: syz.0.1960/14614 at commit 4ae12d8bd9a8 ("Merge tag 'kbuild-fixes-7.0-2' of git://git.kernel.org/pub/scm/linux/kernel/git/kbuild/linux") which tglx traced to padata_cpu_dead() given it's the only sub-CPUHP_TEARDOWN_CPU callback that returns an error. Failure isn't allowed in hotplug states before CPUHP_TEARDOWN_CPU so move the CPU offline callback to the ONLINE section where failure is possible. Fixes: 894c9ef9780c ("padata: validate cpumask without removed CPU during offline") Reported-by: syzbot+123e1b70473ce213f3af@syzkaller.appspotmail.com Closes: https://lore.kernel.org/all/69af0a05.050a0220.310d8.002f.GAE@google.com/ Debugged-by: Thomas Gleixner <tglx@kernel.org> Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-21sched_ext: Fix inconsistent NUMA node lookup in scx_select_cpu_dfl()Cheng-Yang Chou
In the WAKE_SYNC path of scx_select_cpu_dfl(), waker_node was computed with cpu_to_node(), while node (for prev_cpu) was computed with scx_cpu_node_if_enabled(). When scx_builtin_idle_per_node is disabled, idle_cpumask(waker_node) is called with a real node ID even though per-node idle tracking is disabled, resulting in undefined behavior. Fix by using scx_cpu_node_if_enabled() for waker_node as well, ensuring both variables are computed consistently. Fixes: 48849271e6611 ("sched_ext: idle: Per-node idle cpumasks") Cc: stable@vger.kernel.org # v6.15+ Signed-off-by: Cheng-Yang Chou <yphbchou0911@gmail.com> Reviewed-by: Andrea Righi <arighi@nvidia.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2026-03-21ftrace: Use hash argument for tmp_ops in update_ftrace_direct_modJiri Olsa
The modify logic registers a temporary ftrace_ops object (tmp_ops) to trigger the slow path for all direct callers to be able to safely modify attached addresses. At the moment we use ops->func_hash for the tmp_ops filter, which represents all the system's attachments. It's faster to use just the passed hash filter, which contains only the modified sites and is always a subset of the ops->func_hash. Cc: Alexei Starovoitov <ast@kernel.org> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Andrii Nakryiko <andrii@kernel.org> Cc: Menglong Dong <menglong8.dong@gmail.com> Cc: Song Liu <song@kernel.org> Link: https://patch.msgid.link/20260312123738.129926-1-jolsa@kernel.org Fixes: e93672f770d7 ("ftrace: Add update_ftrace_direct_mod function") Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2026-03-21ring-buffer: Fix to update per-subbuf entries of persistent ring bufferMasami Hiramatsu (Google)
Since the validation loop in rb_meta_validate_events() updates the same cpu_buffer->head_page->entries, the other subbuf entries are not updated. Fix to use head_page to update the entries field, since it is the cursor in this loop. Cc: stable@vger.kernel.org Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Cc: Ian Rogers <irogers@google.com> Fixes: 5f3b6e839f3c ("ring-buffer: Validate boot range memory events") Link: https://patch.msgid.link/177391153882.193994.17158784065013676533.stgit@mhiramat.tok.corp.google.com Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
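A self-contained illustration of the cursor-vs-first-page bug class described above (simplified types, not the ring-buffer code itself):

    #include <stdio.h>

    struct subbuf { int entries; };

    int main(void)
    {
        struct subbuf bufs[4] = { { 0 } };

        for (int i = 0; i < 4; i++) {
            struct subbuf *page = &bufs[i]; /* the loop cursor */
            int entries = i + 1;  /* stand-in for counted events */

            /* Buggy form: bufs[0].entries = entries; every pass
             * overwrites the first sub-buffer.  Fixed form: */
            page->entries = entries;
        }

        for (int i = 0; i < 4; i++)
            printf("subbuf %d: %d entries\n", i, bufs[i].entries);
        return 0;
    }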
2026-03-21tracing: Fix trace_marker copy link list updatesSteven Rostedt
When the "copy_trace_marker" option is enabled for an instance, anything written into /sys/kernel/tracing/trace_marker is also copied into that instances buffer. When the option is set, that instance's trace_array descriptor is added to the marker_copies link list. This list is protected by RCU, as all iterations uses an RCU protected list traversal. When the instance is deleted, all the flags that were enabled are cleared. This also clears the copy_trace_marker flag and removes the trace_array descriptor from the list. The issue is after the flags are called, a direct call to update_marker_trace() is performed to clear the flag. This function returns true if the state of the flag changed and false otherwise. If it returns true here, synchronize_rcu() is called to make sure all readers see that its removed from the list. But since the flag was already cleared, the state does not change and the synchronization is never called, leaving a possible UAF bug. Move the clearing of all flags below the updating of the copy_trace_marker option which then makes sure the synchronization is performed. Also use the flag for checking the state in update_marker_trace() instead of looking at if the list is empty. Cc: stable@vger.kernel.org Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Link: https://patch.msgid.link/20260318185512.1b6c7db4@gandalf.local.home Fixes: 7b382efd5e8a ("tracing: Allow the top level trace_marker to write into another instances") Reported-by: Sasha Levin <sashal@kernel.org> Closes: https://lore.kernel.org/all/20260225133122.237275-1-sashal@kernel.org/ Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
2026-03-21tracing: Fix failure to read user space from system call trace eventsSteven Rostedt
The system call trace events call trace_user_fault_read() to read the user space part of some system calls. This is done by grabbing a per-cpu buffer, disabling migration, enabling preemption, calling copy_from_user(), disabling preemption, enabling migration and checking if the task was preempted while preemption was enabled. If it was, the buffer is considered corrupted and it tries again. There's a safety mechanism that will fail out of this loop if it fails 100 times (with a warning). That warning message was triggered in some pi_futex stress tests. Enabling the sched_switch trace event and traceoff_on_warning, showed the problem: pi_mutex_hammer-1375 [006] d..21 138.981648: sched_switch: prev_comm=pi_mutex_hammer prev_pid=1375 prev_prio=95 prev_state=R+ ==> next_comm=migration/6 next_pid=47 next_prio=0 migration/6-47 [006] d..2. 138.981651: sched_switch: prev_comm=migration/6 prev_pid=47 prev_prio=0 prev_state=S ==> next_comm=pi_mutex_hammer next_pid=1375 next_prio=95 pi_mutex_hammer-1375 [006] d..21 138.981656: sched_switch: prev_comm=pi_mutex_hammer prev_pid=1375 prev_prio=95 prev_state=R+ ==> next_comm=migration/6 next_pid=47 next_prio=0 migration/6-47 [006] d..2. 138.981659: sched_switch: prev_comm=migration/6 prev_pid=47 prev_prio=0 prev_state=S ==> next_comm=pi_mutex_hammer next_pid=1375 next_prio=95 pi_mutex_hammer-1375 [006] d..21 138.981664: sched_switch: prev_comm=pi_mutex_hammer prev_pid=1375 prev_prio=95 prev_state=R+ ==> next_comm=migration/6 next_pid=47 next_prio=0 migration/6-47 [006] d..2. 138.981667: sched_switch: prev_comm=migration/6 prev_pid=47 prev_prio=0 prev_state=S ==> next_comm=pi_mutex_hammer next_pid=1375 next_prio=95 pi_mutex_hammer-1375 [006] d..21 138.981671: sched_switch: prev_comm=pi_mutex_hammer prev_pid=1375 prev_prio=95 prev_state=R+ ==> next_comm=migration/6 next_pid=47 next_prio=0 migration/6-47 [006] d..2. 138.981675: sched_switch: prev_comm=migration/6 prev_pid=47 prev_prio=0 prev_state=S ==> next_comm=pi_mutex_hammer next_pid=1375 next_prio=95 pi_mutex_hammer-1375 [006] d..21 138.981679: sched_switch: prev_comm=pi_mutex_hammer prev_pid=1375 prev_prio=95 prev_state=R+ ==> next_comm=migration/6 next_pid=47 next_prio=0 migration/6-47 [006] d..2. 138.981682: sched_switch: prev_comm=migration/6 prev_pid=47 prev_prio=0 prev_state=S ==> next_comm=pi_mutex_hammer next_pid=1375 next_prio=95 pi_mutex_hammer-1375 [006] d..21 138.981687: sched_switch: prev_comm=pi_mutex_hammer prev_pid=1375 prev_prio=95 prev_state=R+ ==> next_comm=migration/6 next_pid=47 next_prio=0 migration/6-47 [006] d..2. 138.981690: sched_switch: prev_comm=migration/6 prev_pid=47 prev_prio=0 prev_state=S ==> next_comm=pi_mutex_hammer next_pid=1375 next_prio=95 pi_mutex_hammer-1375 [006] d..21 138.981695: sched_switch: prev_comm=pi_mutex_hammer prev_pid=1375 prev_prio=95 prev_state=R+ ==> next_comm=migration/6 next_pid=47 next_prio=0 migration/6-47 [006] d..2. 138.981698: sched_switch: prev_comm=migration/6 prev_pid=47 prev_prio=0 prev_state=S ==> next_comm=pi_mutex_hammer next_pid=1375 next_prio=95 pi_mutex_hammer-1375 [006] d..21 138.981703: sched_switch: prev_comm=pi_mutex_hammer prev_pid=1375 prev_prio=95 prev_state=R+ ==> next_comm=migration/6 next_pid=47 next_prio=0 migration/6-47 [006] d..2. 
138.981706: sched_switch: prev_comm=migration/6 prev_pid=47 prev_prio=0 prev_state=S ==> next_comm=pi_mutex_hammer next_pid=1375 next_prio=95 pi_mutex_hammer-1375 [006] d..21 138.981711: sched_switch: prev_comm=pi_mutex_hammer prev_pid=1375 prev_prio=95 prev_state=R+ ==> next_comm=migration/6 next_pid=47 next_prio=0 migration/6-47 [006] d..2. 138.981714: sched_switch: prev_comm=migration/6 prev_pid=47 prev_prio=0 prev_state=S ==> next_comm=pi_mutex_hammer next_pid=1375 next_prio=95 pi_mutex_hammer-1375 [006] d..21 138.981719: sched_switch: prev_comm=pi_mutex_hammer prev_pid=1375 prev_prio=95 prev_state=R+ ==> next_comm=migration/6 next_pid=47 next_prio=0 migration/6-47 [006] d..2. 138.981722: sched_switch: prev_comm=migration/6 prev_pid=47 prev_prio=0 prev_state=S ==> next_comm=pi_mutex_hammer next_pid=1375 next_prio=95 pi_mutex_hammer-1375 [006] d..21 138.981727: sched_switch: prev_comm=pi_mutex_hammer prev_pid=1375 prev_prio=95 prev_state=R+ ==> next_comm=migration/6 next_pid=47 next_prio=0 migration/6-47 [006] d..2. 138.981730: sched_switch: prev_comm=migration/6 prev_pid=47 prev_prio=0 prev_state=S ==> next_comm=pi_mutex_hammer next_pid=1375 next_prio=95 pi_mutex_hammer-1375 [006] d..21 138.981735: sched_switch: prev_comm=pi_mutex_hammer prev_pid=1375 prev_prio=95 prev_state=R+ ==> next_comm=migration/6 next_pid=47 next_prio=0 migration/6-47 [006] d..2. 138.981738: sched_switch: prev_comm=migration/6 prev_pid=47 prev_prio=0 prev_state=S ==> next_comm=pi_mutex_hammer next_pid=1375 next_prio=95 What happened was the task 1375 was flagged to be migrated. When preemption was enabled, the migration thread woke up to migrate that task, but failed because migration for that task was disabled. This caused the loop to fail to exit because the task scheduled out while trying to read user space. Every time the task enabled preemption the migration thread would schedule in, try to migrate the task, fail and let the task continue. But because the loop would only enable preemption with migration disabled, it would always fail because each time it enabled preemption to read user space, the migration thread would try to migrate it. To solve this, when the loop fails to read user space without being scheduled out, enable and disable preemption with migration enabled. This will allow the migration task to successfully migrate the task and the next loop iteration should succeed in reading user space without being scheduled out. Cc: stable@vger.kernel.org Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Link: https://patch.msgid.link/20260316130734.1858a998@gandalf.local.home Fixes: 64cf7d058a005 ("tracing: Have trace_marker use per-cpu data to read user space") Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
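A hedged sketch of the retry loop after the fix, reconstructed from the description above; the helper names are hypothetical, not the kernel's:

    for (i = 0; i < MAX_FAULT_RETRIES; i++) {
        cnt = context_switch_count(); /* hypothetical snapshot */

        preempt_enable();
        ret = copy_from_user(buf, uptr, size);
        preempt_disable();

        /* No context switch during the copy: buffer is intact. */
        if (!ret && cnt == context_switch_count())
            break;

        /* A pending migration keeps preempting the task.  Open a
         * window with both migration and preemption enabled so the
         * migration thread can move it, then retry the copy. */
        migrate_enable();
        preempt_enable();
        preempt_disable();
        migrate_disable();
    }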
2026-03-21bpf: Fix sync_linked_regs regarding BPF_ADD_CONST32 zext propagationDaniel Borkmann
Jenny reported that in sync_linked_regs() the BPF_ADD_CONST32 flag is checked on known_reg (the register narrowed by a conditional branch) instead of reg (the linked target register created by an alu32 operation). Example case with reg: 1. r6 = bpf_get_prandom_u32() 2. r7 = r6 (linked, same id) 3. w7 += 5 (alu32 -- r7 gets BPF_ADD_CONST32, zero-extended by CPU) 4. if w6 < 0xFFFFFFFC goto safe (narrows r6 to [0xFFFFFFFC, 0xFFFFFFFF]) 5. sync_linked_regs() propagates to r7 but does NOT call zext_32_to_64() 6. Verifier thinks r7 is [0x100000001, 0x100000004] instead of [1, 4] Since known_reg above does not have BPF_ADD_CONST32 set, zext_32_to_64() is never called on alu32-derived linked registers. This causes the verifier to track incorrect 64-bit bounds, while the CPU correctly zero-extends the 32-bit result. The code checking known_reg->id was correct, however (see scalars_alu32_wrap selftest case), but the real fix needs to handle both directions - zext propagation should be done when either register has BPF_ADD_CONST32, since the linked relationship involves a 32-bit operation regardless of which side has the flag. Example case with known_reg (exercised also by scalars_alu32_wrap): 1. r1 = r0; w1 += 0x100 (alu32 -- r1 gets BPF_ADD_CONST32) 2. if r1 > 0x80 - known_reg = r1 (has BPF_ADD_CONST32), reg = r0 (doesn't) Hence, fix it by checking for (reg->id | known_reg->id) & BPF_ADD_CONST32. Moreover, sync_linked_regs() also has a soundness issue when two linked registers used different ALU widths: one with BPF_ADD_CONST32 and the other with BPF_ADD_CONST64. The delta relationship between linked registers assumes the same arithmetic width though. When one register went through alu32 (CPU zero-extends the 32-bit result) and the other went through alu64 (no zero-extension), the propagation produces incorrect bounds. Example: r6 = bpf_get_prandom_u32() // fully unknown if r6 >= 0x100000000 goto out // constrain r6 to [0, U32_MAX] r7 = r6 w7 += 1 // alu32: r7.id = N | BPF_ADD_CONST32 r8 = r6 r8 += 2 // alu64: r8.id = N | BPF_ADD_CONST64 if r7 < 0xFFFFFFFF goto out // narrows r7 to [0xFFFFFFFF, 0xFFFFFFFF] At the branch on r7, sync_linked_regs() runs with known_reg=r7 (BPF_ADD_CONST32) and reg=r8 (BPF_ADD_CONST64). The delta path computes: r8 = r7 + (delta_r8 - delta_r7) = 0xFFFFFFFF + (2 - 1) = 0x100000000 Then, because known_reg->id has BPF_ADD_CONST32, zext_32_to_64(r8) is called, truncating r8 to [0, 0]. But r8 used a 64-bit ALU op -- the CPU does NOT zero-extend it. The actual CPU value of r8 is 0xFFFFFFFE + 2 = 0x100000000, not 0. The verifier now underestimates r8's 64-bit bounds, which is a soundness violation. Fix sync_linked_regs() by skipping propagation when the two registers have mixed ALU widths (one BPF_ADD_CONST32, the other BPF_ADD_CONST64). Lastly, fix regsafe() used for path pruning: the existing checks used "& BPF_ADD_CONST" to test for offset linkage, which treated BPF_ADD_CONST32 and BPF_ADD_CONST64 as equivalent. Fixes: 7a433e519364 ("bpf: Support negative offsets, BPF_SUB, and alu32 for linked register tracking") Reported-by: Jenny Guanni Qu <qguanni@gmail.com> Co-developed-by: Puranjay Mohan <puranjay@kernel.org> Signed-off-by: Puranjay Mohan <puranjay@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20260319211507.213816-1-daniel@iogearbox.net Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2026-03-21bpf: Fix unsound scalar forking in maybe_fork_scalars() for BPF_ORDaniel Wade
maybe_fork_scalars() is called for both BPF_AND and BPF_OR when the source operand is a constant. When dst has signed range [-1, 0], it forks the verifier state: the pushed path gets dst = 0, the current path gets dst = -1. For BPF_AND this is correct: 0 & K == 0. For BPF_OR this is wrong: 0 | K == K, not 0. The pushed path therefore tracks dst as 0 when the runtime value is K, producing an exploitable verifier/runtime divergence that allows out-of-bounds map access. Fix this by passing env->insn_idx (instead of env->insn_idx + 1) to push_stack(), so the pushed path re-executes the ALU instruction with dst = 0 and naturally computes the correct result for any opcode. Fixes: bffacdb80b93 ("bpf: Recognize special arithmetic shift in the verifier") Signed-off-by: Daniel Wade <danjwade95@gmail.com> Reviewed-by: Amery Hung <ameryhung@gmail.com> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Link: https://lore.kernel.org/r/20260314021521.128361-2-danjwade95@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
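A tiny self-contained demonstration of the arithmetic behind the fix:

    #include <stdio.h>

    int main(void)
    {
        int K = 0x10;

        /* Forking dst from [-1, 0] and finalizing the pushed path
         * as 0 is sound for AND but not for OR, where the runtime
         * value on that path is K, not 0. */
        printf("0 & K = %#x (fork-to-0 sound)\n", 0 & K);
        printf("0 | K = %#x (fork-to-0 unsound)\n", 0 | K);
        return 0;
    }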
2026-03-21bpf: Fix undefined behavior in interpreter sdiv/smod for INT_MINJenny Guanni Qu
The BPF interpreter's signed 32-bit division and modulo handlers use the kernel abs() macro on s32 operands. The abs() macro documentation (include/linux/math.h) explicitly states the result is undefined when the input is the type minimum. When DST contains S32_MIN (0x80000000), abs((s32)DST) triggers undefined behavior and returns S32_MIN unchanged on arm64/x86. This value is then sign-extended to u64 as 0xFFFFFFFF80000000, causing do_div() to compute the wrong result. The verifier's abstract interpretation (scalar32_min_max_sdiv) computes the mathematically correct result for range tracking, creating a verifier/interpreter mismatch that can be exploited for out-of-bounds map value access. Introduce abs_s32() which handles S32_MIN correctly by casting to u32 before negating, avoiding signed overflow entirely. Replace all 8 abs((s32)...) call sites in the interpreter's sdiv32/smod32 handlers. s32 is the only affected case -- the s64 division/modulo handlers do not use abs(). Fixes: ec0e2da95f72 ("bpf: Support new signed div/mod instructions.") Acked-by: Yonghong Song <yonghong.song@linux.dev> Acked-by: Mykyta Yatsenko <yatsenko@meta.com> Signed-off-by: Jenny Guanni Qu <qguanni@gmail.com> Link: https://lore.kernel.org/r/20260311011116.2108005-2-qguanni@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
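A self-contained sketch of the helper described above, assuming the semantics given in the commit text:

    #include <stdint.h>
    #include <stdio.h>

    /* abs() is undefined for the type minimum; negating in the
     * unsigned domain is well defined and yields the magnitude. */
    static uint32_t abs_s32(int32_t v)
    {
        return v < 0 ? -(uint32_t)v : (uint32_t)v;
    }

    int main(void)
    {
        /* abs(INT32_MIN) is UB; abs_s32() returns 0x80000000u,
         * which then behaves correctly as an unsigned dividend. */
        printf("%#x\n", (unsigned)abs_s32(INT32_MIN));
        return 0;
    }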
2026-03-21bpf: Consolidate sleepable checks in check_func_call()Puranjay Mohan
The sleepable context check for global function calls in check_func_call() open-codes the same checks that in_sleepable_context() already performs. Replace the open-coded check with a call to in_sleepable_context() and use non_sleepable_context_description() for the error message, consistent with check_helper_call() and check_kfunc_call(). Note that in_sleepable_context() also checks active_locks, which overlaps with the existing active_locks check above it. However, the two checks serve different purposes: the active_locks check rejects all global function calls while holding a lock (not just sleepable ones), so it must remain as a separate guard. Update the expected error messages in the irq and preempt_lock selftests to match. Acked-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Puranjay Mohan <puranjay@kernel.org> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/r/20260318174327.3151925-4-puranjay@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2026-03-21bpf: Consolidate sleepable checks in check_kfunc_call()Puranjay Mohan
check_kfunc_call() has multiple scattered checks that reject sleepable kfuncs in various non-sleepable contexts (RCU, preempt-disabled, IRQ- disabled). These are the same conditions already checked by in_sleepable_context(), so replace them with a single consolidated check. This also simplifies the preempt lock tracking by flattening the nested if/else structure into a linear chain: preempt_disable increments, preempt_enable checks for underflow and decrements. The sleepable check is kept as a separate block since it is logically distinct from the lock accounting. No functional change since in_sleepable_context() checks all the same state (active_rcu_locks, active_preempt_locks, active_locks, active_irq_id, in_sleepable). Acked-by: Eduard Zingerman <eddyz87@gmail.com> Acked-by: Mykyta Yatsenko <yatsenko@meta.com> Signed-off-by: Puranjay Mohan <puranjay@kernel.org> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/r/20260318174327.3151925-3-puranjay@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2026-03-21bpf: Consolidate sleepable checks in check_helper_call()Puranjay Mohan
check_helper_call() prints the error message for every env->cur_state->active* element when calling a sleepable helper. Consolidate all of them into a single print statement. The check for env->cur_state->active_locks was not part of the removed print statements and will not be triggered with the consolidated print as well because it is checked in do_check() before check_helper_call() is even reached. Acked-by: Mykyta Yatsenko <yatsenko@meta.com> Acked-by: Eduard Zingerman <eddyz87@gmail.com> Signed-off-by: Puranjay Mohan <puranjay@kernel.org> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/r/20260318174327.3151925-2-puranjay@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2026-03-21bpf: Fix exception exit lock checking for subprogsIhor Solodrai
process_bpf_exit_full() passes check_lock = !curframe to check_resource_leak(), which is false in cases when bpf_throw() is called from a static subprog. This makes check_resource_leak() skip validation of active_rcu_locks, active_preempt_locks, and active_irq_id on exception exits from subprogs. At runtime bpf_throw() unwinds the stack via ORC without releasing any user-acquired locks, which may cause various issues as a result. Fix by setting check_lock = true for exception exits regardless of curframe, since exceptions bypass all intermediate frame cleanup. Update the error message prefix to "bpf_throw" for exception exits to distinguish them from normal BPF_EXIT. Fix reject_subprog_with_rcu_read_lock test which was previously passing for the wrong reason. The test program returned directly from the subprog call without closing the RCU section, so the error was triggered by the unclosed RCU lock on normal exit, not by bpf_throw. Update __msg annotations for affected tests to match the new "bpf_throw" error prefix. The spin_lock case is not affected because spin locks are already checked [1] at the call site in do_check_insn() before bpf_throw can run. [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/kernel/bpf/verifier.c?h=v7.0-rc4#n21098 Assisted-by: Claude:claude-opus-4-6 Fixes: f18b03fabaa9 ("bpf: Implement BPF exceptions") Signed-off-by: Ihor Solodrai <ihor.solodrai@linux.dev> Acked-by: Yonghong Song <yonghong.song@linux.dev> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/r/20260320000809.643798-1-ihor.solodrai@linux.dev Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2026-03-21cgroup/dmem: remove region parameter from dmemcg_parse_limitThadeu Lima de Souza Cascardo
dmemcg_parse_limit does not use the region parameter. Remove it. Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@igalia.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2026-03-21sched_ext: idle: Prioritize idle SMT siblingAndrea Righi
In the default built-in idle CPU selection policy, when @prev_cpu is busy and no fully idle core is available, try to place the task on its SMT sibling if that sibling is idle, before searching any other idle CPU in the same LLC. Migration to the sibling is cheap and keeps the task on the same core, preserving L1 cache and reducing wakeup latency. On large SMT systems this appears to consistently boost throughput by roughly 2-3% on CPU-bound workloads (running a number of tasks equal to the number of SMT cores). Cc: Cheng-Yang Chou <yphbchou0911@gmail.com> Signed-off-by: Andrea Righi <arighi@nvidia.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2026-03-21Merge tag 'v7.0-rc4' into timers/core, to resolve conflictIngo Molnar
Resolve conflict between this change in the upstream kernel: 4c652a47722f ("rseq: Mark rseq_arm_slice_extension_timer() __always_inline") ... and this pending change in timers/core: 0e98eb14814e ("entry: Prepare for deferred hrtimer rearming") Signed-off-by: Ingo Molnar <mingo@kernel.org>
2026-03-20PM: hibernate: return -ENODATA if the snapshot image is not loadedAlberto Garcia
snapshot_image_loaded() is used in both the in-kernel and the userspace restore path to ensure that the snapshot image has been completely loaded. However the latter path returns -EPERM in such situations, which is meant for cases where the operation is neither write-only nor ready. This patch updates the check so the returned error code is -ENODATA in both cases. Suggested-by: Brian Geffon <bgeffon@google.com> Signed-off-by: Alberto Garcia <berto@igalia.com> Acked-by: Brian Geffon <bgeffon@google.com> Link: https://patch.msgid.link/8cfda38659c623f5392f3458cb32504ffd556a74.1773075892.git.berto@igalia.com Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2026-03-20pid_namespace: allow opening pid_for_children before init was createdPavel Tikhomirov
This effectively gives us the ability to create the pid namespace init as a child of a process (setns-ed to the pid namespace) different to the process which created the pid namespace itself. Original problem: There is a cool set_tid feature in the clone3() syscall, it allows you to create a process with desired pids on multiple pid namespace levels. Which is useful to restore processes in CRIU for the nested pid namespace case. In the nested container case we can potentially see this kind of pid/user namespace tree: Process ┌─────────┐ User NS0 ──▶ Pid NS0 ──▶ Pid p0 │ │ │ │ │ ▼ ▼ │ │ User NS1 ──▶ Pid NS1 ──▶ Pid p1 │ │ │ │ │ ... ... │ ... │ │ │ │ │ ▼ ▼ │ │ User NSn ──▶ Pid NSn ──▶ Pid pn │ └─────────┘ So to create the "Process" and set pids {p0, p1, ... pn} for it on all pid namespace levels we can use the clone3() syscall set_tid feature, BUT the syscall does not allow you to set pids on pid namespace levels you don't have permission to. So basically you have to be in "User NS0" when creating the "Process" to actually be able to set pids on all levels. It is ok for almost any process, but with pid namespace init this does not work, as currently we can only create pid namespace init and the pid namespace itself simultaneously, so to make "Pid NSn" owned by "User NSn" we have to be in the "User NSn". We can't possibly be in "User NS0" and "User NSn" at the same time, hence the problem. Alternative solution: Yes, for the case of pid namespace init we can use the old and gold /proc/sys/kernel/ns_last_pid interface on the levels lower than n. But it is much more complicated and introduces tons of extra code to do. It would be nice to make the clone3() set_tid interface also applicable to this corner case. Implementation: Now that anyone can setns to the pid namespace before the creation of init, and thus multiple processes can fork children to the pid namespace, it is important that we enforce the first process created is always pid namespace init. (Note that this was done by the previous preparatory patch as a standalone useful change.) We only allow other processes after the init sets pid_namespace->child_reaper. Reviewed-by: Oleg Nesterov <oleg@redhat.com> Acked-by: Andrei Vagin <avagin@google.com> Signed-off-by: Pavel Tikhomirov <ptikhomirov@virtuozzo.com> -- v2: Use *_ONCE for ->child_reaper accesses atomicity, and avoid taking task_list lock for reading it. Rebase to master, and thus remove now excess pidns_ready variable. v3: Separate *_ONCE change and "init is first" checks into separate commits. v5: Add Andrei's review tag. Note: I didn't find anything in copy_process() around setting the ->child_reaper which can influence the pid namespace, so it looks like the pid namespace is fully setup at the point when init sets ->child_reaper to receive more processes. Thus tasklist lock looks excess in pidns_for_children_get()'s ->child_reaper check and it should be safe not to have it in the corresponding check in alloc_pid() (introduced earlier in this series). Link: https://patch.msgid.link/20260318122157.280595-4-ptikhomirov@virtuozzo.com Acked-by: Andrei Vagin <avagin@google.com> Reviewed-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Christian Brauner <brauner@kernel.org>
2026-03-20pid: check init is created first after idr allocPavel Tikhomirov
This moves the condition (tid != 1 && !tmp->child_reaper) to after the idr alloc, so it not only covers the case where the first process in a pid namespace must have pid 1 when clone3(set_tid) requests a wrong pid, but also the case where the idr itself gives a wrong pid for some reason. The latter could have happened before this patch: if, when creating the first process, the alloc_pid()->pidfs_add_pid() code path fails, idr->idr_next is no longer zero and the next process calling alloc_pid() will get 2 as a pid from idr_alloc_cyclic(). Though thanks to the PIDNS_ADDING logic, free_pid() disables further pid allocation in this case, so it does not lead to any real problem. Note: This is also a preparation for the next patch in the series, which will introduce the ability to create init from a task different to the task which created the pid namespace. Needed to make sure that init is always first, even in this new case. -- Suggested-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Oleg Nesterov <oleg@redhat.com> Acked-by: Andrei Vagin <avagin@google.com> Signed-off-by: Pavel Tikhomirov <ptikhomirov@virtuozzo.com> Link: https://patch.msgid.link/20260318122157.280595-3-ptikhomirov@virtuozzo.com v3: Split from main commit. Merge two checks of ->child_reaper into one. v4: Update commit message about PIDNS_ADDING. v5: Add Andrei's review tag. Signed-off-by: Christian Brauner <brauner@kernel.org>
2026-03-20pid_namespace: avoid optimization of accesses to ->child_reaperPavel Tikhomirov
To avoid potential problems related to cpu/compiler optimizations around ->child_reaper, let's use WRITE_ONCE (in addition to the task_list lock) everywhere we write it, and use READ_ONCE where we read it without an explicit lock. Note: It also pairs with the existing READ_ONCE with no lock in nsfs_fh_to_dentry(). Also let's add ASSERT_EXCLUSIVE_WRITER before the write to tell KCSAN that we don't expect any concurrent ->child_reaper modifications, and those must be detected. -- Suggested-by: Oleg Nesterov <oleg@redhat.com> Acked-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Pavel Tikhomirov <ptikhomirov@virtuozzo.com> Link: https://patch.msgid.link/20260318122157.280595-2-ptikhomirov@virtuozzo.com v3: Split from main commit. Add ASSERT_EXCLUSIVE_WRITER. v5: Add one more READ_ONCE for access without lock in free_pid(). Signed-off-by: Christian Brauner <brauner@kernel.org>
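A hedged sketch of the access pattern described above (simplified, not the verbatim kernel code):

    /* Writer, under the tasklist lock: tell KCSAN no concurrent
     * writer is expected, then store with WRITE_ONCE() so lockless
     * readers cannot observe a torn or re-read value. */
    ASSERT_EXCLUSIVE_WRITER(ns->child_reaper);
    WRITE_ONCE(ns->child_reaper, task);

    /* Lockless reader: pair the store with READ_ONCE(). */
    struct task_struct *reaper = READ_ONCE(ns->child_reaper);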
2026-03-20clocksource: Rewrite watchdog code completelyThomas Gleixner
The clocksource watchdog code has over time reached the state of an impenetrable maze of duct tape and staples. The original design, which was made in the context of systems far smaller than today, is based on the assumption that the to be monitored clocksource (TSC) can be trivially compared against a known to be stable clocksource (HPET/ACPI-PM timer). Over the years it turned out that this approach has major flaws: - Long delays between watchdog invocations can result in wrap arounds of the reference clocksource - Scalability of the reference clocksource readout can degrade on large multi-socket systems due to interconnect congestion This was addressed with various heuristics which degraded the accuracy of the watchdog to the point that it fails to detect actual TSC problems on older hardware which exposes slow inter CPU drifts due to firmware manipulating the TSC to hide SMI time. To address this and bring back sanity to the watchdog, rewrite the code completely with a different approach: 1) Restrict the validation against a reference clocksource to the boot CPU, which is usually the CPU/Socket closest to the legacy block which contains the reference source (HPET/ACPI-PM timer). Validate that the reference readout is within a bound latency so that the actual comparison against the TSC stays within 500ppm as long as the clocks are stable. 2) Compare the TSCs of the other CPUs in a round robin fashion against the boot CPU in the same way the TSC synchronization on CPU hotplug works. This still can suffer from delayed reaction of the remote CPU to the SMP function call and the latency of the control variable cache line. But this latency is not affecting correctness. It only affects the accuracy. With low contention the readout latency is in the low nanoseconds range, which detects even slight skews between CPUs. Under high contention this becomes obviously less accurate, but still detects slow skews reliably as it solely relies on subsequent readouts being monotonically increasing. It just can take slightly longer to detect the issue. 3) Rewrite the watchdog test so it tests the various mechanisms one by one and validating the result against the expectation. Signed-off-by: Thomas Gleixner <tglx@kernel.org> Tested-by: Borislav Petkov (AMD) <bp@alien8.de> Tested-by: Daniel J Blueman <daniel@quora.org> Reviewed-by: Jiri Wiesner <jwiesner@suse.de> Reviewed-by: Daniel J Blueman <daniel@quora.org> Link: https://patch.msgid.link/20260123231521.926490888@kernel.org Link: https://patch.msgid.link/87h5qeomm5.ffs@tglx
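A very rough sketch of mechanism 2) above: cross-checking a remote CPU's TSC against the boot CPU via an SMP function call, requiring monotonically increasing readouts. All names are illustrative, not the actual implementation:

    static void read_remote_tsc(void *data)
    {
        *(u64 *)data = rdtsc();
    }

    static bool remote_tsc_plausible(int cpu)
    {
        u64 before, remote, after;

        before = rdtsc();                /* boot CPU */
        smp_call_function_single(cpu, read_remote_tsc, &remote, 1);
        after = rdtsc();                 /* boot CPU again */

        /* In-sync TSCs must satisfy before <= remote <= after.
         * Contention widens the window (less accuracy), but a slow
         * inter-CPU drift still eventually falls outside it. */
        return before <= remote && remote <= after;
    }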
2026-03-20dma-direct: prevent SWIOTLB path when DMA_ATTR_REQUIRE_COHERENT is setLeon Romanovsky
DMA_ATTR_REQUIRE_COHERENT indicates that SWIOTLB must not be used. Ensure the SWIOTLB path is declined whenever the DMA direct path is selected. Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Link: https://lore.kernel.org/r/20260316-dma-debug-overlap-v3-5-1dde90a7f08b@nvidia.com
2026-03-20dma-mapping: Introduce DMA require coherency attributeLeon Romanovsky
Mapping buffers which carry this attribute require a DMA-coherent system. This means that they can't take the SWIOTLB path, may overlap CPU cache lines, and don't undergo cache flushing. Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Link: https://lore.kernel.org/r/20260316-dma-debug-overlap-v3-4-1dde90a7f08b@nvidia.com
2026-03-20dma-mapping: Clarify valid conditions for CPU cache line overlapLeon Romanovsky
Rename the DMA_ATTR_CPU_CACHE_CLEAN attribute to better reflect that it is a debugging aid to inform DMA core code that CPU cache line overlaps are allowed, and refine the documentation describing its use. Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Link: https://lore.kernel.org/r/20260316-dma-debug-overlap-v3-3-1dde90a7f08b@nvidia.com
2026-03-20dma-debug: Allow multiple invocations of overlapping entriesLeon Romanovsky
Repeated DMA mappings with DMA_ATTR_CPU_CACHE_CLEAN trigger the following splat. This prevents using the attribute in cases where a DMA region is shared and reused more than seven times. ------------[ cut here ]------------ DMA-API: exceeded 7 overlapping mappings of cacheline 0x000000000438c440 WARNING: kernel/dma/debug.c:467 at add_dma_entry+0x219/0x280, CPU#4: ibv_rc_pingpong/1644 Modules linked in: xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink iptable_nat nf_nat xt_addrtype br_netfilter rpcsec_gss_krb5 auth_rpcgss oid_registry overlay mlx5_fwctl zram zsmalloc mlx5_ib fuse rpcrdma rdma_ucm ib_uverbs ib_iser libiscsi scsi_transport_iscsi ib_umad rdma_cm ib_ipoib iw_cm ib_cm mlx5_core ib_core CPU: 4 UID: 2733 PID: 1644 Comm: ibv_rc_pingpong Not tainted 6.19.0+ #129 PREEMPT Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014 RIP: 0010:add_dma_entry+0x221/0x280 Code: c0 0f 84 f2 fe ff ff 83 e8 01 89 05 6d 99 11 01 e9 e4 fe ff ff 0f 8e 1f ff ff ff 48 8d 3d 07 ef 2d 01 be 07 00 00 00 48 89 e2 <67> 48 0f b9 3a e9 06 ff ff ff 48 c7 c7 98 05 2b 82 c6 05 72 92 28 RSP: 0018:ff1100010e657970 EFLAGS: 00010002 RAX: 0000000000000007 RBX: ff1100010234eb00 RCX: 0000000000000000 RDX: ff1100010e657970 RSI: 0000000000000007 RDI: ffffffff82678660 RBP: 000000000438c440 R08: 0000000000000228 R09: 0000000000000000 R10: 00000000000001be R11: 000000000000089d R12: 0000000000000800 R13: 00000000ffffffef R14: 0000000000000202 R15: ff1100010234eb00 FS: 00007fb15f3f6740(0000) GS:ff110008dcc19000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00007fb15f32d3a0 CR3: 0000000116f59001 CR4: 0000000000373eb0 Call Trace: <TASK> debug_dma_map_sg+0x1b4/0x390 __dma_map_sg_attrs+0x6d/0x1a0 dma_map_sgtable+0x19/0x30 ib_umem_get+0x284/0x3b0 [ib_uverbs] mlx5_ib_reg_user_mr+0x68/0x2a0 [mlx5_ib] ib_uverbs_reg_mr+0x17f/0x2a0 [ib_uverbs] ib_uverbs_handler_UVERBS_METHOD_INVOKE_WRITE+0xc2/0x130 [ib_uverbs] ib_uverbs_cmd_verbs+0xa0b/0xae0 [ib_uverbs] ? ib_uverbs_handler_UVERBS_METHOD_QUERY_PORT_SPEED+0xe0/0xe0 [ib_uverbs] ? mmap_region+0x7a/0xb0 ? do_mmap+0x3b8/0x5c0 ib_uverbs_ioctl+0xa7/0x110 [ib_uverbs] __x64_sys_ioctl+0x14f/0x8b0 ? ksys_mmap_pgoff+0xc5/0x190 do_syscall_64+0x8c/0xbf0 entry_SYSCALL_64_after_hwframe+0x4b/0x53 RIP: 0033:0x7fb15f5e4eed Code: 04 25 28 00 00 00 48 89 45 c8 31 c0 48 8d 45 10 c7 45 b0 10 00 00 00 48 89 45 b8 48 8d 45 d0 48 89 45 c0 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 1a 48 8b 45 c8 64 48 2b 04 25 28 00 00 00 RSP: 002b:00007ffe09a5c540 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 RAX: ffffffffffffffda RBX: 00007ffe09a5c5d0 RCX: 00007fb15f5e4eed RDX: 00007ffe09a5c5f0 RSI: 00000000c0181b01 RDI: 0000000000000003 RBP: 00007ffe09a5c590 R08: 0000000000000028 R09: 00007ffe09a5c794 R10: 0000000000000001 R11: 0000000000000246 R12: 00007ffe09a5c794 R13: 000000000000000c R14: 0000000025a49170 R15: 000000000000000c </TASK> ---[ end trace 0000000000000000 ]--- Fixes: 61868dc55a11 ("dma-mapping: add DMA_ATTR_CPU_CACHE_CLEAN") Signed-off-by: Leon Romanovsky <leonro@nvidia.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Link: https://lore.kernel.org/r/20260316-dma-debug-overlap-v3-1-1dde90a7f08b@nvidia.com
2026-03-19module: expose imported namespaces via sysfsNicholas Sielicki
Add /sys/module/*/import_ns to expose imported namespaces for currently loaded modules. The file contains one namespace per line and only exists for modules that import at least one namespace. Previously, the only way for userspace to inspect the symbol namespaces a module imports was to locate the .ko on disk and invoke modinfo(8) to decompress and parse the metadata. The kernel validated namespaces at load time, but the information was otherwise discarded. Exposing this data via sysfs provides a runtime mechanism to verify which namespaces are being used by modules. For example, this allows userspace to audit driver API access in Android GKI, which uses symbol namespaces to restrict vendor drivers from using specific kernel interfaces (e.g., direct filesystem access). Signed-off-by: Nicholas Sielicki <linux@opensource.nslick.com> [Sami: Updated the commit message to explain motivation.] Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
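A minimal userspace sketch consuming the new file; the module name is a placeholder, and the file is absent for modules that import no namespaces:

	#include <stdio.h>

	int main(void)
	{
		FILE *f = fopen("/sys/module/example/import_ns", "r");
		char ns[256];

		if (!f)
			return 1; /* module not loaded, or no imports */
		while (fgets(ns, sizeof(ns), f))
			printf("imports: %s", ns); /* one namespace per line */
		fclose(f);
		return 0;
	}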
2026-03-19bpf: Add warning to detect memory leak in bpf_selem_unlink_nofail()Amery Hung
While very unlikely, local storage theoretically may leak memory the size of "struct bpf_local_storage" when destroy() fails to grab local_storage->lock and initializes selem->local_storage before a racing map_free() sees it. Warn the user to allow debugging the issue instead of leaking the memory silently. Note that test_maps in the bpf selftests already stress-tested bpf_selem_unlink_nofail() by creating 4096 sockets and then immediately destroying them in multiple threads. With 64 threads, 64 x 4096 socket local storages were created and destroyed during the test and no warning in the function was triggered. Signed-off-by: Amery Hung <ameryhung@gmail.com> Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://patch.msgid.link/20260318224219.615105-1-ameryhung@gmail.com
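A hedged sketch of the kind of warning being added; "freed" is a hypothetical flag standing in for the function's internal bookkeeping, not the actual patch:

	/* minimal sketch: if the selem could not be reclaimed, report it
	 * loudly instead of leaking struct bpf_local_storage silently
	 */
	if (!freed)
		WARN_ONCE(1, "bpf: selem leaked in bpf_selem_unlink_nofail()\n");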
2026-03-19bpf: Do not allow deleting local storage in NMIAmery Hung
Currently, local storage may deadlock when deferring the freeing of a selem or local storage through kfree_rcu(), call_rcu() or call_rcu_tasks_trace() in NMI or reentrant contexts. Since deleting a selem in NMI is an unlikely use case, partially mitigate it by returning an error when calling from the bpf_xxx_storage_delete() helpers in NMI. Note that it is still possible to deadlock through reentrancy. A full mitigation requires returning an error when irqs_disabled() is true, which, however, is too heavy-handed for bpf_xxx_storage_delete(). The long-term solution requires _nolock versions of call_rcu(). Another possible solution is to defer the free through irq_work [0], but it would grow the size of the selem, which is non-ideal. The check is only needed in bpf_selem_unlink(), which is used by helpers and syscalls. bpf_selem_unlink_nofail() is fine as it is called during map and owner teardown, which never runs in NMI or reentrant contexts. [0] https://lore.kernel.org/bpf/20260205190233.912-1-alexei.starovoitov@gmail.com/ Fixes: a10787e6d58c ("bpf: Enable task local storage for tracing programs") Signed-off-by: Amery Hung <ameryhung@gmail.com> Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://patch.msgid.link/20260319025716.2361065-1-ameryhung@gmail.com
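A minimal sketch of the guard, assuming the unlink path can propagate an error back to the bpf_xxx_storage_delete() helpers; the function name here is hypothetical:

	/* in_nmi() is the standard context check from linux/preempt.h */
	static int selem_unlink_guarded(struct bpf_local_storage_elem *selem)
	{
		/* RCU-deferred frees (kfree_rcu()/call_rcu()) may deadlock
		 * when invoked from NMI context, so refuse the delete
		 */
		if (in_nmi())
			return -EBUSY;

		/* ... existing bpf_selem_unlink() logic ... */
		return 0;
	}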
2026-03-19Merge tag 'pm-7.0-rc5' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management fixes from Rafael Wysocki:
"These fix an idle loop issue exposed by recent changes and a race condition related to device removal in the runtime PM core code:
 - Consolidate the handling of two special cases in the idle loop that occur when only one CPU idle state is present (Rafael Wysocki)
 - Fix a race condition related to device removal in the runtime PM core code that may cause a stale device object pointer to be dereferenced (Bart Van Assche)"

* tag 'pm-7.0-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
  PM: runtime: Fix a race condition related to device removal
  sched: idle: Consolidate the handling of two special cases
2026-03-18bpf: Release module BTF IDR before module unloadKumar Kartikeya Dwivedi
Gregory reported in [0] that the global_map_resize test, when run repeatedly, ends up failing during program load. This stems from the fact that the BTF reference count has not dropped to zero after the previous run's module is unloaded, and the older module's BTF is still discoverable and visible. Later, in libbpf, load_module_btfs() will find the ID for this stale BTF and open its fd, and it will then be used during program load, where later steps that take a module reference using btf_try_get_module() fail since the underlying module for the BTF is gone. Logically, once a module is unloaded, its associated BTF artifacts should become hidden. The BTF object inside the kernel may still remain alive as long as references to it are held, but it should no longer be discoverable. To fix this, call btf_free_id() from the MODULE_STATE_GOING case of module unload to free the BTF's associated IDR entry, and disable its discovery once module unload returns to user space. If a race happens during unload, the outcome is non-deterministic anyway. However, user space should be able to rely on the guarantee that once it has synchronously established a successful module unload, no more stale artifacts associated with this module can be obtained subsequently. Note that we must be careful not to invoke btf_free_id() in btf_put() when btf_is_module() is true now. There could be a window where the module unload drops a non-terminal reference and frees the IDR entry, but the same ID gets reused and the second unconditional btf_free_id() ends up releasing an unrelated entry. To avoid a special case for btf_is_module(), set btf->id to zero to make btf_free_id() idempotent, so that it can be invoked unconditionally from btf_put() and also from the MODULE_STATE_GOING case. Since zero is an invalid IDR ID, the idr_remove() is then a no-op. Note that we can be sure that by the time we reach the final btf_put() for the btf_is_module() case, btf_free_id() is already done, since the module itself holds a BTF reference and calls this function for the BTF before dropping its own reference. [0]: https://lore.kernel.org/bpf/cover.1773170190.git.grbell@redhat.com Fixes: 36e68442d1af ("bpf: Load and verify kernel module BTFs") Acked-by: Martin KaFai Lau <martin.lau@kernel.org> Suggested-by: Martin KaFai Lau <martin.lau@kernel.org> Reported-by: Gregory Bell <grbell@redhat.com> Reviewed-by: Emil Tsalapatis <emil@etsalapatis.com> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/r/20260312205307.1346991-1-memxor@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
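A minimal sketch of the idempotency trick, assuming the btf_idr/btf_idr_lock pair used by the BTF ID allocator; the real function carries additional bookkeeping:

	static void btf_free_id(struct btf *btf)
	{
		unsigned long flags;

		spin_lock_irqsave(&btf_idr_lock, flags);
		/* idr_remove() on id 0 finds nothing, so calling this twice
		 * (module unload, then the final btf_put()) is harmless
		 */
		idr_remove(&btf_idr, btf->id);
		btf->id = 0;
		spin_unlock_irqrestore(&btf_idr_lock, flags);
	}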
2026-03-18module: Clean up parse_args() argumentsPetr Pavlu
* Use the preferred `unsigned int` over plain `unsigned` for the `num` parameter.
* Synchronize the parameter names in moduleparam.h with the ones used by the implementation in params.c.

Signed-off-by: Petr Pavlu <petr.pavlu@suse.com> Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
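A hedged sketch of the touched prototype in moduleparam.h; the surrounding declaration is reconstructed from memory rather than copied from the patch:

	char *parse_args(const char *doing, char *args,
			 const struct kernel_param *params,
			 unsigned int num,	/* was: unsigned num */
			 s16 min_level, s16 max_level, void *arg,
			 int (*unknown)(char *param, char *val,
					const char *doing, void *arg));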
2026-03-18module: Fix freeing of charp module parameters when CONFIG_SYSFS=nPetr Pavlu
When setting a charp module parameter, the param_set_charp() function allocates memory to store a copy of the input value. Later, when the module is eventually unloaded, the destroy_params() function is called to free this allocated memory. However, destroy_params() is available only when CONFIG_SYSFS=y; otherwise only a dummy variant is present. In the unlikely case that the kernel is configured with CONFIG_MODULES=y and CONFIG_SYSFS=n, this results in a memory leak of charp values when a module is unloaded. Fix this issue by making destroy_params() always available when CONFIG_MODULES=y. Rename the function to module_destroy_params() to clarify that it is intended for use by the module loader. Fixes: e180a6b7759a ("param: fix charp parameters set via sysfs") Signed-off-by: Petr Pavlu <petr.pavlu@suse.com> Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
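A minimal sketch of the renamed helper, assuming the standard kernel_param ops->free hook; the point of the fix is that it is now built whenever CONFIG_MODULES=y rather than only under CONFIG_SYSFS=y:

	/* free per-parameter allocations (e.g. charp copies) at unload */
	void module_destroy_params(const struct kernel_param *params,
				   unsigned int num)
	{
		unsigned int i;

		for (i = 0; i < num; i++)
			if (params[i].ops->free)
				params[i].ops->free(params[i].arg);
	}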
2026-03-18sched/fair: Simplify SIS_UTIL handling in select_idle_cpu()K Prateek Nayak
Use the "sd_llc" passed to select_idle_cpu() to obtain the "sd_llc_shared" instead of dereferencing the per-CPU variable. Since "sd->shared" is always reclaimed at the same time as "sd" via call_rcu() and update_top_cache_domain() always ensures a valid "sd->shared" assignment when "sd_llc" is present, "sd_llc->shared" can always be dereferenced without needing an additional check. While at it move the cpumask_and() operation after the SIS_UTIL bailout check to avoid unnecessarily computing the cpumask. Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Chen Yu <yu.c.chen@intel.com> Reviewed-by: Shrikanth Hegde <sshegde@linux.ibm.com> Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Link: https://patch.msgid.link/20260312044434.1974-10-kprateek.nayak@amd.com
2026-03-18sched/fair: Simplify the entry condition for update_idle_cpu_scan()K Prateek Nayak
Only the topmost SD_SHARE_LLC domain has "sd->shared" assigned. Simply use "sd->shared" as an indicator of load balancing at the highest SD_SHARE_LLC domain in update_idle_cpu_scan() instead of relying on llc_size. Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Chen Yu <yu.c.chen@intel.com> Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Link: https://patch.msgid.link/20260312044434.1974-9-kprateek.nayak@amd.com
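A minimal sketch of the simplified entry condition; the rest of the function is elided:

	static void update_idle_cpu_scan(struct lb_env *env,
					 unsigned long sum_util)
	{
		/* only the topmost SD_SHARE_LLC domain has sd->shared set,
		 * so its presence identifies balancing at that level
		 */
		if (!env->sd->shared)
			return;

		/* ... adjust the idle-CPU scan depth based on sum_util ... */
	}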
2026-03-18sched/fair: Remove superfluous rcu_read_lock() in the wakeup pathK Prateek Nayak
select_task_rq_fair() is always called with p->pi_lock held and IRQs disabled, which makes it the equivalent of an RCU read-side critical section. Since commit 71fedc41c23b ("sched/fair: Switch to rcu_dereference_all()") switched to using rcu_dereference_all() in the wakeup path, drop the explicit rcu_read_{lock,unlock}() in the fair task's wakeup path. Future plans to reuse select_task_rq_fair() / find_energy_efficient_cpu() in the fair class' balance callback will do so with IRQs disabled and will comply with the requirements of rcu_dereference_all(), so this remains safe with those development plans in mind. Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Link: https://patch.msgid.link/20260312044434.1974-8-kprateek.nayak@amd.com
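A hedged before/after sketch of the shape of the change; the loop shown is illustrative rather than the exact wakeup-path code:

	/* before: explicit read-side section in the wakeup path */
	rcu_read_lock();
	for_each_domain(cpu, tmp) {
		/* ... pick the wakeup domain / idle sibling ... */
	}
	rcu_read_unlock();

	/* after: p->pi_lock held with IRQs disabled already satisfies
	 * rcu_dereference_all(), so the explicit markers are dropped
	 */
	for_each_domain(cpu, tmp) {
		/* ... same wakeup path ... */
	}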