path: root/kernel
2026-03-06  sched_ext: Make scx_bpf_reenqueue_local() sub-sched aware  (Tejun Heo)
scx_bpf_reenqueue_local() currently re-enqueues all tasks on the local DSQ regardless of which sub-scheduler owns them. With multiple sub-schedulers, each should only re-enqueue tasks that it owns or that are owned by its descendants. Replace the per-rq boolean flag with a lock-free linked list to track per-scheduler reenqueue requests. Filter tasks in reenq_local() using hierarchical ownership checks and block deferrals during bypass to prevent use on dead schedulers. Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
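As a rough illustration of the lock-free deferral list described above, using the kernel's llist API (the scx_sched_pcpu field name and both helper functions are assumptions, not the actual sched_ext code):

    #include <linux/llist.h>

    /* Illustrative sketch only -- not the real implementation. */
    struct reenq_request {
            struct llist_node       node;
            struct scx_sched        *sch;   /* scheduler requesting the re-enqueue */
    };

    /* Producer: defer a re-enqueue request without taking any lock. */
    static void defer_reenqueue(struct scx_sched_pcpu *pcpu, struct reenq_request *req)
    {
            llist_add(&req->node, &pcpu->reenq_reqs);
    }

    /* Consumer: drain all pending requests and apply the ownership filter. */
    static void reenq_local(struct scx_sched_pcpu *pcpu)
    {
            struct llist_node *reqs = llist_del_all(&pcpu->reenq_reqs);
            struct reenq_request *req, *tmp;

            llist_for_each_entry_safe(req, tmp, reqs, node) {
                    /* only tasks owned by req->sch or one of its descendants move */
                    requeue_tasks_owned_by(req->sch);
            }
    }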
2026-03-06  sched_ext: Add scx_sched back pointer to scx_sched_pcpu  (Tejun Heo)
Add a back pointer from scx_sched_pcpu to scx_sched. This will be used by the next patch to make scx_bpf_reenqueue_local() sub-sched aware. Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
2026-03-06  sched_ext: Implement cgroup sub-sched enabling and disabling  (Tejun Heo)
The preceding changes implemented the framework to support cgroup sub-scheds and updated scheduling paths and kfuncs so that they have minimal but working support for sub-scheds. However, actual sub-sched enabling/disabling hasn't been implemented yet and all tasks have stayed on scx_root. Implement cgroup sub-sched enabling and disabling to actually activate sub-scheds: - Both enable and disable operations bypass only the tasks in the subtree of the child being enabled or disabled to limit disruptions. - When enabling, all candidate tasks are first initialized for the child sched. Once that succeeds, the tasks are exited for the parent and then switched over to the child. This adds a bit of complication but guarantees that child scheduler failures are always contained. - Disabling works the same way in the other direction. However, as the parent may fail to initialize a task, such a failure during disabling is propagated up to the parent. While this means that a parent sched can fail due to a child sched event, the failure can only originate from the parent itself (its ops.init_task()). The only effect a malfunctioning child can have on the parent is attempting to move the tasks back to the parent. After this change, although not all the necessary mechanisms are in place yet, sub-scheds can take control of their tasks and schedule them. v2: Fix missing scx_cgroup_unlock()/percpu_up_write() in abort path (Cheng-Yang Chou). Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
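The containment guarantee comes from initializing every candidate task for the child before any task is removed from the parent. A hedged sketch of that ordering, with all helper names and the iteration macro assumed:

    /* Sketch only; not the real enable path. */
    static int enable_sub_sched(struct scx_sched *child, struct cgroup *cgrp)
    {
            struct task_struct *p;
            int ret;

            /* Phase 1: init every candidate task for @child before touching the parent. */
            for_each_subtree_task(p, cgrp) {
                    ret = init_task_for_sched(child, p);
                    if (ret) {
                            undo_task_inits(child, cgrp);
                            return ret;     /* parent keeps all tasks; failure stays contained */
                    }
            }

            /* Phase 2: only after all inits succeeded, exit from the parent and switch. */
            for_each_subtree_task(p, cgrp) {
                    exit_task_from_sched(child->parent, p);
                    switch_task_to_sched(child, p);
            }

            return 0;
    }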
2026-03-06  sched_ext: Support dumping multiple schedulers and add scheduler identification  (Tejun Heo)
Extend scx_dump_state() to support multiple schedulers and improve task identification in dumps. The function now takes a specific scheduler to dump and can optionally filter tasks by scheduler. scx_dump_task() now displays which scheduler each task belongs to, using "*" to mark tasks owned by the scheduler being dumped. Sub-schedulers are identified with their level and cgroup ID. The SysRq-D handler now iterates through all active schedulers under scx_sched_lock and dumps each one separately. For SysRq-D dumps, only tasks owned by each scheduler are dumped to avoid redundancy since all schedulers are being dumped. Error-triggered dumps continue to dump all tasks since only that specific scheduler is being dumped. Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
2026-03-06  sched_ext: Convert scx_dump_state() spinlock to raw spinlock  (Tejun Heo)
The scx_dump_state() function uses a regular spinlock to serialize access. In a subsequent patch, this function will be called while holding scx_sched_lock, which is a raw spinlock, creating a lock nesting violation. Convert the dump_lock to a raw spinlock and use the guard macro for cleaner lock management. Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
2026-03-06  sched_ext: Make watchdog sub-sched aware  (Tejun Heo)
Currently, the watchdog checks all tasks as if they are all on scx_root. Move scx_watchdog_timeout inside scx_sched and make check_rq_for_timeouts() use the timeout from the scx_sched associated with each task. refresh_watchdog() is added, which determines the timer interval as half of the shortest watchdog timeout among all scheds and arms or disarms it as necessary. Every scx_sched instance has equivalent or better detection latency while sharing the same timer. Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
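A sketch of the interval selection described for refresh_watchdog(), assuming a list of active schedulers and a shared delayed work item (all names illustrative):

    /* Sketch only; scheduler list, field names and work item are assumptions. */
    static void refresh_watchdog(void)
    {
            unsigned long shortest = ULONG_MAX;
            struct scx_sched *sch;

            list_for_each_entry(sch, &scx_sched_list, node)
                    if (sch->watchdog_timeout)
                            shortest = min(shortest, sch->watchdog_timeout);

            if (shortest == ULONG_MAX)
                    cancel_delayed_work(&scx_watchdog_work);        /* disarm */
            else
                    mod_delayed_work(system_unbound_wq, &scx_watchdog_work,
                                     shortest / 2);                 /* (re)arm */
    }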
2026-03-06  sched_ext: Move scx_dsp_ctx and scx_dsp_max_batch into scx_sched  (Tejun Heo)
scx_dsp_ctx and scx_dsp_max_batch are global variables used in the dispatch path. In preparation for multiple scheduler support, move the former into scx_sched_pcpu and the latter into scx_sched. No user-visible behavior changes intended. Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
2026-03-06  sched_ext: Dispatch from all scx_sched instances  (Tejun Heo)
The cgroup sub-sched support involves invasive changes to many areas of sched_ext. The overall scaffolding is now in place and the next step is implementing sub-sched enable/disable. To enable partial testing and verification, update balance_one() to dispatch from all scx_sched instances until it finds a task to run. This should keep scheduling working when sub-scheds are enabled with tasks on them. This will be replaced by BPF-driven hierarchical operation. Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
2026-03-06  sched_ext: Implement hierarchical bypass mode  (Tejun Heo)
When a sub-scheduler enters bypass mode, its tasks must be scheduled by an ancestor to guarantee forward progress. Tasks from bypassing descendants are queued in the bypass DSQs of the nearest non-bypassing ancestor, or the root scheduler if all ancestors are bypassing. This requires coordination between bypassing schedulers and their hosts. Add bypass_enq_target_dsq() to find the correct bypass DSQ by walking up the hierarchy until reaching a non-bypassing ancestor. When a sub-scheduler starts bypassing, all its runnable tasks are re-enqueued after scx_bypassing() is set, ensuring proper migration to ancestor bypass DSQs. Update scx_dispatch_sched() to handle hosting bypassed descendants. When a scheduler is not bypassing but has bypassing descendants, it must schedule both its own tasks and bypassed descendant tasks. A simple policy is implemented where every Nth dispatch attempt (SCX_BYPASS_HOST_NTH=2) consumes from the bypass DSQ. A fallback consumption is also added at the end of dispatch to ensure bypassed tasks make progress even when normal scheduling is idle. Update enable_bypass_dsp() and disable_bypass_dsp() to increment bypass_dsp_enable_depth on both the bypassing scheduler and its parent host, ensuring both can detect that bypass dispatch is active through bypass_dsp_enabled(). Add SCX_EV_SUB_BYPASS_DISPATCH event counter to track scheduling of bypassed descendant tasks. v2: Fix comment typos (Andrea). Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
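The ancestor walk in bypass_enq_target_dsq() might look roughly as follows; the parent pointer and the bypass_dsq() signature are assumptions, and per-CPU DSQ locking is elided:

    /* Sketch: find the bypass DSQ of the nearest non-bypassing ancestor. */
    static struct scx_dispatch_q *bypass_enq_target_dsq(struct scx_sched *sch, int cpu)
    {
            /* climb while the current scheduler is itself bypassing */
            while (sch->parent && scx_bypassing(sch, cpu))
                    sch = sch->parent;

            /* the root ends up hosting everyone if all ancestors are bypassing */
            return bypass_dsq(sch, cpu);
    }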
2026-03-06  sched_ext: Separate bypass dispatch enabling from bypass depth tracking  (Tejun Heo)
The bypass_depth field tracks nesting of bypass operations but is also used to determine whether the bypass dispatch path should be active. With hierarchical scheduling, child schedulers may need to activate their parent's bypass dispatch path without affecting the parent's bypass_depth, requiring separation of these concerns. Add bypass_dsp_enable_depth and bypass_dsp_claim to independently control bypass dispatch path activation. The new enable_bypass_dsp() and disable_bypass_dsp() functions manage this state with proper claim semantics to prevent races. The bypass dispatch path now only activates when bypass_dsp_enabled() returns true, which checks the new enable_depth counter. The disable operation is carefully ordered after all tasks are moved out of bypass DSQs to ensure they are drained before the dispatch path is disabled. During scheduler teardown, disable_bypass_dsp() is called explicitly to ensure cleanup even if bypass mode was never entered normally. Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
2026-03-06  sched_ext: When calling ops.dispatch() @prev must be on the same scx_sched  (Tejun Heo)
The @prev parameter passed into ops.dispatch() is expected to be on the same sched. Passing in @prev which isn't on the sched can spuriously trigger failures that can kill the scheduler. Pass in @prev iff it's on the same sched. Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
2026-03-06  sched_ext: Factor out scx_dispatch_sched()  (Tejun Heo)
In preparation of multiple scheduler support, factor out scx_dispatch_sched() from balance_one(). The function boundary makes remembering $prev_on_scx and $prev_on_rq less useful. Open code $prev_on_scx in balance_one() and $prev_on_rq in both balance_one() and scx_dispatch_sched(). No functional changes. Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
2026-03-06  sched_ext: Prepare bypass mode for hierarchical operation  (Tejun Heo)
Bypass mode is used to simplify enable and disable paths and guarantee forward progress when something goes wrong. When enabled, all tasks skip BPF scheduling and fall back to simple in-kernel FIFO scheduling. While this global behavior can be used as-is when dealing with sub-scheds, that would allow any sub-sched instance to affect the whole system in a significantly disruptive manner. Make bypass state hierarchical by propagating it to descendants and updating per-cpu flags accordingly. This allows an scx_sched to bypass if itself or any of its ancestors are in bypass mode. However, this doesn't make the actual bypass enqueue and dispatch paths hierarchical yet. That will be done in later patches. Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
2026-03-06  sched_ext: Move bypass state into scx_sched  (Tejun Heo)
In preparation of multiple scheduler support, make bypass state per-scx_sched. Move scx_bypass_depth, bypass_timestamp and bypass_lb_timer from globals into scx_sched. Move SCX_RQ_BYPASSING from rq to scx_sched_pcpu as SCX_SCHED_PCPU_BYPASSING. scx_bypass() now takes @sch and scx_rq_bypassing(rq) is replaced with scx_bypassing(sch, cpu). All callers updated. scx_bypassed_for_enable existed to balance the global scx_bypass_depth when enable failed. Now that bypass_depth is per-scheduler, the counter is destroyed along with the scheduler on enable failure. Remove scx_bypassed_for_enable. As all tasks currently use the root scheduler, there's no observable behavior change. Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
2026-03-06  sched_ext: Move bypass_dsq into scx_sched_pcpu  (Tejun Heo)
To support bypass mode for sub-schedulers, move bypass_dsq from struct scx_rq to struct scx_sched_pcpu. Add bypass_dsq() helper. Move bypass_dsq initialization from init_sched_ext_class() to scx_alloc_and_attach_sched(). bypass_lb_cpu() now takes a CPU number instead of rq pointer. All callers updated. No behavior change as all tasks use the root scheduler. Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
2026-03-06  sched_ext: Move aborting flag to per-scheduler field  (Tejun Heo)
The abort state was tracked in the global scx_aborting flag which was used to break out of potential live-lock scenarios when an error occurs. With hierarchical scheduling, each scheduler instance must track its own abort state independently so that an aborting scheduler doesn't interfere with others. Move the aborting flag into struct scx_sched and update all access sites. The early initialization check in scx_root_enable() that warned about residual aborting state is no longer needed as each scheduler instance now starts with a clean state. Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
2026-03-06  sched_ext: Move default slice to per-scheduler field  (Tejun Heo)
The default time slice was stored in the global scx_slice_dfl variable which was dynamically modified when entering and exiting bypass mode. With hierarchical scheduling, each scheduler instance needs its own default slice configuration so that bypass operations on one scheduler don't affect others. Move slice_dfl into struct scx_sched and update all access sites. The bypass logic now modifies the root scheduler's slice_dfl. At task initialization in init_scx_entity(), use the SCX_SLICE_DFL constant directly since the task may not yet be associated with a specific scheduler. Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
2026-03-06  sched_ext: Make scx_prio_less() handle multiple schedulers  (Tejun Heo)
Call ops.core_sched_before() iff both tasks belong to the same scx_sched. Otherwise, use timestamp based ordering. Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
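In sketch form, the policy above could read as follows; the real code invokes the callback through the SCX_CALL_OP machinery and the exact comparison details may differ:

    /* Illustrative sketch, not the kernel source. */
    static bool scx_prio_less(const struct task_struct *a,
                              const struct task_struct *b, bool in_fi)
    {
            struct scx_sched *sa = scx_task_sched_rcu(a);

            /* consult ops.core_sched_before() only when both tasks share a sched */
            if (sa == scx_task_sched_rcu(b) && sa->ops.core_sched_before)
                    return sa->ops.core_sched_before((struct task_struct *)a,
                                                     (struct task_struct *)b);

            /* across schedulers, fall back to runnable-timestamp ordering */
            return time_after64(a->scx.core_sched_at, b->scx.core_sched_at);
    }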
2026-03-06  sched_ext: Refactor task init/exit helpers  (Tejun Heo)
- Add the @sch parameter to scx_init_task() and drop @tg as it can be obtained from @p. Separate out __scx_init_task() which does everything except for the task state transition. - Add the @sch parameter to scx_enable_task(). Separate out __scx_enable_task() which does everything except for the task state transition. - Add the @sch parameter to scx_disable_task(). - Rename scx_exit_task() to scx_disable_and_exit_task() and separate out __scx_disable_and_exit_task() which does everything except for the task state transition. While some task state transitions are relocated, no meaningful behavior changes are expected. Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
2026-03-06  sched_ext: scx_dsq_move() should validate the task belongs to the right scheduler  (Tejun Heo)
scx_bpf_dsq_move[_vtime]() calls scx_dsq_move() to move a task from one DSQ to another. However, @p doesn't necessarily have to come from the containing iteration and can thus be a task which belongs to another scx_sched. Verify that @p is on the same scx_sched as the DSQ being iterated. Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
2026-03-06  sched_ext: Enforce scheduler ownership when updating slice and dsq_vtime  (Tejun Heo)
scx_bpf_task_set_slice() and scx_bpf_task_set_dsq_vtime() now verify that the calling scheduler has authority over the task before allowing updates. This prevents schedulers from modifying tasks that don't belong to them in hierarchical scheduling configurations. Direct writes to p->scx.slice and p->scx.dsq_vtime are deprecated and now trigger warnings. They will be disallowed in a future release. Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
2026-03-06  sched_ext: Enforce scheduling authority in dispatch and select_cpu operations  (Tejun Heo)
Add checks to enforce scheduling authority boundaries when multiple schedulers are present: 1. In scx_dsq_insert_preamble() and the dispatch retry path, ignore attempts to insert tasks that the scheduler doesn't own, counting them via SCX_EV_INSERT_NOT_OWNED. As BPF schedulers are allowed to ignore dequeues, such attempts can occur legitimately during sub-scheduler enabling when tasks move between schedulers. The counter helps distinguish normal cases from scheduler bugs. 2. For scx_bpf_dsq_insert_vtime() and scx_bpf_select_cpu_and(), error out when sub-schedulers are attached. These functions lack the aux__prog parameter needed to identify the calling scheduler, so they cannot be used safely with multiple schedulers. BPF programs should use the arg-wrapped versions (__scx_bpf_dsq_insert_vtime() and __scx_bpf_select_cpu_and()) instead. These checks ensure that with multiple concurrent schedulers, scheduler identity can be properly determined and unauthorized task operations are prevented or tracked. Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
2026-03-06  sched_ext: Introduce scx_prog_sched()  (Tejun Heo)
In preparation for multiple scheduler support, introduce scx_prog_sched() accessor which returns the scx_sched instance associated with a BPF program. The association is determined via the special KF_IMPLICIT_ARGS kfunc parameter, which provides access to bpf_prog_aux. This aux can be used to retrieve the struct_ops (sched_ext_ops) that the program is associated with, and from there, the corresponding scx_sched instance. For compatibility, when ops.sub_attach is not implemented (older schedulers without sub-scheduler support), unassociated programs fall back to scx_root. A warning is logged once per scheduler for such programs. As scx_root is still the only scheduler, this shouldn't introduce user-visible behavior changes. Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
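Conceptually the lookup runs BPF program -> bpf_prog_aux -> struct_ops -> scx_sched, with the described fallback; a schematic sketch where prog_aux_to_scx_sched() stands in for the real plumbing:

    /* Schematic only; the aux -> struct_ops -> scx_sched lookup is paraphrased. */
    static struct scx_sched *scx_prog_sched(const struct bpf_prog_aux *aux)
    {
            struct scx_sched *sch = prog_aux_to_scx_sched(aux);    /* assumed helper */

            if (sch)
                    return sch;

            /*
             * Unassociated program: tolerated for schedulers that predate
             * sub-sched support (no ops.sub_attach). The real code warns once
             * per scheduler; pr_warn_once() is a simplification.
             */
            pr_warn_once("sched_ext: program not associated with a scheduler, assuming scx_root\n");
            return scx_root;
    }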
2026-03-06  sched_ext: Introduce scx_task_sched[_rcu]()  (Tejun Heo)
In preparation of multiple scheduler support, add p->scx.sched which points to the scx_sched instance that the task is scheduled by, which is currently always scx_root. Add scx_task_sched[_rcu]() accessors which return the associated scx_sched of the specified task and replace the raw scx_root dereferences with it where applicable. scx_task_on_sched() is also added to test whether a given task is on the specified sched. As scx_root is still the only scheduler, this shouldn't introduce user-visible behavior changes. Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
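A minimal sketch of the accessors, assuming p->scx.sched carries an __rcu annotation:

    /* Sketch; the real helpers likely also have lockdep/RCU assertions. */
    static struct scx_sched *scx_task_sched_rcu(const struct task_struct *p)
    {
            return rcu_dereference(p->scx.sched);
    }

    static bool scx_task_on_sched(const struct task_struct *p, const struct scx_sched *sch)
    {
            return scx_task_sched_rcu(p) == sch;
    }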
2026-03-06  sched_ext: Introduce cgroup sub-sched support  (Tejun Heo)
A system often runs multiple workloads, especially in multi-tenant server environments where a system is split into partitions servicing separate, more-or-less independent workloads, each requiring an application-specific scheduler. To support such and other use cases, sched_ext is in the process of growing multiple scheduler support. When partitioning a system in terms of CPUs for such use cases, an oft-taken approach is hard partitioning the system using cpuset. While it would be possible to tie sched_ext multiple scheduler support to cpuset partitions, such an approach would have fundamental limitations stemming from the lack of dynamism and flexibility. Users often don't care which specific CPUs are assigned to which workload and want to take advantage of optimizations which are enabled by running workloads on a larger machine - e.g. opportunistic over-commit, improving latency critical workload characteristics while maintaining bandwidth fairness, employing control mechanisms based on different criteria than on-CPU time for e.g. flexible memory bandwidth isolation, packing similar parts from different workloads on same L3s to improve cache efficiency, and so on. As these sorts of dynamic behaviors are impossible or difficult to implement with hard partitioning, sched_ext is implementing cgroup sub-sched support where schedulers can be attached to the cgroup hierarchy and a parent scheduler is responsible for controlling the CPUs that each child can use at any given moment. This makes CPU distribution dynamically controlled by BPF, allowing high flexibility. This patch adds the skeletal sched_ext cgroup sub-sched support: - sched_ext_ops.sub_cgroup_id and .sub_attach/detach() are added. Non-zero sub_cgroup_id indicates that the scheduler is to be attached to the identified cgroup. A sub-sched is attached to the cgroup iff the nearest ancestor scheduler implements .sub_attach() and grants the attachment. Max nesting depth is limited by SCX_SUB_MAX_DEPTH. - When a scheduler exits, all its descendant schedulers are exited together. Also, cgroup.scx_sched is added, which points to the effective scheduler instance for the cgroup. This is updated on scheduler init/exit and inherited on cgroup online. When a cgroup is offlined, the attached scheduler is automatically exited. - Sub-sched support is gated on CONFIG_EXT_SUB_SCHED which is automatically enabled if both SCX and cgroups are enabled. Sub-sched support is not tied to the CPU controller but rather the cgroup hierarchy itself. This is intentional as the support for cpu.weight and cpu.max based resource control is orthogonal to sub-sched support. Note that CONFIG_CGROUPS around cgroup subtree iteration support for scx_task_iter is replaced with CONFIG_EXT_SUB_SCHED for consistency. - This allows loading sub-scheds, and most framework operations, such as propagating disable down the hierarchy, work. However, sub-scheds are not operational yet and all tasks stay with the root sched. This will serve as the basis for building up full sub-sched support. - DSQs point to the scx_sched they belong to. - scx_qmap is updated to allow attachment of sub-scheds and also to serve as a sub-sched itself. - scx_is_descendant() is added but not yet used in this patch. It is used by later changes in the series and placed here as this is where the function belongs. Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
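scx_is_descendant() presumably reduces to a parent-pointer walk; a minimal sketch assuming scx_sched carries a parent link and that a scheduler counts as its own descendant:

    /* Sketch: does @sch sit at or below @ancestor in the scheduler hierarchy? */
    static bool scx_is_descendant(struct scx_sched *sch, struct scx_sched *ancestor)
    {
            while (sch) {
                    if (sch == ancestor)
                            return true;
                    sch = sch->parent;      /* NULL above scx_root */
            }
            return false;
    }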
2026-03-06  sched_ext: Reorganize enable/disable path for multi-scheduler support  (Tejun Heo)
In preparation for multiple scheduler support, reorganize the enable and disable paths to make scheduler instances explicit. Extract scx_root_disable() from scx_disable_workfn(). Rename scx_enable_workfn() to scx_root_enable_workfn(). Change scx_disable() to take @sch parameter and only queue disable_work if scx_claim_exit() succeeds for consistency. Move exit_kind validation into scx_claim_exit(). The sysrq handler now prints a message when no scheduler is loaded. These changes don't materially affect user-visible behavior. v2: Keep scx_enable() name as-is and only rename the workfn to scx_root_enable_workfn(). Change scx_enable() return type to s32. Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
2026-03-06  sched_ext: Update p->scx.disallow warning in scx_init_task()  (Tejun Heo)
- Always trigger the warning if p->scx.disallow is set for fork inits. There is no reason to set it during forks. - Flip the positions of if/else arms to ease adding error conditions. Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
2026-03-06  sched/core: Swap the order between sched_post_fork() and cgroup_post_fork()  (Tejun Heo)
The planned sched_ext cgroup sub-scheduler support needs the newly forked task to be associated with its cgroup in its post_fork() hook. There is currently no ordering requirement between the two. Swap them and note the new ordering requirement. Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Andrea Righi <arighi@nvidia.com> Cc: Ingo Molnar <mingo@redhat.com>
2026-03-06  sched_ext: Add @kargs to scx_fork()  (Tejun Heo)
Make sched_cgroup_fork() pass @kargs to scx_fork(). This will be used to determine @p's cgroup for cgroup sub-sched support. Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com> Cc: Peter Zijlstra <peterz@infradead.org>
2026-03-06  sched_ext: Implement cgroup subtree iteration for scx_task_iter  (Tejun Heo)
For the planned cgroup sub-scheduler support, enable/disable operations are going to be subtree specific and iterating all tasks in the system for those operations can be unnecessarily expensive and disruptive. cgroup already has mechanisms to perform subtree task iterations. Implement cgroup subtree iteration for scx_task_iter: - Add optional @cgrp to scx_task_iter_start() which enables cgroup subtree iteration. - Make scx_task_iter use css_next_descendant_pre() and css_task_iter to iterate all tasks in the cgroup subtree. - Update all existing callers to pass NULL to maintain current behavior. The two iteration mechanisms are independent and duplicate. It's likely that scx_tasks can be removed in favor of always using cgroup iteration if CONFIG_SCHED_CLASS_EXT depends on CONFIG_CGROUPS. Signed-off-by: Tejun Heo <tj@kernel.org> Reviewed-by: Andrea Righi <arighi@nvidia.com>
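The subtree walk maps onto existing cgroup iteration primitives; a hedged sketch of the loop structure (the real scx_task_iter is stateful and handles locking differently):

    /* Sketch: visit every task in @cgrp's subtree using the css iterators. */
    static void visit_subtree_tasks(struct cgroup *cgrp,
                                    void (*fn)(struct task_struct *p))
    {
            struct cgroup_subsys_state *css;

            rcu_read_lock();
            css_for_each_descendant_pre(css, &cgrp->self) {
                    struct css_task_iter it;
                    struct task_struct *p;

                    css_task_iter_start(css, 0, &it);
                    while ((p = css_task_iter_next(&it)))
                            fn(p);          /* locking constraints simplified here */
                    css_task_iter_end(&it);
            }
            rcu_read_unlock();
    }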
2026-03-06  Merge branch 'for-7.1' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup into for-7.1  (Tejun Heo)
To receive 5b30afc20b3f ("cgroup: Expose some cgroup helpers") which will be used by sub-sched support. Signed-off-by: Tejun Heo <tj@kernel.org>
2026-03-06  Merge branch 'for-7.0-fixes' into for-7.1  (Tejun Heo)
To prepare for hierarchical scheduling patchset which will cause multiple conflicts otherwise. Signed-off-by: Tejun Heo <tj@kernel.org>
2026-03-06  cgroup/cpuset: Call rebuild_sched_domains() directly in hotplug  (Waiman Long)
Besides deferring the call to housekeeping_update(), commit 6df415aa46ec ("cgroup/cpuset: Defer housekeeping_update() calls from CPU hotplug to workqueue") also defers the rebuild_sched_domains() call to the workqueue. So a new offline CPU may still be in a sched domain or new online CPU not showing up in the sched domains for a short transition period. That could be a problem in some corner cases and can be the cause of a reported test failure[1]. Fix it by calling rebuild_sched_domains_cpuslocked() directly in hotplug as before. If isolated partition invalidation or recreation is being done, the housekeeping_update() call to update the housekeeping cpumasks will still be deferred to a workqueue. In commit 3bfe47967191 ("cgroup/cpuset: Move housekeeping_update()/rebuild_sched_domains() together"), housekeeping_update() is called before rebuild_sched_domains() because it needs to access the HK_TYPE_DOMAIN housekeeping cpumask. That is now changed to use the static HK_TYPE_DOMAIN_BOOT cpumask as HK_TYPE_DOMAIN cpumask is now changeable at run time. As a result, we can move the rebuild_sched_domains() call before housekeeping_update() with the slight advantage that it will be done in the same cpus_read_lock critical section without the possibility of interference by a concurrent cpu hot add/remove operation. As it doesn't make sense to acquire cpuset_mutex/cpuset_top_mutex after calling housekeeping_update() and immediately release them again, move the cpuset_full_unlock() operation inside update_hk_sched_domains() and rename it to cpuset_update_sd_hk_unlock() to signify that it will release the full set of locks. [1] https://lore.kernel.org/lkml/1a89aceb-48db-4edd-a730-b445e41221fe@nvidia.com Fixes: 6df415aa46ec ("cgroup/cpuset: Defer housekeeping_update() calls from CPU hotplug to workqueue") Tested-by: Jon Hunter <jonathanh@nvidia.com> Reviewed-by: Chen Ridong <chenridong@huaweicloud.com> Signed-off-by: Waiman Long <longman@redhat.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2026-03-06  sched_ext: Use READ_ONCE() for scx_slice_bypass_us in scx_bypass()  (David Carlier)
Commit 0927780c90ce ("sched_ext: Use READ_ONCE() for lock-free reads of module param variables") annotated the plain reads of scx_slice_bypass_us and scx_bypass_lb_intv_us in bypass_lb_cpu(), but missed a third site in scx_bypass(): WRITE_ONCE(scx_slice_dfl, scx_slice_bypass_us * NSEC_PER_USEC); scx_slice_bypass_us is a module parameter writable via sysfs in process context through set_slice_us() -> param_set_uint_minmax(), which performs a plain store without holding bypass_lock. scx_bypass() reads the variable under bypass_lock, but since the writer does not take that lock, the two accesses are concurrent. WRITE_ONCE() only applies volatile semantics to the store of scx_slice_dfl -- the val expression containing scx_slice_bypass_us is evaluated as a plain read, providing no protection against concurrent writes. Wrap the read with READ_ONCE() to complete the annotation started by commit 0927780c90ce and make the access KCSAN-clean, consistent with the existing READ_ONCE(scx_slice_bypass_us) in bypass_lb_cpu(). Signed-off-by: David Carlier <devnexen@gmail.com> Signed-off-by: Tejun Heo <tj@kernel.org>
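The fix amounts to annotating the read inside the value expression; the "after" line below is reconstructed from the description, not copied from the tree:

    /* before: the module-param read inside the value expression is a plain load */
    WRITE_ONCE(scx_slice_dfl, scx_slice_bypass_us * NSEC_PER_USEC);

    /* after: annotate the lock-free read as well, keeping KCSAN quiet */
    WRITE_ONCE(scx_slice_dfl,
               READ_ONCE(scx_slice_bypass_us) * NSEC_PER_USEC);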
2026-03-06  workqueue: Rename show_cpu_pool{s,}_hog{s,}() to reflect broadened scope  (Breno Leitao)
show_cpu_pool_hog() and show_cpu_pools_hogs() no longer only dump CPU hogs — since commit 8823eaef45da ("workqueue: Show all busy workers in stall diagnostics"), they dump every in-flight worker in the pool's busy_hash. Rename them to show_cpu_pool_busy_workers() and show_cpu_pools_busy_workers() to accurately describe what they do. Also fix the pr_info() message to say "stalled worker pools" instead of "stalled CPU-bound worker pools", since sleeping/blocked workers are now included. No functional change. Suggested-by: Tejun Heo <tj@kernel.org> Signed-off-by: Breno Leitao <leitao@debian.org> Signed-off-by: Tejun Heo <tj@kernel.org>
2026-03-06  Merge tag 'block-7.0-20260305' of git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux  (Linus Torvalds)
Pull block fixes from Jens Axboe: - NVMe pull request via Keith: - Improve quirk visibility and configurability (Maurizio) - Fix runtime user modification to queue setup (Keith) - Fix multipath leak on try_module_get failure (Keith) - Ignore ambiguous spec definitions for better atomics support (John) - Fix admin queue leak on controller reset (Ming) - Fix large allocation in persistent reservation read keys (Sungwoo Kim) - Fix fcloop callback handling (Justin) - Securely free DHCHAP secrets (Daniel) - Various cleanups and typo fixes (John, Wilfred) - Avoid a circular lock dependency issue in the sysfs nr_requests or scheduler store handling - Fix a circular lock dependency with the pcpu mutex and the queue freeze lock - Cleanup for bio_copy_kern(), using __bio_add_page() rather than bio_add_page(), as adding a page here cannot fail. The existing code had broken cleanup for the error condition, so make it clear that the error condition cannot happen - Fix for a __this_cpu_read() in preemptible context splat * tag 'block-7.0-20260305' of git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux: block: use trylock to avoid lockdep circular dependency in sysfs nvme: fix memory allocation in nvme_pr_read_keys() block: use __bio_add_page in bio_copy_kern block: break pcpu_alloc_mutex dependency on freeze_lock blktrace: fix __this_cpu_read/write in preemptible context nvme-multipath: fix leak on try_module_get failure nvmet-fcloop: Check remoteport port_state before calling done callback nvme-pci: do not try to add queue maps at runtime nvme-pci: cap queue creation to used queues nvme-pci: ensure we're polling a polled queue nvme: fix memory leak in quirks_param_set() nvme: correct comment about nvme_ns_remove() nvme: stop setting namespace gendisk device driver data nvme: add support for dynamic quirk configuration via module parameter nvme: fix admin queue leak on controller reset nvme-fabrics: use kfree_sensitive() for DHCHAP secrets nvme: stop using AWUPF nvme: expose active quirks in sysfs nvme/host: fixup some typos
2026-03-06  bpf: drop kthread_exit from noreturn_deny  (Christian Loehle)
kthread_exit() became a macro that expands to do_exit() in commit 28aaa9c39945 ("kthread: consolidate kthread exit paths to prevent use-after-free"), so there is no longer a kthread_exit function BTF ID to resolve. Remove it from noreturn_deny to avoid resolve_btfids unresolved symbol warnings. Signed-off-by: Christian Loehle <christian.loehle@arm.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2026-03-06  treewide: change inode->i_ino from unsigned long to u64  (Jeff Layton)
On 32-bit architectures, unsigned long is only 32 bits wide, which causes 64-bit inode numbers to be silently truncated. Several filesystems (NFS, XFS, BTRFS, etc.) can generate inode numbers that exceed 32 bits, and this truncation can lead to inode number collisions and other subtle bugs on 32-bit systems. Change the type of inode->i_ino from unsigned long to u64 to ensure that inode numbers are always represented as 64-bit values regardless of architecture. Update all format specifiers treewide from %lu/%lx to %llu/%llx to match the new type, along with corresponding local variable types. This is the bulk treewide conversion. Earlier patches in this series handled trace events separately to allow trace field reordering for better struct packing on 32-bit. Signed-off-by: Jeff Layton <jlayton@kernel.org> Link: https://patch.msgid.link/20260304-iino-u64-v3-12-2257ad83d372@kernel.org Acked-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Jan Kara <jack@suse.cz> Reviewed-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Christian Brauner <brauner@kernel.org>
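The shape of the change, as a schematic example rather than an actual hunk from the series:

    /* include/linux/fs.h (schematic; surrounding members omitted) */
    struct inode {
            /* ... */
            u64             i_ino;          /* was: unsigned long i_ino; */
            /* ... */
    };

    /* callers: format specifiers follow the type change, e.g. */
    pr_debug("inode %llu\n", inode->i_ino); /* was: "%lu" */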
2026-03-06  audit: widen ino fields to u64  (Jeff Layton)
inode->i_ino is being widened from unsigned long to u64. The audit subsystem uses unsigned long ino in struct fields, function parameters, and local variables that store inode numbers from arbitrary filesystems. On 32-bit platforms this truncates inode numbers that exceed 32 bits, which will cause incorrect audit log entries and broken watch/mark comparisons. Widen all audit ino fields, parameters, and locals to u64, and update the inode format string from %lu to %llu to match. Signed-off-by: Jeff Layton <jlayton@kernel.org> Link: https://patch.msgid.link/20260304-iino-u64-v3-2-2257ad83d372@kernel.org Acked-by: Paul Moore <paul@paul-moore.com> Signed-off-by: Christian Brauner <brauner@kernel.org>
2026-03-06  sched/headers: Inline raw_spin_rq_unlock()  (Xie Yuanbin)
raw_spin_rq_unlock() is short, and is called in some hot code paths such as finish_lock_switch(). Inline raw_spin_rq_unlock() to micro-optimize performance a bit. Signed-off-by: Xie Yuanbin <qq570070308@gmail.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://patch.msgid.link/20260216164950.147617-3-qq570070308@gmail.com
2026-03-06  Merge branch 'linus' into sched/core, to resolve conflicts  (Ingo Molnar)
Conflicts: kernel/sched/ext.c Signed-off-by: Ingo Molnar <mingo@kernel.org>
2026-03-06  sched/hrtick: Mark hrtick_clear() as always used  (Ingo Molnar)
This recent commit: 96d1610e0b20b ("sched: Optimize hrtimer handling") introduced a new build warning when !CONFIG_HOTPLUG_CPU while SCHED_HRTIMERS=y [ == HIGH_RES_TIMERS=y ]: /tip.testing/kernel/sched/core.c:882:13: warning: ‘hrtick_clear’ defined but not used [-Wunused-function] Mark this helper function as always-used, instead of complicating the code with another obscure #ifdef. Fixes: 96d1610e0b20b ("sched: Optimize hrtimer handling") Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: https://patch.msgid.link/177245077226.1647592.1821545206171336606.tip-bot2@tip-bot2
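The fix described is to tag the definition rather than add another #ifdef around the caller; the message doesn't show which attribute was used, so __used below is an assumption, and the function body is paraphrased from the existing helper:

    /* Schematic: keep the definition and silence the unused-function warning. */
    static void __used hrtick_clear(struct rq *rq)
    {
            if (hrtimer_active(&rq->hrtick_timer))
                    hrtimer_cancel(&rq->hrtick_timer);
    }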
2026-03-05  cgroup: Expose some cgroup helpers  (Tejun Heo)
Expose the following through cgroup.h: - cgroup_on_dfl() - cgroup_is_dead() - cgroup_for_each_live_child() - cgroup_for_each_live_descendant_pre() - cgroup_for_each_live_descendant_post() Until now, these didn't need to be exposed because controllers only cared about the css hierarchy. The planned sched_ext hierarchical scheduler support will be based on the default cgroup hierarchy, which is in line with the existing BPF cgroup support, and thus needs these exposed. Signed-off-by: Tejun Heo <tj@kernel.org>
2026-03-05  audit: fix coding style issues  (Ricardo Robaina)
Fix various coding style issues across the audit subsystem flagged by checkpatch.pl script to adhere to kernel coding standards. Specific changes include: - kernel/auditfilter.c: Move the open brace '{' to the previous line for the audit_ops array declaration. - lib/audit.c: Add a required space before the open parenthesis '('. - include/uapi/linux/audit.h: Enclose the complex macro value for AUDIT_UID_UNSET in parentheses. Signed-off-by: Ricardo Robaina <rrobaina@redhat.com> Signed-off-by: Paul Moore <paul@paul-moore.com>
2026-03-05  workqueue: Show all busy workers in stall diagnostics  (Breno Leitao)
show_cpu_pool_hog() only prints workers whose task is currently running on the CPU (task_is_running()). This misses workers that are busy processing a work item but are sleeping or blocked — for example, a worker that clears PF_WQ_WORKER and enters wait_event_idle(). Such a worker still occupies a pool slot and prevents progress, yet produces an empty backtrace section in the watchdog output. This is happening on real arm64 systems, where toggle_allocation_gate() IPIs every single CPU in the machine (which lacks NMI), causing workqueue stalls that show empty backtraces because toggle_allocation_gate() is sleeping in wait_event_idle(). Remove the task_is_running() filter so every in-flight worker in the pool's busy_hash is dumped. The busy_hash is protected by pool->lock, which is already held. Signed-off-by: Breno Leitao <leitao@debian.org> Acked-by: Song Liu <song@kernel.org> Signed-off-by: Tejun Heo <tj@kernel.org>
2026-03-05  workqueue: Show in-flight work item duration in stall diagnostics  (Breno Leitao)
When diagnosing workqueue stalls, knowing how long each in-flight work item has been executing is valuable. Add a current_start timestamp (jiffies) to struct worker, set it when a work item begins execution in process_one_work(), and print the elapsed wall-clock time in show_pwq(). Unlike current_at (which tracks CPU runtime and resets on wakeup for CPU-intensive detection), current_start is never reset because the diagnostic cares about total wall-clock time including sleeps. Before: in-flight: 165:stall_work_fn [wq_stall] After: in-flight: 165:stall_work_fn [wq_stall] for 100s Signed-off-by: Breno Leitao <leitao@debian.org> Acked-by: Song Liu <song@kernel.org> Signed-off-by: Tejun Heo <tj@kernel.org>
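Schematically, the two sites touched by the change, using the names given in the message (surrounding code omitted):

    /* In process_one_work(), when a work item begins execution: */
    worker->current_start = jiffies;        /* wall-clock start; unlike current_at, never reset */

    /* In show_pwq(), when dumping an in-flight worker: */
    pr_cont(" for %lus", (jiffies - worker->current_start) / HZ);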
2026-03-05  workqueue: Rename pool->watchdog_ts to pool->last_progress_ts  (Breno Leitao)
The watchdog_ts name doesn't convey what the timestamp actually tracks: the last time the worker pool made progress. Rename it to last_progress_ts to make it clear that it records when the pool last made forward progress (started processing new work items). No functional change. Signed-off-by: Breno Leitao <leitao@debian.org> Acked-by: Song Liu <song@kernel.org> Signed-off-by: Tejun Heo <tj@kernel.org>
2026-03-05  workqueue: Use POOL_BH instead of WQ_BH when checking pool flags  (Breno Leitao)
pr_cont_worker_id() checks pool->flags against WQ_BH, which is a workqueue-level flag (defined in workqueue.h). Pool flags use a separate namespace with POOL_* constants (defined in workqueue.c). The correct constant is POOL_BH. Both WQ_BH and POOL_BH are defined as (1 << 0) so this has no behavioral impact, but it is semantically wrong and inconsistent with every other pool-level BH check in the file. Fixes: 4cb1ef64609f ("workqueue: Implement BH workqueues to eventually replace tasklets") Signed-off-by: Breno Leitao <leitao@debian.org> Acked-by: Song Liu <song@kernel.org> Signed-off-by: Tejun Heo <tj@kernel.org>
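Schematically, the corrected check (what pr_cont_worker_id() prints with the result is not shown here):

    /* Sketch of the corrected test used by pr_cont_worker_id(). */
    static bool pool_is_bh(const struct worker_pool *pool)
    {
            return pool->flags & POOL_BH;   /* POOL_*, not WQ_BH (a workqueue-level flag) */
    }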
2026-03-05  clocksource: Update clocksource::freq_khz on registration  (Thomas Gleixner)
Borislav reported a division by zero in the timekeeping code and random hangs with the new coupled clocksource/clockevent functionality. It turned out that the TSC clocksource is not always updating the freq_khz field of the clocksource on registration. The coupled mode conversion calculation requires the frequency and as it's not initialized the resulting factor is zero or a random value. As a consequence this causes a division by zero or random boot hangs. Instead of chasing down all clocksources which fail to update that member, fill it in at registration time where the caller has to supply the frequency anyway. Except for special clocksources like jiffies which never can have coupled mode. To make this more robust put a check into the registration function to validate that the caller supplied a frequency if the coupled mode feature bit is set. If not, emit a warning and clear the feature bit. Fixes: cd38bdb8e696 ("timekeeping: Provide infrastructure for coupled clockevents") Reported-by: Borislav Petkov <bp@alien8.de> Reported-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Thomas Gleixner <tglx@kernel.org> Tested-by: Borislav Petkov <bp@alien8.de> Tested-by: Nathan Chancellor <nathan@kernel.org> Link: https://patch.msgid.link/87cy1jsa4m.ffs@tglx Closes: https://lore.kernel.org/20260303213027.GA2168957@ax162
2026-03-05  timekeeping: Initialize the coupled clocksource conversion completely  (Thomas Gleixner)
Nathan reported a boot failure after the coupled clocksource/event support was enabled for the TSC deadline timer. It turns out that on the affected test systems the TSC frequency is not refined against HPET, so it is registered with the same frequency as the TSC-early clocksource. As a consequence the update function which checks for a change of the shift/mult pair of the clocksource fails to compute the conversion limit, which is zero initialized. This check is there to avoid pointless computations on every timekeeping update cycle (tick). So the actual clockevent conversion function limits the expiry delta to zero, which means the timer is always programmed to expire in the past. This obviously results in a spectacular timer interrupt storm, which goes unnoticed because the per CPU interrupts on x86 are not exposed to the runaway detection mechanism and the NMI watchdog is not yet functional. So the machine simply stops booting. That did not show up in testing. All test machines refine the TSC frequency so TSC has a different shift/mult pair than TSC-early and the conversion limit is properly initialized. Cure that by setting the conversion limit right at the point where the new clocksource is installed. Fixes: cd38bdb8e696 ("timekeeping: Provide infrastructure for coupled clockevents") Reported-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Thomas Gleixner <tglx@kernel.org> Tested-by: Nathan Chancellor <nathan@kernel.org> Acked-by: John Stultz <jstultz@google.com> Link: https://patch.msgid.link/87bjh4zies.ffs@tglx Closes: https://lore.kernel.org/20260303012905.GA978396@ax162