path: root/kernel
author    Tejun Heo <tj@kernel.org>  2026-04-10 07:54:06 -1000
committer Tejun Heo <tj@kernel.org>  2026-04-10 07:54:06 -1000
commit    b470e37c1fad72731be6f437e233cb6b16618f41 (patch)
tree      81c5c1dbdc45f7cb6f87a41729f28b8f2aca7934 /kernel
parent    9fb457074f6d118b30458624223abef985725a88 (diff)
sched_ext: Fix ops.cgroup_move() invocation kf_mask and rq tracking
sched_move_task() invokes ops.cgroup_move() inside task_rq_lock(tsk), so
@p's rq lock is held. The SCX_CALL_OP_TASK invocation mislabels this:

- kf_mask = SCX_KF_UNLOCKED (== 0), claiming no lock is held.
- rq = NULL, so update_locked_rq() doesn't run and scx_locked_rq()
  returns NULL.

Switch to SCX_KF_REST and pass task_rq(p), matching ops.set_cpumask()
from set_cpus_allowed_scx(). Three effects:

- scx_bpf_task_cgroup() becomes callable (was rejected by
  scx_kf_allowed(__SCX_KF_RQ_LOCKED)). Safe; rq lock is held.

- scx_bpf_dsq_move() is now rejected (was allowed via the unlocked
  branch). Calling it while holding an unrelated task's rq lock is
  risky; rejection is correct.

- scx_bpf_select_cpu_*() previously took the unlocked branch in
  select_cpu_from_kfunc() and called task_rq_lock(p, &rf), which would
  deadlock against the already-held pi_lock. Now it takes the
  locked-rq branch and is rejected with -EPERM via the existing
  scx_kf_allowed(SCX_KF_SELECT_CPU | SCX_KF_ENQUEUE) check. Latent
  deadlock fix.

No in-tree scheduler is known to call any of these from
ops.cgroup_move().

v2: Add Fixes: tag (Andrea Righi).

Fixes: 18853ba782be ("sched_ext: Track currently locked rq")
Signed-off-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Andrea Righi <arighi@nvidia.com>
Diffstat (limited to 'kernel')
-rw-r--r--  kernel/sched/ext.c  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index aee48b34aefa..4d793a56d965 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -4397,7 +4397,7 @@ void scx_cgroup_move_task(struct task_struct *p)
*/
if (SCX_HAS_OP(sch, cgroup_move) &&
!WARN_ON_ONCE(!p->scx.cgrp_moving_from))
- SCX_CALL_OP_TASK(sch, SCX_KF_UNLOCKED, cgroup_move, NULL,
+ SCX_CALL_OP_TASK(sch, SCX_KF_REST, cgroup_move, task_rq(p),
p, p->scx.cgrp_moving_from,
tg_cgrp(task_group(p)));
p->scx.cgrp_moving_from = NULL;