<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux.git/tools/sched_ext, branch master</title>
<subtitle>Linux kernel source tree</subtitle>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/'/>
<entry>
<title>tools/sched_ext: scx_qmap: Silence task_ctx lookup miss</title>
<updated>2026-04-21T16:18:58+00:00</updated>
<author>
<name>Tejun Heo</name>
<email>tj@kernel.org</email>
</author>
<published>2026-04-21T07:17:11+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=05909810a946222aca5d0611d37be82d18f95228'/>
<id>05909810a946222aca5d0611d37be82d18f95228</id>
<content type='text'>
scx_fork() dispatches ops.init_task to exactly one scheduler - the one
owning the forking task's cgroup. A task forked inside a sub-scheduler's
cgroup is init'd into the sub only; the root scheduler has no task_ctx
entry for it. When that task later appears as @prev in the root's
qmap_dispatch() (or flows through core-sched comparison via task_qdist),
the bpf_task_storage_get() legitimately misses.

qmap treated those misses as fatal via scx_bpf_error("task_ctx lookup
failed") and aborted the scheduler as soon as the first cross-sched
task hit the root. Drop the error at the sites where the miss is
legitimate: lookup_task_ctx() (helper; callers already check for NULL),
qmap_dispatch()'s @prev branch (bookkeeping-only), task_qdist()
(which returns 0, making the comparison a no-op), and qmap_select_cpu()
(which returns prev_cpu as a no-op fallback instead of -ESRCH). The
existing scx_bpf_error() was a paranoid guard from the pre-sub-sched
world where every task was owned by the one and only scheduler.

v2: qmap_select_cpu() returns prev_cpu on NULL instead of -ESRCH, so
    the root scheduler doesn't error on cross-sched tasks that pass
    through it (Andrea Righi).
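
The tolerant-lookup shape can be sketched as follows (lookup_task_ctx()
and task_ctx_stor are the names used by scx_qmap; treat the fragment as
an illustrative BPF C sketch rather than the exact patch):

```c
static struct task_ctx *lookup_task_ctx(struct task_struct *p)
{
	/*
	 * NULL is a legitimate outcome for cross-sched tasks that were
	 * init'd into a sub-scheduler; don't scx_bpf_error() here, let
	 * each caller decide how to skip.
	 */
	return bpf_task_storage_get(&amp;task_ctx_stor, p, 0, 0);
}
```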

Fixes: 4f8b122848db ("sched_ext: Add basic building blocks for nested sub-scheduler dispatching")
Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
Reviewed-by: Andrea Righi &lt;arighi@nvidia.com&gt;
Reviewed-by: Zhao Mengmeng &lt;zhaomengmeng@kylinos.cn&gt;
</content>
</entry>
<entry>
<title>tools/sched_ext: Add explicit cast from void* in RESIZE_ARRAY()</title>
<updated>2026-04-13T16:14:11+00:00</updated>
<author>
<name>Kuba Piecuch</name>
<email>jpiecuch@google.com</email>
</author>
<published>2026-04-13T15:50:05+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=7e311bafb9ad3a4711c08c00b09fb7839ada37f0'/>
<id>7e311bafb9ad3a4711c08c00b09fb7839ada37f0</id>
<content type='text'>
This fixes the following compilation error when using the header from
C++ code:

  error: assigning to 'struct scx_flux__data_uei_dump *' from
  incompatible type 'void *'
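
A minimal illustration of the underlying C vs C++ difference (libbpf's
bpf_map__initial_value() is real; the skeleton field name is taken from
the error above, and the RESIZE_ARRAY() internals are elided):

```c
size_t size;
void *mem = bpf_map__initial_value(map, &amp;size);

/* OK in C, "assigning ... from incompatible type 'void *'" in C++: */
skel-&gt;data_uei_dump = mem;

/* The explicit cast compiles in both languages: */
skel-&gt;data_uei_dump = (struct scx_flux__data_uei_dump *)mem;
```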

Signed-off-by: Kuba Piecuch &lt;jpiecuch@google.com&gt;
Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
</content>
</entry>
<entry>
<title>sched_ext: Make string params of __ENUM_set() const</title>
<updated>2026-04-13T16:14:05+00:00</updated>
<author>
<name>Kuba Piecuch</name>
<email>jpiecuch@google.com</email>
</author>
<published>2026-04-13T12:49:02+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=4615361f0b148c172852590e6245a953cc075b73'/>
<id>4615361f0b148c172852590e6245a953cc075b73</id>
<content type='text'>
A small change to improve type safety/const correctness.
__COMPAT_read_enum() already has const string parameters.

It fixes a warning when using the header in C++ code:

  error: ISO C++11 does not allow conversion from string literal
         to 'char *' [-Werror,-Wwritable-strings]

That's because string literals have type char[N] in C and
const char[N] in C++.
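
The signature change can be sketched as follows (the return type and
parameter names are illustrative; only the const qualifiers are the
point):

```c
/* Before: a string literal argument converts to char *, which C++
 * rejects with -Wwritable-strings. */
static inline void __ENUM_set(u64 *val, char *type, char *name);

/* After: const-correct, accepts literals in both C and C++. */
static inline void __ENUM_set(u64 *val, const char *type,
			      const char *name);
```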

Signed-off-by: Kuba Piecuch &lt;jpiecuch@google.com&gt;
Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
</content>
</entry>
<entry>
<title>tools/sched_ext: Kick home CPU for stranded tasks in scx_qmap</title>
<updated>2026-04-13T16:13:59+00:00</updated>
<author>
<name>Tejun Heo</name>
<email>tj@kernel.org</email>
</author>
<published>2026-04-13T03:30:52+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=3d3667f265148d856bc6eb54d1bd780a94e38da7'/>
<id>3d3667f265148d856bc6eb54d1bd780a94e38da7</id>
<content type='text'>
scx_qmap uses global BPF queue maps (BPF_MAP_TYPE_QUEUE) that any CPU's
ops.dispatch() can pop from. When a CPU pops a task that can't run on it
(e.g. a pinned per-CPU kthread), it inserts the task into SHARED_DSQ.
consume_dispatch_q() then skips the task due to affinity mismatch, leaving it
stranded until some CPU in its allowed mask calls ops.dispatch(). This doesn't
cause indefinite stalls -- the periodic tick keeps firing (can_stop_idle_tick()
returns false when softirq is pending) -- but can cause noticeable scheduling
delays.

After inserting to SHARED_DSQ, kick the task's home CPU if this CPU can't run
it. There's a small race window where the home CPU can enter idle before the
kick lands -- if a per-CPU kthread like ksoftirqd is the stranded task, this
can trigger a "NOHZ tick-stop error" warning. The kick arrives shortly after
and the home CPU drains the task.

Rather than fully eliminating the warning by routing pinned tasks to local or
global DSQs, the current code keeps them going through the normal BPF queue
path and documents the race and the resulting warning in detail. scx_qmap is an
example scheduler and having tasks go through the usual dispatch path is useful
for testing. The detailed comment also serves as a reference for other
schedulers that may encounter similar warnings.
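
The kick can be sketched as follows (scx_bpf_kick_cpu(),
scx_bpf_task_cpu() and bpf_cpumask_test_cpu() are real helpers; the
surrounding dispatch context and the flags value are illustrative):

```c
/*
 * @p was just inserted into SHARED_DSQ but this CPU can't run it.
 * Nudge the task's home CPU so it drains @p instead of leaving it
 * stranded until that CPU happens to dispatch.
 */
if (!bpf_cpumask_test_cpu(bpf_get_smp_processor_id(), p-&gt;cpus_ptr))
	scx_bpf_kick_cpu(scx_bpf_task_cpu(p), 0);
```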

Reviewed-by: Andrea Righi &lt;arighi@nvidia.com&gt;
Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
</content>
</entry>
<entry>
<title>tools/sched_ext: Fix off-by-one in scx_sdt payload zeroing</title>
<updated>2026-04-06T18:06:24+00:00</updated>
<author>
<name>Cheng-Yang Chou</name>
<email>yphbchou0911@gmail.com</email>
</author>
<published>2026-03-31T09:18:33+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=a3c3fb2f86f8a1f266747622037f90eab58186ad'/>
<id>a3c3fb2f86f8a1f266747622037f90eab58186ad</id>
<content type='text'>
scx_alloc_free_idx() zeroes the payload of a freed arena allocation
one word at a time. The loop bound was alloc-&gt;pool.elem_size / 8, but
elem_size includes sizeof(struct sdt_data) (the 8-byte union sdt_id
header). This caused the loop to write one extra u64 past the
allocation, corrupting the tid field of the adjacent pool element.

Fix the loop bound to (elem_size - sizeof(struct sdt_data)) / 8 so
only the payload portion is zeroed.

Test plan:
- Add a temporary sanity check in scx_task_free() before the free call:

  if (mval-&gt;data-&gt;tid.idx != mval-&gt;tid.idx)
      scx_bpf_error("tid corruption: arena=%d storage=%d",
                    mval-&gt;data-&gt;tid.idx, (int)mval-&gt;tid.idx);

- stress-ng --fork 100 -t 10 &amp; sudo ./build/bin/scx_sdt

Without this fix, running scx_sdt under fork-heavy load triggers the
corruption error. With the fix applied, the same workload completes
without error.
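
The corrected zeroing loop, sketched (the payload pointer is
illustrative; elem_size and sizeof(struct sdt_data) are as described
above):

```c
/* elem_size = 8-byte sdt_data header + payload; zero payload only. */
u64 nr_words = (alloc-&gt;pool.elem_size - sizeof(struct sdt_data)) / 8;
u64 i;

for (i = 0; i &lt; nr_words; i++)
	payload[i] = 0;	/* old bound wrote one word past the payload */
```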

Fixes: 36929ebd17ae ("tools/sched_ext: add arena based scheduler")
Signed-off-by: Cheng-Yang Chou &lt;yphbchou0911@gmail.com&gt;
Reviewed-by: Emil Tsalapatis &lt;emil@etsalapatis.com&gt;
Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
</content>
</entry>
<entry>
<title>scx_central: Defer timer start to central dispatch to fix init error</title>
<updated>2026-03-27T17:33:00+00:00</updated>
<author>
<name>Zhao Mengmeng</name>
<email>zhaomengmeng@kylinos.cn</email>
</author>
<published>2026-03-27T06:17:57+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=d6edb15ad92cb61386c46662a5ae245c7feac5f0'/>
<id>d6edb15ad92cb61386c46662a5ae245c7feac5f0</id>
<content type='text'>
scx_central currently assumes that ops.init() runs on the selected
central CPU and aborts otherwise. This is no longer true, as ops.init()
is invoked from the scx_enable_helper thread, which can run on any
CPU.

As a result, sched_setaffinity() from userspace doesn't work, causing
scx_central to fail when loading with:

[ 1985.319942] sched_ext: central: scx_central.bpf.c:314: init from non-central CPU
[ 1985.320317]    scx_exit+0xa3/0xd0
[ 1985.320535]    scx_bpf_error_bstr+0xbd/0x220
[ 1985.320840]    bpf_prog_3a445a8163fa8149_central_init+0x103/0x1ba
[ 1985.321073]    bpf__sched_ext_ops_init+0x40/0xa8
[ 1985.321286]    scx_root_enable_workfn+0x507/0x1650
[ 1985.321461]    kthread_worker_fn+0x260/0x940
[ 1985.321745]    kthread+0x303/0x3e0
[ 1985.321901]    ret_from_fork+0x589/0x7d0
[ 1985.322065]    ret_from_fork_asm+0x1a/0x30

DEBUG DUMP
===================================================================

central: root
scx_enable_help[134] triggered exit kind 1025:
  scx_bpf_error (scx_central.bpf.c:314: init from non-central CPU)

Fix this by:
- Deferring bpf_timer_start() to the first dispatch on the central CPU.
- Initializing the BPF timer in central_init() and kicking the central
  CPU to guarantee entering the dispatch path there immediately.
- Removing the unnecessary sched_setaffinity() call in userspace.
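
The deferred start can be sketched like this (bpf_timer_start() and
BPF_F_TIMER_CPU_PIN are real BPF APIs; central_cpu, the interval
constant and the armed flag are illustrative stand-ins for
scx_central's state):

```c
/* In central dispatch, the first time we run on the central CPU: */
if (!timer_armed &amp;&amp; bpf_get_smp_processor_id() == central_cpu) {
	bpf_timer_start(timer, TIMER_INTERVAL_NS, BPF_F_TIMER_CPU_PIN);
	timer_armed = true;
}
```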

Suggested-by: Tejun Heo &lt;tj@kernel.org&gt;
Signed-off-by: Zhao Mengmeng &lt;zhaomengmeng@kylinos.cn&gt;
Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
</content>
</entry>
<entry>
<title>tools/sched_ext: Remove redundant SCX_ENQ_IMMED compat definition</title>
<updated>2026-03-26T20:07:42+00:00</updated>
<author>
<name>Tejun Heo</name>
<email>tj@kernel.org</email>
</author>
<published>2026-03-26T20:07:42+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=ea70239320394266ec8ccf43ff3a6415e43b8163'/>
<id>ea70239320394266ec8ccf43ff3a6415e43b8163</id>
<content type='text'>
compat.bpf.h defined a fallback SCX_ENQ_IMMED macro using
__COMPAT_ENUM_OR_ZERO(). After 6bf36c68b0a2 ("tools/sched_ext:
Regenerate autogen enum headers") added SCX_ENQ_IMMED to the autogen
headers, including both headers triggers -Wmacro-redefined warnings.

The autogen definition through const volatile __weak already resolves to
0 on older kernels, providing the same backward compatibility. Remove
the now-redundant compat fallback.

Fixes: 6bf36c68b0a2 ("tools/sched_ext: Regenerate autogen enum headers")
Link: https://lore.kernel.org/r/20260326100313.338388-1-zhaomzhao@126.com
Reported-by: Zhao Mengmeng &lt;zhaomengmeng@kylinos.cn&gt;
Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
</content>
</entry>
<entry>
<title>tools/sched_ext: scx_pair: fix pair_ctx indexing for CPU pairs</title>
<updated>2026-03-26T03:45:23+00:00</updated>
<author>
<name>Zhao Mengmeng</name>
<email>zhaomengmeng@kylinos.cn</email>
</author>
<published>2026-03-26T02:51:55+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=f546c77038ab898726e7344255217fbec382b97f'/>
<id>f546c77038ab898726e7344255217fbec382b97f</id>
<content type='text'>
scx_pair sizes pair_ctx to nr_cpu_ids / 2, so valid pair_ctx keys are
dense pair indexes in the range [0, nr_cpu_ids / 2).

However, the userspace setup code stores pair_id as the first CPU number
in each pair. On an 8-CPU system with "-S 1", that produces pair IDs
0, 2, 4 and 6 for pairs [0,1], [2,3], [4,5] and [6,7]. CPUs in the
latter half then look up pair_ctx with out-of-range keys and the BPF
scheduler aborts with:

EXIT: scx_bpf_error (scx_pair.bpf.c:328: failed to lookup pairc and
in_pair_mask for cpu[5])

Assign pair_id using a dense pair counter instead so that each CPU pair
maps to a valid pair_ctx entry. Also, reject odd CPU configurations, as
scx_pair requires every CPU to be paired.
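
The dense assignment can be sketched as follows (the rodata array name
is illustrative, and the real scx_pair setup also handles the -S stride
option; only the dense-counter idea is the point):

```c
u32 cpu, pair_id = 0;

/* Pair consecutive CPUs and give each pair a dense id in
 * [0, nr_cpu_ids / 2) so it indexes pair_ctx directly. */
for (cpu = 0; cpu &lt; nr_cpus; cpu += 2) {
	skel-&gt;rodata-&gt;pair_id[cpu] = pair_id;
	skel-&gt;rodata-&gt;pair_id[cpu + 1] = pair_id;
	pair_id++;
}
```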

Fixes: f0262b102c7c ("tools/sched_ext: add scx_pair scheduler")
Signed-off-by: Zhao Mengmeng &lt;zhaomengmeng@kylinos.cn&gt;
Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
</content>
</entry>
<entry>
<title>tools/sched_ext: Regenerate autogen enum headers</title>
<updated>2026-03-25T15:58:08+00:00</updated>
<author>
<name>Cheng-Yang Chou</name>
<email>yphbchou0911@gmail.com</email>
</author>
<published>2026-03-25T04:51:48+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=6bf36c68b0a23afba108920d21c1c108f83371d6'/>
<id>6bf36c68b0a23afba108920d21c1c108f83371d6</id>
<content type='text'>
Regenerate enum_defs.autogen.h, enums.autogen.h and enums.autogen.bpf.h
using the upstream scripts [1][2] to sync with recent kernel enum
additions.

[1] https://github.com/sched-ext/scx/blob/main/scripts/gen_enum_defs.py
[2] https://github.com/sched-ext/scx/blob/main/scripts/gen_enums.py

Signed-off-by: Cheng-Yang Chou &lt;yphbchou0911@gmail.com&gt;
Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
</content>
</entry>
<entry>
<title>tools/sched_ext: Add scx_bpf_sub_dispatch() compat wrapper</title>
<updated>2026-03-23T17:45:08+00:00</updated>
<author>
<name>Cheng-Yang Chou</name>
<email>yphbchou0911@gmail.com</email>
</author>
<published>2026-03-23T15:17:33+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=cb251eae7b0aec8a7924fb27bcb5b0388a3706bc'/>
<id>cb251eae7b0aec8a7924fb27bcb5b0388a3706bc</id>
<content type='text'>
Add a transparent compatibility wrapper for the scx_bpf_sub_dispatch()
kfunc in compat.bpf.h. This allows BPF schedulers using the sub-sched
dispatch feature to build and run on older kernels that lack the kfunc.

To avoid requiring code changes in individual schedulers, the
transparent wrapper pattern is used instead of a __COMPAT prefix. The
kfunc is declared with a ___compat suffix, while the static inline
wrapper retains the original scx_bpf_sub_dispatch() name.

When the kfunc is unavailable, the wrapper safely falls back to
returning false. This is acceptable because the dispatch path cannot
do anything useful without underlying sub-sched support anyway.

Tested scx_qmap on v6.14 successfully.
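
The wrapper pattern, sketched (the kfunc's real argument list is
elided; bpf_ksym_exists() and the __ksym __weak declaration are the
standard compat mechanics described above):

```c
bool scx_bpf_sub_dispatch___compat(/* args elided */) __ksym __weak;

static inline bool scx_bpf_sub_dispatch(/* args elided */)
{
	if (bpf_ksym_exists(scx_bpf_sub_dispatch___compat))
		return scx_bpf_sub_dispatch___compat(/* args */);

	/* No sub-sched support on this kernel; nothing to dispatch. */
	return false;
}
```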

Signed-off-by: Cheng-Yang Chou &lt;yphbchou0911@gmail.com&gt;
Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
</content>
</entry>
</feed>
