<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux.git/kernel, branch v6.5</title>
<subtitle>Linux kernel source tree</subtitle>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/'/>
<entry>
<title>Merge tag 'irq-urgent-2023-08-26' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip</title>
<updated>2023-08-26T17:34:29+00:00</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2023-08-26T17:34:29+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=3b35375f19fe87b5d8822ce01f917095d575ee28'/>
<id>3b35375f19fe87b5d8822ce01f917095d575ee28</id>
<content type='text'>
Pull irq fix from Thomas Gleixner:
 "A last minute fix for a regression introduced in the v6.5 merge
  window.

  The conversion of the software based interrupt resend mechanism to
  hlist missed to add a check whether the descriptor is already enqueued
  and dropped the interrupt descriptor lookup for nested interrupts.

  The missing check whether the descriptor is already queued causes
  hlist corruption and can be observed in the wild. The dropped parent
  descriptor lookup has not yet caused problems, but it would result in
  stale interrupt line in the worst case.

  Add the missing enqueued check and bring the descriptor lookup back to
  cure this"

* tag 'irq-urgent-2023-08-26' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  genirq: Fix software resend lockup and nested resend
</content>
</entry>
<entry>
<title>genirq: Fix software resend lockup and nested resend</title>
<updated>2023-08-26T17:14:31+00:00</updated>
<author>
<name>Johan Hovold</name>
<email>johan+linaro@kernel.org</email>
</author>
<published>2023-08-26T15:40:04+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=9f5deb551655a4cff04b21ecffdcdab75112da3a'/>
<id>9f5deb551655a4cff04b21ecffdcdab75112da3a</id>
<content type='text'>
The switch to using hlist for managing software resend of interrupts
broke resend in at least two ways:

First, unconditionally adding interrupt descriptors to the resend list can
corrupt the list when the descriptor in question has already been
added. This causes the resend tasklet to loop indefinitely with interrupts
disabled as was recently reported with the Lenovo ThinkPad X13s after
threaded NAPI was disabled in the ath11k WiFi driver.

This bug is easily fixed by restoring the old semantics of irq_sw_resend()
so that it can also be called for descriptors that have already been
marked for resend.

Second, the offending commit also broke software resend of nested
interrupts by simply discarding the code that made sure that such
interrupts are retriggered using the parent interrupt.

Add back the corresponding code that adds the parent descriptor to the
resend list.

Fixes: bc06a9e08742 ("genirq: Use hlist for managing resend handlers")
Signed-off-by: Johan Hovold &lt;johan+linaro@kernel.org&gt;
Signed-off-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Reviewed-by: Marc Zyngier &lt;maz@kernel.org&gt;
Link: https://lore.kernel.org/lkml/20230809073432.4193-1-johan+linaro@kernel.org/
Link: https://lore.kernel.org/r/20230826154004.1417-1-johan+linaro@kernel.org

</content>
</entry>
<entry>
<title>tracing: Introduce pipe_cpumask to avoid race on trace_pipes</title>
<updated>2023-08-21T15:17:14+00:00</updated>
<author>
<name>Zheng Yejian</name>
<email>zhengyejian1@huawei.com</email>
</author>
<published>2023-08-18T02:26:45+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=c2489bb7e6be2e8cdced12c16c42fa128403ac03'/>
<id>c2489bb7e6be2e8cdced12c16c42fa128403ac03</id>
<content type='text'>
There is a race when the main trace_pipe and the per_cpu trace_pipes
are read concurrently via splice_read(), which can result in the data
read out differing from what was actually written.

As suggested by Steven:
  &gt; I believe we should add a ref count to trace_pipe and the per_cpu
  &gt; trace_pipes, where if they are opened, nothing else can read it.
  &gt;
  &gt; Opening trace_pipe locks all per_cpu ref counts, if any of them are
  &gt; open, then the trace_pipe open will fail (and releases any ref counts
  &gt; it had taken).
  &gt;
  &gt; Opening a per_cpu trace_pipe will up the ref count for just that
  &gt; CPU buffer. This will allow multiple tasks to read different per_cpu
  &gt; trace_pipe files, but will prevent the main trace_pipe file from
  &gt; being opened.

But because we only need to know whether a per_cpu trace_pipe is open
or not, using a cpumask instead of a ref count is simpler.

After this patch, users will find that:
 - Main trace_pipe can be opened by only one user, and if it is
   opened, all per_cpu trace_pipes cannot be opened;
 - Per_cpu trace_pipes can be opened by multiple users, but each per_cpu
   trace_pipe can only be opened by one user. And if one of them is
   opened, main trace_pipe cannot be opened.

Link: https://lore.kernel.org/linux-trace-kernel/20230818022645.1948314-1-zhengyejian1@huawei.com

Suggested-by: Steven Rostedt (Google) &lt;rostedt@goodmis.org&gt;
Signed-off-by: Zheng Yejian &lt;zhengyejian1@huawei.com&gt;
Reviewed-by: Masami Hiramatsu (Google) &lt;mhiramat@kernel.org&gt;
Signed-off-by: Steven Rostedt (Google) &lt;rostedt@goodmis.org&gt;
</content>
</entry>
<entry>
<title>tracing: Fix memleak due to race between current_tracer and trace</title>
<updated>2023-08-17T17:49:37+00:00</updated>
<author>
<name>Zheng Yejian</name>
<email>zhengyejian1@huawei.com</email>
</author>
<published>2023-08-17T12:55:39+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=eecb91b9f98d6427d4af5fdb8f108f52572a39e7'/>
<id>eecb91b9f98d6427d4af5fdb8f108f52572a39e7</id>
<content type='text'>
Kmemleak reports a leak in graph_trace_open():

  unreferenced object 0xffff0040b95f4a00 (size 128):
    comm "cat", pid 204981, jiffies 4301155872 (age 99771.964s)
    hex dump (first 32 bytes):
      e0 05 e7 b4 ab 7d 00 00 0b 00 01 00 00 00 00 00 .....}..........
      f4 00 01 10 00 a0 ff ff 00 00 00 00 65 00 10 00 ............e...
    backtrace:
      [&lt;000000005db27c8b&gt;] kmem_cache_alloc_trace+0x348/0x5f0
      [&lt;000000007df90faa&gt;] graph_trace_open+0xb0/0x344
      [&lt;00000000737524cd&gt;] __tracing_open+0x450/0xb10
      [&lt;0000000098043327&gt;] tracing_open+0x1a0/0x2a0
      [&lt;00000000291c3876&gt;] do_dentry_open+0x3c0/0xdc0
      [&lt;000000004015bcd6&gt;] vfs_open+0x98/0xd0
      [&lt;000000002b5f60c9&gt;] do_open+0x520/0x8d0
      [&lt;00000000376c7820&gt;] path_openat+0x1c0/0x3e0
      [&lt;00000000336a54b5&gt;] do_filp_open+0x14c/0x324
      [&lt;000000002802df13&gt;] do_sys_openat2+0x2c4/0x530
      [&lt;0000000094eea458&gt;] __arm64_sys_openat+0x130/0x1c4
      [&lt;00000000a71d7881&gt;] el0_svc_common.constprop.0+0xfc/0x394
      [&lt;00000000313647bf&gt;] do_el0_svc+0xac/0xec
      [&lt;000000002ef1c651&gt;] el0_svc+0x20/0x30
      [&lt;000000002fd4692a&gt;] el0_sync_handler+0xb0/0xb4
      [&lt;000000000c309c35&gt;] el0_sync+0x160/0x180

The root cause is described as follows:

  __tracing_open() {  // 1. File 'trace' is being opened;
    ...
    *iter-&gt;trace = *tr-&gt;current_trace;  // 2. Tracer 'function_graph' is
                                        //    currently set;
    ...
    iter-&gt;trace-&gt;open(iter);  // 3. graph_trace_open() is called here,
                              //    and memory is allocated in it;
    ...
  }

  s_start() {  // 4. The opened file is being read;
    ...
    *iter-&gt;trace = *tr-&gt;current_trace;  // 5. If the tracer is switched to
                                        //    'nop' or another tracer, the
                                        //    memory from step 3 is leaked!
    ...
  }

To fix it, in s_start(), close the old tracer before switching, then
reopen the new tracer after switching. Since some tracers like 'wakeup'
may not update 'iter-&gt;private' when reopened, it should also be
cleared to avoid being mistakenly closed again.

Link: https://lore.kernel.org/linux-trace-kernel/20230817125539.1646321-1-zhengyejian1@huawei.com

Fixes: d7350c3f4569 ("tracing/core: make the read callbacks reentrants")
Signed-off-by: Zheng Yejian &lt;zhengyejian1@huawei.com&gt;
Signed-off-by: Steven Rostedt (Google) &lt;rostedt@goodmis.org&gt;
</content>
</entry>
<entry>
<title>tracing/synthetic: Allocate one additional element for size</title>
<updated>2023-08-16T20:37:07+00:00</updated>
<author>
<name>Sven Schnelle</name>
<email>svens@linux.ibm.com</email>
</author>
<published>2023-08-16T15:49:28+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=c4d6b5438116c184027b2e911c0f2c7c406fb47c'/>
<id>c4d6b5438116c184027b2e911c0f2c7c406fb47c</id>
<content type='text'>
While debugging another issue I noticed that the stack trace contains one
invalid entry at the end:

&lt;idle&gt;-0       [008] d..4.    26.484201: wake_lat: pid=0 delta=2629976084 000000009cc24024 stack=STACK:
=&gt; __schedule+0xac6/0x1a98
=&gt; schedule+0x126/0x2c0
=&gt; schedule_timeout+0x150/0x2c0
=&gt; kcompactd+0x9ca/0xc20
=&gt; kthread+0x2f6/0x3d8
=&gt; __ret_from_fork+0x8a/0xe8
=&gt; 0x6b6b6b6b6b6b6b6b

This is because the code failed to account for the one extra element
containing the number of entries when computing field_size.

Link: https://lkml.kernel.org/r/20230816154928.4171614-4-svens@linux.ibm.com

Cc: Masami Hiramatsu &lt;mhiramat@kernel.org&gt;
Fixes: 00cf3d672a9d ("tracing: Allow synthetic events to pass around stacktraces")
Signed-off-by: Sven Schnelle &lt;svens@linux.ibm.com&gt;
Signed-off-by: Steven Rostedt (Google) &lt;rostedt@goodmis.org&gt;
</content>
</entry>
<entry>
<title>tracing/synthetic: Skip first entry for stack traces</title>
<updated>2023-08-16T20:34:25+00:00</updated>
<author>
<name>Sven Schnelle</name>
<email>svens@linux.ibm.com</email>
</author>
<published>2023-08-16T15:49:27+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=887f92e09ef34a949745ad26ce82be69e2dabcf6'/>
<id>887f92e09ef34a949745ad26ce82be69e2dabcf6</id>
<content type='text'>
While debugging another issue I noticed that the stack trace output
contains the number of entries on top:

         &lt;idle&gt;-0       [000] d..4.   203.322502: wake_lat: pid=0 delta=2268270616 stack=STACK:
=&gt; 0x10
=&gt; __schedule+0xac6/0x1a98
=&gt; schedule+0x126/0x2c0
=&gt; schedule_timeout+0x242/0x2c0
=&gt; __wait_for_common+0x434/0x680
=&gt; __wait_rcu_gp+0x198/0x3e0
=&gt; synchronize_rcu+0x112/0x138
=&gt; ring_buffer_reset_online_cpus+0x140/0x2e0
=&gt; tracing_reset_online_cpus+0x15c/0x1d0
=&gt; tracing_set_clock+0x180/0x1d8
=&gt; hist_register_trigger+0x486/0x670
=&gt; event_hist_trigger_parse+0x494/0x1318
=&gt; trigger_process_regex+0x1d4/0x258
=&gt; event_trigger_write+0xb4/0x170
=&gt; vfs_write+0x210/0xad0
=&gt; ksys_write+0x122/0x208

Fix this by skipping the first element. Also replace the pointer
logic with an index variable, which is easier to read.

Link: https://lkml.kernel.org/r/20230816154928.4171614-3-svens@linux.ibm.com

Cc: Masami Hiramatsu &lt;mhiramat@kernel.org&gt;
Fixes: 00cf3d672a9d ("tracing: Allow synthetic events to pass around stacktraces")
Signed-off-by: Sven Schnelle &lt;svens@linux.ibm.com&gt;
Signed-off-by: Steven Rostedt (Google) &lt;rostedt@goodmis.org&gt;
</content>
</entry>
<entry>
<title>tracing/synthetic: Use union instead of casts</title>
<updated>2023-08-16T20:33:27+00:00</updated>
<author>
<name>Sven Schnelle</name>
<email>svens@linux.ibm.com</email>
</author>
<published>2023-08-16T15:49:26+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=ddeea494a16f32522bce16ee65f191d05d4b8282'/>
<id>ddeea494a16f32522bce16ee65f191d05d4b8282</id>
<content type='text'>
The current code uses a lot of casts to access the fields member in
struct synth_trace_events at different sizes.  This makes the code hard
to read and has already introduced an endianness bug. Use a union and
struct instead.

Link: https://lkml.kernel.org/r/20230816154928.4171614-2-svens@linux.ibm.com

Cc: stable@vger.kernel.org
Cc: Masami Hiramatsu &lt;mhiramat@kernel.org&gt;
Fixes: 00cf3d672a9dd ("tracing: Allow synthetic events to pass around stacktraces")
Signed-off-by: Sven Schnelle &lt;svens@linux.ibm.com&gt;
Signed-off-by: Steven Rostedt (Google) &lt;rostedt@goodmis.org&gt;
</content>
</entry>
<entry>
<title>tracing: Fix cpu buffers unavailable due to 'record_disabled' missed</title>
<updated>2023-08-16T19:12:42+00:00</updated>
<author>
<name>Zheng Yejian</name>
<email>zhengyejian1@huawei.com</email>
</author>
<published>2023-08-05T03:38:15+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=b71645d6af10196c46cbe3732de2ea7d36b3ff6d'/>
<id>b71645d6af10196c46cbe3732de2ea7d36b3ff6d</id>
<content type='text'>
The trace ring buffer can no longer record anything after executing the
following commands at the shell prompt:

  # cd /sys/kernel/tracing
  # cat tracing_cpumask
  fff
  # echo 0 &gt; tracing_cpumask
  # echo 1 &gt; snapshot
  # echo fff &gt; tracing_cpumask
  # echo 1 &gt; tracing_on
  # echo "hello world" &gt; trace_marker
  -bash: echo: write error: Bad file descriptor

The root cause is that:
  1. After `echo 0 &gt; tracing_cpumask`, 'record_disabled' of the cpu
     buffers in 'tr-&gt;array_buffer.buffer' becomes 1 (see
     tracing_set_cpumask());
  2. After `echo 1 &gt; snapshot`, 'tr-&gt;array_buffer.buffer' is swapped
     with 'tr-&gt;max_buffer.buffer', so its 'record_disabled' becomes 0
     (see update_max_tr());
  3. After `echo fff &gt; tracing_cpumask`, 'record_disabled' becomes -1.

Then array_buffer and max_buffer are both unavailable because the value
of 'record_disabled' is not 0.

To fix it, enable or disable both array_buffer and max_buffer at the same
time in tracing_set_cpumask().

Link: https://lkml.kernel.org/r/20230805033816.3284594-2-zhengyejian1@huawei.com

Cc: &lt;mhiramat@kernel.org&gt;
Cc: &lt;vnagarnaik@google.com&gt;
Cc: &lt;shuah@kernel.org&gt;
Fixes: 71babb2705e2 ("tracing: change CPU ring buffer state from tracing_cpumask")
Signed-off-by: Zheng Yejian &lt;zhengyejian1@huawei.com&gt;
Signed-off-by: Steven Rostedt (Google) &lt;rostedt@goodmis.org&gt;
</content>
</entry>
<entry>
<title>Merge tag 'pm-6.5-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm</title>
<updated>2023-08-11T19:24:22+00:00</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2023-08-11T19:24:22+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=9578b04c32397e664bd4643c8b7f525728df3028'/>
<id>9578b04c32397e664bd4643c8b7f525728df3028</id>
<content type='text'>
Pull power management fixes from Rafael Wysocki:
 "These fix an amd-pstate cpufreq driver issues and recently introduced
  hibernation-related breakage.

  Specifics:

   - Make amd-pstate use device_attributes as expected by the CPU root
     kobject (Thomas Weißschuh)

   - Restore the previous behavior of resume_store() when hibernation
     is not available, which is to return the full number of bytes that
     were to be written by user space (Vlastimil Babka)"

* tag 'pm-6.5-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
  cpufreq: amd-pstate: fix global sysfs attribute type
  PM: hibernate: fix resume_store() return value when hibernation not available
</content>
</entry>
<entry>
<title>Merge tag 'wq-for-6.5-rc5-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq</title>
<updated>2023-08-07T20:07:12+00:00</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2023-08-07T20:07:12+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=14f9643dc90adea074a0ffb7a17d337eafc6a5cc'/>
<id>14f9643dc90adea074a0ffb7a17d337eafc6a5cc</id>
<content type='text'>
Pull workqueue fixes from Tejun Heo:

 - The recently added cpu_intensive auto detection and warning mechanism
   was spuriously triggered on slow CPUs.

   While not causing serious issues, it's still a nuisance and can cause
   unintended concurrency management behaviors.

   Relax the threshold on machines with lower BogoMIPS. While BogoMIPS
   is not an accurate measure of performance, we do not need accuracy
   here, and it has a rough but sufficiently strong correlation.
 - A correction in Kconfig help text

* tag 'wq-for-6.5-rc5-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq:
  workqueue: Scale up wq_cpu_intensive_thresh_us if BogoMIPS is below 4000
  workqueue: Fix cpu_intensive_thresh_us name in help text
</content>
</entry>
</feed>
