<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux.git/arch/arm64/kernel/cpu_errata.c, branch v5.1</title>
<subtitle>Linux kernel source tree</subtitle>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/'/>
<entry>
<title>arm64: kpti: Avoid rewriting early page tables when KASLR is enabled</title>
<updated>2019-01-10T17:49:35+00:00</updated>
<author>
<name>Will Deacon</name>
<email>will.deacon@arm.com</email>
</author>
<published>2019-01-08T16:19:01+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=b89d82ef01b33bc50cbaa8ff05607879b40d0704'/>
<id>b89d82ef01b33bc50cbaa8ff05607879b40d0704</id>
<content type='text'>
A side effect of commit c55191e96caa ("arm64: mm: apply r/o permissions
of VM areas to its linear alias as well") is that the linear map is
created with page granularity, which means that transitioning the early
page table from global to non-global mappings when enabling kpti can
take a significant amount of time during boot.

Given that most CPU implementations do not require kpti, this mainly
impacts KASLR builds where kpti is forcefully enabled. However, in these
situations we know early on that non-global mappings are required and
can avoid the use of global mappings from the beginning. The only gotcha
is Cavium erratum #27456, which we must detect based on the MIDR value
of the boot CPU.
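
As a hedged illustration (not the patch itself), the boot-CPU test can be
built from the existing MIDR helpers in cputype.h; the helper name and the
revision ranges below are illustrative only:

    #include &lt;asm/cputype.h&gt;

    /*
     * Sketch: decide, before the early page tables are written, whether
     * the boot CPU is affected by Cavium erratum 27456.
     */
    static bool __init boot_cpu_has_cavium_27456(void)
    {
            u32 midr = read_cpuid_id();

            /* ThunderX T88 pass 1.x - 2.1 (illustrative range) */
            return midr_is_cpu_model_range(midr, MIDR_THUNDERX,
                                           MIDR_CPU_VAR_REV(0, 0),
                                           MIDR_CPU_VAR_REV(1, 1)) ||
                   /* ThunderX T81 pass 1.0 (illustrative) */
                   midr_is_cpu_model_range(midr, MIDR_THUNDERX_81XX,
                                           MIDR_CPU_VAR_REV(0, 0),
                                           MIDR_CPU_VAR_REV(0, 0));
    }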

Reviewed-by: Ard Biesheuvel &lt;ard.biesheuvel@linaro.org&gt;
Reported-by: John Garry &lt;john.garry@huawei.com&gt;
Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;
</content>
</entry>
<entry>
<title>Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux</title>
<updated>2018-12-26T01:41:56+00:00</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2018-12-26T01:41:56+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=5694cecdb092656a822287a6691aa7ce668c8160'/>
<id>5694cecdb092656a822287a6691aa7ce668c8160</id>
<content type='text'>
Pull arm64 festive updates from Will Deacon:
 "In the end, we ended up with quite a lot more than I expected:

   - Support for ARMv8.3 Pointer Authentication in userspace (CRIU and
     kernel-side support to come later)

   - Support for per-thread stack canaries, pending an update to GCC
     that is currently undergoing review

   - Support for kexec_file_load(), which permits secure boot of a kexec
     payload but also happens to improve the performance of kexec
     dramatically because we can avoid the sucky purgatory code from
     userspace. Kdump will come later (requires updates to libfdt).

   - Optimisation of our dynamic CPU feature framework, so that all
     detected features are enabled via a single stop_machine()
     invocation

   - KPTI whitelisting of Cortex-A CPUs unaffected by Meltdown, so that
     they can benefit from global TLB entries when KASLR is not in use

   - 52-bit virtual addressing for userspace (kernel remains 48-bit)

   - Patch in LSE atomics for per-cpu atomic operations

   - Custom preempt.h implementation to avoid unconditional calls to
     preempt_schedule() from preempt_enable() (see the sketch after this
     list)

   - Support for the new 'SB' Speculation Barrier instruction

   - Vectorised implementation of XOR checksumming and CRC32
     optimisations

   - Workaround for Cortex-A76 erratum #1165522

   - Improved compatibility with Clang/LLD

   - Support for TX2 system PMUs for profiling the L3 cache and DMC

   - Reflect read-only permissions in the linear map by default

   - Ensure MMIO reads are ordered with subsequent calls to Xdelay()

   - Initial support for memory hotplug

   - Tweak the threshold when we invalidate the TLB by-ASID, so that
     mremap() performance is improved for ranges spanning multiple PMDs.

   - Minor refactoring and cleanups"
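
As a hedged illustration of the custom preempt.h item above (a userspace
model of the idea, not the kernel's actual code): folding the need-resched
state into the same word as the preemption nesting count lets
preempt_enable() test a single value against zero and call
preempt_schedule() only when it must.

    #include &lt;stdbool.h&gt;
    #include &lt;stdint.h&gt;

    /* Little-endian model: low word is the nesting count; the high word
     * is zeroed only when a reschedule has been requested. */
    union preempt_count {
            uint64_t both;
            struct {
                    uint32_t count;      /* preemption nesting depth */
                    uint32_t no_resched; /* 0 means resched requested */
            };
    };

    static union preempt_count pc = { .both = 1ULL &lt;&lt; 32 };

    static bool preempt_dec_and_test(void)
    {
            pc.count--;
            /* Zero iff the count hit 0 and a resched is pending, so the
             * call to the scheduler sits behind a single branch. */
            return pc.both == 0;
    }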

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (125 commits)
  arm64: kaslr: print PHYS_OFFSET in dump_kernel_offset()
  arm64: sysreg: Use _BITUL() when defining register bits
  arm64: cpufeature: Rework ptr auth hwcaps using multi_entry_cap_matches
  arm64: cpufeature: Reduce number of pointer auth CPU caps from 6 to 4
  arm64: docs: document pointer authentication
  arm64: ptr auth: Move per-thread keys from thread_info to thread_struct
  arm64: enable pointer authentication
  arm64: add prctl control for resetting ptrauth keys
  arm64: perf: strip PAC when unwinding userspace
  arm64: expose user PAC bit positions via ptrace
  arm64: add basic pointer authentication support
  arm64/cpufeature: detect pointer authentication
  arm64: Don't trap host pointer auth use to EL2
  arm64/kvm: hide ptrauth from guests
  arm64/kvm: consistently handle host HCR_EL2 flags
  arm64: add pointer authentication register bits
  arm64: add comments about EC exception levels
  arm64: perf: Treat EXCLUDE_EL* bit definitions as unsigned
  arm64: kpti: Whitelist Cortex-A CPUs that don't implement the CSV3 field
  arm64: enable per-task stack canaries
  ...
</content>
</entry>
<entry>
<title>arm64: cpufeature: Rework ptr auth hwcaps using multi_entry_cap_matches</title>
<updated>2018-12-13T16:42:47+00:00</updated>
<author>
<name>Will Deacon</name>
<email>will.deacon@arm.com</email>
</author>
<published>2018-12-12T15:53:54+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=1e013d06120cbf67e771848fc5d98174c33b078a'/>
<id>1e013d06120cbf67e771848fc5d98174c33b078a</id>
<content type='text'>
Open-coding the pointer-auth HWCAPs is a mess and can be avoided by
reusing the multi-cap logic from the CPU errata framework.

Move the multi_entry_cap_matches code to cpufeature.h and reuse it for
the pointer auth HWCAPs.
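
The shared helper is essentially a walk over a sentinel-terminated list of
sub-entries; a minimal sketch of its shape (hedged, field names as used by
the arm64 capability framework):

    #include &lt;asm/cpufeature.h&gt;

    /* Match if any entry in the NULL-terminated match_list fires. */
    static bool
    multi_entry_cap_matches(const struct arm64_cpu_capabilities *entry,
                            int scope)
    {
            const struct arm64_cpu_capabilities *caps;

            for (caps = entry-&gt;match_list; caps-&gt;matches; caps++)
                    if (caps-&gt;matches(caps, scope))
                            return true;

            return false;
    }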

Reviewed-by: Suzuki Poulose &lt;suzuki.poulose@arm.com&gt;
Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;
</content>
</entry>
<entry>
<title>Merge branch 'kvm/cortex-a76-erratum-1165522' into aarch64/for-next/core</title>
<updated>2018-12-10T18:53:52+00:00</updated>
<author>
<name>Will Deacon</name>
<email>will.deacon@arm.com</email>
</author>
<published>2018-12-10T18:53:03+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=bc84a2d106beab6000223b569c3bcb9ebf49d9ec'/>
<id>bc84a2d106beab6000223b569c3bcb9ebf49d9ec</id>
<content type='text'>
Pull in KVM workaround for A76 erratum #1165522.

Conflicts:
	arch/arm64/include/asm/cpucaps.h
</content>
</entry>
<entry>
<title>arm64: KVM: Force VHE for systems affected by erratum 1165522</title>
<updated>2018-12-10T11:59:07+00:00</updated>
<author>
<name>Marc Zyngier</name>
<email>marc.zyngier@arm.com</email>
</author>
<published>2018-12-06T17:31:23+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=8b2cca9ade2c0f1d2ba94e39781e7306c918e544'/>
<id>8b2cca9ade2c0f1d2ba94e39781e7306c918e544</id>
<content type='text'>
In order to easily mitigate ARM erratum 1165522, we need to force
affected CPUs to run in VHE mode if using KVM.
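
A hypothetical sketch of the resulting policy check (the capability name
follows this series; the helper and its call site are illustrative only):

    #include &lt;asm/cpufeature.h&gt;

    /* Illustrative: KVM initialisation would insist on VHE mode when
     * any CPU in the system is affected by erratum 1165522. */
    static bool kvm_requires_vhe_for_1165522(void)
    {
            return IS_ENABLED(CONFIG_ARM64_ERRATUM_1165522) &amp;&amp;
                   cpus_have_const_cap(ARM64_WORKAROUND_1165522);
    }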

Reviewed-by: James Morse &lt;james.morse@arm.com&gt;
Signed-off-by: Marc Zyngier &lt;marc.zyngier@arm.com&gt;
Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: capabilities: Merge duplicate entries for Qualcomm erratum 1003</title>
<updated>2018-12-06T11:47:44+00:00</updated>
<author>
<name>Suzuki K Poulose</name>
<email>suzuki.poulose@arm.com</email>
</author>
<published>2018-11-30T17:18:02+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=a3dcea2c85129716f323d504b087a04200687242'/>
<id>a3dcea2c85129716f323d504b087a04200687242</id>
<content type='text'>
Remove duplicate entries for Qualcomm erratum 1003. Since the entries
are not purely based on generic MIDR checks, use the multi_cap_entry
type to merge the entries.
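
A hedged sketch of the merged shape, as fragments of the cpu_errata.c
tables (sub-entry names and ranges illustrative):

    /* Sub-entries: one generic MIDR check, one custom match function. */
    static const struct arm64_cpu_capabilities qcom_erratum_1003_list[] = {
            {
                    ERRATA_MIDR_REV(MIDR_QCOM_FALKOR_V1, 0, 0),
            },
            {
                    .midr_range.model = MIDR_QCOM_KRYO,
                    .matches = is_kryo_midr,
            },
            {},
    };

    /* Single entry in arm64_errata[] dispatching to the list above. */
    {
            .desc = "Qualcomm Technologies Falkor/Kryo erratum 1003",
            .capability = ARM64_WORKAROUND_QCOM_FALKOR_E1003,
            .matches = multi_entry_cap_matches,
            .match_list = qcom_erratum_1003_list,
    },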

Cc: Christopher Covington &lt;cov@codeaurora.org&gt;
Cc: Will Deacon &lt;will.deacon@arm.com&gt;
Reviewed-by: Vladimir Murzin &lt;vladimir.murzin@arm.com&gt;
Tested-by: Vladimir Murzin &lt;vladimir.murzin@arm.com&gt;
Signed-off-by: Suzuki K Poulose &lt;suzuki.poulose@arm.com&gt;
Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: capabilities: Merge duplicate Cavium erratum entries</title>
<updated>2018-12-06T11:47:44+00:00</updated>
<author>
<name>Suzuki K Poulose</name>
<email>suzuki.poulose@arm.com</email>
</author>
<published>2018-11-30T17:18:01+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=f58cdf7e3cab33306efd999c23b4fb606184abf3'/>
<id>f58cdf7e3cab33306efd999c23b4fb606184abf3</id>
<content type='text'>
Merge duplicate entries for a single capability using the midr
range list for Cavium errata 30115 and 27456.
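
A sketch of the merged form for erratum 27456, again as a fragment of the
cpu_errata.c tables (revision ranges illustrative):

    static const struct midr_range cavium_erratum_27456_cpus[] = {
            /* Cavium ThunderX, T88 pass 1.x - 2.1 */
            MIDR_RANGE(MIDR_THUNDERX, 0, 0, 1, 1),
            /* Cavium ThunderX, T81 pass 1.0 */
            MIDR_REV(MIDR_THUNDERX_81XX, 0, 0),
            {},
    };

    /* Entry in arm64_errata[]: one capability, one range list. */
    {
            .desc = "Cavium erratum 27456",
            .capability = ARM64_WORKAROUND_CAVIUM_27456,
            ERRATA_MIDR_RANGE_LIST(cavium_erratum_27456_cpus),
    },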

Cc: Andrew Pinski &lt;apinski@cavium.com&gt;
Cc: David Daney &lt;david.daney@cavium.com&gt;
Cc: Will Deacon &lt;will.deacon@arm.com&gt;
Cc: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Reviewed-by: Vladimir Murzin &lt;vladimir.murzin@arm.com&gt;
Tested-by: Vladimir Murzin &lt;vladimir.murzin@arm.com&gt;
Signed-off-by: Suzuki K Poulose &lt;suzuki.poulose@arm.com&gt;
Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: capabilities: Merge entries for ARM64_WORKAROUND_CLEAN_CACHE</title>
<updated>2018-12-06T11:47:44+00:00</updated>
<author>
<name>Suzuki K Poulose</name>
<email>suzuki.poulose@arm.com</email>
</author>
<published>2018-11-30T17:18:00+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=c9460dcb06ee68af1c75f9232603ece071901abe'/>
<id>c9460dcb06ee68af1c75f9232603ece071901abe</id>
<content type='text'>
We have two entries for the ARM64_WORKAROUND_CLEAN_CACHE capability:

1) ARM Errata 826319, 827319, 824069, 819472 on A53 r0p[012]
2) ARM Errata 819472 on A53 r0p[01]

Both have the same workaround. Merge these entries to avoid
duplicate entries for a single capability. Add a new Kconfig
entry to control the "capability" entry to make it easier
to handle combinations of the CONFIGs.
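
A hedged sketch of how the merged, Kconfig-controlled range list might
look (macro names follow this series; the ranges are illustrative):

    #ifdef CONFIG_ARM64_WORKAROUND_CLEAN_CACHE
    static const struct midr_range workaround_clean_cache[] = {
    #if     defined(CONFIG_ARM64_ERRATUM_826319) || \
            defined(CONFIG_ARM64_ERRATUM_827319) || \
            defined(CONFIG_ARM64_ERRATUM_824069)
            /* Cortex-A53 r0p[012]: errata 826319, 827319, 824069 */
            MIDR_REV_RANGE(MIDR_CORTEX_A53, 0, 0, 2),
    #endif
    #ifdef  CONFIG_ARM64_ERRATUM_819472
            /* Cortex-A53 r0p[01]: erratum 819472 */
            MIDR_REV_RANGE(MIDR_CORTEX_A53, 0, 0, 1),
    #endif
            {},
    };
    #endif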

Cc: Will Deacon &lt;will.deacon@arm.com&gt;
Cc: Andre Przywara &lt;andre.przywara@arm.com&gt;
Cc: Mark Rutland &lt;mark.rutland@arm.com&gt;
Signed-off-by: Suzuki K Poulose &lt;suzuki.poulose@arm.com&gt;
Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: Add workaround for Cortex-A76 erratum 1286807</title>
<updated>2018-11-29T16:45:45+00:00</updated>
<author>
<name>Catalin Marinas</name>
<email>catalin.marinas@arm.com</email>
</author>
<published>2018-11-19T11:27:28+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=ce8c80c536dac9f325a051b30bf7730ee505eddc'/>
<id>ce8c80c536dac9f325a051b30bf7730ee505eddc</id>
<content type='text'>
On the affected Cortex-A76 cores (r0p0 to r3p0), if a virtual address
for a cacheable mapping of a location is being accessed by a core while
another core is remapping the virtual address to a new physical page
using the recommended break-before-make sequence, then under very rare
circumstances TLBI+DSB completes before a read using the translation
being invalidated has been observed by other observers. The workaround
repeats the TLBI+DSB operation and is shared with the Qualcomm Falkor
erratum 1009.
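
The shape of the workaround is simply "invalidate twice"; a simplified
sketch that ignores the ASID/VA encoding details of the real tlbflush.h
code (helper name illustrative):

    #include &lt;asm/barrier.h&gt;
    #include &lt;asm/cpufeature.h&gt;

    /* Simplified: on parts with ARM64_WORKAROUND_REPEAT_TLBI, repeat
     * the TLBI+DSB pair so a racing read through the stale translation
     * is caught by the second invalidation. */
    static inline void local_flush_tlb_page_repeat(unsigned long va)
    {
            dsb(ishst);
            asm volatile("tlbi vale1is, %0" : : "r" (va &gt;&gt; 12));
            dsb(ish);
            if (cpus_have_const_cap(ARM64_WORKAROUND_REPEAT_TLBI)) {
                    asm volatile("tlbi vale1is, %0" : : "r" (va &gt;&gt; 12));
                    dsb(ish);
            }
    }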

Reviewed-by: Suzuki K Poulose &lt;suzuki.poulose@arm.com&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: Use a raw spinlock in __install_bp_hardening_cb()</title>
<updated>2018-11-27T18:01:34+00:00</updated>
<author>
<name>James Morse</name>
<email>james.morse@arm.com</email>
</author>
<published>2018-11-27T15:35:21+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=d8797b125711f23d83f5a71e908d34dfcd1fc3e9'/>
<id>d8797b125711f23d83f5a71e908d34dfcd1fc3e9</id>
<content type='text'>
__install_bp_hardening_cb() is called via stop_machine() as part
of the cpu_enable callback. To force each CPU to take its turn
when allocating slots, they take a spinlock.

With the RT patches applied, the spinlock becomes a mutex,
and we get warnings about sleeping while in stop_machine():
| [    0.319176] CPU features: detected: RAS Extension Support
| [    0.319950] BUG: scheduling while atomic: migration/3/36/0x00000002
| [    0.319955] Modules linked in:
| [    0.319958] Preemption disabled at:
| [    0.319969] [&lt;ffff000008181ae4&gt;] cpu_stopper_thread+0x7c/0x108
| [    0.319973] CPU: 3 PID: 36 Comm: migration/3 Not tainted 4.19.1-rt3-00250-g330fc2c2a880 #2
| [    0.319975] Hardware name: linux,dummy-virt (DT)
| [    0.319976] Call trace:
| [    0.319981]  dump_backtrace+0x0/0x148
| [    0.319983]  show_stack+0x14/0x20
| [    0.319987]  dump_stack+0x80/0xa4
| [    0.319989]  __schedule_bug+0x94/0xb0
| [    0.319991]  __schedule+0x510/0x560
| [    0.319992]  schedule+0x38/0xe8
| [    0.319994]  rt_spin_lock_slowlock_locked+0xf0/0x278
| [    0.319996]  rt_spin_lock_slowlock+0x5c/0x90
| [    0.319998]  rt_spin_lock+0x54/0x58
| [    0.320000]  enable_smccc_arch_workaround_1+0xdc/0x260
| [    0.320001]  __enable_cpu_capability+0x10/0x20
| [    0.320003]  multi_cpu_stop+0x84/0x108
| [    0.320004]  cpu_stopper_thread+0x84/0x108
| [    0.320008]  smpboot_thread_fn+0x1e8/0x2b0
| [    0.320009]  kthread+0x124/0x128
| [    0.320010]  ret_from_fork+0x10/0x18

Switch this to a raw spinlock, as we know this is only called with
IRQs masked.
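
The shape of the fix, as a minimal sketch (lock and function names
illustrative): a raw spinlock remains a true spinning lock even with
PREEMPT_RT applied, so it is safe under stop_machine() with IRQs masked.

    #include &lt;linux/spinlock.h&gt;

    static DEFINE_RAW_SPINLOCK(bp_lock);

    static void install_cb_slot(void)
    {
            /* Never becomes a sleeping lock on RT. */
            raw_spin_lock(&amp;bp_lock);
            /* ... pick or allocate a hardening-vector slot ... */
            raw_spin_unlock(&amp;bp_lock);
    }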

Signed-off-by: James Morse &lt;james.morse@arm.com&gt;
Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;
</content>
</entry>
</feed>
