<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux.git/arch/arm64/include/asm/assembler.h, branch v5.1</title>
<subtitle>Linux kernel source tree</subtitle>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/'/>
<entry>
<title>arm64: Add workaround for Fujitsu A64FX erratum 010001</title>
<updated>2019-02-28T16:24:25+00:00</updated>
<author>
<name>Zhang Lei</name>
<email>zhang.lei@jp.fujitsu.com</email>
</author>
<published>2019-02-26T18:43:41+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=3e32131abc311a5cb9fddecc72cbd0b95ffcc10d'/>
<id>3e32131abc311a5cb9fddecc72cbd0b95ffcc10d</id>
<content type='text'>
On the Fujitsu A64FX cores ver(1.0, 1.1), a memory access may cause
an undefined fault (Data abort, DFSC=0b111111). This fault occurs under
a specific hardware condition when a load/store instruction performs an
address translation. Any load/store instruction, whether Armv8 or SVE,
except for non-fault accesses, might cause this undefined fault.

The TCR_ELx.NFD1 bit is used by the kernel when CONFIG_RANDOMIZE_BASE
is enabled to mitigate timing attacks against KASLR, where the kernel
address space could otherwise be probed using the FFR and suppressed
faults on SVE loads.

Since this erratum causes spurious exceptions, which may corrupt
the exception registers, we clear the TCR_ELx.NFDx bits when
booting on an affected CPU.

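As an illustration, the shape of the fix in __cpu_setup() is a MIDR
match followed by clearing the NFD bits from the TCR value about to be
programmed. The sketch below is illustrative only: it assumes x0 holds
the TCR value, and the MIDR/TCR symbol names are assumptions in the
kernel's usual naming style.

	mrs	x9, midr_el1			// this CPU's identification
	mov_q	x10, MIDR_FUJITSU_ERRATUM_010001_MASK	// assumed mask symbol
	and	x9, x9, x10
	mov_q	x10, MIDR_FUJITSU_ERRATUM_010001	// assumed value symbol
	cmp	x9, x10
	b.ne	1f				// not an affected A64FX
	mov_q	x10, TCR_CLEAR_FUJITSU_ERRATUM_010001	// TCR_NFD1 | TCR_NFD0
	bic	x0, x0, x10			// clear the NFDx bits
1:
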
Signed-off-by: Zhang Lei &lt;zhang.lei@jp.fujitsu.com&gt;
[Generated MIDR value/mask for __cpu_setup(), removed spurious-fault handler
 and always disabled the NFDx bits on affected CPUs]
Signed-off-by: James Morse &lt;james.morse@arm.com&gt;
Tested-by: zhang.lei &lt;zhang.lei@jp.fujitsu.com&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: Rename get_thread_info()</title>
<updated>2019-02-26T16:57:59+00:00</updated>
<author>
<name>Julien Thierry</name>
<email>julien.thierry@arm.com</email>
</author>
<published>2019-02-22T09:32:50+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=4caf8758b60b6f7f9773fd1d265cb5a7cf935c97'/>
<id>4caf8758b60b6f7f9773fd1d265cb5a7cf935c97</id>
<content type='text'>
The assembly macro get_thread_info() actually returns a task_struct and is
analogous to the current/get_current macro/function.

While it could be argued that thread_info sits at the start of
task_struct and the intention could have been to return a thread_info,
instances of loads from/stores to the address obtained from
get_thread_info() use offsets that are generated with
offsetof(struct task_struct, [...]).

Rename get_thread_info() to state it returns a task_struct.

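For reference, the macro body is a single system-register read, since
arm64 keeps the current task_struct pointer in sp_el0 while running in
the kernel; only the name changes, e.g.:

	/*
	 * Return the current task_struct.
	 */
	.macro	get_current_task, rd
	mrs	\rd, sp_el0
	.endm
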
Acked-by: Mark Rutland &lt;mark.rutland@arm.com&gt;
Signed-off-by: Julien Thierry &lt;julien.thierry@arm.com&gt;
Cc: Will Deacon &lt;will.deacon@arm.com&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: Remove unused daif related functions/macros</title>
<updated>2019-02-06T10:05:17+00:00</updated>
<author>
<name>Julien Thierry</name>
<email>julien.thierry@arm.com</email>
</author>
<published>2019-01-31T14:58:40+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=a82785a953e03444fe38616aed4d27b01da79a97'/>
<id>a82785a953e03444fe38616aed4d27b01da79a97</id>
<content type='text'>
There are some helpers to modify the PSR.[DAIF] bits that are not
referenced anywhere. The less these bits are exposed outside of the
local_irq_* functions, the better.

Get rid of those unused helpers.

Signed-off-by: Julien Thierry &lt;julien.thierry@arm.com&gt;
Reviewed-by: Mark Rutland &lt;mark.rutland@arm.com&gt;
Acked-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Acked-by: Marc Zyngier &lt;marc.zyngier@arm.com&gt;
Cc: Will Deacon &lt;will.deacon@arm.com&gt;
Cc: James Morse &lt;james.morse@arm.com&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: preempt: Fix big-endian when checking preempt count in assembly</title>
<updated>2018-12-11T20:07:03+00:00</updated>
<author>
<name>Will Deacon</name>
<email>will.deacon@arm.com</email>
</author>
<published>2018-12-11T13:41:32+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=7faa313f05cad184e8b17750f0cbe5216ac6debb'/>
<id>7faa313f05cad184e8b17750f0cbe5216ac6debb</id>
<content type='text'>
Commit 396244692232 ("arm64: preempt: Provide our own implementation of
asm/preempt.h") extended the preempt count field in struct thread_info
to 64 bits, so that it consists of a 32-bit count plus a 32-bit flag
indicating whether or not the current task needs rescheduling.

Whilst the asm-offsets definition of TSK_TI_PREEMPT was updated to point
to this new field, the assembly usage was left untouched meaning that a
32-bit load from TSK_TI_PREEMPT on a big-endian machine actually returns
the reschedule flag instead of the count.

Whilst we could fix this by pointing TSK_TI_PREEMPT at the count field,
we're actually better off reworking the two assembly users so that they
operate on the whole 64-bit value, rather than inspecting the thread
flags separately to determine whether a reschedule is needed.

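A sketch of the reworked check (register choice illustrative): a single
64-bit load sees both halves regardless of endianness, and we only
preempt when the combined value is zero, i.e. the count is zero and the
inverted need-resched half is clear:

	ldr	x24, [tsk, #TSK_TI_PREEMPT]	// 64-bit count + flag
	cbnz	x24, 1f				// count != 0 or no resched due
	bl	el1_preempt
1:
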
Acked-by: Ard Biesheuvel &lt;ard.biesheuvel@linaro.org&gt;
Reported-by: "kernelci.org bot" &lt;bot@kernelci.org&gt;
Tested-by: Kevin Hilman &lt;khilman@baylibre.com&gt;
Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: Kconfig: Re-jig CONFIG options for 52-bit VA</title>
<updated>2018-12-10T18:42:18+00:00</updated>
<author>
<name>Will Deacon</name>
<email>will.deacon@arm.com</email>
</author>
<published>2018-12-10T14:15:15+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=68d23da4373aba76f5300017c4746440f276698e'/>
<id>68d23da4373aba76f5300017c4746440f276698e</id>
<content type='text'>
Enabling 52-bit VAs for userspace is pretty confusing, since it requires
you to select "48-bit" virtual addressing in the Kconfig.

Rework the logic so that 52-bit user virtual addressing is advertised in
the "Virtual address space size" choice, along with some help text to
describe its interaction with Pointer Authentication. The EXPERT-only
option to force all user mappings to the 52-bit range is then made
available immediately below the VA size selection.

Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: mm: introduce 52-bit userspace support</title>
<updated>2018-12-10T18:42:17+00:00</updated>
<author>
<name>Steve Capper</name>
<email>steve.capper@arm.com</email>
</author>
<published>2018-12-06T22:50:41+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=67e7fdfcc6824a4f768d76d89377b33baad58fad'/>
<id>67e7fdfcc6824a4f768d76d89377b33baad58fad</id>
<content type='text'>
On arm64 there is optional support for a 52-bit virtual address space.
To exploit this, one has to be running with a 64KB page size on
hardware that supports it.

For an arm64 kernel supporting a 48-bit VA with a 64KB page size,
some changes are needed to support a 52-bit userspace:
 * TCR_EL1.T0SZ needs to be 12 instead of 16,
 * TASK_SIZE needs to reflect the new size.

This patch implements the above when the support for 52-bit VAs is
detected at early boot time.

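As a worked example of the T0SZ values above, the TnSZ fields encode a
region size of 64 minus the number of VA bits, along the lines of the
kernel's existing helper (a sketch, not the verbatim header):

#define TCR_T0SZ_OFFSET	0
#define TCR_T0SZ(x)	((UL(64) - (x)) &lt;&lt; TCR_T0SZ_OFFSET)

/* 64 - 48 = 16 for a 48-bit space, 64 - 52 = 12 for a 52-bit one. */
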
On arm64, userspace address translation is controlled by TTBR0_EL1. As
well as userspace, TTBR0_EL1 controls:
 * The identity mapping,
 * EFI runtime code.

It is possible to run a kernel with an identity mapping that has a
larger VA size than userspace (and for this case __cpu_set_tcr_t0sz()
would set TCR_EL1.T0SZ as appropriate). However, when the conditions for
52-bit userspace are met, it is possible to keep TCR_EL1.T0SZ fixed at
12. Thus in this patch, the TCR_EL1.T0SZ size changing logic is
disabled.

Reviewed-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Signed-off-by: Steve Capper &lt;steve.capper@arm.com&gt;
Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: mm: Offset TTBR1 to allow 52-bit PTRS_PER_PGD</title>
<updated>2018-12-10T18:42:17+00:00</updated>
<author>
<name>Steve Capper</name>
<email>steve.capper@arm.com</email>
</author>
<published>2018-12-06T22:50:39+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=e842dfb5a2d3b4c43766508ef89a4eb67471d53a'/>
<id>e842dfb5a2d3b4c43766508ef89a4eb67471d53a</id>
<content type='text'>
Enabling 52-bit VAs on arm64 requires that the PGD table expands from 64
entries (for the 48-bit case) to 1024 entries. This quantity,
PTRS_PER_PGD, is used as follows to compute which PGD entry corresponds
to a given virtual address, addr:

pgd_index(addr) -&gt; (addr &gt;&gt; PGDIR_SHIFT) &amp; (PTRS_PER_PGD - 1)

Userspace addresses are prefixed by 0's, so for a 48-bit userspace
address, uva, the following is true:
(uva &gt;&gt; PGDIR_SHIFT) &amp; (1024 - 1) == (uva &gt;&gt; PGDIR_SHIFT) &amp; (64 - 1)

In other words, a 48-bit userspace address will have the same pgd_index
when using PTRS_PER_PGD = 64 and 1024.

Kernel addresses are prefixed by 1's, so given a 48-bit kernel address,
kva, we have the following inequality:
(kva &gt;&gt; PGDIR_SHIFT) &amp; (1024 - 1) != (kva &gt;&gt; PGDIR_SHIFT) &amp; (64 - 1)

In other words a 48-bit kernel virtual address will have a different
pgd_index when using PTRS_PER_PGD = 64 and 1024.

If, however, we note that:
kva = (0xFFFF &lt;&lt; 48) + lower (where lower[63:48] == 0)
and PGDIR_SHIFT = 42 (as we are dealing with a 64KB PAGE_SIZE)

We can consider:
(kva &gt;&gt; PGDIR_SHIFT) &amp; (1024 - 1) - (kva &gt;&gt; PGDIR_SHIFT) &amp; (64 - 1)
 = (0xFFFF &lt;&lt; 6) &amp; 0x3FF - (0xFFFF &lt;&lt; 6) &amp; 0x3F	// "lower" cancels out
 = 0x3C0

In other words, one can switch PTRS_PER_PGD to the 52-bit value globally
provided that they increment ttbr1_el1 by 0x3C0 * 8 = 0x1E00 bytes when
running with 48-bit kernel VAs (TCR_EL1.T1SZ = 16).

For kernel configurations where 52-bit userspace VAs are possible, this
patch offsets ttbr1_el1 and sets PTRS_PER_PGD corresponding to the
52-bit value.

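A sketch of an offsetting helper along these lines; the constant is the
0x3C0 * 8 = 0x1E00 derived above, and the config/symbol names are
illustrative (the table's alignment leaves the low baddr bits clear, so
ORR performs the addition):

	.macro	offset_ttbr1, ttbr
#ifdef CONFIG_ARM64_USER_VA_BITS_52
	orr	\ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET	// add 0x1E00
#endif
	.endm
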
Reviewed-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Reviewed-by: Suzuki K Poulose &lt;suzuki.poulose@arm.com&gt;
Suggested-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Signed-off-by: Steve Capper &lt;steve.capper@arm.com&gt;
[will: added comment to TTBR1_BADDR_4852_OFFSET calculation]
Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: Fix minor issues with the dcache_by_line_op macro</title>
<updated>2018-12-10T15:03:51+00:00</updated>
<author>
<name>Will Deacon</name>
<email>will.deacon@arm.com</email>
</author>
<published>2018-12-10T13:39:48+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=33309ecda0070506c49182530abe7728850ebe78'/>
<id>33309ecda0070506c49182530abe7728850ebe78</id>
<content type='text'>
The dcache_by_line_op macro suffers from a couple of small problems:

First, the GAS directives that are currently being used rely on
assembler behavior that is not documented, and probably not guaranteed
to produce the correct behavior going forward. As a result, we end up
with some undefined symbols in cache.o:

$ nm arch/arm64/mm/cache.o
         ...
         U civac
         ...
         U cvac
         U cvap
         U cvau

This is due to the fact that the comparisons used to select the
operation type in the dcache_by_line_op macro are comparing symbols,
not strings, and even though it seems that GAS is doing the right
thing here (undefined symbols by the same name are equal to each
other), it seems unwise to rely on this.

Second, when patching in a DC CVAP instruction on CPUs that support it,
the fallback path consists of a DC CVAU instruction which may be
affected by CPU errata that require ARM64_WORKAROUND_CLEAN_CACHE.

Solve these issues by unrolling the various maintenance routines and
using the conditional directives that are documented as operating on
strings. To avoid the complexity of nested alternatives, we move the
DC CVAP patching to __clean_dcache_area_pop, falling back to a branch
to __clean_dcache_area_poc if DCPOP is not supported by the CPU.

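For reference, GAS documents .ifc as comparing strings, so the unrolled
form can select each operation explicitly; a trimmed, illustrative
sketch of one arm:

	.ifc	\op, cvap			// string comparison, per the GAS manual
	sys	3, c7, c12, 1, \kaddr		// dc cvap, encoded directly
	.else
	dc	\op, \kaddr
	.endif
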
Reported-by: Ard Biesheuvel &lt;ard.biesheuvel@linaro.org&gt;
Suggested-by: Robin Murphy &lt;robin.murphy@arm.com&gt;
Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: add EXPORT_SYMBOL_NOKASAN()</title>
<updated>2018-12-10T11:50:11+00:00</updated>
<author>
<name>Mark Rutland</name>
<email>mark.rutland@arm.com</email>
</author>
<published>2018-12-07T18:08:16+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=386b3c7bdafcc67aaf4168e9627b381370b6538a'/>
<id>386b3c7bdafcc67aaf4168e9627b381370b6538a</id>
<content type='text'>
So that we can export symbols directly from assembly files, let's make
use of the generic &lt;asm/export.h&gt;. We have a few symbols that we'll want
to conditionally export for !KASAN kernel builds, so we add a helper for
that in &lt;asm/assembler.h&gt;.

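A sketch of such a helper, assuming it simply falls back to the generic
EXPORT_SYMBOL() from &lt;asm/export.h&gt; whenever KASAN is disabled:

#ifdef CONFIG_KASAN
#define EXPORT_SYMBOL_NOKASAN(name)
#else
#define EXPORT_SYMBOL_NOKASAN(name)	EXPORT_SYMBOL(name)
#endif
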
Signed-off-by: Mark Rutland &lt;mark.rutland@arm.com&gt;
Cc: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Cc: Will Deacon &lt;will.deacon@arm.com&gt;
Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: Add support for SB barrier and patch in over DSB; ISB sequences</title>
<updated>2018-12-06T16:47:04+00:00</updated>
<author>
<name>Will Deacon</name>
<email>will.deacon@arm.com</email>
</author>
<published>2018-06-14T10:21:34+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=bd4fb6d270bc423a9a4098108784f7f9254c4e6d'/>
<id>bd4fb6d270bc423a9a4098108784f7f9254c4e6d</id>
<content type='text'>
We currently use a DSB; ISB sequence to inhibit speculation in set_fs().
Whilst this works for current CPUs, future CPUs may implement a new SB
barrier instruction which acts as an architected speculation barrier.

On CPUs that support it, patch in an SB; NOP sequence over the DSB; ISB
sequence and advertise the presence of the new instruction to userspace.

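A sketch of the patched macro, using the kernel's alternatives
framework; the raw encoding stands in for assemblers that do not yet
know the SB mnemonic (details illustrative):

	.macro	sb
alternative_if_not ARM64_HAS_SB
	dsb	nsh
	isb
alternative_else
	.inst	0xd50330ff			// sb (speculation barrier)
	nop					// keep both sequences the same size
alternative_endif
	.endm
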
Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;
</content>
</entry>
</feed>
