<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux-stable.git/arch/arm64/kernel/entry.S, branch linux-4.3.y</title>
<subtitle>Linux kernel stable tree</subtitle>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/'/>
<entry>
<title>arm64: entry: always restore x0 from the stack on syscall return</title>
<updated>2015-08-21T14:11:43+00:00</updated>
<author>
<name>Will Deacon</name>
<email>will.deacon@arm.com</email>
</author>
<published>2015-08-19T14:57:09+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=412fcb6cebd758d080cacd5a41a0cbc656ea5fce'/>
<id>412fcb6cebd758d080cacd5a41a0cbc656ea5fce</id>
<content type='text'>
We have a micro-optimisation on the fast syscall return path where we
take care to keep x0 live with the return value from the syscall so that
we can avoid restoring it from the stack. The benefit of doing this is
fairly suspect, since we will be restoring x1 from the stack anyway
(which lives adjacent in the pt_regs structure) and the only additional
cost is saving x0 back to pt_regs after the syscall handler, which could
be seen as a poor man's prefetch.

More importantly, this causes issues with the context tracking code.

The ct_user_enter macro ends up branching into C code, which is free to
use x0 as a scratch register and consequently leads to us returning junk
back to userspace as the syscall return value. Rather than special case
the context-tracking code, this patch removes the questionable
optimisation entirely.

Cc: &lt;stable@vger.kernel.org&gt;
Cc: Larry Bassel &lt;larry.bassel@linaro.org&gt;
Cc: Kevin Hilman &lt;khilman@linaro.org&gt;
Reviewed-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Reported-by: Hanjun Guo &lt;hanjun.guo@linaro.org&gt;
Tested-by: Hanjun Guo &lt;hanjun.guo@linaro.org&gt;
Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: kernel: Adopt new alternative assembler macros</title>
<updated>2015-07-27T10:08:40+00:00</updated>
<author>
<name>Daniel Thompson</name>
<email>daniel.thompson@linaro.org</email>
</author>
<published>2015-07-22T11:21:03+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=e28cabf12304717b1054d0a02f0850f91e8a2074'/>
<id>e28cabf12304717b1054d0a02f0850f91e8a2074</id>
<content type='text'>
Convert the dynamic patching for ARM64_WORKAROUND_845719 over to
the newly added alternative assembler macros.

Signed-off-by: Daniel Thompson &lt;daniel.thompson@linaro.org&gt;
Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: switch_to: calculate cpu context pointer using separate register</title>
<updated>2015-07-22T09:56:41+00:00</updated>
<author>
<name>Will Deacon</name>
<email>will.deacon@arm.com</email>
</author>
<published>2015-07-20T14:14:53+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=c0d3fce5e192c6b45a9d8e06aecfcec546f73884'/>
<id>c0d3fce5e192c6b45a9d8e06aecfcec546f73884</id>
<content type='text'>
Commit 0c8c0f03e3a2 ("x86/fpu, sched: Dynamically allocate 'struct fpu'")
moved the thread_struct to the bottom of task_struct. As a result, the
offset is now too large to be encoded in the 12-bit immediate of an add
instruction on arm64 with some kernel configs:

arch/arm64/kernel/entry.S: Assembler messages:
arch/arm64/kernel/entry.S:588: Error: immediate out of range
arch/arm64/kernel/entry.S:597: Error: immediate out of range

This patch calculates the offset using an additional register instead of
an immediate offset.

Fixes: 0c8c0f03e3a2 ("x86/fpu, sched: Dynamically allocate 'struct fpu'")
Cc: Dave Hansen &lt;dave.hansen@linux.intel.com&gt;
Cc: Olof Johansson &lt;olof@lixom.net&gt;
Cc: Ingo Molnar &lt;mingo@kernel.org&gt;
Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;
Tested-by: Guenter Roeck &lt;linux@roeck-us.net&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: entry: handle debug exceptions in el*_inv</title>
<updated>2015-07-08T17:03:48+00:00</updated>
<author>
<name>Mark Rutland</name>
<email>mark.rutland@arm.com</email>
</author>
<published>2015-07-07T17:00:49+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=1b42804d27b1c2623309950e9b203b11f4c67f4f'/>
<id>1b42804d27b1c2623309950e9b203b11f4c67f4f</id>
<content type='text'>
Currently we enable debug exceptions before reading ESR_EL1 in both
el0_inv and el1_inv. If a debug exception is taken before we read
ESR_EL1, the value will have been corrupted.

As el*_inv is typically fatal, an intervening debug exception results in
misleading debug information being logged to the console, but is not
otherwise harmful.

As with the other entry paths, we can use the ESR_EL1 value stashed
earlier in the exception entry (in x25 for el0_sync{,_compat}, and x1
for el1_sync), giving us better error reporting in this case.

Signed-off-by: Mark Rutland &lt;mark.rutland@arm.com&gt;
Acked-by: Will Deacon &lt;will.deacon@arm.com&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: entry: fix context tracking for el0_sp_pc</title>
<updated>2015-06-17T10:53:19+00:00</updated>
<author>
<name>Mark Rutland</name>
<email>mark.rutland@arm.com</email>
</author>
<published>2015-06-15T15:40:27+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=46b0567c851cf85d6ba6f23eef385ec9111d09bc'/>
<id>46b0567c851cf85d6ba6f23eef385ec9111d09bc</id>
<content type='text'>
Commit 6c81fe7925cc4c42 ("arm64: enable context tracking") did not
update el0_sp_pc to use ct_user_exit, but this appears to have been
unintentional. In commit 6ab6463aeb5fbc75 ("arm64: adjust el0_sync so
that a function can be called") we made x0 available, and in the return
to userspace we call ct_user_enter in the kernel_exit macro.

Due to this, we currently don't correctly inform RCU of the user-&gt;kernel
transition, and may erroneously account for time spent in the kernel as
if we were in an extended quiescent state when CONFIG_CONTEXT_TRACKING
is enabled.

As we do record the kernel-&gt;user transition, a userspace application
making accesses from an unaligned stack pointer can demonstrate the
imbalance, provoking the following warning:

------------[ cut here ]------------
WARNING: CPU: 2 PID: 3660 at kernel/context_tracking.c:75 context_tracking_enter+0xd8/0xe4()
Modules linked in:
CPU: 2 PID: 3660 Comm: a.out Not tainted 4.1.0-rc7+ #8
Hardware name: ARM Juno development board (r0) (DT)
Call trace:
[&lt;ffffffc000089914&gt;] dump_backtrace+0x0/0x124
[&lt;ffffffc000089a48&gt;] show_stack+0x10/0x1c
[&lt;ffffffc0005b3cbc&gt;] dump_stack+0x84/0xc8
[&lt;ffffffc0000b3214&gt;] warn_slowpath_common+0x98/0xd0
[&lt;ffffffc0000b330c&gt;] warn_slowpath_null+0x14/0x20
[&lt;ffffffc00013ada4&gt;] context_tracking_enter+0xd4/0xe4
[&lt;ffffffc0005b534c&gt;] preempt_schedule_irq+0xd4/0x114
[&lt;ffffffc00008561c&gt;] el1_preempt+0x4/0x28
[&lt;ffffffc0001b8040&gt;] exit_files+0x38/0x4c
[&lt;ffffffc0000b5b94&gt;] do_exit+0x430/0x978
[&lt;ffffffc0000b614c&gt;] do_group_exit+0x40/0xd4
[&lt;ffffffc0000c0208&gt;] get_signal+0x23c/0x4f4
[&lt;ffffffc0000890b4&gt;] do_signal+0x1ac/0x518
[&lt;ffffffc000089650&gt;] do_notify_resume+0x5c/0x68
---[ end trace 963c192600337066 ]---

This patch adds the missing ct_user_exit to the el0_sp_pc entry path,
correcting the context tracking for this case.
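
As an aside, the requirement here is simply that the user-&gt;kernel and
kernel-&gt;user notifications stay paired. A toy model of that pairing
(purely illustrative; the kernel's context_tracking checks are more
involved):

#include &lt;assert.h&gt;

/* Toy model only: a stand-in for the context-tracking state machine. */
enum ctx_state { IN_USER, IN_KERNEL };
static enum ctx_state state = IN_USER;

static void user_exit(void)   /* user-&gt;kernel, cf. ct_user_exit  */
{
	assert(state == IN_USER);
	state = IN_KERNEL;
}

static void user_enter(void)  /* kernel-&gt;user, cf. ct_user_enter */
{
	assert(state == IN_KERNEL);
	state = IN_USER;
}

int main(void)
{
	user_exit();    /* el0_sp_pc now does this first ...             */
	user_enter();   /* ... so the paired enter on return is balanced */

	/* Without the ct_user_exit call, the next user_enter() finds the
	 * state still IN_USER and the assert fires, mirroring the warning
	 * above. */
	user_enter();
	return 0;
}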

Signed-off-by: Mark Rutland &lt;mark.rutland@arm.com&gt;
Acked-by: Will Deacon &lt;will.deacon@arm.com&gt;
Fixes: 6c81fe7925cc ("arm64: enable context tracking")
Cc: &lt;stable@vger.kernel.org&gt; # v3.17+
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: fix missing syscall trace exit</title>
<updated>2015-06-08T17:34:21+00:00</updated>
<author>
<name>Josh Stone</name>
<email>jistone@redhat.com</email>
</author>
<published>2015-06-05T21:28:03+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=04d7e098f541769721d7511d56aea4b976fd29fd'/>
<id>04d7e098f541769721d7511d56aea4b976fd29fd</id>
<content type='text'>
If a syscall is entered without TIF_SYSCALL_TRACE set, then it goes on
the fast path.  It's then possible to have TIF_SYSCALL_TRACE added in
the middle of the syscall, but ret_fast_syscall doesn't check this flag
again.  This causes a ptrace syscall-exit-stop to be missed.

For instance, from a PTRACE_EVENT_FORK reported during do_fork, the
tracer might resume with PTRACE_SYSCALL, setting TIF_SYSCALL_TRACE.
Now the completion of the fork should have a syscall-exit-stop.

Russell King fixed this on arm by re-checking _TIF_SYSCALL_WORK in the
fast exit path.  Do the same on arm64.
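
Roughly, the logic being added corresponds to the following C-level
sketch (stand-in flag definitions; the real change is in entry.S
assembly):

#include &lt;stdio.h&gt;

/* Stand-in flag bits; the real masks come from the thread_info flags. */
#define TIF_SYSCALL_TRACE (1u &lt;&lt; 0)
#define _TIF_SYSCALL_WORK TIF_SYSCALL_TRACE

static volatile unsigned thread_flags;  /* may change while the syscall runs */

static void ret_fast_syscall(void)
{
	/* Re-check the flags on the fast return path: a tracer may have set
	 * TIF_SYSCALL_TRACE after syscall entry, in which case the exit must
	 * be reported as a syscall-exit-stop. */
	if (thread_flags &amp; _TIF_SYSCALL_WORK)
		printf("slow path: trace the syscall exit\n");
	else
		printf("fast path: return straight to userspace\n");
}

int main(void)
{
	thread_flags = 0;                   /* syscall entered untraced           */
	thread_flags |= TIF_SYSCALL_TRACE;  /* tracer resumed with PTRACE_SYSCALL */
	ret_fast_syscall();                 /* must now take the slow path        */
	return 0;
}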

Reviewed-by: Will Deacon &lt;will.deacon@arm.com&gt;
Cc: Russell King &lt;rmk+kernel@arm.linux.org.uk&gt;
Signed-off-by: Josh Stone &lt;jistone@redhat.com&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: alternative: Merge alternative-asm.h into alternative.h</title>
<updated>2015-06-05T09:38:53+00:00</updated>
<author>
<name>Marc Zyngier</name>
<email>marc.zyngier@arm.com</email>
</author>
<published>2015-06-01T09:47:41+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=8d883b23aed73cad844ba48051c7e96eddf0f51c'/>
<id>8d883b23aed73cad844ba48051c7e96eddf0f51c</id>
<content type='text'>
asm/alternative-asm.h and asm/alternative.h are extremely similar,
and really deserve to live in the same file (as this makes further
modifications a bit easier).

Fold the content of alternative-asm.h into alternative.h, and
update the few users.

Acked-by: Will Deacon &lt;will.deacon@arm.com&gt;
Signed-off-by: Marc Zyngier &lt;marc.zyngier@arm.com&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: Rework alternate sequence for ARM erratum 845719</title>
<updated>2015-06-05T09:38:52+00:00</updated>
<author>
<name>Marc Zyngier</name>
<email>marc.zyngier@arm.com</email>
</author>
<published>2015-06-03T13:36:23+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=b0dd9c02d476162340ad60fc96befa817fa8fe9f'/>
<id>b0dd9c02d476162340ad60fc96befa817fa8fe9f</id>
<content type='text'>
The workaround for erratum 845719 currently uses
a branch between two alternate sequences, which is
quite fragile and which we are going to break as we
rework the alternative code.

This patch reworks the workaround to fit in a single
alternative sequence. The generated code itself is
unchanged.

Signed-off-by: Marc Zyngier &lt;marc.zyngier@arm.com&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: errata: add workaround for cortex-a53 erratum #845719</title>
<updated>2015-04-01T09:24:31+00:00</updated>
<author>
<name>Will Deacon</name>
<email>will.deacon@arm.com</email>
</author>
<published>2015-03-23T19:07:02+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=905e8c5dcaa147163672b06fe9dcb5abaacbc711'/>
<id>905e8c5dcaa147163672b06fe9dcb5abaacbc711</id>
<content type='text'>
When running a compat (AArch32) userspace on Cortex-A53, a load at EL0
from a virtual address that matches the bottom 32 bits of the virtual
address used by a recent load at (AArch64) EL1 might return incorrect
data.

This patch works around the issue by writing to the contextidr_el1
register on the exception return path when returning to a 32-bit task.
This workaround is patched in at runtime based on the MIDR value of the
processor.

Reviewed-by: Marc Zyngier &lt;marc.zyngier@arm.com&gt;
Tested-by: Mark Rutland &lt;mark.rutland@arm.com&gt;
Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: Implement the compat_sys_call_table in C</title>
<updated>2015-01-27T09:38:07+00:00</updated>
<author>
<name>Catalin Marinas</name>
<email>catalin.marinas@arm.com</email>
</author>
<published>2015-01-06T16:42:32+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=0156411b1828771c09c92f034e7b81f702b84f07'/>
<id>0156411b1828771c09c92f034e7b81f702b84f07</id>
<content type='text'>
Unlike the sys_call_table[], the compat one was implemented in sys32.S,
making it impossible to notice discrepancies between the number of
compat syscalls and the __NR_compat_syscalls macro. The latter has to
be defined in asm/unistd.h, as including asm/unistd32.h would cause
conflicts on __NR_* definitions. With this patch, incorrect
__NR_compat_syscalls values will result in a build-time error.
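
The build-time check comes for free from C initializer rules, roughly
along the following lines (a stand-alone illustrative sketch, not the
kernel source):

/* A designated initializer whose index is out of range for the array is
 * a hard compile error, so a table sized by __NR_compat_syscalls can no
 * longer silently disagree with the entries assigned into it. */
#define NR_SYSCALLS 3                     /* stand-in for __NR_compat_syscalls */

static void *sys_ni(void) { return 0; }   /* stand-in "not implemented" entry */

static void *(*const call_table[NR_SYSCALLS])(void) = {
	[0 ... NR_SYSCALLS - 1] = sys_ni,  /* default every slot (GNU extension) */
	[2] = sys_ni,                      /* fine: index 2 is within the array  */
	/* [3] = sys_ni, */                /* error: index exceeds array bounds  */
};

int main(void) { return call_table[0] == sys_ni ? 0 : 1; }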

Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Suggested-by: Mark Rutland &lt;mark.rutland@arm.com&gt;
Acked-by: Mark Rutland &lt;mark.rutland@arm.com&gt;
</content>
</entry>
</feed>
