<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux.git/arch/arm64/kernel/head.S, branch v4.19</title>
<subtitle>Linux kernel source tree</subtitle>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/'/>
<entry>
<title>arm64/kvm: Prohibit guest LOR accesses</title>
<updated>2018-02-26T09:48:01+00:00</updated>
<author>
<name>Mark Rutland</name>
<email>mark.rutland@arm.com</email>
</author>
<published>2018-02-13T13:39:23+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=cc33c4e20185a391766ed5e78e2acc97e17ba511'/>
<id>cc33c4e20185a391766ed5e78e2acc97e17ba511</id>
<content type='text'>
We don't currently limit guest accesses to the LOR registers, which we
neither virtualize nor context-switch. As such, guests are provided with
unusable information/controls, and are not isolated from each other (or
the host).

To prevent these issues, we can trap register accesses and present the
illusion that LORegions are unsupported by the CPU. To do this, we mask
ID_AA64MMFR1.LO, and set HCR_EL2.TLOR to trap accesses to the following
registers:

* LORC_EL1
* LOREA_EL1
* LORID_EL1
* LORN_EL1
* LORSA_EL1

... when trapped, we inject an UNDEFINED exception to EL1, simulating
their non-existence.

As noted in D7.2.67, when no LORegions are implemented, LoadLOAcquire
and StoreLORelease must behave as LoadAcquire and StoreRelease
respectively. We can ensure this by clearing LORC_EL1.EN when a CPU's
EL2 is first initialized, as the host kernel will not modify this.
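
As a sketch of the trap handler's shape (simplified here; this is an
illustration rather than a transcription of the patch), every trapped
LOR access funnels into a handler that injects the undef:

	static bool trap_undef(struct kvm_vcpu *vcpu,
			       struct sys_reg_params *p,
			       const struct sys_reg_desc *r)
	{
		kvm_inject_undefined(vcpu);
		return false;	/* take the undef; don't emulate or skip */
	}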

Signed-off-by: Mark Rutland &lt;mark.rutland@arm.com&gt;
Cc: Vladimir Murzin &lt;vladimir.murzin@arm.com&gt;
Cc: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Cc: Christoffer Dall &lt;christoffer.dall@linaro.org&gt;
Cc: Marc Zyngier &lt;marc.zyngier@arm.com&gt;
Cc: Will Deacon &lt;will.deacon@arm.com&gt;
Cc: kvmarm@lists.cs.columbia.edu
Signed-off-by: Christoffer Dall &lt;christoffer.dall@linaro.org&gt;
</content>
</entry>
<entry>
<title>arm64: idmap: Use "awx" flags for .idmap.text .pushsection directives</title>
<updated>2018-02-06T22:53:27+00:00</updated>
<author>
<name>Will Deacon</name>
<email>will.deacon@arm.com</email>
</author>
<published>2018-01-29T12:00:00+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=439e70e27a51fe374806f460848066b237b0c10b'/>
<id>439e70e27a51fe374806f460848066b237b0c10b</id>
<content type='text'>
The identity map is mapped as both writeable and executable by the
SWAPPER_MM_MMUFLAGS and this is relied upon by the kpti code to manage
a synchronisation flag. Update the .pushsection flags to reflect the
actual mapping attributes.
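
Concretely, the flags string is the whole change (illustrative form):

	.pushsection	".idmap.text", "awx"	// was "ax": add "w", since
						// the mapping is writeable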

Reported-by: Marc Zyngier &lt;marc.zyngier@arm.com&gt;
Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: assembler: Align phys_to_pte with pte_to_phys</title>
<updated>2018-02-06T22:53:25+00:00</updated>
<author>
<name>Will Deacon</name>
<email>will.deacon@arm.com</email>
</author>
<published>2018-01-29T11:59:59+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=79ddab3b05ca903f552fd5bf920efa055210322b'/>
<id>79ddab3b05ca903f552fd5bf920efa055210322b</id>
<content type='text'>
pte_to_phys lives in assembler.h and takes its destination register as
the first argument. Move phys_to_pte out of head.S to sit with its
counterpart and rejig it to follow the same calling convention.
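
Illustrative use (register choices arbitrary), with both macros now
taking the destination register first:

	pte_to_phys	x5, x6		// x5 := physical address from pte in x6
	phys_to_pte	x6, x5		// x6 := pte encoding of the PA in x5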

Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: assembler: Change order of macro arguments in phys_to_ttbr</title>
<updated>2018-02-06T22:53:21+00:00</updated>
<author>
<name>Will Deacon</name>
<email>will.deacon@arm.com</email>
</author>
<published>2018-01-29T11:59:57+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=fa0465fc07c2f9f47bd1198ab368d341bd7c7e4e'/>
<id>fa0465fc07c2f9f47bd1198ab368d341bd7c7e4e</id>
<content type='text'>
Since AArch64 assembly instructions take the destination register as
their first operand, do the same thing for the phys_to_ttbr macro.
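
Illustrative use after the change (registers arbitrary):

	phys_to_ttbr	x1, x0		// destination first, as in "mov x1, x0"
	msr		ttbr1_el1, x1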

Acked-by: Robin Murphy &lt;robin.murphy@arm.com&gt;
Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: Add software workaround for Falkor erratum 1041</title>
<updated>2018-02-06T22:53:13+00:00</updated>
<author>
<name>Shanker Donthineni</name>
<email>shankerd@codeaurora.org</email>
</author>
<published>2018-01-29T11:59:52+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=3060e9f0d14b46eb1949aaf9a31519ef75e59fda'/>
<id>3060e9f0d14b46eb1949aaf9a31519ef75e59fda</id>
<content type='text'>
The ARM architecture defines the memory locations that are permitted
to be accessed as the result of a speculative instruction fetch from
an exception level for which all stages of translation are disabled.
Specifically, the core is permitted to speculatively fetch from the
4KB region containing the current program counter and from the next
4KB region.

When translation is changed from enabled to disabled for the running
exception level (SCTLR_ELn[M] changed from a value of 1 to 0), the
Falkor core may errantly speculatively access memory locations outside
of the 4KB region permitted by the architecture. The errant memory
access may lead to one of the following unexpected behaviors.

1) A System Error Interrupt (SEI) being raised by the Falkor core due
   to the errant memory access attempting to access a region of memory
   that is protected by a slave-side memory protection unit.
2) Unpredictable device behavior due to a speculative read from device
   memory. This behavior may only occur if the instruction cache is
   disabled prior to or coincident with translation being changed from
   enabled to disabled.

The conditions leading to this erratum will not occur when either of the
following occurs:
 1) A higher exception level disables translation of a lower exception
    level (e.g. EL2 changing SCTLR_EL1[M] from a value of 1 to 0).
 2) An exception level disables its stage-1 translation while its stage-2
    translation is enabled (e.g. EL1 changing SCTLR_EL1[M] from a value
    of 1 to 0 when HCR_EL2[VM] has a value of 1).

To avoid the errant behavior, software must execute an ISB immediately
prior to executing the MSR that will change SCTLR_ELn[M] from 1 to 0.
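
In essence (mainline wraps this in a small assembler macro; the ISB
placement shown here is the entire workaround):

	isb				// Falkor E1041: fence speculative fetch
	msr	sctlr_el1, x0		// x0 has SCTLR_EL1[M] cleared: MMU off
	isb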

Signed-off-by: Shanker Donthineni &lt;shankerd@codeaurora.org&gt;
Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: sysreg: Move to use definitions for all the SCTLR bits</title>
<updated>2018-01-16T15:05:39+00:00</updated>
<author>
<name>James Morse</name>
<email>james.morse@arm.com</email>
</author>
<published>2018-01-15T19:38:55+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=7a00d68ebe5f07cb1db17e7fedfd031f0d87e8bb'/>
<id>7a00d68ebe5f07cb1db17e7fedfd031f0d87e8bb</id>
<content type='text'>
__cpu_setup() configures SCTLR_EL1 using some hard-coded hex masks,
and el2_setup() duplicates some of this when setting the RES1 bits.

Let's make this the same as KVM's hyp_init, which uses named bits.

First, we add definitions for all the SCTLR_EL{1,2} bits, the RES{1,0}
bits, and those we want to set or clear.

Add build_bug checks to ensure all bits are either set or clear; this
means we don't need to preserve an endianness configuration generated
elsewhere.

Finally, move the head.S and proc.S users of these hard-coded masks
over to the macro versions.
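
As a toy illustration of the discipline (a made-up 8-bit register, not
the real SCTLR layout): every bit belongs to exactly one of the
set/clear masks, and a preprocessor check catches anything left
unaccounted for:

	#define CTL_EN		(1u &lt;&lt; 0)
	#define CTL_CACHE	(1u &lt;&lt; 1)
	#define CTL_RES1	(1u &lt;&lt; 2)	/* architecture: write as 1 */
	#define CTL_DBG		(1u &lt;&lt; 3)
	#define CTL_RES0	(0xfu &lt;&lt; 4)	/* architecture: write as 0 */

	#define CTL_SET		(CTL_EN | CTL_CACHE | CTL_RES1)
	#define CTL_CLEAR	(CTL_DBG | CTL_RES0)

	#if (CTL_SET ^ CTL_CLEAR) != 0xff
	#error "Inconsistent CTL set/clear bits"
	#endif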

Signed-off-by: James Morse &lt;james.morse@arm.com&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: Extend early page table code to allow for larger kernels</title>
<updated>2018-01-14T18:49:52+00:00</updated>
<author>
<name>Steve Capper</name>
<email>steve.capper@arm.com</email>
</author>
<published>2018-01-11T10:11:59+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=0370b31e48454d8cf11120664aedd1c51b3004cb'/>
<id>0370b31e48454d8cf11120664aedd1c51b3004cb</id>
<content type='text'>
Currently the early assembler page table code assumes that precisely
1xpgd, 1xpud, 1xpmd are sufficient to represent the early kernel text
mappings.

Unfortunately this is rarely the case when running with a 16KB granule,
and we also run into limits with 4KB granule when building much larger
kernels.

This patch rewrites the early page table logic to compute indices of
mappings for each level of page table, and if multiple indices are
required, the next-level page table is scaled up accordingly.

Also, the required size of the swapper_pg_dir is computed at link time
to cover the mapping [KIMAGE_ADDR + VOFFSET, _end]. When KASLR is
enabled, an extra page is set aside for each level that may require extra
entries at runtime.
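
The flavour of the computation, as a sketch (the macro name here is
assumed for illustration): count how many entries a [vstart, vend)
span needs at a given level; each entry beyond the first demands an
extra next-level table page:

	#define SPAN_NR_ENTRIES(vstart, vend, shift) \
		((((vend) - 1) &gt;&gt; (shift)) - ((vstart) &gt;&gt; (shift)) + 1)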

Tested-by: Ard Biesheuvel &lt;ard.biesheuvel@linaro.org&gt;
Reviewed-by: Ard Biesheuvel &lt;ard.biesheuvel@linaro.org&gt;
Signed-off-by: Steve Capper &lt;steve.capper@arm.com&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: allow ID map to be extended to 52 bits</title>
<updated>2017-12-22T17:37:33+00:00</updated>
<author>
<name>Kristina Martsenko</name>
<email>kristina.martsenko@arm.com</email>
</author>
<published>2017-12-13T17:07:24+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=fa2a8445b1d3810c52f2a6b3a006456bd1aacb7e'/>
<id>fa2a8445b1d3810c52f2a6b3a006456bd1aacb7e</id>
<content type='text'>
Currently, when using VA_BITS &lt; 48, if the ID map text happens to be
placed in physical memory above VA_BITS, we increase the VA size (up to
48) and create a new table level, in order to map in the ID map text.
This is okay because the system always supports 48 bits of VA.

This patch extends the code such that if the system supports 52 bits of
VA, and the ID map text is placed that high up, then we increase the VA
size accordingly, up to 52.

One difference from the current implementation is that so far the
condition of VA_BITS &lt; 48 has meant that the top level table is always
"full", with the maximum number of entries, and an extra table level is
always needed. Now, when VA_BITS = 48 (and using 64k pages), the top
level table is not full, and we simply need to increase the number of
entries in it, instead of creating a new table level.
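
Worked numbers for the 64k-page case make the difference clear:

	VA_BITS 48 = 16 (page offset) + 13 + 13 + 6
	             top level uses 2^6  = 64 of its 8192 slots
	VA_BITS 52 = 16 (page offset) + 13 + 13 + 10
	             top level uses 2^10 = 1024 slots: same table, no new level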

Tested-by: Suzuki K Poulose &lt;suzuki.poulose@arm.com&gt;
Reviewed-by: Suzuki K Poulose &lt;suzuki.poulose@arm.com&gt;
Reviewed-by: Marc Zyngier &lt;marc.zyngier@arm.com&gt;
Tested-by: Bob Picco &lt;bob.picco@oracle.com&gt;
Reviewed-by: Bob Picco &lt;bob.picco@oracle.com&gt;
Signed-off-by: Kristina Martsenko &lt;kristina.martsenko@arm.com&gt;
[catalin.marinas@arm.com: reduce arguments to __create_hyp_mappings()]
[catalin.marinas@arm.com: reworked/renamed __cpu_uses_extended_idmap_level()]
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: handle 52-bit physical addresses in page table entries</title>
<updated>2017-12-22T17:37:18+00:00</updated>
<author>
<name>Kristina Martsenko</name>
<email>kristina.martsenko@arm.com</email>
</author>
<published>2017-12-13T17:07:21+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=75387b92635e7dca410c1ef92cfe510019248f76'/>
<id>75387b92635e7dca410c1ef92cfe510019248f76</id>
<content type='text'>
The top 4 bits of a 52-bit physical address are positioned at bits
12..15 of a page table entry. Introduce macros to convert between a
physical address and its placement in a table entry, and change all
macros/functions that access PTEs to use them.
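
A sketch of the conversion for the 64KB-granule case (macro names are
illustrative; the kernel's PTE_ADDR_* definitions are the authority):

	#define PTE_ADDR_LOW	(((1UL &lt;&lt; (48 - 16)) - 1) &lt;&lt; 16) /* bits 16..47 */
	#define PTE_ADDR_HIGH	(0xfUL &lt;&lt; 12)			 /* bits 12..15 */
	#define PTE_ADDR_MASK	(PTE_ADDR_LOW | PTE_ADDR_HIGH)

	/* fold PA bits 48..51 down into PTE bits 12..15, and back */
	#define phys_to_pte_val(phys)	(((phys) | ((phys) &gt;&gt; 36)) &amp; PTE_ADDR_MASK)
	#define pte_to_phys_val(pte)	(((pte) &amp; PTE_ADDR_LOW) | \
					 (((pte) &amp; PTE_ADDR_HIGH) &lt;&lt; 36))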

Reviewed-by: Marc Zyngier &lt;marc.zyngier@arm.com&gt;
Tested-by: Suzuki K Poulose &lt;suzuki.poulose@arm.com&gt;
Reviewed-by: Suzuki K Poulose &lt;suzuki.poulose@arm.com&gt;
Tested-by: Bob Picco &lt;bob.picco@oracle.com&gt;
Reviewed-by: Bob Picco &lt;bob.picco@oracle.com&gt;
Signed-off-by: Kristina Martsenko &lt;kristina.martsenko@arm.com&gt;
[catalin.marinas@arm.com: some long lines wrapped]
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: head.S: handle 52-bit PAs in PTEs in early page table setup</title>
<updated>2017-12-22T17:35:55+00:00</updated>
<author>
<name>Kristina Martsenko</name>
<email>kristina.martsenko@arm.com</email>
</author>
<published>2017-12-13T17:07:19+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=e6d588a8e3da24ea51321279a064b97feb502ef0'/>
<id>e6d588a8e3da24ea51321279a064b97feb502ef0</id>
<content type='text'>
The top 4 bits of a 52-bit physical address are positioned at bits
12..15 in page table entries. Introduce a macro to move the bits there,
and change the early ID map and swapper table setup code to use it.
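
A sketch of the macro's shape for the 52-bit case (PTE_ADDR_MASK and
the 64K-alignment guarantee come from the surrounding headers):

	.macro	phys_to_pte, phys, pte
	// assumes \phys is 64K-aligned (52-bit PAs imply 64K pages)
	orr	\pte, \phys, \phys, lsr #36	// PA bits 48..51 into bits 12..15
	and	\pte, \pte, #PTE_ADDR_MASK	// keep only the address fields
	.endm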

Tested-by: Suzuki K Poulose &lt;suzuki.poulose@arm.com&gt;
Reviewed-by: Suzuki K Poulose &lt;suzuki.poulose@arm.com&gt;
Reviewed-by: Marc Zyngier &lt;marc.zyngier@arm.com&gt;
Tested-by: Bob Picco &lt;bob.picco@oracle.com&gt;
Reviewed-by: Bob Picco &lt;bob.picco@oracle.com&gt;
Signed-off-by: Kristina Martsenko &lt;kristina.martsenko@arm.com&gt;
[catalin.marinas@arm.com: additional comments for clarification]
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
</feed>
