<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux.git/arch/arm64/kernel, branch v3.14</title>
<subtitle>Linux kernel source tree</subtitle>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/'/>
<entry>
<title>ARM64: unwind: Fix PC calculation</title>
<updated>2014-02-17T09:16:33+00:00</updated>
<author>
<name>Olof Johansson</name>
<email>olof@lixom.net</email>
</author>
<published>2014-02-14T19:35:15+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=e306dfd06fcb44d21c80acb8e5a88d55f3d1cf63'/>
<id>e306dfd06fcb44d21c80acb8e5a88d55f3d1cf63</id>
<content type='text'>
The frame PC value in the unwind code used to just take the saved LR
value and use that.  That's incorrect as a stack trace, since it shows
the return path stack, not the call path stack.

In particular, it shows faulty information when the bl is the very last
instruction of one label, since the return address then lands in the next
label. That can easily be seen with tail calls to panic(), which is marked
__noreturn and thus has nothing useful after it.

The easiest fix is to correct the unwind code by subtracting 4, so the
backtrace shows the actual call site instead of the return site.
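
For illustration, the fix boils down to something like the following sketch
(not the verbatim patch; the frame layout is assumed from the arm64 unwinder):

	struct stackframe {
		unsigned long fp;
		unsigned long sp;
		unsigned long pc;
	};

	int unwind_frame(struct stackframe *frame)
	{
		unsigned long fp = frame-&gt;fp;

		frame-&gt;sp = fp + 0x10;
		frame-&gt;fp = *(unsigned long *)(fp);
		/* -4: A64 instructions are 4 bytes wide, so the bl that produced
		   the saved LR sits exactly one instruction earlier */
		frame-&gt;pc = *(unsigned long *)(fp + 8) - 4;

		return 0;
	}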

Signed-off-by: Olof Johansson &lt;olof@lixom.net&gt;
Cc: stable@vger.kernel.org
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: atomics: fix use of acquire + release for full barrier semantics</title>
<updated>2014-02-07T16:45:43+00:00</updated>
<author>
<name>Will Deacon</name>
<email>will.deacon@arm.com</email>
</author>
<published>2014-02-04T12:29:12+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=8e86f0b409a44193f1587e87b69c5dcf8f65be67'/>
<id>8e86f0b409a44193f1587e87b69c5dcf8f65be67</id>
<content type='text'>
Linux requires a number of atomic operations to provide full barrier
semantics, that is no memory accesses after the operation can be
observed before any accesses up to and including the operation in
program order.

On arm64, these operations have been incorrectly implemented as follows:

	// A, B, C are independent memory locations

	&lt;Access [A]&gt;

	// atomic_op (B)
1:	ldaxr	x0, [B]		// Exclusive load with acquire
	&lt;op(B)&gt;
	stlxr	w1, x0, [B]	// Exclusive store with release
	cbnz	w1, 1b

	&lt;Access [C]&gt;

The assumption here is that two half barriers are equivalent to a full
barrier, so the only permitted ordering would be A -&gt; B -&gt; C
(where B is the atomic operation involving both a load and a store).

Unfortunately, this is not the case by the letter of the architecture
and, in fact, the accesses to A and C are permitted to pass their
nearest half barrier resulting in orderings such as Bl -&gt; A -&gt; C -&gt; Bs
or Bl -&gt; C -&gt; A -&gt; Bs (where Bl is the load-acquire on B and Bs is the
store-release on B). This is a clear violation of the full barrier
requirement.

The simple way to fix this is to implement the same algorithm as ARMv7
using explicit barriers:

	&lt;Access [A]&gt;

	// atomic_op (B)
	dmb	ish		// Full barrier
1:	ldxr	x0, [B]		// Exclusive load
	&lt;op(B)&gt;
	stxr	w1, x0, [B]	// Exclusive store
	cbnz	w1, 1b
	dmb	ish		// Full barrier

	&lt;Access [C]&gt;

but this has the undesirable effect of introducing *two* full barrier
instructions. A better approach is the following non-intuitive sequence:

	&lt;Access [A]&gt;

	// atomic_op (B)
1:	ldxr	x0, [B]		// Exclusive load
	&lt;op(B)&gt;
	stlxr	w1, x0, [B]	// Exclusive store with release
	cbnz	w1, 1b
	dmb	ish		// Full barrier

	&lt;Access [C]&gt;

The simple observations here are:

  - The dmb ensures that no subsequent accesses (e.g. the access to C)
    can enter or pass the atomic sequence.

  - The dmb also ensures that no prior accesses (e.g. the access to A)
    can pass the atomic sequence.

  - Therefore, no prior access can pass a subsequent access, or
    vice-versa (i.e. A is strictly ordered before C).

  - The stlxr ensures that no prior access can pass the store component
    of the atomic operation.

The only tricky part remaining is the ordering between the ldxr and the
access to A, since the absence of the first dmb means that we're now
permitting re-ordering between the ldxr and any prior accesses.

From an (arbitrary) observer's point of view, there are two scenarios:

  1. We have observed the ldxr. This means that if we perform a store to
     [B], the ldxr will still return older data. If we can observe the
     ldxr, then we can potentially observe the permitted re-ordering
     with the access to A, which is clearly an issue when compared to
     the dmb variant of the code. Thankfully, the exclusive monitor will
     save us here since it will be cleared as a result of the store and
     the ldxr will retry. Notice that any use of a later memory
     observation to imply observation of the ldxr will also imply
     observation of the access to A, since the stlxr/dmb ensure strict
     ordering.

  2. We have not observed the ldxr. This means we can perform a store
     and influence the later ldxr. However, that doesn't actually tell
     us anything about the access to [A], so we've not lost anything
     here either when compared to the dmb variant.

This patch implements this solution for our barriered atomic operations,
ensuring that we satisfy the full barrier requirements where they are
needed.
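
As a rough illustration (a sketch, not the exact kernel source), an
atomic_add_return() built on this sequence looks something like:

	static inline int atomic_add_return(int i, atomic_t *v)
	{
		unsigned long tmp;
		int result;

		asm volatile("// atomic_add_return\n"
		"1:	ldxr	%w0, %2\n"		// exclusive load, no acquire
		"	add	%w0, %w0, %w3\n"
		"	stlxr	%w1, %w0, %2\n"		// exclusive store with release
		"	cbnz	%w1, 1b\n"
		"	dmb	ish"			// trailing full barrier
		: "=&amp;r" (result), "=&amp;r" (tmp), "+Q" (v-&gt;counter)
		: "Ir" (i)
		: "memory");

		return result;
	}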

Cc: &lt;stable@vger.kernel.org&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: vdso: update wtm fields for CLOCK_MONOTONIC_COARSE</title>
<updated>2014-02-05T11:55:49+00:00</updated>
<author>
<name>Nathan Lynch</name>
<email>nathan_lynch@mentor.com</email>
</author>
<published>2014-02-03T19:48:52+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=d4022a335271a48cce49df35d825897914fbffe3'/>
<id>d4022a335271a48cce49df35d825897914fbffe3</id>
<content type='text'>
Update wall-to-monotonic fields in the VDSO data page
unconditionally.  These are used to service CLOCK_MONOTONIC_COARSE,
which is not guarded by use_syscall.
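
Roughly, inside update_vsyscall() the result looks like this (a sketch with
field names taken from the arm64 vdso data page, not the literal diff):

	/* Updated unconditionally: the coarse clocks read these even when
	 * use_syscall forces the fine-grained clocks down the syscall path. */
	vdso_data-&gt;wtm_clock_sec  = tk-&gt;wall_to_monotonic.tv_sec;
	vdso_data-&gt;wtm_clock_nsec = tk-&gt;wall_to_monotonic.tv_nsec;

	if (!use_syscall) {
		/* Only the fields used by the fine-grained fast path stay guarded. */
		vdso_data-&gt;cs_cycle_last = tk-&gt;clock-&gt;cycle_last;
	}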

Signed-off-by: Nathan Lynch &lt;nathan_lynch@mentor.com&gt;
Acked-by: Will Deacon &lt;will.deacon@arm.com&gt;
Cc: &lt;stable@vger.kernel.org&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: vdso: fix coarse clock handling</title>
<updated>2014-02-05T11:55:30+00:00</updated>
<author>
<name>Nathan Lynch</name>
<email>nathan_lynch@mentor.com</email>
</author>
<published>2014-02-05T05:53:04+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=069b918623e1510e58dacf178905a72c3baa3ae4'/>
<id>069b918623e1510e58dacf178905a72c3baa3ae4</id>
<content type='text'>
When __kernel_clock_gettime is called with a CLOCK_MONOTONIC_COARSE or
CLOCK_REALTIME_COARSE clock id, it returns incorrectly to whatever the
caller has placed in x2 ("ret x2" to return from the fast path).  Fix
this by saving x30/LR to x2 only in code that will call
__do_get_tspec, restoring x30 afterward, and using a plain "ret" to
return from the routine.

Also: while the resulting tv_nsec value for CLOCK_REALTIME and
CLOCK_MONOTONIC must be computed using intermediate values that are
left-shifted by cs_shift (x12, set by __do_get_tspec), the results for
coarse clocks should be calculated using unshifted values
(xtime_coarse_nsec is in units of actual nanoseconds).  The current
code shifts intermediate values by x12 unconditionally, but x12 is
uninitialized when servicing a coarse clock.  Fix this by setting x12
to 0 once we know we are dealing with a coarse clock id.
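
In rough C terms the shift handling becomes the following (purely illustrative;
the real code is arm64 assembly in gettimeofday.S and these names are made up):

	static unsigned long vdso_nsec(int coarse, unsigned long xtime_nsec_shifted,
				       unsigned long xtime_coarse_nsec,
				       unsigned long cycle_delta, unsigned long mult,
				       unsigned int cs_shift)
	{
		if (coarse)
			return xtime_coarse_nsec;	/* plain nanoseconds: shift must be 0 */

		/* fine-grained clocks work on values left-shifted by cs_shift */
		return (xtime_nsec_shifted + cycle_delta * mult) &gt;&gt; cs_shift;
	}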

Signed-off-by: Nathan Lynch &lt;nathan_lynch@mentor.com&gt;
Acked-by: Will Deacon &lt;will.deacon@arm.com&gt;
Cc: &lt;stable@vger.kernel.org&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: vdso: prevent ld from aligning PT_LOAD segments to 64k</title>
<updated>2014-02-04T17:52:47+00:00</updated>
<author>
<name>Will Deacon</name>
<email>will.deacon@arm.com</email>
</author>
<published>2014-02-04T14:41:26+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=40507403485fcb56b83d6ddfc954e9b08305054c'/>
<id>40507403485fcb56b83d6ddfc954e9b08305054c</id>
<content type='text'>
Whilst the text segment for our VDSO is marked as PT_LOAD in the ELF
headers, it is mapped by the kernel and not actually subject to
demand-paging. ld doesn't realise this and emits a p_align field of 64k
(the maximum supported page size), which conflicts with the load address
picked by the kernel on 4k systems, since that address is only 4k aligned. This
causes GDB to fail with "Failed to read a valid object file image from
memory" when attempting to load the VDSO.

This patch passes the -n option to ld, which prevents it from aligning
PT_LOAD segments to the maximum page size.

Cc: &lt;stable@vger.kernel.org&gt;
Reported-by: Kyle McMartin &lt;kyle@redhat.com&gt;
Acked-by: Kyle McMartin &lt;kyle@redhat.com&gt;
Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
<entry>
<title>Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux</title>
<updated>2014-01-31T22:25:52+00:00</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2014-01-31T22:25:52+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=deb2a1d29bf0168ff2575e714e5c1f156be663fb'/>
<id>deb2a1d29bf0168ff2575e714e5c1f156be663fb</id>
<content type='text'>
Pull ARM64 patches from Catalin Marinas:
 - Build fix with DMA_CMA enabled
 - Introduction of PTE_WRITE to distinguish between writable but clean
   and truly read-only pages
 - FIQs enabling/disabling clean-up (they aren't used on arm64)
 - CPU resume fix for the per-cpu offset restoring
 - Code comment typos

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: mm: Introduce PTE_WRITE
  arm64: mm: Remove PTE_BIT_FUNC macro
  arm64: FIQs are unused
  arm64: mm: fix the function name in comment of cpu_do_switch_mm
  arm64: fix build error if DMA_CMA is enabled
  arm64: kernel: fix per-cpu offset restore on resume
  arm64: mm: fix the function name in comment of __flush_dcache_area
  arm64: mm: use ubfm for dcache_line_size
</content>
</entry>
<entry>
<title>arm64: FIQs are unused</title>
<updated>2014-01-30T13:51:43+00:00</updated>
<author>
<name>Nicolas Pitre</name>
<email>nicolas.pitre@linaro.org</email>
</author>
<published>2014-01-29T18:00:45+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=f864b61ee49bbf3faf9a10b9770c719536328d01'/>
<id>f864b61ee49bbf3faf9a10b9770c719536328d01</id>
<content type='text'>
FIQs are unused on arm64, so any FIQ handling is superfluous at the moment.
The functions to disable/enable FIQs are kept around in case someone needs
them in the future, but existing call sites, including
arch_cpu_idle_prepare(), may go for now.

Signed-off-by: Nicolas Pitre &lt;nico@linaro.org&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
<entry>
<title>arm64: kernel: fix per-cpu offset restore on resume</title>
<updated>2014-01-24T14:27:40+00:00</updated>
<author>
<name>Lorenzo Pieralisi</name>
<email>Lorenzo.Pieralisi@arm.com</email>
</author>
<published>2014-01-24T10:56:19+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=fb4a96029c8a091c4365f57307e098543b48a222'/>
<id>fb4a96029c8a091c4365f57307e098543b48a222</id>
<content type='text'>
The introduction of percpu offset optimisation through tpidr_el1 in:

commit 7158627686f02319c50c8d9d78f75d4c8
"arm64: percpu: implement optimised pcpu access using tpidr_el1"

requires cpu_{suspend,resume} to restore the tpidr_el1 register upon resume,
so that percpu variables can be addressed correctly when a CPU comes out of
reset from a warm boot.

This patch fixes tpidr_el1 restoration in cpu_{suspend,resume} by calling the
set_my_cpu_offset C API, just as is done for primary and secondary CPUs on
cold boot. That way, even if the register used to store the percpu offset
changes, the general-purpose register save and restore code does not have to
be updated.
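
The resume path then does something along these lines (sketch only; the exact
placement in the suspend/resume code may differ):

	/*
	 * After a warm boot, reload the per-cpu offset from C so that the
	 * register used to hold it remains an implementation detail.
	 */
	set_my_cpu_offset(per_cpu_offset(smp_processor_id()));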

Signed-off-by: Lorenzo Pieralisi &lt;lorenzo.pieralisi@arm.com&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
<entry>
<title>Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux</title>
<updated>2014-01-20T23:40:44+00:00</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2014-01-20T23:40:44+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=82b51734b4f228c76b6064b6e899d9d3d4c17c1a'/>
<id>82b51734b4f228c76b6064b6e899d9d3d4c17c1a</id>
<content type='text'>
Pull ARM64 updates from Catalin Marinas:
 - CPU suspend support on top of PSCI (firmware Power State Coordination
   Interface)
 - jump label support
 - CMA can now be enabled on arm64
 - HWCAP bits for crypto and CRC32 extensions
 - optimised percpu using tpidr_el1 register
 - code cleanup

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (42 commits)
  arm64: fix typo in entry.S
  arm64: kernel: restore HW breakpoint registers in cpu_suspend
  jump_label: use defined macros instead of hard-coding for better readability
  arm64, jump label: optimize jump label implementation
  arm64, jump label: detect %c support for ARM64
  arm64: introduce aarch64_insn_gen_{nop|branch_imm}() helper functions
  arm64: move encode_insn_immediate() from module.c to insn.c
  arm64: introduce interfaces to hotpatch kernel and module code
  arm64: introduce basic aarch64 instruction decoding helpers
  arm64: dts: Reduce size of virtio block device for foundation model
  arm64: Remove unused __data_loc variable
  arm64: Enable CMA
  arm64: Warn on NULL device structure for dma APIs
  arm64: Add hwcaps for crypto and CRC32 extensions.
  arm64: drop redundant macros from read_cpuid()
  arm64: Remove outdated comment
  arm64: cmpxchg: update macros to prevent warnings
  arm64: support single-step and breakpoint handler hooks
  ARM64: fix framepointer check in unwind_frame
  ARM64: check stack pointer in get_wchan
  ...
</content>
</entry>
<entry>
<title>arm64: fix typo in entry.S</title>
<updated>2014-01-13T13:55:13+00:00</updated>
<author>
<name>Neil Zhang</name>
<email>zhangwm@marvell.com</email>
</author>
<published>2014-01-13T08:57:56+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=883c057367014d20a14b5054e4eb0d81ce3bea5c'/>
<id>883c057367014d20a14b5054e4eb0d81ce3bea5c</id>
<content type='text'>
Commit 64681787 (arm64: let the core code deal with preempt_count) changed
the code but left the comments unchanged. Fix them.

Signed-off-by: Neil Zhang &lt;zhangwm@marvell.com&gt;
Acked-by: Will Deacon &lt;will.deacon@arm.com&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
</content>
</entry>
</feed>
