<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux.git/arch/mips/include/asm/atomic.h, branch v5.6</title>
<subtitle>Linux kernel source tree</subtitle>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/'/>
<entry>
<title>MIPS: atomic: Deduplicate 32b &amp; 64b read, set, xchg, cmpxchg</title>
<updated>2019-10-07T16:42:36+00:00</updated>
<author>
<name>Paul Burton</name>
<email>paul.burton@mips.com</email>
</author>
<published>2019-10-01T21:53:24+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=1da7bce8591d58bf2a442b0324659af7390401c2'/>
<id>1da7bce8591d58bf2a442b0324659af7390401c2</id>
<content type='text'>
Remove the remaining duplication between 32b &amp; 64b in asm/atomic.h by
making use of an ATOMIC_OPS() macro to generate:

  - atomic_read()/atomic64_read()
  - atomic_set()/atomic64_set()
  - atomic_cmpxchg()/atomic64_cmpxchg()
  - atomic_xchg()/atomic64_xchg()

This is consistent with the way all other functions in asm/atomic.h are
generated, and ensures consistency between the 32b &amp; 64b functions.

Of note is that this results in the above now being static inline
functions rather than macros.
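
As a rough sketch (illustrative, not the verbatim patch), the generated
definitions take this shape, with pfx selecting the atomic/atomic64
prefix &amp; type the counter type:

	#define ATOMIC_OPS(pfx, type)					\
	static __always_inline type pfx##_read(const pfx##_t *v)	\
	{								\
		return READ_ONCE(v-&gt;counter);				\
	}								\
									\
	static __always_inline void pfx##_set(pfx##_t *v, type i)	\
	{								\
		WRITE_ONCE(v-&gt;counter, i);				\
	}								\
									\
	static __always_inline type pfx##_cmpxchg(pfx##_t *v, type o,	\
						  type n)		\
	{								\
		return cmpxchg(&amp;v-&gt;counter, o, n);			\
	}								\
									\
	static __always_inline type pfx##_xchg(pfx##_t *v, type n)	\
	{								\
		return xchg(&amp;v-&gt;counter, n);				\
	}

	ATOMIC_OPS(atomic, int)
	#ifdef CONFIG_64BIT
	ATOMIC_OPS(atomic64, s64)
	#endif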

Signed-off-by: Paul Burton &lt;paul.burton@mips.com&gt;
Cc: linux-mips@vger.kernel.org
Cc: Huacai Chen &lt;chenhc@lemote.com&gt;
Cc: Jiaxun Yang &lt;jiaxun.yang@flygoat.com&gt;
Cc: linux-kernel@vger.kernel.org
</content>
</entry>
<entry>
<title>MIPS: atomic: Unify 32b &amp; 64b sub_if_positive</title>
<updated>2019-10-07T16:42:34+00:00</updated>
<author>
<name>Paul Burton</name>
<email>paul.burton@mips.com</email>
</author>
<published>2019-10-01T21:53:23+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=40e784b4d4bc31dee5f1db6a20287777d3aaa4dc'/>
<id>40e784b4d4bc31dee5f1db6a20287777d3aaa4dc</id>
<content type='text'>
Unify the definitions of atomic_sub_if_positive() &amp;
atomic64_sub_if_positive() using a macro like we do for most other
atomic functions. This allows us to share the implementation ensuring
consistency between the two. Notably this provides the appropriate
loongson3_war barriers in the atomic64_sub_if_positive() case which were
previously missing.

The code is rearranged a little to handle the !kernel_uses_llsc case
first, in order to de-indent the LL/SC case &amp; keep lines within 80
characters.
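
Schematically the unified macro looks something like this (a sketch;
the ATOMIC_SIP_OP name &amp; parameters here are illustrative):

	#define ATOMIC_SIP_OP(pfx, type, op, ll, sc)			\
	static __inline__ type pfx##_sub_if_positive(type i, pfx##_t *v) \
	{								\
		type res;						\
									\
		smp_mb__before_atomic();				\
		/* shared body: the !kernel_uses_llsc fallback or	\
		 * the LL/SC loop built from the ll/sc/op params,	\
		 * leaving the result in res */				\
		smp_mb__after_atomic();					\
		return res;						\
	}

	ATOMIC_SIP_OP(atomic, int, subu, ll, sc)
	#ifdef CONFIG_64BIT
	ATOMIC_SIP_OP(atomic64, s64, dsubu, lld, scd)
	#endif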

Signed-off-by: Paul Burton &lt;paul.burton@mips.com&gt;
Cc: linux-mips@vger.kernel.org
Cc: Huacai Chen &lt;chenhc@lemote.com&gt;
Cc: Jiaxun Yang &lt;jiaxun.yang@flygoat.com&gt;
Cc: linux-kernel@vger.kernel.org
</content>
</entry>
<entry>
<title>MIPS: atomic: Use _atomic barriers in atomic_sub_if_positive()</title>
<updated>2019-10-07T16:42:32+00:00</updated>
<author>
<name>Paul Burton</name>
<email>paul.burton@mips.com</email>
</author>
<published>2019-10-01T21:53:21+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=77d281b7966e476927a45c5fb272d720aa75bb95'/>
<id>77d281b7966e476927a45c5fb272d720aa75bb95</id>
<content type='text'>
Use smp_mb__before_atomic() &amp; smp_mb__after_atomic() in
atomic_sub_if_positive() rather than the equivalent
smp_mb__before_llsc() &amp; smp_llsc_mb(). The former are more standard,
&amp; this prepares us to avoid emitting redundant barriers on Loongson3
in a later patch.
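
The substitution itself is mechanical; illustratively:

	-	smp_mb__before_llsc();
	+	smp_mb__before_atomic();
		/* ... LL/SC sequence ... */
	-	smp_llsc_mb();
	+	smp_mb__after_atomic();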

Signed-off-by: Paul Burton &lt;paul.burton@mips.com&gt;
Cc: linux-mips@vger.kernel.org
Cc: Huacai Chen &lt;chenhc@lemote.com&gt;
Cc: Jiaxun Yang &lt;jiaxun.yang@flygoat.com&gt;
Cc: linux-kernel@vger.kernel.org
</content>
</entry>
<entry>
<title>MIPS: atomic: Emit Loongson3 sync workarounds within asm</title>
<updated>2019-10-07T16:42:31+00:00</updated>
<author>
<name>Paul Burton</name>
<email>paul.burton@mips.com</email>
</author>
<published>2019-10-01T21:53:20+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=4d1dbfe6cbec34c6398a480c0572bba794e89e11'/>
<id>4d1dbfe6cbec34c6398a480c0572bba794e89e11</id>
<content type='text'>
Generate the sync instructions required to work around the Loongson3
LL/SC errata within the inline asm blocks themselves. This is a little
safer than doing it from C, where strictly speaking the compiler would
be well within its rights to insert a memory access between the
separate asm statements we previously had, containing the sync &amp; ll
instructions respectively.
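
The resulting shape is along these lines (a sketch assuming the
__SYNC() helper from this series, which expands to the required sync,
or to nothing, within the asm string; the operands are illustrative):

	__asm__ __volatile__(
	"	" __SYNC(full, loongson3_war) "		\n"
	"1:	ll	%0, %1		# atomic_add	\n"
	"	addu	%0, %2				\n"
	"	sc	%0, %1				\n"
	"\t" __SC_BEQZ "%0, 1b				\n"
	: "=&amp;r" (temp), "+m" (v-&gt;counter)
	: "Ir" (i));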

Signed-off-by: Paul Burton &lt;paul.burton@mips.com&gt;
Cc: linux-mips@vger.kernel.org
Cc: Huacai Chen &lt;chenhc@lemote.com&gt;
Cc: Jiaxun Yang &lt;jiaxun.yang@flygoat.com&gt;
Cc: linux-kernel@vger.kernel.org
</content>
</entry>
<entry>
<title>MIPS: atomic: Use one macro to generate 32b &amp; 64b functions</title>
<updated>2019-10-07T16:42:29+00:00</updated>
<author>
<name>Paul Burton</name>
<email>paul.burton@mips.com</email>
</author>
<published>2019-10-01T21:53:18+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=a38ee6bb14a41b6849576bcf6cbd33cbbe5c3a7d'/>
<id>a38ee6bb14a41b6849576bcf6cbd33cbbe5c3a7d</id>
<content type='text'>
Cut down on duplication by generalizing the ATOMIC_OP(),
ATOMIC_OP_RETURN() &amp; ATOMIC_FETCH_OP() macros to work for both 32b &amp;
64b atomics, and removing the ATOMIC64_ variants. This ensures
consistency between our atomic_* &amp; atomic64_* functions.
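
i.e. a single macro now instantiates both widths, along these lines
(a sketch of the pattern, not the verbatim patch):

	#define ATOMIC_OP(pfx, op, type, c_op, asm_op, ll, sc)		\
	static __inline__ void pfx##_##op(type i, pfx##_t *v)		\
	{								\
		/* !kernel_uses_llsc fallback, else an LL/SC loop	\
		 * built from the ll/sc/asm_op parameters */		\
	}

	ATOMIC_OP(atomic, add, int, +=, addu, ll, sc)
	ATOMIC_OP(atomic64, add, s64, +=, daddu, lld, scd)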

Signed-off-by: Paul Burton &lt;paul.burton@mips.com&gt;
Cc: linux-mips@vger.kernel.org
Cc: Huacai Chen &lt;chenhc@lemote.com&gt;
Cc: Jiaxun Yang &lt;jiaxun.yang@flygoat.com&gt;
Cc: linux-kernel@vger.kernel.org
</content>
</entry>
<entry>
<title>MIPS: atomic: Handle !kernel_uses_llsc first</title>
<updated>2019-10-07T16:42:27+00:00</updated>
<author>
<name>Paul Burton</name>
<email>paul.burton@mips.com</email>
</author>
<published>2019-10-01T21:53:16+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=9537db24c65aeb71718916272687b0d00d3e0821'/>
<id>9537db24c65aeb71718916272687b0d00d3e0821</id>
<content type='text'>
Handle the !kernel_uses_llsc path first in our ATOMIC_OP(),
ATOMIC_OP_RETURN() &amp; ATOMIC_FETCH_OP() macros &amp; return from within the
block. This allows us to de-indent the kernel_uses_llsc path by one
level which will be useful when making further changes.
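
i.e. the generated functions now follow an early-out pattern along
these lines (sketch):

	static __inline__ void atomic_add(int i, atomic_t *v)
	{
		if (!kernel_uses_llsc) {
			unsigned long flags;

			raw_local_irq_save(flags);
			v-&gt;counter += i;
			raw_local_irq_restore(flags);
			return;
		}

		/* LL/SC implementation, now one indent level shallower */
	}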

Signed-off-by: Paul Burton &lt;paul.burton@mips.com&gt;
Cc: linux-mips@vger.kernel.org
Cc: Huacai Chen &lt;chenhc@lemote.com&gt;
Cc: Jiaxun Yang &lt;jiaxun.yang@flygoat.com&gt;
Cc: linux-kernel@vger.kernel.org
</content>
</entry>
<entry>
<title>MIPS: atomic: Fix whitespace in ATOMIC_OP macros</title>
<updated>2019-10-07T16:42:26+00:00</updated>
<author>
<name>Paul Burton</name>
<email>paul.burton@mips.com</email>
</author>
<published>2019-10-01T21:53:15+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=36d3295c5a0d9169bae1d40f8db92459977c2936'/>
<id>36d3295c5a0d9169bae1d40f8db92459977c2936</id>
<content type='text'>
We define macros in asm/atomic.h which end each line with space
characters before the backslash that continues them on the next line.
Remove the space characters, leaving tabs as the only whitespace, for
conformity with the coding convention.

Signed-off-by: Paul Burton &lt;paul.burton@mips.com&gt;
Cc: linux-mips@vger.kernel.org
Cc: Huacai Chen &lt;chenhc@lemote.com&gt;
Cc: Jiaxun Yang &lt;jiaxun.yang@flygoat.com&gt;
Cc: linux-kernel@vger.kernel.org
</content>
</entry>
<entry>
<title>MIPS: Unify sc beqz definition</title>
<updated>2019-10-07T16:42:13+00:00</updated>
<author>
<name>Paul Burton</name>
<email>paul.burton@mips.com</email>
</author>
<published>2019-10-01T21:53:05+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=878f75c7a2530471844a93b01e887f09d24ed57f'/>
<id>878f75c7a2530471844a93b01e887f09d24ed57f</id>
<content type='text'>
We currently duplicate the definition of __scbeqz in asm/atomic.h &amp;
asm/cmpxchg.h. Move it to asm/llsc.h &amp; rename it to __SC_BEQZ to fit
better with the existing __SC macro provided there.

We include a tab in the string in order to avoid the need for users to
indent code any further to include whitespace of their own after the
instruction mnemonic.
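
For illustration, the moved definition &amp; a typical use look roughly
like this ("\t" denotes the embedded tab):

	/* asm/llsc.h */
	#if R10000_LLSC_WAR
	# define __SC_BEQZ "beqzl\t"
	#else
	# define __SC_BEQZ "beqz\t"
	#endif

	/* use: no extra whitespace needed after the mnemonic */
	"\t" __SC_BEQZ "%0, 1b\n"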

Signed-off-by: Paul Burton &lt;paul.burton@mips.com&gt;
Cc: linux-mips@vger.kernel.org
Cc: Huacai Chen &lt;chenhc@lemote.com&gt;
Cc: Jiaxun Yang &lt;jiaxun.yang@flygoat.com&gt;
Cc: linux-kernel@vger.kernel.org
</content>
</entry>
<entry>
<title>mips/atomic: Fix smp_mb__{before,after}_atomic()</title>
<updated>2019-08-31T10:06:02+00:00</updated>
<author>
<name>Peter Zijlstra</name>
<email>peterz@infradead.org</email>
</author>
<published>2019-06-13T13:43:20+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=42344113ba7a1ed7b5654cd5270af0d5698d8521'/>
<id>42344113ba7a1ed7b5654cd5270af0d5698d8521</id>
<content type='text'>
Recent probing at the Linux Kernel Memory Model uncovered a
'surprise'. Strongly ordered architectures where the atomic RmW
primitive implies full memory ordering and
smp_mb__{before,after}_atomic() are a simple barrier() (such as MIPS
without WEAK_REORDERING_BEYOND_LLSC) fail for:

	*x = 1;
	atomic_inc(u);
	smp_mb__after_atomic();
	r0 = *y;

Because, while the atomic_inc() implies memory ordering, it
(surprisingly) does not provide a compiler barrier. This then allows
the compiler to re-order like so:

	atomic_inc(u);
	*x = 1;
	smp_mb__after_atomic();
	r0 = *y;

Which the CPU is then allowed to re-order (under TSO rules) like:

	atomic_inc(u);
	r0 = *y;
	*x = 1;

And this very much was not intended. Therefore strengthen the atomic
RmW ops to include a compiler barrier.
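
Concretely, the strengthening amounts to making the inline asm a
compiler barrier, e.g. by adding "memory" to its clobber list (a
sketch, not the exact patch):

	__asm__ __volatile__(
	"1:	ll	%0, %1		# atomic_add	\n"
	"	addu	%0, %2				\n"
	"	sc	%0, %1				\n"
	"	beqz	%0, 1b				\n"
	: "=&amp;r" (temp), "+m" (v-&gt;counter)
	: "Ir" (i)
	: "memory");	/* added: forbids the compiler re-ordering
			 * memory accesses across this asm */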

Reported-by: Andrea Parri &lt;andrea.parri@amarulasolutions.com&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Signed-off-by: Paul Burton &lt;paul.burton@mips.com&gt;
</content>
</entry>
<entry>
<title>mips/atomic: Fix loongson_llsc_mb() wreckage</title>
<updated>2019-08-31T10:05:17+00:00</updated>
<author>
<name>Peter Zijlstra</name>
<email>peterz@infradead.org</email>
</author>
<published>2019-06-13T13:43:19+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=1c6c1ca318585f1096d4d04bc722297c85e9fb8a'/>
<id>1c6c1ca318585f1096d4d04bc722297c85e9fb8a</id>
<content type='text'>
The comment describing the loongson_llsc_mb() reorder case doesn't
make any sense whatsoever. Instruction re-ordering is not an SMP
artifact, but rather a CPU-local phenomenon. Clarify the comment by
explaining that these issues cause a coherence failure.

For the branch speculation case: if futex_atomic_cmpxchg_inatomic()
needs a barrier at the bne branch target, then surely the normal
__cmpxchg_asm() implementation does too. We cannot rely on the
barriers from cmpxchg() because cmpxchg_local() is implemented with
the same macro, and branch prediction and speculation are likewise
CPU-local.
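
Sketching where the barrier belongs in __cmpxchg_asm() (illustrative
fragment):

	"1:	ll	%0, %2		# __cmpxchg_asm	\n"
	"	bne	%0, %z3, 2f			\n"
	"	move	$1, %z4				\n"
	"	sc	$1, %1				\n"
	"	beqz	$1, 1b				\n"
	"2:						\n"
	/* ^ the Loongson workaround barrier must also land here,
	 * at the bne branch target */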

Fixes: e02e07e3127d ("MIPS: Loongson: Introduce and use loongson_llsc_mb()")
Cc: Huacai Chen &lt;chenhc@lemote.com&gt;
Cc: Huang Pei &lt;huangpei@loongson.cn&gt;
Signed-off-by: Peter Zijlstra (Intel) &lt;peterz@infradead.org&gt;
Signed-off-by: Paul Burton &lt;paul.burton@mips.com&gt;
</content>
</entry>
</feed>
