<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux.git/arch/arm/kernel/head-common.S, branch v6.6</title>
<subtitle>Linux kernel source tree</subtitle>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/'/>
<entry>
<title>ARM: implement THREAD_INFO_IN_TASK for uniprocessor systems</title>
<updated>2021-12-06T11:49:17+00:00</updated>
<author>
<name>Ard Biesheuvel</name>
<email>ardb@kernel.org</email>
</author>
<published>2021-11-24T13:08:11+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=9c46929e7989efacc1dd0a1dd662a839897ea2b6'/>
<id>9c46929e7989efacc1dd0a1dd662a839897ea2b6</id>
<content type='text'>
On UP systems, only a single task can be 'current' at the same time,
which means we can use a global variable to track it. This also lets us
enable THREAD_INFO_IN_TASK for those systems, since in that case
thread_info is accessed via current rather than the other way around,
removing the need to store thread_info at the base of the task stack.
This, in turn, permits us to enable IRQ stacks and vmap'ed stacks on UP
systems as well.

To partially mitigate the performance overhead of this arrangement, use
an ADD/ADD/LDR sequence with the appropriate PC-relative group
relocations to load the value of current when needed. This means that
accessing current will still only require a single load as before,
avoiding the need for a literal to carry the address of the global
variable in each function. However, accessing thread_info will now
require this load as well.
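
As a rough sketch of the idea (not the kernel's exact macro: the
__current symbol name, the numeric labels and the use of inline asm
here are illustrative), such a sequence can be emitted with explicit
.reloc directives and fixed up by the linker:

struct task_struct;

static inline struct task_struct *get_current_up(void)
{
        struct task_struct *cur;

        /* The linker rewrites the SUB/SUB placeholders (flipping them
         * to ADDs when the symbol sits above the PC) so the pair
         * builds the address of __current and the LDR loads its
         * value; the #8/#4 addends account for ARM PC read-ahead. */
        asm("   .reloc  0f, R_ARM_ALU_PC_G0_NC, __current\n"
            "   .reloc  1f, R_ARM_ALU_PC_G1_NC, __current\n"
            "   .reloc  2f, R_ARM_LDR_PC_G2, __current\n"
            "0: sub     %0, pc, #8\n"
            "1: sub     %0, %0, #4\n"
            "2: ldr     %0, [%0, #0]\n"
            : "=r" (cur));
        return cur;
}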

Acked-by: Linus Walleij &lt;linus.walleij@linaro.org&gt;
Acked-by: Nicolas Pitre &lt;nico@fluxnic.net&gt;
Signed-off-by: Ard Biesheuvel &lt;ardb@kernel.org&gt;
Tested-by: Marc Zyngier &lt;maz@kernel.org&gt;
Tested-by: Vladimir Murzin &lt;vladimir.murzin@arm.com&gt; # ARMv7M
</content>
</entry>
<entry>
<title>ARM: smp: Store current pointer in TPIDRURO register if available</title>
<updated>2021-09-27T14:54:02+00:00</updated>
<author>
<name>Ard Biesheuvel</name>
<email>ardb@kernel.org</email>
</author>
<published>2021-09-18T08:44:37+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=50596b7559bf226bb35ad55855ee979453ec06a1'/>
<id>50596b7559bf226bb35ad55855ee979453ec06a1</id>
<content type='text'>
Now that the user space TLS register is assigned on every return to user
space, we can use it to keep the 'current' pointer while running in the
kernel. This removes the need to access it via thread_info, which is
located at the base of the stack, but will be moved out of there in a
subsequent patch.

Use the __builtin_thread_pointer() helper when available - this will
help GCC understand that reloading the value within the same function is
not necessary, even when using the per-task stack protector (which also
generates accesses via the TLS register). For example, the generated
code below loads TPIDRURO only once, and uses it to access both the
stack canary and the preempt_count fields.

&lt;do_one_initcall&gt;:
       e92d 41f0       stmdb   sp!, {r4, r5, r6, r7, r8, lr}
       ee1d 4f70       mrc     15, 0, r4, cr13, cr0, {3}
       4606            mov     r6, r0
       b094            sub     sp, #80 ; 0x50
       f8d4 34e8       ldr.w   r3, [r4, #1256] ; 0x4e8  &lt;- stack canary
       9313            str     r3, [sp, #76]   ; 0x4c
       f8d4 8004       ldr.w   r8, [r4, #4]             &lt;- preempt count
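
A minimal C sketch of the two access patterns (simplified relative to
the kernel's asm/current.h; assumes a GCC that provides the builtin):

struct task_struct;

/* Preferred: the builtin reads the TLS register (TPIDRURO here) in a
 * way GCC understands, so repeated uses within one function can be
 * merged into a single MRC. */
static inline struct task_struct *get_current(void)
{
        return (struct task_struct *)__builtin_thread_pointer();
}

/* Fallback without the builtin: the raw MRC is opaque to GCC, so it
 * cannot be combined with the stack protector's own TLS accesses. */
static inline struct task_struct *get_current_mrc(void)
{
        struct task_struct *cur;

        asm("mrc p15, 0, %0, c13, c0, 3" : "=r" (cur));
        return cur;
}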

Co-developed-by: Keith Packard &lt;keithpac@amazon.com&gt;
Signed-off-by: Keith Packard &lt;keithpac@amazon.com&gt;
Signed-off-by: Ard Biesheuvel &lt;ardb@kernel.org&gt;
Reviewed-by: Linus Walleij &lt;linus.walleij@linaro.org&gt;
Tested-by: Amit Daniel Kachhap &lt;amit.kachhap@arm.com&gt;
</content>
</entry>
<entry>
<title>Merge branch 'devel-stable' into for-next</title>
<updated>2020-12-21T11:19:26+00:00</updated>
<author>
<name>Russell King</name>
<email>rmk+kernel@armlinux.org.uk</email>
</author>
<published>2020-12-21T11:19:26+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=ecbbb88727aee7880527d4b320b4d06dde75d46d'/>
<id>ecbbb88727aee7880527d4b320b4d06dde75d46d</id>
<content type='text'>
</content>
</entry>
<entry>
<title>ARM: head-common.S: use PC-relative insn sequence for __proc_info</title>
<updated>2020-10-28T16:05:39+00:00</updated>
<author>
<name>Ard Biesheuvel</name>
<email>ardb@kernel.org</email>
</author>
<published>2020-09-14T08:25:06+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=62c4a2e202b18e1d7176875b7e7af240f340596b'/>
<id>62c4a2e202b18e1d7176875b7e7af240f340596b</id>
<content type='text'>
Replace the open coded PC relative offset calculations with a pair of
adr_l invocations. This removes some open coded arithmetic involving
virtual addresses, avoids literal pools on v7+, and slightly reduces
the footprint of the code.

Reviewed-by: Nicolas Pitre &lt;nico@fluxnic.net&gt;
Signed-off-by: Ard Biesheuvel &lt;ardb@kernel.org&gt;
</content>
</entry>
<entry>
<title>ARM: 9016/2: Initialize the mapping of KASan shadow memory</title>
<updated>2020-10-27T12:11:10+00:00</updated>
<author>
<name>Linus Walleij</name>
<email>linus.walleij@linaro.org</email>
</author>
<published>2020-10-25T22:55:16+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=5615f69bc2097452ecc954f5264d784e158d6801'/>
<id>5615f69bc2097452ecc954f5264d784e158d6801</id>
<content type='text'>
This patch initializes the KASan shadow region's page table and memory.
There are two stages to KASan initialization:

1. At the early boot stage the whole shadow region is mapped to just
   one physical page (kasan_zero_page). This is done by the function
   kasan_early_init, which is called from __mmap_switched
   (arch/arm/kernel/head-common.S).

2. After paging_init has been called, we use kasan_zero_page as the
   zero shadow for memory that KASan does not need to track, and we
   allocate new shadow space for the memory that KASan does need to
   track. This is done by the function kasan_init, which is called by
   setup_arch.
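
For orientation, the generic shadow translation used by KASan looks
like the sketch below (KASAN_SHADOW_OFFSET is platform-specific and
assumed to be defined elsewhere):

/* Each byte of shadow describes 8 (1 &lt;&lt; KASAN_SHADOW_SCALE_SHIFT)
 * bytes of kernel address space. */
#define KASAN_SHADOW_SCALE_SHIFT        3

static inline void *kasan_mem_to_shadow(const void *addr)
{
        return (void *)((unsigned long)addr &gt;&gt; KASAN_SHADOW_SCALE_SHIFT)
                + KASAN_SHADOW_OFFSET;
}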

When using KASan we also need to increase the THREAD_SIZE_ORDER
from 1 to 2, as the extra calls for shadow memory use quite a bit
of stack.

As we need to make a temporary copy of the PGD when setting up
shadow memory, we create a helpful PGD_SIZE definition for both
LPAE and non-LPAE setups.

The KASan core code unconditionally calls pud_populate(), so this
needs to be changed from BUG() to do { } while (0) when building
with KASan enabled.
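
Sketchwise, assuming a CONFIG_KASAN guard, the stub changes along
these lines (the exact location and argument names in the tree may
differ):

#ifdef CONFIG_KASAN
/* The KASan core calls pud_populate() unconditionally, so the folded
 * stub must be a harmless no-op rather than a BUG(). */
#define pud_populate(mm, pud, pmd)      do { } while (0)
#else
#define pud_populate(mm, pud, pmd)      BUG()
#endif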

After the initial development by Andrey Ryabinin, several modifications
have been made to this code:

Abbott Liu &lt;liuwenliang@huawei.com&gt;:
- Add support for ARM LPAE: if LPAE is enabled, the KASan shadow
  region's mapping table needs to be copied in the pgd_alloc()
  function.
- Change kasan_pte_populate, kasan_pmd_populate, kasan_pud_populate
  and kasan_pgd_populate from the .meminit.text section to the
  .init.text section.
  Reported by Florian Fainelli &lt;f.fainelli@gmail.com&gt;

Linus Walleij &lt;linus.walleij@linaro.org&gt;:
- Drop the custom manipulation of TTBR0 and just use
  cpu_switch_mm() to switch the pgd table.
- Adapt to handle 4th level page table folding.
- Rewrite the entire page directory and page entry initialization
  sequence to be recursive, based on ARM64's kasan_init.c.

Ard Biesheuvel &lt;ardb@kernel.org&gt;:
- Necessary underlying fixes.
- Crucial bug fixes to the memory set-up code.

Co-developed-by: Andrey Ryabinin &lt;aryabinin@virtuozzo.com&gt;
Co-developed-by: Abbott Liu &lt;liuwenliang@huawei.com&gt;
Co-developed-by: Ard Biesheuvel &lt;ardb@kernel.org&gt;

Cc: Alexander Potapenko &lt;glider@google.com&gt;
Cc: Dmitry Vyukov &lt;dvyukov@google.com&gt;
Cc: kasan-dev@googlegroups.com
Cc: Mike Rapoport &lt;rppt@linux.ibm.com&gt;
Acked-by: Mike Rapoport &lt;rppt@linux.ibm.com&gt;
Reviewed-by: Ard Biesheuvel &lt;ardb@kernel.org&gt;
Tested-by: Ard Biesheuvel &lt;ardb@kernel.org&gt; # QEMU/KVM/mach-virt/LPAE/8G
Tested-by: Florian Fainelli &lt;f.fainelli@gmail.com&gt; # Brahma SoCs
Tested-by: Ahmad Fatoum &lt;a.fatoum@pengutronix.de&gt; # i.MX6Q
Reported-by: Russell King - ARM Linux &lt;rmk+kernel@armlinux.org.uk&gt;
Reported-by: Florian Fainelli &lt;f.fainelli@gmail.com&gt;
Signed-off-by: Andrey Ryabinin &lt;aryabinin@virtuozzo.com&gt;
Signed-off-by: Abbott Liu &lt;liuwenliang@huawei.com&gt;
Signed-off-by: Florian Fainelli &lt;f.fainelli@gmail.com&gt;
Signed-off-by: Ard Biesheuvel &lt;ardb@kernel.org&gt;
Signed-off-by: Linus Walleij &lt;linus.walleij@linaro.org&gt;
Signed-off-by: Russell King &lt;rmk+kernel@armlinux.org.uk&gt;
</content>
</entry>
<entry>
<title>ARM: 9014/2: Replace string mem* functions for KASan</title>
<updated>2020-10-27T12:11:06+00:00</updated>
<author>
<name>Linus Walleij</name>
<email>linus.walleij@linaro.org</email>
</author>
<published>2020-10-25T22:52:08+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=d6d51a96c7d63b7450860a3037f2d62388286a52'/>
<id>d6d51a96c7d63b7450860a3037f2d62388286a52</id>
<content type='text'>
Functions like memset()/memmove()/memcpy() do a lot of memory
accesses.

If a bad pointer is passed to one of these functions, it is important
to catch this. Compiler instrumentation cannot do this since these
functions are written in assembly.

KASan replaces these memory functions with instrumented variants.

The original functions are declared as weak symbols so that
the strong definitions in mm/kasan/kasan.c can replace them.

The original functions have aliases with a '__' prefix in their
name, so we can call the non-instrumented variant if needed.

We must use __memcpy()/__memset() in place of memcpy()/memset()
when we copy .data to RAM and when we clear .bss, because
kasan_early_init cannot be called before the initialization of
.data and .bss.

For the kernel compression and EFI libstub's custom string
libraries we need a special quirk: even if these are built
without KASan enabled, they rely on the global headers for their
custom string libraries, which means that e.g. memcpy()
will be defined to __memcpy() and we get link failures.
Since these implementations are written in C rather than
assembly, we use e.g. __alias(memcpy) to redirect any
users back to the local implementation.
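
A condensed C sketch of the weak/alias arrangement (the byte-copy body
is purely illustrative; in the tree the uninstrumented implementations
are assembly):

#include &lt;stddef.h&gt;

/* The '__'-prefixed name always reaches the uninstrumented code. */
void *__memcpy(void *dst, const void *src, size_t n)
{
        char *d = dst;
        const char *s = src;

        while (n--)
                *d++ = *s++;
        return dst;
}

/* The plain name is only a weak alias, so the strong, instrumented
 * definition in mm/kasan/ overrides it when KASan is enabled. */
void *memcpy(void *dst, const void *src, size_t n)
        __attribute__((weak, alias("__memcpy")));

The libstub/decompressor quirk goes the other way around: there the
local memcpy() definition is given an alias under the __memcpy name,
so callers rewritten by the global headers land back on the local
code.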

Cc: Andrey Ryabinin &lt;aryabinin@virtuozzo.com&gt;
Cc: Alexander Potapenko &lt;glider@google.com&gt;
Cc: Dmitry Vyukov &lt;dvyukov@google.com&gt;
Cc: kasan-dev@googlegroups.com
Reviewed-by: Ard Biesheuvel &lt;ardb@kernel.org&gt;
Tested-by: Ard Biesheuvel &lt;ardb@kernel.org&gt; # QEMU/KVM/mach-virt/LPAE/8G
Tested-by: Florian Fainelli &lt;f.fainelli@gmail.com&gt; # Brahma SoCs
Tested-by: Ahmad Fatoum &lt;a.fatoum@pengutronix.de&gt; # i.MX6Q
Reported-by: Russell King - ARM Linux &lt;rmk+kernel@armlinux.org.uk&gt;
Signed-off-by: Ahmad Fatoum &lt;a.fatoum@pengutronix.de&gt;
Signed-off-by: Abbott Liu &lt;liuwenliang@huawei.com&gt;
Signed-off-by: Florian Fainelli &lt;f.fainelli@gmail.com&gt;
Signed-off-by: Linus Walleij &lt;linus.walleij@linaro.org&gt;
Signed-off-by: Russell King &lt;rmk+kernel@armlinux.org.uk&gt;
</content>
</entry>
<entry>
<title>ARM: 8914/1: NOMMU: Fix exc_ret for XIP</title>
<updated>2019-10-10T21:23:20+00:00</updated>
<author>
<name>Vladimir Murzin</name>
<email>vladimir.murzin@arm.com</email>
</author>
<published>2019-10-10T09:12:20+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=4c0742f65b4ee466546fd24b71b56516cacd4613'/>
<id>4c0742f65b4ee466546fd24b71b56516cacd4613</id>
<content type='text'>
It was reported that 72cd4064fcca "NOMMU: Toggle only bits in
EXC_RETURN we are really care of" breaks the NOMMU+XIP combination.
This happens because the saved EXC_RETURN gets overwritten when the
data section is relocated.

The fix is to propagate EXC_RETURN via a register and let the
relocation code commit that value to memory.

Fixes: 72cd4064fcca ("ARM: 8830/1: NOMMU: Toggle only bits in EXC_RETURN we are really care of")
Reported-by: afzal mohammed &lt;afzal.mohd.ma@gmail.com&gt;
Tested-by: afzal mohammed &lt;afzal.mohd.ma@gmail.com&gt;
Signed-off-by: Vladimir Murzin &lt;vladimir.murzin@arm.com&gt;
Signed-off-by: Russell King &lt;rmk+kernel@armlinux.org.uk&gt;
</content>
</entry>
<entry>
<title>treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500</title>
<updated>2019-06-19T15:09:55+00:00</updated>
<author>
<name>Thomas Gleixner</name>
<email>tglx@linutronix.de</email>
</author>
<published>2019-06-04T08:11:33+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=d2912cb15bdda8ba4a5dd73396ad62641af2f520'/>
<id>d2912cb15bdda8ba4a5dd73396ad62641af2f520</id>
<content type='text'>
Based on 2 normalized pattern(s):

  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license version 2 as
  published by the free software foundation

  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license version 2 as
  published by the free software foundation #

extracted by the scancode license scanner the SPDX license identifier

  GPL-2.0-only

has been chosen to replace the boilerplate/reference in 4122 file(s).

Signed-off-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Reviewed-by: Enrico Weigelt &lt;info@metux.net&gt;
Reviewed-by: Kate Stewart &lt;kstewart@linuxfoundation.org&gt;
Reviewed-by: Allison Randal &lt;allison@lohutok.net&gt;
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>ARM: make lookup_processor_type() non-__init</title>
<updated>2018-11-12T10:51:01+00:00</updated>
<author>
<name>Russell King</name>
<email>rmk+kernel@armlinux.org.uk</email>
</author>
<published>2018-07-19T10:42:36+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=899a42f836678a595f7d2bc36a5a0c2b03d08cbc'/>
<id>899a42f836678a595f7d2bc36a5a0c2b03d08cbc</id>
<content type='text'>
Move lookup_processor_type() out of the __init section so it is callable
from (eg) the secondary startup code during hotplug.

Reviewed-by: Julien Thierry &lt;julien.thierry@arm.com&gt;
Signed-off-by: Russell King &lt;rmk+kernel@armlinux.org.uk&gt;
</content>
</entry>
<entry>
<title>ARM: 8745/1: get rid of __memzero()</title>
<updated>2018-01-21T15:37:56+00:00</updated>
<author>
<name>Nicolas Pitre</name>
<email>nicolas.pitre@linaro.org</email>
</author>
<published>2018-01-19T17:17:46+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=ff5fdafc9e9702846480e0cea55ba861f72140a2'/>
<id>ff5fdafc9e9702846480e0cea55ba861f72140a2</id>
<content type='text'>
The __memzero assembly code is almost identical to memset's except for
two orr instructions. The runtime performance of __memzero(p, n) and
memset(p, 0, n) is accordingly almost identical.

However, the memset() macro that guards against a zero length and
diverts compile-time constant-zero fills to __memzero interferes
with compiler optimizations.
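
For reference, the removed wrapper had roughly this shape (a
paraphrase, not the exact historical definition; the inner memset()
call is safe because the preprocessor never re-expands a macro's own
name):

#define memset(p, v, n)                                         \
        ({                                                      \
                void *__p = (p);                                \
                size_t __n = (n);                               \
                if (__n != 0) {                                 \
                        if (__builtin_constant_p(v) &amp;&amp; (v) == 0) \
                                __memzero(__p, __n);            \
                        else                                    \
                                memset(__p, (v), __n);          \
                }                                               \
                __p;                                            \
        })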

Arnd found that the test against a zero length brings up some new
warnings with gcc v8:

  https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82103

And successively removing the test against a zero length and then the
call to __memzero produces the following kernel sizes for
defconfig with gcc 6:

    text     data     bss       dec       hex  filename
12248142  6278960  413588  18940690   1210312  vmlinux.orig
12244474  6278960  413588  18937022   120f4be  vmlinux.no_zero_test
12239160  6278960  413588  18931708   120dffc  vmlinux.no_memzero

So it is probably not worth keeping __memzero around given that the
compiler can do a better job at inlining trivial memset(p,0,n) on its
own. And the memset code already handles a zero length just fine.

Suggested-by: Arnd Bergmann &lt;arnd@arndb.de&gt;
Signed-off-by: Nicolas Pitre &lt;nico@linaro.org&gt;
Acked-by: Ard Biesheuvel &lt;ard.biesheuvel@linaro.org&gt;
Acked-by: Arnd Bergmann &lt;arnd@arndb.de&gt;
Signed-off-by: Russell King &lt;rmk+kernel@armlinux.org.uk&gt;
</content>
</entry>
</feed>
