path: root/arch
4 days  Merge tag 'powerpc-7.1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux  (Linus Torvalds)
Pull powerpc fixes from Madhavan Srinivasan:
- Fix KASAN sanitization flag for core_$(BITS).o
- Fixes for handling offset values in pseries htmdump
- Fix interrupt mask in cpm1_gpiochip_add16()
- ps3/pasemi fixes to drop redundant result assignment
- Fixes in papr-hvpipe code path
- powerpc/perf: Update check for PERF_SAMPLE_DATA_SRC marked events

Thanks to Aboorva Devarajan, Athira Rajeev, Christophe Leroy (CS GROUP), Geert Uytterhoeven, Haren Myneni, Krzysztof Kozlowski, Mukesh Kumar Chaurasiya (IBM), Nathan Chancellor, Ritesh Harjani (IBM), Shivani Nittor, Sourabh Jain, Thomas Zimmermann, and Venkat Rao Bagalkote.

* tag 'powerpc-7.1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (21 commits)
  powerpc/pasemi: Drop redundant res assignment
  powerpc/ps3: Drop redundant result assignment
  powerpc/vdso: Drop -DCC_USING_PATCHABLE_FUNCTION_ENTRY from 32-bit flags with clang
  arch/powerpc: Drop CONFIG_FIRMWARE_EDID from defconfig files
  powerpc/perf: Update check for PERF_SAMPLE_DATA_SRC marked events
  powerpc/8xx: Fix interrupt mask in cpm1_gpiochip_add16()
  powerpc/vmx: avoid KASAN instrumentation in enter_vmx_ops() for kexec
  powerpc/kdump: fix KASAN sanitization flag for core_$(BITS).o
  pseries/papr-hvpipe: Fix style and checkpatch issues in enable_hvpipe_IRQ()
  pseries/papr-hvpipe: Refactor and simplify hvpipe_rtas_recv_msg()
  pseries/papr-hvpipe: Kill task_struct pointer from struct hvpipe_source_info
  pseries/papr-hvpipe: Simplify spin unlock usage in papr_hvpipe_handle_release()
  pseries/papr-hvpipe: Fix the usage of copy_to_user()
  pseries/papr-hvpipe: Fix & simplify error handling in papr_hvpipe_init()
  pseries/papr-hvpipe: Fix null ptr deref in papr_hvpipe_dev_create_handle()
  pseries/papr-hvpipe: Prevent kernel stack memory leak to userspace
  pseries/papr-hvpipe: Fix race with interrupt handler
  powerpc/pseries/htmdump: Add memory configuration dump support to htmdump module
  powerpc/pseries/htmdump: Fix the offset value used in htm status dump
  powerpc/pseries/htmdump: Fix the offset value used in processor configuration dump
  ...
4 days  Merge tag 'x86-urgent-2026-05-09' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull x86 fixes from Ingo Molnar:
- Fix memory map enumeration bug in the Xen e820 parsing code (Juergen Gross)
- Re-enable e820 BIOS fallback if e820 table is empty (David Gow)

* tag 'x86-urgent-2026-05-09' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/boot/e820: Re-enable BIOS fallback if e820 table is empty
  x86/xen: Fix a potential problem in xen_e820_resolve_conflicts()
4 days  Merge tag 'perf-urgent-2026-05-09' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull perf events fixes from Ingo Molnar:
- Fix deadlock in the perf_mmap() failure path (Peter Zijlstra)
- Intel ACR (Auto Counter Reload) fixes (Dapeng Mi):
  - Fix validation and configuration of ACR masks
  - Fix ACR rescheduling bug causing stale masks
  - Disable the PMI on ACR-enabled hardware
  - Enable ACR on Panther Cove uarch too

* tag 'perf-urgent-2026-05-09' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  perf/x86/intel: Enable auto counter reload for DMR
  perf/x86/intel: Disable PMI for self-reloaded ACR events
  perf/x86/intel: Always reprogram ACR events to prevent stale masks
  perf/x86/intel: Improve validation and configuration of ACR masks
  perf/core: Fix deadlock in perf_mmap() failure path
5 days  Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds)
Pull arm64 fix from Catalin Marinas:
- ptrace(PTRACE_SETREGSET) fix to zero the target's fpsimd_state rather than the tracer's

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64/fpsimd: ptrace: zero target's fpsimd_state, not the tracer's
6 days  x86/boot/e820: Re-enable BIOS fallback if e820 table is empty  (David Gow)
In commit: 157266edcc56 ("x86/boot/e820: Simplify append_e820_table() and remove restriction on single-entry tables") the check on the number of entries in the e820 table was removed. The intention was to support single-entry maps, but by removing the check entirely, we also skip the fallback (to, e.g., the BIOS 88h function). This means that if no E820 map is passed in from the bootloader (which is the case on some bootloaders, like linld), we end up with an empty memory map, and the kernel fails to boot (either by deadlocking on OOM, or by failing to allocate the real mode trampoline, or similar). Re-instate the check in append_e820_table(), but only check that nr_entries is non-zero. This allows e820__memory_setup_default() to fall back to other memory size sources, and doesn't affect e820__memory_setup_extended(), as the latter ignores the return value from append_e820_table(). In doing so, we also update the return values to be proper error codes, with -ENOENT for this case (there are no entries), and -EINVAL for the case where an entry appears invalid. Given none of the callers check the actual value -- just whether it's nonzero -- this is largely aesthetic in practice. Tested against linld, and the kernel boots again fine. [ mingo: Readability edits to the comment and the changelog. ] Fixes: 157266edcc56 ("x86/boot/e820: Simplify append_e820_table() and remove restriction on single-entry tables") Signed-off-by: David Gow <david@davidgow.net> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Andy Shevchenko <andriy.shevchenko@intel.com> Cc: stable@vger.kernel.org Cc: Arnd Bergmann <arnd@kernel.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Link: https://patch.msgid.link/20260416065746.1896647-1-david@davidgow.net
7 days  Merge tag 'parisc-for-7.1-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux  (Linus Torvalds)
Pull parisc fixes from Helge Deller:
- Revert "parisc: led: fix reference leak on failed device registration"
- Fix build failures introduced when allowing to build 32-/64-bit only VDSO
- Switch to dynamic parisc root device to avoid upcoming warnings
- Fix IRQ leak in LASI driver

* tag 'parisc-for-7.1-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux:
  parisc: Fix IRQ leak in LASI driver
  parisc: Fix 64-bit kernel build when CONFIG_COMPAT=n
  parisc: Fix build failure for 32-bit kernel with PA2.0 instruction set
  parisc: drivers: switch to dynamic root device
  Revert "parisc: led: fix reference leak on failed device registration"
7 days  Merge tag 'efi-fixes-for-v7.1-1' of git://git.kernel.org/pub/scm/linux/kernel/git/efi/efi  (Linus Torvalds)
Pull EFI fixes from Ard Biesheuvel:
- Fix issues in EFI graceful recovery on x86 introduced by changes to the kernel mode FPU APIs
- I-cache coherency fixes for the LoongArch EFI stub
- Locking fix for EFI pstore
- Code tweak for efivarfs

* tag 'efi-fixes-for-v7.1-1' of git://git.kernel.org/pub/scm/linux/kernel/git/efi/efi:
  x86/efi: Restore IRQ state in EFI page fault handler
  x86/efi: Fix graceful fault handling after FPU softirq changes
  efi/libstub: Synchronize instruction cache after kernel relocation
  efi/loongarch: Implement efi_cache_sync_image()
  efi/libstub: Move efi_relocate_kernel() into its only remaining user
  efi: pstore: Drop efivar lock when efi_pstore_open() returns with an error
  efivarfs: use QSTR() in efivarfs_alloc_dentry
7 days  arm64/fpsimd: ptrace: zero target's fpsimd_state, not the tracer's  (Breno Leitao)
sve_set_common() is the backend for PTRACE_SETREGSET(NT_ARM_SVE) and PTRACE_SETREGSET(NT_ARM_SSVE). Every write in the function operates on the tracee (target) - except a single memset that uses current instead, zeroing the tracer's saved V0-V31 / FPSR / FPCR shadow on every ptrace SETREGSET call. The memset is meant to give the tracee a defined zero register image before the user-supplied payload is copied in (for partial writes, header-only writes, and FPSIMD<->SVE format switches). Aiming it at current both denies the tracee that clean slate and silently corrupts the tracer. The corruption of the tracer's saved FPSIMD state is not always observable. Where the tracer's state is live on a CPU, this may be reused without loading the corrupted state from memory, and will eventually be written back over the corrupted state. Where the tracer's state is saved in SVE_PT_REGS_SVE format, only the FPSR and FPCR are clobbered, and the effective copy of the vectors is in the task's sve_state. Reproducible on an arm64 kernel with SVE: a single-threaded tracer that loads a known pattern into V0-V31, issues PTRACE_SETREGSET(NT_ARM_SVE) on a child, and reads V0-V31 back observes them all zeroed within tens of thousands of iterations when a sibling thread keeps stealing the FPSIMD CPU binding. Fixes: 316283f276eb ("arm64/fpsimd: ptrace: Consistently handle partial writes to NT_ARM_(S)SVE") Cc: <stable@vger.kernel.org> Signed-off-by: Breno Leitao <leitao@debian.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
7 days  Merge tag 'loongarch-fixes-7.1-1' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson  (Linus Torvalds)
Pull LoongArch fixes from Huacai Chen:
"Fix some build and runtime issues after 32BIT Kconfig option enabled, improve the platform-specific PCI controller compatibility, drop custom __arch_vdso_hres_capable(), and fix a lot of KVM bugs"

* tag 'loongarch-fixes-7.1-1' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson:
  LoongArch: KVM: Move unconditional delay into timer clear scenery
  LoongArch: KVM: Fix HW timer interrupt lost when inject interrupt by software
  LoongArch: KVM: Move AVEC interrupt injection into switch loop
  LoongArch: KVM: Use kvm_set_pte() in kvm_flush_pte()
  LoongArch: KVM: Fix missing EMULATE_FAIL in kvm_emu_mmio_read()
  LoongArch: KVM: Cap KVM_CAP_NR_VCPUS by KVM_CAP_MAX_VCPUS
  LoongArch: KVM: Fix "unreliable stack" for kvm_exc_entry
  LoongArch: KVM: Compile switch.S directly into the kernel
  LoongArch: vDSO: Drop custom __arch_vdso_hres_capable()
  LoongArch: Fix potential ADE in loongson_gpu_fixup_dma_hang()
  LoongArch: Use per-root-bridge PCIH flag to skip mem resource fixup
  LoongArch: Fix SYM_SIGFUNC_START definition for 32BIT
  LoongArch: Specify -m32/-m64 explicitly for 32BIT/64BIT
  LoongArch: Make CONFIG_64BIT as the default option
8 days  powerpc/pasemi: Drop redundant res assignment  (Krzysztof Kozlowski)
Return value of pas_add_bridge() is not used, so code can be simplified to fix W=1 clang warnings: arch/powerpc/platforms/pasemi/pci.c:275:6: error: variable 'res' set but not used [-Werror,-Wunused-but-set-variable] Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@oss.qualcomm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20260317130823.240279-4-krzysztof.kozlowski@oss.qualcomm.com
8 days  powerpc/ps3: Drop redundant result assignment  (Krzysztof Kozlowski)
Return value of ps3_start_probe_thread() is not used, so code can be simplified to fix W=1 clang warnings: arch/powerpc/platforms/ps3/device-init.c:953:6: error: variable 'result' set but not used [-Werror,-Wunused-but-set-variable] Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@oss.qualcomm.com> Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20260317130823.240279-3-krzysztof.kozlowski@oss.qualcomm.com
8 days  powerpc/vdso: Drop -DCC_USING_PATCHABLE_FUNCTION_ENTRY from 32-bit flags with clang  (Nathan Chancellor)
After commit 73cdf24e81e4 ("powerpc64: make clang cross-build friendly"), building 64-bit little endian + CONFIG_COMPAT=y with clang results in many warnings along the lines of:

  $ cat arch/powerpc/configs/compat.config
  CONFIG_COMPAT=y
  $ make -skj"$(nproc)" ARCH=powerpc LLVM=1 ppc64le_defconfig compat.config arch/powerpc/kernel/vdso/
  ...
  In file included from <built-in>:4:
  In file included from lib/vdso/gettimeofday.c:6:
  In file included from include/vdso/datapage.h:15:
  In file included from include/vdso/cache.h:5:
  arch/powerpc/include/asm/cache.h:77:8: warning: unknown attribute 'patchable_function_entry' ignored [-Wunknown-attributes]
     77 | static inline u32 l1_icache_bytes(void)
        |        ^~~~~~
  include/linux/compiler_types.h:235:58: note: expanded from macro 'inline'
    235 | #define inline inline __gnu_inline __inline_maybe_unused notrace
        |                                                          ^~~~~~~
  include/linux/compiler_types.h:215:34: note: expanded from macro 'notrace'
    215 | #define notrace __attribute__((patchable_function_entry(0, 0)))
        |                                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  ...

arch/powerpc/Makefile adds -DCC_USING_PATCHABLE_FUNCTION_ENTRY to KBUILD_CPPFLAGS, which is inherited by the 32-bit vDSO. However, the 32-bit little endian target does not support '-fpatchable-function-entry', resulting in the warnings above. Remove -DCC_USING_PATCHABLE_FUNCTION_ENTRY from the 32-bit vDSO flags when building with clang to avoid the warnings. Fixes: 73cdf24e81e4 ("powerpc64: make clang cross-build friendly") Signed-off-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20260311-ppc-vdso-drop-cc-using-pfe-define-clang-v1-1-66c790e22650@kernel.org
8 days  arch/powerpc: Drop CONFIG_FIRMWARE_EDID from defconfig files  (Thomas Zimmermann)
CONFIG_FIRMWARE_EDID=y depends on X86 or EFI_GENERIC_STUB. Neither is true here, so drop the lines from the defconfig files. Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de> Reviewed-by: Christophe Leroy (CS GROUP) <chleroy@kernel.org> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20260401083023.214426-1-tzimmermann@suse.de
8 days  powerpc/perf: Update check for PERF_SAMPLE_DATA_SRC marked events  (Shivani Nittor)
The core-book3s PMU sampling code validates the SIER TYPE field when PERF_SAMPLE_DATA_SRC is requested. The SIER TYPE field indicates the instruction type and is only valid for random sampling (marked events). To handle cases observed where SIER TYPE could be zero even for marked events, validation was added to drop such samples and increment event->lost_samples. However, this validation was applied to all samples, including continuous sampling. In continuous sampling mode, the PMU does not set the SIER TYPE field, so it remains zero. As a result, valid continuous samples were incorrectly treated as invalid and dropped. Fix this by gating the SIER TYPE validation with mark_event, so the check runs only for marked (random) events. Continuous samples now skip this check and are recorded normally in the final data recording path. Fixes: 2ffb26afa642 ("arch/powerpc/perf: Check the instruction type before creating sample with perf_mem_data_src") Signed-off-by: Shivani Nittor <shivani@linux.ibm.com> Reviewed-by: Mukesh Kumar Chaurasiya (IBM) <mkchauras@gmail.com> Reviewed-by: Athira Rajeev <atrajeev@linux.ibm.com> [Maddy: Fixed reviewed-by tag] Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20260421150628.96500-1-shivani@linux.ibm.com
8 days  powerpc/8xx: Fix interrupt mask in cpm1_gpiochip_add16()  (Christophe Leroy (CS GROUP))
Although fsl,cpm1-gpio-irq-mask always contains a 16-bit value, it is a standard u32 OF property, as documented in Documentation/devicetree/bindings/soc/fsl/cpm_qe/gpio.txt. The driver erroneously uses of_property_read_u16(), leading to a mask which is always 0. Fix it by using of_property_read_u32() instead. Fixes: 726bd223105c ("powerpc/8xx: Adding support of IRQ in MPC8xx GPIO") Signed-off-by: Christophe Leroy (CS GROUP) <chleroy@kernel.org> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/bb0b6d6c4543238c38d5d29a776d0674a8c0c180.1776752750.git.chleroy@kernel.org
8 days  powerpc/vmx: avoid KASAN instrumentation in enter_vmx_ops() for kexec  (Sourabh Jain)
The kexec sequence invokes enter_vmx_ops() via copy_page() with the MMU disabled. In this context, code must not rely on normal virtual address translations or trigger page faults. With KASAN enabled, functions get instrumented and may access shadow memory using regular address translation. When executed with the MMU off, this can lead to page faults (bad_page_fault) from which the kernel cannot recover in the kexec path, resulting in a hang. The kexec path sets preempt_count to HARDIRQ_OFFSET before entering the MMU-off copy sequence:

  current_thread_info()->preempt_count = HARDIRQ_OFFSET
  kexec_sequence(..., copy_with_mmu_off = 1)
    -> kexec_copy_flush(image)
       copy_segments()
         -> copy_page(dest, addr)
            bl enter_vmx_ops()
              if (in_interrupt())
                  return 0
            beq .Lnonvmx_copy

Since kexec sets preempt_count to HARDIRQ_OFFSET, in_interrupt() evaluates to true and enter_vmx_ops() returns early. As in_interrupt() (and preempt_count()) are always inlined, mark enter_vmx_ops() with __no_sanitize_address to avoid KASAN instrumentation and shadow memory access with MMU disabled, helping kexec boot fine with KASAN enabled. Reported-by: Aboorva Devarajan <aboorvad@linux.ibm.com> Reviewed-by: Aboorva Devarajan <aboorvad@linux.ibm.com> Tested-by: Aboorva Devarajan <aboorvad@linux.ibm.com> Reviewed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> Signed-off-by: Sourabh Jain <sourabhjain@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20260407124349.1698552-2-sourabhjain@linux.ibm.com
8 days  powerpc/kdump: fix KASAN sanitization flag for core_$(BITS).o  (Sourabh Jain)
KASAN instrumentation is intended to be disabled for the kexec core code, but the existing Makefile entry misses the object suffix. As a result, the flag is not applied correctly to core_$(BITS).o. So when KASAN is enabled, kexec_copy_flush and copy_segments in kexec/core_64.c are instrumented, which can result in accesses to shadow memory via normal address translation paths. Since these run with the MMU disabled, such accesses may trigger page faults (bad_page_fault) that cannot be handled in the kdump path, ultimately causing a hang and preventing the kdump kernel from booting. The same is true for kexec as well, since the same functions are used there. Update the entry to include the “.o” suffix so that KASAN instrumentation is properly disabled for this object file. Fixes: 2ab2d5794f14 ("powerpc/kasan: Disable address sanitization in kexec paths") Reported-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com> Closes: https://lore.kernel.org/all/1dee8891-8bcc-46b4-93f3-fc3a774abd5b@linux.ibm.com/ Cc: stable@vger.kernel.org Reviewed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> Tested-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com> Acked-by: Mahesh Salgaonkar <mahesh@linux.ibm.com> Reviewed-by: Aboorva Devarajan <aboorvad@linux.ibm.com> Tested-by: Aboorva Devarajan <aboorvad@linux.ibm.com> Signed-off-by: Sourabh Jain <sourabhjain@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20260407124349.1698552-1-sourabhjain@linux.ibm.com
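The shape of the Makefile fix can be sketched as follows. This is an illustrative fragment (the surrounding Makefile path is assumed, not quoted from the patch): kbuild's per-object KASAN_SANITIZE_<file>.o variables match on the full object name, so omitting the ".o" suffix leaves the flag with no effect.

```make
# Per-object KASAN control in arch/powerpc/kexec/Makefile (sketch).
# Before: object name does not match, flag silently ignored:
#   KASAN_SANITIZE_core_$(BITS) := n
# After: instrumentation disabled for core_32.o / core_64.o:
KASAN_SANITIZE_core_$(BITS).o := n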
8 days  pseries/papr-hvpipe: Fix style and checkpatch issues in enable_hvpipe_IRQ()  (Ritesh Harjani (IBM))
While at it, let's also fix a similar style issue in the enable_hvpipe_IRQ() function. This also fixes a minor checkpatch warning which I got due to an extra space before " ==". Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/1174f60d0ae128e773dbefd11dd8d46d69e7f50e.1777606826.git.ritesh.list@gmail.com
8 days  pseries/papr-hvpipe: Refactor and simplify hvpipe_rtas_recv_msg()  (Ritesh Harjani (IBM))
Simplify hvpipe_rtas_recv_msg() by removing three levels of nesting...

  if (!ret)
      if (buf)
          if (size < bytes_written)

...the refactored function instead bails out to the "out:" label first, in case of any error. This simplifies the flow. Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/bbe7ddf8b8e25c9be8fc5e2c4aea9e5fca128bf4.1777606826.git.ritesh.list@gmail.com
8 days  pseries/papr-hvpipe: Kill task_struct pointer from struct hvpipe_source_info  (Ritesh Harjani (IBM))
We don't really use task_struct pointer for anything meaningful. So just kill it for now, and we can bring back later if we need this for any future debug purposes. Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/895e061e45cdc95db36fa7f27aa1922b81eed867.1777606826.git.ritesh.list@gmail.com
8 days  pseries/papr-hvpipe: Simplify spin unlock usage in papr_hvpipe_handle_release()  (Ritesh Harjani (IBM))
Once the src_info is removed from the global list, no one can access it. This simplifies the usage of spin_unlock_irqrestore() in papr_hvpipe_handle_release(). Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/4a980331557af3d10aada8576aaa16cddc691c65.1777606826.git.ritesh.list@gmail.com
8 days  pseries/papr-hvpipe: Fix the usage of copy_to_user()  (Ritesh Harjani (IBM))
copy_to_user() returns the number of bytes not copied to the user buffer. If there was an error writing bytes into the user buffer, i.e. if copy_to_user() returns a non-zero value, then we should simply return -EFAULT from the ->read() call. Otherwise, in the non-patched version, we may end up mixing "bytes_not_copied + bytes_copied (HVPIPE_HDR_LEN)" as the return value to the user in the ->read() call. Also let's make sure we clear the hvpipe_status flag if we have consumed the hvpipe msg by making the RTAS call. ret = -EFAULT means copy_to_user() has failed, but that still means the msg was read from the hvpipe; hence for both cases, success & -EFAULT, we should clear the HVPIPE_MSG_AVAILABLE flag in hvpipe_status. Cc: stable@vger.kernel.org Fixes: cebdb522fd3edd1 ("powerpc/pseries: Receive payload with ibm,receive-hvpipe-msg RTAS") Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/8fda3212a1ad48879c174e92f67472d9b9f1c3b7.1777606826.git.ritesh.list@gmail.com
8 days  pseries/papr-hvpipe: Fix & simplify error handling in papr_hvpipe_init()  (Ritesh Harjani (IBM))
Remove the 3-level nesting pattern used to check success return values from function calls:

  ret = enable_hvpipe_IRQ()
  if (!ret)
      ret = set_hvpipe_sys_param(1)
      if (!ret)
          ret = misc_register()

Instead just bail out to "out*:" labels in case of any error. This simplifies the init flow. While at it, let's also fix the following error handling logic: we have already enabled interrupt sources and enabled hvpipe to receive interrupts; if misc_register() fails, we will destroy the workqueue, but the HMC might send us a msg via hvpipe, which will queue work on the workqueue which might be destroyed. So instead, let's reverse the order of enabling set_hvpipe_sys_param(1), and in case of an error let's remove the misc dev by calling misc_deregister(). Cc: stable@vger.kernel.org Fixes: 39a08a4f94980 ("powerpc/pseries: Enable hvpipe with ibm,set-system-parameter RTAS") Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/f2141eafb80e7780395e03aa9a22e8a37be80513.1777606826.git.ritesh.list@gmail.com
8 days  pseries/papr-hvpipe: Fix null ptr deref in papr_hvpipe_dev_create_handle()  (Ritesh Harjani (IBM))
commit 6d3789d347a7 ("papr-hvpipe: convert papr_hvpipe_dev_create_handle() to FD_PREPARE()") changed the create handle to FD_PREPARE(), but it caused a kernel null-ptr-deref because, after the call to retain_and_null_ptr(src_info), src_info is re-used for adding it to the global list. Getting the following kernel panic in papr_hvpipe_dev_create_handle() when trying to add src_info to the list:

  Kernel attempted to write user page (0) - exploit attempt? (uid: 0)
  BUG: Kernel NULL pointer dereference on write at 0x00000000
  Faulting instruction address: 0xc0000000001b44a0
  Oops: Kernel access of bad area, sig: 11 [#1]
  ...
  Call Trace:
    papr_hvpipe_dev_ioctl+0x1f4/0x48c (unreliable)
    sys_ioctl+0x528/0x1064
    system_call_exception+0x128/0x360
    system_call_vectored_common+0x15c/0x2ec

Now, the error handling with FD_PREPARE's file cleanup and __free(kfree) auto cleanup is getting too convoluted. This is mainly because we need to ensure only 1 user gets the srcID handle. To simplify this, we allocate and prepare the src_info at the beginning and add it to the global list under a spinlock after checking that no duplicates exist. This simplifies the error handling: if the FD_ADD fails, we can simply remove the src_info from the list and consume any pending msg in hvpipe, after src_info became visible in the global list. Cc: stable@vger.kernel.org Fixes: 6d3789d347a7 ("papr-hvpipe: convert papr_hvpipe_dev_create_handle() to FD_PREPARE()") Reported-by: Haren Myneni <haren@linux.ibm.com> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/31ad94bc89d44156ee700c5bd006cb47a748e3cb.1777606826.git.ritesh.list@gmail.com
8 days  pseries/papr-hvpipe: Prevent kernel stack memory leak to userspace  (Ritesh Harjani (IBM))
The hdr variable is allocated on the stack and only hdr.version and hdr.flags are initialized explicitly. Because the struct papr_hvpipe_hdr contains reserved padding bytes (reserved[3] and reserved2[40]), these could leak the uninitialized bytes to userspace after copy_to_user(). This patch fixes that by initializing the whole struct to 0. Cc: stable@vger.kernel.org Fixes: cebdb522fd3ed ("powerpc/pseries: Receive payload with ibm,receive-hvpipe-msg RTAS") Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/7bfe03b65a282c856ed8182d1871bb973c0b78f2.1777606826.git.ritesh.list@gmail.com
8 days  pseries/papr-hvpipe: Fix race with interrupt handler  (Ritesh Harjani (IBM))
While executing ->ioctl handler or ->release handler, if an interrupt fires on the same cpu, then we can enter into a deadlock. This patch fixes both these handlers to take spin_lock_irq{save|restore} versions of the lock to prevent this deadlock. Cc: stable@vger.kernel.org Fixes: 814ef095f12c9 ("powerpc/pseries: Add papr-hvpipe char driver for HVPIPE interfaces") Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/e4ed435c44fc191f2eb23c7907ba6f72f193e6aa.1777606826.git.ritesh.list@gmail.com
8 days  powerpc/pseries/htmdump: Add memory configuration dump support to htmdump module  (Athira Rajeev)
The H_HTM (Hardware Trace Macro) hypervisor call has the capability to capture System Memory Configuration. This information helps to understand the address mapping for the partitions in the system. Support dumping system memory configuration from the Hardware Trace Macro (HTM) function via a debugfs interface. Under the debugfs folder "/sys/kernel/debug/powerpc/htmdump", add the file "htmsystem_mem". The interface allows only read of this file, which presents the content of the HTM buffer from the hcall. The 16th offset of the HTM buffer has the value for the number of entries for the array of processors. Use this information to copy data to the debugfs file. Signed-off-by: Athira Rajeev <atrajeev@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20260314132953.27269-1-atrajeev@linux.ibm.com
8 days  powerpc/pseries/htmdump: Fix the offset value used in htm status dump  (Athira Rajeev)
The H_HTM call is invoked using three parameters specifying the address of the buffer, the size of the buffer, and the offset to read from. The offset used was always zero. "offset" is a value from the output buffer header that points to the next entry to dump; zero is the first entry to dump. The next entry is read from the output buffer at byte offset 0x8. Update the htmstatus_read() function to use the right offset. Return when the offset points to -1. Fixes: 627cf584f4c3 ("powerpc/pseries/htmdump: Add htm status support to htmdump module") Signed-off-by: Athira Rajeev <atrajeev@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20260314132432.25581-3-atrajeev@linux.ibm.com
8 days  powerpc/pseries/htmdump: Fix the offset value used in processor configuration dump  (Athira Rajeev)
The H_HTM call is invoked using three parameters specifying the address of the buffer, the size of the buffer, and the offset to read from. The offset used was always zero. "offset" is a value from the output buffer header that points to the next entry to dump; zero is the first entry to dump. The next entry is read from the output buffer at byte offset 0x8. Update the htminfo_read() function to use the right offset. Return when the offset points to -1. Fixes: dea7384e14e7 ("powerpc/pseries/htmdump: Add htm info support to htmdump module") Signed-off-by: Athira Rajeev <atrajeev@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20260314132432.25581-2-atrajeev@linux.ibm.com
8 days  powerpc/pseries/htmdump: Free the global buffers in htmdump module exit  (Athira Rajeev)
The htmdump module uses global memory buffers to capture details like the capabilities and status of the specified HTM, and to read the trace buffer. These are initialized during module init and hence need to be freed in module exit. This patch adds freeing of the memory in module exit. The change also includes a minor clean up of variable names: the read callback for the debugfs interface file saves filp->private_data to a local variable whose name is the same as the global variable name for the memory buffers. Rename these local variables. Signed-off-by: Athira Rajeev <atrajeev@linux.ibm.com> Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com> Link: https://patch.msgid.link/20260314132432.25581-1-atrajeev@linux.ibm.com
8 days  perf/x86/intel: Enable auto counter reload for DMR  (Dapeng Mi)
The Panther Cove µarch starts to support auto counter reload (ACR), but the static_call intel_pmu_enable_acr_event() is not updated for the Panther Cove µarch used by DMR. As a result, auto counter reload is not actually enabled on DMR. Update the static_call intel_pmu_enable_acr_event() in intel_pmu_init_pnc(). Fixes: d345b6bb8860 ("perf/x86/intel: Add core PMU support for DMR") Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: stable@vger.kernel.org Link: https://patch.msgid.link/20260430002558.712334-5-dapeng1.mi@linux.intel.com
8 days  perf/x86/intel: Disable PMI for self-reloaded ACR events  (Dapeng Mi)
On platforms with Auto Counter Reload (ACR) support, such as NVL, a "NMI received for unknown reason 30" warning is observed when running multiple events in a group with ACR enabled:

  $ perf record -e '{instructions/period=20000,acr_mask=0x2/u,cycles/period=40000,acr_mask=0x3/u}' ./test

The warning occurs because the Performance Monitoring Interrupt (PMI) is enabled for the self-reloaded event (the cycles event in this case). According to the Intel SDM, the overflow bit (IA32_PERF_GLOBAL_STATUS.PMCn_OVF) is never set for self-reloaded events. Since the bit is not set, the perf NMI handler cannot identify the source of the interrupt, leading to the "unknown reason" message. Furthermore, enabling PMI for self-reloaded events is unnecessary and can lead to extraneous records that pollute the user's requested data. Disable the interrupt bit for all events configured with ACR self-reload. Fixes: ec980e4facef ("perf/x86/intel: Support auto counter reload") Reported-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: stable@vger.kernel.org Link: https://patch.msgid.link/20260430002558.712334-4-dapeng1.mi@linux.intel.com
8 days  perf/x86/intel: Always reprogram ACR events to prevent stale masks  (Dapeng Mi)
Members of an ACR group are logically linked via a bitmask of their hardware counter indices. If some members of the group are assigned new hardware counters during rescheduling, even events that keep their original counter index must be updated with a new mask. Without this, an event will continue to use a stale acr_mask that references the old indices of its group peers. Ensure all ACR events are reprogrammed during the scheduling path to maintain consistency across the group. Fixes: ec980e4facef ("perf/x86/intel: Support auto counter reload") Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: stable@vger.kernel.org Link: https://patch.msgid.link/20260430002558.712334-3-dapeng1.mi@linux.intel.com
8 daysperf/x86/intel: Improve validation and configuration of ACR masksDapeng Mi
There are currently several issues with the user-space ACR mask validation and configuration. - The validation of the user-space ACR mask (attr.config2) is incomplete, e.g. the ACR mask could include an index that belongs to another ACR event group, but this is not validated. - An early return on an invalid ACR mask caused all subsequent ACR groups to be skipped. - The stale hardware ACR mask (hw.config1) is not cleared before setting the new hardware ACR mask. The following changes address all of the above issues. - Figure out the event index group of an ACR group. Any bits in the user-space mask not present in the index group are now dropped. - Instead of returning early on invalid bits, drop only the invalid portions and continue iterating through all ACR events to ensure full configuration. - Explicitly clear the stale hardware ACR mask for each event before writing the new configuration. In addition, a non-leader member of an ACR group could in theory be disabled. This could cause bit-shifting errors in the acr_mask of the remaining group members. But since ACR sampling requires all events to be active, this should not be a big concern in real use cases. Add a "FIXME" comment to note this risk. Fixes: ec980e4facef ("perf/x86/intel: Support auto counter reload") Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: stable@vger.kernel.org Link: https://patch.msgid.link/20260430002558.712334-2-dapeng1.mi@linux.intel.com
8 daysx86/xen: Fix a potential problem in xen_e820_resolve_conflicts()Juergen Gross
When fixing a conflict in xen_e820_resolve_conflicts(), the loop over the E820 map entries needs to be restarted, as the E820 map will have been modified by the fix. Otherwise entries might be skipped by accident. Fixes: be35d91c8880 ("xen: tolerate ACPI NVS memory overlapping with Xen allocated memory") Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: xen-devel@lists.xenproject.org Link: https://patch.msgid.link/20260505080653.197775-1-jgross@suse.com
8 daysx86/efi: Restore IRQ state in EFI page fault handlerArd Biesheuvel
The kernel's softirq API does not permit re-enabling softirqs while IRQs are disabled. The reason for this is that local_bh_enable() will not only re-enable delivery of softirqs over the back of IRQs, it will also handle any pending softirqs immediately, regardless of whether IRQs are enabled at that point. For this reason, commit d02198550423 ("x86/fpu: Improve crypto performance by making kernel-mode FPU reliably usable in softirqs") disables softirqs only when IRQs are enabled, as it is not permitted otherwise, but also unnecessary, given that asynchronous softirq delivery never happens to begin with while IRQs are disabled. However, this does mean that entering a kernel mode FPU section with IRQs enabled and leaving it with IRQs disabled leads to problems, as identified by Sashiko [0]: the EFI page fault handler is called from page_fault_oops() with IRQs disabled, and thus ends the kernel mode FPU section with IRQs disabled as well, regardless of whether IRQs were enabled when it was started. This may result in schedule() being called with a non-zero preempt_count, causing a BUG(). So take care to re-enable IRQs when handling any EFI page faults if they were taken with IRQs enabled. [0] https://sashiko.dev/#/patchset/20260430074107.27051-1-ivan.hu%40canonical.com Cc: Eric Biggers <ebiggers@kernel.org> Cc: Ivan Hu <ivan.hu@canonical.com> Cc: x86@kernel.org Cc: <stable@vger.kernel.org> Fixes: d02198550423 ("x86/fpu: Improve crypto performance by making kernel-mode FPU reliably usable in softirqs") Reviewed-by: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
9 daysx86/efi: Fix graceful fault handling after FPU softirq changesIvan Hu
Since commit d02198550423 ("x86/fpu: Improve crypto performance by making kernel-mode FPU reliably usable in softirqs"), kernel_fpu_begin() calls fpregs_lock() which uses local_bh_disable() instead of the previous preempt_disable(). This sets SOFTIRQ_OFFSET in preempt_count during the entire EFI runtime service call, causing in_interrupt() to return true in normal task context. The graceful page fault handler efi_crash_gracefully_on_page_fault() uses in_interrupt() to bail out for faults in real interrupt context. With SOFTIRQ_OFFSET now set, the handler always bails out, leaving EFI firmware page faults unhandled. This escalates to die() which also sees in_interrupt() as true and calls panic("Fatal exception in interrupt"), resulting in a hard system freeze. On systems with buggy firmware that triggers page faults during EFI runtime calls (e.g., accessing unmapped memory in GetTime()), this causes an unrecoverable hang instead of the expected graceful EFI_ABORTED recovery. Fix by replacing in_interrupt() with !in_task(). This preserves the original intent of bailing for interrupts or NMI faults, while no longer falsely triggering from the FPU code path's local_bh_disable(). Fixes: d02198550423 ("x86/fpu: Improve crypto performance by making kernel-mode FPU reliably usable in softirqs") Cc: <stable@vger.kernel.org> Signed-off-by: Ivan Hu <ivan.hu@canonical.com> [ardb: Sashiko spotted that using 'in_hardirq() || in_nmi()' leaves a window where a softirq may be taken before fpregs_lock() is called, but after efi_rts_work.efi_rts_id has been assigned, and any page faults occurring in that window will then be misidentified as having been caused by the firmware. Instead, use !in_task(), which incorporates in_serving_softirq(). ] Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
10 daysLoongArch: KVM: Move unconditional delay into timer clear scenarioBibo Mao
When a timer interrupt arrives in the guest kernel, the guest kernel clears the timer interrupt and programs the timer with the next incoming event. During this stage, the timer tick is -1 and the timer interrupt status is disabled in the ESTAT register. The KVM hypervisor needs to write zero to the timer tick register, wait for the timer interrupt injection from the HW side, and then clear the timer interrupt. So there is a 2-cycle delay in the KVM hypervisor to emulate this scenario, and the delay is unnecessary if there is no need to clear the timer interrupt. Move the 2-cycle delay into the timer clear scenario, add a timer ESTAT check after the delay, and set the maximum timer expire value if the timer interrupt still has not arrived. Cc: stable@vger.kernel.org Signed-off-by: Bibo Mao <maobibo@loongson.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
10 daysLoongArch: KVM: Fix HW timer interrupt lost when inject interrupt by softwareBibo Mao
With a passthrough HW timer, the timer interrupt is injected by HW. When injecting an emulated CPU interrupt by software, such as SIP0/SIP1/IPI, the HW timer interrupt may be lost. Check whether there is a timer tick value inversion before and after injecting the emulated CPU interrupt by software; checking whether the timer is enabled by reading the timer cfg register is skipped. If the timer tick value is detected to have changed, the timer must be enabled, so in that case inject a timer interrupt by software. Cc: <stable@vger.kernel.org> Fixes: f45ad5b8aa93 ("LoongArch: KVM: Implement vcpu interrupt operations") Signed-off-by: Bibo Mao <maobibo@loongson.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
10 daysLoongArch: KVM: Move AVEC interrupt injection into switch loopBibo Mao
When the AVEC interrupt controller is emulated in user space, the AVEC interrupt is injected by software, like the SIP0/SIP1/TI/IPI interrupts. So also move the AVEC interrupt injection into the switch loop. Cc: stable@vger.kernel.org Signed-off-by: Bibo Mao <maobibo@loongson.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
10 daysLoongArch: KVM: Use kvm_set_pte() in kvm_flush_pte()Tao Cui
kvm_flush_pte() is the only place that assigns *pte directly instead of using the kvm_set_pte() wrapper. Use the wrapper for consistency with the rest of the file. No functional change intended. Cc: stable@vger.kernel.org Reviewed-by: Bibo Mao <maobibo@loongson.cn> Signed-off-by: Tao Cui <cuitao@kylinos.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
10 daysLoongArch: KVM: Fix missing EMULATE_FAIL in kvm_emu_mmio_read()Tao Cui
In the ldptr (0x24...0x27) opcode decoding path, the default case only breaks out without setting the "ret" value to EMULATE_FAIL. This leaves run->mmio.len uninitialized (stale from a previous MMIO operation) while the "ret" value remains EMULATE_DO_MMIO, causing the code to proceed with an incorrect MMIO length. Add "ret = EMULATE_FAIL" to match the other default branches in the same function (e.g. the 0x28...0x2e and 0x38 cases). Cc: stable@vger.kernel.org Reviewed-by: Bibo Mao <maobibo@loongson.cn> Signed-off-by: Tao Cui <cuitao@kylinos.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
10 daysLoongArch: KVM: Cap KVM_CAP_NR_VCPUS by KVM_CAP_MAX_VCPUSQiang Ma
It doesn't make sense to return the recommended maximum number of vCPUs which exceeds the maximum possible number of vCPUs. Other architectures have already done this, such as commit 57a2e13ebdda ("KVM: MIPS: Cap KVM_CAP_NR_VCPUS by KVM_CAP_MAX_VCPUS") Cc: stable@vger.kernel.org Reviewed-by: Bibo Mao <maobibo@loongson.cn> Signed-off-by: Qiang Ma <maqianga@uniontech.com> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
10 daysLoongArch: KVM: Fix "unreliable stack" for kvm_exc_entryXianglai Li
Insert the appropriate UNWIND hint into the kvm_exc_entry assembly function to guide the generation of correct ORC table entries, thereby solving the timeout problem ("unreliable stack") while loading the livepatch-sample module on a physical machine running virtual machines with multiple vcpus. Cc: stable@vger.kernel.org Signed-off-by: Xianglai Li <lixianglai@loongson.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
10 daysLoongArch: KVM: Compile switch.S directly into the kernelXianglai Li
If switch.S is compiled directly into the kernel, the address of the kvm_exc_entry function is guaranteed to be within the DMW memory area, so there is no longer any need to perform a copy relocation of kvm_exc_entry. Compile switch.S directly into the kernel and remove the copy relocation logic for the kvm_exc_entry function. Cc: stable@vger.kernel.org Signed-off-by: Xianglai Li <lixianglai@loongson.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
10 daysLoongArch: vDSO: Drop custom __arch_vdso_hres_capable()Thomas Weißschuh
The custom definition is identical to the generic fallback one. So remove it. Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
10 daysLoongArch: Fix potential ADE in loongson_gpu_fixup_dma_hang()Wentao Guan
The switch case in loongson_gpu_fixup_dma_hang() may match neither DC2 nor DC3, in which case readl(crtc_reg) accesses a random address, because "device" comes from "base+PCI_DEVICE_ID" and "base" comes from "pdev->devfn+1". This goes wrong when a discrete GPU is inserted on my platform: lspci -tv -[0000:00]-+-00.0 Loongson Technology LLC Hyper Transport Bridge Controller ... +-06.0 Loongson Technology LLC LG100 GPU +-06.2 Loongson Technology LLC Device 7a37 ... Add a default switch case to fix the panic as below: Kernel ade access[#1]: CPU: 0 PID: 1 Comm: swapper/0 Not tainted 6.6.136-loong64-desktop-hwe+ #4 pc 90000000017e5534 ra 90000000017e54c0 tp 90000001002f8000 sp 90000001002fb6c0 a0 80000efe00003100 a1 0000000000003100 a2 0000000000000000 a3 0000000000000002 a4 90000001002fb6b4 a5 900000087cdb58fd a6 90000000027af000 a7 0000000000000001 t0 00000000000085b9 t1 000000000000ffff t2 0000000000000000 t3 0000000000000000 t4 fffffffffffffffd t5 00000000fffb6d9c t6 0000000000083b00 t7 00000000000070c0 t8 900000087cdb4d94 u0 900000087cdb58fd s9 90000001002fb826 s0 90000000031c12c8 s1 7fffffffffffff00 s2 90000000031c12d0 s3 0000000000002710 s4 0000000000000000 s5 0000000000000000 s6 9000000100053000 s7 7fffffffffffff00 s8 90000000030d4000 ra: 90000000017e54c0 loongson_gpu_fixup_dma_hang+0x40/0x210 ERA: 90000000017e5534 loongson_gpu_fixup_dma_hang+0xb4/0x210 CRMD: 000000b0 (PLV0 -IE -DA +PG DACF=CC DACM=CC -WE) PRMD: 00000004 (PPLV0 +PIE -PWE) EUEN: 00000000 (-FPE -SXE -ASXE -BTE) ECFG: 00071c1d (LIE=0,2-4,10-12 VS=7) ESTAT: 00480000 [ADEM] (IS= ECode=8 EsubCode=1) BADV: 7fffffffffffff00 PRID: 0014d000 (Loongson-64bit, Loongson-3A6000-HV) Modules linked in: Process swapper/0 (pid: 1, threadinfo=(____ptrval____), task=(____ptrval____)) Stack : 0000000000000006 90000001002fb778 90000001002fb704 0000000000000007 0000000016a65700 90000000017e5690 000000000000ffff ffffffffffffffff 900000000209f7c0 9000000100053000 900000000209f7a8 9000000000eebc08 0000000000000000 0000000000000000 0000000000000006
90000001002fb778 90000001000530b8 90000000027af000 0000000000000000 9000000100054000 9000000100053000 9000000000ebb70c 9000000100004c00 9000000004000001 90000001002fb7e4 bae765461f31cb12 0000000000000000 0000000000000000 0000000000000006 90000000027af000 0000000000000030 90000000027af000 900000087cd6f800 9000000100053000 0000000000000000 9000000000ebc560 7a2500147cdaf720 bae765461f31cb12 0000000000000001 0000000000000030 ... Call Trace: [<90000000017e5534>] loongson_gpu_fixup_dma_hang+0xb4/0x210 [<9000000000eebc08>] pci_fixup_device+0x108/0x280 [<9000000000ebb70c>] pci_setup_device+0x24c/0x690 [<9000000000ebc560>] pci_scan_single_device+0xe0/0x140 [<9000000000ebc684>] pci_scan_slot+0xc4/0x280 [<9000000000ebdd00>] pci_scan_child_bus_extend+0x60/0x3f0 [<9000000000f5bc94>] acpi_pci_root_create+0x2b4/0x420 [<90000000017e5e74>] pci_acpi_scan_root+0x2d4/0x440 [<9000000000f5b02c>] acpi_pci_root_add+0x21c/0x3a0 [<9000000000f4ee54>] acpi_bus_attach+0x1a4/0x3c0 [<90000000010e200c>] device_for_each_child+0x6c/0xe0 [<9000000000f4bbf4>] acpi_dev_for_each_child+0x44/0x70 [<9000000000f4ef40>] acpi_bus_attach+0x290/0x3c0 [<90000000010e200c>] device_for_each_child+0x6c/0xe0 [<9000000000f4bbf4>] acpi_dev_for_each_child+0x44/0x70 [<9000000000f4ef40>] acpi_bus_attach+0x290/0x3c0 [<9000000000f5211c>] acpi_bus_scan+0x6c/0x280 [<900000000189c028>] acpi_scan_init+0x194/0x310 [<900000000189bc6c>] acpi_init+0xcc/0x140 [<9000000000220cdc>] do_one_initcall+0x4c/0x310 [<90000000018618fc>] kernel_init_freeable+0x258/0x2d4 [<900000000184326c>] kernel_init+0x28/0x13c [<9000000000222008>] ret_from_kernel_thread+0xc/0xa4 Cc: stable@vger.kernel.org Fixes: 95db0c9f526d ("LoongArch: Workaround LS2K/LS7A GPU DMA hang bug") Link: https://gist.github.com/opsiff/ebf2dac51b4013d22462f2124c55f807 Link: https://gist.github.com/opsiff/a62f2a73db0492b3c49bf223a339b133 Signed-off-by: Wentao Guan <guanwentao@uniontech.com> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
10 daysLoongArch: Use per-root-bridge PCIH flag to skip mem resource fixupHuacai Chen
When firmware enables 64-bit PCI host bridge support, some root bridges already provide valid 64-bit mem resource windows through ACPI. In this case, the LoongArch-specific mem resource high-bits fixup in acpi_prepare_root_resources() should not be applied unconditionally. Otherwise, the kernel may override the native resource layout derived from firmware, and later BAR assignment can fail to place device BARs into the intended 64-bit address space correctly. Add a per-root-bridge ACPI flag, PCIH, and evaluate it from the current root bridge device scope. When PCIH is set, skip the mem resource high- bits fixup path and let the kernel use the firmware-provided resource description directly. When PCIH is absent or cleared, keep the existing behavior and continue filling the high address bits from the host bridge address. This makes the behavior per-root-bridge configurable and avoids breaking valid 64-bit BAR space allocation on bridges whose 64-bit windows have already been fully described by firmware. Cc: stable@vger.kernel.org Suggested-by: Chao Li <lichao@loongson.cn> Tested-by: Dongyan Qian <qiandongyan@loongson.cn> Signed-off-by: Dongyan Qian <qiandongyan@loongson.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
10 daysLoongArch: Fix SYM_SIGFUNC_START definition for 32BITHuacai Chen
The SYM_SIGFUNC_START definition should match sigcontext, in which the GPRs are 8 bytes long for both 32BIT and 64BIT. So replace SZREG with 8 to fix it. Cc: stable@vger.kernel.org Fixes: e4878c37f6679fde ("LoongArch: vDSO: Emit GNU_EH_FRAME correctly") Suggested-by: Xi Ruoyao <xry111@xry111.site> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
10 daysLoongArch: Specify -m32/-m64 explicitly for 32BIT/64BITHuacai Chen
A Clang/LLVM build needs -m32/-m64 to switch triple variants (i.e. the --target=xxx parameter); otherwise we get build errors for CONFIG_32BIT. GCC doesn't support -m32/-m64 for now, but may support them in the future, so use cc-option to specify them. Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202604232041.ESJDwVG4-lkp@intel.com/ Suggested-by: Nathan Chancellor <nathan@kernel.org> Tested-by: WANG Rui <wangrui@loongson.cn> Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>