|
Drop the now single-use dsb_vblank_delay() helper and inline its logic
directly into intel_dsb_wait_for_delayed_vblank().
This helps keep all VRR-related wait logic in one place.
v2: Use intel_scanlines_to_usecs() in intel_dsb_wait_usec(). (Ville)
Signed-off-by: Ankit Nautiyal <ankit.k.nautiyal@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://lore.kernel.org/r/20250925022352.3129859-1-ankit.k.nautiyal@intel.com
|
|
The helper intel_vrr_vblank_delay() was used to keep track of the SCL
lines + the extra vblank delay required for ICL/TGL.
This was used to wait for sufficient lines for:
- the push send bit to clear in the VRR case
- the vblank evasion to delay the commit.
For the first case we are using the safe window scanline wait, and with that
we just need to wait for the SCL lines; we do not need to wait for the extra
vblank delay required for ICL/TGL. For the second case, we actually
do not need to wait for extra lines before the undelayed vblank, if we
are already in the safe window.
To sum up, SCL lines is sufficient for both cases.
So drop the helper intel_vrr_vblank_delay() and just use
crtc_state->set_context_latency instead.
Signed-off-by: Ankit Nautiyal <ankit.k.nautiyal@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://lore.kernel.org/r/20250924141542.3122126-10-ankit.k.nautiyal@intel.com
|
|
The maximum guardband value is constrained by two factors:
- The actual vblank length minus set context latency (SCL)
- The hardware register field width:
  - 8 bits for ICL/TGL (VRR_CTL_PIPELINE_FULL_MASK -> max 255)
  - 16 bits for ADL+ (XELPD_VRR_CTL_VRR_GUARDBAND_MASK -> max 65535)
Remove the #FIXME and clamp the guardband to the maximum allowed value.
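As a hedged sketch of the clamping described above, using the helper names
from the changelog below (the platform check and bodies are assumptions, not
the actual implementation):

  /* Hardware limit: width of the guardband/pipeline-full register field. */
  static int intel_vrr_max_hw_guardband(struct intel_crtc_state *crtc_state)
  {
          struct intel_display *display = to_intel_display(crtc_state);

          if (DISPLAY_VER(display) >= 13)
                  return REG_FIELD_MAX(XELPD_VRR_CTL_VRR_GUARDBAND_MASK);
          else
                  return REG_FIELD_MAX(VRR_CTL_PIPELINE_FULL_MASK);
  }

  /* Overall limit: the smaller of the vblank-based and hardware limits. */
  static int intel_vrr_max_guardband(struct intel_crtc_state *crtc_state)
  {
          return min(intel_vrr_max_vblank_guardband(crtc_state),
                     intel_vrr_max_hw_guardband(crtc_state));
  }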
v2:
- Use REG_FIELD_MAX(). (Ville)
- Separate out functions for intel_vrr_max_guardband(),
intel_vrr_max_vblank_guardband(). (Ville)
v3:
- Fix Typo: Add the missing adjusted_mode->crtc_vdisplay in guardband
computation. (Ville)
- Refactor intel_vrr_max_hw_guardband() and use else for consistency.
(Ville)
v4:
- Drop max_guardband from intel_vrr_max_hw_guardband(). (Ville)
Signed-off-by: Ankit Nautiyal <ankit.k.nautiyal@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> (#v2)
Link: https://lore.kernel.org/r/20250924141542.3122126-9-ankit.k.nautiyal@intel.com
|
|
Introduce the REG_FIELD_MAX() macro as a local wrapper around FIELD_MAX() to return
the maximum value representable by a bit mask. The value is cast to u32
for consistency with other REG_* macros and assumes the bitfield fits
within 32 bits.
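For illustration, a minimal form of such a wrapper, assuming it sits next to
the other REG_* helpers (FIELD_MAX() comes from <linux/bitfield.h>):

  /* Maximum value representable by the register bitfield @__mask, as u32. */
  #define REG_FIELD_MAX(__mask)   ((u32)FIELD_MAX(__mask))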
v2: Use __mask as macro argument aligning with other macros. (Ville)
Signed-off-by: Ankit Nautiyal <ankit.k.nautiyal@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com>
Link: https://lore.kernel.org/r/20250924141542.3122126-8-ankit.k.nautiyal@intel.com
|
|
Until LNL, intel_dsb_wait_vblanks() used to wait for the undelayed vblank
start. However, from PTL onwards, it waits for the start of the
safe-window defined by the number of lines programmed in the register
TRANS_SET_CONTEXT_LATENCY. This change was introduced to move the SCL
window out of the vblank region, supporting modes with higher refresh
rates and smaller vblanks. It introduces a "safe window": a
scanline range from (undelayed vblank - SCL) to (delayed vblank - SCL).
As a result, on PTL+ platforms, the DSB wait for vblank completes exactly
SCL lines earlier than the undelayed vblank start (safe window start).
If the flip occurs in the active region and the push happens before the
vmin decision boundary, the DSB wait fires early, and the push is sent
inside this safe window. In such cases, the push bit is cleared at the
delayed vblank, but our wait logic does not account for the early trigger,
leading to DSB poll errors.
To fix this, we add an explicit wait for the end of the safe window, i.e.,
the scanline range from (undelayed vblank - SCL) to (delayed vblank - SCL).
Once past this window, we are exactly SCL lines away from the delayed
vblank, and our existing wait logic works as intended.
This additional wait is only effective if the push occurs before the vmin
decision boundary. If the push happens after the boundary, the hardware
already guarantees we're SCL lines away from the delayed vblank, and the
extra wait becomes a no-op.
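For illustration, the added wait boils down to a scanline-range wait over
that window before the existing delayed-vblank wait (helper names taken from
the notes below; the exact call site and signature are assumed):

  /* Wait until the scanline has moved past the vmin safe window. */
  intel_dsb_wait_scanline_out(state, dsb,
                              intel_vrr_safe_window_start(crtc_state),
                              intel_vrr_vmin_safe_window_end(crtc_state));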
v2:
- Use helpers for safe window start/end. (Ville)
- Move the extra wait inside the helper to wait for delayed vblank. (Ville)
- Update the commit message.
v3:
- Add more documentation for explanation for the wait. (Ville)
- Rename intel_vrr_vmin_safe_window_start/end as this is vmin safe
window. (Ville)
- Minor refactoring to align with the code. (Ville)
- Update the commit message for more clarity.
v4:
- Retain name for intel_vrr_safe_window_start as it doesn't change with
vmin/vmax etc. (Ville)
Signed-off-by: Ankit Nautiyal <ankit.k.nautiyal@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://lore.kernel.org/r/20250924141542.3122126-7-ankit.k.nautiyal@intel.com
|
|
The helper intel_dsb_wait_vblank_delay() is used in DSB to wait for the
delayed vblank after the send push operation. Rename it to
intel_dsb_wait_for_delayed_vblank() to align with the semantics.
v2: Rename to intel_dsb_wait_vblank_delay instead of the proposed SCL
semantics, as this will not only be about SCL lines with different timing
generator and different refresh rate modes. (Ville)
Signed-off-by: Ankit Nautiyal <ankit.k.nautiyal@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://lore.kernel.org/r/20250924141542.3122126-6-ankit.k.nautiyal@intel.com
|
|
For now the guardband is equal to the vblank length, so ideally it should be
computed as the difference between the vmin vtotal and vactive. However,
since a few lines are reserved as SCL, we need to account for them while
computing the guardband.
Since the vblank start is moved by SCL lines from the vactive, the delta
between the vmin vtotal and new vblank start was used to account for this.
Now that SCL is explicitly tracked using the `set_context_latency` member,
use it directly in the guardband calculation.
In the future, when the guardband is shortened or optimized, we may need
to factor in both the change in the vblank start and SCL lines. For now,
explicitly accounting for SCL is sufficient.
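Schematically, the computation described above becomes the following (field
names and exact arithmetic are assumptions, the real code may differ):

  crtc_state->vrr.guardband = crtc_state->vrr.vmin -
          adjusted_mode->crtc_vdisplay - crtc_state->set_context_latency;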
v2: Fix typo: replace adjusted_mode->vdisplay with
adjusted_mode->crtc_vdisplay. (Ville)
Signed-off-by: Ankit Nautiyal <ankit.k.nautiyal@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://lore.kernel.org/r/20250924141542.3122126-5-ankit.k.nautiyal@intel.com
|
|
The helper intel_vrr_real_vblank_delay() was added to account for the
SCL lines for TGL where we do not have the TRANS_SET_CONTEXT_LATENCY.
Now that we are already tracking the SCL with the new member
`set_context_latency`, use it directly instead of the helper.
Signed-off-by: Ankit Nautiyal <ankit.k.nautiyal@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://lore.kernel.org/r/20250924141542.3122126-4-ankit.k.nautiyal@intel.com
|
|
'Set context latency' (SCL, Window W2) is defined as the number of lines
before the double buffering point, which are required to complete
programming of the registers, typically when DSB is used to program the
display registers.
Since we are not using this window for programming the registers, this
is mostly set to 0, unless there is a requirement in a few cases related
to PSR/PR where the 'set context latency' should be at least 1.
Currently we are using the 'set context latency' (if required) implicitly
by moving the vblank start by the required amount and then measuring the
delay, i.e. the difference between the undelayed vblank start and the delayed vblank
start.
Since our guardband matches the vblank length, this was not a problem as
the difference between the undelayed vblank and delayed vblank was at
the most equal to the 'set context latency' lines.
However, if we want to optimize the guardband, the difference between the
undelayed and the delayed vblank will be large and we cannot use this
difference as the 'set context latency' lines.
To make way for this optimization of guardband, formally introduce the
'set context latency' or SCL and track it as a new member
`set_context_latency` of the structure intel_crtc_state.
Eventually, all references of vblank delay where we mean to use set
context latency will be replaced by this new `set_context_latency`
member.
Note: for TGL the TRANS_SET_CONTEXT_LATENCY doesn't exist to account for
the SCL. However, the VBLANK_START-VACTIVE difference plays an identical
role here, i.e. it can be used to create the SCL window ahead of the
undelayed vblank.
For readback, since there is no specific register to read out the SCL, use
the difference between vblank start and vactive to populate the new member
for TGL.
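For example, the TGL readback can derive the value directly from the timings
(a sketch only; the field names are the standard drm_display_mode ones):

  /* TGL: no TRANS_SET_CONTEXT_LATENCY register, so infer SCL from timings. */
  crtc_state->set_context_latency =
          adjusted_mode->crtc_vblank_start - adjusted_mode->crtc_vdisplay;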
v2:
- Use u16 for set_context_latency. (Ville)
- s/vblank_delay/set_context_latency. (Ville)
- Meld the changes for TGL with this change. (Ville)
v3:
- Update comment to clarify the TGL case. (Ville)
- Fix typo in commit message.
Signed-off-by: Ankit Nautiyal <ankit.k.nautiyal@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://lore.kernel.org/r/20250924141542.3122126-3-ankit.k.nautiyal@intel.com
|
|
Rename intel_psr_min_vblank_delay to intel_psr_min_set_context_latency
to reflect that it provides the minimum value for 'Set context
latency' (SCL or Window W2) for PSR/Panel Replay to work correctly across
different platforms.
Signed-off-by: Ankit Nautiyal <ankit.k.nautiyal@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://lore.kernel.org/r/20250924141542.3122126-2-ankit.k.nautiyal@intel.com
|
|
Designware databook r5.20a, sec 3.10.10.3 documents the 'CFG Shift Feature'
of the internal Address Translation Unit (iATU). When this feature is
enabled, it shifts/maps the BDF contained in the bits [27:12] of the target
address of the MEM TLP to become the BDF of the CFG TLP. This essentially
implements the Enhanced Configuration Address Mapping (ECAM) mechanism as
defined in PCIe r6.0, sec 7.2.2.
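For reference, the standard ECAM layout that the shifted bits [27:12]
implement looks like this (illustrative helper, not driver code):

  /* ECAM: bus [27:20], device [19:15], function [14:12], register [11:0] */
  static inline u64 ecam_offset(u8 bus, u8 dev, u8 fn, u16 reg)
  {
          return ((u64)bus << 20) | ((u64)dev << 15) | ((u64)fn << 12) | reg;
  }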
Currently, the driver is not making use of this CFG shift feature, thereby
creating the iATU outbound map for each config access to the devices,
causing latency and wasting CPU cycles.
So to avoid this, configure the controller to enable CFG shift feature by
enabling the 'CFG Shift' bit of the 'iATU Control 2 Register'.
As a result of enabling CFG shift (ECAM), there is no longer a need to map
the DBI register space separately as the DBI region falls under the
'config' space used for ECAM (as DBI is used to access the Root Port).
To enable ECAM using CFG shift, the platform has to satisfy the following
requirements:
1. Size of the 'config' memory space to be used as ECAM memory should be
able to accommodate the number of buses defined in the 'bus-range'
property of the host bridge DT node.
2. The 'config' memory space should be 256 MiB aligned. This requirement
comes from PCIe r6.0, sec 7.2.2, which says the base address of ECAM
memory should be aligned to a 2^(n+20) byte address boundary. For the
DWC cores, n is 8, so this results in 2^28 byte alignment requirement.
It should be noted that some DWC vendor glue drivers like pcie-al may use
their own ECAM mechanism. For those controllers, set
'dw_pcie_rp::native_ecam' flag and skip enabling the CFG Shift feature in
the DWC core.
Signed-off-by: Krishna Chaitanya Chundru <krishna.chundru@oss.qualcomm.com>
[mani: code split, reworded subject/description, comment, native_ecam flag]
Signed-off-by: Manivannan Sadhasivam <mani@kernel.org>
Link: https://patch.msgid.link/20250923-controller-dwc-ecam-v10-4-e84390ba75fa@kernel.org
|
|
To support the DWC ECAM mechanism, prepare the driver by performing the
following configurations:
1. Since the ELBI region will be covered by the ECAM 'config' space,
override the 'elbi_base' with the address derived from 'dbi_base' and
the offset from PARF_SLV_DBI_ELBI register.
2. Block the transactions from the host bridge to devices other than Root
Port on the root bus to return all F's. This is required when the 'CFG
Shift Feature' of iATU is enabled.
Signed-off-by: Krishna Chaitanya Chundru <krishna.chundru@oss.qualcomm.com>
[mani: code split, reworded subject/description and comments]
Signed-off-by: Manivannan Sadhasivam <mani@kernel.org>
Link: https://patch.msgid.link/20250923-controller-dwc-ecam-v10-3-e84390ba75fa@kernel.org
|
|
In order to enable the PCIe ECAM mechanism in the DWC driver as per the 'CFG
Shift Feature' documented in Designware databook r5.20a, sec 3.10.10.3,
prepare the driver to handle the one-time iATU setup and to create the ECAM
window.
Signed-off-by: Krishna Chaitanya Chundru <krishna.chundru@oss.qualcomm.com>
[mani: splitted the preparatory code into a separate commit for bisectability]
Signed-off-by: Manivannan Sadhasivam <mani@kernel.org>
Link: https://patch.msgid.link/20250923-controller-dwc-ecam-v10-2-e84390ba75fa@kernel.org
|
|
External Local Bus Interface (ELBI) is an optional register space for all
DWC IPs containing the vendor specific registers. There is no need for the
vendor glue drivers to fetch and map the ELBI region separately.
Hence, optionally fetch and map the resource from DT in the DWC core. This
also warrants dropping the corresponding code from glue drivers. Hence,
drop the ELBI resource fetch and map logic from glue drivers and convert
them to use 'dw_pcie::elbi_base'.
Note that the pcie-qcom-ep driver used devm_pci_remap_cfg_resource() to map
the ELBI resource previously. But it was a mistake since
devm_pci_remap_cfg_resource() should only be used for mapping the PCIe
config space region as it maps the region as Non-Posted. As ELBI is used to
hold vendor specific registers, there is no need to map the region as
Non-Posted. With this conversion, the region will get mapped as normal MMIO
memory.
Suggested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Signed-off-by: Krishna Chaitanya Chundru <krishna.chundru@oss.qualcomm.com>
[mani: removed elbi override, converted glue drivers and reworded description]
Signed-off-by: Manivannan Sadhasivam <mani@kernel.org>
Link: https://patch.msgid.link/20250923-controller-dwc-ecam-v10-1-e84390ba75fa@kernel.org
|
|
Previously tt_core was defined as config1:0-7, which may not cover all
the CPUs sharing an L3C on platforms with more than 8 CPUs per L3C. In
order to support such platforms, extend tt_core to 16 bits; since there is
no spare space in config1, tt_core is moved to config2:0-15.
Though Linux expects users to retrieve the control encoding from sysfs
first for each option, it's possible that a user doesn't follow this and
has hardcoded tt_core in config1. So add a tt_core_deprecated option for
config1:0-7 for backward compatibility.
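A sketch of the resulting format attributes (macro usage per the HiSilicon
uncore PMU convention; exact attribute names are assumed):

  HISI_PMU_FORMAT_ATTR(tt_core, "config2:0-15"),
  HISI_PMU_FORMAT_ATTR(tt_core_deprecated, "config1:0-7"),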
Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Currently, if a user enqueues a work item using schedule_delayed_work(), the
wq used is "system_wq" (a per-CPU wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to
schedule_work(), which uses system_wq, and queue_work(), which again makes
use of WORK_CPU_UNBOUND.
This lack of consistency cannot be addressed without refactoring the API.
alloc_workqueue() treats all queues as per-CPU by default, while unbound
workqueues must opt-in via WQ_UNBOUND.
This default is suboptimal: most workloads benefit from unbound queues,
allowing the scheduler to place worker threads where they’re needed and
reducing noise when CPUs are isolated.
This patch adds a new WQ_PERCPU flag to explicitly request the use of
the per-CPU behavior. Both flags coexist for one release cycle to allow
callers to transition their calls.
Once migration is complete, WQ_UNBOUND can be removed and unbound will
become the implicit default.
With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND),
any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND
must now use WQ_PERCPU.
All existing users have been updated accordingly.
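A representative conversion (the workqueue name and companion flags here are
illustrative):

  /* Before: per-CPU behavior was the implicit default. */
  wq = alloc_workqueue("my_wq", WQ_MEM_RECLAIM, 0);

  /* After: per-CPU behavior is requested explicitly. */
  wq = alloc_workqueue("my_wq", WQ_MEM_RECLAIM | WQ_PERCPU, 0);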
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
Acked-by: Alexander Gordeev <agordeev@linux.ibm.com>
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
|
|
Commit 5c83b07df9c5 ("tpm: Add a driver for Loongson TPM device") has a
semantic conflict with commit 07d8004d6fb9 ("tpm: add bufsiz parameter
in the .send callback"), as the former change was developed against a
tree without the latter change. This results in a build error:
drivers/char/tpm/tpm_loongson.c:48:17: error: initialization of 'int (*)(struct tpm_chip *, u8 *, size_t, size_t)' {aka 'int (*)(struct tpm_chip *, unsigned char *, long unsigned int, long unsigned int)'} from incompatible pointer type 'int (*)(struct tpm_chip *, u8 *, size_t)' {aka 'int (*)(struct tpm_chip *, unsigned char *, long unsigned int)'} [-Wincompatible-pointer-types]
48 | .send = tpm_loongson_send,
| ^~~~~~~~~~~~~~~~~
drivers/char/tpm/tpm_loongson.c:48:17: note: (near initialization for 'tpm_loongson_ops.send')
drivers/char/tpm/tpm_loongson.c:31:12: note: 'tpm_loongson_send' declared here
31 | static int tpm_loongson_send(struct tpm_chip *chip, u8 *buf, size_t count)
| ^~~~~~~~~~~~~~~~~
Add the expected bufsiz parameter to tpm_loongson_send() to resolve the
error.
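The fix amounts to matching the new callback prototype; the parameter
placement shown here follows the expected type in the error above and is an
assumption:

  static int tpm_loongson_send(struct tpm_chip *chip, u8 *buf, size_t bufsiz,
                               size_t count)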
Fixes: 5c83b07df9c5 ("tpm: Add a driver for Loongson TPM device")
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Lee Jones <lee@kernel.org>
|
|
Currently, NETIF_F_TSO_MANGLEID indicates that the inner-most ID can
be mangled. Outer IDs can always be mangled.
Make GSO preserve outer IDs by default, with NETIF_F_TSO_MANGLEID allowing
both inner and outer IDs to be mangled.
This commit also modifies a few drivers that use SKB_GSO_FIXEDID directly.
Signed-off-by: Richard Gobert <richardbgobert@gmail.com>
Reviewed-by: Edward Cree <ecree.xilinx@gmail.com> # for sfc
Reviewed-by: Willem de Bruijn <willemb@google.com>
Link: https://patch.msgid.link/20250923085908.4687-4-richardbgobert@gmail.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
Add support to read module EEPROM for fbnic. Towards this, add required
support to issue a new command to the firmware and to receive the response
to the corresponding command.
Create a local copy of the data in the completion struct before writing to
ethtool_module_eeprom to avoid writing to data in case it is freed. Given
that EEPROM pages are small, the overhead of additional copy is
negligible.
Do not block the API with explicit checks since the API has appropriate checks in
place for length, offset, and page.
Explicitly check bank, page, offset, and length in
fbnic_fw_parse_qsfp_read_resp() to match EEPROM read responses to the
correct request. This is important because if the driver times out waiting
for an EEPROM read response, a subsequent read request with different
values is susceptible to receiving an erroneous response (i.e., the
response to the previous request).
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Mohsin Bashir <mohsin.bashr@gmail.com>
Link: https://patch.msgid.link/20250922231855.3717483-1-mohsin.bashr@gmail.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
When WMAB is called to set the fan mode, the new mode is read from either
bits 0-1 or bits 4-5 (depending on the value of some other EC register).
Thus when WMAB is called with bits 4-5 zeroed and called again with
bits 0-1 zeroed, the second call undoes the effect of the first call.
This causes writes to /sys/devices/platform/lg-laptop/fan_mode to have
no effect (and causes reads to always report a status of zero).
Fix this by calling WMAB once, with the mode set in bits 0,1 and 4,5.
When the fan mode is returned from WMAB it always has this form, so
there is no need to preserve the other bits. As a bonus, the driver
now supports the "Performance" fan mode seen in the LG-provided Windows
control app, which provides less aggressive CPU throttling but louder
fan noise and shorter battery life.
Also, correct the documentation to reflect that 0 corresponds to the
default mode (what the Windows app calls "Optimal") and 1 corresponds
to the silent mode.
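Schematically, the new mode is now written into both bit fields with a
single call (variable names are illustrative):

  /* Put the mode in bits 0-1 and 4-5 so one WMAB call sets both copies. */
  val = (mode & 0x3) | ((mode & 0x3) << 4);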
Fixes: dbf0c5a6b1f8 ("platform/x86: Add LG Gram laptop special features driver")
Link: https://bugzilla.kernel.org/show_bug.cgi?id=204913#c4
Signed-off-by: Daniel Lee <dany97@live.ca>
Link: https://patch.msgid.link/MN2PR06MB55989CB10E91C8DA00EE868DDC1CA@MN2PR06MB5598.namprd06.prod.outlook.com
Reviewed-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
|
|
This code calls kfree_rcu(new_node, rcu) and then dereferences "new_node"
on the next line. Two lines later, we take a mutex, so I don't think this
is an RCU safe region. Re-order it to do the dereferences before queuing
up the free.
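The shape of the fix, in sketch form (the member read here is a
placeholder):

  /* Before: "new_node" is dereferenced after being queued for freeing. */
  kfree_rcu(new_node, rcu);
  rate = new_node->rate;

  /* After: read what is needed first, then queue the free. */
  rate = new_node->rate;
  kfree_rcu(new_node, rcu);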
Fixes: 68fbff68dbea ("octeontx2-pf: Add police action for TC flower")
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev>
Link: https://patch.msgid.link/aNKCL1jKwK8GRJHh@stanley.mountain
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
Commit de8548813824 ("drm/panthor: Add the scheduler logical block")
handled destruction of a group's queues' drm scheduler entities early
into the group destruction procedure.
However, that races with the group submit ioctl, because by the time
entities are destroyed (through the group destroy ioctl), the submission
procedure might've already obtained a group handle, and therefore the
ability to push jobs into entities. This is met with a DRM error message
within the drm scheduler core as a situation that should never occur.
Fix by deferring drm scheduler entity destruction to queue release time.
Fixes: de8548813824 ("drm/panthor: Add the scheduler logical block")
Signed-off-by: Adrián Larumbe <adrian.larumbe@collabora.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Steven Price <steven.price@arm.com>
Link: https://lore.kernel.org/r/20250919164436.531930-1-adrian.larumbe@collabora.com
|
|
Add i915_gem_fence_wait_priority_display() helper to wait with
I915_PRIORITY_DISPLAY. This drops the intel_plane.c dependency on
i915_scheduler_types.h, and allows us to remove the compat header from
xe.
Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com>
Link: https://lore.kernel.org/r/20250924085129.146173-1-jani.nikula@intel.com
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
|
|
The blamed commit and others in that patch set started the trend
of reusing existing DSA driver API for a new purpose: calling
ds->ops->port_fdb_add() on the CPU port.
The lantiq_gswip driver was not prepared to handle that, as can be seen
from the many errors that Daniel presents in the logs:
[ 174.050000] gswip 1e108000.switch: port 2 failed to add fa:aa:72:f4:8b:1e vid 1 to fdb: -22
[ 174.060000] gswip 1e108000.switch lan2: entered promiscuous mode
[ 174.070000] gswip 1e108000.switch: port 2 failed to add 00:01:02:03:04:02 vid 0 to fdb: -22
[ 174.090000] gswip 1e108000.switch: port 2 failed to add 00:01:02:03:04:02 vid 1 to fdb: -22
[ 174.090000] gswip 1e108000.switch: port 2 failed to delete fa:aa:72:f4:8b:1e vid 1 from fdb: -2
The errors are because gswip_port_fdb() wants to get a handle to the
bridge that originated these FDB events, to associate it with a FID.
Absolutely honourable purpose, however this only works for user ports.
To get the bridge that generated an FDB entry for the CPU port, one
would need to look at the db.bridge.dev argument. But this was
introduced in commit c26933639b54 ("net: dsa: request drivers to perform
FDB isolation"), first appeared in v5.18, and when the blamed commit was
introduced in v5.14, no such API existed.
So the core DSA feature was introduced way too soon for lantiq_gswip.
Not acting on these host FDB entries and suppressing any errors has no
other negative effect, and practically returns us to not supporting the
host filtering feature at all - peacefully, this time.
Fixes: 10fae4ac89ce ("net: dsa: include bridge addresses which are local in the host fdb list")
Reported-by: Daniel Golle <daniel@makrotopia.org>
Closes: https://lore.kernel.org/netdev/aJfNMLNoi1VOsPrN@pidgin.makrotopia.org/
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Link: https://patch.msgid.link/20250918072142.894692-3-vladimir.oltean@nxp.com
Tested-by: Daniel Golle <daniel@makrotopia.org>
Reviewed-by: Daniel Golle <daniel@makrotopia.org>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
A port added to a "single port bridge" operates as standalone, and this
is mutually exclusive to being part of a Linux bridge. In fact,
gswip_port_bridge_join() calls gswip_add_single_port_br() with
add=false, i.e. removes the port from the "single port bridge" to enable
autonomous forwarding.
The blamed commit seems to have incorrectly thought that ds->ops->port_enable()
is called one time per port, during the setup phase of the switch.
However, it is actually called during the ndo_open() implementation of
DSA user ports, which is to say that this sequence of events:
1. ip link set swp0 down
2. ip link add br0 type bridge
3. ip link set swp0 master br0
4. ip link set swp0 up
would cause swp0 to join back the "single port bridge" which step 3 had
just removed it from.
The correct DSA hook for one-time actions per port at switch init time
is ds->ops->port_setup(). This is what seems to match the coder's
intention; also see the comment at the beginning of the file:
* At the initialization the driver allocates one bridge table entry for
~~~~~~~~~~~~~~~~~~~~~
* each switch port which is used when the port is used without an
* explicit bridge.
Fixes: 8206e0ce96b3 ("net: dsa: lantiq: Add VLAN unaware bridge offloading")
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Link: https://patch.msgid.link/20250918072142.894692-2-vladimir.oltean@nxp.com
Tested-by: Daniel Golle <daniel@makrotopia.org>
Reviewed-by: Daniel Golle <daniel@makrotopia.org>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
On HL338 ASICs, the Infineon first‑stage firmware is not present and
the reported version is 0. In this case printing a version number is
misleading, as it suggests valid firmware when it does not exist.
Fix this by printing the first‑stage Infineon firmware version only
if the reported value is non‑zero. This avoids confusing or incorrect
log messages on devices where the first stage is not applicable.
Signed-off-by: Pavan S <pavan.sreenivas@intel.com>
Reviewed-by: Koby Elbaz <koby.elbaz@intel.com>
Signed-off-by: Koby Elbaz <koby.elbaz@intel.com>
|
|
Dirty state can occur when the host VM undergoes a reset while the
device does not. In such a case, the driver must reset the device before
it can be used again. As part of this reset, the device capabilities
are zeroed. Therefore, the driver must read the Preboot status again to
learn the Preboot state, capabilities, and security configuration.
Signed-off-by: Konstantin Sinyuk <konstantin.sinyuk@intel.com>
Reviewed-by: Koby Elbaz <koby.elbaz@intel.com>
Signed-off-by: Koby Elbaz <koby.elbaz@intel.com>
|
|
Add a new passthrough type HL_GET_P_STATE to the cpucp generic ioctl
to allow userspace to read the device performance state via firmware.
Signed-off-by: Ariel Aviad <ariel.aviad@intel.com>
Reviewed-by: Koby Elbaz <koby.elbaz@intel.com>
Signed-off-by: Koby Elbaz <koby.elbaz@intel.com>
|
|
Add debugfs files for NVMe Direct I/O (HLDIO) functionality.
This interface allows userspace access to direct SSD ↔ device transfers
through debugfs nodes.
Four debugfs files are created under /sys/kernel/debug/habanalabs/hlN/:
- dio_ssd2hl : trigger SSD-to-device transfers
- dio_hl2ssd : trigger device-to-SSD transfers
(placeholder, not yet implemented)
- dio_stats : show transfer statistics
- dio_reset : reset statistics counters
Usage examples:
# Perform SSD → device transfer
echo "fd=3 va=0x10000 off=0 len=4096" > \
/sys/kernel/debug/habanalabs/hl0/dio_ssd2hl
# View statistics
cat /sys/kernel/debug/habanalabs/hl0/dio_stats
# Reset counters
echo 1 > /sys/kernel/debug/habanalabs/hl0/dio_reset
This interface provides access to HLDIO functionality for validation
and diagnostics.
Signed-off-by: Konstantin Sinyuk <konstantin.sinyuk@intel.com>
Reviewed-by: Farah Kassabri <farah.kassabri@intel.com>
Reviewed-by: Koby Elbaz <koby.elbaz@intel.com>
Signed-off-by: Koby Elbaz <koby.elbaz@intel.com>
|
|
Introduce NVMe Direct I/O (HLDIO) infrastructure to support
peer‑to‑peer DMA in the habanalabs driver. This adds internal helpers
and data structures to enable direct transfers between NVMe storage
and device memory.
The feature is built only when CONFIG_HL_HLDIO is enabled. A debugfs
interface is also provided for functional validation.
Signed-off-by: Konstantin Sinyuk <konstantin.sinyuk@intel.com>
Reviewed-by: Farah Kassabri <farah.kassabri@intel.com>
Reviewed-by: Koby Elbaz <koby.elbaz@intel.com>
Signed-off-by: Koby Elbaz <koby.elbaz@intel.com>
|
|
When IOMMU is enabled, dma_alloc_coherent() with GFP_USER may return
addresses from the vmalloc range. If such an address is mapped without
VM_MIXEDMAP, vm_insert_page() will trigger a BUG_ON due to the
VM_PFNMAP restriction.
Fix this by checking for vmalloc addresses and setting VM_MIXEDMAP
in the VMA before mapping. This ensures safe mapping and avoids kernel
crashes. The memory is still driver-allocated and cannot be accessed
directly by userspace.
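A minimal sketch of the check added at mmap time (local names assumed):

  /* vmalloc-backed buffers need VM_MIXEDMAP for vm_insert_page() to work. */
  if (is_vmalloc_addr(cpu_addr))
          vm_flags_set(vma, VM_MIXEDMAP);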
Signed-off-by: Moti Haimovski <moti.haimovski@intel.com>
Reviewed-by: Koby Elbaz <koby.elbaz@intel.com>
Signed-off-by: Koby Elbaz <koby.elbaz@intel.com>
|
|
The access_ok() API no longer requires the VERIFY_WRITE argument,
and the use of the old interface with VERIFY_WRITE is deprecated.
Clean up the habanalabs memory manager to use the modern access_ok()
interface consistently. This removes old #ifdef guards and aligns the
driver with current upstream kernel APIs.
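For reference, the interface change this cleanup aligns with:

  /* Old (removed) interface: */
  if (!access_ok(VERIFY_WRITE, uaddr, size))
          return -EFAULT;

  /* Modern interface (no access-type argument): */
  if (!access_ok(uaddr, size))
          return -EFAULT;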
Signed-off-by: Ilia Levi <ilia.levi@intel.com>
Reviewed-by: Koby Elbaz <koby.elbaz@intel.com>
Signed-off-by: Koby Elbaz <koby.elbaz@intel.com>
|
|
After a CPLD shutdown event the device is not usable anymore. The common
CPLD_SHUTDOWN event handler disables any subsequent device access.
Signed-off-by: Konstantin Sinyuk <konstantin.sinyuk@intel.com>
Reviewed-by: Koby Elbaz <koby.elbaz@intel.com>
Signed-off-by: Koby Elbaz <koby.elbaz@intel.com>
|
|
After a CPLD shutdown event the device becomes unusable. Prevent further
device access once this event is received.
Signed-off-by: Konstantin Sinyuk <konstantin.sinyuk@intel.com>
Reviewed-by: Koby Elbaz <koby.elbaz@intel.com>
Signed-off-by: Koby Elbaz <koby.elbaz@intel.com>
|
|
In hl_release_dmabuf(), ctx is dereferenced after calling hl_ctx_put()
to obtain the compute device file.
This is safe because the dma-buf object holds a file reference taken in
export_dmabuf(), and the file release (which drops another ctx reference)
can only happen after we drop that file reference via fput(). Thus, this
hl_ctx_put() call cannot be the last one at this point.
Add a comment explaining this to avoid confusion.
Signed-off-by: Tomer Tayar <tomer.tayar@intel.com>
Reviewed-by: Koby Elbaz <koby.elbaz@intel.com>
Signed-off-by: Koby Elbaz <koby.elbaz@intel.com>
|
|
Add infrastructure for logging the last configuration register accesses
that occur via debugfs read/write operations. At interrupt time, these
log entries can be dumped to dmesg, which helps in diagnosing the cause
of RAZWI and ADDR_DEC interrupts.
The logging is implemented as a ring buffer of access entries, with each
entry recording timestamp and access details. To ensure correctness
under concurrent access, operations are now protected using spinlocks.
Entries are copied under lock and then printed after releasing it, which
minimizes time spent in the critical section.
Signed-off-by: Sharley Calzolari <sharley.calzolari@intel.com>
Reviewed-by: Koby Elbaz <koby.elbaz@intel.com>
Signed-off-by: Koby Elbaz <koby.elbaz@intel.com>
|
|
Print engine/queue names instead of numerical engine/queue IDs to make
logs and debug output more readable.
Signed-off-by: Ariel Suller <ariel.suller@intel.com>
Reviewed-by: Koby Elbaz <koby.elbaz@intel.com>
Signed-off-by: Koby Elbaz <koby.elbaz@intel.com>
|
|
Add a new CPUCP generic message type to retrieve HBM, SRAM and critical
error counters from the device.
Signed-off-by: Vitaly Margolin <vitaly.margolin@intel.com>
Reviewed-by: Koby Elbaz <koby.elbaz@intel.com>
Signed-off-by: Koby Elbaz <koby.elbaz@intel.com>
|
|
Change the BMON_CR register value back to its original state before
enabling, so that BMON does not continue to collect information
after being disabled.
Signed-off-by: Vered Yavniely <vered.yavniely@intel.com>
Reviewed-by: Koby Elbaz <koby.elbaz@intel.com>
Signed-off-by: Koby Elbaz <koby.elbaz@intel.com>
|
|
EFAULT is currently returned if less than requested user pages are
pinned. This value means a "bad address" which might be confusing to
the user, as the address of the given user memory is not necessarily
"bad".
Modify the return value to ENOMEM, as "out of memory" is more suitable
in this case.
Signed-off-by: Tomer Tayar <tomer.tayar@intel.com>
Reviewed-by: Koby Elbaz <koby.elbaz@intel.com>
Signed-off-by: Koby Elbaz <koby.elbaz@intel.com>
|
|
'Resin' (*Res*et *In*put) is usually connected to a volume down button
on devices, which is typically not expected to wake up the device from
suspend.
On the other hand, pwrkey should keep wakeup on. So do not enable wakeup
for resin unless the "wakeup-source" property is specified in
devicetree.
Note, that this does change behavior by turning off wakeup by default
for 'resin' and requiring a new dt property to be added to turn it on
again. But since this is not expected behavior in the first place, and
most users will not expect this, I'd argue this change is acceptable.
Signed-off-by: Luca Weiss <luca@lucaweiss.eu>
Reviewed-by: Neil Armstrong <neil.armstrong@linaro.org>
Link: https://lore.kernel.org/r/20250909-resin-wakeup-v1-2-46159940e02b@lucaweiss.eu
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
|
|
Introduce support for the Hynitron CST816x series touchscreen controller
used in the 240×240 1.28-inch Round LCD Display Module manufactured
by Waveshare Electronics. The driver is designed based on an Arduino
implementation marked as under MIT License. This driver is written
for a particular round display based on the CST816S controller, which
is not compatible with the existing driver for Hynitron controllers.
Signed-off-by: Oleh Kuzhylnyi <kuzhylol@gmail.com>
Link: https://lore.kernel.org/r/20250921125939.249788-2-kuzhylol@gmail.com
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
|
|
Currently there are 2 wait loops for loading GuC: one in
xe_mmio_wait32_not() and one in guc_wait_ucode(). Now that there's a
generic poll_timeout_us(), refactor the code to use that to be more
readable.
The main change in behavior is that there's no exponential wait anymore:
it is now replaced by a 10 msec retry.
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>
Link: https://lore.kernel.org/r/20250922-xe-iopoll-v4-5-06438311a63f@intel.com
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
|
|
Move the error parsing and print out of guc_wait_ucode() into a helper
to clean up the wait function. Since now the `load_done != 1` condition
has a return statement, also simplify the if/else chain.
Reviewed-by: John Harrison <John.C.Harrison@Intel.com> # v2
Link: https://lore.kernel.org/r/20250922-xe-iopoll-v4-4-06438311a63f@intel.com
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
|
|
As the forcewake is already held during GuC load, there's no need to use
a helper function to call xe_guc_pc_get_cur_freq(). Just call
xe_guc_pc_get_cur_freq_fw() directly.
Suggested-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://lore.kernel.org/r/20250922-xe-iopoll-v4-3-06438311a63f@intel.com
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
|
|
Convert wait_for_pc_state() and wait_for_act_freq_limit() to
poll_timeout_us(). This brings 2 changes in behavior: Drop the
exponential wait and fix a potentially much longer sleep.
usleep_range() will wait anywhere between `wait` and `wait << 1`, so
it's not correct to assume `slept += wait`. This code is not really
accurate. Pairing this with the exponential wait increase, it could be
waiting much longer than intended.
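The replaced pattern looked roughly like this (simplified, with placeholder
names), which is what makes the accounting inaccurate:

  while (slept < timeout_us) {
          if (condition_met())            /* placeholder for the real check */
                  break;
          usleep_range(wait, wait << 1);  /* may actually sleep up to 2 * wait */
          slept += wait;                  /* undercounts the real time slept */
          wait <<= 1;                     /* exponential growth compounds it */
  }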
Reviewed-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
Link: https://lore.kernel.org/r/20250922-xe-iopoll-v4-2-06438311a63f@intel.com
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
|
|
Now that there's a generic poll_timeout_us(), use it to wait for
LMEM_INIT in GU_CNTL.
Reviewed-by: Maarten Lankhorst <dev@lankhorst.se>
Link: https://lore.kernel.org/r/20250922-xe-iopoll-v4-1-06438311a63f@intel.com
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
|
|
Ranjan Kumar <ranjan.kumar@broadcom.com> says:
A few enhancements and minor fixes for the mpt3sas driver.
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
|
|
Updated driver version to 54.100.00.00.
Signed-off-by: Ranjan Kumar <ranjan.kumar@broadcom.com>
Message-Id: <20250922095113.281484-5-ranjan.kumar@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
|
|
Add handling for MPI26_SAS_NEG_LINK_RATE_22_5 in
_transport_convert_phy_link_rate(). This maps the new 22.5 Gbps
negotiated rate to SAS_LINK_RATE_22_5_GBPS, to get correct PHY link
speeds.
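In sketch form, the added mapping is (local variable name assumed):

  case MPI26_SAS_NEG_LINK_RATE_22_5:
          rc = SAS_LINK_RATE_22_5_GBPS;
          break;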
Signed-off-by: Ranjan Kumar <ranjan.kumar@broadcom.com>
Message-Id: <20250922095113.281484-4-ranjan.kumar@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
|