Convert the mvneta driver to use the new .get_rx_ring_count ethtool
operation instead of implementing .get_rxnfc solely for handling the
ETHTOOL_GRXRINGS command. This simplifies the code by removing the
switch statement and replacing it with a direct return of the queue
count.
The new callback provides the same functionality in a more direct way,
following the ongoing ethtool API modernization.
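A minimal sketch of the shape of this kind of conversion; the driver
struct and field names below are illustrative, and the new callback is
assumed to simply return the RX ring count as a u32:

/* Before: .get_rxnfc implemented only for ETHTOOL_GRXRINGS */
static int sketch_get_rxnfc(struct net_device *dev,
			    struct ethtool_rxnfc *info, u32 *rules)
{
	struct sketch_port *pp = netdev_priv(dev);

	switch (info->cmd) {
	case ETHTOOL_GRXRINGS:
		info->data = pp->rxq_number;
		return 0;
	default:
		return -EOPNOTSUPP;
	}
}

/* After: report the RX queue count directly */
static u32 sketch_get_rx_ring_count(struct net_device *dev)
{
	struct sketch_port *pp = netdev_priv(dev);

	return pp->rxq_number;
}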
Signed-off-by: Breno Leitao <leitao@debian.org>
Link: https://patch.msgid.link/20251121-marvell-v1-1-8338f3e55a4c@debian.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Convert the hyperv netvsc driver to use the new .get_rx_ring_count
ethtool operation instead of implementing .get_rxnfc solely for handling
the ETHTOOL_GRXRINGS command. This simplifies the code by replacing the
switch statement with a direct return of the queue count.
The new callback provides the same functionality in a more direct way,
following the ongoing ethtool API modernization.
Signed-off-by: Breno Leitao <leitao@debian.org>
Link: https://patch.msgid.link/20251121-hyperv_gxrings-v1-1-31293104953b@debian.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Some platforms exhibit very high costs with CONFIG_STACKPROTECTOR_STRONG=y
when a function needs to pass the address of a local variable to external
functions.
eth_type_trans() (and its callers) is showing this anomaly on AMD EPYC 7B12
platforms (and maybe others).
We could:
1) Inline eth_type_trans()
This would help if its callers also have the same issue, and the canary cost
would be paid by the callers already.
This is a bit cumbersome because netdev_uses_dsa() pulls in all of the
<net/dsa.h> definitions.
2) Compile net/ethernet/eth.c with -fno-stack-protector
This would weaken security.
3) Hack eth_type_trans() to temporarily use skb->dev as a placeholder
if skb_header_pointer() needs to pull 2 bytes not present in skb->head.
This patch implements 3), and brings a 5% improvement on TX/RX intensive
workload (tcp_rr 10,000 flows) on AMD EPYC 7B12.
Removing CONFIG_STACKPROTECTOR_STRONG on this platform can improve
performance by 25%.
This means the eth_type_trans() issue is not an isolated artifact.
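A hedged sketch of option 3; this is the idea, not the exact patch. It
relies on eth_type_trans() overwriting skb->dev with the new device
anyway, so that pointer slot can serve as the 2-byte bounce buffer that
skb_header_pointer() may need, avoiding the stack local that triggers
the canary:

static __be16 sketch_pull_proto(struct sk_buff *skb, struct net_device *dev,
				unsigned int offset)
{
	const __be16 *p;
	__be16 proto;

	/* use &skb->dev (pointer-sized) as the small bounce buffer */
	p = skb_header_pointer(skb, offset, sizeof(proto), (void *)&skb->dev);
	proto = p ? *p : 0;
	skb->dev = dev;	/* eth_type_trans() assigns this unconditionally */
	return proto;
}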
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Link: https://patch.msgid.link/20251121061725.206675-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
netdev CI reserves SKIP in selftests for cases which can't be executed
due to setup issues, like missing or old commands. Tests which are
expected to fail must use XFAIL.
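For C selftests, the distinction maps onto the kselftest.h reporting
helpers; a minimal sketch (assuming the ksft_test_result_skip() and
ksft_test_result_xfail() helpers, with analogous KsftSkipEx/KsftXfailEx
exceptions in the net selftests' Python library):

#include <stdbool.h>
#include "../kselftest.h"

static void report_one(bool env_ok, bool known_broken)
{
	if (!env_ok)		/* setup issue: missing or old command */
		ksft_test_result_skip("required tool missing\n");
	else if (known_broken)	/* failure is expected, not a setup issue */
		ksft_test_result_xfail("expected failure\n");
	else
		ksft_test_result_pass("ok\n");
}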
Reviewed-by: Kuniyuki Iwashima <kuniyu@google.com>
Link: https://patch.msgid.link/20251123021601.158709-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
syzbot reported that tcf_get_base_ptr() can be called while the transport
header is not set [1].
Instead of returning a dangling pointer, return NULL.
Fix tcf_get_base_ptr() callers to handle this NULL value.
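A hedged sketch of the shape of the fix (the real code lives in
include/net/pkt_cls.h and the ematch users; the caller below is
illustrative):

static unsigned char *sketch_get_base_ptr(struct sk_buff *skb, int layer)
{
	switch (layer) {
	case TCF_LAYER_LINK:
		return skb_mac_header(skb);
	case TCF_LAYER_NETWORK:
		return skb_network_header(skb);
	case TCF_LAYER_TRANSPORT:
		if (!skb_transport_header_was_set(skb))
			return NULL;	/* was: a dangling pointer */
		return skb_transport_header(skb);
	}
	return NULL;
}

static int sketch_match(struct sk_buff *skb, int layer, int off)
{
	unsigned char *ptr = sketch_get_base_ptr(skb, layer);

	if (!ptr)
		return 0;	/* treat as no match instead of dereferencing */
	ptr += off;
	/* ... compare bytes at ptr ... */
	return 1;
}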
[1]
WARNING: CPU: 1 PID: 6019 at ./include/linux/skbuff.h:3071 skb_transport_header include/linux/skbuff.h:3071 [inline]
WARNING: CPU: 1 PID: 6019 at ./include/linux/skbuff.h:3071 tcf_get_base_ptr include/net/pkt_cls.h:539 [inline]
WARNING: CPU: 1 PID: 6019 at ./include/linux/skbuff.h:3071 em_nbyte_match+0x2d8/0x3f0 net/sched/em_nbyte.c:43
Modules linked in:
CPU: 1 UID: 0 PID: 6019 Comm: syz.0.17 Not tainted syzkaller #0 PREEMPT(full)
Call Trace:
<TASK>
tcf_em_match net/sched/ematch.c:494 [inline]
__tcf_em_tree_match+0x1ac/0x770 net/sched/ematch.c:520
tcf_em_tree_match include/net/pkt_cls.h:512 [inline]
basic_classify+0x115/0x2d0 net/sched/cls_basic.c:50
tc_classify include/net/tc_wrapper.h:197 [inline]
__tcf_classify net/sched/cls_api.c:1764 [inline]
tcf_classify+0x4cf/0x1140 net/sched/cls_api.c:1860
multiq_classify net/sched/sch_multiq.c:39 [inline]
multiq_enqueue+0xfd/0x4c0 net/sched/sch_multiq.c:66
dev_qdisc_enqueue+0x4e/0x260 net/core/dev.c:4118
__dev_xmit_skb net/core/dev.c:4214 [inline]
__dev_queue_xmit+0xe83/0x3b50 net/core/dev.c:4729
packet_snd net/packet/af_packet.c:3076 [inline]
packet_sendmsg+0x3e33/0x5080 net/packet/af_packet.c:3108
sock_sendmsg_nosec net/socket.c:727 [inline]
__sock_sendmsg+0x21c/0x270 net/socket.c:742
____sys_sendmsg+0x505/0x830 net/socket.c:2630
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Reported-by: syzbot+f3a497f02c389d86ef16@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/netdev/6920855a.a70a0220.2ea503.0058.GAE@google.com/T/#u
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Jamal Hadi Salim <jhs@mojatatu.com>
Link: https://patch.msgid.link/20251121154100.1616228-1-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
This commit ensures that the required log level is set at the start of
each test iteration.
Part of the cleanup performed at the end of each test iteration resets
the log level (do_cleanup in lib_netcons.sh) to the values defined at the
time the test script started. This may cause subsequent test iterations
to fail if the default values are not sufficient.
Signed-off-by: Andre Carvalho <asantostc@gmail.com>
Reviewed-by: Breno Leitao <leitao@debian.org>
Link: https://patch.msgid.link/20251121-netcons-basic-loglevel-v1-1-577f8586159c@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Jakub Kicinski says:
====================
selftests: hw-net: toeplitz: read config from the NIC directly
First patch here tries to auto-disable building the iouring sample.
Our CI will still run the iouring test(s), of course, but it looks
like the liburing updates aren't very quick in distros, and having
to hack around it when developing unrelated tests is a bit annoying.
Remaining 4 patches iron out running the Toeplitz hash test against
real NICs. I tested mlx5, bnxt and fbnic, they all pass now.
I switched to using YNL directly in the C code; I can't see a reason
to get the info in Python and pass it to C via argv. The old code
likely did this because it predates YNL.
====================
Link: https://patch.msgid.link/20251121040259.3647749-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Increase the receiver timeout. When running between machines
in different geographic regions the test needs more than
a second to SSH across and send the frames.
The bkg() command that runs the receiver defaults to a 5 sec timeout,
so 4 sec sounds like a reasonable value for the receiver itself.
Reviewed-by: Willem de Bruijn <willemb@google.com>
Link: https://patch.msgid.link/20251121040259.3647749-6-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Replace the simple modulo math with the real indirection table
read from the device. This makes the tests pass for mlx5 and
bnxt NICs.
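A minimal sketch of the difference (names are illustrative; devices with
power-of-two table sizes effectively mask the low bits of the hash):

static unsigned int expected_rx_queue(u32 rss_hash, const u32 *indir_tbl,
				      unsigned int indir_size,
				      unsigned int n_queues)
{
	if (!indir_tbl || !indir_size)
		return rss_hash % n_queues;	/* old modulo fallback */
	return indir_tbl[rss_hash % indir_size];
}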
Reviewed-by: Willem de Bruijn <willemb@google.com>
Link: https://patch.msgid.link/20251121040259.3647749-5-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Now that we have YNL support for RSS, accessing the RSS info from
C is very easy. Instead of passing the RSS key from Python, do it
directly in the C code.
Reviewed-by: Willem de Bruijn <willemb@google.com>
Link: https://patch.msgid.link/20251121040259.3647749-4-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Make sure that the NIC under test is configured for pure Toeplitz
hashing, with no input key transform (no symmetric hashing).
Reviewed-by: Willem de Bruijn <willemb@google.com>
Link: https://patch.msgid.link/20251121040259.3647749-3-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Looks like liburing is not updated by distros very aggressively,
presumably because a lot of packages depend on it. I just updated
to Fedora 43 and it's still on liburing 2.9. The test is 9 months old;
at this stage I think this warrants handling the build failure
more gracefully.
Detect whether liburing is recent enough and, if not, print a warning
and exclude the C prog from the build. The Python test will just
fail since the binary won't exist, but it removes the major
annoyance of having to update liburing from sources when
developing other tests.
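One hedged way to gate this at compile time, assuming liburing ships
IO_URING_VERSION_MAJOR/MINOR in its headers (the cutoff below is
illustrative; the actual Makefile check may differ):

#include <liburing.h>

#if !defined(IO_URING_VERSION_MAJOR) || IO_URING_VERSION_MAJOR < 2 || \
	(IO_URING_VERSION_MAJOR == 2 && IO_URING_VERSION_MINOR < 10)
#warning "liburing too old, io_uring test binary excluded from the build"
#else
/* build the real io_uring test here */
#endif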
Reviewed-by: Willem de Bruijn <willemb@google.com>
Link: https://patch.msgid.link/20251121040259.3647749-2-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
This return statement is indented one tab too far. Delete a tab.
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Reviewed-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Link: https://patch.msgid.link/aSBqjtA8oF25G1OG@stanley.mountain
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Complete the sentence by adding "is set", rather than having it dangle
as a sentence fragment.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
|
|
We are setting up an inode key to look up the parent directory inode, but
all we need is the inode's objectid. The use of the key was necessary in
the past, but since commit 0202e83fdab0 ("btrfs: simplify iget helpers")
we only need the objectid.
So remove the key variable from the stack and instead use a simple u64
for the inode's objectid.
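A minimal sketch of the simplification (btrfs_iget()'s exact signature
has changed over time, so the call below is illustrative):

/* Before: a full key on the stack just to carry the objectid */
struct btrfs_key key;

key.objectid = parent_objectid;
key.type = BTRFS_INODE_ITEM_KEY;
key.offset = 0;
dir = btrfs_iget(key.objectid, root);

/* After: a plain u64 is all that's needed */
u64 dir_objectid = parent_objectid;

dir = btrfs_iget(dir_objectid, root);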
Reviewed-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
We have allocated the root with kzalloc() so all the memory is already
zero-initialized, therefore it's redundant to assign 0 and NULL to several
of the root members. Remove all of them except the atomic initializations
since atomic_t is an opaque type and it's not a good practice to assume
its internals.
This slightly reduces the binary size.
With gcc 14.2.0-19 from Debian on x86_64, before this change:
$ size fs/btrfs/btrfs.ko
text data bss dec hex filename
1939404 162963 15592 2117959 205147 fs/btrfs/btrfs.ko
After this change:
$ size fs/btrfs/btrfs.ko
text data bss dec hex filename
1939212 162963 15592 2117767 205087 fs/btrfs/btrfs.ko
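A minimal sketch of the pattern (member names are illustrative):

struct btrfs_root *root = kzalloc(sizeof(*root), GFP_KERNEL);

if (!root)
	return ERR_PTR(-ENOMEM);

/* Redundant after kzalloc(), so assignments like these can go: */
root->node = NULL;
root->commit_root = NULL;
root->log_root = NULL;

/* Kept: atomic_t is opaque, don't assume all-zeroes is valid */
atomic_set(&root->log_writers, 0);
refcount_set(&root->refs, 1);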
Reviewed-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
Do the remaining btrfs_path conversions to the automatic cleanup; this
seems to be the last batch. Most of the conversions are trivial, only
adding the declaration and removing the freeing, or changing the goto
patterns to returns.
There are some functions with many changes, like __btrfs_free_extent(),
btrfs_remove_from_free_space_tree() or btrfs_add_to_free_space_tree(),
but they still follow the same pattern.
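A hedged sketch of the conversion pattern, assuming the
BTRFS_PATH_AUTO_FREE() declaration helper (do_search() is a stand-in):

/* Before: manual free on every exit path */
struct btrfs_path *path;
int ret;

path = btrfs_alloc_path();
if (!path)
	return -ENOMEM;
ret = do_search(path);
btrfs_free_path(path);
return ret;

/* After: the path is freed automatically when it goes out of scope */
BTRFS_PATH_AUTO_FREE(path);

path = btrfs_alloc_path();
if (!path)
	return -ENOMEM;
return do_search(path);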
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
When checking if xattrs were deleted we don't care about their data, but
we are allocating memory for the data and copying it, which only wastes
time and can result in an unnecessary error in case the allocation fails.
So stop allocating memory and copying data by making find_xattr() and
__find_xattr() skip those steps if the given data buffer is NULL.
Reviewed-by: Boris Burkov <boris@bur.io>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
There are several checks for unexpected overflows of buffers and path
lengths that make us fail the send operation with an error if, for some
highly unexpected reason, they happen. So add the unlikely tag to those
checks to hint the compiler to generate better code, while also making
it more explicit in the source that they are highly unexpected.
With gcc 14.2.0-19 from Debian on x86_64, I also got a small reduction
in the text size of the btrfs module.
Before:
$ size fs/btrfs/btrfs.ko
text data bss dec hex filename
1936917 162723 15592 2115232 2046a0 fs/btrfs/btrfs.ko
After:
$ size fs/btrfs/btrfs.ko
text data bss dec hex filename
1936789 162723 15592 2115104 204620 fs/btrfs/btrfs.ko
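A minimal sketch of the annotation (the length fields are illustrative):

if (unlikely(p->buf_len + name_len > PATH_MAX))
	return -ENAMETOOLONG;	/* overflow is not expected in practice */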
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
Instead of passing a root and the objectid of the parent directory, just
pass the directory inode, from which we can extract both the root and
the objectid, reducing the number of arguments by one. It also makes the
function more consistent with other log tree functions in the sense that
we pass the inode and not only its objectid.
Reviewed-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
There's no need to pass the root as we can extract it from the directory
inode, so remove it.
Reviewed-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
Instead of testing and setting the BTRFS_DELAYED_NODE_DEL_IREF bit in the
delayed node's flags, use test_and_set_bit(), which makes the code shorter
without compromising readability and gets rid of the label and goto.
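A minimal sketch of the pattern (the surrounding control flow is
illustrative):

/* Before: separate test, set and a goto label */
if (test_bit(BTRFS_DELAYED_NODE_DEL_IREF, &node->flags))
	goto release_node;
set_bit(BTRFS_DELAYED_NODE_DEL_IREF, &node->flags);
/* ... queue the iref deletion ... */
release_node:
	/* ... */

/* After: one atomic test-and-set, no label needed */
if (!test_and_set_bit(BTRFS_DELAYED_NODE_DEL_IREF, &node->flags)) {
	/* ... queue the iref deletion ... */
}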
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Daniel Vacek <neelx@suse.com>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
We don't need to search back to the inode item; the directory inode
number is in key.offset, so simply use that. If we can't find the
directory we'll get an ENOENT from the iget().
Note: The patch was taken from v5 of fscrypt patchset
(https://lore.kernel.org/linux-btrfs/cover.1706116485.git.josef@toxicpanda.com/)
which was handled over time by various people: Omar Sandoval, Sweet Tea
Dorminy, Josef Bacik.
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Daniel Vacek <neelx@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
[ add note ]
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
In our user-safe ino resolve ioctl we'll just turn any ret from
inode_permission() into -EACCES. This is redundant, and could potentially
be wrong if we had an ENOMEM in the security layer or some other such
error, so simply return the actual return value.
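A minimal sketch of the fix (the permission mask is illustrative):

/* Before: any failure is flattened to -EACCES */
ret = inode_permission(idmap, inode, MAY_READ | MAY_EXEC);
if (ret) {
	ret = -EACCES;
	goto out;
}

/* After: preserve e.g. an -ENOMEM coming from the security layer */
ret = inode_permission(idmap, inode, MAY_READ | MAY_EXEC);
if (ret)
	goto out;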
Note: The patch was taken from v5 of fscrypt patchset
(https://lore.kernel.org/linux-btrfs/cover.1706116485.git.josef@toxicpanda.com/)
which was handled over time by various people: Omar Sandoval, Sweet Tea
Dorminy, Josef Bacik.
Fixes: 23d0b79dfaed ("btrfs: Add unprivileged version of ino_lookup ioctl")
CC: stable@vger.kernel.org # 5.4+
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Daniel Vacek <neelx@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
[ add note ]
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
When checksumming the encrypted bio on writes we need to know which
logical address this checksum is for. At the point where we get the
encrypted bio the bi_sector is the physical location on the target disk,
so we need to save the original logical offset in the btrfs_bio. Then
we can use this when checksumming the bio instead of
bio->bi_iter.bi_sector.
Note: The patch was taken from v5 of fscrypt patchset
(https://lore.kernel.org/linux-btrfs/cover.1706116485.git.josef@toxicpanda.com/)
which was handled over time by various people: Omar Sandoval, Sweet Tea
Dorminy, Josef Bacik.
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Daniel Vacek <neelx@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
[ add note ]
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
Right now extent encryption cannot encrypt anything other than
filenames in directories or data blocks on disk, so for now disable
verity usage together with encryption on btrfs.
fscrypt with fsverity should be possible and can be implemented
in the future.
Note: The patch was taken from v5 of fscrypt patchset
(https://lore.kernel.org/linux-btrfs/cover.1706116485.git.josef@toxicpanda.com/)
which was handled over time by various people: Omar Sandoval, Sweet Tea
Dorminy, Josef Bacik.
Reviewed-by: Boris Burkov <boris@bur.io>
Signed-off-by: Sweet Tea Dorminy <sweettea-kernel@dorminy.me>
Signed-off-by: Daniel Vacek <neelx@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
[ add note ]
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
Initially, only normal data extents will be encrypted. This change
restricts various other cases:
- allow reflinking only if both inodes have the same encryption status
- disable inline data on encrypted inodes
Note: The patch was taken from v5 of fscrypt patchset
(https://lore.kernel.org/linux-btrfs/cover.1706116485.git.josef@toxicpanda.com/)
which was handled over time by various people: Omar Sandoval, Sweet Tea
Dorminy, Josef Bacik.
Signed-off-by: Omar Sandoval <osandov@osandov.com>
Signed-off-by: Daniel Vacek <neelx@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
[ add note ]
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
When btrfs_del_items() empties a leaf, it deletes the leaf unless it's
the root node. For the root leaf case, the code used to reset its level
to 0 via btrfs_set_header_level(). This is redundant as leaf nodes
always have level == 0.
Remove the unnecessary level assignment and invert the conditional to
handle only the non-root leaf deletion. The root leaf is correctly left
as-is.
Signed-off-by: Sun YangKai <sunk67188@gmail.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
After releasing the path in btrfs_next_old_leaf(), we need to re-check
the leaf because a balance operation may have added items or removed the
last item. The original code handled this with two separate conditional
blocks, the second marked with a lengthy comment explaining a "missed
case".
Merge these two blocks into a single logical structure that handles both
scenarios more clearly.
Also update the comment to be more concise and accurate, incorporating the
explanation directly into the main block rather than a separate annotation.
Signed-off-by: Sun YangKai <sunk67188@gmail.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
Instead of incrementing the refcount on the 'left' node when it's
referenced by the path, simply transfer ownership to the path and set
'left' to NULL. This eliminates:
- Unnecessary refcount increment/decrement operations
- Redundant conditional checks for left node cleanup
The path now consistently owns the left node reference when it is used.
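A minimal sketch of the ownership transfer (the path slot is
illustrative):

/* Before: take an extra ref so both 'left' and the path hold one */
atomic_inc(&left->refs);
path->nodes[1] = left;
/* ... and later ... */
if (left)
	free_extent_buffer(left);

/* After: hand the existing reference over to the path */
path->nodes[1] = left;
left = NULL;	/* the path owns it now; no extra ref, no cleanup */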
Signed-off-by: Sun YangKai <sunk67188@gmail.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
The balance_level() function is overly long and contains a cold code path
that handles promoting a child node to root when the root has only one item.
This code has distinct logic that is clearer and more maintainable when
isolated in its own function.
Signed-off-by: Sun YangKai <sunk67188@gmail.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
The following functions were introduced as an intermediate step for
bs > ps support:
- rbio_stripe_step_paddr()
- rbio_pstripe_step_paddr()
- rbio_qstripe_step_paddr()
- sector_step_paddr_in_rbio()
The "_step" infix was needed because functions without it already
existed with a different parameter list.
Now that those existing functions have been cleaned up, there is no need
to keep the "_step" infix, so remove it completely.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
The support code for bs > ps is complete; enable it and update the
assertions.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
Since commit 31158ad02ddb ("rqspinlock: Add deadlock detection
and recovery") the updated path on re-entrancy now reports deadlock
via -EDEADLK instead of the previous -EBUSY.
Also, the way reentrancy was exercised (via fentry/lookup_elem_raw)
has been fragile because lookup_elem_raw may be inlined
(find_kernel_btf_id() will return -ESRCH).
To fix this, the fentry is attached to bpf_obj_free_fields() instead of
lookup_elem_raw(), and:
- The htab map is made to use a BTF-described struct val with a
struct bpf_timer so that check_and_free_fields() reliably calls
bpf_obj_free_fields() on element replacement. (A sketch of this map
layout follows below.)
- The selftest is updated to do two updates to the same key (insert +
replace) in prog_test.
- The selftest is updated to align the expected errno with the
kernel's current behavior.
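A hedged sketch of the BPF side (names are illustrative): a BTF-described
hash map whose value embeds a struct bpf_timer, so that replacing an
element makes the kernel free the old value's fields via
bpf_obj_free_fields():

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>

struct val {
	struct bpf_timer timer;	/* the BTF-described special field */
	__u64 payload;
};

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, struct val);
} htab SEC(".maps");

char _license[] SEC("license") = "GPL";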
Signed-off-by: Saket Kumar Bhaskar <skb99@linux.ibm.com>
Tested-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com>
Link: https://lore.kernel.org/r/20251117060752.129648-1-skb99@linux.ibm.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
The function finish_parity_scrub() assumes each fs block can be mapped by
one page, blocking bs > ps support for raid56.
Prepare it for bs > ps cases by:
- Introduce a helper, verify_one_parity_step()
Since the P/Q generation is always done in a vertical stripe, we have
to handle the range step by step.
- Only clear the rbio->dbitmap if all steps of an fs block match
- Remove rbio_stripe_paddr() and sector_paddr_in_rbio() helpers
Now we either use the paddrs version for checksum, or the step version
for P/Q generation/recovery.
- Make alloc_rbio_essential_pages() handle bs > ps cases
Since for bs > ps cases, one fs block needs multiple pages, the
existing simple check against rbio->stripe_pages[] is not enough.
Extract a dedicated helper, alloc_rbio_sector_pages(), for the
existing alloc_rbio_essential_pages(), which is still based on sector
number.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
The function rbio_add_io_paddr() assumes each fs block can be mapped by
one page, blocking bs > ps support for raid56.
Prepare it for bs > ps cases by:
- Introduce a helper bio_add_paddrs()
Previously we only needed to add a single page to a bio for a fs block,
but now we need to add multiple pages, which means we can fail halfway.
In that case we need to properly revert the bio (only its size
though).
- Rename rbio_add_io_paddr() to rbio_add_io_paddrs()
And change the @paddr parameter to @paddrs[].
- Change all callers to use the updated rbio_add_io_paddrs()
For the @paddrs pointer used for the new function, it can be grabbed
using sector_paddrs_in_rbio() and rbio_stripe_paddrs() helpers.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
The function steal_rbio() assumes each fs block can be mapped by
one page, blocking bs > ps support for raid56.
Prepare it for bs > ps cases by:
- Introduce two helpers to calculate the sector number
Previously we assumed one page will contain at least one fs block, and
thus could use something like "sectors_per_page = PAGE_SIZE / sectorsize;",
but with bs > ps support the above number will be 0.
Instead introduce two helpers:
* page_nr_to_sector_nr()
Returns the sector number of the first sector covered by the page.
* page_nr_to_num_sectors()
Returns how many sectors are covered by the page.
And use the returned values for bitmap operations instead of the
open-coded "PAGE_SIZE / sectorsize".
Those helpers also have extra ASSERT()s to catch weird numbers.
(A sketch of the two helpers follows below.)
- Use above helpers
The involved functions are:
* steal_rbio_page()
* is_data_stripe_page()
* full_page_sectors_uptodate()
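A hedged sketch of the two helpers as pure math (plain parameters stand
in for the real rbio fields, and the ASSERT()s are omitted):

static unsigned int page_nr_to_sector_nr(unsigned int page_nr, u32 sectorsize)
{
	/* first fs block touched by this page; works for bs <= ps and bs > ps */
	return ((u64)page_nr << PAGE_SHIFT) / sectorsize;
}

static unsigned int page_nr_to_num_sectors(unsigned int page_nr, u32 sectorsize)
{
	if (sectorsize >= PAGE_SIZE)
		return 1;	/* the page covers (part of) one large block */
	return PAGE_SIZE / sectorsize;
}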
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
The function set_bio_pages_uptodate() assumes each fs block can be mapped
by one page, blocking bs > ps support for raid56.
Prepare it for bs > ps cases by:
- Update find_stripe_sector_nr() to check only the first step paddr
We don't need to check each paddr, as the bios are still aligned to fs
block size, thus checking the first step is enough.
- Use step size to iterate the bio
This means we only need to find the sector number for the first step
of each fs block, and skip the remaining part.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
The function verify_bio_data_sectors() assumes each fs block can be mapped
by one page, blocking bs > ps support for raid56.
Prepare it for bs > ps cases by:
- Make get_bio_sector_nr() consider bs > ps cases
The function is used to calculate the sector number of a device
bio submitted by the btrfs raid56 layer.
- Assemble a local paddrs[] for checksum calculation
- Open code btrfs_check_block_csum()
btrfs_check_block_csum() only supports fs blocks backed by large
folios.
But for raid56 we can have fs blocks backed by multiple non-contiguous
pages, e.g. direct IO, encoded read/write/send.
So instead of using btrfs_check_block_csum(), open code it to use
btrfs_calculate_block_csum_pages().
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
The function verify_one_sector() assumes each fs block can be mapped by
one page, blocking bs > ps support for raid56.
Prepare it for bs > ps cases by:
- Introduce helpers to get a paddrs pointer
Thankfully all the higher layer bios should still be aligned to the fs
block size, thus a fs block should still be fully covered by the bio.
Introduce sector_paddrs_in_rbio() and rbio_stripe_paddrs(), which will
return a paddrs pointer inside btrfs_raid_bio::bio_paddrs[] or
stripe_paddrs[].
The pointer can be directly passed to
btrfs_calculate_block_csum_pages() to verify the checksum.
- Open code btrfs_check_block_csum()
btrfs_check_block_csum() only supports fs blocks backed by large
folios.
But for raid56 we can have fs blocks backed by multiple non-contiguous
pages, e.g. direct IO, encoded read/write/send.
So instead of using btrfs_check_block_csum(), open code it to use
btrfs_calculate_block_csum_pages().
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
Currently recover_vertical() assumes that every fs block can be mapped
by one page; this is blocking bs > ps support for raid56.
Prepare recover_vertical() to support bs > ps cases by:
- Introduce recover_vertical_step() helper
Which will recover a full step (min(PAGE_SIZE, sectorsize)).
Now recover_vertical() will do the error check for the specified
sector, do the recovery step by step, then do the sector verification.
- Fix a spelling error of get_rbio_vertical_errors()
The old name has a typo: "veritical".
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
Unlike btrfs_calculate_block_csum_pages(), we cannot handle multiple
pages at the same time for P/Q generation.
So here we introduce a new @step_nr, and various helpers to grab the
sub-block page from the rbio, and generate the P/Q stripe page by page.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
Since we cannot ensure that all bios from the higher layer are backed by
large folios (e.g. direct IO, encoded read/write/send), we need the
ability to locate a sub-block (aka a page) inside a full stripe.
So the existing @stripe_nr + @sector_nr combination is not enough to
locate such a page for bs > ps cases.
Introduce a new parameter, @step_nr, to locate the page of a larger fs
block. The naming follows the conventions used elsewhere inside btrfs,
where one step is min(sectorsize, PAGE_SIZE).
It's still a preparation, only touching the following aspects:
- btrfs_dump_rbio()
To show the new @sector_nsteps member.
- btrfs_raid_bio::sector_nsteps
Recording how many steps there are inside a fs block.
- Enlarge btrfs_raid_bio::*_paddrs[] size
To take @sector_nsteps into consideration.
- index_one_bio()
- index_stripe_sectors()
- memcpy_from_bio_to_stripe()
- cache_rbio_pages()
- need_read_stripe_sectors()
Those functions are iterating *_paddrs[], which needs to take
sector_nsteps into consideration.
- Rename rbio_stripe_sector_index() to rbio_sector_index()
The "stripe" part is not that helpful.
Also add an extra ASSERT() before returning the result.
- Add a new rbio_paddr_index() helper
This will take the extra @step_nr into consideration (see the sketch
after this list).
- The comments of btrfs_raid_bio
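A hedged sketch of the index math behind rbio_paddr_index() (field and
assertion details are illustrative):

static unsigned int sketch_paddr_index(const struct btrfs_raid_bio *rbio,
				       unsigned int sector_nr,
				       unsigned int step_nr)
{
	ASSERT(step_nr < rbio->sector_nsteps);

	/* each fs block occupies sector_nsteps consecutive paddr slots */
	return sector_nr * rbio->sector_nsteps + step_nr;
}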
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
The structure needs to track both the pages from higher layer bios and
internal pages, thus it can be a little complex to grasp.
Add an overview of the structure, especially how we track different
pages from higher layer bios and internal ones, to save some time for
future developers.
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
|
|
This patch implements the KUnit-based set of
unit tests for HFS+ string operations. It checks the
functionality of hfsplus_strcasecmp(), hfsplus_strcmp(),
hfsplus_uni2asc(), hfsplus_asc2uni(), hfsplus_hash_dentry(),
and hfsplus_compare_dentry().
./tools/testing/kunit/kunit.py run --kunitconfig ./fs/hfsplus/.kunitconfig
[14:38:05] Configuring KUnit Kernel ...
[14:38:05] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make all compile_commands.json scripts_gdb ARCH=um O=.kunit --jobs=22
[14:38:09] Starting KUnit Kernel (1/1)...
[14:38:09] ============================================================
Running tests with:
$ .kunit/linux kunit.enable=1 mem=1G console=tty kunit_shutdown=halt
[14:38:09] ============== hfsplus_unicode (27 subtests) ===============
[14:38:09] [PASSED] hfsplus_strcasecmp_test
[14:38:09] [PASSED] hfsplus_strcmp_test
[14:38:09] [PASSED] hfsplus_unicode_edge_cases_test
[14:38:09] [PASSED] hfsplus_unicode_boundary_test
[14:38:09] [PASSED] hfsplus_uni2asc_basic_test
[14:38:09] [PASSED] hfsplus_uni2asc_special_chars_test
[14:38:09] [PASSED] hfsplus_uni2asc_buffer_test
[14:38:09] [PASSED] hfsplus_uni2asc_corrupted_test
[14:38:09] [PASSED] hfsplus_uni2asc_edge_cases_test
[14:38:09] [PASSED] hfsplus_asc2uni_basic_test
[14:38:09] [PASSED] hfsplus_asc2uni_special_chars_test
[14:38:09] [PASSED] hfsplus_asc2uni_buffer_limits_test
[14:38:09] [PASSED] hfsplus_asc2uni_edge_cases_test
[14:38:09] [PASSED] hfsplus_asc2uni_decompose_test
[14:38:09] [PASSED] hfsplus_hash_dentry_basic_test
[14:38:09] [PASSED] hfsplus_hash_dentry_casefold_test
[14:38:09] [PASSED] hfsplus_hash_dentry_special_chars_test
[14:38:09] [PASSED] hfsplus_hash_dentry_decompose_test
[14:38:09] [PASSED] hfsplus_hash_dentry_consistency_test
[14:38:09] [PASSED] hfsplus_hash_dentry_edge_cases_test
[14:38:09] [PASSED] hfsplus_compare_dentry_basic_test
[14:38:09] [PASSED] hfsplus_compare_dentry_casefold_test
[14:38:09] [PASSED] hfsplus_compare_dentry_special_chars_test
[14:38:09] [PASSED] hfsplus_compare_dentry_length_test
[14:38:09] [PASSED] hfsplus_compare_dentry_decompose_test
[14:38:09] [PASSED] hfsplus_compare_dentry_edge_cases_test
[14:38:09] [PASSED] hfsplus_compare_dentry_combined_flags_test
[14:38:09] ================= [PASSED] hfsplus_unicode =================
[14:38:09] ============================================================
[14:38:09] Testing complete. Ran 27 tests: passed: 27
[14:38:09] Elapsed time: 3.875s total, 0.001s configuring, 3.707s building, 0.115s running
v2: Rework memory management model.
Signed-off-by: Viacheslav Dubeyko <slava@dubeyko.com>
cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
cc: Yangtao Li <frank.li@vivo.com>
cc: linux-fsdevel@vger.kernel.org
Signed-off-by: Viacheslav Dubeyko <slava@dubeyko.com>
|
|
According to Dan Carpenter, smatch detects an issue with the size parameter
given to pci_rebar_size_supported():
drivers/pci/rebar.c:142 pci_rebar_size_supported()
error: undefined (user controlled) shift '(((1))) << size'
The problem is this call tree, which uses the 'size' from the user to shift
in BIT() without validating it:
__resource_resize_store # takes 'buf' from user sysfs write
kstrtoul(buf, 0, &size) # converts to unsigned long
pci_resize_resource # truncates to int
pci_rebar_size_supported # BIT(size) without validation
There could be similar problems also with pci_resize_resource() parameter
values coming from drivers.
Add 'size' validation to pci_rebar_size_supported().
There was no SZ_128T prior to this, so add one to be able to specify
the largest size supported by the kernel (the PCIe r7.0 spec already
defines sizes even beyond 128TB but the kernel does not yet support them).
The issue looks older than the introduction of pci_rebar_size_supported()
by bb1fabd0d94e ("PCI: Add pci_rebar_size_supported() helper").
It would also be nice to convert 'size' to unsigned everywhere, maybe even
u8, but that is left as further work.
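A hedged sketch of the validation (the ReBAR encoding is size n ==
2^(n + 20) bytes, so SZ_128T corresponds to 27; the exact bound and
helper name in the patch may differ):

static bool sketch_rebar_size_valid(int size)
{
	/* reject negative or too-large values before BIT(size) is formed */
	return size >= 0 && size <= ilog2(SZ_128T) - 20;
}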
Fixes: 8bb705e3e79d ("PCI: Add pci_resize_resource() for resizing BARs")
Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Closes: https://lore.kernel.org/r/aSA1WiRG3RuhqZMY@stanley.mountain/
Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
[bhelgaas: commit log, add report URL]
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://patch.msgid.link/20251124153740.2995-1-ilpo.jarvinen@linux.intel.com
|
|
Since v4.6 the BUDDY flag is set for _all_ pages in the block
and no longer just for the first one.
This change was introduced by:
commit 832fc1de01ae ("/proc/kpageflags: return KPF_BUDDY for "tail" buddy pages")
Strictly speaking, this was an ABI change, but as nobody has noticed
since 2016, let's just update the documentation.
Link: https://lkml.kernel.org/r/20251122211920.3410371-1-richard@nod.at
Signed-off-by: Richard Weinberger <richard@nod.at>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Cc: Vladimir Davydov <vdavydov@virtuozzo.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
|
|
The scan_swap_map_slots() helper has been removed, but several comments
still referred to it in swap allocation and reclaim paths. This patch
cleans up those outdated references and reflows the affected comment
blocks to match kernel coding style.
Link: https://lkml.kernel.org/r/20251031065011.40863-6-youngjun.park@lge.com
Signed-off-by: Youngjun Park <youngjun.park@lge.com>
Reviewed-by: Baoquan He <bhe@redhat.com>
Acked-by: Chris Li <chrisl@kernel.org>
Cc: Barry Song <baohua@kernel.org>
Cc: Kairui Song <kasong@tencent.com>
Cc: Kemeng Shi <shikemeng@huaweicloud.com>
Cc: Nhat Pham <nphamcs@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
|
|
swap_alloc_slow() does not need to return a bool, as all callers handle
allocation results via the entry parameter. Update the function signature
and remove return statements accordingly.
Link: https://lkml.kernel.org/r/20251031065011.40863-5-youngjun.park@lge.com
Signed-off-by: Youngjun Park <youngjun.park@lge.com>
Reviewed-by: Kairui Song <kasong@tencent.com>
Reviewed-by: Baoquan He <bhe@redhat.com>
Acked-by: Chris Li <chrisl@kernel.org>
Cc: Barry Song <baohua@kernel.org>
Cc: Kemeng Shi <shikemeng@huaweicloud.com>
Cc: Nhat Pham <nphamcs@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
|
|
The function now manages get/put_swap_device() internally, making the
comment explaining this behavior to callers unnecessary.
Link: https://lkml.kernel.org/r/20251031065011.40863-4-youngjun.park@lge.com
Signed-off-by: Youngjun Park <youngjun.park@lge.com>
Reviewed-by: Baoquan He <bhe@redhat.com>
Cc: Barry Song <baohua@kernel.org>
Cc: Chris Li <chrisl@kernel.org>
Cc: Kairui Song <kasong@tencent.com>
Cc: Kemeng Shi <shikemeng@huaweicloud.com>
Cc: Nhat Pham <nphamcs@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
|