author	Qu Wenruo <wqu@suse.com>	2026-02-20 10:13:38 +1030
committer	David Sterba <dsterba@suse.com>	2026-04-07 18:55:59 +0200
commit	6603a9859887ed325fea9fc9347c2d9e6cf3bbe3 (patch)
tree	45d8026694ba3f25624907c8b253b5277327da46
parent	b05342fe47b9828d004baf2b24cccd0479de54a5 (diff)
btrfs: do compressed bio size roundup and zeroing in one go
Currently we zero out all the remaining bytes of the last folio of the
compressed bio, then round the bio size up to the fs block boundary. But
that is done in two different functions: zero_last_folio() zeroes the
remaining bytes of the last folio, and round_up_last_block() rounds the
bio up to the fs block boundary.

There are some minor problems:

- zero_last_folio() zeroes ranges we won't submit

  This mostly affects block size < page size cases, where we can have a
  large folio (e.g. 64K) while the fs block size is only 4K. In that
  case we may only want to submit the first 4K of the folio; the
  remaining range does not matter, but we still zero it all. This
  causes unnecessary CPU usage just to zero out bytes we will never
  utilize.

- compressed_bio_last_folio() is called twice, from two different
  functions

  In theory we only need to call it once.

Enhance the situation by:

- Only zeroing bytes up to the fs block boundary

  This reduces some overhead for bs < ps cases.

- Moving the folio_zero_range() call into round_up_last_block()

  So that we can reuse the same folio returned by
  compressed_bio_last_folio().

Reviewed-by: Anand Jain <asj@kernel.org>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>