author	Chia-Ming Chang <chiamingc@synology.com>	2026-04-02 14:14:06 +0800
committer	Yu Kuai <yukuai@fnnas.com>	2026-04-07 15:13:52 +0800
commit	7f9f7c697474268d9ef9479df3ddfe7cdcfbbffc (patch)
tree	c2b48a5e89e2181dc4086ff48158ba1e45db2a1c
parent	cf86bb53b9c92354904a328e947a05ffbfdd1840 (diff)
md/raid5: fix soft lockup in retry_aligned_read()
When retry_aligned_read() encounters an overlapped stripe, it releases the stripe via raid5_release_stripe(), which puts it on the lockless released_stripes llist. In the next raid5d loop iteration, release_stripe_list() drains the stripe onto handle_list (since STRIPE_HANDLE is set by the original IO), but retry_aligned_read() runs before handle_active_stripes() and removes the stripe from handle_list via find_get_stripe() -> list_del_init(). This prevents handle_stripe() from ever processing the stripe to resolve the overlap, causing an infinite loop and a soft lockup.

Fix this by using __release_stripe() with temp_inactive_list instead of raid5_release_stripe() in the failure path, so the stripe does not go through the released_stripes llist. This allows raid5d to break out of its loop, and the overlap will be resolved when the stripe is eventually processed by handle_stripe().

Fixes: 773ca82fa1ee ("raid5: make release_stripe lockless")
Cc: stable@vger.kernel.org
Signed-off-by: FengWei Shih <dannyshih@synology.com>
Signed-off-by: Chia-Ming Chang <chiamingc@synology.com>
Link: https://lore.kernel.org/linux-raid/20260402061406.455755-1-chiamingc@synology.com/
Signed-off-by: Yu Kuai <yukuai@fnnas.com>
 drivers/md/raid5.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 1f8360d4cdb7..6e79829c5acb 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -6641,7 +6641,13 @@ static int retry_aligned_read(struct r5conf *conf, struct bio *raid_bio,
 		}
 		if (!add_stripe_bio(sh, raid_bio, dd_idx, 0, 0)) {
-			raid5_release_stripe(sh);
+			int hash;
+
+			spin_lock_irq(&conf->device_lock);
+			hash = sh->hash_lock_index;
+			__release_stripe(conf, sh,
+					 &conf->temp_inactive_list[hash]);
+			spin_unlock_irq(&conf->device_lock);
 			conf->retry_read_aligned = raid_bio;
 			conf->retry_read_offset = scnt;
 			return handled;