| author | Peter Zijlstra <peterz@infradead.org> | 2026-02-24 17:36:01 +0100 |
|---|---|---|
| committer | Peter Zijlstra <peterz@infradead.org> | 2026-02-27 16:40:06 +0100 |
| commit | b7dd64778aa3f89de9afa1e81171cfe110ddc525 (patch) | |
| tree | 375c1b79b73330fb22205022be0b0a57b66bbd81 /kernel | |
| parent | c8cdb9b516407a0b8c653c9c1d6f0931c3864384 (diff) | |
hrtimer: Provide LAZY_REARM mode
The hrtick timer is frequently rearmed before expiry, and most of the time
the new expiry is past the armed one. As this happens on every context
switch, it becomes expensive with scheduling-heavy workloads, especially in
virtual machines, where the "hardware" reprogramming implies a VM exit.
Add a lazy rearm mode flag which skips the reprogramming if:
1) The timer was the first expiring timer before the rearm
2) The new expiry time is farther out than the armed time
This avoids a massive amount of reprogramming operations of the hrtick
timer at the price of eventually taking the already armed interrupt for
nothing.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Thomas Gleixner <tglx@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://patch.msgid.link/20260224163429.408524456@kernel.org
Diffstat (limited to 'kernel')
| -rw-r--r-- | kernel/time/hrtimer.c | 17 |
1 file changed, 16 insertions, 1 deletion
```diff
diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
index 67917ce696d4..e54f8b59f6b4 100644
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -1152,7 +1152,7 @@ static void __remove_hrtimer(struct hrtimer *timer,
	 * an superfluous call to hrtimer_force_reprogram() on the
	 * remote cpu later on if the same timer gets enqueued again.
	 */
-	if (reprogram && timer == cpu_base->next_timer)
+	if (reprogram && timer == cpu_base->next_timer && !timer->is_lazy)
		hrtimer_force_reprogram(cpu_base, 1);
 }
 
@@ -1322,6 +1322,20 @@ static int __hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
	}
 
	/*
+	 * Special case for the HRTICK timer. It is frequently rearmed and most
+	 * of the time moves the expiry into the future. That's expensive in
+	 * virtual machines and it's better to take the pointless already armed
+	 * interrupt than reprogramming the hardware on every context switch.
+	 *
+	 * If the new expiry is before the armed time, then reprogramming is
+	 * required.
+	 */
+	if (timer->is_lazy) {
+		if (new_base->cpu_base->expires_next <= hrtimer_get_expires(timer))
+			return 0;
+	}
+
+	/*
	 * Timer was forced to stay on the current CPU to avoid
	 * reprogramming on removal and enqueue. Force reprogram the
	 * hardware by evaluating the new first expiring timer.
@@ -1675,6 +1689,7 @@ static void __hrtimer_setup(struct hrtimer *timer,
	base += hrtimer_clockid_to_base(clock_id);
	timer->is_soft = softtimer;
	timer->is_hard = !!(mode & HRTIMER_MODE_HARD);
+	timer->is_lazy = !!(mode & HRTIMER_MODE_LAZY_REARM);
	timer->base = &cpu_base->clock_base[base];
	timerqueue_init(&timer->node);
```
