From 82c218750812d39b98e1dc42219c4dfd2d0b184d Mon Sep 17 00:00:00 2001
From: Frederic Weisbecker <frederic@kernel.org>
Date: Tue, 5 Apr 2022 03:07:51 +0200
Subject: [PATCH] rcutorture: Also force sched priority to timersd on
 boosting test.

References: bsc#1197720, SLE Realtime Extension
Patch-mainline: Queued in subsystem maintainer repository
Git-commit: 82c218750812d39b98e1dc42219c4dfd2d0b184d
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git

ksoftirqd is statically boosted to the priority level right above that of
rcu_torture_boost() so that timers, which torture readers rely on, get a
chance to run while rcu_torture_boost() is polling.

However, timer processing has since been split from ksoftirqd into its own
kthread (timersd), which isn't boosted. timersd runs at the same low
SCHED_FIFO priority as rcu_torture_boost(), so it can't preempt the booster
and the timers that torture readers depend on may starve.
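
For context, the rcu_torture_boost() kthreads run at the lowest real-time
priority, roughly along these lines (a sketch rather than the exact upstream
code; the value 1 is inferred from the boost value 2 used in the diff below):

	struct sched_param sp;

	/* rcu_torture_boost() makes itself a low-priority SCHED_FIFO task. */
	sp.sched_priority = 1;
	sched_setscheduler_nocheck(current, SCHED_FIFO, &sp);

ksoftirqd is pushed one level above this (priority 2), while timersd stays at
the booster's level and therefore never preempts it.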

The issue can be triggered in practice on v5.17.1-rt17 using:

	./kvm.sh --allcpus --configs TREE04 --duration 10m --kconfig "CONFIG_EXPERT=y CONFIG_PREEMPT_RT=y"

Fix this by statically boosting timersd, just as is done for ksoftirqd in
commit
   ea6d962e80b61 ("rcutorture: Judge RCU priority boosting on grace periods, not callbacks")
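
For reference, a minimal sketch of the pattern this patch applies to timersd
(names and values are taken from the diff below; the surrounding per-CPU loop
and the sched_param variable come from rcu_torture_init()'s existing
ksoftirqd handling):

	#ifdef CONFIG_PREEMPT_RT
		/*
		 * Boost the per-CPU timer softirq kthread one level above
		 * rcu_torture_boost() so pending timers can still expire
		 * while the booster spins.
		 */
		t = per_cpu(timersd, cpu);
		WARN_ON_ONCE(!t);
		sp.sched_priority = 2;
		sched_setscheduler_nocheck(t, SCHED_FIFO, &sp);
	#endif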

Suggested-by: Mel Gorman <mgorman@suse.de>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Link: https://lkml.kernel.org/r/20220405010752.1347437-1-frederic@kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Frederic Weisbecker <fweisbecker@suse.de>
---
 include/linux/interrupt.h | 1 +
 kernel/rcu/rcutorture.c   | 6 ++++++
 kernel/softirq.c          | 2 +-
 3 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index b19a3d00ba78..a6571e772d8b 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -624,6 +624,7 @@ extern void raise_softirq_irqoff(unsigned int nr);
 extern void raise_softirq(unsigned int nr);
 
 #ifdef CONFIG_PREEMPT_RT
+DECLARE_PER_CPU(struct task_struct *, timersd);
 extern void raise_timer_softirq(void);
 extern void raise_hrtimer_softirq(void);
 
diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
index 55d049c39608..de306d1406e9 100644
--- a/kernel/rcu/rcutorture.c
+++ b/kernel/rcu/rcutorture.c
@@ -3294,6 +3294,12 @@ rcu_torture_init(void)
 				WARN_ON_ONCE(!t);
 				sp.sched_priority = 2;
 				sched_setscheduler_nocheck(t, SCHED_FIFO, &sp);
+#ifdef CONFIG_PREEMPT_RT
+				t = per_cpu(timersd, cpu);
+				WARN_ON_ONCE(!t);
+				sp.sched_priority = 2;
+				sched_setscheduler_nocheck(t, SCHED_FIFO, &sp);
+#endif
 			}
 		}
 	}
diff --git a/kernel/softirq.c b/kernel/softirq.c
index 5b36ebe5e20d..7ee889ce570b 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -637,7 +637,7 @@ static inline void tick_irq_exit(void)
 #endif
 }
 
-static DEFINE_PER_CPU(struct task_struct *, timersd);
+DEFINE_PER_CPU(struct task_struct *, timersd);
 static DEFINE_PER_CPU(unsigned long, pending_timer_softirq);
 
 static unsigned int local_pending_timers(void)
-- 
2.25.1