From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Tue, 5 Sep 2017 16:22:16 +0200
Subject: Add Anna-Maria's "Provide softirq context hrtimers" + RT fixups
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git
Git-commit: 3e71e14ef8bfec5651a13d6eeaec76b57e9eaf9a
Patch-mainline: Queued in subsystem maintainer repository
References: SLE12 Realtime Extension

The "Provide softirq context hrtimers" series includes the following patches:
  0001-hrtimer-Use-predefined-function-for-updating-next_ti.patch
  0002-hrtimer-Correct-blantanly-wrong-comment.patch
  0003-hrtimer-Fix-kerneldoc-for-struct-hrtimer_cpu_base.patch
  0004-hrtimer-Cleanup-clock-argument-in-schedule_hrtimeout.patch
  0005-hrtimer-Switch-for-loop-to-_ffs-evaluation.patch
  0006-hrtimer-Store-running-timer-in-hrtimer_clock_base.patch
  0007-hrtimer-Reduce-conditional-code-hres_active.patch
  0008-hrtimer-Reduce-conditional-code-expires_next-next_ti.patch
  0009-hrtimer-Reduce-conditional-code-hrtimer_reprogram.patch
  0010-hrtimer-Make-handling-of-hrtimer-reprogramming-and-e.patch
  0011-hrtimer-Allow-remote-hrtimer-enqueue-with-expires_ne.patch
  0012-hrtimer-Simplify-hrtimer_reprogram-call.patch
  0013-hrtimer-Split-out-code-from-hrtimer_start_range_ns-f.patch
  0014-hrtimer-Split-out-code-from-__hrtimer_get_next_event.patch
  0015-hrtimer-Add-clock-bases-for-soft-irq-context.patch
  0016-hrtimer-Allow-function-reuse-for-softirq-based-hrtim.patch
  0017-hrtimer-Implementation-of-softirq-hrtimer-handling.patch
  0018-hrtimer-Enable-soft-and-hard-hrtimer.patch
  0019-can-bcm-Replace-hrtimer_tasklet-with-softirq-based-h.patch
  0020-mac80211_hwsim-Replace-hrtimer-tasklet-with-softirq-.patch
  0021-xfrm-Replace-hrtimer-tasklet-with-softirq-hrtimer.patch
  0022-softirq-Remove-tasklet_hrtimer.patch

The integration of those patches removed the old "irqsafe" member and
the custom softirq infrastructure, which in turn allowed the following
patches to be dropped:
  KVM-lapic-mark-LAPIC-timer-handler-as-irqsafe.patch
  kernel-perf-mark-perf_cpu_context-s-timer-as-irqsafe.patch
  perf-make-swevent-hrtimer-irqsafe.patch
  sched-deadline-dl_task_timer-has-to-be-irqsafe.patch
  tick-broadcast--Make-hrtimer-irqsafe.patch
  hrtimer-enfore-64byte-alignment.patch
  hrtimer-fixup-hrtimer-callback-changes-for-preempt-r.patch
  kernel-hrtimer-don-t-wakeup-a-process-while-holding-.patch
  kernel-hrtimer-hotplug-don-t-wake-ktimersoftd-while-.patch
  kernel-hrtimer-migrate-deferred-timer-on-CPU-down.patch
  timer-hrtimer-check-properly-for-a-running-timer.patch

The "old" RT behaviour, where most hrtimers are moved into softirq
context by default, is preserved by
  hrtimer-consolidate-hrtimer_init-hrtimer_init_sleepe.patch
  hrtimer-by-timers-by-default-into-the-softirq-context.patch

and by updating
  hrtimers-prepare-full-preemption.patch

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Mike Galbraith <mgalbraith@suse.de>
---
 arch/x86/kvm/lapic.c                  |    3 
 block/blk-mq.c                        |    3 
 drivers/net/wireless/mac80211_hwsim.c |   44 --
 include/linux/hrtimer.h               |  116 +++--
 include/linux/interrupt.h             |   25 -
 include/linux/wait.h                  |    4 
 include/net/xfrm.h                    |    2 
 kernel/events/core.c                  |    6 
 kernel/futex.c                        |   19 
 kernel/sched/core.c                   |    3 
 kernel/sched/deadline.c               |    3 
 kernel/sched/rt.c                     |    5 
 kernel/softirq.c                      |   51 --
 kernel/time/hrtimer.c                 |  740 ++++++++++++++++++----------------
 kernel/time/tick-broadcast-hrtimer.c  |    3 
 kernel/time/tick-sched.c              |    3 
 kernel/watchdog.c                     |    3 
 net/can/bcm.c                         |  150 ++----
 net/core/pktgen.c                     |    4 
 net/xfrm/xfrm_state.c                 |   29 -
 20 files changed, 573 insertions(+), 643 deletions(-)

--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -2184,10 +2184,9 @@ int kvm_create_lapic(struct kvm_vcpu *vc
 	}
 	apic->vcpu = vcpu;
 
-	hrtimer_init(&apic->lapic_timer.timer, CLOCK_MONOTONIC,
+	hrtimer_init(&apic->lapic_timer.timer, CLOCK_MONOTONIC_HARD,
 		     HRTIMER_MODE_ABS_PINNED);
 	apic->lapic_timer.timer.function = apic_timer_fn;
-	apic->lapic_timer.timer.irqsafe = 1;
 
 	/*
 	 * APIC is created enabled. This will prevent kvm_lapic_set_base from
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3230,10 +3230,9 @@ static bool blk_mq_poll_hybrid_sleep(str
 	kt = nsecs;
 
 	mode = HRTIMER_MODE_REL;
-	hrtimer_init_on_stack(&hs.timer, CLOCK_MONOTONIC, mode);
+	hrtimer_init_sleeper_on_stack(&hs, CLOCK_MONOTONIC, mode, current);
 	hrtimer_set_expires(&hs.timer, kt);
 
-	hrtimer_init_sleeper(&hs, current);
 	do {
 		if (blk_mq_rq_state(rq) == MQ_RQ_COMPLETE)
 			break;
--- a/drivers/net/wireless/mac80211_hwsim.c
+++ b/drivers/net/wireless/mac80211_hwsim.c
@@ -545,7 +545,7 @@ struct mac80211_hwsim_data {
 	unsigned int rx_filter;
 	bool started, idle, scanning;
 	struct mutex mutex;
-	struct tasklet_hrtimer beacon_timer;
+	struct hrtimer beacon_timer;
 	enum ps_mode {
 		PS_DISABLED, PS_ENABLED, PS_AUTO_POLL, PS_MANUAL_POLL
 	} ps;
@@ -1479,7 +1479,7 @@ static void mac80211_hwsim_stop(struct i
 {
 	struct mac80211_hwsim_data *data = hw->priv;
 	data->started = false;
-	tasklet_hrtimer_cancel(&data->beacon_timer);
+	hrtimer_cancel(&data->beacon_timer);
 	wiphy_dbg(hw->wiphy, "%s\n", __func__);
 }
 
@@ -1602,14 +1602,12 @@ static enum hrtimer_restart
 mac80211_hwsim_beacon(struct hrtimer *timer)
 {
 	struct mac80211_hwsim_data *data =
-		container_of(timer, struct mac80211_hwsim_data,
-			     beacon_timer.timer);
+		container_of(timer, struct mac80211_hwsim_data, beacon_timer);
 	struct ieee80211_hw *hw = data->hw;
 	u64 bcn_int = data->beacon_int;
-	ktime_t next_bcn;
 
 	if (!data->started)
-		goto out;
+		return HRTIMER_NORESTART;
 
 	ieee80211_iterate_active_interfaces_atomic(
 		hw, IEEE80211_IFACE_ITER_NORMAL,
@@ -1621,11 +1619,9 @@ mac80211_hwsim_beacon(struct hrtimer *ti
 		data->bcn_delta = 0;
 	}
 
-	next_bcn = ktime_add(hrtimer_get_expires(timer),
-			     ns_to_ktime(bcn_int * 1000));
-	tasklet_hrtimer_start(&data->beacon_timer, next_bcn, HRTIMER_MODE_ABS);
-out:
-	return HRTIMER_NORESTART;
+	hrtimer_forward(&data->beacon_timer, hrtimer_get_expires(timer),
+			ns_to_ktime(bcn_int * NSEC_PER_USEC));
+	return HRTIMER_RESTART;
 }
 
 static const char * const hwsim_chanwidths[] = {
@@ -1699,15 +1695,15 @@ static int mac80211_hwsim_config(struct
 	mutex_unlock(&data->mutex);
 
 	if (!data->started || !data->beacon_int)
-		tasklet_hrtimer_cancel(&data->beacon_timer);
-	else if (!hrtimer_is_queued(&data->beacon_timer.timer)) {
+		hrtimer_cancel(&data->beacon_timer);
+	else if (!hrtimer_is_queued(&data->beacon_timer)) {
 		u64 tsf = mac80211_hwsim_get_tsf(hw, NULL);
 		u32 bcn_int = data->beacon_int;
 		u64 until_tbtt = bcn_int - do_div(tsf, bcn_int);
 
-		tasklet_hrtimer_start(&data->beacon_timer,
-				      ns_to_ktime(until_tbtt * 1000),
-				      HRTIMER_MODE_REL);
+		hrtimer_start(&data->beacon_timer,
+			      ns_to_ktime(until_tbtt * 1000),
+			      HRTIMER_MODE_REL);
 	}
 
 	return 0;
@@ -1770,7 +1766,7 @@ static void mac80211_hwsim_bss_info_chan
 			  info->enable_beacon, info->beacon_int);
 		vp->bcn_en = info->enable_beacon;
 		if (data->started &&
-		    !hrtimer_is_queued(&data->beacon_timer.timer) &&
+		    !hrtimer_is_queued(&data->beacon_timer) &&
 		    info->enable_beacon) {
 			u64 tsf, until_tbtt;
 			u32 bcn_int;
@@ -1778,9 +1774,9 @@ static void mac80211_hwsim_bss_info_chan
 			tsf = mac80211_hwsim_get_tsf(hw, vif);
 			bcn_int = data->beacon_int;
 			until_tbtt = bcn_int - do_div(tsf, bcn_int);
-			tasklet_hrtimer_start(&data->beacon_timer,
-					      ns_to_ktime(until_tbtt * 1000),
-					      HRTIMER_MODE_REL);
+			hrtimer_start(&data->beacon_timer,
+				      ns_to_ktime(until_tbtt * 1000),
+				      HRTIMER_MODE_REL);
 		} else if (!info->enable_beacon) {
 			unsigned int count = 0;
 			ieee80211_iterate_active_interfaces_atomic(
@@ -1789,7 +1785,7 @@ static void mac80211_hwsim_bss_info_chan
 			wiphy_dbg(hw->wiphy, "  beaconing vifs remaining: %u",
 				  count);
 			if (count == 0) {
-				tasklet_hrtimer_cancel(&data->beacon_timer);
+				hrtimer_cancel(&data->beacon_timer);
 				data->beacon_int = 0;
 			}
 		}
@@ -2878,9 +2874,9 @@ static int mac80211_hwsim_new_radio(stru
 
 	wiphy_ext_feature_set(hw->wiphy, NL80211_EXT_FEATURE_CQM_RSSI_LIST);
 
-	tasklet_hrtimer_init(&data->beacon_timer,
-			     mac80211_hwsim_beacon,
-			     CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
+	hrtimer_init(&data->beacon_timer, CLOCK_MONOTONIC_SOFT,
+		     HRTIMER_MODE_ABS);
+	data->beacon_timer.function = mac80211_hwsim_beacon;
 
 	err = ieee80211_register_hw(hw);
 	if (err < 0) {
--- a/include/linux/hrtimer.h
+++ b/include/linux/hrtimer.h
@@ -24,6 +24,23 @@
 #include <linux/timerqueue.h>
 #include <linux/wait.h>
 
+/*
+ * Clock ids for hrtimers which expire in softirq context. These clock ids
+ * are kernel internal and never exported to user space.
+ */
+#define HRTIMER_BASE_SOFT_MASK	MAX_CLOCKS
+#define HRTIMER_BASE_HARD_MASK	(MAX_CLOCKS << 1)
+
+#define CLOCK_REALTIME_SOFT	(CLOCK_REALTIME	 | HRTIMER_BASE_SOFT_MASK)
+#define CLOCK_MONOTONIC_SOFT	(CLOCK_MONOTONIC | HRTIMER_BASE_SOFT_MASK)
+#define CLOCK_BOOTTIME_SOFT	(CLOCK_BOOTTIME	 | HRTIMER_BASE_SOFT_MASK)
+#define CLOCK_TAI_SOFT		(CLOCK_TAI	 | HRTIMER_BASE_SOFT_MASK)
+
+#define CLOCK_REALTIME_HARD	(CLOCK_REALTIME	 | HRTIMER_BASE_HARD_MASK)
+#define CLOCK_MONOTONIC_HARD	(CLOCK_MONOTONIC | HRTIMER_BASE_HARD_MASK)
+#define CLOCK_BOOTTIME_HARD	(CLOCK_BOOTTIME	 | HRTIMER_BASE_HARD_MASK)
+#define CLOCK_TAI_HARD		(CLOCK_TAI	 | HRTIMER_BASE_HARD_MASK)
+
 struct hrtimer_clock_base;
 struct hrtimer_cpu_base;
 
@@ -86,8 +103,6 @@ enum hrtimer_restart {
  *		was armed.
  * @function:	timer expiry callback function
  * @base:	pointer to the timer base (per cpu and per clock)
- * @cb_entry:	list entry to defer timers from hardirq context
- * @irqsafe:	timer can run in hardirq context
  * @state:	state information (See bit values above)
  * @is_rel:	Set if the timer was armed relative
  *
@@ -98,8 +113,6 @@ struct hrtimer {
 	ktime_t				_softexpires;
 	enum hrtimer_restart		(*function)(struct hrtimer *);
 	struct hrtimer_clock_base	*base;
-	struct list_head		cb_entry;
-	int				irqsafe;
 	u8				state;
 	u8				is_rel;
 };
@@ -116,7 +129,11 @@ struct hrtimer_sleeper {
 	struct task_struct *task;
 };
 
-# define HRTIMER_CLOCK_BASE_ALIGN	64
+#ifdef CONFIG_64BIT
+# define __hrtimer_clock_base_align	____cacheline_aligned
+#else
+# define __hrtimer_clock_base_align
+#endif
 
 /**
  * struct hrtimer_clock_base - the timer base for a specific clock
@@ -124,43 +141,46 @@ struct hrtimer_sleeper {
  * @index:		clock type index for per_cpu support when moving a
  *			timer to a base on another cpu.
  * @clockid:		clock id for per_cpu support
+ * @seq:		seqcount around __run_hrtimer
+ * @running:		pointer to the currently running hrtimer
  * @active:		red black tree root node for the active timers
- * @expired:		list head for deferred timers.
  * @get_time:		function to retrieve the current time of the clock
  * @offset:		offset of this clock to the monotonic base
  */
 struct hrtimer_clock_base {
 	struct hrtimer_cpu_base	*cpu_base;
-	int			index;
+	unsigned int		index;
 	clockid_t		clockid;
+	seqcount_t		seq;
+	struct hrtimer		*running;
 	struct timerqueue_head	active;
-	struct list_head	expired;
 	ktime_t			(*get_time)(void);
 	ktime_t			offset;
-} __attribute__((__aligned__(HRTIMER_CLOCK_BASE_ALIGN)));
+} __hrtimer_clock_base_align;
 
 enum  hrtimer_base_type {
 	HRTIMER_BASE_MONOTONIC,
 	HRTIMER_BASE_REALTIME,
 	HRTIMER_BASE_BOOTTIME,
 	HRTIMER_BASE_TAI,
+	HRTIMER_BASE_MONOTONIC_SOFT,
+	HRTIMER_BASE_REALTIME_SOFT,
+	HRTIMER_BASE_BOOTTIME_SOFT,
+	HRTIMER_BASE_TAI_SOFT,
 	HRTIMER_MAX_CLOCK_BASES,
 };
 
-/*
+/**
  * struct hrtimer_cpu_base - the per cpu clock bases
  * @lock:		lock protecting the base and associated clock bases
  *			and timers
- * @seq:		seqcount around __run_hrtimer
- * @running:		pointer to the currently running hrtimer
 * @cpu:		cpu number
 * @active_bases:	Bitfield to mark bases with active timers
 * @clock_was_set_seq:	Sequence counter of clock was set events
 * @migration_enabled:	The migration of hrtimers to other cpus is enabled
 * @nohz_active:	The nohz functionality is enabled
- * @expires_next:	absolute time of the next event which was scheduled
- *			via clock_set_next_event()
- * @next_timer:		Pointer to the first expiring timer
+ * @softirq_activated:	displays, if the softirq is raised - update of softirq
+ *			related settings is not required then.
 * @in_hrtirq:		hrtimer_interrupt() is currently executing
 * @hres_active:	State of high resolution mode
 * @hang_detected:	The last hrtimer interrupt detected a hang
@@ -168,6 +188,11 @@ enum  hrtimer_base_type {
 * @nr_retries:		Total number of hrtimer interrupt retries
 * @nr_hangs:		Total number of hrtimer interrupt hangs
 * @max_hang_time:	Maximum time spent in hrtimer_interrupt
+ * @expires_next:	absolute time of the next event, is required for remote
+ *			hrtimer enqueue; it is the total first expiry time (hard
+ *			and soft hrtimer are taken into account)
+ * @next_timer:		Pointer to the first expiring timer
+ * @softirq_expires_next: Time to check, if soft queues needs also to be expired
 * @clock_base:		array of clock bases for this cpu
 *
 * Note: next_timer is just an optimization for __remove_hrtimer().
@@ -176,25 +201,24 @@ enum  hrtimer_base_type {
 */
 struct hrtimer_cpu_base {
 	raw_spinlock_t			lock;
-	seqcount_t			seq;
-	struct hrtimer			*running;
-	struct hrtimer			*running_soft;
 	unsigned int			cpu;
 	unsigned int			active_bases;
 	unsigned int			clock_was_set_seq;
 	bool				migration_enabled;
 	bool				nohz_active;
-#ifdef CONFIG_HIGH_RES_TIMERS
-	unsigned int			in_hrtirq	: 1,
-					hres_active	: 1,
+	bool				softirq_activated;
+	unsigned int			hres_active	: 1,
+					in_hrtirq	: 1,
 					hang_detected	: 1;
-	ktime_t				expires_next;
-	struct hrtimer			*next_timer;
+#ifdef CONFIG_HIGH_RES_TIMERS
 	unsigned int			nr_events;
 	unsigned int			nr_retries;
 	unsigned int			nr_hangs;
 	unsigned int			max_hang_time;
 #endif
+	ktime_t				expires_next;
+	struct hrtimer			*next_timer;
+	ktime_t				softirq_expires_next;
 #ifdef CONFIG_PREEMPT_RT_BASE
 	wait_queue_head_t		wait;
 #endif
@@ -203,8 +227,6 @@ struct hrtimer_cpu_base {
 
 static inline void hrtimer_set_expires(struct hrtimer *timer, ktime_t time)
 {
-	BUILD_BUG_ON(sizeof(struct hrtimer_clock_base) > HRTIMER_CLOCK_BASE_ALIGN);
-
 	timer->node.expires = time;
 	timer->_softexpires = time;
 }
@@ -273,16 +295,16 @@ static inline ktime_t hrtimer_cb_get_tim
 	return timer->base->get_time();
 }
 
-#ifdef CONFIG_HIGH_RES_TIMERS
-struct clock_event_device;
-
-extern void hrtimer_interrupt(struct clock_event_device *dev);
-
 static inline int hrtimer_is_hres_active(struct hrtimer *timer)
 {
 	return timer->base->cpu_base->hres_active;
 }
 
+#ifdef CONFIG_HIGH_RES_TIMERS
+struct clock_event_device;
+
+extern void hrtimer_interrupt(struct clock_event_device *dev);
+
 /*
  * The resolution of the clocks. The resolution value is returned in
  * the clock_getres() system call to give application programmers an
@@ -305,11 +327,6 @@ extern unsigned int hrtimer_resolution;
 
 #define hrtimer_resolution	(unsigned int)LOW_RES_NSEC
 
-static inline int hrtimer_is_hres_active(struct hrtimer *timer)
-{
-	return 0;
-}
-
 static inline void clock_was_set_delayed(void) { }
 
 #endif
@@ -351,10 +368,17 @@ DECLARE_PER_CPU(struct tick_device, tick
 /* Initialize timers: */
 extern void hrtimer_init(struct hrtimer *timer, clockid_t which_clock,
 			 enum hrtimer_mode mode);
+extern void hrtimer_init_sleeper(struct hrtimer_sleeper *sl, clockid_t clock_id,
+				 enum hrtimer_mode mode,
+				 struct task_struct *task);
 
 #ifdef CONFIG_DEBUG_OBJECTS_TIMERS
 extern void hrtimer_init_on_stack(struct hrtimer *timer, clockid_t which_clock,
 				  enum hrtimer_mode mode);
+extern void hrtimer_init_sleeper_on_stack(struct hrtimer_sleeper *sl,
+					  clockid_t clock_id,
+					  enum hrtimer_mode mode,
+					  struct task_struct *task);
 
 extern void destroy_hrtimer_on_stack(struct hrtimer *timer);
 #else
@@ -364,6 +388,15 @@ static inline void hrtimer_init_on_stack
 {
 	hrtimer_init(timer, which_clock, mode);
 }
+
+static inline void hrtimer_init_sleeper_on_stack(struct hrtimer_sleeper *sl,
+					    clockid_t clock_id,
+					    enum hrtimer_mode mode,
+					    struct task_struct *task)
+{
+	hrtimer_init_sleeper(sl, clock_id, mode, task);
+}
+
 static inline void destroy_hrtimer_on_stack(struct hrtimer *timer) { }
 #endif
 
@@ -442,13 +475,7 @@ static inline bool hrtimer_is_queued(str
 */
 static inline int hrtimer_callback_running(const struct hrtimer *timer)
 {
-	if (timer->base->cpu_base->running == timer)
-		return 1;
-#ifdef CONFIG_PREEMPT_RT_BASE
-	if (timer->base->cpu_base->running_soft == timer)
-		return 1;
-#endif
-	return 0;
+	return timer->base->running == timer;
 }
 
 /* Forward a hrtimer so it expires after now: */
@@ -484,15 +511,12 @@ extern long hrtimer_nanosleep(struct tim
 			      const clockid_t clockid);
 extern long hrtimer_nanosleep_restart(struct restart_block *restart_block);
 
-extern void hrtimer_init_sleeper(struct hrtimer_sleeper *sl,
-				 struct task_struct *tsk);
-
 extern int schedule_hrtimeout_range(ktime_t *expires, u64 delta,
						const enum hrtimer_mode mode);
 extern int schedule_hrtimeout_range_clock(ktime_t *expires,
					  u64 delta,
					  const enum hrtimer_mode mode,
-					  int clock);
+					  clockid_t clock_id);
 extern int schedule_hrtimeout(ktime_t *expires, const enum hrtimer_mode mode);
 
 /* Soft interrupt function to run the hrtimer queues: */
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -653,31 +653,6 @@ extern void tasklet_kill_immediate(struc
 extern void tasklet_init(struct tasklet_struct *t,
			 void (*func)(unsigned long), unsigned long data);
 
-struct tasklet_hrtimer {
-	struct hrtimer		timer;
-	struct tasklet_struct	tasklet;
-	enum hrtimer_restart	(*function)(struct hrtimer *);
-};
-
-extern void
-tasklet_hrtimer_init(struct tasklet_hrtimer *ttimer,
-		     enum hrtimer_restart (*function)(struct hrtimer *),
-		     clockid_t which_clock, enum hrtimer_mode mode);
-
-static inline
-void tasklet_hrtimer_start(struct tasklet_hrtimer *ttimer, ktime_t time,
-			   const enum hrtimer_mode mode)
-{
-	hrtimer_start(&ttimer->timer, time, mode);
-}
-
-static inline
-void tasklet_hrtimer_cancel(struct tasklet_hrtimer *ttimer)
-{
-	hrtimer_cancel(&ttimer->timer);
-	tasklet_kill(&ttimer->tasklet);
-}
-
 #ifdef CONFIG_PREEMPT_RT_FULL
 extern void softirq_early_init(void);
 #else
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -486,8 +486,8 @@ do {										\
 	int __ret = 0;								\
 	struct hrtimer_sleeper __t;						\
										\
-	hrtimer_init_on_stack(&__t.timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);	\
-	hrtimer_init_sleeper(&__t, current);					\
+	hrtimer_init_sleeper_on_stack(&__t, CLOCK_MONOTONIC,			\
+				      HRTIMER_MODE_REL, current);		\
 	if ((timeout) != KTIME_MAX)						\
 		hrtimer_start_range_ns(&__t.timer, timeout,			\
				       current->timer_slack_ns,			\
--- a/include/net/xfrm.h
+++ b/include/net/xfrm.h
@@ -214,7 +214,7 @@ struct xfrm_state {
 	struct xfrm_stats	stats;
 
 	struct xfrm_lifetime_cur curlft;
-	struct tasklet_hrtimer	mtimer;
+	struct hrtimer		mtimer;
 
 	struct xfrm_state_offload xso;
 
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -1084,9 +1084,8 @@ static void __perf_mux_hrtimer_init(stru
 	cpuctx->hrtimer_interval = ns_to_ktime(NSEC_PER_MSEC * interval);
 
 	raw_spin_lock_init(&cpuctx->hrtimer_lock);
-	hrtimer_init(timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_PINNED);
+	hrtimer_init(timer, CLOCK_MONOTONIC_HARD, HRTIMER_MODE_ABS_PINNED);
 	timer->function = perf_mux_hrtimer_handler;
-	timer->irqsafe = 1;
 }
 
 static int perf_mux_hrtimer_restart(struct perf_cpu_context *cpuctx)
@@ -9122,9 +9121,8 @@ static void perf_swevent_init_hrtimer(st
 	if (!is_sampling_event(event))
 		return;
 
-	hrtimer_init(&hwc->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+	hrtimer_init(&hwc->hrtimer, CLOCK_MONOTONIC_HARD, HRTIMER_MODE_REL);
 	hwc->hrtimer.function = perf_swevent_hrtimer;
-	hwc->hrtimer.irqsafe = 1;
 
 	/*
 	 * Since hrtimers have a fixed rate, we can do a static freq->period
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -2677,10 +2677,9 @@ static int futex_wait(u32 __user *uaddr,
 	if (abs_time) {
 		to = &timeout;
 
-		hrtimer_init_on_stack(&to->timer, (flags & FLAGS_CLOCKRT) ?
-				      CLOCK_REALTIME : CLOCK_MONOTONIC,
-				      HRTIMER_MODE_ABS);
-		hrtimer_init_sleeper(to, current);
+		hrtimer_init_sleeper_on_stack(to, (flags & FLAGS_CLOCKRT) ?
+					      CLOCK_REALTIME : CLOCK_MONOTONIC,
+					      HRTIMER_MODE_ABS, current);
 		hrtimer_set_expires_range_ns(&to->timer, *abs_time,
 					     current->timer_slack_ns);
 	}
@@ -2776,9 +2775,8 @@ static int futex_lock_pi(u32 __user *uad
 
 	if (time) {
 		to = &timeout;
-		hrtimer_init_on_stack(&to->timer, CLOCK_REALTIME,
-				      HRTIMER_MODE_ABS);
-		hrtimer_init_sleeper(to, current);
+		hrtimer_init_sleeper_on_stack(to, CLOCK_REALTIME,
+					      HRTIMER_MODE_ABS, current);
 		hrtimer_set_expires(&to->timer, *time);
 	}
 
@@ -3196,10 +3194,9 @@ static int futex_wait_requeue_pi(u32 __u
 
 	if (abs_time) {
 		to = &timeout;
-		hrtimer_init_on_stack(&to->timer, (flags & FLAGS_CLOCKRT) ?
-				      CLOCK_REALTIME : CLOCK_MONOTONIC,
-				      HRTIMER_MODE_ABS);
-		hrtimer_init_sleeper(to, current);
+		hrtimer_init_sleeper_on_stack(to, (flags & FLAGS_CLOCKRT) ?
+					      CLOCK_REALTIME : CLOCK_MONOTONIC,
+					      HRTIMER_MODE_ABS, current);
 		hrtimer_set_expires_range_ns(&to->timer, *abs_time,
 					     current->timer_slack_ns);
 	}
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -344,9 +344,8 @@ static void init_rq_hrtick(struct rq *rq
 	rq->hrtick_csd.info = rq;
 #endif
 
-	hrtimer_init(&rq->hrtick_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+	hrtimer_init(&rq->hrtick_timer, CLOCK_MONOTONIC_HARD, HRTIMER_MODE_REL);
 	rq->hrtick_timer.function = hrtick;
-	rq->hrtick_timer.irqsafe = 1;
 }
 #else	/* CONFIG_SCHED_HRTICK */
 static inline void hrtick_clear(struct rq *rq)
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -689,9 +689,8 @@ void init_dl_task_timer(struct sched_dl_
 {
 	struct hrtimer *timer = &dl_se->dl_timer;
 
-	hrtimer_init(timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+	hrtimer_init(timer, CLOCK_MONOTONIC_HARD, HRTIMER_MODE_REL);
 	timer->function = dl_task_timer;
-	timer->irqsafe = 1;
 }
 
 /*
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -46,9 +46,8 @@ void init_rt_bandwidth(struct rt_bandwid
 
 	raw_spin_lock_init(&rt_b->rt_runtime_lock);
 
-	hrtimer_init(&rt_b->rt_period_timer,
-			CLOCK_MONOTONIC, HRTIMER_MODE_REL);
-	rt_b->rt_period_timer.irqsafe = 1;
+	hrtimer_init(&rt_b->rt_period_timer, CLOCK_MONOTONIC_HARD,
+		     HRTIMER_MODE_REL);
 	rt_b->rt_period_timer.function = sched_rt_period_timer;
 }
 
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -1095,57 +1095,6 @@ void tasklet_kill(struct tasklet_struct
 }
 EXPORT_SYMBOL(tasklet_kill);
 
-/*
- * tasklet_hrtimer
- */
-
-/*
- * The trampoline is called when the hrtimer expires. It schedules a tasklet
- * to run __tasklet_hrtimer_trampoline() which in turn will call the intended
- * hrtimer callback, but from softirq context.
- */
-static enum hrtimer_restart __hrtimer_tasklet_trampoline(struct hrtimer *timer)
-{
-	struct tasklet_hrtimer *ttimer =
-		container_of(timer, struct tasklet_hrtimer, timer);
-
-	tasklet_hi_schedule(&ttimer->tasklet);
-	return HRTIMER_NORESTART;
-}
-
-/*
- * Helper function which calls the hrtimer callback from
- * tasklet/softirq context
- */
-static void __tasklet_hrtimer_trampoline(unsigned long data)
-{
-	struct tasklet_hrtimer *ttimer = (void *)data;
-	enum hrtimer_restart restart;
-
-	restart = ttimer->function(&ttimer->timer);
-	if (restart != HRTIMER_NORESTART)
-		hrtimer_restart(&ttimer->timer);
-}
-
-/**
- * tasklet_hrtimer_init - Init a tasklet/hrtimer combo for softirq callbacks
- * @ttimer:	 tasklet_hrtimer which is initialized
- * @function:	 hrtimer callback function which gets called from softirq context
- * @which_clock: clock id (CLOCK_MONOTONIC/CLOCK_REALTIME)
- * @mode:	 hrtimer mode (HRTIMER_MODE_ABS/HRTIMER_MODE_REL)
- */
-void tasklet_hrtimer_init(struct tasklet_hrtimer *ttimer,
-			  enum hrtimer_restart (*function)(struct hrtimer *),
-			  clockid_t which_clock, enum hrtimer_mode mode)
-{
-	hrtimer_init(&ttimer->timer, which_clock, mode);
-	ttimer->timer.function = __hrtimer_tasklet_trampoline;
-	tasklet_init(&ttimer->tasklet, __tasklet_hrtimer_trampoline,
-		     (unsigned long)ttimer);
-	ttimer->function = function;
-}
-EXPORT_SYMBOL_GPL(tasklet_hrtimer_init);
-
 void __init softirq_init(void)
 {
 	int cpu;
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -59,6 +59,14 @@
 #include "tick-internal.h"
 
 /*
+ * Masks for selecting the soft and hard context timers from
+ * cpu_base->active
+ */
+#define MASK_SHIFT		(HRTIMER_BASE_MONOTONIC_SOFT)
+#define HRTIMER_ACTIVE_HARD	((1U << MASK_SHIFT) - 1)
+#define HRTIMER_ACTIVE_SOFT	(HRTIMER_ACTIVE_HARD << MASK_SHIFT)
+
+/*
  * The timer bases:
 *
 * There are more clockids than hrtimer bases. Thus, we index
@@ -69,7 +77,6 @@
 DEFINE_PER_CPU(struct hrtimer_cpu_base, hrtimer_bases) =
 {
 	.lock = __RAW_SPIN_LOCK_UNLOCKED(hrtimer_bases.lock),
-	.seq = SEQCNT_ZERO(hrtimer_bases.seq),
 	.clock_base =
 	{
 		{
@@ -92,17 +99,55 @@ DEFINE_PER_CPU(struct hrtimer_cpu_base,
 			.clockid = CLOCK_TAI,
 			.get_time = &ktime_get_clocktai,
 		},
+		{
+			.index = HRTIMER_BASE_MONOTONIC_SOFT,
+			.clockid = CLOCK_MONOTONIC_SOFT,
+			.get_time = &ktime_get,
+		},
+		{
+			.index = HRTIMER_BASE_REALTIME_SOFT,
+			.clockid = CLOCK_REALTIME_SOFT,
+			.get_time = &ktime_get_real,
+		},
+		{
+			.index = HRTIMER_BASE_BOOTTIME_SOFT,
+			.clockid = CLOCK_BOOTTIME_SOFT,
+			.get_time = &ktime_get_boottime,
+		},
+		{
+			.index = HRTIMER_BASE_TAI_SOFT,
+			.clockid = CLOCK_TAI_SOFT,
+			.get_time = &ktime_get_clocktai,
+		},
 	}
 };
 
-static const int hrtimer_clock_to_base_table[MAX_CLOCKS] = {
+#define MAX_CLOCKS_HRT		(MAX_CLOCKS * 3)
+
+static const int hrtimer_clock_to_base_table[MAX_CLOCKS_HRT] = {
 	/* Make sure we catch unsupported clockids */
-	[0 ... MAX_CLOCKS - 1]	= HRTIMER_MAX_CLOCK_BASES,
+	[0 ... MAX_CLOCKS_HRT - 1]	= HRTIMER_MAX_CLOCK_BASES,
 
-	[CLOCK_REALTIME]	= HRTIMER_BASE_REALTIME,
-	[CLOCK_MONOTONIC]	= HRTIMER_BASE_MONOTONIC,
-	[CLOCK_BOOTTIME]	= HRTIMER_BASE_BOOTTIME,
-	[CLOCK_TAI]		= HRTIMER_BASE_TAI,
+#ifdef CONFIG_PREEMPT_RT_FULL
+	[CLOCK_REALTIME]		= HRTIMER_BASE_REALTIME_SOFT,
+	[CLOCK_MONOTONIC]		= HRTIMER_BASE_MONOTONIC_SOFT,
+	[CLOCK_BOOTTIME]		= HRTIMER_BASE_BOOTTIME_SOFT,
+	[CLOCK_TAI]			= HRTIMER_BASE_TAI_SOFT,
+#else
+	[CLOCK_REALTIME]		= HRTIMER_BASE_REALTIME,
+	[CLOCK_MONOTONIC]		= HRTIMER_BASE_MONOTONIC,
+	[CLOCK_BOOTTIME]		= HRTIMER_BASE_BOOTTIME,
+	[CLOCK_TAI]			= HRTIMER_BASE_TAI,
+#endif
+	[CLOCK_REALTIME_SOFT]		= HRTIMER_BASE_REALTIME_SOFT,
+	[CLOCK_MONOTONIC_SOFT]		= HRTIMER_BASE_MONOTONIC_SOFT,
+	[CLOCK_BOOTTIME_SOFT]		= HRTIMER_BASE_BOOTTIME_SOFT,
+	[CLOCK_TAI_SOFT]		= HRTIMER_BASE_TAI_SOFT,
+
+	[CLOCK_REALTIME_HARD]		= HRTIMER_BASE_REALTIME,
+	[CLOCK_MONOTONIC_HARD]		= HRTIMER_BASE_MONOTONIC,
+	[CLOCK_BOOTTIME_HARD]		= HRTIMER_BASE_BOOTTIME,
+	[CLOCK_TAI_HARD]		= HRTIMER_BASE_TAI,
 };
 
Matt Fleming 668ba0
 /*
Matt Fleming 668ba0
@@ -117,7 +162,6 @@ static const int hrtimer_clock_to_base_t
Matt Fleming 668ba0
  * timer->base->cpu_base
Matt Fleming 668ba0
  */
Matt Fleming 668ba0
 static struct hrtimer_cpu_base migration_cpu_base = {
Matt Fleming 668ba0
-	.seq = SEQCNT_ZERO(migration_cpu_base),
Matt Fleming 668ba0
 	.clock_base = { { .cpu_base = &migration_cpu_base, }, },
Matt Fleming 668ba0
 };
Matt Fleming 668ba0
 
Matt Fleming 668ba0
@@ -155,26 +199,21 @@ struct hrtimer_clock_base *lock_hrtimer_
Matt Fleming 668ba0
 }
Matt Fleming 668ba0
 
Matt Fleming 668ba0
 /*
Matt Fleming 668ba0
- * With HIGHRES=y we do not migrate the timer when it is expiring
Matt Fleming 668ba0
- * before the next event on the target cpu because we cannot reprogram
Matt Fleming 668ba0
- * the target cpu hardware and we would cause it to fire late.
Matt Fleming 668ba0
+ * We do not migrate the timer when it is expiring before the next
Matt Fleming 668ba0
+ * event on the target cpu. When high resolution is enabled, we cannot
Matt Fleming 668ba0
+ * reprogram the target cpu hardware and we would cause it to fire
Matt Fleming 668ba0
+ * late. To keep it simple, we handle the high resolution enabled and
Matt Fleming 668ba0
+ * disabled case similar.
Matt Fleming 668ba0
  *
Matt Fleming 668ba0
  * Called with cpu_base->lock of target cpu held.
Matt Fleming 668ba0
  */
Matt Fleming 668ba0
 static int
Matt Fleming 668ba0
 hrtimer_check_target(struct hrtimer *timer, struct hrtimer_clock_base *new_base)
Matt Fleming 668ba0
 {
Matt Fleming 668ba0
-#ifdef CONFIG_HIGH_RES_TIMERS
Matt Fleming 668ba0
 	ktime_t expires;
Matt Fleming 668ba0
 
Matt Fleming 668ba0
-	if (!new_base->cpu_base->hres_active)
Matt Fleming 668ba0
-		return 0;
Matt Fleming 668ba0
-
Matt Fleming 668ba0
 	expires = ktime_sub(hrtimer_get_expires(timer), new_base->offset);
Matt Fleming 668ba0
-	return expires <= new_base->cpu_base->expires_next;
Matt Fleming 668ba0
-#else
Matt Fleming 668ba0
-	return 0;
Matt Fleming 668ba0
-#endif
Matt Fleming 668ba0
+	return expires < new_base->cpu_base->expires_next;
Matt Fleming 668ba0
 }
Matt Fleming 668ba0
 
Matt Fleming 668ba0
 #ifdef CONFIG_NO_HZ_COMMON
Matt Fleming 668ba0
@@ -453,28 +492,26 @@ static inline void debug_deactivate(stru
Matt Fleming 668ba0
 	trace_hrtimer_cancel(timer);
Matt Fleming 668ba0
 }
Matt Fleming 668ba0
 
Matt Fleming 668ba0
-#if defined(CONFIG_NO_HZ_COMMON) || defined(CONFIG_HIGH_RES_TIMERS)
Matt Fleming 668ba0
 static inline void hrtimer_update_next_timer(struct hrtimer_cpu_base *cpu_base,
Matt Fleming 668ba0
 					     struct hrtimer *timer)
Matt Fleming 668ba0
 {
Matt Fleming 668ba0
-#ifdef CONFIG_HIGH_RES_TIMERS
Matt Fleming 668ba0
 	cpu_base->next_timer = timer;
Matt Fleming 668ba0
-#endif
Matt Fleming 668ba0
 }
Matt Fleming 668ba0
 
Matt Fleming 668ba0
-static ktime_t __hrtimer_get_next_event(struct hrtimer_cpu_base *cpu_base)
Matt Fleming 668ba0
+static ktime_t __hrtimer_next_event_base(struct hrtimer_cpu_base *cpu_base,
Matt Fleming 668ba0
+					 unsigned int active,
Matt Fleming 668ba0
+					 ktime_t expires_next)
Matt Fleming 668ba0
 {
Matt Fleming 668ba0
-	struct hrtimer_clock_base *base = cpu_base->clock_base;
Matt Fleming 668ba0
-	unsigned int active = cpu_base->active_bases;
Matt Fleming 668ba0
-	ktime_t expires, expires_next = KTIME_MAX;
Matt Fleming 668ba0
+	ktime_t expires;
Matt Fleming 668ba0
 
Matt Fleming 668ba0
-	hrtimer_update_next_timer(cpu_base, NULL);
Matt Fleming 668ba0
-	for (; active; base++, active >>= 1) {
Matt Fleming 668ba0
+	while (active) {
Matt Fleming 668ba0
+		unsigned int id = __ffs(active);
Matt Fleming 668ba0
+		struct hrtimer_clock_base *base;
Matt Fleming 668ba0
 		struct timerqueue_node *next;
Matt Fleming 668ba0
 		struct hrtimer *timer;
Matt Fleming 668ba0
 
Matt Fleming 668ba0
-		if (!(active & 0x01))
Matt Fleming 668ba0
-			continue;
Matt Fleming 668ba0
+		active &= ~(1U << id);
Matt Fleming 668ba0
+		base = cpu_base->clock_base + id;
Matt Fleming 668ba0
 
Matt Fleming 668ba0
 		next = timerqueue_getnext(&base->active);
Matt Fleming 668ba0
 		timer = container_of(next, struct hrtimer, node);
Matt Fleming 668ba0
@@ -493,7 +530,31 @@ static ktime_t __hrtimer_get_next_event(
Matt Fleming 668ba0
 		expires_next = 0;
Matt Fleming 668ba0
 	return expires_next;
Matt Fleming 668ba0
 }
Matt Fleming 668ba0
-#endif
Matt Fleming 668ba0
+
Matt Fleming 668ba0
+static ktime_t __hrtimer_get_next_event(struct hrtimer_cpu_base *cpu_base)
Matt Fleming 668ba0
+{
Matt Fleming 668ba0
+	unsigned int active;
Matt Fleming 668ba0
+	ktime_t expires_next = KTIME_MAX;
Matt Fleming 668ba0
+
Matt Fleming 668ba0
+	hrtimer_update_next_timer(cpu_base, NULL);
Matt Fleming 668ba0
+
Matt Fleming 668ba0
+	if (!cpu_base->softirq_activated) {
Matt Fleming 668ba0
+		active = cpu_base->active_bases & HRTIMER_ACTIVE_SOFT;
Matt Fleming 668ba0
+		expires_next = __hrtimer_next_event_base(cpu_base, active,
Matt Fleming 668ba0
+							 expires_next);
Matt Fleming 668ba0
+		cpu_base->softirq_expires_next = expires_next;
Matt Fleming 668ba0
+	}
Matt Fleming 668ba0
+
Matt Fleming 668ba0
+	active = cpu_base->active_bases & HRTIMER_ACTIVE_HARD;
Matt Fleming 668ba0
+	expires_next = __hrtimer_next_event_base(cpu_base, active, expires_next);
Matt Fleming 668ba0
+
Matt Fleming 668ba0
+	/*
Matt Fleming 668ba0
+	 * cpu_base->expires_next is not updated here. It is set only
Matt Fleming 668ba0
+	 * in hrtimer_reprogramming path!
Matt Fleming 668ba0
+	 */
Matt Fleming 668ba0
+
Matt Fleming 668ba0
+	return expires_next;
Matt Fleming 668ba0
+}
Matt Fleming 668ba0
 
Matt Fleming 668ba0
 static inline ktime_t hrtimer_update_base(struct hrtimer_cpu_base *base)
Matt Fleming 668ba0
 {
Matt Fleming 668ba0
@@ -505,6 +566,19 @@ static inline ktime_t hrtimer_update_bas
Matt Fleming 668ba0
 					    offs_real, offs_boot, offs_tai);
Matt Fleming 668ba0
 }
Matt Fleming 668ba0
 
Matt Fleming 668ba0
+/*
Matt Fleming 668ba0
+ * Is the high resolution mode active ?
Matt Fleming 668ba0
+ */
Matt Fleming 668ba0
+static inline int __hrtimer_hres_active(struct hrtimer_cpu_base *cpu_base)
Matt Fleming 668ba0
+{
Matt Fleming 668ba0
+	return cpu_base->hres_active;
Matt Fleming 668ba0
+}
Matt Fleming 668ba0
+
Matt Fleming 668ba0
+static inline int hrtimer_hres_active(void)
Matt Fleming 668ba0
+{
Matt Fleming 668ba0
+	return __hrtimer_hres_active(this_cpu_ptr(&hrtimer_bases));
Matt Fleming 668ba0
+}
Matt Fleming 668ba0
+
Matt Fleming 668ba0
 /* High resolution timer related functions */
Matt Fleming 668ba0
 #ifdef CONFIG_HIGH_RES_TIMERS
Matt Fleming 668ba0
 
Matt Fleming 668ba0
@@ -534,19 +608,6 @@ static inline int hrtimer_is_hres_enable
Matt Fleming 668ba0
 }
Matt Fleming 668ba0
 
Matt Fleming 668ba0
 /*
Matt Fleming 668ba0
- * Is the high resolution mode active ?
Matt Fleming 668ba0
- */
Matt Fleming 668ba0
-static inline int __hrtimer_hres_active(struct hrtimer_cpu_base *cpu_base)
Matt Fleming 668ba0
-{
Matt Fleming 668ba0
-	return cpu_base->hres_active;
Matt Fleming 668ba0
-}
Matt Fleming 668ba0
-
Matt Fleming 668ba0
-static inline int hrtimer_hres_active(void)
Matt Fleming 668ba0
-{
Matt Fleming 668ba0
-	return __hrtimer_hres_active(this_cpu_ptr(&hrtimer_bases));
Matt Fleming 668ba0
-}
Matt Fleming 668ba0
-
Matt Fleming 668ba0
-/*
Matt Fleming 668ba0
  * Reprogram the event source with checking both queues for the
Matt Fleming 668ba0
  * next event
Matt Fleming 668ba0
  * Called with interrupts disabled and base->lock held
Matt Fleming 668ba0
@@ -587,79 +648,6 @@ hrtimer_force_reprogram(struct hrtimer_c
Matt Fleming 668ba0
 }
Matt Fleming 668ba0
 
Matt Fleming 668ba0
 /*
Matt Fleming 668ba0
- * When a timer is enqueued and expires earlier than the already enqueued
Matt Fleming 668ba0
- * timers, we have to check, whether it expires earlier than the timer for
Matt Fleming 668ba0
- * which the clock event device was armed.
Matt Fleming 668ba0
- *
Matt Fleming 668ba0
- * Called with interrupts disabled and base->cpu_base.lock held
Matt Fleming 668ba0
- */
Matt Fleming 668ba0
-static void hrtimer_reprogram(struct hrtimer *timer,
Matt Fleming 668ba0
-			      struct hrtimer_clock_base *base)
Matt Fleming 668ba0
-{
Matt Fleming 668ba0
-	struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases);
Matt Fleming 668ba0
-	ktime_t expires = ktime_sub(hrtimer_get_expires(timer), base->offset);
Matt Fleming 668ba0
-
Matt Fleming 668ba0
-	WARN_ON_ONCE(hrtimer_get_expires_tv64(timer) < 0);
Matt Fleming 668ba0
-
Matt Fleming 668ba0
-	/*
Matt Fleming 668ba0
-	 * If the timer is not on the current cpu, we cannot reprogram
Matt Fleming 668ba0
-	 * the other cpus clock event device.
Matt Fleming 668ba0
-	 */
Matt Fleming 668ba0
-	if (base->cpu_base != cpu_base)
Matt Fleming 668ba0
-		return;
Matt Fleming 668ba0
-
Matt Fleming 668ba0
-	/*
Matt Fleming 668ba0
-	 * If the hrtimer interrupt is running, then it will
Matt Fleming 668ba0
-	 * reevaluate the clock bases and reprogram the clock event
Matt Fleming 668ba0
-	 * device. The callbacks are always executed in hard interrupt
Matt Fleming 668ba0
-	 * context so we don't need an extra check for a running
Matt Fleming 668ba0
-	 * callback.
Matt Fleming 668ba0
-	 */
Matt Fleming 668ba0
-	if (cpu_base->in_hrtirq)
Matt Fleming 668ba0
-		return;
Matt Fleming 668ba0
-
Matt Fleming 668ba0
-	/*
Matt Fleming 668ba0
-	 * CLOCK_REALTIME timer might be requested with an absolute
Matt Fleming 668ba0
-	 * expiry time which is less than base->offset. Set it to 0.
Matt Fleming 668ba0
-	 */
Matt Fleming 668ba0
-	if (expires < 0)
Matt Fleming 668ba0
-		expires = 0;
Matt Fleming 668ba0
-
Matt Fleming 668ba0
-	if (expires >= cpu_base->expires_next)
Matt Fleming 668ba0
-		return;
Matt Fleming 668ba0
-
Matt Fleming 668ba0
-	/* Update the pointer to the next expiring timer */
Matt Fleming 668ba0
-	cpu_base->next_timer = timer;
Matt Fleming 668ba0
-
Matt Fleming 668ba0
-	/*
Matt Fleming 668ba0
-	 * If a hang was detected in the last timer interrupt then we
Matt Fleming 668ba0
-	 * do not schedule a timer which is earlier than the expiry
Matt Fleming 668ba0
-	 * which we enforced in the hang detection. We want the system
Matt Fleming 668ba0
-	 * to make progress.
Matt Fleming 668ba0
-	 */
Matt Fleming 668ba0
-	if (cpu_base->hang_detected)
Matt Fleming 668ba0
-		return;
Matt Fleming 668ba0
-
Matt Fleming 668ba0
-	/*
Matt Fleming 668ba0
-	 * Program the timer hardware. We enforce the expiry for
Matt Fleming 668ba0
-	 * events which are already in the past.
Matt Fleming 668ba0
-	 */
Matt Fleming 668ba0
-	cpu_base->expires_next = expires;
Matt Fleming 668ba0
-	tick_program_event(expires, 1);
Matt Fleming 668ba0
-}
Matt Fleming 668ba0
-
Matt Fleming 668ba0
-/*
Matt Fleming 668ba0
- * Initialize the high resolution related parts of cpu_base
Matt Fleming 668ba0
- */
Matt Fleming 668ba0
-static inline void hrtimer_init_hres(struct hrtimer_cpu_base *base)
Matt Fleming 668ba0
-{
Matt Fleming 668ba0
-	base->expires_next = KTIME_MAX;
Matt Fleming 668ba0
-	base->hang_detected = 0;
Matt Fleming 668ba0
-	base->hres_active = 0;
Matt Fleming 668ba0
-	base->next_timer = NULL;
Matt Fleming 668ba0
-}
Matt Fleming 668ba0
-
Matt Fleming 668ba0
-/*
Matt Fleming 668ba0
  * Retrigger next event is called after clock was set
Matt Fleming 668ba0
  *
Matt Fleming 668ba0
  * Called with interrupts disabled via on_each_cpu()
Matt Fleming 668ba0
@@ -739,20 +727,80 @@ void clock_was_set_delayed(void)
Matt Fleming 668ba0
 
Matt Fleming 668ba0
 #else
Matt Fleming 668ba0
 
Matt Fleming 668ba0
-static inline int __hrtimer_hres_active(struct hrtimer_cpu_base *b) { return 0; }
Matt Fleming 668ba0
-static inline int hrtimer_hres_active(void) { return 0; }
Matt Fleming 668ba0
 static inline int hrtimer_is_hres_enabled(void) { return 0; }
Matt Fleming 668ba0
 static inline void hrtimer_switch_to_hres(void) { }
Matt Fleming 668ba0
 static inline void
Matt Fleming 668ba0
 hrtimer_force_reprogram(struct hrtimer_cpu_base *base, int skip_equal) { }
Matt Fleming 668ba0
-static inline void hrtimer_reprogram(struct hrtimer *timer,
Matt Fleming 668ba0
-				     struct hrtimer_clock_base *base) { }
Matt Fleming 668ba0
-static inline void hrtimer_init_hres(struct hrtimer_cpu_base *base) { }
Matt Fleming 668ba0
 static inline void retrigger_next_event(void *arg) { }
Matt Fleming 668ba0
 
Matt Fleming 668ba0
 #endif /* CONFIG_HIGH_RES_TIMERS */
Matt Fleming 668ba0
 
Matt Fleming 668ba0
 /*
Matt Fleming 668ba0
+ * When a timer is enqueued and expires earlier than the already enqueued
Matt Fleming 668ba0
+ * timers, we have to check, whether it expires earlier than the timer for
Matt Fleming 668ba0
+ * which the clock event device was armed.
Matt Fleming 668ba0
+ *
Matt Fleming 668ba0
+ * Called with interrupts disabled and base->cpu_base.lock held
Matt Fleming 668ba0
+ */
Matt Fleming 668ba0
+static void hrtimer_reprogram(struct hrtimer *timer)
Matt Fleming 668ba0
+{
Matt Fleming 668ba0
+	struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases);
Matt Fleming 668ba0
+	struct hrtimer_clock_base *base = timer->base;
Matt Fleming 668ba0
+	ktime_t expires = ktime_sub(hrtimer_get_expires(timer), base->offset);
Matt Fleming 668ba0
+
Matt Fleming 668ba0
+	WARN_ON_ONCE(hrtimer_get_expires_tv64(timer) < 0);
Matt Fleming 668ba0
+
Matt Fleming 668ba0
+	/*
Matt Fleming 668ba0
+	 * If the timer is not on the current cpu, we cannot reprogram
Matt Fleming 668ba0
+	 * the other cpus clock event device.
Matt Fleming 668ba0
+	 */
Matt Fleming 668ba0
+	if (base->cpu_base != cpu_base)
Matt Fleming 668ba0
+		return;
Matt Fleming 668ba0
+
Matt Fleming 668ba0
+	/*
Matt Fleming 668ba0
+	 * If the hrtimer interrupt is running, then it will
Matt Fleming 668ba0
+	 * reevaluate the clock bases and reprogram the clock event
Matt Fleming 668ba0
+	 * device. The callbacks are always executed in hard interrupt
Matt Fleming 668ba0
+	 * context so we don't need an extra check for a running
Matt Fleming 668ba0
+	 * callback.
Matt Fleming 668ba0
+	 */
Matt Fleming 668ba0
+	if (cpu_base->in_hrtirq)
Matt Fleming 668ba0
+		return;
Matt Fleming 668ba0
+
Matt Fleming 668ba0
+	/*
Matt Fleming 668ba0
+	 * CLOCK_REALTIME timer might be requested with an absolute
Matt Fleming 668ba0
+	 * expiry time which is less than base->offset. Set it to 0.
Matt Fleming 668ba0
+	 */
Matt Fleming 668ba0
+	if (expires < 0)
Matt Fleming 668ba0
+		expires = 0;
Matt Fleming 668ba0
+
Matt Fleming 668ba0
+	if (expires >= cpu_base->expires_next)
Matt Fleming 668ba0
+		return;
Matt Fleming 668ba0
+
Matt Fleming 668ba0
+	/* Update the pointer to the next expiring timer */
Matt Fleming 668ba0
+	hrtimer_update_next_timer(cpu_base, timer);
Matt Fleming 668ba0
+	cpu_base->expires_next = expires;
Matt Fleming 668ba0
+
Matt Fleming 668ba0
+	/*
Matt Fleming 668ba0
+	 * If hres is not active, hardware does not have to be
Matt Fleming 668ba0
+	 * programmed yet.
Matt Fleming 668ba0
+	 *
Matt Fleming 668ba0
+	 * If a hang was detected in the last timer interrupt then we
Matt Fleming 668ba0
+	 * do not schedule a timer which is earlier than the expiry
Matt Fleming 668ba0
+	 * which we enforced in the hang detection. We want the system
Matt Fleming 668ba0
+	 * to make progress.
Matt Fleming 668ba0
+	 */
Matt Fleming 668ba0
+	if (!__hrtimer_hres_active(cpu_base) || cpu_base->hang_detected)
Matt Fleming 668ba0
+		return;
Matt Fleming 668ba0
+
Matt Fleming 668ba0
+	/*
Matt Fleming 668ba0
+	 * Program the timer hardware. We enforce the expiry for
Matt Fleming 668ba0
+	 * events which are already in the past.
Matt Fleming 668ba0
+	 */
Matt Fleming 668ba0
+	tick_program_event(expires, 1);
Matt Fleming 668ba0
+}
Matt Fleming 668ba0
+
Matt Fleming 668ba0
+/*
Matt Fleming 668ba0
  * Clock realtime was set
Matt Fleming 668ba0
  *
Matt Fleming 668ba0
  * Change the offset of the realtime clock vs. the monotonic
Matt Fleming 668ba0
@@ -865,7 +913,8 @@ void hrtimer_wait_for_timer(const struct
Matt Fleming 668ba0
 {
Matt Fleming 668ba0
 	struct hrtimer_clock_base *base = timer->base;
Matt Fleming 668ba0
 
Matt Fleming 668ba0
-	if (base && base->cpu_base && !timer->irqsafe)
Matt Fleming 668ba0
+	if (base && base->cpu_base &&
Matt Fleming 668ba0
+	    base->index >= HRTIMER_BASE_MONOTONIC_SOFT)
Matt Fleming 668ba0
 		wait_event(base->cpu_base->wait,
Matt Fleming 668ba0
 				!(hrtimer_callback_running(timer)));
Matt Fleming 668ba0
 }
Mel Gorman fb63c5
@@ -917,11 +966,6 @@ static void __remove_hrtimer(struct hrti
Matt Fleming 668ba0
 	if (!(state & HRTIMER_STATE_ENQUEUED))
Matt Fleming 668ba0
 		return;
Matt Fleming 668ba0
 
Matt Fleming 668ba0
-	if (unlikely(!list_empty(&timer->cb_entry))) {
Matt Fleming 668ba0
-		list_del_init(&timer->cb_entry);
Matt Fleming 668ba0
-		return;
Matt Fleming 668ba0
-	}
Matt Fleming 668ba0
-
Matt Fleming 668ba0
 	if (!timerqueue_del(&base->active, &timer->node))
Matt Fleming 668ba0
 		cpu_base->active_bases &= ~(1 << base->index);
Matt Fleming 668ba0
 
Mel Gorman fb63c5
@@ -986,22 +1030,54 @@ static inline ktime_t hrtimer_update_low
Matt Fleming 668ba0
 	return tim;
Matt Fleming 668ba0
 }
Matt Fleming 668ba0
 
Matt Fleming 668ba0
-/**
Matt Fleming 668ba0
- * hrtimer_start_range_ns - (re)start an hrtimer on the current CPU
Matt Fleming 668ba0
- * @timer:	the timer to be added
Matt Fleming 668ba0
- * @tim:	expiry time
Matt Fleming 668ba0
- * @delta_ns:	"slack" range for the timer
Matt Fleming 668ba0
- * @mode:	expiry mode: absolute (HRTIMER_MODE_ABS) or
Matt Fleming 668ba0
- *		relative (HRTIMER_MODE_REL)
Matt Fleming 668ba0
- */
Matt Fleming 668ba0
-void hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
Matt Fleming 668ba0
-			    u64 delta_ns, const enum hrtimer_mode mode)
Matt Fleming 668ba0
+static void hrtimer_reprogram_softirq(struct hrtimer *timer)
Matt Fleming 668ba0
 {
Matt Fleming 668ba0
-	struct hrtimer_clock_base *base, *new_base;
Matt Fleming 668ba0
-	unsigned long flags;
Matt Fleming 668ba0
-	int leftmost;
Matt Fleming 668ba0
+	struct hrtimer_clock_base *base = timer->base;
Matt Fleming 668ba0
+	struct hrtimer_cpu_base *cpu_base = base->cpu_base;
Matt Fleming 668ba0
+	ktime_t expires;
Matt Fleming 668ba0
 
Matt Fleming 668ba0
-	base = lock_hrtimer_base(timer, &flags);
Matt Fleming 668ba0
+	/*
Matt Fleming 668ba0
+	 * The softirq timer is not rearmed, when the softirq was raised
Matt Fleming 668ba0
+	 * and has not yet run to completion.
Matt Fleming 668ba0
+	 */
Matt Fleming 668ba0
+	if (cpu_base->softirq_activated)
Matt Fleming 668ba0
+		return;
Matt Fleming 668ba0
+
Matt Fleming 668ba0
+	expires = ktime_sub(hrtimer_get_expires(timer), base->offset);
Matt Fleming 668ba0
+
Matt Fleming 668ba0
+	if (!ktime_before(expires, cpu_base->softirq_expires_next))
Matt Fleming 668ba0
+		return;
Matt Fleming 668ba0
+
Matt Fleming 668ba0
+	cpu_base->softirq_expires_next = expires;
Matt Fleming 668ba0
+
Matt Fleming 668ba0
+	if (!ktime_before(expires, cpu_base->expires_next))
Matt Fleming 668ba0
+		return;
Matt Fleming 668ba0
+	hrtimer_reprogram(timer);
Matt Fleming 668ba0
+}
Matt Fleming 668ba0
+
Matt Fleming 668ba0
+static void hrtimer_update_softirq_timer(struct hrtimer_cpu_base *cpu_base,
Matt Fleming 668ba0
+					 bool reprogram)
Matt Fleming 668ba0
+{
Matt Fleming 668ba0
+	ktime_t expires;
Matt Fleming 668ba0
+
Matt Fleming 668ba0
+	expires = __hrtimer_get_next_event(cpu_base);
Matt Fleming 668ba0
+
Matt Fleming 668ba0
+	if (!reprogram || !ktime_before(expires, cpu_base->expires_next))
Matt Fleming 668ba0
+		return;
Matt Fleming 668ba0
+	/*
Matt Fleming 668ba0
+	 * next_timer can be used here, because
Matt Fleming 668ba0
+	 * hrtimer_get_next_event() updated the next
Matt Fleming 668ba0
+	 * timer. expires_next is only set when reprogramming function
Matt Fleming 668ba0
+	 * is called.
Matt Fleming 668ba0
+	 */
Matt Fleming 668ba0
+	hrtimer_reprogram(cpu_base->next_timer);
Matt Fleming 668ba0
+}
Matt Fleming 668ba0
+
Matt Fleming 668ba0
+static int __hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
Matt Fleming 668ba0
+				    u64 delta_ns, const enum hrtimer_mode mode,
Matt Fleming 668ba0
+				    struct hrtimer_clock_base *base)
Matt Fleming 668ba0
+{
Matt Fleming 668ba0
+	struct hrtimer_clock_base *new_base;
Matt Fleming 668ba0
 
Matt Fleming 668ba0
 	/* Remove an active timer from the queue: */
Matt Fleming 668ba0
 	remove_hrtimer(timer, base, true);
Mel Gorman fb63c5
@@ -1016,21 +1092,31 @@ void hrtimer_start_range_ns(struct hrtim
Matt Fleming 668ba0
 	/* Switch the timer base, if necessary: */
Matt Fleming 668ba0
 	new_base = switch_hrtimer_base(timer, base, mode & HRTIMER_MODE_PINNED);
Matt Fleming 668ba0
 
Matt Fleming 668ba0
-	leftmost = enqueue_hrtimer(timer, new_base);
Matt Fleming 668ba0
-	if (!leftmost)
Matt Fleming 668ba0
-		goto unlock;
Matt Fleming 668ba0
+	return enqueue_hrtimer(timer, new_base);
Matt Fleming 668ba0
+}
Matt Fleming 668ba0
 
Matt Fleming 668ba0
-	if (!hrtimer_is_hres_active(timer)) {
Matt Fleming 668ba0
-		/*
Matt Fleming 668ba0
-		 * Kick to reschedule the next tick to handle the new timer
Matt Fleming 668ba0
-		 * on dynticks target.
Matt Fleming 668ba0
-		 */
Matt Fleming 668ba0
-		if (new_base->cpu_base->nohz_active)
Matt Fleming 668ba0
-			wake_up_nohz_cpu(new_base->cpu_base->cpu);
Matt Fleming 668ba0
-	} else {
Matt Fleming 668ba0
-		hrtimer_reprogram(timer, new_base);
Matt Fleming 668ba0
+/**
Matt Fleming 668ba0
+ * hrtimer_start_range_ns - (re)start an hrtimer on the current CPU
Matt Fleming 668ba0
+ * @timer:	the timer to be added
Matt Fleming 668ba0
+ * @tim:	expiry time
Matt Fleming 668ba0
+ * @delta_ns:	"slack" range for the timer
Matt Fleming 668ba0
+ * @mode:	expiry mode: absolute (HRTIMER_MODE_ABS) or
Matt Fleming 668ba0
+ *		relative (HRTIMER_MODE_REL)
Matt Fleming 668ba0
+ */
Matt Fleming 668ba0
+void hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
Matt Fleming 668ba0
+			    u64 delta_ns, const enum hrtimer_mode mode)
Matt Fleming 668ba0
+{
Matt Fleming 668ba0
+	struct hrtimer_clock_base *base;
Matt Fleming 668ba0
+	unsigned long flags;
Matt Fleming 668ba0
+
Matt Fleming 668ba0
+	base = lock_hrtimer_base(timer, &flags);
Matt Fleming 668ba0
+
Matt Fleming 668ba0
+	if (__hrtimer_start_range_ns(timer, tim, delta_ns, mode, base)) {
Matt Fleming 668ba0
+		if (timer->base->index < HRTIMER_BASE_MONOTONIC_SOFT)
Matt Fleming 668ba0
+			hrtimer_reprogram(timer);
Matt Fleming 668ba0
+		else
Matt Fleming 668ba0
+			hrtimer_reprogram_softirq(timer);
Matt Fleming 668ba0
 	}
Matt Fleming 668ba0
-unlock:
Matt Fleming 668ba0
 	unlock_hrtimer_base(timer, &flags);
Matt Fleming 668ba0
 }
Matt Fleming 668ba0
 EXPORT_SYMBOL_GPL(hrtimer_start_range_ns);
@@ -1138,14 +1224,18 @@ u64 hrtimer_get_next_event(void)
 
 static inline int hrtimer_clockid_to_base(clockid_t clock_id)
 {
-	if (likely(clock_id < MAX_CLOCKS)) {
+	if (likely(clock_id < MAX_CLOCKS_HRT)) {
 		int base = hrtimer_clock_to_base_table[clock_id];
 
 		if (likely(base != HRTIMER_MAX_CLOCK_BASES))
 			return base;
 	}
 	WARN(1, "Invalid clockid %d. Using MONOTONIC\n", clock_id);
+#ifdef CONFIG_PREEMPT_RT_FULL
+	return HRTIMER_BASE_MONOTONIC_SOFT;
+#else
 	return HRTIMER_BASE_MONOTONIC;
+#endif
 }
 
 static void __hrtimer_init(struct hrtimer *timer, clockid_t clock_id,
@@ -1163,12 +1253,15 @@ static void __hrtimer_init(struct hrtime
 	 * clock modifications, so they needs to become CLOCK_MONOTONIC to
 	 * ensure POSIX compliance.
 	 */
-	if (clock_id == CLOCK_REALTIME && mode & HRTIMER_MODE_REL)
-		clock_id = CLOCK_MONOTONIC;
+	if (mode & HRTIMER_MODE_ABS) {
+		if (clock_id == CLOCK_REALTIME)
+			clock_id = CLOCK_MONOTONIC;
+		else if (clock_id == CLOCK_REALTIME_SOFT)
+			clock_id = CLOCK_MONOTONIC_SOFT;
+	}
 
 	base = hrtimer_clockid_to_base(clock_id);
 	timer->base = &cpu_base->clock_base[base];
-	INIT_LIST_HEAD(&timer->cb_entry);
 	timerqueue_init(&timer->node);
 }
 
@@ -1195,20 +1288,19 @@ EXPORT_SYMBOL_GPL(hrtimer_init);
  */
 bool hrtimer_active(const struct hrtimer *timer)
 {
-	struct hrtimer_cpu_base *cpu_base;
+	struct hrtimer_clock_base *base;
 	unsigned int seq;
 
 	do {
-		cpu_base = READ_ONCE(timer->base->cpu_base);
-		seq = raw_read_seqcount_begin(&cpu_base->seq);
+		base = READ_ONCE(timer->base);
+		seq = raw_read_seqcount_begin(&base->seq);
 
 		if (timer->state != HRTIMER_STATE_INACTIVE ||
-		    cpu_base->running_soft == timer ||
-		    cpu_base->running == timer)
+		    base->running == timer)
 			return true;
 
-	} while (read_seqcount_retry(&cpu_base->seq, seq) ||
-		 cpu_base != READ_ONCE(timer->base->cpu_base));
+	} while (read_seqcount_retry(&base->seq, seq) ||
+		 base != READ_ONCE(timer->base));
 
 	return false;
 }
@@ -1234,7 +1326,8 @@ EXPORT_SYMBOL_GPL(hrtimer_active);
 
 static void __run_hrtimer(struct hrtimer_cpu_base *cpu_base,
 			  struct hrtimer_clock_base *base,
-			  struct hrtimer *timer, ktime_t *now)
+			  struct hrtimer *timer, ktime_t *now,
+			  bool hardirq)
 {
 	enum hrtimer_restart (*fn)(struct hrtimer *);
 	int restart;
@@ -1242,16 +1335,16 @@ static void __run_hrtimer(struct hrtimer
 	lockdep_assert_held(&cpu_base->lock);
 
 	debug_deactivate(timer);
-	cpu_base->running = timer;
+	base->running = timer;
 
 	/*
 	 * Separate the ->running assignment from the ->state assignment.
 	 *
 	 * As with a regular write barrier, this ensures the read side in
-	 * hrtimer_active() cannot observe cpu_base->running == NULL &&
+	 * hrtimer_active() cannot observe base->running == NULL &&
 	 * timer->state == INACTIVE.
 	 */
-	raw_write_seqcount_barrier(&cpu_base->seq);
+	raw_write_seqcount_barrier(&base->seq);
 
 	__remove_hrtimer(timer, base, HRTIMER_STATE_INACTIVE, 0);
 	fn = timer->function;
@@ -1265,15 +1358,23 @@ static void __run_hrtimer(struct hrtimer
 		timer->is_rel = false;
 
 	/*
-	 * Because we run timers from hardirq context, there is no chance
-	 * they get migrated to another cpu, therefore its safe to unlock
-	 * the timer base.
+	 * The timer is marked as running in the cpu base, so it is
+	 * protected against migration to a different CPU even if the lock
+	 * is dropped.
 	 */
-	raw_spin_unlock(&cpu_base->lock);
+	if (hardirq)
+		raw_spin_unlock(&cpu_base->lock);
+	else
+		raw_spin_unlock_irq(&cpu_base->lock);
+
 	trace_hrtimer_expire_entry(timer, now);
 	restart = fn(timer);
 	trace_hrtimer_expire_exit(timer);
-	raw_spin_lock(&cpu_base->lock);
+
+	if (hardirq)
+		raw_spin_lock(&cpu_base->lock);
+	else
+		raw_spin_lock_irq(&cpu_base->lock);
 
 	/*
 	 * Note: We clear the running state after enqueue_hrtimer and
@@ -1292,125 +1393,28 @@ static void __run_hrtimer(struct hrtimer
 	 * Separate the ->running assignment from the ->state assignment.
 	 *
 	 * As with a regular write barrier, this ensures the read side in
-	 * hrtimer_active() cannot observe cpu_base->running == NULL &&
+	 * hrtimer_active() cannot observe base->running.timer == NULL &&
 	 * timer->state == INACTIVE.
 	 */
-	raw_write_seqcount_barrier(&cpu_base->seq);
-
-	WARN_ON_ONCE(cpu_base->running != timer);
-	cpu_base->running = NULL;
-}
-
-#ifdef CONFIG_PREEMPT_RT_BASE
-static void hrtimer_rt_reprogram(int restart, struct hrtimer *timer,
-				 struct hrtimer_clock_base *base)
-{
-	int leftmost;
-
-	if (restart != HRTIMER_NORESTART &&
-	    !(timer->state & HRTIMER_STATE_ENQUEUED)) {
-
-		leftmost = enqueue_hrtimer(timer, base);
-		if (!leftmost)
-			return;
-#ifdef CONFIG_HIGH_RES_TIMERS
-		if (!hrtimer_is_hres_active(timer)) {
-			/*
-			 * Kick to reschedule the next tick to handle the new timer
-			 * on dynticks target.
-			 */
-			if (base->cpu_base->nohz_active)
-				wake_up_nohz_cpu(base->cpu_base->cpu);
-		} else {
-
-			hrtimer_reprogram(timer, base);
-		}
-#endif
-	}
-}
-
-/*
- * The changes in mainline which removed the callback modes from
- * hrtimer are not yet working with -rt. The non wakeup_process()
- * based callbacks which involve sleeping locks need to be treated
- * seperately.
- */
-static void hrtimer_rt_run_pending(void)
-{
-	enum hrtimer_restart (*fn)(struct hrtimer *);
-	struct hrtimer_cpu_base *cpu_base;
-	struct hrtimer_clock_base *base;
-	struct hrtimer *timer;
-	int index, restart;
-
-	local_irq_disable();
-	cpu_base = &per_cpu(hrtimer_bases, smp_processor_id());
-
-	raw_spin_lock(&cpu_base->lock);
-
-	for (index = 0; index < HRTIMER_MAX_CLOCK_BASES; index++) {
-		base = &cpu_base->clock_base[index];
-
-		while (!list_empty(&base->expired)) {
-			timer = list_first_entry(&base->expired,
-						 struct hrtimer, cb_entry);
+	raw_write_seqcount_barrier(&base->seq);
 
-			/*
-			 * Same as the above __run_hrtimer function
-			 * just we run with interrupts enabled.
-			 */
-			debug_deactivate(timer);
-			cpu_base->running_soft = timer;
-			raw_write_seqcount_barrier(&cpu_base->seq);
-
-			__remove_hrtimer(timer, base, HRTIMER_STATE_INACTIVE, 0);
-			fn = timer->function;
-
-			raw_spin_unlock_irq(&cpu_base->lock);
-			restart = fn(timer);
-			raw_spin_lock_irq(&cpu_base->lock);
-
-			hrtimer_rt_reprogram(restart, timer, base);
-			raw_write_seqcount_barrier(&cpu_base->seq);
-
-			WARN_ON_ONCE(cpu_base->running_soft != timer);
-			cpu_base->running_soft = NULL;
-		}
-	}
-
-	raw_spin_unlock_irq(&cpu_base->lock);
-
-	wake_up_timer_waiters(cpu_base);
+	WARN_ON_ONCE(base->running != timer);
+	base->running = NULL;
 }
 
-static int hrtimer_rt_defer(struct hrtimer *timer)
+static void __hrtimer_run_queues(struct hrtimer_cpu_base *cpu_base, ktime_t now,
+				 unsigned int active_mask)
 {
-	if (timer->irqsafe)
-		return 0;
-
-	__remove_hrtimer(timer, timer->base, timer->state, 0);
-	list_add_tail(&timer->cb_entry, &timer->base->expired);
-	return 1;
-}
-
-#else
-
-static inline int hrtimer_rt_defer(struct hrtimer *timer) { return 0; }
+	unsigned int active = cpu_base->active_bases & active_mask;
 
-#endif
-
-static int __hrtimer_run_queues(struct hrtimer_cpu_base *cpu_base, ktime_t now)
-{
-	struct hrtimer_clock_base *base = cpu_base->clock_base;
-	unsigned int active = cpu_base->active_bases;
-	int raise = 0;
-
-	for (; active; base++, active >>= 1) {
+	while (active) {
+		unsigned int id = __ffs(active);
+		struct hrtimer_clock_base *base;
 		struct timerqueue_node *node;
 		ktime_t basenow;
 
-		if (!(active & 0x01))
-			continue;
+		active &= ~(1U << id);
+		base = cpu_base->clock_base + id;
 
 		basenow = ktime_add(now, base->offset);
 
@@ -1434,13 +1438,27 @@ static int __hrtimer_run_queues(struct h
 			if (basenow < hrtimer_get_softexpires_tv64(timer))
 				break;
 
-			if (!hrtimer_rt_defer(timer))
-				__run_hrtimer(cpu_base, base, timer, &basenow);
-			else
-				raise = 1;
+			__run_hrtimer(cpu_base, base, timer, &basenow,
+				      active_mask == HRTIMER_ACTIVE_HARD);
 		}
 	}
-	return raise;
+}
+
+static __latent_entropy void hrtimer_run_softirq(struct softirq_action *h)
+{
+	struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases);
+	ktime_t now;
+
+	raw_spin_lock_irq(&cpu_base->lock);
+
+	now = hrtimer_update_base(cpu_base);
+	__hrtimer_run_queues(cpu_base, now, HRTIMER_ACTIVE_SOFT);
+
+	cpu_base->softirq_activated = 0;
+	hrtimer_update_softirq_timer(cpu_base, true);
+
+	raw_spin_unlock_irq(&cpu_base->lock);
+	wake_up_timer_waiters(cpu_base);
 }
 
 #ifdef CONFIG_HIGH_RES_TIMERS
@@ -1454,7 +1472,6 @@ void hrtimer_interrupt(struct clock_even
 	struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases);
 	ktime_t expires_next, now, entry_time, delta;
 	int retries = 0;
-	int raise;
 
 	BUG_ON(!cpu_base->hres_active);
 	cpu_base->nr_events++;
@@ -1473,9 +1490,15 @@ retry:
 	 */
 	cpu_base->expires_next = KTIME_MAX;
 
-	raise = __hrtimer_run_queues(cpu_base, now);
+	if (!ktime_before(now, cpu_base->softirq_expires_next)) {
+		cpu_base->softirq_expires_next = KTIME_MAX;
+		cpu_base->softirq_activated = 1;
+		raise_softirq_irqoff(HRTIMER_SOFTIRQ);
+	}
+
+	__hrtimer_run_queues(cpu_base, now, HRTIMER_ACTIVE_HARD);
 
-	/* Reevaluate the clock bases for the next expiry */
+	/* Reevaluate the hard interrupt clock bases for the next expiry */
 	expires_next = __hrtimer_get_next_event(cpu_base);
 	/*
 	 * Store the new expiry value so the migration code can verify
@@ -1484,8 +1507,6 @@ retry:
 	cpu_base->expires_next = expires_next;
 	cpu_base->in_hrtirq = 0;
 	raw_spin_unlock(&cpu_base->lock);
-	if (raise)
-		raise_softirq_irqoff(HRTIMER_SOFTIRQ);
 
 	/* Reprogramming necessary ? */
 	if (!tick_program_event(expires_next, 0)) {
@@ -1562,7 +1583,6 @@ void hrtimer_run_queues(void)
 {
 	struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases);
 	ktime_t now;
-	int raise;
 
 	if (__hrtimer_hres_active(cpu_base))
 		return;
@@ -1581,10 +1601,15 @@ void hrtimer_run_queues(void)
 
 	raw_spin_lock(&cpu_base->lock);
 	now = hrtimer_update_base(cpu_base);
-	raise = __hrtimer_run_queues(cpu_base, now);
-	raw_spin_unlock(&cpu_base->lock);
-	if (raise)
+
+	if (!ktime_before(now, cpu_base->softirq_expires_next)) {
+		cpu_base->softirq_expires_next = KTIME_MAX;
+		cpu_base->softirq_activated = 1;
 		raise_softirq_irqoff(HRTIMER_SOFTIRQ);
+	}
+
+	__hrtimer_run_queues(cpu_base, now, HRTIMER_ACTIVE_HARD);
+	raw_spin_unlock(&cpu_base->lock);
 }
 
 /*
@@ -1603,19 +1628,51 @@ static enum hrtimer_restart hrtimer_wake
 	return HRTIMER_NORESTART;
 }
 
-void hrtimer_init_sleeper(struct hrtimer_sleeper *sl, struct task_struct *task)
+static void __hrtimer_init_sleeper(struct hrtimer_sleeper *sl,
+				   clockid_t clock_id,
+				   enum hrtimer_mode mode,
+				   struct task_struct *task)
 {
+#ifdef CONFIG_PREEMPT_RT_FULL
+	if (!(clock_id & HRTIMER_BASE_SOFT_MASK))
+		clock_id |= HRTIMER_BASE_HARD_MASK;
+#endif
+	__hrtimer_init(&sl->timer, clock_id, mode);
 	sl->timer.function = hrtimer_wakeup;
-	sl->timer.irqsafe = 1;
 	sl->task = task;
 }
+
+/**
+ * hrtimer_init - initialize a timer to the given clock
+ * @timer:	the timer to be initialized
+ * @clock_id:	the clock to be used
+ * @mode:	timer mode abs/rel
+ */
+void hrtimer_init_sleeper(struct hrtimer_sleeper *sl, clockid_t clock_id,
+			  enum hrtimer_mode mode, struct task_struct *task)
+{
+	debug_init(&sl->timer, clock_id, mode);
+	__hrtimer_init_sleeper(sl, clock_id, mode, task);
+
+}
 EXPORT_SYMBOL_GPL(hrtimer_init_sleeper);
 
+#ifdef CONFIG_DEBUG_OBJECTS_TIMERS
+void hrtimer_init_sleeper_on_stack(struct hrtimer_sleeper *sl,
+				   clockid_t clock_id,
+				   enum hrtimer_mode mode,
+				   struct task_struct *task)
+{
+	debug_object_init_on_stack(&sl->timer, &hrtimer_debug_descr);
+	__hrtimer_init_sleeper(sl, clock_id, mode, task);
+}
+EXPORT_SYMBOL_GPL(hrtimer_init_sleeper_on_stack);
+#endif
+
+
 static int __sched do_nanosleep(struct hrtimer_sleeper *t, enum hrtimer_mode mode,
 				unsigned long state)
 {
-	hrtimer_init_sleeper(t, current);
-
 	do {
 		set_current_state(state);
 		hrtimer_start_expires(&t->timer, mode);
@@ -1655,8 +1712,8 @@ long __sched hrtimer_nanosleep_restart(s
 	struct timespec __user  *rmtp;
 	int ret = 0;
 
-	hrtimer_init_on_stack(&t.timer, restart->nanosleep.clockid,
-				HRTIMER_MODE_ABS);
+	hrtimer_init_sleeper_on_stack(&t, restart->nanosleep.clockid,
+				      HRTIMER_MODE_ABS, current);
 	hrtimer_set_expires_tv64(&t.timer, restart->nanosleep.expires);
 
 	/* cpu_chill() does not care about restart state. */
@@ -1691,8 +1748,9 @@ __hrtimer_nanosleep(struct timespec64 *r
 	if (dl_task(current) || rt_task(current))
 		slack = 0;
 
-	hrtimer_init_on_stack(&t.timer, clockid, mode);
+	hrtimer_init_sleeper_on_stack(&t, clockid, mode, current);
 	hrtimer_set_expires_range_ns(&t.timer, timespec64_to_ktime(*rqtp), slack);
+
 	if (do_nanosleep(&t, mode, state))
 		goto out;
 
@@ -1754,7 +1812,7 @@ void cpu_chill(void)
 	unsigned int freeze_flag = current->flags & PF_NOFREEZE;
 
 	current->flags |= PF_NOFREEZE;
-	__hrtimer_nanosleep(&tu, NULL, HRTIMER_MODE_REL, CLOCK_MONOTONIC,
+	__hrtimer_nanosleep(&tu, NULL, HRTIMER_MODE_REL, CLOCK_MONOTONIC_HARD,
 			    TASK_UNINTERRUPTIBLE);
 	if (!freeze_flag)
 		current->flags &= ~PF_NOFREEZE;
@@ -1773,12 +1831,15 @@ int hrtimers_prepare_cpu(unsigned int cp
 	for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
 		cpu_base->clock_base[i].cpu_base = cpu_base;
 		timerqueue_init_head(&cpu_base->clock_base[i].active);
-		INIT_LIST_HEAD(&cpu_base->clock_base[i].expired);
 	}
 
 	cpu_base->active_bases = 0;
 	cpu_base->cpu = cpu;
-	hrtimer_init_hres(cpu_base);
+	cpu_base->hres_active = 0;
+	cpu_base->expires_next = KTIME_MAX;
+	cpu_base->softirq_expires_next = KTIME_MAX;
+	cpu_base->hang_detected = 0;
+	cpu_base->next_timer = NULL;
 #ifdef CONFIG_PREEMPT_RT_BASE
 	init_waitqueue_head(&cpu_base->wait);
 #endif
@@ -1787,7 +1848,7 @@ int hrtimers_prepare_cpu(unsigned int cp
 
 #ifdef CONFIG_HOTPLUG_CPU
 
-static int migrate_hrtimer_list(struct hrtimer_clock_base *old_base,
+static void migrate_hrtimer_list(struct hrtimer_clock_base *old_base,
 				struct hrtimer_clock_base *new_base)
 {
 	struct hrtimer *timer;
@@ -1815,25 +1876,17 @@ static int migrate_hrtimer_list(struct h
 		 */
 		enqueue_hrtimer(timer, new_base);
 	}
-#ifdef CONFIG_PREEMPT_RT_BASE
-	list_splice_tail(&old_base->expired, &new_base->expired);
-	/*
-	 * Tell the caller to raise HRTIMER_SOFTIRQ.  We can't safely
-	 * acquire ktimersoftd->pi_lock while the base lock is held.
-	 */
-	return !list_empty(&new_base->expired);
-#endif
-	return 0;
 }
 
 int hrtimers_dead_cpu(unsigned int scpu)
 {
 	struct hrtimer_cpu_base *old_base, *new_base;
-	int i, raise = 0;
+	int i;
 
 	BUG_ON(cpu_online(scpu));
 	tick_cancel_sched_timer(scpu);
 
+	local_bh_disable();
 	local_irq_disable();
 	old_base = &per_cpu(hrtimer_bases, scpu);
 	new_base = this_cpu_ptr(&hrtimer_bases);
@@ -1845,56 +1898,50 @@ int hrtimers_dead_cpu(unsigned int scpu)
 	raw_spin_lock_nested(&old_base->lock, SINGLE_DEPTH_NESTING);
 
 	for (i = 0; i < HRTIMER_MAX_CLOCK_BASES; i++) {
-		raise |= migrate_hrtimer_list(&old_base->clock_base[i],
-					      &new_base->clock_base[i]);
+		migrate_hrtimer_list(&old_base->clock_base[i],
+				     &new_base->clock_base[i]);
 	}
 
+	/*
+	 * The migration might have changed the first expiring softirq
+	 * timer on this CPU. Update it.
+	 */
+	hrtimer_update_softirq_timer(new_base, false);
+
 	raw_spin_unlock(&old_base->lock);
 	raw_spin_unlock(&new_base->lock);
 
-	if (raise)
-		raise_softirq_irqoff(HRTIMER_SOFTIRQ);
-
 	/* Check, if we got expired work to do */
 	__hrtimer_peek_ahead_timers();
 	local_irq_enable();
+	local_bh_enable();
 	return 0;
 }
 
 #endif /* CONFIG_HOTPLUG_CPU */
 
-#ifdef CONFIG_PREEMPT_RT_BASE
-
-static void run_hrtimer_softirq(struct softirq_action *h)
-{
-	hrtimer_rt_run_pending();
-}
-
-static void hrtimers_open_softirq(void)
-{
-	open_softirq(HRTIMER_SOFTIRQ, run_hrtimer_softirq);
-}
-
-#else
-static void hrtimers_open_softirq(void) { }
-#endif
-
 void __init hrtimers_init(void)
 {
+	/*
+	 * It is necessary, that the soft base mask is a single
+	 * bit.
+	 */
+	BUILD_BUG_ON_NOT_POWER_OF_2(HRTIMER_BASE_SOFT_MASK);
+
 	hrtimers_prepare_cpu(smp_processor_id());
-	hrtimers_open_softirq();
+	open_softirq(HRTIMER_SOFTIRQ, hrtimer_run_softirq);
 }
 
 /**
  * schedule_hrtimeout_range_clock - sleep until timeout
  * @expires:	timeout value (ktime_t)
  * @delta:	slack in expires timeout (ktime_t)
- * @mode:	timer mode, HRTIMER_MODE_ABS or HRTIMER_MODE_REL
- * @clock:	timer clock, CLOCK_MONOTONIC or CLOCK_REALTIME
+ * @mode:	timer mode
+ * @clock_id:	timer clock to be used
  */
 int __sched
 schedule_hrtimeout_range_clock(ktime_t *expires, u64 delta,
-			       const enum hrtimer_mode mode, int clock)
+			       const enum hrtimer_mode mode, clockid_t clock_id)
 {
 	struct hrtimer_sleeper t;
 
@@ -1915,10 +1962,9 @@ schedule_hrtimeout_range_clock(ktime_t *
 		return -EINTR;
 	}
 
-	hrtimer_init_on_stack(&t.timer, clock, mode);
-	hrtimer_set_expires_range_ns(&t.timer, *expires, delta);
+	hrtimer_init_sleeper_on_stack(&t, clock_id, mode, current);
 
-	hrtimer_init_sleeper(&t, current);
+	hrtimer_set_expires_range_ns(&t.timer, *expires, delta);
 
 	hrtimer_start_expires(&t.timer, mode);
 
Matt Fleming 668ba0
--- a/kernel/time/tick-broadcast-hrtimer.c
+++ b/kernel/time/tick-broadcast-hrtimer.c
@@ -106,8 +106,7 @@ static enum hrtimer_restart bc_handler(s
 
 void tick_setup_hrtimer_broadcast(void)
 {
-	hrtimer_init(&bctimer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
+	hrtimer_init(&bctimer, CLOCK_MONOTONIC_HARD, HRTIMER_MODE_ABS);
 	bctimer.function = bc_handler;
-	bctimer.irqsafe = true;
 	clockevents_register_device(&ce_broadcast_hrtimer);
 }
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -1217,8 +1217,7 @@ void tick_setup_sched_timer(void)
 	/*
 	 * Emulate tick processing via per-CPU hrtimers:
 	 */
-	hrtimer_init(&ts->sched_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
-	ts->sched_timer.irqsafe = 1;
+	hrtimer_init(&ts->sched_timer, CLOCK_MONOTONIC_HARD, HRTIMER_MODE_ABS);
 	ts->sched_timer.function = tick_sched_timer;
 
 	/* Get the next period (per-CPU) */
--- a/kernel/watchdog.c
+++ b/kernel/watchdog.c
@@ -383,9 +383,8 @@ static void watchdog_enable(unsigned int
 	struct hrtimer *hrtimer = raw_cpu_ptr(&watchdog_hrtimer);
 
 	/* kick off the timer for the hardlockup detector */
-	hrtimer_init(hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+	hrtimer_init(hrtimer, CLOCK_MONOTONIC_HARD, HRTIMER_MODE_REL);
 	hrtimer->function = watchdog_timer_fn;
-	hrtimer->irqsafe = 1;
 
 	/* Enable the perf event */
 	watchdog_nmi_enable(cpu);
--- a/net/can/bcm.c
+++ b/net/can/bcm.c
@@ -105,7 +105,6 @@ struct bcm_op {
 	unsigned long frames_abs, frames_filtered;
 	struct bcm_timeval ival1, ival2;
 	struct hrtimer timer, thrtimer;
-	struct tasklet_struct tsklet, thrtsklet;
 	ktime_t rx_stamp, kt_ival1, kt_ival2, kt_lastmsg;
 	int rx_ifindex;
 	int cfsiz;
@@ -383,25 +382,34 @@ static void bcm_send_to_user(struct bcm_
 	}
 }
 
-static void bcm_tx_start_timer(struct bcm_op *op)
+static bool bcm_tx_set_expiry(struct bcm_op *op, struct hrtimer *hrt)
 {
+	ktime_t ival;
+
 	if (op->kt_ival1 && op->count)
-		hrtimer_start(&op->timer,
-			      ktime_add(ktime_get(), op->kt_ival1),
-			      HRTIMER_MODE_ABS);
+		ival = op->kt_ival1;
 	else if (op->kt_ival2)
-		hrtimer_start(&op->timer,
-			      ktime_add(ktime_get(), op->kt_ival2),
-			      HRTIMER_MODE_ABS);
+		ival = op->kt_ival2;
+	else
+		return false;
+
+	hrtimer_set_expires(hrt, ktime_add(ktime_get(), ival));
+	return true;
 }
 
-static void bcm_tx_timeout_tsklet(unsigned long data)
+static void bcm_tx_start_timer(struct bcm_op *op)
 {
-	struct bcm_op *op = (struct bcm_op *)data;
+	if (bcm_tx_set_expiry(op, &op->timer))
+		hrtimer_start_expires(&op->timer, HRTIMER_MODE_ABS);
+}
+
+/* bcm_tx_timeout_handler - performs cyclic CAN frame transmissions */
+static enum hrtimer_restart bcm_tx_timeout_handler(struct hrtimer *hrtimer)
+{
+	struct bcm_op *op = container_of(hrtimer, struct bcm_op, timer);
 	struct bcm_msg_head msg_head;
 
 	if (op->kt_ival1 && (op->count > 0)) {
-
 		op->count--;
 		if (!op->count && (op->flags & TX_COUNTEVT)) {
 
@@ -418,22 +426,12 @@ static void bcm_tx_timeout_tsklet(unsign
 		}
 		bcm_can_tx(op);
 
-	} else if (op->kt_ival2)
+	} else if (op->kt_ival2) {
 		bcm_can_tx(op);
+	}
 
-	bcm_tx_start_timer(op);
-}
-
-/*
- * bcm_tx_timeout_handler - performs cyclic CAN frame transmissions
- */
-static enum hrtimer_restart bcm_tx_timeout_handler(struct hrtimer *hrtimer)
-{
-	struct bcm_op *op = container_of(hrtimer, struct bcm_op, timer);
-
-	tasklet_schedule(&op->tsklet);
-
-	return HRTIMER_NORESTART;
+	return bcm_tx_set_expiry(op, &op->timer) ?
+		HRTIMER_RESTART : HRTIMER_NORESTART;
 }
 
 /*
@@ -561,11 +559,18 @@ static void bcm_rx_starttimer(struct bcm
 		hrtimer_start(&op->timer, op->kt_ival1, HRTIMER_MODE_REL);
 }
 
-static void bcm_rx_timeout_tsklet(unsigned long data)
+/* bcm_rx_timeout_handler - when the (cyclic) CAN frame reception timed out */
+static enum hrtimer_restart bcm_rx_timeout_handler(struct hrtimer *hrtimer)
 {
-	struct bcm_op *op = (struct bcm_op *)data;
+	struct bcm_op *op = container_of(hrtimer, struct bcm_op, timer);
 	struct bcm_msg_head msg_head;
 
+	/* if user wants to be informed, when cyclic CAN-Messages come back */
+	if ((op->flags & RX_ANNOUNCE_RESUME) && op->last_frames) {
+		/* clear received CAN frames to indicate 'nothing received' */
+		memset(op->last_frames, 0, op->nframes * op->cfsiz);
+	}
+
 	/* create notification to user */
 	msg_head.opcode  = RX_TIMEOUT;
 	msg_head.flags   = op->flags;
@@ -576,25 +581,6 @@ static void bcm_rx_timeout_tsklet(unsign
 	msg_head.nframes = 0;
 
 	bcm_send_to_user(op, &msg_head, NULL, 0);
-}
-
-/*
- * bcm_rx_timeout_handler - when the (cyclic) CAN frame reception timed out
- */
-static enum hrtimer_restart bcm_rx_timeout_handler(struct hrtimer *hrtimer)
-{
-	struct bcm_op *op = container_of(hrtimer, struct bcm_op, timer);
-
-	/* schedule before NET_RX_SOFTIRQ */
-	tasklet_hi_schedule(&op->tsklet);
-
-	/* no restart of the timer is done here! */
-
-	/* if user wants to be informed, when cyclic CAN-Messages come back */
-	if ((op->flags & RX_ANNOUNCE_RESUME) && op->last_frames) {
-		/* clear received CAN frames to indicate 'nothing received' */
-		memset(op->last_frames, 0, op->nframes * op->cfsiz);
-	}
 
 	return HRTIMER_NORESTART;
 }
@@ -602,14 +588,12 @@ static enum hrtimer_restart bcm_rx_timeo
 /*
  * bcm_rx_do_flush - helper for bcm_rx_thr_flush
  */
-static inline int bcm_rx_do_flush(struct bcm_op *op, int update,
-				  unsigned int index)
+static inline int bcm_rx_do_flush(struct bcm_op *op, unsigned int index)
 {
 	struct canfd_frame *lcf = op->last_frames + op->cfsiz * index;
 
 	if ((op->last_frames) && (lcf->flags & RX_THR)) {
-		if (update)
-			bcm_rx_changed(op, lcf);
+		bcm_rx_changed(op, lcf);
 		return 1;
 	}
 	return 0;
@@ -617,11 +601,8 @@ static inline int bcm_rx_do_flush(struct
 
 /*
  * bcm_rx_thr_flush - Check for throttled data and send it to the userspace
- *
- * update == 0 : just check if throttled data is available  (any irq context)
- * update == 1 : check and send throttled data to userspace (soft_irq context)
  */
-static int bcm_rx_thr_flush(struct bcm_op *op, int update)
+static int bcm_rx_thr_flush(struct bcm_op *op)
 {
 	int updated = 0;
 
@@ -630,24 +611,16 @@ static int bcm_rx_thr_flush(struct bcm_o
 
 		/* for MUX filter we start at index 1 */
 		for (i = 1; i < op->nframes; i++)
-			updated += bcm_rx_do_flush(op, update, i);
+			updated += bcm_rx_do_flush(op, i);
 
 	} else {
 		/* for RX_FILTER_ID and simple filter */
-		updated += bcm_rx_do_flush(op, update, 0);
+		updated += bcm_rx_do_flush(op, 0);
 	}
 
 	return updated;
 }
 
-static void bcm_rx_thr_tsklet(unsigned long data)
-{
-	struct bcm_op *op = (struct bcm_op *)data;
-
-	/* push the changed data to the userspace */
-	bcm_rx_thr_flush(op, 1);
-}
-
 /*
  * bcm_rx_thr_handler - the time for blocked content updates is over now:
  *                      Check for throttled data and send it to the userspace
@@ -656,9 +629,7 @@ static enum hrtimer_restart bcm_rx_thr_h
 {
 	struct bcm_op *op = container_of(hrtimer, struct bcm_op, thrtimer);
 
-	tasklet_schedule(&op->thrtsklet);
-
-	if (bcm_rx_thr_flush(op, 0)) {
+	if (bcm_rx_thr_flush(op)) {
 		hrtimer_forward(hrtimer, ktime_get(), op->kt_ival2);
 		return HRTIMER_RESTART;
 	} else {
@@ -754,23 +725,8 @@ static struct bcm_op *bcm_find_op(struct
 
 static void bcm_remove_op(struct bcm_op *op)
 {
-	if (op->tsklet.func) {
-		while (test_bit(TASKLET_STATE_SCHED, &op->tsklet.state) ||
-		       test_bit(TASKLET_STATE_RUN, &op->tsklet.state) ||
-		       hrtimer_active(&op->timer)) {
-			hrtimer_cancel(&op->timer);
-			tasklet_kill(&op->tsklet);
-		}
-	}
-
-	if (op->thrtsklet.func) {
-		while (test_bit(TASKLET_STATE_SCHED, &op->thrtsklet.state) ||
-		       test_bit(TASKLET_STATE_RUN, &op->thrtsklet.state) ||
-		       hrtimer_active(&op->thrtimer)) {
-			hrtimer_cancel(&op->thrtimer);
-			tasklet_kill(&op->thrtsklet);
-		}
-	}
+	hrtimer_cancel(&op->timer);
+	hrtimer_cancel(&op->thrtimer);
 
 	if ((op->frames) && (op->frames != &op->sframe))
 		kfree(op->frames);
@@ -1002,15 +958,13 @@ static int bcm_tx_setup(struct bcm_msg_h
 		op->ifindex = ifindex;
 
 		/* initialize uninitialized (kzalloc) structure */
-		hrtimer_init(&op->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+		hrtimer_init(&op->timer, CLOCK_MONOTONIC_SOFT,
+			     HRTIMER_MODE_REL);
 		op->timer.function = bcm_tx_timeout_handler;
 
-		/* initialize tasklet for tx countevent notification */
-		tasklet_init(&op->tsklet, bcm_tx_timeout_tsklet,
-			     (unsigned long) op);
-
 		/* currently unused in tx_ops */
-		hrtimer_init(&op->thrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+		hrtimer_init(&op->thrtimer, CLOCK_MONOTONIC_SOFT,
+			     HRTIMER_MODE_REL);
 
 		/* add this bcm_op to the list of the tx_ops */
 		list_add(&op->list, &bo->tx_ops);
@@ -1177,20 +1131,14 @@ static int bcm_rx_setup(struct bcm_msg_h
 		op->rx_ifindex = ifindex;
 
 		/* initialize uninitialized (kzalloc) structure */
-		hrtimer_init(&op->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+		hrtimer_init(&op->timer, CLOCK_MONOTONIC_SOFT,
+			     HRTIMER_MODE_REL);
 		op->timer.function = bcm_rx_timeout_handler;
 
-		/* initialize tasklet for rx timeout notification */
-		tasklet_init(&op->tsklet, bcm_rx_timeout_tsklet,
-			     (unsigned long) op);
-
-		hrtimer_init(&op->thrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+		hrtimer_init(&op->thrtimer, CLOCK_MONOTONIC_SOFT,
+			     HRTIMER_MODE_REL);
 		op->thrtimer.function = bcm_rx_thr_handler;
 
-		/* initialize tasklet for rx throttle handling */
-		tasklet_init(&op->thrtsklet, bcm_rx_thr_tsklet,
-			     (unsigned long) op);
-
 		/* add this bcm_op to the list of the rx_ops */
 		list_add(&op->list, &bo->rx_ops);
 
@@ -1236,7 +1184,7 @@ static int bcm_rx_setup(struct bcm_msg_h
 			 */
 			op->kt_lastmsg = 0;
 			hrtimer_cancel(&op->thrtimer);
-			bcm_rx_thr_flush(op, 1);
+			bcm_rx_thr_flush(op);
 		}
 
 		if ((op->flags & STARTTIMER) && op->kt_ival1)
--- a/net/core/pktgen.c
+++ b/net/core/pktgen.c
@@ -2253,7 +2253,8 @@ static void spin(struct pktgen_dev *pkt_
 	s64 remaining;
 	struct hrtimer_sleeper t;
 
-	hrtimer_init_on_stack(&t.timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
+	hrtimer_init_sleeper_on_stack(&t, CLOCK_MONOTONIC, HRTIMER_MODE_ABS,
+				      current);
 	hrtimer_set_expires(&t.timer, spin_until);
 
 	remaining = ktime_to_ns(hrtimer_expires_remaining(&t.timer));
@@ -2268,7 +2269,6 @@ static void spin(struct pktgen_dev *pkt_
 		} while (ktime_compare(end_time, spin_until) < 0);
 	} else {
 		/* see do_nanosleep */
-		hrtimer_init_sleeper(&t, current);
 		do {
 			set_current_state(TASK_INTERRUPTIBLE);
 			hrtimer_start_expires(&t.timer, HRTIMER_MODE_ABS);
--- a/net/xfrm/xfrm_state.c
+++ b/net/xfrm/xfrm_state.c
@@ -427,7 +427,7 @@ static void xfrm_put_mode(struct xfrm_mo
 
 static void xfrm_state_gc_destroy(struct xfrm_state *x)
 {
-	tasklet_hrtimer_cancel(&x->mtimer);
+	hrtimer_cancel(&x->mtimer);
 	del_timer_sync(&x->rtimer);
 	kfree(x->aead);
 	kfree(x->aalg);
@@ -472,8 +472,8 @@ static void xfrm_state_gc_task(struct wo
 
 static enum hrtimer_restart xfrm_timer_handler(struct hrtimer *me)
 {
-	struct tasklet_hrtimer *thr = container_of(me, struct tasklet_hrtimer, timer);
-	struct xfrm_state *x = container_of(thr, struct xfrm_state, mtimer);
+	struct xfrm_state *x = container_of(me, struct xfrm_state, mtimer);
+	enum hrtimer_restart ret = HRTIMER_NORESTART;
 	unsigned long now = get_seconds();
 	long next = LONG_MAX;
 	int warn = 0;
@@ -537,7 +537,8 @@ static enum hrtimer_restart xfrm_timer_h
 		km_state_expired(x, 0, 0);
 resched:
 	if (next != LONG_MAX) {
-		tasklet_hrtimer_start(&x->mtimer, ktime_set(next, 0), HRTIMER_MODE_REL);
+		hrtimer_forward_now(&x->mtimer, ktime_set(next, 0));
+		ret = HRTIMER_RESTART;
 	}
 
 	goto out;
@@ -554,7 +555,7 @@ expired:
 
 out:
 	spin_unlock(&x->lock);
-	return HRTIMER_NORESTART;
+	return ret;
 }
 
 static void xfrm_replay_timer_handler(unsigned long data);
@@ -573,8 +574,8 @@ struct xfrm_state *xfrm_state_alloc(stru
 		INIT_HLIST_NODE(&x->bydst);
 		INIT_HLIST_NODE(&x->bysrc);
 		INIT_HLIST_NODE(&x->byspi);
-		tasklet_hrtimer_init(&x->mtimer, xfrm_timer_handler,
-					CLOCK_BOOTTIME, HRTIMER_MODE_ABS);
+		hrtimer_init(&x->mtimer, CLOCK_BOOTTIME_SOFT, HRTIMER_MODE_ABS);
+		x->mtimer.function = xfrm_timer_handler;
 		setup_timer(&x->rtimer, xfrm_replay_timer_handler,
 				(unsigned long)x);
 		x->curlft.add_time = get_seconds();
@@ -1030,7 +1031,9 @@ found:
 				hlist_add_head_rcu(&x->byspi, net->xfrm.state_byspi + h);
 			}
 			x->lft.hard_add_expires_seconds = net->xfrm.sysctl_acq_expires;
-			tasklet_hrtimer_start(&x->mtimer, ktime_set(net->xfrm.sysctl_acq_expires, 0), HRTIMER_MODE_REL);
+			hrtimer_start(&x->mtimer,
+				      ktime_set(net->xfrm.sysctl_acq_expires, 0),
+				      HRTIMER_MODE_REL);
 			net->xfrm.state_num++;
 			xfrm_hash_grow_check(net, x->bydst.next != NULL);
 			spin_unlock_bh(&net->xfrm.xfrm_state_lock);
@@ -1141,7 +1144,7 @@ static void __xfrm_state_insert(struct x
 		hlist_add_head_rcu(&x->byspi, net->xfrm.state_byspi + h);
 	}
 
-	tasklet_hrtimer_start(&x->mtimer, ktime_set(1, 0), HRTIMER_MODE_REL);
+	hrtimer_start(&x->mtimer, ktime_set(1, 0), HRTIMER_MODE_REL);
 	if (x->replay_maxage)
 		mod_timer(&x->rtimer, jiffies + x->replay_maxage);
 
@@ -1245,7 +1248,9 @@ static struct xfrm_state *__find_acq_cor
 		x->mark.m = m->m;
 		x->lft.hard_add_expires_seconds = net->xfrm.sysctl_acq_expires;
 		xfrm_state_hold(x);
-		tasklet_hrtimer_start(&x->mtimer, ktime_set(net->xfrm.sysctl_acq_expires, 0), HRTIMER_MODE_REL);
+		hrtimer_start(&x->mtimer,
+			      ktime_set(net->xfrm.sysctl_acq_expires, 0),
+			      HRTIMER_MODE_REL);
 		list_add(&x->km.all, &net->xfrm.state_all);
 		hlist_add_head_rcu(&x->bydst, net->xfrm.state_bydst + h);
 		h = xfrm_src_hash(net, daddr, saddr, family);
@@ -1537,7 +1542,7 @@ out:
 		memcpy(&x1->lft, &x->lft, sizeof(x1->lft));
 		x1->km.dying = 0;
 
-		tasklet_hrtimer_start(&x1->mtimer, ktime_set(1, 0), HRTIMER_MODE_REL);
+		hrtimer_start(&x1->mtimer, ktime_set(1, 0), HRTIMER_MODE_REL);
 		if (x1->curlft.use_time)
 			xfrm_state_check_expire(x1);
 
@@ -1561,7 +1566,7 @@ int xfrm_state_check_expire(struct xfrm_
 	if (x->curlft.bytes >= x->lft.hard_byte_limit ||
 	    x->curlft.packets >= x->lft.hard_packet_limit) {
 		x->km.state = XFRM_STATE_EXPIRED;
-		hrtimer_start(&x->mtimer, 0, HRTIMER_MODE_REL);
+		hrtimer_start(&x->mtimer, 0, HRTIMER_MODE_REL);
 		return -EINVAL;
 	}