From e6b1a44eccfcab5e5e280be376f65478c3b2c7a2 Mon Sep 17 00:00:00 2001
From: Hou Tao <houtao1@huawei.com>
Date: Tue, 15 Sep 2020 22:07:50 +0800
Subject: [PATCH] locking/percpu-rwsem: Use this_cpu_{inc,dec}() for read_count
Git-commit: e6b1a44eccfcab5e5e280be376f65478c3b2c7a2
Patch-mainline: v5.9-rc6
References: bsc#1050549

The __this_cpu*() accessors are (in general) IRQ-unsafe which, given
that percpu-rwsem is a blocking primitive, should be just fine.
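
For reference, the unsafe and the irq-safe accessors differ only in
the IRQ protection around the read-modify-write. A sketch paraphrased
from include/asm-generic/percpu.h (exact macro names and layout vary
by kernel version):

  /* __this_cpu_*() / raw_cpu_*() end up here: a plain RMW */
  #define raw_cpu_generic_to_op(pcp, val, op)                 \
  do {                                                        \
          *raw_cpu_ptr(&(pcp)) op val;                        \
  } while (0)

  /* this_cpu_*() ends up here: the RMW cannot be torn by an IRQ */
  #define this_cpu_generic_to_op(pcp, val, op)                \
  do {                                                        \
          unsigned long __flags;                              \
          raw_local_irq_save(__flags);                        \
          raw_cpu_generic_to_op(pcp, val, op);                \
          raw_local_irq_restore(__flags);                     \
  } while (0)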

However, file_end_write() is used from IRQ context and will cause
load-store issues on architectures where the per-cpu accessors are not
natively irq-safe.
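
Concretely, on such architectures __this_cpu_dec() expands to separate
load and store accesses, so an interrupt landing between them loses an
update. A schematic interleaving on one CPU (hypothetical trace; the
IRQ side is the aio write completion path, which reaches
percpu_up_read() via file_end_write() -> __sb_end_write()):

  task context                           IRQ context
  ------------                           -----------
  percpu_down_read()
    tmp = *read_count;                   /* tmp == N */
                                         file_end_write()
                                           percpu_up_read()
                                             *read_count -= 1;  /* N - 1 */
    *read_count = tmp + 1;               /* N + 1: the decrement is lost */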

Fix it by using the IRQ-safe this_cpu_*() for operations on
read_count. This will generate more expensive code on a number of
platforms, which might cause a performance regression for some of the
other percpu-rwsem users.

If any such is reported, we can consider alternative solutions.
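
For scale, on architectures with natively irq-safe per-cpu ops the
generated code should be unchanged; e.g. on x86-64 this_cpu_inc()
compiles down to a single segment-prefixed instruction, roughly
(illustrative, not actual compiler output):

  incl %gs:(%rax)	/* this_cpu_inc(*sem->read_count) */

Only architectures that rely on the generic fallback above pay a
raw_local_irq_save()/restore() pair per counter update.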

Fixes: 70fe2f48152e ("aio: fix freeze protection of aio writes")
Signed-off-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Link: https://lkml.kernel.org/r/20200915140750.137881-1-houtao1@huawei.com
[SLE-specific backport: SLE carries an older version of the percpu-rwsem code]
Signed-off-by: Davidlohr Bueso <dbueso@suse.de>

---
 include/linux/percpu-rwsem.h  | 6 +++---
 kernel/locking/percpu-rwsem.c | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/percpu-rwsem.h b/include/linux/percpu-rwsem.h
index 93664f022ecf..19b06f74aeaf 100644
--- a/include/linux/percpu-rwsem.h
+++ b/include/linux/percpu-rwsem.h
@@ -43,7 +43,7 @@ static inline void percpu_down_read_preempt_disable(struct percpu_rw_semaphore *
 	 * and that one the synchronize_sched() is done, the writer will see
 	 * anything we did within this RCU-sched read-size critical section.
 	 */
-	__this_cpu_inc(*sem->read_count);
+	this_cpu_inc(*sem->read_count);
 	if (unlikely(!rcu_sync_is_idle(&sem->rss)))
 		__percpu_down_read(sem, false); /* Unconditional memory barrier */
 	barrier();
@@ -67,7 +67,7 @@ static inline int percpu_down_read_trylock(struct percpu_rw_semaphore *sem)
 	/*
 	 * Same as in percpu_down_read().
 	 */
-	__this_cpu_inc(*sem->read_count);
+	this_cpu_inc(*sem->read_count);
 	if (unlikely(!rcu_sync_is_idle(&sem->rss)))
 		ret = __percpu_down_read(sem, true); /* Unconditional memory barrier */
 	preempt_enable();
@@ -93,7 +93,7 @@ static inline void percpu_up_read_preempt_enable(struct percpu_rw_semaphore *sem
 	 * Same as in percpu_down_read().
 	 */
 	if (likely(rcu_sync_is_idle(&sem->rss)))
-		__this_cpu_dec(*sem->read_count);
+		this_cpu_dec(*sem->read_count);
 	else
 		__percpu_up_read(sem); /* Unconditional memory barrier */
 	preempt_enable();
diff --git a/kernel/locking/percpu-rwsem.c b/kernel/locking/percpu-rwsem.c
index 883cf1b92d90..6a2824fe7139 100644
--- a/kernel/locking/percpu-rwsem.c
+++ b/kernel/locking/percpu-rwsem.c
@@ -99,7 +99,7 @@ void __percpu_up_read(struct percpu_rw_semaphore *sem)
 	 * zero, as that is the only time it matters) they will also see our
 	 * critical section.
 	 */
-	__this_cpu_dec(*sem->read_count);
+	this_cpu_dec(*sem->read_count);
 
 	/* Prod writer to recheck readers_active */
 	rcuwait_wake_up(&sem->writer);
-- 
2.26.2