From: Daniel Vetter <daniel.vetter@ffwll.ch>
Date: Mon, 26 Aug 2019 22:14:25 +0200
Subject: mm/mmu_notifiers: annotate with might_sleep()
Git-commit: 810e24e009cf71bf85a1524f272a744c54ca6591
Patch-mainline: v5.4-rc1
References: jsc#SLE-16387

Since mmu notifiers aren't registered for most processes, but their
callbacks could block in interesting places, add some annotations. This
should help make sure the core mm keeps up its end of the mmu notifier
contract.

Because of that, the checks sit outside of the mm_has_notifiers()
checks, so they fire even for processes that have no notifiers
registered. They compile away without CONFIG_DEBUG_ATOMIC_SLEEP.
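
As a rough illustration (not part of the patch; the lock and the
range/vma variables are hypothetical), a caller that starts an
invalidation from atomic context now warns under
CONFIG_DEBUG_ATOMIC_SLEEP even when the mm has no notifiers registered:

	struct mmu_notifier_range range;

	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, mm,
				start, end);

	spin_lock(&ptl);				/* atomic context */
	mmu_notifier_invalidate_range_start(&range);	/* might_sleep() warns */
	/* ... clear ptes ... */
	mmu_notifier_invalidate_range_end(&range);
	spin_unlock(&ptl);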

Link: https://lore.kernel.org/r/20190826201425.17547-6-daniel.vetter@ffwll.ch
Suggested-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 include/linux/mmu_notifier.h |    5 +++++
 1 file changed, 5 insertions(+)

--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -343,6 +343,8 @@ static inline void mmu_notifier_change_p
 static inline void
 mmu_notifier_invalidate_range_start(struct mmu_notifier_range *range)
 {
+	might_sleep();
+
 	lock_map_acquire(&__mmu_notifier_invalidate_range_start_map);
 	if (mm_has_notifiers(range->mm)) {
 		range->flags |= MMU_NOTIFIER_RANGE_BLOCKABLE;
@@ -368,6 +370,9 @@ mmu_notifier_invalidate_range_start_nonb
 static inline void
 mmu_notifier_invalidate_range_end(struct mmu_notifier_range *range)
 {
+	if (mmu_notifier_range_blockable(range))
+		might_sleep();
+
 	if (mm_has_notifiers(range->mm))
 		__mmu_notifier_invalidate_range_end(range, false);
 }
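
Note on the blockable check above: as of this kernel version,
mmu_notifier_invalidate_range_start_nonblock() clears
MMU_NOTIFIER_RANGE_BLOCKABLE, so an end call paired with a non-blocking
start is deliberately not annotated. A rough sketch of such a caller,
loosely following the oom-reaper path in mm/oom_kill.c (details
abridged, not part of this patch):

	if (mmu_notifier_invalidate_range_start_nonblock(&range)) {
		/* a notifier can't run without blocking, back off */
		tlb_finish_mmu(&tlb, range.start, range.end);
		return false;
	}
	unmap_page_range(&tlb, vma, range.start, range.end, NULL);
	/* no might_sleep() here: the range is marked non-blockable */
	mmu_notifier_invalidate_range_end(&range);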