From: Jeff Mahoney <jeffm@suse.com>
Subject: reiserfs: fix race in prealloc discard
References: bsc#987576
Patch-mainline: v4.13-rc1
Git-commit: 08db141b5313ac2f64b844fb5725b8d81744b417

The main loop in __discard_prealloc is protected by the reiserfs write lock,
which, like the BKL it replaced, is dropped across schedules.  The problem is
that the loop checks i_prealloc_count, calls a routine that can schedule, and
only then adjusts the state.  As a result, two threads calling
reiserfs_discard_prealloc at the same time can race: one calls
reiserfs_free_prealloc_block, the write lock is dropped while it schedules,
and the other then calls reiserfs_free_prealloc_block with the same block
number.  In the right circumstances, this can cause the prealloc count to go
negative.
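
To illustrate, a simplified sketch of the interleaving under the old
ordering (the CPU labels and exact call sequence are illustrative, not a
captured trace):

    CPU0                                      CPU1
    ----                                      ----
    while (ei->i_prealloc_count > 0)
      reiserfs_free_prealloc_block(th,
          inode, ei->i_prealloc_block)
        /* schedules, write lock dropped */
                                              while (ei->i_prealloc_count > 0)
                                                reiserfs_free_prealloc_block(th,
                                                    inode, ei->i_prealloc_block)
                                                /* same block freed twice */
    ei->i_prealloc_block++;
    ei->i_prealloc_count--;
                                              ei->i_prealloc_block++;
                                              ei->i_prealloc_count--;
                                              /* i_prealloc_count underflows */

Advancing i_prealloc_block and decrementing i_prealloc_count before calling
reiserfs_free_prealloc_block closes the window, since the state is updated
before the lock can be dropped.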

Signed-off-by: Jeff Mahoney <jeffm@suse.com>
---

 bitmap.c |   12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

--- a/fs/reiserfs/bitmap.c
+++ b/fs/reiserfs/bitmap.c
@@ -479,9 +479,17 @@ static void __discard_prealloc(struct re
 			       "inode has negative prealloc blocks count.");
 #endif
 	while (ei->i_prealloc_count > 0) {
-		reiserfs_free_prealloc_block(th, inode, ei->i_prealloc_block);
-		ei->i_prealloc_block++;
+		b_blocknr_t block_to_free;
+
+		/*
+		 * reiserfs_free_prealloc_block can drop the write lock,
+		 * which could allow another caller to free the same block.
+		 * We can protect against it by modifying the prealloc
+		 * state before calling it.
+		 */
+		block_to_free = ei->i_prealloc_block++;
 		ei->i_prealloc_count--;
+		reiserfs_free_prealloc_block(th, inode, block_to_free);
 		dirty = 1;
 	}
 	if (dirty)