From 2c3237fd89e59f118bd9d49c1e99b010b0489665 Mon Sep 17 00:00:00 2001
From: Liu Yuntao <liuyuntao10@huawei.com>
Date: Fri, 24 Sep 2021 15:43:32 -0700
Subject: [PATCH] mm/shmem.c: fix judgment error in shmem_is_huge()

References: git fixes (mm/shmem)
Patch-mainline: v5.15
Git-commit: de6ee659684b1a2b149e0780d3c5e8032f3647d6

In the case of SHMEM_HUGE_WITHIN_SIZE, the page index is not rounded up
correctly.  When the page index points to the first page in a huge page,
round_up() leaves it unchanged, so it marks the end of the previous huge
page rather than the end of the huge page the index belongs to.

An example:

HPAGE_PMD_NR on my machine is 512 (2 MB huge page size).  After
allocating a 3000 KB buffer, I access it at offset 2050 KB.  In
shmem_is_huge(), the corresponding index happens to be 512.  After
being rounded up by HPAGE_PMD_NR, it is still 512, which is smaller
than i_size, so shmem_is_huge() returns true.  As a result, my buffer
takes an additional huge page, which should not happen when
shmem_enabled is set to within_size.
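
Plugging the example's numbers in (a sketch assuming 4 KB base pages,
so PAGE_SHIFT is 12 and the 3000 KB i_size spans 750 pages):

    unsigned long i_size = 3000UL << 10;  /* 3072000 bytes */
    unsigned long pages  = i_size >> 12;  /* 750 pages */

    /* before: round_up(512, 512) == 512,  750 >= 512  -> huge page */
    /* after:  round_up(513, 512) == 1024, 750 <  1024 -> no huge page */

Note that once the index is rounded up to at least HPAGE_PMD_NR, the
old i_size >= HPAGE_PMD_SIZE check is implied by
i_size >> PAGE_SHIFT >= index, which is why the fix drops it.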

Link: https://lkml.kernel.org/r/20210909032007.18353-1-liuyuntao10@huawei.com
Fixes: f3f0e1d2150b2b ("khugepaged: add support of collapse for tmpfs/shmem pages")
Signed-off-by: Liu Yuntao <liuyuntao10@huawei.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: wuxu.wu <wuxu.wu@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Mel Gorman <mgorman@suse.de>
---
 mm/shmem.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 21c29f7e3894..7fa5e2d14be1 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -490,9 +490,9 @@ bool shmem_is_huge(struct vm_area_struct *vma,
 	case SHMEM_HUGE_ALWAYS:
 		return true;
 	case SHMEM_HUGE_WITHIN_SIZE:
-		index = round_up(index, HPAGE_PMD_NR);
+		index = round_up(index + 1, HPAGE_PMD_NR);
 		i_size = round_up(i_size_read(inode), PAGE_SIZE);
-		if (i_size >= HPAGE_PMD_SIZE && (i_size >> PAGE_SHIFT) >= index)
+		if (i_size >> PAGE_SHIFT >= index)
 			return true;
 		fallthrough;
 	case SHMEM_HUGE_ADVISE: