From: Vlastimil Babka <vbabka@suse.cz>
Subject: x86, mm: fix fast GUP with hypercall-based TLB flushing
Patch-mainline: never, fixed in mainline by way of a major rewrite, but a stable submission might be possible
References: VM Functionality, bsc#1140903

The x86 version of get_user_pages_fast() relies on disabled interrupts to
synchronize gup_pte_range() against a parallel munmap() in the window
between gup_get_pte(ptep) and get_page(). The munmap() side clears the
pte, then flushes TLBs, then releases the page. As the TLB flush is done
synchronously via IPIs that each CPU has to acknowledge, disabling
interrupts blocks the page release, and get_page(), which assumes an
existing reference on the page, is thus safe.
However, when the TLB flush is done by a hypercall, e.g. in Xen PV or
Hyper-V guests, disabled interrupts do not block anything, and get_page()
can succeed on a page that was already freed or even reused.
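
For illustration, the problematic interleaving on such a guest looks
roughly like this (simplified, function names abbreviated):

  CPU 0 (gup_pte_range)             CPU 1 (munmap)
  ---------------------             --------------
  disable interrupts
  pte = gup_get_pte(ptep)
                                    clear pte
                                    flush TLB via hypercall
                                      (not blocked by CPU 0 having
                                       interrupts disabled)
                                    put_page(page)
                                      -> page can be freed/reused
  get_page(page)
    -> takes a reference on a freed
       or reused page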

Fix this by removing the dependency on TLB flush interrupts in the same
way as the generic get_user_pages_fast() code: take the reference with
page_cache_add_speculative() and revalidate the PTE contents after
pinning the page.
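
With the change, the same interleaving is handled like in the generic
code: the speculative reference fails if the page was already freed,
and if it succeeds, the PTE recheck notices the cleared pte, so the
reference is dropped again and the fast path bails out (illustration
only, the slow path will then retry):

  CPU 0 (gup_pte_range)             CPU 1 (munmap)
  ---------------------             --------------
  pte = gup_get_pte(ptep)
                                    clear pte
  head = try_get_compound_head(page, 1)
  pte_val(pte) != pte_val(*ptep)
    -> put_page(head), bail out of
       the fast path
                                    flush TLB via hypercall
                                    put_page(page)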

Reproduced-by: Oscar Salvador <osalvador@suse.de>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>

---
 arch/x86/mm/gup.c |   27 +++++++++++++++++++++++++--
 1 file changed, 25 insertions(+), 2 deletions(-)

--- a/arch/x86/mm/gup.c
+++ b/arch/x86/mm/gup.c
@@ -98,6 +98,20 @@ static inline int pte_allows_gup(unsigne
 }
 
 /*
+ * Return the compound head page with ref appropriately incremented,
+ * or NULL if that failed.
+ */
+static inline struct page *try_get_compound_head(struct page *page, int refs)
+{
+	struct page *head = compound_head(page);
+	if (WARN_ON_ONCE(page_ref_count(head) < 0))
+		return NULL;
+	if (unlikely(!page_cache_add_speculative(head, refs)))
+		return NULL;
+	return head;
+}
+
+/*
  * The performance critical leaf functions are made noinline otherwise gcc
  * inlines everything into a single function which results in too much
  * register pressure.
@@ -117,7 +131,7 @@ static noinline int gup_pte_range(pmd_t
 	ptem = ptep = pte_offset_map(&pmd, addr);
 	do {
 		pte_t pte = gup_get_pte(ptep);
-		struct page *page;
+		struct page *head, *page;
 
 		/* Similar to the PMD case, NUMA hinting must take slow path */
 		if (pte_protnone(pte))
@@ -137,10 +151,19 @@ static noinline int gup_pte_range(pmd_t
 
 		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 		page = pte_page(pte);
-		if (unlikely(!try_get_page(page))) {
+
+		head = try_get_compound_head(page, 1);
+		if (!head) {
 			put_dev_pagemap(pgmap);
 			break;
 		}
+
+		if (unlikely(pte_val(pte) != pte_val(*ptep))) {
+			put_page(head);
+			put_dev_pagemap(pgmap);
+			break;
+		}
+
 		put_dev_pagemap(pgmap);
 		SetPageReferenced(page);
 		pages[*nr] = page;