From: Andi Kleen <ak@linux.intel.com>
Date: Thu, 3 May 2018 08:35:42 -0700
Subject: [PATCH 1/8] x86, l1tf: Increase 32bit PAE __PHYSICAL_PAGE_MASK
Patch-mainline: v4.19-rc1
Git-commit: 50896e180c6aa3a9c61a26ced99e15d602666a4c
References: bnc#1087081, CVE-2018-3620

We need to protect memory inside the guest against L1TF by inverting
the right bits to point to non-existent memory.

The hypervisor should already protect itself against the guest by flushing
the caches as needed, but pages inside the guest are not protected against
attacks from other processes in that guest.

Our inverted PTE mask has to match the host's to provide full
protection for all pages the host could possibly map into our guest.
The host is likely 64bit and may use more than 43 bits of
memory. We want to set all possible bits to be safe here.

On 32bit PAE the max PTE mask is currently set to 44 bits because that
is the limit imposed by 32bit unsigned long PFNs in the VMs. This
limits the mask to below what the host could possibly use for physical
pages.

The L1TF PROT_NONE protection code uses the PTE masks to determine
what bits to invert to make sure the higher bits are set for unmapped
entries to prevent L1TF speculation attacks against EPT inside guests.

We want to invert all bits that could be used by the host.

So increase the mask on 32bit PAE to 52 to match 64bit.
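
To make the inversion concrete, here is a minimal userspace sketch
(invert_pfn_bits and PHYSICAL_PAGE_MASK are illustrative names for this
example, not the kernel's actual helpers): with the mask shift raised
to 52, flipping the PFN bits of a PROT_NONE entry sets bits well above
anything a 32bit guest can map, so the stale entry points at
non-existent memory.

  #include <stdint.h>
  #include <stdio.h>

  #define PAGE_SHIFT          12
  #define PHYSICAL_MASK_SHIFT 52  /* was 44 on 32bit PAE */
  /* Bits PAGE_SHIFT..PHYSICAL_MASK_SHIFT-1, i.e. the PFN part of a PTE */
  #define PHYSICAL_PAGE_MASK \
          (((1ULL << PHYSICAL_MASK_SHIFT) - 1) & ~((1ULL << PAGE_SHIFT) - 1))

  /* Flip all PFN bits when an entry becomes PROT_NONE */
  static uint64_t invert_pfn_bits(uint64_t pte)
  {
          return pte ^ PHYSICAL_PAGE_MASK;
  }

  int main(void)
  {
          uint64_t pte = 0x1234ULL << PAGE_SHIFT; /* some mapped page */

          printf("inverted: %#llx\n",
                 (unsigned long long)invert_pfn_bits(pte));
          return 0;
  }

If the mask stopped at 44 bits, the inversion would leave bits 44..51
clear, and a host page mapped at such an address would remain reachable
through the stale entry.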

The real limit for a 32bit OS is still 44 bits.

All Linux PTEs are created from unsigned long PFNs, so the physical
addresses they encode cannot exceed 44 bits on a 32bit kernel. These
extra PFN bits should therefore never be set. The only users of this
macro use it to look at PTEs, so it's safe.
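
As a quick check of the 44 bit figure (again only an illustrative
userspace computation, not kernel code): with PAGE_SHIFT of 12, the
largest physical address a 32bit unsigned long PFN can encode is
(2^32 - 1) << 12, so bit 43 is the highest bit a legitimate 32bit PTE
will ever carry.

  #include <stdint.h>
  #include <stdio.h>

  #define PAGE_SHIFT 12

  int main(void)
  {
          uint32_t max_pfn  = 0xffffffffU;  /* 32bit unsigned long PFN */
          uint64_t max_phys = (uint64_t)max_pfn << PAGE_SHIFT;

          /* Prints 0xffffffff000: 44 = 32 + 12 bits of physical address */
          printf("max physical address: %#llx\n",
                 (unsigned long long)max_phys);
          return 0;
  }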

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Dave Hansen <dave.hansen@intel.com>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>

---

v2: Improve commit message.
---
 arch/x86/include/asm/page_32_types.h | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/page_32_types.h b/arch/x86/include/asm/page_32_types.h
index aa30c3241ea7..0d5c739eebd7 100644
--- a/arch/x86/include/asm/page_32_types.h
+++ b/arch/x86/include/asm/page_32_types.h
@@ -29,8 +29,13 @@
 #define N_EXCEPTION_STACKS 1
 
 #ifdef CONFIG_X86_PAE
-/* 44=32+12, the limit we can fit into an unsigned long pfn */
-#define __PHYSICAL_MASK_SHIFT	44
+/*
+ * This is beyond the 44 bit limit imposed by the 32bit long pfns,
+ * but we need the full mask to make sure inverted PROT_NONE
+ * entries have all the host bits set in a guest.
+ * The real limit is still 44 bits.
+ */
+#define __PHYSICAL_MASK_SHIFT	52
 #define __VIRTUAL_MASK_SHIFT	32
 
 #else  /* !CONFIG_X86_PAE */
-- 
2.14.4