From: Paolo Bonzini <pbonzini@redhat.com>
Date: Mon, 24 Jul 2017 18:54:38 +0200
Subject: KVM: x86: do mask out upper bits of PAE CR3
Patch-mainline: v4.13-rc3
Git-commit: a512177ef3bb92dbec8a96fe337b11c126bf9c91
References: bsc#1077761

This reverts the change of commit f85c758dbee54cc3612a6e873ef7cecdb66ebee5,
as the behavior it modified was intended.

The VM is running in 32-bit PAE mode, and Table 4-7 of the Intel manual
says:

Table 4-7. Use of CR3 with PAE Paging
Bit Position(s)	Contents
4:0		Ignored
31:5		Physical address of the 32-Byte aligned
		page-directory-pointer table used for linear-address
		translation
63:32		Ignored (these bits exist only on processors supporting
		the Intel-64 architecture)

To placate the static checker, write the mask explicitly as an
unsigned long constant instead of using a 32-bit unsigned constant.
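
A minimal standalone sketch of the difference between the spellings
(again not part of the patch, example value made up):

    unsigned long cr3 = 0xdead00001234abe0ul;

    unsigned long a = cr3 & ~31ul;          /* 0xdead00001234abe0: bits 63:32 kept    */
    unsigned long b = cr3 & 0xffffffe0;     /* 0x000000001234abe0: upper bits masked,
                                             * but this 32-bit constant is the
                                             * spelling the static checker flagged    */
    unsigned long c = cr3 & 0xffffffe0ul;   /* 0x000000001234abe0: same masking, with
                                             * the unsigned long intent made explicit */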

Cc: Dan Carpenter <dan.carpenter@oracle.com>
Fixes: f85c758dbee54cc3612a6e873ef7cecdb66ebee5
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Acked-by: Alexander Graf <agraf@suse.de>
---
 arch/x86/kvm/x86.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -598,8 +598,8 @@
 		      (unsigned long *)&vcpu->arch.regs_avail))
 		return true;
 
-	gfn = (kvm_read_cr3(vcpu) & ~31ul) >> PAGE_SHIFT;
-	offset = (kvm_read_cr3(vcpu) & ~31ul) & (PAGE_SIZE - 1);
+	gfn = (kvm_read_cr3(vcpu) & 0xffffffe0ul) >> PAGE_SHIFT;
+	offset = (kvm_read_cr3(vcpu) & 0xffffffe0ul) & (PAGE_SIZE - 1);
 	r = kvm_read_nested_guest_page(vcpu, gfn, pdpte, offset, sizeof(pdpte),
 				       PFERR_USER_MASK | PFERR_WRITE_MASK);
 	if (r < 0)