From: Joerg Roedel <jroedel@suse.de>
Date: Thu, 22 Jun 2017 12:16:33 +0200
Subject: iommu/amd: Free already flushed ring-buffer entries before full-check
Git-commit: 9ce3a72cd7f7e0b9ba1c5952e4461b363824bca9
Patch-mainline: v4.13-rc1
References: bsc#1045709

To benefit from IOTLB flushes on other CPUs, we have to free
the already flushed IOVAs from the ring-buffer before we do
the queue_ring_full() check.

Signed-off-by: Joerg Roedel <jroedel@suse.de>
---
 drivers/iommu/amd_iommu.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -1911,14 +1911,20 @@ static void queue_add(struct dma_ops_dom
 	spin_lock_irqsave(&queue->lock, flags);
 
 	/*
+	 * First remove the entries from the ring-buffer that are already
+	 * flushed, to make the queue_ring_full() check below less likely
+	 */
+	queue_ring_free_flushed(dom, queue);
+
+	/*
 	 * When ring-queue is full, flush the entries from the IOTLB so
 	 * that we can free all entries with queue_ring_free_flushed()
 	 * below.
 	 */
-	if (queue_ring_full(queue))
+	if (queue_ring_full(queue)) {
 		dma_ops_domain_flush_tlb(dom);
-
-	queue_ring_free_flushed(dom, queue);
+		queue_ring_free_flushed(dom, queue);
+	}
 
 	idx = queue_ring_add(queue);