From: Sagi Grimberg <sagi@grimberg.me>
Date: Thu, 13 Jul 2017 11:09:39 +0300
Subject: mlx5e: don't assume anything on the irq affinity mappings of the
 device
Patch-mainline: v4.14-rc1
Git-commit: a85e5474f4c783b252bf6b80571cdb2abb7d69d9
References: bsc#1046303 FATE#322944

mlx5e currently assumes that the irq affinity spreads the first
irq vectors across the device's home node cpus. With the new generic
affinity mappings this is no longer the case, hence mlx5e should not
rely on this anymore.
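
For reference, the heuristic being removed is restated below as a
standalone helper, for illustration only (the helper name is
hypothetical and this snippet is not part of the patch). It capped the
channel count by the CPU count of the device's home node, which is only
meaningful when the first irq vectors are in fact pinned to that node's
cpus:

	/* Illustration of the removed assumption, not part of the patch:
	 * cap num_channels by the cpus of the device's home node. With
	 * generic affinity mappings the first vectors are spread across
	 * all online cpus, so this cap no longer reflects reality.
	 */
	static int mlx5e_capped_num_channels(struct mlx5_core_dev *mdev,
					     int num_channels)
	{
		int node = mdev->priv.numa_node;
		int node_num_of_cores;

		if (node == -1)
			node = first_online_node;

		node_num_of_cores = cpumask_weight(cpumask_of_node(node));

		if (node_num_of_cores)
			num_channels = min_t(int, num_channels,
					     node_num_of_cores);

		return num_channels;
	}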

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Acked-by: Thomas Bogendoerfer <tbogendoerfer@suse.de>
---
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c |   10 ----------
 1 file changed, 10 deletions(-)

--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -3793,18 +3793,8 @@ void mlx5e_build_default_indir_rqt(struc
 				   u32 *indirection_rqt, int len,
 				   int num_channels)
 {
-	int node = mdev->priv.numa_node;
-	int node_num_of_cores;
 	int i;
 
-	if (node == -1)
-		node = first_online_node;
-
-	node_num_of_cores = cpumask_weight(cpumask_of_node(node));
-
-	if (node_num_of_cores)
-		num_channels = min_t(int, num_channels, node_num_of_cores);
-
 	for (i = 0; i < len; i++)
 		indirection_rqt[i] = i % num_channels;
 }