From: Ursula Braun <ubraun@linux.ibm.com>
Date: Thu, 21 Feb 2019 12:56:54 +0100
Subject: net/smc: fix smc_poll in SMC_INIT state
Git-commit: d7cf4a3bf3a83c977a29055e1c4ffada7697b31f
Patch-mainline: v5.0-rc8
References: bsc#1129848 bsc#1129855 LTC#176249 LTC#176251

smc_poll() returns with mask bit EPOLLPRI if the connection urg_state
is SMC_URG_VALID. Since SMC_URG_VALID is zero, smc_poll signals
EPOLLPRI erroneously if called in state SMC_INIT before the connection
is created, for instance in a non-blocking connect scenario.

This patch switches to non-zero values for the urg states.

Reviewed-by: Karsten Graul <kgraul@linux.ibm.com>
Fixes: de8474eb9d50 ("net/smc: urgent data support")
Signed-off-by: Ursula Braun <ubraun@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
[ ptesarik: The upstream fix changes kABI by renumbering the enum. The
  root cause of the original bug is an uninitialized struct member, so
  an alternative fix is to initialize it properly and stop relying on
  zeroing out the structure. ]
Signed-off-by: Petr Tesarik <ptesarik@suse.com>
---
 net/smc/af_smc.c |    1 +
 1 file changed, 1 insertion(+)

--- a/net/smc/af_smc.c
+++ b/net/smc/af_smc.c
@@ -208,6 +208,7 @@ static struct sock *smc_sock_alloc(struc
 	INIT_WORK(&smc->tcp_listen_work, smc_tcp_listen_work);
 	INIT_WORK(&smc->connect_work, smc_connect_work);
 	INIT_DELAYED_WORK(&smc->conn.tx_work, smc_tx_work);
+	smc->conn.urg_state = SMC_URG_READ;
 	INIT_LIST_HEAD(&smc->accept_q);
 	spin_lock_init(&smc->accept_q_lock);
 	spin_lock_init(&smc->conn.send_lock);