<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux.git/net/rxrpc/input.c, branch v4.19</title>
<subtitle>Linux kernel source tree</subtitle>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/'/>
<entry>
<title>rxrpc: Fix the packet reception routine</title>
<updated>2018-10-08T21:42:04+00:00</updated>
<author>
<name>David Howells</name>
<email>dhowells@redhat.com</email>
</author>
<published>2018-10-08T14:46:25+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=c1e15b4944c9fa7fbbb74f7a5920a1e31b4b965a'/>
<id>c1e15b4944c9fa7fbbb74f7a5920a1e31b4b965a</id>
<content type='text'>
The rxrpc_input_packet() function and its call tree were built around the
assumption that the data_ready() handler called from UDP to inform a kernel
service that there is data to be had was non-reentrant.  This meant that
certain locking could be dispensed with.

This, however, turns out not to be the case with a multi-queue network card
that can deliver packets to multiple CPUs simultaneously.  Each of those
CPUs can be in the rxrpc_input_packet() function at the same time.

Fix by adding or changing some structure members:

 (1) Add peer-&gt;rtt_input_lock to serialise access to the RTT buffer.

 (2) Make conn-&gt;service_id into a 32-bit variable so that it can be
     cmpxchg'd on all arches.

 (3) Add call-&gt;input_lock to serialise access to the Rx/Tx state.  Note
     that although the Rx and Tx states are (almost) entirely separate,
     there's no point completing the separation and having separate locks
     since it's a bi-phasal RPC protocol rather than a bidirectional
     streaming protocol.  Data transmission and data reception do not take
     place simultaneously on any particular call.

and making the following functional changes:

 (1) In rxrpc_input_data(), hold call-&gt;input_lock around the core to
     prevent simultaneous producing of packets into the Rx ring and
     updating of tracking state for a particular call.

 (2) In rxrpc_input_ping_response(), only read call-&gt;ping_serial once, and
     check it before checking RXRPC_CALL_PINGING as that's a cheaper test.
     The bit test and bit clear can then be combined.  No further locking
     is needed here.

 (3) In rxrpc_input_ack(), take call-&gt;input_lock after we've parsed much of
     the ACK packet.  The superseded ACK check is then done both before and
     after the lock is taken.

     The handling of ackinfo data is split, parsing before the lock is taken
     and processing with it held.  This is keyed on rxMTU being non-zero.

     Congestion management is also done within the locked section.

 (4) In rxrpc_input_ackall(), take call-&gt;input_lock around the Tx window
     rotation.  The ACKALL packet carries no information and is only really
     useful after all packets have been transmitted since it's imprecise.

 (5) In rxrpc_input_implicit_end_call(), we use rx-&gt;incoming_lock to
     prevent calls being simultaneously implicitly ended on two cpus and
     also to prevent any races with incoming call setup.

 (6) In rxrpc_input_packet(), use cmpxchg() to effect the service upgrade
     on a connection.  It is only permitted to happen once for a
     connection.

 (7) In rxrpc_new_incoming_call(), we have to recheck the routing inside
     rx-&gt;incoming_lock to see if someone else set up the call, connection
     or peer whilst we were getting there.  We can't trust the values from
     the earlier routing check unless we pin refs on them - which we want
     to avoid.

     Further, we need to allow for an incoming call to have its state
     changed on another CPU between us making it live and us adjusting it
     because the conn is now in the RXRPC_CONN_SERVICE state.

 (8) In rxrpc_peer_add_rtt(), take peer-&gt;rtt_input_lock around the access
     to the RTT buffer.  Don't need to lock around setting peer-&gt;rtt.

For reference, the inventory of state-accessing or state-altering functions
used by the packet input procedure is:

&gt; rxrpc_input_packet()
  * PACKET CHECKING

  * ROUTING
    &gt; rxrpc_post_packet_to_local()
    &gt; rxrpc_find_connection_rcu() - uses RCU
      &gt; rxrpc_lookup_peer_rcu() - uses RCU
      &gt; rxrpc_find_service_conn_rcu() - uses RCU
      &gt; idr_find() - uses RCU

  * CONNECTION-LEVEL PROCESSING
    - Service upgrade
      - Can only happen once per conn
      ! Changed to use cmpxchg
    &gt; rxrpc_post_packet_to_conn()
    - Setting conn-&gt;hi_serial
      - Probably safe not using locks
      - Maybe use cmpxchg

  * CALL-LEVEL PROCESSING
    &gt; Old-call checking
      &gt; rxrpc_input_implicit_end_call()
        &gt; rxrpc_call_completed()
	&gt; rxrpc_queue_call()
	! Need to take rx-&gt;incoming_lock
	&gt; __rxrpc_disconnect_call()
	&gt; rxrpc_notify_socket()
    &gt; rxrpc_new_incoming_call()
      - Uses rx-&gt;incoming_lock for the entire process
        - Might be able to drop this earlier in favour of the call lock
      &gt; rxrpc_incoming_call()
      	! Conflicts with rxrpc_input_implicit_end_call()
    &gt; rxrpc_send_ping()
      - Don't need locks to check rtt state
      &gt; rxrpc_propose_ACK()

  * PACKET DISTRIBUTION
    &gt; rxrpc_input_call_packet()
      &gt; rxrpc_input_data()
	* QUEUE DATA PACKET ON CALL
	&gt; rxrpc_reduce_call_timer()
	  - Uses timer_reduce()
	! Needs call-&gt;input_lock()
	&gt; rxrpc_receiving_reply()
	  ! Needs locking around ack state
	  &gt; rxrpc_rotate_tx_window()
	  &gt; rxrpc_end_tx_phase()
	&gt; rxrpc_proto_abort()
	&gt; rxrpc_input_dup_data()
	- Fills the Rx buffer
	- rxrpc_propose_ACK()
	- rxrpc_notify_socket()

      &gt; rxrpc_input_ack()
	* APPLY ACK PACKET TO CALL AND DISCARD PACKET
	&gt; rxrpc_input_ping_response()
	  - Probably doesn't need any extra locking
	  ! Need READ_ONCE() on call-&gt;ping_serial
	  &gt; rxrpc_input_check_for_lost_ack()
	    - Takes call-&gt;lock to consult Tx buffer
	  &gt; rxrpc_peer_add_rtt()
	    ! Needs to take a lock (peer-&gt;rtt_input_lock)
	    ! Could perhaps manage with cmpxchg() and xadd() instead
	&gt; rxrpc_input_requested_ack()
	  - Consults Tx buffer
	    ! Probably needs a lock
	  &gt; rxrpc_peer_add_rtt()
	&gt; rxrpc_propose_ack()
	&gt; rxrpc_input_ackinfo()
	  - Changes call-&gt;tx_winsize
	    ! Use cmpxchg to handle change
	    ! Should perhaps track serial number
	  - Uses peer-&gt;lock to record MTU specification changes
	&gt; rxrpc_proto_abort()
	! Need to take call-&gt;input_lock
	&gt; rxrpc_rotate_tx_window()
	&gt; rxrpc_end_tx_phase()
	&gt; rxrpc_input_soft_acks()
	- Consults the Tx buffer
	&gt; rxrpc_congestion_management()
	  - Modifies the Tx annotations
	  ! Needs call-&gt;input_lock()
	  &gt; rxrpc_queue_call()

      &gt; rxrpc_input_abort()
	* APPLY ABORT PACKET TO CALL AND DISCARD PACKET
	&gt; rxrpc_set_call_completion()
	&gt; rxrpc_notify_socket()

      &gt; rxrpc_input_ackall()
	* APPLY ACKALL PACKET TO CALL AND DISCARD PACKET
	! Need to take call-&gt;input_lock
	&gt; rxrpc_rotate_tx_window()
	&gt; rxrpc_end_tx_phase()

    &gt; rxrpc_reject_packet()

There are some functions used by the above that queue the packet, after
which the procedure is terminated:

 - rxrpc_post_packet_to_local()
   - local-&gt;event_queue is an sk_buff_head
   - local-&gt;processor is a work_struct
 - rxrpc_post_packet_to_conn()
   - conn-&gt;rx_queue is an sk_buff_head
   - conn-&gt;processor is a work_struct
 - rxrpc_reject_packet()
   - local-&gt;reject_queue is an sk_buff_head
   - local-&gt;processor is a work_struct

And some that offload processing to process context:

 - rxrpc_notify_socket()
   - Uses RCU lock
   - Uses call-&gt;notify_lock to call call-&gt;notify_rx
   - Uses call-&gt;recvmsg_lock to queue recvmsg side
 - rxrpc_queue_call()
   - call-&gt;processor is a work_struct
 - rxrpc_propose_ACK()
   - Uses call-&gt;lock to wrap __rxrpc_propose_ACK()

And a bunch that complete a call, all of which use call-&gt;state_lock to
protect the call state:

 - rxrpc_call_completed()
 - rxrpc_set_call_completion()
 - rxrpc_abort_call()
 - rxrpc_proto_abort()
   - Also uses rxrpc_queue_call()

Fixes: 17926a79320a ("[AF_RXRPC]: Provide secure RxRPC sockets for use by userspace and kernel both")
Signed-off-by: David Howells &lt;dhowells@redhat.com&gt;
</content>
</entry>
<entry>
<title>rxrpc: Only take the rwind and mtu values from latest ACK</title>
<updated>2018-10-08T21:42:04+00:00</updated>
<author>
<name>David Howells</name>
<email>dhowells@redhat.com</email>
</author>
<published>2018-10-08T14:46:11+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=298bc15b2079c324e82d0a6fda39c3d762af7282'/>
<id>298bc15b2079c324e82d0a6fda39c3d762af7282</id>
<content type='text'>
Move the out-of-order and duplicate ACK packet check to before the call to
rxrpc_input_ackinfo() so that the receive window size and MTU size are only
checked in the latest ACK packet and don't regress.

Fixes: 248f219cb8bc ("rxrpc: Rewrite the data and ack handling code")
Signed-off-by: David Howells &lt;dhowells@redhat.com&gt;
</content>
</entry>
<entry>
<title>rxrpc: Carry call state out of locked section in rxrpc_rotate_tx_window()</title>
<updated>2018-10-08T14:57:36+00:00</updated>
<author>
<name>David Howells</name>
<email>dhowells@redhat.com</email>
</author>
<published>2018-10-08T14:46:05+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=dfe995224693798e554ab4770f6d8a096afc60cd'/>
<id>dfe995224693798e554ab4770f6d8a096afc60cd</id>
<content type='text'>
Carry the call state out of the locked section in rxrpc_rotate_tx_window()
rather than sampling it afterwards.  This is only used to select tracepoint
data, but could have changed by the time we do the tracepoint.

Signed-off-by: David Howells &lt;dhowells@redhat.com&gt;
</content>
</entry>
<entry>
<title>rxrpc: Don't check RXRPC_CALL_TX_LAST after calling rxrpc_rotate_tx_window()</title>
<updated>2018-10-08T14:55:20+00:00</updated>
<author>
<name>David Howells</name>
<email>dhowells@redhat.com</email>
</author>
<published>2018-10-08T14:46:01+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=c479d5f2c2e1ce609da08c075054440d97ddff52'/>
<id>c479d5f2c2e1ce609da08c075054440d97ddff52</id>
<content type='text'>
We should only call the function to end a call's Tx phase if we rotated the
marked-last packet out of the transmission buffer.

Make rxrpc_rotate_tx_window() return an indication of whether it just
rotated the packet marked as the last out of the transmit buffer, carrying
the information out of the locked section in that function.

We can then check the return value instead of examining RXRPC_CALL_TX_LAST.

Fixes: 70790dbe3f66 ("rxrpc: Pass the last Tx packet marker in the annotation buffer")
Signed-off-by: David Howells &lt;dhowells@redhat.com&gt;
</content>
</entry>
<entry>
<title>rxrpc: Don't need to take the RCU read lock in the packet receiver</title>
<updated>2018-10-08T14:45:56+00:00</updated>
<author>
<name>David Howells</name>
<email>dhowells@redhat.com</email>
</author>
<published>2018-10-08T14:45:56+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=bfd2821117a751763718f1b5e57216c0d9b19a49'/>
<id>bfd2821117a751763718f1b5e57216c0d9b19a49</id>
<content type='text'>
We don't need to take the RCU read lock in the rxrpc packet receive
function because it's held further up the stack in the IP input routine
around the UDP receive routines.

Fix this by dropping the RCU read lock calls from rxrpc_input_packet().
This simplifies the code.

Fixes: 70790dbe3f66 ("rxrpc: Pass the last Tx packet marker in the annotation buffer")
Signed-off-by: David Howells &lt;dhowells@redhat.com&gt;
</content>
</entry>
<entry>
<title>rxrpc: Use the UDP encap_rcv hook</title>
<updated>2018-10-08T14:45:18+00:00</updated>
<author>
<name>David Howells</name>
<email>dhowells@redhat.com</email>
</author>
<published>2018-10-04T10:10:51+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=5271953cad31b97dea80f848c16e96ad66401199'/>
<id>5271953cad31b97dea80f848c16e96ad66401199</id>
<content type='text'>
Use the UDP encap_rcv hook to cut the bit out of the rxrpc packet reception
in which a packet is placed onto the UDP receive queue and then immediately
removed again by rxrpc.  Going via the queue in this manner seems like it
should be unnecessary.

This does, however, require the invention of a value to place in encap_type
as that's one of the conditions to switch packets out to the encap_rcv
hook.  Possibly the value doesn't actually matter for anything other than
sockopts on the UDP socket, which aren't accessible outside of rxrpc
anyway.

This seems to shave a bit off the time elapsed between each sk_buff being
timestamped and turning up in rxrpc (the final number in the
following trace excerpts).  I measured this by making the rxrpc_rx_packet
trace point print the time elapsed between the skb being timestamped and
the current time (in ns), e.g.:

	... 424.278721: rxrpc_rx_packet: ...  ACK 25026

So doing a 512MiB DIO read from my test server, with an unmodified kernel:

	N       min     max     sum		mean    stddev
	27605   2626    7581    7.83992e+07     2840.04 181.029

and with the patch applied:

	N       min     max     sum		mean    stddev
	27547   1895    12165   6.77461e+07     2459.29 255.02

Signed-off-by: David Howells &lt;dhowells@redhat.com&gt;
</content>
</entry>
<entry>
<title>rxrpc: Fix the data_ready handler</title>
<updated>2018-10-05T13:21:59+00:00</updated>
<author>
<name>David Howells</name>
<email>dhowells@redhat.com</email>
</author>
<published>2018-10-05T13:05:35+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=2cfa2271604bb26e75b828d38f357ed084464795'/>
<id>2cfa2271604bb26e75b828d38f357ed084464795</id>
<content type='text'>
Fix the rxrpc_data_ready() function to pick up all packets and to not miss
any.  There are two problems:

 (1) The sk_data_ready pointer on the UDP socket is set *after* it is
     bound.  This means that it's open for business before we're ready to
     dequeue packets, and there's a tiny window in which a packet can
     sneak onto the receive queue without us ever knowing about it.

     Fix this by setting the pointers on the socket prior to binding it.

 (2) skb_recv_udp() will return an error (such as ENETUNREACH) if there was
     an error on the transmission side, even though we set the
     sk_error_report hook.  Because rxrpc_data_ready() returns immediately
     in such a case, it never actually removes its packet from the receive
     queue.

     Fix this by abstracting out the UDP dequeuing and checksumming into a
     separate function that keeps hammering on skb_recv_udp() until it
     returns -EAGAIN, passing the packets extracted to the remainder of the
     function.

and two potential problems:

 (3) It might be possible, in some circumstances or in the future, for
     packets to be added to the UDP receive queue whilst rxrpc is busy
     consuming them, so the data_ready() handler might get called less
     often than once per packet.

     Allow for this by fully draining the queue on each call as (2).

 (4) If a packet fails the checksum check, the code currently returns after
     discarding the packet without checking for more.

     Allow for this by fully draining the queue on each call as (2).

Fixes: 17926a79320a ("[AF_RXRPC]: Provide secure RxRPC sockets for use by userspace and kernel both")
Signed-off-by: David Howells &lt;dhowells@redhat.com&gt;
Acked-by: Paolo Abeni &lt;pabeni@redhat.com&gt;
</content>
</entry>
<entry>
<title>rxrpc: Fix some missed refs to init_net</title>
<updated>2018-10-05T13:21:59+00:00</updated>
<author>
<name>David Howells</name>
<email>dhowells@redhat.com</email>
</author>
<published>2018-10-05T13:05:34+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=5e33a23ba4b56c109b732d57a0a76558a37d9ec5'/>
<id>5e33a23ba4b56c109b732d57a0a76558a37d9ec5</id>
<content type='text'>
Fix some refs to init_net that should've been changed to the appropriate
network namespace.

Fixes: 2baec2c3f854 ("rxrpc: Support network namespacing")
Signed-off-by: David Howells &lt;dhowells@redhat.com&gt;
Acked-by: Paolo Abeni &lt;pabeni@redhat.com&gt;
</content>
</entry>
<entry>
<title>rxrpc: Make service call handling more robust</title>
<updated>2018-09-28T09:32:49+00:00</updated>
<author>
<name>David Howells</name>
<email>dhowells@redhat.com</email>
</author>
<published>2018-09-27T14:13:09+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=0099dc589bfa7caf6f2608c4cbc1181cfee22b0c'/>
<id>0099dc589bfa7caf6f2608c4cbc1181cfee22b0c</id>
<content type='text'>
Make the following changes to improve the robustness of the code that sets
up a new service call:

 (1) Cache the rxrpc_sock struct obtained in rxrpc_data_ready() to do a
     service ID check and pass that along to rxrpc_new_incoming_call().
     This means that I can remove the check from rxrpc_new_incoming_call()
     without the need to worry about the socket attached to the local
     endpoint getting replaced - which would invalidate the check.

 (2) Cache the rxrpc_peer struct, thereby allowing the peer search to be
     done once.  The peer is passed to rxrpc_new_incoming_call(), thereby
     saving the need to repeat the search.

     This also reduces the possibility of rxrpc_publish_service_conn()
     BUG()'ing due to the detection of a duplicate connection, despite the
     initial search done by rxrpc_find_connection_rcu() having turned up
     nothing.

     This BUG() shouldn't ever get hit since rxrpc_data_ready() *should* be
     non-reentrant and the result of the initial search should still hold
     true, but it has proven possible to hit.

     I *think* this may be due to __rxrpc_lookup_peer_rcu() cutting short
     the iteration over the hash table if it finds a matching peer with a
     zero usage count, but I don't know for sure since it's only ever been
     hit once that I know of.

     Another possibility is that a bug in rxrpc_data_ready() that checked
     the wrong byte in the header for the RXRPC_CLIENT_INITIATED flag
     might've let through a packet that caused a spurious and invalid call
     to be set up.  That is addressed in another patch.

 (3) Fix __rxrpc_lookup_peer_rcu() to skip peer records that have a zero
     usage count rather than stopping and returning not found, just in case
     there's another peer record behind it in the bucket.

 (4) Don't search the peer records in rxrpc_alloc_incoming_call(), but
     rather either use the peer cached in (2) or, if one wasn't found,
     preemptively install a new one.

Fixes: 8496af50eb38 ("rxrpc: Use RCU to access a peer's service connection tree")
Signed-off-by: David Howells &lt;dhowells@redhat.com&gt;
</content>
</entry>
<entry>
<title>rxrpc: Improve up-front incoming packet checking</title>
<updated>2018-09-28T09:32:31+00:00</updated>
<author>
<name>David Howells</name>
<email>dhowells@redhat.com</email>
</author>
<published>2018-09-27T14:13:08+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=403fc2a138457f1071b186786a7589ef7382c8bc'/>
<id>403fc2a138457f1071b186786a7589ef7382c8bc</id>
<content type='text'>
Do more up-front checking on incoming packets to weed out invalid ones and
also ones aimed at services that we don't support.

Whilst we're at it, instead of clearing call and skew when we don't find a
connection, just initialise those variables to zero at the top of the
function.

Signed-off-by: David Howells &lt;dhowells@redhat.com&gt;
</content>
</entry>
</feed>
