<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux.git/kernel/bpf/percpu_freelist.c, branch v4.15</title>
<subtitle>Linux kernel source tree</subtitle>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/'/>
<entry>
<title>bpf: fix lockdep splat</title>
<updated>2017-11-15T10:46:32+00:00</updated>
<author>
<name>Eric Dumazet</name>
<email>edumazet@google.com</email>
</author>
<published>2017-11-15T01:15:50+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=89ad2fa3f043a1e8daae193bcb5fe34d5f8caf28'/>
<id>89ad2fa3f043a1e8daae193bcb5fe34d5f8caf28</id>
<content type='text'>
pcpu_freelist_pop() needs the same lockdep awareness as
pcpu_freelist_populate() to avoid a false positive.

 [ INFO: SOFTIRQ-safe -&gt; SOFTIRQ-unsafe lock order detected ]

 switchto-defaul/12508 [HC0[0]:SC0[6]:HE0:SE0] is trying to acquire:
  (&amp;htab-&gt;buckets[i].lock){......}, at: [&lt;ffffffff9dc099cb&gt;] __htab_percpu_map_update_elem+0x1cb/0x300

 and this task is already holding:
  (dev_queue-&gt;dev-&gt;qdisc_class ?: &amp;qdisc_tx_lock#2){+.-...}, at: [&lt;ffffffff9e135848&gt;] __dev_queue_xmit+0x868/0x1240
 which would create a new lock dependency:
  (dev_queue-&gt;dev-&gt;qdisc_class ?: &amp;qdisc_tx_lock#2){+.-...} -&gt; (&amp;htab-&gt;buckets[i].lock){......}

 but this new dependency connects a SOFTIRQ-irq-safe lock:
  (dev_queue-&gt;dev-&gt;qdisc_class ?: &amp;qdisc_tx_lock#2){+.-...}
 ... which became SOFTIRQ-irq-safe at:
   [&lt;ffffffff9db5931b&gt;] __lock_acquire+0x42b/0x1f10
   [&lt;ffffffff9db5b32c&gt;] lock_acquire+0xbc/0x1b0
   [&lt;ffffffff9da05e38&gt;] _raw_spin_lock+0x38/0x50
   [&lt;ffffffff9e135848&gt;] __dev_queue_xmit+0x868/0x1240
   [&lt;ffffffff9e136240&gt;] dev_queue_xmit+0x10/0x20
   [&lt;ffffffff9e1965d9&gt;] ip_finish_output2+0x439/0x590
   [&lt;ffffffff9e197410&gt;] ip_finish_output+0x150/0x2f0
   [&lt;ffffffff9e19886d&gt;] ip_output+0x7d/0x260
   [&lt;ffffffff9e19789e&gt;] ip_local_out+0x5e/0xe0
   [&lt;ffffffff9e197b25&gt;] ip_queue_xmit+0x205/0x620
   [&lt;ffffffff9e1b8398&gt;] tcp_transmit_skb+0x5a8/0xcb0
   [&lt;ffffffff9e1ba152&gt;] tcp_write_xmit+0x242/0x1070
   [&lt;ffffffff9e1baffc&gt;] __tcp_push_pending_frames+0x3c/0xf0
   [&lt;ffffffff9e1b3472&gt;] tcp_rcv_established+0x312/0x700
   [&lt;ffffffff9e1c1acc&gt;] tcp_v4_do_rcv+0x11c/0x200
   [&lt;ffffffff9e1c3dc2&gt;] tcp_v4_rcv+0xaa2/0xc30
   [&lt;ffffffff9e191107&gt;] ip_local_deliver_finish+0xa7/0x240
   [&lt;ffffffff9e191a36&gt;] ip_local_deliver+0x66/0x200
   [&lt;ffffffff9e19137d&gt;] ip_rcv_finish+0xdd/0x560
   [&lt;ffffffff9e191e65&gt;] ip_rcv+0x295/0x510
   [&lt;ffffffff9e12ff88&gt;] __netif_receive_skb_core+0x988/0x1020
   [&lt;ffffffff9e130641&gt;] __netif_receive_skb+0x21/0x70
   [&lt;ffffffff9e1306ff&gt;] process_backlog+0x6f/0x230
   [&lt;ffffffff9e132129&gt;] net_rx_action+0x229/0x420
   [&lt;ffffffff9da07ee8&gt;] __do_softirq+0xd8/0x43d
   [&lt;ffffffff9e282bcc&gt;] do_softirq_own_stack+0x1c/0x30
   [&lt;ffffffff9dafc2f5&gt;] do_softirq+0x55/0x60
   [&lt;ffffffff9dafc3a8&gt;] __local_bh_enable_ip+0xa8/0xb0
   [&lt;ffffffff9db4c727&gt;] cpu_startup_entry+0x1c7/0x500
   [&lt;ffffffff9daab333&gt;] start_secondary+0x113/0x140

 to a SOFTIRQ-irq-unsafe lock:
  (&amp;head-&gt;lock){+.+...}
 ... which became SOFTIRQ-irq-unsafe at:
 ...  [&lt;ffffffff9db5971f&gt;] __lock_acquire+0x82f/0x1f10
   [&lt;ffffffff9db5b32c&gt;] lock_acquire+0xbc/0x1b0
   [&lt;ffffffff9da05e38&gt;] _raw_spin_lock+0x38/0x50
   [&lt;ffffffff9dc0b7fa&gt;] pcpu_freelist_pop+0x7a/0xb0
   [&lt;ffffffff9dc08b2c&gt;] htab_map_alloc+0x50c/0x5f0
   [&lt;ffffffff9dc00dc5&gt;] SyS_bpf+0x265/0x1200
   [&lt;ffffffff9e28195f&gt;] entry_SYSCALL_64_fastpath+0x12/0x17

 other info that might help us debug this:

 Chain exists of:
   dev_queue-&gt;dev-&gt;qdisc_class ?: &amp;qdisc_tx_lock#2 --&gt; &amp;htab-&gt;buckets[i].lock --&gt; &amp;head-&gt;lock

  Possible interrupt unsafe locking scenario:

        CPU0                    CPU1
        ----                    ----
   lock(&amp;head-&gt;lock);
                                local_irq_disable();
                                lock(dev_queue-&gt;dev-&gt;qdisc_class ?: &amp;qdisc_tx_lock#2);
                                lock(&amp;htab-&gt;buckets[i].lock);
   &lt;Interrupt&gt;
     lock(dev_queue-&gt;dev-&gt;qdisc_class ?: &amp;qdisc_tx_lock#2);

  *** DEADLOCK ***
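
The upstream fix takes the freelist lock only with interrupts disabled, as
pcpu_freelist_populate() already does, so the lock can no longer be held
while a softirq that also wants it runs. A toy userspace model of that
pattern (hypothetical names; the kernel uses raw_spin_lock_irqsave() in C,
not Python) looks like this:

```python
# Toy model of spin_lock_irqsave()/spin_unlock_irqrestore():
# "interrupts" are a flag that is saved, cleared around the
# critical section, and restored afterwards.

irqs_enabled = True

def lock_irqsave(lock):
    global irqs_enabled
    flags = irqs_enabled
    irqs_enabled = False     # nothing can preempt us while we hold the lock
    lock["held"] = True
    return flags

def unlock_irqrestore(lock, flags):
    global irqs_enabled
    lock["held"] = False
    irqs_enabled = flags

head_lock = {"held": False}
freelist = ["obj0", "obj1"]

def freelist_pop():
    # Models the fixed pcpu_freelist_pop(): the lock is taken
    # with irqs off, never with irqs on, so no irq-safe vs
    # irq-unsafe ordering can arise on this lock.
    flags = lock_irqsave(head_lock)
    obj = freelist.pop() if freelist else None
    unlock_irqrestore(head_lock, flags)
    return obj

print(freelist_pop(), irqs_enabled)  # prints: obj1 True
```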

Fixes: e19494edab82 ("bpf: introduce percpu_freelist")
Signed-off-by: Eric Dumazet &lt;edumazet@google.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
pcpu_freelist_pop() needs the same lockdep awareness as
pcpu_freelist_populate() to avoid a false positive.

 [ INFO: SOFTIRQ-safe -&gt; SOFTIRQ-unsafe lock order detected ]

 switchto-defaul/12508 [HC0[0]:SC0[6]:HE0:SE0] is trying to acquire:
  (&amp;htab-&gt;buckets[i].lock){......}, at: [&lt;ffffffff9dc099cb&gt;] __htab_percpu_map_update_elem+0x1cb/0x300

 and this task is already holding:
  (dev_queue-&gt;dev-&gt;qdisc_class ?: &amp;qdisc_tx_lock#2){+.-...}, at: [&lt;ffffffff9e135848&gt;] __dev_queue_xmit+0x868/0x1240
 which would create a new lock dependency:
  (dev_queue-&gt;dev-&gt;qdisc_class ?: &amp;qdisc_tx_lock#2){+.-...} -&gt; (&amp;htab-&gt;buckets[i].lock){......}

 but this new dependency connects a SOFTIRQ-irq-safe lock:
  (dev_queue-&gt;dev-&gt;qdisc_class ?: &amp;qdisc_tx_lock#2){+.-...}
 ... which became SOFTIRQ-irq-safe at:
   [&lt;ffffffff9db5931b&gt;] __lock_acquire+0x42b/0x1f10
   [&lt;ffffffff9db5b32c&gt;] lock_acquire+0xbc/0x1b0
   [&lt;ffffffff9da05e38&gt;] _raw_spin_lock+0x38/0x50
   [&lt;ffffffff9e135848&gt;] __dev_queue_xmit+0x868/0x1240
   [&lt;ffffffff9e136240&gt;] dev_queue_xmit+0x10/0x20
   [&lt;ffffffff9e1965d9&gt;] ip_finish_output2+0x439/0x590
   [&lt;ffffffff9e197410&gt;] ip_finish_output+0x150/0x2f0
   [&lt;ffffffff9e19886d&gt;] ip_output+0x7d/0x260
   [&lt;ffffffff9e19789e&gt;] ip_local_out+0x5e/0xe0
   [&lt;ffffffff9e197b25&gt;] ip_queue_xmit+0x205/0x620
   [&lt;ffffffff9e1b8398&gt;] tcp_transmit_skb+0x5a8/0xcb0
   [&lt;ffffffff9e1ba152&gt;] tcp_write_xmit+0x242/0x1070
   [&lt;ffffffff9e1baffc&gt;] __tcp_push_pending_frames+0x3c/0xf0
   [&lt;ffffffff9e1b3472&gt;] tcp_rcv_established+0x312/0x700
   [&lt;ffffffff9e1c1acc&gt;] tcp_v4_do_rcv+0x11c/0x200
   [&lt;ffffffff9e1c3dc2&gt;] tcp_v4_rcv+0xaa2/0xc30
   [&lt;ffffffff9e191107&gt;] ip_local_deliver_finish+0xa7/0x240
   [&lt;ffffffff9e191a36&gt;] ip_local_deliver+0x66/0x200
   [&lt;ffffffff9e19137d&gt;] ip_rcv_finish+0xdd/0x560
   [&lt;ffffffff9e191e65&gt;] ip_rcv+0x295/0x510
   [&lt;ffffffff9e12ff88&gt;] __netif_receive_skb_core+0x988/0x1020
   [&lt;ffffffff9e130641&gt;] __netif_receive_skb+0x21/0x70
   [&lt;ffffffff9e1306ff&gt;] process_backlog+0x6f/0x230
   [&lt;ffffffff9e132129&gt;] net_rx_action+0x229/0x420
   [&lt;ffffffff9da07ee8&gt;] __do_softirq+0xd8/0x43d
   [&lt;ffffffff9e282bcc&gt;] do_softirq_own_stack+0x1c/0x30
   [&lt;ffffffff9dafc2f5&gt;] do_softirq+0x55/0x60
   [&lt;ffffffff9dafc3a8&gt;] __local_bh_enable_ip+0xa8/0xb0
   [&lt;ffffffff9db4c727&gt;] cpu_startup_entry+0x1c7/0x500
   [&lt;ffffffff9daab333&gt;] start_secondary+0x113/0x140

 to a SOFTIRQ-irq-unsafe lock:
  (&amp;head-&gt;lock){+.+...}
 ... which became SOFTIRQ-irq-unsafe at:
 ...  [&lt;ffffffff9db5971f&gt;] __lock_acquire+0x82f/0x1f10
   [&lt;ffffffff9db5b32c&gt;] lock_acquire+0xbc/0x1b0
   [&lt;ffffffff9da05e38&gt;] _raw_spin_lock+0x38/0x50
   [&lt;ffffffff9dc0b7fa&gt;] pcpu_freelist_pop+0x7a/0xb0
   [&lt;ffffffff9dc08b2c&gt;] htab_map_alloc+0x50c/0x5f0
   [&lt;ffffffff9dc00dc5&gt;] SyS_bpf+0x265/0x1200
   [&lt;ffffffff9e28195f&gt;] entry_SYSCALL_64_fastpath+0x12/0x17

 other info that might help us debug this:

 Chain exists of:
   dev_queue-&gt;dev-&gt;qdisc_class ?: &amp;qdisc_tx_lock#2 --&gt; &amp;htab-&gt;buckets[i].lock --&gt; &amp;head-&gt;lock

  Possible interrupt unsafe locking scenario:

        CPU0                    CPU1
        ----                    ----
   lock(&amp;head-&gt;lock);
                                local_irq_disable();
                                lock(dev_queue-&gt;dev-&gt;qdisc_class ?: &amp;qdisc_tx_lock#2);
                                lock(&amp;htab-&gt;buckets[i].lock);
   &lt;Interrupt&gt;
     lock(dev_queue-&gt;dev-&gt;qdisc_class ?: &amp;qdisc_tx_lock#2);

  *** DEADLOCK ***

Fixes: e19494edab82 ("bpf: introduce percpu_freelist")
Signed-off-by: Eric Dumazet &lt;edumazet@google.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>bpf: introduce percpu_freelist</title>
<updated>2016-03-08T20:28:31+00:00</updated>
<author>
<name>Alexei Starovoitov</name>
<email>ast@fb.com</email>
</author>
<published>2016-03-08T05:57:14+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=e19494edab82f55a633911f25094581891bdc351'/>
<id>e19494edab82f55a633911f25094581891bdc351</id>
<content type='text'>
Introduce a simple percpu_freelist that keeps a single list of elements
spread across per-cpu singly linked lists.

/* push element into the list */
void pcpu_freelist_push(struct pcpu_freelist *, struct pcpu_freelist_node *);

/* pop element from the list */
struct pcpu_freelist_node *pcpu_freelist_pop(struct pcpu_freelist *);

The object is pushed onto the current cpu's list.
Pop first tries to take an object from the current cpu's list;
if that list is empty, it falls back to a neighbour cpu's list.

For the typical bpf program usage pattern the collision rate is very low,
since programs usually push and pop objects on the same cpu.
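
The push/pop scheme can be sketched as a toy model (Python purely for
illustration; names and structure are simplified, and the kernel
implementation protects each per-cpu list with a raw spinlock):

```python
NR_CPUS = 4

class PcpuFreelist:
    """Toy model of percpu_freelist: one LIFO list per cpu."""

    def __init__(self):
        self.heads = [[] for _ in range(NR_CPUS)]

    def push(self, cpu, obj):
        # The object is pushed onto the current cpu's list.
        self.heads[cpu].append(obj)

    def pop(self, cpu):
        # Try the current cpu's list first, then scan neighbour cpus.
        for i in range(NR_CPUS):
            head = self.heads[(cpu + i) % NR_CPUS]
            if head:
                return head.pop()
        return None

s = PcpuFreelist()
s.push(0, "a")
s.push(1, "b")
# cpu 2's own list is empty, so pop falls back to neighbour lists.
print(s.pop(2), s.pop(2), s.pop(2))  # prints: a b None
```

When producers and consumers run on the same cpu, as is typical for bpf
programs, the fallback scan is rarely taken and per-cpu lists rarely collide.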

Signed-off-by: Alexei Starovoitov &lt;ast@kernel.org&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Introduce a simple percpu_freelist that keeps a single list of elements
spread across per-cpu singly linked lists.

/* push element into the list */
void pcpu_freelist_push(struct pcpu_freelist *, struct pcpu_freelist_node *);

/* pop element from the list */
struct pcpu_freelist_node *pcpu_freelist_pop(struct pcpu_freelist *);

The object is pushed onto the current cpu's list.
Pop first tries to take an object from the current cpu's list;
if that list is empty, it falls back to a neighbour cpu's list.

For the typical bpf program usage pattern the collision rate is very low,
since programs usually push and pop objects on the same cpu.

Signed-off-by: Alexei Starovoitov &lt;ast@kernel.org&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</pre>
</div>
</content>
</entry>
</feed>
