<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux-stable.git/drivers/dma, branch linux-3.4.y</title>
<subtitle>Linux kernel stable tree</subtitle>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/'/>
<entry>
<title>dmaengine: mv_xor: bug fix for race condition in descriptors cleanup</title>
<updated>2015-10-22T01:20:04+00:00</updated>
<author>
<name>Lior Amsalem</name>
<email>alior@marvell.com</email>
</author>
<published>2015-05-26T13:07:32+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=fbd2f7f70bc3c4a793f7133d6d4b800a054da770'/>
<id>fbd2f7f70bc3c4a793f7133d6d4b800a054da770</id>
<content type='text'>
commit 9136291f1dbc1d4d1cacd2840fb35f4f3ce16c46 upstream.

This patch fixes a bug in the XOR driver where the cleanup function can be
called and free descriptors that have never been processed by the engine
(which results in data errors).

The cleanup function now frees descriptors based on the ownership bit in
the descriptors.
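
The descriptor-ownership check can be sketched as follows (a minimal
illustration with hypothetical names; the real mv_xor descriptor layout
differs):

```c
/* Sketch: reclaim only descriptors the engine has handed back to the
 * CPU, as indicated by the ownership bit. All names are hypothetical
 * stand-ins, not the actual mv_xor driver structures. */
struct xor_desc {
    unsigned int status; /* bit 0: 1 = owned by CPU, 0 = owned by engine */
    int reclaimed;
};

/* Returns 1 if the engine has finished with this descriptor. */
static int desc_owned_by_cpu(struct xor_desc *d)
{
    return d->status % 2; /* test the ownership bit (bit 0) */
}

/* Cleanup: free a descriptor only after checking ownership. */
static void cleanup_desc(struct xor_desc *d)
{
    if (desc_owned_by_cpu(d))
        d->reclaimed = 1; /* stand-in for moving it to the free list */
}
```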

Fixes: ff7b04796d98 ("dmaengine: DMA engine driver for Marvell XOR engine")
Signed-off-by: Lior Amsalem &lt;alior@marvell.com&gt;
Signed-off-by: Maxime Ripard &lt;maxime.ripard@free-electrons.com&gt;
Reviewed-by: Ofer Heifetz &lt;oferh@marvell.com&gt;
Signed-off-by: Vinod Koul &lt;vinod.koul@intel.com&gt;
Signed-off-by: Zefan Li &lt;lizefan@huawei.com&gt;
</content>
</entry>
<entry>
<title>dma: ste_dma40: don't dereference freed descriptor</title>
<updated>2014-03-11T23:10:02+00:00</updated>
<author>
<name>Linus Walleij</name>
<email>linus.walleij@linaro.org</email>
</author>
<published>2014-02-13T09:39:01+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=8d8e4839b5457e20e371a5f7485ce7855c7870c9'/>
<id>8d8e4839b5457e20e371a5f7485ce7855c7870c9</id>
<content type='text'>
commit e9baa9d9d520fb0e24cca671e430689de2d4a4b2 upstream.

It appears that in the DMA40 driver the DMA tasklet will very
often dereference memory for a descriptor that was just freed from
the DMA40 slab. Nothing bad happens because no other part of the
driver has yet had a chance to claim this memory, but it is still
wrong to dereference freed memory, so let's read the flag into a
bool variable before the descriptor is freed.
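
The fix pattern can be sketched like this (illustrative names only, not
the actual ste_dma40 code):

```c
/* Sketch: copy the flag into a local variable while the descriptor is
 * still valid, free the descriptor, then use only the saved copy.
 * The struct, the flag, and the free stand-in are all hypothetical. */
struct d40_desc_stub {
    int cyclic; /* the flag the tasklet needs after completion */
};

/* Stand-in for returning the descriptor to the DMA40 slab. */
static void desc_free_stub(struct d40_desc_stub *d)
{
    d->cyclic = -1; /* poison: simulates memory no longer being ours */
}

static int complete_desc(struct d40_desc_stub *d)
{
    int callback_active = d->cyclic; /* 1. save the flag first */

    desc_free_stub(d);               /* 2. then free the descriptor */

    return callback_active;          /* 3. never touch d again */
}
```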

Reported-by: Dan Carpenter &lt;dan.carpenter@oracle.com&gt;
Signed-off-by: Linus Walleij &lt;linus.walleij@linaro.org&gt;
Signed-off-by: Vinod Koul &lt;vinod.koul@intel.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>net_dma: mark broken</title>
<updated>2014-01-08T17:42:11+00:00</updated>
<author>
<name>Dan Williams</name>
<email>dan.j.williams@intel.com</email>
</author>
<published>2013-12-17T18:09:32+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=628706b739d6fdf0942030c75e301234f0c72d52'/>
<id>628706b739d6fdf0942030c75e301234f0c72d52</id>
<content type='text'>
commit 77873803363c9e831fc1d1e6895c084279090c22 upstream.

net_dma can cause data to be copied to a stale mapping if a
copy-on-write fault occurs during dma.  The application sees missing
data.

The following trace is triggered by modifying the kernel to WARN if it
ever triggers copy-on-write on a page that is undergoing dma:

 WARNING: CPU: 24 PID: 2529 at lib/dma-debug.c:485 debug_dma_assert_idle+0xd2/0x120()
 ioatdma 0000:00:04.0: DMA-API: cpu touching an active dma mapped page [pfn=0x16bcd9]
 Modules linked in: iTCO_wdt iTCO_vendor_support ioatdma lpc_ich pcspkr dca
 CPU: 24 PID: 2529 Comm: linbug Tainted: G        W    3.13.0-rc1+ #353
  00000000000001e5 ffff88016f45f688 ffffffff81751041 ffff88017ab0ef70
  ffff88016f45f6d8 ffff88016f45f6c8 ffffffff8104ed9c ffffffff810f3646
  ffff8801768f4840 0000000000000282 ffff88016f6cca10 00007fa2bb699349
 Call Trace:
  [&lt;ffffffff81751041&gt;] dump_stack+0x46/0x58
  [&lt;ffffffff8104ed9c&gt;] warn_slowpath_common+0x8c/0xc0
  [&lt;ffffffff810f3646&gt;] ? ftrace_pid_func+0x26/0x30
  [&lt;ffffffff8104ee86&gt;] warn_slowpath_fmt+0x46/0x50
  [&lt;ffffffff8139c062&gt;] debug_dma_assert_idle+0xd2/0x120
  [&lt;ffffffff81154a40&gt;] do_wp_page+0xd0/0x790
  [&lt;ffffffff811582ac&gt;] handle_mm_fault+0x51c/0xde0
  [&lt;ffffffff813830b9&gt;] ? copy_user_enhanced_fast_string+0x9/0x20
  [&lt;ffffffff8175fc2c&gt;] __do_page_fault+0x19c/0x530
  [&lt;ffffffff8175c196&gt;] ? _raw_spin_lock_bh+0x16/0x40
  [&lt;ffffffff810f3539&gt;] ? trace_clock_local+0x9/0x10
  [&lt;ffffffff810fa1f4&gt;] ? rb_reserve_next_event+0x64/0x310
  [&lt;ffffffffa0014c00&gt;] ? ioat2_dma_prep_memcpy_lock+0x60/0x130 [ioatdma]
  [&lt;ffffffff8175ffce&gt;] do_page_fault+0xe/0x10
  [&lt;ffffffff8175c862&gt;] page_fault+0x22/0x30
  [&lt;ffffffff81643991&gt;] ? __kfree_skb+0x51/0xd0
  [&lt;ffffffff813830b9&gt;] ? copy_user_enhanced_fast_string+0x9/0x20
  [&lt;ffffffff81388ea2&gt;] ? memcpy_toiovec+0x52/0xa0
  [&lt;ffffffff8164770f&gt;] skb_copy_datagram_iovec+0x5f/0x2a0
  [&lt;ffffffff8169d0f4&gt;] tcp_rcv_established+0x674/0x7f0
  [&lt;ffffffff816a68c5&gt;] tcp_v4_do_rcv+0x2e5/0x4a0
  [..]
 ---[ end trace e30e3b01191b7617 ]---
 Mapped at:
  [&lt;ffffffff8139c169&gt;] debug_dma_map_page+0xb9/0x160
  [&lt;ffffffff8142bf47&gt;] dma_async_memcpy_pg_to_pg+0x127/0x210
  [&lt;ffffffff8142cce9&gt;] dma_memcpy_pg_to_iovec+0x119/0x1f0
  [&lt;ffffffff81669d3c&gt;] dma_skb_copy_datagram_iovec+0x11c/0x2b0
  [&lt;ffffffff8169d1ca&gt;] tcp_rcv_established+0x74a/0x7f0:

...the problem is that the receive path falls back to cpu-copy in
several locations and this trace is just one of the areas.  A few
options were considered to fix this:

1/ sync all dma whenever a cpu copy branch is taken

2/ modify the page fault handler to hold off while dma is in-flight

Option 1 adds yet more cpu overhead to an "offload" that struggles to compete
with cpu-copy.  Option 2 adds checks for behavior that is already documented as
broken when using get_user_pages().  At a minimum a debug mode is warranted to
catch and flag these violations of the dma-api vs get_user_pages().
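
Option 1 above can be sketched as follows (all names are illustrative
stand-ins, not the net_dma or dmaengine API):

```c
/* Sketch of option 1: before any cpu-copy fallback, wait until the
 * engine has completed everything issued on the channel, so the CPU
 * never touches a page that is still under dma. */
struct chan_stub {
    unsigned long issued;    /* descriptors handed to the engine */
    unsigned long completed; /* descriptors the engine has finished */
};

/* Stand-in for a sync-wait helper: poll until the engine catches up. */
static void sync_all_dma(struct chan_stub *ch)
{
    while (ch->completed != ch->issued)
        ch->completed++; /* stand-in for reading hardware progress */
}

/* The cpu-copy fallback syncs first; only then is a memcpy safe. */
static int cpu_copy_fallback(struct chan_stub *ch)
{
    sync_all_dma(ch);
    return ch->completed == ch->issued;
}
```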

Thanks to David for his reproducer.

Cc: Dave Jiang &lt;dave.jiang@intel.com&gt;
Cc: Vinod Koul &lt;vinod.koul@intel.com&gt;
Cc: Alexander Duyck &lt;alexander.h.duyck@intel.com&gt;
Reported-by: David Whipple &lt;whipple@securedatainnovations.ch&gt;
Acked-by: David S. Miller &lt;davem@davemloft.net&gt;
Signed-off-by: Dan Williams &lt;dan.j.williams@intel.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>dmaengine: imx-dma: fix slow path issue in prep_dma_cyclic</title>
<updated>2013-10-13T22:42:49+00:00</updated>
<author>
<name>Michael Grzeschik</name>
<email>m.grzeschik@pengutronix.de</email>
</author>
<published>2013-09-17T13:56:06+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=d11fb4bbd90fd424834f51f8f85f6df51d76e5d9'/>
<id>d11fb4bbd90fd424834f51f8f85f6df51d76e5d9</id>
<content type='text'>
commit edc530fe7ee5a562680615d2e7cd205879c751a7 upstream.

When the sound layer prepares cyclic DMA buffers, it triggers the
following lockdep trace: the leading snd_pcm_action_single gets called
with read_lock_irq held. To fix this, change the kcalloc call from
GFP_KERNEL to GFP_ATOMIC.
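
The constraint can be sketched like this (the flags, the kcalloc
stand-in, and the scatterlist struct are hypothetical, not the
kernel's):

```c
/* Sketch: in atomic context (lock held, IRQs off) an allocation must
 * use GFP_ATOMIC, since GFP_KERNEL may sleep. */
typedef unsigned int gfp_t;
enum { GFP_KERNEL = 0, GFP_ATOMIC = 1 };

static unsigned char pool[4096]; /* stand-in for a non-sleeping pool */
static unsigned long pool_used;

/* kcalloc stand-in: with GFP_ATOMIC it must never sleep, so it can
 * only carve from a pre-reserved pool (or fail). */
static void *kcalloc_stub(unsigned long n, unsigned long size, gfp_t flags)
{
    unsigned long bytes = n * size;

    if (flags != GFP_ATOMIC)
        return 0; /* would be allowed to sleep; not usable here */
    if (pool_used + bytes > sizeof(pool))
        return 0;
    pool_used += bytes;
    return pool + pool_used - bytes;
}

/* Called from prep_dma_cyclic with a lock held: GFP_ATOMIC only. */
static void *alloc_sg(unsigned long periods, unsigned long entry_size)
{
    return kcalloc_stub(periods, entry_size, GFP_ATOMIC);
}
```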

WARNING: at kernel/lockdep.c:2740 lockdep_trace_alloc+0xcc/0x114()
DEBUG_LOCKS_WARN_ON(irqs_disabled_flags(flags))
Modules linked in:
CPU: 0 PID: 832 Comm: aplay Not tainted 3.11.0-20130823+ #903
Backtrace:
[&lt;c000b98c&gt;] (dump_backtrace+0x0/0x10c) from [&lt;c000bb28&gt;] (show_stack+0x18/0x1c)
 r6:c004c090 r5:00000009 r4:c2e0bd18 r3:00404000
[&lt;c000bb10&gt;] (show_stack+0x0/0x1c) from [&lt;c02f397c&gt;] (dump_stack+0x20/0x28)
[&lt;c02f395c&gt;] (dump_stack+0x0/0x28) from [&lt;c001531c&gt;] (warn_slowpath_common+0x54/0x70)
[&lt;c00152c8&gt;] (warn_slowpath_common+0x0/0x70) from [&lt;c00153dc&gt;] (warn_slowpath_fmt+0x38/0x40)
 r8:00004000 r7:a3b90000 r6:000080d0 r5:60000093 r4:c2e0a000 r3:00000009
[&lt;c00153a4&gt;] (warn_slowpath_fmt+0x0/0x40) from [&lt;c004c090&gt;] (lockdep_trace_alloc+0xcc/0x114)
 r3:c03955d8 r2:c03907db
[&lt;c004bfc4&gt;] (lockdep_trace_alloc+0x0/0x114) from [&lt;c008f16c&gt;] (__kmalloc+0x34/0x118)
 r6:000080d0 r5:c3800120 r4:000080d0 r3:c040a0f8
[&lt;c008f138&gt;] (__kmalloc+0x0/0x118) from [&lt;c019c95c&gt;] (imxdma_prep_dma_cyclic+0x64/0x168)
 r7:a3b90000 r6:00000004 r5:c39d8420 r4:c3847150
[&lt;c019c8f8&gt;] (imxdma_prep_dma_cyclic+0x0/0x168) from [&lt;c024618c&gt;] (snd_dmaengine_pcm_trigger+0xa8/0x160)
[&lt;c02460e4&gt;] (snd_dmaengine_pcm_trigger+0x0/0x160) from [&lt;c0241fa8&gt;] (soc_pcm_trigger+0x90/0xb4)
 r8:c058c7b0 r7:c3b8140c r6:c39da560 r5:00000001 r4:c3b81000
[&lt;c0241f18&gt;] (soc_pcm_trigger+0x0/0xb4) from [&lt;c022ece4&gt;] (snd_pcm_do_start+0x2c/0x38)
 r7:00000000 r6:00000003 r5:c058c7b0 r4:c3b81000
[&lt;c022ecb8&gt;] (snd_pcm_do_start+0x0/0x38) from [&lt;c022e958&gt;] (snd_pcm_action_single+0x40/0x6c)
[&lt;c022e918&gt;] (snd_pcm_action_single+0x0/0x6c) from [&lt;c022ea64&gt;] (snd_pcm_action_lock_irq+0x7c/0x9c)
 r7:00000003 r6:c3b810f0 r5:c3b810f0 r4:c3b81000
[&lt;c022e9e8&gt;] (snd_pcm_action_lock_irq+0x0/0x9c) from [&lt;c023009c&gt;] (snd_pcm_common_ioctl1+0x7f8/0xfd0)
 r8:c3b7f888 r7:005407b8 r6:c2c991c0 r5:c3b81000 r4:c3b81000 r3:00004142
[&lt;c022f8a4&gt;] (snd_pcm_common_ioctl1+0x0/0xfd0) from [&lt;c023117c&gt;] (snd_pcm_playback_ioctl1+0x464/0x488)
[&lt;c0230d18&gt;] (snd_pcm_playback_ioctl1+0x0/0x488) from [&lt;c02311d4&gt;] (snd_pcm_playback_ioctl+0x34/0x40)
 r8:c3b7f888 r7:00004142 r6:00000004 r5:c2c991c0 r4:005407b8
[&lt;c02311a0&gt;] (snd_pcm_playback_ioctl+0x0/0x40) from [&lt;c00a14a4&gt;] (vfs_ioctl+0x30/0x44)
[&lt;c00a1474&gt;] (vfs_ioctl+0x0/0x44) from [&lt;c00a1fe8&gt;] (do_vfs_ioctl+0x55c/0x5c0)
[&lt;c00a1a8c&gt;] (do_vfs_ioctl+0x0/0x5c0) from [&lt;c00a208c&gt;] (SyS_ioctl+0x40/0x68)
[&lt;c00a204c&gt;] (SyS_ioctl+0x0/0x68) from [&lt;c0009380&gt;] (ret_fast_syscall+0x0/0x44)
 r8:c0009544 r7:00000036 r6:bedeaa58 r5:00000000 r4:000000c0

Signed-off-by: Michael Grzeschik &lt;m.grzeschik@pengutronix.de&gt;
Signed-off-by: Vinod Koul &lt;vinod.koul@intel.com&gt;
Cc: Jonghwan Choi &lt;jhbird.choi@samsung.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>dmaengine: imx-dma: fix callback path in tasklet</title>
<updated>2013-10-13T22:42:49+00:00</updated>
<author>
<name>Michael Grzeschik</name>
<email>m.grzeschik@pengutronix.de</email>
</author>
<published>2013-09-17T13:56:08+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=cd8ccd534cf8782116049336a31b2476b9049ed2'/>
<id>cd8ccd534cf8782116049336a31b2476b9049ed2</id>
<content type='text'>
commit fcaaba6c7136fe47e5a13352f99a64b019b6d2c5 upstream.

We need to free the ld_active list head before jumping into the callback
routine. Otherwise the callback could run into issue_pending and change
the ld_active list head we are just about to free, leaving the channel
list in a corrupted and undefined state.
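
The required ordering can be sketched as follows (illustrative list and
struct names, not the imx-dma driver's):

```c
/* Sketch: detach the finished descriptors from ld_active before the
 * callback runs, because the callback may call issue_pending() and
 * modify ld_active. Names are hypothetical stand-ins. */
struct node {
    struct node *next;
};

struct chan {
    struct node *ld_active; /* in-flight descriptor list */
    struct node *ld_free;   /* reclaimed descriptor list */
    void (*callback)(struct chan *c);
};

static void tasklet_complete(struct chan *c)
{
    struct node *done = c->ld_active;

    /* 1. Move the finished list head to the free list first... */
    c->ld_active = 0;
    if (done) {
        done->next = c->ld_free;
        c->ld_free = done;
    }

    /* 2. ...only then invoke the callback, which may repopulate
     * ld_active without corrupting what we just reclaimed. */
    if (c->callback)
        c->callback(c);
}
```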

Signed-off-by: Michael Grzeschik &lt;m.grzeschik@pengutronix.de&gt;
Signed-off-by: Vinod Koul &lt;vinod.koul@intel.com&gt;
Cc: Jonghwan Choi &lt;jhbird.choi@samsung.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>dmaengine: imx-dma: fix lockdep issue between irqhandler and tasklet</title>
<updated>2013-10-13T22:42:49+00:00</updated>
<author>
<name>Michael Grzeschik</name>
<email>m.grzeschik@pengutronix.de</email>
</author>
<published>2013-09-17T13:56:07+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=218118c73db0a1b9aaece4c9ae697745218c3aa5'/>
<id>218118c73db0a1b9aaece4c9ae697745218c3aa5</id>
<content type='text'>
commit 5a276fa6bdf82fd442046969603968c83626ce0b upstream.

The tasklet and irqhandler use spin_lock while other routines use
spin_lock_irqsave/restore. This leads to the lockdep issues described
below. This patch changes the code to use spin_lock_irqsave/restore in
both code paths.

As imxdma_xfer_desc always gets called with the spin_lock_irqsave lock
held, this patch also removes the redundant call inside the routine to
avoid double locking.
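
The consistent-locking pattern can be sketched as follows (a stand-in
lock type and helpers, not the kernel's spinlock implementation):

```c
/* Sketch: the tasklet takes the lock with the irqsave variant, matching
 * the other code paths, so an IRQ handler taking the same lock cannot
 * interrupt the critical section. All names are hypothetical. */
typedef struct { int locked; int irqs_off; } imx_lock_t;

/* Stand-in for spin_lock_irqsave(). */
static unsigned long lock_irqsave(imx_lock_t *l)
{
    unsigned long flags = l->irqs_off; /* save current IRQ state */
    l->irqs_off = 1;                   /* disable interrupts */
    l->locked = 1;
    return flags;
}

/* Stand-in for spin_unlock_irqrestore(). */
static void unlock_irqrestore(imx_lock_t *l, unsigned long flags)
{
    l->locked = 0;
    l->irqs_off = (int)flags;          /* restore saved IRQ state */
}

/* Tasklet body: the critical section always runs with IRQs off. */
static int tasklet_body(imx_lock_t *l)
{
    unsigned long flags = lock_irqsave(l);
    int safe = l->irqs_off;            /* 1 while the lock is held */
    unlock_irqrestore(l, flags);
    return safe;
}
```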

[  403.358162] =================================
[  403.362549] [ INFO: inconsistent lock state ]
[  403.366945] 3.10.0-20130823+ #904 Not tainted
[  403.371331] ---------------------------------
[  403.375721] inconsistent {IN-HARDIRQ-W} -&gt; {HARDIRQ-ON-W} usage.
[  403.381769] swapper/0 [HC0[0]:SC1[1]:HE1:SE0] takes:
[  403.386762]  (&amp;(&amp;imxdma-&gt;lock)-&gt;rlock){?.-...}, at: [&lt;c019d77c&gt;] imxdma_tasklet+0x20/0x134
[  403.395201] {IN-HARDIRQ-W} state was registered at:
[  403.400108]   [&lt;c004b264&gt;] mark_lock+0x2a0/0x6b4
[  403.404798]   [&lt;c004d7c8&gt;] __lock_acquire+0x650/0x1a64
[  403.410004]   [&lt;c004f15c&gt;] lock_acquire+0x94/0xa8
[  403.414773]   [&lt;c02f74e4&gt;] _raw_spin_lock+0x54/0x8c
[  403.419720]   [&lt;c019d094&gt;] dma_irq_handler+0x78/0x254
[  403.424845]   [&lt;c0061124&gt;] handle_irq_event_percpu+0x38/0x1b4
[  403.430670]   [&lt;c00612e4&gt;] handle_irq_event+0x44/0x64
[  403.435789]   [&lt;c0063a70&gt;] handle_level_irq+0xd8/0xf0
[  403.440903]   [&lt;c0060a20&gt;] generic_handle_irq+0x28/0x38
[  403.446194]   [&lt;c0009cc4&gt;] handle_IRQ+0x68/0x8c
[  403.450789]   [&lt;c0008714&gt;] avic_handle_irq+0x3c/0x48
[  403.455811]   [&lt;c0008f84&gt;] __irq_svc+0x44/0x74
[  403.460314]   [&lt;c0040b04&gt;] cpu_startup_entry+0x88/0xf4
[  403.465525]   [&lt;c02f00d0&gt;] rest_init+0xb8/0xe0
[  403.470045]   [&lt;c03e07dc&gt;] start_kernel+0x28c/0x2d4
[  403.474986]   [&lt;a0008040&gt;] 0xa0008040
[  403.478709] irq event stamp: 50854
[  403.482140] hardirqs last  enabled at (50854): [&lt;c001c6b8&gt;] tasklet_action+0x38/0xdc
[  403.489954] hardirqs last disabled at (50853): [&lt;c001c6a0&gt;] tasklet_action+0x20/0xdc
[  403.497761] softirqs last  enabled at (50850): [&lt;c001bc64&gt;] _local_bh_enable+0x14/0x18
[  403.505741] softirqs last disabled at (50851): [&lt;c001c268&gt;] irq_exit+0x88/0xdc
[  403.513026]
[  403.513026] other info that might help us debug this:
[  403.519593]  Possible unsafe locking scenario:
[  403.519593]
[  403.525548]        CPU0
[  403.528020]        ----
[  403.530491]   lock(&amp;(&amp;imxdma-&gt;lock)-&gt;rlock);
[  403.534828]   &lt;Interrupt&gt;
[  403.537474]     lock(&amp;(&amp;imxdma-&gt;lock)-&gt;rlock);
[  403.541983]
[  403.541983]  *** DEADLOCK ***
[  403.541983]
[  403.547951] no locks held by swapper/0.
[  403.551813]
[  403.551813] stack backtrace:
[  403.556222] CPU: 0 PID: 0 Comm: swapper Not tainted 3.10.0-20130823+ #904
[  403.563039] Backtrace:
[  403.565581] [&lt;c000b98c&gt;] (dump_backtrace+0x0/0x10c) from [&lt;c000bb28&gt;] (show_stack+0x18/0x1c)
[  403.574054]  r6:00000000 r5:c05c51d8 r4:c040bd58 r3:00200000
[  403.579872] [&lt;c000bb10&gt;] (show_stack+0x0/0x1c) from [&lt;c02f398c&gt;] (dump_stack+0x20/0x28)
[  403.587955] [&lt;c02f396c&gt;] (dump_stack+0x0/0x28) from [&lt;c02f29c8&gt;] (print_usage_bug.part.28+0x224/0x28c)
[  403.597340] [&lt;c02f27a4&gt;] (print_usage_bug.part.28+0x0/0x28c) from [&lt;c004b404&gt;] (mark_lock+0x440/0x6b4)
[  403.606682]  r8:c004a41c r7:00000000 r6:c040bd58 r5:c040c040 r4:00000002
[  403.613566] [&lt;c004afc4&gt;] (mark_lock+0x0/0x6b4) from [&lt;c004d844&gt;] (__lock_acquire+0x6cc/0x1a64)
[  403.622244] [&lt;c004d178&gt;] (__lock_acquire+0x0/0x1a64) from [&lt;c004f15c&gt;] (lock_acquire+0x94/0xa8)
[  403.631010] [&lt;c004f0c8&gt;] (lock_acquire+0x0/0xa8) from [&lt;c02f74e4&gt;] (_raw_spin_lock+0x54/0x8c)
[  403.639614] [&lt;c02f7490&gt;] (_raw_spin_lock+0x0/0x8c) from [&lt;c019d77c&gt;] (imxdma_tasklet+0x20/0x134)
[  403.648434]  r6:c3847010 r5:c040e890 r4:c38470d4
[  403.653194] [&lt;c019d75c&gt;] (imxdma_tasklet+0x0/0x134) from [&lt;c001c70c&gt;] (tasklet_action+0x8c/0xdc)
[  403.662013]  r8:c0599160 r7:00000000 r6:00000000 r5:c040e890 r4:c3847114 r3:c019d75c
[  403.670042] [&lt;c001c680&gt;] (tasklet_action+0x0/0xdc) from [&lt;c001bd4c&gt;] (__do_softirq+0xe4/0x1f0)
[  403.678687]  r7:00000101 r6:c0402000 r5:c059919c r4:00000001
[  403.684498] [&lt;c001bc68&gt;] (__do_softirq+0x0/0x1f0) from [&lt;c001c268&gt;] (irq_exit+0x88/0xdc)
[  403.692652] [&lt;c001c1e0&gt;] (irq_exit+0x0/0xdc) from [&lt;c0009cc8&gt;] (handle_IRQ+0x6c/0x8c)
[  403.700514]  r4:00000030 r3:00000110
[  403.704192] [&lt;c0009c5c&gt;] (handle_IRQ+0x0/0x8c) from [&lt;c0008714&gt;] (avic_handle_irq+0x3c/0x48)
[  403.712664]  r5:c0403f28 r4:c0593ebc
[  403.716343] [&lt;c00086d8&gt;] (avic_handle_irq+0x0/0x48) from [&lt;c0008f84&gt;] (__irq_svc+0x44/0x74)
[  403.724733] Exception stack(0xc0403f28 to 0xc0403f70)
[  403.729841] 3f20:                   00000001 00000004 00000000 20000013 c0402000 c04104a8
[  403.738078] 3f40: 00000002 c0b69620 a0004000 41069264 a03fb5f4 c0403f7c c0403f40 c0403f70
[  403.746301] 3f60: c004b92c c0009e74 20000013 ffffffff
[  403.751383]  r6:ffffffff r5:20000013 r4:c0009e74 r3:c004b92c
[  403.757210] [&lt;c0009e30&gt;] (arch_cpu_idle+0x0/0x4c) from [&lt;c0040b04&gt;] (cpu_startup_entry+0x88/0xf4)
[  403.766161] [&lt;c0040a7c&gt;] (cpu_startup_entry+0x0/0xf4) from [&lt;c02f00d0&gt;] (rest_init+0xb8/0xe0)
[  403.774753] [&lt;c02f0018&gt;] (rest_init+0x0/0xe0) from [&lt;c03e07dc&gt;] (start_kernel+0x28c/0x2d4)
[  403.783051]  r6:c03fc484 r5:ffffffff r4:c040a0e0
[  403.787797] [&lt;c03e0550&gt;] (start_kernel+0x0/0x2d4) from [&lt;a0008040&gt;] (0xa0008040)

Signed-off-by: Michael Grzeschik &lt;m.grzeschik@pengutronix.de&gt;
Signed-off-by: Vinod Koul &lt;vinod.koul@intel.com&gt;
Cc: Jonghwan Choi &lt;jhbird.choi@samsung.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>drivers/dma/pl330.c: fix locking in pl330_free_chan_resources()</title>
<updated>2013-07-22T01:19:02+00:00</updated>
<author>
<name>Bartlomiej Zolnierkiewicz</name>
<email>b.zolnierkie@samsung.com</email>
</author>
<published>2013-07-03T22:00:43+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=3b88a0664f4f9c14c82bc8d35319ec48603be97f'/>
<id>3b88a0664f4f9c14c82bc8d35319ec48603be97f</id>
<content type='text'>
commit da331ba8e9c5de72a27e50f71105395bba6eebe0 upstream.

tasklet_kill() may sleep, so call it before taking pch-&gt;lock.

Fixes the following lockup:

  BUG: scheduling while atomic: cat/2383/0x00000002
  Modules linked in:
    unwind_backtrace+0x0/0xfc
    __schedule_bug+0x4c/0x58
    __schedule+0x690/0x6e0
    sys_sched_yield+0x70/0x78
    tasklet_kill+0x34/0x8c
    pl330_free_chan_resources+0x24/0x88
    dma_chan_put+0x4c/0x50
  [...]
  BUG: spinlock lockup suspected on CPU#0, swapper/0/0
   lock: 0xe52aa04c, .magic: dead4ead, .owner: cat/2383, .owner_cpu: 1
    unwind_backtrace+0x0/0xfc
    do_raw_spin_lock+0x194/0x204
    _raw_spin_lock_irqsave+0x20/0x28
    pl330_tasklet+0x2c/0x5a8
    tasklet_action+0xfc/0x114
    __do_softirq+0xe4/0x19c
    irq_exit+0x98/0x9c
    handle_IPI+0x124/0x16c
    gic_handle_irq+0x64/0x68
    __irq_svc+0x40/0x70
    cpuidle_wrap_enter+0x4c/0xa0
    cpuidle_enter_state+0x18/0x68
    cpuidle_idle_call+0xac/0xe0
    cpu_idle+0xac/0xf0

Signed-off-by: Bartlomiej Zolnierkiewicz &lt;b.zolnierkie@samsung.com&gt;
Signed-off-by: Kyungmin Park &lt;kyungmin.park@samsung.com&gt;
Acked-by: Jassi Brar &lt;jassisinghbrar@gmail.com&gt;
Cc: Vinod Koul &lt;vinod.koul@linux.intel.com&gt;
Cc: Tomasz Figa &lt;t.figa@samsung.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
commit da331ba8e9c5de72a27e50f71105395bba6eebe0 upstream.

tasklet_kill() may sleep, so call it before taking pch-&gt;lock.

Fixes the following lockup:

  BUG: scheduling while atomic: cat/2383/0x00000002
  Modules linked in:
    unwind_backtrace+0x0/0xfc
    __schedule_bug+0x4c/0x58
    __schedule+0x690/0x6e0
    sys_sched_yield+0x70/0x78
    tasklet_kill+0x34/0x8c
    pl330_free_chan_resources+0x24/0x88
    dma_chan_put+0x4c/0x50
  [...]
  BUG: spinlock lockup suspected on CPU#0, swapper/0/0
   lock: 0xe52aa04c, .magic: dead4ead, .owner: cat/2383, .owner_cpu: 1
    unwind_backtrace+0x0/0xfc
    do_raw_spin_lock+0x194/0x204
    _raw_spin_lock_irqsave+0x20/0x28
    pl330_tasklet+0x2c/0x5a8
    tasklet_action+0xfc/0x114
    __do_softirq+0xe4/0x19c
    irq_exit+0x98/0x9c
    handle_IPI+0x124/0x16c
    gic_handle_irq+0x64/0x68
    __irq_svc+0x40/0x70
    cpuidle_wrap_enter+0x4c/0xa0
    cpuidle_enter_state+0x18/0x68
    cpuidle_idle_call+0xac/0xe0
    cpu_idle+0xac/0xf0

Signed-off-by: Bartlomiej Zolnierkiewicz &lt;b.zolnierkie@samsung.com&gt;
Signed-off-by: Kyungmin Park &lt;kyungmin.park@samsung.com&gt;
Acked-by: Jassi Brar &lt;jassisinghbrar@gmail.com&gt;
Cc: Vinod Koul &lt;vinod.koul@linux.intel.com&gt;
Cc: Tomasz Figa &lt;t.figa@samsung.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</pre>
</div>
</content>
</entry>
<entry>
<title>pch_dma: Use GFP_ATOMIC because called from interrupt context</title>
<updated>2013-05-19T17:54:48+00:00</updated>
<author>
<name>Tomoya MORINAGA</name>
<email>tomoya.rohm@gmail.com</email>
</author>
<published>2013-02-12T02:25:33+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=dd77cf8cc7aca5902e759c26049730c151bc885f'/>
<id>dd77cf8cc7aca5902e759c26049730c151bc885f</id>
<content type='text'>
commit 5c1ef59168c485318e40ba485c1eba57d81d0faa upstream.

pdc_desc_get() is called from pd_prep_slave_sg(), which can itself be
called from interrupt context (e.g. by the UART driver "pch_uart.c").
In practice this triggered a kernel error message, so GFP_ATOMIC must
be used instead of GFP_NOIO.

Signed-off-by: Tomoya MORINAGA &lt;tomoya.rohm@gmail.com&gt;
Signed-off-by: Vinod Koul &lt;vinod.koul@intel.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
commit 5c1ef59168c485318e40ba485c1eba57d81d0faa upstream.

pdc_desc_get() is called from pd_prep_slave_sg(), which can itself be
called from interrupt context (e.g. by the UART driver "pch_uart.c").
In practice this triggered a kernel error message, so GFP_ATOMIC must
be used instead of GFP_NOIO.

Signed-off-by: Tomoya MORINAGA &lt;tomoya.rohm@gmail.com&gt;
Signed-off-by: Vinod Koul &lt;vinod.koul@intel.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</pre>
</div>
</content>
</entry>
<entry>
<title>ioat: Fix DMA memory sync direction correct flag</title>
<updated>2013-01-28T04:47:44+00:00</updated>
<author>
<name>Shuah Khan</name>
<email>shuah.khan@hp.com</email>
</author>
<published>2012-10-25T16:22:32+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=8f3933a1e549a54b4c1b743b40cfd196c8bf0035'/>
<id>8f3933a1e549a54b4c1b743b40cfd196c8bf0035</id>
<content type='text'>
commit ac4989874af56435c308bdde9ad9c837a26f8b23 upstream.

ioat does a DMA memory sync with the DMA_TO_DEVICE direction on a buffer
mapped for DMA_FROM_DEVICE, resulting in the following warning from the DMA
debug code. Fix the dma_sync_single_for_device() call to use the correct
direction.

[  226.288947] WARNING: at lib/dma-debug.c:990 check_sync+0x132/0x550()
[  226.288948] Hardware name: ProLiant DL380p Gen8
[  226.288951] ioatdma 0000:00:04.0: DMA-API: device driver syncs DMA memory with different direction [device address=0x00000000ffff7000] [size=4096 bytes] [mapped with DMA_FROM_DEVICE] [synced with DMA_TO_DEVICE]
[  226.288953] Modules linked in: iTCO_wdt(+) sb_edac(+) ioatdma(+) microcode serio_raw pcspkr edac_core hpwdt(+) iTCO_vendor_support hpilo(+) dca acpi_power_meter ata_generic pata_acpi sd_mod crc_t10dif ata_piix libata hpsa tg3 netxen_nic(+) sunrpc dm_mirror dm_region_hash dm_log dm_mod
[  226.288967] Pid: 1055, comm: work_for_cpu Tainted: G        W    3.3.0-0.20.el7.x86_64 #1
[  226.288968] Call Trace:
[  226.288974]  [&lt;ffffffff810644cf&gt;] warn_slowpath_common+0x7f/0xc0
[  226.288977]  [&lt;ffffffff810645c6&gt;] warn_slowpath_fmt+0x46/0x50
[  226.288980]  [&lt;ffffffff81345502&gt;] check_sync+0x132/0x550
[  226.288983]  [&lt;ffffffff81345c9f&gt;] debug_dma_sync_single_for_device+0x3f/0x50
[  226.288988]  [&lt;ffffffff81661002&gt;] ? wait_for_common+0x72/0x180
[  226.288995]  [&lt;ffffffffa019590f&gt;] ioat_xor_val_self_test+0x3e5/0x832 [ioatdma]
[  226.288999]  [&lt;ffffffff811a5739&gt;] ? kfree+0x259/0x270
[  226.289004]  [&lt;ffffffffa0195d77&gt;] ioat3_dma_self_test+0x1b/0x20 [ioatdma]
[  226.289008]  [&lt;ffffffffa01952c3&gt;] ioat_probe+0x2f8/0x348 [ioatdma]
[  226.289011]  [&lt;ffffffffa0195f51&gt;] ioat3_dma_probe+0x1d5/0x2aa [ioatdma]
[  226.289016]  [&lt;ffffffffa0194d12&gt;] ioat_pci_probe+0x139/0x17c [ioatdma]
[  226.289020]  [&lt;ffffffff81354b8c&gt;] local_pci_probe+0x5c/0xd0
[  226.289023]  [&lt;ffffffff81083e50&gt;] ? destroy_work_on_stack+0x20/0x20
[  226.289025]  [&lt;ffffffff81083e68&gt;] do_work_for_cpu+0x18/0x30
[  226.289029]  [&lt;ffffffff8108d997&gt;] kthread+0xb7/0xc0
[  226.289033]  [&lt;ffffffff8166cef4&gt;] kernel_thread_helper+0x4/0x10
[  226.289036]  [&lt;ffffffff81662d20&gt;] ? _raw_spin_unlock_irq+0x30/0x50
[  226.289038]  [&lt;ffffffff81663234&gt;] ? retint_restore_args+0x13/0x13
[  226.289041]  [&lt;ffffffff8108d8e0&gt;] ? kthread_worker_fn+0x1a0/0x1a0
[  226.289044]  [&lt;ffffffff8166cef0&gt;] ? gs_change+0x13/0x13
[  226.289045] ---[ end trace e1618afc7a606089 ]---
[  226.289047] Mapped at:
[  226.289048]  [&lt;ffffffff81345307&gt;] debug_dma_map_page+0x87/0x150
[  226.289050]  [&lt;ffffffffa019653c&gt;] dma_map_page.constprop.18+0x70/0xb34 [ioatdma]
[  226.289054]  [&lt;ffffffffa0195702&gt;] ioat_xor_val_self_test+0x1d8/0x832 [ioatdma]
[  226.289058]  [&lt;ffffffffa0195d77&gt;] ioat3_dma_self_test+0x1b/0x20 [ioatdma]
[  226.289061]  [&lt;ffffffffa01952c3&gt;] ioat_probe+0x2f8/0x348 [ioatdma]

Signed-off-by: Shuah Khan &lt;shuah.khan@hp.com&gt;
Signed-off-by: Vinod Koul &lt;vinod.koul@linux.intel.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
commit ac4989874af56435c308bdde9ad9c837a26f8b23 upstream.

ioat does a DMA memory sync with the DMA_TO_DEVICE direction on a buffer
mapped for DMA_FROM_DEVICE, resulting in the following warning from the DMA
debug code. Fix the dma_sync_single_for_device() call to use the correct
direction.

[  226.288947] WARNING: at lib/dma-debug.c:990 check_sync+0x132/0x550()
[  226.288948] Hardware name: ProLiant DL380p Gen8
[  226.288951] ioatdma 0000:00:04.0: DMA-API: device driver syncs DMA memory with different direction [device address=0x00000000ffff7000] [size=4096 bytes] [mapped with DMA_FROM_DEVICE] [synced with DMA_TO_DEVICE]
[  226.288953] Modules linked in: iTCO_wdt(+) sb_edac(+) ioatdma(+) microcode serio_raw pcspkr edac_core hpwdt(+) iTCO_vendor_support hpilo(+) dca acpi_power_meter ata_generic pata_acpi sd_mod crc_t10dif ata_piix libata hpsa tg3 netxen_nic(+) sunrpc dm_mirror dm_region_hash dm_log dm_mod
[  226.288967] Pid: 1055, comm: work_for_cpu Tainted: G        W    3.3.0-0.20.el7.x86_64 #1
[  226.288968] Call Trace:
[  226.288974]  [&lt;ffffffff810644cf&gt;] warn_slowpath_common+0x7f/0xc0
[  226.288977]  [&lt;ffffffff810645c6&gt;] warn_slowpath_fmt+0x46/0x50
[  226.288980]  [&lt;ffffffff81345502&gt;] check_sync+0x132/0x550
[  226.288983]  [&lt;ffffffff81345c9f&gt;] debug_dma_sync_single_for_device+0x3f/0x50
[  226.288988]  [&lt;ffffffff81661002&gt;] ? wait_for_common+0x72/0x180
[  226.288995]  [&lt;ffffffffa019590f&gt;] ioat_xor_val_self_test+0x3e5/0x832 [ioatdma]
[  226.288999]  [&lt;ffffffff811a5739&gt;] ? kfree+0x259/0x270
[  226.289004]  [&lt;ffffffffa0195d77&gt;] ioat3_dma_self_test+0x1b/0x20 [ioatdma]
[  226.289008]  [&lt;ffffffffa01952c3&gt;] ioat_probe+0x2f8/0x348 [ioatdma]
[  226.289011]  [&lt;ffffffffa0195f51&gt;] ioat3_dma_probe+0x1d5/0x2aa [ioatdma]
[  226.289016]  [&lt;ffffffffa0194d12&gt;] ioat_pci_probe+0x139/0x17c [ioatdma]
[  226.289020]  [&lt;ffffffff81354b8c&gt;] local_pci_probe+0x5c/0xd0
[  226.289023]  [&lt;ffffffff81083e50&gt;] ? destroy_work_on_stack+0x20/0x20
[  226.289025]  [&lt;ffffffff81083e68&gt;] do_work_for_cpu+0x18/0x30
[  226.289029]  [&lt;ffffffff8108d997&gt;] kthread+0xb7/0xc0
[  226.289033]  [&lt;ffffffff8166cef4&gt;] kernel_thread_helper+0x4/0x10
[  226.289036]  [&lt;ffffffff81662d20&gt;] ? _raw_spin_unlock_irq+0x30/0x50
[  226.289038]  [&lt;ffffffff81663234&gt;] ? retint_restore_args+0x13/0x13
[  226.289041]  [&lt;ffffffff8108d8e0&gt;] ? kthread_worker_fn+0x1a0/0x1a0
[  226.289044]  [&lt;ffffffff8166cef0&gt;] ? gs_change+0x13/0x13
[  226.289045] ---[ end trace e1618afc7a606089 ]---
[  226.289047] Mapped at:
[  226.289048]  [&lt;ffffffff81345307&gt;] debug_dma_map_page+0x87/0x150
[  226.289050]  [&lt;ffffffffa019653c&gt;] dma_map_page.constprop.18+0x70/0xb34 [ioatdma]
[  226.289054]  [&lt;ffffffffa0195702&gt;] ioat_xor_val_self_test+0x1d8/0x832 [ioatdma]
[  226.289058]  [&lt;ffffffffa0195d77&gt;] ioat3_dma_self_test+0x1b/0x20 [ioatdma]
[  226.289061]  [&lt;ffffffffa01952c3&gt;] ioat_probe+0x2f8/0x348 [ioatdma]

Signed-off-by: Shuah Khan &lt;shuah.khan@hp.com&gt;
Signed-off-by: Vinod Koul &lt;vinod.koul@linux.intel.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</pre>
</div>
</content>
</entry>
<entry>
<title>dmaengine: imx-dma: fix missing unlock on error in imxdma_xfer_desc()</title>
<updated>2012-10-31T17:03:02+00:00</updated>
<author>
<name>Wei Yongjun</name>
<email>yongjun_wei@trendmicro.com.cn</email>
</author>
<published>2012-10-21T11:58:30+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=7e8cec32110be45739569e04f35d83d47064dd86'/>
<id>7e8cec32110be45739569e04f35d83d47064dd86</id>
<content type='text'>
commit 720dfd250e48a8c7fd1b2b8645955413989c4ee0 upstream.

Add the missing unlock on the error handling path in function
imxdma_xfer_desc().

Signed-off-by: Wei Yongjun &lt;yongjun_wei@trendmicro.com.cn&gt;
Signed-off-by: Vinod Koul &lt;vinod.koul@linux.intel.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
commit 720dfd250e48a8c7fd1b2b8645955413989c4ee0 upstream.

Add the missing unlock on the error handling path in function
imxdma_xfer_desc().

Signed-off-by: Wei Yongjun &lt;yongjun_wei@trendmicro.com.cn&gt;
Signed-off-by: Vinod Koul &lt;vinod.koul@linux.intel.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</pre>
</div>
</content>
</entry>
</feed>
