<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux.git/fs/fscache, branch v3.13</title>
<subtitle>Linux kernel source tree</subtitle>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/'/>
<entry>
<title>Merge branch 'for-3.13/core' of git://git.kernel.dk/linux-block</title>
<updated>2013-11-14T03:08:14+00:00</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2013-11-14T03:08:14+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=0910c0bdf7c291a41bc21e40a97389c9d4c1960d'/>
<id>0910c0bdf7c291a41bc21e40a97389c9d4c1960d</id>
<content type='text'>
Pull block IO core updates from Jens Axboe:
 "This is the pull request for the core changes in the block layer for
  3.13.  It contains:

   - The new blk-mq request interface.

     This is a new and more scalable queueing model that marries the
     best part of the request based interface we currently have (which
     is fully featured, but scales poorly) and the bio based "interface"
     which the new drivers for high IOPS devices end up using because
     it's much faster than the request based one.

     The bio interface has no block layer support, since it taps into
     the stack much earlier.  This means that drivers end up having to
     implement a lot of functionality on their own, like tagging,
     timeout handling, requeue, etc.  The blk-mq interface provides all
     these.  Some drivers even provide a switch to select bio or rq and
     have code to handle both, since things like merging only work in
     the rq model, which is hence faster for some workloads.  This is a
     huge mess.  Conversion of these drivers nets us a substantial code
     reduction.  Initial results on converting SCSI to this model even
     show an 8x improvement on single queue devices.  So while the
     model was intended to work on the newer multiqueue devices, it has
     substantial improvements for "classic" hardware as well.  This code
     has gone through extensive testing and development; it's now ready
     to go.  A pull request to convert virtio-blk to this model will be
     coming as well, with more drivers scheduled for 3.14 conversion.
     (A minimal driver-side sketch of this model follows the shortlog
     below.)

   - Two blktrace fixes from Jan and Chen Gang.

   - A plug merge fix from Alireza Haghdoost.

   - Conversion of __get_cpu_var() from Christoph Lameter.

   - Fix for sector_div() with 64-bit divider from Geert Uytterhoeven.

   - A fix for a race between request completion and the timeout
     handling from Jeff Moyer.  This is what caused the merge conflict
     with blk-mq/core, in case you are looking at that.

   - A dm stacking fix from Mike Snitzer.

   - A code consolidation fix and duplicated code removal from Kent
     Overstreet.

   - A handful of block bug fixes from Mikulas Patocka, fixing a loop
     crash and memory corruption on blk cg.

   - Elevator switch bug fix from Tomoki Sekiyama.

  A heads-up that I had to rebase this branch.  Initially the immutable
  bio_vecs had been queued up for inclusion, but a week later, it became
  clear that it wasn't fully cooked yet.  So the decision was made to
  pull this out and postpone it until 3.14.  It was a straightforward
  rebase, just pruning out the immutable series and the later fixes of
  problems with it.  The rest of the patches applied directly and no
  further changes were made"

* 'for-3.13/core' of git://git.kernel.dk/linux-block: (31 commits)
  block: replace IS_ERR and PTR_ERR with PTR_ERR_OR_ZERO
  block: replace IS_ERR and PTR_ERR with PTR_ERR_OR_ZERO
  block: Do not call sector_div() with a 64-bit divisor
  kernel: trace: blktrace: remove redundant memcpy() in compat_blk_trace_setup()
  block: Consolidate duplicated bio_trim() implementations
  block: Use rw_copy_check_uvector()
  block: Enable sysfs nomerge control for I/O requests in the plug list
  block: properly stack underlying max_segment_size to DM device
  elevator: acquire q-&gt;sysfs_lock in elevator_change()
  elevator: Fix a race in elevator switching and md device initialization
  block: Replace __get_cpu_var uses
  bdi: test bdi_init failure
  block: fix a probe argument to blk_register_region
  loop: fix crash if blk_alloc_queue fails
  blk-core: Fix memory corruption if blkcg_init_queue fails
  block: fix race between request completion and timeout handling
  blktrace: Send BLK_TN_PROCESS events to all running traces
  blk-mq: don't disallow request merges for req-&gt;special being set
  blk-mq: mq plug list breakage
  blk-mq: fix for flush deadlock
  ...
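
A minimal driver-side sketch of the blk-mq model described above, assuming
the later mainline form of the API (blk_status_t and friends postdate
3.13); the mydev_* names are made up for illustration:

	#include &lt;linux/blk-mq.h&gt;

	/* The block layer owns tagging, timeout handling and requeueing;
	 * the driver mainly supplies a queue_rq() callback. */
	static blk_status_t mydev_queue_rq(struct blk_mq_hw_ctx *hctx,
					   const struct blk_mq_queue_data *bd)
	{
		struct request *rq = bd-&gt;rq;

		blk_mq_start_request(rq);	/* arms the timeout for us */
		/* ... issue rq to the hardware here ... */
		blk_mq_end_request(rq, BLK_STS_OK);
		return BLK_STS_OK;
	}

	static const struct blk_mq_ops mydev_mq_ops = {
		.queue_rq = mydev_queue_rq,
	};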
</content>
</entry>
<entry>
<title>block: Replace __get_cpu_var uses</title>
<updated>2013-11-08T15:59:58+00:00</updated>
<author>
<name>Christoph Lameter</name>
<email>cl@linux.com</email>
</author>
<published>2013-10-15T18:22:29+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=170d800af83f3ab2b5ced0e370a861e023dee22a'/>
<id>170d800af83f3ab2b5ced0e370a861e023dee22a</id>
<content type='text'>
__get_cpu_var() is used for multiple purposes in the kernel source. One of
them is address calculation via the form &amp;__get_cpu_var(x).  This calculates
the address for the instance of the percpu variable of the current processor
based on an offset.

Other use cases are for storing and retrieving data from the current
processor's percpu area.  __get_cpu_var() can be used as an lvalue when
writing data or on the right side of an assignment.

__get_cpu_var() is defined as:

#define __get_cpu_var(var) (*this_cpu_ptr(&amp;(var)))

__get_cpu_var() only ever performs an address determination.  However, store
and retrieve operations could use a segment prefix (or global register on
other platforms) to avoid the address calculation.

this_cpu_write() and this_cpu_read() can directly take an offset into a
percpu area and use optimized assembly code to read and write per cpu
variables.

This patch converts __get_cpu_var into either an explicit address
calculation using this_cpu_ptr() or into a use of this_cpu operations that
use the offset.  Thereby address calculations are avoided and fewer
registers are used when code is generated.  (A small sketch of the payoff
follows the transformation list below.)

At the end of the patch set all uses of __get_cpu_var have been removed so
the macro is removed too.

The patch set includes passes over all arches as well.  Once these operations
are used throughout, specialized macros can be defined in non-x86 arches as
well in order to optimize per cpu access by e.g. using a global register
that may be set to the per cpu base.

Transformations done to __get_cpu_var()

1. Determine the address of the percpu instance of the current processor.

	DEFINE_PER_CPU(int, y);
	int *x = &amp;__get_cpu_var(y);

    Converts to

	int *x = this_cpu_ptr(&amp;y);

2. Same as #1 but this time an array structure is involved.

	DEFINE_PER_CPU(int, y[20]);
	int *x = __get_cpu_var(y);

    Converts to

	int *x = this_cpu_ptr(y);

3. Retrieve the content of the current processor's instance of a per cpu
variable.

	DEFINE_PER_CPU(int, y);
	int x = __get_cpu_var(y);

   Converts to

	int x = __this_cpu_read(y);

4. Retrieve the content of a percpu struct

	DEFINE_PER_CPU(struct mystruct, y);
	struct mystruct x = __get_cpu_var(y);

   Converts to

	memcpy(&amp;x, this_cpu_ptr(&amp;y), sizeof(x));

5. Assignment to a per cpu variable

	DEFINE_PER_CPU(int, y);
	__get_cpu_var(y) = x;

   Converts to

	this_cpu_write(y, x);

6. Increment/Decrement etc of a per cpu variable

	DEFINE_PER_CPU(int, y);
	__get_cpu_var(y)++;

   Converts to

	this_cpu_inc(y);
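
As a hedged illustration of the payoff, assume a made-up percpu counter;
on x86-64 the this_cpu_inc() below compiles to a single %gs-prefixed
increment, with no separate address calculation:

	#include &lt;linux/percpu.h&gt;

	DEFINE_PER_CPU(unsigned long, mydev_hits);	/* hypothetical */

	static void mydev_account(void)
	{
		/* Old: compute the address, then dereference it:
		 *	__get_cpu_var(mydev_hits)++;
		 * New: hand the offset straight to the this_cpu op. */
		this_cpu_inc(mydev_hits);
	}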

Signed-off-by: Christoph Lameter &lt;cl@linux.com&gt;
Signed-off-by: Jens Axboe &lt;axboe@kernel.dk&gt;
</content>
</entry>
<entry>
<title>FS-Cache: Provide the ability to enable/disable cookies</title>
<updated>2013-09-27T17:40:25+00:00</updated>
<author>
<name>David Howells</name>
<email>dhowells@redhat.com</email>
</author>
<published>2013-09-20T23:09:31+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=94d30ae90a00cafe686c1057be57f4885f963abf'/>
<id>94d30ae90a00cafe686c1057be57f4885f963abf</id>
<content type='text'>
Provide the ability to enable and disable fscache cookies.  A disabled cookie
will reject or ignore further requests to:

	Acquire a child cookie
	Invalidate and update backing objects
	Check the consistency of a backing object
	Allocate storage for backing page
	Read backing pages
	Write to backing pages

but still allows:

	Checks/waits on the completion of already in-progress objects
	Uncaching of pages
	Relinquishment of cookies

Two new operations are provided:

 (1) Disable a cookie:

	void fscache_disable_cookie(struct fscache_cookie *cookie,
				    bool invalidate);

     If the cookie is not already disabled, this locks the cookie against other
     dis/enablement ops, marks the cookie as being disabled, discards or
     invalidates any backing objects and waits for cessation of activity on any
     associated object.

     This is a wrapper around a chunk split out of fscache_relinquish_cookie(),
     but it reinitialises the cookie such that it can be reenabled.

     All possible failures are handled internally.  The caller should consider
     calling fscache_uncache_all_inode_pages() afterwards to make sure all page
     markings are cleared up.

 (2) Enable a cookie:

	void fscache_enable_cookie(struct fscache_cookie *cookie,
				   bool (*can_enable)(void *data),
				   void *data)

     If the cookie is not already enabled, this locks the cookie against other
     dis/enablement ops, invokes can_enable() and, if the cookie is not an
     index cookie, will begin the procedure of acquiring backing objects.

     The optional can_enable() function is passed the data argument and returns
     a ruling as to whether or not enablement should actually be permitted to
     begin.

     All possible failures are handled internally.  The cookie will only be
     marked as enabled if provisional backing objects are allocated.

A later patch will introduce these to NFS.  Cookie enablement during nfs_open()
is then contingent on i_writecount &lt;= 0.  can_enable() checks for a race
between open(O_RDONLY) and open(O_WRONLY/O_RDWR).  This simplifies NFS's cookie
handling and allows us to get rid of open(O_RDONLY) accidentally introducing
caching to an inode that's open for writing already.

One operation has its API modified:

 (3) Acquire a cookie.

	struct fscache_cookie *fscache_acquire_cookie(
		struct fscache_cookie *parent,
		const struct fscache_cookie_def *def,
		void *netfs_data,
		bool enable);

     This now has an additional argument that indicates whether the requested
     cookie should be enabled by default.  It doesn't need the can_enable()
     function because the caller must prevent multiple calls for the same netfs
     object and it doesn't need to take the enablement lock because no one else
     can get at the cookie before this returns.
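
A hedged sketch of how a netfs might combine these calls, modelled on the
NFS plan described above; the mynetfs_* names, the super cookie and the
use of i_writecount in can_enable() are illustrative assumptions:

	/* Permit enablement only while no one holds the file open for
	 * write. */
	static bool mynetfs_can_enable(void *data)
	{
		struct inode *inode = data;

		return atomic_read(&amp;inode-&gt;i_writecount) &lt;= 0;
	}

	static void mynetfs_open_cookie(struct mynetfs_inode *mi,
					struct inode *inode)
	{
		/* Acquire the cookie disabled, then enable it only if it
		 * is safe to do so. */
		mi-&gt;cookie = fscache_acquire_cookie(mynetfs_super_cookie,
						    &amp;mynetfs_inode_def,
						    mi, false);
		fscache_enable_cookie(mi-&gt;cookie, mynetfs_can_enable, inode);
	}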

Signed-off-by: David Howells &lt;dhowells@redhat.com&gt;
</content>
</entry>
<entry>
<title>FS-Cache: Add use/unuse/wake cookie wrappers</title>
<updated>2013-09-27T17:40:25+00:00</updated>
<author>
<name>David Howells</name>
<email>dhowells@redhat.com</email>
</author>
<published>2013-09-20T23:09:31+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=8fb883f3e30065529e4f35d4b4f355193dcdb7a2'/>
<id>8fb883f3e30065529e4f35d4b4f355193dcdb7a2</id>
<content type='text'>
Add wrapper functions for dealing with cookie-&gt;n_active:

 (*) __fscache_use_cookie() to increment it.

 (*) __fscache_unuse_cookie() to decrement and test against zero.

 (*) __fscache_wake_unused_cookie() to wake up anyone waiting for it to reach
     zero.

The second and third are split so that the third can be done after cookie-&gt;lock
has been released in case the waiter wakes up whilst we're still holding it and
tries to get it.

We will need to wake-on-zero once the cookie disablement patch is applied
because it will then be possible to see n_active become zero without the cookie
being relinquished.

Also move the cookie usage out of fscache_attr_changed_op() and into
fscache_attr_changed() and the operation struct so that cookie disablement
will be able to track it.

Whilst we're at it, only increment n_active if we're about to do
fscache_submit_op() so that we don't have to deal with undoing it if anything
earlier fails.  Possibly this should be moved into fscache_submit_op() which
could look at FSCACHE_OP_UNUSE_COOKIE.
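
A sketch of the wrapper semantics described above (not necessarily the
exact upstream bodies), assuming n_active is an atomic_t in struct
fscache_cookie:

	static inline void __fscache_use_cookie(struct fscache_cookie *cookie)
	{
		atomic_inc(&amp;cookie-&gt;n_active);
	}

	/* Returns true when the last active use has been dropped... */
	static inline bool __fscache_unuse_cookie(struct fscache_cookie *cookie)
	{
		return atomic_dec_and_test(&amp;cookie-&gt;n_active);
	}

	/* ...so that the wake-up can be issued after cookie-&gt;lock has
	 * been released. */
	static inline void __fscache_wake_unused_cookie(struct fscache_cookie *cookie)
	{
		wake_up_atomic_t(&amp;cookie-&gt;n_active);
	}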

Signed-off-by: David Howells &lt;dhowells@redhat.com&gt;
</content>
</entry>
<entry>
<title>Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client</title>
<updated>2013-09-19T17:50:37+00:00</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2013-09-19T17:50:37+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=e9ff04dd94d46c817bbb103531cdef6e7bd5d022'/>
<id>e9ff04dd94d46c817bbb103531cdef6e7bd5d022</id>
<content type='text'>
Pull ceph fixes from Sage Weil:
 "These fix several bugs with RBD from 3.11 that didn't get tested in
  time for the merge window: some error handling, a use-after-free, and
  a sequencing issue when unmapping and image races with a notify
  operation.

  There is also a patch fixing a problem with the new ceph + fscache
  code that just went in"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client:
  fscache: check consistency does not decrement refcount
  rbd: fix error handling from rbd_snap_name()
  rbd: ignore unmapped snapshots that no longer exist
  rbd: fix use-after free of rbd_dev-&gt;disk
  rbd: make rbd_obj_notify_ack() synchronous
  rbd: complete notifies before cleaning up osd_client and rbd_dev
  libceph: add function to ensure notifies are complete
</content>
</entry>
<entry>
<title>lib/radix-tree.c: make radix_tree_node_alloc() work correctly within interrupt</title>
<updated>2013-09-11T22:59:36+00:00</updated>
<author>
<name>Jan Kara</name>
<email>jack@suse.cz</email>
</author>
<published>2013-09-11T21:26:05+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=5e4c0d974139a98741b829b27cf38dc8f9284490'/>
<id>5e4c0d974139a98741b829b27cf38dc8f9284490</id>
<content type='text'>
With users of radix_tree_preload() run from interrupt (block/blk-ioc.c is
one such possible user), the following race can happen:

radix_tree_preload()
...
radix_tree_insert()
  radix_tree_node_alloc()
    if (rtp-&gt;nr) {
      ret = rtp-&gt;nodes[rtp-&gt;nr - 1];
&lt;interrupt&gt;
...
radix_tree_preload()
...
radix_tree_insert()
  radix_tree_node_alloc()
    if (rtp-&gt;nr) {
      ret = rtp-&gt;nodes[rtp-&gt;nr - 1];

And we give out one radix tree node twice.  That clearly results in radix
tree corruption with different results (usually OOPS) depending on which
two users of radix tree race.

We fix the problem by making radix_tree_node_alloc() always allocate fresh
radix tree nodes when in interrupt.  Using preloading when in interrupt
doesn't make sense since all the allocations have to be atomic anyway and
we cannot steal nodes from process-context users because some users rely
on radix_tree_insert() succeeding after radix_tree_preload().
The in_interrupt() check is somewhat ugly, but we cannot simply key off the
passed gfp_mask, as that is acquired from root_gfp_mask() and is thus the
same for all preload users.

Another part of the fix is to avoid node preallocation in
radix_tree_preload() when the passed gfp_mask doesn't allow waiting.  Again,
preallocation in such a case doesn't make sense, and if preallocation were to
happen in interrupt we could possibly leak some allocated nodes.  However,
some users of radix_tree_preload() require the following radix_tree_insert()
to succeed.  To avoid unexpected effects for these users,
radix_tree_preload() only warns if the passed gfp mask doesn't allow waiting,
and we provide a new function radix_tree_maybe_preload() for those users
which get different gfp masks from different call sites and which are
prepared to handle radix_tree_insert() failure.
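
A minimal sketch of a caller prepared for insertion failure, as described
above; my_tree_lock and my_cache_add() are made-up names:

	#include &lt;linux/radix-tree.h&gt;
	#include &lt;linux/spinlock.h&gt;

	static DEFINE_SPINLOCK(my_tree_lock);

	int my_cache_add(struct radix_tree_root *root, unsigned long index,
			 void *item, gfp_t gfp_mask)
	{
		int ret;

		/* May legitimately skip preallocation if gfp_mask doesn't
		 * allow waiting. */
		ret = radix_tree_maybe_preload(gfp_mask);
		if (ret)
			return ret;

		spin_lock(&amp;my_tree_lock);
		ret = radix_tree_insert(root, index, item); /* may be -ENOMEM */
		spin_unlock(&amp;my_tree_lock);

		radix_tree_preload_end();
		return ret;
	}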

Signed-off-by: Jan Kara &lt;jack@suse.cz&gt;
Cc: Jens Axboe &lt;jaxboe@fusionio.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>fscache: check consistency does not decrement refcount</title>
<updated>2013-09-10T16:04:46+00:00</updated>
<author>
<name>Milosz Tanski</name>
<email>milosz@adfin.com</email>
</author>
<published>2013-09-09T18:28:57+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=9c89d62948c4740e379a7e0085dd8d7c1561f53f'/>
<id>9c89d62948c4740e379a7e0085dd8d7c1561f53f</id>
<content type='text'>
__fscache_check_consistency() does not decrement the count of operations
active after it finishes in the success case.  This leads to hung tasks on
cookie de-registration (commonly in inode eviction).

INFO: task kworker/1:2:4214 blocked for more than 120 seconds.
kworker/1:2     D ffff880443513fc0     0  4214      2 0x00000000
Workqueue: ceph-msgr con_work [libceph]
  ...
Call Trace:
 [&lt;ffffffff81569fc6&gt;] ? _raw_spin_unlock_irqrestore+0x16/0x20
 [&lt;ffffffffa0016570&gt;] ? fscache_wait_bit_interruptible+0x30/0x30 [fscache]
 [&lt;ffffffff81568d09&gt;] schedule+0x29/0x70
 [&lt;ffffffffa001657e&gt;] fscache_wait_atomic_t+0xe/0x20 [fscache]
 [&lt;ffffffff815665cf&gt;] out_of_line_wait_on_atomic_t+0x9f/0xe0
 [&lt;ffffffff81083560&gt;] ? autoremove_wake_function+0x40/0x40
 [&lt;ffffffffa0015a9c&gt;] __fscache_relinquish_cookie+0x15c/0x310 [fscache]
 [&lt;ffffffffa00a4fae&gt;] ceph_fscache_unregister_inode_cookie+0x3e/0x50 [ceph]
 [&lt;ffffffffa007e373&gt;] ceph_destroy_inode+0x33/0x200 [ceph]
 [&lt;ffffffff811c13ae&gt;] ? __fsnotify_inode_delete+0xe/0x10
 [&lt;ffffffff8119ba1c&gt;] destroy_inode+0x3c/0x70
 [&lt;ffffffff8119bb69&gt;] evict+0x119/0x1b0

Signed-off-by: Milosz Tanski &lt;milosz@adfin.com&gt;
Acked-by: David Howells &lt;dhowells@redhat.com&gt;
Signed-off-by: Sage Weil &lt;sage@inktank.com&gt;
</content>
</entry>
<entry>
<title>fscache: Netfs function for cleanup post readpages</title>
<updated>2013-09-06T08:17:30+00:00</updated>
<author>
<name>Milosz Tanski</name>
<email>milosz@adfin.com</email>
</author>
<published>2013-08-21T21:30:11+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=5a6f282a2052bb13171b53f03b34501cf72c33f1'/>
<id>5a6f282a2052bb13171b53f03b34501cf72c33f1</id>
<content type='text'>
Currently the fscache code expects the netfs to call
fscache_read_or_alloc_pages() inside the aops readpages callback.  It marks
all the pages in the list provided by readahead with PG_private_2.  In the
cases where the netfs fails to read all the pages (which is legal), it ends
up returning to the readahead code and triggering a BUG.  This happens
because the page list still contains marked pages.

This patch implements a simple fscache_readpages_cancel function that the netfs
should call before returning from readpages.  It will revoke the pages from the
underlying cache backend and unmark them.

The problem was originally worked out in the Ceph devel tree, but it also
occurs in CIFS.  It appears that NFS, AFS and 9P are okay as read_cache_pages()
will clean up the unprocessed pages in the case of an error.
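
A hedged sketch of the intended call pattern, with made-up mynetfs_* names
standing in for the netfs's own readpages machinery:

	static int mynetfs_readpages(struct file *file,
				     struct address_space *mapping,
				     struct list_head *pages,
				     unsigned nr_pages)
	{
		struct fscache_cookie *cookie = mynetfs_cookie(mapping-&gt;host);
		int ret;

		ret = fscache_read_or_alloc_pages(cookie, mapping, pages,
						  &amp;nr_pages,
						  mynetfs_read_done, NULL,
						  mapping_gfp_mask(mapping));
		if (ret &lt; 0)
			/* Bailing out early: revoke and unmark whatever is
			 * still on the list so readahead doesn't trip over
			 * marked pages. */
			fscache_readpages_cancel(cookie, pages);
		return ret;
	}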

This can be used to address the following oops:

[12410647.597278] BUG: Bad page state in process petabucket  pfn:3d504e
[12410647.597292] page:ffffea000f541380 count:0 mapcount:0 mapping:
	(null) index:0x0
[12410647.597298] page flags: 0x200000000001000(private_2)

...

[12410647.597334] Call Trace:
[12410647.597345]  [&lt;ffffffff815523f2&gt;] dump_stack+0x19/0x1b
[12410647.597356]  [&lt;ffffffff8111def7&gt;] bad_page+0xc7/0x120
[12410647.597359]  [&lt;ffffffff8111e49e&gt;] free_pages_prepare+0x10e/0x120
[12410647.597361]  [&lt;ffffffff8111fc80&gt;] free_hot_cold_page+0x40/0x170
[12410647.597363]  [&lt;ffffffff81123507&gt;] __put_single_page+0x27/0x30
[12410647.597365]  [&lt;ffffffff81123df5&gt;] put_page+0x25/0x40
[12410647.597376]  [&lt;ffffffffa02bdcf9&gt;] ceph_readpages+0x2e9/0x6e0 [ceph]
[12410647.597379]  [&lt;ffffffff81122a8f&gt;] __do_page_cache_readahead+0x1af/0x260
[12410647.597382]  [&lt;ffffffff81122ea1&gt;] ra_submit+0x21/0x30
[12410647.597384]  [&lt;ffffffff81118f64&gt;] filemap_fault+0x254/0x490
[12410647.597387]  [&lt;ffffffff8113a74f&gt;] __do_fault+0x6f/0x4e0
[12410647.597391]  [&lt;ffffffff810125bd&gt;] ? __switch_to+0x16d/0x4a0
[12410647.597395]  [&lt;ffffffff810865ba&gt;] ? finish_task_switch+0x5a/0xc0
[12410647.597398]  [&lt;ffffffff8113d856&gt;] handle_pte_fault+0xf6/0x930
[12410647.597401]  [&lt;ffffffff81008c33&gt;] ? pte_mfn_to_pfn+0x93/0x110
[12410647.597403]  [&lt;ffffffff81008cce&gt;] ? xen_pmd_val+0xe/0x10
[12410647.597405]  [&lt;ffffffff81005469&gt;] ? __raw_callee_save_xen_pmd_val+0x11/0x1e
[12410647.597407]  [&lt;ffffffff8113f361&gt;] handle_mm_fault+0x251/0x370
[12410647.597411]  [&lt;ffffffff812b0ac4&gt;] ? call_rwsem_down_read_failed+0x14/0x30
[12410647.597414]  [&lt;ffffffff8155bffa&gt;] __do_page_fault+0x1aa/0x550
[12410647.597418]  [&lt;ffffffff8108011d&gt;] ? up_write+0x1d/0x20
[12410647.597422]  [&lt;ffffffff8113141c&gt;] ? vm_mmap_pgoff+0xbc/0xe0
[12410647.597425]  [&lt;ffffffff81143bb8&gt;] ? SyS_mmap_pgoff+0xd8/0x240
[12410647.597427]  [&lt;ffffffff8155c3ae&gt;] do_page_fault+0xe/0x10
[12410647.597431]  [&lt;ffffffff81558818&gt;] page_fault+0x28/0x30

Signed-off-by: Milosz Tanski &lt;milosz@adfin.com&gt;
Signed-off-by: David Howells &lt;dhowells@redhat.com&gt;
</content>
</entry>
<entry>
<title>FS-Cache: Add interface to check consistency of a cached object</title>
<updated>2013-09-06T08:17:30+00:00</updated>
<author>
<name>David Howells</name>
<email>dhowells@redhat.com</email>
</author>
<published>2013-08-21T21:29:38+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=da9803bc8812f5bd3b26baaa90e515b843c65ff7'/>
<id>da9803bc8812f5bd3b26baaa90e515b843c65ff7</id>
<content type='text'>
Extend the fscache netfs API so that the netfs can ask whether a cache
object is up to date with respect to its corresponding netfs object:

	int fscache_check_consistency(struct fscache_cookie *cookie)

This will call back to the netfs to check whether the auxiliary data associated
with a cookie is correct.  It returns 0 if it is and -ESTALE if it isn't; it
may also return -ENOMEM and -ERESTARTSYS.

The backends now have to implement a mandatory operation pointer:

	int (*check_consistency)(struct fscache_object *object)

that corresponds to the above API call.  FS-Cache takes care of pinning the
object and the cookie in memory and managing this call with respect to the
object state.
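
A brief hedged sketch of a netfs using this call to revalidate on open;
mynetfs_cookie() is a made-up accessor, and the invalidation step is an
illustrative policy rather than part of this patch:

	static int mynetfs_revalidate_cache(struct inode *inode)
	{
		struct fscache_cookie *cookie = mynetfs_cookie(inode);
		int ret;

		ret = fscache_check_consistency(cookie);
		if (ret == -ESTALE) {
			/* Aux data mismatch: discard the stale object. */
			fscache_invalidate(cookie);
			return 0;
		}
		return ret;	/* 0, -ENOMEM or -ERESTARTSYS as above */
	}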

Original-author: Hongyi Jia &lt;jiayisuse@gmail.com&gt;
Signed-off-by: David Howells &lt;dhowells@redhat.com&gt;
cc: Hongyi Jia &lt;jiayisuse@gmail.com&gt;
cc: Milosz Tanski &lt;milosz@adfin.com&gt;
</content>
</entry>
<entry>
<title>FS-Cache: Don't use spin_is_locked() in assertions</title>
<updated>2013-06-19T13:16:47+00:00</updated>
<author>
<name>David Howells</name>
<email>dhowells@redhat.com</email>
</author>
<published>2013-05-24T11:45:31+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux.git/commit/?id=dcfae32f892f03dee9896b19d1960c1ecd3f0583'/>
<id>dcfae32f892f03dee9896b19d1960c1ecd3f0583</id>
<content type='text'>
Under certain circumstances, spin_is_locked() is hardwired to 0 - even when the
code would normally be in a locked section where it should return 1.  This
means it cannot be used for an assertion that checks that a spinlock is locked.

Remove such usages from FS-Cache.
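
To illustrate the pitfall (a sketch using fscache's local ASSERT() macro;
example_check() is made up): with CONFIG_SMP=n and no spinlock debugging,
spin_is_locked() is a constant 0, so the assertion fires even inside the
critical section.  lockdep_assert_held() is the safe alternative:

	static void example_check(struct fscache_object *object)
	{
		spin_lock(&amp;object-&gt;lock);
		/* Bogus on CONFIG_SMP=n builds: spin_is_locked() is 0. */
		ASSERT(spin_is_locked(&amp;object-&gt;lock));
		/* Safe: compiles away when lockdep is not configured. */
		lockdep_assert_held(&amp;object-&gt;lock);
		spin_unlock(&amp;object-&gt;lock);
	}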

The following oops might otherwise be observed:

FS-Cache: Assertion failed
BUG: failure at fs/fscache/operation.c:270/fscache_start_operations()!
Kernel panic - not syncing: BUG!
CPU: 0 PID: 10 Comm: kworker/u2:1 Not tainted 3.10.0-rc1-00133-ge7ebb75 #2
Workqueue: fscache_operation fscache_op_work_func [fscache]
7f091c48 603c8947 7f090000 7f9b1361 7f25f080 00000001 7f26d440 7f091c90
60299eb8 7f091d90 602951c5 7f26d440 3000000008 7f091da0 7f091cc0 7f091cd0
00000007 00000007 00000006 7f091ae0 00000010 0000010e 7f9af330 7f091ae0
Call Trace:
7f091c88: [&lt;60299eb8&gt;] dump_stack+0x17/0x19
7f091c98: [&lt;602951c5&gt;] panic+0xf4/0x1e9
7f091d38: [&lt;6002b10e&gt;] set_signals+0x1e/0x40
7f091d58: [&lt;6005b89e&gt;] __wake_up+0x4e/0x70
7f091d98: [&lt;7f9aa003&gt;] fscache_start_operations+0x43/0x50 [fscache]
7f091da8: [&lt;7f9aa1e3&gt;] fscache_op_complete+0x1d3/0x220 [fscache]
7f091db8: [&lt;60082985&gt;] unlock_page+0x55/0x60
7f091de8: [&lt;7fb25bb0&gt;] cachefiles_read_copier+0x250/0x330 [cachefiles]
7f091e58: [&lt;7f9ab03c&gt;] fscache_op_work_func+0xac/0x120 [fscache]
7f091e88: [&lt;6004d5b0&gt;] process_one_work+0x250/0x3a0
7f091ef8: [&lt;6004edc7&gt;] worker_thread+0x177/0x2a0
7f091f38: [&lt;6004ec50&gt;] worker_thread+0x0/0x2a0
7f091f58: [&lt;60054418&gt;] kthread+0xd8/0xe0
7f091f68: [&lt;6005bb27&gt;] finish_task_switch.isra.64+0x37/0xa0
7f091fd8: [&lt;600185cf&gt;] new_thread_handler+0x8f/0xb0

Reported-by: Milosz Tanski &lt;milosz@adfin.com&gt;
Signed-off-by: David Howells &lt;dhowells@redhat.com&gt;
Reviewed-and-tested-By: Milosz Tanski &lt;milosz@adfin.com&gt;
</content>
</entry>
</feed>
