<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux-stable.git/drivers/md/bcache/debug.c, branch linux-3.13.y</title>
<subtitle>Linux kernel stable tree</subtitle>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/'/>
<entry>
<title>bcache: Bypass torture test</title>
<updated>2013-11-11T05:56:43+00:00</updated>
<author>
<name>Kent Overstreet</name>
<email>kmo@daterainc.com</email>
</author>
<published>2013-09-10T21:27:42+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=5ceaaad7047745c1c02150c39d3fb623b7948d48'/>
<id>5ceaaad7047745c1c02150c39d3fb623b7948d48</id>
<content type='text'>
More testing ftw! Also, verify mode now doesn't break if you read dirty
data.

Signed-off-by: Kent Overstreet &lt;kmo@daterainc.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
More testing ftw! Also, verify mode now doesn't break if you read dirty
data.

Signed-off-by: Kent Overstreet &lt;kmo@daterainc.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>bcache: Debug code improvements</title>
<updated>2013-11-11T05:56:34+00:00</updated>
<author>
<name>Kent Overstreet</name>
<email>kmo@daterainc.com</email>
</author>
<published>2013-10-24T23:36:03+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=280481d06c8a683d9aaa26125476222e76b733c5'/>
<id>280481d06c8a683d9aaa26125476222e76b733c5</id>
<content type='text'>
Couple changes:
 * Consolidate bch_check_keys() and bch_check_key_order(), and move the
   checks that only check_key_order() could do to bch_btree_iter_next().

 * Get rid of CONFIG_BCACHE_EDEBUG - all that code is now compiled in
   when CONFIG_BCACHE_DEBUG is enabled, and there's a sysfs file to
   flip on the EDEBUG checks at runtime.

 * Dropped an old, not terribly useful check in rw_unlock(), and
   refactored/improved some of the other debug code.

Signed-off-by: Kent Overstreet &lt;kmo@daterainc.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Couple changes:
 * Consolidate bch_check_keys() and bch_check_key_order(), and move the
   checks that only check_key_order() could do to bch_btree_iter_next().

 * Get rid of CONFIG_BCACHE_EDEBUG - all that code is now compiled in
   when CONFIG_BCACHE_DEBUG is enabled, and there's a sysfs file to
   flip on the EDEBUG checks at runtime.

 * Dropped an old, not terribly useful check in rw_unlock(), and
   refactored/improved some of the other debug code.

Signed-off-by: Kent Overstreet &lt;kmo@daterainc.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>bcache: Break up struct search</title>
<updated>2013-11-11T05:56:32+00:00</updated>
<author>
<name>Kent Overstreet</name>
<email>kmo@daterainc.com</email>
</author>
<published>2013-09-11T02:02:45+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=220bb38c21b83e2f7b842f33220bf727093eca89'/>
<id>220bb38c21b83e2f7b842f33220bf727093eca89</id>
<content type='text'>
With all the recent refactoring around struct btree_op, struct search has
gotten rather large.

But we can now easily break it up in a different way: we break out
struct btree_insert_op, which is for inserting data into the cache and
is now what the copying gc code uses; struct search is now specific to
request.c.

Signed-off-by: Kent Overstreet &lt;kmo@daterainc.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
With all the recent refactoring around struct btree_op, struct search has
gotten rather large.

But we can now easily break it up in a different way: we break out
struct btree_insert_op, which is for inserting data into the cache and
is now what the copying gc code uses; struct search is now specific to
request.c.

Signed-off-by: Kent Overstreet &lt;kmo@daterainc.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>bcache: Use standard utility code</title>
<updated>2013-07-01T21:43:53+00:00</updated>
<author>
<name>Kent Overstreet</name>
<email>koverstreet@google.com</email>
</author>
<published>2013-06-07T01:15:57+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=8e51e414a3c6d92ef2cc41720c67342a8e2c0bf7'/>
<id>8e51e414a3c6d92ef2cc41720c67342a8e2c0bf7</id>
<content type='text'>
Some of bcache's utility code has made it into the rest of the kernel,
so drop the bcache versions.

Bcache used to have a workaround for allocating from a bio set under
generic_make_request() (if you allocated more than once, the bios you
already allocated would get stuck on current-&gt;bio_list when you
submitted, and you'd risk deadlock) - bcache would mask out __GFP_WAIT
when allocating bios under generic_make_request() so that allocation
could fail and it could retry from a workqueue. But bio_alloc_bioset() has
a workaround now, so we can drop this hack and the associated error
handling.

Signed-off-by: Kent Overstreet &lt;koverstreet@google.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Some of bcache's utility code has made it into the rest of the kernel,
so drop the bcache versions.

Bcache used to have a workaround for allocating from a bio set under
generic_make_request() (if you allocated more than once, the bios you
already allocated would get stuck on current-&gt;bio_list when you
submitted, and you'd risk deadlock) - bcache would mask out __GFP_WAIT
when allocating bios under generic_make_request() so that allocation
could fail and it could retry from a workqueue. But bio_alloc_bioset() has
a workaround now, so we can drop this hack and the associated error
handling.

Signed-off-by: Kent Overstreet &lt;koverstreet@google.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>bcache: Delete fuzz tester</title>
<updated>2013-07-01T21:43:48+00:00</updated>
<author>
<name>Kent Overstreet</name>
<email>koverstreet@google.com</email>
</author>
<published>2013-05-16T00:13:45+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=f3059a54610f6c516c0942d58b9435921768ce2d'/>
<id>f3059a54610f6c516c0942d58b9435921768ce2d</id>
<content type='text'>
This code has rotted and it hasn't been used in ages anyway.

Signed-off-by: Kent Overstreet &lt;kmo@daterainc.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
This code has rotted and it hasn't been used in ages anyway.

Signed-off-by: Kent Overstreet &lt;kmo@daterainc.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>bcache: Write out full stripes</title>
<updated>2013-06-27T04:58:04+00:00</updated>
<author>
<name>Kent Overstreet</name>
<email>koverstreet@google.com</email>
</author>
<published>2013-06-05T13:24:39+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=72c270612bd33192fa836ad0f2939af1ca218292'/>
<id>72c270612bd33192fa836ad0f2939af1ca218292</id>
<content type='text'>
Now that we're tracking dirty data per stripe, we can add two
optimizations for raid5/6:

 * If a stripe is already dirty, force writes to that stripe to
   writeback mode, to help build up full stripes of dirty data.

 * When flushing dirty data, preferentially write out full stripes first
   if there are any.

Signed-off-by: Kent Overstreet &lt;koverstreet@google.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Now that we're tracking dirty data per stripe, we can add two
optimizations for raid5/6:

 * If a stripe is already dirty, force writes to that stripe to
   writeback mode, to help build up full stripes of dirty data.

 * When flushing dirty data, preferentially write out full stripes first
   if there are any.

Signed-off-by: Kent Overstreet &lt;koverstreet@google.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>bcache: Rip out pkey()/pbtree()</title>
<updated>2013-06-27T00:09:15+00:00</updated>
<author>
<name>Kent Overstreet</name>
<email>koverstreet@google.com</email>
</author>
<published>2013-05-15T03:33:16+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=85b1492ee113486d871de7676a61f506a43ca475'/>
<id>85b1492ee113486d871de7676a61f506a43ca475</id>
<content type='text'>
Old gcc doesn't like the struct hack, and it is kind of ugly. So finish
off the work to convert pr_debug() statements to tracepoints, and delete
pkey()/pbtree().

Signed-off-by: Kent Overstreet &lt;koverstreet@google.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Old gcc doesn't like the struct hack, and it is kind of ugly. So finish
off the work to convert pr_debug() statements to tracepoints, and delete
pkey()/pbtree().

Signed-off-by: Kent Overstreet &lt;koverstreet@google.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>bcache: Refactor btree io</title>
<updated>2013-06-27T00:09:14+00:00</updated>
<author>
<name>Kent Overstreet</name>
<email>koverstreet@google.com</email>
</author>
<published>2013-04-25T20:58:35+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=5794351146199b9ac67a5ab1beab82be8bfd7b5d'/>
<id>5794351146199b9ac67a5ab1beab82be8bfd7b5d</id>
<content type='text'>
The most significant change is that btree reads are now done
synchronously, instead of asynchronously with the post-read work done
from a workqueue.

This was originally done because we can't block on IO under
generic_make_request(). But we already have a mechanism to punt cache
lookups to a workqueue if needed, so if we just use that we don't have
to deal with the complexity of doing things asynchronously.

The main benefit is this makes the locking situation saner; we can hold
our write lock on the btree node until we're finished reading it, and we
don't need that btree_node_read_done() flag anymore.

Also, for writes, btree_write() was broken out into btree_node_write()
and btree_leaf_dirty() - the old code with the boolean argument was dumb
and confusing.

The prio_blocked mechanism was improved a bit too: now the only counter
is in struct btree_write, and we don't mess with transferring a count
from struct btree anymore.

This required changing garbage collection to block prios at the start
and unblock when it finishes, which is cleaner than what it was doing
anyway (the old code had mostly the same effect, but was doing it in a
convoluted way).

And the btree iterator that btree_node_read_done() uses was converted
to a real mempool.

Signed-off-by: Kent Overstreet &lt;koverstreet@google.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The most significant change is that btree reads are now done
synchronously, instead of asynchronously with the post-read work done
from a workqueue.

This was originally done because we can't block on IO under
generic_make_request(). But we already have a mechanism to punt cache
lookups to a workqueue if needed, so if we just use that we don't have
to deal with the complexity of doing things asynchronously.

The main benefit is this makes the locking situation saner; we can hold
our write lock on the btree node until we're finished reading it, and we
don't need that btree_node_read_done() flag anymore.

Also, for writes, btree_write() was broken out into btree_node_write()
and btree_leaf_dirty() - the old code with the boolean argument was dumb
and confusing.

The prio_blocked mechanism was improved a bit too: now the only counter
is in struct btree_write, and we don't mess with transferring a count
from struct btree anymore.

This required changing garbage collection to block prios at the start
and unblock when it finishes, which is cleaner than what it was doing
anyway (the old code had mostly the same effect, but was doing it in a
convoluted way).

And the btree iterator that btree_node_read_done() uses was converted
to a real mempool.

Signed-off-by: Kent Overstreet &lt;koverstreet@google.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>bcache: Disable broken btree fuzz tester</title>
<updated>2013-04-08T20:33:49+00:00</updated>
<author>
<name>Kent Overstreet</name>
<email>koverstreet@google.com</email>
</author>
<published>2013-04-05T21:20:29+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=cef5279735d3f6f0243e626963e6d5c84efade0a'/>
<id>cef5279735d3f6f0243e626963e6d5c84efade0a</id>
<content type='text'>
Reported-by: &lt;sasha.levin@oracle.com&gt;
Signed-off-by: Kent Overstreet &lt;koverstreet@google.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Reported-by: &lt;sasha.levin@oracle.com&gt;
Signed-off-by: Kent Overstreet &lt;koverstreet@google.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>bcache: Sparse fixes</title>
<updated>2013-04-08T20:33:48+00:00</updated>
<author>
<name>Kent Overstreet</name>
<email>koverstreet@google.com</email>
</author>
<published>2013-03-26T20:49:02+00:00</published>
<link rel='alternate' type='text/html' href='https://git.tavy.me/linux-stable.git/commit/?id=c19ed23a0b1848eca6b6f22c1ee233abe54d37f9'/>
<id>c19ed23a0b1848eca6b6f22c1ee233abe54d37f9</id>
<content type='text'>
Signed-off-by: Kent Overstreet &lt;koverstreet@google.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Signed-off-by: Kent Overstreet &lt;koverstreet@google.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
