3.12-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Josef Bacik <jbacik@xxxxxxxxxxxx>

commit d4b4087c43cc00a196c5be57fac41f41309f1d56 upstream.

While running some snapshot aware defrag tests I noticed I was panicking
every once in a while in key_search. This is because of the optimization
that says if we find a key at slot 0 it will be at slot 0 all the way down
the rest of the tree. This isn't the case for btrfs_search_old_slot since
it will likely replay changes to a buffer if something has changed since
we took our sequence number. So short circuit this optimization by
setting prev_cmp to -1 every time we call key_search so we will do our
normal binary search. With this patch I am no longer seeing the panics I
was seeing before. Thanks,

Signed-off-by: Josef Bacik <jbacik@xxxxxxxxxxxx>
Signed-off-by: Chris Mason <chris.mason@xxxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

---
 fs/btrfs/ctree.c |    8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

--- a/fs/btrfs/ctree.c
+++ b/fs/btrfs/ctree.c
@@ -2758,7 +2758,7 @@ int btrfs_search_old_slot(struct btrfs_r
 	int level;
 	int lowest_unlock = 1;
 	u8 lowest_level = 0;
-	int prev_cmp;
+	int prev_cmp = -1;
 
 	lowest_level = p->lowest_level;
 	WARN_ON(p->nodes[0] != NULL);
@@ -2769,7 +2769,6 @@ int btrfs_search_old_slot(struct btrfs_r
 	}
 
 again:
-	prev_cmp = -1;
 	b = get_old_root(root, time_seq);
 	level = btrfs_header_level(b);
 	p->locks[level] = BTRFS_READ_LOCK;
@@ -2787,6 +2786,11 @@ again:
 	 */
 	btrfs_unlock_up_safe(p, level + 1);
 
+	/*
+	 * Since we can unwind eb's we want to do a real search every
+	 * time.
+	 */
+	prev_cmp = -1;
 	ret = key_search(b, key, level, &prev_cmp, &slot);
 
 	if (level != 0) {
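
For readers new to this code path, a minimal stand-alone sketch of the
prev_cmp short-circuit that the fix bypasses follows. This is illustrative
only: the demo_* types and helpers are simplified stand-ins, not the real
fs/btrfs/ctree.c structures. Once a key has been found exactly at slot 0
(*prev_cmp == 0), the search is skipped on the assumption that the key stays
at slot 0 further down; passing -1 disables that shortcut, which is what the
patch forces in btrfs_search_old_slot() because replayed (unwound) buffers no
longer honour the slot-0 assumption.

/*
 * Simplified illustration of a prev_cmp-style short-circuit
 * (hypothetical demo_* types; not the real fs/btrfs/ctree.c code).
 */
#include <stdio.h>

struct demo_key {
	unsigned long long objectid;
};

struct demo_node {
	struct demo_key keys[16];
	int nritems;
};

/* Ordinary binary search: returns 0 on an exact match, 1 otherwise. */
static int demo_bin_search(const struct demo_node *node,
			   const struct demo_key *key, int *slot)
{
	int lo = 0, hi = node->nritems;

	while (lo < hi) {
		int mid = lo + (hi - lo) / 2;

		if (node->keys[mid].objectid < key->objectid)
			lo = mid + 1;
		else
			hi = mid;
	}
	*slot = lo;
	return (lo < node->nritems &&
		node->keys[lo].objectid == key->objectid) ? 0 : 1;
}

/*
 * If the previous level matched exactly at slot 0 (*prev_cmp == 0), the
 * shortcut trusts the slot-0 invariant and skips the search.  A caller
 * that passes *prev_cmp == -1 always gets the real binary search.
 */
static int demo_key_search(const struct demo_node *node,
			   const struct demo_key *key,
			   int *prev_cmp, int *slot)
{
	if (*prev_cmp != 0) {
		*prev_cmp = demo_bin_search(node, key, slot);
		return *prev_cmp;
	}

	*slot = 0;	/* shortcut: assume the key stayed at slot 0 */
	return 0;
}

int main(void)
{
	struct demo_node node = { .keys = { {5}, {9}, {12} }, .nritems = 3 };
	struct demo_key key = { 9 };
	int prev_cmp = -1;	/* force a real binary search */
	int slot;

	demo_key_search(&node, &key, &prev_cmp, &slot);
	printf("found objectid %llu at slot %d\n", key.objectid, slot);
	return 0;
}

In this toy example the search reports slot 1; had prev_cmp been preset to 0,
the shortcut would have blindly reported slot 0. That stale answer on a
buffer whose contents were rewound is the kind of mismatch the commit message
describes, which is why the patch resets prev_cmp to -1 before every
key_search() call in the old-slot path.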