3.17-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Mark Fasheh <mfasheh@xxxxxxx>

commit 0b4699dcb65c2cff793210b07f40b98c2d423a43 upstream.

btrfs_drop_snapshot() leaves subvolume qgroup items on disk after
completion. This can cause problems with snapshot creation. If a new
snapshot tries to claim the deleted subvolume's id, btrfs will get
-EEXIST from add_qgroup_item() and go read-only.

The following commands will reproduce this problem (assume btrfs is on
/dev/sda and is mounted at /btrfs):

mkfs.btrfs -f /dev/sda
mount -t btrfs /dev/sda /btrfs/
btrfs quota enable /btrfs/
btrfs su sna /btrfs/ /btrfs/snap
btrfs su de /btrfs/snap
sleep 45
umount /btrfs/
mount -t btrfs /dev/sda /btrfs/

We can fix this by catching -EEXIST in add_qgroup_item() and
initializing the existing items. We have the problem of orphaned
relation items being on disk from an old snapshot, but that is outside
the scope of this patch.

Signed-off-by: Mark Fasheh <mfasheh@xxxxxxx>
Signed-off-by: Chris Mason <clm@xxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

---
 fs/btrfs/qgroup.c |   10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -551,9 +551,15 @@ static int add_qgroup_item(struct btrfs_
 	key.type = BTRFS_QGROUP_INFO_KEY;
 	key.offset = qgroupid;
 
+	/*
+	 * Avoid a transaction abort by catching -EEXIST here. In that
+	 * case, we proceed by re-initializing the existing structure
+	 * on disk.
+	 */
+
 	ret = btrfs_insert_empty_item(trans, quota_root, path, &key,
 				      sizeof(*qgroup_info));
-	if (ret)
+	if (ret && ret != -EEXIST)
 		goto out;
 
 	leaf = path->nodes[0];
@@ -572,7 +578,7 @@ static int add_qgroup_item(struct btrfs_
 	key.type = BTRFS_QGROUP_LIMIT_KEY;
 	ret = btrfs_insert_empty_item(trans, quota_root, path, &key,
 				      sizeof(*qgroup_limit));
-	if (ret)
+	if (ret && ret != -EEXIST)
 		goto out;
 
 	leaf = path->nodes[0];
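
For anyone reviewing the logic rather than the diff context, here is a
minimal userspace sketch of the error-handling pattern the patch
introduces: tolerate -EEXIST from the insert, then fall through and
re-initialize the existing item in place instead of aborting. This is
not the kernel code; insert_empty_item() and add_item() are hypothetical
stand-ins for btrfs_insert_empty_item() and add_qgroup_item().

/*
 * Userspace sketch of the pattern the patch applies, not actual
 * kernel code: treat -EEXIST from the insert as non-fatal and
 * fall through to re-initialize the stale item. All names below
 * are hypothetical stand-ins for the btrfs tree operations.
 */
#include <errno.h>
#include <stdio.h>

struct item {
	unsigned long long id;
	unsigned long long data;
	int in_use;
};

#define NSLOTS 8
static struct item table[NSLOTS];

/* Stand-in for btrfs_insert_empty_item(): -EEXIST if already present. */
static int insert_empty_item(unsigned long long id, struct item **out)
{
	struct item *slot = &table[id % NSLOTS];

	*out = slot;
	if (slot->in_use && slot->id == id)
		return -EEXIST;

	slot->in_use = 1;
	slot->id = id;
	return 0;
}

/* Stand-in for add_qgroup_item() with the fix applied. */
static int add_item(unsigned long long id)
{
	struct item *item;
	int ret;

	ret = insert_empty_item(id, &item);
	if (ret && ret != -EEXIST)	/* the fix: tolerate -EEXIST */
		return ret;

	item->data = 0;			/* re-initialize in place */
	return 0;
}

int main(void)
{
	/* A leftover item with the same id no longer aborts the insert. */
	printf("first add:  %d\n", add_item(257));
	printf("second add: %d\n", add_item(257));
	return 0;
}

Both calls return 0 here; before the fix, the second insert's -EEXIST
would have been treated as fatal, which in the kernel maps to the
filesystem going read-only.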