Re: [PATCH v11 12/40] btrfs: calculate allocation offset for conventional zones

On 12/21/20 10:49 PM, Naohiro Aota wrote:
Conventional zones do not have a write pointer, so we cannot use it to
determine the allocation offset if a block group contains a conventional
zone.

Instead, we can use the end of the last allocated extent in the block
group as the allocation offset.
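
As a rough sketch of that calculation (the exact body is trimmed from the
hunk quoted below, so names like found_key and offset_ret follow
calculate_alloc_pointer() there), the offset is the distance from the
block group start to the end of the last extent item found inside it:

	/*
	 * Sketch only: found_key is the key of the last extent item that
	 * lands in this block group. Metadata items store a tree level in
	 * key.offset, so their length comes from the nodesize instead.
	 */
	btrfs_item_key_to_cpu(path->nodes[0], &found_key, path->slots[0]);
	if (found_key.type == BTRFS_METADATA_ITEM_KEY)
		length = fs_info->nodesize;
	else
		length = found_key.offset;
	*offset_ret = found_key.objectid + length - cache->start;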

For a new block group, we cannot calculate the allocation offset by
consulting the extent tree, because doing so can deadlock by taking an
extent buffer lock after the chunk mutex (which is already taken in
btrfs_make_block_group()). Since it is a new block group, we can simply
set the allocation offset to 0 anyway.
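
A minimal sketch of how that decision could look inside
btrfs_load_block_group_zone_info() once a conventional zone is detected,
using the new bool parameter passed from block-group.c below (simplified;
the real code still has to reconcile this with the offsets derived from
any sequential zones):

	if (new) {
		/* Freshly created block group: nothing is allocated yet. */
		cache->alloc_offset = 0;
	} else {
		u64 last_alloc = 0;

		/* Walk the extent tree for the end of the last extent. */
		ret = calculate_alloc_pointer(cache, &last_alloc);
		if (ret)
			return ret;
		cache->alloc_offset = last_alloc;
	}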

Signed-off-by: Naohiro Aota <naohiro.aota@xxxxxxx>
---
  fs/btrfs/block-group.c |  4 +-
  fs/btrfs/zoned.c       | 93 +++++++++++++++++++++++++++++++++++++++---
  fs/btrfs/zoned.h       |  4 +-
  3 files changed, 92 insertions(+), 9 deletions(-)

diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index 8c029e45a573..9eb1e3aa5e0f 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -1867,7 +1867,7 @@ static int read_one_block_group(struct btrfs_fs_info *info,
  			goto error;
  	}
-	ret = btrfs_load_block_group_zone_info(cache);
+	ret = btrfs_load_block_group_zone_info(cache, false);
  	if (ret) {
  		btrfs_err(info, "zoned: failed to load zone info of bg %llu",
  			  cache->start);
@@ -2150,7 +2150,7 @@ int btrfs_make_block_group(struct btrfs_trans_handle *trans, u64 bytes_used,
  	if (btrfs_fs_compat_ro(fs_info, FREE_SPACE_TREE))
  		cache->needs_free_space = 1;
-	ret = btrfs_load_block_group_zone_info(cache);
+	ret = btrfs_load_block_group_zone_info(cache, true);
  	if (ret) {
  		btrfs_put_block_group(cache);
  		return ret;
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index adca89a5ebc1..ceb6d0d7d33b 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -897,7 +897,62 @@ int btrfs_ensure_empty_zones(struct btrfs_device *device, u64 start, u64 size)
  	return 0;
  }
-int btrfs_load_block_group_zone_info(struct btrfs_block_group *cache)
+static int calculate_alloc_pointer(struct btrfs_block_group *cache,
+				   u64 *offset_ret)
+{
+	struct btrfs_fs_info *fs_info = cache->fs_info;
+	struct btrfs_root *root = fs_info->extent_root;
+	struct btrfs_path *path;
+	struct btrfs_key key;
+	struct btrfs_key found_key;
+	int ret;
+	u64 length;
+
+	path = btrfs_alloc_path();
+	if (!path)
+		return -ENOMEM;
+
+	key.objectid = cache->start + cache->length;
+	key.type = 0;
+	key.offset = 0;
+
+	ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
+	/* We should not find the exact match */
+	if (ret <= 0) {
+		ret = -EUCLEAN;
+		goto out;
+	}

We're eating the return value here if ret < 0, so I'd rather we do something like

if (!ret)
	ret = -EUCLEAN;
if (ret < 0)
	goto out;
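
Folded into the quoted hunk, that would read something like:

	ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
	/* An exact match for this key is never expected here. */
	if (!ret)
		ret = -EUCLEAN;
	/* Pass real errors from btrfs_search_slot() up instead of eating them. */
	if (ret < 0)
		goto out;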

Thanks,

Josef


