Why does the dm-thin pool metadata space map use a 4K page to carry the index?

Hi

Looking at the code, the metadata space map uses the following structure, which occupies a
single 4K block on disk, to carry the disk_index_entry array.

The on-disk format of the metadata space map's bitmap root is:
#define MAX_METADATA_BITMAPS 255
struct disk_metadata_index {
    __le32 csum;
    __le32 padding;
    __le64 blocknr;

    struct disk_index_entry index[MAX_METADATA_BITMAPS];
} __packed;
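
For reference, each disk_index_entry describes one bitmap block (as I read it in
dm-space-map-common.h; the exact attributes may vary between kernel versions):

struct disk_index_entry {
    __le64 blocknr;           /* where the bitmap block lives */
    __le32 nr_free;           /* free entries remaining in that bitmap */
    __le32 none_free_before;  /* hint: no free entry below this index */
} __packed;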

It is read in when the pool is opened:
sm_ll_open_metadata
  -> set ll callbacks
  -> ll->open_index
metadata_ll_open
---
    r = dm_tm_read_lock(ll->tm, ll->bitmap_root,
                &index_validator, &block);
    if (r)
        return r;

    memcpy(&ll->mi_le, dm_block_data(block), sizeof(ll->mi_le));
    dm_tm_unlock(ll->tm, block);

---
The size of struct disk_metadata_index is 4096 bytes, so it fills exactly one metadata block.
Each disk_index_entry is 16 bytes and points to one bitmap block.

A bitmap block stores a 2-bit reference count per metadata block, so one bitmap block covers
about 4096 * 8 / 2 = 16K blocks.

With a metadata block size of 4K:

255 * 16K * 4K ~= 16G

So there is a ~16G limit on the size of the metadata device.
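
A quick back-of-the-envelope check (my own sketch; it ignores the small disk_bitmap_header
at the start of each bitmap block, so the real limit is slightly under 16G):

#include <stdio.h>

int main(void)
{
    /* dm-thin metadata block size, in bytes */
    unsigned long long block_size = 4096;

    /* each bitmap block holds a 2-bit reference count per metadata block */
    unsigned long long blocks_per_bitmap = block_size * 8 / 2;   /* ~16K */

    /* at most MAX_METADATA_BITMAPS (255) entries fit in the 4K index block */
    unsigned long long max_bitmaps = 255;

    unsigned long long max_bytes = max_bitmaps * blocks_per_bitmap * block_size;
    printf("max metadata device size ~= %llu bytes (%.2f GiB)\n",
           max_bytes, (double)max_bytes / (1ULL << 30));
    return 0;
}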

But why does it use this single 4K index block instead of a btree, the way the disk space map (disk sm) does?

The brb mechanism seems able to avoid the nested block allocation that happens
when doing COW on the metadata space map's btree.
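
(By brb I mean the ring buffer in dm-space-map-metadata.c; roughly, as I read it -- the
names below are a paraphrase, not an exact copy:)

enum block_op_type {
    BOP_INC,
    BOP_DEC
};

struct block_op {
    enum block_op_type type;
    dm_block_t block;
};

struct bop_ring_buffer {
    unsigned begin;
    unsigned end;
    struct block_op bops[MAX_RECURSIVE_ALLOCATIONS + 1];
};

/*
 * While the space map is already inside one of its own operations, further
 * ref-count changes caused by COW of its btrees are queued here and replayed
 * afterwards, rather than recursing into the allocator.
 */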

Would anyone please explain why it uses this 4K page instead of a btree?

 
Thanks in advance
Jianchao


