Re: Why does dm-thin pool metadata space map use 4K page to carry index ?

Hi Joe

Thanks for your kind response.

On Thu, Sep 5, 2019 at 6:38 PM Joe Thornber <thornber@xxxxxxxxxx> wrote:
On Thu, Sep 05, 2019 at 02:43:28PM +0800, jianchao wang wrote:
> But why does it use this 4K page instead of a btree, as the disk sm does?
>
> The brb mechanism seems able to avoid the nested block allocation
> when doing COW on the metadata sm btree.
>
> Would anyone please help explain why it uses this 4K page instead of a
> btree?

It's a long time since I wrote this, so I can't remember the order in which things
were written.  It may well be that the brb mechanism for avoiding recursive allocations
came after the on-disk formats were defined.  Irrespective of that, the single page
pointing to index pages should perform better.
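
For reference, here is a minimal sketch of the single index page being discussed,
paraphrased from memory of drivers/md/persistent-data/dm-space-map-metadata.c; the
field names and the 255-entry count are assumptions and may differ between kernel
versions:

/*
 * Sketch of the metadata space map's on-disk index page (not the
 * authoritative kernel definitions; check dm-space-map-metadata.c
 * for your kernel version).
 */
#include <stdint.h>

/* One entry per 4K bitmap block; each bitmap tracks ~16K metadata
 * blocks at 2 bits per block. */
struct disk_index_entry {
	uint64_t blocknr;          /* where the bitmap block lives */
	uint32_t nr_free;          /* free blocks tracked by that bitmap */
	uint32_t none_free_before; /* allocation hint within the bitmap */
} __attribute__((packed, aligned(8)));

#define MAX_METADATA_BITMAPS 255

/* The whole index fits in one 4K metadata block, so finding the right
 * bitmap costs a single read of this page -- cheaper than walking a
 * btree, but it caps the number of bitmaps at 255. */
struct disk_metadata_index {
	uint32_t csum;
	uint32_t padding;
	uint64_t blocknr;
	struct disk_index_entry index[MAX_METADATA_BITMAPS];
} __attribute__((packed, aligned(8)));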

Is the 16G limit on the metadata device causing you issues?

Yes, we are planning to build a pool of at least 200T, with both normal thin devices
and snapshots running on it.  A smaller block size would be better, but 16G is not enough.
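
For context, the ~16G figure falls out of that fixed-size index: 255 bitmap entries,
each bitmap covering roughly 16K metadata blocks of 4 KiB each.  A rough
back-of-the-envelope sketch follows; the per-bitmap entry count used here,
(1 << 14) - 64 to leave room for the bitmap header, is an assumption based on my
reading of dm-space-map-metadata.h:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Assumed values, mirroring dm-space-map-metadata.h as I recall it. */
	const uint64_t block_size = 4096;                    /* metadata block size */
	const uint64_t entries_per_bitmap = (1 << 14) - 64;  /* 2-bit entries per 4K bitmap,
	                                                        minus room for the header */
	const uint64_t max_bitmaps = 255;                    /* entries in the single index page */

	uint64_t max_blocks = max_bitmaps * entries_per_bitmap;
	printf("max metadata blocks: %llu (~%llu GiB)\n",
	       (unsigned long long)max_blocks,
	       (unsigned long long)((max_blocks * block_size) >> 30));
	return 0;
}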

Actually, I have modified the metadata sm code to use a btree, as the disk sm does.
In my test environment, I have used ~20G of metadata.
 
Thanks
Jianchao
--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
