dm-thin - issue about the maximum size of the metadata device

Hi folks,

I am currently doing some experiments with the dm thin-provisioning target.
One of these experiments is trying to find out the largest thin volume that can be created on a pool.

As I know, each time blocks are provisioned from the pool, metadata is consumed to record the mapping information.
By running the lvdisplay command, we can observe the pool data and metadata usage, for example:
$ sudo lvdisplay | grep Allocated
  Allocated pool data    7.87%
  Allocated metadata    6.09%

And the following is quoted from thin-provisioning.txt (Documentation/device-mapper in the kernel source tree):
"As a guide, we suggest you calculate the number of bytes to use in the
metadata device as 48 * $data_dev_size / $data_block_size but round it up
to 2MB if the answer is smaller."

If the size of the metadata device is fixed at 16G, and the block size of the pool device is set to 64K,
then we may infer that the largest thin volume size is 21.33TB.

(48 * $data_dev_size / 64K = 16G
 $data_dev_size = 16G * 64K / 48 = 21.33TB)

If this inference is not correct, please let me know why.

Then I did the experiment with the following steps (a sketch of the commands follows the list):
1. Create a thin pool of size 21.33T (the largest size inferred above) on my RAID0, with block size 64K and metadata size 16G.
2. Create a thin volume with virtual size 21.33T.
3. dd data from /dev/urandom to the thin device.
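
For reference, the pool and volume were created with lvcreate roughly as follows (the volume group name "vg" and the LV names are placeholders, and the exact options may differ slightly from what I actually ran):

# 1. thin pool: 21.33T of data space, 64K block size, 16G metadata
$ sudo lvcreate --type thin-pool -L 21.33T --chunksize 64K --poolmetadatasize 16G -n pool0 vg
# 2. thin volume with a 21.33T virtual size
$ sudo lvcreate --type thin -V 21.33T --thinpool pool0 -n thin0 vg
# 3. fill the thin device
$ sudo dd if=/dev/urandom of=/dev/vg/thin0 bs=1M oflag=direct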

Finally, I observed that
  Allocated pool data    100.00%
  Allocated metadata     71.89%

It seemed that the pool data had already run out, but the metadata had not.
Does this mean that 16G of metadata is enough to record a thin device larger than 21.33T?
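
If I back-calculate from these numbers (assuming the reported percentages are accurate and the per-block cost stays roughly constant), the metadata cost per mapped block comes out well below the 48-byte guideline:

(mapped blocks = 21.33T / 64K ≈ 357.9 million
 metadata used = 16G * 71.89% ≈ 11.50G
 11.50G / 357.9 million ≈ 34.5 bytes per mapped block)

By that estimate, 16G of metadata would map roughly 16G / 34.5 * 64K ≈ 29-30T.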

I did another experiment. This time, I made the pool and the thin device as large as I could provide.

1. Create a thin pool of size 36.20T on my RAID0, with block size 64K and metadata size 16G.
2. Create a thin volume with virtual size 36.20T.
3. dd data to the thin device.
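
Applying the guideline formula from the documentation to this size suggests that a fully mapped pool would need more than the 16G of metadata I gave it, so I expected the metadata to run out before the data space:

(48 * 36.20T / 64K ≈ 27.15G > 16G)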

Finally, I observed that
  Allocated pool data    84.52%
  Allocated metadata    99.99%

And these messages were shown in dmesg:
device-mapper: space map metadata: out of metadata space
device-mapper: thin: pre_alloc_func: dm_thin_insert_block failed
device-mapper: space map metadata: out of metadata space
device-mapper: thin: commit failed, error = -28
device-mapper: thin: switching pool to read-only mode

In this experiment we ran out of metadata, and from the "Allocated pool data" field we inferred that the maximum thin device size is about 30.59TB. Is that correct?
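
For reference, the arithmetic behind that estimate, and the per-block metadata cost it implies (close to the ~34.5 bytes per block seen in the first experiment):

(36.20T * 84.52% ≈ 30.60T
 mapped blocks = 30.60T / 64K ≈ 513.4 million
 16G / 513.4 million ≈ 33.5 bytes per mapped block)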

Regards,
Burton

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
