Hello:
Recently I found that the used space of a thin-pool keeps rising
when I use dm-thin as follows:
// create dm-thin
dmsetup create linear_1 --table "0 2097152 linear /dev/sdc 0"
dmsetup create linear_2 --table "0 16777216 linear /dev/sdc 2097152"
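// zero the start of the metadata device so the pool is created fresh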
dd if=/dev/zero of=/dev/mapper/linear_1 bs=4096 count=1
dmsetup create pool --table "0 16777216 thin-pool /dev/mapper/linear_1 /dev/mapper/linear_2 128 0 1 skip_block_zeroing"
dmsetup message /dev/mapper/pool 0 "create_thin 0"
dmsetup create thin --table "0 14680064 thin /dev/mapper/pool 0"
// mkfs and mount with discard
mkfs.ext4 /dev/mapper/thin
mount /dev/mapper/thin /mnt/test -o discard
cd /mnt/test
// create a 17 KiB file
dd if=/dev/random of=testfile bs=1k count=17 oflag=direct
sync
// truncate the file and rewrite it many times (each dd run truncates testfile first)
dd if=/dev/random of=testfile bs=1k count=17 oflag=direct
sync
...
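To watch the used space grow, the pool status can be polled between
rewrites. A minimal sketch, assuming the usual thin-pool status layout
where the sixth whitespace-separated field is
<used_data_blocks>/<total_data_blocks>:

# print used/total data blocks once per second while the loop above runs
while true; do
    dmsetup status pool | awk '{ print $6 }'   # e.g. "17/131072", used keeps growing
    sleep 1
done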
Ext4 issues discard I/O to dm-thin when truncating a file. However,
DATA_DEV_BLOCK_SIZE_MIN_SECTORS is 128 sectors (64 KiB), so the discard
for a 17 KiB file covers less than one pool block. As a result, the
discard bio is simply ended in process_discard_bio() without unmapping
anything, and more and more blocks leak.
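As far as I can tell, process_discard_bio() maps the discard onto
complete pool blocks by rounding its start up and its end down to block
boundaries, and ends the bio immediately when that range is empty. A
minimal sketch of that arithmetic with the 128-sector block size from
the pool table above (the start sector is an arbitrary example value):

# a 17 KiB (34-sector) discard can never cover a complete 128-sector block
start=1000                          # example start sector of the discard
len=34                              # 17 KiB = 34 sectors
begin=$(( (start + 127) / 128 ))    # first fully covered block (round up)
end=$(( (start + len) / 128 ))      # one past the last covered block (round down)
echo "begin=$begin end=$end"        # begin >= end: empty range, nothing is freed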
I'm curious about the reason for setting
DATA_DEV_BLOCK_SIZE_MIN_SECTORS to 64 KiB. Is there a specific
consideration behind it? Would it be possible to lower this minimum
to a smaller value, such as 4 KiB?