dm thin provision, pool full

You guys probably already know about this, but I was playing with kernel 3.2.0 and the device-mapper thin-provisioned snapshots, and it doesn't seem like any error is reported when the pool is full. I was running some write tests, and one of them just seemed to go into an eternal D state. Checking iostat showed the disks were idle. Running 'dmsetup status' returned the following:

thin: 0 41943040 thin 39832576 41943039
pool: 0 41943040 thin-pool 0 622/243968 81920/81920 -
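For reference, a minimal sketch of the kind of tables that give this geometry (/dev/sdb1 and /dev/sdb2 are placeholders for the metadata and data devices, and the 32768-block low water mark is arbitrary; 512-sector data blocks over 41943040 sectors give the 81920 data blocks shown above):

# pool table: <metadata dev> <data dev> <data block size (sectors)> <low water mark (blocks)>
dmsetup create pool --table "0 41943040 thin-pool /dev/sdb1 /dev/sdb2 512 32768"

# create thin device id 0 in the pool, then activate a full-sized thin volume on it
dmsetup message /dev/mapper/pool 0 "create_thin 0"
dmsetup create thin --table "0 41943040 thin /dev/mapper/pool 0"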


That 81920/81920 in the pool status line is reporting data blocks in use / total data blocks, correct?
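If so, something like this awk one-liner ought to work for watching how full the pool is (assuming 'pool' is the pool device name, and that the sixth field of 'dmsetup status pool' is the <used>/<total> data block pair as Documentation/device-mapper/thin-provisioning.txt describes):

# field 6 of the thin-pool status line is "<used data blocks>/<total data blocks>"
dmsetup status pool | awk '{ split($6, b, "/"); printf "%d/%d data blocks (%.0f%% full)\n", b[1], b[2], 100 * b[1] / b[2] }'

With the numbers above that prints 81920/81920 data blocks (100% full), which matches the writer sitting in D state.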


--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel

