From the looks of it the disk, as provisioned out of an Azure pool, is likely backed by an enterprise RAID array. When the thin pool is set up with discard_passdown, the discards generated by removing the snapshot are also pushed down to the underlying hypervisor or disk array. You would need to wait until that process has completed before making any comparisons.
ThinVolGrp-ThinDataLV-tpool: 0 1006632960 thin-pool 1 4878/4145152 8325/7864320 - rw discard_passdown queue_if_no_space - 1024
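You can also read the pool's current discard and chunk settings straight from LVM rather than from dmsetup; something along these lines should work (field names as listed by lvs -o help):

lvs -o lv_name,discards,chunk_size,zero ThinVolGrp/ThinDataLV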
As per the man page:
--discards passdown|nopassdown|ignore
       Specifies how the device-mapper thin pool layer in the kernel should handle discards.
       ignore causes the thin pool to ignore discards.
       nopassdown causes the thin pool to process discards itself to allow reuse of unneeded extents in the thin pool.
       passdown causes the thin pool to process discards itself (like nopassdown) and pass the discards to the underlying device.
Try the same operation after changing the discard behavior of the thin pool:
lvchange --discards nopassdown VG/ThinPoolLV
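To confirm the change took effect and to time the snapshot removal again, something like the following should do (the snapshot name below is only a placeholder):

lvs -o lv_name,discards ThinVolGrp/ThinDataLV
time lvremove -y ThinVolGrp/ext4.ThinLV_snap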
--
Kind regards,
Erwin van Londen
EvL Consulting
ABN 43 560 744 507
Mobile | +61-434-325795 |
Phone | +61-7-53213176 |
Web | http://erwinvanlonden.net |
Conference | https://iene.3cx.com.au/meet/erwinvlwebmeet |
Web Talk | https://iene.3cx.com.au/callus/#erwinvlwebphone |
On Mon, 2022-10-17 at 15:10 +0200, Zdenek Kabelac wrote:
On 14. 10. 22 at 21:31, Mitta Sai Chaithanya wrote:

Hi Zdenek Kabelac,

Thanks for your quick reply and suggestions.

We conducted couple of tests on Ubuntu 22.04 and observed similar performance behavior post thin snapshot deletion without writing any data anywhere.

*Commands used to create Thin LVM volume*:
- lvcreate -L 480G --poolmetadataspare n --poolmetadatasize 16G --chunksize=64K --thinpool ThinDataLV ThinVolGrp
- lvcreate -n ext4.ThinLV -V 100G --thinpool ThinDataLV ThinVolGrp

Hi

So now it's clear you are talking about thin snapshots - this is a very different story going on here (as we normally use term "COW" volumes for thick old snapshots)

I'll consult more with thinp author - however it does look to me you are using same device to store data & metadata.

This is always a highly sub-optimal solution - the metadata device is likely best to be stored on fast (low latency) devices.

So my wild guess - you are possibly using rotational device backend to store your thin-pools metadata volume and then your setups gets very sensitive on the metadata fragmentation.

Thin-pool was designed to be used with SSD/NVMe for metadata which is way less sensitive on seeking.

So when you 'create' snapshot - metadata gets updated - when you remove thin snapshot - metadata gets again a lots of changes (especially when your origin volume is already populated) - and fragmentation is inevitable and you are getting high penalty of holding metadata device on the same drive as your data device.

So while there are some plans to improve some metadata logistic - I'd not expect miracles on your particular setup - I'd highly recommend to plug-in some SSD/NVMe storage for storing your thinpool metadata - this is the way to go to get better 'benchmarking' numbers here.

For an improvement on your setup - try to seek larger chunk size values where your data 'sharing' is still reasonably valuable - this depends on data-type usage - but chunk size 256K might be possibly a good compromise (with disabled zeroing - if you hunt for the best performance).

Regards
Zdenek

PS: later mails suggest you are using some 'MS Azure' devices?? - so please redo your testing with your local hardware/storage - where you have precise guarantees of storage drive performance - testing in the Cloud is random by design....
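For what it's worth, a rough sketch of what Zdenek describes - thin-pool metadata on a separate fast device, a larger chunk size and zeroing disabled - could look like this (the device names /dev/sdb and /dev/nvme0n1 are only placeholders for your data and metadata disks):

vgextend ThinVolGrp /dev/nvme0n1                        # add the fast device to the VG
lvcreate -L 480G -n ThinDataLV ThinVolGrp /dev/sdb      # data LV on the large/slow disk
lvcreate -L 16G -n ThinMetaLV ThinVolGrp /dev/nvme0n1   # metadata LV on the fast disk
lvconvert --type thin-pool --poolmetadata ThinVolGrp/ThinMetaLV --chunksize 256K ThinVolGrp/ThinDataLV
lvchange --zero n ThinVolGrp/ThinDataLV                 # skip zeroing of newly provisioned chunks
lvcreate -n ext4.ThinLV -V 100G --thinpool ThinDataLV ThinVolGrp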
_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/