Re: Running thin_trim before activating a thin pool

On Sun, Jan 30, 2022 at 06:56:43PM +0100, Zdenek Kabelac wrote:
> On 30. 01. 22 at 18:30, Demi Marie Obenour wrote:
> > On Sun, Jan 30, 2022 at 12:18:32PM +0100, Zdenek Kabelac wrote:
> > > Discarding thins is itself AFAIK pretty fast - unless you have massively
> > > sized thin devices with many GiB of metadata; obviously you cannot process
> > > that amount of metadata in nanoseconds (and there are kernel patches
> > > prepared to make it even faster)
> > 
> > Would you be willing and able to share those patches?
> 
> They always land in the upstream kernel once they are all validated &
> tested (recent kernels already have many speed enhancements).

Thanks!  Which mailing list should I be watching?

> > > The problem is the speed of discard on the physical devices.
> > > You could actually try to feel the difference with:
> > > lvchange --discards passdown|nopassdown thinpool
> > 
> > In Qubes OS I believe we do need the discards to be passed down
> > eventually, but I doubt it needs to be synchronous.  Being able to run
> > the equivalent of `fstrim -av` periodically would be amazing.  I’m
> > CC’ing Marek Marczykowski-Górecki (Qubes OS project lead) in case he
> > has something to say.
> 
> You could easily run individual blkdiscards in parallel for your thin LVs...
> For most modern drives, though, it's somewhat a waste of time...
> 
> Those trimming tools should be used when they solve some real
> problem; running them just for fun is a waste of energy & performance...

My understanding (which could be wrong) is that periodic trim is
necessary for SSDs: without it the firmware does not know which blocks
are free, which hurts garbage collection and sustained write performance.
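
For concreteness, here is roughly the periodic pass I have in mind
(the LV names are made up).  fstrim only discards blocks the mounted
filesystem knows are free, while blkdiscard discards *every* block on
the device, so the latter is only safe for volumes whose contents are
disposable:

    # Trim free space on all mounted filesystems that support discard:
    fstrim -av
    # On systemd distributions, the stock timer does the same weekly:
    systemctl enable --now fstrim.timer

    # Your parallel-blkdiscard suggestion, for thin LVs whose data is
    # no longer needed (this DESTROYS their contents):
    for lv in /dev/vg0/disposable1 /dev/vg0/disposable2; do
        blkdiscard "$lv" &
    done
    wait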

> > > Also, it's very important to keep metadata on a fast storage device (SSD/NVMe)!
> > > Keeping metadata on the same HDD spindle as the data is always going to feel slow
> > > (in fact it's quite pointless to talk about performance while using an HDD...)
> > 
> > That explains why I had such a horrible experience with my initial
> > (split between NVMe and HDD) install.  I would not be surprised if some
> > or all of the metadata volume wound up on the spinning disk.
> 
> With lvm2, the user can always 'pvmove' any LV to any desired PV.
> There is not yet any 'smart' logic to do this automatically.

Good point.  I was probably unaware of that at the time.
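
For the archives, in case someone else hits this: something like the
following should move just the pool's metadata sub-LV onto the fast PV
(the VG, pool, and device names here are made up; if your lvm2 version
refuses to move _tmeta while the pool is active, deactivate the pool
first):

    # See which PVs each LV, including hidden sub-LVs, currently occupies:
    lvs -a -o name,devices vg0

    # Move only the thin pool's metadata LV from the HDD to the NVMe PV:
    pvmove -n pool0_tmeta /dev/sdb1 /dev/nvme0n1p3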

-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab


_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
