Re: Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.

I forgot to paste the ticket about LXD and out-of-date information: https://github.com/lxc/lxd/issues/4445

--
Regards,
Łukasz Czerpak




On 9 Dec 2019, at 14:50, Łukasz Czerpak <lukasz.czerpak@xxxxxxxxx> wrote:

hi,

It’s Ubuntu 18.04.3:

$ lvm version
 LVM version:     2.02.176(2) (2017-11-03)
 Library version: 1.02.145 (2017-11-03)
 Driver version:  4.37.0

$ uname -a
Linux gandalf 4.15.0-72-generic #81-Ubuntu SMP Tue Nov 26 12:20:02 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

It’s weird, as the same error occurred again a few minutes ago. I wanted to take a snapshot of a thin volume, and it first returned the following error:

$ lvcreate -s --name vmail-data-snapshot vg1/vmail-data
Using default stripesize 64.00 KiB.
Can't create snapshot vmail-data-snapshot as origin vmail-data is not suspended.
Failed to suspend thin snapshot origin vg1/vmail-data.

Then I tried with different volume:

$ lvcreate -s --name owncloud-data-snapshot vg1/owncloud-data
Using default stripesize 64.00 KiB.
Thin pool vg1-thinpool1-tpool (253:2) transaction_id is 574, while expected 572.
Failed to suspend vg1/thinpool1 with queued messages.
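
For reference, the kernel-side transaction_id quoted in that message can also be read straight from device-mapper (device name taken from the error; just a sketch, I did not keep the actual output):

$ dmsetup status vg1-thinpool1-tpool
# the first number after "thin-pool" in the status line is the kernel's transaction_id
$ grep transaction_id /etc/lvm/backup/vg1
# the value lvm2 expects, taken from the last automatic metadata backup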

The same error appeared when I then tried to export an LXD container:

$ lvcreate -s --name owncloud-data-snapshot vg1/owncloud-data
 Using default stripesize 64.00 KiB.
 Thin pool vg1-thinpool1-tpool (253:2) transaction_id is 574, while expected 572.
 Failed to suspend vg1/thinpool1 with queued messages.

I did a vgcfgbackup and the transaction_id for thinpool1 was 573. I really don’t know what’s going on.
I’m wondering if this might be caused by LXD running as a snap, which is known not to interact with the system's lvmetad and thus works with out-of-date information. LXD is configured to use thinpool1 as storage.
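
If stale lvmetad data is indeed involved, refreshing the cache and re-checking should at least be harmless (just a guess on my side, not something I have verified helps here):

$ pvscan --cache   # repopulate lvmetad from the on-disk metadata
$ lvs -a vg1       # then re-check the pool and thin volumes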

Maybe after I did the vgcfgbackup, updated the mismatching transaction_id and restored it with vgcfgrestore, I merely regained access to the data and got a false impression that everything was fixed.
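
For completeness, roughly the sequence I followed from that blog post (the backup file path is illustrative; note that vgcfgrestore refuses to restore a VG containing thin pools without --force, so this is a last resort, not a recommendation):

$ vgchange -an vg1
$ vgcfgbackup -f /root/vg1.backup vg1
# edit transaction_id in /root/vg1.backup so it matches the value the kernel reports
$ vgcfgrestore --force -f /root/vg1.backup vg1
$ lvchange -ay vg1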


Best Regards,
Łukasz Czerpak




On 9 Dec 2019, at 14:36, Zdenek Kabelac <zkabelac@xxxxxxxxxx> wrote:

On 8 Dec 2019, at 21:47, Łukasz Czerpak wrote:
After googling a lot, I figured out what to do and it worked; at least I can access the most critical data.
I’ve followed instructions from this blog post: https://blog.monotok.org/lvm-transaction-id-mismatch-and-metadata-resize-error/
However, I have no idea what the root cause of this was. I hope I can fully recover the volumes without re-creating the whole VG.
In case I did something terribly wrong that looks like a solution now but may cause issues in the future, I would appreciate any hints.

$ lvchange -ay vg1
WARNING: Not using lvmetad because a repair command was run.
Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.

Hi

What are your lvm2 & kernel versions?

This difference is too big for 'recent' versions - the gap should never be more
than one - unless you are using an old kernel & old lvm2.

Regards

Zdenek



_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
