On Tue, Nov 26, 2024 at 09:43:42PM -0800, Christoph Hellwig wrote:
> On Tue, Nov 26, 2024 at 12:27:29PM -0800, Darrick J. Wong wrote:
> > From: Darrick J. Wong <djwong@xxxxxxxxxx>
> >
> > If you happen to be running fstests on a bunch of VMs and the VMs all
> > have access to a shared disk pool, then it's possible that two VMs
> > could be running generic/459 at exactly the same time.  In that case,
> > it's a VERY bad thing to have two nodes trying to create an LVM
> > volume group named "vg_459", because one node will succeed, after
> > which the other node will see the vg_459 volume group that it didn't
> > create:
> >
> >   A volume group called vg_459 already exists.
> >   Logical volume pool_459 already exists in Volume group vg_459.
> >   Logical Volume "lv_459" already exists in volume group "vg_459"
> >
> > But then, because this is bash, we don't abort the test script and
> > continue executing.  If we're lucky, this fails when
> > /dev/vg_459/lv_459 disappears before mkfs can run:

> How the F.. do the VG names leak out of the VM scope?

I ran fstests-xfs on my fstests-ocfs2 cluster, wherein all nodes have
write access to all disks because we're all one big happy fleet.  Each
node gets a list of which disks it can use for fstests, so in theory
there's no overlap ... until two machines tried to create LVM VGs with
the same name at exactly the same time and tripped.

A sane prod system would adjust the access controls per fstests run,
but I'm too lazy to do that every night.

(Yeah, I just confessed to occasionally fstesting ocfs2.)

> That being said, the unique names looks fine to me, so:
>
> Reviewed-by: Christoph Hellwig <hch@xxxxxx>

Thanks!

--D
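For illustration, the uniqueness idea could look something like the sketch
below.  This is just an assumption about the approach, not the actual
fstests patch: the suffix scheme (short hostname plus PID) and the variable
names are hypothetical.

```shell
#!/bin/bash
# Hypothetical sketch: derive per-host, per-run unique LVM object names
# for a test like generic/459, so two VMs sharing a disk pool cannot
# both try to create a volume group literally named "vg_459".
seq=459

# hostname + PID makes the names unique across nodes and across runs;
# the real fstests change may use a different uniquifier.
suffix="$(hostname -s)_$$"

vgname="vg_${seq}_${suffix}"
lvname="lv_${seq}_${suffix}"
poolname="pool_${seq}_${suffix}"

echo "$vgname"
echo "$lvname"
echo "$poolname"
```

With names derived this way, a second node racing on the same shared disks
would attempt a differently named VG, so `vgcreate` on one node can no
longer trip over a VG the other node just made.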