Re: [PATCH] Handle system crappiness.

On Thu, Mar 19, 2009 at 01:52:14PM -0500, David Lehman wrote:
> 
> Overall, I really like it. I have a couple of nits to pick and a couple
> of minor concerns, noted below.
> 
> On Wed, 2009-03-18 at 19:23 +0100, Joel Granados Moreno wrote:
> > * storage/__init__.py (createSuggestedVGName): Take into account the new
> >   lvm black list.
> > 
> > * storage/devicelibs/lvm.py (zeroLvmMetadata): zero out the LVM metadata
> >   only.  This basically means from byte 513 on.  This will also
> >   ignore the first 512 bytes if we are on a partition, but that does not
> >   matter because LVM does not touch those bits anyway.
> 
> This seems like an unnecessary addition to me. We can just use
> DeviceFormat.destroy, which zeros the first and last 1MB of a device.
> 
> It might need that handling for a too-small device that you have in the
> lvm function, though.

Ok will add the handling.
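
For anyone following along, the zero-the-ends behavior of DeviceFormat.destroy
that Dave describes (first and last 1MB), plus the clamping needed for a
too-small device, amounts to roughly this -- a standalone sketch, not the
actual anaconda code; zero_device_ends is a hypothetical name:

```python
import os

def zero_device_ends(path, size=1024 * 1024):
    """Zero the first and last `size` bytes of a device or file.

    Mirrors the described DeviceFormat.destroy behavior; for devices
    smaller than 2 * size the writes are clamped so they never overlap
    past the end of the device.
    """
    fd = os.open(path, os.O_RDWR)
    try:
        length = os.lseek(fd, 0, os.SEEK_END)
        # zero the head (the whole device, if it is smaller than `size`)
        head = min(size, length)
        os.lseek(fd, 0, os.SEEK_SET)
        os.write(fd, b"\0" * head)
        # zero the tail, without re-zeroing bytes the head pass covered
        if length > size:
            tail_start = max(length - size, head)
            os.lseek(fd, tail_start, os.SEEK_SET)
            os.write(fd, b"\0" * (length - tail_start))
    finally:
        os.close(fd)
```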

> 
> > * storage/devicelibs/lvm.py (blacklistVG): New function, add a VG to
> >   the black list.
> > * storage/devicelibs/lvm.py (vgreduce): introduce a new rm argument;
> >   rm=True means use the lvm --removemissing option.
> > 
> > * storage/devices.py (LVMVolumeGroupDevice.complete): New function to
> >   evaluate if VG is consistent.
> > * storage/devices.py (LVMLogicalVolumeDevice.complete): Likewise for
> >   LVs.
> > 
> > * storage/devicetree.py (_handleInconsistencies): New function intended to
> >   catch all unwanted behavior from the system before continuing.
> > * storage/devicetree.py (questionReinitILVM): New function intended
> >   to ask the user what to do in case we find inconsistent LVM metadata.
> > 
> > * storage/formats/lvmpv.py (LVMPhysicalVolume.destroy): Use the
> >   zeroLvmMetaData in case pvremove fails.
> > ---
> >  storage/__init__.py       |    6 ++-
> >  storage/devicelibs/lvm.py |   47 ++++++++++++++---
> >  storage/devices.py        |   32 +++++++++++-
> >  storage/devicetree.py     |  124 ++++++++++++++++++++++++++++++++++++++++++++-
> >  storage/formats/lvmpv.py  |    6 ++-
> >  5 files changed, 200 insertions(+), 15 deletions(-)
> > 
> ...
> > diff --git a/storage/devices.py b/storage/devices.py
> > index 0922778..1a16a0e 100644
> > --- a/storage/devices.py
> > +++ b/storage/devices.py
> > @@ -1632,9 +1632,15 @@ class LVMVolumeGroupDevice(DMDevice):
> >  
> >          self.teardown()
> >  
> > -        lvm.vgremove(self.name)
> > -        self.notifyKernel()
> > -        self.exists = False
> > +        # this sometimes fails for some reason.
> > +        try:
> > +            lvm.vgreduce(self.name, [], rm=True)
> > +            lvm.vgremove(self.name)
> > +        except lvm.LVMError:
> > +            raise DeviceError("Could not completely remove VG %s" % self.name)
> > +        finally:
> > +            self.notifyKernel()
> > +            self.exists = False
> 
> This looks like something we should figure out. Maybe with program.log
> this can be identified and sorted out?

Yep, we can log it and hope that the lvm --config filter makes lvm
ignore the device while we run the lvm commands.
> 
> >  
> >      def reduce(self, pv_list):
> >          """ Remove the listed PVs from the VG. """
> > @@ -1763,6 +1769,16 @@ class LVMVolumeGroupDevice(DMDevice):
> >          """ A list of this VG's LVs """
> >          return self._lvs[:]     # we don't want folks changing our list
> >  
> > +    @property
> > +    def complete(self):
> > +        """Check if the vg has all its pvs in the system
> > +        Return True if complete.
> > +        """
> > +        if len(self.pvs) == self.pvCount:
> > +            return True
> > +        else:
> > +            return False
> 
> self.pvCount is only meaningful for pre-existing devices, so something
> like this:
> 
> return len(self.pvs) == self.pvCount or not self.exists
> 
> You could alternatively turn it into a property that just returns
> either len(self.pvs) for non-existent devices or self.pvCount for
> preexisting devices.

done.
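
For the record, the one-liner version of the property looks like this (a
sketch with a hypothetical constructor just to make it runnable -- the real
class builds these attributes elsewhere):

```python
class LVMVolumeGroupDevice:
    """Trimmed stand-in for the real device class; only the three
    attributes the property needs are shown."""

    def __init__(self, pvs, pvCount, exists):
        self.pvs = pvs          # PV devices actually found on the system
        self.pvCount = pvCount  # PV count recorded in the VG metadata
        self.exists = exists    # True for a pre-existing VG

    @property
    def complete(self):
        """A pre-existing VG is complete when all of its PVs are
        present; a VG we have not created yet is trivially complete."""
        return len(self.pvs) == self.pvCount or not self.exists
```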

> 
> > +
> >  
> >  class LVMLogicalVolumeDevice(DMDevice):
> >      """ An LVM Logical Volume """
> > @@ -1862,6 +1878,16 @@ class LVMLogicalVolumeDevice(DMDevice):
> >          """ The LV's name (not including VG name). """
> >          return self._name
> >  
> > +    @property
> > +    def complete(self):
> > +        """ Test if vg exits and if it has all pvs. """
> > +        if not self.vg.exists:
> > +            return False
> > +
> > +        if not self.vg.complete:
> > +            return False
> > +        return True
> 
> Same as above.
> 
> > diff --git a/storage/formats/lvmpv.py b/storage/formats/lvmpv.py
> > index cc243eb..ca11130 100644
> > --- a/storage/formats/lvmpv.py
> > +++ b/storage/formats/lvmpv.py
> > @@ -106,7 +106,11 @@ class LVMPhysicalVolume(DeviceFormat):
> >              raise PhysicalVolumeError("device is active")
> >  
> >          # FIXME: verify path exists?
> > -        lvm.pvremove(self.device)
> > +        try:
> > +            lvm.pvremove(self.device)
> > +        except LVMError:
> > +            lvm.zeroLvmMetadata(self.path)
> > +
> >          self.exists = False
> >          self.notifyKernel()
> >  
> 
> This lvm.zeroLvmMetadata could just be changed to:
> 
>   DeviceFormat.destroy(self, *args, **kwargs)
done
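
So the destroy method ends up shaped like this (a sketch with a stubbed
pvremove and base class so the control flow is visible; the real code calls
the module-level lvm.pvremove and the real DeviceFormat):

```python
class LVMError(Exception):
    pass

class DeviceFormat:
    def destroy(self, *args, **kwargs):
        # Stand-in for the real base class, which zeros the first
        # and last 1MB of the device.
        self.zeroed = True
        self.exists = False

class LVMPhysicalVolume(DeviceFormat):
    def __init__(self, device, pvremove):
        self.device = device
        self._pvremove = pvremove  # injected only to keep the sketch runnable
        self.zeroed = False
        self.exists = True

    def destroy(self, *args, **kwargs):
        try:
            self._pvremove(self.device)
        except LVMError:
            # pvremove failed; fall back to the base class's blunt zeroing
            DeviceFormat.destroy(self, *args, **kwargs)
        self.exists = False
```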
> 
> 
> Otherwise I like it.
> 
> Dave
> 
> _______________________________________________
> Anaconda-devel-list mailing list
> Anaconda-devel-list@xxxxxxxxxx
> https://www.redhat.com/mailman/listinfo/anaconda-devel-list

-- 
Joel Andres Granados
Brno, Czech Republic, Red Hat.

