Re: [PATCH 11/14] Add an early user interface for filtering storage devices.

Hi,

On 12/02/2009 12:48 PM, Hans de Goede wrote:
Hi,

Correction: while trying to review patch 12/14, I decided it would be good to
look deeper into the non-UI logic in this one; see my comments inline.



p.s.

All this BIOS RAID stuff requires a fair bit of domain-specific knowledge
(and with the recent move of Intel BIOS RAID to mdraid it has become
a bit muddy in some places). I think the most important thing right now
is to get the filtering stuff into place; once it is, I'll start doing
various tests with both Intel and non-Intel BIOS RAID and submit patches
where needed.

Regards,

Hans



On 12/01/2009 09:15 PM, Chris Lumens wrote:

<snip>

+# These are global because they need to be accessible across all Callback
+# objects as the same values, and from the AdvancedFilterWindow object to add
+# and remove devices when populating scrolled windows.
+totalDevices = 0
+selectedDevices = 0
+totalSize = 0
+selectedSize = 0
+
+# These are global so they can be accessed from all Callback objects.  The
+# basic callback defines its membership as anything that doesn't pass the
+# is* methods.
+def isCCISS(info):
+    return udev_device_is_cciss(info)
+

See my previous comments on this.

+def isRAID(info):
+    return udev_device_is_md(info) or udev_device_is_biosraid(info)
+

mdraid devices use partitions, not disks, and are software RAID devices,
which should not be shown in the filtering UI, just like we don't show
lv's or anything else living above the "disk" level. Note that Intel
BIOS RAID is special in that it uses mdraid nowadays, but
udev_device_is_biosraid() will identify Intel BIOS RAID anyway,
independent of whether we use mdraid or dmraid (which we can still do
using the noiswmd cmdline option) for Intel BIOS RAID.

So I believe the udev_device_is_md(info) call here should be removed.
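A minimal sketch of what I mean, with the udev_device_is_md() check dropped so plain software mdraid (which lives on partitions, above the disk level) is no longer matched. The udev_device_is_biosraid() helper below is a stand-in stub for anaconda's real one, just for illustration:

```python
def udev_device_is_biosraid(info):
    # Stand-in for the real anaconda helper: BIOS RAID members report a
    # "*_raid_member" filesystem type (e.g. isw_raid_member) via udev.
    return info.get("ID_FS_TYPE", "").endswith("_raid_member")

def isRAID(info):
    # BIOS RAID members only; software mdraid is not shown in the filter UI
    return udev_device_is_biosraid(info)

print(isRAID({"ID_FS_TYPE": "isw_raid_member"}))  # True
print(isRAID({"ID_FS_TYPE": "ext4"}))             # False
```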

<snip>

+    def getScreen(self, anaconda):
+        (self.xml, self.vbox) = gui.getGladeWidget("filter.glade", "vbox")
+        self.buttonBox = self.xml.get_widget("buttonBox")
+        self.notebook = self.xml.get_widget("notebook")
+
+        self.notebook.connect("switch-page", self._page_switched)
+
+        self.pages = []
+
+        self.anaconda = anaconda
+
+        # One common store that all the views on all the notebook tabs share.
+        # Yes, this means a whole lot of columns that are going to be empty or
+        # unused much of the time.  Oh well.
+
+        # Object,
+        # visible, active (checked),
+        # device, model, capacity, vendor, interconnect, serial number, wwid,
+        # paths, port, target, lun
+        self.store = gtk.TreeStore(gobject.TYPE_PYOBJECT,
+                                   gobject.TYPE_BOOLEAN, gobject.TYPE_BOOLEAN,
+                                   gobject.TYPE_STRING, gobject.TYPE_STRING,
+                                   gobject.TYPE_STRING, gobject.TYPE_STRING,
+                                   gobject.TYPE_STRING, gobject.TYPE_STRING,
+                                   gobject.TYPE_STRING, gobject.TYPE_STRING,
+                                   gobject.TYPE_STRING, gobject.TYPE_STRING,
+                                   gobject.TYPE_STRING)
+        self.store.set_sort_column_id(MODEL_COL, gtk.SORT_ASCENDING)
+
+        if anaconda.id.simpleFilter:
+            self.pages = [self._makeBasic()]
+            self.notebook.set_show_border(False)
+            self.notebook.set_show_tabs(False)
+        else:
+            self.pages = [self._makeBasic(), self._makeRAID(),
+                          self._makeMPath(), self._makeOther(),
+                          self._makeSearch()]
+
+        udev_trigger(subsystem="block")
+        all_devices = filter(udev_device_is_disk, udev_get_block_devices())
+        (all_devices, mpaths, partitions) = identifyMultipaths(all_devices)
+
+        # The device list could be really long, so we really only want to
+        # iterate over it the bare minimum of times.  Dividing this list up
+        # now means fewer elements to iterate over later.
+        (raid_devices, devices) = self.partition_list(lambda d: isRAID(d) and not isCCISS(d),
+                                                      all_devices)
+
+

This seems wrong. We should not look at partitions to see if they are
BIOS RAID, only at whole disks. Nor should we look at individual mpath
disks in any way; we should only ever interact with them at the
multipath (devicemapper) device level, which represents the single disk
hidden behind the mpath members for that disk. And here at the storage
filtering level we should not even do that, as we only care about the
disk and not what is on it at this point; simply displaying the disk in
the mpath tab should be enough.
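The ordering I'm arguing for could be sketched like this (all helper predicates here are hypothetical stand-ins, not real anaconda calls): set mpath members aside first, then test only the remaining whole disks for BIOS RAID membership, never the mpath members themselves.

```python
def classify_disks(disks, is_mpath_member, is_biosraid):
    # Split off mpath members first; they are only interacted with via the
    # devicemapper multipath device, never individually.
    mpath_members = [d for d in disks if is_mpath_member(d)]
    rest = [d for d in disks if not is_mpath_member(d)]
    # Only remaining whole disks are candidates for BIOS RAID membership.
    raid = [d for d in rest if is_biosraid(d)]
    plain = [d for d in rest if not is_biosraid(d)]
    return (mpath_members, raid, plain)

result = classify_disks(["sda", "sdb", "sdc"],
                        lambda d: d == "sda",   # pretend sda is an mpath member
                        lambda d: d == "sdb")   # pretend sdb is a BIOS RAID member
print(result)  # (['sda'], ['sdb'], ['sdc'])
```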

+        for d in devices:
+            partedDevice = parted.Device(path="/dev/" + udev_device_get_name(d))
+            d["XXX_SIZE"] = int(partedDevice.getSize())
+
+            tuple = (d, True, False, udev_device_get_name(d),
+                     partedDevice.model, str(d["XXX_SIZE"]) + " MB",
+                     udev_device_get_vendor(d), udev_device_get_bus(d),
+                     udev_device_get_serial(d), udev_device_get_wwid(d),
+                     "", "", "", "")
+            self._addTuple(tuple)
+
+        for md in block.getRaidSets():

Ok, so this returns a list of BIOS RAID devices, *including* Intel BIOS
RAID devices, independent of whether we will be using mdraid or dmraid to
drive Intel BIOS RAID sets. This uses pyblock, which uses libdmraid, which
has no clue that we may not be using dmraid for ISW. This is not really a
problem, as using libdmraid to identify these sets for filtering UI
purposes is fine, and probably a lot easier than using mdraid, but it is
something we should be aware we are doing.

Also note that the name md is very confusing here: this is not an mdraid
set, it is a BIOS RAID set, which in some special cases may get assembled
and handled by mdraid in its special external metadata mode (the
containers stuff), but for anything but Intel these RAID sets are not
handled by mdraid. Please s/md/raidset/ (or rs).

+            md.activate(mknod=True, mkparts=False)

Ok, this actually results in bringing the BIOS RAID set online,
something which
I don't think we need (nor want) to do at this point. There should be other
ways to get the information you need from pyblock.

+            udev_settle()
+
+            partedDevice = md.PedDevice
+            size = int(partedDevice.getSize())
+            fstype = ""
+

Ah, this is why you do this. Hmm, I guess I need to dig a bit into
pyblock and come up with another way to get the size of the set. I'm not
sure if this will be easy to do, so let's leave this as is for now; since
a system should not have 1000's of BIOS RAID sets, this should not be
much of an issue.

+            members = map(lambda m: m.get_devpath(), list(md.get_members()))
+            for d in raid_devices:
+                if udev_device_get_name(d) in members:
+                    fstype = udev_device_get_format(d)
+                    break
+
+            # biosraid devices don't really get udev data, at least not in a
+            # way that's useful to the filtering UI.  So we need to fake that
+            # data now so we have something to put into the store.
+            data = {"XXX_SIZE": size, "ID_FS_TYPE": fstype, "DM_NAME": md.name,
+                    "name": md.name}
+
+

This does not seem right wrt ID_FS_TYPE. When calling
udev_device_get_format() on a member of, for example, an Intel BIOS RAID
set, it will return isw_raid_member. The fact that it is isw is already
present in md.name (which will start with isw_), and isw_raid_member as
the FS_TYPE for the entire set is simply not true: the set could contain
a loopback layout with a foo filesystem, but most likely will contain a
partition table, which means that, just like for a normal disk,
ID_FS_TYPE for the set will not be set.
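In other words, the faked udev data should simply leave ID_FS_TYPE out, the same way it is absent for a plain disk carrying a partition table. A small sketch (the set name below is an invented example; key names mirror the patch):

```python
def fake_raidset_data(name, size):
    # Like a plain disk, an assembled BIOS RAID set normally carries a
    # partition table, so no set-level ID_FS_TYPE key is faked at all.
    return {"XXX_SIZE": size, "DM_NAME": name, "name": name}

data = fake_raidset_data("isw_example_Volume0", 305245)
print("ID_FS_TYPE" in data)  # False
```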

+            tuple = (data, True, False, md.name, partedDevice.model,
+                     str(size) + " MB", "", "", "", "", "", "", "", "")
+            self._addTuple(tuple)
+
+            md.deactivate()
+
+        for mpath in mpaths:
+            # We only need to grab information from the first device in the set.
+            partedDevice = parted.Device(path="/dev/" + udev_device_get_name(mpath[0]))
+            mpath[0]["XXX_SIZE"] = int(partedDevice.getSize())
+            model = partedDevice.model
+
+            # However, we do need all the paths making up this multipath set.
+            paths = "\n".join(map(udev_device_get_name, mpath))
+
+            tuple = (mpath[0], True, False, "", model,
+                     str(mpath[0]["XXX_SIZE"]) + " MB",
+                     udev_device_get_vendor(mpath[0]),
+                     udev_device_get_bus(mpath[0]),
+                     udev_device_get_serial(mpath[0]),
+                     udev_device_get_wwid(mpath[0]),
+                     paths, "", "", "")
+            self._addTuple(tuple)
+
+        return self.vbox
+
+
+    def partition_list(self, pred, lst):
+        pos = []
+        neg = []
+
+        for ele in lst:
+            if pred(ele):
+                pos.append(ele)
+            else:
+                neg.append(ele)
+
+        return (pos, neg)

The name partition_list for this method is a bit confusing when
doing storage stuff; maybe name it split_list instead?
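That is, the same helper under the suggested name, with behavior unchanged (shown as a free function here only to keep the example self-contained):

```python
def split_list(pred, lst):
    # Partition lst into elements matching pred and elements that don't,
    # preserving relative order in both halves.
    pos = []
    neg = []
    for ele in lst:
        if pred(ele):
            pos.append(ele)
        else:
            neg.append(ele)
    return (pos, neg)

print(split_list(lambda n: n % 2 == 0, [1, 2, 3, 4, 5]))  # ([2, 4], [1, 3, 5])
```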

Regards,

Hans

_______________________________________________
Anaconda-devel-list mailing list
Anaconda-devel-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/anaconda-devel-list
